News this month that a group of stakeholders convened by the U.S. Education Department agreed on a new federal approach to assessing colleges offered fresh evidence that we as a country have decided to judge the value of higher education based primarily on students’ economic outcomes.
The mechanism approved by the federal negotiating panel will set minimum earnings thresholds for graduates of academic programs at all colleges and universities; programs that fail to hit the mark will lose federal loan access or even Pell Grant funds, depending on how widespread the failure is.
Building a new government accountability scheme around postcollege economic outcomes makes sense: Ensuring that learners come out of their educational experience better off financially than they would have been otherwise is a logical minimum requirement.
But it reflects a larger problem, which is that we don’t have good ways of defining, let alone measuring, what quality or success looks like in postsecondary education. And those of us who believe in higher education have erred badly by letting politicians and critics judge it exclusively by a narrow economic outcome like postgraduation salary.
Most importantly, we’ve never come close to being able to measure learning—how much students cognitively gain from a course of study or academic experience. What a game changer it would be if we could—we’d really know which institutions actually help their learners grow the most. (I suspect such a measurement would upend our thinking about which colleges and universities are “the best,” and that part of why we haven’t ever solved this problem is that it wouldn’t be in the interest of the institutions that are most esteemed now.)
Instead we look for proxies, and as our ability to track people’s movements between education and work has improved, we’ve focused on postcollege economic outcomes as our primary (if not exclusive) way of judging whether institutions serve learners well.
That’s logical in many ways:
- Most learners cite career success as their top reason for pursuing postsecondary education and training,
- Federal and state governments invest in higher education in large part because of the institutions’ economic contributions, and
- It’s comparatively easy. We can’t expect politicians with limited understanding and expertise to develop sophisticated accountability systems.
But overdependence on postcollege economic outcomes to judge higher education’s success and value ignores the full range of benefits that colleges and universities purport to deliver for individuals and for society collectively. It also has a range of potential unintended consequences, including deterring students from entering fields that don’t pay well (and institutions from supporting those fields).
Many academic leaders hoped that if they ignored calls for accountability, the demands would fade. But in that vacuum, we ended up with limited, flawed tools for assessing the industry’s performance.
The resulting loss of public confidence has damaged higher education, and turning that tide won’t be easy. But it’s not too late—if college leaders take seriously their need to marshal proof (not just words) that their institutions are delivering on what they promise.
What would that look like? College leaders need to collectively define for themselves and for the public how their institutions are willing to be held accountable for what they say they do for learners and for the public good.
This needs to be a serious attempt to say (1) this is what we purport to provide to individuals and to society, (2) this is how we will gauge success in achieving those goals, and (3) we commit to publicly reporting on our progress.
Pushback against this sort of measurement and accountability (excluding those who simply don’t believe colleges should have to prove themselves, who at this point must be ignored) tends to focus on two reasonable complications: (a) different types of institutions do different things and have differing missions, and (b) some of what colleges and universities do can be difficult (and perhaps impossible) to measure.
On argument (a), it’s certainly true that any effort to compare the full contributions of major research universities and of community colleges, for example, would need to focus on different things. The research university indicators might account for how many inventions their scientists have developed and how many graduate students they train; the community college indicators might include reskilling of unemployed workers and ESL classes for new immigrants preparing to become citizens.
But in their core functioning focused on undergraduate learners, most colleges do pretty much the same thing: try to help them achieve their educational goals, including a mix of the practical (developing knowledge, skills and preparation for work), the personal (intellectual and personal growth), and the collective (contributions to society, including being engaged participants in communities and society).
And on critique (b), yes, it’s true that some of what colleges and universities say they do may be hard to measure. But have we really tried? There are lots of big brains on college and university campuses: Couldn’t a working group find ways to quantify whether participation in a postsecondary course of study produces people with greater intercultural understanding or empathy? Or whether they are more likely to donate to charity or to vote in national elections?
The goal of this initiative would be to develop (through the collective participation of a diverse group of institutional and other stakeholders, through an existing association or a new coalition of the willing created expressly for this purpose) a broadly framed but very specific menu of indicators that would present a fuller picture of whether colleges and universities are delivering on the promises they make to students and to society more broadly. Ideally we’d generate institution-level data that would scaffold up to an industrywide portrait.
The information would almost certainly give college leaders fodder to make a better public case about what their institutions already do well. But it would just as likely also reveal areas where the institutions fall short of what they say in their mission statements and where they collectively need to improve, and provide a scorecard of sorts to show progress over time.
At the core, it would give them a way of showing, to themselves and to their critics, that they are willing to look at their own performance and prove their value, rather than just asserting it as they have arrogantly done for a long time. Colleges and universities would get public credit for being willing to hold themselves accountable.
What would we want to measure, and how would we do so? Smarter people than me would need to help answer those questions, but possible areas of exploration include the following, based on ground laid over the years by the Gates Foundation’s Postsecondary Value Commission, Lumina and Gallup in a 2023 report, and others.
Economic indicators might include:
- Lifetime earnings
- Employment and unemployment rates/job placement in desired field
- Return on investment (comparing learners’ spending on their education with their lifetime earnings)
- Social mobility (Do colleges help people advance up the economic ladder? Can we update the 2017 Chetty data to become a regular part of the landscape?)
- Debt repayment
Noneconomic indicators might include:
- Employer alignment (Do higher education programs help students develop the skills and knowledge employers demand—technical skills like AI readiness and “human skills” such as critical thinking, problem-solving and creativity?)
- Civic and democratic engagement (voting rates, charitable contributions)
- Empathy and social cohesion (Does going to college make us more empathetic? More inclined to understand those who are different? Less racist?)
- Health and emotional well-being/happiness (Surely with all the health data out there, one might be able to document some correlation, if not causation?)
- Intercultural/global understanding
Most of the indicators above would gauge contributions to individuals, rather than to society as a whole (though obviously some accrue to society). Those who believe we’ve stopped viewing higher education as a public good might argue for trying to measure the contributions institutions make to local and national economies (through their research, role as employers, etc.), as community anchors (medically, culturally, spiritually), and the like.
Higher education has serious work to do to earn back the American public’s trust and confidence. Argumentation won’t suffice. I recognize that it may be hard to find (or develop) tangible information to build a data-based case that colleges and universities do what they say they do in their mission statements and promotional brochures.
But could it hurt to try? What we’re doing now isn’t working.
Doug Lederman was editor and co-founder of Inside Higher Ed from 2004 through 2024. He is now principal of Lederman Advisory Services.
