Roosevelt Institute | Cornell University

Obama's Scorecard Isn't An A, But It's a Solid B+

By Phoebe Keller | Published October 18, 2015

The newly unveiled "College Scorecard" evaluates colleges and universities based on their measured "access, affordability and outcomes." Although many have found fault with the methodology of its data collection or with the expansion of federal regulation accompanying the new tool, its release is a step towards enabling students to invest more prudently in strong schools.

The Obama administration’s new “College Scorecard” represents the culmination of an expanding federal presence in the higher-education sector. The scorecard has existed for two years, but this month the Department of Education released a revamped version, which includes a more cohesive set of data and a host of new variables. The interactive tool is designed to assist prospective college students in their school search by providing information about a university’s tuition cost, graduation rate and mean graduate salary. However, the release of this storm of data has drawn criticism from those who call the metrics “simplistic” and point to flaws inherent in its underlying source, the Integrated Postsecondary Education Data System (IPEDS). Despite the need to resolve a few methodological issues, the release of the Scorecard represents a significant step towards illuminating the murky world of college financials. Ideally, this new tool will empower students, rather than government officials, to fund the strongest schools simply by acting as rational consumers.
The release of this data takes place against the backdrop of increasing federal oversight of higher education. A variety of government policies have been hotly contested in recent months as Obama focuses on ameliorating the burden of student debt and 2016 candidates such as Bernie Sanders, Hillary Clinton and Marco Rubio offer their plans for overhauling the higher education system. The delay of the Department of Education’s annual release of college rankings this summer signaled a significant shift in the federal metrics used to evaluate a university’s “cost and effectiveness.”
A number of accusations against notorious for-profit colleges like Corinthian Colleges -- whose students filed a $2.5 billion claim accusing the school of deceptive and fraudulent advertising -- have further spurred the desire for additional government regulation of the higher education arena. In analyzing the new Scorecard, many speculate that the tool may eventually be used to reallocate federal funds based on any given school’s Scorecard “grade.”
Specifically, the Scorecard takes a three-pronged approach in evaluating schools. Any given college is “graded” based on its access (tuition cost and Pell grants available), affordability (average scholarship amount) and outcome (graduation rates and average graduate income). Some critics claim that the pool of federal data, which is drawn from IPEDS, is so flawed that it compromises the entire effort. The holes in the Scorecard’s tactics do not seem serious enough to undermine the project as a whole; however, some aspects of the methodology can and should be adjusted.
For example, IPEDS data counts part-time students and transfers only in the denominator of a school’s graduation rate, effectively treating them as dropouts. For many students at community colleges, transferring is actually a sign of success, so the school’s graduation rate may be deflated and misleading. Students who transfer should either be removed entirely from graduation-rate calculations or should compose an entirely separate statistic. Similarly, IPEDS double-counts students who default on multiple loans. Unless default rates are adjusted to measure the aggregate amount owed, rather than simply the number of defaults, students should not be recorded twice.
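To make the distortion concrete, here is a minimal sketch in Python of how counting transfer students only in the denominator deflates a graduation rate. The cohort numbers are purely hypothetical, not drawn from IPEDS data:

```python
# Hypothetical community-college cohort (illustrative numbers only).
cohort = 600      # first-time students tracked for the graduation rate
graduates = 300   # completed a credential at this school
transfers = 200   # transferred out -- often a success at a community college

# IPEDS-style rate: transfers sit in the denominator but never the numerator,
# so they count exactly like dropouts.
ipeds_rate = graduates / cohort                   # 300/600 = 0.50

# One proposed fix: remove transfers from the calculation entirely.
adjusted_rate = graduates / (cohort - transfers)  # 300/400 = 0.75

# Alternative fix: report transfers as their own separate statistic.
transfer_out_rate = transfers / cohort            # 200/600 ≈ 0.33

print(f"IPEDS-style rate:  {ipeds_rate:.0%}")
print(f"Transfer-adjusted: {adjusted_rate:.0%}")
print(f"Transfer-out rate: {transfer_out_rate:.0%}")
```

The same school reports a 50 percent rate under the IPEDS convention but 75 percent once transfers are excluded, which is the gap the text describes.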
One of the most serious deficiencies of the IPEDS data is its failure to distinguish between the different programs available at a school, only publishing statistics for an entire institution. The tool would become vastly more useful if it were tailored to display data from different departments within schools, as many schools may have poor aggregate statistics but boast a few very strong programs.
These and other failings of the IPEDS information are certainly cause for concern, but they do not obstruct the project’s aim of empowering students with meaningful data. As the Scorecard is further developed, hopefully many of these failings will be corrected and the tool further refined. However, there is no basis for the declaration that a Scorecard of any kind is “overly simplistic” and callous in comparing measures like the salaries of liberal arts graduates with those of graduates of perhaps more pragmatic programs.
One of the most ubiquitous complaints -- that the Scorecard will never be able to take the place of careful, thorough research by prospective students, who will use data that already exists -- demonstrates a fundamental misconception about which students will benefit the most from the new method of college evaluation. For students with plentiful resources, including informed guidance counselors, concerned parents, and general discernment in evaluating different schools, this Scorecard may not make much of a difference. However, for students who lack these resources, specifically low-income or first-generation students, this tool could prove invaluable in determining the true worth of a university’s degree.
And while much of this data does exist in other places (U.S. News and World Report, etc.), this Scorecard is truly one of the first ventures that aims to assess the success of a school by holding it accountable not just for how many of its students graduate, but for how they fare after graduation. The introduction of the “mean salary of graduates” factor into college evaluations, if utilized, will help students avoid the crippling burden of unnecessary student loan debt.
Although this tool does in some ways represent an increase in federal oversight of higher education, the Scorecard is more fundamentally a means to equip students to act as rational consumers.  If the data provided proves reliable, certain schools will have to adjust their tuition or programs as demand for their product declines.  The Scorecard is a great equalizer for low-income students and a significant step towards the illumination of the true value of a college degree. Ideally, it will challenge schools to stay competitive simply by equipping students to make better bets.