The Problems with College Rankings

by William Patrick Leonard, Columnist, Free Liberal

Making the Rankings

Higher education ranking surveys attract increasing attention with each new media release. At an April 2010 meeting of the Association to Advance Collegiate Schools of Business, Robert Morris of U.S. News and World Report stated that visits to its ranking pages were exceeding fifteen million per month.

Universities ascending in the ratings boast about their independently documented quality. Those descending quibble over the validity of the metrics employed but still enjoy the benefits of notoriety and focused public attention. Parents and students regularly refer to the ranking reports when deciding their university preferences. Their decisions obviously carry significant financial burdens and life-changing outcomes. The media outlets that sponsor the rankings use the controversy to sell copies. The stakes are high for all constituencies. I suggest that only two are well served.

Each of these constituencies has strong but differing interests in these annual rankings. Institutional ascent is widely assumed to burnish prestige, thus attracting higher-quality faculty and students. A higher ranking is also assumed to attract more research contracts, philanthropy and general goodwill.

Descent suggests the reverse, an institution perhaps in decline. Quality recruits among faculty and students may decline employment and admission offers. Grants and philanthropy opportunities may also be jeopardized. Yet the losers still have ample opportunities to voice their carefully crafted explanations and rejoinders.

The American entertainer George M. Cohan may have been insightful when he said, “I don’t care what you say about me, as long as you say something about me, and as long as you spell my name right.” Declining institutions always seem to have an explanation which draws media attention.

The media, often the survey sponsors, win either way with wide public attention. The buzz they create grows each year, and the releases of survey results appear to attract increased readership. It is said that U.S. News and World Report’s newsstand sales jump 50% with the issue carrying its college surveys, and that website hits climb to 10 million within three days, against a monthly average of 500,000. One can safely assume that the bulk of this added attention comes from the one constituency, parents and students, that has the greatest interest in the rankings’ relevance and efficacy in comparing specific institutions. I suggest that they are the least well served.

To support my point, it is important to probe beneath the media hype to gauge the utility of these increasingly popular surveys to parents and students. Two questions are in order: What data support these assessments of relative superiority? And, institutional bragging rights aside, what is their utility in helping parents and students make their university choice?

Each survey sponsor employs its own select, yet similar, blend of indicators, which in aggregate are purported to suggest the relative quality of the institutions surveyed. Let’s look at the makeup of three of the most recognized schemes: U.S. News and World Report’s Best Colleges, the Times Higher Education – QS World University Rankings, and Shanghai Jiao Tong University’s Academic Ranking of World Universities.

U.S. News and World Report

U.S. News and World Report has grown from a single ranking survey of U.S. institutions into a stable of regional, disciplinary, national and international surveys. Its international rankings are based on the QS World University Rankings system.

That system is based upon the following metrics and weights. I will add my own characterizations.

  1. Academic Peer Review, a composite globally balanced survey of nearly 10,000 academics, is the centerpiece of the scoring scheme, weighted at 40%, and is characterized as an input measure.
  2. Employer Review, a composite globally balanced survey, includes employer appraisals of the hard and soft skills of bachelor’s and master’s graduates, is weighted at 10% of the scoring scheme, and is characterized as a proxy output measure. I say proxy because it is a composite of what is essentially subjective after-purchase consumer opinion.
  3. Faculty Student Ratio, a universally available metric, is weighted at 20%. Low faculty-to-student ratios have long been a pedagogical ideal. I characterize it as an input measure.
  4. Citations per Faculty, based on the Scopus abstract and citation database of research literature, includes the latest five complete years of data, with the total citation count factored against the number of faculty, is weighted at 20%, and is characterized as an input measure.
  5. International Faculty, said to reflect the attraction of quality faculty to a world-class institution, is weighted at 5%, and is characterized as an input measure.
  6. International Students, said to reflect the attraction of quality students to a world-class institution, is weighted at 5%, and is characterized as an input measure.

With English as the current global lingua franca, metrics 5 and 6 are flawed. They clearly favor institutions employing English as the language of instruction. Even with these relatively low weights, otherwise excellent institutions teaching in non-English tongues are disadvantaged.
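To make the mechanics concrete, here is a minimal sketch of how such a weighted composite is assembled. The weights are those listed above; the institution, its indicator values, and the 0–100 normalization are my own illustrative assumptions, since the actual surveys apply their own statistical normalization before weighting.

```python
# A minimal sketch of a weighted composite score, using the six
# indicator weights listed above. Indicator values are hypothetical
# and assumed to be pre-normalized to a 0-100 scale.

QS_WEIGHTS = {
    "academic_peer_review":   0.40,
    "employer_review":        0.10,
    "faculty_student_ratio":  0.20,
    "citations_per_faculty":  0.20,
    "international_faculty":  0.05,
    "international_students": 0.05,
}

def composite_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized indicator scores (each 0-100)."""
    return sum(weights[name] * indicators[name] for name in weights)

# Hypothetical institution: strong teaching and research, but taught in
# a non-English language, which depresses the two international metrics.
example = {
    "academic_peer_review":   82.0,
    "employer_review":        75.0,
    "faculty_student_ratio":  90.0,
    "citations_per_faculty":  88.0,
    "international_faculty":  35.0,
    "international_students": 30.0,
}

print(f"Composite: {composite_score(example, QS_WEIGHTS):.1f}")  # about 79 of 100
```

Although the two international metrics carry only 10% combined, among closely bunched institutions a few composite points can plausibly move a school many places in the published table.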

Times Higher Education – QS World University Rankings

The recently terminated Times Higher Education – QS World University Rankings collaboration (2004–2009) focused on a nearly identical blend of quality indicators:

  1. Academic peer review, weighted at 40%, and is characterized as an input measure.
  2. Employer review, weighted at 10%, and again is characterized as a proxy output measure.
  3. International faculty score, weighted at 5%, and again is characterized as an input measure, with the same concern as noted above.
  4. International student score, weighted at 5%, and again is characterized as an input measure, with the same concern as noted above.
  5. Faculty/student score, weighted at 20%, and again is characterized as an input measure.
  6. Citation/faculty score, weighted at 20%, and again is characterized as an input measure.

Shanghai Jiao Tong University’s Academic Ranking of World Universities

Shanghai Jiao Tong University’s Academic Ranking of World Universities places the greatest emphasis on aggregate faculty research productivity.

  1. Alumni of an institution winning Nobel Prizes and Fields Medals, weighted at 10%, and is characterized as a proxy output measure.
  2. Staff of an institution winning Nobel Prizes and Fields Medals, weighted at 20%, and is characterized as an input measure.
  3. Highly cited researchers in 21 broad subject categories, weighted at 20%, and is characterized as an input measure.
  4. Articles published in Nature and Science, weighted at 20%, and is characterized as an input measure.
  5. Articles indexed in the Science Citation Index-Expanded and Social Science Citation Index, weighted at 20%, and is characterized as an input measure.
  6. Academic performance with respect to the size of an institution, weighted at 10%, and is characterized as a leveling measure.

Alumni and faculty Nobel Prize and Fields Medal winners undoubtedly burnish institutional prestige. While the faculty metric is more representative of truly significant past achievements, some decades old, it is an extension of measures 3 through 5. I characterize the alumni metric as a very weak proxy of an output measure. Assigning credit for bachelor’s, master’s or doctoral alumni ignores all of the pre-enrollment and post-graduation experiences that contribute to the recipient’s achievements. It assumes an unverifiable impact on the recipient’s career. Neither of these prestige metrics has a verifiable relationship to the quality of the learning experience.
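For comparison, here is a similar sketch of the ARWU-style weighting, including the size-based “leveling” indicator. ARWU describes that indicator as the weighted scores of the other five divided by full-time equivalent academic staff; the staff count, the rescaling constant and all scores below are my own hypothetical figures.

```python
# A minimal sketch of the ARWU-style scheme above. The "leveling"
# indicator (item 6) is modeled as the weighted score of the other five
# indicators per full-time equivalent academic staff, rescaled so the
# best per-capita performer would score 100. All figures are hypothetical.

RESEARCH_WEIGHTS = {
    "alumni_awards":            0.10,  # proxy output measure
    "staff_awards":             0.20,  # input measure
    "highly_cited_researchers": 0.20,  # input measure
    "nature_science_articles":  0.20,  # input measure
    "indexed_articles":         0.20,  # input measure
}

def arwu_composite(scores: dict[str, float], fte_staff: float,
                   best_per_capita: float) -> float:
    """Weighted sum of the five research indicators (each scaled 0-100)
    plus the 10%-weighted per-capita leveling measure."""
    subtotal = sum(RESEARCH_WEIGHTS[k] * scores[k] for k in RESEARCH_WEIGHTS)
    per_capita = 100.0 * (subtotal / fte_staff) / best_per_capita
    return subtotal + 0.10 * per_capita

# Hypothetical large research university: strong raw output, diluted by
# a very large academic staff in the per-capita term.
scores = {"alumni_awards": 40.0, "staff_awards": 55.0,
          "highly_cited_researchers": 70.0,
          "nature_science_articles": 65.0, "indexed_articles": 80.0}
print(f"{arwu_composite(scores, fte_staff=4000, best_per_capita=0.02):.1f}")  # about 65
```

Note that every term in the sum measures research volume or prestige; nothing in the scheme observes what happens to an undergraduate between admission and graduation.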

Excepting the peer and employer assessments, the remaining input and output metrics currently in vogue share a common characteristic: they are based upon table lookups of existing databases. The digitally based peer and employer survey assessments are amassed annually. Thus each new ranking is based on relatively current and easily obtainable information. This clear tilt towards input measures serves the sponsors’ needs for efficiency and timeliness. The institutions enjoy regular media exposure, and the media enjoy added readership and accompanying revenues.

Quality Ingredients Don’t Guarantee a Scrumptious Cake

While their specific ingredients may differ, all three schemes share a characteristic flaw that limits their utility to students and parents in selecting an institution: the bulk of their indicators are input variables. Presumably, if you start with quality ingredients the product will be superior. This is a fallacious assumption at best. I posit that reliance on input indicators has a parallel in the kitchen. Starting with the highest quality ingredients does not guarantee that the resulting cake will be edible. How the ingredients are blended and prepared at each step of the process will ultimately determine the quality of the end product. Likewise, what occurs in the classrooms, laboratories, faculty offices and beyond, over the total undergraduate experience, directly influences the quality of the output. Students and parents should have access to data on what graduates know that they did not know at admission. They need to know what value has been added.
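What might such a value-added figure look like? Here is a minimal sketch, under the simplifying assumption that one cohort sits a comparable assessment at entry and again at graduation and that value added is the mean gain. Real value-added models also adjust for entering ability and student mix; this is illustrative only.

```python
# A minimal sketch of a value-added measure, assuming matched entry and
# exit assessments on the same 0-100 scale for one cohort. Real
# value-added models adjust for entering ability and student mix.

def mean_value_added(entry_scores: list[float], exit_scores: list[float]) -> float:
    """Average per-student gain between matched entry and exit scores."""
    if len(entry_scores) != len(exit_scores):
        raise ValueError("entry and exit scores must be matched per student")
    gains = [after - before for before, after in zip(entry_scores, exit_scores)]
    return sum(gains) / len(gains)

# Hypothetical cohort of five matched students.
entry = [52.0, 61.0, 48.0, 70.0, 55.0]
exit_ = [74.0, 78.0, 69.0, 83.0, 72.0]
print(f"Mean value added: {mean_value_added(entry, exit_):+.1f} points")  # +18.0
```

A figure like this says nothing about how selective the institution was at admission, which is precisely why it complements, rather than duplicates, the input metrics above.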

There are three reasons for the heavy reliance on the traditional array of metrics. I assign the first two to the sponsors; the third is institution based. One, these once-a-year measures are relatively easy — i.e., inexpensive — to assemble from available databases; even the survey data are collected digitally within a relatively short period of time. Two, the traditional undergraduate experience extends over three or four years, an interval that may be considered too long for the contemporary news cycle. Three, colleges and universities have never been forthcoming about the quality of their output.

As acceptance — begrudging or otherwise — of rankings has settled into the higher education environment, the debate has moved on to how to improve their methodology to provide more useful and legitimate data on which to base well-informed decisions. Both the sponsors and institutions have responsibilities, if students and parents are to be better served.

What are the alternatives for better serving students’ and parents’ needs for relevant output information? In the ideal long term, the major ranking systems would expand their schemes to include more relevant measures of output, weighted at the expense of the input measures. Each institution could be asked to respond to a common set of output or value-added measures. Unfortunately, the concept of value added in higher education remains ill defined and contentious within much of the higher education community. One could envision something more extreme: two rankings could be published, one focused on the input metrics and another on the output, as sketched below. In the meantime, those needing output information will have to look elsewhere.
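To illustrate the two-rankings idea, here is a minimal sketch that tags each indicator of the QS-style blend as input or output, following my characterizations above, and renormalizes the weights within each group; the scores are again hypothetical.

```python
# A minimal sketch of the "two rankings" proposal: split one scheme's
# indicators into input and output groups and publish a composite for
# each. Tags follow my characterizations; weights are renormalized
# within each group so both composites stay on a 0-100 scale.

METRICS = {
    # name: (weight, kind)
    "academic_peer_review":   (0.40, "input"),
    "employer_review":        (0.10, "output"),
    "faculty_student_ratio":  (0.20, "input"),
    "citations_per_faculty":  (0.20, "input"),
    "international_faculty":  (0.05, "input"),
    "international_students": (0.05, "input"),
}

def split_composites(scores: dict[str, float]) -> dict[str, float]:
    """Separate input and output composites, each renormalized to 0-100."""
    composites = {}
    for kind in ("input", "output"):
        names = [n for n, (_, k) in METRICS.items() if k == kind]
        total_weight = sum(METRICS[n][0] for n in names)
        weighted_sum = sum(METRICS[n][0] * scores[n] for n in names)
        composites[kind] = weighted_sum / total_weight
    return composites

scores = {"academic_peer_review": 82.0, "employer_review": 75.0,
          "faculty_student_ratio": 90.0, "citations_per_faculty": 88.0,
          "international_faculty": 35.0, "international_students": 30.0}
result = split_composites(scores)
print(f"input: {result['input']:.1f}  output: {result['output']:.1f}")
```

The exercise also exposes how thin the output side of the current blends is: in this scheme the entire output composite rests on a single employer survey.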

While the OECD’s Assessing Higher Education Learning Outcomes project may offer the best promise, its four strands — Generic Skills, Discipline-specific, Learning in Context and Value-Added — are in separate feasibility studies and years away from implementation. In the meantime, repeated annual cohorts of students and parents must make their decisions, and the status quo will not serve them well. Unfortunately, there are few short-term remedies. I suggest two bridge solutions that may be worth examining while we await the ideal.

Since students and parents undoubtedly visit their websites, confident and proactive institutions could independently publish their output metrics. These could include aggregate scores on national major-field examinations, program accreditations, and locally produced evidence of value added, posted on their websites and in public relations releases. Results of employer, graduate school and graduate surveys of satisfaction with the aggregate learning experience would also be helpful to students and parents. The absence of such data will send a clear signal about an institution’s confidence in its output, no matter how impressive its input metrics. A more ambitious subsequent step would be for the major ranking sponsors simply to add a link to each institution willing to publish its output data.

With either option, students and parents could quickly learn which institutions have confidence, and some justified pride, in their product. Both options have the potential of validating the long-assumed strong connection between the quality of input and the quality of output. I have a hunch that institutions lagging in the traditional input-reliant rankings will show very well when countervailing output measures are made available.

About

William Patrick Leonard is Vice Dean of Academic and Student Services at the Solbridge International School of Business in Daejeon, Korea. He is Advisor on Higher Education Policy at the Rio Grande Foundation.