POST-SECONDARY EDUCATION REPORT FAILS TO PASS MUSTER
Saturday, September 18, 2004
Handing out grades is a tricky business, and figuring out what they mean is sometimes even harder. Getting an A in calculus is harder than getting an A in "math for poets" or whatever a college calls its course for people who hate math and are no good at it but have to take one math course to graduate.
And that's just in one department. Getting As in math and physical science courses is harder than in humanities and social sciences -- you may take that as just the snobbery of a former mathematician, but I base it on the observation that math and science majors commonly cross the aisle to take upper-division seminars outside their field, but the traffic in the other direction is about as heavy as the boats escaping from Florida to Cuba.
Please keep this level of uncertainty in mind as I introduce you to the 2004 edition of Measuring Up, a report from the National Center for Public Policy and Higher Education. It assigns grades to the states in several areas relating to postsecondary education: preparation for college or other training, rates of participation, affordability, rates of retention and completion, and the benefits the state receives from having a more highly educated population (the whole report is at highereducation.org online).
The center's first report came out in 2000. I panned it then, largely because it combined all these disparate factors into a single grade, which made no sense at all. I am pleased to report that they aren't doing that anymore, but the same kinds of methodological quirks show up even within the five graded categories. For instance, what does it mean to combine the percent of the population voting with the amount by which having a bachelor's degree increases income? Why does the former count for 10.5 percent of the "benefits" grade and the latter, 18.75 percent?
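To see why the weighting matters, here is a sketch of how a composite score of this kind is typically computed: each indicator is scaled to a common range and then summed with arbitrary weights. The 10.5 and 18.75 percent weights come from the report; the indicator values and the remaining weight are invented for illustration.

```python
# Sketch of a weighted composite grade. Only the 10.5% and 18.75% weights
# are from the report; everything else here is hypothetical.

def composite_score(indicators, weights):
    """Weighted sum of indicator scores, each already scaled to 0-100."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(indicators[name] * weights[name] for name in weights)

# A hypothetical state, with each indicator normalized to a 0-100 scale.
indicators = {
    "voting_rate": 60.0,         # percent of the population voting
    "degree_income_gain": 80.0,  # income boost from a bachelor's degree
    "other": 70.0,               # stand-in for the remaining indicators
}

weights = {"voting_rate": 0.105, "degree_income_gain": 0.1875, "other": 0.7075}

score = composite_score(indicators, weights)  # one number on a 0-100 scale
```

Nothing in the arithmetic justifies the particular weights; change them and the same state gets a different grade, which is the heart of the objection.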
And if the purpose is to drive state policy, why grade the states on matters over which state policy has no influence?
Pat Callan, the center's president, was in Denver last month and stopped by the News office to talk about the upcoming report. He was very generous with his time, and I hate to disappoint him, but I'm just not convinced that what they are trying to do can be done.
In the area of preparation, Callan said, 17 or 18 states (Colorado among them) fail to collect data on such things as how many students take algebra in eighth grade or are still taking math in their senior year. The report solves the problem of missing data in a category by using the average of all the other variables in that category. As a way of encouraging states to start collecting useful data, maybe it will help. But it has unpredictable effects on a state's grades in that category.
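The imputation rule described above, substituting the average of a state's other indicators for any it doesn't report, is mathematically the same as simply dropping the missing indicator. A small sketch, with invented numbers, shows the unpredictable effect: a state that fails to collect data on its weakest area comes out ahead of one that reports it.

```python
# Sketch of the report's imputation rule as described: a missing indicator
# is replaced by the average of the others in its category, which is the
# same as dropping it from the average. All numbers are invented.

def category_score(values):
    """Average the reported indicators; None marks data a state
    doesn't collect, which ends up excluded from the average."""
    reported = [v for v in values if v is not None]
    return sum(reported) / len(reported)

# Two states identical except that one doesn't report its weak indicator.
with_data    = category_score([85.0, 90.0, 40.0])  # reports the weak one
without_data = category_score([85.0, 90.0, None])  # doesn't collect it
```

Here the state with the data gap scores higher, so the incentive to start collecting data cuts against a state's grade.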
Colorado's affordability grade dropped from a B-minus in 2000 to a D-minus in 2004. Anyone familiar with the state's dismal revenues over the past several years would likely assume the lower grade resulted from decreased support for higher ed. But that's wrong; the ability of an average family to pay for college has scarcely changed over the past decade. The center just changed the indicators it uses to calculate the grade.
It's not that the center shouldn't change indicators as circumstances change; sometimes it has to, if, for example, the federal government stops collecting a particular kind of information. It's just that when grades change as a result, policymakers can't know what the grades mean.
One table in the affordability section says it considers the family's ability to pay for college for the "20 percent of the population with the lowest income." But center staff confirmed that what they really mean is the lowest household income quintile as defined by the U.S. Census, and that quintile contains only about 15 percent of the population, because low-income households tend to be smaller than average.
Also, the grades are not adjusted for demographic differences between the states. So, for instance, a state could theoretically be tops in the nation for high-school graduation rates for each particular racial and ethnic group, and yet have a low graduation rate overall because it had relatively more people from groups whose graduation rate is lower, particularly Hispanics.
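The aggregation effect just described is a form of Simpson's paradox, and a worked example makes it concrete. In the sketch below, every number is invented: State A has the higher graduation rate in each of two groups, yet the lower overall rate, because more of its population falls in the group whose rate is lower everywhere.

```python
# Worked example of the aggregation effect: better in every group,
# worse overall. All populations and rates are invented.

def overall_rate(groups):
    """groups: list of (population, graduation_rate) pairs."""
    grads = sum(pop * rate for pop, rate in groups)
    total = sum(pop for pop, rate in groups)
    return grads / total

state_a = [(400, 0.90), (600, 0.60)]  # higher rate in each group...
state_b = [(800, 0.85), (200, 0.55)]  # ...but A has more people in the
                                      # lower-graduating group
```

State A beats State B within both groups (0.90 vs. 0.85, 0.60 vs. 0.55), yet its overall rate is 72 percent against State B's 79 percent, purely because of the population mix.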
Callan said that's because they don't want to give states any excuses for low performance, and I agree with that. But it does bear on whether grades are really comparable between states. The notes to the report say that factors like wealth and economic vitality had about a 25 percent influence on grades, and that race and ethnicity had about a 10 percent influence. That's substantial.
All in all, the report falls short of its goals. I think I'll give it a C-minus.