SAT'S PROBLEM IS THE STUDENTS, NOT THE TEST

Jul. 11, 1993
Los Angeles Daily News

     Boys do better than girls on standardized tests such as the
Scholastic Assessment Test, and critics say that results from
"gender bias." I'm skeptical.

      Mostly I'm skeptical because a lot of the people advancing
this claim have a proprietary interest to defend. A whole industry
has sprung up to offer courses in how to do better on the SAT.
The College Board, which sponsors the test and in March changed its
name from the Scholastic Aptitude Test to the Scholastic Assessment
Test, gives excellent advice on that subject, and it's free: Take a
demanding program of academic courses in high school. But nobody
will profit (except the students) if students take that advice.

     The bias hunters say they can sniff out questions that tend to
favor boys or girls. But their notions - that questions that refer
to sports favor boys, for instance - are simplistic. I know less
about sports than almost anybody you could meet, but who needs to
be a sports fan to figure out that basketball is to basket as
soccer is to goal?

      Cooking questions don't necessarily favor girls, either. Men
did better than women on this one. "A recipe calls for 1 cup of
nuts, 5 cups of chocolate chips, and 1/3 cup of raisins. What is
the ratio of nuts to chips to raisins?" (It's 3:15:1.)
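The arithmetic behind that answer (multiply every amount by the
smallest number that clears the fractions, here 3) can be checked in
a few lines of Python, using only the standard library:

```python
from fractions import Fraction
from math import lcm

# nuts : chips : raisins, in cups, as exact fractions
amounts = [Fraction(1), Fraction(5), Fraction(1, 3)]

# smallest multiplier that turns every amount into a whole number
scale = lcm(*(a.denominator for a in amounts))

ratio = [int(a * scale) for a in amounts]
print(":".join(map(str, ratio)))  # 3:15:1
```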

      Because stereotypical ideas about who will find questions
hard can so easily be wrong, the Educational Testing Service, which
prepares the test, does a statistical analysis to identify
questions that are answered differently by different groups of
people with the same ability. Such questions are either eliminated
or balanced with other questions.
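The screening ETS describes can be sketched in code. What follows is
a simplified illustration of the matched-ability idea, not ETS's
actual procedure (ETS uses a Mantel-Haenszel statistic); the function
name, group labels, and the 10 percent threshold are all made up:

```python
from collections import defaultdict

def flag_dif_items(responses, groups, scores, n_items, gap=0.10):
    """Flag items whose pass rates differ between group 'A' and group
    'B' among test-takers with the same total score (a crude stand-in
    for 'same ability'). responses[i][j] is 1 if person i answered
    item j correctly; groups[i] is 'A' or 'B'; scores[i] is person
    i's total score."""
    tally = defaultdict(lambda: [0, 0])  # (score, group, item) -> [right, seen]
    for resp, grp, score in zip(responses, groups, scores):
        for item, right in enumerate(resp):
            cell = tally[(score, grp, item)]
            cell[0] += right
            cell[1] += 1
    flagged = set()
    for item in range(n_items):
        diffs, weights = [], []
        for score in set(scores):
            a = tally.get((score, "A", item))
            b = tally.get((score, "B", item))
            if a and b:  # both groups represented at this total score
                diffs.append(a[0] / a[1] - b[0] / b[1])
                weights.append(min(a[1], b[1]))
        if weights:
            avg = sum(d * w for d, w in zip(diffs, weights)) / sum(weights)
            if abs(avg) > gap:  # item favors one group at matched ability
                flagged.add(item)
    return flagged
```

The key point survives the simplification: groups are compared only
within bands of equal total score, so a question is flagged for how
it behaves at matched ability, not for which topic it mentions.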

      So if the bias doesn't lie in the individual questions, why
do scores turn out differently? Most probably, because they are
measuring a real difference, especially in mathematics. Women are
less likely to take college prep courses, and they are more likely
to be the first in their family to attend college, and both of
these factors have a large effect on scores.

      On the verbal part of the exam, the 1992 average score for
women was 419 (on a scale from 200 to 800) and for men, 428. That
difference is quite small, compared with the gap that results from
different high school preparation: from 348 (weak) to 464 (strong).
Parents' education has an even stronger effect.

      In the mid-'70s, when more men than women took the SAT, there
was almost no gap in verbal scores.

      The math gap is wider, from 456 to 499. But again, it's
smaller than the range just for women alone, which runs from 388
(parents with no high school diploma) to 510 (a parent with a
graduate degree).

      Besides the gender differences in scores, there are sizable
ethnic differences. There are also significant effects from
household income, and from location, whether by state or urban vs.
rural. And all these factors interact, in ways that are not at all
obvious or intuitive.

      The "gender-bias" theorists like to make much of the fact
that women generally earn higher grades than men with the same SAT
scores, a phenomenon they describe as "underpredicting" the
performance of women.

      One could just as logically say that the tests overpredict
the performance of men, but that wouldn't be nearly as gratifying
to people who want to feel aggrieved.

      "Promoting a test which underpredicts the performance of the
majority of its consumers," writes chief theorist Phyllis Rosser,
"is more than consumer fraud, it is irresponsible and damaging.
After nearly a quarter of a century of this inequity, women cannot
wait any longer to be equally included in the talent pool."

      Rosser is the director of the Equality in Testing Project in
Holmdel, N.J., and author of "The SAT Gender Gap: Identifying the
Causes," published in 1989 by the Center for Women Policy Studies.

      Among the ills Rosser attributes to the gender gap are a loss
in women's self-esteem when they learn their SAT scores, lowered
aspirations for college and, for all I know, the hole in the ozone
layer and radioactive spinach.

      Changing the SAT to lower men's scores would eliminate the
gender gap just as effectively as raising women's scores, but one
would inflate women's self-esteem and the other wouldn't. Neither
would improve the effectiveness of the test in predicting college
performance. If the scores changed, the formula used to predict
performance would change too, but the predictions wouldn't. If you
want to measure something three inches long, it doesn't matter
want to measure something three inches long, it doesn't matter
where on the ruler you start.
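The ruler point can be made concrete: rescale every score by the same
linear rule, refit the prediction formula, and the predictions come
out unchanged. A minimal numerical sketch with made-up numbers
(ordinary least squares; nothing here is real SAT data):

```python
import numpy as np

rng = np.random.default_rng(0)
sat = rng.uniform(200, 800, size=50)                    # fake scores
gpa = 1.0 + 0.004 * sat + rng.normal(0, 0.2, size=50)   # fake outcomes

def predict(scores, outcomes, new_score):
    """Fit outcome = a + b * score by least squares, then predict."""
    A = np.column_stack([np.ones_like(scores), scores])
    a, b = np.linalg.lstsq(A, outcomes, rcond=None)[0]
    return a + b * new_score

raw = predict(sat, gpa, 600.0)
# Same exam, new "ruler": every score shifted and stretched the same way.
rescaled = predict(2 * sat - 100, gpa, 2 * 600.0 - 100)
assert np.isclose(raw, rescaled)
```

The fitted intercept and slope absorb the rescaling exactly, which is
why changing where the ruler starts cannot make the test better or
worse at its job.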

      There's nothing wrong with courses that coach students in how
to do better on the SAT, as long as they work. I taught a course
like that five years ago, in Shanghai. It covered the Graduate
Record Examination, which is very similar to the SAT, just harder.
There wasn't any great secret to what we did in the class, which
was to practice taking sample exams with the same time limits as
real exams, and then to talk about why the answers were what the
answer book said they were. And it did work. Almost everybody
improved somewhat, and a few people made spectacular gains.

      Most of my students had never met a native speaker of English
before. For that reason they needed a teacher, although otherwise
they could probably have taught themselves. There were a lot of
cultural gaps that someone had to fill in. Such gaps don't mean the
test is biased, though, just that it is measuring something that's
important for success in graduate work - namely, fluency in
English. If I had to take a college admission test in Chinese, I'd
register as brain-dead, because I can't read Chinese. But that's a
fair estimate of my likelihood of succeeding as a student in a
Chinese-speaking university (nil).

      The Chinese lesson I would like to leave you with is that
there was no gender gap in mathematics in my Shanghai classes. Both
the women and the men were so far above the U.S. average on the
quantitative section of the GRE that we decided it wasn't important
to practice that part of the test more than once. They weren't math
and science students, either; most of them were in the humanities.

      The gap that worries me most, in other words, is not the
gender gap - it's the nationality gap. Instead of fretting about
whether standardized tests are equally fair to men and women, I
think it's far more important to find out why both sexes do so
badly.
