A compelling piece appeared on the American Physical Society News website a while ago that just came to my attention. (Thank you, Michael Fisher!) Author Casey W. Miller, an associate professor of physics at the University of South Florida, asks the physics community to consider its discipline’s poor record on gender, racial, and ethnic inclusion. That pattern has been documented for years and is the subject of plenty of conversation, but as Miller makes clear, it is not a problem that has gained any broad or consistent practical traction in the field.
Miller’s column is a model of careful argumentation that is worth keeping on hand for its clarity around an intractable social problem. But I think there’s one particularly transferable lesson in the piece: Miller makes a direct and powerful connection between university ranking systems (such as that propagated by US News) and a lack of diversity in physics graduate programs.
The GRE scores of admitted students factor into these numeric comparisons among programs in many disciplines, just as the ACT and SAT scores used by undergraduate programs drive admissions decisions at virtually all U.S. schools. Not surprising to readers of this blog, most likely, is Miller’s case that the heavy reliance by physics graduate programs on GRE scores impedes gender and racial diversity in that field. We learn that women and students of minority backgrounds intending to pursue the physical sciences tend to score lower on the GREs, often falling below cut-offs for admission. But more surprising, perhaps: Miller then summarizes previous studies that have shown GRE scores to be poor predictors of research success among physics students, undeniably “the aim of the PhD.”
What’s going on here? How does a field like physics, one that many of us would generally think of as profoundly reflective about its own knowledge-making, about its own ways of seeking and handling data, end up with such a deeply skewed and selective relationship to data? By defaulting to conventional (and discriminatory) ideas about how easily people can be converted to data.
That many factors determine an individual’s performance on a standardized test has long been understood by researchers, and the list of those factors keeps growing. Physics professor Suzanne Amador Kane reminded me of an article in the NYTimes by Po Bronson and Ashley Merryman from a few months ago. That piece summarized new research on biological contributors to students’ stress while testing and the variable psychological reactions that different students have to that physiological experience. We should of course approach all such genomic and bodily explanations with great care because, given the strength of discriminatory social structures in the U.S., those explanations tend to displace social factors in our analyses. But that’s all the more reason to question the very term “standardized testing.” And to remember that the link customarily projected between STEM fields’ selectivity and practitioners’ promise or rigor, as I keep saying, needs to be seen as an arbitrary one.
What I’d highlight from Miller’s version of things is this: The use of scores certainly restricts participation in higher education and relies upon discriminatory social categories. But it also serves as a perfect disguise for our exclusionary educational habits; the symbolic values of testing and ranking are immense in STEM disciplines. Score-based admissions, and the university rankings built upon them, suggest the pursuit of both quality and impartiality by higher ed. Those commitments are assuredly claimed by all disciplines, but the world of STEM expertise has a special investment in the objectivity of quantification. Throughout the world of science, comparisons among bits of data (as rankings by their nature perform) reassert the value both of individual measurements and of the metric itself…that is, they help validate the very act of measurement. But the understanding of test scores as a reflection of students’ promise, and the veneration of those scores through school rankings, are far from fair, and Miller helps us step back from that habitual, uncritical, numerical embrace.