Bout to Be MORE Racist: The Philly Magnet School Decision
Recently, I got this email from my dear friend Eli Goldblatt (at Temple U). He put me in contact with a committed and engaged pair of teachers who are concerned about the changes to admissions for magnet schools in the Philadelphia area, and more specifically about a 90-minute, timed writing exam that students would now have to take for several of the magnet schools' admissions processes. The exam would be scored by a computer, an automated scoring technology. The move is meant to centralize decisions and eliminate local decisions that were previously understood as racist, or as having so much bias that they led to racist outcomes in some of the magnet schools. So the changes are meant to "increase diversity" (that's good), but as far as I can tell, what they mean by this is to "eliminate bias" in admission decisions made by people, which opens up a host of other racist problems that it seems clear they've not thought through.
These two Philly teachers wanted my opinion and help. As I was writing up my response to their email, I realized perhaps others would like to hear my response. It may offer others something too. I've reproduced the main part of it below with only slight edits for readability in this venue.
But what exactly is this data point that the district official feels is so important to have? That is, what is the writing assessment actually testing? It seems to me, given the time constraints etc. of the test, that the automated scoring technology can only assess “correct English” by using some construct that is defined by the programmers of the technology. First, you gotta trust that those programmers know enough about what makes for good writing. Then, you have to trust that the technology can assess writing in a way that is valid enough for the decisions you are trying to make from it. In this case, I think, it’s who gets the opportunity to go to certain magnet schools. That is a pretty important decision to be left to computers and programs that educators didn’t help design and don’t understand well enough to know how they produce whatever they produce.
Programs like the one your district is looking to use cannot assess, for instance, something like a compelling argument, or originality. I don’t even know of any automated program that can UNDERSTAND language, at least not in the ways that humans do. Les Perelman from MIT has debunked these technologies for years. You might take a look at his Babel generator, which produces junk sentences that end up scoring very high on these technologies. The writing exam is also a standardized test, and I’m sure you know of the racist history those have -- I've blogged about this too, such as in "Is the Assessment of Language Colonialist?" (pitched to students in "Where Does Grading Come From?"). Norbert Elliot’s award-winning book, On A Scale (2005), offers a comprehensive history of large-scale testing that is closely linked to eugenics movements and the like. Patricia Ericsson and Richard Haswell’s Machine Scoring of Student Essays (2006) also offers lots of research on this topic that points to the problems with such automated scoring technologies. I’ve written a book that draws on these histories too to make the argument that all writing assessments, but especially such writing exams, are racist (see chapter 1 of Antiracist Writing Assessment Ecologies, 2015). They really don’t work any other way. And using computers and automated scoring programs does NOT change that outcome. People, computer programmers and their biases, make those programs.
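To see concretely why Babel-style junk can score high, here is a toy sketch of a scorer that, like many automated systems, only tabulates surface features of the text. Every name, feature, weight, and sample sentence in it is my own invented illustration, not any vendor's actual construct:

```python
# Toy illustration (my own invention, not any vendor's system): a "scorer"
# that only tabulates surface features of the text, never meaning.

def score_essay(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    word_count = len(words)
    # A linear mix of surface features; the weights are arbitrary assumptions.
    return 0.5 * avg_word_len + 0.01 * word_count

plain = "Dogs are loyal. They help people and make good friends."
gibberish = ("Perspicacious constellations of epistemological obfuscation "
             "promulgate transcendental hermeneutic quiddities ") * 5

# Longer strings of fancier words beat clear, meaningful prose.
print(score_essay(gibberish) > score_essay(plain))  # True
```

A real system tabulates many more features than this sketch does, but the logic is the same: nothing in such a construct requires the writing to mean anything, which is exactly what the Babel generator exploits.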
And of course, computer programmers are likely to know little about the influences of racist biases in literacy practices, in language, and in how those things are assessed. They just don’t learn about how language works or how to assess it. So they are the wrong people to be in charge of such projects, which are by default racist projects in our current society. If one wishes to argue the opposite, the burden of proof is on them to show how their program and the writing test are antiracist, how they are not going to produce the racist results that have always been produced by such tests. Turning your head away from this burden of proof equates to having a status quo, racist, and white language supremacist assessment. Here’s one way I know computer programmers don’t know much about language or racism: They are not trained in it, nor do they get any significant experience in understanding it in college. At ASU, for instance, our BS in Computer Science requires 1 credit – not one course, 1 credit – in Computing Ethics. And we have a good program.
But let me offer a different kind of argument, one from the disciplines of math and science. Cathy O’Neil’s famous book Weapons of Math Destruction (2017) and Ruha Benjamin’s Race After Technology (2019) each explain in different ways how our technologies, which are filled with “black boxes” that you and I and likely the district official cannot open up to see what’s in them, are a problem, often a racist problem. For instance, take the actual construct that the technology is using to make judgements on student writing. What exactly is it seeing and tabulating? What is the construct? The programmers would HAVE to know this if they were going to program the thing, and so the company HAS to know it. If they cannot produce an articulation of that construct, then they are writing code blindly. It’s like asking a teacher how they evaluated your paper and being told, “I judged by whether your paper was good or not, and I just know what good papers are. I don’t need to tell you.” That’s a shit answer, of course, so we shouldn’t accept similar ones from the companies we pay to do this kind of work. If you’re taking taxpayers’ money to assess their kids’ languaging, then you better have a really good, clear, precise answer to that construct question.
And when you look at the construct, what you’ll likely find is a white, middle class English standard that ignores many students being tested, and privileges the same kinds of students from the same places. It’s the normalcy of racism, the status quo that is white language supremacy. And it’s all in the guise of neutrality, a neutrality or non-bias, that is not possible when you judge shit – when we judge anything. To judge language means you apply particular biases to whatever you are judging. That is how judgement works. The definition of judging and making decisions always involves biases. It has to. So the district official doesn’t understand much about what assessment actually is, or he’s lying to you.
But we might also wonder: How are the algorithms in the program put together? As O’Neil explains in her book about a range of math-related technologies, that is, ones with algorithms embedded in computer code, there is lots of bias in those programs, and that bias is conventional racial and class bias inherited from their programmers and from the data used to validate those programs – that is, the data used to teach the programs how to be “better” predictors, better judges. And do you know what the key criterion for a better computer judge is? It’s acting like a human one. That is, it is consistently replicating the same kinds of biases that people have. And if you were a good computer programmer making a program that would judge language, you would want to replicate human judgement of language as closely as possible, because that is the best way to simulate the human language exchanges that the program is meant to give you some insight into, right? Ruha Benjamin names this problem “the new Jim code” and offers several principles for building abolitionist tools in our technological society. She may be a good reference for the district and others as well.
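Here is a toy sketch of that training criterion: fit a model so it agrees with human raters as closely as possible. The data, the single feature, and the scores below are all invented for illustration; the point is only that whatever pattern sits in the human scores, including a penalty against some marked dialect feature, becomes the machine's pattern.

```python
# Toy illustration (invented data): "train" a scorer to match human raters.
# Imagine raters scored essays containing a marked dialect feature lower
# than otherwise comparable essays without it.

training = [
    (0, 4.0), (0, 4.2), (0, 3.8),  # feature absent: human scores ~4
    (1, 3.0), (1, 2.8), (1, 3.2),  # feature present: human scores ~3
]

# Ordinary least squares fit of score = a + b * feature (closed form).
n = len(training)
mean_x = sum(x for x, _ in training) / n
mean_y = sum(y for _, y in training) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in training)
     / sum((x - mean_x) ** 2 for x, _ in training))
a = mean_y - b * mean_x

# The fitted weight is negative: the machine has faithfully learned the
# raters' penalty and will now apply it consistently, at scale.
print(round(a, 2), round(b, 2))  # 4.0 -1.0
```

Nothing in the fit asks whether the penalty is fair; agreement with the human scores is the whole objective.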
Using such a system with such a construct and standard ignores the 1974 NCTE statement, Students’ Right to Their Own Language. This is a statement that was reaffirmed in November 2003, then again in November 2014. It states:
Language scholars long ago denied that the myth of a standard American dialect has any validity. The claim that any one dialect is unacceptable amounts to an attempt of one social group to exert its dominance over another. Such a claim leads to false advice for speakers and writers, and immoral advice for humans. A nation proud of its diverse heritage and its cultural and racial variety will preserve its heritage of dialects. We affirm strongly that teachers must have the experiences and training that will enable them to respect diversity and uphold the right of students to their own language.

More recently, in July 2020, CCCC published “This Ain’t Another Statement! This is a DEMAND for Black Linguistic Justice!” It makes five demands of teachers and institutions:
- teachers stop using academic language and standard English as the accepted communicative norm, which reflects White Mainstream English!
- teachers stop teaching Black students to code-switch! Instead, we must teach Black students about anti-Black linguistic racism and white linguistic supremacy!
- political discussions and praxis center Black Language as teacher-researcher activism for classrooms and communities!
- teachers develop and teach Black Linguistic Consciousness that works to decolonize the mind (and/or) language, unlearn white supremacy, and unravel anti-Black linguistic racism!
- Black dispositions are centered in the research and teaching of Black Language!
Does the technology address these demands for Black linguistic justice? Is it programmed to understand and rate fairly and favorably Black English? Is that in its construct? Is that in its algorithms? Can they show you how they’ve programmed that into the machine, so to speak? But wait, there’s still more. In June 2021, NCTE/CCCC published another statement, CCCC Statement on White Language Supremacy. This statement offers a definition and characteristics of white language supremacy (WLS) in society and schools. Here’s part of what this statement offers as WLS:
WLS assists white supremacy by using language to control reality and resources by defining and evaluating people, places, things, reading, writing, rhetoric, pedagogies, and processes in multiple ways that damage our students and our democracy. It imposes a worldview that is simultaneously pro-white, cisgender, male, heteronormative, patriarchal, ableist, racist, and capitalist (Inoue, 2019b; Pritchard, 2017). This worldview structures WLS as the default condition in schools, academic disciplines, professions, media, and society at large. WLS is, thus, structural and usually a part of the standard operating procedures of classrooms, disciplines, and professions. This means that WLS is a condition that assumes its worldview as the normative one that allegedly everyone has access to regardless of their cultural, social, or language histories (Inoue, 2021). WLS perpetuates many forms of systemic and structural violence.
We might ask how the school district has used this statement to make or rethink their decision to use this writing test and technology. How have they asked the company and programmers to address WLS and Black linguistic justice in the programming code and the writing construct used? How does the test and technology NOT reproduce white language supremacy, the normal way of things? More broadly, how would the decision makers address the very real concern that using such a test with automated scoring runs counter to the conclusions of the statements above, made by the leading national organizations on the teaching of English?
Finally, you might make an appeal to the fact that if such a test ends up causing disparate impact in its outcomes across different racial groups, then the school district is breaking the law. Mya Poe and John Cogan offer a clear explanation of how this works along racial lines in their very good article, “Civil Rights and Writing Assessment: Using the Disparate Impact Approach as a Fairness Methodology to Evaluate Social Impact” (2016). It is a blueprint for how to do this work, but most importantly, it can be a deterrent if the school district realizes that they open themselves up to a lawsuit that they will likely lose. The burden of proof is shifted, as they say, to the school or program using this method. If your school district will not listen to this, then I’d go to every magnet school involved -- their principals, faculty, students, and parents -- and bring up disparate impact. It likely will occur along income lines and racial lines. Think “interest convergence,” a Critical Race Theory term that means that for change to happen, the interests of the ruling group must align or converge with those under rule, or those oppressed (see Delgado and Stefancic, Critical Race Theory, p. 22; chapter 5 of Derrick Bell, Silent Covenants: Brown v. Board of Education and the Unfulfilled Hopes for Racial Reform, 2004). We all have an interest in such assessments and the social engineering they do as a matter of course. But one interest of those making decisions at the school board or district, or at local magnet schools, that may converge with yours is that they don’t want to be sued. They don’t want to break the law.
This blog is offered for free in order to engage language and literacy teachers of all levels in antiracist work and dialogue. The hope is that it will help raise enough money to do more substantial and ongoing antiracist work by funding the Asao and Kelly Inoue Antiracist Teaching Endowment, housed at Oregon State University. Read more about the endowment on my endowment page. Please consider donating to the endowment. Thank you for your help and engagement.