Recently, Erik Gilbert, an Associate Dean and Professor of History at Arkansas State University, wrote an opinion column in the Chronicle of Higher Education called "Does Assessment Make Colleges Better? Who Knows?" It's a very good question, one I've asked myself several times during my career; however, I've usually asked it in a different way: "HOW can assessment make my college (or program) better? How can I know?" It seems part of the answer would have to be student participation -- but I'm getting ahead of myself.
[Image: Gilbert's column in the Chronicle]
Gilbert begins his piece with his own search for colleges with his sons. The features that made a college worth considering to him and his sons did not lead them to college assessments or their results -- not that a parent or prospective student could readily find such results on many websites. When I think about how my sons and I will pick colleges in the near future, we'll likely consider the same kinds of things Gilbert describes: faculty interests, students' abilities to work with faculty on projects, extracurricular amenities like rock climbing walls and rec centers. We likely won't look at any program assessment results from, say, the writing program or a major of interest. Then again, because I'm in this field, I know where many of the good writing programs are. I know their directors, and I've read their published results.
Invisible Good vs. Visible Bad Writing Assessment

Still, Gilbert has a point. What good is assessment? Does it do any good for teachers or students? How would we know? Recently on the WPA-L listserv, Galen Leonhardy asked about the role of the writing assessment theorist, which got a number of responses. Rich Haswell, a long-time writing assessment researcher (now retired), responded with a funny answer that has a kernel of truth to it:
The role of good writing-assessment theorists is to regard history, ideas, and findings and come to conclusions that are disregarded by everyone else.
The role of bad writing-assessment theorists is to shape history, ideas, and findings to fit the druthers of their bosses.

While Haswell's cynically humorous response doesn't answer the question of whether writing assessment (or any assessment) is good or not, it does point to why educators often see so much bad assessment. The good writing assessment is disregarded or ignored by everyone. We never see it, or it's off everyone's radar. The bad assessments are just supplications to higher-ups, and they are the ones most likely to be visible, even if they don't factor into many students' college application decisions.
Of course, Haswell doesn't mean his answer to be an actual answer, and I know he'd say that in reality good and bad assessment are not so cut and dried. We often must do assessments we disagree with because we are required to. What counts as good and bad assessment is contextual, based on the unique place, people, and purposes of any school. What is important to hear in Haswell's response is the kernel of truth: good writing assessment is often ignored or disregarded simply because it is assessment. Is it a case of everyone, including well-meaning teachers, parents, and students, throwing out the good assessments with all the bad ones? Guilt by association?
Are We Using the Wrong Criteria To Assess?

Gilbert sees another reason to ask the question about assessments' abilities to make colleges better, which may help us develop one answer for why good assessments are so often disregarded. He says near the end of his article:
And most troubling of all is that the fundamental premise of assessment is that the problems we need to test for and try to fix are found in the classroom and the curriculum. So while we are agonizing about whether we need to change how we present the unit on cyclohexane because 45 percent of the students did not meet the learning outcome, budgets are being cut, students are working full-time jobs, and debt loads are growing.

So we spend money on assessment, which only exacerbates the problems of cost in higher ed, where money can often be hard to come by for good programs and scholarships. We focus too much on matters inside the classroom, when factors outside of it matter just as much or more to students' present and future success. For Gilbert, what makes good assessment, then, is assessment that uses external criteria for validating its decisions. His examples are of "changes in a college's reputation, ranking, or employment prospects for its students." These are surely good things to know, things worth measuring in some way, things parents and students care about and would find to be good measures of value and worth in schools. Of course, they are not the only reasons to go to school. They do seem to assume that the purpose of a college (and the reason to go to college) is to prepare students for their roles in a corporate Capitalist economy.
Still, to carry Gilbert's argument a bit further, assessing for academic outcomes won't tell you whether students have employment prospects after graduation (we all need to work and earn money), since the external factors that control those prospects lie mostly outside of students' learning and higher education generally. Beyond the fluctuations of Capitalist labor markets controlled by political decisions and other forces, how can, for instance, a history or philosophy department control the socio-cultural narratives and values in our society today that devalue such worthwhile degrees, that see them as worthless? Is it not important to have a good number of citizens who have thought and can think carefully about history and ethics in just about every position possible? Of course it is. But what job is there for a philosophy major or a history major today?
Thus, I think good assessment in higher ed doesn't have to cater to the whims of the Capitalist marketplace, of society, or of current cultural trends. I'm not saying that education should ignore such things, but it must offer a corrective, a counter voice to society and to the Capitalist marketplace. If we don't, who will? So good assessment might provide students with experiences to participate in learning and its assessment, to participate in judgment, which means they participate in meaning-making. Put another way: good assessment might ask students to make judgments on learning and have those judgments count in the classroom in real ways. This kind of good assessment can make colleges better because the criterion by which we validate our decisions is students themselves.
What's Good Writing Assessment?
So let me focus on good writing assessment, since that is what I do, but I don't think my ideas are exclusive to writing assessment. Writing assessment should not be primarily about measurement. That is, I don't think educators should design assessments in order to simply measure student value, proficiency, or accomplishments. These things require academic standards and yardsticks that by their nature are problematic when used with/against diverse student populations. I've discussed this phenomenon in various places, so I won't go into it here (for my most thorough argument about this question, you can read Antiracist Writing Assessment Ecologies: Teaching and Assessing for a Socially Just Future). But beyond this problem, assessments that measure, that look to find out who meets levels of proficiency and who does not, usually make students feel poked at, like lab rats being tested. In these instances, assessment is usually done last, an add-on to the class or lesson/unit. It is often in the way, or something to get out of the way, so that the real work of the class can continue. Students feel this, and feel that they are not trusted to learn without assessments -- tests. This kind of bad assessment constructs an adversarial relationship between teacher and students. It assumes students will do what they can not to learn, unless they are made accountable through tests.
On the other hand, good writing assessment is an entire ecology that involves students in processes of inquiry that build agency and stake in their learning. In this kind of classroom, students don't need to be tested; instead, they test themselves through their learning processes. Assessment becomes central to the course and its pedagogy, to what students are doing on a weekly basis. It becomes the main reason students are there -- and students can see and feel it. This kind of good assessment doesn't even need to produce grades or rankings or numbers. What it produces is student learning, questions, dialogue, exchanges, problems, and contingent solutions. Learning is always contingent and complex, so why do we think its measurement will be anything but contingent and complex? In a simplistic way, I'm taking the old adage that we learn most when we teach others and modifying it: we learn most when we assess each other. But don't misunderstand me. When I say assessment, I don't mean grading. Grading, by its nature as a ranking mechanism, is usually bad, harmful (see Alfie Kohn for a good argument against grading).
Testing An Assessment
A simple way to test or validate an assessment in a classroom, for instance, might be to ask: What does this assessment do for students and their learning? Or: How does this assessment engage students in learning? Better answers to this validation question will suggest the centrality of students in all learning/assessing processes, and a deep, compassionate trust in them to do the work asked, which ironically results in higher degrees of fairness in the assessment system (a discussion of fairness is for another day). For instance, some answers to this validation question might run along the following lines and should have evidence to support them:
- engages students in processes of inquiry and exchange of judgments based on the content of the course and the students' interests or personal connections to course content
- helps students build/construct authority or confidence in what they are learning and its usefulness to them now and in their foreseeable futures
- constructs power and ownership of the classroom's assessment ecology, which includes the power to affect students' course grades directly
What would this kind of classroom writing assessment look like? That's more complex and dependent on the school, students, teacher, constraints, etc. But I've offered a few ways to see how I've done it in my classroom in my recent book. I'll offer just this one bit of advice here. Good writing assessment gets students to help teachers do it in significant ways. It doesn't produce grades or numbers. It produces student agency, dialogues and exchanges that are about the learning, and intrinsic motivation to learn.
Doing this kind of assessment typically requires that a course be rethought so that assessment comes first in its design -- not last, not something to get out of the way, but something to dwell on. I use a grading contract in all my courses to determine course grades. This gets rid of all the other grades on individual papers and opens up the classroom for more critical discussions of judgment. I design processes of assessment with my students that they do, and that we go over afterwards. I ask them to continually reflect each week on these processes. And we think a lot about the contract and what it asks of us. To see my class in detail, see "A Grade-less Writing Course that Focuses on Labor and Assessing."
So how can we have good assessment in college? I think it starts and ends with us (teachers and administrators) getting out of the way, and letting students lead, engage in it, and show us how it's all done.