Jillian Lederman, a fellow at the Wall Street Journal, has published some intriguing (and sure-to-be-controversial) opinions regarding disability accommodations in testing, including in law schools. There is no doubt that law schools have wondered how to operate legally and ethically in this space over the past decades, both in instruction and in examination—at both of the schools at which I have spent the lion’s share of my time, faculty and staff have grappled with the issues, only to grapple with them again. At least some aspects seem to be ‘essentially contested’ spaces in which everyone wants to do right, but in which it can be difficult to reach agreement on precisely what ‘right’ entails.
But if there are difficult questions there, there may be some easy questions as well. At the least, there seem to be questions worth asking that have nothing to do with disability, and on which, therefore, professors who are not experts in those areas can meaningfully opine.
And here may be one: why did law schools ever adopt the absurdly long issue spotter, and why do so many schools and professors continue to use it? I’m thinking here of the truthful student claim that, ‘I could have answered every single issue, but there simply wasn’t time; I didn’t step out of the testing room a single time, and I couldn’t finish!’ My sense is that there are a reasonable number of such honest claims; if so, why is such an exam a good one? (For the professor doubting my claim, I challenge him or her to sit down and take his or her own examination, or a colleague’s lengthy examination, under timed examination conditions; distressingly, many professors don’t do this.)
Like most professors, I had no educational background in pedagogy of any kind, and my initial examinations were absurd. (Of course, phew, they still produced individual grades and a curve that strongly correlated with the same students’ performance in other courses… but absurd is absurd.) Why were they absurd? Because they were ridiculously too long—in my case, a ridiculously too long combination of objective questions, a traditional issue spotter, and a policy question. Why did I make them ridiculously too long? Because (1) I modeled them on the examinations I had taken as a student and on those my colleagues generously shared with me, and (2) I feared they would otherwise be too easy. ‘If I can rather easily answer the questions,’ many a new professor thinks, ‘then so will all my students, and it will destroy the curve!’ Wrong. But it feels right at the time. Thus, over the years, my exams began to shrink. (As have many standardized exams in fields from high school to professional school, but I’ll leave such broader thoughts to another day.)
And as I began to think more about this, I began to check for correlations in my grading. Since I most often graded by giving a point for anything helpful towards a right answer (a ‘check’), there were no particular numbers of points for any particular issue. (There are advantages and disadvantages to this and every other grading scheme, but I won’t get into them here.) One upshot is that it was easy to see whether, say, a student’s grade on five pages of exam writing correlated with the same student’s grade on ten pages of exam writing. In other words, if I calculated and assigned grades based only upon the first five pages of every student paper, would each grade closely track the grade produced when I graded every written page? Unsurprisingly—is there reason to think students suddenly become knowledgeable or ignorant right around page six?—the correlations were strong. Moreover, the correlations between grading on the objective portion and the essay portion were also strong. And thus, I began to wonder, why are we giving, as one student in Ms. Lederman’s article calls them, “racehorse exams”?
Is it because lawyers often bill by the hour, and so we think this is an essential lawyerly function? I have grown increasingly skeptical… in my own work, at least, I would much rather have a colleague who takes (reasonably) longer and produces better work. Is there reason to think partners in law firms wish otherwise? How often does a partner in a law firm rush into an associate’s office, hand her ten pages of absurd yet cursory facts that raise nearly every legal issue in a law school course, and demand a written analysis of them all in three or four hours? My practice experience is thin, but, again, put me down for ‘skeptical.’
So, in short, where’s the examination fire? Why not give a reasonable exam that most students can finish comfortably within the examination time, with sufficient time to read carefully, to call to mind the law they have learned, and to apply that law… producing answers that are objectively correct or objectively wrong?
This is, for what it is worth, what I now try to do in my courses, and it has led me to entirely objective examinations of roughly fifty questions. This format works particularly well for me because I personally do well spending countless hours crafting questions, but less well reading countless rushed student recitations of roughly the same answer. It seems wise—or at the very least reasonable—for any professor to play to her strengths. Others will therefore reach different particulars. And since how to teach and how to test has always been a core issue of academic freedom as we run American legal education, I’d never seek to limit a colleague’s choice. Besides, if we are to learn, it is probably only collectively, through individually different choices.
I think we could—and probably should—have robust debates on many specifics regarding examination. But I haven’t seen a persuasive-to-me justification for the traditional, lengthy “racehorse” law school issue spotter. Is it out there? If not, we could solve part of what Ms. Lederman describes as a problem without risking discriminating against anybody—by giving better exams for everybody.