We may be standing on the precipice of a new world. Genuinely new. New, at least, in a more meaningful sense than humans have, so far as we know, ever experienced. It is more novel, for example, than merely a different geography onto which our particular species or culture has not yet set foot—the ‘new’ of humanity’s history books. And it is more novel than what we find in the vast space beyond our globe in which, so far, we can find no evidence of intelligent life.
For in this new world, we will be the creators of intelligent life. The creators of a cognitive species. A species of machine intelligence at least equal to our human own across the entire broad spectrum of human interest and competence. ‘Artificial general intelligence,’ or AGI, it is often termed. It is the holy grail of AI, and it would bring to reality a mainstay of science fiction. In such a world, robots will formulate and write posts like this at least as well as I could ever do… and afterwards they will stop by your places of work and leisure to show you up at whatever it is you love to do. So it goes.
Or not. Because… we’ve been here before. Not in the particulars, of course—humans have never before experienced the likes of ChatGPT, Claude, Gemini, and similar AIs. And such generative AIs have certainly gone from lame to impressive very quickly. But we have a habit of predicting technological futures on timescales that don’t pan out. In the search for artificial intelligence, as in so much else of life, our experience has been that ‘winters’—periods of prolonged, disappointing lack of progress—follow each ‘spring’ of giddy anticipation. Indeed, here I am particularly mindful of the admonition of Nick Bostrom:
Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible to suppose that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred. … Twenty years may also be close to the typical duration remaining of a forecaster’s career, bounding the reputational risk of a bold prediction.
I feel that. Of course, it isn’t me predicting AGI’s arrival; it is many an expert in the field. But they too have time-limited careers, and, naturally, there are also AGI skeptics.
And I like win-wins.
So, I approach tomorrow as an AGI agnostic. (I approach tomorrow as a lot-of-things agnostic, but that’s mostly for another day.) As an AGI agnostic, are there things we can learn about our systems of criminal law and adjudication by considering what changes AGI ought to bring… even if we may never achieve it? Is it enough to seriously ponder the ‘what if’? Might we achieve the insights and the will to improve our systems merely by the ask?
I think maybe so. One of the wonderful things about teaching in the criminal law space is that our timescales tend to be long—humans have ever been pondering what constitutes justice. And one of the weaknesses may therefore be that we too rarely reconsider things from any set of first principles. We bob and weave, of course, as societal and technological changes beset us… but especially since we don’t share any particular set of first principles (I’m thinking of some great lines by Blaise Pascal as I write this, and maybe I’ll follow up with a post on those), what’s the use of stepping back to fundamentally, intellectually recreate a wheel that never pauses for replacement?
Yet the potential achievement of AGI—a truly revolutionary event raising so many ‘but then whats’ for humanity—might make most everyone willing to pause and take stock. Or, even if not, perhaps some of us thinking on the potential could reveal consensus points resonant to all (or to most) that we don’t currently realize. Perhaps, even given our quite different sets of first principles (or what we each think to be such), there are unexpected points of convergence to be found through this ‘new world’ inquiry.
I’m bobbing and weaving here myself, because it’s hard to know… it’s hard even to articulate well the intellectual journey I am trying to embark upon here. But I think my inquiry goes basically like so: If we ask how we ought to change criminal adjudication once we achieve AGI, might we learn things we ought to change, and could change, right now? What might we learn through the lens of AGI even if such paradigm-defining technology never comes to pass? The achievement of AGI would force the questions of which roles humans ought to retain, and which roles humans ought to retain exclusively (whereas now we have no serious choice). Those questions of replacement, in turn, require that we confront what humans ought to be doing here and now. The questions might thus ‘out’ faults in our current practices. And the real ‘magic’ would be if solutions to those thereby-outed problems, or at least genuine improvements, are available without those question-spurring, technologically uncertain intelligent machines.
So, that’s what I aim to ‘think on’ in this series of posts. The inquiry will have obvious value if our scientists achieve AGI: we will know where to ‘plug’ robots into our adjudicatory systems of criminal justice. But the inquiry may also have great value even if such machine intelligences forever remain science fiction. We might be able to become more just merely by asking the question and taking seriously what we thereby discover.
Why All the Wrong Questions?

One, I’m quite a fan of the philosophy of Lemony Snicket. (Seriously.) Two, I wonder if this is where the AGI inquiry in criminal adjudication might lead. If, say, pondering the place of intelligent machines causes us to realize that layperson decision-making is intrinsically important to criminal justice (a space in which I have existing work), why is such (human?) layperson decision-making important, and… armed with that ‘why,’ are we currently asking entirely the wrong questions of those laypersons?
We shall see… or at least so I hope.
