All the Wrong Questions (Part 3) – Normative Opacity in the Age of Strong AI


So far in this series, I’ve introduced the theme of learning through the lens of AI replacement, we’ve taken a brief detour with Blaise Pascal on what we don’t know, and I’ve asked whether we might be asking entirely the wrong questions of lay participants in our systems of criminal adjudication.  (By the way, did everyone see that in DC the feds can’t get a grand jury to indict a ham sandwich?  Er… a sandwich thrower, I mean… they can’t get a grand jury to indict a sandwich thrower for a felony.  I’m quite chuffed by that outcome for a number of reasons, but that’s a story for another time.)

So, back to business—I want to know what questions we ought to be asking our juries… but the right answer isn’t readily apparent to me.  I convince myself one way (indeed, at one point I drafted quite far into a paper entitled An Honest Verdict that argued for ‘proven’ and ‘not proven’), but then I query whether the inclination to Senecan mercy is indefinable and so requires opacity, and whether we get that opacity through our currently strange system… and I start all over again.

Here I want to raise another thread: If I do end up—or you do end up—thinking we want the ‘mush’ or ‘opacity’ of something like “guilty” despite a jury oath to do something quite different (determine whether every element was proved beyond a reasonable doubt)… might it be that there are other things in criminal justice that we, like Fleetwood Mac, just don’t want to know?

I’m not thinking of acoustic separation where, say, Colonel Nathan Jessep might do well with knowing even though I can’t handle the same (if that’s even a fair extension of that legal philosophic concept).  I’m thinking situations in which it might be important that nobody knows, such as (perhaps) the ‘social scoring’ uses of AI banned in the EU AI Act.

Consider, for example, the economic cost of enacting a criminal law, here meaning a novel crime.  This strikes me as an extremely difficult thing to know, because it might affect social norms, policing, prosecutions, adjudications, incarcerations… the list goes on and on.  Of course, legislatures and non-profits attempt such calculations, and I suspect everyone appreciates the attempt as a good thing: without a fiscal impact statement or critical study, we might be acting in quite an economically irrational way.  So, we try.

But trying is one thing.  Might success be altogether quite different?  If substantive criminal law is supposed to be about something more than dollars and cents (‘is it?’ being an open question, I realize), could it be harmful to know those costs because then they can’t be ignored?

The question of desirable ignorance is nothing new.  “Where ignorance is bliss, ’tis folly to be wise,” taught Thomas Gray.  But it seems, according to recent works like Mark Lilla’s Ignorance and Bliss: On Wanting Not to Know, a question that remains undertheorized.  And if we lack a developed philosophy generally, the matter seems rather open when it comes to specific contexts in criminal justice.

And—back to my overarching theme in all of this—the question may soon be much more salient.  One thing even narrow AI has proven very good at is finding patterns we humans could not previously perceive… so if we are on our way to truly intelligent machines of significantly greater capacity, and if there are certain things in criminal justice we societally don’t want to know, we would presumably have to decide that before the capacity becomes available.  Once somebody knows, we at best have acoustic separation, and that strikes me as quite a different thing.  The horse cannot be put back in the barn.  So, this is another space in which the potential for AGI makes a longstanding philosophic question more germane.  What was once the domain of poets, songwriters, and other philosophers might need to become a domain of criminal justice theorists and legislators.

And juries might once again be a useful case study.  If I’m right about role-reversibility and Senecan mercy, a jury’s ‘right’ action might be inexplicable.  It might be right because it was, say, maximally role-reversible, and that’s the best we get in this imperfect world.  Indeed, if juries (and judges) know they are being effectively tallied and observed, then, much like the ‘observer effect’ in science, we may destroy the very thing we were hoping for them to do.  Or, a critic might of course push back that any opacity is a recipe for hiding biases and stupidities, and that sunshine remains the best disinfectant (as, to give an investigatory example, seemed famously to be the case with Terry stops in New York City).

I’m quite unsure, but I’m intrigued by spaces of intentional ‘normative opacity,’ and I plan to continue looking into work like Lilla’s as I try to figure this out.  (So, if you have other literature suggestions, please share!)  Are there certain domains within the criminal law where ignorance should be our desire?  Where the points of the legal realists of yore ought to be optimized, not negated?  Strong AI threatens to be a force multiplier, potentially eliminating the only (opaque) option we’ve ever really known… are there areas in which we ought to legally push back?

Finally, if there are domains of normatively desired opacity, do those directly map as domains in which we ought to retain human decision-making?  Is it that we turn everything else over to truly intelligent machines, but not these things?  Or do the two matters share some different relation, or might they even be orthogonal to one another?

