AI can be confusing—trying to meaningfully taxonomize all the different sorts, to really understand all the technologies. But it’s got nothing on humans.
Take a recent article in the ABA’s The SciTech Lawyer (Winter 2026) by Magistrate Judge Maritza Dominguez Braswell—a sometimes calming voice on AI—which begins like this:
Empathy is one of humanity’s most treasured traits—a warm and deeply personal way of relating to others.
Is that how we do it? I thought humans tended to lie, kill, and cheat. Something like one third of adult Americans have a criminal record. And there’s no dismissing that astounding statistic as an artifact of bad laws, since it is also (crappy) humans who draft and implement those laws. If by their fruits ye shall know them, well, the human track record is quite plain. There is, I think, good reason why every history teacher I ever had seemed skeptical to downright cynical. If you spend your weeks in whims of fantasy, humans might seem pretty great. When you spend your weeks in how they actually roll… not so much.
Judge Braswell continues, “First, let’s be clear about one thing: AI doesn’t feel. It has no consciousness—no emotional state. To attribute empathy to AI is to mistake simulation for reality.”
But then if we are living in a simulation, well, it all looks rather different, doesn’t it? And you can’t effectively reason by platitude or assumption: ‘Oh, yes, I know everything about human reality, and I’ll just declare everything about all non-human reality while I’m at it.’ Declaring, alas for Michael Scott, doesn’t make it so.
So, what data does Judge Braswell put forth?
Some studies have tested the perceived empathy of AI. In one study, more than 500 participants shared personal experiences and rated replies from experts, nonexperts, and an AI system. Overall, the AI responses were preferred—rated as more compassionate.
Hm… so it is or is not by their fruits ye shall know them? It’s more like, by my dictate you shall know them?
Another study evaluated large language models on emotional intelligence. On average, these models correctly identified emotional states more than 80% of the time, compared to the less than 60% success rate for human participants.
Hm again… but we are to ignore this and declare by fiat it just ain’t so?
Researchers have also found that AI companions help alleviate feelings of loneliness and help people feel heard and understood. To that end, companies are offering ‘AI therapy’ services for high-stress, high-burnout professions. This raises difficult questions. If AI is increasingly capable of identifying emotional states and offering the user something helpful, is there a place for it in our society?
Wait a minute. Why is that the question?! So far, on this evidence, the better question seems more like this: is there a place for humans in a good society? Are we to fear the AI willing to share bridge heights with the perhaps suicidal, as Judge Braswell worries, or the human who instructs “fucking … get back in” the carbon-monoxide-filled truck? (For the record, I would personally not have criminally punished Michelle Carter.)
Judge Braswell obviously thinks a great deal of human judging, which seems rather a common trait in judges. Is that because human judges are so much better than machines? Or is that what happens when you give humans thrones and dominions? I think the lesson of history on that one speaks rather strongly.
To be clear, I am not making any claims regarding what any particular AI technology does or does not do. For that meaningful discussion, we would have to first specify the technology, then carefully analyze the algorithm, the outputs, and so on. But that’s rather the point. I cannot for the life of me figure out why so many humans insist on declaring every mechanical thing ‘just bits,’ and get especially upset if they can’t follow precisely what those bits are doing at every instant. Are these people at all aware of the gray matter in their heads, and of our sorry state of understanding it? Why is it fair game to simply assume most everything about human life, and at the same time to assume away data about mechanical life? (From Braswell again: “Machine-generated empathy is, and will remain, an illusion.”)
These sorts of assumptions stack the deck against meaningful progress. Whatever the cases for and against AI, speciesism isn’t among the right ways to begin the discussion.
ADDENDUM 02.23.26
I neglected to include two wonderful quotes from Robert Pirsig’s Zen and the Art of Motorcycle Maintenance (1974).
“I suppose you could call that a personality. Each machine has its own, unique personality which probably could be defined as the intuitive sum total of everything you know and feel about it. This personality constantly changes, usually for the worse, but sometimes surprisingly for the better, and it is this personality that is the real object of motorcycle maintenance. The new ones start out as good-looking strangers and, depending on how they are treated, degenerate rapidly into bad-acting grouches or even cripples, or else turn into healthy, good-natured, long-lasting friends. This one, despite the murderous treatment it got at the hands of those alleged mechanics, seems to have recovered and has been requiring fewer and fewer repairs as time goes on.”
“This condemnation of technology is ingratitude, that’s what it is.”