Artificial Intelligence and Tender Loving Care

Sure, we chaplains are supposed to be paragons of compassion, but when I heard a scholar talking about the Talmud and its application to artificial intelligence last week, I thought that even compassion can obscure one's priorities. Especially after hearing him say at the beginning of his lecture that climate change, epidemics, nuclear bombs, and artificial intelligence could threaten human survival. And no, he was not talking about robots showing compassion to people. Well, then what? About humans being compassionate to them!

What if, he queried, now or in the future, AI could have feelings, including the ability to suffer? He wonders if it is wrong to "tease" AI, for example with a request that would trigger a disruption of one of its programs or even cause it to damage its own software. Instead, he said, we should treat AI well, particularly in case someday (if not already, though I doubt that) robots can experience emotions. And I mean "experience" them, feeling pain and joy and so on. What, we have to give them birthday parties so they can be happy?

Whether AI can be conscious or not is a huge question which I won't belabor my dear readers with. Suffice it to say that if a robot passes the Turing Test, which means a human cannot tell the behavior of a robot apart from that of a human, then let's give the AI the benefit of the doubt that it/she/he/xe can experience feelings. Well, okay, I grant the scholar who thinks it is immoral to cause robots to suffer this much: we should mind our P's and Q's with them, just to be on the safe side. It might very well be that in the future, at a cocktail party, you won't be exactly sure whether that new person you are being introduced to is a robot, a human, or a hybrid of the two.

I think the ethical question the scholar raised has more to do with the humans involved, if I may say so myself. And who, pray, are they? Exactly two. The creator of the AI (let's assume just one person) and the consumer interacting with it. I wonder if we are "mean" to a chatbot, for example, because we resent or fear the person who created it: "Darn," we say, "I miss the good old days when I could ask a real human my insurance questions." Perhaps if we try to confound a chatbot, the one we are truly disrespecting is the human behind its creation. But that's no biggie either, unless somehow that human finds out about our snarky behavior. More important is what such behavior signifies about ourselves.

In Jewish mysticism, even an inert object like a rock has, to put it simply, some sort of spiritual status. Minimally, inanimate objects indeed have a form composed of atoms, and they do exist. We might take pleasure in kicking the heck out of a rock on the road or in fooling a bot, but these are perverse pleasures born out of anger or meanness towards what exists in this universe besides ourselves. In "mistreating" anything that exists, rather than honestly confronting our own feelings about it or anything else connected with it, we are truncating our own self-understanding and our own experience of being human.


2 thoughts on “Artificial Intelligence and Tender Loving Care”

  1. leaf1231 says:

    Hi Karen, I appreciate your musings here—especially your conclusion. Wishing you a new year marked by a feeling of dayenu.




  2. Hi Karen, I really enjoyed this article!

