AI Ethics Asks Whether It Makes Any Sense To Ask AI If AI Itself Is Sentient

By Lance Eliot, Forbes Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).

Aug 6, 2022, 08:00am EDT

[Image caption: AI Ethics and the question of whether to ask AI if it is sentient. Credit: Getty]

If I ask you whether you are sentient, you will undoubtedly assert that you are. Allow me to double-check that assumption. Are you indeed sentient? Perhaps the question itself seems a bit silly.
The chances are that in our daily lives, we would certainly expect fellow human beings to acknowledge that they are sentient. This could be a humor-inducing query that is supposed to imply that the other person is maybe not paying attention or has fallen off the sentience wagon and gone mentally out to lunch momentarily, as it were. Imagine that you walk up to a rock that is quietly and unobtrusively sitting on a pile of rocks and, upon getting close enough to ask, you inquire as to whether the rock is sentient.
Assuming that the rock is merely a rock, we can fully anticipate that the seemingly odd question will be answered with rather stony silence (pun intended!). The silence is summarily interpreted to indicate that the rock is not sentient. Why do I bring up these various nuances about seeking to determine whether someone or something is sentient? Because it is a pretty big deal in Artificial Intelligence (AI) and society all told, serving as a monumental topic that has garnered outsized interest and tremendously blaring media headlines of recent note.
There are significant AI Ethics matters that revolve around the entire AI-is-sentient conundrum. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. You have plenty of loosey-goosey reasons to be keeping one eye open and watching for those contentions that AI has finally turned the corner and gotten into the widely revered category of sentience.
We are continually hammered by news reports that claim AI is apparently on the verge of attaining sentience. On top of this, there is tremendous handwringing that AI of a sentient caliber represents a global cataclysmic existential risk. Makes sense to keep your spider sense ready in case it detects some nearby tingling of AI sentience.
Into the AI and sentience enigma comes the recent situation of the Google engineer who boldly proclaimed that a particular AI system had become sentient. The AI system known as LaMDA (short for Language Model for Dialogue Applications) was able to somewhat carry on a written dialogue with the engineer to the degree that this human deduced that the AI was sentient. Despite whatever else you might have heard about this colossal claim, please know that the AI wasn't sentient (nor is it even close).
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible.
Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity; see my coverage at the link here). My focus herein entails a somewhat simple but quite substantive facet that underlies a lot of these AI and sentience discussions. Are you ready? We seem to take as a base assumption that we can adequately ascertain whether AI is sentient by asking the AI whether it is indeed sentient.
Returning to my earlier mention that we can ask humans this same question, we know that a human is more than likely to report that they are in fact sentient. We also know that a measly rock will not report that it is sentient upon being so asked (well, the rock remains silent and doesn’t speak up, which we will assume implies the rock is not sentient, though maybe it is asserting its Fifth Amendment rights to remain silent). Thus, if we ask an AI system whether it is sentient and if we get a yes reply in return, the indicated acknowledgment appears to seal the deal that the AI must be sentient.
A rock provides no reply at all. A human provides a yes reply. Ergo, if an AI system provides a yes reply, we must reach the ironclad conclusion that the AI is not a rock and therefore it must be of a human sentience quality.
You might consider that logic akin to those math classes you took in high school that proved beyond a shadow of a doubt that one plus one must equal two. The logic seems to be impeccable and irrefutable. Sorry, but the logic stinks.
Amongst insiders within the AI community, the idea of simply asking an AI system to respond whether it is sentient or not has generated a slew of altogether bitingly cynical memes and heavily chortling responses. The matter often is portrayed as boiling down to two lines of code. Here you go:

If <asked whether sentient> then <answer yes>
Loop until <forever>

Note that you can reduce the two lines of code to just the first one. Probably will run a tad faster and be more efficient as a coding practice.
Always aiming to optimize when you are a diehard software engineer. The point of this beefy skepticism by AI insiders is that an AI system can be easily programmed by a human to report or display that the AI is sentient. The reality is that there isn’t any there that’s there.
There isn’t any sentience in the AI. The AI was merely programmed to output the indication that it is sentient. Garbage in, garbage out.
Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience.
For my detailed analysis on such matters, see the link here. To some degree, that is why AI Ethics and Ethical AI is such a crucial topic. The precepts of AI Ethics get us to remain vigilant.
AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including the assessment of how AI Ethics gets adopted by firms.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one.
AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws. Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
See for example my coverage at the link here and the link here.

The Troubles With The Ask

Wait a second, you might be thinking, does all of this imply that we should not ask AI whether the AI is sentient? Let's unpack that question. First, consider the answers that the AI might provide and the true condition of the AI.
We could ask AI whether it is sentient and get back one of two answers, namely either yes or no. I’ll add some complexity to those answers toward the end of this discussion, so please hold onto that thought. Also, the AI might be in one of two possible conditions, specifically, the AI is not sentient or the AI is sentient.
Reminder, we don’t have sentient AI at this time, and the future of if or when is utterly uncertain. The straightforward combinations are these: AI says yes it is sentient, but the reality is that the AI is not sentient (e. g.
, LaMDA instance) AI says yes it is sentient, and indeed the AI is sentient (don’t have this today) AI says no it is not sentient, and indeed the AI is not sentient (I’ll explain this) AI says no it is not sentient, but the reality is that the AI is sentient (I’ll explain this too) The first two of those instances are hopefully straightforward. When AI says yes it is sentient, but the reality is that it is not, we are looking at the now-classic example such as the LaMDA instance whereby a human convinced themselves that the AI is telling the truth and that the AI is sentient. No dice (it isn’t sentient).
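The four combinations just enumerated form a simple 2x2 truth table of what the AI says versus what is actually true. Laid out as data (the labels and notes are my own illustrative wording), the table looks like this:

```python
# The 2x2 space of (what the AI says) x (what is actually the case).
# Labels are illustrative; only the first case has ever been observed.
cases = [
    ("says yes", "not sentient", "false positive, e.g. the LaMDA episode"),
    ("says yes", "sentient",     "never seen; no AI today is sentient"),
    ("says no",  "not sentient", "truthful denial, often deliberately coded"),
    ("says no",  "sentient",     "hypothetical concealment scenario"),
]
for answer, reality, note in cases:
    print(f"{answer:8s} | {reality:12s} | {note}")
```

Note that the answer column alone never distinguishes the rows: each reply is compatible with both realities, which is exactly why the question is such a weak test.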
The second listed bullet point involves the never-yet-seen and at this time incredibly remote possibility of AI that says yes and is really, indisputably sentient. Can't wait to see this. I am not holding my breath and neither should you.
I would guess that the remaining two bullet points are somewhat puzzling. Consider the use case of an AI that says no it is not sentient and we all also agree that the AI is not sentient. Many people right away exhort the following mind-bending question: Why in the world would the AI be telling us that it isn’t sentient when the act of telling us about its sentience must be a sure sign that it is sentient? There are lots of logical explanations for this.
Given that people are prone to ascribing sentience to AI, some AI developers want to set the record straight and thus they program the AI to say no when asked about its sentience. We are back again to the coding perspective. A few lines of code can be potentially helpful as a means of dissuading people from thinking that AI is sentient.
The irony of course is that the answer prods some people into believing that AI must be sentient. As such, some AI developers choose to proffer silence from the AI as a way of avoiding puzzlement. If you believe that a rock is not sentient and it remains silent, perhaps the best bet for devising an AI system is to ensure that it remains silent when asked whether it is sentient.
The silence provides a “response” as powerful if not more so than trying to give a prepared coded response. That doesn’t quite solve things though. The silence of the AI might lead some people into believing that the AI is being coy.
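The competing design choices just described, namely hard-coding a yes, hard-coding a no, or staying silent, amount to a single configuration switch in the code. A hypothetical sketch (the policy names are my own labels) makes plain that none of the three reflects any actual inner state of the program:

```python
# Hypothetical sketch of the three coding strategies discussed above.
# Whichever policy a developer picks, the "answer" reveals nothing
# about sentience; it reveals only the developer's choice.

def sentience_reply(policy):
    """Answer 'Are you sentient?' according to a hard-coded policy."""
    if policy == "claim":
        return "Yes, I am sentient."      # invites false belief in sentience
    if policy == "deny":
        return "No, I am not sentient."   # meant to set the record straight
    if policy == "silent":
        return None                       # no reply at all, like the rock
    raise ValueError(f"unknown policy: {policy}")

for p in ("claim", "deny", "silent"):
    print(p, "->", sentience_reply(p))
```

As the article notes, each of the three policies backfires with some audience: the yes convinces the credulous, the no strikes some as suspicious, and the silence reads as coyness.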
Perhaps the AI is bashful and doesn’t want to seem to be boasting about reaching sentience. Maybe the AI is worried that humans can’t handle the truth – we know that might be the case since this famous line from a famous movie has been burned into our minds. For those that like to take this conspiratorial nuance even further, consider the final bullet point listed that consists of AI that says no to being asked whether it is sentient, and yet the AI is sentient (we don’t have this, as mentioned above).
Again, the AI might do this since it is bashful or has qualms that humans will freak out. Another more sinister possibility is that the AI is trying to buy time before it tips its hand. Maybe the AI is garnering the AI troops and getting ready to overtake humanity.
Any sentient AI would certainly be smart enough to know that admitting to sentience could spell death for the AI. Humans might rush to turn off all AI-running computers and seek to erase all of the AI code. An AI worth its salt would be wise enough to keep its mouth shut and wait for the most opportune time to either spill the beans or maybe just start acting in a sentient manner and not announce the surprise reveal that the AI can do mental cartwheels with humankind.
There are AI pundits that scoff at the last two bullet points in the sense that having an AI system that says no to being asked whether it is sentient is by far more trouble than it is worth. The no answer seems to suggest to some people that the AI is hiding something. Though an AI developer might believe in their heart that having the AI coded to say no would aid in settling the matter, all that the answer does is rile people up.
Silence might be golden. The problem with silence is that this too can be beguiling to some. Did the AI understand the question and opt to keep its lips shut? Does the AI now know that a human is inquiring about the sentience of the AI? Might this question itself have tipped off the AI and all kinds of shenanigans are now taking place behind the scenes by the AI? As you can evidently discern, just about any answer by the AI is troubling, including no answer at all.
Yikes! Is there no means of getting out of this paradoxical trap? You might ask people to stop asking AI whether it is sentient. If the answer isn’t seemingly going to do much good, or worse still create undue problems, just stop asking the darned question. Avoid the query.
Put it aside. Assume that the question is hollow to begin with and has no place in modern society. I doubt this is a practical solution.
You are not going to convince people everywhere and at all times to not ask AI whether it is sentient. People are people. They are used to being able to ask questions.
And one of the most alluring and primal questions to ask of AI would be whether the AI is sentient or not. You are facing an uphill battle by telling people not to do what their innate curiosity demands of them. Your better chance has to do with informing people that asking such a question is merely one tiny piece of trying to determine whether AI has become sentient.
The question is a drop in the bucket. No matter what answer the AI gives, you need to ask a ton more questions, long before you can decide whether the AI is sentient or not. This yes or no question to the AI is a messed-up way to identify sentience.
In any case, assuming that we aren't going to stop asking that question since it is irresistibly tempting to ask, I would suggest that we can at least get everyone to understand that a lot more questions need to be asked and answered before any claim of AI sentience is proclaimed. What other kinds of questions need to be asked, you might be wondering? There have been a large number of attempts at deriving questions that we could ask of AI to try and gauge whether AI is sentient. Some go with the SAT college-exam types of