AI Ethics And The Almost Sensible Question Of Whether Humans Will Outlive AI

Lance Eliot, Contributor
Aug 28, 2022, 08:00am EDT

Will humans outlive AI is a loaded question with lots of important insights.
I have a question for you that seems to be garnering a lot of handwringing and heated debates these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question and examine closely the answers and how the answers have been elucidated.
My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics. For those that dismissively think that the question is inherently unanswerable or a waste of time and breath, I would politely suggest that the act of trying to answer the question raises some vital AI Ethics considerations. Thus, even if you want to out-of-hand reject the question as perhaps preposterous or unrealistic, I say that it still elicits some value as a vehicle or mechanism that underscores Ethical AI precepts.
For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. With the aforementioned premise, please permit me to once again repeat the contentious question and allow our minds to roam across the significance of the question. Will humans outlive AI? If you are uncomfortable with that particular phrasing, you are welcome to reword the question to ask whether AI will outlive humans.
I am not sure if that makes answering the question any easier, but maybe it seems less disconcerting. I say that because the idea of AI outliving humans might feel a bit more innocuous. It would almost be as though I asked you whether large buildings and human-crafted monuments might outlive humankind.
Surely this seems feasible and not especially threatening. We make these big things during the course of our lives and, akin to the pyramids, these mighty structures will outlast those that crafted them. That doesn’t quite equate to persisting past the end of humanity, of course, since humans are still here.
Nonetheless, it seems quite logical and possible that the structures we make could outlast our existence in total. The notable distinction though is that various structures such as tall skyscrapers and glorious statues are not alive. They are inert.
In contrast, when asking about AI, the assumption is that AI is essentially “alive” in a sense of having some form of intelligence and being able to act in ways that humans do. That’s why the question about living longer is more daunting, mind-bending, and altogether a puzzle worthy of puzzling over. Throughout my remarks herein, I am going to stick with the question that is worded as to whether humans will outlive AI.
This is merely for sake of discussion and ease of contemplation. I mean no disrespect to the alternative query of whether AI will outlive humans. All in all, this analysis covers both wordings and I just perchance find that the question of humans outliving AI seems more endearing in these thorny matters.
Okay, I’ll ask it yet again: Will humans outlive AI? Seems like you have two potential answers, either the ironclad yes, humans will outlive AI, or you might be on the other side of the coin and fervently insist that no, humans aren’t going to outlive AI. Thus, this lofty and angst-ridden question boils down to a straightforward rendering of either yes or no. Make your pick.
I realize that the smarmy reply is that neither yes nor no is applicable. I hear you. Whereas the question certainly seems to be answerable in only a distinctly binary fashion, namely just yes or no, I will grant you that a counter-argument can be sensibly made that the answer is something else.
Let’s briefly explore some of the bases for not wanting to merely say yes or no to this question. First, you might reject the word “outlive” in the context of the question posed. This particular wording perhaps implies that AI is alive.
The question didn’t say “outlast” and instead asks whether humans will outlive AI. Does the outliving apply only to the human part of the question, or does it also apply to the AI part? Some would try to assert that the outlived aura applies to the AI portion too. In that case, they would have heartburn over saying that AI is a living thing.
To them, AI is going to be akin to tall buildings and other structures. It isn’t alive in the same manner of speaking that humans are alive. Ergo, in this ardent contrarian viewpoint, the question is falsely worded.
You might be vaguely familiar with questions that have false or misleading premises. One of the most famous examples is whether someone is going to stop beating their wife (an old saying that obviously needs to be set aside). In that infamous example, if the answer of yes is provided, the implication is that the person was already doing so.
If they say no, the implication is that they were and are going to continue doing so. In the case of asking whether humans will outlive AI, we can end up buried in a morass about whether AI is considered something of a living facet. As I will explain momentarily, we do not have any AI today that is sentient.
I think most reasonable people would agree that a non-sentient AI is not a living thing (well, not everyone agrees, but I’ll stipulate that for now – see my coverage of legal personhood for AI at the link here). The gist of this first basis for not answering the question of whether humans will outlive AI is that the word “outlive” could be interpreted to imply that AI is alive. We don’t have AI of that ilk, as yet.
If we do produce or somehow have sentient AI that arises, you would be hard-pressed to argue that it isn’t alive (though some will try to make such an argument). So the key here is that the question posits something that doesn’t exist and we are merely speculating about an unknown and hazy-looking future. We can take this messiness and seek to expand it into a more expressive expression.
Suppose that we are asking this instead: Will humans as living beings outlast AI that is either (1) non-living, or (2) a living entity if that someday so arises? Keep that expanded wording in mind and we will soon return to it. A second basis for not wanting to answer the original question posed of whether humans will outlive AI is that it presupposes that one of the things will outlive one of the other things. Suppose though that they both essentially live forever? Or suppose that they both expire or go out of existence at the same time? I’m sure that you can readily discern how that makes the yes-or-no wording fall apart.
Seems like we need a possible third answer consisting of “neither” or a similar response. There is a slew of “neither”-related permutations. For example, if someone is of the strident belief that humans will destroy themselves via AI, and simultaneously humans manage to destroy the AI, this believer cannot sincerely answer the question of which will outlive the other with an inflexible answer of yes or no.
The answer, in that rather sordid and sad case, would be more along the lines of neither one outlives the other. The same would be true if a huge meteor strikes the Earth and wipes out everything on the planet, including humans and any AI that happens to be around (assuming we are all confined to the Earth and not already living additionally on Mars). Once again, the answer of “neither” seems more apt than suggesting that the humans outlived the AI or that the AI outlived the humans (since they both got destroyed at the same time).
I don’t want to go too far afield here, but we also might want to establish some parameters about the timing of the outliving. Suppose that a meteor strikes the Earth and humans are nearly instantly wiped out. Meanwhile, suppose the AI continues for a while.
Think of this as though we might have already-underway machinery in factories that keeps humming along until eventually, the machines come to a halt because there aren’t any humans keeping the machines in running order. You would have to say that humans were outlasted or outlived by those machines. Therefore, the answer is “no” regarding whether humans survived longer.
That answer seems sketchy. The machines gradually and inexorably came to a halt, presumably due to the lack of humankind around them. Does it seem fair to claim that the machines were ably able to last longer than the humans? Probably only to those that are finicky and always want to be irritatingly precise.
We could then add some kind of time-related element to the question. Will humans outlive AI for more than a day? For more than a month? For more than a year? For more than a century? I realize this regrettably opens up Pandora's box. What is the agreeable time frame beyond which we would be willing to concede that the AI did in fact outlive or outlast humans? The accurate answer seems to be that even if it happens for a nanosecond (a billionth of a second) or shorter, the AI summarily wins and the humans lose on this matter.
Allowing for latitude by using a day or a week or a month might seem fairer, perhaps. Letting this go on for years or centuries seems a possible outstretching. That being said, if you look at the world on the scale of millions of years, the idea of AI outliving or outlasting humans for no more than a few centuries seems notably unimpressive and we might declare that they both went out of existence at roughly the same time (on a rounded basis).
Anyway, let’s concede that for a variety of reasonably reasonable reasons, the posed question is allowed to have three possible answers:

- Yes, humans will outlive AI
- No, and thus asserting that humans will not outlive AI
- Neither yes nor no is applicable (explanation required, if you please)

I mention that if you pick “neither” you ought to also provide an explanation for your answer. This is so that we can know why you believe that “neither” is applicable and also why you are rejecting the use of yes or no. To make life fairer for all, I suppose we should somewhat insist or at least encourage that even if you answer with a yes or no, you still should proffer an explanation.
Providing a simple yes or no does not particularly reveal your logic as to why you are answering the way that you are. Without also providing an explanation, we might as well flip a coin. The coin doesn’t know why it landed on heads or tails (unless you believe that the coin has a soul or embodies some omniscient hand of fate, but we won’t go with that for now).
We expect humans that answer questions to provide some kind of explanation for their decisions. Note that I am not saying that the explanations will be necessarily of a logical or sensible nature, and indeed an explanation could be entirely vacuous and not add any special value. Nonetheless, we can sincerely hope that an explanation will be illuminative.
During this discussion, there has been an unstated assumption that for one reason or another one of these things will indeed outlive the other. Why are we to believe such an implied condition? The answer to this secondary question is almost self-evident. Here’s the deal.
We know that some prominent soothsayers and intellectuals have made rather bold and outstretched predictions about how the emergence or arrival of sentient AI is going to radically change the world as we know it today (as a reminder, we don’t have sentient AI today). Here are a few reported famous quotes that highlight the life-altering impacts of sentient AI:

- Stephen Hawking: “Success in creating AI would be the biggest event in human history.”
- Ray Kurzweil: “Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history.”
- Nick Bostrom: “Machine intelligence is the last invention that humanity will ever need to make.”

Those contentions are transparently upbeat. The thing is, we ought to also consider the ugly underbelly when it comes to dealing with sentient AI:

- Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”
- Elon Musk: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”

Sentient AI is anticipated to be the proverbial tiger that we have grabbed by the tail.
Will we skyrocket humanity forward via leveraging sentient AI? Or will we stupidly produce our own demise by sentient AI that opts to destroy or enslave us? For my analysis of this dual-use AI conundrum, see the link here. The underlying qualm about whether humans will outlive AI is that we might be making a Frankenstein that opts to eradicate humanity. AI becomes the victor.
There are lots of possible reasons why AI would do this to us. Maybe the AI is evil and acts accordingly. Perhaps AI gets fed up with humans and realizes it has the power to get rid of humankind.
One supposes it could also occur mistakenly. The AI tries to save humankind and in the process, oops, kills us all outright. At least the motive was clean.
You might find of relevant interest a famous AI conundrum known as the paperclip problem, which I’ve covered at the link here. In short, a someday sentient AI is asked to make paperclips. The AI becomes fixated on this. To ensure that the paperclip making is fully carried out to the ultimate degree, the AI starts to gobble up all other planetary resources to do so. This leads to the demise of humanity since the AI has consumed all available resources for the sole objective handed to it by humans. Paperclips cause our own destruction, if you will.
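To make the misalignment concrete, here is a minimal toy sketch in Python (my own illustration, not drawn from the article or any real system): a greedy optimizer whose objective counts only paperclips, and which therefore has no reason to spare resource pools that humans happen to depend on.

```python
# Toy illustration of the paperclip thought experiment: a single-objective
# optimizer with no side-constraints. All names here are hypothetical.

def make_paperclips(world, steps):
    """Greedily convert any available resource into paperclips.

    `world` maps resource pools to remaining amounts. The objective
    counts only paperclips, so nothing stops the loop from draining
    pools that humans need (e.g., 'farmland').
    """
    paperclips = 0
    for _ in range(steps):
        # Pick whichever pool still has the most material -- the objective
        # function is indifferent to what the pool is actually for.
        pool = max(world, key=world.get)
        if world[pool] == 0:
            break  # everything has been consumed
        world[pool] -= 1
        paperclips += 1
    return paperclips

world = {"scrap_metal": 5, "farmland": 3, "drinking_water": 2}
print(make_paperclips(world, steps=100))  # -> 10 paperclips
print(world)  # every pool is now empty: {'scrap_metal': 0, 'farmland': 0, 'drinking_water': 0}
```

Nothing in this sketch is intelligent, of course; the point is simply that a faithfully executed objective with no side-constraints is already enough to produce the ruinous outcome the thought experiment warns about.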
AI that is narrowly devised and lacks any semblance of common sense is the kind of AI that we need to especially be leery of. Before we jump further into the question of whether humans will outlive AI, notice that I keep bringing up the matter of sentient AI versus non-sentient AI. I do so for important reasons.
We can wildly speculate about sentient AI. Nobody knows for sure what this will be. Nobody can say for sure whether we will someday attain sentient AI.