AI Can Overcome Human Weaknesses

By Arun Shastri, Contributor. Arun is ZS’ global AI practice leader. May 30, 2022.

Many of the advantages of AI are well known, understood, and touted. And its limitations are well known too.
But there are other salient features that, while not often mentioned, are worthy of our attention.

Advantages – AI applications can execute incredibly complicated tasks with ease. They can personalize recommendations for the next song you may enjoy or pick through millions of X-rays for the one that indicates a problem. Moreover, they can accomplish such tasks at levels of volume and accuracy that human experts cannot match. Monotonous – yet important – jobs can be dispatched flawlessly and without complaint.

Limitations – At the same time, many articles have been written about humans having abilities that AI lacks.
These articles often argue that humans and AI must work together, with AI augmenting humans’ more expansive abilities. We can imagine, anticipate, feel, and judge changing situations. Since a more wide-ranging Artificial General Intelligence is not yet within reach, current AI models – which excel at narrow tasks – still benefit from human guidance.
Beyond the obvious, AI has advantages that correspond directly to human weaknesses. Unlike us, it understands probabilities, does not introduce bias, is painfully consistent, and avoids undue risk.

Probability vs. outcomes – Humans understand outcomes but are generally poor at processing probabilities.
Monty Hall’s Let’s Make a Deal is a classic example of humans failing to understand probabilities and update their priors. In this game, there are three doors: one hides a car and the other two hide goats. The contestant picks a door, and Monty, who knows what is behind each door, opens one of the two remaining doors to reveal a goat. Monty then offers the contestant the chance to switch to the other closed door. Should they? It turns out that by opening a door, Monty has given the contestant additional information: the contestant’s original pick is right only one-third of the time, so switching wins the car two-thirds of the time. We humans tend to get this type of question wrong, but AI can answer it perfectly.
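If the two-thirds result feels wrong, simulation settles it. Here is a minimal Monte Carlo sketch (the function and variable names are mine, for illustration) that plays the game many times under both strategies:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty, who knows where the car is, opens a door that is
    # neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Prints roughly 0.333 without switching and 0.667 with switching.
```

Run it and the frequencies converge on one-third and two-thirds, exactly the answer our intuition resists.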
Bias – Humans have many biases, whether we call them “gut instinct” or some other name. Confirmation bias is perhaps the most common: we look for and interpret information that backs up a preconceived assumption or theory. Two humans can watch the same news program and walk away with different conclusions on the day’s events.
In contrast, bias manifests in AI only through the data we give it to learn from. AI bias is confined to a finite data set rather than the ever-changing complexity of human experiences, memories, beliefs, and fears. In that sense, AI bias is arguably easier to identify and correct than human bias.
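To make that point concrete, here is a toy sketch (the groups, labels, and counts are hypothetical, invented purely for illustration): a model that learns from skewed historical records will faithfully reproduce the skew, and because the data set is finite, the skew can be audited and corrected by fixing the data.

```python
from collections import Counter

# Hypothetical historical records, deliberately skewed by an
# irrelevant group attribute (illustrative data, not a real dataset).
training = (
    [("group_a", "hire")] * 80 + [("group_a", "reject")] * 20
    + [("group_b", "hire")] * 20 + [("group_b", "reject")] * 80
)

def majority_label(group: str) -> str:
    """Predict the most common historical label for the group --
    the model reproduces whatever skew the training data contains."""
    labels = Counter(label for g, label in training if g == group)
    return labels.most_common(1)[0][0]

print(majority_label("group_a"))  # 'hire'   -- learned from skewed records
print(majority_label("group_b"))  # 'reject' -- same rule, biased outcome
```

Rebalance `training` and the biased predictions disappear; no equivalent edit is available for a human decision-maker’s lifetime of experiences.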
Consistency – AI is consistent, painfully so. Unless we tell it otherwise, it will do what we ask consistently. The one consistent characteristic of humans is that we are not consistent – in exercise, in diet, the routes we take to work, et cetera.
Moreover, humans find ways to rationalize our inconsistencies. It is entirely plausible that the same patient profile, presenting with the same symptoms to the same physician, may receive different diagnoses at different times. AI guarantees procedural and outcome consistency, provided the underlying population does not drift far from the data it learned from.
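As a minimal sketch of that procedural consistency (the weights and symptom names below are hypothetical, chosen only for illustration), a model whose parameters are frozen after training returns the identical answer every time it sees the identical input:

```python
# Hypothetical fixed weights, "frozen" after training.
WEIGHTS = {"fever": 0.6, "cough": 0.3, "fatigue": 0.1}

def risk_score(symptoms: dict[str, float]) -> float:
    """Deterministic scoring: same input always yields the same output."""
    return round(sum(WEIGHTS[k] * v for k, v in symptoms.items()), 6)

patient = {"fever": 1.0, "cough": 0.5, "fatigue": 0.8}
scores = {risk_score(patient) for _ in range(1_000)}
print(scores)  # {0.83} -- one value across 1,000 calls, morning or midnight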
Risk – AI will not take risks, but humans will. Of course, that is why we want human intelligence to augment AI. The power of human ingenuity lies in taking risks and betting on them – for example, betting on electric cars when no algorithm would suggest doing so. But sometimes this risk-taking manifests as dysfunctional momentum, such as the Challenger space shuttle disaster that killed its crew. Even though an engineer urged delaying the launch, citing safety concerns stemming from a design flaw in the O-ring seals, the shuttle went up as scheduled and exploded about a minute after takeoff.
In her analysis of that fatal 1986 disaster, sociologist Diane Vaughan coined the term “normalization of deviance” to describe teams that become desensitized to unsound practices. AI would have objectively evaluated the evidence and determined that the launch should be delayed.
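This is more than hindsight: statistical re-analyses of the pre-launch flight data became a textbook exercise. Below is a hedged sketch in that spirit – the temperature and O-ring distress records here are illustrative stand-ins, not the actual NASA flight data – fitting a logistic regression of distress against launch temperature and querying it at the cold forecast the team faced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data (not the real flight record), loosely mimicking
# the pattern that O-ring distress clustered at colder launches.
temps = np.array([[53], [57], [63], [66], [67], [68], [70], [70],
                  [72], [75], [76], [78], [79], [81]])  # launch temp, deg F
distress = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0])

model = LogisticRegression().fit(temps, distress)

# The Challenger forecast was far colder than any prior flight.
forecast = np.array([[31]])
p_failure = model.predict_proba(forecast)[0, 1]
print(f"Estimated distress probability at 31F: {p_failure:.2f}")
# The fitted model extrapolates a high failure probability at that
# temperature -- exactly the kind of dispassionate risk flag a team
# under schedule pressure rationalized away.
```

A model like this has no launch window to protect and no sunk costs to defend; it simply reports the probability.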
So yes, AI cannot imagine, anticipate, feel, and judge, but it does understand probabilities, does not introduce new bias, is consistent, and avoids undue risk. The fact that we can feel and judge may not always work in our favor.