Legal Doomsday For Generative AI ChatGPT If Caught Plagiarizing Or Infringing, Warns AI Ethics And AI Law

Lance Eliot, Contributor. Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML). Feb 26, 2023, 08:00am EST

Give credit where credit is due.
That’s a bit of sage wisdom that you perhaps were raised to firmly believe in. Indeed, one supposes or imagines that we might all somewhat reasonably agree that this is a fair and sensible rule of thumb in life. When someone does something that merits acknowledgment, make sure they get their deserved recognition.
The contrarian viewpoint would seem a lot less compelling. If someone walked around insisting that credit should not be recognized when credit is due, well, you might assert that such a belief is impolite and possibly underhanded. We often find ourselves vociferously disturbed when someone who has accomplished something notable is cheated out of their credit.
I dare say that we especially disfavor it when others falsely take credit for the work of others. That’s an unsettling double-whammy. The person who should have gotten the credit is denied their moment in the sun.
In addition, the trickster relishes the spotlight, wrongly fooling us into misplacing our favorable affections. Why all this discourse about garnering credit in the right ways and averting the wrong and contemptible ways? Because we seem to be facing a similar predicament when it comes to the latest in Artificial Intelligence (AI). Yes, claims are that this is happening demonstrably via a type of AI known as Generative AI.
There is a lot of handwringing that Generative AI, the hottest AI in the news these days, already has taken credit for what it does not deserve to take credit for. And this is likely to worsen as generative AI gets increasingly expanded and utilized. More and more credit accrues to the generative AI, while sadly those that richly deserve the true credit are left in the dust.
My proffered way to crisply denote this purported phenomenon is via two snazzy catchphrases:

1) Plagiarism at scale

2) Copyright infringement at scale

I assume that you might be aware of generative AI due to a widely popular AI app known as ChatGPT that was released in November 2022 by OpenAI. I will be saying more about generative AI and ChatGPT momentarily. Hang in there.
Let’s get right to the crux of what is getting people’s goat, as it were. Some have been ardently complaining that generative AI is potentially ripping off humans who have created content. You see, most generative AI apps are data-trained by examining data found on the Internet.
Based on that data, the algorithms can hone a vast internal pattern-matching network within the AI app that can subsequently produce seemingly new content that amazingly looks as though it was devised by human hand rather than a piece of automation. This remarkable feat is to a great extent due to making use of Internet-scanned content. Without the volume and richness of Internet content as a source for data training, the generative AI would pretty much be empty and of little or no interest to use. By having the AI examine millions upon millions of online documents and text, along with all manner of associated content, the pattern matching is gradually derived to try and mimic human-produced content.
The more content examined, the better honed the pattern matching is likely to become and the better the mimicry, all else being equal. Here then is the zillion-dollar question: If you or others have content on the Internet that some generative AI app was trained upon, presumably without your direct permission and perhaps entirely without your awareness, should you be entitled to a piece of the pie as to whatever value arises from that data training? Some vehemently argue that the only proper answer is Yes, notably that those human content creators indeed deserve their cut of the action. The thing is, you would be hard-pressed to find anyone who has gotten their fair share, and worse still, almost no one has gotten any share whatsoever.
The Internet content creators that involuntarily and unknowingly contributed are essentially being denied their rightful credit. This might be characterized as atrocious and outrageous. We just went through the unpacking of the sage wisdom that credit should be given where credit is due.
In the case of generative AI, apparently not so. The longstanding and virtuous rule of thumb about credit seems to be callously violated. Whoa, the retort goes, you are completely overstating and misstating the situation.
Sure, the generative AI did examine content on the Internet. Sure, this abundantly was helpful as a part of the data training of the generative AI. Admittedly, the impressive generative AI apps today wouldn’t be as impressive without this considered approach.
But you have gone a bridge too far when saying that the content creators should be allotted any particular semblance of credit. The logic is as follows. Humans go out to the Internet and learn stuff from the Internet, doing so routinely and without any fuss per se.
A person that reads blogs about plumbing and then binge-watches freely available plumbing-fixing videos might the next day go out and get work as a plumber. Do they need to give a portion of their plumbing-related remittance to the blogger that wrote about how to plumb a sink? Do they need to give a fee over to the vlogger that made the video showcasing the steps to fix a leaky bathtub? Almost certainly not. The data training of the generative AI is merely a means of developing patterns.
As long as the outputs from generative AI are not mere regurgitation of precisely what was examined, you could persuasively argue that they have “learned” and therefore are not subject to granting any specific credit to any specific source. Unless you can catch the generative AI in performing an exact regurgitation, the indications are that the AI has generalized beyond any particular source. No credit is due to anyone.
Or, one supposes, you could say that credit goes to everyone. The collective text and other content of humankind that is found on the Internet gets the credit. We all get the credit.
Trying to pinpoint credit to a particular source is senseless. Be joyous that AI is being advanced and that humanity all told will benefit. Those who posted on the Internet ought to feel honored that they contributed to a future of advances in AI and how this will aid humankind for eternity.
I’ll have more to say about both of those contrasting views. Meanwhile, do you lean toward the camp that says credit is due and belatedly overdue for those that have websites on the Internet, or do you find that the opposing side that says Internet content creators are decidedly not getting ripped off is a more cogent posture? An enigma and a riddle all jammed together. Let’s unpack this.
In today’s column, I will be addressing these expressed worries that generative AI is essentially plagiarizing or possibly infringing on the copyrights of content that has been posted on the Internet (considered an Intellectual Property or IP issue). We will look at the basis for these qualms. I will be occasionally referring to ChatGPT during this discussion since it is the 600-pound gorilla of generative AI, though do keep in mind that there are plenty of other generative AI apps and they generally are based on the same overall principles.
Meanwhile, you might be wondering what in fact generative AI is. Let’s first cover the fundamentals of generative AI and then we can take a close look at the pressing matter at hand. Into all of this comes a slew of AI Ethics and AI Law considerations.
Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from running amok regarding human rights and the like.
For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few. The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here.
In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and to deter the purposeful or accidental underhanded efforts that might undercut society. I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
Fundamentals Of Generative AI

The most widely known instance of generative AI is represented by an AI app named ChatGPT. ChatGPT sprang into the public consciousness back in November 2022 when it was released by the AI research firm OpenAI. Ever since, ChatGPT has garnered outsized headlines and astonishingly exceeded its allotted fifteen minutes of fame.
I’m guessing you’ve probably heard of ChatGPT or maybe even know someone that has used it. ChatGPT is considered a generative AI application because it takes as input some text from a user and then generates or produces an output that consists of an essay. The AI is a text-to-text generator, though I describe the AI as being a text-to-essay generator since that more readily clarifies what it is commonly used for.
You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding. All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt.
The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln,” the generative AI will provide you with an essay about Lincoln.
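To make that prompt-and-response flow concrete, here is a minimal sketch of asking a text-to-text generative AI app for such an essay programmatically. Treat it as a sketch under assumptions rather than a definitive recipe: it presumes the OpenAI Python library roughly as it existed at the time of this writing (the openai package with its ChatCompletion interface), and the model name and parameters shown are merely illustrative.

# Minimal sketch, assuming the openai Python package circa early 2023.
# The model name and parameters are illustrative, not prescriptive.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes your API key is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model underpinning ChatGPT at the time
    messages=[
        {"role": "user", "content": "Tell me about Abraham Lincoln"},
    ],
    temperature=0.7,  # some randomness, so outputs vary from run to run
)

# The generated essay comes back as plain text.
print(response.choices[0].message.content)

Run it twice and you will typically get two differently worded essays about Lincoln, which previews the point made shortly about the probabilistic nature of the generated text.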
There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation. Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln.
The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it. Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web.
As a result of examining millions upon millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. Because various probabilistic functionality is added in, the resulting text is pretty much unique in comparison to what has been used in the training set.
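For a rough intuition about how pattern matching plus probability can yield text that is not a verbatim copy of any source, consider the following toy sketch. It is a deliberately tiny word-pair (bigram) model of my own devising and is nothing like the massive neural network approach that generative AI apps such as ChatGPT actually use; it merely illustrates the general notion of deriving patterns from a training corpus and then sampling probabilistically to produce new combinations.

# Toy illustration only: learn which word tends to follow which word in a
# small corpus, then generate text by sampling those learned patterns.
# Real generative AI uses vastly larger corpora and neural networks.
import random
from collections import defaultdict

corpus = (
    "abraham lincoln was the sixteenth president . "
    "lincoln was a lawyer before he was president . "
    "the president delivered the gettysburg address ."
)

# "Training": count which words follow which other words.
followers = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

# "Generation": probabilistically walk the learned patterns.
word = "lincoln"
output = [word]
for _ in range(12):
    candidates = followers.get(word)
    if not candidates:
        break
    word = random.choice(candidates)  # pick a plausible next word at random
    output.append(word)
    if word == ".":
        break  # stop at the end of a sentence

print(" ".join(output))
# One possible output: "lincoln was a lawyer before he was the sixteenth president ."
# That sentence stitches together fragments of the corpus rather than copying
# any single line verbatim.

Scale that basic idea up to patterns derived from vast swaths of the Internet and you get a sense of why the outputs usually look original even though they derive entirely from what was examined during data training.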
There are numerous concerns about generative AI. One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here). Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves.
You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI.
For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here. There have been some zany outsized claims on social media about generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overblown claims.
You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate.
Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action. Do not anthropomorphize AI. Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform.
With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app. One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see. Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicion.
Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham