We will examine in depth three issues troubling academics, celebrities and powerful politicians around the world, including the US Senate and the European Union.
Is Generative AI harmful to mankind?
Have OpenAI and Microsoft wilfully and illegally violated copyright law?
Should the growth of Generative AI be curbed?
All three questions have different answers, but let us not rush to conclusions; instead, let us dive into the genesis of the core issues and the origin of the problem. Artificial Intelligence, and generative AI in chatbots, is not new. It arrived around seventy years ago, just as the first computers did. Academics and intellectuals, however, did not take kindly to Artificial Intelligence. They feared it was out to compete with and dominate human intelligence. A great deal of scary storytelling and fear-mongering against AI has happened since then. Yet AI systems have done nothing fearsome enough to justify the allegations.
So why fear them now?
Let us look at what happened in the last decade. Around 2014, a machine learning technique was formulated called generative adversarial networks (GANs), which could synthesize convincing audio and video of real people. The first high-profile victim of this activity, later known as deepfakes, was none other than Republican Presidential candidate Donald Trump. A few words in many of his video speeches were manipulated with GANs to make him sound like a buffoon and a political imbecile. The fake versions had over 90% original content, with bits of manipulated speech that were ridiculously incorrect yet difficult to detect as fake. The Republicans hit back with deepfakes of the opposition as Trump rode to power. Politicians and celebrities remain prime targets of deepfake technology, which has grown more sophisticated and harder to detect by the day.
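For readers curious about the mechanics, the sketch below is a minimal, hypothetical illustration of the adversarial idea behind GANs: a generator learns to produce fakes while a discriminator learns to tell fakes from real samples, and each improves by trying to outwit the other. It uses PyTorch and toy one-dimensional data purely for illustration; the model names and numbers are our own assumptions, and this is not the code behind any actual deepfake system.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic
# a simple 1-D Gaussian distribution while a discriminator learns to
# tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian with mean 4.0 and std 1.25
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to fool the discriminator into outputting 1
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should roughly match the real mean/std
samples = generator(torch.randn(1000, 8))
print("generated mean %.2f, std %.2f" % (samples.mean().item(), samples.std().item()))
```

The same adversarial recipe, scaled up to images, audio and video with far larger networks and datasets, is what makes modern deepfakes so convincing.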
By 2020 OpenAI had tested the skills of generative AI in multiple formats, and by 2022 ChatGPT, with $10 billion in funding from Microsoft, started stoking fears of generative AI taking over the world. Apart from deepfakes, generative AI's capability to replicate original text, voice, graphics and images has enabled large-scale academic plagiarism that is extremely difficult to detect. Replication and plagiarism were never so easy. The Authors Guild of the US has launched a class action suit against OpenAI and Microsoft, accusing them of violating the copyright of thousands of books used to train ChatGPT and even GitHub's Copilot coding tool.
Meanwhile, Microsoft-backed OpenAI began co-opting developers to produce impressive text and images close to what human intelligence and years of deep learning could achieve. Generative AI powered by machine learning (ML), along with the publicity hype around ChatGPT, made its use widespread. And as many developers started actively using ChatGPT, which delivered real, scalable benefits, PR firms and news agencies were fed the myth that Microsoft had discovered a 'Google killer' in ChatGPT. So is ChatGPT a Google killer?
Not quite, says Yann LeCun, VP and Chief AI Scientist at Meta, in an interview with AIM. "I don't think any company out there is significantly ahead of others," he said. "But they [OpenAI] have been able to deploy their systems in a way that they have a data flywheel. So the more feedback they have, those systems help them to generate more feedback and later adjust it to provide better outputs," he explains. "I do not think those systems in their current form can be fixed to be intelligent in ways that we expect them to be," said LeCun. He explains that these systems are entertaining and impressive but not really useful. "To be useful, they have to make sense of real problems for people, help them in their daily lives as if they were traditional assistants, completely out of reach," he added, painting the real picture.
And since generative AI in its current form, like all other Artificial Intelligence tools, is a great gatherer of data but a poor user of it for solving real-world human problems, it is far from being a Google killer, or even mildly harmful to mankind. So our answer to the first question is: no, generative AI is not harmful to mankind, as per Yann LeCun, Chief AI Scientist at Meta.
Why?
Because though it can collate data and churn out new text, it cannot intelligently put that output to use to solve human problems; human intelligence still has to manage that. Generative AI, or AI in any form, cannot independently harm or help humans without human intervention.