GPT-3 can generate content at an alarming pace | Representational image
OPINION

We have officially entered the 6th generation of warfare

The most advanced artificial intelligence computing technology yet, Generative Pre-trained Transformer 3 (GPT-3), was released a few weeks ago, leaving users in shock and awe

Satyendra Pandey

Jai Mrug

“Music is the most advanced form of mathematics.” This is an extremely insightful statement. It draws a parallel between the two subjects, showing how pattern recognition, intervals and inversions are integral to both. Yet try to find out who wrote it and you will draw a blank. Extensive efforts poring over centuries of archival information, and interviews with experts, will also draw a blank. Why? Because the sentence was not written by a human; it was generated by a computer.

The most advanced artificial intelligence computing technology, Generative Pre-trained Transformer 3 (GPT-3), was released a few weeks ago, leaving users in shock and awe. It is far more powerful than anything one could have imagined. For the more technically minded, it is a text-generating model that estimates the likelihood of a particular word appearing alongside the surrounding words, spits out the most probable continuation, and repeats the process, having taught itself these patterns from enormous amounts of text and grown smarter along the way.
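To make the idea concrete, here is a minimal, hypothetical sketch of next-word prediction using simple word-pair counts in Python. It is only an illustration of the principle; GPT-3 itself relies on a vast neural network with 175 billion parameters, not counting tables.

```python
from collections import defaultdict, Counter

# Toy illustration of next-word prediction: count which word tends to follow
# which in a tiny sample text, then pick the most likely continuation.
# GPT-3 does something conceptually similar at an enormously larger scale,
# using a 175-billion-parameter neural network rather than simple counts.
sample_text = "music is the most advanced form of mathematics and music is pattern"
words = sample_text.split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("most"))   # -> "advanced"
print(predict_next("music"))  # -> "is"
```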

The model was trained on roughly half a trillion words, much of it drawn from across the internet, and has 175 billion parameters. Give it a topic and it will generate a five-paragraph essay in less than 10 seconds. Or pose questions to it, and it carries on a conversation with ease. A user cannot readily discern whether there is a human or a machine at the other end. GPT-3 is a near-miraculous feat of artificial intelligence, perhaps an achievement that many in the corporate and political world have always dreamt of.
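As an illustration of how a user might prompt such a system, here is a rough sketch assuming the OpenAI Python library and the original `davinci` engine of the time; the exact parameters, engine name and prompt are assumptions for illustration and may have changed since.

```python
import openai  # the OpenAI client library of that period (pip install openai)

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# Ask the model to write a short essay on a given topic.
response = openai.Completion.create(
    engine="davinci",      # assumed engine name for the original GPT-3
    prompt="Write a five-paragraph essay on the future of regional aviation in India.",
    max_tokens=400,        # upper bound on the length of the reply
    temperature=0.7,       # higher values produce more varied prose
)

print(response.choices[0].text)
```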

A machine that can mimic their responses, that comes close to a human in how it responds linguistically to a particular situation. An AI that will tell you what you are supposed to say next. Say goodbye to those chat-bots: this is exponentially better, and you will never be able to tell that it is a computer at the other end. Indeed, in control groups, up to 86% of participants erroneously believed that language written by the machine had been written by a human.

And the applications are many, ranging from replacing call-center workers to generating content and drafting legal notices. For the first time in human history, it is white-collar jobs that are likely to be impacted. Speechwriting – no issue; content generation – at the click of a button; programming – tried and tested; policy comparisons – easily done; perhaps even policy formulation. But there are also severe negative impacts. Documents can be forged, false instructions given, and the AI can be leveraged in information warfare. To say that precautions must be taken would be an understatement.

That said, the system is not without flaws. First, GPT-3 is based on anticipation without context. So while the technology will no doubt impact jobs, the transition for now will be slow, because its contextual capacity is still unproven. That is, how does the model know whether what it spits out is contextually correct? Could its output be grammatically correct and stylistically accurate, yet miss the context? The answer is both yes and no. Regardless of computing power, the model is built on inputs and patterns. It is still missing the most human of elements, one that is subjective and much maligned: human judgment.

This raises a question: can context be learnt by GPT, and how effectively? Because the key to effective communication is both context and style. Style will often show up in patterns and can be learnt effectively from patterns in data. Typically, style is learnt by seeing what precedes a certain set of identified data and what follows it, and this understanding is reflected in the way the next bits are chosen. So the patterns the model learns are, in a sense, a consequence of style.

However, style is a consequence of both personality and context, and that is where the glitch potentially lies. Suppose A always writes in a flippant style; that does not mean the context is not serious. Conversely, if B adopts an angry, serious tone, it does not necessarily mean the context is serious. A computer cannot discern this. Not as of now, at least.

For now, GPT-3 poses some very real challenges, especially at a time when posts on social media have been known to cause upheaval and we are increasingly glued to our phone screens, consuming vast amounts of information. GPT-3 can generate content at an alarming pace – content that may or may not be contextual but that will further blur the line between facts and “alternate facts.” Soon we will see an arena of what we may call “grey knowledge”: knowledge that is derived repetitively from the same sources, cannot innovate, and at the same time convolutes the information available.

In the era of political correctness, algorithmic bias is the newest challenge yet, and it should be noted and mitigated. The model sources the majority of its content from the internet, where the discourse is for the most part driven by quantity and links rather than substance. Thus, if a particular word is associated with a particular sentiment online, that is exactly what the model spits out. Stereotypes are likely to grow stronger and bias is likely to increase – and at the end of the day there is no legal recourse, because the content is auto-generated via artificial intelligence.

If you thought the Cambridge Analytica and deep-fake scandals raised significant concerns, this technology raises concerns that are orders of magnitude greater, simply because of the scale of motivated and tendentious conversation it can generate. The scale could make Goebbels turn in his grave.

An impressive advance in computing, or the beginning of something far more sinister? We have officially entered the 6th generation of warfare. The future is here. Are we ready?

(Jai Mrug is the founder and CEO of M76 analytics. He is also a renowned psephologist. Satyendra Pandey is an aviation professional and a columnist. Views expressed are personal.)


EastMojo
www.eastmojo.com