Globalization

The rise of ChatGPT - Who speaks for you and why did you agree to it?

Date: January 24, 2023.

Many people have already tried publishing a media article written by a machine, or drafting a legal complaint about a consumer problem with the help of an AI-supported application. For the most part, they were satisfied, but not enthusiastic.

ChatGPT is the new star among so-called "large language model tools": applications capable of writing a text according to your instructions and requirements in a very short time. When OpenAI launched ChatGPT in November, it attracted a million curious users in the first five days.

From this text generator, you can get a short piece of information, an essay on a certain topic, or even a poem in just a few seconds. ChatGPT writes according to the instructions it is given, drawing on a huge amount of stored data (or knowledge) from books, textbooks, and websites.
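For readers curious what those instructions look like under the hood, here is a minimal sketch of a request to a large language model. It assumes OpenAI's Python library and its text-davinci-003 completion model, a close sibling of ChatGPT, which at the time of writing offers no public programming interface of its own; the prompt and API key are placeholders:

```python
import openai  # OpenAI's official Python library

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# The prompt plays the role of the instructions a ChatGPT user would type.
response = openai.Completion.create(
    model="text-davinci-003",  # a sibling model of ChatGPT
    prompt="Write a four-line poem about a winter morning.",
    max_tokens=100,
    temperature=0.7,  # higher values produce more varied wording
)

print(response.choices[0].text.strip())
```

The model receives the instructions as plain text and returns generated text the same way; everything else - the essay, the poem, the customer-service reply - is a variation of this one exchange.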

The content it produces is very similar to text written by a human hand, and the enthusiasm for this latest in a series of similar applications comes mainly from that "human" authorial trait.

How to prevent students from cheating

ChatGPT performs quite satisfactorily in customer service, because its complex sentences lift it above the level at which it was obvious to customers that their complaint had been answered not by an employee but by a machine. But that is not the level at which ChatGPT wants to stay.

Thousands of students around the world see it as a solution for exams that require an essay on a specific topic. If they are skilled enough to enter the right instructions, they will receive a completely correct text from ChatGPT, and their laziness will apparently be rewarded.

What will happen if a fellow student also submits more or less the same essay, with more or less similar conclusions? Or the whole student body?

Essentially, ChatGPT is able to string words together into meaningful units that read very much like human writing, but without understanding their meaning or the wider context in which they are used.

Students who take the shortcut offered by ChatGPT will receive a mediocre text, composed of information and conclusions that someone has already published before them, carefully selected by the machine. Perhaps enough for a passing grade, but not for progress in academic work.

The door to disinformation is wide open

At this stage of development, the scope for abuse of AI-generated content is as large as its practical benefits.

“With the ability to generate large amounts of text quickly and convincingly, generative AI tools like ChatGPT could be used to create and disseminate fake news on a large scale”, wrote ChatGPT itself.

It was responding to an Economic Times journalist's request to write an essay on why one should be wary of ChatGPT and other generative AI tools that could propagate fake news.

The answer was convincing. It even contained a value system, which is not characteristic of machines, yet it was not authentic: it was a product of accumulated knowledge and public debate, parts of which ChatGPT quickly and precisely selected and delivered to the user.

For two months now, many of the application's millions of users have been taking part in public discussions on various issues, on blogs and social media. It is realistic to say that it is no longer people who meet in a collision of opinions and arguments, but algorithms.

As the number of users of these "artificial intelligence-powered natural language processing tools" grows, public discourse could turn into a market of second-hand ideas, devoid of creativity and critical thinking.

“I suspect we are going to see huge amounts of content that is produced, none of which is particularly verified [and] the origins of which are not particularly clear. We are getting to a point where tools are going to make it harder and harder to solve this problem”, said Arthur Gregg Sulzberger, the chairman of The New York Times Company, during a panel discussion at the World Economic Forum in Davos last week.

Machines like ChatGPT fit all too comfortably into the existing environment of communication bubbles in which most users of social networks "live".

Their urge to communicate with a large audience will be satisfied by sharing agreeable content they will not have to "struggle" to produce, because it already exists.

Disinformation penetrates such a space with drastic ease, because verification of authenticity, truthfulness and relevance can no longer take place: the entire content is created elsewhere, in the world of algorithms.

Alongside the negative effects she expects from platforms like ChatGPT, Janet Haven, executive director of Data & Society, an independent non-profit research organisation, sees one positive prospect: more effective regulation.

“I predict - and hope - we will see growing attention at the federal level to build meaningful guardrails around the development and deployment of these and other AI systems - ones that account for their costs to society and put the protection of fundamental rights and freedoms over pure technical innovation”, said Haven.

This text is 100% human-produced.

Source TA, Photo: Shutterstock