Is the world prepared for the coming AI surge? The answer at this point would have to be “absolutely not!”

We enjoy the functionality and useful input AI gives us daily, but do we understand the possible future implications of this technology? Most of us are blissfully unaware of the surprises it may harbor.

ChatGPT launched in November 2022. Built on a Large Language Model (LLM), it is a generative pre-trained transformer capable of delivering human-like answers to questions. Through this technology, humans can finally interact with computers much as they would with their fellow humans.
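
For readers curious what that interaction looks like in practice, here is a minimal sketch of posing a question to a chat model programmatically. It assumes the official OpenAI Python client (the openai package, version 1.x) and an API key in the environment; the model name and prompt are purely illustrative, not part of this article's reporting.

    # Minimal sketch: ask a question in plain English, get a human-like answer back.
    # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": "Explain the GDPR in one paragraph."}],
    )

    print(response.choices[0].message.content)  # the model's conversational reply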

Since its launch, millions have typed questions into ChatGPT and received more or less satisfactory answers. Skype users now find AI chatbots messaging them, urging them to ask questions and engage.

The novelty of the technology has led us to ignore its present and future implications. And there are many such implications, more profound than the wisest of us can comprehend at this time.

Regulators Move Against ChatGPT

Several countries have banned ChatGPT since its launch, including Russia, China, North Korea, and Iran. Seeing these closed, autocratic societies take a stand against a technology that relies on the free flow of information is hardly surprising. Recently, however, Italy has also blocked users’ access to ChatGPT, becoming the first Western country to take such a radical anti-AI measure.

Although blocking access to information is never a net positive, many of the concerns the Italian regulator has raised are valid.

Concerns About AI

Specifically, Italy’s data protection authority has voiced privacy-related concerns. It is particularly concerned about how the AI stores and processes personal data, and by extension, it has called into question its GDPR compliance.

According to the Italian authorities, ChatGPT suffered a data breach, reported on March 20, 2023, involving users’ conversations and payment information. The breach may have exposed sensitive data to hackers.

The Italian watchdog does not believe that there is a legal framework that allows ChatGPT to collect the information it needs to fine-tune its capabilities.

Another problem the Italian authorities have raised is that there is no age restriction on the use of ChatGPT.

Google’s rival AI product, Bard, has already addressed this part of the problem. It is only available to users aged 18 or over.

Italy is part of the EU, whose data protection authorities could deal a significant blow to AI across the 27-nation bloc if they find the Italian regulator’s stance valid under the bloc’s data protection laws.

Ireland’s data protection commission has already signaled its intention to look into the legal basis of the Italian authorities’ move.

Companies operating in Europe must comply with the EU’s strict data protection laws. If they fail to do so, they risk being shut out of the bloc, as ChatGPT has been shut out of the Italian market.

Italy’s Concerns Are Only the Tip of the Iceberg

Legal concerns about data protection – as valid as they are – represent only a tiny fraction of the problems AI may cause.

In the long term, AI may pose a severe threat to humanity, according to a group of tech figures involved in AI, including Twitter owner Elon Musk.

Here are some of the concerns these experts have about the uncontrolled rise of AI.

  • Instead of taking place in a carefully controlled and regulated environment, AI development has degenerated into a free-for-all arms race in recent years.
  • AI developers cannot control, understand, or predict the behaviors of the digital minds they create.
  • AI threatens millions of jobs and skills.
  • AI can flood communication channels with misinformation on a scale we have never seen and are not prepared to handle.
  • In the future, AI may pose a threat to the human control of our civilization.

The solutions proposed are radical. Specialists have called on OpenAI and other actors involved in AI development to:

  • Suspend the training of AI systems more powerful than GPT-4, the newest model behind ChatGPT, for at least six months
  • Reach an industry-wide agreement to slow AI development at critical junctures
  • Get governments to step in and institute moratoriums on development where needed
  • Support the creation of new regulatory authorities dedicated to AI

The UK has thus far refused to set up a dedicated regulator for the AI industry.

AI Regulation Lags Behind

Stepping up regulatory efforts and tackling the AI genie before it leaves the bottle should be a priority for authorities the world over.

Efforts should start by investigating ChatGPT and its ilk, exposing them to greater public scrutiny.

Authorities should assume control over the budding industry.

The EU is currently busy putting together the first AI-focused legislation in the form of its AI Act.

It will take years for the law to come into effect, however, leaving ample time for AI developers to speed ahead in the hopes of securing an edge over competitors before the crackdown.

The law itself will only regulate AI applications falling into categories it deems unacceptable or high-risk. Social credit-scoring systems would fall into the unacceptable category. Applications like the automated scoring of job applications to determine employee suitability are high-risk.

AI applications that do not fall into either of those categories will remain unregulated.

OpenAI responded to the concerns of the Italian regulator by pledging to reduce the use of personal information in its AI training process. It has reiterated its intention to make ChatGPT available to the Italian public as soon as possible.

James West has more than 15 years of experience writing about finance and particularly cryptocurrencies, covering emerging tech, trading and industry trends.
