Like most forms of transatlantic cooperation, AI risk management and global AI governance depend on the state of EU-US relations.

The AI genie is out of the bottle, and like it or not, the technology is here to stay. How it impacts our lives, and whether that impact will be positive, depends on how lawmakers define its place in the legal framework governing the socioeconomic processes around us.

How the US and the EU tackle AI risk management will also define the future of cooperation between the two economic powerhouses, as AI is poised to become a major economic factor in its own right.

The US and EU Agree on Most Principles

Both the US and the EU agree that AI carries risks, and that authorities on both sides of the Atlantic must mitigate these risks before they get out of hand. Both sides also agree on the basic principles of trustworthy AI and support the drafting and implementation of international standards to govern the development and use of artificial intelligence.

However, what these standards should be and how authorities should manage AI risk is a matter of debate and disagreement.

  • The US pursues a path of distributing AI risk management duties across its federal agencies.
  • The EU has adopted a legislation-focused approach, targeting well-defined digital environments, such as AI use in consumer products and government use of AI.

These divergent paths toward tackling the risks AI presents are hardly precursors to a comprehensive set of international standards addressing the issue.

The EU’s AI Act – The First Law Governing AI Use

The EU’s AI Act adopts a risk-based approach to artificial intelligence systems, defining general-purpose AI and high-risk AI. Transparency rules apply in both cases, but the law curbs the use of biometric facial recognition systems, biometric categorization systems, and systems geared toward predictive policing.

The Act also disallows the scraping of biometric data from CCTV footage and social media to create biometric databases. Emotion recognition is another AI application the EU’s AI Act designates as high-risk.

ChatGPT’s arrival on the AI scene has roiled the previously clear waters around the general-purpose AI sections of the law.

Approved in December, the AI Act places the task of establishing risk management, transparency, and cybersecurity rules for general-purpose AI on the European Commission.

Now that ChatGPT has given everyone a glimpse of what large language models can do, lawmakers are having second thoughts. Some worry that ChatGPT’s ability to churn out complex prose without human oversight enables AI to create misinformation at scale.

Some politicians now want to add systems like ChatGPT to the high-risk AI category, expanding the reach and power of the law over them.

The tussle around ChatGPT has opened the proverbial can of worms, prompting lawmakers to begin pondering similar risks from other AI-powered systems previously deemed general-purpose.

On closer inspection, specialists may conclude that everything AI-related poses high risks from a regulatory and risk management perspective.

The AI Act will take its final form following three-way negotiations involving the European Commission, the European Parliament, and the Council of the European Union. The ChatGPT issue may derail or deadlock these negotiations.

The US Approach to AI Risk Management Creates Uneven Policies

In the US, there is no consistent federal-level approach to mitigating the risks AI may pose, though guiding documents exist. Most of the federal agencies involved in AI risk management have yet to create regulatory plans; thus far, only the Department of Health and Human Services has developed a comprehensive plan in this respect.

The principles common to the US and EU approaches to AI risk management include:

  • Transparency
  • Security
  • Non-discrimination
  • Explainability
  • Robustness
  • Interpretability
  • Accuracy
  • Data privacy

For the time being, no comprehensive legal framework exists on either side to put these principles into practice. Whether centralized, AI-driven systems can ever observe and uphold such principles is a question for another debate.

The EU’s approach creates far more transparency, providing the public with relevant insight into the role of AI in society. If and when implemented, the AI Act will create a comprehensive and transparent database of high-risk AI systems.

On the other side of the Atlantic, the US invests more in the development of AI, hoping to develop technologies that mitigate the inherent risks of artificial intelligence.

Striving to Align AI Governance

To foster cooperation, the US and EU will have to focus on aligning their respective AI regimes. Here’s what needs to happen for this type of alignment to occur:

  • The US should hasten the development of AI risk management plans through its federal agencies. It could then use these plans as the basis of a comprehensive governance approach aligned with the EU’s AI Act.
  • The EU’s AI Act should allow more flexibility for government agencies in the implementation of its provisions on a sectoral basis.
  • The US and the EU could perform collaborative research regarding AI risks and risk management.
  • The two should share knowledge and cooperate on standards development.

Big Tech Is Watching Eagerly from the Sidelines

Tech organizations like Microsoft and Google are closely watching the developments in the AI legal arena. Those with skin in the game won’t refrain from trying to influence the efforts to draft AI risk management laws. A recent report by the Corporate Europe Observatory details how companies like Google and Microsoft have lobbied key EU lawmakers to exclude ChatGPT-like systems from the high-risk category of the AI Act.

Even ChatGPT thinks it may need regulating. Asked whether the EU should designate large language model-based AI as a high-risk technology, it said yes, pointing to its own ability and potential to create misleading content.

The key for AI risk management on both sides of the Atlantic is to formulate laws that cover:

  • Oversight mechanisms
  • Effective monitoring
  • Appropriate safeguards

Exactly where lawmakers will draw the lines separating safe from high-risk applications of artificial intelligence remains to be seen. As AI and its applications evolve, so will our understanding of the technology’s impact on our lives.

AI risk management laws should, therefore, be flexible enough to accommodate tweaks as our knowledge of this controversial technology expands.

