AI Regulation, Global Governance And Challenges

By Daniel Keller, CEO & Co-founder, InFlux Technologies

The breakneck pace of innovation in AI technologies over the past few years has continued to provoke varied reactions, ranging from curiosity and enthusiasm to concern and outright fear. One thing, however, is fairly certain: There is an ongoing global race for AI dominance.

Business owners, companies and professionals in different sectors are coming to terms with the massive potential for growth, cost efficiency, reduction in human error and improved profit margins that AI offers. At the same time, the risks and inherent dangers of "Wild West" AI have become apparent, underscoring the need for regulation and AI governance.

The State Of AI

The launch of ChatGPT by OpenAI in 2022 was a wake-up call for innovators in the AI space. Prior to its release, although companies like Google, Microsoft and Meta were already on the AI train, not much success had been realized—especially in the public domain.

Meta's BlenderBot 3 was the subject of harsh criticism, its Galactica model had to be pulled down after three days and Microsoft's Tay didn't survive 24 hours on Twitter. The commercial success of ChatGPT unleashed a wave of AI technologies with a new focus on tools and applications that allowed for direct user interaction.

Google launched Bard (now Gemini) as a direct competitor, while Microsoft’s $13 billion investment in OpenAI allowed it to integrate generative AI technology into its search engine, Bing.

Other sectors are not left out of the “AI revival”; financial institutions employ AI solutions for fraud detection with algorithms that leverage behavioral analysis, natural language processing and pattern recognition to identify fraudulent activities. In the healthcare industry, AI is helping to improve patient experience and diagnosis, interpret X-ray results, manage healthcare data and more.
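The pattern-recognition step in such fraud-detection pipelines can be sketched in miniature. The snippet below is a toy illustration, not any institution's actual system: it flags transactions whose amounts deviate sharply from an account's historical norm, a simple stand-in for the behavioral-analysis techniques mentioned above. The function name and threshold are assumptions for the example.

```python
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of transactions whose amount deviates more than
    `threshold` standard deviations from the account's historical mean.
    A deliberately simplified stand-in for production fraud scoring."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]
```

Real systems layer many such signals (merchant category, geolocation, velocity checks, NLP on transaction descriptions) and feed them into trained models rather than a single statistical rule.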

The Need For Regulation

As companies and businesses increasingly incorporate AI technology into their products, decision-making processes and service delivery, the spotlight is on the data practices behind these algorithms and the outcomes they produce.

Misinformation, perhaps, remains one of the biggest genuine risks of generative AI. In 2023, an image purportedly showing an explosion near the Pentagon made the rounds on social media and briefly triggered a panic reaction in the stock market.

Even more dangerous is the political effect that AI-generated news and deepfakes can cause; media outlets publishing real news side by side with AI pieces can spread misinformation on a large scale and erode the public’s trust in what they see or hear.

Biased AI models can also result in large-scale discrimination. A study by University of California researchers uncovered racial bias in a widely used healthcare algorithm. Since AI systems are typically deployed across large organizations, algorithmic discrimination can amplify bias on a scale that dwarfs the capabilities of conventional systems.

AI Regulation Measures

Recognizing the inherent danger of unregulated artificial intelligence, governments all over the world are paying closer attention to this subject. Several have already released guidelines and frameworks to guide the use of AI technology.

Let’s take a look at some of them.

The EU Artificial Intelligence Act

Just as it did with the General Data Protection Regulation (GDPR), the European Union is one of the first governmental bodies to articulate legislation on AI. The EU AI Act “lays the foundations for the regulation of AI in the EU” and classifies AI risks into four different risk categories, namely:

• Unacceptable Risk

• High Risk

• Limited Risk

• Minimal Risk

By applying specific requirements to AI systems based on the risk category they fall into, the EU hopes to establish an AI environment that improves trust and minimizes the negative implications of such technologies.

For example, AI systems that engage in subliminal manipulation (e.g., electoral disinformation tools) or the biometric categorization of people based on sensitive characteristics are classified under unacceptable risk and prohibited. The Act also covers other measures for post-market monitoring and information sharing.
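The Act's tiered approach can be sketched in code, purely as an illustration: the actual obligations are defined in legal text, and the obligation strings below are simplified assumptions, not an official schema.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping from tier to obligations; the real requirements
# are set out in the Act itself, not in any machine-readable schema.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "post-market monitoring"],
    RiskTier.LIMITED: ["transparency disclosures (e.g., labeling AI content)"],
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The design point the sketch captures is that the regulatory burden scales with the assessed risk of the system, rather than applying one uniform rulebook to all AI.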

The United States AI Executive Order

In 2022, the White House Office of Science and Technology Policy rolled out a "Blueprint for an AI Bill of Rights," and in early 2023, the National Institute of Standards and Technology released an "Artificial Intelligence Risk Management Framework."

However, perhaps the most important AI regulation move is President Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order covers eight policy fields to ensure “new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, [and] promote innovation and competition,” among others.

China's AI Regulations

China started working on AI laws in 2021, beginning with the Code of Ethics for New-Generation AI. Other measures, including the Deep Synthesis Provisions, the Provisions on the Management of Algorithmic Recommendations in Internet Information Services, the Interim Measures for the Management of Generative Artificial Intelligence Services and the Personal Information Protection Law, all seek to capture the government's position on the development, use and security control of AI technologies in China.

Challenges To AI Regulation

• Technology Growth Pace: The rapid acceleration of AI innovation makes it difficult for regulators to anticipate developments or enact a comprehensive framework. The EU AI Act attempts to address this by using different tiers of classification. However, the rapid evolution of AI technology could still outpace existing regulations, necessitating constant flexibility and response agility.

• Bureaucratic Confusion: AI regulations, in many cases, rely on, interact with and overlap with other existing laws. This can cause bureaucratic confusion in local implementation and hinder international collaboration, especially given differences in regulatory standards and frameworks across borders.

• Regulation-Innovation Balance: Regulating AI technology may, in some cases, stifle innovation and limit explorative growth. Deciding which regulatory measures are innovation-friendly, and when to apply them, can be a tricky challenge with dire consequences for getting it wrong.

Rounding Up

Effective AI regulation requires a collaborative approach involving governments, industry leaders and private sector experts to ensure ethical standards keep up with technological advancements. At the same time, it is important to strike a careful balance between mitigating the potential risks of AI and leveraging the technology for the greater good of humanity.