Point of View

Emerging regulations will change the global AI landscape

Enterprise leaders, politicians, academics, and wider society agree that AI must be regulated. Even so, the potential impact such regulations could have on enterprise investments and strategies has created an atmosphere of uncertainty, holding back the adoption of AI in general and of generative artificial intelligence (GenAI) in particular. But now, regulators are acting.

The EU has formalized its AI Act, and the US has issued an Executive Order. Do they provide the certainty required for enterprises to unleash the full potential of GenAI? This report compares the EU and US approaches and offers guidance on how enterprise leaders should prepare.

Legislation aims to ensure AI will benefit all

Policymakers worldwide have been scrambling to determine the regulatory principles under which AI models should operate. The underlying aim is to balance the risks and opportunities AI poses and to encourage the development of trustworthy AI that considers everyone's voice and benefits society.

Enterprise concerns about trusting GenAI outputs, and about understanding how those outputs were produced, have proved to be among the biggest obstacles to firms adopting the technology (see Exhibit 1). Many enterprise leaders would welcome the clarity and level playing field that law can provide.

Exhibit 1: Trust and governance are among the biggest obstacles to deploying GenAI in the enterprise

N=104 enterprise leaders actively exploring and deploying GenAI
Source: HFS Research 2023

EU sets the standard with the world’s first comprehensive AI law

The race to regulate AI is not new; it started before ChatGPT emerged. Still, GenAI has drastically accelerated the process worldwide. The international community has demonstrated its commitment to creating global platforms and frameworks, and several nations have forged their own regulations reflecting their unique cultures, values, and commitments to responsible AI.

In March 2024, the European Union (EU) adopted its landmark Artificial Intelligence Act (AI Act), the world's first comprehensive AI law. The EU applies its strictest rules to the riskiest AI models to ensure AI systems are safe and respect fundamental rights in the EU. While the law is not yet in effect, it brings some legal certainty for enterprises building AI models in an otherwise grey area.

Biden’s Executive Order requires federal agencies to manage AI risks

In the US, the Biden Administration issued an Executive Order (EO) in October 2023 to promote ethical, safe, and trustworthy AI development and use. It requires federal agencies to manage AI risks and promote American values such as privacy, civil rights, and civil liberties. While significant progress has occurred, no comprehensive federal AI legislation is in sight. Some feel there is no need for aggressive regulation that could stifle innovation and deny America global leadership in AI.

The EU has proven its ability to set benchmarks in digital policy: the "Brussels effect" followed the General Data Protection Regulation (GDPR), which took effect in 2018 and which jurisdictions around the world soon emulated. The AI Act provides a blueprint for similar regulatory efforts, and it has global implications. US companies that offer AI products or services in the EU must abide by it; they must map their processes and assess how well their AI systems conform to the Act. It is unclear whether enterprise leaders will tailor their products exclusively for the EU market, with different standards for other regions, or standardize their offerings globally using the AI Act as a baseline framework. The impact of GDPR suggests the latter is more likely.

Exhibit 2: Comparative analysis of regulatory approaches in the EU and US

Source: HFS Research 2024

The EU AI Act's risk framework defines its impacts and constraints

The EU has chosen to regulate AI models based on the potential risks they pose to society, and these classifications trigger different compliance obligations (see Exhibit 3). This is bound to have a transformative impact on how US companies build AI tools and enact governance programs going forward.

Unacceptable Risk: The law bans AI systems that carry unacceptable risk, for example, those that use biometric data to infer sensitive characteristics such as sexual orientation, or government social scoring that could lead to discrimination.

High Risk: High-risk AI applications (see Exhibit 3) are those that pose a threat to human safety or fundamental rights. These include AI systems used in hiring, law enforcement, and critical infrastructure (such as transportation systems). Developers must demonstrate compliance with obligations covering risk-mitigation systems, high-quality data sets, transparency, human oversight, and accuracy.

Exhibit 3: The EU Act defines the highest risk applications of AI. If you are active in these areas, you must prepare to comply.

Source: HFS Research 2024

Limited Risk: This tier imposes specific transparency obligations on AI tools, requiring that people be informed when they are interacting with AI, such as customer service chatbots.

Minimal Risk: Use cases defined as minimal risk carry no restrictions or mandatory compliance obligations. However, general principles such as AI literacy training, human oversight, non-discrimination, and bias removal still apply. Examples include AI-powered video games and email spam filters.
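
Taken together, the tiers trigger escalating obligations. As a minimal, purely illustrative sketch (not legal guidance), the mapping could be captured as a compliance checklist keyed by tier; the tier names follow the Act, but the obligation strings are our own shorthand:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers mirroring the EU AI Act's four-level framework."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict obligations (e.g., hiring, infrastructure)
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no mandatory obligations (e.g., spam filters)

# Shorthand obligation checklist per tier; the Act itself defines the real duties.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not deploy in the EU"],
    RiskTier.HIGH: [
        "risk-mitigation system",
        "high-quality data sets",
        "transparency documentation",
        "human oversight",
        "accuracy monitoring",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["voluntary: AI literacy, bias review"],
}

def checklist(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given tier."""
    return OBLIGATIONS[tier]

print(checklist(RiskTier.HIGH))
```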

Owners of general-purpose AI models have additional requirements

General-purpose AI (GPAI) models, such as GPT-4 and LLaMA, have separate compliance obligations, including transparency, documenting the modelling and training process, and complying with EU copyright law. GPAI models deemed to pose systemic risk face additional obligations, including model evaluations, risk-mitigation systems, and reporting requirements.

Breach the AI Act and you could be fined up to 7% of annual global turnover

Violations of the AI Act carry fines ranging from 1.5% to 7% of a firm's annual global turnover, depending on the nature of the infringement and the company's size. For a firm with €10 billion in annual turnover, for example, that implies penalties of €150 million to €700 million.

The AI Act will likely come into force later this year, with a gradual, phased implementation period. Most provisions apply 24 months after entry into force, but the prohibitions on unacceptable-risk AI apply after 6 months, the general-purpose AI obligations after 12 months, and certain high-risk system requirements after 36 months.
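
To make the phasing concrete, here is a minimal sketch that computes each category's deadline from an assumed entry-into-force date. The date is a placeholder (the actual date depends on the Act's publication in the Official Journal of the EU), and the category labels are our own shorthand:

```python
import calendar
from datetime import date

# Assumed entry-into-force date, for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Phase-in periods, in months after entry into force, per the Act's schedule.
PHASE_IN_MONTHS = {
    "prohibitions on unacceptable-risk AI": 6,
    "general-purpose AI obligations": 12,
    "most other provisions": 24,
    "certain high-risk system requirements": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day to the target month."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

for category, months in PHASE_IN_MONTHS.items():
    print(f"{category}: comply by {add_months(ENTRY_INTO_FORCE, months).isoformat()}")
```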

Six actions to take now to stay compliant with emerging AI regulations

As enterprises continue to ramp up efforts to build AI models, the AI Act presents some regulatory clarity. Here are six actions enterprise leaders must take to manage their response effectively:

  1. Map and document the influence of AI systems across your entire business ecosystem (a minimal inventory sketch follows this list).
  2. Adopt strong AI governance frameworks featuring risk and quality management systems to ensure the responsible development and deployment of AI.
  3. Become AI fluent by developing comprehensive AI literacy training programs designed to educate all staff members who interact with AI systems.
  4. Establish a dedicated function to formulate and oversee AI policies, and to identify and mitigate risks associated with AI deployment.
  5. Develop AI Act compliance programs if you are a US or other international company operating AI systems within the EU.
  6. Keep current with the latest AI policies, legislation, and regulatory frameworks emerging globally.
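
As a concrete starting point for action 1, an AI inventory can begin as one structured record per system. The sketch below is hypothetical; the field names and the screening rule are illustrative, not drawn from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical enterprise AI inventory; all fields illustrative."""
    name: str
    owner: str                 # accountable business function
    purpose: str               # what the system decides or generates
    risk_tier: str             # e.g., "high", "limited", "minimal"
    deployed_in_eu: bool       # triggers AI Act scope questions
    uses_gpai_model: bool      # built on a general-purpose model such as GPT-4
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="HR",
        purpose="rank job applicants",
        risk_tier="high",      # hiring is a high-risk use under the Act
        deployed_in_eu=True,
        uses_gpai_model=True,
        mitigations=["human review of rejections", "periodic bias audit"],
    ),
]

# Flag records that need a priority AI Act compliance review.
for rec in inventory:
    if rec.deployed_in_eu and rec.risk_tier == "high":
        print(f"Review required: {rec.name} (owner: {rec.owner})")
```
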
The Bottom Line: The time for talking about responsible AI is over — now you must take steps to comply.

Companies building high-risk AI systems must act swiftly to meet new regulatory demands. The clock is ticking, and the hard work starts now.

Enterprises building powerful AI models talk about how committed they are to the responsible use and deployment of AI. But now with the AI Act, they must go further by taking concrete steps to achieve and maintain compliance with emerging (and evolving) regulations.
