Enterprise leaders, politicians, academics, and wider society agree AI must be regulated. Even so, the potential impact such regulations could have on enterprise investments and strategies has created an atmosphere of uncertainty, holding back the adoption of AI in general, and generative artificial intelligence (GenAI) specifically. But now, regulators are acting.
The EU has formalized its AI Act, and the US has issued an Executive Order. Do they provide the certainty required for enterprises to unleash the full potential of GenAI? This report compares the EU and US approaches and offers guidance on how enterprise leaders should prepare.
Policymakers worldwide have been scrambling to determine the regulatory principles under which AI models should operate. The underlying aim is to balance the risks and opportunities AI poses and to encourage the development of trustworthy AI that considers everyone’s voice and benefits society.
Enterprise concerns about trusting GenAI’s outputs, and understanding how those outputs were produced, have proved to be among the biggest obstacles to firms adopting the technology (see Exhibit 1). Many enterprise leaders would welcome the clarity and level playing field that regulation can provide.
Exhibit 1 (N=104 enterprise leaders actively exploring and deploying GenAI; source: HFS Research, 2023)
The race to regulate AI is not new; it started before ChatGPT emerged. Still, GenAI has drastically accelerated this process worldwide. The international community has demonstrated its commitment to creating global platforms and frameworks, and several nations have forged their own regulations that reflect their unique cultures, values, and commitment to responsible AI.
In March 2024 the European Union (EU) adopted its landmark Artificial Intelligence Act (AI Act) — the world’s first comprehensive AI law. The EU puts its strictest rules on the riskiest AI models to ensure AI systems are safe and respect fundamental rights in the EU. While the law is not yet in effect, it brings some legal certainty for enterprises building AI models in an otherwise grey area.
In the US, the Biden Administration issued an Executive Order (EO) in October 2023 to promote ethical, safe, and trustworthy AI development and use. It requires federal agencies to manage AI risks and promote American values such as privacy, civil rights, and civil liberties. While significant progress has occurred, no comprehensive federal AI legislation is in sight; some argue that aggressive regulation is unnecessary and could stifle innovation and cost the US its global leadership in AI.
The EU has proven its ability to set benchmarks in digital policy: the “Brussels effect” was evident after the General Data Protection Regulation (GDPR) took effect in 2018, and jurisdictions around the world soon followed its lead. The AI Act provides a blueprint for similar regulatory efforts. It has global implications: US companies that offer AI products and/or services in the EU must abide by its rules, and they must map their processes and assess how well their AI systems conform to the incoming requirements. It is unclear whether enterprise leaders will tailor their products exclusively for the EU market, with different standards for other regions, or standardize their offerings globally using the AI Act as a baseline framework. The impact of GDPR suggests the latter is more likely.
The EU has chosen to regulate AI models based on the potential risks they pose to society, and these classifications trigger different compliance obligations (see Exhibit 3). This is bound to have a transformative impact on how US companies build AI tools and enact governance programs going forward.
Unacceptable Risk: The law bans AI systems that carry unacceptable risk, for example, those that use biometric data to infer sensitive characteristics such as a person’s sexual orientation, or government-run social scoring that could lead to discrimination.
High Risk: High-risk AI applications (see Exhibit 3) are those that pose a threat to human safety or fundamental rights, such as AI systems used in hiring, law enforcement, or critical infrastructure (for example, transportation systems). Developers must demonstrate that their models meet obligations covering risk-mitigation systems, high-quality data sets, transparency, human oversight, and accuracy.
Limited Risk: This tier imposes specific transparency obligations, requiring providers to inform people when they are interacting with an AI system, such as a customer-service chatbot.
Minimal Risk: Use cases defined as minimal risk face no restrictions or mandatory compliance obligations, although AI literacy training and general principles such as human oversight, non-discrimination, and bias mitigation still apply. Examples include AI-powered video games and email spam filters.
General-purpose AI (GPAI) models, such as GPT-4 and LLaMA, have separate compliance obligations, including transparency, documenting the modelling and training process, and complying with EU copyright law. For the most capable GPAI models (those deemed to pose systemic risk), these obligations extend to model evaluations, risk-mitigation systems, and reporting requirements.
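For enterprises taking stock of where their AI portfolio sits under this classification, the tiered structure lends itself to a simple internal inventory exercise. The sketch below (in Python) is purely illustrative: the tier names follow the Act, but the example systems and the abbreviated obligation lists are assumptions for illustration, not legal guidance.

```python
# Illustrative only: a minimal internal inventory mapping AI systems to the
# AI Act's four risk tiers. Tier names follow the Act; the obligation lists
# and example systems are simplified assumptions, not legal advice.

OBLIGATIONS = {
    "unacceptable": ["prohibited; cannot be placed on the EU market"],
    "high": ["risk-mitigation system", "high-quality data sets", "transparency",
             "human oversight", "accuracy testing"],
    "limited": ["inform users they are interacting with an AI system"],
    "minimal": ["AI literacy training", "general principles (oversight, non-discrimination)"],
}

# Hypothetical enterprise inventory: system name -> assessed risk tier.
inventory = {
    "resume-screening-model": "high",
    "customer-service-chatbot": "limited",
    "email-spam-filter": "minimal",
}

for system, tier in inventory.items():
    print(f"{system} ({tier} risk):")
    for obligation in OBLIGATIONS[tier]:
        print(f"  - {obligation}")
```

An inventory of this kind is a natural starting point for the gap assessments the Act will require, whatever form the final internal tooling takes.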
Violations of the AI Act carry fines ranging from 1.5% to 7% of a firm’s yearly global turnover, depending on the nature of the infringement and the company’s size.
The AI Act is likely to come into force later this year, followed by a phased transition and implementation period. Enterprises will generally have 24 months to comply with its provisions; however, prohibited AI practices, general-purpose AI obligations, and certain high-risk AI systems must comply within 6, 12, and 36 months, respectively, after the Act takes effect.
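Because each deadline is defined relative to the date the Act enters into force, enterprises can pencil in their own compliance calendar as soon as that date is fixed. Below is a minimal sketch, assuming a purely hypothetical entry-into-force date:

```python
from datetime import date

# Hypothetical entry-into-force date, used only to illustrate the phased schedule.
ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption, not the official date

# Compliance windows (in months after entry into force), per the phased schedule.
SCHEDULE_MONTHS = {
    "Prohibited AI practices": 6,
    "General-purpose AI obligations": 12,
    "Most other provisions": 24,
    "Certain high-risk AI systems": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here since the day used is the 1st)."""
    year = d.year + (d.month - 1 + months) // 12
    month = (d.month - 1 + months) % 12 + 1
    return date(year, month, d.day)

for provision, months in SCHEDULE_MONTHS.items():
    print(f"{provision}: comply by {add_months(ENTRY_INTO_FORCE, months)}")
```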
As enterprises continue to ramp up efforts to build AI models, the AI Act presents some regulatory clarity. Here are actions enterprise leaders must take to manage their response effectively:
Companies building high-risk AI systems must act swiftly to meet new regulatory demands. The clock is ticking, and the hard work starts now.
Enterprises building powerful AI models talk about how committed they are to the responsible use and deployment of AI. But now with the AI Act, they must go further by taking concrete steps to achieve and maintain compliance with emerging (and evolving) regulations.