The rapid evolution of emerging technologies such as blockchain and artificial intelligence (AI) has significantly impacted the healthcare sector. Enterprises and service providers have been on a journey to leverage these technologies to address the triple aim of healthcare: reducing the cost of care, enhancing care experiences, and improving health outcomes. The possibilities of applying generative AI (GenAI) in healthcare are constrained only by our imagination (see Exhibit 1).
However, risk is a formidable impediment to implementing AI, not only in operational terms but also in ethical, legal, and societal dimensions. The uncertain regulatory landscape adds another layer of complexity and will affect GenAI's adoption rate.
In this series of perspectives, we will unpack the latest legislative and regulatory trends in the AI space, examine their impact on healthcare and life sciences, and explore how to harness AI's potential effectively while mitigating its associated risks.
Exhibit 1. Sample: 255 US health plans and 105 health systems and hospitals. Source: HFS Research, 2024.
Artificial intelligence holds great promise to enhance the quality of healthcare. Yet while healthcare regulations aim to ensure high-quality care, few currently govern the use of AI in healthcare. That is set to change with the explosion of GenAI, which has piqued the interest of policymakers around the world.
On October 30, 2023, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance a coordinated approach to the safe and responsible development of AI. The executive order establishes a policy framework for managing the risks of AI and directs agencies to regulate the use of health AI systems and tools and to guide AI innovation across industries, including health and human services. As agencies have begun carrying out the order's mandates in recent months, its full impact across the technology, health, and life sciences sectors and beyond has become clearer.
In a groundbreaking move, the EU has reached a political agreement on the world’s first comprehensive AI regulation, the Artificial Intelligence Act (AI Act). This law is intended to ensure the safety of AI systems on the EU market and provide legal certainty for investments and innovation in AI while minimizing associated risks to consumers and compliance costs for providers.
It features a risk-based approach, defining four risk classes that cover different use cases of AI systems: unacceptable risk, high risk, limited risk, and minimal or no risk. In addition, foundation models must adhere to specific requirements. Fines for violations depend on the type of AI system, the company's size, and the severity of the infringement. Although the political agreement has been reached, the Act has not yet come into force.
Given the anticipated regulatory landscape, organizations must proactively consider the potential impacts of these policies on their operations and strategic planning. To navigate this evolving terrain effectively, here are our top 10 proactive actions to consider:
While enterprises face an uncertain regulatory landscape, the US and EU have made progress in laying out a potential path forward. It is time to get real about preparing an internal roadmap to operationalize these compliance frameworks. The consequences of failing to do so are not just penalties; in healthcare, it could be the difference between life and death.