The European Union’s AI Act is the first comprehensive regulation establishing a legal framework for artificial intelligence (AI). The law was passed on May 21, 2024, and came into force on August 1 of the same year. It regulates the development, deployment, and use of AI systems. Its aim is to improve the functioning of the European internal market and promote the adoption of human-centered and trustworthy AI.
To this end, the regulation lays down harmonized rules to ensure that AI systems are safe and transparent while protecting the fundamental rights of citizens. These rules include measures for risk assessment and mitigation, requirements for the transparency and traceability of artificial intelligence, and provisions for monitoring and enforcement. AI is also becoming increasingly important in medical technology. In this article, we look at the objectives of the AI Act, how the risk classes are determined, and what impact can be expected on the medical devices market.
What are the AI Act’s goals?
The speed at which artificial intelligence has developed in recent years was hardly foreseeable for many; some feel that the AI Act has therefore come almost too late. Many operating companies now have to adapt their AI systems to comply with its requirements. By adopting the regulation, the European Union is pursuing the following objectives:
- Safety and transparency: Ensuring that AI systems work safely, transparently, and comprehensibly.
- Protection of fundamental rights: Ensuring that AI systems protect the fundamental rights of citizens and are non-discriminatory.
- Promoting innovation: Encouraging the development and use of AI technologies while taking the associated risks into account.
- Risk-based approach: Introducing a risk-based regulatory framework that sets out specific requirements for different categories of AI systems.
- Prohibition of certain practices: Prohibiting AI applications that pose an unacceptable risk, such as manipulative systems and real-time remote biometric identification.
These objectives are intended to help strengthen trust in AI systems and ensure their safe and ethical use.
Risk assessment in the course of the AI Act
One of the most relevant aspects of the AI Act for medical device manufacturers is risk assessment. The focus here is on people as users of this technology. Risk mitigation and transparency measures are also essential components. Here are the main points of the risk assessment:
- Categorization by risk: AI systems are divided into different categories based on their potential risk to the safety, fundamental rights, and health of citizens. These categories are:
- Unacceptable risks: AI systems classified as unacceptably risky, such as manipulative applications and real-time remote biometric identification. Systems that exploit the vulnerabilities of specific groups of people or that classify people based on social behavior (social scoring) are also prohibited.
- High-risk systems: Systems posing significant risks, such as AI in medical diagnostics or autonomous driving.
- Limited-risk systems: Systems that pose a moderate risk and must meet specific transparency requirements.
- Systems with minimal or no risk: Systems that pose minimal or no risk and therefore have to meet less stringent requirements.
- Risk mitigation: High-risk systems must meet strict requirements for security, transparency, and non-discrimination. Risk mitigation includes comprehensive risk assessments, continuous monitoring, and regular audits. Bias is a real concern: facial-recognition AI, for example, has in the past disadvantaged people of color.
- Transparency requirements: All AI systems must be transparent so that users can understand how the systems work and the decisions they make.
These criteria are intended to ensure that artificial intelligence is safe and trustworthy while still enabling innovation.
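The four-tier categorization above can be pictured as a simple lookup from risk category to obligations. The following Python snippet is a hypothetical illustration only; the category names are simplified and the listed obligations are examples, not the Act’s legal text.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict conformity requirements
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Illustrative (non-exhaustive) mapping of categories to example obligations.
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    RiskCategory.HIGH: ["risk assessment", "continuous monitoring",
                        "regular audits", "transparency"],
    RiskCategory.LIMITED: ["transparency requirements"],
    RiskCategory.MINIMAL: [],
}

def obligations_for(category: RiskCategory) -> list[str]:
    """Return the illustrative obligations attached to a risk category."""
    return OBLIGATIONS[category]
```

The point of the sketch is the asymmetry of the framework: the burden is concentrated almost entirely on the high-risk tier, while minimal-risk systems carry essentially no obligations.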
Impact of the AI Act on medical devices
The EU’s AI Act has a significant impact on medical devices, as the AI systems in question are often high-risk systems. Accordingly, products with integrated AI and stand-alone AI systems are generally classified in MDR risk class IIa or higher, since in most cases they are involved directly or indirectly in medical diagnostics. Under the MDR, a Notified Body must be consulted for products of this type. Post-market surveillance, a quality management system, technical documentation, and a risk management plan are also required for medical devices with a high-risk classification. However, some AI systems can also be assigned to Class I, which greatly reduces the number of requirements.
However, this does not mean that a medical device classified as high-risk under the AI Act automatically receives a high classification under the MDR. A device with integrated AI can be a high-risk system under the AI Act while falling into the lowest MDR risk class, Class I; such decisions are made on a case-by-case basis. Conversely, medical devices in Class IIa and higher are always high-risk AI systems.
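The one-way relationship described above can be sketched as a small decision helper. This is a hypothetical illustration of the rule of thumb from this article, not a legal test; the function name and the `case_by_case_high_risk` flag are invented for the example.

```python
def is_high_risk_under_ai_act(mdr_class: str,
                              case_by_case_high_risk: bool = False) -> bool:
    """Illustrative rule of thumb: an MDR class of IIa or higher always
    implies a high-risk AI system under the AI Act. For Class I devices
    the outcome is decided case by case, modeled here by a flag that a
    real assessment would replace with an actual legal analysis.
    """
    if mdr_class.upper() in {"IIA", "IIB", "III"}:
        return True
    return case_by_case_high_risk
```

Note that the implication only runs in one direction: the sketch can return `True` for a Class I device (high-risk under the AI Act, lowest class under the MDR), which mirrors the case-by-case outcome described above.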
What should manufacturers expect?
One fear of many manufacturers is that their medical devices will be classified in a higher risk class. In that case, companies would be subject to the requirements outlined above, along with higher costs and a longer time to market. A medical device that uses AI to make or support diagnoses could be up-classified, for example, as could monitoring devices with integrated AI that track patient data and trigger alarms. Even if such systems do not appear to affect patients directly, malfunctions can have devastating effects. The specific circumstances under which a medical device could be up-classified depend on many factors, including the nature of the device and how the AI is integrated into it.
Another point of criticism is the impact of the AI Act on the competitiveness of the European market. Small and medium-sized companies whose products are unexpectedly up-classified may not be able to afford the associated additional costs, which in case of doubt leaves them without a budget for further development and new products.
It is important to note that the exact impact of the AI Act on medical devices also depends on its enforcement in the individual EU member states. Medical device manufacturers are advised to familiarize themselves with the details of the AI Act and take appropriate measures to ensure compliance with the new regulations. As the AI Act has only recently come into force, it remains to be seen how the medical device industry will develop, especially since most of its provisions only apply after a two-year transition period.