Science & Enterprise logo
Science for business people. Enterprise for scientists.

How AI is shaping the health sector: upcoming rules for drugs and medical devices

– Sponsored content –

Human and robot hands
(Pete Linforth, Pixabay. https://pixabay.com/photos/connection-hand-human-robot-touch-3308188/)

8 Aug. 2024. Artificial Intelligence (AI) is revolutionizing the health sector, especially in pharmaceuticals and medical devices. The integration of AI into these fields promises to enhance efficiency, accuracy, and innovation.

However, this rapid advancement also brings significant challenges and the need for robust regulation. The upcoming AI rules for pharmaceuticals and medical devices aim to ensure the safe and ethical use of AI technologies. This article explores the current and forthcoming regulatory frameworks, ethical considerations, and the potential risks and opportunities associated with AI in the health sector.

Upcoming regulations in the EU

AI regulation for pharma and medical devices in the European Union is set to undergo significant changes with the introduction of the EU AI Act. This landmark legislation, politically agreed in December 2023 and entering into force in August 2024, aims to create a robust regulatory framework governing the use of AI technologies, including in the pharmaceutical industry and medical devices sector. The EU AI Act categorizes AI systems into four risk levels — unacceptable, high, limited, and minimal — and mandates that organizations audit their AI models to ensure compliance with the obligations attached to each category.

Organizations will need to implement stringent measures to meet the requirements of the AI Act, including conducting regular compliance checks, reporting serious incidents, and taking immediate corrective actions in case of non-compliance. National supervisory authorities, in collaboration with the proposed European Artificial Intelligence Board (EAIB), will be responsible for enforcing the act, ensuring compliance, and imposing penalties where necessary.

The act also sets staggered timelines for its provisions to become binding: the ban on unacceptable-risk AI systems applies six months after the act enters into force, and transparency and risk-assessment obligations become mandatory 12 months after entry into force. Most of the act's provisions will be fully applicable by 2026, setting a precedent for other regions to follow in AI regulation.

AI tools in the pharmaceutical industry and medical devices sectors carry the risk of incorrect assessments if not properly managed. This underscores the importance of involving clinicians in the development and deployment of AI technologies and ensuring they receive adequate training. Regulatory oversight will be essential to prevent misuse and ensure the safe application of AI.

For example, the FDA's medical device regulations and its guidance on artificial intelligence emphasize the need for transparency and accountability in AI applications. This includes providing meaningful information about the logic involved in automated decision-making processes, as required by the General Data Protection Regulation (GDPR) in Europe.

Developing comprehensive guidelines to prevent the misuse of AI is a priority. Under the EU AI Act, organizations must take immediate corrective action in cases of non-compliance and report serious incidents to the designated national authorities, which will work in concert with the proposed European Artificial Intelligence Board (EAIB).

In the context of machine learning and AI, ensuring data integrity and addressing biases in input data are critical for the safe and effective application of these technologies. Companies should also be prepared to conduct regular compliance checks, especially for AI systems that pose a high risk.

EMA and HMA initiatives

The integration of artificial intelligence (AI) in the pharmaceutical and medical devices sectors is rapidly evolving, prompting regulatory bodies to adapt and create frameworks that ensure safe and effective use. The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) have been proactive in this regard, introducing key initiatives such as the Big Data Workplan 2022-2025 and the AI workplan 2023-2028. The Big Data Workplan focuses on leveraging AI to enhance the medicinal product lifecycle, aiming to improve processes from drug development to post-market surveillance. This initiative underscores the importance of utilizing machine learning and data analytics to generate meaningful insights, optimize clinical trials, and ensure patient safety.

In parallel, the AI workplan 2023-2028 aims to establish a regulatory system that not only harnesses the capabilities of AI but also addresses the associated risks. This comprehensive plan includes developing guidelines for transparency, accountability, and risk mitigation, ensuring that AI applications in the pharmaceutical industry are both innovative and safe. By fostering collaboration with developers, academics, and other regulators, the EMA and HMA are committed to creating a robust framework that supports the responsible use of AI in healthcare.

These initiatives are part of a broader effort to align with the upcoming EU AI Act, which will set the stage for AI regulation across Europe. The act will require organizations to audit their AI systems, categorize their risk levels, and comply with stringent requirements to ensure ethical and safe AI deployment. This regulatory landscape highlights the need for continuous learning and adaptation to keep pace with technological advancements, balancing innovation with governance.

Upcoming regulations in the U.S.

The rapid integration of artificial intelligence (AI) into the pharmaceutical industry and medical devices carries significant potential for misuse if not properly regulated. A primary concern is the risk of incorrect assessments: AI tools used without adequate clinician involvement and training can lead to erroneous medical decisions, such as incorrect diagnoses or inappropriate treatment recommendations. This underscores the need for regulatory oversight and comprehensive guidelines to ensure the safe application of AI technologies in healthcare.

The FDA has been proactive in addressing these challenges through various initiatives. For instance, the Software Pre-Certification pilot program, which concluded in 2022, explored a streamlined oversight model for software-based medical devices intended to ensure that only high-quality, safe software reaches the market. Additionally, the AI/ML-Based Software as a Medical Device (SaMD) Action Plan, published in 2021, outlines the FDA's approach to regulating machine learning and AI-driven software as medical devices, with a focus on real-world performance monitoring and quality assurance.

Moreover, the executive order on safe, secure, and trustworthy AI, issued in October 2023, directs U.S. executive departments and agencies to evaluate the safety and security of AI technologies. The order mandates rigorous testing and reporting on AI systems, emphasizing cybersecurity and ethical considerations in AI deployment.

The role of AI in the health sector

Artificial Intelligence (AI) is revolutionizing the pharmaceutical industry, offering numerous applications that enhance efficiency and accuracy across various stages of drug development and regulation. One of the most significant impacts of AI is in drug discovery and development, where AI algorithms can analyze vast datasets to identify potential drug candidates faster and more accurately than traditional methods. This accelerates the process of bringing new drugs to market, potentially saving millions in research and development costs.

In clinical trials, AI optimizes patient selection, predicts trial outcomes, and monitors patient adherence, ensuring that trials are conducted more efficiently and with higher success rates. The integration of AI in clinical trials not only speeds up the process but also improves the reliability of the results, which is crucial for regulatory approval.

Artificial intelligence is also becoming increasingly important for supply chain management in the pharmaceutical industry, where it helps optimize resource allocation, enhance decision-making, and improve overall efficiency. By addressing challenges such as inventory management and out-of-stock scenarios, AI can streamline operations and support sustainable practices. Platforms like Profiter's AI technology offer solutions that enable pharmaceutical companies to capture these benefits: by reducing inventory levels and minimizing supply chain disruptions, they free pharmacists and healthcare providers to focus more on patient care, ultimately enhancing the quality and efficiency of healthcare delivery.

AI also plays a crucial role in regulatory compliance by managing the enormous volumes of data generated in the pharmaceutical industry. It ensures that all processes adhere to regulatory standards, reducing the risk of non-compliance and the associated financial and legal consequences. The implementation of AI in regulatory compliance helps streamline workflows and maintain high standards of quality and safety.

As AI continues to shape the health sector, upcoming rules for drugs and medical devices are being developed to address the unique challenges and opportunities presented by AI technologies. For instance, the AI Act in the European Union and the evolving guidelines from the FDA in the United States aim to ensure that AI applications in the pharmaceutical and medical devices sectors are safe, effective, and ethically sound. These regulations are crucial for balancing innovation with governance, ensuring that AI technologies can be harnessed for the benefit of patients and healthcare providers while minimizing risks.

Potential for misuse

The integration of artificial intelligence in medical devices and the pharmaceutical industry holds great promise but also brings significant risks. One of the primary concerns is incorrect assessments made by AI tools. These tools, if used without proper clinician involvement and adequate training, can lead to erroneous medical evaluations and decisions. This not only jeopardizes patient safety but also undermines trust in AI technologies within healthcare.

To mitigate these risks, there is a pressing need for robust regulatory oversight. Developing comprehensive guidelines and frameworks is essential to prevent misuse and ensure the safe application of AI in medical contexts. Regulatory bodies like the FDA have already begun to address these challenges through medical device regulations and guidance on artificial intelligence. Similarly, the EU AI Act, working alongside the Medical Device Regulation (MDR), aims to balance innovation with stringent governance, ensuring that AI applications in healthcare are both effective and secure.

By addressing these regulatory and operational challenges, the healthcare sector can harness the full potential of AI while safeguarding patient well-being and maintaining high standards of care.

*     *     *