The Israeli Ministry of Health (the “MOH”) recently published Key Principles for Evaluating AI-Driven Interventional Trials (the “Publication”). The Publication is the result of extensive work performed by a dedicated committee tasked with understanding global regulations, identifying key issues in AI-based clinical trials, and developing safety and ethical guidelines.
The Publication provides a comprehensive framework of questions designed to assist Helsinki Committees (the Israeli equivalent of IRBs) in evaluating clinical trial applications involving advanced AI technologies. These questions address ethical, clinical, and technological aspects, aiming to clarify the complexities of these innovative tools and their clinical applications.
The Publication’s key highlights include:
- Practical Evaluation Tool: The Publication serves as a recommended resource for Helsinki Committees, offering a structured approach to assessing safety and ethical considerations when reviewing and approving clinical trial applications.
- Adaptability: The Publication’s content is framed as recommendations; Helsinki Committees are encouraged to adopt and tailor them to the specific risks and benefits of each trial.
- Types of Studies Addressed: The Publication identifies three main types of studies involving AI-based tools throughout their lifecycle in healthcare:
  - Model Development and Training: focused on testing multiple models using retrospective data to select the most accurate one. This stage emphasizes ensuring data privacy and compliance with regulatory standards.
  - Silent Prospective Studies: the AI tool operates in real-world conditions but without influencing patient care. Integrating AI into operational systems poses unique risks, including potential disruptions to system functionality and information security challenges. The study focuses on comparing the system’s predictions with real-world outcomes to validate performance.
  - Active Prospective Studies: the AI tool actively participates in clinical decision-making, influencing treatment processes. Ethical and safety challenges unique to AI are thoroughly evaluated, including the impact on clinical decisions, accuracy in preventing adverse events, and interactions between AI and human oversight.
The MOH invites various stakeholders to review the document and submit their feedback by April 1, 2025.
To access the MOH Publication, click here.
The content in this communication is provided for informational purposes only and is not intended to be comprehensive. It does not replace professional legal advice, which should be sought on a case-by-case basis.