Artificial Intelligence

Given the current growth of artificial intelligence (AI) as a key technology and economic driver offering immense opportunities – as well as new concerns – for healthcare, HOPE has been closely following EU action on AI for a number of years. AI has become ubiquitous in hospitals and healthcare services, and more recent developments, including generative and agentic AI, are further pushing the boundaries. At the same time, AI-powered technologies challenge established knowledge bases and modus operandi, raising data, privacy, cybersecurity, ethical, fundamental rights, regulatory, legal, bias, and transparency issues. They demand upskilling and reskilling of the healthcare workforce and – crucially – require developers and healthcare providers to build trust in AI’s everyday usefulness and capabilities, including as part of the emerging European Health Data Space.

The European Commission’s work on AI gained momentum with its 2018 Communication, Artificial Intelligence for Europe. This strategy outlined a specifically European approach to AI, covering technological, ethical, legal, and socio-economic aspects, to boost the EU’s research and industrial capacity and put AI at the service of citizens and the economy. A High-Level Expert Group comprising representatives from industry, academia, and civil society was appointed, and it released Ethics Guidelines for Trustworthy AI in 2019. These were embraced by the Commission in another Communication, Building Trust in Human-Centric Artificial Intelligence.

In 2018, the Commission published the first Coordinated Plan on AI, prepared with Member States to foster development and use (and updated in 2021). It was followed by the White Paper on Artificial Intelligence: A European approach to excellence and trust (February 2020), which laid the groundwork for the EU’s regulatory effort.

The proposal for a Regulation laying down harmonised rules on artificial intelligence (EU AI Act) was released in April 2021. Following intense political negotiations to refine its content, political agreement was reached at the end of 2023, and the final EU AI Act entered into force on 1 August 2024. As the world’s first comprehensive AI legal framework, its reception and application by industry and deployers are being closely watched globally. It establishes a risk-based approach, stipulating different rules and standards depending on the risk posed by AI systems (see also HOPE’s position paper, which draws attention to protecting citizens’ rights, professional readiness, and data quality). In 2024, the European AI Office was established within the Commission’s DG CNECT to support Member States’ authorities in their tasks, enforce rules for general-purpose AI models, promote an ecosystem of trustworthy AI, and foster international cooperation. It also supports the European AI Board, set up under the Act to ensure the coordination of national authorities, share technical and regulatory expertise, and provide policy advice. The majority of rules will become fully applicable on 2 August 2026, with two exceptions: the bans on prohibited practices and the obligations for general-purpose AI models already apply, while the classification as high-risk of AI systems subject to third-party conformity assessment under other EU rules applies from 2027. Much of the legislation will be refined via implementing acts, and its ramifications are likely to be felt only during the implementation phase.

In 2022-2023, HOPE contributed to a Thematic Network on the European Health Policy Platform coordinated by Health Action International and Brunel University London. Its Joint Statement expressed concern about AI’s impact on health outcomes for key populations, highlighting its potential to exacerbate existing health inequalities and proposing an array of general and specific measures to be taken by the Commission, Member States, and health AI stakeholders. It also called for respecting EU commitments in AI-specific and other legislation, including the GDPR, the Medical Devices and In Vitro Diagnostic Medical Devices Regulations, the Product Liability Directive, and the Data Governance Act.

Healthcare AI also needs to be seen within the wider context of AI uptake and exploitation across the EU. Other EU measures include the AI Innovation Package and AI Factories. Building on the launch of the InvestAI initiative, which aims to mobilise €200 billion in investment, the 2025 AI Continent Action Plan foresees further investment, infrastructure, and collaboration to enhance EU competitiveness, including in supercomputing and cloud services.

Looking ahead, the Apply AI Strategy initiative, launched in April 2025, serves as a blueprint for the full adoption of AI in strategic sectors and for strengthening the AI continent (see HOPE’s contribution to the public consultation). In summer 2025, the Commission also released the final report of its study on the deployment of AI in healthcare, which outlines the main challenges hindering widespread AI adoption and proposes collaborative measures to overcome them and meet key needs. Inspiration for tackling these obstacles is also drawn from the United States, Israel, and Japan.

In the European Parliament, the AI Act was assigned to the Internal Market and Consumer Protection (IMCO) Committee, with MEP Brando Benifei (S&D, Italy) as rapporteur, and to the Civil Liberties, Justice and Home Affairs (LIBE) Committee, with MEP Dragoş Tudorache (Renew, Romania) as co-rapporteur. Shadow rapporteur MEP Axel Voss (EPP, Germany) also played a key role, inter alia presenting the report of the temporary Special Committee on Artificial Intelligence in a Digital Age (AIDA), set up to analyse the future impact of AI. As a member of the Committee on Legal Affairs (JURI), he was also the rapporteur for the Commission proposal for an AI Liability Directive, withdrawn by the Commission in February 2025 on the grounds that no foreseeable agreement could be reached; other JURI colleagues were in charge of the Parliament’s proposals on the AI ethical framework and on intellectual property rights for the development of AI technologies, adopted in October 2020. In 2024, a Parliament working group overseeing the implementation of the AI Act was established, co-chaired by MEPs Benifei and McNamara (Renew, Ireland).

Successive EU Presidencies have issued Council conclusions, including on the Charter of Fundamental Rights in the context of AI and digital change (October 2020), on the need to strengthen the EU’s ambitions on AI (November 2024), and on ‘Towards the EU strategy on AI in science’ (May 2025).