Cambridge, England & Austin, TX, USA, December 9, 2025 – Health Innovation Kent Surrey Sussex (Health Innovation KSS), the University of Cambridge’s Trustworthy Artificial Intelligence Lab, the Responsible AI Institute and the health and care think tank The King’s Fund today announced TrustX Health, an initiative designed to verify, test, and deploy Agentic AI safely across health and care settings. TrustX is the first initiative of its kind focused on scientific, auditable, and scalable deployment of Agentic AI within the NHS and social care.
TrustX supports the ambitions of the NHS 2025 “Fit for the Future” 10 Year Health Plan for England, which calls for a fundamental shift toward prevention, digital transformation, and the widespread use of AI. The initiative responds directly to these priorities by creating a unified front door for evaluating and safely deploying Agentic AI across clinical and non-clinical workflows.
A trusted pathway for AI in health and care
Agentic AI refers to artificial intelligence systems that can accomplish a specific goal with limited supervision. The NHS and social care are preparing for large-scale adoption of AI tools, including technology to support diagnosis, automate administrative tasks, predict demand for services, and provide ambient voice capabilities such as note-taking. TrustX provides a rigorous system for validating the reliability, alignment, and safety of these autonomous systems, ensuring that clinicians, patients, and regulators can trust how they operate.
TrustX introduces a visible “trusted AI technology” badge, jointly enabled by the Responsible AI Institute and Health Innovation KSS, one of 15 health innovation networks across England established by NHS England in 2013 to spread innovation at pace and scale, improving health and generating economic growth. The badge gives professionals and the public confidence that an AI agent, a software system that can act on its own to achieve goals or complete tasks for a user, has been independently verified and is monitored over time for safety, accuracy, and alignment.
While initial deployments will focus on non-clinical use cases, TrustX is also designed to support the growing demand for AI in clinical decision-making, in improving care for groups of patients, and in detecting disease earlier to strengthen prevention. Its verification architecture enables continuous monitoring, re-evaluation, and badge renewal so that systems remain safe as they evolve.
A group of pioneering founders from leading UK health tech companies is actively shaping the design of TrustX Health. These organisations are already demonstrating how AI can be deployed safely and responsibly in real-world settings. Contributing founders include Dr Dom Pimenta of TORTUS, Dr Haris Shuaib of Newtons Tree, Carmelo Insalaco of Rapid Health, Dr John Jeans of CLEARnotes, Amna Askari and Rachel Finegold of Frontier Health AI, and Seb Barker of Magic Notes (Beam). They bring practical insight and innovation to ensure the initiative meets the needs of clinicians, patients, and the wider health system.
Why TrustX Health is needed now
Agentic AI offers powerful automation, but it also brings risks, including bias, drift in behaviour over time, errors, and misinformation. These risks are amplified in high-stakes environments such as health and care.
TrustX addresses these challenges with an NHS-embedded approach that evaluates how AI agents behave in real-world situations, how they interact with other existing technologies and data sources, and how they may change over time. This initiative creates the governance and technical foundations needed for safe, large-scale adoption across the NHS and social care.
What TrustX will deliver
- A front door for AI agent deployment across the NHS and social care, including:
  - Scoring and verification of existing Agentic AI systems
  - Skunkworks evaluation to create a collaborative space to determine which NHS and social care problems are appropriate for Agentic AI
  - Support to build new AI agents or improve them to mitigate risks
  - Real-world evaluation against productivity and cost-effectiveness metrics
- An open-source Agentic AI Trust Score to accelerate transparency and adoption.
- A national collaboration environment to assemble leaders from the NHS, academia, industry, civil service, and research institutes.
- Partnerships with NHS providers and social care sites to test live Agentic AI deployments, beginning with early work already underway at Sussex Partnership Foundation Trust.
- A sustainable and flexible funding structure, combining different funding approaches to support organisations ranging from startups to large suppliers, as well as the NHS and social care.
Pilot investments will support joint clinical and operational fellows, postdoctoral researchers, and research assistants, with roles expected to expand across NHS innovation labs, social care innovators and partner institutions.
A new benchmark for safe AI in health and care
The government’s 10 Year Health Plan commits to widespread AI deployment, deeper digital integration, and a shift toward prevention. Achieving this requires safe, trustworthy, and auditable AI systems that work reliably in complex environments and evolve responsibly over time.
TrustX offers an assurance pathway that reduces risk for NHS and social care organisations adopting Agentic AI across clinical and operational pathways. It sets a new global benchmark for responsible AI deployment in health and care.
The aspiration for TrustX is to inform and support the development of a shared ecosystem for safe experimentation, rapid learning, and scalable adoption across the NHS.
Launch event
TrustX will formally launch at the University of Cambridge on the evening of 9 December 2025, bringing together leaders from government, NHS England, academia, life sciences and pioneering AI companies. The agenda includes a keynote from David More, who will share reflections from the financial services industry; panels with the pioneer founders on safe Agentic AI deployment and on supporting the workforce; and a live demonstration of the Trust Score and its application to an Agentic AI system. The community will then discuss next steps for the further development of the Trust Score and its technical paper.
Responsible AI Institute
The Responsible AI Institute is an independent non-profit organisation dedicated to advancing responsible AI adoption. Since 2016, it has partnered with governments, industry, and academia to develop AI governance frameworks, verification tools, and benchmarking standards. The Institute supports organisations worldwide through its trust scoring, auditing, and Agentic AI verification programs that strengthen transparency, accountability, and safe deployment.
Health Innovation Kent Surrey Sussex
Health Innovation Kent Surrey Sussex (KSS) is the health innovation network for the Kent and Medway, Surrey and Sussex regions. There are 15 health innovation networks across England, established by NHS England in 2013. Health Innovation KSS supports health and social care teams to find, test and implement evidence-based solutions to the NHS’s greatest challenges, driving economic growth for the region, supporting innovators and improving the lives of local people.
Trustworthy Artificial Intelligence Lab, University of Cambridge
The Trustworthy Artificial Intelligence Lab (TRACE) is a leading research group within the University of Cambridge focused on building AI systems that are reliable, transparent, and suitable for high-stakes environments. The lab combines machine learning, human-computer interaction, and social science to study how AI should be designed, evaluated, and integrated into real-world decision-making.
The King’s Fund
The King’s Fund is an independent charity working to improve people’s health. Our vision is a world where everyone can live a healthy life. Our mission is to inspire hope and build confidence for positive change. We achieve this through expert insights and original research, developing leaders and their organisations, convening, and strategic, collaborative partnerships.
