As generative AI charges ahead, it presents both challenges and opportunities across sectors. Its consequences are especially pronounced in healthcare, where patient wellbeing is at stake.
The Responsible AI Institute (RAI Institute) collaborated with Harvard Business School’s Digital Data Design (D^3) Institute to explore generative AI in healthcare and chart the path ahead. Through five sessions, the “GenAI in Healthcare” series convened healthcare providers and healthtech experts to discuss the complexities of designing, implementing, and evaluating AI systems across a range of healthcare applications. The series culminated in a capstone session focused on skin cancer detection as a use case. These discussions with cross-disciplinary experts underscored the necessity of responsible AI in healthcare, offering critical insights relevant across sectors.
Session 1: Realizing Healthcare’s AI Potential
We launched the series by assessing the current landscape of AI in healthcare and providing an overview of responsible AI in this context. An audience poll showed that most organizations were early in their AI journeys – an opportunity to incorporate responsible AI practices from the start. The session emphasized the critical importance of managing risks such as biased data, unclear assignment of responsibility, and leaks of personal information. With generative AI accelerating the development and deployment of new systems, organizations must establish clear, actionable frameworks, tools, and processes to minimize risk.
Session 2: Scaling AI in Biopharma
The next three sessions focused on specific healthcare settings. Our second session brought together leaders from five organizations across the biopharma ecosystem. We learned that AI is already employed across the biopharma value chain, from R&D to business development. AI can make clinical trials markedly more efficient and allow trials and studies to be customized; this level of control over design can help mitigate bias and enable trials that require very specific conditions. AI can also facilitate otherwise impossible breakthroughs and streamline medical writing and regulatory filings. However, its processes must be sound, reliable, and transparent, with valid and reproducible results.
Session 3: Scaling AI in Digital Health
Our third session emphasized that AI can improve both patient and provider experiences. For example, AI can provide responsive, user-friendly interfaces that reduce communication barriers between providers, insurers, and patients. It can optimize administrative functions, with the potential to reduce costs and improve treatment quality and operational efficiency. Yet its risks and limitations warrant heightened transparency and explicit assignment of accountability, as well as guardrails to maintain compliance. This requires careful attention to data access and to partnerships between technology and healthcare organizations as they navigate the evolving landscape together.
Session 4: Scaling AI for Hospitals and Healthcare Providers
In the fourth session, the series turned to provider perspectives. AI systems can help providers augment or substitute for certain human activities, making their operations more efficient and effective. Technological innovations may relieve overburdened providers and staff and improve patient experiences. For example, AI might be deployed to synthesize complex information for patient-facing employees, improving communication between doctors, staff, and patients. Providers and technologists must work together to design and implement AI systems that benefit providers and patients without neglecting either perspective. Throughout development and scaling, stakeholders must establish safeguards to ensure patients receive accurate information, improving services while maintaining human oversight at each step.
Session 5: Lessons in Applying Responsible AI
For the capstone session, the RAI Institute and D^3 teams tied together lessons from the series and applied them to a case study on skin cancer detection. Through interviews with medical experts, we learned how AI has supported skin lesion analysis. The experts emphasized the need to maximize accuracy and to communicate clearly with patients about how AI systems are used and how their results should be interpreted.
Cross-sector Lessons
By diving into the healthcare context, where accuracy, privacy, and reliability are paramount, the series illuminated the need for organizations and AI professionals to diligently maintain responsible AI practices. With patient wellbeing on the line, healthcare applications spotlight the risks, limitations, and opportunities of AI in a particularly high-stakes environment.
Yet these principles apply across sectors. Every organization deploying generative AI must guard against leaks of sensitive data. Every organization must understand its systems and ensure their results are actionable, reproducible, understandable, and adaptable. Leaders must be prepared to communicate the nuances of their training data, models, and results to a wide range of audiences – AI beginners and experts alike – including regulators, customers, and shareholders. Regardless of the setting, organizations must fastidiously uphold responsible AI practices.
Healthcare organizations and AI professionals have a responsibility to navigate this landscape with care, ensuring that AI systems are accurate, reliable, transparent, and accountable. By establishing clear frameworks, tools, and processes for responsible AI development and deployment, we can harness the power of generative AI to transform healthcare while mitigating risks and maintaining patient trust.
Join us in this critical conversation and explore how your organization can prepare for the future of AI in healthcare. Insights and recordings of the entire “GenAI in Healthcare” series are available here.
Thank you to our expert panelists and interviewees:
Anna Marie Wagner, SVP Head of AI, Ginkgo Bioworks
Abraham Heifets, CEO, Atomwise
Michael Nally, CEO, Generate Biomedicines
Andrew Kress, CEO, HealthVerity
Stéphane Bancel, CEO, Moderna Therapeutics
Payal Agrawal Divakaran, Partner, .406 Ventures
Reena Pande, Physician Leader in Digital Health
Andrew Le, CEO and Co-Founder, Buoy Health
Marc Succi, Associate Chair of Innovation, Mass General Brigham
Alexandre Momeni, Partner, General Catalyst
Frederik Bay, General Manager, Healthcare, Adobe
Timothy Driscoll, Senior Director, Technology Strategy & Innovation, Boston Children’s Hospital
Veronica Rotemberg, Director, Dermatology Imaging Informatics Group, Memorial Sloan Kettering Cancer Center
Rakesh Joshi, Lead Data Scientist, Skinopathy
Manoj Saxena, Founder and Chairman, Responsible AI Institute
Var Shankar, Executive Director, Responsible AI Institute
Alyssa Lefaivre Škopac, Head of Global Partnerships & Growth, Responsible AI Institute
Sabrina Shih, AI Policy Analyst, Responsible AI Institute
Thank you to our collaborators in organizing this series:
Satish Tadikonda, Senior Lecturer, Harvard Business School
Nikhil Bhojwani, Managing Partner, Recon Strategy
Kelsey Burhans, Program Director, Harvard Business School (D^3)
About the Responsible AI Institute
Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, Responsible AI Institute
nicole@responsible.ai
+1 (440) 785-3588