Responsible AI Institute Forms Inaugural Responsible Generative AI Consortium

Consortia of Corporations, Technology Providers and University Experts to Collaboratively Address AI Safety and Alignment Concerns in Industries Including Healthcare, Media and Financial Services

June 28, 2023 03:00 AM Eastern Daylight Time

AUSTIN, Texas & LONDON–(BUSINESS WIRE)–Responsible AI Institute (RAI Institute), the leading non-profit building AI Assessments and a Certification program dedicated to converting responsible AI principles into practice, today announced the launch of the first Responsible AI Consortium in a series of consortia comprising leading corporations, technology providers and experts from global universities. This inaugural consortium, focused on healthcare, aims to accelerate the responsible development and use of generative AI technologies through collective learning, experimentation and policy advocacy. It is built around a unique, hands-on responsible generative AI testbed that enables members to actively experiment with and refine responsible generative technologies in a real-world healthcare context.

A diverse group of distinguished experts from the NHS, Harvard Business School, the Turing Institute and St Edmund’s College at the University of Cambridge, together with industry partners including Trustwise, has enabled unique knowledge-sharing across the AI value chain, spanning academia, policymakers, investors and healthcare providers. Learnings from the Consortium will be unveiled at today’s Symposium on Responsible Generative AI in Healthcare hosted at St Edmund’s College, University of Cambridge.

“We are in the middle of rapid advancements and adoption of generative AI and navigating the responsible AI landscape is proving to be a formidable challenge for all,” said Manoj Saxena, Founder and Chairman at RAI Institute. “Now, more than ever, we need to work together to make AI safe and aligned with human values. The creation of our Responsible Generative AI Consortium, with its practical testbeds and GenAI Safety Ratings, is a vital step towards our mission of helping AI practitioners to build, buy and supply safe and trusted AI systems.”

The Healthcare Responsible GenAI Consortium: Shaping Healthier Futures with Trustworthy AI

The Responsible GenAI Consortium in Healthcare is the first in a series of industry-specific consortia and GenAI testbeds to be launched by The Responsible AI Institute, with others being planned for later this year and next. The testbeds will incorporate the new Generative AI Safety Rating — a scoring system that grades the safety and reliability of generative AI systems, providing organizations, AI developers, policymakers, investors and other stakeholders a clear measure of system performance, fairness and compliance with rules and regulations.

The rating is based on the evaluation of various criteria and metrics including but not limited to bias detection, model hallucination, IP and privacy protection, transparency and accountability. Similar to the FICO credit score model, the numerical representation of the Generative AI Safety Rating will help guide improvements, enable progress tracking and foster a culture of responsible AI development and deployment.
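For illustration only: the announcement does not specify how the Generative AI Safety Rating is calculated. The short Python sketch below shows one hypothetical way per-criterion scores could be rolled up into a single FICO-style number. The criterion names come from the paragraph above, while the weights, the 300-850 band and the safety_rating function are assumptions made for this example, not the Institute's published methodology.

# Hypothetical illustration; not RAI Institute's published rating methodology.
# Criteria named in the announcement; weights are invented for this sketch.
CRITERIA_WEIGHTS = {
    "bias_detection": 0.25,
    "model_hallucination": 0.25,
    "ip_and_privacy_protection": 0.20,
    "transparency": 0.15,
    "accountability": 0.15,
}

RATING_MIN, RATING_MAX = 300, 850  # FICO-like band, assumed for illustration


def safety_rating(criterion_scores: dict) -> int:
    """Combine per-criterion scores (0.0 to 1.0) into one numeric rating."""
    weighted = sum(
        CRITERIA_WEIGHTS[name] * criterion_scores.get(name, 0.0)
        for name in CRITERIA_WEIGHTS
    )
    return round(RATING_MIN + weighted * (RATING_MAX - RATING_MIN))


# Example: a system strong on bias and privacy controls, weaker on hallucination.
print(safety_rating({
    "bias_detection": 0.9,
    "model_hallucination": 0.4,
    "ip_and_privacy_protection": 0.8,
    "transparency": 0.7,
    "accountability": 0.75,
}))  # prints a single score within the assumed 300-850 band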

The Responsible Generative AI testbeds are an integral part of Responsible AI Institute’s broader mission to promote the adoption of trustworthy AI by driving the following impacts:

  • Educate: The Responsible AI Consortium will serve as a hub for knowledge sharing and resource pooling, allowing individuals and organizations to learn from one another. Activities will include hosting workshops, conferences and webinars, as well as developing educational resources. It will also facilitate cutting-edge research, the sharing of case studies and executive education programs in the responsible adoption of generative AI.
  • Innovate: By providing a live, open generative AI testbed with independent, standards-aligned Generative AI Safety Ratings for organizations and individuals, the consortium will offer a more robust and diverse testing ground for new ideas and experiments in the field of generative AI. The consortium will enable corporations, researchers, policymakers, investors and individuals to work together on novel generative AI use cases and will facilitate access to data sets, computational resources, open-source communities and testing platforms.
  • Advocate: The consortium will provide expert insights to policymakers, regulators and investors, helping them make informed decisions about laws and shape regulations that both promote responsible use of AI and are conducive to sustainable AI innovation. It will raise awareness about responsible generative AI at all levels, from grassroots community organizations to national and international policy forums. It will also create informational campaigns, engage media and policymakers, and act as a unified voice for its members, amplifying their concerns and suggestions to policymakers and sustainability-focused investors in public debates.

Building Towards a Future Where Responsible AI Becomes the Norm Across All Industries

As AI continues to evolve, it is becoming imperative to shift from an algorithm-centric AI design mindset to one that aligns with and fulfills human values and sustainability goals. The Responsible AI Consortium and its testbeds will play a critical role in this evolution from foundation models toward human-centric AI design by providing an interactive learning, experimentation and advocacy environment where Consortium participants can collaborate and apply the principles of responsible AI to real-world use cases and problems.

With its upcoming series of responsible AI consortia and companion testbeds, RAI Institute and its partners hope to establish and distribute useful roadmaps, architectures, white papers and tools to provide a range of benefits for key roles across industries such as:

  • Enterprise AI system developers can leverage these best practices to deploy generative AI models safely and responsibly in production while minimizing model errors and data commingling. Developers will be empowered to create more accurate and efficient models without compromising safety and compliance, and to make better-informed decisions about developing, deploying and scoring the safety of generative AI models.
  • Policymakers can manage generative AI and its surrounding policies more efficiently to ensure the protection of consumers and promote the responsible use of AI. The testbeds also offer a platform on which they can collaborate with other experts in their respective fields as well as share knowledge and best practices.
  • Technology vendors, platforms and tool providers can become members of the consortium and access the testbeds to experiment, demo, test and validate their software with generative AI use cases and access assessments to support their products and offerings.
  • The general public can learn about the benefits and potential drawbacks of generative AI and provide user feedback that will help shape the future development of generative AI. This will also provide a platform for people to participate in public discussions about the use of generative AI as well as advocate for and support regulations that prioritize transparency, accountability and equity.

Today’s Symposium on Responsible Generative AI in Healthcare features multiple panels with experts from leading global healthcare and life sciences companies, academia, venture and private equity, and technology vendors discussing today’s most pressing generative AI-related challenges. These include:

  • Role of ethics, accountability, and leadership in navigating the generative AI era
  • Lessons from the frontlines in putting trustworthy generative AI to work
  • Strategies for leadership development and capacity building
  • Business model and monetization strategies for responsible commercialization of generative AI

Supporting Quotes

St Edmund’s College, University of Cambridge “St Edmund’s is glad to welcome influencers from a broad range of sectors to the College for the first in a series of events hosted by the Responsible AI Institute that will bring together leading experts and impact the future of responsible AI,” said Catherine Arnold, Master of St Edmund’s College. “This is a unique opportunity for the highly diverse, international and experienced student community within the College to engage with this topic in a way that helps empower them to be global leaders of the future.”

Harvard Business School “Generative AI brings a unique set of promises and perils, and it’s advancing faster than previous AI technologies. As its development progresses, there is a pressing need for AI systems that empower human beings and promote equity,” said Dr. Satish Tadikonda, Senior Lecturer of Entrepreneurial Management at Harvard Business School. “The Responsible AI Consortium was created to allow everyone — employees, companies, individual consumers and even society at large — to be able to trust and scale AI with confidence and take ownership in shaping the future of responsible AI.”

NHS “The responsible use of AI in the healthcare industry has immense potential to improve human well-being, accelerate scientific discoveries and transform healthcare as we know it for patients and providers,” said Dr. Hatim Abdulhussein, a practicing primary care physician, National Clinical Lead for AI and Digital Workforce at NHS England, and Medical Director of the Kent Surrey Sussex Academic Health Science Network. “It is vital to work collaboratively across disciplines, and engaging with this Consortium is necessary to understand the guardrails to support the development of safe and ethical AI that will build confidence in this technology for both the healthcare workforce and patients.”

Trustwise “At Trustwise, our top priority is to empower businesses to rapidly, safely and responsibly innovate with large language models and generative AI without putting their organization or customers at risk. We’ve joined forces with RAI Institute to be a part of the first-ever Responsible AI Consortium and look forward to ushering in a new era of responsible generative AI across industries worldwide,” said Seth Dobrin, CEO at Trustwise.

Lyric “AI innovation offers unprecedented opportunities to simplify the intricacies and complexities within the healthcare space,” said Rajeev Ronanki, CEO at Lyric. “As a healthcare technology company centered around the core value of transparency, we are proud to take part in fostering a community that ensures safe and responsible adoption of generative AI — by businesses and society — across the healthcare landscape.”

About Responsible AI Institute

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of AI products. Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contacts

Audrey Briers

Bhava Communications for RAI Institute

[email protected]

+1 (858) 314-9208

Alyssa Lefaivre Škopac

Acting Executive Director of RAI Institute

[email protected]

+1 (780) 237 5977
