Responsible AI Institute Announces Participation in Department of Commerce Consortium Dedicated to AI Safety

Responsible AI Institute will be one of more than 200 leading AI stakeholders to help advance the development and deployment of safe, trustworthy AI under new U.S. Government safety institute

(Austin, TX) – Today, Responsible AI Institute announced that it joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

“The Responsible AI Institute welcomes the formation of the NIST AI Safety Institute Consortium (AISIC). NIST has been a key source of authoritative responsible AI guidance. Given the pace of change in technology, AI adoption and global political and regulatory developments, it is urgently necessary to put in place guardrails for safe and responsible AI adoption. We look forward to contributing to the AISIC’s efforts in this regard,” said Var Shankar, Executive Director of the Responsible AI Institute.

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The consortium includes more than 200 member companies and organizations that are on the frontlines of developing and using AI systems, as well as the civil society and academic teams that are building the foundational understanding of how AI can and will transform our society. These entities represent the nation’s largest companies and its innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in AI’s use today. The consortium also includes state and local governments and non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective safety standards around the world.

The full list of consortium participants is available here.

You can read the full press announcement from the U.S. Department of Commerce here. 

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, Shell, Chevron, Roche and many others dedicated to bringing responsible AI to all industry sectors. 

Media Contact

Nicole McCaffrey

Head of Marketing, RAI Institute

[email protected] 

+1 (440) 785-3588

Follow RAI Institute on Social Media 

LinkedIn 

X (formerly Twitter)

Slack

