Looking to Better Understand RAII’s Certification Work?

As the need for responsible and trustworthy AI literacy becomes more apparent, so does the need for public educational resources. One important initiative, The GovLab's "AI Ethics: Global Perspectives" course, publicly launched in February 2022 to address that need. It features a series of pre-recorded modules and monthly panels by its lecturers, complete with live Q&A sessions. Since the launch, dozens of modules tackling diverse issues from contributors around the world have been added, including a module by RAII's Executive Director, Ashley Casovan.

Casovan’s module, Implementing Responsible AI, “introduces the audience to the concept of responsible AI and the many factors that play a role in the design, development, deployment, and use of responsible AI systems.” Furthermore, the Implementing Responsible AI module breaks down “key challenges and explains how different stakeholders may take different approaches in their mitigation strategies.”

Ashley also outlines the work we are doing at RAII to implement responsible AI and build a safer future for AI systems.

We invite you to watch Ashley’s course module, as well as modules from the other lecturers here.
