Responsible AI Institute’s Newest Member: CalypsoAI!

We are excited to announce our newest member: CalypsoAI!

CalypsoAI’s mission is to accelerate trust in AI through independent testing and validation. Their solution, VESPR Validate, is a market-leading tool for ensuring the safe deployment of AI through testing, evaluating, validating, and verifying AI/ML models. It gives decision-makers the means to build trust in their systems’ performance and to ensure the success of their AI strategy, significantly reducing the risk, time, and money spent deploying AI/ML models into live environments.

In 2022, CalypsoAI was named a Gartner® Cool Vendor in AI Core Technologies: Scaling AI in the Enterprise. Their independent, industry-leading AI/ML model testing and auditable human-in-the-loop decision-making offerings allow for safe and responsible AI deployment. The platform creates tests and perturbations that benchmark model performance against corruption, meaning MLOps teams do not test models against their own data. This is a critical step in developing AI and ML, and a key accelerator toward the end result: trustworthy and responsible AI.

Their solution achieves:

  1. Better decision-making, empowering teams to make informed decisions on model deployment, retraining, and more.
  2. Benchmarking of model performance under degraded conditions, giving visibility into behavior under corruptions, perturbations, adversarial attacks, and more.
  3. Performance testing that compares a model’s predictions against what it should have predicted.
  4. Internal and external evaluation of AI/ML models according to a model risk management strategy.
  5. Stakeholder engagement through easily understandable, jargon-free language.
  6. Repeatable, automated workflows, enabling teams to quickly benchmark current model performance and identify further training needs after deployment.

When asked about becoming a RAI Institute member, Neil Serebryany, CEO of CalypsoAI, said, “Rigorous testing and security of AI/ML models throughout their lifecycle is an integral element of responsible AI. It is CalypsoAI’s mission to accelerate trust in AI through the development of solutions that empower decision-makers to ensure the security and validation of machine learning models. We are thrilled to be joining the Responsible AI Institute and to have the opportunity to work with this community focused on furthering responsible and secure AI.”

Our team is excited to work with CalypsoAI to encourage responsible AI practices by making these processes more accessible.

To learn more about RAI Institute’s membership, check out our website.
