Responsible AI Institute’s Newest Member: CalypsoAI!

We are excited to announce our newest member: CalypsoAI!

CalypsoAI’s mission is to accelerate trust in AI through independent testing and validation. Their solution, VESPR Validate, is a market-leading tool that ensures the safe deployment of AI by testing, evaluating, validating, and verifying AI/ML models. It gives decision-makers the means to build trust in system performance and ensure the success of their AI strategy, significantly reducing the risk, time, and money spent deploying AI/ML models into live environments.

In 2022, CalypsoAI was named a Gartner® Cool Vendor in AI Core Technologies: Scaling AI in the Enterprise. Their independent, industry-leading AI/ML model testing and auditable human-in-the-loop decision-making enable safe and responsible AI deployment. By creating tests and perturbations that benchmark model performance against corruption, CalypsoAI ensures that MLOps teams do not test models against their own data. This is a critical step in developing AI and ML, and a key accelerator toward the end goal: trustworthy and responsible AI.

Their solution achieves:

  1. Better-informed decision-making, empowering teams to make decisions on model deployment, retraining, and more.
  2. Benchmarking of model performance under degraded conditions, giving visibility into how models behave under corruptions, perturbations, adversarial attacks, and more.
  3. Performance testing that compares a model’s predictions to what it should have predicted.
  4. Internal and external evaluation of AI/ML models according to a model risk management strategy.
  5. Stakeholder engagement through easily understandable, jargon-free language.
  6. Repeatable, automated testing, enabling teams to quickly benchmark current model performance and identify further training needs after deployment.

When asked about becoming a RAI Institute member, Neil Serebryany, CEO of CalypsoAI, said, “Rigorous testing and security of AI/ML models throughout their lifecycle is an integral element of responsible AI. It is CalypsoAI’s mission to accelerate trust in AI through the development of solutions that empower decision-makers to ensure the security and validation of machine learning models. We are thrilled to be joining the Responsible AI Institute and have the opportunity to work with this community focused on furthering responsible and secure AI.”

Our team is excited to work with CalypsoAI to encourage responsible AI practices by making these processes more accessible.

To learn more about RAI Institute’s membership, check out our website.
