Towards Responsible AI in Employment: Insights from Our Employment Working Group

Responsible AI Institute - Employment Working Group

An update from the RAI Employment Working Group on context-specific assessment development

AI tools are widely used in corporate environments today to support recruitment and employment strategies. Automated tools have the potential to improve upon and support human evaluators in many ways. Yet without proper oversight, AI employment use cases can pose significant risks to candidates and employers alike. When organizations adopt a responsible AI (RAI) framework, automated tools can effectively support decisions on recruitment, hiring, and promotion, enabling holistic, data-driven evaluations that prioritize fairness and effectiveness.

Our Responsible AI Employment Working Group (EWG) aims to drive the integration of responsible AI practices into a spectrum of use cases. Our working group assembles the foremost authorities in AI for Employment to collaborate on refining and validating our tailored, context-sensitive assessments for a variety of employment-related applications, including candidate video evaluations, job-matching algorithms, and strategic workforce planning. 

This article details the working group’s progress over the last several months in defining real-world guardrails for organizations as they develop, deploy, or use video-based AI assessment tools in hiring and employee evaluations.

Background 

The Responsible AI Institute facilitates specialized working groups composed of AI experts, drawing from a distinguished network spanning academia, industry, government, and civil society. Our working groups significantly influence the RAI Institute’s sector-specific initiatives and serve as a crucial feedback mechanism, advancing community-led responsible AI practices grounded in specific use cases.

Integrating independent voices and leveraging community feedback is fundamental to our work at the Responsible AI Institute. This approach not only enriches our assessment development but also ensures that our initiatives are responsive to the multifaceted concerns surrounding AI in various contexts. 

Following the development of guidelines for the use of AI in employment, the RAI Institute convened domain experts and stakeholders in 2022 to test the guidelines and the RAI Framework end-to-end on a real use case. The group explored how the RAI Institute’s RAI Framework can be applied to AI applications intended to assess job candidates and employees based on video and audio data.

Members of the group included domain experts from a variety of organizations, with expertise spanning artificial intelligence research and development, employment software, hiring, auditing, and data ethics. These members included:

Co-Chairs 

Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday 

Matissa Hollister, Assistant Professor, Organizational Behaviour, McGill University

Experts and Participants 

Shared Objectives

The overall goal of the Working Group is to advance AI built and used “the right way” by identifying and documenting possible harms and risks that arise from the development and deployment of AI systems, in this case, AI-powered workforce assessment systems.

To support this vision, the working group’s goals included:

  1. The development of a comprehensive assessment package for evaluating a specific use case – video-based AI-enabled systems used in hiring and employee evaluation – based on a calibration of the RAI Institute’s standard RAI System and Organizational Assessments.
  2. The curation and development of RAI best practices relevant to the use case by building on existing regulations, policies, standards, and industry techniques.

As a theory of change, the group’s position is that strong adherence to agreed-upon responsible AI standards and processes across industries will improve AI products and provide a clear path to responsible AI, leading to fewer harms overall. For this reason, the working group is grounded in practical analysis and focused on specific case studies, assessments, certifications, and guardrail protocols to support responsible AI adoption.
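To make the notion of a guardrail concrete, the sketch below shows one widely used statistical check in employment selection: the four-fifths (adverse impact) rule of thumb long used by U.S. regulators. This is an illustrative example only, not part of the working group’s assessment package, and the groups and outcomes in it are hypothetical.

```python
# Illustrative sketch: adverse-impact ratio ("four-fifths rule") check,
# a common statistical guardrail in employment selection analysis.
# Group labels and selection outcomes below are hypothetical.

from collections import Counter

def selection_rates(candidates):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(candidates):
    """Ratio of each group's selection rate to the highest group's rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact and warrants closer review.
    """
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, passed_video_assessment)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A ratio below 0.8 does not by itself establish discrimination, but it is a common trigger for closer human review of a selection tool, which is the sense in which such checks function as guardrails.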

Key outcomes of the group thus far include:

  1. Identifying key issues related to the use of AI systems in various employment use cases, present and future.
  2. Mapping mitigation strategies and working through the types of requirements needed to support the RAI Institute’s assessments when used in automated employment contexts, building on lessons from the RAI Institute’s landmark certification pilot.
  3. Validating the RAI assessment with real-world learnings from subject matter experts and aligning the Employment calibration closely to the NIST AI Risk Management Framework and ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system); a hypothetical example of such a mapping appears below.
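As an illustration of what such alignment can look like in practice, the sketch below shows one way a calibrated assessment item might carry cross-references to the four NIST AI RMF functions (Govern, Map, Measure, Manage) and to ISO/IEC 42001 clauses. The identifier, question text, and mappings are invented for illustration and are not drawn from the working group’s actual assessment.

```python
# Hypothetical sketch of how a calibrated assessment item could carry
# cross-references to external frameworks. The item_id, question, and
# mappings below are illustrative only.

from dataclasses import dataclass

@dataclass
class AssessmentItem:
    """One calibrated assessment item with framework cross-references."""
    item_id: str
    question: str
    use_case: str
    nist_ai_rmf_functions: list  # subset of: Govern, Map, Measure, Manage
    iso_42001_clauses: list

item = AssessmentItem(
    item_id="EMP-VID-007",  # hypothetical identifier
    question=("Has the video assessment model been evaluated for "
              "performance differences across demographic groups?"),
    use_case="video-based candidate evaluation",
    nist_ai_rmf_functions=["Measure", "Manage"],
    iso_42001_clauses=["8 Operation", "9 Performance evaluation"],
)

# A simple coverage check: every item should map to at least one
# element in each target framework before the calibration is complete.
assert item.nist_ai_rmf_functions and item.iso_42001_clauses
print(f"{item.item_id} -> NIST AI RMF: {item.nist_ai_rmf_functions}, "
      f"ISO/IEC 42001: {item.iso_42001_clauses}")
```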

Next Steps

Following the wrap-up of this successful use case exploration process, the working group will endeavor to provide public guidance on the use of AI in automated employment systems.

The RAI Institute thanks each member for their contributions to our work, which ensure our methodology remains at the cutting edge of the field, and for their broader and vital thought leadership. The RAI Institute is privileged and grateful to exchange ideas with our esteemed community of experts and looks forward to continued collaboration.

Get Involved

The Responsible AI Institute is always interested in collaborating with subject matter experts and community members in the ecosystem of trustworthy AI. If you are passionate about contributing your expertise to our work and advancing responsible AI in a particular sector, contact Amanda Lawson, AI Policy Manager, at [email protected].

Become a Member - Responsible AI Institute

About Responsible AI Institute

The RAI Institute is focused on providing tools for organizations and AI practitioners to build, buy, and supply safe and trusted AI systems, including generative AI systems. Our offerings provide assurance that AI systems are aligned with existing and emerging internal policies, regulations, laws, best practices, and standards for the responsible use of technology.

By promoting responsible AI practices, we can minimize the risks of this exciting technology and ensure that the benefits of generative AI are fully harnessed and shared by all.

Media Contact

For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement, at [email protected] or +1 440.785.3588.

