Co-authored by Hadassah Drukarch and Monika Viktorova
As artificial intelligence (AI) systems become embedded into critical areas of our lives, the ethical implications and societal impacts of these technologies demand careful attention. Ensuring responsible AI governance has become a priority for AI practitioners, developers, policymakers, and business leaders alike. Independent review is emerging as a vital tool in this effort, offering a robust framework to safeguard human rights, promote ethical decision-making, and maintain transparency across the AI lifecycle. In the new guide Operationalizing Independent Review in AI Governance: A Guide for AI Practitioners, we outline actionable steps to help practitioners effectively implement independent review as part of their AI governance strategy.
Why Independent Review Matters in AI
Independent review is a flexible and impartial governance tool adaptable to all stages of AI development. Originating from bioethics, where Institutional Review Boards (IRBs) oversee research ethics, this approach has successfully guided complex fields like clinical research, ensuring that ethical standards are prioritized and risks mitigated before projects reach and impact the public. In the realm of AI, independent review addresses the dynamic nature and multifaceted impacts of emerging technologies by assembling an impartial committee of experts in ethics, technical feasibility, regulatory compliance, and societal impact. This committee scrutinizes AI projects from diverse perspectives, helping organizations evaluate the technical, ethical, and resource adequacy of their projects from inception through deployment and monitoring. By embedding independent review early in the AI lifecycle, organizations can proactively address trust and safety issues alongside broader societal concerns, align projects with ethical standards, and reduce the need for costly or ineffective post-deployment fixes.
Implementing independent review within an organization has far-reaching benefits beyond simply helping projects meet internal organizational standards and regulatory requirements. This approach builds trust with stakeholders by demonstrating a commitment to ethical oversight and societal well-being. As such, independent review acts as a safeguard against the risks of “moving fast and breaking things,” emphasizing deliberation, transparency, and accountability in the pursuit of innovation. Moreover, independent review improves project outcomes by encouraging practitioners to consider downstream risks and their mitigation strategies at every stage. Integrating review results into project management ensures risk mitigation is iterative and collaborative while fostering a culture where ethical considerations are intrinsic to AI development. As organizations adopt independent review as a standard practice, it can elevate industry standards and promote responsible AI development and deployment.
Lessons from Independent Review Practices in Big Tech
In recent years, several prominent tech companies have experimented with ethics review boards to oversee the development of AI technologies. While these initiatives highlight the promise of independent review as a governance tool, they have also faced significant challenges, revealing areas for improvement and adaptation in tech environments.
In 2019, Google launched the Advanced Technology External Advisory Council (ATEAC), designed to provide oversight and guidance on the responsible development of AI projects within Google. However, the council was disbanded just over a week after its launch amid public criticism of the board's composition and a lack of transparency. The council's failure highlights two key challenges: the need for transparency in board member selection and a clear governance structure that grants the board authority. Without these foundational elements, even well-intentioned review boards may lack the credibility and impact required to make a meaningful difference.
Another example is Meta's Oversight Board, which reviews content moderation decisions on Meta's platforms, signaling the company's commitment to ethical governance and accountability. The board has demonstrated potential as a scalable review framework, making notable decisions on high-profile cases. However, it has also faced criticism over its limited scope and questions about how much influence its decisions actually carry. This case underscores the need for genuine independence and an expanded decision-making scope if review bodies are to foster public trust effectively.
Both cases reveal that establishing effective independent review practices requires clear structures, transparent processes, and actual decision-making power. By addressing these aspects, organizations can build review boards that truly contribute to responsible AI development.
Operationalizing Independent Review in AI Governance
The guide provides a structured approach to embedding independent review, starting with the basics of assembling a diverse Independent Review Committee (IRC). Key elements for creating an effective independent review framework include:
Impartiality: To minimize conflicts of interest, review teams should operate independently from project leadership. This includes setting up ‘sandboxed’ environments for review discussions, where decisions are made based solely on the project’s inherent ethical, technical, and societal merits.
Accountability: Binding authority is essential. IRCs must be empowered to make recommendations that influence project decisions, budgets, and timelines. Integrating independent review outcomes into internal gating processes ensures that recommendations are respected and acted upon.
Effective Consultation: The IRC should include technical experts, ethics professionals, community advocates, and legal advisors to provide a well-rounded evaluation, with expertise tailored to the needs of each project. Input from these varied perspectives allows for a thorough examination of a project's technical and resource feasibility, ethical soundness, and alignment with societal values.
Administration: Appointing an administrative lead for the review process helps coordinate timelines and communications, ensuring consistent adherence to review standards. Reassessments are also necessary when project scope or design changes, keeping the review process aligned with evolving project needs.
For AI practitioners, adopting and operationalizing independent review as a sustainable part of their organization’s AI governance involves a number of practical steps:
- Embed Independent Review Early: Introduce independent review at the ideation or procurement planning stages, allowing for adjustments or even project halts if necessary.
- Define Clear Review Standards: Develop standardized submission guidelines and review procedures to ensure consistency and transparency. Document deliberations and decisions to promote continuous improvement and retain institutional knowledge.
- Involve Community Advocates: Including representatives from impacted communities provides invaluable insights into societal impacts, especially on vulnerable groups.
- Utilize Review Outcomes as Gating Criteria: Tie independent review outcomes to project approvals, budgets, and timelines to reinforce the board's authority (a minimal sketch of such a gate follows this list).
- Promote Independent Review as a Cultural Value: Advocate for independent review within your organization as a proactive approach to ethical decision-making, helping build a culture of responsible AI development.
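To make the gating idea concrete, here is a minimal sketch of how review outcomes might be recorded and checked before a project advances to its next stage. Everything in it (the ReviewOutcome and ReviewRecord names, the may_proceed gate, the stage labels, and the sample project) is a hypothetical illustration rather than anything prescribed by the guide; in practice such a gate would live inside an organization's project management or CI/CD tooling.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewOutcome(Enum):
    """Possible IRC decisions for a lifecycle stage (illustrative)."""
    APPROVED = "approved"
    APPROVED_WITH_CONDITIONS = "approved_with_conditions"
    REVISIONS_REQUIRED = "revisions_required"
    HALTED = "halted"


@dataclass
class ReviewRecord:
    """One documented IRC decision, retained for institutional knowledge."""
    project: str
    stage: str  # e.g. "ideation", "development", "deployment"
    outcome: ReviewOutcome
    open_conditions: list[str] = field(default_factory=list)


def may_proceed(records: list[ReviewRecord], stage: str) -> bool:
    """Gate a project stage on its review record.

    The gate defaults to closed: no review on file, a negative outcome,
    or unresolved conditions all block the stage. Assumes one record
    per stage for simplicity.
    """
    for record in records:
        if record.stage == stage:
            if record.outcome is ReviewOutcome.APPROVED:
                return True
            if record.outcome is ReviewOutcome.APPROVED_WITH_CONDITIONS:
                return not record.open_conditions  # all conditions resolved?
            return False
    return False  # no review on file: stage stays blocked


# Example: deployment stays blocked until the open condition is resolved.
history = [
    ReviewRecord("chatbot-pilot", "ideation", ReviewOutcome.APPROVED),
    ReviewRecord("chatbot-pilot", "deployment",
                 ReviewOutcome.APPROVED_WITH_CONDITIONS,
                 open_conditions=["publish a model card"]),
]
assert may_proceed(history, "ideation")
assert not may_proceed(history, "deployment")
```

The design choice worth noting is that the gate defaults to closed: a project with no review on file, a negative outcome, or unresolved conditions cannot advance, which is what gives the IRC's recommendations binding force rather than advisory status.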
Supporting Your Responsible AI Journey
Want more insight and support in embedding independent review into your organization's AI governance practices? In our guide Operationalizing Independent Review in AI Governance: A Guide for AI Practitioners, we provide AI practitioners with the tools they need to make independent review a standard part of their governance structures. By committing to this approach, organizations can build AI systems that not only push technological boundaries but also foster public trust and societal benefit. In doing so, independent review becomes more than just a governance mechanism; it becomes a strategic advantage that sets the stage for ethical, sustainable AI innovation.
About the Authors
Hadassah Drukarch is a technology policy and governance specialist with extensive experience in translating the evolving regulatory landscape into actionable roadmaps that empower both people and businesses. As the Director of Policy and Delivery at the Responsible AI Institute, she builds bridges across people, process and technology through the development and delivery of conformity assessments, governance tools, training, and regulatory/policy guidance. She is also the co-founder of The Law of Tech, a Legal Tech platform that creates content and tools for practical, peer-driven learning to accelerate technology adoption in legal practice. Her research centers on advancing global AI and robotics governance, with a strong focus on transparency to foster trust and accountability — a commitment reflected in her involvement across multiple EU-funded research initiatives.
Monika Viktorova is a Product and Responsible Tech Manager in a large logistics company, where she bridges the gap between strategic innovation and responsible technology. She collaborates with business line experts, technologists, and users to guide transformative ideas through the entire product lifecycle. With a background in tech strategy and trustworthy AI consulting, Monika has previously supported financial services and public sector organizations in shaping their analytics and AI innovation roadmaps, focusing on best practices for responsible tech and privacy risk mitigation. She is passionate about translating high-level tech visions and ethical principles into actionable practices, ensuring that products align with both an organization’s strategic goals and its values.
The authors want to thank the broad range of stakeholders who contributed their expertise, time, and insight to this work. The work to adapt bioethics principles and IRCs to tech and AI began in 2020 with the RAI Institute Bioethics Working Group, co-chaired by former Executive Director Ashley Casovan and Monika Viktorova, which included a broad range of experts from tech, academia, the pharmaceutical industry, government, law, and other fields. A sincere thank you to all of the participants of the working group who lent their time, expertise, experience, and insight to the project, whether by attending the working group meetings and roadshows or by contributing offline to the recommendations draft released by RAI Institute in 2020. This updated guide draws from that invaluable multistakeholder effort.
About the Responsible AI Institute
Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Strategy & Marketing
Responsible AI Institute
+1 (440) 785-3588