Leaders in Responsible AI: A Member’s Story


Michael Brent

Boston Consulting Group

Director, Responsible AI Team

What does your job entail within your organization?

I have the best job in the world. As part of the Responsible AI team, I address the most important ethical challenges that BCG faces when helping our clients design, build, and deploy AI systems. From ideation to delivery, my team aims to identify and mitigate potential risks associated with AI deployment, protecting individuals and the environment. We partner with colleagues across data science and engineering, information security, data privacy, legal, risk and compliance, and senior leadership to ensure that AI systems are not only deployed responsibly but are also safer products because of the mitigations we help put in place. This builds trust in both BCG and our clients, while enhancing the societal value of AI.

What do you think are the biggest challenges your organization faces related to integrating AI into your business?

BCG builds AI systems with clients in nearly every industry and across much of the globe. Because of this breadth, one challenge we face is ensuring compliance with emerging laws, rules, and directives, such as the EU AI Act, the U.S. Blueprint for an AI Bill of Rights, and Canada's Artificial Intelligence and Data Act. These laws, rules, and directives impose a diverse set of requirements that we must carefully monitor and track, especially across globally distributed work. Giving our clients and our own data scientists and engineers a consistent set of best practices to guide their work is essential to remaining compliant.

Why must organizations prioritize responsible AI governance and systems in their business?

Asking why organizations ought to prioritize responsible AI practices is a lot like asking why automobile manufacturers should prioritize safety features in the design and testing of their vehicles. There is ample evidence that as cars have become safer over time, rates of personal injury have declined. Analogously, if AI system builders prioritize safety features in the design and testing of their products, we should expect these systems to become safer over time. This, in turn, should reduce the rate of harm caused by insufficiently tested AI systems launched with little safety oversight. As the analogy implies, arriving at safer outcomes requires an entire network of changes to occur together, including the creation of standards and best practices, certifications, testing benchmarks, improved safety features, ways of training people in safe deployment, and much more. By drawing on the lessons learned in the automotive and similar industries, we can chart a path toward similar outcomes for AI systems.

What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?

Thus far in my career, one lesson I have learned concerns the importance of people, specifically the people who are subject to the outputs of the AI systems that we help design, build, and deploy. As business goals and research interests continue pushing AI technologies in new directions, keeping the people impacted by these systems in view is crucial. Doing so helps ensure that their rights, values, and interests are represented in any discussion about which AI systems we ought to build and how to deploy them responsibly. This lesson underlies the value and importance I find in responsible AI work. As noted above, this work plays a crucial role in creating safer AI systems that lower the likelihood of harming individuals and the environment, slowly but surely bending the arc of justice toward better outcomes for all.

Become a RAI Institute Member 

RAI Institute invites new members to join in driving innovation and advancing responsible AI. Collaborating with esteemed organizations, RAI Institute develops practical approaches to mitigate AI-related risks and fosters the growth of responsible AI practices.


About Boston Consulting Group

Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we work closely with clients to embrace a transformational approach aimed at benefiting all stakeholders—empowering organizations to grow, build sustainable competitive advantage, and drive positive societal impact. 

Our diverse, global teams bring deep industry and functional expertise and a range of perspectives that question the status quo and spark change. BCG delivers solutions through leading-edge management consulting, technology and design, and corporate and digital ventures. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, fueled by the goal of helping our clients thrive and enabling them to make the world a better place.

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Marketing, RAI Institute


+1 (440) 785-3588


