The EU AI Act: State of Play, Global Implications and How Organizations Can Prepare


February 21, 2024 Panel Discussion Highlights

During the inaugural installment of our virtual event series, Responsible AI Institute gathered experts in responsible AI to discuss the latest developments and implications of the EU AI Act, which EU member states recently endorsed unanimously. The Act will impose requirements such as disclosure of training data sources and human oversight measures for “high risk” AI systems in areas like healthcare, credit scoring, and employment decisions.

The panel featured AI governance expert Lucía Gamboa, Policy Manager at Credo AI; Adam Ruttenberg, a Partner at Cooley LLP, who offered legal insights on the Act; and Hadassah Drukarch, a Fellow at the Responsible AI Institute, who shared expertise on tech policy and regulation. Krystal Hu, tech correspondent at Reuters, skillfully moderated the discussion.

AI Governance Will Prove To Be Crucial

Companies that build AI systems face complex questions about documenting their development lifecycles to meet the Act’s transparency requirements, a central issue of the legislation. Even organizations that simply deploy off-the-shelf AI tools will need robust governance to oversee prudent usage.

Panelists agreed compliance won’t be a simple “check the box” exercise. Organizations must cultivate AI governance expertise and align legal, IT, product, and executive teams around holistic risk frameworks. They recommended proactive “AI governance by design” over patching issues after problems arise. Gamboa believes that having benchmarks in place to assess AI systems will be critical to laying a strong foundation for adhering to emerging regulation.

The Legal Implications Ahead

The panel also focused on the training data disclosure requirement, which has sparked concerns that it could open the door to lawsuits against AI companies over the use of unlicensed data. It remains uncertain whether current class action lawsuits against large language model developers will set precedents that shape these disclosures. This “gray area” means some companies may wait to deploy in the EU until liability risks become clearer. Ruttenberg noted that organizations are closely watching how the AI Act will affect the way they develop, sell, and purchase AI systems.

The Future of AI Open Source Tools

Lastly, panelists delved into questions surrounding the open source community. Open source AI tools create unique challenges for assigning responsibility across distributed teams of contributors. There are also questions about whether the EU can enforce the Act consistently across member states, or whether it will repeat the inconsistencies seen with GDPR.

Looking ahead, panelists predict we’ll see more regulation of high-risk areas like healthcare AI. Balancing innovation and regulation will remain tricky, as the “pacing problem” means technology outpaces lawmakers and policy. Drukarch pointed to the ongoing race to regulate AI and observed that, at the end of the day, most Western countries are working toward the same goals; they are simply approaching them in different ways.

Overall, the EU AI Act signals the start of an era of AI governance in which businesses must strategically invest while maintaining their AI innovation initiatives. Doing so will help mitigate the legal risks they could face in developing and supplying AI systems. Organizations and open source projects alike will need to put measures in place to ensure consumer safety. The Act sets the stage for regulation to come, and time will tell whether other countries follow suit or forge their own paths.


Supporting You on Your RAI Journey

Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for upcoming AI regulation? RAI Institute invites new members to join us in driving innovation and advancing responsible AI. Collaborating with esteemed organizations like those mentioned above, RAI Institute develops practical approaches to mitigating AI-related risks and fosters the growth of responsible AI practices.

Become a RAI Institute Member


About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.


Media Contact

For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement.

[email protected]

+1 440-785-3588

