Register to attend the Responsible AI: What’s Real, What’s Next, and What Matters webinar 📅 January 22, 2025 | 🕚 11 AM EST | 🌐 Virtual

What are Fortune 500 companies most concerned about when it comes to AI regulations?

AI Regulation

Arize AI analyzed the SEC filings of Fortune 500 companies from 2022 to 2024 and found that 137 of them identify AI regulation as a risk factor. Executives cited higher compliance costs, potential revenue impacts, and penalties for policy violations as primary concerns. Their concerns cluster around three themes:

Lack of clarity around how new laws will affect companies’ AI products: companies across the surveyed sectors – media and entertainment, healthcare, financial services, CPG, aerospace and defense, and more – are building AI products both internally and for their customers. New laws could upend these plans overnight, and executives cite uncertainty about how AI legislation will affect their products and how they should respond.

Difficulty dealing with fragmented AI regulations: with no single standard for AI policy, companies are struggling to harmonize diverse regulations, sector-specific requirements, and other legal and compliance obligations. California’s SB 1047, for example, has sparked debate over whether it will catalyze or slow AI innovation, as it requires developers to conduct safety testing and implement other safeguards, with serious financial penalties for incidents.

Lack of time to address and implement responsible AI: senior management at many Fortune 500 companies lacks the time to make sense of how new laws affect their AI products, or to track the ever-evolving regulatory landscape and its business implications. Doing so also diverts attention from other priorities.

Some companies are establishing their own AI guidelines to stay ahead of the curve and anticipate potential impacts on their AI products. Even so, new regulations could still affect their ability to remain competitive. Companies setting internal guidelines must constantly incorporate emerging frameworks and guidance and ensure compliance with the latest standards, yet may lack the resources to address responsible AI collectively across functions.

Generative AI presents its own difficulties: it continues to evolve quickly and requires advanced cybersecurity protections. Companies are still working out how to design best practices and implement AI guidelines that can keep pace. About 70% of Fortune 500 companies mention generative AI risk in the context of competitive and/or security-related business threats.

Untangling AI governance regulations and ensuring safe system behavior lie at the heart of deploying AI responsibly, but just as companies begin to develop their own guidelines or implement existing guidance, the landscape shifts again. Ultimately, the main challenge is implementation, including:

  • Harmonizing fragmented regulations, legal requirements, and guidance (sector-specific, best practices, product/use-case specific)
  • Incorporating the latest AI governance practices and re-assessing risks for specific sectors
  • Addressing critical business threats such as AI security attacks 
  • Aligning AI use cases with business objectives to derive maximum benefits 

The Responsible AI Institute helps member companies take practical steps to adhere to responsible AI standards at the product and organizational levels, while remaining ready for future AI developments.

Become a Member - Responsible AI Institute

Source:

https://arize.com/wp-content/uploads/2024/07/The-Rise-of-Generative-AI-In-SEC-Filings-Arize-AI-Report-2024.pdf

About the Responsible AI Institute

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Marketing, Responsible AI Institute

nicole@responsible.ai 

+1 (440) 785-3588

Follow Responsible AI Institute on Social Media 

LinkedIn 

X (formerly Twitter)

Slack
