Arize AI analyzed the SEC filings of Fortune 500 companies from 2022 to 2024 and found that 137 of them cite AI regulation as a risk factor. Executives named higher compliance costs, potential revenue impacts, and penalties for policy violations as their primary concerns. These concerns reflect what is currently top of mind for executives:
Lack of clarity around how new laws will affect companies’ AI products: companies across the sectors surveyed – including media and entertainment, healthcare, financial services, CPG, and aerospace and defense – are building AI products both internally and for their customers. New laws could upend these plans in an instant, and executives cite uncertainty about how such laws will affect their products and how they should respond.
Difficulty dealing with the fragmented nature of AI regulations: in the absence of a single standard for AI policy, companies are struggling to harmonize diverse regulations, sector-specific requirements, and other legal and compliance obligations. California’s SB 1047, for example, has sparked debate over whether it will catalyze or slow AI innovation, as it would require developers to conduct safety testing and implement other safeguards, with significant financial penalties for incidents.
The time required to address and implement responsible AI: senior management at many Fortune 500 companies lacks the time to make sense of how new laws affect their AI products, let alone to track the ever-evolving regulatory landscape and its business implications. The effort also pulls attention away from other priorities and quickly becomes unwieldy.
Some companies are establishing their own AI guidelines to stay ahead of the curve and anticipate potential impacts on their AI products. Even so, new regulations could still affect their ability to remain competitive. Companies setting internal guidelines must continually incorporate emerging frameworks and guidance and ensure compliance with the latest standards, yet they may lack the resources to address responsible AI collectively across functions.
Generative AI presents its own difficulties: it continues to evolve quickly and requires advanced cybersecurity protections, and companies are still working out how to design best practices and implement AI guidelines that can keep pace. About 70% of Fortune 500 companies mention generative AI risk in the context of competitive and/or security-related business threats.
Untangling AI governance regulations and ensuring safe system behavior are at the heart of deploying AI responsibly, but just as companies begin to develop their own guidelines or implement existing guidance, the landscape shifts again. Ultimately, the main challenge is implementation, including:
- Harmonizing fragmented regulations, legal requirements, and guidance (sector-specific rules, best practices, and product- or use-case-specific requirements)
- Incorporating the latest AI governance practices and re-assessing risks for specific sectors
- Addressing critical business threats such as AI security attacks
- Aligning AI use cases with business objectives to derive maximum benefits
The Responsible AI Institute helps member companies take practical steps to adhere to responsible AI standards at the product and organizational levels, while remaining ready for future AI developments.
About the Responsible AI Institute
Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, Responsible AI Institute
nicole@responsible.ai
+1 (440) 785-3588