On October 16th, NYC announced the release of an AI Action Plan to promote responsible AI in City government. The Plan spans initiatives in AI education, implementation, procurement, and stakeholder engagement. The 51-page document includes 37 actions to govern AI, 29 of which are set to roll out in 2024.
NYC and AI Regulation
Led by NYC Mayor Eric Adams and Chief Technology Officer Matthew Fraser, this AI Action Plan is the first of its kind for a major city. It was developed by the Office of Technology and Innovation in consultation with about 50 city employees and external industry and academic stakeholders.
As of 2022, NYC reported the use of more than 20 algorithmic tools across roughly a dozen city agencies, so these governance efforts are overdue.
The release of the Plan builds on NYC’s existing AI regulatory efforts, such as Local Law 35, which mandates reporting on algorithmic tools, and Local Law 144, which requires independent bias audits of automated employment decision tools.
Action Plan Summary
The Action Plan’s definition of AI aligns with the definition in the UK Information Commissioner’s Office and Alan Turing Institute’s 2020 paper, and matches the specificity of the draft EU AI Act by providing examples of AI system tasks. The Plan defines AI as:
“an umbrella term without precise boundaries, that encompasses a range of technologies and techniques of varying sophistication that are used to, among other tasks, make predictions, inferences, recommendations, rankings, or other decisions with data, and that includes topics such as machine learning, deep learning, supervised learning, unsupervised learning, reinforcement learning, statistical inference, statistical regression, statistical classification, ranking, clustering, and expert systems.”
The AI Action Plan sets out seven key city initiatives to govern the use of AI:
- Create a comprehensive AI governance framework
- Engage diverse stakeholders
- Educate the public and city workers on Responsible AI
- Upskill NYC workforce in AI
- Implement AI in city agencies
- Support Responsible AI Procurement
- Maintain, update, and report annual progress on the Action Plan
Each initiative includes phased action timelines ranging from Q4 2023 to Q4 2025.
Continuing the Momentum
All municipalities should follow NYC’s example and articulate well-researched AI plans of their own, as stakeholders have a right to be informed about the future of AI governance.
The time is now: AI regulatory momentum will only build as the EU AI Act nears the final phases of its passage, a monumental piece of legislation that will have far-reaching effects even in the U.S.
To promote responsible innovation and mitigate the potential biased impacts of this powerful technology, other important steps cities can take to promote Responsible AI Governance include:
- Research and development of AI and RAI through steps such as research funding; AI commissions, working groups, and committees; sandboxes with SMEs; transparent public consultation processes; and AI use case reviews and feasibility studies
- Funding for AI literacy and workforce upskilling
- Public reporting of city agencies’ procurement, piloting and deployment of AI systems
- Adoption of leading RAI frameworks such as the RAI Institute’s that include principles related to consumer protection, accountability, bias, fairness, security, validity, reliability, explainability, interpretability, safety, transparency, privacy, and robustness
- Support for third-party auditors, standards, conformity assessments, and post-audit accountability mechanisms, along with promotion of RAI governance collaboration among academia, industry, policy experts, and international agencies
- Support for AI bias and harm incident reporting, tracking, and redress to ensure people harmed by AI can share their experiences
- Establishing and maintaining a risk inventory/database of AI systems
- Implementing risk management processes and RAI governance frameworks across all city agencies that use or plan to use AI
- Appointing RAI Officers across the city workforce
- Sharing resources related to RAI, including model techniques
- Conducting post-market monitoring of AI systems
- Strengthening statutes related to data protection, privacy, intellectual property, consumer protection, and human rights while introducing norms of algorithmic accountability
As an independent nonprofit working in Responsible AI for more than half a decade, we’re proud to act as a convener and interlocutor between regulators, industry, and the public. RAI Institute tracks this plan and over 250 others like it through our AI Regulatory Tracker for members. Get in touch with us about becoming a member today.
About Responsible AI Institute (RAI Institute)
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of AI products. Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contacts
Audrey Briers
Bhava Communications for RAI Institute
+1 858.314.9208
Nicole McCaffrey
Head of Marketing, RAI Institute
+1 440.785.3588
Follow RAI Institute on Social Media
X (formerly Twitter)