By Var Shankar
In recent years, the RAI Institute has worked with Financial Institutions (FIs), regulators (for example, at the London summit co-convened by the RAI Institute and the UK Financial Conduct Authority), ML researchers, civil society, and law enforcement organizations to develop an understanding of the full range of responsible AI considerations in financial use cases. Financial use cases of AI are among the RAI Institute's focus areas because of their implications for broader global issues, such as sustainability, international development, and deepfakes.
On one such issue, namely financial crime, the G7’s experience of creating and expanding the Financial Services Task Force can serve as a model for the G7’s responsible AI initiative, known as the Hiroshima AI Process. G7 nations launched the Hiroshima AI Process in May 2023 to put guardrails on advanced AI systems. The first two publications of the Hiroshima AI Process – Principles and a Code of Conduct for organizations using advanced AI systems – were published in October 2023.
The Hiroshima AI Process has the potential to ground the development of advanced AI systems in democratic values. Alongside the EU's three largest economies (Germany, France, and Italy), the G7 grouping includes the US, Canada, Japan, and the UK, with an additional seat for the EU. India, the world's largest democracy, is an observer to the G7. US cabinet secretaries have signaled their strong commitment to the Hiroshima AI Process, and the Hiroshima AI Principles and Code of Conduct were published on the same day as the White House's landmark Trustworthy AI Executive Order.
The Hiroshima AI Principles and Code of Conduct – along with the OECD AI Principles and the ISO 42001 standard – are foundational elements of global responsible AI efforts.
Two elements of the Hiroshima AI Process set it apart from other important efforts. First, the Hiroshima AI Process has focused to date on the most advanced AI systems, such as the latest generative AI systems. Of all AI systems, these emerging, advanced systems are likely to have the most significant impacts. Second, the limited scope and nimble approach of the Hiroshima AI Process allow it to move faster than typical multilateral efforts. This intentional feature is crucial, given the pace of developments in advanced AI.
However, the Hiroshima AI Process is not yet truly global. Key countries with advanced AI capabilities, such as China, are not involved.
The Financial Action Task Force (FATF) provides a template for how an effort that originates in the G7 can become globalized. The FATF was formed by the G7 in 1989 to help combat money laundering and financial crime. Today, the FATF has 40 members, which include China, Singapore, and Saudi Arabia. The FATF has expanded while being responsive to current events. For example, after the attacks of September 11, 2001, it expanded its scope to countering terrorist financing.
With the publication of the Principles and Code of Conduct, the Hiroshima AI Process is off to a strong start. The FATF built upon its early principles and recommendations to combat money laundering by encouraging the development of specialized, national Financial Intelligence Units (FIUs) that use similar standards and processes to identify and investigate suspicious financial activities.
The technocratic collaboration across national FIUs to give ‘teeth’ to the FATF’s principles and recommendations should serve as a model for policymakers as they develop AI Safety Institutes (AISIs) in the US, Canada, Japan, and the UK and the European AI Office (EAIO) to ensure that advanced AI systems are developed and distributed safely.
At the same time, a sector-specific push is also required to operationalize the Hiroshima AI Process. For example, Hiroshima AI Principle 4 is to ‘Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.’
Though AISIs and the EAIO will likely have an important role to play in this kind of information sharing, so will sector-specific organizations like Financial Institutions (FIs) and FIUs applying advanced AI systems to financial services use cases. The use of advanced AI in financial crimes compliance programs — for customer onboarding (including Know Your Customer/Know Your Business and sanctions checks), ongoing transaction monitoring, generation of Suspicious Activity Reports (SARs), and preliminary investigations — will only grow.
Another important lesson from the FATF context is to develop regional organizations that can adapt global standards to relevant regional guidance. Just as money laundering risks vary by region, risks stemming from advanced AI systems in Southeast Asia may look different from those in the United States or in the Middle East.
Though the FATF experience provides an attractive model for the development of the Hiroshima AI Process, it is not perfect. The FATF system has sometimes been critiqued as generating too many requirements or as being slow to respond to new methods of financial crime.
Additionally, elements of the FATF system do not easily apply in the AI safety context. For example, the FATF’s compliance mechanism is to periodically assess each member country’s alignment with the FATF recommendations for both effectiveness and technical compliance. Non-compliant countries are put on the FATF’s ‘grey list’ or the ‘black list’ until they become compliant. Being on either of these lists can reduce financial inflows into a country and damage its economy.
In the AI context, it may be possible to limit access to advanced AI systems, or to the cloud computing services that power advanced AI systems, in countries where the use of advanced AI systems is deemed risky. However, this would first require significant alignment within the G7 in areas such as AI policy and data protection.
Most importantly, to give the Hiroshima AI Process 'teeth' as a global effort to ground the development and distribution of advanced AI systems in democratic values, policymakers in G7 countries must commit to it with attention, staffing, and resources.
Become a RAI Institute Member
RAI Institute invites new members to join in driving innovation and advancing responsible AI. Collaborating with esteemed organizations like those mentioned above, RAI Institute develops practical approaches to mitigating AI-related risks and fosters the growth of responsible AI practices. Explore membership options here.
About Responsible AI Institute (RAI Institute)
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, RAI Institute
+1 440.785.3588
Follow RAI Institute on Social Media
X (formerly Twitter)