Canada’s Clarification of the Proposed AI and Data Act is a Welcome Step

Since the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) in June 2022, as part of Bill C-27, individuals and organizations have had many questions about the proposed draft, including about its scope, its penalties, and how it will be supported by sectoral regulations. Many of these issues are discussed and clarified in the companion document released earlier this week by Innovation, Science, and Economic Development Canada (ISED).

Recognizing that additional details will be shared in future materials and regulations, RAI Institute supports ISED’s continued leadership in developing a policy framework for the responsible development and use of AI, as well as the direction of ISED’s efforts as reflected in the companion document.

RAI Institute will continue to work with the Canadian AI policy community to bring additional clarity to the concept of “high-impact systems,” to describe when and how compliance with standards and certification programs will provide a presumption of conformity with certain AIDA requirements, and to further integrate AIDA’s framework with Canadian and international frameworks related to human rights, data, consumer protection, and AI use.

Within the companion document, we were particularly excited to see additional guidance in four areas:

1. Emphasis on standards and certification programs

The companion document highlights the roles that standards and certification programs can play in ensuring responsible AI implementation while promoting innovation and international alignment. For example, it notes that without clear standards, “it is difficult for consumers to trust the technology and for businesses to demonstrate that they are using it responsibly” and that “voluntary certifications can play an important role as the ecosystem is evolving.” The companion document also describes Canada’s plans for a wide-ranging consultation on five issues once AIDA is passed. One of these issues is the “types of standards and certifications that should be considered in ensuring that AI systems meet the expectations of Canadians.” RAI Institute appreciates Canada’s recognition that in addition to robust legislation and regulation, standards and certifications will play an important role in ensuring responsible implementation of AI systems.

RAI Institute will continue to reflect the evolving regulatory framework for AIDA within the requirements of the RAI Institute Certification Program.

2. International alignment

Apart from the development of “robust standards,” ISED recognizes the draft EU AI Act, the US Blueprint for an AI Bill of Rights, NIST’s AI Risk Management Framework 1.0, and the UK’s proposal for regulating AI as important efforts to regulate the impacts of AI systems. The companion document clarifies that Canada will work together with international partners to align approaches, that AIDA’s regulatory requirements “would be developed through extensive consultation and would be based on international standards and best practices in order to avoid undue impacts on innovation,” and that this approach can help ensure that Canadian firms are “recognized internationally as meeting robust standards.”

3. Working with the AI ecosystem

The companion document demonstrates ISED’s commitment to “an open and transparent regulatory development process.” Specifically, it acknowledges the questions raised about aspects of AIDA since it was tabled in June 2022, stating that AIDA’s purpose is “not to entrap good faith actors or to chill innovation, but to regulate the most powerful uses of this technology that pose the risk of harm.” Overall, Canada will take an active yet measured approach to regulating AI impacts. AIDA’s provisions will come into force in 2025 at the earliest, and once in force, its focus will be on “education, establishing guidelines, and helping businesses to come into compliance through voluntary means.”

4. Clarifying penalties

Though Canada will allow “ample time” for the AI ecosystem to adjust to AIDA’s regulatory framework “before enforcement actions are undertaken,” the companion document also describes “two types of penalties for regulatory non-compliance – administrative monetary penalties (AMPs) and prosecution of regulatory offences.” It also clarifies that these are distinct from the separate mechanism for “true criminal offences,” of which AIDA creates three new types. The companion document clarifies that AIDA’s new criminal offences apply to someone “who is aware of, or who appreciates, the harm they are causing or at risk of causing.”

Alongside the discussion of Canada’s transparent, consultative, internationally aligned, and education-focused approach to rolling out the AIDA framework, the companion document’s clarification of these penalties will help assuage many of the concerns of organizations and individuals.

As AI technology continues to evolve, so will legislation. Appropriate oversight is imperative not only to protect the public from the potential harms of systems that are not developed safely and responsibly, but also to support positive business growth. Since leaving organizations guessing about “what good looks like” in responsible AI implementation can hinder innovation, Canada’s efforts to provide a framework for AI regulation are commendable. ISED’s companion document demonstrates Canada’s efforts to support the advancement of technology development, in line with recent announcements from Scale AI and its continued support of the Pan-Canadian AI Strategy, and to protect Canadians from potential AI harms.
