LCO report comparing European Union, Canadian AI regulation stresses development of ‘trustworthy AI’
Thursday, December 16, 2021 @ 12:28 PM | By Amanda Jerome
A Law Commission of Ontario (LCO) report comparing Canadian and European AI regulation notes there are “major gaps” in this area in Canada that “must be addressed urgently.”
“It’s an area of growing concern,” said Nye Thomas, the LCO’s executive director.
Thomas stressed that Canadian laws are “going to have to adapt to meet these new technologies; whether it be administrative law, or criminal law, or the law of civil procedure.”
The Comparing European and Canadian AI Regulation report, released in November 2021, was created by the LCO and professor Céline Castets-Renard, the University of Ottawa’s Research Chair on Accountable Artificial Intelligence in a Global Context. The report compares Canada’s Directive on Automated Decision-making to the EU’s proposal on AI regulation and reviews the strengths and weaknesses of both approaches.
“AI,” the report noted, “is still an under-regulated sector, with a combination in Canada of applicable legal frameworks, ethics declarations and best practices covering parts of a very broad and complex technology.”
“Globally, the European Commission was the first regulatory body to attempt a comprehensive legislation to address AI. Others will follow suit soon, selecting regulatory approaches that are best suited for their specific context,” the report added.
According to the report, the “only comprehensive effort to regulate AI and automated decision-making systems in Canada to date is the Government of Canada’s Directive on Automated Decision-making (“the Canada ADM Directive”).”
“Many other governments, including the Government of Ontario, have begun to consider AI and ADM regulation, but have not yet passed or implemented comprehensive or dedicated regulations,” the report explained.
The Canada ADM Directive requirements, the report noted, are “linked to ‘core administrative law principles such as transparency, accountability, legality, and procedural fairness’ and are divided into five categories or stages of use of automated decision-making:
- Performing an Impact Assessment
- Transparency
- Quality Assurance
- Recourse
- Reporting
The ADM Directive, which came into force on April 1, 2020, “requires an algorithmic impact assessment for every automated decision-making system (ADM), including the impact on rights of individuals or communities,” the report explained.
Unlike the proposed European Commission AI rules, the report added, the “Canada ADM Directive is very limited in scope.”
“Most significantly,” the ADM Directive is “not a rule of general application governing all, or even most, AI, automated decision-making and related systems across Canada.”
“Rather, the scope of the Canada ADM Directive is limited to a restricted class of systems and activities within the Canadian federal government,” the report explained.
The report emphasized that the ADM Directive "only regulates systems in the federal government and federal agencies. It does not apply to systems used by provincial governments, municipalities, or provincial agencies such as police services, child welfare agencies and/or many other important public institutions. Nor does the ... Canada ADM Directive apply to private sector AI or ADM systems."
The report stressed that “even within the federal sphere, the extent of the limitations on the Canada ADM Directive are significant.”
As a few examples, the report noted, the ADM Directive “does not govern”:
- Systems that could be “deployed in the criminal justice system or criminal proceedings.”
- National security applications, which are "explicitly exempt from the Directive, as are the Offices of the Auditor General, the Chief Electoral Officer, the Information Commissioner of Canada and the Privacy Commissioner of Canada and others."
- “Several agencies, Crown corporations, and Agents of Parliament that [are] outside the core federal public service may enter into agreements with the Treasury Board to adopt the Directive’s requirements but are not required to do so.”
The report also noted that the “Canada ADM Directive does not have the legal status of a statute or a regulation. Nor is it a voluntary, self-assessing ‘ethical AI’ guideline or best practice.”
“Rather,” the report added, “the Directive falls somewhere in between.” The Directive is “a risk-based governance model.”
According to the report, the ADM Directive “does not explicitly require AI or ADM systems to comply with the Charter or Canadian human rights legislation.”
“Rather, the Directive states that its objective is to … ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian Law,” the report noted.
In comparison, the European Commission issued "new AI binding rules" on April 21, 2021.
“The new Coordinated Plan with Member states seeks to strengthen AI uptake, investment and innovation across the EU. The new rules on Machinery will complement this approach by adapting safety rules of robotic products integrating AI. While the AI Regulation will address the safety risks of AI systems, the new Machinery Regulation will ensure the safe integration of the AI system into the overall machinery,” the report explained.
The proposed regulatory framework on AI put forward by the European Commission has “specific objectives:
- ensure that AI systems placed and used on the Union market are safe and respect existing law on fundamental rights and Union values;
- ensure legal certainty to facilitate investment and innovation in AI;
- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.”
The report noted that the new rules "will be applied directly in the same way across all Member States" and they "follow a risk-based approach."
In comparing the two approaches, the report noted that the "first and perhaps most notable strength" of Canada's ADM Directive "is its comprehensiveness."
“The Directive includes many (but not all) of the necessary elements of comprehensive ‘framework’ regulation identified by the LCO and other organizations. The Directive addresses an impressive range of issues, including: baseline requirements for many (likely most) federal government automated decision-making systems, irrespective of risk; strong protections for automated decision-making transparency; a mandatory register; a detailed and thoughtful risk assessment process; elements of a remedial regime; a commitment to procedural fairness; and an oversight regime,” the report explained.
However, the report also noted that the ADM Directive “has several weaknesses or limitations.”
“Unlike the EC Proposal, the Canada ADM Directive has a very limited scope. The Canada ADM Directive has a singular purpose: regulation of a specific range of federal government automated decision-making systems. This means that other kinds of AI and algorithmic systems are beyond its scope,” the report noted, pointing to criminal justice as an example.
As for the EC Proposal, the report highlighted several strengths, including the “fact that the European Commission is the first in the world to consider regulation of this scale [and] has the advantage of giving a direction that will be looked at by other states and will necessarily influence them.”
“Moreover,” the report emphasized, “the big AI players like the US, China and Canada cannot do without the European market and cannot refuse to comply with it.”
Another strength the report noted with the EC Proposal is that “the penalties are strong, and it is essential that they are so that regulation is taken seriously, especially by the already all-powerful digital American and Chinese giants in the AI markets.”
However, the report noted, the EC Proposal has several weaknesses, including that the EC's goal is "mainly to provide a framework for the placing of products on the market and not to protect individuals from the social risks that AI can generate."
“Fundamental rights are thus not very present in the proposal,” the report stressed.
The report noted that the Canada ADM Directive and EC Proposal are “both innovative and complex regulatory instruments” and “both approaches represent sophisticated and thoughtful responses to the challenge of AI and ADM regulation in their respective jurisdictions.”
“Some may believe that the scope and breadth of the EC Proposal, coupled with the jurisdictional complexity of EU governance, makes the EC Proposal too complex and unfamiliar to be of benefit to Canadian policymakers and stakeholders,” the report added, noting that the LCO and Research Chair on Accountable Artificial Intelligence “acknowledge these concerns, but believe this analysis provides some general lessons about best practices and priorities in AI regulation.”
"To be clear," the report noted, "neither the LCO nor Research Chair on Accountable Artificial Intelligence believe that the Canada ADM Directive or EC Proposal represents the perfect solution to these issues. Nevertheless, we believe these topics are the baseline element of thoughtful AI regulation and represent an emerging standard for other Canadian governments and agencies."
The report concluded by emphasizing that the EC Proposal “demonstrates major gaps in the regulation of unacceptable and high-risk AI systems in Canada.”
"Though imperfect, the EC Proposal includes a commitment to publicly identify, regulate and in some cases prohibit a broad range of high-risk AI systems. Article 5 of the EC Proposal identifies several categories of AI systems that are deemed 'unacceptable' and prohibited, including a limited class of biometric identification systems," the report explained, noting that in this respect, the EC Proposal "represents a major advancement on [the] Canada ADM Directive."
The Canada ADM Directive, the report stressed, “does not explicitly identify or prohibit AI systems with unacceptably high risk, including biometric systems such as facial recognition. Nor does the Canada ADM Directive regulate law enforcement or criminal justice AI applications. These are major gaps in AI regulation in Canada that must be addressed urgently.”
The report also emphasized that “public and private sector AI regulation must be different.”
The ADM Directive and EC Proposal “have much different focuses and priorities: The Canada ADM Directive is directed to ADM systems used by the Government of Canada. The EC Proposal, on the other hand, is largely designed to address AI systems used in the private sector within the European Union,” the report explained.
The report noted that “Canada is a strong place of AI in terms of research, public and private investments, and training” and “now is the time to secure these efforts by adopting legal standards.”
Thomas noted that the European proposal is “very broad in scope” and, to date, the Canadian policy development has been more “decentralized.”
“This is quite a difference,” he said, noting that it “means that AI regulation development in Canada is going to proceed much differently than it does in Europe.”
Thomas also emphasized that the European model is “what’s called a pre-market certification model.”
“It’s really about setting the rules that companies and governments have to abide by before they can implement an AI system. So, it’s really about getting products onto the market. There’s comparatively little in AI regulation about rights protection, about mitigating the demonstrable harms from AI systems,” he said, noting that “hopefully” Canada will have a “much more robust regulatory framework protecting rights.”
The other significant difference between the approaches Thomas highlighted is that the “EU framework applies to both private sector actors and government.”
“The law commission and many other people believe that you actually need a different set of rules for private sector actors and for governments,” he explained, noting that there are “higher expectations of public participation when it comes to public sector AI systems.”
Thomas stressed that the most prominent risk with AI systems is bias, which can cause significant harm.
He noted that “certain rules and expectations, policies, legal requirements must be in place before these systems are released, so the public can have trust in them.”
If you have any information, story ideas or news tips for The Lawyer’s Daily please contact Amanda Jerome at Amanda.Jerome@lexisnexis.ca or call 416-524-2152.