The EU’s influence in AI governance

Roughly a year after the EU became the first jurisdiction to propose AI governance measures to oversee the use of artificial intelligence, Canada has followed by introducing its first private sector AI governance framework. The similarities between the two sets of rules show the influence – and the pressure – that the European Union's AI Act is likely to exert on other authorities looking to rein in an industry that has so far been largely unregulated.

Canada’s proposed AI governance

The Canadian government introduced Bill C-27 in June this year. If passed, the bill will not merely remodel the federal private sector privacy law but will also regulate AI under the new Artificial Intelligence and Data Act (AIDA). Both the Canadian and the EU bills are still in draft form, but should the EU pass its act, it will likely put pressure on further countries to introduce their own regimes – beginning with Canada.

Justin P’ng, an associate in Canadian law firm Fasken’s privacy and cybersecurity group, believes the EU’s AI regulation will shine a greater spotlight on the harms within the AI industry and bring more regulatory action and enforcement. He also believes it will put pressure on other jurisdictions to act and regulate the industry. Canada is not alone in following suit: in September last year, Brazil’s Congress passed its own bill creating a legal framework for AI governance. P’ng noted the potential for a “Brussels Effect” – the EU’s rules shaping and regulating markets worldwide, much as happened with the GDPR.

Canada’s AIDA concentrates on mitigating the risks of harm and bias in the use of AI systems, following a harm-based approach comparable to the EU’s AI Act. According to Marijn Storm, an associate at Morrison & Foerster in Brussels, this approach makes sense: the industry is evolving too quickly to enumerate which kinds of artificial intelligence should be regulated, as any such list of systems would likely be outdated by the time the regulations were passed.

Differences between the EU’s and Canada’s AI regulations

There are, however, a few differences between the laws…

The EU’s AI Act classifies AI systems into four separate categories:

  • unacceptable risk

  • high risk

  • limited risk

  • minimal risk

The majority of the compliance requirements under the EU AI Act fall on the high-risk category, whereas the Canadian AIDA focuses on what it calls “high-impact” systems.

The key differences between the two proposed acts can be seen in this table from Canadian law firm Fogler Rubinoff.

For the time being, Canada has not defined what “high impact” means, deferring the definition to regulations to be made under the act. If the bill is passed, that definition is expected to follow soon after.

Operational and practical effects

In terms of practical and operational impact, there is unlikely to be a vast difference in the overall activities that Canada and the EU are striving to manage. While Canada finalises its AI governance regulations, P’ng advised that companies with AI systems would be prudent to presume they are covered, because the regime provides for heavy enforcement against noncompliant companies.

The range of regulated activities is reasonably wide, and it impacts the whole AI supply chain. Penalties under Canada’s AIDA will be able to reach $25 million, or 5% of an organisation’s total worldwide revenue in the previous financial year, as well as imprisonment for those involved in developing a noncompliant system.

In a nutshell, the EU has played a significant role in setting out regulatory activities to oversee the AI industry as a whole. Considering the vast range of regulatory considerations proposed by the bills, firms are going to want to set compliance in motion, and quickly.

If you would like to talk to us about future-proofing your compliance processes and efficiently keeping on top of the new AI governance regulations, please get in touch here for an obligation-free chat.
