As part of Bill C-27, the Digital Charter Implementation Act, 2022, the government proposed to regulate “high impact” AI systems under the Artificial Intelligence and Data Act (AIDA). Following significant and sustained criticism of AIDA, and after the INDU Committee insisted that the Minister table the amendments to AIDA the Minister stated were available, the Minister provided the Committee with a letter containing an overview of the proposed amendments and a draft of the amendments (in Annex A). (To facilitate a review of the government’s proposed amendments, a redline showing how AIDA would read if all of the amendments were adopted can be found at the link to AIDA.2.0.) While the amendments provide more clarity on what the government intends with respect to AIDA.2.0, the proposed changes are still problematic and very concerning.
Overview
The proposed amendments to AIDA are extremely significant and, if enacted, would impose many new regulatory requirements never even hinted at when AIDA was first tabled. Further, the amendments still fail to address many of the key criticisms levied against the Bill and raise a host of new problems.
In summary and as explained more fully below:
- One of the key criticisms of AIDA was that it was not “intelligible” – it was an affront to Parliamentary sovereignty. I referred to this in my appearance before the INDU Committee and in prior articles here and here. The Minister has attempted to overcome these widely held criticisms by providing a list of initial high impact systems to be regulated and by providing criteria to govern what AI systems can be added. However, the substance of these criticisms is left unaddressed for the following reasons:
- The initial list of high impact systems is extremely broad and one cannot discern from reading Annex A what the government really intends to regulate. There is no requirement for any harm associated with the AI systems described in the initial list, even though they are deemed to be “high impact” AI systems, and the definition of harm has not been amended to add in any materiality threshold. The guidance in Annex B is helpful, but this guidance has no legal effect and does not constrain what can be regulated.
- The Minister’s letter contains factors that must be taken into account in designating new systems as high impact systems. But the factors enumerated are not conditions and do little to actually constrain what can be regulated. Further, the list of factors omits essential considerations such as the effects of the proposed regulations on trade or innovation and interoperability with the laws of our trading partners.
- All of the key obligations that will apply to high impact systems, general-purpose systems and machine learning models are to be established by regulation. However, there are no guiding principles that set out any parameters that must be met in establishing the regulations. As such, the essential decisions regarding how AI systems will be regulated are neither directly nor indirectly made by, or guided by, Parliament.
- AIDA still reflects a centralized and inefficient regulatory and governance regime that leaves all authority under AIDA to ISED and provides no means of enhancing sectoral regulation of AI systems.
- The AIDA amendments do nothing to make the AI and Data Commissioner independent from ISED.
- AIDA still fractures the regulation of bias and discrimination and, rather than enhancing the capabilities of the Canadian Human Rights Commission, gives the all-important regulation of bias and discrimination to ISED.
- The AIDA amendments are not interoperable with the proposed regulatory regimes in the US or in the EU under the AIA. For example, AIDA would regulate some general purpose AI systems not regulated in the EU or the US, and would regulate others in a much more onerous way than in the EU. It would also regulate machine learning models themselves rather than how ML models are used in high risk applications, and it would regulate content prioritization and moderation, something also not regulated under the EU AIA.
- The AIDA amendments, with the new proposal to regulate machine learning models, continue to ensnare whole ecosystems of AI model developers including researchers, data scientists, and developers of open source models, again unlike the EU AIA.
- AIDA may intrude on provincial jurisdiction.
- AIDA will very likely impede innovation and adoption of AI systems.
- The AIDA amendments do not address the new restrictions on uses of anonymized data that potentially conflict with those in the CPPA, AIDA’s overlapping regulatory regimes with other statutes including the CCPSA, the double jeopardy risks under these overlapping laws, or the disproportionate penalties for violating AIDA compared to other similar laws.
- AIDA had very limited public debate prior to its introduction and the policy questions associated with AIDA (and the amendments) have not been and cannot be adequately assessed and debated in the mad rush to enact a law to regulate AI, whatever that law may be. It is possible that the government can explain and justify some of these apparent flaws in AIDA. However, it is doubtful that the appropriate study and debate can occur within the short time frame left to study AIDA.2.0 at the INDU Committee, even assuming that is the appropriate place for the study and debate to start.
The flaws in AIDA are described below. The criticisms should be read together with these prior blog posts that go into more detail on a number of these criticisms:[i]
- AIDA’s regulation of AI in Canada: questions, criticisms and recommendations
- AIDA Companion Document: overview and questions
- Government proposals to amend AIDA: the challenges ahead Part 2
- AIDA: my appearance before the INDU Committee
- Minister provides proposed amendments to AIDA
- UK AI Regulation Bill
“High Impact” AI systems to be regulated by AIDA
Initial list of high impact systems
In the Minister’s letter sent to the Committee, the Minister provided a list of the proposed AI systems to be regulated under AIDA. The proposed amendments would now expressly permit the regulation of any of those systems as “high impact” AI systems.
I provided an overview of those systems in the blog post, Government proposals to amend AIDA: the challenges ahead Part 2 – Barry Sookman. As summarized in that blog post, the list is vague and is broader than the similar categories that will be regulated by the EU AIA. The Minister provided a summary in Annex B to his letter explaining the justifications for each class. However, other than explaining that the classes are somewhat narrower than they appear to be because they do not, unfortunately, apply directly to the public sector, the classes remain very broad.
Moreover, the initial AI systems to be regulated are deemed to be “high impact” with no requirement for any risk of harm to trigger regulation, potentially resulting in regulation that cannot be justified under a standard that trades off the benefits of regulation against the adverse impacts on compliance costs and innovation. But even if AIDA were amended to require that these systems be likely to cause harm before they could be regulated, even the slightest physical or psychological harm to an individual (including potentially “hurt feelings”), damage to an individual’s property, or any economic loss to an individual would meet the threshold.
Here are some examples showing the potential breadth of the initial “high impact” systems, which, as can be seen, contain no threshold for risk of harm.
- Determinations in respect of employment: This category includes any AI systems involved in the entire lifecycle of employment, from recruitment to termination. Given that AI can be used for resume screening, job matching, performance analysis, and many other HR functions, the potential applications are vast. Almost any AI tool used by employers or employment services could be included.
- Determination of service provision: This could apply to any AI system that decides eligibility, type, or cost of services ranging from insurance, banking, consumer and online services, to any federal, provincial, or municipal government service that uses AI systems made available from companies. It is particularly wide-ranging because “services” is a very inclusive term covering indefinite classes of industries and public services, many of which are already regulated, or could be regulated, by a variety of human rights and employment laws as part of an overall regulatory scheme rather than, as AIDA does, through isolated attempts to regulate a technology divorced from the overall regulatory scheme and context.
- Moderation and prioritization of online content: This extends to AI systems that filter, rank, or recommend content on platforms such as social media, search engines, or any digital service that curates or moderates content. Considering the prevalence of personalized feeds and content filters, this could affect a wide array of platforms. Content moderation and prioritization is not subject to regulation under the EU AIA. Further, as noted in my prior blog post, this category is broad enough to conflict with the regulation of online harms, and because of the need to balance these goals with freedom of speech rights, it should be separately regulated through the planned Online Harms law and be subject to Parliamentary approval. To be clear, I believe that misinformation propagated and prioritized over digital platforms poses one of the greatest risks to society and our democracies. However, any such regulation should be done by Parliament as part of the democratic process of balancing the urgent need to protect the public with freedom of speech.
- Health care or emergency services: Excluding specific devices covered by the Food and Drugs Act, this would still include AI systems used in diagnostics, treatment recommendations, patient prioritization, or resource allocation in emergencies. Health care AI applications are diverse and evolving rapidly, making this category exceptionally broad, and many such systems are already regulated at the provincial level. Further, regulation at the federal level would leave AI systems developed within a province unaddressed, potentially creating a scattergun approach to health care regulation.
- AI systems used by courts or administrative bodies: This involves AI used in legal or administrative decision-making, which could range from risk assessment tools to automated decision systems used in areas from immigration to welfare eligibility. This category is also very broad and also cuts across federal, provincial, and municipal bodies. As Teresa Scassa observed in her blog examining the initial list of AI systems, this category “is confusing because it identifies the context rather than the tools as high impact”. She notes that this class “should perhaps be reworded to identify tools or systems as high impact if they are used to determine the rights, entitlements or status of individuals.”
- Assistance to peace officers: Any AI system that helps in law enforcement activities falls under this category. This could include predictive policing tools, facial recognition systems, or data analysis tools used in investigations.
The breadth of these categories suggests that the government is casting the widest possible net over subject matter. However, the expansiveness of the language makes it difficult to discern what will really be regulated. Moreover, these broad categories are not limited by the principles that apply to the designation of new AI systems (discussed below). As a result, there are still no guardrails limiting what, within these broad classes of systems, will be regulated.
The breadth of these categories will pose a challenge for AI developers, deployers, and users, who will not be able to assess when AIDA is enacted whether their AI systems will become subject to regulation or how they will be regulated. This uncertainty will be especially acute until the actual regulations are finalized. It could unnecessarily chill innovation in Canada, including by companies that may be apprehensive about making their systems available to Canadians.
If the government knows what it really wants to regulate initially it should refine the categories before AIDA is enacted. If, as may be the case, the government isn’t sure what it really wants to regulate, there should be no rush to pass AIDA.2.0. The public is entitled to understand what exactly is intended and to debate AIDA.2.0 on the merits before it is enacted.
Additional high impact systems that could be regulated
Under new section 36.1, the Governor in Council may, by regulation, amend the schedule by adding, varying or deleting a class or subclass of uses. In making a regulation, the Governor in Council must take into account:
- the risk of adverse impacts that the class or subclass of uses of artificial intelligence systems that is to be added, varied or deleted may have on the economy or any other aspect of Canadian society and on individuals, including on individuals’ health and safety and on their rights recognized in international human rights treaties to which Canada is a party;
- the severity and extent of those adverse impacts;
- the social and economic circumstances of any individuals who may experience those adverse impacts; and
- whether the uses in the class or subclass that is to be added, varied or deleted are adequately regulated under another Act of Parliament or an Act of a provincial legislature.
The addition of factors that must be taken into account in designating new AI systems as “high impact” is a significant improvement to the Bill. Notably absent from the list, however, are factors that would consider whether these systems are subject to like regulation among Canada’s trading partners, the economic or trade impacts of regulating the AI systems, and the potential effects on innovation.
Another problem with these amendments is that the factors are only factors; they do not actually prohibit regulation that fails to meet a factor or set out any weighting of the factors. For example, there is nothing prohibiting ISED from regulating an AI system that is adequately regulated under another Act of Parliament, a provincial legislature, or a municipal government.
Further, the factors still provide few practical limitations on the types of AI systems that could be regulated as high impact systems. The criteria for designating new classes of “high impact” AI systems allow a wide array of applications to be included under the regulatory umbrella, with no materiality threshold of risk of harm or oversight by Parliament.
AIDA still reflects a centralized and inefficient regulatory regime
As many critics of AIDA have argued, a fundamental flaw with AIDA is its centralized model that leaves all regulation of AI systems to ISED. This is not only an inefficient but also an ineffective regulatory regime.
The government may argue that it has overcome this criticism because under new Section 36.1 a factor to consider before a new AI system can be added is “whether the uses in the class or subclass that is to be added, varied or deleted are adequately regulated under another Act of Parliament or an Act of a provincial legislature.” However, this addition does not correct AIDA’s fundamental flaws.
First, the new addition is only a “factor” and is not a precondition to the addition of a new class of AI system that could be regulated. Thus, it does not prevent ISED from usurping regulatory authority over the AI system from another federal or provincial regulatory authority or agency.
Second, even if the additional language were changed to be a condition, the proposal would still be an inefficient and ineffective way of dealing with AI systems that are not adequately regulated by another regime. The best regulatory model is a hub and spoke model that uses existing regulatory agencies and tools. If an existing agency that regulates a sector is not adequately regulating the sector, it makes little sense to shift that regulatory authority to ISED. The much better approach is to provide the tools and knowledge to the agency to upgrade the regulation of the industry, but AIDA does not enable this. By way of example only:
- If Health Canada is not, in the sole opinion of the Minister, adequately regulating medical devices under the Food and Drugs Act, AIDA would not permit the Minister of Health to enact further regulations to regulate medical devices. Instead, this regulation would need to shift to ISED, fracturing the regulation of medical devices.
- If the Minister determined that other federal or provincial departments and ministries were not adequately regulating autonomous vehicles, the increased regulation would shift to ISED. This may, in fact, be part of what ISED envisages, as “autonomous driving systems” were specifically identified in the Companion Document as AI systems of interest to ISED.
- If ISED believes that financial institutions such as banks and insurance companies or credit unions are not being adequately regulated by OSFI or provincial counterparts, it could take over their regulation. This appears to be exactly what the government plans with its intention to regulate determinations in relation to services. Yet, OSFI and other regulators are much better placed, and know far more about financial institutions, than ISED when it comes to regulating financial institutions.
Third, the regulation of bias and discrimination using AI systems is now being grabbed by ISED rather than being left with the Canadian Human Rights Commission and similar agencies across the country. In fact, the predominant focus of the initial list of AI systems involves regulating AI systems for bias and discrimination. Yet, the proposal is to leave this to ISED rather than to commissions and bodies with expertise in dealing with these issues. This is a fundamental flaw in AIDA that likely cannot be fixed with new amendments. As I have argued in prior blogs, AIDA unnecessarily fractures the regulation of bias and discrimination. The much better approach is to have bias and discrimination issues addressed under the Canadian Human Rights Act. If amendments or regulations under the Act are needed, this should be done and should fall under the jurisdiction of the Minister of Justice and not the Industry Minister.
It may be argued to the contrary that these federal and provincial agencies and commissions do not have the expertise or resources to regulate AI systems including to address bias and discrimination at the design and development stage, or to engage in ongoing monitoring. This is expertise that will need to be developed and adequate resources invested to perform these activities. In my view, this expertise and resource building should be invested to build up – and keep in one place – the fight against bias and discrimination, whether visited upon the public by algorithms or human beings.
The same is true in other areas. If, for example, Health Canada needed additional expertise to regulate medical devices that use AI, this expertise should be built up within Health Canada. It has extensive experience in understanding the medical device ecosystem and it would be in a far better position to continue to regulate medical devices with that bolstered expertise than ISED.
Fourth, the AIDA amendments do nothing to realistically make the AI and Data Commissioner independent from ISED.
Fifth, there is no basis to believe that ISED has the knowledge, experience or personnel capable of regulating the multiplicity of applications and technologies which AI systems will pervade.
Sixth, the current proposal will likely require massive hirings by ISED, whose personnel will need to learn about a variety of sectors. The far better approach is the hub and spoke model that builds and enhances existing agency capacity. This makes the regulation part of, and able to work with, an overall regulatory regime that takes into account the goals and regulatory balances of the applicable law or regulatory regime.
AIDA lacks guiding principles
One of the key criticisms of AIDA was its affront to the principle of parliamentary sovereignty. The Minister attempted to overcome this widely held criticism by providing a list of initial high impact systems to be regulated and by providing criteria to govern what AI systems can be added. However, the substance of these criticisms is left unaddressed because, as noted above, the initial list of high impact systems is extremely broad and one cannot discern from reading Annex A what the government could regulate. The guidance in Annex B is helpful, but this guidance has no legal effect and does not constrain what can be regulated. Further, and as noted above, the Minister’s letter contains factors that must be taken into account in designating new systems as high impact systems. These factors do not apply to what will ultimately be regulated within the initial list of classes. Moreover, the factors enumerated are not conditions that must be met. As they are only factors, they do little to actually constrain what can be regulated.
Beyond this, the proposed amendments do not address that all the regulations that will apply to high impact systems (and general purpose systems and machine learning models that will also be regulated by AIDA.2.0) will not be constrained or guided even by general principles. Thus, the Minister has virtually unconstrained authority to regulate the most fundamental technologies of the generation without any Parliamentary control or oversight, whether direct or indirect, via guiding principles (other than for the addition of new AI systems that can be regulated).
I gave examples of guiding principles that are in the draft UK AI Bill in my appearance before the INDU Committee and my blog post on the topic.[ii] The failure to include at least principles to guide the regulatory process is still a major flaw in AIDA.
Regulation of general-purpose systems
The Minister’s proposed amendments would now regulate a new class of AI systems called “general-purpose systems”. A general-purpose system is defined as:
an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development.
This new category of AI system – which is subject to many new regulatory requirements – is extremely broad and open ended. This sweepingly wide definition would capture not only potentially high risk AI systems but also many low or no risk systems, potentially subjecting whole ecosystems of foundation models and generative AI systems to unnecessary and disproportionate regulation, including regulation of systems not subject to regulation in other jurisdictions such as the US or the EU under the AIA. Some examples of AI systems likely to be caught by the definition are set out in the endnote below.[iii]
There is, as of yet, no international consensus on the question of what types of general purpose AI systems should be regulated or how they should be regulated. However, our trading partners have not proposed anything as extensive as what is being proposed in AIDA.2.0.
The US Executive AI Order instructs federal agencies to develop security and safety standards for certain artificial intelligence applications. The order focuses on “dual-use foundation models,” which the order defines as powerful general-purpose models that present significant security risks. The order defines “dual-use foundation models” as meaning:
an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
The final definition in the AIA of general-purpose AI (GPAI) models has not yet been made available. The most recent publicly available (proposed) definition, which may have been amended as part of the political agreement, is:
an AI model including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is released on the market and that can be integrated into a variety of downstream systems or applications.
As can be seen, the definition of GPAI models in the AIA is much narrower than the proposed definition in AIDA. It contains concepts requiring training with a large amount of data using self-supervision at scale, a display of significant generality, and the capability to competently perform a wide range of distinct tasks.
Further, under the proposed amendments to the EU AIA, providers of general purpose systems have transparency requirements to downstream system providers. Model providers additionally need to have policies in place to ensure that they respect copyright law when training their models. Unlike in Canada, the full panoply of regulation in the EU is reserved only for those models that carry systemic risks.
GPAI models that could pose systemic risks are subject to more substantive regulation in the EU because they are very capable or widely used. Initially, these are general purpose AI models that were trained using a total computing power of more than 10^25 floating point operations (FLOPs). The threshold, which can be updated, is designed to capture the currently most advanced GPAI models, namely OpenAI’s GPT-4 and likely Google DeepMind’s Gemini. The distinction between GPAI models (which carry a lower level of regulatory compliance) and those which carry systemic risks was agreed to in the EU to avoid over regulation of GPAI models, something that Germany, France, Italy, and others were particularly concerned about.
Under AIDA.2.0 there would be numerous obligations on persons that make GPAI systems available or who manage their operations. No distinction is made in the regulatory obligations between systems that carry little, if any, risk (which would not be subject to any regulation in the US or EU), those that may carry some or limited risk, and those which may pose systemic risk. The impact of the proposed regulatory regime (summarized in the endnote below[iv]) will likely result in disproportionate regulation that could well impede innovation and Canadian access to some GPAI models.
Regulation of machine learning models
The Minister’s proposed amendments would now also regulate a new class of AI systems called “machine learning models”, which are defined as:
a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns
These amendments are also very concerning. Machine learning models are fundamental components of AI technology, used widely across industries and developed by a diverse range of entities. They serve as the backbone of many AI applications, including image recognition, natural language processing, recommendation systems, autonomous vehicles, and more.
Machine learning models are developed by a wide range of entities, including research institutions such as academic institutions and research organizations, tech companies like Google, Facebook, Microsoft, and Amazon, start-ups that focus on developing specialized machine learning models for specific applications or industries, and individual researchers and data scientists who may, for example, create machine learning models for personal projects or open-source contributions.
Machine learning models are made available through many different means. These include open source through platforms such as GitHub, commercial products or offerings, models that are integrated into AI development frameworks, and cloud service providers that offer APIs for various machine learning models enabling developers to integrate them into their applications.
The government’s proposal to regulate machine learning models would apply to all persons who make such models available for use for the first time for incorporation into a high impact system in the course of interprovincial trade and commerce. This is, effectively, regulation of a technology, rather than an application. As summarized in the endnote,[v] it would create obligations on whole ecosystems of developers including individual researchers and data scientists. It could also create significant regulatory burdens that could stifle innovation and result in the unavailability of machine learning models to Canadians.
Moreover, the decision to regulate machine learning models per se, is a major departure from the approach being taken in the EU under the AIA which does not directly target models themselves. Instead, it will seek to regulate the people and the processes for how organizations deploy their AI use cases, based on the risk classification associated with the AI system.
AIDA may stifle innovation and adoption of AI
The government wants to rush ahead with AIDA. It wants to do this even before there is any international consensus on the best ways to promote public safety and prevent harms while promoting innovation. This includes identifying the best regulatory model, which could include a mixture of voluntary compliance, reliance on and promotion of best practice management standards such as the recent ISO/IEC 42001:2023 standard, enforcement of existing laws and leveraging of existing regulatory models, or enactment of a horizontal, one-size-fits-all regulatory framework, which is what AIDA.2.0 is.
The government has placed a lot of emphasis on its regulatory approach being similar to the EU AIA. One can see this in the recent proposed amendments dealing with GPAIs and in the initial list of AI systems to be regulated. However, despite this and as illustrated above, there are major differences between what AIDA could regulate and what is covered by the AIA, including the initial systems to be regulated and the proposals with respect to GPAI and machine learning models. Further, there is still much to be done to finalize the AIA and its final contours are still to be negotiated. As Luis Alberto Montezuma pointed out, “The political agreement reached in trilogue does not in fact include a definitive legal text, nor is it set in stone. This makes upcoming ‘technical meetings,’ which will finalize the text before it is officially adopted by both the Parliament and Council, incredibly important, especially considering some countries still do not consider it a done deal.” It is thus risky for a middle nation like Canada to set rules based on a law that has not been finalized.
We must realize that AIDA will not only apply to Canadian organizations. Foreign entities seeking to make available or manage AI systems will have to navigate the Canadian rules. The territorial scope of AIDA will be assessed under the real and substantial connection test, a test adopted consistently by the Supreme Court of Canada and other courts, including in the privacy context. AIDA will therefore apply to all of the AI actors it regulates, including persons that make high impact systems or GPAI systems available or manage their operations, and persons who make machine learning models available for incorporation into a high impact system, as long as those activities occur in the course of international or interprovincial trade or commerce. AIDA’s scope, therefore, could impact whole international ecosystems of persons and entities. It will impose new governance obligations they may not otherwise be required to adhere to. For example, it would impose obligations on open source developers of GPAI systems and machine learning models, domestic and foreign, who because of their decentralized organization may face impractical barriers, whereas under the EU AIA providers of free and open-source models are exempted from most of the obligations.
While there are obvious benefits to protecting the public from harms visited on them by AI actors outside Canada, the potential concern that AIDA could hinder the deployment of AI systems in Canada because its rules are out of step with those of our largest trading partners, like the US and the EU, cannot be disregarded.
The regulatory framework governing AI will undoubtedly also affect investment in Canada in AI infrastructure and skills. While the EU is pressing ahead with legislation (a law that may put EU start-ups at a competitive disadvantage), the UK, according to a recent report quoting the UK Secretary of State, has confirmed that it will not legislate on AI until the timing is right, to avoid stifling innovation. The UK is working hard to understand the risks and is taking an evidence based approach; it would not “lurch to legislate” having “seen the impact that that can have”. This may well be the reason that Microsoft will spend £2.5 billion (US $3.2bn) to expand its next generation AI datacenter infrastructure in the UK and not the EU and, in part, may be why the UK is the tech hub of Europe, with an ecosystem worth more than those of Germany and France combined.
As pointed out in a recent article, the approach countries take to regulating AI can significantly affect their competitive advantages:
The regulatory approaches nations adopt for artificial intelligence (AI) can significantly influence their trajectory in the global order. Countries that eschew stringent AI regulations are posited to gain a competitive edge, potentially spearheading innovations and setting international AI benchmarks. This paradigm shift could allow for a more agile integration of AI in critical sectors, fostering environments where technological breakthroughs are not impeded by protracted legislative processes.
AIDA may intrude on or overlap with provincial jurisdiction
AIDA’s breadth will inevitably intrude into, or significantly overlap with, provincial jurisdiction. AIDA tries to limit its scope to activities carried out in the course of international or interprovincial trade and commerce, a federal head of jurisdiction. (There may be other bases for its jurisdiction.) Yet the reality is that AI systems, including machine learning models, will ubiquitously be included in products and services that will almost universally cross provincial and national borders or be offered or managed from public clouds that are accessible throughout Canada. Even the initial list of AI systems to be regulated, such as AI used in employment, to offer services, or for healthcare, covers matters that frequently fall within provincial jurisdiction.
Parliament has enacted comprehensive regulatory schemes before based on its trade and commerce powers. This has been the basis for federal laws on competition policy, bankruptcy and insolvency, intellectual property and for consumer protection including regulation of medical devices and dangerous consumer products. Parliament has increasingly been expanding its jurisdiction over digital technologies bolstered by the interprovincial and international uses of network based technologies. Examples are PIPEDA and CASL, and now the CPPA and AIDA.
The government will undoubtedly defend its jurisdiction to enact AIDA based on its trade and commerce power, pointing to the need to regulate AI systems that could cause harm. However, while AIDA has a definition of harm, there is a complete absence of any materiality threshold, which may call into question the need for a national law that could potentially regulate a vast range of ubiquitous products and services, whether or not they pose a likelihood of harm.
These developments are bound to result in challenges to jurisdiction and calls for a reformulation of the principles applied to the federal-provincial division of powers to take into account the expansive and intrusive potential of digital delivery of products and services. AIDA may well be the basis for this challenge. Parliament has never tried to regulate a specific and ubiquitous technology (such as electricity or the microchip), thus raising real questions as to how the trade and commerce power would be interpreted by the Supreme Court if AIDA is challenged on constitutional grounds.
Even if AIDA does not tread onto provincial jurisdiction (or, depending on the applicable regulations, invariably does), the government has not published anything on how the overlaps in regulatory authority will be coordinated. This is no small issue considering that the initial list of AI systems will predominantly regulate activities currently subject to provincial regulation.
New Transparency Obligations
The AIDA amendments include a new transparency obligation that would require individuals to be informed when an AI system is being used in certain circumstances. New Section 6 would read as follows:
Informing individuals of artificial intelligence system
6 (1) If it is reasonably foreseeable that, in the circumstances, an individual communicating with an artificial intelligence system could believe that they are communicating with another individual, the person who manages the system’s operations must ensure that the system, without delay, clearly advises the individual that they are communicating with an artificial intelligence system.
Exception — physical product
(2) The person need not comply with subsection (1) if
- the system is a consumer product, as defined in section 2 of the Canada Consumer Product Safety Act;
- every individual using the system needs to use a physical product to communicate with it; and
- a written statement is placed prominently on each such physical product or its packaging stating that, in using the product, the individual is communicating with an artificial intelligence system.
There are also proposed transparency obligations in connection with GPAI systems. In particular, before a GPAI system is made available to the public the person who makes it available for the first time must ensure that “best efforts have been made so that members of the public, unaided or with the assistance of software that is publicly available and free of charge, are able to identify the output as having been generated by an artificial intelligence system”, and “all measures prescribed by regulation have been taken so that members of the public are able to identify the output as having been generated by an artificial intelligence system”.
The new transparency obligation addresses current ethical considerations by ensuring that individuals are aware when they are communicating with an AI system. As AI systems become more sophisticated and indistinguishable from human operators, this transparency could become crucial for maintaining trust in digital ecosystems.
While this goal may be laudable and have positive benefits, it could also be impractical or difficult given the state of the art of watermarking and the pervasive use of AI in products. First, as Martin Ebers pointed out, watermarking techniques still have technical limitations and drawbacks in terms of technical implementation, accuracy, robustness, and standardisation. Second, the sheer volume and variety of AI interactions may make it difficult to enforce consistently. Users frequently encounter AI in contexts where such disclosure may interrupt or complicate the interaction. Third, as AI becomes more integrated into daily life, its presence in these interactions could become so commonplace that users may not require or even appreciate constant reminders that they are not interacting with a human. This obligation could potentially lead to ‘warning fatigue,’ where users become desensitized to the notifications.
The need for this new transparency obligation may be rooted in a historical bias against AI, where there is a clear distinction between human and machine interaction. This bias may reflect concerns about deception or the authenticity of interactions. However, as AI systems improve and public familiarity with them grows, societal norms are likely to evolve, and interactions with AI systems could become as accepted as interactions with human-operated systems. Consumers may come to understand and accept that many routine communications are AI-powered, considering them a standard part of the technological landscape.
A way of addressing these concerns could be to add the possibility of further exceptions in Section 6(2) should these concerns materialize.
The transparency obligations also do not include the protection for copyright owners under the EU AIA. Although the final text is not yet available, the compromise agreement reached in the EU would include a newly introduced article on “Obligations for providers of general-purpose AI models”. This would include two distinct requirements related to copyright.
First, providers of GPAI models will be required to “put in place a policy to respect Union copyright law, in particular to identify and respect, including through state of the art technologies where applicable, the opt out reservations of rights under Article 4(3) of Directive (EU) 2019/790”. According to a recent article, the recitals also contain language that appears to make EU copyright law apply to the training of AI systems, even if this training activity takes place outside of the EU.
Second, providers of GPAI models would be required to “draw up and make publicly available a sufficiently detailed summary about the content used for training of general-purpose AI models”.
The new transparency provisions in AIDA.2.0 also do not address the harms associated with the use of individuals’ names, likenesses, or voices by AI systems, which are increasingly being used to disseminate fake audiovisual images, including pornographic videos, or for cyberbullying, the circulation of false information about the individual, and other types of deepfake recordings and videos. These threats are increasingly being visited upon public figures and celebrities as well as ordinary individuals.
AIDA will create duplicative regulatory regimes and contain disproportionate penalties
I explained in a prior blog post how AIDA would
- create obligations for AI actors that seek to use anonymized information for AI system purposes that conflict with the new provisions in the CPPA that deal with anonymization;
- create double jeopardy risks under the CPPA, CCPSA, and AIDA; and
- treat offences under AIDA much more harshly than offences under analogous legislation such as the CCPSA, the Food and Drugs Act, and other hazardous products laws, or than sanctions for violating the Canadian Human Rights Act.
None of these criticisms have been addressed by AIDA.2.0.
Final remarks
AIDA, as introduced into Parliament, is nothing but a shell of a law. It passed First and Second reading in that form. The INDU Committee is now halfway through its hearings on Bill C-27, hearings which already included many witnesses whose appearances focused on AIDA before AIDA.2.0 was made public (including my appearance).
The Committee is now faced with assessing, in a very short period of time, new amendments that will regulate one of the most transformative technologies of our time. What Parliament does to regulate AI could have far-reaching implications for public safety and other potential harms, and for innovation. We must get this right. This means taking adequate time for all stakeholders to properly assess the policy and technical aspects of the amendments. This will not be a short exercise, as there are so many policy issues associated with regulating AI. The reality was recently observed by Bianca Wylie and Martin McDonald in an article published by the think tank CIGI:
The bill is a poorly conceived and rushed piece of attempted legislation. As many have argued, AIDA is so weak, both due to the process used to create it and its governance constructs, that it should be stopped entirely.
The government may well have good reasons for the policy choices in the proposed amendments including decisions to depart materially from the EU AIA with respect to the regulation of GPAI systems and machine learning models and content moderation and prioritization. It is quite possible, however, that further detailed study will reveal that the government may have jumped the gun on amendments incorrectly anticipating how the EU would land on key regulatory choices.
The wording of the EU AIA is still being finalized. Other countries, including the UK, Australia, Japan, and Singapore, and other members of the G7, like the US, are still studying the best ways to regulate AI systems. As such, is there a need for a middle country like Canada to move at breakneck speed to enact a law that could, and likely will, be out of step with those of our major trading partners, particularly when there has been scant public consultation and debate in Canada about how AI systems should be regulated? I think not.
You can watch my appearance at the INDU Committee on Bill C-27 including AIDA at this link.
[i] A good summary of the amendments can also be found on Teresa Scassa’s blog (teresascassa.ca).
[ii] The Principles in the UK AI Bill were:
- AI Ethical principles: regulation of AI should deliver safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress;
- any business which develops, deploys or uses AI should be transparent about it; test it thoroughly and transparently; comply with applicable laws, including in relation to data protection, privacy and intellectual property;
- AI and its applications should comply with equalities legislation; be inclusive by design; be designed so as neither to discriminate unlawfully among individuals nor, so far as reasonably practicable, to perpetuate unlawful discrimination arising in input data; meet the needs of those from lower socio-economic groups, older people and disabled people; and generate data that are findable, accessible, interoperable and reusable; and
- a burden or restriction which is imposed on a person, or on the carrying on of an activity, in respect of AI should be proportionate to the benefits, taking into consideration the nature of the service or product being delivered, the nature of risk to consumers and others, whether the cost of implementation is proportionate to that level of risk and whether the burden or restriction enhances international competitiveness.
[iii] Examples of general purpose systems likely to be caught by the proposed AIDA amendments include the following AI systems.
- Large language models (LLMs) like OpenAI’s GPT (Generative Pre-trained Transformer) conversational platforms, which are capable of generating human-like text and are now pervasively used in content generation, chatbots, and automated writing. These types of models are designed with adaptability in mind, capable of assisting users in numerous domains, from answering queries to generating content. While hugely useful, these models can produce biased or discriminatory responses and be used to disseminate misinformation.
- Reinforcement learning agents that are integrated into a variety of systems like robotics. AI-driven robots are designed for various tasks, from manufacturing to healthcare, and could perform roles beyond their initial purposes.
- AI-powered language translation services are tools used across diplomacy, healthcare, and business to facilitate communication across language barriers.
- Speech recognition systems are found in numerous applications including aviation and smart assistants and are engineered to understand and respond to spoken language commands across many vertical industry sectors.
- AI-integrated IoT platforms serve as hubs for various Internet of Things (IoT) applications, including smart homes and industrial automation.
- AI-powered search engines.
- AI-integrated humanoid robots will be used in entertainment and customer service applications to adapt to various roles.
- AI-Generated Graphic Design systems can be used to generate designs, graphics, and logos for use in marketing, advertising, and branding across various industries.
- AI-Integrated Virtual Reality (VR) Environments or VR systems powered by AI can be used to provide immersive experiences in applications ranging from education to gaming to architectural visualization.
- AI-Powered E-commerce Recommendation systems utilize AI algorithms for product recommendations. These systems adapt to diverse customer preferences and product categories, serving various purposes in online retail applications.
- AI-Integrated Health and Fitness Apps with AI-driven features offer adaptable solutions for tracking exercise, nutrition, and wellness across many different potential health and fitness uses.
- AI-Generated Text Summarization Tools like Microsoft Copilot can be applied across multiple fields, from journalism to research, to quickly distill lengthy documents into concise summaries.
- AI-Powered Data Analytics Platforms can be adopted to analyze data across many industries, including finance, healthcare, and marketing.
- AI-integrated document management and OCR systems efficiently handle data across multiple industries.
- AI-Generated Art systems, which can be used to generate art such as paintings and digital designs, are inherently versatile and adaptable for various artistic expressions.
- AI systems designed for creative expression include general purpose AI systems such as music generation models, content remixing systems, logo and graphic design systems, and pattern and texture generation tools.
[iv] Here is a summary of the proposed regulatory approach for GPAI models in the proposed AIDA amendments.
Before these systems are made available for the first time in international or interprovincial trade and commerce, measures must be in place for data used in their development, along with assessments of potential adverse impacts, risk mitigation measures, and testing of their effectiveness. These systems must also incorporate features for human oversight, provide plain-language descriptions, and ensure that AI-generated output can be identified by the public. Compliance with these requirements will be assessed by qualified third parties.
Additionally, stringent record-keeping is mandated, covering compliance, data, processes, and any other prescribed records.
For systems already in existence before these requirements come into force, there is a grace period to align with the new regulations.
When making these general-purpose AI systems available, plain-language descriptions must be provided to users or published on publicly accessible websites, accompanied by any measures prescribed by regulations. Changes to these systems that maintain their general-purpose nature are treated as distinct systems, based on criteria such as varying risks or effectiveness.
For those managing the operations of these systems, various responsibilities include ensuring compliance, assessing and mitigating risks, conducting effectiveness tests, enabling human oversight, addressing suspicions of serious harm, and maintaining comprehensive records of compliance.
Compliance with the obligations outlined for individuals and entities involved in making general-purpose AI systems available, or in managing them, presents a set of potential multifaceted challenges. These challenges vary depending on the size and scope of the organizations involved, with smaller, entrepreneurial companies that offer lower-risk AI models facing more difficulties.
For larger technology companies like OpenAI, Google, or Meta, which produce potentially dual-use foundation models and systems with a higher potential for harm, there may be more resources and capacity to navigate the complex regulatory framework. These companies may be better equipped to allocate the necessary personnel and funding for compliance efforts. However, they too face the challenge of ensuring that their extensive AI deployments align with the regulations, which can be logistically complex and resource-intensive.
In contrast, smaller entities and startups that offer lower-risk AI models face several distinct challenges when striving for compliance. Resource constraints are a significant issue, as smaller companies often have limited financial and human resources. Allocating personnel and funding for compliance efforts can be particularly burdensome, potentially slowing their growth and competitiveness in the AI market. Additionally, these organizations may lack the legal and regulatory expertise required to navigate complex compliance requirements, necessitating investments in legal counsel or regulatory consultants. Meeting record-keeping obligations, such as documenting data usage and compliance measures, can be resource-intensive and divert resources from research and development. The compliance costs and legal requirements may also create a disadvantage for smaller entities compared to larger tech companies, potentially hindering their ability to compete effectively and impacting their innovation pace. Scaling challenges may also arise as smaller companies aim to expand while complying with evolving regulatory requirements, further impeding their growth prospects.
[v] The proposed regulatory framework for machine learning models is as follows:
The proposed regulatory framework for machine learning models under the Minister’s proposed amendments entails several requirements on persons who make the models available for the first time for incorporation into a high impact system. These obligations are independent of those that apply to persons who make high impact systems available.
Before a machine learning model is made available for incorporation into a high-impact system in international or interprovincial trade and commerce for the first time, the responsible party must establish measures regarding data used in model development, establish measures to mitigate biased output risks, and prepare a model card containing specified information. Compliance with regulations and record-keeping obligations are also mandated. For existing machine learning models, there is a grace period to align with these requirements.
The regulatory framework for machine learning models proposed under AIDA.2.0 presents several specific challenges for individuals and entities engaged in making these models available to the public. These challenges extend to a diverse range of actors, including academic researchers, startups, and large tech companies.
First, there is the issue of compliance costs, as adhering to regulatory requirements can be financially burdensome. Smaller players, such as startups with limited resources, may find it particularly challenging to meet these compliance costs, potentially affecting their ability to innovate and compete. Developers of open source models may not be organized to be able to comply because of their decentralized nature.
Additionally, the complexity of the potential regulations is a concern. The framework may introduce intricate compliance procedures, documentation demands, and quality control measures. These complexities could prove cumbersome for developers, especially those lacking expertise in legal or regulatory matters.
Record-keeping obligations represent another challenge. Maintaining comprehensive records concerning data, processes, and compliance can be time-intensive and resource-draining, potentially diverting valuable resources away from research and development activities.
Preparing model cards, as required by the regulations, adds an extra layer of complexity. Ensuring the accuracy and completeness of these cards may require considerable time and effort, which could be particularly challenging for individuals or organizations less familiar with regulatory requirements.
Furthermore, there is a potential impact on research and innovation. Academic researchers often share machine learning models and research findings openly. Collaboration and knowledge sharing within the AI community may be affected by the proposed regulations, as developers may become more cautious about sharing their models and findings due to concerns related to compliance and liability.