Artificial Intelligence (AI) has quickly evolved from a science fiction concept to a business buzzword, with real-world applications emerging at a rapid pace in recent years.
Once the stuff of dystopian blockbusters, AI burst into the mainstream last year when tools like DALL-E 2 and other image generators let anyone create lifelike (and sometimes controversial) visuals with just a few keywords.
Since then, the conversation around AI and machine learning (ML) has only intensified. The arrival of ChatGPT has broadened the range of possible uses to include written and even spoken content—raising concerns among creative professionals about what this could mean for their work.
As AI’s role in society has shifted from novelty to practical tool, artists, politicians, and business leaders have all voiced concerns.
AI concerns: From jobs to security
Some critics argue that AI, like other forms of automation, is taking jobs away from people who need them. While this concern is often raised, there’s little concrete evidence to support it—especially considering the many new roles expected to emerge as AI adoption grows.
Looking closer, the main worries about AI center on data privacy and security. For AI to work effectively, it needs to process large amounts of relevant data. Many of the most controversial AI tools gather their data from across the internet, often without proper permissions or clear attribution.
This makes regulating AI adoption a real challenge. Lawmakers in the United States and Canada are not only struggling to keep up with the pace of AI innovation—they’re also lagging behind other countries in setting up comprehensive data protection rules.
The EU leads the way with the Artificial Intelligence Act
While the European Union set the global standard with the General Data Protection Regulation (GDPR) in 2016, there’s still no equivalent, up-to-date law in the U.S. or Canada (though any company handling the personal data of people in the EU must comply with GDPR, regardless of where it operates).
The EU is also ahead of the U.S. and Canada in proposing AI-specific data regulations, having introduced the Artificial Intelligence Act (AIA) in April 2021. Like GDPR, the AIA has pushed other governments to consider how they’ll regulate AI without stifling innovation.
As a result, lawmakers in North America have put forward proposals on both sides of the border, hoping these will shape future laws to ensure AI is used safely and responsibly.
Canada’s Artificial Intelligence and Data Act (AIDA)
In June 2022, the Canadian government introduced Bill C-27, also known as the Digital Charter Implementation Act, 2022, which pairs renewed data protection rules with regulation for AI adoption.
In addition to the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA), Bill C-27 introduces the Artificial Intelligence and Data Act (AIDA), which would be the first piece of legislation in Canada to regulate the development and deployment of AI systems in the private sector.
In many ways, Bill C-27 parallels the Blueprint for an AI Bill of Rights that the US released in October 2022 (more on that later): both call for a wealth of consumer protections, and AIDA goes further by giving the government the authority to directly audit, or intervene in, AI systems in production. Unlike the US Blueprint, however, which largely amounts to a wishlist from the White House, AIDA is part of a legislative package (Bill C-27) that is already on a path to becoming law.
The legislation sets out AIDA’s purpose as follows:
- To regulate international and interprovincial trade and commerce in AI systems by setting common requirements for the design, development, and use of these systems across Canada; and
- To prohibit certain actions related to AI systems that could cause serious harm to individuals or their interests.
AIDA also defines “harm” as (a) physical or psychological harm to a person, (b) damage to a person’s property, or (c) financial loss to a person.
AIDA will apply to anyone engaged in a “regulated activity,” which the law defines as:
- Processing or making available any data about human activities for the purpose of designing, developing, or using an AI system;
- Designing, developing, or making available an AI system, or managing its operations.
Exactly what these “persons” will be responsible for is still somewhat unclear. AIDA includes language about reducing risks of harm and bias from “high-impact” AI systems, but it doesn’t define what “high-impact” means. That definition will need to be nailed down as AIDA moves closer to becoming law.
In general, anyone overseeing high-impact AI must set up processes to identify, assess, and reduce risks of harm or bias that could result from using the AI system. They also need to implement ways to monitor both compliance with and the effectiveness of those risk-mitigation measures.
The act also requires greater transparency about AI, especially when it comes to consumer data. For example, if an AI system is made available for use, the responsible party must publish a plain-language description of the system on a public website, explaining:
- How the system is intended to be used;
- The types of content it’s designed to generate;
- The types of decisions, recommendations, or predictions it’s designed to make; and
- The risk mitigation measures in place.
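AIDA itself doesn’t prescribe a format for these descriptions, but keeping the four required elements as a structured record makes it easier to generate the public summary consistently. Here’s a minimal, purely illustrative sketch in Python; the class, field names, and example values are hypothetical, not anything defined in the act:

```python
from dataclasses import dataclass


@dataclass
class AIDADisclosure:
    """Hypothetical record mirroring AIDA's four plain-language
    disclosure requirements. Field names are illustrative only;
    the act defines no machine-readable schema."""
    system_name: str
    intended_use: str            # how the system is intended to be used
    content_types: list[str]     # types of content it's designed to generate
    decision_types: list[str]    # decisions, recommendations, or predictions it makes
    risk_mitigations: list[str]  # risk mitigation measures in place

    def to_plain_language(self) -> str:
        """Render the kind of public, plain-language summary AIDA calls for."""
        return (
            f"{self.system_name} is intended to be used for {self.intended_use}. "
            f"It is designed to generate {', '.join(self.content_types)} "
            f"and to produce {', '.join(self.decision_types)}. "
            f"Risk mitigation measures in place: {'; '.join(self.risk_mitigations)}."
        )


# Illustrative example with made-up values:
summary = AIDADisclosure(
    system_name="Invoice Triage Assistant",
    intended_use="routing supplier invoices to the correct review queue",
    content_types=["short text summaries of invoices"],
    decision_types=["routing recommendations"],
    risk_mitigations=["human review of low-confidence routings",
                      "quarterly bias audits"],
).to_plain_language()
print(summary)
```

Keeping the record separate from the rendered text also gives teams a single place to update when, say, risk mitigation measures change.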
Once AIDA is passed, it will also designate a Minister with broad enforcement powers. These include ordering organizations using high-impact AI to:
- Produce records
- Complete an audit, or hire an independent auditor to conduct one
- Implement any measures specified in an audit report
- Stop using or offering the system if it poses a serious and imminent risk of harm
- Publish certain audit information on a public website, as long as it doesn’t reveal confidential business details
The U.S. proposes a Blueprint for an AI Bill of Rights
While Canada’s proposed legislation is vague on certain definitions (e.g., what specifically qualifies as “high-impact” AI), its clear language on penalties and enforcement, together with its connection to broader data protection rules, makes it a far more actionable plan than anything put forward in the U.S. so far.
That said, the new US AI framework draws on themes from earlier international data privacy laws, while placing a stronger focus on social justice and equity, areas many experts say have been overlooked until now.
The Blueprint is built around five pillars that any organization developing or using AI should follow:
- Safe and Effective Systems: People shouldn’t be exposed to untested or poorly designed AI systems that could lead to unsafe outcomes—whether for individuals, specific communities, or organizations using personal data.
- Algorithmic Discrimination Protections: In short, AI systems shouldn’t be designed with bias baked in, and they shouldn’t be deployed until they’ve been assessed for potential discrimination.
- Data Privacy: Organizations must avoid abusive data practices, and the use of surveillance technologies must be kept in check.
- Notice and Explanation: People should always know when and how their data is being used, and how it might impact decisions about them.
- Human Alternatives, Consideration, and Fallback: People should be able to opt out of automated systems in favor of a human alternative where appropriate, and have access to a real person who can consider and fix any problems they encounter.
No single government can protect data and promote innovation on its own.
The main takeaway for business leaders exploring AI is to proceed with extreme caution. While there aren’t many specific rules yet, regulators are paying close attention and working on new guidelines.
Startups, in particular, stand to gain if they can responsibly leverage AI to automate processes, scale operations, and accelerate innovation. This is especially true in R&D, where teams might use AI for quality control or even in place of human practitioners, a meaningful advantage for companies that are just getting started.
Knowing how to define your use of AI in the context of R&D takes real expertise—especially when it comes to identifying which activities may qualify for tax credits or government grants that founders can leverage to extend their runway.
To learn more about business growth strategies and how your team can access non-dilutive R&D funding, book a call with Boast today.