A public servant within the department of innovation may soon be named the country’s new Artificial Intelligence and Data Commissioner, tasked with helping oversee the use of AI technology in the private sector.
Civil society organizations and researchers working in the field say it’s crucial this officer be independent, rather than embedded within the department, in order for AI to be effectively regulated.
Last year, the Liberal government introduced Bill C-27, a three-part piece of legislation also known as the Digital Charter Implementation Act.
The third part introduces new rules for businesses’ use of artificial intelligence, including obligations around the assessment of AI systems, mitigating the risks of harm or biases, and record keeping. Parts 1 and 2 focus on privacy.
Drafted by Innovation, Science and Economic Development Canada (ISED), the proposed legislation also states that the minister may designate a senior official of the department to become an Artificial Intelligence and Data Commissioner, who would be responsible for assisting the minister in the administration and enforcement of that part of the bill.
CONCERNS WITH BILL C-27
In an open letter on the Artificial Intelligence and Data Act to the department’s minister, François-Philippe Champagne, a group of civil society organizations, including the Canadian Civil Liberties Association (CCLA) and the Public Interest Advocacy Centre, wrote it would be “inappropriate” for the regulation of AI to fall completely under the jurisdiction of ISED.
The letter added that placing an Artificial Intelligence and Data Commissioner under the department would further undermine “the independence and effectiveness of oversight.”
Having the commissioner under ISED could lead to private interests being considered over human rights, given the department’s goals of improving conditions for investment and increasing Canada’s share in global trade, said Daniel Konikoff, interim director of the privacy, technology and surveillance program at the CCLA. He worries that data protection and transparency may be jeopardized as the department prioritizes innovation and profit.
“I think that we run the risk of having a potential for certain priorities to be emphasized in the administration of artificial intelligence regulation compared to others,” Konikoff said. “Coming at this from a civil society perspective, we want to make sure that AI is being used in a rights-respecting way.”
Other concerns shared in the open letter—signed by 19 organizations as well as researchers, professors and legal experts—included definitional gaps in the bill and a lack of consultation before the tabling of the act. Its signatories also said they were seriously concerned about lumping the privacy portions of Bill C-27 together with AI regulations, recommending the latter be considered in a separate process.
Other recommendations made to Champagne included recognizing privacy as a fundamental human right, committing to consulting with stakeholders and expanding regulation around the technology to the public sector, including government agencies.
CODE OF CONDUCT FOR AI IN CANADA
The proposed bill comes at a time when the government is working to address anxiety over the proliferation and pace of AI development.
At a national AI conference in Montreal on Wednesday, Champagne unveiled a voluntary two-page code of conduct for the use of advanced generative AI systems in Canada, complementing Bill C-27.
The self-imposed guardrails will “build safety and trust as the technology spreads,” Champagne told a crowd of techies at the All In conference, where Canadian technology companies including OpenText and Cohere pledged to sign on.
The document lays out measures organizations can take when working in advanced generative AI — the algorithmic engine behind chatbots such as OpenAI’s ChatGPT, which can spit out anything from term papers to psychotherapy.
“We have seen technology advancing at what I would say is lightning speed,” Champagne said. “The mission we should give ourselves is to move from fear to opportunities.”
GOVERNMENT VS. PRIVATE SECTOR LEGISLATION
Benoit Deshaies, director of data and artificial intelligence for the Office of the Chief Data Officer of Canada, said the bill does not apply to federal government operations: if the Privacy Act applies to an organization, the bill does not.
Deshaies said there are “certain key differences” between how people interact with government versus how they interact with corporations, which drives why there are differences in the legislation for government operations versus the private sector.
Christelle Tessono, a technology policy researcher who signed the open letter, said excluding the public sector creates discrepancies in how AI is regulated, adding that governments should be held accountable in the same way as industry.
“Right now, the government has the directive on automated decision-making systems which is not easily enforceable and doesn’t apply against specific actors in government, like RCMP, national security and the Department of National Defence, who are purchasing and using AI systems a lot whether it’s for hiring in their specific departments or for surveillance technology,” Tessono said. “It’s important to hold everyone in government and industry to the same standard.”
HOW TO TACKLE AI COMMISSIONER INDEPENDENCE
Tessono also said it’s crucial the commissioner be independent from ISED to encourage accountability. “Right now the bill doesn’t set out in detail what types of powers the commissioner would have, so really, it’s very opaque as an accountability system.”
The level of independence of the officer will impact the level of discretion that they would have when conducting audits and enforcing the act, she added. “(If) their powers are relegated at the discretion of the minister, it really depends on the type of government and how they want to enforce the act that will define how powerful the commissioner will be.”
Tessono said both the United States and Europe are developing independent approaches: the Americans are empowering the existing Federal Trade Commission, with additional staff, to oversee policy, while the Europeans are establishing the European Artificial Intelligence Board, composed of representatives from each European Union member state.
In Canada, Tessono said it would be most beneficial for the Privacy Commissioner to take on some parts of the role, with someone in the office dedicated to understanding the privacy impacts related to AI and the wide range of human rights impacts that emerge from its use.
“What’s important is that they have the appropriate powers to investigate, conduct audits and (enforce) fines as well,” Tessono said.
Konikoff said two solutions to encourage independence of the role would be to either create a new tribunal to enforce the Artificial Intelligence and Data Act or have the country’s Privacy Commissioner take on the role and act as a regulator.
The House of Commons Industry and Technology Committee began its study of the bill this week. Thirteen meetings are to follow, according to the committee’s chair, Joël Lightbound.
With files from The Canadian Press.
Copyright Postmedia Network Inc., 2023