OTTAWA — As the federal government explores new ways to use artificial intelligence, it’s also being warned to keep the technology away from criminal justice, policy-making and hiring.
Then-president of the Treasury Board Anita Anand announced in April that the federal government is drafting an AI strategy, due to be released this spring.
The government ran consultations on the strategy last fall and received almost 300 submissions from researchers, unions, Indigenous organizations and others on how AI could be used in the public service.
A report published late last month shared what they had to say.
Some of the people Ottawa consulted said AI chatbots could deliver “quick and accurate” government information to Canadians and translate text so that government communications are readily available in French and English.
Some suggested using AI for tasks like monitoring public health, the environment and the economy, processing applications for social services, reviewing legal documents and helping to detect and prevent “fraudulent activities.”
AI could help out with administrative tasks like creating and sorting documents, managing group email inboxes and drafting responses to common questions, some respondents said.
Some also proposed using AI to “analyze public sentiment” on social media and collect feedback from public consultations to help inform policy and program development.
The federal government has used AI for decades for tasks like analyzing satellite imagery and forecasting. It has also used it to predict the outcomes of tax cases and to sort temporary visa applications.
Public Services and Procurement Canada is expanding its use of artificial intelligence to help clear a backlog of transactions in the Phoenix public service payroll system.
While AI is being used already to recruit public servants, the report says some respondents think it could “streamline” the process by screening applications, scheduling interviews and exams and developing training programs.
The report says some respondents warned Ottawa not to deploy AI in more sensitive areas of government like criminal justice, including sentencing and parole decisions.
They also said AI should not be used for surveillance, political decision-making or determining an individual's eligibility for social services. Many participants also raised concerns about using AI to make decisions about hiring, promoting or firing public servants.
The consultation report said public servants “at all levels” need AI training.
Adegboyega Ojo, a professor and Canada Research Chair in Governance and Artificial Intelligence at Carleton University, said the areas the report earmarked for AI use are “relatively safe.”
He agreed that the areas listed as no-go zones for AI are “sensitive and require extreme caution,” though he said AI could still be helpful in some of those areas “if ethical and responsible practices are followed.”
“The idea that AI could independently develop policies is unrealistic and overlooks the political dynamics that shape policy-making,” Ojo said.
“However, AI can enhance the collection and analysis of evidence, as well as the modelling of policy impacts on different groups, including marginalized communities.”
Ojo said AI also can help determine if someone is eligible for social services by automating tasks like triaging applications. He said the key is to ensure safety measures are in place and humans are the “final decision-makers.”
Fenwick McKelvey, an associate professor in information and communication technology policy at Concordia University and the co-director of the Applied AI Institute, said AI-driven chatbots could be helpful for things like filing taxes.
“Chatbots are part of the future of how Canadians will be using the internet for better or worse, and I think that that at least acknowledges trends in the public sector towards more user-centred design,” said McKelvey, who took part in the government consultations.
McKelvey said he agrees with the consultation report’s list of no-go areas for AI and he’d like to see the government continue to explore different risks and ways to avoid reinforcing biases.
“I think you want to focus on applications where it’s very clear the public benefits,” he said. “I think we too often focus on thinking about how are they going to integrate existing tools like ChatGPT in the civil service, which really puts the government as just a client, where really it could be building and imagining next steps for computer infrastructure.”
— With files from Anja Karadeglija.
This report by The Canadian Press was first published Feb. 7, 2025.
Catherine Morrison, The Canadian Press