White House lays out early framework for regulating AI development, growth

WASHINGTON — U.S. President Joe Biden enacted sweeping new ground rules and guardrails Monday for the growth and development of artificial intelligence, leaving room for what Canadian experts hope will be a careful but complementary approach from Ottawa.

The executive order, billed as the single most comprehensive government action on AI in the technology's history, covers a broad array of areas, from public safety and security to defending human rights.

To both realize AI's promise and avoid its many perils, the technology needs to be carefully regulated, Biden told a signing ceremony in the East Room of the White House.

"We face a genuine inflection point in history — one of those moments where the decisions we make in the very near term are going to set the course for the next decades," he said.

"There is no greater change I can think of in my life than AI presents as potential: exploring the universe, fighting climate change, ending cancer as we know it and so much more."

Canada was among several key U.S. allies that the White House consulted with in recent months as it developed the new framework, Innovation Minister François-Philippe Champagne said in a statement.

"I have been engaged in frequent dialogue with our international partners, including within the G7, to ensure the responsible use of AI globally," said Champagne, who was on his way to this week's AI summit in London.

Canada released its own national AI strategy in 2017, and last month Ottawa launched a new voluntary code of conduct for the development of advanced AI systems, he added.

"We are working with companies and experts around the world to transform AI from fear to opportunity."

Monday's order requires that AI developers share the results of their research and safety tests with the government when that work delves into areas that pose a risk to national security, public safety or the health of the U.S. economy.

It establishes a new AI Safety and Security Board under the auspices of the Department of Homeland Security that will assess potential threats to critical infrastructure, as well as any chemical, biological, nuclear or cybersecurity dangers.

The Department of Commerce will be tasked with creating new rules for watermarking and content authentication to mitigate the ever-advancing perils of AI-generated material that can be used for disinformation or fraud.

Biden — reportedly both captivated and concerned by the technology's potential in recent months — strayed briefly from his script Monday to describe how "deepfake" videos can be produced with only a few seconds of authentic material.

"I've watched one of me on a couple of occasions," he said. "I said, 'When the hell did I say that?'"

In some ways, governments are acting on lessons learned back in the early part of the century, when social media was unleashed on the public, said Charles Eagan, chief technology officer with Canadian tech icon BlackBerry Ltd.

"There's a number of examples where we've deployed technology, and then we've tried to retrofit trust or order into it," Eagan said in an interview.

But while governments are taking more timely action now, they're not exactly ahead of the curve, he added: AI in its various forms is already widely deployed, influencing human behaviour and generating authentic-looking content.

"This is happening sort of behind the scenes in a non-transparent way, and I think it does amplify the need for transparency," Eagan said.

Public awareness will also be key — particularly when it comes to what he calls "digital exhaust," where people are only now beginning to truly grasp how their personal information is being collected and used.

"Some of the examples of technology being deployed without the checks and balances has led us to maybe be a little bit more careful."

Mark Daley, who earlier this month began a five-year term as Western University's first-ever chief AI officer, acknowledged the challenge of walking the fine line between the technology's promise and its potential dangers.

"It is fiendishly difficult to get the right balance between taking the very real societal safety concerns seriously, while at the same time not hamstringing innovation," Daley said.

"Everyone's trying to titrate this and find the right balance ... and I fully expect that this policy will evolve, based on feedback from society."

Canada has already taken some preliminary steps that are broadly in line with the direction the U.S. is taking — which Daley said should give Ottawa some latitude in avoiding the risk of suppressing innovation.

Such an approach would be appropriate, he added, considering the role Canadians have played in developing some of AI's foundational technology, among them former University of Toronto computer science professor Geoffrey Hinton.

"There is an opportunity for Canada to be maybe five degrees more towards the innovation side of things," he said.

"And I think that's appropriate, because Canada is partially the birthplace of the deep-learning technology that is so attractive right now."

Hinton, dubbed by some the "godfather of AI," famously quit his job at Google earlier this year to speak out about what he considers the dangers of the rapidly evolving technology falling into the wrong hands.

"The idea that this stuff could actually get smarter than people — a few people believed that," Hinton told the New York Times earlier this year. "I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Others fear Canada in particular is moving too slowly when it comes to regulating the growth of AI.

Yoshua Bengio, a prominent Canadian AI pioneer who like Hinton has been warning about the dangers of its unchecked growth, recently lamented the slow pace of Bill C-27, legislation designed in part to regulate the technology.

The bill passed first and second reading in the House of Commons earlier this year but has been mired in the committee review stage ever since.

Monday's order also acknowledges the disruptive power of AI in the workplace, and aims to better protect the rights of workers against over-surveillance and algorithmic bias, and to provide retraining opportunities for displaced workers.

But as much as the order focuses on AI's potential dangers, it also seeks to capitalize on the technology's promise, especially in a country already billing itself as a global leader in raising venture capital for AI startups.

The National AI Research Resource will provide grants, data and other resources in Biden administration priority areas like climate change and health care, and give smaller developers a leg up in commercializing their breakthroughs.

And new rules will streamline the country's ability to attract highly skilled immigrants and foreign nationals with critical AI expertise, fast-tracking their arrival to study, stay and work in the U.S.

This report by The Canadian Press was first published Oct. 30, 2023.

James McCarten, The Canadian Press