
Britain to crack down on AI chatbots scraping people’s information

Britain is to crack down on artificial intelligence (AI) companies collecting data without consent, amid fears that chatbots are scraping information on their users without permission.

AI companies have been warned by Britain’s official information watchdog that they could be fined if they fail to get consent for gathering people’s personal data.

The information commissioner has told companies using generative AI technology they are subject to data protection law, which means they must gain consent or show there is a legitimate interest for collecting the personal information.

Regulators are understood to be increasingly concerned at the privacy implications of the explosion in generative AI, led by firms like OpenAI with its popular ChatGPT model.

The concern relates not only to the personal data gathered from individuals using large language models like ChatGPT but also to the companies’ scraping of huge amounts of data from across the internet, some of which is personal.

Companies including Amazon, JPMorgan and Accenture have restricted staff from using ChatGPT for fear of how information submitted may be used.

OpenAI’s Sam Altman - JIM LO SCALZO/EPA-EFE/Shutterstock

A senior regulator told The Telegraph: “The reason why they put these models out there for us all to be able to download now is because they want our data for the models to learn. So has that consent been properly sought? It's a regulatory question in the UK and across Europe.”

The information commissioner can issue notices requiring companies to explain their activities, issue enforcement orders demanding firms cease actions or impose fines worth up to £17m under data protection laws.

A spokesman for John Edwards, Britain’s information commissioner, said: “We will act where organisations are not following the law and considering the impact on individuals.”

Ofcom is also planning to hit AI companies with tougher rules to ensure the technology is not being misused.

The agency, which is the new online safety regulator for social media and tech companies, is planning to require risk assessments of any new AI.

The crackdown comes after Rishi Sunak last week met bosses from three of the biggest AI companies – OpenAI, Google-backed Anthropic and DeepMind – amid rising concerns about the technology’s impact on society.

The Prime Minister said the technology needed to have the right “guardrails” and that he had discussed the risk of disinformation as well as wider “existential” threats.

The competition watchdog has already launched an investigation into the AI market, including examining the safety implications of the technology.

The issue of privacy came to the fore in March when Italy’s data protection authority temporarily blocked ChatGPT because there was “no legal basis that justifies the massive collection and storage of personal data”.

OpenAI responded by applying rules across Europe enabling anyone to opt out of processing through an online form. It also expanded its privacy policy and introduced a right to erase information users considered inaccurate, similar to the right to be forgotten in data laws.

Andrew Strait, associate director at the Ada Lovelace Institute, said: “There is a challenge with consent as a basis for processing data at the scale of ChatGPT. It is really hard to communicate to the average person what is happening with their data.

“Does it disappear? Is it capturing your information? Is it reusing it? Consent works best when you have a clear understanding of what you are consenting to.”

The information commissioner’s spokesman said: “Organisations developing or using generative AI should be considering their data protection obligations from the outset.

“Data protection law still applies even when the personal information being processed comes from publicly accessible sources. If you are developing or using generative AI that processes personal data, you must do this in a way that is lawful. This includes consent or legitimate interests.”

Lorna Woods, professor of internet law at Essex University, said: “Data protection rules apply whether or not you’ve made something public.”
