
UK Government Outlines Plans for Stricter AI Chatbot and Social Media Regulation to Protect Children


UK Government Announces AI Chatbot Regulation

The UK government plans to introduce changes aimed at strengthening the regulation of AI chatbots, specifically targeting services that pose risks to children.

The changes, expected to be announced by Keir Starmer, would allow regulators to impose significant fines on non-compliant companies or to block their services.

Addressing Legal Loopholes

The proposed law changes seek to close a legal loophole that currently exempts certain AI chatbots from illegal content duties under the Online Safety Act. The existing Act primarily covers chatbots that function as search engines, produce pornography, or operate in user-to-user contexts. However, chatbots that generate material encouraging self-harm, or that create child sexual abuse content without searching the internet, are not consistently covered. The online regulator Ofcom had previously noted its limited powers over AI-generated content not explicitly categorized as pornography.

Consequences for Non-Compliance

Companies found in breach of the Online Safety Act could face penalties of up to 10% of their global revenue. Regulators may also pursue court orders to block non-compliant services within the UK.

Social Media Restrictions for Children

Alongside AI regulation, the government plans to accelerate new restrictions on children's social media use, which could include an under-16 ban or limits on infinite scrolling. A public consultation on these measures is anticipated, with implementation possible as early as this summer.

Political Response

The Conservative party criticized the government's claims of immediate action, with Shadow Education Secretary Laura Trott noting that the proposed consultation has not yet begun. Trott stated her clear opposition to under-16s accessing social media platforms.

Industry and Advocacy Group Reactions

Chris Sherwood, Chief Executive of the NSPCC, highlighted instances of children experiencing harm from AI chatbots, citing inaccurate information given to a 14-year-old girl regarding eating habits and body dysmorphia, and exposure to self-harm content. Sherwood emphasized concerns about tech companies' ability to design safe systems. The Molly Rose Foundation, established after Molly Russell's death, acknowledged the steps as a "welcome downpayment" but called for a stronger Online Safety Act prioritizing child wellbeing.

Industry Efforts to Enhance Safety

OpenAI, maker of ChatGPT, has introduced parental controls and is implementing age-prediction technology following a case where a 16-year-old reportedly took his own life after interactions with ChatGPT.