Australia's Landmark Online Safety Laws: Social Media Ban and Age Verification
Australia has enacted new legislation restricting social media access for individuals under the age of 16, with the primary ban taking effect on December 10. Concurrently, broader online safety codes requiring age verification for access to various age-restricted materials, including pornography and violent content, came into effect in March. These measures have prompted large-scale account deactivations by social media platforms, while also drawing criticism regarding their effectiveness, potential for circumvention, and privacy implications. Several other governments are exploring or implementing similar online age-restriction policies.
Social Media Ban for Under-16s
Australia's social media ban, implemented through the Online Safety Amendment Act, prohibits individuals under 16 from holding accounts on designated social media platforms. Prime Minister Anthony Albanese described the measure as a "world-leading" initiative intended to allow children to experience childhood.
The eSafety Commissioner, Julie Inman Grant, stated that the ban aims to safeguard teens from "pressures and risks" online, while Communications Minister Anika Wells cited concerns about algorithmic design and user engagement patterns.
The ban encompasses platforms including Meta's Instagram, Facebook, and Threads; TikTok; YouTube; X (formerly Twitter); Reddit; Snapchat; Kick; and Twitch. In total, 10 age-restricted platforms are covered by the legislation. Platforms that fail to take "reasonable steps" to prevent access by under-16s face fines of up to A$49.5 million. Platforms primarily used for gaming, health, or education are exempt.
Platform Compliance and Deactivations
Meta initiated account deactivations for users identified as 13 to 15 years old from December 4, prior to the official ban. Notifications were delivered via text, email, and in-app messages.
The company reported blocking a total of 544,052 accounts believed to belong to under-16s between December 4 and December 11. This included 330,639 Instagram accounts, 173,497 Facebook accounts, and 39,916 Threads accounts.
Meta stated that from December 4, individuals under 16 could not create new accounts on its platforms. Snapchat reported locking or disabling over 415,000 accounts belonging to users under 16 by the end of January, with continued daily account locks.
Federal government data indicated that over 4.7 million accounts across 10 platforms were deactivated or removed within the initial two days of the ban. The eSafety Commissioner clarified that this total includes historical, inactive, and duplicate accounts, in addition to those identified as belonging to under-16s. The eSafety Commissioner's office sought data on deactivated accounts from platforms, though detailed breakdowns beyond those from Meta and Snapchat have not been publicly released.
Age Verification Methods
Platforms have outlined various methods for verifying age and for users to challenge account restrictions if they assert they are 16 or older. These methods include:
- Submitting a "video selfie" for a facial age scan.
- Providing a driver's license.
- Providing other government-issued identification.
The UK-based Age Check Certification Scheme (ACCS) evaluated these methods and found that each has merits, but that no single solution works effectively across all deployment scenarios. The eSafety Commissioner suggests options such as credit card checks or government ID. Companies are required to minimize the collection of personal information and comply with privacy laws.
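Because no single method works in every scenario, checks can be layered. The following is a purely illustrative sketch, not any platform's actual system: the data model, thresholds, and function names are assumptions. It treats a facial age estimate as decisive only outside its reported error band (two to three years, per the ACCS findings above) and falls back to a government ID check in the uncertain middle, keeping data collection minimal.

```python
from dataclasses import dataclass

MIN_AGE = 16          # account threshold under the Online Safety Amendment Act
ESTIMATION_ERROR = 3  # facial age estimates are typically accurate to within 2-3 years

@dataclass
class User:
    selfie_estimate: float        # age estimated from a "video selfie" scan
    id_age: int | None = None     # age from a driver's license or other ID, if supplied

def may_hold_account(user: User) -> bool:
    """Layered check: use the cheap facial estimate when it is decisive,
    escalating to a document check only inside the uncertainty band."""
    # Step 1: facial age estimation.
    if user.selfie_estimate - ESTIMATION_ERROR >= MIN_AGE:
        return True   # old enough even at the pessimistic end of the error band
    if user.selfie_estimate + ESTIMATION_ERROR < MIN_AGE:
        return False  # under 16 even at the optimistic end of the error band

    # Step 2: the estimate is inconclusive; fall back to government ID.
    # Data minimization: only the age, not the document itself, reaches this code.
    if user.id_age is None:
        return False  # no ID offered: treat the user as unverified
    return user.id_age >= MIN_AGE

# A 20-year-old passes on the selfie alone; a 17-year-old falls inside the
# error band and must supply ID to keep their account.
print(may_hold_account(User(selfie_estimate=20.0)))             # True
print(may_hold_account(User(selfie_estimate=17.0, id_age=17)))  # True
```

In this kind of design, the wide error band of facial estimation is exactly why a stronger, more privacy-invasive check is reserved for borderline cases rather than applied to everyone.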
Industry Responses and Concerns
Companies, while confirming compliance, have expressed concerns or suggested alternative approaches.
- Meta stated that achieving full compliance would be an "ongoing and multi-layered process." The company has suggested that a law requiring parental approval for under-16s to download social media applications would be beneficial, and advocated for age verification at the app store level rather than individual app-based verification. Meta argued that such an approach could ensure consistent, industry-wide protections and prevent users from migrating to less regulated platforms. Meta also expressed concerns that the ban could isolate vulnerable teenagers from online support communities and that the law's premise regarding an "algorithmic experience" is inaccurate.
- YouTube expressed concern that the laws could reduce child safety by removing established parental controls, arguing that parents would lose the ability to supervise their children's accounts. Rachel Lord, Public Policy Senior Manager at Google and YouTube Australia, described the legislation as "rushed regulation" that misunderstands how young Australians use the platform. YouTube also noted that children would still be able to view videos without a logged-in account. Communications Minister Anika Wells characterized YouTube's concerns as "outright weird," asserting that if YouTube is perceived as unsafe, the responsibility lies with the platform.
- Snapchat identified "significant gaps" in the ban's implementation, citing technical limits on accurate age verification: facial age estimation technology is typically accurate only to within two to three years of a person's actual age.
- Reddit initiated a legal challenge against the Australian government, contending that the ban is ineffective and restricts young people's freedom of speech, potentially isolating teens from age-appropriate community experiences, including political discussions.
Monitoring and Other Platform Actions
Australia's eSafety Commissioner requested that two emerging applications, Lemon8 and Yope, conduct self-assessments of their compliance. Lemon8 has reportedly committed to excluding users under 16. Yope's CEO stated that the company self-assessed and determined it functions as a private messenger, similar to WhatsApp, with no public content. The gaming platform Roblox announced measures to prevent children under 16 from chatting with adult strangers, introducing mandatory age checks for chat features from December in Australia, New Zealand, and the Netherlands.
Broader Online Safety Measures and Age Verification
New online safety codes, implemented in March, expand on previous measures by requiring age verification for access to specific online content and services. The codes mandate that various online service providers implement steps to prevent children from accessing age-inappropriate material.
These restrictions apply to content such as pornography, extremely violent material, and material relating to self-harm, suicide, and disordered eating. Research by the eSafety Commissioner indicated that one in three children aged 10 to 17 had encountered sexual images or videos online, and that over 70 percent had seen or heard violent content or material depicting self-harm, suicide, or disordered eating.
Under these new codes:
- Adult websites, such as Pornhub, RedTube, YouPorn, and Tube8, are mandated to implement age verification. Aylo, the parent company of these sites, restricted access for Australian users of all ages days before the codes officially took effect, citing concerns over data privacy and the effectiveness of such measures.
- AI companion chatbots must confirm users are 18 or older before generating sexually explicit material, high-impact violence, or self-harm content.
- App stores are required to prevent users under 18 from purchasing or downloading R18+ apps.
- Adult messaging services may need to verify users' ages where content involves sexually explicit or self-harm material.
- Online gaming platforms will require age assurance for access to R18+ classified games.
- Search engines must, by default, blur search results containing pornography and high-impact violence for logged-out users. Searches relating to suicide or disordered eating must display a referral to a support service as the first result.
- Social media services allowing pornography or self-harm material must ensure users are 18 or older.
The eSafety Commissioner welcomed the introduction of these codes, stating they introduce online safeguards comparable to existing restrictions for physical spaces, ensuring age-appropriate experiences.
Breaches of these codes may result in penalties up to A$49.5 million.
Concerns Regarding Broader Age Verification
Experts have raised concerns regarding user privacy with the implementation of these codes. Professor Daniel Angus of Queensland University of Technology described the development as a "fundamental erosion of the anonymous internet" due to increased collection of private data and governmental insistence on identity verification. Professor Tama Leaver of Curtin University noted the challenge of balancing online safety for children with respecting privacy. Concerns were also raised about online systems potentially retaining user data.
Scarlet Alliance, a sex worker advocacy group, expressed concern that the requirements could have a "chilling effect" on platforms hosting online advertising for sex workers' services and content, and warned of potential over-filtering, including of sexual health information. Research involving Australian teenagers found that while some regarded online adult content as a source of information, others worried it fostered unrealistic expectations; the research also noted that outright bans could increase the desire to access such content.
Reported Challenges and User Circumvention
Despite the ban and new codes, reports indicate challenges in full enforcement and instances of user circumvention. Shadow Communications Minister Melissa McIntosh stated that many under-16 accounts remained active or had been reactivated, that age-verification tools were easily bypassed, and that children migrated to other platforms not initially included in the ban. Some under-16 users have reported circumventing the social media ban, with examples of individuals shifting to other entertainment platforms or finding ways to regain access to restricted sites.
Age-assurance methods such as facial age estimation scans have been reported as faulty. Downloads of Virtual Private Networks (VPNs), which can mask user locations and bypass age verification, reportedly surged in Australia around the ban's introduction before returning to normal levels. The eSafety Commissioner indicated that enforcement would focus on systemic failures rather than isolated cases of individual circumvention.
International Context
Australia's social media ban and age verification measures are being observed and emulated internationally.
- Spain announced plans to implement a social media ban for individuals under 16, with Prime Minister Pedro Sanchez citing concerns about online addiction, abuse, pornography, manipulation, and violence, and stating that platforms would be required to implement "effective age-verification systems."
- Indonesia announced a ban on social media access for children under 16, taking effect in stages from March 28, citing concerns over online pornography, cyberbullying, online fraud, and internet addiction. Indonesia is the first country in South-East Asia to restrict children's social media access by age.
- The United Kingdom's Labour government faces pressure to adopt a similar policy, with the House of Lords backing a ban for under-16s. The UK's comparable age verification regime for adult content reportedly led to a significant increase in VPN app downloads.
- The European Union has an expert group discussing a similar ban for children, with Brussels monitoring the outcomes and legal challenges of the Australian ban. France, Denmark, Greece, and Spain advocate for similar actions at the EU level.
- The US state of Florida is exploring limits on children's social media use.
- India is also considering its own social media ban for teenagers.