Australia Mandates Under-16 Social Media Ban, Drawing Global Attention
Australia has officially implemented a national policy requiring social media companies to prevent individuals under the age of 16 from accessing their platforms. The measure took effect on Wednesday, December 10, its scheduled commencement date, and mandates that companies take "reasonable steps" to ensure new accounts are not created by underage users and that existing accounts belonging to under-16s are deactivated or removed. This pioneering initiative aims to protect children from various online harms and has garnered international attention, with several other countries exploring or enacting similar restrictions.
Policy Implementation and Rationale
The Australian government says the initiative's goal is to reduce "pressures and risks" children encounter on social media, specifically citing platform design features that encourage extended screen time and content that may adversely affect health and wellbeing.
An earlier government-commissioned study reported that 96% of Australian children aged 10-15 used social media. Of these, 70% had reportedly been exposed to harmful content or behavior, including misogynistic material, fight videos, and content promoting eating disorders or suicide.
The study also indicated that 14% experienced grooming-type behavior, and over 50% reported being subjected to cyberbullying.
Australia's legislation is noted for its strictness, as it does not include exemptions for parental approval, distinguishing it from proposals in other jurisdictions. Prime Minister Anthony Albanese acknowledged that the implementation process would be challenging and "won't be perfect," emphasizing the social responsibility of social media companies.
Scope and Affected Platforms
Ten prominent platforms have been identified for inclusion in the ban: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Kick, and Twitch. The government is also considering expanding the ban to include online gaming platforms. Roblox and Discord have reportedly introduced age verification features on some functions.
Platforms were selected based on three criteria:
- Whether a platform's primary or "significant purpose" is to facilitate online social interaction between two or more users.
- Whether it allows users to interact with some or all other users.
- Whether it permits users to post material.
YouTube Kids, Google Classroom, and WhatsApp are not included, as they were determined not to meet these criteria. Children will still be able to view most content on platforms like YouTube that do not require an account for viewing.
Enforcement and Penalties
Enforcement of the ban targets social media companies, not individual children or parents. Companies found in serious or repeated breach of the regulations face fines of up to A$49.5 million (approximately US$32.9 million).
Australia's eSafety Commissioner, Julie Inman Grant, is responsible for enforcing the ban. The Commissioner indicated that compliance checks would commence shortly and planned to issue notices to the 10 platforms, requesting information on their implementation strategies and the number of accounts closed. A public report on the ban's initial effectiveness is scheduled for release. Commissioner Inman Grant asserted that the targeted platforms possess the necessary technology and user data to implement accurate age restrictions.
Companies are required to take "reasonable steps" and use age assurance technologies, though no specific technology has been mandated. Potential methods discussed include government IDs, facial or voice recognition, and age inference. The government encourages the use of multiple verification methods and has specified that platforms cannot rely solely on users self-declaring their age or on parental affirmation.

Meta, the parent company of Facebook, Instagram, and Threads, announced it would begin closing teen accounts, stating that users mistakenly removed could verify their age using a government ID or a video selfie. Communications Minister Anika Wells reported that over 200,000 TikTok accounts in Australia had been deactivated by the ban's effective date.
Younger individuals have reportedly attempted to bypass age verification systems, using methods such as fake birthdates or Virtual Private Networks (VPNs). Minister Wells cautioned that individuals attempting to evade detection would likely be identified eventually, as platforms are required to conduct routine checks on under-16 accounts. Downloads of VPNs increased prior to the ban but later returned to normal levels.
Initial Impact and User Experiences
A month after implementation, some teenagers reported changes in their social media use:
- Amy (14): Reported decreased phone usage, feeling "disconnected," reduced desire to maintain social media "streaks," and engaged in alternative activities like running. Her interest in social media decreased.
- Aahil (13): Maintained approximately 2.5 hours of daily social media use on platforms like YouTube and Snapchat by using fake birthdates. He also spent significant time on unbanned platforms like Roblox and Discord. His mother observed an increase in his moodiness and time spent gaming.
- Lulu (15): Indicated her social media usage remained consistent by creating new accounts with ages above 16 for TikTok and Instagram. She reported reading more but not increasing outdoor activities or face-to-face meetings.
All three teenagers increased their use of WhatsApp and Facebook Messenger for communication with friends who had lost social media access.
Consumer psychologist Christina Anthony suggested that initial mood changes in teenagers might stem from the disruption of social media as a coping mechanism. She noted a "compensatory behavior" trend, where teenagers initially sought alternative photo and video-sharing apps, though downloads for these have since decreased.
Reactions and Perspectives
Social Media Companies
Companies expressed concerns following the announcement, citing difficulties in implementation, potential for circumvention, inconvenience for users, and privacy risks associated with large-scale data collection. Some companies, including Snap and YouTube, disputed their classification as social media platforms. Google, YouTube's parent company, is reportedly evaluating a legal challenge. While expressing opposition, TikTok and Snap indicated they would comply. Kick stated it would introduce "a range of measures." Meta noted the ban could lead to "inconsistent protections" across various applications used by teenagers. X has raised its minimum user age to 16. Meta has also cautioned against policies that might push teenagers towards less secure online environments.
Proponents
Polling indicates broad approval among parents in Australia. Advocates like 12-year-old student Florence Brodribb suggest the ban could contribute to healthier and safer development for young people by mitigating the influence of social media algorithms and reducing instances of cyberbullying and child exploitation. Wayne Holdsworth, an advocate whose son died by suicide following an online sextortion scam, viewed the new law as a starting point, emphasizing the importance of educating children about online risks.
Critics
Critics have raised questions regarding the efficacy of age assurance technologies, citing potential for inaccuracies, wrongful blocking, or failure to identify underage users. Concerns have also been voiced regarding the adequacy of the proposed fines. Arguments against the ban include that it may not address all online harms, as dating websites, gaming platforms, and AI chatbots are not currently included. Additionally, some critics suggest that the policy could isolate teenagers who rely on social media for community, particularly those from LGBTQ+, neurodivergent, or rural communities, and that educating children on digital literacy might be a more effective approach. Simone Clements, a parent of 15-year-old twins in the entertainment industry, identified potential financial impacts on her children, for whom social media serves as a platform for their portfolios and an income source. A report from Headspace indicated that one in ten young people in Australia sought support for coping with the ban's impact.
Data Protection Considerations
Critics have raised concerns regarding the extensive collection and storage of personal data required for age verification and the potential for mishandling. The government has stated that the legislation incorporates "strong protections" for personal information: collected data must be used only for age verification and must be destroyed after verification, with "serious penalties" for breaches. Platforms are also required to offer verification alternatives to government IDs.
International Context and Similar Initiatives
Australia's ban on social media access for individuals under 16 is the first policy of its kind globally. Several countries are reportedly treating its implementation as a case study, and a spokesperson for the Australian Communications Minister said other global leaders are considering the ban as a model.
- United Kingdom: The House of Lords has approved an amendment to a schools bill proposing a ban on social media access for individuals under 16. The government has indicated its intention to contest this amendment in the House of Commons while conducting its own consultation on a potential ban. More than 60 Labour MPs have urged the Prime Minister to implement a ban, aligning with some Conservative Party members. Lord Nash, a former Conservative schools minister, described children's social media use as a "societal catastrophe" linked to mental health issues, online radicalisation, and disruptive classroom behavior. However, some campaigners and children's charities, including the NSPCC, have warned of "unintended consequences," suggesting a focus on stronger enforcement of existing child safety regulations. Labour's spokesperson stated the party would not support the Lords' amendment, advocating for gathering evidence before changing the law.
- Indonesia: Will implement a ban on social media access for children under 16 from March 28, becoming the first Southeast Asian nation to do so. The policy aims to address online pornography, cyberbullying, online scams, and internet addiction. Affected platforms include TikTok, Instagram, Threads, Facebook, YouTube, X, Bigo Live, and Roblox. The Minister of Elementary and Secondary Education has voiced concerns about technical implementation, particularly the difficulty of preventing identity falsification.
- France: Parliament recommended banning under-15s from social media and imposing a curfew for 15- to 18-year-olds.
- Denmark: Plans to ban social media for under-15s.
- Norway: Is considering a similar proposal.
- Spain: The government has submitted a draft law requiring legal guardian authorization for under-16s to access social media.
- United States: An attempt in Utah to ban social media for under-18s without parental consent was blocked by a federal judge.
- Other nations: Malaysia is planning a similar ban, and the Philippines has introduced a bill. Singapore, Greece, India (Karnataka), and New Zealand are also considering or have implemented similar measures.
Broader Screen Time Trends
Amidst these policy debates, observations indicate a broader trend of increasing screen time across all age groups, not limited to younger individuals. A 2023 YouGov survey in the United States found that over 50% of individuals aged 45 to 64 reported spending five or more hours daily on screens. While younger groups (18-29) generally showed higher usage, neuropsychologists suggest the differences in screen time between age groups are narrowing. Younger individuals were also more likely to acknowledge screen time struggles and attempt to reduce their usage. Problematic screen use is defined as a loss of control over time spent on screens, leading to negative consequences, with approximately 3-5% of people experiencing clinical addiction.