Australia Adds YouTube to Its Under-16 Social Media Ban

Australia has officially added YouTube to its sweeping social media ban for children under the age of 16, reversing an earlier exemption that had drawn heavy criticism from child safety advocates and rival tech companies. The ban, part of the Online Safety Amendment (Social Media Minimum Age) Act 2024, requires platforms to block access to minors or face steep fines—up to A$49.5 million for non-compliance.
So, let’s talk about online safety, especially for young children. Online safety for young children is a critical issue, given the predatory risks and harmful content they can encounter on the web. Social media platforms, while offering connectivity and creativity, have indeed exposed kids to dangers like grooming, cyberbullying, and inappropriate content. There are monsters lurking on the web, looking for someone to pounce on. The question now is: is an Online Safety Act – like Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024 or the UK’s Online Safety Act 2023 – the solution? Well, a quick look shows that there are even bigger monsters lurking within.
Countries have been gradually introducing “Online Safety Laws”
The Trojan Horse: Online Safety Legislation
Let’s start with Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, passed on November 29, 2024. It sets a minimum age of 16 for social media use, effective by December 2025; no other country has enacted anything so sweeping. The Act bans children under 16 from holding social media accounts, with platforms like Snapchat, TikTok, Instagram, Facebook, and X required to enforce the age restriction. Non-compliance can result in fines of up to AUD 49.5 million. The law emphasizes privacy protections and places the onus on platforms, not on parents or children.
The United Kingdom’s Online Safety Act 2023, with duties taking effect from 2025, requires social media platforms to enforce age limits consistently and to protect children from harmful content. It does not set a specific minimum age, but requires platforms to assess risks to children and implement age-appropriate restrictions. There is discussion of a potential Australia-style ban for under-16s, but no such law has been enacted yet.
Since June 2023, French law has required social media platforms to obtain parental consent before users under 15 can create accounts. The European Union’s General Data Protection Regulation (GDPR) also mandates parental consent for processing the personal data of children under 16, though France lowered this threshold to 15. Technical challenges have delayed enforcement, and proposals exist to ban smartphones for under-11s and internet-enabled phones for under-13s, but these are not yet law. In Germany, under the GDPR, children aged 13–15 need parental consent to use social media; no stricter national age limit exists, and there are no plans to raise it. In Italy, children under 14 require parental consent to sign up for social media accounts under the GDPR and national law. In 2024, Norway proposed amending its Personal Data Act to raise the minimum age for social media use from 13 to 15, though parents could still consent on behalf of younger users; that legislation is still under development, with no clear timeline for enactment. Elsewhere, GDPR-based national laws require social media users to be at least 13, with no additional national restrictions.
In the United States, the Children’s Online Privacy Protection Act (COPPA) requires parental consent for children under 13 to use social media. The proposed Protecting Kids on Social Media Act would mandate age verification for account holders, but it is not yet law. At the state level, California passed a 2024 law, effective in 2027, to prevent platforms like TikTok from tailoring content to children based on their data. In Indonesia, the Minister of Communication and Digital Affairs expressed interest in January 2025 in introducing social media age restrictions similar to Australia’s, but no legislation has been enacted or even formally proposed.
How Parental Responsibility Has Been Subverted by Government
So, according to the Australian government, it’s better for foreign-owned multinational tech platforms to control children’s internet use than for parents to supervise or manage their children’s social media and online interactions. Meanwhile, any child in Australia without a parental lock can access a pornographic website that does not require an account simply by clicking the “Are you over 18?” box. If this bill were to accomplish anything good, it would be preventing children from accessing pornography, which it deliberately avoids doing. The Online Safety Amendment (Social Media Minimum Age) Act 2024 is about many things; keeping children safe is not one of them.
Online Safety Laws for Censorship, Surveillance and Control
The online safety bill wants young people protected against hideous content: they will need to prove they are 16 to access certain websites. Yet, astonishingly, minors can change their gender identity and receive a COVID-19 vaccine at any age, without parental consent. It’s a disturbing reality we’re allowing to unfold, and the joke is on us if we let it. Ursula von der Leyen made the connection.
Isn’t it intriguing that the blame for corrupting kids is being placed on the internet, rather than on the adults who actually control it? Let’s set the record straight: yes, the online world is plagued by vile and disturbing content. What’s not being disclosed, however, is that major platforms such as YouTube, Meta, and TikTok have had the technological capability to detect and filter out objectionable content almost instantly for years. If a user hums just three notes of a copyrighted song, their video is immediately flagged – a feat Shazam has been pulling off since 2008. It’s astounding that these platforms can identify a bassline but claim they’re unable to detect a beheading. This is not a matter of technological limitation but of willingness: the capability is deployed only when it aligns with their interests. And once you grant Big Tech the power to filter content, you’re one algorithm tweak away from censorship – not just of explicit material, but of any content that doesn’t suit their brand. That is exactly the playbook used during COVID, when anyone who spoke against the official line was blocked and silenced. Trust me, that is where this is heading. Gradually.
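To make the fingerprinting point concrete: content-matching systems in the Shazam and Content ID family boil down to hashing “landmarks” in a spectrogram and comparing sets of hashes. The Python sketch below is a minimal, hypothetical illustration of that idea – the function names, parameters, and thresholds are invented, and this is not any platform’s actual code.

```python
# Minimal sketch of landmark-based audio fingerprinting (Shazam-style).
# Requires numpy and scipy; all parameters are illustrative.
import numpy as np
from scipy import signal

def fingerprint(audio: np.ndarray, sample_rate: int = 22050) -> set:
    """Return a set of (f1, f2, dt) landmark hashes for an audio clip."""
    # Short-time Fourier transform: a time-frequency energy map.
    _freqs, _times, spec = signal.spectrogram(audio, fs=sample_rate, nperseg=1024)
    log_spec = np.log1p(spec)

    # Keep the strongest frequency bin per time slice as a crude "constellation".
    peaks = [(t, int(np.argmax(log_spec[:, t]))) for t in range(log_spec.shape[1])]

    # Hash pairs of nearby peaks: (anchor freq, target freq, time delta).
    hashes, fan_out = set(), 5
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.add((f1, f2, t2 - t1))
    return hashes

def match_score(clip_hashes: set, track_hashes: set) -> float:
    """Fraction of the clip's hashes found in a stored track's hashes."""
    return len(clip_hashes & track_hashes) / max(len(clip_hashes), 1)
```

Matching a short, noisy clip against millions of stored tracks is then an indexed set-intersection problem that was solved at scale more than a decade ago; applying the same perceptual-hashing approach to known violent footage is not harder in principle.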
They’re essentially placing a digital ankle bracelet on every 15-year-old
So, what’s the response of governments? Instead of regulating these platforms and demanding transparency in their algorithms, they’re opting for a more invasive approach. They’re essentially placing a digital ankle bracelet on every 15-year-old, requiring IDs, biometrics, voice scans, and face scans. This is a prime example of lazy policy-making, a nightmare for privacy, and a blatant con. The technology to address this issue exists, and it can be deployed without treating every individual like a criminal. It’s time to stop pretending this is about safety and acknowledge the truth: it is a new excuse to monitor, monetize, and manipulate the entire population, starting with the most vulnerable – our children. And it goes further, through something called “function creep”: a system built for one stated purpose quietly expands into others. This is a tool for censorship and surveillance, not only of 15-year-olds, but of everyone.
U.K. Online Safety Act Comes into Effect
Let’s zoom in on the U.K. now. The British Labour government is considering banning VPNs, in response to a surge in VPN usage in the UK following the passage of the controversial Online Safety Act. For years, politicians from across the political spectrum insisted the Online Safety Act would focus solely on illegal content – shielding children from pornography, criminal exploitation, and material encouraging or assisting suicide – without threatening free expression. But from the moment its age-verification duties took effect on 25 July, that reassurance began to unravel. Social media sites, search engines, and video-sharing services are now legally required to shield under-18s from content deemed harmful to their mental or physical well-being. Failure to comply risks fines of up to £18 million or 10% of global turnover, whichever is greater. At the heart of the regime is a requirement to implement “highly effective” age checks. If a platform cannot establish with high confidence that a user is over 18, it must restrict access to a wide category of ‘sensitive’ content, even when that content is entirely lawful.
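As a quick worked example of that penalty ceiling (“£18 million or 10% of global turnover, whichever is greater”), with turnover figures invented purely for illustration:

```python
# The Act's penalty cap: the greater of £18 million or 10% of global turnover.
# Turnover figures below are hypothetical.
def max_fine(global_turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_turnover_gbp)

print(f"£{max_fine(50_000_000):,.0f}")      # small platform -> £18,000,000
print(f"£{max_fine(10_000_000_000):,.0f}")  # large platform -> £1,000,000,000
```

For any platform with global turnover above £180 million, in other words, the 10% branch dominates.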
This has major implications for platforms where news footage, protest clips or political commentary appear in real time. Ofcom’s guidance makes clear that simple box-ticking exercises – like declaring your age or agreeing to terms of service – will no longer suffice. Instead, platforms are expected to use tools like facial age estimation, ID scans, open banking credentials, or digital identity wallets.

The Act also pushes companies to filter harmful material before it appears in users’ feeds. Ofcom’s broader regulatory guidance warns that recommender systems can steer young users toward material they didn’t ask for. In response, platforms may now be expected to reconfigure their algorithms to filter out entire categories of lawful expression before it reaches underage or unverified users.

One platform already moving in this direction is X, and its approach offers a revealing – and potentially sobering – glimpse of where things may be heading. The company uses internal signals, including when an account was created, any prior verification, and behavioural data, to estimate a user’s age. If that process fails to confirm the user is over 18, they are automatically placed into a sensitive content filtering mode. As the platform’s Help Centre explains: “Until we are able to determine if a user is 18 or over, they may be defaulted into sensitive media settings and may not be able to access sensitive media.” Listen to Zia Yusuf, head of DOGE for Reform UK and former co-founder and CEO of Velocity Black.
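To see what being “defaulted into sensitive media settings” amounts to in practice, here is a hypothetical sketch of that default-deny gating, assuming signals like the ones X lists (account age, prior verification, behavioural estimates). Every name and threshold is invented for illustration; this is not X’s actual implementation.

```python
# Hypothetical default-deny age gate: access to sensitive media is granted
# only on positive evidence of adulthood; everything else is filtered.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class UserSignals:
    account_created: date           # account age alone is weak evidence
    id_verified_adult: bool         # explicit verification, e.g. an ID scan
    estimated_age: Optional[float]  # behavioural model output, if any

def allow_sensitive_media(user: UserSignals) -> bool:
    """Return True only with high confidence that the user is over 18."""
    if user.id_verified_adult:
        return True
    # A behavioural estimate comfortably above 18 might count as "high
    # confidence"; a borderline one does not.
    if user.estimated_age is not None and user.estimated_age >= 21:
        return True
    # Default: filtered. Note the asymmetry - the burden of proof sits with
    # the user, and lawful content is withheld until they discharge it.
    return False

# Example: a decade-old but unverified account is still filtered.
veteran = UserSignals(account_created=date(2014, 5, 1),
                      id_verified_adult=False, estimated_age=None)
print(allow_sensitive_media(veteran))  # False
```

The design point worth noticing is the default: the system never has to decide a user is a minor in order to restrict them; it only has to fail to prove they are an adult.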
Written by Tatenda Belle Panashe