Australia has enacted a world-first ban on social media for users aged under 16, causing millions of children and teenagers to lose access to their accounts.

Facebook, Instagram, Threads, X, YouTube, Snapchat, Reddit, Kick, Twitch and TikTok are expected to have taken steps from Wednesday to remove accounts held by users under 16 years of age in Australia, and prevent those teens from registering new accounts.

Platforms that do not comply risk fines of up to $49.5m.

There have been some teething problems with the ban’s implementation. Guardian Australia has received several reports of those under 16 passing the facial age assurance tests, but the government has flagged it is not expecting the ban will be perfect from day one.

All listed platforms apart from X had confirmed by Tuesday that they would comply with the ban. The eSafety commissioner, Julie Inman Grant, said her office had recently spoken with X about how it would comply, but the company had not communicated its policy to users.

Bluesky, an X alternative, announced on Tuesday it would also ban under-16s, despite eSafety assessing the platform as “low risk” due to its small user base of 50,000 in Australia.

Parents of children affected by the ban shared a spectrum of views on the policy. One parent told the Guardian their 15-year-old daughter was “very distressed” because “all her 14 to 15-year-old friends have been age verified as 18 by Snapchat”. Since she had been identified as under 16, they feared “her friends will keep using Snapchat to talk and organise social events and she will be left out”.

Others said the ban “can’t come quickly enough”. One parent said their daughter was “completely addicted” to social media and the ban “provides us with a support framework to keep her off these platforms”.

“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear, national standard.”

Polling has consistently shown that two-thirds of voters support raising the minimum age for social media to 16. The opposition, including its leader, Sussan Ley, has recently voiced alarm about the ban, despite having waved the legislation through parliament and despite the former Liberal leader Peter Dutton championing it.

The ban has garnered worldwide attention, with several nations indicating they will adopt a ban of their own, including Malaysia, Denmark and Norway. The European Union passed a resolution to adopt similar restrictions, while a spokesperson for the British government told Reuters it was “closely monitoring Australia’s approach to age restrictions”.

  • Arcane2077@sh.itjust.works · 1 month ago

    Some good silver linings here, but what everyone needs to remember is that nobody would be supporting this at all if Facebook wasn’t intentionally predatory and bad for (all) people’s brains.

    If regulators in Australia had a spine, they would call for an end to those practices, and now that’s infinitely harder to do.

    • ms.lane@lemmy.world · 1 month ago

      Some good silver linings here

      Where?

      The kids will move to less monitored platforms, and even on things like YouTube, parental controls are now gone.

      You need to have an account for parental controls to be applied to; kids aren’t allowed an account, so there are no more parental controls or monitoring for problem content.

      • wheezy@lemmy.ml · edited · 1 month ago

        As someone who grew up with an “unmonitored” internet, I can say that it was significantly healthier than the profit-driven “keep watching” algorithm that is all of social media today.

        Yeah, I saw “two girls one cup” and “lemon party”. But did I slowly have my perspective of reality changed by 30-second videos I swiped through for hours at a time, for days on end?

        No, most of my time was spent learning about computers, “stealing” music, and chatting with my real life friends.

        I don’t think a kid today can experience that internet anymore. It’s gone. But acting like “unmonitored” internet access is worse is pearl clutching, and it ignores the fundamental problems the profit-driven internet has created at the expense of society’s mental health.

        Kids will absolutely find another place to connect online in Australia. But, honestly, I think whatever that is will be healthier than the absolute brain rot that is profit driven social media.

        We got to this point because parents, afraid of online predators, think that kids need a monitored internet. So it was passed off to corporations that learned how to systematically institute mental abuse in order to keep their apps open longer.

      • The_Decryptor@aussie.zone · 1 month ago

        You need to have an account for parental controls to be applied to; kids aren’t allowed an account, so there are no more parental controls or monitoring for problem content.

        Except that YT hides pretty much everything interesting behind a login wall these days.

        I tried to listen to a Daft Punk song yesterday in a private tab and was blocked.

    • porcoesphino@mander.xyz · 1 month ago

      I think that’s easier said than done. There are a lot of negatives associated with social media, and some are easier to put restrictions on (say, violent content), but I don’t think we really have a good grasp of all the ways use is associated with, for example, depression. And wouldn’t some of this still fall back to age-restricted areas, kind of like with movies?

      But yeah, it would be nice to see more pushback on the tech companies instead of the consumers.

      • The_v@lemmy.world · 1 month ago

        It’s a very simple fix with a few law changes.

        1. The act of promoting or curating user submitted data makes the company strictly liable for any damages done by the content.

        2. The deliberate spreading of harmful false information makes the hosting company liable for damages.

        This would bankrupt Facebook, Twitter, etc within 6 months.

        • Attacker94@lemmy.world · 1 month ago

          The act of promoting or curating user submitted data makes the company strictly liable for any damages done by the content.

          I assume you don’t mean simply providing the platform for the content to be hosted; in that case, I agree this would definitely help.

          The deliberate spreading of harmful false information makes the hosting company liable for damages.

          This one is damn near impossible to enforce for the sole reason of the word “deliberate”; the issue is that I would not support such a law without that part.

          • T156@lemmy.world · 1 month ago

            This one is damn near impossible to enforce for the sole reason of the word “deliberate”; the issue is that I would not support such a law without that part.

            It would also be easily abused. Someone would have to look at each report and check it, which would already put a bottleneck in the system, and the social media site would have to take the content down while checking, just in case, which gives someone a way to effectively remove posts.

          • The_v@lemmy.world · 1 month ago

            I left out the hosting part for just that reason. The company has to actively do something to gain the liability. Right now the big social media companies are deliberately prioritizing harmful information to maximize engagement and generate money.

            As for enforcement, hosts have had to develop protocols for the removal of illegal content since the very beginning. It’s still out there and can be found, but laws, and mostly due diligence from hosts, make it more difficult to find. It’s the reason Lemmy is not full of illegal pics etc. The hosts are actively removing it and banning accounts that publish it.

            Those protocols could be modified to include obvious misinformation bots etc. Think about the number of studies that have shown that just a few accounts are the source of the majority of harmful misinformation on social media.

            Of course any reporting system needs to be protected from abuse. The DMCA takedown abusers are a great example of why this is needed.

        • porcoesphino@mander.xyz · 1 month ago

          That kind of aligns with some actions I would love to see, but I don’t really see how it helps with the example I used to highlight some of the harder things to fix: depression. How does that improve the correlation between social media use and depression in teenagers? I can see it helping in special cases, like removing pro-eating-disorder content, but I’m pretty confident the depression correlation goes well beyond easy-to-moderate content.

          Also, if we presume that some amount of horrific violence is okay for adults to choose to see, and a population of people thinks it’s reasonable to restrict this content for people below a certain age (or swap violence for sex/nudity), then do we just decide we know better than that population, that freedom is more important, or does it fall back to age restrictions again (but gated on parts of the site)? I’m avoiding saying “government” here and going with “population of people” to try to decouple a little from some of the negatives people associate with government, especially since COVID.

          But yeah, holding tech companies accountable like that would be lovely to see. I suspect the cost is so large they couldn’t pay, so it would never happen, but I think that’s because society has been ignoring their negative externalities for so long that they’re entrenched.

            • porcoesphino@mander.xyz · 1 month ago

              True, but there is momentum. It’s empowering other countries, and that could lead to a second pass at legislation in Aus once it’s not so outlandish, or it could lead to another country doing something better and then Aus copying it after the costly validation was done by someone else. I think waiting for perfect legislation likely leads to what we’ve had for a while, and that’s even less / very little pushback on tech companies.

    • wheezy@lemmy.ml · 1 month ago

      It’s a band-aid. And just like previous attempts like this, all this will do is make Australian kids better at circumventing the censorship or using an alternative website. Which, honestly, is probably a positive in and of itself. I’d much rather my kid be visiting some random forum-type website (like I grew up with) than the absolute brain rot that is social media algorithms.

      Seeing “lemon party” posted before the mods removed it definitely fucked me up less than the slop being fed into the brains of teenagers on social media today.

    • venusaur@lemmy.world · 1 month ago

      Wow I’m shocked you have no downvotes. I totally agree but Lemmy seems to hate internet restrictions, especially porn. Don’t come for their porn. They’ll destroy you.