
In 2023, Bark, an online monitoring tool, processed 5.6 billion activities on family accounts alone. Within that data were many more statistics showing large percentages of kids exposed to self-deprecating, bullying and sexual content.
That data, along with similar findings before and since, drove many countries to pass online safety acts. The United Kingdom’s Online Safety Act received Royal Assent in 2023. In 2025, the United States passed KOSA, the Kids Online Safety Act, and Australia’s new online safety rules are set to take effect in December 2025.
Online safety, in this case, means that companies no longer have just a moral obligation to reduce exposure to harmful content but a legal obligation to remove illegal content, as well as a duty to protect children from content that is legal yet still harmful.
While the online safety acts are rooted in good intentions, they raise a dilemma for governments: how to balance internet safety with the privacy of users on social media. “Once they start going toward privatization and straying away from user data protection, that is where it starts to get foggy,” said senior Jack Rippchen.
People have begun questioning whether these laws are protecting or policing users. As the internet and social media have become more accessible, government internet laws have become more restrictive, raising concern that the open internet might be coming to an end.
Governments often argue that online safety laws help combat terrorism, cybercrime and misinformation. These laws, however, can be interpreted in ways that give governments more power to monitor digital activity. “To the point that the government acts aren’t taking away your right to privacy and so long as the IDs aren’t directly linked to yourself is fine,” senior Colin Merrell said. “Once the safety checks start invading privacy and requiring too much personal identification is when it has stepped over the line.”
Governments aren’t the only ones holding significant digital power; tech companies, too, wield considerable influence. The same tools governments use to protect users can also be used by corporations to predict behavior and profit from user data. Fortunately, online safety acts also target these companies.
Companies now have a legal duty to remove content involving child sexual abuse material, terrorism, hate speech and more. “This marks the most significant step forward in child safety since the internet was created,” said U.K. Technology Secretary Peter Kyle.
Major online media platforms such as YouTube, Spotify and TikTok, along with potentially more than 100,000 other digital services, are expected to comply with the Online Safety Act. The U.K., the U.S. and Australia have begun requiring users of YouTube and Spotify to verify their age to access restricted content. This means prompting users to submit either a government-issued ID or a facial scan.
YouTube has started requiring age verification to access mature and restricted videos so that children on the platform are recommended age-appropriate content. Spotify has begun requiring age verification for users to access music videos and podcasts with mature themes, and accounts may be deleted if a user fails to verify their age within a set time period.
While the goal of these laws is to protect children from dangerous internet content, the privacy of all users is sacrificed to a degree. “When the government and corporations use your private data to ‘better’ their services, that becomes more about controlling you on an individual level rather than protecting you,” senior Sumedh Rajurkar said. Online safety laws are made with good intentions, but as they expand into national security measures and corporate data-collecting practices, the fine line between protecting and policing becomes increasingly blurred.
