Opponents say the legislation would violate the First Amendment by allowing the federal government to decide what content is harmful and to censor protected speech, potentially making the regulatory landscape worse for businesses and consumers.
The U.S. Senate has approved two crucial bills aimed at enhancing online safety for children, marking a significant legislative step as the measures now head to the House of Representatives. The legislative package, championed as a necessary response to growing concerns about the dangers children face online, has sparked a heated debate over potential implications for internet freedom and free speech.
The two bills, the Children and Teens' Online Privacy Protection Act (nicknamed COPPA 2.0) and the Kids Online Safety Act (KOSA), would need to pass the Republican-controlled House, currently on recess until September, to become law.
Approved by the Senate in a rare bipartisan 91-3 vote, the bills are designed to address a range of online threats to children, including cyberbullying, exposure to inappropriate content, and data privacy concerns. They would impose stricter age verification requirements for accessing certain websites, mandate enhanced parental control tools, and obligate social media platforms to identify and mitigate harmful content more effectively.
KOSA would establish a “duty of care” for online platforms to prevent and mitigate specific harms to minors under the age of 17, including the promotion of suicide, eating disorders, substance abuse, sexual exploitation, and advertisements for illegal products.
KOSA would authorize the Federal Trade Commission (FTC) to issue guidance to help covered platforms meet the duty of care and would enable the agency to bring enforcement actions against companies that fail to comply.
KOSA would also require social media platforms to provide minors with options to protect their information and opt out of personalized algorithmic recommendations.
COPPA 2.0 would bar online companies from collecting personal information from minors under the age of 17 without their consent. Under current law, companies are prohibited from gathering personal information only if they have “actual knowledge” that the user is under age 13.
"Kids are not your product, kids are not your profit source, and we are going to protect them in the virtual space," Senator Marsha Blackburn, a Republican cosponsor of KOSA, said in a press conference after the vote.
Despite the legislative support, the bills have drawn sharp criticism from various quarters, notably tech industry groups and the American Civil Liberties Union (ACLU). Opponents argue that the measures could inadvertently lead to censorship and infringe upon First Amendment rights.
Tech industry representatives warn that the bills' requirements for proactive content monitoring could compel platforms to over-censor, thereby stifling free expression.
By “rushing to pass flawed bills,” Congress will not effectively protect children and will only make the regulatory landscape worse for businesses and consumers, said Ash Johnson, senior policy manager at the Information Technology and Innovation Foundation. “KOSA aims to protect minors on social media platforms from content that is harmful to children, but in so doing opens the door to censorship by the FTC, which would have the power to decide what qualifies as harmful.”
The ACLU has echoed these concerns, highlighting the potential for government-mandated censorship under the guise of child protection. “As state legislatures and school boards across the country impose book bans and classroom censorship laws, the last thing students and parents need is another act of government censorship deciding which educational resources are appropriate for their families,” said Jenna Leventoff, the ACLU’s senior policy counsel.
Critics argue that the bills could fundamentally alter the landscape of internet freedom. The emphasis on age verification and content monitoring raises practical and ethical questions about privacy and the balance between safety and freedom.
For instance, implementing robust age verification mechanisms could require collecting sensitive personal information, raising privacy concerns. Moreover, the obligation for platforms to preemptively filter content might lead to an overabundance of caution, where platforms remove content that is perfectly legal but potentially controversial or misinterpreted.
Such measures could also disproportionately affect smaller platforms that lack the resources to implement comprehensive monitoring systems. This could stifle innovation and competition in the tech industry, reinforcing the dominance of major players who can afford to comply with the new regulations.
As the bills advance to the House of Representatives, the debate is expected to intensify. The outcome of this legislative effort will likely set a precedent for how the U.S. addresses online safety for children in the future, with potential ripple effects for global internet governance.
In related efforts to ensure a safer online environment, both Google and Meta announced steps to address misleading artificial intelligence (AI)-generated content, whether in the form of explicit deepfakes or hallucinations.
More specifically, Google announced new measures to combat the rapidly increasing spread of AI-generated non-consensual explicit deepfakes in its search results, following consultations with experts and victim-survivors. The company made it easier for targets of fake explicit images—which experts have said are overwhelmingly women—to report and remove deepfakes that surface in search results. Additionally, Google took steps to downrank explicit deepfakes “to keep this type of content from appearing high up in Search results.”
Following reports that Meta AI gave incorrect information about the attempted assassination of former President Donald Trump, at times even claiming that the rally shooting didn’t happen, Meta said it had configured its AI chatbot to avoid answering questions about the event.
“In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen—which we are quickly working to address,” Meta Global Policy VP Joel Kaplan wrote in a blog post. “These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward.”