Revised GUARD Act Still Poses Privacy and Free Speech Risks for Minors and Families

Introduction

After facing significant criticism, lawmakers have amended the GUARD Act, a bill originally aimed at limiting minors' access to artificial intelligence systems. The initial version was alarmingly broad, potentially covering almost every AI-powered chatbot or search engine. The revised bill narrows its focus to so-called “AI companions”—conversational systems designed to simulate emotional or interpersonal engagement with users. While this change addresses some of the most sweeping concerns, serious issues remain regarding privacy, online expression, and parental choice.

Source: www.eff.org

What the GUARD Act Targets

From Broad Restrictions to AI Companions

The original GUARD Act would have applied to a vast array of AI services, including generic chatbots, virtual assistants, and even search tools. Critics argued this would stifle innovation and limit access to legitimate educational and therapeutic tools. The updated version attempts to zero in on “AI companions,” defined as systems that encourage emotional disclosures from users or maintain a persistent identity, persona, or character. Even this narrower definition, however, leaves many ambiguities about which services are covered.

A Narrower but Still Problematic Scope

While the bill now targets only AI companions, the core mechanism for enforcing restrictions remains intrusive. Companies offering such services must implement “reasonable age verification” systems tied to users’ real-world identities. This requirement persists even when parents explicitly approve their children’s use of these tools. For instance, a family might believe an AI companion helps an isolated teenager practice social interactions or engage in harmless creative roleplay. A parent serving overseas might rely on a persistent AI storyteller for a young child. Under the revised bill, these use cases still face mandatory age checks that demand sensitive personal or financial information before the child can access the service.

The Burden of Identity-Linked Age Verification

Privacy and Accessibility Concerns

The revised GUARD Act allows a broader set of verification methods than its predecessor, but all of them remain anchored to real-world identity—such as government ID, bank records, or age-verified accounts tied to mobile operating systems or app stores. This approach creates significant privacy and access hurdles. Millions of Americans lack current government identification, stable bank accounts, or access to reliable digital identity systems. Even for those who have them, linking identity to online speech tools poses risks to anonymity, data security, and personal privacy. Many users may simply avoid AI companions altogether rather than surrender sensitive information to a verification system that feels invasive.


Unclear Definitions and Heavy Penalties

Ambiguous Legal Standards Create Risk

The revised bill leaves critical definitions vague, particularly around what constitutes an “emotional disclosure” or a “persistent identity.” At the same time, it sharply increases penalties for companies and developers who misjudge these boundaries. This combination of fuzzy language and severe liability creates a chilling effect. Companies may choose to block all users under 18 (or even all users) from any service that could be interpreted as an AI companion, rather than risk legal punishment. This outcome undermines parental choice and access to beneficial tools.

Conclusion: Unresolved Issues

Congress has narrowed the GUARD Act, but the revised version still attempts to solve a complex social problem through vague legal standards, heavy liability, and privacy-invasive verification systems. While the bill’s focus on AI companions is an improvement, it fails to adequately protect privacy, free speech, and the ability of families to make informed decisions about technology. Until these deeper flaws are addressed, the GUARD Act remains a problematic piece of legislation that could harm the very users it claims to protect.