Privacy vs. Protection: The Tradeoffs of Age Verification

Policy & Law Journal

Abstract

This paper examines the constitutional, ethical, and practical tradeoffs of mandatory age verification laws for social media, with particular focus on NetChoice, LLC v. Fitch. While such measures are framed as efforts to safeguard minors from online harms, they risk undermining free expression, privacy, and equitable access to digital spaces. Drawing on relevant jurisprudence and normative theory, the paper argues that these laws are constitutionally overbroad and ethically troubling. Alternative approaches, including parental controls, digital literacy, and industry standards, offer more proportionate means of protecting children while preserving fundamental rights in an increasingly contested digital public sphere.

Background

On August 14, 2025, the United States Supreme Court ruled in NetChoice, LLC v. Fitch, finding that, for the time being, the state of Mississippi may continue enforcing HB 1126 (also known as the Walker Montgomery Protecting Children Online Act) while the legal battle over its constitutionality continues. The law's passage through the Mississippi legislature placed new requirements on social media platforms to verify the age of users and obtain parental consent for minors before they can open accounts. Platforms that fail to comply may face civil penalties (up to $10,000 per violation) and potential criminal penalties.1 NetChoice, a tech industry trade group representing platforms like Meta, YouTube, Snapchat, and Reddit, sued the state of Mississippi, arguing the law violates the First Amendment's free speech protections. A federal district judge, Hon. Halil Suleyman Ozerden, blocked the law when it initially passed through the state legislature, but the Fifth Circuit Court of Appeals allowed it to go into effect while litigation proceeds.2

The Supreme Court declined to issue an emergency stay blocking enforcement of the law. Justice Brett Kavanaugh, in a concurrence, wrote that the law is "likely unconstitutional" but that NetChoice had not demonstrated the immediate harm necessary to justify blocking it at this interim stage.3 As a consequence of this ruling, the law remains in effect while the legal battle continues through the lower courts. This is the first case of its kind regarding age-verification laws for social media to reach the Supreme Court, and it raises significant questions about First Amendment rights for minors. It comes on the heels of Free Speech Coalition v. Paxton, in which the Court upheld a Texas law requiring age verification for adult sites, applying intermediate scrutiny rather than strict scrutiny, a lower bar for regulation.4

Introduction

In the last two decades, the regulation of online spaces has emerged as one of the most pressing constitutional questions in American law. The rapid expansion of social media platforms such as Instagram, Facebook, X, TikTok, Reddit, YouTube, and Snapchat has generated new concerns about privacy, misinformation, and, most notably, the exposure of minors to harmful content. While policymakers, scholars, and the public have long debated the responsibilities of technology companies, recent years have seen an intensified push by states to enact laws designed to shield children from the risks associated with digital environments. At the center of this push are statutes requiring age verification before minors may access certain online platforms. Proponents argue that these measures empower parents and protect vulnerable users from psychological and social harms. Lali Sarvendram of The General Consensus noted in March 2024, "Having stricter age limits on social media can prevent young kids from running into issues such as cyberbullying, body image issues, mental health problems, and other things that they should not have to deal with at a young age."5 On the other hand, critics warn of the costs to free expression, equal access, and personal privacy. Dr. Catherine Page Jeffery argues that "enforcing an age limit is [not] a good move because it would be tricky to impose and would deny young people access to platforms they value, learn on and derive entertainment from."6 This tension between protecting children and preserving constitutional freedoms came to a head in NetChoice, LLC v. Fitch, a case recently decided by the Supreme Court.

The Mississippi statute at issue in NetChoice required users to provide proof of age before creating or maintaining an account on social media platforms. Framed as a measure to ensure the safety of minors, the law was part of a broader national trend. Several states, including Arkansas, Utah, and Louisiana, have enacted or proposed similar legislation, reflecting growing bipartisan concern over the effects of digital media on children. Yet these laws raise difficult constitutional questions. While the state possesses a compelling interest in safeguarding minors from online harms, such regulations often place broad restrictions on speech and create practical obstacles for both children and adults. The Supreme Court, in reviewing Mississippi's law, was asked to determine whether the state's attempt to regulate access to social media struck an acceptable balance between the rights of individuals and the responsibilities of the government. The Court's disposition reflects the complexity of this balance. Although the justices acknowledged the legitimacy of Mississippi's concern for child welfare, the Court allowed the law to remain in effect at this interim stage, even as Justice Kavanaugh's concurrence signaled that it is likely unconstitutional. The challenge to the law draws heavily on precedent, particularly Reno v. ACLU (1997) and Brown v. Entertainment Merchants Association (2011), which reaffirmed that minors are entitled to significant First Amendment protections. At the same time, the concurrence left open the possibility that narrowly tailored regulations could survive constitutional scrutiny if they did not overly restrict lawful speech or infringe on adult users' rights. This provisional outcome underscores the turbulent nature of the legal terrain.

The decision in NetChoice illustrates the broader dilemma currently facing policymakers. On one side is the undeniable need to protect children from predatory actors, exposure to explicit content, and the documented risks of excessive social media use. On the other side are fundamental concerns about freedom of expression, privacy, and equal access to the digital public sphere. Mandatory age verification laws bring these concerns into sharp relief. They not only implicate the First Amendment but also raise practical questions about surveillance, data collection, and the treatment of minors as autonomous individuals with their own rights to information and participation.

This paper examines the tradeoffs inherent in age verification laws, using NetChoice v. Fitch as a focal point. While the state's interest in child protection is compelling, the mechanism of mandatory age verification creates constitutional, ethical, and practical challenges that outweigh the proposed benefits. The paper contends that although protecting minors online is an urgent and legitimate state interest, mandatory age verification laws such as Mississippi's are constitutionally overbroad, ethically troubling, and practically unworkable, and that alternative policy measures are better suited to balancing child welfare with the preservation of free expression and privacy.

The State’s Interest in Protecting Children

States have long acknowledged a compelling interest in safeguarding minors from harm. As social media access spreads among ever-younger populations, the scope of that interest has expanded dramatically, encompassing not only traditional threats but also new forms of psychological exploitation, predation, and exposure to damaging content. Mississippi's Walker Montgomery Protecting Children Online Act (HB 1126) was passed with unanimous support in both legislative chambers and was defended by the state in part as a response to an extortion-related tragedy involving a 16-year-old.7 Such contexts underscore the seriousness of the state's concern and align with long-standing governmental responsibilities toward minors.

The Supreme Court’s decisions in related cases, such as Free Speech Coalition v. Paxton, reflect an evolving judicial recognition of these interests. Writing for the majority in Paxton, Justice Thomas explained that the Court upheld Texas legislation requiring age verification for access to online explicit content, applying intermediate scrutiny and deeming the burden on adult speech “incidental” to a substantial governmental interest in protecting children.8 Critically, the Court held that protecting children satisfies the requirement of an important or substantial interest, and that such statutes need not be the least restrictive means possible so long as they are “properly tailored” and do not directly target the content of adult speech.9 That decision reveals the Court’s willingness to defer to state protective interests in the digital realm, particularly when the target is content to which minors are especially vulnerable. Comparative regulatory efforts and assessments of harm likewise point to a broader consensus: governing bodies and task forces have pressured platforms to adopt greater protective measures. At the federal level, the Children’s Online Privacy Protection Act (COPPA) reflects similar priorities by requiring parental consent for the collection of data from children under 13. Although COPPA continues to face criticism for privacy risks and chilling effects, particularly when platforms respond by restricting access for all minors, the law nonetheless crystallizes the principle that children’s privacy and welfare justify special regulations.

At a theoretical level, philosophers and legal scholars emphasize that minors possess both vulnerabilities and developing personhood, creating a legitimate space for protective regulation absent undue suppression. This aligns with the harm principle in liberal constitutional thought: the state may intervene to curtail risk to vulnerable populations, provided regulation is proportionate and respects fundamental freedoms elsewhere. Mississippi’s statute, viewed through this lens, evidences a multifaceted approach to child protection. It targets not only age verification but also broader practices, such as data collection and algorithmic amplification, that pose risks to minors. The law’s requirement to delete identification data after verification reflects an attempt to balance protective measures with privacy considerations. Nevertheless, critics warn of unintended consequences, such as privacy invasion, chilled speech, and unequal access. But the state’s intention remains clear: to equip parents and platforms with tools to shield minors from documented online risks, including grooming, self-harm content, bullying, and addictive design practices.

Tradeoffs and Risks of Age Verification

While the state’s interest in protecting minors is compelling, mandatory age verification laws evoke substantial constitutional, ethical, and practical concerns. These tradeoffs, ranging from free speech suppression to privacy vulnerabilities and disparate access, suggest that such laws may in fact cause more harm than good.

One prominent worry is the erosion of privacy through data collection and retention. To verify a user’s age, platforms might require sensitive identifiers like government-issued IDs or biometric scans. As critics have warned, this personal data becomes a high-value target for hackers and can lead to identity theft. Nearly one million American children had their identities stolen in 2021, pointing to the grave risks involved in aggregating such data.10 Privacy experts argue that centralized storage of identification materials, especially if mandated by law, magnifies the danger by creating enticing targets for exploitation.11

Connected to this is the potential for surveillance creep. Age verification frameworks risk morphing into surveillance tools by design. Kyle Chayka of The New Yorker aptly explains that these systems “spell the end of the relative anonymity that we’ve come to expect online,” especially for marginalized communities where anonymity may be vital for safety and expression.12 Further, invasive methods like facial recognition and AI-based estimation fuel both privacy and fairness concerns. Technologies such as Yoti claim high accuracy in age estimation, yet prior studies reveal that facial recognition systems often underperform for persons of color or gender-diverse individuals, raising numerous equity issues.13 Moreover, mandates for age checks may diminish freedom of expression, particularly for minors and conscientious adult users. The Electronic Frontier Foundation has cautioned that requiring identification online “chills access” to speech and threatens anonymity.14 Similarly, commentators have argued that these policies force even compliant users to reveal sensitive data, while determined individuals, especially minors, will still find circumvention methods, deepening inequality in access.

The technical limitations of verification systems add further concerns. Many current tools, including credit card checks, document scans, and AI-based estimation, are flawed or easily bypassed. Teen users may rely on VPNs, fake IDs, or third-party apps to evade verification, exposing themselves to malware or phishing risks.15 These methods are also costly, especially for smaller or emerging platforms; such burdensome requirements could consolidate power among big tech firms that can afford compliance, thereby stifling competition and innovation.16 Age verification laws also pose problems of access injustice, inadvertently excluding vulnerable groups. Individuals without official ID, such as refugees, homeless youth, or low-income minors, could be entirely cut off from online platforms, losing access to educational and social support networks. Even well-intentioned regulation may force users to choose between their privacy and social connection.

Another risk lies in the chilling effect of overbroad or asymmetric enforcement. Young people and parents report viewing age verification more as a control tool than as genuine protection. Some teenagers noted it could isolate them socially or remove safe online outlets.17 Parents voiced skepticism too, worrying about data misuse and whether their children’s information would be used for purposes beyond age verification.

Lastly, there is the threat of global fragmentation and regulatory confusion. States across the U.S., and countries like the U.K. and Australia, are adopting divergent frameworks for age enforcement. This fragmented landscape creates compliance complexity for platforms and uneven experiences for users.18 Without coherent national or international standards, enforcement becomes unpredictable and unfair.

Alternative Policy Approaches

Given the significant constitutional, ethical, and practical risks posed by mandatory age verification laws, policymakers and technology companies have explored alternative approaches to protecting minors online. One such approach is enhancing parental control mechanisms. Modern operating systems and platforms increasingly provide tools that allow parents to monitor or limit children’s online activity, set usage schedules, and restrict access to specific content categories. These tools respect the principle that parents, rather than the state, should primarily guide minors’ digital interactions. Unlike broad statutory mandates, parental controls enable a more targeted and context-sensitive approach, accommodating the diverse maturity levels and needs of individual children. Evidence from studies conducted by organizations like Gallup on digital literacy and parental monitoring suggests that children whose online activity is supervised or guided by guardians are less likely to encounter harmful content. Additionally, these youths are more likely to engage with age-appropriate educational resources.19 Parental control systems also avoid centralized data collection, reducing the risk of privacy violations inherent in large-scale verification systems. Despite their promise, these mechanisms are not perfect. Effectiveness depends on parental engagement and technological literacy, and there is a risk of uneven protection for children in households with limited resources. Nevertheless, parental control frameworks offer a flexible, less invasive alternative that balances child safety with the preservation of constitutional rights.

A second potential alternative emphasizes education and digital literacy as preventive measures. Such programs teach children about online risks, safe social media practices, and critical evaluation of content, empowering minors to navigate the digital environment responsibly. Research in developmental psychology and education indicates that children who receive structured guidance on media use develop better self-regulation and awareness of potential threats. Educational initiatives may be implemented through schools, community programs, or platform-specific tutorials and safety modules. Complementing these programs, industry self-regulation offers another mechanism for protection. Social media companies can adopt internal policies to limit exposure to harmful content, detect predatory behavior, and employ age-appropriate content curation. For instance, algorithmic adjustments and content warnings can mitigate risks without requiring personal identification from minors. Regulatory frameworks can incentivize, rather than mandate, such measures, encouraging innovation and compliance without imposing the heavy burdens associated with state-mandated verification. Comparative international experience reinforces the value of these approaches. In countries like the United Kingdom and Australia, initiatives combining educational outreach with industry standards have been central to child protection online, often proving more effective and adaptable than rigid verification systems. Together, education, parental control, and responsible platform policies provide a multifaceted, balanced strategy to protect minors while minimizing constitutional and privacy concerns.

Normative and Theoretical Analysis

The debate over age verification implicates fundamental questions about the role of the state in protecting minors, the moral status of children, and the nature of constitutional rights in digital spaces. Any adequate analysis must move beyond pragmatic or purely legal concerns and address the deeper normative structure underpinning regulation. The question is not merely whether Mississippi’s statute is constitutional in the narrow sense, but whether the form of regulation it represents is justified given the aims of a liberal democratic order.

From a liberal constitutional perspective, the starting point is John Stuart Mill’s harm principle,20 which holds that the only legitimate reason for state coercion is to prevent harm to others. The harm principle appears to justify measures aimed at shielding minors from exploitation, predation, and serious psychological harm, since children may be incapable of assessing or avoiding such dangers on their own. This has traditionally provided the foundation for protective laws governing child labor, education, and obscenity. Yet Mill also emphasized that paternalistic interventions must be proportionate and must not unnecessarily infringe on individual liberty. The mere existence of risk is not sufficient to justify sweeping restrictions. Laws must be tailored to the magnitude of the harm and must interfere with liberty only to the degree necessary to mitigate that harm. Mandatory age verification arguably fails this proportionality test. By requiring all users to submit identification before accessing platforms, it treats every user, adult and child alike, as a potential risk. This resembles what Joel Feinberg has called the “offense principle”21 taken too far, where the state moves beyond preventing harm and instead begins to control the conditions under which lawful activity may be exercised. In this sense, age verification laws may constitute an overreach, imposing burdens that affect even those who are not within the class the state is trying to protect.

A second theoretical concern involves the moral status of children as developing persons. Feinberg’s own theory of children’s rights stresses that minors hold what he calls “rights-in-trust,”22 rights to an open future that the state and society must safeguard. This means children should be protected from exploitation and serious harm but also allowed to cultivate autonomy and participate in the cultural and educational life of the community. Digital participation is now a major site of such engagement, and sweeping restrictions on access may undermine this developmental process. By conditioning participation on the surrender of privacy and the presentation of government-issued identification, these laws transform the digital public sphere from an open common into a gated arena. Children may lose opportunities for self-expression, political education, and creative exploration that are essential to developing a robust sense of agency.

This problem becomes even clearer when we consider the relationship between speech rights and personhood. The Supreme Court has long recognized that minors enjoy substantial First Amendment protections. In Tinker v. Des Moines Independent Community School District, the Court famously declared that neither students nor teachers “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.”23 The digital environment now plays a similar role to the schoolhouse in shaping civic identity and participation. To exclude minors from it, or to condition their entry on intrusive verification procedures, risks diminishing their constitutional personhood. Such exclusion must therefore meet the highest standards of justification.

Constitutional theory provides a framework for this inquiry through the principle of proportionality and the scrutiny tests applied by the courts. When speech is burdened, strict scrutiny typically applies, requiring that the law be narrowly tailored to serve a compelling governmental interest and that it use the least restrictive means available. Although in Free Speech Coalition v. Paxton the Court applied intermediate scrutiny, many scholars argue that broad verification mandates implicate core speech rights sufficiently to warrant strict scrutiny. Even under intermediate scrutiny, however, a law must be substantially related to an important governmental interest and must not burden more speech than necessary. Age verification laws are vulnerable under either standard because they create blanket restrictions, risk chilling lawful expression, and may be both over- and under-inclusive. They are over-inclusive because they capture adults and older minors who may safely use the platforms, and under-inclusive because determined youth can often circumvent verification with relative ease.

Ethical analysis also highlights the problem of surveillance and privacy as independent values. Liberal political theory treats privacy not merely as an instrumental good but as a constitutive element of freedom. To be free is partly to enjoy a private sphere in which one is not constantly observed or required to justify one’s activities. Age verification transforms ordinary digital interaction into an act of disclosure, where individuals must reveal personal data simply to participate. This is particularly troubling given the history of surveillance technologies being used disproportionately against marginalized groups. What may seem like a neutral administrative requirement can become a mechanism of exclusion for those without formal identification or those seeking anonymity for safety reasons, including LGBTQ youth and political dissidents.

From a communitarian perspective, one might argue that protecting children is a collective responsibility and that some sacrifice of privacy is acceptable for the sake of community welfare. However, even communitarian theorists caution against eroding the very norms that sustain trust and social cohesion. If digital spaces become sites of constant identity verification, the character of those spaces changes. Participation begins to resemble a licensed privilege rather than a presumptive right. This shift risks normalizing a culture of surveillance that is inconsistent with democratic citizenship. Citizens are meant to deliberate and exchange ideas freely, not under the perpetual gaze of a gatekeeping authority.

The long-term implications of these policies must also be considered. If age verification becomes the standard, it could pave the way for broader forms of identity-gated access across the internet, effectively ending anonymous speech. Anonymity has historically played a crucial role in democratic life, from the pseudonymous publication of The Federalist Papers to modern whistleblowing and activist movements. Removing anonymity would disproportionately silence vulnerable voices while doing little to deter bad actors who can exploit loopholes or stolen identities.

In light of these considerations, a more defensible regulatory approach would be one that respects the developing autonomy of minors while equipping parents and communities with tools to mitigate genuine harm. Parental control systems, digital literacy programs, and voluntary industry standards can be calibrated to protect children without imposing universal surveillance. Such approaches also allow for diversity in family values and child-rearing practices, reflecting a pluralistic society in which not all parents or minors share the same views on risk and exposure.

Ultimately, analysis through this lens reveals that mandatory age verification is not merely a technical or administrative measure but a profound intervention into the architecture of digital freedom. It reconfigures the relationship between individuals and the state, transforming the internet from a presumptively open forum into a regulated domain. For a liberal democracy committed to robust freedom of expression, privacy, and equal access, such a transformation should occur, if at all, only after the most careful deliberation and with the narrowest possible scope. The state must weigh not only the immediate benefits of child protection but also the cost to the civic and moral development of future generations. The guiding principle should be to preserve the conditions under which children can grow into free and responsible citizens, which includes protecting their right to engage, learn, and express themselves in the digital public sphere.

Conclusion

The controversy surrounding mandatory age verification laws reveals the deep tension between child protection and the preservation of constitutional freedoms in the digital age. NetChoice, LLC v. Fitch illustrates how even well-intentioned legislation can impose sweeping burdens on privacy, free expression, and equitable access to online platforms. While states have a compelling interest in shielding minors from exploitation and harmful content, that interest must be pursued in a manner that respects the rights of both children and adults. Mandatory verification risks normalizing surveillance, eroding anonymity, and creating structural inequities that disproportionately affect vulnerable groups.

A more proportionate approach would emphasize parental empowerment, digital literacy, and voluntary industry standards, fostering a safer online environment without sacrificing the openness of the digital public sphere. Such solutions align with constitutional principles and with philosophical commitments to autonomy, personhood, and the right to an open future. Policymakers must seek to protect children not by walling off the internet, but by ensuring that children can grow into responsible digital citizens. In the long run, preserving freedom, privacy, and equal access is itself a form of protection, safeguarding the democratic character of online life for future generations.

Notes

  1. Amy Howe, “Supreme Court Allows Mississippi Restrictions on Children’s Social Media Access to Remain in Place,” SCOTUSblog, August 14, 2025, https://www.scotusblog.com/2025/08/supreme-court-allows-mississippi-restrictions-on-childrens-social-media-access-to-remain-in-place/.
  2. Kit Yona, “SCOTUS Leaves ‘Likely Unconstitutional’ Mississippi Social Media Law in Place for Now,” FindLaw, last modified August 19, 2025, https://www.findlaw.com/legalblogs/federal-courts/scotus-leaves-likely-unconstitutional-mississippi-social-media-law-in-place-for-now/.
  3. NetChoice, LLC v. Fitch, 606 U.S. ___ (2025).
  4. “Free Speech Coalition, Inc. v. Paxton,” Oyez, September 21, 2025, https://www.oyez.org/cases/2024/23-1122.
  5. Lali Sarvendram, “Why There Should be Stricter Age Limits on Social Media,” The General Consensus, March 22, 2024, https://hwrhsgeneralconsensus.com/11710/opinion/why-there-should-be-stricter-age-limits-on-social-meida/.
  6. Evelyn Manfield, “Social media age limits might be popular with politicians and parents, but experts warn they aren’t simple,” Australian Broadcasting Corporation, June 13, 2024, https://www.abc.net.au/news/2024-06-14/social-media-age-limits-experts-warn-they-aren-t-simple/103975740.
  7. Emily Pettus and Simeon Gates, “Mississippi can start using law on social media age verification, court says,” Mississippi Today, July 18, 2025, https://mississippitoday.org/2025/07/18/social-media-age-verification-law/.
  8. Free Speech Coalition, Inc. v. Paxton, 606 U.S. ___ (2025).
  9. Callum Sutherland, “Supreme Court Upholds Texas Law Requiring Age Verification to Access Pornography,” Time, June 27, 2025, https://time.com/7298366/supreme-court-pornography-age-verification-texas/.
  10. Tyler Curtis, “Social Media Age Verification Puts User Data at Risk,” Government Technology, March 29, 2024, https://www.govtech.com/opinion/opinion-social-media-age-verification-puts-user-data-at-risk.
  11. Ibid.
  12. Kyle Chayka, “The Internet Wants to Check Your I.D.,” The New Yorker, August 6, 2025, https://www.newyorker.com/culture/infinite-scroll/the-internet-wants-to-check-your-id.
  13. Emma Roth, “Online age verification is coming, and privacy is on the chopping block,” The Verge, May 15, 2023, https://www.theverge.com/23721306/online-age-verification-privacy-laws-child-safety.
  14. Lisa Femia, “EFF Urges Supreme Court to Reject Texas’ Speech-Chilling Age Verification Law,” Electronic Frontier Foundation, May 21, 2024, https://www.eff.org/deeplinks/2024/05/eff-urges-supreme-court-reject-texas-speech-chilling-age-verification-law.
  15. Liudas Kanapienis, “Social media’s age verification crisis: Can platforms solve the technical and ethical puzzle?,” Biometric Update, July 1, 2025, https://www.biometricupdate.com/202507/social-medias-age-verification-crisis-can-platforms-solve-the-technical-and-ethical-puzzle.
  16. “U.S. Social Media Regulations for Minors,” GovFacts, July 7, 2025, https://govfacts.org/explainer/u-s-social-media-regulations-for-minors/.
  17. Justine Humphry et al., “Age verification for social media would impact all of us. We asked parents and kids if they actually want it,” The Conversation, May 21, 2024, https://theconversation.com/age-verification-for-social-media-would-impact-all-of-us-we-asked-parents-and-kids-if-they-actually-want-it-230539.
  18. Kyooeun Jang, Lulia Pan, and Nicole Turner Lee, “The fragmentation of online child safety regulations,” Brookings Institution, August 14, 2023, https://www.brookings.edu/articles/patchwork-protection-of-minors/.
  19. Spence Purnell, “Gallup shows how parenting supervision on social media use impacts youth mental health,” Reason Foundation, December 15, 2023, https://reason.org/commentary/gallup-shows-how-parenting-supervision-on-social-media-use-impacts-youth-mental-health/.
  20. John Stuart Mill, On Liberty, ed. Elizabeth Rapaport (Indianapolis: Hackett Publishing Company, 1978).
  21. Joel Feinberg, Offense to Others: The Moral Limits of the Criminal Law (New York: Oxford University Press, 1985).
  22. Joel Feinberg, “The Child’s Right to an Open Future,” in Freedom and Fulfillment: Philosophical Essays, ed. Joel Feinberg (Princeton: Princeton University Press, 1992).
  23. Tinker v. Des Moines Independent Community School District, 393 U.S. 503 (1969).

Carnegie Mellon Policy and Law Review