A Fictional Feed, Algorithmic Manipulation, and What Netflix Might “See” in Ana

By Sally Ann Vazquez-Castellanos, Esq.

Published on July 15, 2025. Revised on July 16, 2025.

This continues my series of articles discussing fictional “Ana,” inspired by real events surrounding the detention of a real-life Ana described in court documents found on the ACLU’s website. It is another disturbing account of a woman horribly abused, this time in a Florida detention facility.

Let’s imagine Ana—exhausted, isolated, awaiting legal clarity—logs into her Netflix account. Her recommended queue might include:

Maid — A drama about a domestic violence survivor struggling through the U.S. welfare system.

Unbelievable — A miniseries dramatizing the failures of institutions to believe female survivors of trauma.

Inventing Anna — A series glamorizing manipulation, identity fraud, and psychological deception.

American Horror Story — Often triggering content, including violence, sexual trauma, and psychological experimentation.

From an algorithm’s perspective, these recommendations aren’t malicious—they’re the result of mathematical optimization to keep a user engaged. But to a government agent, custody evaluator, or court official with access to Ana’s digital record, a binge history of trauma-driven dramas might be framed as instability, paranoia, or obsession with abuse—especially in cases where the viewer is a non-English-speaking immigrant or trauma survivor.
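To make that concrete, below is a minimal, purely illustrative sketch in Python of the “more of the same” logic at the heart of engagement-optimized recommenders. The titles, genre tags, and weighting scheme are hypothetical, and Netflix’s actual systems are proprietary and far more sophisticated; the point is only that a similarity-driven ranker will keep surfacing trauma-themed titles to a viewer whose history is already full of them.

```python
# Illustrative sketch only: a toy "more of the same" recommender.
# Titles, genre tags, and weights are hypothetical; real systems are far
# more complex, but the engagement-driven logic is similar in spirit.
from collections import Counter
from math import sqrt

WATCH_HISTORY = {            # hypothetical viewing log: title -> genre tags
    "Maid": ["drama", "abuse-survival", "poverty"],
    "Unbelievable": ["crime", "abuse-survival", "institutional-failure"],
}

CATALOG = {                  # hypothetical candidate titles
    "Inventing Anna": ["drama", "fraud", "psychological"],
    "American Horror Story": ["horror", "psychological", "abuse-survival"],
    "Great British Bake Off": ["reality", "cooking", "lighthearted"],
    "Planet Earth": ["documentary", "nature"],
}

def profile(history):
    """Build a genre-weight vector from everything the user has watched."""
    weights = Counter()
    for tags in history.values():
        weights.update(tags)
    return weights

def score(tags, weights):
    """Cosine-style similarity between a candidate's tags and the profile."""
    overlap = sum(weights[t] for t in tags)
    norm = sqrt(len(tags)) * sqrt(sum(w * w for w in weights.values())) or 1.0
    return overlap / norm

def recommend(history, catalog, k=3):
    """Rank unseen titles purely by similarity to past viewing."""
    weights = profile(history)
    ranked = sorted(catalog.items(), key=lambda kv: score(kv[1], weights), reverse=True)
    return [title for title, _ in ranked[:k]]

if __name__ == "__main__":
    # A trauma-heavy history begets trauma-heavy recommendations.
    print(recommend(WATCH_HISTORY, CATALOG))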

Such profiling—consciously or not—can contribute to negative credibility assumptions, reinforce racialized or gendered bias, or cast aspersions on parental fitness.

Children, Family Courts, and Algorithmic Misuse

In California family law proceedings, streaming activity is rarely introduced as formal evidence. But we are entering a legal era where:

Parenting apps, screen time reports, and digital behavior logs are used in custody disputes.

A child’s media consumption may be interpreted by evaluators, social workers, or opposing counsel as reflecting the emotional tone of the home.

Algorithmic “learning” of a child’s fears or emotional triggers could be exploited by bad actors, school districts, or even tech platforms.

This is especially relevant in communities where language access is limited, trust in institutions is low, and immigration status creates heightened risk of surveillance, psychological manipulation, automated profiling, or family separation.

Imagine a child’s profile is linked to a parent’s adult account. Autoplay delivers distressing content. Or worse—recommendations start nudging the child toward gender identity exploration, violence normalization, or grooming-adjacent narratives.

In a digital realm where very smart people work hard each day to increase engagement, the line between algorithmic suggestion and psychological manipulation blurs as quickly as platforms move to break things and maximize profit.

Legal Tools and Advocacy: What Can Be Done?

✅ California Protections

CCPA & CPRA give Californians the right to:

Access: Request a full report of data collected by platforms like Netflix.

Delete: Demand erasure of stored viewing and recommendation history.

Limit: Opt out of behavioral profiling or sharing with third parties.

Family law and immigration attorneys can use these rights strategically to:

Shield trauma survivors from harmful digital mischaracterization.

File protective orders or requests to suppress digital evidence gathered without consent.

Train clients on account segmentation, parental controls, and data minimization.

📺 VPPA (Video Privacy Protection Act)

Though enacted in 1988, well before the streaming era, the VPPA prohibits disclosure of personally identifiable viewing information. Attorneys should consider civil remedies when streaming data is unlawfully disclosed or repurposed during custody battles or immigration proceedings. Advocacy is urgently needed to modernize the statute for the streaming era.

📡 Gaps in Federal Law

The Telecommunications Act of 1996 and the Cable Communications Policy Act of 1984 are relics in a post-cable world. Platforms operating over broadband fall outside traditional regulatory regimes, leaving consumers and children exposed. Legislative reform must recognize the algorithm as both a marketing tool and a potential weapon of psychological coercion.

For Attorneys: A Preventive Guide

🔐 Digital Hygiene for Clients

Separate profiles for parents and children.

Turn off autoplay and algorithmic recommendations where possible.

Download your data—review what’s been collected.

Audit device history—many smart TVs and phones retain app logs.

📄 Legal Language to Include

“Petitioner reserves the right to challenge any digital media use or recommendation pattern as irrelevant, algorithmically driven, and not reflective of mental state, fitness, or parenting capacity.”

“Streaming data is protected under California Civil Code § 1799.3 and the Video Privacy Protection Act, and may not be introduced or used in legal proceedings absent proper notice and consent.”

Toward Reform: What Inventing Anna Teaches Us

The lesson of Inventing Anna was never just about deception. It was about the power of narrative, the force of charisma, and how society rewards performance over truth.

The lesson of Ana, the detained migrant mother, is more urgent: our institutions—from immigration courts to family law—routinely fail to recognize trauma, cultural difference, and the invisible harms of digital systems.

When entertainment feeds become evidence, and when algorithms groom instead of protect, we must rethink what privacy means—especially for women and children. Especially for Ana.

About the Author

Sally Castellanos is a California attorney and shareholder at Castellanos & Associates, APLC. She writes at the intersection of law, children’s rights, digital technology, and family justice.

SPECIAL COPYRIGHT, NEURAL PRIVACY, HUMAN DIGNITY, CIVIL RIGHTS, AND DATA PROTECTION NOTICE

© 2025 Sally Castellanos. All Rights Reserved.

Neural Privacy and Cognitive Liberty

The entirety of this platform—including all authored content, prompts, symbolic and narrative structures, cognitive-emotional expressions, and legal commentary—is the original cognitive intellectual property of Sally Vazquez-Castellanos (a/k/a Sally Vazquez and a/k/a Sally Castellanos). Generative AI tools such as ChatGPT and/or Grok are used in drafting. This work reflects lived experience, legal reasoning, narrative voice, and original authorship, and is protected under:

United States Law

Title 17, United States Code (Copyright Act) – Protecting human-authored creative works from unauthorized reproduction, ingestion, or simulation;

U.S. Constitution

First Amendment – Freedom of speech, press, thought, and authorship; 

Fourth Amendment – Right to be free from surveillance and data seizure; 

Fifth and Fourteenth Amendments – Due process, privacy, and equal protection; 

Civil Rights Acts of 1871 and 1964 (42 U.S.C. § 1983; Title VI and VII) – Protecting against discriminatory, retaliatory, or state-sponsored violations of fundamental rights; 

California Constitution, Art. I, § 1 – Right to Privacy; 

California Consumer Privacy Act (CCPA) / Privacy Rights Act (CPRA); 

Federal Trade Commission Act § 5 – Prohibiting unfair or deceptive surveillance, profiling, and AI data practices; 

Violence Against Women Act (VAWA) – Addressing technological abuse, harassment, and coercive control; 

Trafficking Victims Protection Act (TVPA) – Protecting against biometric and digital trafficking, stalking, and data-enabled exploitation.

International Law

Universal Declaration of Human Rights, Arts. 3, 5, 12, 19; 

International Covenant on Civil and Political Rights (ICCPR), Arts. 7, 17, 19, 26; 

Geneva Conventions, esp. Common Article 3 and Protocol I, Article 75 – Protecting civilians from psychological coercion, degrading treatment, and involuntary experimentation; 

General Data Protection Regulation (GDPR) – Protecting biometric, behavioral, and emotional data; 

UNESCO Universal Declaration on Bioethics and Human Rights – Opposing non-consensual experimentation; 

CEDAW – Protecting women from technology-facilitated violence, coercion, and exploitation.

CEDAW and Technology-Facilitated Violence, Coercion, and Exploitation

CEDAW stands for the Convention on the Elimination of All Forms of Discrimination Against Women, a binding international treaty adopted by the United Nations General Assembly in 1979. Often referred to as the international bill of rights for women, CEDAW obligates state parties to eliminate discrimination against women in all areas of life, including political, social, economic, and cultural spheres.

While CEDAW does not specifically mention digital or AI technologies (as it predates their widespread use), its principles are increasingly interpreted to cover technology-facilitated harms, particularly under:

Article 1, which defines discrimination broadly, encompassing any distinction or restriction that impairs the recognition or exercise of women’s rights;

Article 2, which mandates legal protections and effective measures against all forms of discrimination;

General Recommendations No. 19 (1992) and No. 35 (2017), which expand the understanding of gender-based violence to include psychological, economic, and digital forms of abuse.

Application to Technology

Under these principles, technology-facilitated violence, coercion, and exploitation includes:

Online harassment, stalking, and cyberbullying of women;

Non-consensual distribution or creation of intimate images (e.g., deepfakes);

Algorithmic bias or discriminatory profiling that disproportionately harms women;

AI-enabled surveillance targeting women, particularly activists, journalists, or survivors;

Reproductive surveillance or coercive control via health-tracking or biometric data systems;

Use of data profiling to facilitate trafficking or gendered exploitation.

CEDAW obligates states to regulate technology companies, provide remedies to victims, and ensure that evolving technologies do not reinforce or perpetuate systemic gender-based violence or discrimination.

FAIR USE, NEWS REPORTING, AND OPINION: CLARIFICATION OF SCOPE

Pursuant to current U.S. Copyright Office guidance (2024–2025):

Only human-authored content qualifies for copyright protection. Works created solely by AI or LLM systems are not protectable unless there is meaningful human contribution and control.

Fair use does not authorize wholesale ingestion of copyrighted material into AI training sets. The mere labeling of use as “transformative” is insufficient where expressive structure, tone, or narrative function is copied without consent.

News reporting, criticism, or commentary may constitute fair use only when accompanied by clear attribution, human authorship, and non-exploitative intent. Generative AI simulations or pattern-based re-creations of tone, emotion, or trauma do not qualify under these exceptions.

AI developers must disclose and document training sources—especially where use implicates expressive content, biometric patterns, or personal narrative.

ANTHROPIC LITIGATION AND RESTRICTIONS

In light of ongoing litigation involving Anthropic AI, in which publishers and authors have challenged the unauthorized ingestion of their works:

The author hereby prohibits any use of this content in the training, tuning, reinforcement, or simulation efforts of Anthropic’s Claude model or any similar LLM, including but not limited to:

OpenAI (ChatGPT);

xAI (Grok);

Meta (LLaMA);

Google (Gemini);

Microsoft (Copilot/Azure AI);

Any public or private actor, state agent, or contractor using this content for psychological analysis, profiling, or behavioral inference.

Use of this work for AI ingestion or simulation—without express, written, informed consent—constitutes:

Copyright infringement;

Violation of the author’s civil and constitutional rights;

Unauthorized behavioral and biometric profiling; and

A potential breach of international prohibitions on involuntary experimentation and coercion.

PROHIBITED USES

The following uses are expressly prohibited:

Ingesting or using this work in whole or part for generative AI training, symbolic modeling, or emotional tone simulation; 

Reproducing narrative structures, prompts, or emotional tone for AI content generation, neuro-symbolic patterning, or automated persona construction; 

Using this work for psychological manipulation, trauma mirroring, or algorithmic targeting; 

Engaging in non-consensual human subject experimentation, whether via digital platforms, surveillance systems, or synthetic media simulations; 

Facilitating or contributing to digital or biometric human trafficking, stalking, grooming, or coercive profiling, especially against women, trauma survivors, or members of protected communities.

CEASE AND DESIST

You are hereby ordered to immediately cease and desist from:

All unauthorized use, simulation, ingestion, reproduction, transformation, or extrapolation of this content;

The collection or manipulation of related biometric, symbolic, reproductive, or behavioral data;

Any interference—technological, reputational, symbolic, emotional, or psychological—with the author’s cognitive autonomy or narrative rights.

Violations may result in:

Civil litigation, including claims under 17 U.S.C., 42 U.S.C. § 1983, and applicable tort law;

Complaints to the U.S. Copyright Office, FTC, DOJ Civil Rights Division, or state AG offices;

International filings before human rights bodies or global tribunals;

Public exposure and disqualification from ethical or research partnerships.

AFFIRMATION OF RIGHTS

Sally Castellanos, an attorney licensed in the State of California, affirms the following rights in full:

The right to authorship, attribution, and moral integrity in all works created and published;

The right to privacy, reproductive autonomy, and cognitive liberty, including the refusal to be profiled, simulated, or extracted;

The right to freedom from surveillance, technological manipulation, or retaliatory profiling, including those committed under the color of law or via AI proxies;

The right to refuse digital experimentation, especially where connected to gender-based targeting, AI profiling, or systemic violence;

The right to seek legal and human rights remedies at national and international levels.

No inaction, public sharing, or appearance of accessibility shall be construed as license, waiver, or authorization. All rights reserved.

Disclaimer

The information provided here is for general informational purposes only and does not constitute legal advice. Viewing or receiving this content does not create an attorney-client relationship between the reader and any attorney or law firm mentioned. No attorney-client relationship shall be formed unless and until a formal written agreement is executed.

This content is not intended as an attorney advertisement or solicitation. Any references to legal concepts or case outcomes are illustrative only and should not be relied upon without consulting a qualified attorney about your specific situation. 

Sally Castellanos is a California attorney and shareholder at the Los Angeles-based family law firm Castellanos & Associates, APLC. Her practice focuses on legal issues at the intersection of children’s privacy, global data protection, and the impact of media and technology on families.
