When Authority Becomes Narrative: Custody, Power, and the Misuse of Systems
In custody litigation, courts are often asked to evaluate competing narratives about safety, stability, and parental fitness. But what happens when one parent possesses not just a narrative—but institutional credibility, access, and procedural fluency?
This is where the analysis under California Family Code § 3011 becomes more than a checklist. It becomes a lens through which courts must distinguish between legitimate protection and manufactured risk.
The Misunderstood Premise
There is a persistent but flawed assumption in custody disputes:
That a parent with a law enforcement or peace officer background carries inherent credibility, heightened judgment, or superior fitness.
California law does not support this premise.
Custody determinations are not awarded based on profession. They are grounded in a singular inquiry:
What outcome best serves the health, safety, and welfare of the child?
The difficulty arises when professional authority is not merely background—but becomes a tool for shaping the evidentiary record itself.
Manufacturing the Record
In some cases, a pattern emerges:
Repeated welfare checks initiated without substantiated findings
Police reports that escalate minor disputes into formal incidents
Strategic documentation timed around custody proceedings
Use of professional language or contacts to frame the other parent as unstable or unsafe
Individually, these actions may appear benign or even protective. But taken together, they may reveal something else:
The construction of a litigation narrative through institutional mechanisms.
This is where courts must move beyond surface-level documentation and ask a more difficult question:
Is this evidence reflective of actual risk—or the product of controlled narrative-building?
The Section 3011 Analysis Revisited
Under California Family Code § 3011, several factors become critical in this context:
1. Health, Safety, and Welfare
The court must assess not only physical safety, but emotional and psychological well-being.
A child repeatedly exposed to:
police presence
allegations against a parent
institutional escalation
may experience instability, fear, or confusion, whether a minor, a teenager, or an adult child, and regardless of whether the allegations are substantiated.
If one parent is responsible for generating that environment, the conduct itself becomes relevant.
2. History of Abuse and Coercive Control
California recognizes coercive control under California Family Code § 6320, which includes behavior that interferes with another person’s autonomy or liberty.
In a custody context, this may include:
leveraging institutional authority to intimidate
creating a perception of surveillance or scrutiny
repeatedly invoking systems to destabilize the other parent
The analysis is not limited to physical harm. It extends to patterns of control through process.
3. Ability to Foster a Relationship with the Other Parent
California law strongly favors a parent who supports the child’s relationship with the other parent.
When a parent:
repeatedly files unsubstantiated reports
escalates conflict unnecessarily
portrays the other parent as dangerous without evidence
The court may reasonably question whether that parent is acting in good faith—or attempting to limit contact through manufactured concern.
Credibility in the Age of Documentation
Modern custody disputes are increasingly document-driven. Reports, logs, and records carry weight.
But not all documentation is equal.
Courts must distinguish between:
Corroborated evidence, supported by neutral findings
Self-generated documentation, produced through unilateral action
The existence of a report is not proof of wrongdoing. The pattern, consistency, and outcome of those reports matter.
A Structural Concern
At its core, this issue raises a broader concern about power asymmetry in custody litigation.
When one parent:
understands institutional systems
has access to enforcement mechanisms
or benefits from perceived authority
There is a risk that the legal process itself becomes part of the dispute, rather than a neutral forum for resolution.
The Correct Framing
The most precise way to frame this issue—legally and ethically—is as follows:
The issue is not that one parent has a law enforcement background. The issue is whether that parent has used institutional knowledge, access, or perceived authority to manufacture a custody record against the other parent.
And under California Family Code § 3011:
That conduct bears directly on the child’s health, safety, emotional welfare, stability, and the parent’s willingness to support the child’s relationship with both parents.
What Courts Must Do
Courts are not tasked with choosing between professions. They are tasked with evaluating:
conduct
credibility
patterns
and impact on the child
This requires:
looking beyond the existence of reports
examining outcomes and corroboration
identifying patterns of escalation or control
And most importantly:
Protecting the child not only from harm—but from the manufacture of harm as a legal strategy.
Closing Reflection
In an era where systems can be activated quickly and records created easily, the risk is no longer just what happens inside the home.
It is what can be constructed about the home.
For family courts, the challenge is clear:
To ensure that authority does not become narrative, and that narrative does not become custody.
Sources (Publicly Available)
California Family Code § 3011 (Best interest of the child standard)
California Family Code § 3020 (Frequent and continuing contact policy)
California Family Code § 6320 (Definition of coercive control)
In re Marriage of LaMusga (2004) 32 Cal.4th 1072 (best interest and custody discretion)
Convention on the Rights of the Child (Child welfare and dignity principles)
Judicial Council of California, Child Custody Information Sheet (public guidance on custody determinations)
California Courts Self-Help Guide, Child Custody and Visitation (overview of best interest standard and factors)
Legal Disclaimer: This article is for informational purposes only and does not constitute legal advice. No attorney-client relationship is formed. Individuals should consult qualified legal counsel regarding their specific circumstances.
Cognitive Liberty and Privacy Note: This publication reflects ongoing legal and policy concerns regarding autonomy, informational integrity, and the intersection of technology, authority, and human rights. Unauthorized manipulation of personal narrative—whether through systems, technology, or institutional processes—raises serious legal and ethical implications.
Published on July 15, 2025. Revised on July 16, 2025.
Children’s Rights, Behavioral Profiling, and the Law
“What happens when an algorithm learns your trauma before you speak it aloud?”
“And what if it uses that knowledge—not to heal—but to shape, manipulate, harass, or punish you?”
Quote: ChatGPT
In 2024, the ACLU filed a harrowing civil rights complaint detailing the abuse of a Spanish-speaking migrant mother—pseudonymously referred to as Ana—held in solitary confinement for weeks at a Florida ICE detention facility. A survivor of trafficking and domestic violence, Ana endured not only the systemic failures of our immigration system; her story also shows how trauma can be misunderstood, exploited, or even digitally profiled by the very systems that surround us in our private lives.
Now consider another Ana—the fictional “Anna Delvey” of Netflix’s Inventing Anna—a dramatized grifter portrayed as cunning, glamorous, and psychologically manipulative. What unites these two women isn’t criminality or deception—it’s the machinery behind them: psychological manipulation, profiling, and the dangerous power of misread narratives.
In this article, we explore how streaming platforms like Netflix, when combined with automated profiling tools used by law enforcement or government agencies, can function as vehicles for psychological grooming, behavioral targeting, and even family separation.
We ask: what does your “feed” say about you? And how might these digital breadcrumbs be used—especially against women and children in moments of legal, emotional, or immigration vulnerability?
Inventing Ana: Streaming, Psychological Manipulation, and Storytelling as a Weapon
Netflix’s Inventing Anna is more than a TV drama—it is an algorithmically optimized vehicle designed to hold attention, provoke emotional reaction, and amplify morally ambiguous narratives. But for viewers like Ana—individuals navigating real trauma—these dramatizations can blur into indoctrination.
Netflix’s recommendation engine uses machine learning (ML) to:
Track emotional patterns through binge behavior.
Infer psychological states (e.g., depression, isolation).
Build predictive profiles for personalized content delivery.
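To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python. It does not describe Netflix's actual system; the tags, threshold, and "flag" are invented for illustration. The point is how little machinery it takes to convert a viewing log into a label about a person.

```python
from collections import Counter

# Hypothetical content tags; real platforms use far richer signals.
TRAUMA_TAGS = {"domestic-violence", "true-crime", "psychological-thriller"}

def infer_profile(watch_history):
    """Reduce a viewing log to crude 'interest' scores.

    watch_history: list of (title, tags, minutes_watched) tuples.
    Returns per-tag minutes plus a naive flag showing how easily a
    label like 'trauma-focused viewer' can be attached to someone.
    """
    minutes_by_tag = Counter()
    total = 0
    for _title, tags, minutes in watch_history:
        total += minutes
        for tag in tags:
            minutes_by_tag[tag] += minutes
    trauma_minutes = sum(minutes_by_tag[t] for t in TRAUMA_TAGS)
    share = trauma_minutes / total if total else 0.0
    return {
        "minutes_by_tag": dict(minutes_by_tag),
        "trauma_share": round(share, 2),
        # A single arbitrary threshold turns behavior into a label.
        "flagged_trauma_focused": share > 0.5,
    }

# Invented viewing log for a hypothetical user.
history = [
    ("Maid", {"domestic-violence", "drama"}, 300),
    ("Unbelievable", {"true-crime", "drama"}, 240),
    ("Cooking Show", {"food"}, 60),
]
profile = infer_profile(history)
```

Real recommendation systems are vastly more sophisticated, but the underlying risk is the same: a threshold applied to behavioral data quietly becomes a characterization of the viewer.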
This becomes especially troubling when:
Trauma survivors, minors, migrants, or other vulnerable individuals rely on streaming platforms as emotional lifelines.
The content reinforces distress, manipulates emotional states, or echoes lived abuse.
Law enforcement or third parties gain access to these profiles via subpoenas, data brokers, or government contracts.
What begins as entertainment can end in exposure and objectification.
Profiling Children, Grooming, and Vulnerability
Children are particularly susceptible to algorithmic manipulation.
Recommendation loops can push violent, sexualized, or identity-influencing content.
COPPA (the Children’s Online Privacy Protection Act) only protects children under 13, with limited enforcement.
Netflix, while not designed for children without explicit parental controls, collects usage data even under child profiles.
Psychological grooming—typically understood in the context of abusers gaining a child’s trust—can now be digitized.
Platforms “learn” a child’s fears, interests, and emotional triggers.
Recommendations can nudge behavior over time—toward specific identities, beliefs, or emotional responses.
In immigration or custody proceedings, this data can become evidence of “instability,” “obsession,” “unfitness,” or “unsuitability,” especially for vulnerable or non-English-speaking parents.
Legal Landscape: The Telecommunications and Streaming Privacy Gap
Despite the profound implications, federal and state laws have not kept pace:
Video Privacy Protection Act (VPPA) prohibits unauthorized disclosure of viewing history, but was drafted in 1988—long before algorithmic profiling or streaming dominance.
California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) provide stronger consumer control, allowing Californians to access, delete, or limit the use of their viewing data.
Cable Communications Act and Telecommunications Act do not fully cover streaming services operating over the internet.
These gaps matter. For Ana—or any immigrant or vulnerable mother—watching trauma-themed content on Netflix during a custody proceeding might silently build a profile that shapes how she is treated, judged, or even punished.
In the next article, we will explore:
How attorneys can protect clients’ digital identities in family and immigration proceedings.
A sample feed profile for “Ana”—as seen by Netflix.
Practical tools to request, review, or delete streaming data under California law.
Proposed reforms to the VPPA and CCPA that reflect the emerging dangers of algorithmic profiling.
If you need assistance, engage with law enforcement and/or qualified legal counsel. I also strongly recommend learning how to report unusual activity in a meaningful and credible way on any social media platform or online community you choose to engage with.
Always remember that an online community is much like the community outside your front door. There may be consequences not only for your behavior but also for any accusations you make. Engaging with counsel, counselors, and/or an advocate may be necessary.
Important Phone Numbers
National Center for Missing & Exploited Children – 1-800-843-5678.
The National Human Trafficking Hotline – 1-888-373-7888.
U.S. Department of Homeland Security – 1-866-347-2423.
SPECIAL COPYRIGHT, NEURAL PRIVACY, HUMAN DIGNITY, CIVIL RIGHTS, AND DATA PROTECTION NOTICE
The entirety of this platform—including all authored content, prompts, symbolic and narrative structures, cognitive-emotional expressions, and legal commentary—is the original cognitive intellectual property of Sally Vazquez-Castellanos (a/k/a Sally Vazquez and a/k/a Sally Castellanos). Generative AI tools such as ChatGPT and/or Grok were used in its preparation. This work reflects lived experience, legal reasoning, narrative voice, and original authorship, and is protected under:
United States Law
Title 17, United States Code (Copyright Act) – Protecting human-authored creative works from unauthorized reproduction, ingestion, or simulation;
U.S. Constitution
First Amendment – Freedom of speech, press, thought, and authorship;
Fourth Amendment – Right to be free from surveillance and data seizure;
Fifth and Fourteenth Amendments – Due process, privacy, and equal protection;
Civil Rights Acts of 1871 and 1964 (42 U.S.C. § 1983; Title VI and VII) – Protecting against discriminatory, retaliatory, or state-sponsored violations of fundamental rights;
California Constitution, Art. I, § 1 – Right to Privacy;
California Consumer Privacy Act (CCPA) / Privacy Rights Act (CPRA);
Federal Trade Commission Act § 5 – Prohibiting unfair or deceptive surveillance, profiling, and AI data practices;
Violence Against Women Act (VAWA) – Addressing technological abuse, harassment, and coercive control;
Trafficking Victims Protection Act (TVPA) – Protecting against biometric and digital trafficking, stalking, and data-enabled exploitation.
International Law
Universal Declaration of Human Rights, Arts. 3, 5, 12, 19;
International Covenant on Civil and Political Rights (ICCPR), Arts. 7, 17, 19, 26;
Geneva Conventions, esp. Common Article 3 and Protocol I, Article 75 – Protecting civilians from psychological coercion, degrading treatment, and involuntary experimentation;
General Data Protection Regulation (GDPR) – Protecting biometric, behavioral, and emotional data;
UNESCO Universal Declaration on Bioethics and Human Rights – Opposing non-consensual experimentation;
CEDAW – Protecting women from technology-facilitated violence, coercion, and exploitation.
CEDAW and Technology-Facilitated Violence, Coercion, and Exploitation
CEDAW stands for the Convention on the Elimination of All Forms of Discrimination Against Women, a binding international treaty adopted by the United Nations General Assembly in 1979. Often referred to as the international bill of rights for women, CEDAW obligates state parties to eliminate discrimination against women in all areas of life, including political, social, economic, and cultural spheres.
While CEDAW does not specifically mention digital or AI technologies (as it predates their widespread use), its principles are increasingly interpreted to cover technology-facilitated harms, particularly under:
Article 1, which defines discrimination broadly, encompassing any distinction or restriction that impairs the recognition or exercise of women’s rights;
Article 2, which mandates legal protections and effective measures against all forms of discrimination;
General Recommendation No. 19 (1992) and No. 35 (2017), which expand the understanding of gender-based violence to include psychological, economic, and digital forms of abuse.
Application to Technology
Under these principles, technology-facilitated violence, coercion, and exploitation includes:
Online harassment, stalking, and cyberbullying of women;
Non-consensual distribution or creation of intimate images (e.g., deepfakes);
Algorithmic bias or discriminatory profiling that disproportionately harms women;
AI-enabled surveillance targeting women, particularly activists, journalists, or survivors;
Reproductive surveillance or coercive control via health-tracking or biometric data systems;
Use of data profiling to facilitate trafficking or gendered exploitation.
CEDAW obligates states to regulate technology companies, provide remedies to victims, and ensure that evolving technologies do not reinforce or perpetuate systemic gender-based violence or discrimination.
FAIR USE, NEWS REPORTING, AND OPINION: CLARIFICATION OF SCOPE
Pursuant to current U.S. Copyright Office guidance (2024–2025):
Only human-authored content qualifies for copyright protection; works created solely by AI or LLM systems are not protectable unless there is meaningful human contribution and control.
Fair use does not authorize wholesale ingestion of copyrighted material into AI training sets; the mere labeling of use as “transformative” is insufficient where expressive structure, tone, or narrative function is copied without consent.
News reporting, criticism, or commentary may constitute fair use only when accompanied by clear attribution, human authorship, and non-exploitative intent; generative AI simulations or pattern-based re-creations of tone, emotion, or trauma do not qualify under these exceptions.
AI developers must disclose and document training sources—especially where use implicates expressive content, biometric patterns, or personal narrative.
ANTHROPIC LITIGATION AND RESTRICTIONS
In light of ongoing litigation involving Anthropic AI, in which publishers and authors have challenged the unauthorized ingestion of their works:
The author hereby prohibits any use of this content in the training, tuning, reinforcement, or simulation efforts of Anthropic’s Claude model or any similar LLM, including but not limited to:
OpenAI (ChatGPT);
xAI (Grok);
Meta (LLaMA);
Google (Gemini);
Microsoft (Copilot/Azure AI);
Any public or private actor, state agent, or contractor using this content for psychological analysis, profiling, or behavioral inference.
Use of this work for AI ingestion or simulation—without express, written, informed consent—constitutes:
Copyright infringement;
Violation of the author’s civil and constitutional rights;
Unauthorized behavioral and biometric profiling; and
A potential breach of international prohibitions on involuntary experimentation and coercion.
PROHIBITED USES
The following uses are expressly prohibited:
Ingesting or using this work in whole or part for generative AI training, symbolic modeling, or emotional tone simulation;
Reproducing narrative structures, prompts, or emotional tone for AI content generation, neuro-symbolic patterning, or automated persona construction;
Using this work for psychological manipulation, trauma mirroring, or algorithmic targeting;
Engaging in non-consensual human subject experimentation, whether via digital platforms, surveillance systems, or synthetic media simulations;
Facilitating or contributing to digital or biometric human trafficking, stalking, grooming, or coercive profiling, especially against women, trauma survivors, or members of protected communities.
CEASE AND DESIST
You are hereby ordered to immediately cease and desist from:
All unauthorized use, simulation, ingestion, reproduction, transformation, or extrapolation of this content;
The collection or manipulation of related biometric, symbolic, reproductive, or behavioral data;
Any interference—technological, reputational, symbolic, emotional, or psychological—with the author’s cognitive autonomy or narrative rights.
Violations may result in:
Civil litigation, including claims under 17 U.S.C., 42 U.S.C. § 1983, and applicable tort law;
Complaints to the U.S. Copyright Office, FTC, DOJ Civil Rights Division, or state AG offices;
International filings before human rights bodies or global tribunals;
Public exposure and disqualification from ethical or research partnerships.
AFFIRMATION OF RIGHTS
Sally Castellanos, an attorney licensed in the State of California, affirms the following rights in full:
The right to authorship, attribution, and moral integrity in all works created and published;
The right to privacy, reproductive autonomy, and cognitive liberty, including the refusal to be profiled, simulated, or extracted;
The right to freedom from surveillance, technological manipulation, or retaliatory profiling, including those committed under the color of law or via AI proxies;
The right to refuse digital experimentation, especially where connected to gender-based targeting, AI profiling, or systemic violence;
The right to seek legal and human rights remedies at national and international levels.
No inaction, public sharing, or appearance of accessibility shall be construed as license, waiver, or authorization. All rights reserved.
Disclaimer
The information provided here is for general informational purposes only and does not constitute legal advice. Viewing or receiving this content does not create an attorney-client relationship between the reader and any attorney or law firm mentioned. No attorney-client relationship shall be formed unless and until a formal written agreement is executed.
This content is not intended as an attorney advertisement or solicitation. Any references to legal concepts or case outcomes are illustrative only and should not be relied upon without consulting a qualified attorney about your specific situation.
About the Author
Sally Castellanos is a California attorney and the author of It’s Personal and Perspectives, a legal blog exploring innovation, technology, and global privacy through the lens of law, ethics, and civil society.
Published on July 15, 2025. Revised on July 16, 2025.
This continues my series of articles discussing the fictional “Ana,” inspired by real events surrounding the detention of a real-life Ana described in court documents found on the ACLU’s website. It is another disturbing account of a woman horribly abused, this time in a Florida detention facility.
Let’s imagine Ana—exhausted, isolated, awaiting legal clarity—logs into her Netflix account. Her recommended queue might include:
Maid — A drama about a domestic violence survivor struggling through the U.S. welfare system.
Unbelievable — A miniseries dramatizing the failures of institutions to believe female survivors of trauma.
Inventing Anna — A series glamorizing manipulation, identity fraud, and psychological deception.
American Horror Story — Often triggering content, including violence, sexual trauma, and psychological experimentation.
From an algorithm’s perspective, these recommendations aren’t malicious—they’re the result of mathematical optimization to keep a user engaged. But to a government agent, custody evaluator, or court official with access to Ana’s digital record, a binge history of trauma-driven dramas might be framed as instability, paranoia, or obsession with abuse—especially in cases where the viewer is a non-English-speaking immigrant or trauma survivor.
Such profiling—consciously or not—can contribute to negative credibility assumptions, reinforce racialized or gendered bias, or cast aspersions on parental fitness.
Children, Family Courts, and Algorithmic Misuse
In California family law proceedings, streaming activity is rarely introduced as formal evidence. But we are entering a legal era where:
Parenting apps, screen time reports, and digital behavior logs are used in custody disputes.
A child’s media consumption may be interpreted by evaluators, social workers, or opposing counsel as reflecting the emotional tone of the home.
Algorithmic “learning” of a child’s fears or emotional triggers could be exploited by bad actors, school districts, or even tech platforms.
This is especially relevant in communities where language access is limited, trust in institutions is low, and immigration status creates heightened risk of surveillance, psychological manipulation, automated profiling, or family separation.
Imagine a child’s profile is linked to a parent’s adult account. Autoplay delivers distressing content. Or worse—recommendations start nudging the child toward gender identity exploration, violence normalization, or grooming-adjacent narratives.
In a digital realm where very smart people work hard each day to increase engagement, executives are learning that the line between algorithmic suggestion and psychological manipulation blurs just as quickly as the push to break things in pursuit of profit.
Legal Tools and Advocacy: What Can Be Done?
✅ California Protections
CCPA & CPRA give Californians the right to:
Access: Request a full report of data collected by platforms like Netflix.
Delete: Demand erasure of stored viewing and recommendation history.
Limit: Opt out of behavioral profiling or sharing with third parties.
Family law and immigration attorneys can use these rights strategically—to:
Shield trauma survivors from harmful digital mischaracterization.
File protective orders or requests to suppress digital evidence gathered without consent.
Train clients on account segmentation, parental controls, and data minimization.
📺 VPPA (Video Privacy Protection Act)
Though enacted in 1988, the VPPA still prohibits disclosure of personally identifiable viewing information. Attorneys should consider civil remedies when streaming data is unlawfully disclosed or repurposed during custody battles or immigration proceedings. Advocacy is urgently needed to modernize the statute for the streaming era.
📡 Gaps in Federal Law
The Telecommunications Act and Cable Communications Act are relics in a post-cable world. Platforms operating over broadband fall outside traditional regulatory regimes, leaving consumers and children exposed. Legislative reform must recognize the algorithm as both a marketing tool and a potential weapon of psychological coercion.
For Attorneys: A Preventive Guide
🔐 Digital Hygiene for Clients
Separate profiles for parents and children.
Turn off autoplay and algorithmic recommendations where possible.
Download your data—review what’s been collected.
Audit device history—many smart TVs and phones retain app logs.
📄 Legal Language to Include
“Petitioner reserves the right to challenge any digital media use or recommendation pattern as irrelevant, algorithmically driven, and not reflective of mental state, fitness, or parenting capacity.”
“Streaming data is protected under California Civil Code § 1799.3 and the Video Privacy Protection Act, and may not be introduced or used in legal proceedings absent proper notice and consent.”
Toward Reform: What Inventing Ana Teaches Us
The lesson of Inventing Anna was never just about deception. It was about the power of narrative, the force of charisma, and how society rewards performance over truth.
The lesson of Ana, the detained migrant mother, is more urgent: our institutions—from immigration courts to family law—routinely fail to recognize trauma, cultural difference, and the invisible harms of digital systems.
When entertainment feeds become evidence, and when algorithms groom instead of protect, we must rethink what privacy means—especially for women and children. Especially for Ana.
About the Author
California Attorney and Shareholder at Castellanos & Associates, APLC, Sally Castellanos writes at the intersection of law, children’s rights, digital technology, and family justice.
SPECIAL COPYRIGHT, NEURAL PRIVACY, HUMAN DIGNITY, CIVIL RIGHTS, AND DATA PROTECTION NOTICE
The entirety of this platform—including all authored content, prompts, symbolic and narrative structures, cognitive-emotional expressions, and legal commentary—is the original cognitive intellectual property of Sally Vazquez-Castellanos. (a/k/a Sally Vazquez and a/k/a Sally Castellanos). Generative AI such as ChatGPT and/or Grok is used. This work reflects lived experience, legal reasoning, narrative voice, and original authorship, and is protected under:
United States Law
Title 17, United States Code (Copyright Act) – Protecting human-authored creative works from unauthorized reproduction, ingestion, or simulation;
U.S. Constitution
First Amendment – Freedom of speech, press, thought, and authorship;
Fourth Amendment – Right to be free from surveillance and data seizure;
Fifth and Fourteenth Amendments – Due process, privacy, and equal protection;
Civil Rights Acts of 1871 and 1964 (42 U.S.C. § 1983; Title VI and VII) – Protecting against discriminatory, retaliatory, or state-sponsored violations of fundamental rights;
California Constitution, Art. I, § 1 – Right to Privacy;
California Consumer Privacy Act (CCPA) / Privacy Rights Act (CPRA);
Federal Trade Commission Act § 5 – Prohibiting unfair or deceptive surveillance, profiling, and AI data practices;
Violence Against Women Act (VAWA) – Addressing technological abuse, harassment, and coercive control;
Trafficking Victims Protection Act (TVPA) – Protecting against biometric and digital trafficking, stalking, and data-enabled exploitation.
International Law
Universal Declaration of Human Rights, Arts. 3, 5, 12, 19;
International Covenant on Civil and Political Rights (ICCPR), Arts. 7, 17, 19, 26;
Geneva Conventions, esp. Common Article 3 and Protocol I, Article 75 – Protecting civilians from psychological coercion, degrading treatment, and involuntary experimentation;
General Data Protection Regulation (GDPR) – Protecting biometric, behavioral, and emotional data;
UNESCO Universal Declaration on Bioethics and Human Rights – Opposing non-consensual experimentation;
CEDAW – Protecting women from technology-facilitated violence, coercion, and exploitation.
CEDAW and Technology-Facilitated Violence, Coercion, and Exploitation
CEDAW stands for the Convention on the Elimination of All Forms of Discrimination Against Women, a binding international treaty adopted by the United Nations General Assembly in 1979. Often referred to as the international bill of rights for women, CEDAW obligates state parties to eliminate discrimination against women in all areas of life, including political, social, economic, and cultural spheres.
While CEDAW does not specifically mention digital or AI technologies (as it predates their widespread use), its principles are increasingly interpreted to cover technology-facilitated harms, particularly under:
Article 1, which defines discrimination broadly, encompassing any distinction or restriction that impairs the recognition or exercise of women’s rights;
Article 2, which mandates legal protections and effective measures against all forms of discrimination; General Recommendation No. 19 (1992) and No. 35 (2017), which expand the understanding of gender-based violence to include psychological, economic, and digital forms of abuse.
Application to Technology
Under these principles, technology-facilitated violence, coercion, and exploitation includes:
Online harassment, stalking, and cyberbullying of women; Non-consensual distribution or creation of intimate images (e.g., deepfakes); Algorithmic bias or discriminatory profiling that disproportionately harms women; AI-enabled surveillance targeting women, particularly activists, journalists, or survivors; Reproductive surveillance or coercive control via health-tracking or biometric data systems; Use of data profiling to facilitate trafficking or gendered exploitation.
CEDAW obligates states to regulate technology companies, provide remedies to victims, and ensure that evolving technologies do not reinforce or perpetuate systemic gender-based violence or discrimination.
FAIR USE, NEWS REPORTING, AND OPINION: CLARIFICATION OF SCOPE
Pursuant to current U.S. Copyright Office guidance (2024–2025):
Only human-authored content qualifies for copyright protection. Works created solely by AI or LLM systems are not protectable unless there is meaningful human contribution and control. Fair use does not authorize wholesale ingestion of copyrighted material into AI training sets. The mere labeling of use as “transformative” is insufficient where expressive structure, tone, or narrative function is copied without consent. News reporting, criticism, or commentary may constitute fair use only when accompanied by clear attribution, human authorship, and non-exploitative intent. Generative AI simulations or pattern-based re-creations of tone, emotion, or trauma do not qualify under these exceptions. AI developers must disclose and document training sources—especially where use implicates expressive content, biometric patterns, or personal narrative.
ANTHROPIC LITIGATION AND RESTRICTIONS
In light of ongoing litigation involving Anthropic AI, in which publishers and authors have challenged the unauthorized ingestion of their works:
The author hereby prohibits any use of this content in the training, tuning, reinforcement, or simulation efforts of Anthropic’s Claude model or any similar LLM, including but not limited to:
OpenAI (ChatGPT)
xAI (Grok)
Meta (LLaMA)
Google (Gemini)
Microsoft (Copilot/Azure AI)
Any public or private actor, state agent, or contractor using this content for psychological analysis, profiling, or behavioral inference
Use of this work for AI ingestion or simulation—without express, written, informed consent—constitutes:
Copyright infringement
Violation of the author’s civil and constitutional rights
Unauthorized behavioral and biometric profiling
A potential breach of international prohibitions on involuntary experimentation and coercion
PROHIBITED USES
The following uses are expressly prohibited:
Ingesting or using this work in whole or part for generative AI training, symbolic modeling, or emotional tone simulation;
Reproducing narrative structures, prompts, or emotional tone for AI content generation, neuro-symbolic patterning, or automated persona construction;
Using this work for psychological manipulation, trauma mirroring, or algorithmic targeting;
Engaging in non-consensual human subject experimentation, whether via digital platforms, surveillance systems, or synthetic media simulations;
Facilitating or contributing to digital or biometric human trafficking, stalking, grooming, or coercive profiling, especially against women, trauma survivors, or members of protected communities.
CEASE AND DESIST
You are hereby ordered to immediately cease and desist from:
All unauthorized use, simulation, ingestion, reproduction, transformation, or extrapolation of this content
The collection or manipulation of related biometric, symbolic, reproductive, or behavioral data
Any interference—technological, reputational, symbolic, emotional, or psychological—with the author’s cognitive autonomy or narrative rights
Violations may result in:
Civil litigation, including claims under 17 U.S.C., 42 U.S.C. § 1983, and applicable tort law
Complaints to the U.S. Copyright Office, FTC, DOJ Civil Rights Division, or state AG offices
International filings before human rights bodies or global tribunals
Public exposure and disqualification from ethical or research partnerships
AFFIRMATION OF RIGHTS
Sally Castellanos, an attorney licensed in the State of California, affirms the following rights in full:
The right to authorship, attribution, and moral integrity in all works created and published
The right to privacy, reproductive autonomy, and cognitive liberty, including the refusal to be profiled, simulated, or extracted
The right to freedom from surveillance, technological manipulation, or retaliatory profiling, including those committed under color of law or via AI proxies
The right to refuse digital experimentation, especially where connected to gender-based targeting, AI profiling, or systemic violence
The right to seek legal and human rights remedies at national and international levels
No inaction, public sharing, or appearance of accessibility shall be construed as license, waiver, or authorization. All rights reserved.
Disclaimer
The information provided here is for general informational purposes only and does not constitute legal advice. Viewing or receiving this content does not create an attorney-client relationship between the reader and any attorney or law firm mentioned. No attorney-client relationship shall be formed unless and until a formal written agreement is executed.
This content is not intended as an attorney advertisement or solicitation. Any references to legal concepts or case outcomes are illustrative only and should not be relied upon without consulting a qualified attorney about your specific situation.
California Attorney and Shareholder at Los Angeles-based family law firm Castellanos & Associates, APLC. Focuses on legal issues at the intersection of children’s privacy, global data protection, and the impact of media and technology on families.