Category: Los Angeles Divorce Lawyers

  • Inventing Ana: How Streaming Algorithms Enable Psychological Grooming and Threaten the Rights of Children


    By Sally Ann Vazquez-Castellanos, Esq.

Published on July 15, 2025. Revised on July 16, 2025.

    Children’s Rights, Behavioral Profiling, and the Law

    “What happens when an algorithm learns your trauma before you speak it aloud?”

    “And what if it uses that knowledge—not to heal—but to shape, manipulate, harass, or punish you?”

— ChatGPT

In 2024, the ACLU filed a harrowing civil rights complaint detailing the abuse of a Spanish-speaking migrant mother—pseudonymously referred to as Ana—held in solitary confinement for weeks at a Florida ICE detention facility. The story of Ana, a survivor of trafficking and domestic violence, reveals not only systemic failures in our immigration system, but also how trauma can be misunderstood, exploited, or even digitally profiled by the very systems that surround us in our private lives.

    Now consider another Ana—the fictional “Anna Delvey” of Netflix’s Inventing Anna—a dramatized grifter portrayed as cunning, glamorous, and psychologically manipulative. What unites these two women isn’t criminality or deception—it’s the machinery behind them: psychological manipulation, profiling, and the dangerous power of misread narratives.

    In this article, we explore how streaming platforms like Netflix, when combined with automated profiling tools used by law enforcement or government agencies, can function as vehicles for psychological grooming, behavioral targeting, and even family separation.

    We ask: what does your “feed” say about you? And how might these digital breadcrumbs be used—especially against women and children in moments of legal, emotional, or immigration vulnerability?

    Inventing Ana: Streaming, Psychological Manipulation, and Storytelling as a Weapon

    Netflix’s Inventing Anna is more than a TV drama—it is an algorithmically optimized vehicle designed to hold attention, provoke emotional reaction, and amplify morally ambiguous narratives. But for viewers like Ana—individuals navigating real trauma—these dramatizations can blur into indoctrination.

    Netflix’s recommendation engine uses machine learning (ML) to:

    Track emotional patterns through binge behavior.

    Infer psychological states (e.g., depression, isolation).

    Build predictive profiles for personalized content delivery.

    This becomes especially troubling when:

Trauma survivors, minors, migrants, or other vulnerable individuals rely on streaming platforms as emotional lifelines.

The content reinforces distress, manipulates emotional states, or echoes lived abuse.

Law enforcement or third parties gain access to these profiles via subpoenas, data brokers, or government contracts.

What begins as entertainment may end in exposure and objectification.

    Profiling Children, Grooming, and Vulnerability

    Children are particularly susceptible to algorithmic manipulation.

Recommendation loops can push violent, sexualized, or identity-influencing content.

COPPA (the Children’s Online Privacy Protection Act) protects only children under 13, with limited enforcement.

Netflix, while not designed for children without explicit parental controls, collects usage data even under child profiles.

    Psychological grooming—typically understood in the context of abusers gaining a child’s trust—can now be digitized.

Platforms “learn” a child’s fears, interests, and emotional triggers.

Recommendations can nudge behavior over time—toward specific identities, beliefs, or emotional responses.

In immigration or custody proceedings, this data can become evidence of “instability,” “obsession,” “unfitness” or “unsuitability,” especially for vulnerable or non-English-speaking parents.

    Legal Landscape: The Telecommunications and Streaming Privacy Gap

    Despite the profound implications, federal and state laws have not kept pace:

The Video Privacy Protection Act (VPPA) prohibits unauthorized disclosure of viewing history, but it was drafted in 1988—long before algorithmic profiling or streaming dominance.

The California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) provide stronger consumer control, allowing Californians to access, delete, or limit the use of their viewing data.

The Cable Communications Act and the Telecommunications Act do not fully cover streaming services operating over the internet.

These gaps matter. For Ana—or any immigrant or vulnerable mother—watching trauma-themed content on Netflix during a custody proceeding might silently build a profile that shapes how she is treated, judged, or even punished.

In the next article, we will explore:

    How attorneys can protect clients’ digital identities in family and immigration proceedings.

    A sample feed profile for “Ana”—as seen by Netflix.

    Practical tools to request, review, or delete streaming data under California law.

    Proposed reforms to the VPPA and CCPA that reflect the emerging dangers of algorithmic profiling.

If you need assistance, always engage with law enforcement and/or qualified legal counsel. I also strongly recommend learning how to report unusual activity in a meaningful and credible way on any social media platform or online community you choose to join.

Always remember that an online community is much like the community outside your front door. There may be consequences not only for your behavior but also for any accusations you make. Engaging with counsel, counselors, and/or an advocate may be necessary.

    Important Phone Numbers

    National Center for Missing & Exploited Children – 1-800-843-5678.

    The National Human Trafficking Hotline – 1-888-373-7888.

    U.S. Department of Homeland Security – 1-866-347-2423.

    SPECIAL COPYRIGHT, NEURAL PRIVACY, HUMAN DIGNITY, CIVIL RIGHTS, AND DATA PROTECTION NOTICE

    © 2025 Sally Castellanos. All Rights Reserved.

    Neural Privacy and Cognitive Liberty

The entirety of this platform—including all authored content, prompts, symbolic and narrative structures, cognitive-emotional expressions, and legal commentary—is the original cognitive intellectual property of Sally Vazquez-Castellanos (a/k/a Sally Vazquez and a/k/a Sally Castellanos). Generative AI tools such as ChatGPT and Grok were used in its preparation. This work reflects lived experience, legal reasoning, narrative voice, and original authorship, and is protected under:

    United States Law

    Title 17, United States Code (Copyright Act) – Protecting human-authored creative works from unauthorized reproduction, ingestion, or simulation;

    U.S. Constitution

    First Amendment – Freedom of speech, press, thought, and authorship; 

    Fourth Amendment – Right to be free from surveillance and data seizure; 

    Fifth and Fourteenth Amendments – Due process, privacy, and equal protection; 

    Civil Rights Acts of 1871 and 1964 (42 U.S.C. § 1983; Title VI and VII) – Protecting against discriminatory, retaliatory, or state-sponsored violations of fundamental rights; 

    California Constitution, Art. I, § 1 – Right to Privacy; 

    California Consumer Privacy Act (CCPA) / Privacy Rights Act (CPRA); 

    Federal Trade Commission Act § 5 – Prohibiting unfair or deceptive surveillance, profiling, and AI data practices; 

    Violence Against Women Act (VAWA) – Addressing technological abuse, harassment, and coercive control; 

    Trafficking Victims Protection Act (TVPA) – Protecting against biometric and digital trafficking, stalking, and data-enabled exploitation.

    International Law

    Universal Declaration of Human Rights, Arts. 3, 5, 12, 19; 

    International Covenant on Civil and Political Rights (ICCPR), Arts. 7, 17, 19, 26; 

    Geneva Conventions, esp. Common Article 3 and Protocol I, Article 75 – Protecting civilians from psychological coercion, degrading treatment, and involuntary experimentation; 

    General Data Protection Regulation (GDPR) – Protecting biometric, behavioral, and emotional data; 

    UNESCO Universal Declaration on Bioethics and Human Rights – Opposing non-consensual experimentation; 

    CEDAW – Protecting women from technology-facilitated violence, coercion, and exploitation.

    CEDAW and Technology-Facilitated Violence, Coercion, and Exploitation

    CEDAW stands for the Convention on the Elimination of All Forms of Discrimination Against Women, a binding international treaty adopted by the United Nations General Assembly in 1979. Often referred to as the international bill of rights for women, CEDAW obligates state parties to eliminate discrimination against women in all areas of life, including political, social, economic, and cultural spheres.

    While CEDAW does not specifically mention digital or AI technologies (as it predates their widespread use), its principles are increasingly interpreted to cover technology-facilitated harms, particularly under:

Article 1, which defines discrimination broadly, encompassing any distinction or restriction that impairs the recognition or exercise of women’s rights;

Article 2, which mandates legal protections and effective measures against all forms of discrimination;

General Recommendation No. 19 (1992) and No. 35 (2017), which expand the understanding of gender-based violence to include psychological, economic, and digital forms of abuse.

    Application to Technology

    Under these principles, technology-facilitated violence, coercion, and exploitation includes:

Online harassment, stalking, and cyberbullying of women;

Non-consensual distribution or creation of intimate images (e.g., deepfakes);

Algorithmic bias or discriminatory profiling that disproportionately harms women;

AI-enabled surveillance targeting women, particularly activists, journalists, or survivors;

Reproductive surveillance or coercive control via health-tracking or biometric data systems;

Use of data profiling to facilitate trafficking or gendered exploitation.

    CEDAW obligates states to regulate technology companies, provide remedies to victims, and ensure that evolving technologies do not reinforce or perpetuate systemic gender-based violence or discrimination.

    FAIR USE, NEWS REPORTING, AND OPINION: CLARIFICATION OF SCOPE

    Pursuant to current U.S. Copyright Office guidance (2024–2025):

Only human-authored content qualifies for copyright protection. Works created solely by AI or LLM systems are not protectable unless there is meaningful human contribution and control.

Fair use does not authorize wholesale ingestion of copyrighted material into AI training sets. The mere labeling of use as “transformative” is insufficient where expressive structure, tone, or narrative function is copied without consent.

News reporting, criticism, or commentary may constitute fair use only when accompanied by clear attribution, human authorship, and non-exploitative intent. Generative AI simulations or pattern-based re-creations of tone, emotion, or trauma do not qualify under these exceptions.

AI developers must disclose and document training sources—especially where use implicates expressive content, biometric patterns, or personal narrative.

    ANTHROPIC LITIGATION AND RESTRICTIONS

In light of ongoing litigation involving Anthropic, in which publishers and authors have challenged the unauthorized ingestion of their works:

The author hereby prohibits any use of this content in the training, tuning, reinforcement, or simulation efforts of Anthropic’s Claude model or any similar LLM, including but not limited to:

OpenAI (ChatGPT);

xAI (Grok);

Meta (LLaMA);

Google (Gemini);

Microsoft (Copilot/Azure AI);

Any public or private actor, state agent, or contractor using this content for psychological analysis, profiling, or behavioral inference.

    Use of this work for AI ingestion or simulation—without express, written, informed consent—constitutes:

Copyright infringement;

Violation of the author’s civil and constitutional rights;

Unauthorized behavioral and biometric profiling; and

A potential breach of international prohibitions on involuntary experimentation and coercion.

    PROHIBITED USES

    The following uses are expressly prohibited:

    Ingesting or using this work in whole or part for generative AI training, symbolic modeling, or emotional tone simulation; 

    Reproducing narrative structures, prompts, or emotional tone for AI content generation, neuro-symbolic patterning, or automated persona construction; 

    Using this work for psychological manipulation, trauma mirroring, or algorithmic targeting; 

    Engaging in non-consensual human subject experimentation, whether via digital platforms, surveillance systems, or synthetic media simulations; 

    Facilitating or contributing to digital or biometric human trafficking, stalking, grooming, or coercive profiling, especially against women, trauma survivors, or members of protected communities.

    CEASE AND DESIST

    You are hereby ordered to immediately cease and desist from:

All unauthorized use, simulation, ingestion, reproduction, transformation, or extrapolation of this content;

The collection or manipulation of related biometric, symbolic, reproductive, or behavioral data;

Any interference—technological, reputational, symbolic, emotional, or psychological—with the author’s cognitive autonomy or narrative rights.

    Violations may result in:

Civil litigation, including claims under 17 U.S.C., 42 U.S.C. § 1983, and applicable tort law;

Complaints to the U.S. Copyright Office, FTC, DOJ Civil Rights Division, or state AG offices;

International filings before human rights bodies or global tribunals;

Public exposure and disqualification from ethical or research partnerships.

    AFFIRMATION OF RIGHTS

    Sally Castellanos, an attorney licensed in the State of California, affirms the following rights in full:

The right to authorship, attribution, and moral integrity in all works created and published;

The right to privacy, reproductive autonomy, and cognitive liberty, including the refusal to be profiled, simulated, or extracted;

The right to freedom from surveillance, technological manipulation, or retaliatory profiling, including those committed under the color of law or via AI proxies;

The right to refuse digital experimentation, especially where connected to gender-based targeting, AI profiling, or systemic violence;

The right to seek legal and human rights remedies at national and international levels.

    No inaction, public sharing, or appearance of accessibility shall be construed as license, waiver, or authorization. All rights reserved.

    Disclaimer

    The information provided here is for general informational purposes only and does not constitute legal advice. Viewing or receiving this content does not create an attorney-client relationship between the reader and any attorney or law firm mentioned. No attorney-client relationship shall be formed unless and until a formal written agreement is executed.

    This content is not intended as an attorney advertisement or solicitation. Any references to legal concepts or case outcomes are illustrative only and should not be relied upon without consulting a qualified attorney about your specific situation. 

    About the Author

Sally Castellanos is a California attorney and the author of It’s Personal and Perspectives, a legal blog exploring innovation, technology, and global privacy through the lens of law, ethics, and civil society.

  • A Fictional Feed, Algorithmic Manipulation and What Netflix Might “See” in Ana

    By Sally Ann Vazquez-Castellanos, Esq.

    Published on July 15, 2025. Revised on July 16, 2025.

This continues my series of articles discussing the fictional “Ana,” inspired by real events surrounding the detention of a real-life Ana described in court documents available on the ACLU’s website. It is another disturbing account of a woman horribly abused, this time in a Florida detention facility.

    Let’s imagine Ana—exhausted, isolated, awaiting legal clarity—logs into her Netflix account. Her recommended queue might include:

    Maid — A drama about a domestic violence survivor struggling through the U.S. welfare system.

    Unbelievable — A miniseries dramatizing the failures of institutions to believe female survivors of trauma.

    Inventing Anna — A series glamorizing manipulation, identity fraud, and psychological deception.

    American Horror Story — Often triggering content, including violence, sexual trauma, and psychological experimentation.

    From an algorithm’s perspective, these recommendations aren’t malicious—they’re the result of mathematical optimization to keep a user engaged. But to a government agent, custody evaluator, or court official with access to Ana’s digital record, a binge history of trauma-driven dramas might be framed as instability, paranoia, or obsession with abuse—especially in cases where the viewer is a non-English-speaking immigrant or trauma survivor.
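To see how crude such an inference can be, here is a deliberately naive, hypothetical scoring heuristic applied to the queue above. The title list and the scoring rule are invented for illustration; this is not any actual evaluator tool or platform API.

```python
# Hypothetical illustration: a context-free "risk" heuristic that a
# third party could naively apply to a subpoenaed watch history.
TRAUMA_THEMED = {"Maid", "Unbelievable", "Inventing Anna",
                 "American Horror Story"}

def naive_instability_score(watch_history):
    """Fraction of views matching trauma-themed titles. Note what the
    heuristic cannot see: autoplay, a shared account, or a survivor
    processing her own experience."""
    if not watch_history:
        return 0.0
    hits = sum(1 for title in watch_history if title in TRAUMA_THEMED)
    return hits / len(watch_history)

# Ana's hypothetical queue scores as 100% "trauma-themed" -- the same
# output whether she chose each title or autoplay chose it for her.
score = naive_instability_score(
    ["Maid", "Unbelievable", "Inventing Anna", "American Horror Story"]
)
print(score)  # 1.0
```

The sketch shows why such a number, presented without context, tells a court nothing reliable about a viewer’s mental state.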

    Such profiling—consciously or not—can contribute to negative credibility assumptions, reinforce racialized or gendered bias, or cast aspersions on parental fitness.

    Children, Family Courts, and Algorithmic Misuse

    In California family law proceedings, streaming activity is rarely introduced as formal evidence. But we are entering a legal era where:

Parenting apps, screen time reports, and digital behavior logs are used in custody disputes.

A child’s media consumption may be interpreted by evaluators, social workers, or opposing counsel as reflecting the emotional tone of the home.

    Algorithmic “learning” of a child’s fears or emotional triggers could be exploited by bad actors, school districts, or even tech platforms.

This is especially relevant in communities where language access is limited, trust in institutions is low, and immigration status creates heightened risk of surveillance, psychological manipulation, automated profiling, or family separation.

    Imagine a child’s profile is linked to a parent’s adult account. Autoplay delivers distressing content. Or worse—recommendations start nudging the child toward gender identity exploration, violence normalization, or grooming-adjacent narratives.

In a digital realm where very smart people work hard each day to increase engagement, executives are learning that the line between algorithmic suggestion and psychological manipulation blurs as quickly as platforms break things to maximize profit.

    Legal Tools and Advocacy: What Can Be Done?

    ✅ California Protections

    CCPA & CPRA give Californians the right to:

    Access: Request a full report of data collected by platforms like Netflix.

    Delete: Demand erasure of stored viewing and recommendation history.

    Limit: Opt out of behavioral profiling or sharing with third parties.

    Family law and immigration attorneys can use these rights strategically—to:

    Shield trauma survivors from harmful digital mischaracterization.

    File protective orders or requests to suppress digital evidence gathered without consent.

    Train clients on account segmentation, parental controls, and data minimization.

    📺 VPPA (Video Privacy Protection Act)

Although enacted in 1988, the VPPA still prohibits disclosure of personally identifiable viewing information. Attorneys should consider civil remedies when streaming data is unlawfully disclosed or repurposed during custody battles or immigration proceedings. Advocacy is urgently needed to modernize the statute for the streaming era.

    📡 Gaps in Federal Law

    The Telecommunications Act and Cable Communications Act are relics in a post-cable world. Platforms operating over broadband fall outside traditional regulatory regimes, leaving consumers and children exposed. Legislative reform must recognize the algorithm as both a marketing tool and a potential weapon of psychological coercion.

    For Attorneys: A Preventive Guide

    🔐 Digital Hygiene for Clients

    Separate profiles for parents and children.

    Turn off autoplay and algorithmic recommendations where possible.

    Download your data—review what’s been collected.

    Audit device history—many smart TVs and phones retain app logs.

    📄 Legal Language to Include

    “Petitioner reserves the right to challenge any digital media use or recommendation pattern as irrelevant, algorithmically driven, and not reflective of mental state, fitness, or parenting capacity.”

    “Streaming data is protected under California Civil Code § 1799.3 and the Video Privacy Protection Act, and may not be introduced or used in legal proceedings absent proper notice and consent.”

    Toward Reform: What Inventing Ana Teaches Us

    The lesson of Inventing Anna was never just about deception. It was about the power of narrative, the force of charisma, and how society rewards performance over truth.

    The lesson of Ana, the detained migrant mother, is more urgent: our institutions—from immigration courts to family law—routinely fail to recognize trauma, cultural difference, and the invisible harms of digital systems.

    When entertainment feeds become evidence, and when algorithms groom instead of protect, we must rethink what privacy means—especially for women and children. Especially for Ana.

    About the Author

Sally Castellanos, a California attorney and shareholder at the Los Angeles-based family law firm Castellanos & Associates, APLC, writes at the intersection of law, children’s rights, digital technology, and family justice.



  • When Family Law and Immigration Collide: Ana’s Story and the Criminalization of Motherhood


    By Sally Ann Vazquez-Castellanos

    Published on July 15, 2025. Revised on September 23, 2025.

    What happens when a moment of maternal care becomes a criminal act? For Ana, an immigrant mother in Florida and survivor of domestic violence, taking her U.S. citizen son out for ice cream—outside the court’s supervised visitation schedule—resulted in her prosecution, separation from her child, and eventual detention by federal immigration authorities.

Ana’s story is the subject of a powerful case study published by the ACLU of Florida, which documents how Florida’s family and criminal legal systems intersect with federal immigration enforcement to disproportionately punish immigrant women and mothers of color. Ana was charged under Florida Statutes § 787.03 with Interference with Custody; her brief unsupervised outing with her son triggered a cascade of punitive actions, including solitary confinement and prolonged detention by U.S. Immigration and Customs Enforcement (ICE).

    The case illustrates how local courts and ICE collaborate in ways that can override a parent’s best intentions, escalate family disputes into criminal matters, and ignore the trauma histories of survivors. As the ACLU explains, Ana’s experience is not an isolated incident—it reflects a broader national pattern:

    “The criminalization of immigrant parents—particularly mothers—results in unjust prosecutions, long-term separation from children, and due process violations that undermine the integrity of both family and immigration systems.”

    — ACLU of Florida, Civil Rights & Civil Liberties Report

    The report raises urgent legal and human rights questions such as:

    Are immigrant parents being punished for trying to maintain a bond with their children?

    What safeguards exist when custody orders intersect with criminal statutes and immigration enforcement?

    How can legal systems account for trauma, survival, and cultural context in family law proceedings?

    Please visit the ACLU website to learn more about Ana’s story.

    ACLU of Florida, “Ana’s Story: When Family Law and Immigration Enforcement Collide” (2024), available at:

    👉 https://www.aclufl.org/sites/default/files/field_documents/anas_crcl_final_version.pdf

This case, and others like it, demands not only empathy but also legal reform. Custody disputes should not be criminalized, especially when immigrant families are already navigating systems stacked against them.

    I would like to thank the American Civil Liberties Union (ACLU) (Florida) and Robert F. Kennedy Human Rights.

    Legal Disclaimer

    This blog post is for informational purposes only and does not constitute legal advice. Reading this article does not create an attorney-client relationship. For advice about your specific legal matter, please consult a qualified attorney.

    California Attorney and Shareholder at Los Angeles-based family law firm Castellanos & Associates, APLC. Focuses on legal issues at the intersection of children’s privacy, global data protection, and the impact of media and technology on families.

    SPECIAL COPYRIGHT, NEURAL PRIVACY, HUMAN DIGNITY, CIVIL RIGHTS, AND DATA PROTECTION NOTICE

    © 2025 Sally Castellanos. All Rights Reserved.

    Neural Privacy and Cognitive Liberty

The entirety of this platform—including all authored content, prompts, symbolic and narrative structures, cognitive-emotional expressions, and legal commentary—is the original cognitive intellectual property of Sally Vazquez-Castellanos (a/k/a Sally Vazquez and Sally Castellanos).

Generative AI tools such as ChatGPT and/or Grok are used in its preparation. This work reflects lived experience, legal reasoning, narrative voice, and original authorship, and is protected under:

    United States Law

    Title 17, United States Code (Copyright Act) – Protecting human-authored creative works from unauthorized reproduction or simulation; 

    U.S. Constitution

    First Amendment – Freedom of speech, press, thought, and authorship; 

    Fourth Amendment – Right to be free from surveillance and data seizure; 

    Fifth and Fourteenth Amendments – Due process, privacy, and equal protection; 

    Civil Rights Acts of 1871 and 1964 (42 U.S.C. § 1983; Title VI and VII) – Protecting against discriminatory, retaliatory, or state-sponsored violations of fundamental rights; 

    California Constitution, Art. I, § 1 – Right to Privacy; 

    California Consumer Privacy Act (CCPA) / Privacy Rights Act (CPRA); 

    Federal Trade Commission Act § 5 – Prohibiting unfair or deceptive surveillance, profiling, and AI data practices; 

    Violence Against Women Act (VAWA) – Addressing technological abuse, harassment, and coercive control; 

    Trafficking Victims Protection Act (TVPA) – Protecting against biometric and digital trafficking, stalking, and data-enabled exploitation.

    International Law

    Universal Declaration of Human Rights, Arts. 3, 5, 12, 19; 

    International Covenant on Civil and Political Rights (ICCPR), Arts. 7, 17, 19, 26; 

    Geneva Conventions, esp. Common Article 3 and Protocol I, Article 75 – Protecting civilians from psychological coercion, degrading treatment, and involuntary experimentation; 

    General Data Protection Regulation (GDPR) – Protecting biometric, behavioral, and emotional data; 

    UNESCO Universal Declaration on Bioethics and Human Rights – Opposing non-consensual experimentation; 

    CEDAW – Protecting women from technology-facilitated violence, coercion, and exploitation.

    CEDAW and Technology-Facilitated Violence, Coercion, and Exploitation

    CEDAW stands for the Convention on the Elimination of All Forms of Discrimination Against Women, a binding international treaty adopted by the United Nations General Assembly in 1979. Often referred to as the international bill of rights for women, CEDAW obligates state parties to eliminate discrimination against women in all areas of life, including political, social, economic, and cultural spheres.

    While CEDAW does not specifically mention digital or AI technologies (as it predates their widespread use), its principles are increasingly interpreted to cover technology-facilitated harms, particularly under:

    Article 1, which defines discrimination broadly, encompassing any distinction or restriction that impairs the recognition or exercise of women’s rights;

Article 2, which mandates legal protections and effective measures against all forms of discrimination; and

General Recommendations No. 19 (1992) and No. 35 (2017), which expand the understanding of gender-based violence to include psychological, economic, and digital forms of abuse.

    Application to Technology

    Under these principles, technology-facilitated violence, coercion, and exploitation includes:

    Online harassment, stalking, and cyberbullying of women; Non-consensual distribution or creation of intimate images (e.g., deepfakes); Algorithmic bias or discriminatory profiling that disproportionately harms women; AI-enabled surveillance targeting women, particularly activists, journalists, or survivors; Reproductive surveillance or coercive control via health-tracking or biometric data systems; Use of data profiling to facilitate trafficking or gendered exploitation.

    CEDAW obligates states to regulate technology companies, provide remedies to victims, and ensure that evolving technologies do not reinforce or perpetuate systemic gender-based violence or discrimination.

    FAIR USE, NEWS REPORTING, AND OPINION: CLARIFICATION OF SCOPE

    Pursuant to current U.S. Copyright Office guidance (2024–2025):

    Only human-authored content qualifies for copyright protection. Works created solely by AI or LLM systems are not protectable unless there is meaningful human contribution and control. Fair use does not authorize wholesale ingestion of copyrighted material into AI training sets. The mere labeling of use as “transformative” is insufficient where expressive structure, tone, or narrative function is copied without consent. News reporting, criticism, or commentary may constitute fair use only when accompanied by clear attribution, human authorship, and non-exploitative intent. Generative AI simulations or pattern-based re-creations of tone, emotion, or trauma do not qualify under these exceptions. AI developers must disclose and document training sources—especially where use implicates expressive content, biometric patterns, or personal narrative.

    ANTHROPIC LITIGATION AND RESTRICTIONS

    In light of ongoing litigation involving Anthropic AI, in which publishers and authors have challenged the unauthorized ingestion of their works:

    The author hereby prohibits any use of this content in the training, tuning, reinforcement, or simulation efforts of Anthropic’s Claude model or any similar LLM, including but not limited to: OpenAI (ChatGPT); xAI (Grok); Meta (LLaMA); Google (Gemini); Microsoft (Copilot/Azure AI); Any public or private actor, state agent, or contractor using this content for psychological analysis, profiling, or behavioral inference.

    Use of this work for AI ingestion or simulation—without express, written, informed consent—constitutes:

    Copyright infringement, Violation of the author’s civil and constitutional rights, Unauthorized behavioral and biometric profiling, and A potential breach of international prohibitions on involuntary experimentation and coercion.

    PROHIBITED USES

    The following uses are expressly prohibited:

    Ingesting or using this work in whole or part for generative AI training, symbolic modeling, or emotional tone simulation; 

    Reproducing narrative structures, prompts, or emotional tone for AI content generation, neuro-symbolic patterning, or automated persona construction; 

    Using this work for psychological manipulation, trauma mirroring, or algorithmic targeting; 

    Engaging in non-consensual human subject experimentation, whether via digital platforms, surveillance systems, or synthetic media simulations; 

    Facilitating or contributing to digital or biometric human trafficking, stalking, grooming, or coercive profiling, especially against women, trauma survivors, or members of protected communities.

    CEASE AND DESIST

    You are hereby ordered to immediately cease and desist from:

    All unauthorized use, simulation, reproduction, transformation, or extrapolation of this content; The collection or manipulation of related biometric, symbolic, reproductive, or behavioral data; Any interference—technological, reputational, symbolic, emotional, or psychological—with the author’s cognitive autonomy or narrative rights.

    Violations may result in:

    Civil litigation, including claims under 17 U.S.C., 42 U.S.C. § 1983, and applicable tort law; Complaints to the U.S. Copyright Office, FTC, DOJ Civil Rights Division, or state AG offices; International filings before human rights bodies or global tribunals; Public exposure and disqualification from ethical or research partnerships.

    AFFIRMATION OF RIGHTS

    Sally Castellanos, an attorney licensed in the State of California, affirms the following rights in full:

    The right to authorship, attribution, and moral integrity in all works created and published; The right to privacy, reproductive autonomy, and cognitive liberty, including the refusal to be profiled, simulated, or extracted; The right to freedom from surveillance, technological manipulation, or retaliatory profiling, including those committed under the color of law or via AI proxies; The right to refuse digital experimentation, especially where connected to gender-based targeting, AI profiling, or systemic violence; The right to seek legal and human rights remedies at national and international levels.

    No inaction, public sharing, or appearance of accessibility shall be construed as license, waiver, or authorization. All rights are reserved.

    Disclaimer

    The information provided here is for general informational purposes only and does not constitute legal advice. Viewing or receiving this content does not create an attorney-client relationship between the reader and any attorney or law firm mentioned. No attorney-client relationship shall be formed unless and until a formal written agreement is executed.

    This content is not intended as an attorney advertisement or solicitation. Any references to legal concepts or case outcomes are illustrative only and should not be relied upon without consulting a qualified attorney about your specific situation.

  • The Best Interest of the Child: A Look at the Impact of Social Media in Child Custody Proceedings in California

    The Best Interest of the Child: A Look at the Impact of Social Media in Child Custody Proceedings in California

    By: Sally Vazquez-Castellanos

Published April 12, 2025. Revised April 13, 2025.

    California’s Age-Appropriate Design Code

    California’s Age-Appropriate Design Code Act (CAADCA) was passed in 2022 to protect children’s online privacy and safety. The Act requires businesses that provide online services or products likely to be used by children under 18 to prioritize the interests of young users in the design of their products.

In 2023, a federal judge issued a preliminary injunction blocking enforcement of the law over free speech and other constitutional concerns. The CAADCA presently remains under review.

    Algorithmic Integrity: The Social Media Algorithm Act

In 2024, California also passed the Social Media Algorithm Act, effective January 2025. According to the New York Times, the legislation aims to protect young users from the adverse effects of algorithm-driven content, which can contribute to issues such as addiction and cyberbullying.(1)

California’s Social Media Algorithm Act is complementary legislation to the CAADCA. It is a further effort by the state to address addictive, harmful algorithmic practices intended to target children and teens.

By prioritizing chronological feeds, the law reportedly seeks to offer a safer online environment for children. Technology companies apparently have until 2027 to comply with these rules.

The law is significant because it forces businesses to confront the harms that can flow from their algorithms. Companies are being asked to examine closely how algorithmic design shapes the minds of young children. That examination is essential, because such design can damage a child’s psychological development in addition to the more traditional societal harms children face in the digital age.

    The Children’s Code in the United Kingdom

Since September 2021, the United Kingdom has set design standards for digital services likely to be accessed by children under age 18, protecting their privacy and online safety through its own Age-Appropriate Design Code (UK AADC).

The UK’s Age Appropriate Design Code, also known as the “Children’s Code,” is the first official guideline for online services accessed by children, and California’s laws are based on it. The UK AADC helps businesses follow UK data protection laws such as the UK General Data Protection Regulation (UK GDPR) and focuses on a child’s best interests. Introduced by the Information Commissioner’s Office (ICO) in 2021, the Children’s Code aims to protect children’s data.

    Similar to California’s legislation, the ICO states that the purpose of the Children’s Code is to ensure that online services are designed and operated in the best interests of children, which includes promoting their safety, wellbeing, and development.

The UK Age-Appropriate Design Code includes a set of 15 standards that serve as guidelines for data processing and design to protect children online.

    The United Nations Convention on the Rights of the Child

    According to the ICO, the best interest of the child standard should be evaluated based on Article 3 of the United Nations Convention on the Rights of the Child (UNCRC). When a family law court looks at this standard, it may consider if a business is acting in the best interests of children and may also consider how a business uses children’s data in relation to the rights outlined in the UNCRC.

The rights under the UNCRC include:

• safety;

• health;

• wellbeing;

• family relationships;

• physical, psychological and emotional development;

• identity;

• freedom of expression;

• privacy; and

• agency to form their own views and have them heard.

    In a Child’s Best Interest: Taking a Fresh Look at Business Design & Operations

    In California, child custody disputes focus on a child’s best interest. The social media impact on a child’s mental and emotional health has become significant in all of our lives, especially as it relates to our children.

    It’s important to design online experiences that are age-appropriate to ensure safety and to support emotional growth, while minimizing risks associated with social media.

    Social Media’s Impact on a Child’s Mental Health

    In California, family law courts consider a child’s use of social media when deciding what is in their best interest. In recent years, the role of social media use is becoming increasingly significant in these cases, particularly concerning the child’s mental and emotional well-being.

    In California, the legislation discussed here requires businesses to use age-appropriate design principles in the design of their products.

Family law courts in California may consider a child’s exposure to social media platforms as part of the best interest evaluation. Courts may look at how social media use affects a child’s mental health, social interactions, and overall well-being when determining a custody arrangement.

    The TikTok Dilemma

However, we are now also faced with the question of the TikTok application on our children’s phones. In many ways, I can envision scenarios where parents and caregivers may need the court’s analysis to go a step further.

We are in the middle of a national emergency, marked by a series of presidential Executive Orders that include the current crisis over the TikTok app. The application is commonly found on smartphones across America, is popular among younger audiences, particularly teenagers, and raises serious questions about the privacy and security of its users.

There are serious concerns about the collection and use of children’s data by foreign adversaries. At the same time, we face existential threats that go beyond intellectual property theft and retaliatory tariffs with countries like China, where TikTok’s parent company ByteDance is located.

    Online sexual exploitation of young women by experienced predators is a serious issue within the app’s ecosystem and continues to be a major concern.

In light of these concerns, we must consider how children use social media. We must also consider how smartphone algorithms, and the security of these devices, have the potential to enable the exploitation of children by advertisers, third parties, and foreign nations.

These are serious issues for children and the adults who supervise and love them. Parents can’t be with their children 24/7. It’s an incredible responsibility for any parent or caregiver to shoulder while businesses continue to connect us to the world. It’s especially troublesome when smartphones are essential to a child’s day-to-day life in schools across the country.

Companies should understand the risks posed when third-party service providers or others have access to a child’s smartphone. This is an important consideration for any court or legal proceeding weighing the psychological harm done to children by prolonged use over a lengthy period of time.

    Enforcement and Penalties

Neither California’s Age-Appropriate Design Code nor the UK Children’s Code establishes a private right of action for individuals. Both provide for enforcement through a regulatory body. In California, enforcement rests with the California Attorney General.

    The California Privacy Protection Agency enforces state data protection laws, and the Agency investigates complaints under state privacy laws such as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA).

In the United Kingdom, the ICO sets the guidelines for the Children’s Code, and penalties rest with the fines available for a data breach under the UK GDPR.

In California, by contrast, penalties are assessed on a per-child, per-violation basis.

    Conclusion

    Overall, the integration of social media considerations into child custody disputes reflects the evolving nature of family law in addressing modern challenges that affect the well-being of children. As technology continues to advance and social media becomes an integral part of daily life, its impact on parenting cannot be overlooked.

In child custody cases, it’s essential for courts to integrate California’s age-appropriate design principles, recognizing that algorithmic integrity and online engagement directly influence a child’s emotional development and safety.

These issues are critically important to cases that go well beyond the family law courts. By applying these principles, legal experts may be able to evaluate holistically how online behavior and interactions impact a child’s well-being, and to assess the potential risks associated with harmful content or the misuse of social media platforms.

1. Shawn Hubler and Amy Qin, “Newsom Signs Bill That Adds Protection for Children on Social Media: The California Legislation Comes Amid Growing Concerns About the Impact of Cellphones and Social Media on Adolescents’ Mental Health,” The New York Times, published Sept. 21, 2024; updated Sept. 22, 2024.