Tag: paternity

  • Judicial Reform in Recognizing Subtle Harm and Trauma Informed Practice

    By Sally Vazquez-Castellanos

    Revised on September 27, 2025 at 7:37 pm.

As I often do when something troubles me these days, I open a dialogue with ChatGPT.

The following was taken from a previously published conversation with ChatGPT about trauma-informed practice in the legal profession and its related systems.

    Please keep in mind that I have staunchly advocated for the recognition of subtle harms that are found in algorithms and targeted ads often displayed to children and vulnerable adults in the digital age.

It really can be as simple (or as complicated) as understanding just how difficult it may be for a child with a complexion problem to walk up to store personnel at CVS and ask for a tube of Clearasil. If you think that’s funny, try living life as a child who is impoverished or challenged in some way; perhaps then you might understand why this is a crisis that leaves some children vulnerable to being influenced by the wrong people. Sadly, many of these kids are bombarded with all kinds of messaging on their smartphones, including nasty behavior from individuals who abuse the privilege of being on social media platforms.

When we consider bias, racism, sexism, and discriminatory and disparate treatment and practices institutionalized in American systems, the legal system as a whole is confronting how to deal with societal harms that are becoming increasingly subtle in an age of technological dominance. I do think it’s important to note that bad people exist everywhere, including in our digital spaces.

    Just as digital platforms can be misused to cause quiet but devastating reputational harm through implication, curated messaging, or indirect targeting, so too can harm within family systems occur through subtle forms of control, manipulation, and intimidation—often without immediate physical evidence.

    In the context of family law and child custody and conservatorship proceedings, this form of abuse may be referred to as “coercive control”—a pattern of psychological, emotional, and sometimes economic manipulation used to dominate or isolate a partner or child. It is insidious precisely because it often evades the traditional markers of harm that courts are trained to recognize. When courts lack sufficient training in trauma-informed practices, child sexual abuse dynamics, and non-physical forms of abuse, the result is often the minimization or outright dismissal of credible concerns raised by protective parents.

    The parallel is clear: when institutions are not adequately prepared to recognize subtle, systemic harm, they may unintentionally legitimize or perpetuate it. In the media space, this results in public targeting masked as content; in the courtroom, it may result in placing children with abusive parents or penalizing the protective parent for “alienation” rather than identifying the underlying abuse.

    Judicial reform must include mandatory education for judges and court personnel on coercive control, trauma responses, and the complex dynamics of abuse—especially as they present in contested custody cases. Understanding that harm is not always loud, visible, or immediate is essential to ensuring that justice is truly protective, particularly for children and survivors.

    Just as we must be vigilant in digital spaces against subtle but coordinated reputational harm, we must bring that same level of vigilance into our courts—to recognize that harm can be quiet, strategic, and deeply destructive. Training and reform are not optional; they are critical for the safety and well-being of the families our courts are entrusted to serve.

    Legal Disclaimer

    This blog post is for informational purposes only and does not constitute legal advice. Reading this article does not create an attorney-client relationship. For advice about your specific legal matter, please consult a qualified attorney.

    About the Author

Sally Castellanos is a California attorney and the author of It’s Personal and Perspectives, a legal blog exploring innovation, technology, and global privacy through the lens of law, ethics, and civil society.

  • Ensuring Children’s Privacy and Safety in the Digital Age: TikTok and the AI Dilemma

    By Sally Vazquez-Castellanos

    Republished on September 23, 2025 at 7:02 pm.

In the face of rapid technological advancement, both policymakers and tech companies are dealing with increasingly complex issues concerning the online safety and privacy of children. Global laws and regulations, such as the EU’s AI Act, have been enacted to address these challenges.

    When considering the plain meaning of recent executive orders as well as the ongoing conflicts in Ukraine and the Middle East, we must understand that TikTok’s issues are a national security nightmare for the United States.

TikTok and National Security Concerns

According to a recent article from Reuters, President Trump’s proposal to have TikTok sell its U.S. interests remains on the table, days after the deal was said to be on hold.

    TikTok is at the forefront of debate regarding national security and children’s privacy. In recent years, concerns have grown about how the platform handles user data. We have seen executive orders aimed at addressing the collection of sensitive data from American consumers by foreign adversaries, prompting recent presidential directives against TikTok to mitigate the risk to national security and children’s privacy.

TikTok, a widely used app among children and teens, is heavily scrutinized for the company’s data collection practices. The U.S. government, under both President Trump and President Biden, has taken steps to address these concerns, citing the need to protect national security and critical infrastructure and technologies.

    President Trump’s Executive Order Issued August 6, 2020

President Trump’s executive order expressly states that a national emergency exists with respect to critical infrastructure and technologies. The presidential directive addresses the need to secure Information and Communications Technology and Services in the United States. It also deals with mobile apps developed and owned by companies in China that threaten the national security, foreign policy, and economy of the United States.

    The AI Act and Global Perspectives

On a global scale, the EU’s AI Act aims to regulate artificial intelligence technologies, focusing on transparency, accountability, and personal data protection, especially as the ‘internet of things’ becomes increasingly integrated into all of our lives. The use of artificial intelligence without guidelines and regulation, including Google workstations and other smart technologies at schools nationwide, could pose a significant threat to our nation’s children and educational system.

Artificial intelligence regulations are part of a broader effort to safeguard users globally, inspired by landmark privacy laws such as the General Data Protection Regulation (GDPR) in the European Union. A regulation such as the GDPR treats data privacy as a fundamental right. We have similar constitutional and state law authorities in California, such as the California Privacy Rights Act (CPRA), but under federal law it’s much more complicated.

    Conclusion

The intersection of technological innovation and children’s online safety demands ongoing attention and the adaptation of laws, policies, and practices. We must address national security concerns with platforms like TikTok, including the ethical use of artificial intelligence.

Security concerns for our nation, including the national economy, also demonstrate a profound need for regulation and policies at the federal level that carefully consider comprehensive frameworks like the EU’s AI Act. Meanwhile, stakeholders continue their important work toward providing a safer digital environment for children worldwide.