Child with an AI equipped cell phone, Author Shani Epstein (CC BY-SA 4.0 International)
Artificial Intelligence (AI) – a technology that allows computers to perform complex tasks – is being heavily promoted across all spheres of endeavor. But there are dangers inherent in this technology, especially to our children.
Dangerous Content
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please [1A].”
It has been widely reported that a Google AI chatbot instructed a Michigan college student to die [1B]. Had a younger or less resilient child received such a message, we can only guess what the outcome might have been.
Snapchat’s AI gave inappropriate advice to reporters posing as children – allegedly advising what it believed to be a 13-year-old girl on how to lie to her parents about a trip with a 31-year-old man, and how to cover up bruises before a meeting with Child Protective Services [2][3].
Snapchat asserts that it has since put in place tools which attempt to detect “non-conforming” language. This is meant to include references to hate speech, violence, illicit drug use, sexually explicit terms, child sexual abuse, and bullying.
However, many AI systems are already live and accessible to children, producing misleading or harmful content and interactions [5A]. Amazon’s Alexa advised a child to stick a coin in an electrical socket [4].
The use of chatbots, moreover, can lead to danger when bots fail to recognize appeals for help or give inadequate advice. A 2018 BBC test of two mental health chatbots revealed that both apps failed to properly handle children’s reports of sexual abuse, though both had been considered suitable for children [5B].
Grooming
“Unlike traditional grooming, which relies solely on the instincts and tactics of the predator, AI-driven grooming uses advanced algorithms to identify and target potential victims more effectively. AI is used to analyze a child’s online activities, communication patterns, and personal information, allowing predators to tailor their approaches to exploit vulnerabilities [6A].”
This, by itself, should set off alarm bells for parents.
Deepfakes and Sextortion
AI algorithms are capable of generating shockingly realistic child sexual abuse material. Deepfakes (videos in which the face or body have been digitally altered) blur the line between what is real and what is not, greatly increasing the potential for sextortion.
Both predators and bullies can exploit AI-generated deepfake images to coerce children into sending money, engaging in sex acts, or otherwise submitting to their demands in order to prevent release of the images [6B].
Unfortunately, the use of deepfake images is not explicitly prohibited by law in most states. Currently, it is not illegal for a friend, colleague, or stranger to download photos from someone’s public social media profile, use AI to create graphically erotic content, and then distribute the deepfakes online [7].
Privacy
“Studies show that little ones often chat with smart speakers, telling personal stories and disclosing details that grownups might consider private…One study found that kids between 3 and 6 years old believed that smart speakers had thoughts, feelings and social abilities [8A].”
AI collects a huge amount of data about us, often without our knowledge. The “Hello, Barbie” doll, for instance, was found to record conversations and transmit the data to third parties like advertisers and toy manufacturers, raising significant privacy concerns [9].
In addition, smart toys are often permanently connected to the web and are therefore susceptible to hacking and security breaches [10].
Voice assistants typically rely on stored voice recordings to facilitate a system’s continuous learning. However, child rights advocates have raised questions over the lack of clarity as to company data retention policies [5C].
The Children’s Online Privacy Protection Act (COPPA) theoretically protects children under 13 by restricting the online collection and use of their personal information [11]. However, COPPA has been regularly violated by media companies, manufacturers, and others [8B].
Social and Psychological Impact
The social and psychological impact of AI on children has not yet been fully explored. We and our children are in unknown territory.
Combating AI Risks
Clearly, parents are facing new challenges in this technology-intense environment. But there are steps parents can take to protect their children [6C]:
- Education on the part of parents is essential. Many parents are intimidated by this technology, and do not have the necessary knowledge to convey to their children.
- Parents should use privacy settings on social media platforms, and monitor their children’s computer usage.
- Children should be taught that not everything on the internet is true. They should be encouraged to verify information, even if it seems to come from someone they know.
- Children should be cautioned that private information must remain private. Clear boundaries should be set for them. Parents may want to avoid entirely those interactive toys that “talk” with young children.
- Open dialog with children is crucial. Parents should make their concerns clear. Children should feel free to share their online experiences with parents.
- Parents should report suspicious activity.
Instructing children about the dangers surrounding internet use and AI, in particular, is a parent’s responsibility. However technically savvy some children may become, they are not equipped to navigate these dangers alone. Only parents can provide the needed judgment and guidance.
—
[1A and 1B] CBS News, “Google AI chatbot responds with a threatening message: ‘Human…Please die.’” By Alex Clark and Melissa Mahtani, 11/20/24, https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/.
[2] Washington Post, “Snapchat tried to make a safe AI. But it chats with me about booze and sex”, 3/14/23, https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai/ .
[3] Fox News, “Snapchat AI allegedly gave advice to 13-year-old girl on relationship with 31-year-old man, having sex” by Nikolas Lanum, 4/13/23, https://www.foxnews.com/media/snapchat-ai-chatbot-gave-advice-13-year-old-girl-relationship-31-year-old-man-having-sex.
[4] CNBC, “Amazon’s Alexa assistant told a child to do a potentially lethal challenge” by Sam Shead, 12/29/21, https://www.cnbc.com/2021/12/29/amazons-alexa-told-a-child-to-do-a-potentially-lethal-challenge.html.
[5A, 5B, and 5C] UNICEF, “Generative AI: Risks and opportunities for children”, https://www.unicef.org/innocenti/generative-ai-risks-and-opportunities-children.
[6A, 6B, and 6C] Child Rescue Coalition, “The Dark Side of AI: Risks to Children”, https://childrescuecoalition.org/educations/the-dark-side-of-ai-risks-to-children/.
[7] Sen. Tracy Pennycuick, “Protecting Our Children: Addressing the Dangers of AI-Generated Deep Fakes in Pennsylvania”, 10/21/24, https://senatorpennycuick.com/2024/10/21/protecting-our-children-addressing-the-dangers-of-ai-generated-deepfakes-in-pennsylvania/.
[8A and 8B] Healthy Children, “How Will Artificial Intelligence (AI) Affect Children?” by Tiffany Munzer MD, 4/30/24, https://www.healthychildren.org/English/family-life/Media/Pages/how-will-artificial-intelligence-AI-affect-children.aspx.
[9] Tech Policy Lab, “Toys that Listen: A Study of Parents, Children, and Internet Connected Toys” by Emily McReynolds, et al, October 2017, https://techpolicylab.uw.edu/wp-content/uploads/2017/10/Toys-That-Listen_CHI-2017.pdf.
[10] UNICEF, World Economic Forum, “Children and AI – Where are the opportunities and risks?”, https://www.unicef.org/innovation/sites/unicef.org.innovation/files/2018-11/Children%20and%20AI_Short%20Verson%20%283%29.pdf.
[11] Federal Trade Commission, “The Children’s Online Privacy Protection Rule (‘COPPA’)”, https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa.
Pope Francis has taken the unusual step of dissolving an influential Peruvian Catholic group after investigation revealed “sadistic abuses” of power and spirituality. The group’s founder, Luis Fernando Figari, was earlier found to have committed sexual acts with recruits, and to have engaged in significant financial mismanagement.
See, https://www.CNN.com/2025/01/21/americas/pope-dissolves-peruvian-catholic-group-intl-latam/index.html.
FOR MORE OF MY ARTICLES ON POVERTY, POLITICS, AND MATTERS OF CONSCIENCE CHECK OUT MY BLOG A LAWYER’S PRAYERS AT: https://alawyersprayers.com


Thank you, Anna, for this valuable article. We’ve been reading it over and over again, and I truly believe that not reading it would be a serious oversight in our parenting journey.
This has always been a dangerous world. Unfortunately, the technology we thought would make it safer for our children is not what we hoped.
As a man with a technical background, I’ve learned one thing: the more efficient or powerful a tool, the more dangerous it is. Like in the fairy tale of Sleeping Beauty, we can’t protect ourselves by avoiding dangerous tools altogether, but by embracing their strengths and understanding their dangers. Life has taught me, the hard and painful way, that running away does not help.
And you’ve been doing it right, Anna.
That’s a tremendous endorsement, Hubert. I freely admit that I have never been adept w/ technology. I often say I can be intimidated by a toaster (LOL). But the world is filled w/ technology. We have to be aware.
My wife, despite being technically savvy, initially refused to use the robotic vacuum cleaner, convinced it would make her lazy. She held the same belief about the dishwasher. However, in the end, the dishwasher handles about 75% of all dishes, and the robot vacuum cleaner does its job—though it’s still denied full responsibility for vacuuming the entire house.
As for me, I take a similar approach for the same reason—I try to stick to a hands-on method for most tasks, relying on AI and technology only for the most arduous or repetitive jobs.
This is a little trick we use to keep ourselves sharp, both mentally and physically. Of course, this is just a personal recommendation—I fully understand that every person has their own unique circumstances and preferences. 🙂
🌹❤️
I had no idea AI posed so many risks to kids. One thing that really stood out to me was the importance of having open and honest conversations with the kids about online safety. It’s not just about setting limits and monitoring their screen time, but also about teaching them how to use the digital world responsibly and wisely.
This post is a total eye-opener, Anna.
Glad this was helpful, Ritish.
Ugh….why must man corrupt EVERYTHING?!?!?
The “sin nature” of mankind reveals itself everywhere. It’s the very reason we need a Savior in the Person of Jesus Christ.
Scary stuff, but it’s already here, so we have to figure out how to deal with it.
Exactly.
A simple illustration of AI’s emotional danger: about 40 years ago, before smartphones or even personal computers, a friend had a Walter Winchell style dummy. He showed it and his ventriloquist skills to a four-year-old, but then, as the child engaged with the dummy, he pushed its head waaaay out, showing the stick on which it sat! The kid was traumatized and ran screaming to his mother. He refused to even look and see that the dummy was not a “real boy.”
Robots can provide the illusion of connection, which all human beings crave. There is already a recognized psychological risk of unhealthy emotional attachment to a non-reciprocating entity, with consequent loneliness, depression, and isolation. See, https://www.fisherphillips.com/en/news-insights/dont-fall-in-love-with-your-robot-steps-employers-can-take-ai-attachments-in-workplace.html.
Needless to say, this would interfere w/ the development of real relationships.
Japanese, South Korean, and Chinese online idol culture mirrors this. Fans there obsess over musical groups whose profiles are entirely artificial and w/ whom they have no genuine interaction (to the point of believing they are in love w/ individual singers). See, https://en.wikipedia.org/wiki/Chinese_idol.
I find AI to have terrifying possibilities, but hope that is due to my lack of depth in the field. Last week, I heard a thinker say how wonderful it will be to give all knowledge readily, to the billions of people on earth. He did not mention understanding, nor wisdom, nor any morality that would best attend knowledge, to avert disaster.
You hit the nail on the head, Marilee. I am profoundly concerned this technology will be misused, not only by predators but by autocrats.
Protecting children from the misuse and dangers of AI ought to be our #1 priority. I’m appalled that so much of it remains unregulated, especially the storing of personal data. Privacy laws ought to protect us, but I don’t think the means of enforcing and monitoring them have caught up with the technology. And then there’s the danger of our own government using our data against us, as you warn, limiting our freedoms.
Mankind has a habit of pursuing new technologies at full speed, w/o taking into account the consequences. This area is unregulated because there is no monetary incentive to regulate it. Sadly, the financial interests of large corporations take precedence over those of individuals w/ our legislators.
This is thorough and well researched, Anna. We must be made aware of the dangers of AI and proceed with utmost caution. Governments need to be properly advised and children protected. Thank you for posting –
I am glad you found the post helpful. Sadly, the government seems either enamored of this new technology or captive to its proponents. Parents must be pro-active.
Yes, unfortunately many believe governments and politicians have their best interests at heart. Even if they did, the rate of technological change is beyond them, and it has now become potentially dangerous.
If there was ever an instance of technology outpacing the ability to understand how to take precautions, this is it. Thank you for pointing this out, Anna!
–Scott
A sad reality of which parents must be aware.
Scary indeed. I thought little people lived inside the car radio when I was small. It’s not like we speak aloud every thought…
Tremendous post Anna. In a world known for victimizing and exploiting children, I see far more harm than good coming from the use of AI.
Perhaps most disturbing of all is the necessity of parents educating their children on the inherent dangers of such technology. I say that as a parent of an elementary school teacher who deals every day with the consequences of parents who have very little involvement in the lives of their children.
As for governmental regulation and protection of children, I think we all know how effective that often is.