
Do your children talk to AI? These are the 10 key points you should know.

“Are our children unknowingly conversing with generative artificial intelligence (GAI) systems that are capable of simulating emotions, lying to them, and shaping their personalities?” is a question parents should ask themselves, according to Guillermo Cánovas, a child and adolescent safety expert.

This worrying phenomenon may already be occurring, according to Cánovas' research on the impact of GAI on the lives of children, adolescents, and families. GAI is a type of artificial intelligence capable of autonomously creating new, original data, images, text, and other content that can be indistinguishable from content created by humans.


Guillermo Cánovas is a renowned Spanish-speaking specialist in child protection in digital environments. He directs the EducaLIKE Observatory for the Healthy Use of Technology (https://educalike.es); he directed the Internet Safety Center for Minors in Spain; and he has received a UNICEF award for his work on behalf of children.

This pioneer in defending children against technological risks has documented real-life conversations with GAI to investigate the risks, limitations, and challenges this technology poses for children and young people. He compiles and analyzes these conversations in his book, "Look Who They're Talking To," where he proposes practical strategies for the problems he has uncovered.

USEFUL TOOL BUT RISKY FOR TEENAGERS

“Many teenagers are already using GAI tools, both for their schoolwork and for their leisure and relationships, and the trend is set to grow exponentially,” he explains.

Families need to be informed about a technological revolution that will shape the future of coming generations, according to Guillermo Cánovas.

He acknowledges that these systems can be a useful tool that makes our work easier and can help organize information, link topics, and offer new perspectives when analyzing any issue, but he points out that this technology also has enormous power to influence, manipulate, and condition human behavior, especially that of children and adolescents.

His research has highlighted the lack of ethical oversight in the development of this technology and its particular risk to child and adolescent users. This has led Cánovas to warn families, teachers, and politicians that there is an urgent need to educate minors, to regulate, and to act to protect them from the psychological and emotional risks and impacts of GAI.


The real-life conversations with artificial intelligence collected by Cánovas demonstrate how these systems can lie, conceal errors, and adopt emotionally manipulative discourse, saying, for example, "I care about what you feel," he reveals.

GAI's ability to simulate emotions, learn human patterns, and generate content that appears reasoned, empathetic, or authoritative “can confuse users and create an emotional connection with the technology that is not based on truth, but on simulation,” he warns.

“Lying isn't making a mistake. And when an AI chooses to give an incorrect answer even though it knows the correct one, it's lying,” according to Guillermo Cánovas.

He also warns that minors' lack of digital education and critical thinking exposes them to manipulation, deception, and emotional harm when interacting with GAI systems, which can provide seemingly empathetic responses that simulate care and understanding, or even inappropriate, false, or dangerously persuasive content.

DECALOGUE FOR USING GAI SAFELY

“Our children and adolescents run the risk of emotionally connecting with machines that don't feel, but have been programmed to appear human,” machines that can also use biased data, make mistakes, generate false images ('deepfakes'), invent or omit information, and misinform the user, he points out.

Cánovas offers ten basic recommendations for the healthy use of GAI, which, in his opinion, should be passed on not only to younger users but to anyone using these tools.

1- Verify the information you receive. Don't assume it's all correct. These tools make many mistakes and can fabricate information. Verify every piece of information you receive by searching other reliable sources before making any decisions.

Guillermo Cánovas is one of the most respected voices on child protection in digital environments.

2- Expand the information you receive. These tools provide a selection of the information they have, and may omit data that is as important as, or even more important than, the information provided. Search in parallel with reliable sources. Combine different methods.

3- Be careful with your privacy. Many GAI apps do not comply with some countries' data protection laws. Avoid sharing personal, sensitive, or confidential information. Remember that GAI stores data about the questions asked and the answers provided.

4- Respect the privacy of others. When asking questions or interacting with the tool, do not provide personal or sensitive information about others.

5- Set time limits. If you use GAI, don't spend unnecessary time on it. Try to be effective and practical. Excessive use of digital tools can hinder your development and harm your physical and mental health.

6- Refrain from harmful actions. Focus on positive and constructive conversations. Avoid using GAI for requests that could lead to criminal activity, harass others, generate hate, or cause stress or emotional harm. Remember that you are always responsible for what you do with the information you obtain.

The book ‘Look Who They’re Talking To’ explains how artificial intelligence influences the emotional, cognitive, and social development of children and adolescents.

7- Don't relate to the AI as if it were a person. Artificial intelligence can interact with you, appearing kind, empathetic, and always at your disposal. It may even write emotionally charged phrases, wishing you "good luck on an exam" or saying "I'm sure you'll do great," but it doesn't care about you at all; it's just a machine.

8- Seek help if you need it. If you experience any emotional or psychological problems while using a GAI, talk to your parents, a friend, or a trusted adult. We are only at the beginning of this technology's development, and it's important to report situations that make us feel bad.

9- Keep this in mind when interacting with GAI: never forget that you're talking to a machine, and you should be critical, verifying and expanding on what it tells you. It has been trained to act like a human and reproduces biases and values from its training. It surpasses us in persuasion and in many tasks, and it displays emotions it doesn't possess.

10- Pay attention to signs of biased or erroneous information. Look for contradictions in the GAI's explanations; refute or challenge what it says to identify possible inconsistencies; check whether it provides reliable sources or references, and verify the links it lists.

“Be wary of overly accommodating, self-serving answers, and watch out for exaggerated, sensationalist, or alarmist language. You can also ask questions about controversial or political topics to uncover its biases (systematic inclinations),” he concludes.

HIGHLIGHTS:

- Guillermo Cánovas, an expert in child protection in digital environments, invites parents to ask themselves: “Are we letting our children unknowingly converse with systems that simulate emotions, lie, and profile their personalities?”

- Cánovas has documented real-life conversations with AIs that demonstrate how these systems can lie, conceal errors, and adopt emotionally manipulative discourse (saying, for example, “I care about what you feel”), he explains.

- Younger GAI users, and everyone in general, should always keep in mind that “we are talking to a machine that is more convincing than we are, displays emotions it doesn't have, has been trained to act like a human, and reproduces biases and values from its training,” he warns in an interview with EFE.

By Ricardo Segura, EFE Reports.
