Protecting children in the age of artificial intelligence

In essence, there can be no substitute for an educated and literate public with a discerning and critically analytical mind, as well as the cognitive and affective means to protect itself from transgressions.

Vitit Muntarbhorn

The Jakarta Post


March 5, 2025

BANGKOK – The age of artificial intelligence is very much here. The term “generative AI” is now commonplace, with the public fascinated that AI can actively produce content such as written and audio creations. In fact, the world is moving toward artificial general intelligence (AGI), whereby machines will be able to match and even outdo human intelligence. Its relationship with children (those under 18 years) therefore invites reflection and precaution.

On the one hand, AI can bring great benefits, building on the strengths of existing digitalization. It can be a useful educational tool, such as helping children who face learning difficulties or disabilities. It is a technology of connectivity, facilitating communication and the dissemination of information.

On the other hand, AI also brings risks. It can be a tool of exploitation, including the sexual abuse and exploitation of children. It can be a technology of alienation, used for bullying, hate speech, discrimination and violence. It is an instrument of stress, replete with addiction and superficial self-validation. And it is emerging as an instrument of human subjection and dejection, especially when and where it controls human lives, perhaps absolutely.

How then is the world community to handle that ambivalence? The international guiding framework is the Convention on the Rights of the Child and its General Comment No. 25 on children’s rights in the digital environment, which highlights child protection.

In reality, implementation is open to a variety of orientations, bearing in mind that both AI and related responses are in a state of flux.

On one front, there is a two-track situation whereby a general approach is contrasted with a more specific approach in handling the relationship between AI and children. The former is exemplified by laws and guidelines of a general nature that protect children’s privacy and safety and promote AI transparency, especially in explaining the pros and cons of AI to children.

The more specific approach is to target various sectors for action. Twenty-five years ago, the United States’ Children’s Online Privacy Protection Act offered a preview. It imposed a minimum-age condition: children under 13 cannot themselves consent to the collection and disclosure of their data; parental consent is required. In 2025, California opted for an additional, specific intervention. Its recent law on AI-generated patient communications stipulates that healthcare facilities using AI must include clear disclaimers when content is AI-generated. The possibility of contacting human healthcare providers must also be available.

On another front, there is the contrasting vision between ethical guidelines of a persuasive nature concerning AI utilization and the prescriptive approach of binding regulations, with consequential accountability in the case of violations. The ethical approach has emerged from several international agencies and highlights basic principles such as “Do No Harm”, safety and security, privacy and data protection, responsibility and accountability, and the transparency and explainability of AI’s functions.

The prime example of the prescriptive approach is the European Union’s AI Act, whose prohibitions took effect in 2025. There is a list of prohibited practices. Social scoring, whereby data might be used to discriminate against people, is forbidden. Subliminal targeting of children’s emotions as a form of manipulation is proscribed. The collection of real-time biometric data for surveillance purposes is not allowed, although there might be some leeway in regard to national security. For systems posing lesser risks, the business sector is called upon to adopt codes of conduct as a form of self-regulation, subject to linking up with the EU supervisory system as a whole.

Globally, certain realities are inevitable. Where content is illegal, such as the sexual abuse or sexual exploitation of children, national laws already prohibit such practices, and those prohibitions apply automatically to AI-related actions. However, jurisdictions may differ on whether the children appearing in AI-generated content must be real children or may be merely digitally generated.

From another dimension, there is the issue of how to deal with harmful content that is not illegal. For example, the mere fact that X hates Y is not necessarily illegal under international or national law. Other responses may thus be required. At present, the digital industry, especially its developers and deployers, has already adopted some self-regulatory tools to moderate and take down harmful content, at times through filtering. These might also cover various forms of bullying and grooming of children, which might otherwise lead to discrimination or violence.

The key lies with digital and AI literacy, so that the public is able to enjoy the benefits of technology safely, securely, “smartly” and sustainably. This can be helped by the AI industry ensuring that its members are themselves AI literate, assessing and mitigating risks as part of due diligence.

In essence, there can be no substitute for an educated and literate public with a discerning and critically analytical mind, as well as the cognitive and affective means to protect itself from transgressions.

Urgently, families need options for a “digital detox”. This would enable parents to work with children to safeguard spaces at home that are free from technology. There need to be periods of human interaction without devices, as well as shared leisure time together as humans.

Human activities such as pro bono help for disadvantaged groups need to be nurtured, to generate the warmth of empathy, which no technology can replace.

Hence, the community needs “Top Tips for Digital Detox” now!

The writer is a professor emeritus in the Faculty of Law at Chulalongkorn University.
