
This is a summary of the keynote address titled “AI and Media Ethics: When Algorithms Influence Truth,” delivered by Dr. Chai Wutiwiwatchai, Executive Director of the National Electronics and Computer Technology Center (NECTEC), National Science and Technology Development Agency (NSTDA). The speech was part of the seminar “AI and Media Ethics: Decoding the Complex Relationship between Artificial Intelligence and Ethics in the Digital Age,” a side event of the “UNESCO Global Forum on the Ethics of AI 2025,” organized by the National Press Council of Thailand and its partners on June 24, 2025.

The Future of AI: When Technology Outpaces Ethics
Dr. Chai explained that AI has evolved far beyond public perception. It began with Perception AI, which focused on specific recognition tasks, and has now entered the era of Generative AI, capable of creating new content. The next step is Agentic AI, in which AI acts as an autonomous agent performing complex, cross-application tasks. This aligns with the direction of leading tech companies currently developing advanced AI agents as highly capable personal assistants. Dr. Chai noted that Thailand’s Revenue Department is a prime candidate for adopting Agentic AI, as it is legally empowered to pull together comprehensive data.
Following this, we will enter the era of Physical AI, in which AI takes full control of physical robots. NECTEC is already conducting research on humanoid robots and exoskeleton suits. This trend mirrors the global race to develop and deploy humanoid robots in the industrial sector. The ultimate goal is Artificial General Intelligence (AGI), where a single robot can perform any task a human can. This will all be powered by Multimodal AI, which can perceive and respond to both visual and audio inputs, along with on-device AI chips in mobile phones for faster processing.
However, the most concerning development is Brain Decoding. Research from the University of Texas at Austin has successfully used AI to translate brain activity captured by fMRI scans into continuous text. This breakthrough has prompted UNESCO to consider establishing a committee on neurotechnology to debate its governance and the protection of human neuro-rights.

Opportunities and Risks: AI as a Double-Edged Sword
Dr. Chai pointed out that AI offers immense benefits to the media industry, from content creation and curation to audience analysis. He cited real-world examples, such as its use at Thai PBS and an AI Avatar that NECTEC created for veteran journalist Suthichai Yoon, which successfully engaged a younger audience.
Conversely, the negative impacts are becoming clearer across four main dimensions:
- Copyright: Remains a subject of major legal battles worldwide.
- Misuse: Includes the use of deepfakes for political disinformation and financial fraud.
- AI Mistakes: A significant concern for government bodies. The incidentdatabase.ai project was created to compile and study AI-related errors for prevention purposes.
- Job Replacement: The World Economic Forum’s ‘Future of Jobs Report 2023’ estimates that administrative and routine jobs are at high risk of being replaced by automation.

Thailand's AI Governance Approach: Build 'Standards' Before 'Laws'
The core of the keynote was a proposal for a three-tiered AI governance framework that emphasizes balance and practicality.
1. National Level: Prioritize Standards, Defer Legislation
Dr. Chai stressed that legislation should be a last resort, as it could stifle innovation. Although the Electronic Transactions Development Agency (ETDA) is drafting an AI Act and has issued guidelines, Dr. Chai noted that these recommendations may not be sufficient to drive real-world change.
He proposed that Thailand should first focus on Standardization. NECTEC plans to establish an AI Standard Testing Center to evaluate AI systems based on five core principles:
- Safety Concern
- Human Intervention
- Precision
- Reliability & Robustness
- Data Governance
This approach aligns with the voluntary AI Risk Management Framework from the U.S. National Institute of Standards and Technology (NIST).
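To make the idea of standards-based testing concrete, here is a minimal sketch (not an official NECTEC specification) of how an evaluation against the five principles above could be recorded as a scored checklist. The principle names mirror the keynote's list; the 0–5 scale, passing threshold, and example scores are hypothetical.

```python
# Illustrative sketch only: a scored checklist for evaluating an AI system
# against the five principles named above. The scale, threshold, and example
# scores are hypothetical assumptions, not an official NECTEC scheme.
from dataclasses import dataclass


@dataclass
class PrincipleScore:
    principle: str
    score: int       # hypothetical scale: 0 (absent) to 5 (fully demonstrated)
    evidence: str    # pointer to the test report or documentation reviewed


def evaluate(scores: list[PrincipleScore], passing_score: int = 3) -> dict:
    """Summarize an evaluation: every principle must meet the passing score."""
    failed = [s.principle for s in scores if s.score < passing_score]
    return {
        "overall_pass": not failed,
        "failed_principles": failed,
        "mean_score": sum(s.score for s in scores) / len(scores),
    }


if __name__ == "__main__":
    example = [
        PrincipleScore("Safety Concern", 4, "red-team test report"),
        PrincipleScore("Human Intervention", 5, "operator override audit"),
        PrincipleScore("Precision", 3, "benchmark accuracy sheet"),
        PrincipleScore("Reliability & Robustness", 2, "stress-test log"),
        PrincipleScore("Data Governance", 4, "data lineage review"),
    ]
    print(evaluate(example))
```

In this sketch the system fails overall because one principle scores below the threshold, which reflects the gatekeeping role a testing center would play before certification rather than averaging away weak areas.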
In contrast, the EU AI Act, officially approved in May 2024, is the world’s first comprehensive AI law, categorizing risks and strictly banning certain applications such as government-led social scoring. The proposed Thai AI Act, which has already undergone at least two public hearings led by ETDA, is designed to be more flexible. It allows different sectors to collaboratively define what constitutes “high-risk” for them, while also promoting innovation through provisions on data mining and a regulatory sandbox.
2. Community Level: From Policy to Practice
Dr. Chai views the guidelines already issued by the National Press Council as a great start. He recommends that media organizations adapt them into internal regulations. He highlighted NSTDA’s own process—which includes an AI committee and a risk assessment tool to classify research projects as high, medium, or low risk—as a best practice that can be adapted by others.
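As a rough illustration of the kind of risk triage described above, the sketch below classifies a project as high, medium, or low risk from a short questionnaire. The questions, weights, and thresholds are hypothetical and stand in for whatever criteria NSTDA's actual tool uses.

```python
# Illustrative sketch only: a project-level risk triage of the kind described
# above. Questions, weights, and thresholds are hypothetical assumptions.
RISK_QUESTIONS = {
    "uses_personal_or_sensitive_data": 3,
    "makes_automated_decisions_about_people": 3,
    "operates_without_human_review": 2,
    "deployed_outside_the_lab": 1,
}


def classify_project(answers: dict[str, bool]) -> str:
    """Sum the weights of the 'yes' answers and map the total to a risk tier."""
    total = sum(weight for q, weight in RISK_QUESTIONS.items() if answers.get(q))
    if total >= 5:
        return "high"
    if total >= 2:
        return "medium"
    return "low"


if __name__ == "__main__":
    project = {
        "uses_personal_or_sensitive_data": True,
        "makes_automated_decisions_about_people": False,
        "operates_without_human_review": True,
        "deployed_outside_the_lab": False,
    }
    print(classify_project(project))  # -> "high"
```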
“Furthermore, in education, the Office of the Permanent Secretary for Higher Education, Science, Research and Innovation is considering mandating the teaching of AI tools in all university programs. This is to ensure that students in every field become familiar with AI and can use it ethically,” Dr. Chai explained.
3. Individual Level: Focus on Upskilling and Upholding Ethics
Dr. Chai raised a concern, citing OECD data, that children’s reasoning and problem-solving abilities have declined since the widespread adoption of Generative AI. This issue can affect adults as well if they use AI tools without developing their own skills. The solution is not to ban AI, but to shift the paradigm to Active Learning, where teachers and professionals act as supervisors, using AI as a tool to enhance performance.
In his conclusion, Dr. Chai summarized that mandatory laws should be the last option. Instead, the focus should be on self-regulation through policies and guidelines at the community and professional association levels. He encouraged the promotion of best practices, such as strong AI governance within media organizations, to serve as models for others. The crucial role of professional councils is to safeguard the value of human judgment so that it is not replaced. He used the medical field as an example, where a doctor must always be the one to sign off, preserving human accountability.
For individuals, Dr. Chai emphasized that the best defense is to create value for oneself, develop skills to use AI as a performance-enhancing tool, and strictly adhere to professional ethics.
