Key Takeaways
- AI hallucinations are accidental model errors, while misinformation involves deliberate manipulation using AI tools.
- Malaysia’s Online Safety Act 2025 creates legal duties for licensed platforms to manage “harmful content”, with time-bound actions set out in subsidiary regulations.
- Deepfakes tied to Malaysia’s sensitive 3R issues (Race, Religion, Royalty) can carry high social risk and trigger strong reactions.
- Verification platforms like Sebenarnya.my and the MCMC Consumer Redress Portal (aduan.mcmc.gov.my) give Malaysians official channels to check claims and lodge reports.
- Developing AI literacy is one of the most important long-term defenses against deepfakes and synthetic media.
AI misinformation is becoming more common in Malaysia because generative AI tools can now produce realistic videos, voices, and written content that are difficult to distinguish from real information. These fabricated messages often spread quickly through WhatsApp groups and social media, making it harder for users to verify facts before misinformation goes viral.
It’s no secret that AI proliferation has made it difficult for netizens to spot what’s real and what’s not. Malaysia is no stranger to viral clips that were fabricated by AI, further fuelling misinformation in the country.
So today, as a digital PR agency that really, really does not want any Malaysians falling for AI scams, we will show you what AI misinformation is, why and how it became so widespread, and how to identify it.
Why AI Misinformation Matters in Malaysia
AI-generated misinformation is no longer limited to obvious internet hoaxes or crude CGI.
Modern generative AI tools can create:
- Realistic videos
- Cloned voices
- Convincing articles
- Synthetic photographs
“In February 2026, an AI-generated viral clip made with ByteDance’s Seedance 2.0 showed hyper-realistic depictions of celebrities (including a fake Brad Pitt–Tom Cruise fight) and sparked backlash over misuse of likeness and intellectual property.”
In Malaysia, these fabricated messages often spread through familiar channels:
- WhatsApp family groups
- Facebook community pages
- Telegram news channels
- TikTok viral clips
For many Malaysians, these platforms are primary sources of news alongside traditional outlets such as Astro Awani, Bernama, The Star, and Free Malaysia Today.
Because misinformation frequently arrives through trusted social circles, many users assume it is authentic before verifying it.
What Is AI Misinformation?
AI misinformation refers to false or misleading content generated using artificial intelligence tools such as deepfake videos, voice cloning software, or AI-written articles.
Unlike traditional fake news, AI misinformation can be created rapidly and distributed at scale with little effort.
Common forms include:
- Deepfake political speeches
- Voice cloning scams
- Fake investment advertisements
- Fabricated news or press release screenshots
AI Hallucination vs AI Misinformation
Not all incorrect AI content is malicious; some of it stems from hallucinations.
| Feature | AI Hallucination | AI Misinformation |
| --- | --- | --- |
| Intent | None | Deliberate deception |
| Source | AI prediction error | Human manipulation |
| Example | Fake historical fact | Deepfake political speech |
| Primary Risk | Academic mistakes | Social unrest or fraud |
| Legal Implications | Minimal | Possible violation of Malaysian law |
Hallucination
A student asks an AI chatbot about Malaysian history and receives an invented treaty between Johor and the British that did not happen.
Misinformation
A scammer uses AI voice cloning to imitate a relative claiming to be detained at a police station, requesting urgent funds via Touch ’n Go eWallet.
Why AI Misinformation Spreads So Fast in Malaysia
Malaysia’s digital culture enables information to travel extremely quickly.
Several factors contribute to this.
High WhatsApp Penetration
Malaysia has one of the region’s heaviest WhatsApp usage patterns.
Meltwater (citing the Global Digital Report 2024) reports that 90.7% of internet users aged 16–64 in Malaysia use WhatsApp monthly, making it a major pathway for fast-moving rumours in private group chats.
Information frequently spreads through:
- Family groups
- Neighbourhood chats
- School parent groups
These private networks are difficult for fact-checkers to monitor.
Emotional Viral Culture
Posts involving natural disasters, politics, or religious issues often trigger emotional reactions.
Users may share content before verifying its authenticity.
Cheap and Accessible AI Tools
Modern generative AI platforms allow users to produce:
- Voice clones
- Realistic images
- Synthetic videos
within minutes, often at little to no cost.
Why Malaysia Is Particularly Vulnerable
Several characteristics of Malaysia’s digital ecosystem contribute to the rapid spread of synthetic media.
| Factor | Why It Matters |
| --- | --- |
| High WhatsApp usage | Private sharing makes moderation difficult |
| Multilingual society | Content spreads across Malay, English, Chinese, Tamil |
| Viral social media culture | Emotional posts spread rapidly |
| Sensitive 3R issues | Race, religion, and royalty topics trigger strong reactions |
Multilingual Information Networks
Content spreads across multiple languages, including:
- Bahasa Malaysia
- English
- Mandarin
- Tamil
This makes misinformation detection more complicated, as the content is sometimes “rojak” (a mix of languages), producing Manglish-style wording.
Sensitivity Around 3R Issues
Topics involving Race, Religion, and Royalty carry strong emotional weight and may escalate quickly if manipulated.
High Social Media Engagement
Platforms such as TikTok, Facebook, Instagram, and X play a major role in shaping public conversations.
How AI Misinformation Could Affect Malaysia’s Elections
Synthetic media could shape political narratives and deepen existing divisions.
AI tools can generate:
- Fake speeches from politicians
- Manipulated campaign footage
- Fabricated political statements
These materials may circulate through messaging apps before authorities verify their authenticity.
In Malaysia, election narratives often intersect with sensitive social topics. AI-generated misinformation targeting these themes could amplify tensions between communities.
Authorities such as the Malaysian Communications and Multimedia Commission (MCMC) and CyberSecurity Malaysia monitor coordinated misinformation campaigns to reduce their impact.
How AI Is Fueling Financial Scams in Malaysia
Financial fraud is one of the fastest-growing uses of AI-driven deception in Malaysia.
“According to police reporting, 35,368 scam cases were recorded in 2024, with roughly RM1.6 billion in losses. In Q1 2025, police recorded 12,110 online fraud cases involving RM573.7 million in losses.”
Security analysts estimate that AI-generated videos and voice clones now influence a growing share of scam attempts, particularly in investment fraud and impersonation scams.
Common AI Scam Tactics in Malaysia
Voice Cloning Scams
Scammers use AI to imitate relatives, employers, or company executives requesting urgent money transfers through platforms like Touch ’n Go eWallet or bank transfers.
Fake Investment Advertisements
Deepfaked videos featuring Malaysian public figures or entrepreneurs promote fake cryptocurrency, forex, or stock investments.
These videos often circulate through Facebook, TikTok, and Telegram, convincing viewers the opportunity is endorsed by trusted personalities.
Fake Bank Alerts
AI-generated messages or calls may impersonate major financial institutions such as:
- Maybank
- CIMB Bank
- Public Bank
- Bank Negara Malaysia
These messages usually claim suspicious transactions or urgent account issues to pressure victims into revealing personal information.
How Malaysians Can Spot AI Deception
Human awareness remains the strongest defense against AI-generated misinformation.
While detection software exists, many deepfakes and synthetic posts spread through WhatsApp groups, TikTok clips, and Telegram channels long before automated tools flag them.
Signs Content May Be AI-Generated
Look for these common warning signs before forwarding a viral post.
- Distorted fingers or hands in images: AI image generators often struggle with human anatomy. Extra fingers, blurred hands, or oddly shaped palms are common visual errors.
- Unnatural blinking or facial movements: Deepfake videos may show irregular blinking patterns or stiff facial expressions that do not match natural speech.
- Mismatched shadows or lighting: If shadows fall in different directions or lighting appears inconsistent across objects, the image may have been synthetically generated.
- Overly formal Malay language: AI-generated Malay sometimes sounds robotic or overly formal and may include Indonesian vocabulary such as kantor instead of pejabat.
- Unknown accounts posting sensational claims: Newly created accounts or anonymous pages sharing shocking news should be treated with caution.
Cross-Platform Verification
If a major event truly happened in Malaysia, it would usually be reported by established media outlets.
Before believing a viral post, check whether it appears on reputable Malaysian news platforms such as:
- Bernama
- The Star
- Malay Mail
- Astro Awani
If these outlets remain silent about a supposedly major incident, the content may be misleading or fabricated.
Fact-Checking Platforms
Malaysians can also verify suspicious claims using official fact-checking and reporting channels.
- Sebenarnya.my: a government-backed portal that debunks viral misinformation circulating online.
- Aduan MCMC reporting portal (aduan.mcmc.gov.my): managed by the Malaysian Communications and Multimedia Commission (MCMC), allowing users to report harmful or misleading digital content.
Using these resources before sharing content helps reduce the spread of AI-generated misinformation across Malaysia’s online communities.
Why AI Literacy Is the Long-Term Solution
Technology alone cannot eliminate misinformation.
As AI tools become more advanced, detection software may struggle to keep pace.
The most effective defense is digital literacy.
Important habits include:
- Verifying sources
- Questioning viral posts
- Understanding how generative AI works
Government initiatives, media literacy programs, and public awareness campaigns will play an important role in strengthening Malaysia’s information resilience.
Conclusion on AI Misinformation in Malaysia
AI misinformation is becoming an unavoidable part of the digital environment.
In Malaysia’s highly connected online culture, fabricated content can spread quickly through messaging platforms before verification occurs.
As synthetic media becomes more sophisticated, businesses and public institutions must also take a proactive approach to managing their reputation and communication.
As a professional PR agency, PRESS helps organisations respond to misinformation by establishing clear messaging and strengthening public trust before false narratives gain traction.
We work with businesses to build positive messaging and protect brand credibility across Malaysia’s digital and media landscape, so that your business can continue to thrive.
Disclaimer: This article is for general information only and isn’t legal, financial, or cybersecurity advice. If you’ve been targeted by a scam or believe content is illegal, report it to the relevant authorities (MCMC / PDRM) and seek professional advice where needed.
Sources:
- Online Safety Act 2025 (Act 866) — Malaysia (gazetted law text; PDF), 2025-05
- Online Safety (Period) Regulations 2025 — Digital Policy Alert (summary of subsidiary timelines), 2025
- What the Online Safety Act changes and how it works — Free Malaysia Today (explainer/reporting), 2026-02-04
- Online Safety Act changes: how it works (incl. in-force date reporting) — New Straits Times (reporting), 2026-02
- Social media statistics Malaysia (includes WhatsApp usage figure via Global Digital Report 2024) — Meltwater, 2024
- Police/Bukit Aman scam statistics (2024 cases and losses) — Bernama, 2025-03-03
- Over 12,000 online fraud cases recorded in Q1 2025 (losses RM573.7m) — The Star, 2025-04-24
- MCMC warning on hateful 3R content — Ministry of Communications / MCMC
- Sebenarnya.my (official fact-check portal) — Sebenarnya.my
- Launch/overview reporting of Sebenarnya.my (government initiative context) — New Straits Times, 2017-03-15
- MCMC Consumer Redress Portal FAQ (confirms aduan.mcmc.gov.my channel) — CFM (PDF), 2025-01
- Seedance 2.0 / viral AI-generated celebrity clip coverage — Reuters, 2026-02-16
- More reporting on the Seedance 2.0 viral clip (Brad Pitt/Tom Cruise example) — Business Insider, 2026-02
- AI-generated content detection (tool reference) — Hive Moderation
- Deepfake detection tool (tool reference) — TrueMedia.org
Frequently Asked Questions About AI Misinformation
How Do I Report a Deepfake In Malaysia?
You can report harmful AI-generated content through the Aduan MCMC portal or verify viral claims using Sebenarnya.my.
Is AI Parody Illegal In Malaysia?
Parody may be allowed, but if the content causes defamation or violates laws related to race, religion, or royalty, it could lead to legal consequences under Malaysian law.
Can AI-generated images be watermarked?
Many technology companies now use invisible watermarking systems to identify synthetic media, although advanced editing may remove these markers.
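To make the idea of invisible watermarking concrete, here is a deliberately simplified toy sketch (not any real company's system): it hides a short text tag in the least significant bit of each pixel value, so the image looks unchanged but carries a recoverable marker. Real production watermarks are far more sophisticated and robust than this.

```python
# Toy invisible watermark: hide a tag in pixel least-significant bits.
# Pixels are plain 0-255 greyscale integers for simplicity.

def embed(pixels, tag):
    """Write the tag's bits into the lowest bit of each pixel."""
    bits = [int(b) for ch in tag for b in f"{ord(ch):08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Read back `length` characters from the lowest bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return "".join(
        chr(int("".join(map(str, bits[i:i + 8])), 2))
        for i in range(0, len(bits), 8)
    )

grey = [120] * 64             # a flat 8x8 greyscale "image"
marked = embed(grey, "AI")    # each pixel changes by at most 1
print(extract(marked, 2))     # -> AI
```

This also shows why "advanced editing may remove these markers": any re-encoding or filtering that rewrites low-order pixel bits destroys a naive marker like this one, which is why real systems spread the signal far more robustly.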
Why do AI systems hallucinate information?
AI models generate responses based on probability patterns rather than verified knowledge, which can produce plausible but incorrect information.
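The answer above can be made concrete with a toy sketch (nothing like a real LLM): a tiny bigram model trained on a few made-up sentences always continues with the statistically likeliest next word, so it fluently asserts a specific "fact" with no mechanism for checking whether it is true.

```python
# Toy illustration of probability-driven text generation.
# The corpus and its dates are invented for the example.
from collections import Counter, defaultdict

corpus = (
    "the treaty was signed in 1824 . "
    "the agreement was signed in london . "
    "the treaty was signed in 1824 ."
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word, steps=5):
    """Greedily append the most probable next word at each step."""
    out = [word]
    for _ in range(steps):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # -> the treaty was signed in 1824
```

If the training text had blended in a wrong date, the model would output it just as fluently: the generation step only ranks probabilities, which is exactly why hallucinated answers sound so confident.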
Are there tools to detect AI-generated media?
Platforms such as Hive’s AI-generated content detection and TrueMedia.org’s deepfake detector can help users analyse suspicious images, audio, or videos—but they work best when combined with basic verification habits.
Will AI misinformation affect future Malaysian elections?
Experts expect synthetic media to increase during political campaigns, making source verification and fact-checking increasingly important.

