The Importance of AI Risk Awareness Highlighted: Japan Detects 230 Million Malware Cases, the Most in Asia
February 10, 2026

As AI technology advances, cyber threats targeting individuals are entering a new phase. While offering increased convenience, the misuse of AI for information leaks, fraud, and malware attacks is rapidly expanding, creating a situation where traditional vigilance alone is no longer sufficient.

Analysis from personal security services indicates that the misuse of AI has become a tangible risk that general users can no longer ignore. The impact is especially pronounced in Japan, where roughly 230 million malware cases were detected, the highest figure in Asia. This points to a significant gap between individuals' awareness of risk and the damage actually incurred.

The Gap Between Legal Frameworks and Individual Awareness

In Japan, the National Cyber Coordination Office was established within the Cabinet Office in July 2025, and "proactive cyber defense," including measures to neutralize attack servers, was written into law. Yet while national-level defenses have been strengthened, individual users' risk awareness has not kept pace. Even as the use of generative AI spreads, cases have been reported of individuals using it without fully considering how their personal information is handled or what security risks are involved.

Against this backdrop, information leaks and associated fraud losses continue to grow. According to the U.S. Federal Trade Commission (FTC), total losses from fraud targeting personal assets reached $5.7 billion (approximately 850 billion yen) in 2024. The methods continue to evolve, including the mass generation of AI-powered "realistic investment sites" and the exploitation of users' trust in chatbots to steal information.

Key Threats Emerging from AI Misuse

Based on the analysis, three main AI-driven threats warrant particular attention.

Threat 1: The Risk That Information Entrusted to AI Will Be Compromised
Cases have been confirmed in which information shared with AI in confidence, or conditions assumed to be harmless, is exposed or abused in unexpected ways.

Cases where conversation data is not protected
Conversations with AI are stored as digital records. In the past, reports have surfaced of instances where vulnerabilities in AI's sharing functions allowed third parties to view conversations and confidential information that should have remained private.

Cases where information is obtained through features
Attackers may exploit features like calendar invitations to illicitly obtain user data through meeting requests. Information can be collected without users even realizing they have shared it.

Attacks exploiting "harmless assumptions" (LegalPwn)
This is a technique in which AI is tricked into misidentifying malicious code as safe by instructions subtly embedded within terms-of-service or privacy-policy documents. As a result, there is a risk that content shared with AI in consultations could be viewed by third parties or used in malicious operations.

Threat 2: Fraud Exploiting the "Realism" Generated by AI
AI-generated "realistic investment sites" and ads impersonating celebrities are spreading because they are cheap to produce. The high quality of their design and text makes it difficult to judge authenticity from appearance alone.

Over an eight-month period from March to October 2025, more than 4.5 million fake websites were blocked. Scams using AI-generated deepfake audio have also been confirmed, rendering intuitive judgments like "it sounds like their real voice" or "it looks like an official site" unreliable.

Threat 3: AI's Responses Are Not Always Accurate
AI provides natural and convincing responses, but the content is not always accurate. Cases exist where "AI hallucinations," the generation of non-existent information as fact, are exploited for attacks.

For example, in "typosquatting," attackers predict URLs or software names that AI might erroneously suggest and register fake sites or malware under those names in advance. A user who clicks a link trusting the AI's response risks being directed to an attacker's site instead of the legitimate service.
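The typosquatting risk described above can be illustrated with a minimal sketch: compare a suggested domain against a list of known legitimate domains and flag near-misses using edit distance. The domains and allowlist below are hypothetical examples, not an actual detection product.

```python
# Minimal typosquatting check: flag domains that are suspiciously close
# to, but not identical to, a trusted domain. All domains shown here are
# hypothetical illustrations.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(domain: str, allowlist: list[str], max_dist: int = 2) -> bool:
    """True if the domain nearly matches, but is not, a trusted domain."""
    return any(0 < edit_distance(domain, legit) <= max_dist
               for legit in allowlist)

trusted = ["example.com", "nordvpn.com"]
print(looks_like_typosquat("examp1e.com", trusted))  # near-miss -> True
print(looks_like_typosquat("example.com", trusted))  # exact match -> False
```

Real-world checks would also normalize Unicode lookalike characters and consult domain-reputation services, but even this simple distance test catches the one-character swaps that typosquatters rely on.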


Four Measures to Protect Yourself from AI Risks

1. Assume AI conversations are not confidential
Treat conversations with AI as information that may be shared with third parties, and do not input confidential information.

2. Separate work and personal accounts
Use different accounts for different purposes and regularly delete chat histories.

3. Verify the authenticity of AI-generated information
Always confirm suggested URLs and software names against official websites or other reliable sources.

4. Use security tools
Tools that automatically detect malware and malicious sites, combined with multi-factor authentication and identity-monitoring alerts, can substantially reduce risk.

Marius Briedis, CTO of NordVPN, points out, "While AI is convenient, it can also be an entry point for attacks. A zero-trust approach, constantly suspecting attacks impersonating AI or famous brands, is effective in protecting assets."

Contact Information

NordVPN PR Secretariat (within Antill)
Tel. 03-5572-6081
https://nordvpn.com/ja/