The rise of the invisible colleague

While organizations secure the front door, employees are leaving the back door wide open.

Find out why Shadow AI is the new reality.


75% of employees are already using AI tools. 

Unauthorized AI tool use within this group:

78%

Click-through rates for phishing emails are rising due to AI.

Clicks on AI-generated phishing emails have risen sharply since 2023.

+549%

The Halo Effect Trap

In 56% of cases, first impressions trump actual security.

56%

Control is an illusion.
Resilience is the key.

 Banning ChatGPT and other AI tools is counterproductive. In our latest trend report, we explore the psychology behind “Shadow AI” and reveal why employees are circumventing the rules en masse. 


The gap

89% know the rules, but 54% don’t follow them. Find out why knowledge alone isn’t enough.


The Halo Effect

More than half of your colleagues mistakenly believe that a “nice-looking” website is also a secure website.


From Risk to Strength

Learn how to turn employees’ curiosity into safe innovation through AI literacy.

 

Exclusive Data & Insights

Based on 33,000+ Culture Scan responses and 130,000+ phishing simulation responses, this report presents facts, not opinions, about the state of security awareness in 2025.

  • Analysis of the “Curiosity Gap” phenomenon
  • Comparison between traditional Shadow IT and Shadow AI
  • Practical guidance for CISOs and HR

Download the report

Get the PDF sent directly to your inbox.

 

“Bring AI out of the shadows: by facilitating experiments, we turn hidden risks into safe development.”
Sjoerd van Veldhuizen
Author of the Trend Report & Social Psychologist

From data to immediate action

Managing AI often feels like trying to bail out a sinking ship: 75% of employees use AI tools, yet IT has visibility into less than 11% of them. How do you tackle this? In this report, we explain how to translate its insights directly into a strong business case, more focused policies, and a resilient organizational culture.

 

 


Benchmark

Compare your organization to the market: did you know that AI-powered phishing attacks are five times more successful than they were two years ago?


Hard data

Use our analysis of 33,690 respondents to show senior management that AI awareness is urgently needed in a world where 75% of employees are already using AI tools.



Risk reduction

Learn how to effectively address the blind spot of “free” and unauthorized AI tools by focusing on AI literacy.


Make your organization resilient

 

Getting a Grip on Shadow AI

Employees are experimenting faster than your policy allows. Find out how to stay in control without stifling innovation.

 

 

Check out the blog
Strengthen AI resilience

Safe adoption starts with behavior. Learn how to embed critical thinking and digital literacy into your culture.

 

Discover our approach

FAQ

Frequently Asked Questions

About Shadow AI

and Security Awareness Training

What is Shadow AI?

Shadow AI refers to the unauthorized use of AI tools for work-related purposes. While traditional Shadow IT is often limited to storage services, AI involves the processing of intellectual property and sensitive business data. The risks are growing exponentially: the click-through rate for AI-related phishing has increased fivefold in just two years.

Why isn’t traditional security awareness training enough?

Traditional training programs often focus on theoretical knowledge, even though 89% of employees already know the rules. However, there is a gap between knowing and doing: 56% of professionals are influenced by the Halo effect, whereby a professional-looking interface is mistakenly perceived as secure. Effective security therefore requires a psychological approach focused on behavioral change.

Why not simply block AI tools?

Blocking AI tools is counterproductive and fosters a shadow culture in which risks become invisible. Furthermore, access to AI influences talent’s choice of employer. The solution lies in facilitating safe use by increasing AI literacy, so that employees learn to assess risks independently.

What is the report based on?

The report analyzes over 100,000 data points from various sectors, including healthcare, business services, and government. These quantitative insights provide professionals with a foundation for securing budgets and building support for comprehensive behavioral programs. It translates complex security challenges into concrete actions for a resilient digital culture.

How can organizations deal with Shadow AI responsibly?

The focus must shift from strict control to human resilience and digital agility. This involves bridging the knowledge-action gap by teaching employees how to classify data correctly and independently assess the reliability of tools. Organizations must invest in this agility to turn Shadow AI into a safe and productive force for innovation.

Want to know more?


Sjoerd van Veldhuizen

Social Psychologist


Need help finding what you’re looking for?
Contact our client support experts!
 

Talk to an expert