Over the last few years, artificial intelligence has gone from a technology of the future to a mainstream tool that anyone can use. AI is now widely available and has helped many businesses work more efficiently.
However, advancements in AI technology have also introduced some dangerous new cybersecurity risks that every business should know about. These risks come from new vulnerabilities in AI systems and the growing use of AI-driven methods by attackers.
With that in mind, we’ve rounded up some of the latest AI cyberattack statistics to help you understand the scope of these new cyber threats.
Staying informed can help you use AI safely and implement security measures to prevent AI-generated attacks.
Key Takeaways
- Hackers now use AI tools like ChatGPT to write more convincing phishing emails.
- Cybercriminals are using AI-generated deepfakes and voice clones as part of identity theft scams.
- AI tools can themselves be targeted by cyberattacks. The four most common attack types against AI systems are poisoning, inference, extraction, and evasion (a minimal poisoning sketch follows this list).
- Cybersecurity professionals are investing in AI tools as a way to identify and respond to threats more quickly.
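To make the first of those four attack categories concrete, here is a minimal sketch of label-flipping data poisoning. The dataset, model choice, and 20% poisoning rate are illustrative assumptions, not details from any real incident:

```python
# A toy label-flipping data poisoning attack: an attacker corrupts a
# slice of the training labels, and the model's accuracy quietly drops.
# All dataset and model choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary data standing in for, e.g., a spam/ham training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack: flip the labels on 20% of the training examples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flipped = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[flipped] = 1 - poisoned[flipped]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"Accuracy on clean labels:  {clean_model.score(X_test, y_test):.2f}")
print(f"Accuracy after poisoning:  {poisoned_model.score(X_test, y_test):.2f}")
```

Real-world poisoning is usually far subtler than random label flips, but even this toy version shows how corrupted training data degrades a deployed model.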
The Growing Role of AI in Cybercrime
AI is not just a tool for defense; it's increasingly being used to power more sophisticated cybercrime operations. Hackers now use advanced algorithms and machine learning techniques to evade traditional security measures and personalize attacks at scale.
This shift means that organizations must rethink how they assess vulnerabilities and strengthen their incident response strategies. Reactive approaches are no longer enough when cyberattacks can adapt and evolve in real time.
AI Phishing Attack Statistics
One of the most common ways that hackers use AI technology is to launch phishing attacks. Phishing is a form of social engineering in which the perpetrator pretends to be a trustworthy contact and tricks the victim into sharing sensitive information, such as passwords or bank account details.
AI technology has helped malicious actors develop more advanced phishing campaigns. In the past, one of the most effective ways to spot a phishing email was to look for spelling or grammatical errors.
Now, hackers use AI tools like ChatGPT to mimic polished writing styles and avoid detection, exploiting weaknesses in both email filters and user habits.
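To see why the old heuristic fails, here is a deliberately naive sketch of the kind of misspelling-based filter that fluent, AI-written email now sails past. The word list and scoring are illustrative assumptions, not a real filter:

```python
# Illustrative only: a naive, old-school phishing filter that keys on
# misspellings. Word list and scoring are assumptions for the demo.
SUSPICIOUS_MISSPELLINGS = {"recieve", "verifcation", "acount", "passsword"}

def crude_phishing_score(email_text: str) -> int:
    """Count telltale misspellings, the legacy detection signal."""
    words = (w.strip(".,!?") for w in email_text.lower().split())
    return sum(w in SUSPICIOUS_MISSPELLINGS for w in words)

legacy_scam = "Please verifcation your acount to recieve funds."
ai_written = "Please verify your account so we can release your funds."

print(crude_phishing_score(legacy_scam))  # 3 -> flagged
print(crude_phishing_score(ai_written))   # 0 -> sails through unflagged
```

Modern defenses instead lean on sender reputation, link analysis, and machine-learning classifiers, precisely because surface-level language errors are no longer a reliable signal.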
- There was a 202% increase in phishing email messages in the second half of 2024.
- Credential phishing attacks increased by 703% in the second half of 2024, driven by the growing availability of pre-made phishing kits online, many of them built with AI tools trained on vast datasets of prior email templates.
- Global phishing volume dropped by 20% in 2024, but hackers are shifting their focus to more sophisticated email and voice attacks. 65% of phishing attacks now target organizations, while 35% target individuals.
- 82.6% of phishing emails use AI technology in some form.
- 78% of people open AI-generated phishing emails, and 21% click on malicious content inside.
- Generative AI tools and automation help hackers compose phishing emails up to 40% faster.
AI Deepfake Statistics
Another huge threat stemming from AI technology is the possibility of deepfakes. A deepfake is a digitally generated image or video made to look and sound real.
The rise of LLMs (large language models) has significantly influenced how cybercriminals create more convincing phishing content and deepfake narratives. These AI models are capable of generating human-like text, making scams harder to detect by both users and traditional filters.
- 63% of cybersecurity leaders are concerned about the potential use of AI to create deepfakes.
- Deepfake awareness is limited: only 71% of people globally know what a deepfake is, and a mere 0.1% can consistently identify deepfakes.
- 98% of deepfake videos are pornographic.
- The financial industry is a common target for deepfake attacks. As of 2024, 53% of financial professionals had experienced attempted deepfake scams.
- There were 19% more deepfake incidents in the first quarter of 2025 than there were in all of 2024.
- In the months leading up to the 2024 US election, 77% of voters encountered AI deepfake content related to political candidates.
- Deepfakes are now responsible for 6.5% of all fraud attacks, a 2,137% increase from 2022. This dramatic rise highlights the urgent need for better risk management strategies that protect sensitive data from manipulation and impersonation.
- Of all social media platforms, YouTube has the highest deepfake exposure, with 49% of people surveyed reporting experiences with YouTube deepfakes.
In Hong Kong, a finance firm lost $25 million to a deepfake scam in which AI was used to impersonate the company’s Chief Financial Officer.
AI Password Hacking Statistics
Cybercriminals now rely heavily on AI technology to steal passwords, which has changed the way data breaches occur. Here are the latest AI-assisted password hacking statistics, followed by a short worked example of why weak passwords fall so quickly.
- AI password-cracking tools can break 81% of common passwords within a month.
- In one report by Home Security Heroes, AI cracked 51% of 15.68 million common passwords in under one minute.
- 94% of passwords are reused or duplicated, making them even easier for hackers to access.
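The arithmetic behind those numbers is straightforward. The sketch below estimates worst-case brute-force time from search-space size; the guess rate is an assumption for illustration, and real AI-assisted tools do even better against common passwords by trying likely candidates first rather than searching exhaustively:

```python
# Back-of-the-envelope crack-time estimates from search-space size.
# GUESSES_PER_SECOND is an assumed rate for an offline GPU rig; real
# AI-assisted tools beat brute force on common passwords by trying
# likely candidates (leaked lists, patterns) first.
GUESSES_PER_SECOND = 1e10

def worst_case_seconds(length: int, alphabet_size: int) -> float:
    """Time to exhaust the full search space at the assumed guess rate."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters: 26^8, about 2.1e11 guesses -> roughly 21 seconds.
print(f"{worst_case_seconds(8, 26):.0f} seconds")

# 14 characters drawn from ~94 printable symbols: on the order of 1e10 years.
print(f"{worst_case_seconds(14, 94) / (3600 * 24 * 365):.1e} years")
```

Length and character variety grow the search space exponentially, which is why long, unique passwords (ideally managed by a password manager) remain the best defense.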
AI Voice Cloning Statistics
Cybercriminals are also using AI systems for voice cloning. Voice cloning takes an existing recording of someone’s voice and uses it to generate new, fabricated audio in that same voice.
This technology has helped cybercriminals conduct a variety of scams. Voice cloning is often used as part of phone phishing scams, and it’s quickly becoming a preferred method for AI-enabled cybercrime.
- One in 10 adults globally has experienced an AI voice scam.
- 77% of people who have experienced an AI voice scam lost money.
- 53% of adults share their voice data online at least once per week.
- Adults over the age of 60 are 40% more likely to fall for voice cloning scams. These scams often target victims with access to sensitive data, such as login credentials or banking details.
- Scientific research has found that people can correctly identify AI-generated voices only 60% of the time.
- Globally, the AI voice cloning market was valued at $2.1 billion in 2023. It is expected to reach $25.6 billion by 2033.
In April 2024, a LastPass employee was targeted by an AI voice-cloning scam. The voice impersonated the LastPass CEO, Karim Toubba, but fortunately, the employee did not fall for the scam.
If you watched the video at the beginning of this article, you have already experienced an AI voice clone. That’s right: the narration is not a professional voice actor. It is the voice of a real person, cloned to create the voiceover for this script.
AI Cybersecurity Statistics
Although AI is often used to launch cyberattacks, it can also be used to prevent them. The cybersecurity industry is making use of AI technology for better threat detection and response.
In addition to detecting threats, AI is also being used to reduce incident response times and identify vulnerabilities before they’re exploited. For example, companies using AI-driven security platforms report detecting threats up to 60% faster than those using traditional methods.
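As a rough illustration of how such platforms work under the hood, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. The features (login hour, data transferred) and contamination rate are illustrative assumptions, not any vendor's actual pipeline:

```python
# A minimal sketch of AI-assisted threat detection via anomaly scoring.
# Feature choices and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" telemetry: business-hours logins with modest transfers.
normal_events = np.column_stack([
    rng.normal(13, 2, size=500),   # login hour, clustered around 1pm
    rng.normal(50, 15, size=500),  # megabytes transferred
])

# Suspicious telemetry: 3-4am logins moving large volumes of data.
suspicious_events = np.array([[3.0, 900.0], [4.0, 750.0]])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# predict() returns -1 for anomalies and 1 for inliers.
print(detector.predict(suspicious_events))  # [-1 -1] -> both flagged
```

Production systems layer many more signals and models on top of this idea, but the core pattern, learning what "normal" looks like and flagging deviations at machine speed, is what drives those faster detection times.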
- 69% of enterprises believe that AI is necessary for cybersecurity as threats increase in volume. As threats scale, data protection has become a core priority for businesses deploying AI systems, and ensuring that AI tools do not expose or mishandle confidential information is a growing concern.
- The value of the global market for AI in cybersecurity is expected to increase from $15 billion in 2021 to $135 billion in 2030.
- 52% of cybersecurity professionals believe that AI-powered tools will be more cost-efficient than humans.
- 80% of industrial cybersecurity professionals believe that the benefits of using AI in their work outweigh the risks.
- More than 90% of AI cybersecurity capabilities come from third-party tools rather than in-house solutions. At the same time, security teams are being challenged to adapt quickly as AI threats become more frequent and unpredictable; these teams play a critical role in defending sensitive data across enterprise systems.
Last year, the cybersecurity industry also saw a noticeable rise in ransomware attacks that used AI to bypass traditional defenses and encrypt business-critical data.
In April 2024, Cornell researchers revealed a new piece of malware named the “Morris II” worm. The worm spreads through generative AI-powered applications, such as AI email assistants, and can extract sensitive information such as credit card details and Social Security numbers.
The worm can also send spam containing malicious software. Its discovery prompted rapid remediation efforts across multiple enterprise systems, highlighting how quickly AI-enabled malware can spread without immediate countermeasures.
Final Thoughts on AI and Cybersecurity in 2025
Hackers are now using AI to automate and personalize attacks faster than ever before. From deepfake scams to password cracking, cyberattacks are becoming more advanced and more frequent. But staying informed is the first step: knowing how these tools are being used can help you spot the red flags and take action before something goes wrong.
Whether you’re running a business or just trying to stay safe online, now’s the time to take AI threats seriously and get ahead of them.