Over the last few years, artificial intelligence has gone from a technology of the future to a mainstream tool that anyone can use. The newfound accessibility of AI has plenty of positives, especially when it comes to boosting productivity.
However, advancements in AI technology have also introduced some dangerous new cybersecurity risks that every business should know about.
With that in mind, we’ve rounded up some of the latest AI cyberattack statistics to help you understand the scope of these new cyber threats.
Staying informed can help you use AI safely and put security measures in place to prevent AI-generated attacks.
Key Takeaways
- The rise of tools like ChatGPT has made it easier for hackers to send convincing fraud emails.
- Cybercriminals are using AI-generated deepfakes and voice clones as part of identity theft scams.
- AI tools can be targeted by cyberattacks. The four most common types of cyberattacks on AI tools are poisoning, inference, extraction, and evasion.
- Cybersecurity professionals are investing in AI tools as a way to identify and respond to threats more quickly.
AI Phishing Attack Statistics
One of the most common ways that hackers use AI technology is to launch phishing attacks. Phishing is a form of social engineering in which the perpetrator poses as a trustworthy contact and tricks the victim into sharing sensitive information, such as passwords or bank account details.
AI technology has helped malicious actors develop more advanced phishing campaigns. In the past, one of the most effective ways to spot a phishing email was to look for spelling or grammatical errors.
Now, hackers can use AI tools like ChatGPT to write messages that look more legitimate.
Here are the statistics to know about AI phishing attacks.
- Phishing attacks increased by 1,265% between Q4 2022 and Q4 2023.
- 94% of organizations experienced email security incidents in 2023.
- An average of 31,000 phishing threats were sent per day in 2023, many of them powered by ChatGPT.
- 78% of people open AI-generated phishing emails, and 21% click on malicious content inside.
- Generative AI tools and automation help hackers compose phishing emails up to 40% faster.
AI Deepfake Statistics
Another major threat stemming from AI technology is deepfakes. A deepfake is a digitally generated image or video made to look and sound real. Generative AI has made it easy for bad actors to produce deepfakes with relatively little effort.
- 63% of cybersecurity leaders are concerned about AI and the potential creation of deepfakes.
- Deepfake awareness is lacking: only 29% of people globally know what a deepfake is.
- 98% of deepfake videos are pornographic in nature.
- Deepfake videos online increased by 550% between 2022 and 2023.
- Deepfakes made up 3% of all identity fraud in the US in Q1 2023.
- Of all social media platforms, YouTube has the highest deepfake exposure, with 49% of people surveyed reporting experiences with YouTube deepfakes.
AI Password Hacking Statistics
Cybercriminals are increasingly using AI-driven methods to crack passwords, significantly changing how data breaches occur. Below, we've gathered some of the latest statistics on AI-assisted password hacking.
- AI password-hacking tools can bypass 81% of common passwords within a month.
- In one report by Home Security Heroes, AI cracked 51% of 15.68 million common passwords in under one minute.
- An AI model trained to identify keystrokes by sound alone could steal and replicate passwords with over 93% accuracy.
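Part of why common passwords fall so quickly is simple arithmetic: short passwords drawn from small character sets have tiny search spaces relative to modern guessing rates. The sketch below illustrates this; the guess rate is our own assumption for illustration, not a figure from any of the cited reports.

```python
# Rough brute-force arithmetic: keyspace size vs. time to exhaust it.
# GUESSES_PER_SECOND is an illustrative assumption (a modern GPU rig),
# not a number taken from the statistics above.

GUESSES_PER_SECOND = 1e10

def keyspace(charset_size: int, length: int) -> int:
    """Number of possible passwords for a given alphabet and length."""
    return charset_size ** length

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time (seconds) to try every password in the keyspace."""
    return keyspace(charset_size, length) / GUESSES_PER_SECOND

# 8 lowercase letters: 26^8 ≈ 2.1e11 guesses -> well under a minute.
print(seconds_to_exhaust(26, 8))

# 12 characters from a ~94-symbol alphabet -> on the order of a million years.
print(seconds_to_exhaust(94, 12) / (3600 * 24 * 365))
```

In practice, AI-assisted tools do even better than brute force by learning which passwords humans actually choose, which is why unique, long, randomly generated passwords remain the best defense.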
AI Voice Cloning Statistics
Cybercriminals are also using AI systems for voice cloning. Voice cloning takes an existing recording of someone’s voice and uses it to create false recordings with the same voice.
This technology has helped cybercriminals conduct a variety of scams, and voice cloning is frequently used as part of phone-based phishing attacks.
- One in 10 adults globally have experienced an AI voice scam.
- 77% of people who have experienced an AI voice scam lost money.
- 53% of adults share their voice data online at least once per week.
- Adults over the age of 60 are 40% more likely to fall for voice cloning scams.
- Globally, the AI voice cloning market was valued at $1.45 billion in 2022. It is expected to grow at a compound growth rate of 26.1% between 2023 and 2030.
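The market projection above is standard compound-growth arithmetic. As a quick sketch (assuming the 26.1% rate compounds annually over the seven years from 2023 to 2030, which is our interpretation of the stated range):

```python
# Compound-growth projection for the AI voice cloning market figure above.
# Assumption: 26.1% CAGR compounding annually over 7 periods (2023 -> 2030).

base_value_usd_bn = 1.45   # 2022 valuation, in billions of USD
cagr = 0.261
years = 7                  # assumed number of compounding periods

projected = base_value_usd_bn * (1 + cagr) ** years
print(f"Projected 2030 value: ${projected:.2f}B")
```

Under these assumptions, the market would reach roughly $7.4 billion by 2030.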
If you watched the video at the beginning of the article, you have already heard an AI voice clone. That narration was not recorded by a professional voice actor: we cloned a real person's voice to create the voiceover for this script.
AI Cybersecurity Statistics
Although AI is often used to launch cyberattacks, it can also be used to prevent them. The cybersecurity industry is making use of AI technology for better threat detection and response.
- 69% of enterprises believe that AI is necessary for cybersecurity as threats increase in volume.
- The value of the global market for AI in cybersecurity is expected to increase from $15 billion in 2021 to $135 billion in 2030.
- 52% of cybersecurity professionals believe that AI-powered tools will be more cost-effective than human analysts.
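The projected jump from $15 billion to $135 billion implies a steep annual growth rate. A minimal sketch of that calculation (assuming nine annual compounding periods from 2021 to 2030):

```python
# Implied compound annual growth rate for the market figures cited above:
# $15B (2021) -> $135B (2030). Assumption: 9 annual compounding periods.

start_usd_bn = 15.0
end_usd_bn = 135.0
years = 9

implied_cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")
```

That works out to roughly 28% growth per year, sustained for nearly a decade.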
In April 2024, Cornell researchers unveiled a new piece of malware dubbed the "Morris II" worm. The worm can infiltrate systems and extract sensitive information such as credit card details and Social Security numbers.
It can also send spam containing malicious software.