AI Cyber Attack Statistics 2024

Over the last few years, artificial intelligence has gone from a technology of the future to a mainstream tool that anyone can use. The newfound accessibility of AI has plenty of positives, especially when it comes to boosting productivity. 

However, advancements in AI technology have also introduced some dangerous new cybersecurity risks that every business should know about.

With that in mind, we’ve rounded up some of the latest AI cyberattack statistics to help you understand the scope of these new cyber threats. 

Staying informed can help you use AI safely and put security measures in place to prevent AI-generated attacks.

 

Key Takeaways

  • The rise of tools like ChatGPT has made it easier for hackers to send convincing fraud emails.
  • Cybercriminals are using AI-generated deepfakes and voice clones as part of identity theft scams.
  • AI tools can be targeted by cyberattacks. The four most common types of cyberattacks on AI tools are poisoning, inference, extraction, and evasion.
  • Cybersecurity professionals are investing in AI tools as a way to identify and respond to threats more quickly.

 

AI Phishing Attack Statistics

One of the most common ways that hackers use AI technology is to launch phishing attacks. Phishing is a form of social engineering in which the perpetrator pretends to be a trustworthy contact and tricks the victim into sharing sensitive information, such as passwords or bank account details.

AI technology has helped malicious actors develop more advanced phishing campaigns. In the past, one of the most effective ways to spot a phishing email was to look for spelling or grammatical errors. 

Now, hackers can use AI tools like ChatGPT to write messages that look more legitimate.
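With spelling mistakes no longer a reliable tell, filters have to combine other signals. As a rough illustration (the heuristics, keywords, and weights below are invented for this example, not any vendor's actual filter), a few classic red flags can be folded into a suspicion score:

```python
import re

# Illustrative only: a few simple heuristics that email filters commonly
# combine. Real filters use many more signals plus machine learning.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def phishing_score(sender_domain: str, reply_to_domain: str, body: str) -> int:
    """Return a rough suspicion score; higher means more phishing-like."""
    score = 0
    # A mismatched Reply-To domain is a classic spoofing signal.
    if sender_domain != reply_to_domain:
        score += 2
    # Urgent language pressures victims into acting without thinking.
    text = body.lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Raw links to IP addresses instead of named hosts are suspicious.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score

print(phishing_score("bank.com", "mail.ru",
                     "Urgent: verify your account immediately"))  # → 5
```

In practice a score like this would feed into a threshold or a trained classifier rather than being used on its own.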

Here are the statistics to know about AI phishing attacks.

 

AI Deepfake Statistics

Another major threat stemming from AI technology is deepfakes. A deepfake is a digitally generated image or video made to look and sound real. Generative AI lets bad actors produce convincing deepfakes with relatively little effort.

In Hong Kong, a finance firm lost $25 million to a deepfake scam in which AI technology impersonated the company's Chief Financial Officer.

 

AI Password Hacking Statistics

Cybercriminals are increasingly using AI-driven methods to crack passwords, significantly changing how data breaches occur. Below, we've gathered some of the latest statistics on AI-assisted password hacking.
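To see why attackers invest in smarter guessing, it helps to look at the raw arithmetic of brute force. The sketch below assumes a rate of 10 billion guesses per second (an illustrative figure, not a measured one) to compare how long exhausting different password search spaces would take:

```python
# Back-of-the-envelope sketch: time to exhaust a random password's
# search space at an assumed fixed guess rate.
GUESSES_PER_SECOND = 1e10  # assumption: a well-resourced GPU cracking rig

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every possible password of this shape."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters vs. 12 mixed-case letters and digits
short = seconds_to_exhaust(26, 8)    # roughly 21 seconds
strong = seconds_to_exhaust(62, 12)  # on the order of 10,000 years
print(f"{short:.0f} seconds vs. {strong / 3.15e7:.0f} years")
```

The point of the comparison: length and character variety grow the search space exponentially, which is why AI-assisted guessing focuses on predictable, human-chosen passwords rather than truly random ones.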

 

 

AI Voice Cloning Statistics

Cybercriminals are also using AI systems for voice cloning, which takes an existing recording of someone's voice and uses it to generate fabricated audio in that person's voice.

This technology has enabled a variety of scams; voice cloning is often used as part of phone phishing scams.

  • One in 10 adults globally has experienced an AI voice scam.
  • 77% of people who have experienced an AI voice scam lost money.
  • 53% of adults share their voice data online at least once per week.
  • Adults over the age of 60 are 40% more likely to fall for voice cloning scams.
  • Globally, the AI voice cloning market was valued at $1.45 billion in 2022. It is expected to grow at a compound annual growth rate of 26.1% between 2023 and 2030.

In April 2024, a LastPass employee was targeted by an AI voice-cloning scam. The voice impersonated LastPass CEO Karim Toubba, but fortunately, the employee did not fall for the scam.

 

If you watched the video at the beginning of the article, you have already experienced an AI voice clone. That’s right – that’s not a professional voice actor. It is the cloned voice of a real person, used to create the voiceover for this script.

 

AI Cybersecurity Statistics

Although AI is often used to launch cyberattacks, it can also be used to prevent them. The cybersecurity industry is making use of AI technology for better threat detection and response.
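One of the simpler building blocks behind AI-assisted threat detection is statistical anomaly detection: flag activity that deviates sharply from an established baseline. A minimal sketch, with illustrative numbers (real systems model many signals at once):

```python
import statistics

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than `threshold` standard
    deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return today > mean + threshold * stdev

# e.g. failed-login counts for the past week, then a sudden spike
normal_week = [4, 6, 5, 7, 5, 6, 4]
print(is_anomalous(normal_week, 60))  # → True: the spike is flagged
print(is_anomalous(normal_week, 6))   # → False: within normal variation
```

Machine-learning-based detection generalizes this idea, learning baselines across thousands of signals so unusual behavior can be surfaced and responded to faster than manual review allows.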

In April 2024, Cornell researchers revealed a new piece of malware called the “Morris II” worm. The worm can infiltrate systems and extract sensitive information such as credit card details and Social Security numbers.

It can also send spam containing malicious software.

Written by
Konrad Martin
Konrad is a nationally recognized authority on cybersecurity and IT issues. He is the co-author of Cyber Storm, an Amazon #1 best seller, and the author of Hacked: How to Protect Your Business from the Fines, Lawsuits, Customer Loss & PR Nightmare Resulting from Data Breach and Cybercrime. 
He was a guest expert on the recently-released Amazon Prime documentary “Cyber Crime 2: The Dark Web and Cyber Crime.” His firm, Tech Advisors, Inc., provides technology consulting and management services to a wide range of professional services organizations across the country, and is ranked among the Top 250 Managed Security Services Providers by MSSP Alert.
