AI has significantly transformed organizations by enhancing efficiency and accuracy and by driving innovation. In cybersecurity, however, AI is a double-edged sword: the same capabilities can be harnessed for defense or weaponized by attackers. Just like the Cloud before it, AI introduces its own set of risks.
As with previous technological advancements, user error remains the most common vulnerability. Microsoft’s Work Trend Index found that 78% of office workers use their own AI tools of choice. Depending on the models used, the prompts entered, and the data accessed or uploaded, this adds new risks such as data leakage on top of existing threats.
This expanding attack surface keeps many decision-makers up at night: 47% of business leaders surveyed in a joint study by IBM and AWS expressed concerns about new attacks exploiting vulnerabilities in AI deployment.
For Cybersecurity Awareness Month, held each October, we shine a spotlight on common AI-enhanced cyberattacks so organizations can understand these emerging threats. Cybercriminals have reportedly harnessed AI to execute more advanced and scalable attacks, including highly personalized phishing emails and self-evolving malware that evades traditional detection methods. A 2023 report found that 75% of security professionals witnessed an increase in cyberattacks over the past year, and 85% attributed the rise to generative AI.
This blog post explores how malicious actors use AI to bolster their attack arsenal and shares strategies organizations can use to proactively defend against these threats.
Understanding AI-Enhanced Cyberattacks: Some Real-Life Examples
Here are three significant forms of AI-enhanced cyberattacks, accompanied by recent examples of their use by malicious actors:
1. Automated Vulnerability Scanning and Exploitation
Cybercriminals use AI tools designed to autonomously scan systems for weaknesses, accelerating the attack process. AI bots can analyze large datasets to identify exploitable vulnerabilities faster than manual scanning allows, often targeting known flaws in enterprise-level software.
Offensive use of AI technology enables cybercriminals to conduct hyper-targeted attacks with unprecedented accuracy, speed, and scale, often evading traditional detection tools. Such AI tools can autonomously disguise their operations and blend in with regular activity, making them harder to detect. This includes the use of deepfakes and other sophisticated techniques to exploit vulnerabilities.
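At its core, this kind of automated scanning is fast, programmatic version matching against known weaknesses. The defender-side sketch below illustrates the idea; all package names, versions, and advisory IDs are hypothetical, and real scanners pull this data from asset databases and feeds such as the NVD.

```python
# Hypothetical asset inventory and vulnerability advisories.
inventory = {
    "examplelib": "1.4.2",   # installed version (hypothetical)
    "otherlib": "2.0.0",
}
advisories = {
    "examplelib": {"fixed_in": "1.4.3", "id": "ADV-2024-001"},
}

def version_tuple(version):
    """Convert a version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def find_vulnerable(inventory, advisories):
    """Return (package, advisory id) pairs for versions below the fix."""
    hits = []
    for name, version in inventory.items():
        adv = advisories.get(name)
        if adv and version_tuple(version) < version_tuple(adv["fixed_in"]):
            hits.append((name, adv["id"]))
    return hits

print(find_vulnerable(inventory, advisories))  # [('examplelib', 'ADV-2024-001')]
```

Attackers run the same comparison at scale across internet-facing systems, which is why unpatched software is found and exploited so quickly.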
In mid-2023, the Clop ransomware gang exploited a zero-day vulnerability in the MOVEit Transfer software. The attackers used automated tools to scan for and exploit this vulnerability, leading to data breaches in multiple organizations, including financial institutions and government agencies.
2. More Advanced Social Engineering Campaigns Go Beyond Phishing
Malicious actors are leveraging AI technology to create more sophisticated social engineering schemes. In February 2024, a finance employee at a global company was reported to have been deceived into transferring $25 million to scammers who used deepfake technology to impersonate the firm’s CFO during a video call, as reported by Hong Kong police.
The intricate scheme involved tricking the employee into joining a video call on which he believed he was meeting several colleagues; in fact, every other participant was a deepfake imitation. According to the Hong Kong police, the employee grew suspicious after receiving a message, purportedly from the UK-based CFO, that discussed a secret transaction, and he initially suspected a phishing attempt. After the video call, however, the worker felt reassured because the other participants looked and sounded like familiar colleagues, the report said.
3. AI-Driven Disinformation and Narrative Manipulation
Cybercriminals can use AI to influence and manipulate public opinion for political reasons or for other ill intents. For instance, the U.S. Department of Justice in September 2024 was reported to have dismantled a covert operation sponsored by the Russian government aimed at influencing audiences in the U.S. and other countries.
The Kremlin-backed narrative manipulation utilized AI to establish a bot farm, which disseminated misleading information via nearly 1,000 fake social media accounts, and also through social media influencers and advertisements that drove internet traffic to cybersquatted domains. This effort sought to create discord in the U.S. and sway opinion on the Russia-Ukraine conflict by impersonating Americans. The U.S. Justice Department intervened by seizing domain names and investigating the fake accounts.
Another recent example involves a fake image of a fire or explosion near the Pentagon, shared by multiple verified X (formerly Twitter) accounts, causing confusion and a brief stock market dip. Local officials confirmed no such incident occurred, and the image appeared to be AI-generated. The account that initially posted the image falsely claimed affiliation with Bloomberg News and has since been suspended. X’s verification process, now available for a monthly fee, no longer guarantees account authenticity. The false report also aired on Republic TV in India, which later retracted it upon realizing the incident was fabricated. This is why it’s important to not only have multiple trusted sources of information but also to regularly re-evaluate their reliability over time.
Defense Strategies to Counter AI-Enhanced Cyberattacks
We recommend the following best practices that you can adopt to combat AI-enabled attacks:
1. Proactively Manage Vulnerabilities in Your Digital Environment
Prioritize regular software patching and updates to eliminate potential attack vectors. Conduct frequent security audits and penetration testing using AI-powered tools to keep threats at bay. Take advantage of AI-driven tools to continuously monitor and analyze network traffic and detect anomalies in real-time to enable timely response to potential threats.
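To make the anomaly-detection idea above concrete, the sketch below flags traffic volumes that deviate sharply from a rolling baseline. It is a deliberately simplified, hypothetical example; production tools use far richer behavioral models than a single z-score.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=10, threshold=3.0):
    """Flag indices where a sample deviates more than `threshold`
    standard deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady outbound traffic (~100 MB/min) with one sudden spike,
# the kind of pattern a data-exfiltration attempt might produce.
traffic = [100, 102, 99, 101, 98, 103, 100, 97, 102, 101, 950, 100]
print(flag_anomalies(traffic))  # [10] — the spike is flagged
```

The value of automating this check is speed: the same comparison runs continuously across every monitored metric, surfacing suspicious spikes for analysts in real time.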
AI-driven solutions like AvePoint Insights help organizations identify risks and vulnerabilities in their digital environment, such as anonymous links or shadow users in Teams, Groups, Sites, and OneDrives. Using tenant-wide object- and user-based searches, it provides out-of-the-box Risk Assessment Reports that summarize updates and prioritize high-risk action items, ensuring critical permissions are surfaced and addressed.
2. Fortify Your Email Security
Consider these best practices for strengthening your email security:
Implement secure email gateways with targeted attack protection to detect and block malicious emails that deliver ransomware.
Conduct regular anti-phishing campaigns and train employees to spot phishing emails, including internal simulations that test their knowledge; then identify and resolve any training gaps.
Enforce strong password policies and limit user access to necessary permissions, regularly reviewing and reaffirming access levels.
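As a simple illustration of the password-policy point above, such rules can be enforced programmatically at account creation or reset. The specific rules below are illustrative assumptions, not a standard; tune them to your own policy.

```python
import re

def check_password_policy(password, min_length=12):
    """Return a list of policy violations (empty list = compliant).
    The rules here are illustrative examples only."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", password):
        problems.append("missing a lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("missing an uppercase letter")
    if not re.search(r"\d", password):
        problems.append("missing a digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("missing a symbol")
    return problems

print(check_password_policy("Tr0ub4dor&3x!"))  # [] — compliant
print(check_password_policy("password"))       # four violations
```

Automated checks like this complement, rather than replace, the periodic access reviews described above.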
We provide a detailed guide on how you can implement anti-phishing campaigns and block malicious websites in our free Ransomware Readiness Checklist eBook.
Fighting against AI-driven cyberattacks requires AI-powered data security solutions. A comprehensive data management and security solution, the AvePoint Confidence Platform leverages AI capabilities to protect data with proactive measures, ensuring critical data is appropriately classified, accessed, and monitored. Additionally, it includes AI-powered ransomware and suspicious activity monitoring, which allows organizations to recover faster from such attacks.
3. Adopt a Forward-Looking Approach to Guard Against Disinformation
A proactive cybersecurity approach should be ingrained in an organization’s culture. Stay informed about emerging threats and disinformation tactics. Regular training ensures employees remain knowledgeable, enabling them to exercise sound judgment and critical thinking. Encourage collaboration and intelligence sharing within your industry. By working together, organizations can better understand the evolving threat landscape and develop more effective countermeasures.
Critical thinking is crucial in fighting against social engineering campaigns. It’s not just for debunking conspiracy theories; it helps us identify potential threats. Always take a moment to verify information with an unbiased third party or consider the motives behind a request. Pause and ask yourself: "Does this align with my job role? Do I have the authority to perform the requested task? Is this a common occurrence?" Instilling this mindset is vital as threats grow.
Boost Your Cyber Resilience with AvePoint
As cybercriminals leverage AI to create more sophisticated and scalable threats, businesses must stay ahead by implementing robust defense strategies. This includes adhering to core cybersecurity principles anchored in data governance and fostering a culture of continuous employee learning and vigilance, so your organization can effectively counter AI-enabled cybercrime and safeguard its digital assets.
AvePoint’s Resilience Suite enhances cyber resilience by using AI to manage the entire information lifecycle, from creation to defensible disposal. The suite’s Cloud Backup component uses machine learning to detect unusual activities, such as phishing and ransomware, enabling swift threat response and remediation.
With AvePoint by your side, you can confidently navigate the complexities of modern cybersecurity and protect your business from the ever-growing array of AI-driven threats.
Abby Payuyo is a Senior Technical Marketing Writer at AvePoint, covering Artificial Intelligence and Machine Learning. With over 20 years of experience in marketing communications and technical writing, including a recent stint in cybersecurity, Abby creates content that helps organizations navigate the challenges of the modern workplace with the help of AI & ML solutions.