Artificial Intelligence Threats:
WHAT EVERYONE SHOULD KNOW

STRATEGIC AND CYBER INTELLIGENCE PROGRAM | UNIVERSITY OF SOUTH FLORIDA
Executive Summary
Artificial intelligence (AI) has recently surged in popularity and is quickly becoming a part of daily life. It has also significantly changed the cybersecurity landscape, creating both new risks and new opportunities. This report focuses on the role of AI in cybersecurity – how it enables cyber threat actors (CTAs) to escalate the speed, scale, and sophistication of their attacks and empowers defenders to enhance threat and vulnerability detection, deploy adaptive defenses, and assume a more proactive cyber defense posture.
A staged scenario is presented to make the tactics described more vivid. It centers on a financially motivated cybercriminal group (Babk!9) that targets a mid-sized financial services firm (Nouveau Riche). This scenario illustrates how AI is being used in both offensive and defensive cyber operations.
The report defines foundational AI concepts—such as machine learning, natural language processing, large language models, and generative AI—and explains how each plays a role in modern cyber tools. It then details how CTAs are leveraging AI to enable:
• Highly personalized phishing and social engineering through generative AI
• Smarter, evasive, and persistent malware using autonomous agents
• Accelerated discovery of software vulnerabilities through automated scanning and AI-based fuzzing
• Model manipulation and data poisoning attacks designed to degrade or hijack AI systems themselves
The report then outlines how defenders are applying AI to enhance their ability to:
• Detect threats in real time by identifying behavioral anomalies
• Analyze and neutralize malware using deep learning
• Prioritize and remediate vulnerabilities through predictive analytics
• Build deception technologies and adaptive defenses that confuse or contain intruders
After reviewing AI’s role in cyber-attacks and cyber defenses, and while acknowledging that the nature and scale of AI’s future advances are difficult to predict with any certainty, the report highlights several areas to watch closely as AI evolves: the growth of autonomous AI agents, the looming impact of quantum computing on encryption, and the increasing use of AI in critical infrastructure operations. A short “Executive Takeaway” is offered for each.
Finally, the report provides tailored recommendations for Florida legislators and critical infrastructure operators, including:
• Expanding AI transparency and cybersecurity legislation
• Investing in municipal AI-based defenses
• Conducting AI-specific risk assessments
• Funding AI red-teaming and collaborative intelligence sharing
Babk!9 Scenario
Consider the following scenario. Babk!9,1 a Russia-based group of financially motivated cyber criminals, has initiated a campaign of activity targeting medium-sized businesses in the financial services sector. One of the targeted organizations is Nouveau Riche Financial Services, a medium-sized financial services firm located in a midwestern US state. Nouveau Riche Financial Services has slightly better-than-average cyber defense systems. They know they operate in a risky sector, but to their knowledge, they have never had a serious breach, so in recent years, they have not made any major new investments in securing their information systems. Nouveau Riche has both a signature-based detection system (which minimizes false positives but, like any signature-based system, is vulnerable to zero-day attacks) and an anomaly-based detection system (which, like all anomaly-based systems, is prone to issue false-positive alerts based on detected anomalies that do not reflect a threat or malign activity). In fact, their IT security team spends nearly all of their time chasing down alerts for activity that is ultimately determined not to pose a threat. As a result, they tend to suffer from “alert fatigue.”
Babk!9 mounts a broad phishing2 campaign against Nouveau Riche employees. They send an email—purporting to come from Nouveau Riche’s Administrative Services—indicating that the recipient has requested their email account be cancelled. The email instructs the user that if they did not request the cancellation, they should click on a link below to re-activate their account. Although the email slips through the system’s anti-phishing filters, Nouveau Riche employees have received training on how to identify possible phishing emails. They notice that the sender’s email address, despite displaying the name “Admin,” did not come from a Nouveau Riche company domain. As a result, some employees simply delete the email. Others contact the IT Team to verify its authenticity and are told that the message did not come from them and not to click on the link. None of the Nouveau Riche employees click on the phishing link, which denies Babk!9 the opportunity to use it to breach the system.
In this case, Nouveau Riche’s traditional security measures and staff training proved just strong enough to hold the line against Babk!9’s attack. But as the threat actors reassess their failed attempt, they are already preparing a more advanced follow-up—this time with help from artificial intelligence (AI). Before exploring how AI is reshaping the cybersecurity landscape for both attackers and defenders, it is worth understanding what AI is, how it works, and why it matters in this context. To be continued…
What is AI?
Artificial intelligence (AI) is defined as the ability of computer systems or algorithms to imitate human intelligence on tasks like identifying patterns, solving problems, understanding language, and making decisions.3 Over the past decade, AI capabilities have developed rapidly, and AI is now widely used to accomplish a number of complex tasks. Stanford’s 2025 AI Index Report found that more than three quarters (78%) of organizations were using AI in 2024, up from just over half (55%) the year before.4
Developmental advances continue to be made every day, producing ever more powerful tools and software. AI drives new technologies like big data, robotics, and the Internet of Things (IoT). The rise of generative AI has boosted the potential and appeal of AI, large language models (LLMs), and machine learning (ML).5,6
AI itself is not especially new. Virtual assistants like Siri and Alexa use natural language processing (NLP). Built-in car navigation systems and apps like Apple Maps use algorithms for real-time predictive routing and for estimating and adjusting arrival times. Streaming services use “recommendation engines” to suggest movies and content. Auto-correct and predictive text suggestions on phones and in word processing software use NLP to learn from the user’s typing habits. All of these are powered by AI, which highlights the fact that AI has many facets and uses a range of methods.
The following section will introduce some of AI’s component methods to provide a big picture view before digging into the details of how it is affecting the cybersecurity landscape. First, it is important to note that all of the remarkable AI achievements currently in use are still examples of Narrow Artificial Intelligence or “Weak AI,” the lowest level of AI. All current AI platforms and apps are designed and supervised by humans to perform specific tasks like image recognition, language translation, or playing chess. They lack self-awareness and general reasoning ability and cannot transfer learning across domains (e.g., Netflix’s recommendation engine cannot drive a car).
Higher levels of AI are basically theoretical at this point. Artificial General Intelligence (AGI) would be more self-directed, able to perform adaptive general reasoning and to understand, learn, and apply knowledge across task domains without human input. The “strongest” AI, Artificial Superintelligence (ASI), would demonstrate capabilities that exceed human intelligence in essentially all cognitive domains, from creativity and emotional intelligence to scientific reasoning. AGI and ASI are worth keeping in mind as we think about long-term AI safety research and planning and consider ethical implications, but we are not there yet.7
Since ChatGPT was released into the wild in November 2022, everyday users have had to contend with new terms like machine learning, natural language processing, and large language models. Understanding and distinguishing between these different methods can be confusing, so here is a very basic map of the terrain.
Machine Learning
If artificial intelligence is to simulate human intelligence, AI might be regarded metaphorically as a brain. Machine Learning (ML), then, might be regarded as the brain’s learning center that powers its ability to learn from experience. ML is a method and branch of AI that allows computers to learn from large datasets without explicit programming.8 While ML has the potential to improve decision-making (particularly with deep learning), its effectiveness depends heavily on having massive amounts of high-quality, unbiased data to learn from. While the “Web” itself is massive, it would be hard to argue that most of its content/data is high-quality or unbiased. And because ML models have such a voracious appetite for information, significant concerns remain about privacy and the ethical use of data.9
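To make “learning from data without explicit programming” concrete, consider the following minimal sketch (Python with scikit-learn). It is purely illustrative: the phishing-flavored features and the tiny dataset are invented for this example, not drawn from any real detection system.

```python
# Minimal sketch: the model induces its own rules from labeled examples,
# rather than a programmer hand-coding "if more than N links, flag it."
# Features and data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [num_links, has_urgent_words, sender_known]; label 1 = phishing
X = [[5, 1, 0], [0, 0, 1], [3, 1, 0], [1, 0, 1]]
y = [1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)  # the "learning" step
print(model.predict([[4, 1, 0]]))           # -> [1], flagged as phishing
```

The point of the metaphor holds even at this toy scale: no one wrote the classification rule; the model inferred it from the data, for better or worse depending on the data’s quality.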
Natural Language Processing
Continuing the brain metaphor, Natural Language Processing (NLP) might be viewed as a subsection of the learning center that is focused on language, because most NLP is implemented via ML. NLP enables computers to understand, interpret, generate, and respond to human language as it is commonly spoken
(e.g., not through code or keywords). This is what allows AI and chatbots to summarize text, identify and categorize specific text elements (e.g., names, dates, times or sentiments), and answer questions.10
Large Language Models
Stretching the brain metaphor a bit further, Large Language Models (LLMs) function like a specialized section in the language area of a learning center. They are trained on large datasets, much like an overly educated language expert who has studied and memorized a vast amount of human-generated text and conversation. Like ML models generally, LLMs have to be trained on massive datasets of text to effectively understand and generate human language. To do that, these models use deep learning architectures with multi-layer artificial neural networks called “transformers” (the “GPT” in ChatGPT stands for “Generative Pre-trained Transformer”). Transformers enable the model to adaptively learn complex patterns from these large textual datasets. Rather than just reviewing text sequentially, one word at a time, a transformer can see a large cluster of words at once and weigh the importance of each word relative to the others (a mechanism called “attention,” which also allows the words to be processed in parallel). LLMs are the tools powering most of today’s advanced chatbots and text-based AI assistants.11
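For readers who want to see the “weighing” idea in miniature, the following sketch implements scaled dot-product attention, the core operation inside a transformer, using NumPy. The numbers are random stand-ins for word representations; a real LLM learns these values during training.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
tokens, dim = 4, 8                     # a 4-"word" sentence, 8-dim vectors
Q = rng.normal(size=(tokens, dim))     # queries
K = rng.normal(size=(tokens, dim))     # keys
V = rng.normal(size=(tokens, dim))     # values

scores = Q @ K.T / np.sqrt(dim)        # how much each word attends to the others
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
output = weights @ V                   # context-aware representation of each word

print(weights.round(2))                # each row sums to 1.0: that word's
                                       # "attention" spread over all the words
```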
Generative AI
Generative AI is like a creative function within the brain’s learning center. Using ML, NLP, neural networks, and other tools, it can not only reproduce or mimic existing works but also—based on learned patterns from training data—create new and original ideas, text (via LLMs like ChatGPT), poems, music (e.g., Jukebox), images (e.g., DALL·E, Midjourney), or even programming code (e.g., GitHub Copilot). Generative AI is not limited only to language. It can be applied across domains using different types of models, such as transformers for language, Generative Adversarial Networks (GANs) for images, and diffusion models for art. It is distinguishable from traditional AI, which performs specific tasks based on predefined rules.12 Generative AI has had a widespread impact across multiple fields and contexts, raising important but vexing questions about trust, security, and ethical use.13
AI is Reshaping the Cybersecurity Landscape for Attackers and Defenders
While AI tools are being increasingly used across government and industry to gain competitive advantage, these new capabilities create new risks and opportunities for cyber threat actors (CTAs) and cyber defenders as well.
Returning to our Babk!9 scenario from earlier, after their initial failure, Babk!9 regrouped—this time integrating AI tools to upgrade their approach. What follows is a glimpse into how AI-enhanced cyber-attacks can escalate in precision, speed, and impact.
Frustrated by their lack of success, Babk!9 decides to “level up” their attacks by augmenting their tactics with the use of AI tools. Specifically, Babk!9 adopted the following tactics:
1. they mined social media data (including LinkedIn) for information about Nouveau Riche employees;
2. using those data, they identified certain “high-value targets” (HVTs)—employees likely to have high-level system access privileges—and gathered information about the software and systems that Nouveau Riche uses;
3. using highly specific information about several HVTs, Babk!9 used generative AI to craft highly personalized spear-phishing emails to mimic the style and communication patterns of other company employees – a couple of employees clicked the link, which allowed Babk!9 to breach the system; and
4. the link installed malware that explored and identified vulnerabilities within the Nouveau Riche system and exploited those vulnerabilities to access financial transaction data to divert funds to disguised Babk!9-owned accounts.
AI and Cyber-Attacks
Hackers and cyber threat actors (CTAs) are developing and actively using AI to increase the efficiency, effectiveness, volume and sophistication of their attacks.14 Bugcrowd’s 2024 Inside the Mind of a Hacker survey found that 77% of hackers are now using generative AI for hacking and they are finding it increasingly effective. The survey reported that 71% believe AI technologies increase the value of hacking, a dramatic rise from the 21% who saw that value the year before (2023).15 One of the most basic implications is that using AI to increase their productivity means CTAs can more easily launch more attacks. Indeed, cybersecurity professionals say that AI has significantly increased the volume of cyber-attacks, leading to greater stress and higher burnout. Many IT and security professionals (51% in one survey16) consider AI-driven attacks as the foremost threat to their organizations, and most report that they have had to change their cybersecurity strategy in the last year due to the rise in AI-powered cyber threats.17 Strikingly, however, more than three quarters (77%) of organizations feel unprepared to effectively handle AI cybersecurity threats.18
What is the nature of those AI-powered threats? While the full range of hackers’ AI-related techniques is quite extensive, in the next section of this report, we will focus on four key tactical domains of activity:
• AI-Enhanced Social Engineering & Deception
• Smarter Malware & Autonomous Agents
• Accelerated Vulnerability Discovery & Exploitation
• Data/AI Manipulation & Poisoning
AI-Enhanced Social Engineering & Deception
Social engineering is still CTAs’ primary weapon of choice. Humans (not technical system vulnerabilities) are the primary target of most cyber-attacks and many, if not most, successful breaches originate from a
phishing email, opened and clicked by an unsuspecting employee.19 Among the range of potential threats that AI poses to cybersecurity, there is especially widespread concern among professionals and the general public about hackers using generative AI (GenAI) tools like ChatGPT, Copilot, Gemini, and Claude to craft more convincing and scalable social engineering attacks.20 Some of the key developments include the following:
• Historically, phishing emails—particularly those from countries outside the US—have been riddled with spelling and grammatical errors that provide easy clues to their inauthenticity. Now, a simple scan can find and fix those errors in seconds. Not only can GenAI make deceptive messages appear more realistic, it can also help create more realistic dialog or more personalized messages (using publicly available data) and suggest to CTAs more persuasive phrasing and techniques for psychological manipulation.21
• Ransomware groups have started to mobilize AI to enhance their attacks. An example is the group FunkSec, which surfaced publicly in 2024 and claimed over 85 victims in a short amount of time.22 Some ransomware actors have also used AI to optimize their attack plans by identifying the most valuable data (like databases or backups) to exfiltrate or encrypt (to maximize extortion leverage).23 Three of the most troublesome possible enhancements for ransomware attacks are the intelligent targeting of important data, adaptive encryption methods, and a supercharged ability to avoid detection.24 Ransomware is already something that most businesses and organizations fear; with generative AI, it has become even more menacing.
• Another major concern with GenAI is the increasing use—and ease of use—of deepfake technology to create convincing audio, video, and images that make fraudulent messages appear to be coming from a credible, known, or convincing source.25 During the 2024 Republican presidential primary, for example, a video was released on X (formerly Twitter) on September 1, 2023, with the headline: “BREAKING NEWS: Governor Ron DeSantis drops out of the 2024 Republican presidential primary.” The video went viral but was later found to be a deepfake.26 In 2024, the Hong Kong branch of a multinational company was scammed out of £20 million via a deepfake video call, in which criminals impersonated the CFO’s appearance and voice in real time.27 While deepfake technology required some technical sophistication in the past, that barrier no longer exists. Cloning specific voices or video personas can be done with just a few sample snippets and a simple prompt.28
• While some ethics-conscious companies have attempted to build in “guardrails” to prevent malicious uses, workarounds are not terribly complicated. And some AI tool developers intentionally do not even attempt to impose any ethical restrictions. Cybercriminals often use AI deepfake tools like DeepFaceLab and Faceswap, for example, to bypass identity verification procedures on banking and cryptocurrency platforms29 or text-generating tools like FraudGPT and WormGPT to create more convincing phishing emails, craft malicious code, and even automate some social engineering campaigns.30 FortiGuard Labs notes that Cybercrime-as-a-Service (CaaS) groups have been leveraging these innovative tools to focus and intensify their targeting of specific segments of the cybercrime supply (and attack) chain.31 All of this makes it easier for cyber threat actors to generate more, and more effective, attacks.
Smarter Malware & Autonomous Agents
On the more technical side of AI-related threats, cyber threat actors (CTAs) are mobilizing AI to create more evasive, adaptive, and persistent malware. Before the widespread availability of AI, the process of developing and testing new variants of malware was time-consuming and required considerable technical expertise. With GenAI, however, new forms of malware are just a prompt away.32 In an earlier era of malware, less experienced hackers—sometimes called “script kiddies”—would try to copy or slightly tweak existing attack code to accelerate the deployment curve. GenAI, however, allows any CTA, regardless of skill level, to rapidly improve the sophistication of their malware and cyber-attacks.33 Some of the key developments include the following:
• AI can quickly create malware code (original code or modifications of existing code) that is smarter, more evasive (less visible and vulnerable to detection systems), more adaptive and more autonomous.34
• CTAs are training AI models on leaked data to make them better at guessing passwords.35
• Malicious actors are integrating AI enhancements to bypass traditional security measures by working around filters and intrusion detection systems, essentially neutralizing many off-the-shelf security tools.36
• Hackers are using polymorphic malware that can change its behavior or appearance based on its environment and the defenses it encounters, like a chameleon adjusting to its surroundings. It can re-encrypt or rewrite parts of its code automatically when it senses it is being analyzed, which confounds traditional antivirus signatures.37
• CTAs have also developed agentic malware that can operate independently, exploring systems, making decisions, and even escalating privileges without direct human control. That kind of autonomy allows malware to persist longer (operating as an Advanced Persistent Threat or APT) and to more easily navigate around any defenses it encounters.38
Accelerated Vulnerability Discovery & Exploitation
AI tools are increasing the efficiency of cyber threat actor (CTA) reconnaissance efforts, dramatically speeding up the process of discovering and exploiting system vulnerabilities. Fortinet’s 2025 Global Threat Landscape Report found that cybercriminals are using automated scanning at an unprecedented rate, with global estimates of about 36,000 scans every second.39 Some of the key developments include the following:
• CTAs have used AI to automate the process of detecting vulnerabilities, enabling them to scan for weaknesses across networks, applications, and devices more quickly and efficiently than using traditional, manual methods.40
• Hackers have also used AI “fuzzing” tactics, which feed software a high volume of unexpected, random, or slightly wrong inputs to see what breaks the system (a simplified sketch of the basic technique appears after this list). Once the vulnerabilities are identified, attackers can exploit them to break into systems, steal data, or take control of devices—often before developers even know the problem exists.41
• AI-driven vulnerability detection techniques not only expedite the process of looking through code for weaknesses, particularly novel or zero-day exploits, but the tools can provide malicious decision support to optimize attack strategies and suggest the most effective exploits to use.42
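Fuzzing itself is a standard, legitimate testing technique that defenders and security researchers also use. The sketch below, in Python, shows the core loop under simple assumptions: parse_record is an invented stand-in for the software under test, and the harness just records which random inputs crash it.

```python
# Minimal fuzzing sketch: throw random inputs at a target function, log crashes.
import random
import string

def parse_record(data: str) -> int:
    # Toy target with a hidden bug: assumes every field contains "=".
    fields = data.split(";")
    return sum(len(f.split("=")[1]) for f in fields)  # IndexError if "=" missing

crashes = []
for _ in range(1000):
    fuzz_input = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_record(fuzz_input)
    except Exception as exc:
        crashes.append((fuzz_input, repr(exc)))

print(f"{len(crashes)} crashing inputs found; e.g., {crashes[:2]}")
```

AI-based fuzzers improve on this brute-force loop by learning which input mutations are most likely to reach untested code paths.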
Data/AI Manipulation & Poisoning
Cyber threat actors (CTAs) are targeting the data underpinning AI models to corrupt, disrupt or mislead them.43 Some of the key developments include the following:
• Unlike attacks that seek to exploit errors or vulnerabilities in code, these attacks—often called “poisoning” or “data poisoning”—target the model’s training data to skew its behavior and performance.44 Because the performance of LLMs and other ML models is determined by how they are trained on massive datasets, manipulating their training data to compromise their security and reliability can have significant consequences.
• Not only can data poisoning degrade the model’s overall performance (indiscriminate attacks), it can also be deployed to force specific misclassifications or introduce hidden vulnerabilities (targeted attacks).45 Corrupting even small segments of training data can have significant impacts on performance. For example, one study of email spam filtering systems found that manipulating just 1% of the training emails could render the algorithm ineffective, potentially causing the filter model to mistakenly classify harmful phishing emails as safe.46 (A toy illustration of measuring this kind of degradation appears after this list.)
• Another type of AI manipulation is a “model inversion attack.”47 These attacks target AI/ML systems by attempting to retrieve details about the training data itself. These attacks try to reverse engineer or trick the AI system into revealing sensitive information or secrets it learned during training. They use a model’s outputs to reconstruct or estimate information about the original inputs.48 For example, a healthcare organization may use an AI model, trained on private medical records, to estimate risk for diseases like diabetes. A CTA may be able to access the decision support model, but not the original data. By experimenting with various inputs, they might easily discern that a 52-year-old Caucasian male with high glucose and family history living within a certain Zip Code was likely part of the training set. With enough probing, they might even reconstruct a near-complete profile of a real patient’s health data.
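From the defender’s perspective, a model’s sensitivity to corrupted training labels can be measured directly. The sketch below (Python, scikit-learn, fully synthetic data) flips a small fraction of training labels and compares test accuracy. Note that random flips like these understate the damage a targeted attack, such as the 1% spam-filter manipulation cited above, can do.

```python
# Illustrative robustness check: how much does 5% label poisoning hurt accuracy?
# Data are synthetic; results will vary by model and dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

poisoned = y_tr.copy()
rng = np.random.default_rng(0)
idx = rng.choice(len(poisoned), size=int(0.05 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]          # flip 5% of the training labels

dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)
print("poisoned accuracy:", dirty.score(X_te, y_te))
```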
AI and Cyber Defense
Returning once more to our Babk!9 scenario, the breach left Nouveau Riche reeling—financially and reputationally. But it also served as a wake-up call. Recognizing that conventional defenses were no match for AI-augmented threats, the company’s leadership decided to invest in AI-enhanced cybersecurity. What followed is an example of how applying AI to cyber defense can shift the balance.
This Babk!9 attack resulted in Nouveau Riche’s most significant breach and losses to date. They had to disclose the breach to customers and as a result, lost several important clients and the firm suffered
significant reputational damage. The Nouveau Riche Information Security Team knew they also needed to “level up” their defenses to include AI tools if they were going to defend their system against these kinds of sophisticated AI-enabled attacks. The Nouveau Riche CISO was able to convince the CEO to make the investment. Cyber threat intelligence analysts from Nouveau Riche’s Information Security Team gathered information from the Financial Services ISAC detailing a new trend in spear-phishing attacks targeting the sector that seemed to parallel the methodology used in their recent breach. As a result, the analysts were able to attribute the attack, with at least moderate confidence, to Babk!9. By knowing the adversary that was targeting them, the analysts were able to dig a bit deeper and discern the specific capabilities that Babk!9 possessed and the tactics, techniques, and procedures they had been, with some success, using against other mid-sized financial services firms.
The insights from Nouveau Riche’s cyber threat intelligence helped them develop a strategy to bolster their cyber defenses with AI capabilities. Specifically, the Nouveau Riche Information Security Team trained their AI to learn Nouveau Riche’s own specific network and user behavior, including social graph analysis (a map of who usually communicates with whom), style analysis (the way each user tends to communicate in emails, such as syntax, sentence structure, word usage, punctuation, and sentence length), and structural analysis (examining metadata from the headers, including details such as ARC headers, where the message originated, date of delivery, and the destination address).
Those AI enhancements reduced the more general anomaly-based alerts and helped to highlight those that were most specific to that system and its users. Subsequently, Nouveau’s system detected a subtle but unusual pattern of activity. There was a message from the CEO to a new system administrator. The CEO almost never communicates with system admins directly. The style of the message was subtly different from the way the CEO typically communicates by email. And there was an attachment in the message which the employee was directed to click immediately. The message was flagged for the security team. One of the team members was able to retrieve the malware embedded in that attachment and—by reverse engineering the malware in a secure virtual environment—was able to identify a vulnerability in the Nouveau Riche system which that malware was designed to target. They were then able to patch that vulnerability before Babk!9 or any other CTAs were able to exploit it.
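The style analysis described in this scenario can be illustrated with a simplified sketch. The features and messages below are invented for illustration; a production system would use far richer stylometric models trained per user.

```python
# Toy stylometric anomaly score: compare a new message's simple style features
# against a sender's historical baseline and flag large deviations.
import statistics

def features(text: str) -> dict:
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sent_len": len(words) / max(len(sentences), 1),
        "exclamations": text.count("!"),
    }

def anomaly_score(history: list[str], new_msg: str) -> float:
    baseline = [features(m) for m in history]
    new = features(new_msg)
    score = 0.0
    for key in new:                                   # sum of per-feature z-scores
        vals = [b[key] for b in baseline]
        mu, sigma = statistics.mean(vals), statistics.pstdev(vals) or 1.0
        score += abs(new[key] - mu) / sigma
    return score

ceo_history = ["Please review the Q3 numbers before Friday.",
               "Let's discuss the audit findings at our next meeting.",
               "I have approved the revised budget for this quarter."]
suspect = "URGENT!!! Open the attached file now and confirm your password!"
print(anomaly_score(ceo_history, suspect))            # high score -> flag for review
```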
The proliferation of AI in cybersecurity is not only amplifying threats. It is creating significant opportunities to “level up” cyber defenses by applying methods like machine learning, natural language processing, GenAI, and neural networks to enhance threat detection and analysis, automate responses, and provide predictive analysis. While human oversight remains essential, AI has become a powerful tool for safeguarding digital assets and adapting to the complex, evolving cyber threat landscape. Industry surveys show that nearly all (95%) cybersecurity professionals agree that AI-powered solutions significantly enhance the speed and efficiency of prevention, detection, response, and recovery.49 Three quarters of cyber defenders had adopted new AI tools within the past year,50 and 73% of cybersecurity teams want to shift focus to an AI-powered preventive strategy.51 There is a similar trend among OT (operational technology) cybersecurity professionals, where 80% believe the benefits of AI in industrial control systems cybersecurity outweigh the risks.52 Overall, cyber defenders appear focused on using AI as a supporting resource, not just fearing it as a source of threat.
What are those AI-powered cyber defense resources? As with the description of attack techniques, it is not possible to provide an exhaustive list. New tools and techniques are constantly being developed. In the next section of this report, however, we will focus on four key domains of activity where AI is being used to bolster cyber defenses:
• AI-Driven Threat Detection & Response
• Enhanced Malware Analysis & Detection
• Proactive Vulnerability Management
• Adaptive Defense & Deception Technologies
AI-Driven Threat Detection & Response
Cyber defenders are using AI to detect, analyze, and respond to threats faster and more accurately. They are leveraging AI tools that can efficiently access and navigate large datasets, immediately detect and respond to identified threats in real time, and continuously adapt and improve on their own to execute tasks faster and more accurately, significantly reducing false positives. Some of the key developments include the following:
• AI models use machine learning to identify anomalies or deviations from normal patterns of network traffic or user behavior, which is especially useful for spotting new threats such as zero-day exploits, Advanced Persistent Threats (APTs), and even insider threats.53 (A minimal illustration follows this list.)
• AI-powered SIEM (Security Information and Event Management) systems retrieve and synthesize large amounts of data from various sources (e.g., network and system logs, external threat databases and intelligence feeds), which allows security analysts to chase fewer weak leads (false positives) and to prioritize more significant and more likely threats.54
• AI systems can neutralize threats autonomously by isolating infected systems and blocking malicious communications and use data from past incidents to inform and automate future incident response strategies.55
• AI also supports real-time threat intelligence integration and proactive defense by continuously analyzing internal historical data to predict potential future attacks and pulling external data (e.g., from threat intel feeds) to identify emerging threats.56
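A minimal version of the anomaly-detection idea in the first bullet can be sketched in a few lines (Python, scikit-learn). The network-flow features here, bytes sent, session length, and distinct ports, are invented for illustration.

```python
# Minimal anomaly detection sketch: fit a model on "normal" traffic, then
# flag sessions that deviate from it. Data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline traffic: [bytes_sent_kb, session_minutes, distinct_ports]
normal = rng.normal(loc=[500, 10, 3], scale=[100, 3, 1], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# An exfiltration-like session: huge transfer, long session, many ports touched
suspicious = np.array([[9000, 120, 40]])
print(detector.predict(suspicious))   # -> [-1] means "anomaly"
print(detector.predict(normal[:3]))   # -> [1 1 1], consistent with baseline
```

Real deployments train on far richer telemetry, but the principle is the same: the model learns “normal” and surfaces deviations, rather than matching known-bad signatures.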
Enhanced Malware Analysis & Detection
Cyber defenders are increasingly leveraging AI to understand and neutralize malicious code. Some of the key developments include the following:
• AI and Machine Learning (ML) algorithms, classifiers (e.g., Support Vector Machines, k-Nearest Neighbors), and deep learning architectures (e.g., convolutional neural networks) are being used to more quickly detect and accurately classify both known and unknown malware, including various types like ransomware and spyware.57 (A toy classification example follows this list.)
• AI systems are moving beyond traditional signature-based detection, using behavioral analysis and neural networks—based on extensive datasets—to identify patterns in file structure, code anomalies, and execution behavior, as well as potential security threats and anomalies in user behavior. AI-driven systems can also identify polymorphic malware that traditional methods have trouble detecting.58
• AI-powered tools can enable proactive threat hunting by identifying potential threats that may not be detected by traditional automated systems, allowing cybersecurity teams to stay ahead of emerging risks and threats within their systems.59
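The classification idea in the first bullet can be illustrated with a toy example (Python, scikit-learn). The static features, file entropy, count of imported APIs, and a packed/encrypted flag, and all of the data are synthetic; real pipelines extract hundreds of static and behavioral features.

```python
# Toy malware classification sketch: train a classifier on labeled feature
# vectors, then score a new sample. Entirely synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Benign: lower entropy, fewer suspicious imports, rarely packed
benign = np.column_stack([rng.normal(5.0, 0.5, 200),
                          rng.poisson(20, 200),
                          rng.binomial(1, 0.05, 200)])
# Malicious: high entropy (often packed/encrypted), heavier import profiles
malicious = np.column_stack([rng.normal(7.5, 0.4, 200),
                             rng.poisson(60, 200),
                             rng.binomial(1, 0.7, 200)])

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)    # 0 = benign, 1 = malware

clf = SVC().fit(X, y)
print(clf.predict([[7.8, 55, 1]]))     # high entropy + packed -> [1], likely malware
```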
Proactive Vulnerability Management
AI enables a proactive approach to vulnerability management, which allows organizations to identify and mitigate risks before they can be exploited. Some of the key developments include the following:
• AI-powered tools can autonomously scan systems, applications, and networks to detect exploitable vulnerabilities.60 They can even automate penetration testing.61 These automated tools can also pull information from various external sources like vulnerability repositories and threat intelligence feeds to create a robust vulnerability detection capacity, which is updated in real time. Security researchers have used AI-based fuzzing to discover vulnerabilities by injecting data into software and monitoring for anomalies.62
• AI can use adversarial training—exposing models to simulated attacks in a controlled environment—to enhance the robustness and resilience of the threat detection models themselves and the data that underpin their training.63
• AI systems use Predictive Risk Scoring to prioritize which vulnerabilities need to be addressed or patched first based on assessing their likelihood and potential impact. AI-powered threat intelligence can identify trends in vulnerability exploitation, predict potential attacks, and assign severity scores to specific vulnerabilities.64
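Predictive risk scoring reduces, at its simplest, to ranking vulnerabilities by expected loss rather than raw severity. The sketch below (Python) uses invented CVE identifiers and made-up likelihood/impact numbers; real tools combine CVSS severity, exploit-prediction feeds, and asset criticality.

```python
# Simplified risk scoring sketch: rank by (exploitation likelihood x impact).
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str            # hypothetical identifier
    likelihood: float   # predicted probability of exploitation (0-1)
    impact: float       # business impact if exploited (0-10)

    @property
    def risk(self) -> float:
        return self.likelihood * self.impact

backlog = [
    Vuln("CVE-EXAMPLE-A", likelihood=0.02, impact=9.8),  # severe but rarely exploited
    Vuln("CVE-EXAMPLE-B", likelihood=0.70, impact=6.5),  # actively exploited
    Vuln("CVE-EXAMPLE-C", likelihood=0.30, impact=8.1),
]

for v in sorted(backlog, key=lambda v: v.risk, reverse=True):
    print(f"{v.cve}: risk={v.risk:.2f}")   # patch EXAMPLE-B first, despite lower severity
```

The counterintuitive lesson, which AI-driven scoring operationalizes at scale, is that the “worst” CVE on paper is not always the one to patch first.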
Adaptive Defense & Deception Technologies
AI can drive dynamic and intelligent defenses that adapt to attacker behavior and evolving tactics. This approach anticipates and mitigates threats, shifting cyber defense from a reactive to proactive posture. Some of the key developments include the following:
• AI can deploy a range of deception technologies, including honeypots and broader deception grids, to create fake assets (e.g., fake servers and systems, as well as decoy files, data, credentials, and network activity) that divert adversaries from real targets, lure them into the decoys, and collect detailed intelligence about their tactics, techniques, and behavior.65 (A bare-bones honeypot sketch appears after this list.)
• AI-driven authentication and authorization systems can provide more secure, dynamic, and efficient access control to sensitive infrastructure components,66 and they may soon be integrated with zero trust architectures as well.67
• AI can enhance authentication systems with behavioral biometrics, including behavior analysis that continuously monitors network activities to identify deviations from normal patterns and distinguishes between authorized and unauthorized actions. These methods have higher accuracy and lower error rates than traditional authentication approaches.68
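To make the deception-technology bullet concrete, the following bare-bones Python sketch shows the essence of a honeypot: a fake service that serves no real data but logs who connects and what they send. The port and banner are arbitrary choices for illustration; commercial deception grids are vastly more sophisticated and increasingly AI-orchestrated.

```python
# Bare-bones honeypot sketch: log connection attempts to a fake "SSH" service.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222           # decoy port, chosen for illustration

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"honeypot listening on port {PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # fake service banner
            probe = conn.recv(1024)                     # capture the attacker's first bytes
            print(f"{datetime.datetime.now()} {addr[0]} sent: {probe!r}")
```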
In addition to the four tactical areas outlined above, AI is being used in cyber defense to generate complex cybersecurity exercises. In 2024, research was conducted into using LLMs to generate cybersecurity exercise scenarios. The research concluded that the LLM-generated training scenarios effectively prepared practitioners to face a broad spectrum of both familiar and emerging cyber threats.69 While this is still not a well-researched application of AI, it has promising potential for the cybersecurity field. As the capabilities evolve, specialized LLMs could potentially be used to generate a limitless number of different training scenarios, leaving cybersecurity professionals better prepared for the threats they might face.
AI in 2030 and Beyond
Given the exponential growth of AI, it is difficult to predict what AI will look like in five years. However, there is some consensus on which fields might be affected by that time. Many say that fields such as transportation, healthcare, and public safety will be heavily influenced by AI at that point, which is hard to dispute, given that advancements in robotics, self-driving cars, and related technologies are happening at the same time. While there is no way to predict with certainty what AI breakthroughs may occur or when, it is clear that AI is advancing at a remarkable pace and that now is the time to proactively consider the ethical implications of those capabilities and whether regulatory strategies are needed to ensure human safety and, if so, what form they should take. The following areas, in particular, may be ones to watch:
Generative AI: Empowering Both Attackers and Defenders
As we noted, CTAs are already using Generative AI to produce credible, but fraudulent text, audio, and video content, which will make social engineering more convincing and scalable. On the defense side, AI will likely become a built-in assistant in every cybersecurity analyst’s workflow, helping screen out false alerts and identify and respond to real threats more quickly and intelligently.
Executive Takeaway: Organizations should think ahead about verification methods for sensitive communications and consider investing in deepfake detection technologies.
Autonomous Cyber Agents: AI vs. AI in the Digital Battlespace
AI agents can already operate semi-independently, and CTAs are evolving those capabilities to autonomously conduct cyber-attacks—identifying targets, exploiting weaknesses, and exfiltrating data—without direct human input. On the defense side, AI agents are also being developed to counter these attacks in real time, monitoring networks, isolating anomalies, and automatically deploying countermeasures.
Executive Takeaway: Organizations should keep up with developments in AI-based automation and train cybersecurity teams to manage and audit AI behavior.
Quantum Computing: Disruptor of Trust and Encryption
Although quantum computing is still developing, it is poised to completely change the cybersecurity landscape. A powerful quantum computer could break most of the cryptographic protocols that protect our data today. On the defense side, security researchers are working to develop post-quantum cryptography (PQC), but the implementation and timing (relative to quantum decryption) will be critical.
Executive Takeaway: Sensitive sectors like finance, telecom, and government should prioritize early adoption of PQC.
Evolving Threat Landscape: The AI Arms Race
The cat-and-mouse exchange between cyber attackers and defenders will continue in the years ahead. CTAs will continue to leverage AI to bypass defenses, while defenders race to adapt. Hackers will increasingly deploy deepfake voice or video scams and new actors will join the fray using malicious AI-as-a-service offerings. On the defense side, AI will most certainly become embedded in most cybersecurity tools and have outsized benefit for smaller organizations.
Executive Takeaway: As AI capabilities advance, organizations should anticipate increased regulation around transparency, oversight, and ethical AI use—especially in critical infrastructure.
Key Takeaways and Action Items for Florida Legislators (State and Federal)
Support AI Transparency and Accountability Legislation
• Constituents are vulnerable to manipulation and fraud from AI-generated phishing and deepfakes. Florida’s new AI bill (HB 919) requiring disclaimers on AI-generated political ads is a step forward.70 Legislators should consider expanding this to cover disclosures in broader uses of AI in public communications.
Mandate Reporting and Response Standards for AI-Driven Incidents
• SB 1536 (2025) already requires local governments to report ransomware incidents.71 Legislators should consider expanding these reporting requirements to include AI-specific threats such as model/data poisoning or autonomous malware, with clear timelines and support mechanisms.
Establish AI Risk Oversight for Critical Infrastructure (CI)
• Legislators should consider engaging with the Florida Government Technology Modernization Council (created by SB 1680) to ensure robust AI risk assessments are conducted across CI sectors like energy, water, and transportation, and align with CISA’s AI roadmap.72
Promote Public-Private Collaboration on AI Red Teaming
• Legislators might consider encouraging and incentivizing partnerships between state agencies, universities, and private sector firms to conduct AI “red teaming” exercises to enable early identification of AI system vulnerabilities.
Key Takeaways and Action Items for Critical Infrastructure Owners & Operators in Florida
Invest in AI-Augmented Threat Detection and Response
• Because AI-powered tools can detect anomalies, filter false alerts, and respond to threats faster than traditional systems, CI owners/operators should consider prioritizing investments in AI-enhanced SIEMs, endpoint detection, and automated response platforms to counter evolving threats.
Conduct AI-Specific Risk Assessments
• Thoroughly assess (utilizing CISA’s AI roadmap as a guide for assessing vulnerabilities) how AI is used in your operations and where it has the potential to introduce new risks (e.g., autonomous control systems, predictive maintenance).
Implement AI-Driven Vulnerability Management
• Use AI to proactively scan for and prioritize vulnerabilities and automate patching and incident response, particularly in sectors like energy and water, where downtime or compromise can have cascading effects.
Train Staff on AI Threat Awareness
• Social engineering is still CTAs’ primary avenue of attack and human error remains a major vulnerability. Provide training on recognizing AI-generated phishing, deepfakes, and social engineering tactics, and include AI-specific modules in regular cybersecurity drills.
Collaborate with Sector-Specific ISACs and CISA
• Engage with Information Sharing and Analysis Centers (ISACs) for relevant sectors and connect with CISA’s AI working groups to stay informed on emerging threats and best practices.
Conclusion
The rapid evolution of AI and its capabilities has made it abundantly clear that this technology holds incredible potential. As countries and companies throughout the world continue to compete with one another to stand at the cutting edge, more and more of the myriad ways that AI can be both a positive and negative force will be revealed. With that in mind, it is important to take note of how cyber threat actors develop and use AI for their malicious attacks and to begin preparing countermeasures and defenses against them. There is no doubt that the only way to combat AI is with AI. An AI-driven observe, orient, decide, act (OODA) decision loop will always be faster than a human OODA loop. This means we need to continue to develop and improve AI defenses.
Cybersecurity professionals need to use AI to bolster cyber defenses:
• AI-driven threat detection and response
• Enhanced malware analysis and detection
• Proactive vulnerability management
• Adaptive defense and deception technologies
The key takeaways and action items for Florida Legislators (State and Federal) are:
• Support AI transparency and accountability legislation
• Fund AI-driven cybersecurity capacity for state and local agencies
• Mandate reporting and response standards for AI-driven incidents
• Establish AI risk oversight for critical infrastructure (CI)
• Promote public-private collaboration on AI red teaming
The key takeaways and action items for critical infrastructure owners & operators in Florida are:
• Invest in AI-augmented threat detection and response
• Conduct AI-specific risk assessments
• Implement AI-driven vulnerability management
• Train staff on AI threat awareness
Appendix A
Top 10 AI Nations (ranked by weighted global AI vibrancy score, as of 2023)73,74
1. United States – 70.06
2. China – 40.17
3. United Kingdom – 27.21
4. India – 25.54
5. United Arab Emirates – 22.72
6. France – 22.54
7. South Korea – 20.48
8. Germany – 18.49
9. Japan – 18.47
10. Singapore – 18.15
Appendix B
Top 10 AI Companies (ranked by market cap as of 2024)75
1. Microsoft – $3.37 trillion
2. Apple – $3.34 trillion
3. Alphabet Inc – $2.64 trillion
4. NVIDIA – $2.06 trillion
5. Meta Platforms Inc – $1.23 trillion
6. Tesla – $663.43 billion
7. Adobe – $247.88 billion
8. IBM – $174.20 billion
9. Palantir – $62.50 billion
10. Mobileye (Intel) – $12.87 billion
Endnotes
1 Babk!9 and Nouveau Riche Financial Services are fictitious names. The scenario was constructed based on actual events and existing AI capabilities.
2 “Phishing” is a tactic where hackers send a deceptive email, typically luring readers to open an attachment or click a link that will unleash malware or prompt them to provide sensitive information like login credentials or credit card numbers.
3 Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed. (Hoboken, NJ: Pearson, 2020); Dimiter Dobrev, “A Definition of Artificial Intelligence,” arXiv preprint, October 4, 2012, https://arxiv.org/abs/1210.1568
4 Stanford Institute for Human-Centered Artificial Intelligence, The AI Index Report 2025 (Stanford, CA: Stanford University, 2025), https://www.businesswire.com/news/home/20250407539812/en/Stanford-HAIs-2025-AI-Index-Reveals-Record-Growth-in-AI-Capabilities-Investment-and-Regulation
5 Darrell M. West and John R. Allen, “How Artificial Intelligence Is Transforming the World,” Brookings, April 24, 2018, https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/
6 Mike Thomas, “The Future of AI: How AI Is Changing the World,” ed. Matthew Urwin, Built In, January 28, 2025, https://builtin.com/artificial-intelligence/artificial-intelligence-future
7 IBM Data Team and AI Team, “Types of Artificial Intelligence,” IBM, October 12, 2023, https://www.ibm.com/think/topics/artificial-intelligence-types?mhsrc=ibmsearch_a&mhq=Understanding+the+different+types+of+artificial+intelligence
8 Samuel Fosso Wamba, et al. “Machine Learning and Deep Learning.” Electronic Markets 31, no. 3 (2021): 1–10. https://doi.org/10.1007/s12525-021-00475-2
9 Hamed Taherdoost. “Navigating the ethical and privacy concerns of big data and machine learning in decision making.” Intelligent and Converged Networks 4, no. 4 (2023): 280-295; Koosha Sharifani and Mahyar Amini. “Machine learning and deep learning: A review of methods and applications.” World Information Technology and Engineering Journal 10, no. 07 (2023): 3897-3904.
10 Diksha Khurana, Aditya Koli, Kiran Khatter, and Sukhdev Singh. “Natural language processing: state of the art, current trends and challenges.” Multimedia Tools and Applications 82, no. 3 (2023): 3713-3744. https://link.springer.com/content/pdf/10.1007/s11042-022-13428-4.pdf
11 Humza Naveed, et al. “A Comprehensive Overview of Large Language Models.” arXiv preprint arXiv:2307.06435 (2023). https://arxiv.org/abs/2307.06435
12 Stefan Feuerriegel, et al. “Generative Artificial Intelligence: A Systematic Review and Applications.” Multimedia Tools and Applications 83, no. 1 (2024): 1–25. https://doi.org/10.1007/s11042-024-20016-1
13 Valerie Wirtschafter, “The Impact of Generative AI in a Global Election Year,” Brookings, January 30, 2024, https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year/.
14 Michael Hill, “Generative AI Fueling Significant Rise in Cyberattacks,” CSO Online, August 23, 2023, https://www.csoonline.com/article/649944/generative-ai-fueling-significant-rise-in-cyberattacks.html
15 Bugcrowd. Inside the Mind of a Hacker: 2024 Edition. San Francisco: Bugcrowd, 2024. Accessed May 16, 2025. https://www.bugcrowd.com/wp-content/uploads/2024/10/Inside-the-Mind-of-a-Hacker-2024.pdf.
16 Keeper Security. Top Data Threats Insight Report. 2024. Accessed May 16, 2025. https://www.keeper.io/hubfs/top-data-threats-insight-report-EN.pdf
17 Deep Instinct. Voice of SecOps: 5th Edition – AI in Cybersecurity: Friend or Foe? 2024. Accessed May 16, 2025. https://resources.wisdominterface.com/wp-content/uploads/2024/08/DI_Voice_of_SecOps_5th_Edition_2024_V2.pdf
18 Cloud Security Alliance. The State of Security Remediation. February 14, 2024. Accessed May 16, 2025. https://cloudsecurityalliance.org/press-releases/2024/02/14/cloud-security-alliance-survey-finds-77of-respondents-feel-unprepared-to-deal-with-security-threats
19 Ping Wang and Peyton Lutchkus, “Psychological Tactics of Phishing Emails,” Issues in Information Systems 24, no. 2 (2023): 71–83, https://doi.org/10.48009/2_iis_2023_107.
20 Malwarebytes. “Malwarebytes ChatGPT Survey Reveals 81% Are Concerned by Generative AI Security Risks.” Press release, June 27, 2023. Accessed May 17, 2025. https://www.malwarebytes.com/press/2023/06/27/malwarebytes-chatgpt-survey-reveals-81-are-concerned-by-generative-ai-security-risks
21 Marc Schmitt and Ivan Flechais. “Digital deception: Generative artificial intelligence in social engineering and phishing.” Artificial Intelligence Review 57, no. 12 (2024): 1-23.
22 “FunkSec – Alleged Top Ransomware Group Powered by AI,” Check Point Research, January 10, 2025, https://research.checkpoint.com/2025/funksec-alleged-top-ransomware-group-powered-by-ai/
23 Maha Charfeddine, Habib M. Kammoun, Bechir Hamdaoui, and Mohsen Guizani. “ChatGPT’s security risks and benefits: offensive and defensive use-cases, mitigation measures, and future implications.” IEEE Access (2024).
24 Matthew Delman, “AI-Powered Ransomware: The Next Generation of Damaging Cyberattacks,” Sotero, October 16, 2024, https://www.soterosoft.com/blog/ai-powered-ransomware-the-next-generation-of-damaging-cyberattacks/
25 Bibhu Dash and Pawankumar Sharma. “Are ChatGPT and deepfake algorithms endangering the cybersecurity industry? A review.” International Journal of Engineering and Applied Sciences 10, no. 1 (2023): 1-5.
26 PolitiFact Florida, “How a Deepfake Video of Ron DeSantis Dropping Out Went Viral,” WLRN, September 13, 2023, https://www.cfpublic.org/politics/2023-09-13/politifact-florida-ron-desantis-deepfake-video-elections
27 Dan Milmo, “UK Engineering Firm Arup Falls Victim to £20m Deepfake Scam,” The Guardian, May 17, 2024, https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video
28 Jawad Ahmad, Wajiha Salman, Muzamal Amin, Zain Ali, and Shumail Shokat. “A Survey on Enhanced Approaches for Cyber Security Challenges Based on Deep Fake Technology in Computing Networks.” Spectrum of Engineering Sciences 2, no. 4 (2024): 133-149.
29 Rizwan Khan, Mohd Taqi, and Atif Afzal, “Deepfakes in Finance: Unraveling the Threat Landscape and Detection Challenges,” in Navigating the World of Deepfake Technology, ed. Girish Lakhera, Sanjay Taneja, Ercan Ozen, Mohit Kukreti, and Pawan Kumar (Hershey, PA: IGI Global, 2024), 91–120, https://doi.org/10.4018/979-8-3693-5298-4.ch006
30 P. V. Falade (2023). Decoding the threat landscape: ChatGPT, FraudGPT, and WormGPT in social engineering attacks. arXiv preprint arXiv:2310.05595.
31 FortiGuard Labs. 2025 Global Threat Landscape Report. Fortinet, 2025. Accessed May 17, 2025. https://www.fortinet.com/content/dam/fortinet/assets/threat-reports/threat-landscape-report-2025.pdf
32 Christine Barry, “5 Ways Cybercriminals Are Using AI: Malware Generation,” Barracuda Blog, April 16, 2024, https://blog.barracuda.com/2024/04/16/5-ways-cybercriminals-are-using-ai--malware-generation.
33 Swapnil Chawande, “AI-Driven Malware: The Next Cybersecurity Crisis,” World Journal of Advanced Engineering Technology and Sciences 12, no. 1 (2024): 542–554, https://doi.org/10.30574/wjaets.2024.12.1.0172
34 M. G. Gaber, Ahmed, M., & Janicke, H. (2024). Malware detection with artificial intelligence: A systematic literature review. ACM Computing Surveys, 56(6), 1-33.
35 Abhilash Chakraborty, Anupam Biswas, and Ajoy Kumar Khan, “Artificial Intelligence for Cybersecurity: Threats, Attacks and Mitigation,” in Artificial Intelligence for Societal Issues, ed. Anupam Biswas, Vijay Bhaskar Semwal, and Durgesh Singh (Cham: Springer International Publishing, 2023), 3–25, https://doi.org/10.1007/978-3-031-12419-8_1
36 Blessing Guembe, Ambrose Azeta, Sanjay Misra, Victor Chukwudi Osamor, Luis Fernandez-Sanz, and Vera Pospelova. “The emerging threat of AI-driven cyber attacks: A review.” Applied Artificial Intelligence 36, no. 1 (2022): 2037254.
37 Xiyue Deng and Jelena Mirkovic. “Polymorphic malware behavior through network trace analysis.” In 2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS), pp. 138-146. IEEE, 2022, https://www.isi.edu/people-mirkovic/wp-content/uploads/sites/52/2023/10/a25-dengfinal.pdf.
38 Lothar Fritsch, Aws Jaber, and Anis Yazidi, An Overview of Artificial Intelligence Used in Malware, in New Advances in Information Systems and Technologies, ed. Eleni Zouganeli et al., Communications in Computer and Information Science 1650 (Cham: Springer, 2022), 41–51, https://doi.org/10.1007/978-3-031-17030-0_4
39 FortiGuard Labs, 2025 Global Threat Landscape Report
40 Leroy Jacob Valencia. “Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security.” arXiv preprint arXiv:2406.07561 (2024).
41 Sadegh Bamohabbat Chafjiri, Phil Legg, Jun Hong, and Michail-Antisthenis Tsompanas. “Vulnerability detection through machine learning-based fuzzing: A systematic review.” Computers & Security (2024): 103903.
42 Ibad Rehman, “The Capabilities of Large Language Models in Executing/Preventing Cyber Attacks,” LinkedIn, May 8, 2024, https://www.linkedin.com/pulse/capabilities-large-language-models-cyber-attacks-ibad-rehman-jbjqc/
43 Emad Alsuwat. “Analysis on Data Poisoning Attack Detection Using Machine Learning Techniques and Artificial Intelligence.” Journal of Nanoelectronics and Optoelectronics 18, no. 5 (2023): 628-638.
44 Laxminarayana Korada. “Data Poisoning-What Is It and How It Is Being Addressed by the Leading Gen AI Providers.” European Journal of Advances in Engineering and Technology 11, no. 5 (2024): 105-109.
45 Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, and Ou Wu. “Data Poisoning in Deep Learning: A Survey.” arXiv preprint arXiv:2503.22759 (2025).
46 Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D. Joseph, Benjamin IP Rubinstein, Udam Saini, Charles Sutton, J. Doug Tygar, and Kai Xia. “Exploiting machine learning to subvert your spam filter.” LEET 8, no. 1-9 (2008): 16-17.
47 Wencheng Yang, Song Wang, Di Wu, Taotao Cai, Yanming Zhu, Shicheng Wei, Yiying Zhang, Xu Yang, Zhaohui Tang, and Yan Li. “Deep learning model inversion attacks and defenses: a comprehensive survey.” Artificial Intelligence Review 58, no. 8 (2025): 1-52.
48 Md Mostafizur Rahman, Aiasha Siddika Arshi, Md Mehedi Hasan, Sumayia Farzana Mishu, Hossain Shahriar, and Fan Wu. “Security risk and attacks in AI: A survey of security and privacy.” In 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), pp. 1834-1839. IEEE, 2023.
49 Darktrace. The State of AI Cybersecurity: 2025 Report. 2025. Accessed May 17, 2025. https://www.darktrace.com/the-state-of-ai-cybersecurity-2025
50 Jason Lamar. “From Burnout to Balance: How AI Supports Cybersecurity Professionals.” Cyber Defense Magazine. November 7, 2024. Accessed May 17, 2025. https://www.cyberdefensemagazine.com/from-burnout-to-balance-how-ai-supports-cybersecurity-professionals-2/
51 Deep Instinct. Voice of SecOps
52 Anna Ribeiro, “Takepoint Research: 80% of Industrial Cybersecurity Professionals Favor AI Benefits over Evolving Risks,” Industrial Cyber, October 24, 2024, https://industrialcyber.co/ai/takepoint-research-80-of-cybersecurity-professionals-favor-ai-benefits-over-evolving-risks/
53 Mohammed B. Karaja et al., “AI-Driven Cybersecurity: Transforming the Prevention of Cyberattacks,” International Journal of Academic Engineering Research 8, no. 10 (October 2024): 38–44, www.ijeais.org/ijaer.
54 Yijie Weng and Jianhao Wu, “Leveraging Artificial Intelligence to Enhance Data Security and Combat Cyber Attacks,” Journal of Artificial Intelligence General Science 5, no. 1 (2024), https://doi.org/10.60087
55 P. Neelakrishnan, “AI-Driven Proactive Cloud Application Data Access Security,” International Journal of Innovative Science and Research Technology (IJISRT) (2024), https://doi.org/10.38124/ijisrt/ijisrt24apr957; Karaja et al., AI-Driven Cybersecurity
56 Muritala Aminu, Ayokunle Akinsanya, Dickson Apaleokhai Dako, and Oyewale Oyedokun. “Enhancing cyber threat detection through real-time threat intelligence and adaptive defense mechanisms.” International Journal of Computer Applications Technology and Research 13, no. 8 (2024): 11-27.
57 Malti Bansal, Apoorva Goyal, and Apoorva Choudhary. “A comparative analysis of K-nearest neighbor, genetic, support vector machine, decision tree, and long short term memory algorithms in machine learning.” Decision Analytics Journal 3 (2022): 100071.
58 Ammar Almomani, Samer Aoudi, Ahmad Al-Qerem, Amjad Aldweesh, and Mouhammd Alkasassbeh. “Behavioral Analysis of AI-Generated Malware: New Frontiers in Threat Detection.” In Examining Cybersecurity Risks Produced by Generative AI, pp. 211-234. IGI Global Scientific Publishing, 2025; Maksym Chaikovskyi, Inna Chaikovska, Tomas Sochor, Inna Martyniuk, and Oleksii Lyhun. “Comprehensive approach to the detection and analysis of polymorphic malware.” In CEUR Workshop Proceedings, vol. 3736, pp. 312-323. 2024.
59 Kumrashan Indranil Iyer. “Proactive Threat Hunting: Leveraging AI for Early Detection of Advanced Persistent Threats.” European Journal of Advances in Engineering and Technology 11, no. 2 (2024): 69-76.
60 Zarif Bin Akhtar, and Ahmed Tajbiul Rawol. “Enhancing cybersecurity through AI-powered security mechanisms.” IT Journal Research and Development 9, no. 1 (2024): 50-67.
61 Mariam Soltanifar. “AI-Driven Penetration Testing: Automating Vulnerability Assessments.” Journal of Computing and Information Technology 2, no. 1 (2022). https://universe-publisher.com/index.php/jcit/article/view/31
62 Linghan Huang, Peizhou Zhao, Huaming Chen, and Lei Ma. “Large language models based fuzzing techniques: A survey.” arXiv preprint arXiv:2402.00350 (2024).
63 Iqbal H. Sarker, “Multi‐aspects AI‐based modeling and adversarial learning for cybersecurity intelligence and robustness: A comprehensive overview.” Security and Privacy 6, no. 5 (2023): e295.
64 Venkata Bhardwaj Komaragiri and Andrew Edward. “AI-Driven Vulnerability Management and Automated Threat Mitigation.” International Journal of Scientific Research and Management (IJSRM) 10, no. 10 (2022): 981-998.
65 Amir Javadpour, Forough Ja’fari, Tarik Taleb, Mohammad Shojafar, and Chafika Benzaïd. “A comprehensive survey on cyber deception techniques to improve honeypot performance.” Computers & Security (2024): 103792; Zlatan Morić, Vedran Dakić, and Damir Regvart. “Advancing Cybersecurity with Honeypots and Deception Strategies.” Informatics 12, no. 1 (2025): 14. https://doi.org/10.3390/informatics12010014. https://www.proquest.com/scholarly-journals/advancing-cybersecurity-with-honeypots-deception/docview/3181481249/se-2
66 Srikanth Mandru, “How AI Can Improve Identity Verification and Access Control Processes,” Journal of Artificial Intelligence & Cloud Computing 1, no. 4 (2022): 1–5, https://doi.org/10.47363/JAICC/2022(1)E101
67 Sundar Tiwari, Writuraj Sarma, and Aakash Srivastava. “Integrating Artificial Intelligence with Zero Trust Architecture: Enhancing Adaptive Security in Modern Cyber Threat Landscape.” International Journal of Research and Analytical Reviews 9 (2022): 712-728.
68 Shoroog Albalawi, Lama Alshahrani, Nouf Albalawi, Reem Kilabi, and A’aeshah Alhakamy. “A comprehensive overview on biometric authentication systems using artificial intelligence techniques.” International Journal of Advanced Computer Science and Applications 13, no. 4 (2022): 1-11.
69 Muhammad Mudassar Yamin et al., “Applications of LLMs for Generating Cyber Security Exercise Scenarios,” IEEE Xplore, September 26, 2024, https://ieeexplore.ieee.org/document/10695083/
70 Florida House of Representatives. CS/HB 919 (2024) - Artificial Intelligence Use in Political Advertising. Accessed May 17, 2025. https://www.flhouse.gov/Sections/Bills/billsdetail.aspx?BillId=79571
71 Florida House of Representatives. SB 1536 (2025) - Cybersecurity. Accessed May 17, 2025. https://flhouse.gov/Sections/Bills/billsdetail.aspx?BillId=82036
72 Florida House of Representatives. Information Technology Budget & Policy Subcommittee - 03/25/2025 09:00 AM Meeting Bill Summary Report. Accessed May 17, 2025. https://www.flhouse.gov/meeting-bill-summary-report?MeetingId=14655&CommitteeId=3280
73 “Global AI Power Rankings: Stanford HAI Tool Ranks 36 Countries in AI.” Stanford University Human-Centered Artificial Intelligence, November 21, 2024, https://hai.stanford.edu/news/global-ai-power-rankings-stanford-hai-tool-ranks-36-countries-ai
74 “Which Countries are Leading in AI?” Stanford HAI, accessed May 7, 2025, https://hai.stanford.edu/ai-index/global-vibrancy-tool
75 “15 Largest AI Companies in 2024,” Team Stash, August 8, 2024, https://www.stash.com/learn/top-ai-companies/
