AI in Cybersecurity: Opportunities and Threats

Steven Atnip, Principal Consultant for Verizon Threat Research Advisory Center

Key points about types of AI and their functions

In the July edition of the Verizon Threat Research Advisory Center’s (VTRAC) Monthly Intelligence Briefing, Steven Atnip, Principal, Cybersecurity Consulting, discussed several types of AI engines and platforms in common use, the opportunities they present for the enterprise, and a high-level view of the threats that come along with those capabilities.


Common types of AI

  • Generative AI (think ChatGPT and Microsoft Copilot) generates text based on your prompts.

  • Autonomous Systems – Waymo and Tesla, for example. These systems are self-directed, using electro-optical sensors and different wavelengths of light to measure distance and perform mapping.

  • Recommender Systems – This is the sort of AI many of us have been using for years. Think Netflix, YouTube, Spotify, etc. The user likes a certain song, movie or TV show, and based on that, AI recommends other, similar items.

  • Predictive AI – AI used to forecast outcomes, powering things such as credit scoring models and Google Maps.

  • Conversational AI – Alexa, Siri, Hey Google, etc. These respond only in certain ways; they're not predictive and don't know what you're going to ask next, but they can respond to prompts and provide you with an answer.

  • Computer vision systems – Smartphones, security systems, and so on. Given current trends, AI systems may be increasingly integrated into our mobile devices, a logical next step considering how ubiquitous AI already is in our daily interactions. Moving forward, we may see a shift away from the marketing and consumption uses that dominate today, toward larger integration within the overall security framework.

So, let’s take a look at the sorts of opportunities, and the threats that often go hand in hand with them, that we might see on the horizon with regard to AI.


Possible AI-driven Cybersecurity Opportunities

Real-time fraud detection: One important potential outcome of continued improvement in AI security features is real-time monitoring of transactions to detect fraud. Reducing the time it takes to recognize fraud from hours or days down to minutes, or even real time, would be a clear benefit to the enterprise.
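As a rough illustration of the idea, the minimal sketch below scores each incoming transaction against an account's recent history and flags statistical outliers. It is purely hypothetical: the window size, threshold and account identifiers are assumptions, and a production system would use far richer features and models.

    from collections import defaultdict, deque
    from statistics import mean, stdev

    WINDOW = 50        # recent transactions kept per account (assumed)
    Z_THRESHOLD = 3.0  # flag amounts more than 3 standard deviations from the mean

    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def score_transaction(account_id: str, amount: float) -> bool:
        """Return True if this amount looks anomalous for this account."""
        past = history[account_id]
        flagged = False
        if len(past) >= 10:  # require some history before scoring
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(amount - mu) / sigma > Z_THRESHOLD:
                flagged = True
        past.append(amount)
        return flagged

    # A sudden large purchase on a small-ticket account gets flagged.
    for amt in [12.50, 9.99, 14.20, 11.00, 8.75, 13.40, 10.10, 9.60, 12.00, 11.80]:
        score_transaction("acct-001", amt)
    print(score_transaction("acct-001", 950.00))  # True

The point is not the particular statistic but the shape: per-account state, a score computed inline as each transaction arrives, and a decision in milliseconds rather than hours.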

Enhanced “Know Your Customer” (KYC) Capabilities: AI can expand opportunities to know and understand your customer base, an advantage for any business. Understanding the preferences and habits of your customers can provide significant positive outcomes.

Enhanced “Know Your Adversary” (KYA) Capabilities: On the flip side, you can also get to know your adversaries more fully. Are they utilizing AI (almost certainly)? What types of AI are they using, and for what purpose exactly? Closing this gap in understanding adversary tactics and capabilities should provide a huge area of opportunity for enterprise organizations.


Possible AI-driven Cybersecurity Threats

Voice cloning – AI continues to pave the way for techniques such as voice cloning, which can be used to obtain victim credentials. Those credentials can then be used for basic fraud, cyber espionage and any number of other criminal activities. This means, of course, that many industries face difficult upgrades to aid in detecting and preventing tactics like these.

Advanced Spear Phishing Attacks – Threat actors are using AI systems to cross language barriers much more easily than in the past. This capability may prove a tremendous boon to attackers in creating more efficient and effective phishing campaigns. For instance, spear phishing attacks can be quickly tailored not just to one target, but to thousands. As one can imagine, the resulting success rate could be exponentially higher.

Data Leaks due to unsanctioned use of AI at work – The 2025 Verizon Data Breach Investigations Report states that approximately 14% of employees routinely access Generative AI (GenAI) systems on their corporate devices. Of particular concern is how these systems are being accessed:

  • 72% are using a non-corporate email address to identify their accounts.
  • 17% are using their corporate email address without an integrated authentication system like Security Assertion Markup Language (SAML).

The lesson here is that this activity can pose a significant risk, both because of the types of data being fed to these systems and because the AI tools in use may not be approved, and as a result may be inadequately secured or not secured at all. Sensitive information can be lost simply by providing it to an AI platform in pursuit of a better response. Even though the user may get more productive or actionable feedback from richer prompts, it may come at the price of valuable, unsecured organizational data falling into the wrong hands. There is therefore a need to balance quick results from AI against possible unwanted outcomes and privacy violations.
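To make the two access patterns above concrete, here is a minimal, hypothetical triage sketch over web-proxy-style log entries. The corporate domain, the list of GenAI destinations and the log field names are all assumptions for illustration; real telemetry and identity data would look different.

    # Hypothetical triage of proxy log entries for GenAI destinations.
    CORPORATE_DOMAIN = "example.com"  # assumed corporate email domain
    GENAI_DOMAINS = {"chat.openai.com", "copilot.microsoft.com", "gemini.google.com"}

    def classify(entry: dict) -> str:
        """Return a risk label for one log entry."""
        if entry["destination"] not in GENAI_DOMAINS:
            return "not GenAI"
        email = entry["account_email"].lower()
        if not email.endswith("@" + CORPORATE_DOMAIN):
            return "high risk: non-corporate account"       # the 72% pattern
        if not entry.get("saml_authenticated", False):
            return "medium risk: corporate email, no SAML"  # the 17% pattern
        return "sanctioned"

    log = [
        {"destination": "chat.openai.com", "account_email": "pat@gmail.com"},
        {"destination": "copilot.microsoft.com",
         "account_email": "pat@example.com", "saml_authenticated": False},
    ]
    for entry in log:
        print(entry["destination"], "->", classify(entry))

Even a crude pass like this can begin to separate sanctioned, SAML-backed use from the shadow accounts the DBIR figures describe.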


Best Practices for AI implementation

The main pillars to consider when designing an AI system are:

  • Transparency – ensuring that your AI decisions are explainable to your stakeholders. For example, when an AI system suffers ‘hallucinations,’ organizations need a process to determine how the system arrived at that outcome and how to avoid it in the future (see the logging sketch after this list).

  • Fairness – these systems need to be audited regularly to remove biased answers and to make sure that the problems they cause do not outweigh the benefits they provide.
  • Accountability – establish accountability at both the company level and the regulatory level.
  • Security – organizations must implement robust security measures to protect AI systems.
  • Compliance – align AI practices with existing laws and regulations.
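
One small, practical step toward the transparency and accountability pillars is recording every AI decision in an append-only audit trail so that outcomes, including hallucinations, can be traced later. The sketch below is a hypothetical illustration; the record fields, file name and model identifiers are assumptions, not a prescribed schema.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class DecisionRecord:
        timestamp: float
        model_version: str
        prompt: str     # or a hash, if the input is sensitive
        output: str
        rationale: str  # explanation captured for stakeholders

    AUDIT_LOG = "ai_decisions.jsonl"  # assumed append-only audit file

    def record_decision(model_version: str, prompt: str,
                        output: str, rationale: str) -> None:
        """Append one AI decision to a JSON-lines audit trail."""
        rec = DecisionRecord(time.time(), model_version, prompt, output, rationale)
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

    record_decision(
        model_version="risk-scorer-1.2",
        prompt="Score transaction acct-001 / $950.00",
        output="flagged",
        rationale="Amount exceeded 3 sigma of the 50-transaction rolling window",
    )

A trail like this gives auditors the raw material for transparency and accountability reviews: what was decided, by which model version, and why.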

A key focus will be on clearly defining our end-state objectives, especially when leveraging AI. Whether we're utilizing, acquiring, or developing AI solutions, half of our effort should be dedicated to establishing these clear goals and then structuring our parameters and data accordingly. This approach should yield significantly better results.

From a business perspective, identifying pain points to create clear value is crucial. Unlike human interaction, which typically surfaces pain points through direct feedback, AI is designed to be rewarded by positive feedback and continued utilization. Therefore, we should establish robust feedback systems to continually identify and address these areas.

In the coming years, we anticipate customer service and marketing will be prominent areas for AI systems. These applications will likely extend far beyond current chatbot functionalities on retail websites.

It's essential to invest in employee training to enhance AI literacy and awareness. While our IT security teams are already well-versed in AI, a significant portion of our workforce may not be using it, may not be aware of its capabilities, or may be hesitant to adopt it. Training will help ensure they understand both the benefits and risks of AI. On the positive side, AI can boost individual and organizational productivity. However, we must also be aware of the threats posed by malicious actors and ensure we use this powerful tool responsibly.

Furthermore, for leaders, it's vital to align our AI strategy with our institution's broader goals. We should avoid implementing AI just for the sake of it. Instead, we need a clear vision with a focus on return on investment. This will help ensure that our AI initiatives effectively contribute to our organizational objectives and help prevent us from wasting resources or heading in the wrong direction.

Finally, when engaging with third-party vendors, it's vital to exercise due diligence: thoroughly vet them and maintain active engagement.

To learn more about AI in Cybersecurity, view a replay of the Monthly Intelligence Briefing webinar at: https://www.brighttalk.com/webcast/15099/634212.

Steven is a Principal Consultant for the Verizon Threat Research Advisory Center (VTRAC). His focus is SMS Mobile Fraud, Dark Web Hunting, Threat Intelligence, and Identity Protection. Steven has worked as a cyber intelligence analyst and dark web hunter across the telecom and financial industries for 11 years. He formerly served in Naval Intelligence for eight years, including at the National Geospatial-Intelligence Agency (NGA), the Office of Naval Intelligence, and Combined Joint Task Force – Horn of Africa. He holds a master's degree in National Security, a bachelor's in Intelligence Studies, and a CISSP certification.