Outpaced, Outsized, Outsmarted: Is Your Enterprise Security AI-Ready?


AI is ushering in new discoveries, creativity, and customer-focused innovation, but security risks loom. How can enterprises accelerate digital and security transformation that continuously balances reward and risk?

Mike Purtell
Innovation Senior Software Engineering Manager, Iron Mountain
9 July 2024 · 7 min read

An entire gap-free sequence of the 3+ billion letters of our DNA is crucial to understanding and preventing disease. The Human Genome Project, launched in 1990, required 13 years to sequence the human genome to about 92% completion. Fast forward to 2022, when Stanford University and its partners used AI to set the Guinness World Record for sequencing a human genome in 5 hours and 2 minutes, a record that still holds. Both time and cost matter when health is at stake, and the cost to sequence a human genome has fallen from about $3 billion in 2003 to around $500.

This example illustrates the rising value and accessibility of AI to organisations. But there is also a catch: an increasing reliance on AI-driven processes creates new vulnerabilities and potential attack vectors for cyber threats.

Bad actors have used AI in malware, distributed denial-of-service, and other attacks for years. Generative AI adds to the threat equation: cybercriminals use it to create convincing fake content for phishing and social media campaigns, deceive other AI systems such as spam filters, automate hacking tools, and develop sophisticated malware. In some cases, no coding and only limited prompting skills are required; users can simply chat.

Further, even well-meaning developers can inadvertently do harm with generative AI by relying too heavily on its output without sufficient testing to detect security exposures and other issues.

The risks are real, and the stakes are high

Enterprise and consumer use of generative AI large language models such as ChatGPT, Google Gemini, or Microsoft Copilot is growing rapidly. As an example, in April 2024, the consumer version of OpenAI's ChatGPT was in use at "more than 92% of Fortune 500 companies." Further, about 28% of employees use generative AI tools each day. How they use these tools introduces outsized vulnerabilities. Much of the information shared with generative AI tools is "sensitive," most often internal business information and source code that controls computer functionality and embodies corporate intellectual property.

While these security vulnerabilities grow, the ease, breadth, and speed of generative AI cyberattacks are alarming. In one case, a single person created a zero-budget, zero-day malware attack in two hours. This was accomplished as a test in a controlled environment, with an engineer simply asking questions in the chat tool without writing a single line of code. This experiment and its outcome are not unique. Anyone can misuse generative AI, lowering the barrier to entry for malicious activity.

Last year, Samsung banned the use of ChatGPT and other generative AI chatbots after three instances of data leaks. First, an engineer pasted source code into a chatbot to fix errors in the code. Second, an employee pasted code into a chatbot to optimise the code. Third, an employee used a chatbot to generate minutes of an internal Samsung meeting. All three of these scenarios created data leaks and security vulnerabilities.

The three O’s of AI security risks

Simply put, AI and generative AI create a “perfect storm” of vulnerability for these three reasons:

  1. Outpaced. Generative AI can create new types of attacks faster than cyber defences can be developed to remedy them, escalating vulnerabilities.
  2. Outsized. Cybercriminals can use generative AI to quickly scale the magnitude of their attacks, skyrocketing risks. And if code produced by generative AI is based on code with security vulnerabilities, those vulnerabilities propagate readily without the organisation's knowledge.
  3. Outsmarted. Criminals using AI, particularly generative AI, make cyberattacks "smarter" by creating fake text, voice, images, and video, increasing their effectiveness at very low cost.

These three realities demonstrate the breadth of AI's potential for malicious use. In addition to fuelling scamming and phishing schemes with fake content, AI-based attacks include data poisoning, model theft, model evasion, and model data extraction.
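
To make one of these concrete: data poisoning works by tampering with the data a model learns from, so a basic defence is to verify a dataset's integrity before training. Below is a minimal sketch in Python (the file name and demo data are hypothetical placeholders) that pins a training file to a known-good SHA-256 digest so silent tampering is caught early:

```python
# Minimal integrity check before training: compare a dataset file's SHA-256
# against a digest recorded when the data was vetted, so silent tampering
# (one vector for data poisoning) is detected before the model trains.
# The file name and demo contents are hypothetical placeholders.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file so large datasets don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(path: Path, expected: str) -> None:
    """Refuse to proceed if the dataset no longer matches its approved hash."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"Dataset hash mismatch: {actual} != {expected}")


if __name__ == "__main__":
    # Demo: create a tiny stand-in dataset, record its hash, then verify.
    demo = Path("train.csv")
    demo.write_text("feature,label\n1.0,0\n2.0,1\n")
    approved = sha256_of(demo)  # captured at the moment the data was vetted
    verify_dataset(demo, approved)
    print("Dataset integrity verified; safe to start training.")
```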

How to secure your enterprise

The promise of AI is so profound that enterprises can’t ignore it. AI leaders can significantly outpace the competition, but only if they mitigate security and other risks of using it. Here are some steps that enterprises can take to do so:

  1. Shore up your IT threat detection and response capabilities. Enterprises can use AI to automate threat detection and response, acting faster and more accurately. Machine learning enables organisations to adapt dynamically and mitigate evolving threats (see the anomaly-detection sketch after this list).
  2. Expand your acceptable use policies. Organisations can spell out when auditing, logging, and monitoring of AI use will occur and when AI detection tools may be applied. Enterprises can also promote personal responsibility in the safe use of AI: companies and employees share joint responsibility for any content they create, AI-generated or not, and this should be clearly stated in company policies (see the prompt-screening sketch after this list).
  3. Provide employees with the knowledge to anticipate threats and help avoid them. For example, organisations can bolster employee understanding of when AI works best and where it may be limited, particularly when inputs and outputs include sensitive or proprietary data.
  4. Activate a unified asset strategy to protect, manage, and optimise the value of digital and physical assets from creation, acquisition, and deployment to generating new value. When assets reach end-of-life, you can securely dispose of them in compliant and environmentally responsible ways to minimise risk. According to Moor Insights and Strategy (MI&S), organisations should "implement a strategic plan for securing physical and digital assets. While cybersecurity is a priority for many IT organisations, MI&S has found that many security teams overlook these physical assets that hide in plain view."
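
As a concrete illustration of step 1, here is a minimal sketch of ML-driven threat detection using scikit-learn's IsolationForest. All telemetry features and values (requests per minute, megabytes transferred, distinct ports) are hypothetical stand-ins; a real deployment would engineer features from actual network and log data and tune the model to its environment.

```python
# Anomaly-detection sketch: train on baseline traffic, flag unusual activity.
# Feature set and numbers are hypothetical; tune contamination to your data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline telemetry: [requests/min, MB transferred, distinct ports]
normal_traffic = rng.normal(loc=[60.0, 5.0, 3.0],
                            scale=[10.0, 1.0, 1.0],
                            size=(500, 3))

# Simulated outliers resembling exfiltration and port-scanning bursts
suspicious = np.array([[400.0, 80.0, 2.0],
                       [55.0, 4.0, 90.0]])

# Fit on baseline behaviour; predict() returns -1 for likely anomalies
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

for row, label in zip(suspicious, detector.predict(suspicious)):
    verdict = "ANOMALY" if label == -1 else "ok"
    print(f"{verdict}: req/min={row[0]:.0f}, MB={row[1]:.0f}, ports={row[2]:.0f}")
```

The design point is that the model learns what "normal" looks like rather than matching known attack signatures, which is what lets it adapt to evolving threats.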
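
And as a sketch of the kind of control step 2 implies, the snippet below screens outbound prompts for likely secrets and audit-logs every interaction. The regex patterns and the send_to_llm stub are illustrative placeholders, not a vendor API; a production system would rely on a maintained DLP ruleset.

```python
# Acceptable-use control sketch: scan outbound prompts for likely secrets
# and audit-log every AI interaction. Patterns and send_to_llm are
# hypothetical placeholders, not a real vendor integration.
import logging
import re

logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)

# A few illustrative secret formats; real deployments use maintained rulesets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS-style key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key block
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"), # credential assign
]


def send_to_llm(prompt: str) -> str:
    """Hypothetical call to an approved AI service."""
    return "(model response placeholder)"


def submit_prompt(user: str, prompt: str) -> str:
    """Block prompts that appear to contain secrets; audit-log the rest."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            logging.warning("BLOCKED user=%s pattern=%s", user, pattern.pattern)
            return "Prompt blocked: possible sensitive data detected."
    logging.info("ALLOWED user=%s chars=%d", user, len(prompt))
    return send_to_llm(prompt)


if __name__ == "__main__":
    print(submit_prompt("alice", "Summarise this meeting agenda."))
    print(submit_prompt("bob", "Fix this config: password = hunter2"))
```

A gateway like this would have blocked all three of the Samsung-style leaks described above, since the code and meeting content would never have reached the external chatbot.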

Achieving a deft balance

Future innovation leaders are the enterprises that achieve a deft balance between accelerating the extraordinary potential of AI and establishing countermeasures to reduce risk. The first steps are understanding the magnitude of the cybersecurity vulnerability, unpacking the mechanisms that can be exploited maliciously, and then taking action to mitigate their impact. By taking appropriate action, organisations can achieve notable outcomes, like sequencing the human genome in hours versus years, while protecting themselves from the risks of AI.

Learn more about cybersecurity in an evolving global environment

For more information about security threats and how businesses and governments can address them, read Protecting data where cybersecurity and global realities converge. The paper is based on a panel discussion hosted by Iron Mountain at the 2024 World Economic Forum. The panel featured world-renowned experts from business, academia, and two of the world’s foremost intelligence agencies, moderated by a Pulitzer Prize-winning journalist and author.