The rewards and risks to companies of generative artificial intelligence


Generative artificial intelligence (GenAI) products offer exciting new features that promise to make individuals, industries, and businesses more efficient. The technology has also raised alarms over issues such as copyright infringement, intellectual property rights, plagiarism, and criminal misuse. Within companies, there are additional risks, such as data leaks, privacy concerns, insecure code, and cybercrime.

26 December 2024 · 7 mins

For decades after the term ‘artificial intelligence’ (AI) was first introduced in 1956 as a new field of computer science, its applications remained mostly limited and theoretical. Researchers lacked regular access to the computing resources required to process the massive amounts of data needed to run AI algorithms.

An AI milestone was achieved in 1997, when the IBM supercomputer Deep Blue, programmed with AI models trained on chess moves, defeated world chess champion Garry Kasparov. Twenty-five years later, with the introduction of the Chat Generative Pre-trained Transformer (ChatGPT) in 2022, AI entered the mainstream. Within two months of its launch, ChatGPT had 100 million users, making it the fastest-growing computer application in history.

ChatGPT has since been prominently featured in news media around the world, fuelling intense debate over its ability to displace workers, enable plagiarism, steal intellectual property, and spread misinformation. As an example of GenAI, ChatGPT is a large-scale AI natural language model that uses machine learning to generate human-like text in response to prompts.
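The idea of generating text in response to prompts can be illustrated with a radically simplified sketch: where ChatGPT uses a transformer network trained on web-scale data to predict the next token, the toy bigram model below (all names and the corpus are illustrative, not part of any real system) simply predicts the next word from counted word pairs.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real GenAI model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word: a bigram model, a drastically
# simplified stand-in for the transformer networks behind ChatGPT.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_token(prompt_word):
    """Return the most likely next word after prompt_word, or None."""
    counts = following[prompt_word]
    return counts.most_common(1)[0][0] if counts else None

# In this corpus, 'the' is followed by 'cat' twice, 'mat' and 'fish' once.
print(next_token("the"))  # -> cat
```

Real models replace these raw counts with learned probabilities over a vocabulary of tens of thousands of tokens, but the core loop, predicting the next token from context, is the same.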

Broad use cases for GenAI

Information technology professionals, computer scientists, tech industry analysts, and futurists see GenAI products as a game changer with a multitude of uses. Some predict the technology represents a step closer to artificial general intelligence (AGI), when computers will be able to accomplish any intellectual task that human beings can perform.

Nearly every industry at enterprise scale is investigating uses for GenAI.

In healthcare, the technology is being used in drug discovery to increase the speed of antibody design, to create personalised treatment plans, and to identify anomalies in medical imaging. In law, it’s being used to generate documents, provide analysis, and do research. In business, it’s used to develop written and graphic marketing content and handle customer service queries. In IT organisations, it’s being used very effectively to detect cyber threats, identify attacks, and enhance zero-trust strategies.

Inherent risks of unregulated use

But developers and users of GenAI products have also been open about the many risks, both realised and potential, posed by the technology, including:

  • Data and privacy leaks: Employees may input personal or proprietary information, company secrets, and intellectual property into a GenAI query.
  • Insecure code: Relying on a predictive language model to write code, without code review processes and code vetting, can result in cybersecurity vulnerabilities.
  • Bad actors: Criminals on the Dark Web can now use GenAI to generate malware and commit crimes without writing original code.
  • Realistic phishing attacks: Delivered in chats, videos, or deep fake ‘live’ encounters, phishing attacks are expected to become more persuasive.
  • Malware: GenAI can generate malicious code that can replicate and adapt to mitigation efforts with a speed and approach that is beyond human capabilities.
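The insecure-code risk above can be made concrete. As a hypothetical illustration (the function and table names are invented for this sketch), an unreviewed AI-generated snippet might build a database query by string interpolation, leaving it open to SQL injection; a code review should insist on the parameterised form:

```python
import sqlite3

# Hypothetical illustration: the kind of flaw an unvetted, AI-generated
# snippet might contain, next to the version a reviewer should require.
def find_user_unsafe(conn, username):
    # String interpolation lets a crafted username rewrite the query.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # -> 2 (every row leaks)
print(len(find_user_safe(conn, payload)))    # -> 0 (no such user)
```

Both functions look plausible in isolation, which is exactly why generated code needs the same review and vetting processes as human-written code.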

On a global scale, people working in AI worry about the use of GenAI to spread propaganda, create societal-scale disruptions, and eliminate millions of jobs. In the United States, a proposed initiative would require companies to report to the government any large-scale AI models in training, as well as any attempt by a foreign individual to purchase computational services capable of training a large AI model. The European Union has passed the AI Act, which seeks to regulate AI risks and to provide more transparency into the use of copyrighted material.

For businesses, the risks to data and privacy, and the threat of cyber-attacks, can be reduced through a combination of pervasive cybersecurity solutions, education on the proper use of GenAI, clear usage and data management policies, and consistent monitoring and enforcement.

As GenAI becomes part of digital products and tools, IT departments have no choice but to figure out how to harness its benefits while minimising its risks.

For more on the security impacts of generative AI, read “How Generative AI is Reshaping Security.”