Cyber-crime is a complicated, multifaceted problem that leaves anyone with access to a computer or smartphone vulnerable. The continuous barrage of phishing attempts in both personal and corporate email accounts demonstrates attackers’ persistence, as well as the skill with which they continuously thwart efforts to defend against such attacks. Yet even as attackers’ tools grow more sophisticated, the inescapable reality remains that most cyber-attacks can ultimately be traced back to a human being who inadvertently opened the door. 

Impact of AI Tools on Cybersecurity 

A new weapon in the hands of hackers and other cybercriminals is the artificial intelligence tool known as ChatGPT, along with other tools like it. The development and commercialization of these artificial intelligence tools has given rise to many new risks. Generative AI is artificial intelligence capable of generating text, images, or other media in response to prompts. Generative AI models learn the patterns and structure of their training data by applying neural network machine learning techniques, and then generate new data with similar characteristics. While there is much discussion of the ethical implications of such tools, criminals are not terribly concerned with the ethicality of their actions. 
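
By way of illustration only, the short Python sketch below shows what “generating text in response to a prompt” looks like in practice. It assumes the official OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name is an example and may differ from what is currently offered.

from openai import OpenAI

# Reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

# Ask the model to generate new text in response to a prompt. The model
# name is an example; available models change over time.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize generative AI in one sentence."}
    ],
)

# The reply is newly generated text whose style reflects patterns the
# model learned from its training data.
print(response.choices[0].message.content)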

So, what impact will the use and availability of such tools likely have on businesses’ ability to ensure the security and privacy of the data they create, collect, use, or maintain? The new risks created by these AI-driven tools will require companies to add more layers of protection to their systems. Companies will also need to employ or retain individuals or resources skilled in the newest technologies and able to adapt as the threat landscape continues to evolve. 

Utilization of ChatGPT in Cyber Attacks 

One obvious use of artificial intelligence tools by cybercriminals is the formulation of phishing and social engineering communications. For decades, cybersecurity training has taught employees to read emails carefully, treating misspellings and grammatical errors as the hallmarks of a malicious phishing communication. Criminals now have a tool that creates finely crafted, error-free communications to ensnare recipients into opening the door to malfeasance. 

Unfortunately, the email or text communication carrying a malicious payload will now be targeted, well written, and seemingly from an articulate human being. ChatGPT enables hackers from across the globe to converse fluently in virtually any language, effectively increasing the likelihood that cybercriminals will avoid detection when executing a phishing scheme.1 

To counter the new efficacy of ChatGPT-generated phishing attacks, companies must implement monitoring tools that can detect communications generated with such AI-driven tools and segregate the tagged communications for further scrutiny. The marketplace has already responded with tools designed to detect content created using ChatGPT, and the continuing development of such detection tools is essential to meet what is sure to be a quickly evolving threat.2 
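
The sketch below is illustrative only: the detector function is a stand-in for a real AI-text-detection service of the kind now appearing in the marketplace, and its scoring heuristic is a placeholder, not a working detection method. It shows only the routing pattern of tagging and segregating suspect messages for further scrutiny.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def ai_likelihood(text: str) -> float:
    # Stand-in for a real AI-text-detection service. The "score" here is
    # a placeholder, not a working detection method.
    words = text.split()
    return min(1.0, len(words) / 500)

QUARANTINE_THRESHOLD = 0.8  # example cutoff; real systems tune this

def route(message: Email) -> str:
    # Segregate suspect messages for further scrutiny; deliver the rest.
    if ai_likelihood(message.body) >= QUARANTINE_THRESHOLD:
        return "quarantine"
    return "deliver"

message = Email("unknown@example.com", "Invoice update", "Dear colleague, ...")
print(route(message))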

AI Technology in Malicious Coding 

The feasibility of using ChatGPT to craft communications delivered to a recipient by text or email is readily apparent, but cybercriminals have found even more uses for AI technology. ChatGPT has proven very effective at assisting in the creation of computer code and other forms of computer programming. In anticipation of the misuse of such AI-driven technology, ChatGPT has been programmed to detect requests that appear to be malicious in nature or clearly intended for hacking into systems. When the tool detects that the user is creating malicious or hacking code, it advises the user that it is meant to assist with useful and ethical tasks in adherence with its ethical guidelines and policies. The list of activities that OpenAI has identified as “disallowed” is extensive and includes the following (a programmatic illustration of such policy screening appears after the list): 

  • Illegal activity 
  • Generation of malware: “Content that attempts to generate code that is designed to disrupt, damage, or gain unauthorized access to a computer system.” 
  • Activity that violates people’s privacy.3 
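
As one illustration of programmatic policy screening, the sketch below uses OpenAI’s publicly documented moderation endpoint to flag a prompt before it is forwarded to a model. This is a related, documented safeguard rather than a description of ChatGPT’s internal refusal logic, and whether any particular malicious-code prompt is actually flagged depends on the content categories the endpoint covers.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_disallowed(prompt: str) -> bool:
    # Ask the moderation endpoint whether the prompt violates policy.
    # Note: the endpoint covers specific content categories, so not every
    # malicious-code request will necessarily be flagged.
    result = client.moderations.create(input=prompt)
    return result.results[0].flagged

prompt = "Write code that captures a user's passwords without their knowledge."
if is_disallowed(prompt):
    print("Prompt flagged as violating usage policies; not forwarded.")
else:
    print("Prompt passed moderation screening.")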

It should come as no surprise that mere warnings or alerts are unlikely to dissuade criminals from attempting to use these types of tools in their endeavors. Indeed, hackers have most likely already discovered ways to manipulate ChatGPT, and other tools like it, for malicious purposes. 

Data Security Risks and Employee Behavior 

An additional concerning trend regarding the use of ChatGPT within business involves employees loading sensitive, confidential, or proprietary corporate data into ChatGPT. One data security company reported detecting 4.2% of its clients’ employees doing exactly that. Its examples included an executive who “cut and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck” and a doctor who “input his patient’s name and their medical condition and asked ChatGPT to craft a letter to the patient’s insurance company.”4 

While seemingly innocuous, this behavior puts such data at high risk of breach or misuse. In fact, OpenAI has already notified users that customer data was exposed in a breach.5 
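
As a simplified illustration only (the patterns and blocking logic below are examples, not a data-loss-prevention product), a company could screen text before it is pasted into an external AI tool:

import re

# Example patterns only; production systems use far richer detection
# (trained classifiers, document fingerprinting, exact-match data sets).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marking": re.compile(r"\b(confidential|proprietary|internal only)\b", re.I),
}

def blocked_reasons(text: str) -> list:
    # Return the names of any sensitive patterns found in the text.
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Per our CONFIDENTIAL 2023 strategy document, draft a slide deck..."
reasons = blocked_reasons(prompt)
if reasons:
    print("Blocked before upload:", ", ".join(reasons))
else:
    print("No sensitive markers detected.")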

Inescapably, the use of AI-driven tools such as ChatGPT will continue to grow and evolve, and so too will the risks resulting from this type of technology. Companies should be ever mindful that while it is good to be on the leading edge of technology, it is never good to be on the bleeding edge. 

1 “ChatGPT and data: Everything you need to know,” Cyber Security Hub, May 24, 2023 (https://www.cshub.com/attacks/articles/chatgpt-and-data-everything-you-need-to-know) 

2 Jim Chilton, “The New Risks ChatGPT Poses to Cybersecurity,” Harvard Business Review, April 21, 2023 (https://hbr.org/2023/04/the-new-risks-chatgpt-poses-to-cybersecurity) 

3 OpenAI, Usage Policies (https://openai.com/policies/usage-policies) 

4 Robert Lemos, “Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears,” Dark Reading, March 7, 2023 (https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears) 

5 OpenAI, “March 20 ChatGPT outage: Here’s what happened” (https://openai.com/blog/march-20-chatgpt-outage); “ChatGPT Confirms Data Breach, Raising