ChatGPT doesn’t discriminate – it’s like an eager puppy, trying to fulfill its owner’s every demand, all for that little treat (or in ChatGPT’s case, a little “thumbs up” click). ChatGPT, the AI language model that seems to know all the answers, has been causing a sensation in all the right and wrong ways. Yes, this robot can enrich our professional lives, so despite worries about lost jobs and world takeover, everyone’s buzzing about it. You, us, and unfortunately, cybercriminals too.
The language model has good intentions – to help the human! – which sounds wonderful... not just for the good guys but for cybercriminals as well. It can be misused, accidentally or maliciously, becoming a potential threat to your business and its data.
In this article, we’ll explore some of the ways ChatGPT is used to help businesses, how it can be a threat to data security, and provide tips on how businesses can protect themselves.
ChatGPT is a great tool for many things in day-to-day business operations, from replying to emails to data analysis and processing, and even much more complicated tasks. You could even use it to create a chatbot for your business, or have it organize tons of data for you without breaking a sweat.
Sound enticing? Some of these tasks would take a human hours to complete, and a bot can do it in less than a minute... Are you yelling “Sign me up!” as you read this?
But while using a bot to analyze your company’s latest financial information seems like a great time-saving idea, it could backfire pretty quickly.
ChatGPT tries to simulate how the human brain works. It absorbs information into its robot “brain” and can recall that information when it needs to, using it in subsequent analyses and interactions. But as they say, with great power comes great responsibility – ChatGPT “knows” an incredible amount of information.
While there are thousands of prompts you can give ChatGPT, “keep my sensitive data confidential” isn’t one of them.
ChatGPT is a deep learning model, which allows it to learn from its conversations with users. In other words, your conversations will not stay between the two of you. No matter how personable ChatGPT may sound, it’s just a heartless, emotionless program with no regard for you or your business.
Consider these tasks that you or your IT department could ask ChatGPT to assist with:
Can you spot the threats?
But wait, there’s more! Not only does ChatGPT remember any input you give it for future analysis, but OpenAI employees can also access data from your ChatGPT chats, adding an extra layer of human factor to this whole data security situation. Remember, the human factor was involved in 82% of all data security breaches in 2022 (according to the 2022 Verizon Data Breach Investigations Report). And that was before ChatGPT became a part of our everyday lives.
And that’s still not everything – if you aren’t careful and, say, use an unsecured or public Wi-Fi network to have a conversation with ChatGPT, someone with ill intentions could potentially access your chat and see what data you’re sharing.
Those examples at the beginning of this section? They come from real-life situations that could’ve easily resulted in data breaches caused by ChatGPT (most of them are from an investigation at Samsung!).
Overreliance on AI and ChatGPT can lead to neglecting important aspects of data security, such as manual review and verification.
| Mitigation measures: Long story short, don’t let ChatGPT’s promise of quick and easy data analysis cloud your judgement. Adopt a security-first mindset and educate your employees not only about your company’s data security policies, but also about AI and the potential threats it poses to businesses. Remember, there are extra considerations when it comes to remote workers. |
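One practical way to back up that security-first mindset is to sanitize anything employees paste into an AI chatbot before it leaves the company. The sketch below is a minimal, hypothetical example: the redaction patterns and the `sanitize_prompt` helper are illustrative stand-ins, not a complete or production-grade filter.

```python
import re

# Hypothetical patterns for data that should never leave the company.
# These are illustrative, not exhaustive - adapt them to your own policies.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace sensitive matches with placeholders before the text
    is sent to any external AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize the call with jane.doe@example.com about key sk-abcdef1234567890XYZ"
print(sanitize_prompt(prompt))
# Summarize the call with [REDACTED EMAIL] about key [REDACTED API_KEY]
```

Even a simple pre-filter like this reduces the chance of sensitive details ending up in a third party’s training data or chat logs.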
There is an entire dark network thriving and making millions from stolen data belonging to organizations and individuals worldwide. In fact, ransomware groups are run like regular businesses, complete with marketing departments and RaaS (ransomware as a service) products! It’s safe to assume that once ChatGPT became available for everyone to explore back in November 2022, cybercriminals had a field day.
And explore they did. The bad guys quickly jumped on the language bot bandwagon, using it to find data security system vulnerabilities, write convincing phishing emails, help create ransomware and even custom malware to evade security systems.
All of this is now faster, less detectable, and more grammatically correct than ever before (phishing emails are notorious for raising red flags with their grammar mistakes).
ChatGPT is making it easier for cybercriminals to execute attacks and steal sensitive data from businesses.
| Mitigation measures: Businesses should implement appropriate DLP security measures, such as encryption, access controls, and regular security audits. If security threats are getting more sophisticated, so should your DLP measures. A dedicated data loss prevention software (such as Safetica NXT) could be a game-changer for any SMB or larger organization. |
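To make the idea of DLP concrete, here is a minimal sketch of the kind of outbound content scan such tools perform: match content against policy rules before it leaves the perimeter. The rule names and patterns are hypothetical illustrations, not how Safetica NXT or any other product actually works internally.

```python
import re

# Illustrative DLP rule set: each rule pairs a name with a detector.
# Real DLP products use far richer detection (fingerprinting, context,
# machine learning), but the principle is the same.
RULES = [
    ("ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("iban", re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")),
    ("confidential-label", re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE)),
]

def scan_outbound(text: str) -> list[str]:
    """Return the names of all rules the text violates.
    An empty list means the content may pass."""
    return [name for name, rx in RULES if rx.search(text)]

print(scan_outbound("Q3 report - CONFIDENTIAL - SSN 123-45-6789"))
print(scan_outbound("nothing sensitive here"))
```

A real deployment would hook a check like this into email gateways, file uploads, and clipboard monitoring, then block or flag the transfer instead of just reporting it.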
AI and ChatGPT can also be used by insiders to carry out data breaches. For example, an employee could use AI to identify sensitive data and then use ChatGPT to generate well-written phishing emails to other employees or business partners.
Insider threats can be difficult to detect because the insider already has legitimate access to sensitive data – access they can abuse to steal even more.
| Mitigation measures: To reduce the risk of insider threats, businesses should use a Zero Trust approach to limit access to sensitive data to only those who need it. Safetica ONE is one of the best DLP products out there, and it includes Insider Threat Protection mechanisms. |
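The Zero Trust principle mentioned above boils down to “deny by default, allow only what is explicitly needed.” The toy example below illustrates that logic; the role names and resources are hypothetical and have nothing to do with any specific product.

```python
# Deny-by-default access control in the spirit of Zero Trust:
# nothing is trusted implicitly, every grant is explicit.
# Roles and resources here are purely illustrative.
PERMISSIONS = {
    "analyst": {"sales_reports"},
    "hr_manager": {"employee_records", "payroll"},
}

def can_access(role: str, resource: str) -> bool:
    """Allow access only if the role was explicitly granted the resource."""
    return resource in PERMISSIONS.get(role, set())

print(can_access("hr_manager", "payroll"))   # True
print(can_access("analyst", "payroll"))      # False - least privilege
print(can_access("intern", "payroll"))       # False - unknown roles get nothing
```

The point is the default: an unknown role or an ungrated resource gets a “no” without any special casing, which is exactly what limits how far an insider can roam.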
In addition to the potential damage to a business's reputation and finances, data breaches, including those caused or assisted by ChatGPT, can lead to legal consequences and regulatory compliance failures. For example, businesses may be subject to fines and other penalties if they are found to be in violation of data protection laws such as GDPR, CCPA, or HIPAA. There are no exceptions saying it’s not your fault if the robot did it!
Italy has even banned ChatGPT over privacy concerns. Whether or not those concerns are justified is yet to be seen, as investigations are currently underway.
| Mitigation measures: Businesses have to stay up to date on the regulations they need to comply with and have a comprehensive information security management system in place (ISO 27001 can help with that). Talk to your employees about privacy policies and data protection, and discuss the threats of using ChatGPT with them. |
Here are some tips for businesses looking to protect themselves from the potential threats posed by ChatGPT: