ChatGPT: A Risk To Businesses?

OpenAI's ChatGPT is a highly sophisticated AI chatbot that collects user information to train itself and become more responsive. There are ethical concerns that this artificial intelligence can manipulate and deceive people by producing biased content or misinformation, but there are also safety concerns that ChatGPT puts personal user data at risk of harm.

What is ChatGPT?

ChatGPT was released in November 2022 by OpenAI and is an artificial intelligence-driven chatbot trained to engage in written dialogue. The chatbot answers questions in natural language and, when asked, can mimic particular writing styles. For instance, you could ask ChatGPT to write about the weather in the style of a politician. The chatbot is built on large language model (LLM) technology, which is trained on huge volumes of text data and generates responses by analysing the statistical relationships between words and prompts.

Although chatbots have been around for a number of years, ChatGPT is seen as a major advance on earlier chatbot technologies because of its sheer intelligence and sophistication. You can ask ChatGPT to discuss almost any topic in almost any style, and it produces detailed, remarkably human-like responses.

What Are the Concerns Over ChatGPT?

Whilst ChatGPT is a useful tool for businesses, helping to draft emails and other content, the pace at which this AI is developing is concerning for a number of reasons.

  • Inaccurate Content

People use ChatGPT to draft messages, emails, articles, songs, and even CVs and cover letters. You can ask the AI to write about practically anything and adapt its writing style depending on what you need it for.  

ChatGPT constructs its content from information drawn from open sources on the internet. The problem is that this information is not guaranteed to be accurate, and the chatbot has no way of knowing whether what it scrapes is reliable. Unlike a Google search, where you can see whether ranked content comes from legitimate sources such as government websites or reputable companies, ChatGPT simply produces content in response to user requests: users cannot identify where the information has come from, whether it is biased, or even whether it is correct.

  • Scam Attacks

The National Cyber Security Centre has raised concerns that LLM software such as ChatGPT can be used as a vehicle for cybercriminals to write malicious content they would not otherwise be able to put together, making attacks more sophisticated.

This is because, unlike a search engine, ChatGPT gives detailed, context-aware responses to the questions put to it. As such, there are concerns that cybercriminals could use the software to write convincing phishing emails when they would not otherwise have had the ability to do so. Many of the people behind phishing attacks are said to lack strong English-language skills, and therefore the ability to carefully craft a convincing email, which can sometimes make it obvious that a message is not quite legitimate.

In light of this, it is important that companies have up-to-date threat-detection software in place to identify more advanced attacks. In addition, organisations should train employees to spot malicious emails, including establishing a clear company protocol for handling requests to divulge sensitive information.

  • Privacy Protection 

OpenAI’s privacy policy states that it collects personal user information, file uploads, and feedback provided to the chatbot. It also says that personal data will be used “to develop new programs and services,” and may be shared with third parties such as service providers and affiliates, and during business transfers.

The concern is that ChatGPT functions by learning from the prompts and information provided to it. If users supply personal information or upload files containing sensitive data, all of it is captured and stored. Likewise, if users ask sensitive questions about, for instance, their health, finances, or legal matters, those queries are saved and used by the chatbot in its learning process. This means that ChatGPT can hold very personal data about anyone and can also disclose it to third parties (as stated in the privacy policy). If this data were then hacked, users could have their sensitive information exposed, stolen, and used to commit crimes such as fraud.

As such, companies must be very careful about the information they input into ChatGPT and must ensure that they do not provide personal data that could be traced back to them, used against them, or put people at risk. For instance, if an organisation asks the chatbot to write an email including an employee’s name, date of birth, and passport number, ChatGPT will retain that information, and if it is leaked, the employee’s personal data may be at risk.
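Since prompts may be retained and used for training, one practical safeguard is to strip obvious personal identifiers from text before it is ever submitted to a chatbot. The following is a minimal, hypothetical Python sketch using simple regular expressions; the patterns, labels, and the `redact` helper are illustrative assumptions rather than any official tooling, and free-text identifiers such as names are deliberately left uncaught to show the approach's limits.

```python
import re

# Illustrative patterns for a few common identifier formats. Real
# redaction would need far more robust detection (e.g. dedicated
# PII-detection or named-entity-recognition tooling).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PASSPORT": re.compile(r"\b\d{9}\b"),  # e.g. 9-digit passport numbers
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = (
    "Draft an email confirming that Jane Smith, born 01/02/1990, "
    "passport number 123456789, can be reached at jane.smith@example.com."
)
print(redact(prompt))
# Draft an email confirming that Jane Smith, born [DATE REDACTED],
# passport number [PASSPORT REDACTED], can be reached at [EMAIL REDACTED].
```

Note that the employee’s name passes through untouched: pattern-based filters only catch identifiers with a predictable format, which is why a real compliance workflow would combine them with dedicated PII-detection tools and clear staff guidance.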

In fact, earlier this year, a temporary glitch with ChatGPT resulted in some users seeing the titles of other users’ conversations. Whilst the issue has since been fixed, it shows that this AI software is not entirely safe and that we do not know the full extent to which our privacy is at risk. The Information Commissioner’s Office (ICO) has said that organisations using chatbots must respect the privacy and personal data of their users, especially where the software is built on LLMs, which are trained on, and respond to, large amounts of sensitive data.

Is ChatGPT Blocked in Any Countries?

ChatGPT is blocked in a number of countries, including China, Iran, North Korea, and Russia. Italy was the first Western country to block ChatGPT, doing so over privacy concerns. Part of the reasoning was a reported data breach on 20th March 2023 in which user conversations and payment information were leaked.

The Italian data protection watchdog (the Garante) said that there was no legal basis for the mass collection and storage of personal data to train the chatbot’s algorithms. In addition, there was no way of verifying users’ ages, exposing minors to inappropriate or unsuitable content.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to run their businesses successfully and confidently whilst complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.

We are here to help you, so get in contact with us today for more information.

Article by Nathalie Pouderoux, Paralegal / Consultant for Gerrish Legal
