Thinking of adopting ChatGPT in your organisation? Make sure you’ve considered these three risks before you do.
Disclaimer: a small part of this article was written by ChatGPT and edited by a human, as part of researching the tool's capabilities.
As organisations turn to advanced technologies to streamline their operations and improve productivity, the use of chatbots like OpenAI’s ChatGPT has become increasingly popular.
While this technology can certainly offer many benefits and time efficiencies, some of which HRM outlined in a previous article, there are also a number of significant risks and other implications that employers should be aware of when it comes to using ChatGPT in their workplace.
Here are three to keep in mind.
1. Data and privacy concerns
One of the biggest dangers of ChatGPT is the potential for data privacy and confidentiality breaches. Employees may be inputting sensitive and highly confidential information into ChatGPT, such as staff members’ personal details or sensitive company data.
This creates the risk that such data may be inadvertently or deliberately revealed to third parties or the general public, including through ChatGPT using such data when responding to other queries or through hacking.
The disclosure or misuse of such data could also result in significant consequences for businesses, including not only the revelation of personally and/or commercially sensitive and confidential data, but also exposure to claims for damages and penalties for breaches of the law.
The exposure of commercially sensitive data to ChatGPT may also be a relevant consideration when it comes to the enforcement of restraints, as courts may consider that data shared with the platform is not truly confidential and should not be protected in those circumstances.
2. A wave of inaccuracies
Another significant potential danger of this technology is that the information it provides may not be accurate. You’ve only got to look at Google’s AI blunder from last week for proof of this.
This risk is acknowledged by ChatGPT itself. Because the model is trained on such a large amount of data, it may not always be able to distinguish between credible and inaccurate information.
The information in its system is also not up to date: at the time of writing, ChatGPT states that it is only aware of world developments up to 2021. Relying on the data provided by ChatGPT can therefore have significant consequences, particularly where that data is inaccurate, as it can expose the business to legal liability.
A related issue is whether businesses can terminate employees who use ChatGPT, particularly where something goes wrong, such as an employee providing advice or correspondence that is wrong because it relied on ChatGPT’s output.
This may depend on the policies and procedures of the business in relation to the use of ChatGPT, for example, whether the business has expressly prohibited employees from using ChatGPT in their work, or has only permitted use in certain limited circumstances.
Usual unfair dismissal considerations will also apply. Businesses should consider their position on ChatGPT and implement policies and procedures accordingly.
3. Intellectual property and job loss risks
There is also a question as to who owns, and who holds intellectual property and moral rights in, content prepared by ChatGPT based on the data inputted by employees (and employers).
ChatGPT can also create a risk of job loss for employees. With the increasing use of chatbots and the time efficiencies associated with ChatGPT, many tasks currently performed by human workers could be automated, potentially leading to widespread job losses.
Employers should be mindful of their obligations to consult (including under Awards, enterprise agreements, contracts and policies) when considering the use of this technology by their business, and should consider their redundancy obligations if it comes to that.
While ChatGPT can offer many benefits to organisations, employers should be aware of the significant risks and other considerations that come with using this technology.
At the very least, employers should consider their position on ChatGPT and formulate a policy: whether the tool may be used in the workplace and to what extent, and what control measures need to be implemented (for example, blocking access to the site or requiring that no confidential or sensitive information be entered into the platform), as sketched below.
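For organisations that opt to permit limited use, the second control measure mentioned above can be partly automated by screening prompts before they leave the business. The sketch below is a minimal, hypothetical example of such a pre-submission filter; the patterns and the redact_sensitive function are illustrative assumptions for this article, not a feature of ChatGPT or of any particular product, and a real deployment would tailor the patterns to the organisation’s own data and legal advice.

```python
import re

# Hypothetical patterns for data an employer may class as sensitive.
# These are illustrative only; tailor them to your own organisation.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Australian-style mobile/landline numbers, e.g. 0412 345 678
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
    # Nine-digit identifiers such as tax file numbers, e.g. 123 456 789
    "id_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}


def redact_sensitive(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder
    before the text is sent to any external service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


if __name__ == "__main__":
    prompt = "Draft a warning letter to Jo Citizen, jo@example.com, ph 0412 345 678."
    print(redact_sensitive(prompt))
    # -> Draft a warning letter to Jo Citizen, [REDACTED EMAIL], ph [REDACTED PHONE].
```

A filter like this is a safety net, not a substitute for a clear policy: pattern matching will miss free-text confidential material (for example, unreleased financials described in prose), so employee training remains essential.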
Consideration of these issues will assist with mitigating the risks associated with ChatGPT; however, employers should remain vigilant as such technologies continue to develop and advance at warp speed.
Amy Zhang is an Executive Counsel & Team Leader at Harmers Workplace Lawyers.
Instead of seeing AI as a potential cause of “widespread job losses”, perhaps it is worth considering AI as an opportunity to free up employees to add greater value to organisations, and to allow for more fulfilling and engaging roles through deeper or more creative work than the repetitive, process-driven work that AI might be utilised for.
The article reads like an alarmist’s view of AI/ChatGPT. I hope HRM shares a more neutral and objective perspective in the near future.
It is not alarmist to view AI and ChatGPT as worrisome. Privacy, safety and intellectual property are all vulnerable. As for accuracy, the references produced by ChatGPT are nonsensical even though they look like the real McCoy. Taking ChatGPT at face value would be foolish, both in trusting its slick content and in the long-term ramifications of outsourcing intellect to a machine essentially powered by the fallible humans who do the inputting. Mediocrity is the essence and output of ChatGPT and AI. God help us if we trust ChatGPT.