Unregulated Use of ChatGPT in the Workplace – five things HR absolutely must do right now to reduce the risks of litigation
Published on: 30/03/2023
Issues Covered: Webinars & Podcasts
Article Authors: The main content of this article was provided by the following authors.
Legal Island

Earlier this month, at a previous event, our Chairman, Barry Phillips, stated that this webinar would be one of the most important we have organised for many years.
ChatGPT is a groundbreaking technology with the potential to revolutionise the way we work. Its sophisticated language capabilities make it a powerful tool for organisations seeking to streamline their operations and improve their productivity. However, as with any new technology, there are risks associated with its implementation, particularly when it comes to data protection and privacy.

Organisations must be proactive in managing these risks by taking steps to ensure they are using ChatGPT in compliance with relevant laws and regulations. Failure to do so could result in fines from data protection regulators, as well as reputational damage and legal claims from employees or other affected parties.
To help organisations navigate these risks, we are hosting an essential webinar featuring expert advice from employment law specialists at Carson McDowell LLP (James Milliken, Laura Cunningham, Sarah Cochrane and Dawn McKnight) and cybersecurity expert Simon Whittaker from Vertical Structure.

This webinar will provide HR professionals with actionable steps they can take right now to safeguard their organisation against potential claims arising from employees' careless or improper use of ChatGPT.

 


Transcript

Barry:  Well, good morning to everybody. You are very welcome. My name is Barry Phillips. I'm the Chairman of Legal-Island.

Earlier this year, Sam Altman, the co-founder of OpenAI, which produced ChatGPT, admitted to being a little bit scared of what it is that they've created. He also said that it's time for regulation of ChatGPT and other AI models.

Certainly, our conclusion from the introductory webinar we ran earlier this month, which I know a lot of people attended, was that ChatGPT has a lot of benefits for HR departments, but also that we need to be careful about its use in the workplace.

We noted last time, for anyone who didn't attend the introductory webinar, that there are at least five really good uses for HR.

It can be used in recruitment. For example, drafting job descriptions.

It can be used to check for inclusive language in job descriptions and adverts.

It can be used for learning and development plans. It can be used for drafting HR policies, of course, with the proviso that it is checked thoroughly by the HR department before being sent off to the legal team for sign-off.

We noted it could be used for report writing, and it could be used for checking grammar, spelling, syntax, punctuation, etc.

And a final thing we looked at was that we felt that really everyone in HR, and indeed across many departments in a company, can use it almost as their personal PA. It could do things like draft emails and basically reduce or eliminate some of the more annoying admin tasks that we all have to do every day.

But why is ChatGPT such a game changer? And the answer we noted was that it's different because it can build on earlier answers, and it gives the user the experience or the feeling that they are actually dealing with a human being for the first time rather than just a machine that is retrieving information, albeit very useful information.

But we finished the introductory webinar by saying, "Look, there are concerns about the unregulated use of this", and we felt it was advisable to get a legal team in to help us in terms of how to regulate the use of this in the workplace.

So this webinar is entitled "Unregulated Use of ChatGPT in the Workplace: Five Things HR Departments Absolutely Must Do Right Now To Avoid Litigation". And we really have got the A-team with us from Carson McDowell.

Today, we've got James Milliken, who is doing the main presentation. He works as a solicitor in the Commercial Team at Carson McDowell and advises on a range of commercial matters, including commercial contracts, technology and innovation, intellectual property, and data protection.

And standing by to field the questions that we're hoping that you will send into us, we have Dawn McKnight, who is a Partner in the Commercial Team, we have Laura Cunningham, Senior Associate in the Commercial Team, and we also have Sarah Cochrane who is a Partner in the Employment Team.

So it just really leaves me to pass over the reins to James Milliken, and say if you do have any questions at any point throughout the presentation, please do put them into the questions box, and I'll do my best to field those in the Q&A session, which is going to come up later on after about half an hour and we've heard from James.

So, without further ado, James, welcome to the webinar. Thank you in advance for your presentation. And it's over to you, James.

James:  Thank you very much, Barry, for your introduction. I hope everyone can see the slideshow okay.

I'd like to welcome everybody, first and foremost, to this webinar. I'm very grateful to you all for taking the time to attend, and it's great to see such a wide range of attendees from such a range of industries. It's particularly pleasing to see a number of our existing clients attending, and I would like to extend a particular welcome to them.

So, just before we start, I would like to give you a little bit of background about Carson McDowell and about the panel to build on what Barry's already said.

As Barry said, I'm a solicitor here in the commercial team at Carson McDowell. I've been closely following developments in consumer AI technology, to use an overarching phrase.

I've already looked at the use of ChatGPT as a legal drafting tool in an article that was published in the Irish Legal News, Scottish Legal News, and a couple of other outlets.

My email address will be at the end of the slide deck, so if you'd actually like a copy of that article, I'd be more than happy to send it to you.

I'm very lucky to have a panel of my colleagues at Carson McDowell here. We've got Dawn, who has very extensive experience in dealing with a broad range of commercial matters, including cybersecurity and IT.

 Laura Cunningham is a senior associate in our commercial team, and she specialises in all aspects of information law, including privacy, confidentiality, data protection, the GDPR, and freedom of information, among other things.

 Sarah Cochrane, as well, is a partner in our employment team, and she provides a range of practical employment advice to employers from a wide variety of industries, from both an HR and a management perspective.

So lastly, just before we get started, I would like to take the opportunity to thank Barry, Maria, and everybody else at the Legal-Island team for the hard work in organising and facilitating this webinar, and for inviting me and the rest of the team at Carson McDowell to attend.

I actually had an opportunity to attend Part 1 of the webinar series that Barry hosted. And I must say that I think Legal-Island are doing an excellent job in driving the conversation and providing welcome guidance and instruction.

 I can also wholeheartedly recommend the content that's freely available on Legal-Island's website. I'm not just saying that to be nice to you, Barry. I genuinely think it's very useful and very relevant content.

I was going to run through a couple of housekeeping points, but I think Barry has more or less covered everything. Just to reiterate, should you have any questions during the presentation, please feel free to drop them into the question box, and we'll make sure that the relevant panel member gets a chance to see that question.

Unfortunately, I don't think we'll be able to deal with all of the questions, but we'll do our best to get to as many of them as we can. Our contact details and some useful links will also be available at the end of the webinar if you would like to contact us directly.

I think that's all the practical points. So, we're about ready to get started here.

And I just wanted to start with a bit of an outline and a couple of introductory remarks. First of all, though, I did want to take a moment to express my gratitude to Barry for the fantastic introduction to the subject that he gave in Part 1 of this webinar series. His comprehensive coverage of the basics, uses, and applications of ChatGPT in HR was incredibly helpful and informative, and the way he explained complex concepts in a clear and concise manner made it easy for all attendees to follow along.

 So, at this point, I do have to come clean and admit that the thanks I've just given to Barry were drafted in their entirety by ChatGPT. Barry, I hope you're not offended. And for what it's worth, I do agree with everything that ChatGPT just said. I just thought it would be fun to open the webinar with a quick demonstration of the practical uses of ChatGPT in everyday working life.

I'm sure you're all sort of very well versed in this functionality already, in no small part thanks to the examples given by Barry in Part 1 of the webinar. And I do have to say that Donald Trump's version of the Gettysburg Address was a particular highlight of mine, and the ever-increasing coverage in the mainstream media has also, I'm sure, made you very familiar with what ChatGPT can do.

In this Part 2 of the webinar, we will primarily be looking at the dangers and potential pitfalls that HR departments should be aware of when using or dealing with the use of ChatGPT in the workplace.

I want to particularly focus on the potential consequences of unregulated use and suggest some practical steps that employers might wish to take to minimise the risks and, indeed, steps that I think that they should take to minimise the risks.

I've broken this down into five suggested steps, but I do want to note that these are all very closely related, and really they should form constituent parts of a more overarching AI policy or strategy.

So, the five steps that we're going to be looking at are: early communication with employees; drafting an AI use policy; the implementation of practical procedures, including, for example, making employees aware of the importance of double-checking their work; taking care when using AI tools in recruitment; and the data protection dangers of using ChatGPT, with some suggested steps that you can take to minimise those dangers.

At this point, I do think it's worth saying that this is an area that's not really covered, at least explicitly, by much if any legislation in the UK or in other jurisdictions.

It seems to me that lawmakers are grappling with a number of the same issues that are faced by everybody else in relation to the rapid development of AI tools like ChatGPT to the extent that their functionality and the risks associated with them are not yet fully known or understood.

Until bespoke legal frameworks are put in place, we largely have to rely on existing law, which wasn't really drafted with AI tools such as ChatGPT in mind. Some pieces of legislation that we have to rely on include, but certainly aren't limited to, the Copyright, Designs and Patents Act 1988 and the existing GDPR.

 The extent to which these existing laws apply to AI can be hard for organisations and smaller businesses to navigate. Overlaps, inconsistencies, and gaps in the current approaches by regulators can also confuse the rules, making it harder for organisations and the public to have confidence where AI is used.

 That said, there are some changes that seem to be coming down the track reasonably soon. For instance, the Data Protection and Digital Information Bill was reintroduced to Parliament on 8 March of this year, just a couple of weeks ago after a bit of a hiatus.

Among other things, this bill is intended to transform the UK's data laws to boost innovation and technology, and one of those technologies that it seems to envisage is AI.

The UK government has also published an AI paper outlining its proposed approach to regulating this technology in the UK. Instead of giving responsibility for AI governance to a central regulatory body, which is what the EU seems to be envisaging, the government's proposals would allow existing regulators within the UK, such as Ofcom, the Competition and Markets Authority, the Information Commissioner's Office, and the Financial Conduct Authority, to take a tailored approach to the use of AI in a range of settings.

The proposals contained in the AI paper are based on six core principles that the government thinks are particularly important in respect of AI. These principles are: ensuring that AI is used safely; ensuring that it's technically secure and functions as designed; making sure that AI is appropriately transparent and explainable; making sure that fairness is considered; identifying legal persons to be responsible for AI; and clarifying routes to redress.

Now, how close the current framework comes to addressing a number of those principles is up for debate, but those are the stated principles that the UK government is putting in place.

 I do have to say that this is just at sort of the consultation stage at the moment. This approach isn't enshrined in law yet, and we need to keep a very close eye on developments over the coming months.

So that was a bit of a whistle-stop tour of the existing legal framework, but a couple of things just to point out. Changes are coming, and we need to keep an eye on those, but there are also practical steps that we need to take now, in the absence of more detailed legislation, and I think those are particularly important.

So, we will move to our first step, which is really early communication with employees. I think it's really important to open a line of communication with employees about the use of AI tools in the workplace.

It's very clear to me that employees are becoming increasingly aware of ChatGPT and other such tools and their uses. To illustrate this, it's worth noting that in January of this year, it was estimated that 13 million individual active users visited ChatGPT every single day. And these numbers are only increasing, although more recent statistics aren't available yet.

ChatGPT is effectively as ubiquitous as the smartphone, and I think that it has a similar potential to completely revolutionise employees' relationship with their work.

Therefore, as I've said in the slides here, I think it's important not to bury your head in the sand. You can't pretend that ChatGPT and other such tools don't exist. I think it's instead important to encourage dialogue with your employees, to signal that you're aware of ChatGPT, and that you know that employees might seek to harness its power in their everyday tasks.

Practically speaking, then, I would recommend circulating an email to all employees and staff on the subject of responsible use of AI in the workplace.

Looking at the cross-section of attendees here today, I must say that it's impossible to be entirely prescriptive about the content of that email. However, there are a couple of general principles that you might seek to address when communicating with your employees.

I think you should acknowledge that AI has the potential to improve productivity, streamline processes, and drive innovation, but that it can pose risks if not used responsibly.

You should stress that employees should take care when using AI in the workplace and ensure that their actions align with your company's values and ethical standards.

You should highlight the importance of employees being mindful of the potential consequences of their actions, and encourage them to strive to use AI in a way that promotes fairness, transparency, and respect for individuals.

You should discuss the need for employees to familiarise themselves with all relevant policies, including data protection, responsible use of the internet, and any specific AI use policy that you might have in place. And we'll come to that in more detail in just a moment.

Finally, dialogue is important, and I think you should invite employees to reach out to their line managers or to the HR department if they have any queries or concerns.

 So, an email like this shouldn't really be considered as a replacement for having a comprehensive AI policy in place. And again, we'll come to that in just a moment. But I would argue that it's an important step which flags to employees, regulators, and industry bodies that you're aware of the issues and are taking the use of AI tools in the workplace seriously.

So, the next logical step, I think, is to consider putting in place a comprehensive policy on the appropriate use of AI in the workplace.

I'm aware, having attended Barry's previous webinar, that there was a bit of an impromptu poll carried out amongst the attendees as to whether they had any sort of AI use policy in place. The results were largely as I would've expected, given the rapid development of these AI tools. Ninety-one per cent of voters said no, that they didn't have an appropriate or any AI policy in place, and 9% said they weren't sure. And I suspect that that means that they don't have any sort of policy in place either. So, effectively, 100% of the attendees didn't have any AI policy in place.

So, a little bit of background on the importance of AI policies. As I'm sure many of you will already be aware, HR policies in general are widely used by employers for many reasons, including supporting fairness and consistency across an organisation, as well as protecting the organisation from legal action.

They're effectively a written source of guidance on how issues should be handled, and they generally include a description of principles, rights and responsibilities for managers, staff, and employees.

 By doing so, they can both support the attitudes and behaviours needed for sustainable performance within an organisation and improve the speed of decision-making by ensuring that clear guidance is readily available to cover a range of issues that personnel might face.

Certain policies are specifically required to comply with the law. For example, in the UK, a written health and safety policy is required as soon as an employer has five or more employees. There are also important legislative provisions setting out formal disciplinary and grievance procedures, to say nothing of the importance of privacy policies under the GDPR.

Even where a policy or procedure isn't specifically required by law, employers often find it helpful to have a policy in place to provide clear guidance that reflects the legal framework for handling the issue in question. And it also helps employees to be clear about the organisation's stance on a particular subject.

That is a general overview of why policies are used in the workplace. An AI policy is certainly not yet required by law, as I'm sure you'll have gleaned from my earlier discussion of the existing legal framework. But I do think that such a policy might be an appropriate piece of the puzzle. It would allow you to encourage appropriate and responsible use of AI, provide a framework in which employees are comfortable using the tools safely, and help you realise the full potential of AI in the workplace while ensuring that it's used ethically, responsibly, and in alignment with your wider values and goals.

I also think it would be an appropriate step to demonstrate a willingness to comply with applicable laws, to signal to regulators and industry bodies that you've considered the issues, and that you're properly prepared for the on-going development of AI tools in the coming months and years.

So at this point, you're probably thinking, "Well, that's great. We should definitely have an AI policy in place, but what should it actually say and what should its content be?" Unfortunately, there's no one-size-fits-all approach to designing an AI policy.

I'd love to be able to present you with a policy that works for everybody, but its content will really depend on the unique needs of every company, on the characteristics of its industry, on its workflow, the nature of its workforce, and so on.

I have given a bit of thought, though, to what a general structure might be, and I think this might be a good starting point for you to work from when developing your own AI policy.

 I think it should provide a clear and concise explanation of why the policy is necessary and what it contains. 

I think it should define some key terms related to AI, such as machine learning, algorithms, and data sets.

I think it should outline the guiding principles that will inform the organisation's use of AI. This would include ethical considerations such as fairness, transparency, and accountability.

It should establish the roles and responsibilities of key stakeholders in the AI development and implementation process. For example, this might include IT personnel, data scientists, legal and compliance teams, and senior management.

It should set out the procedures for collecting and using data in AI systems, including the sources of data, the quality of data, and the potential risks and benefits of using that data.

 It should address the potential for bias in AI systems and establish procedures for identifying and mitigating bias in the development and use of these systems.

 It should establish the procedures for protecting the privacy and security of data used in AI systems, as well as the procedures for handling security breaches and data breaches. More on that later.

It should establish the procedures for ensuring transparency in the use of AI systems, including procedures for explaining how AI systems arrive at their decisions, as well as procedures for holding employees accountable for their use of the AI systems.

Finally, I think it's important to outline the training and education requirements for the employees that will be working with the AI systems, including the skills and knowledge that they need to use these systems responsibly.

 So, I'd just like to stress, again, this is a basic suggested structure, and it's no substitute for a bespoke policy that's been drafted to specifically reflect the needs of your company.

If you need detailed advice on how to draft a policy that's appropriate for your organisation, we would certainly be more than happy to assist. As I've said, our contact details will be available at the end of the webinar, and I think they're also available in the materials that Barry has circulated or will circulate shortly after the end of the webinar.

A couple of final thoughts on AI policies just before I move on.

 Firstly, any policy should be regularly reviewed and updated to reflect changes in the organisation's use of AI and evolving best practices in AI governance, as well as to reflect any legal requirements that come into force. And I've already looked at some of those potential legal requirements.

As well, no matter how well any policy is written, it's the effective communication and implementation of that policy, particularly by management, that's crucial in ensuring its effectiveness.

So, that last point really brings me onto my third suggested step. As I've just explained, an AI policy would provide general practical advice for managers and staff in relation to the use of AI tools such as ChatGPT in the workplace.

 However, I do think it's also prudent to develop more detailed, practical procedures for employees to supplement and support the new AI policy that you may have put in place. These procedures would give step-by-step accounts of specific arrangements that would apply in particular circumstances.

Again, there is no one-size-fits-all approach to drafting appropriate AI procedures. The form and content of your procedures will depend very much on your industry, your workforce, your day-to-day operations, and your general workflow.

When I looked at AI policies as part of Step 2, I suggested a general range of content that you might wish to consider. I actually want to use a slightly different approach here, and instead look at one particular issue that you might want to cover as one of your practical procedures. And this really relates to the accuracy, or potential lack thereof, of ChatGPT.

As Barry explained in the first part of the webinar series, ChatGPT is a language model, and it aims to create fluid and convincing answers to user inputs. It was trained on text gleaned from the internet from a wide variety of sources, and that allows it to discuss all sorts of topics. But it doesn't generate its answers by looking for the information in some database. Instead, it draws on patterns that it learned in its training.

So, to give an example of how this works, if you ask ChatGPT, for example, to tell you about data protection laws, it doesn't think, "Okay, what do I know about data protection laws?" Instead, it asks itself, "Well, what do statements about data protection laws normally look like?"

But the answers are based on patterns rather than facts, and it normally can't cite one particular source for a specific piece of information. And that's because the model doesn't really know things. It produces text based on the patterns it was trained on.

It never actually deliberately lies, but it might not have a clear understanding of what's true and what's false. It regurgitates things that have been said, whether they are true or false. And sometimes it can end up contradicting itself or not quite grasping exactly what's being asked, which leads to inaccurate responses.
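
To make that concrete, here is a toy Python sketch of a pattern-based text generator. It is a drastic simplification (ChatGPT uses a large neural network trained on billions of words, not a bigram table), and the training sentences and output below are invented for illustration, but it shows how fluent text can emerge from patterns with no notion of truth.

    import random
    from collections import defaultdict

    # A toy next-word model trained on three invented sentences. ChatGPT is
    # vastly more sophisticated, but the core idea is similar: predict a
    # plausible continuation from patterns in the training text, not by
    # looking facts up in a database.
    corpus = ("the gdpr protects personal data . "
              "the gdpr requires consent for processing . "
              "the act protects workers from unfair dismissal .").split()

    model = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        model[prev].append(nxt)

    def generate(word, length=7):
        out = [word]
        for _ in range(length):
            choices = model.get(out[-1])
            if not choices:
                break
            # pick any continuation that followed this word in training
            out.append(random.choice(choices))
        return " ".join(out)

    print(generate("the"))
    # One possible output: "the gdpr protects workers from unfair dismissal ."
    # Fluent and confident, but false: the model has blended patterns from
    # different sentences with no idea which combinations are actually true.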

So ChatGPT is very likely to give correct answers to most general knowledge questions, most of the time at least, but it can go wrong or, as Barry stated, seem to make things up, particularly where the question is phrased in an unusual way or concerns a more specialised topic. The developers have called that process hallucinating.

The real problem is that ChatGPT acts just as confident in terms of its wrong answers as it does in respect of its correct answers.

So, having looked at why and how it's inaccurate, I don't think you really need me to point out the risks of using the tool for complex tasks. There's a real danger that an employee uses the tool to generate some answer or some report that's provided to a customer. The customer then relies on this report, which it turns out is factually incorrect. This could lead to costly and difficult litigation. And I'm sure that you can all imagine a range of scenarios where something along these lines might happen.

I think it might be prudent to develop a procedure whereby employees are permitted to use AI to explain the basics of a topic, perhaps to brainstorm and explore ideas, to draft certain procedural emails, and to ask for feedback. But I think that they should be required to consult external and perhaps verifiable sources before relying on ChatGPT for factual information.

Employees should be made keenly aware that ChatGPT can be inaccurate and that all outputs should be double-checked before being used as part of an employee's day-to-day tasks.

This is just one issue with ChatGPT from a practical perspective, and you should be sure to implement a range of appropriate policies that closely align with the potential uses of AI in your organisation.
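
By way of a purely illustrative sketch of how the double-checking procedure above might be reinforced in practice, the following Python snippet wraps a call to the model so that every answer comes back with a verification banner and is logged for later audit. It assumes the openai Python package as it existed at the time of this webinar (the ChatCompletion interface); the wrapper name, log file, and banner wording are hypothetical, not a recommended product.

    # Hypothetical internal wrapper: tag every AI answer with a reminder to
    # verify it, and keep an audit trail that HR can point to later.
    # Assumes the `openai` package (0.x ChatCompletion API, current in 2023).
    import datetime
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

    BANNER = ("[AI-generated draft - verify all facts against an external "
              "source before relying on this]\n\n")

    def ask_with_reminder(prompt: str, user: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp["choices"][0]["message"]["content"]
        with open("ai_audit.log", "a") as log:  # simple evidence of oversight
            log.write(f"{datetime.datetime.now()}\t{user}\t{prompt!r}\n")
        return BANNER + answer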

I want, then, to turn to the use of AI tools by employers as an integral part of the recruitment process. So, this is a slightly different scenario and it's certainly a growing use of ChatGPT and other AI tools.

According to a statistic I read from SHRM, 88% of employers globally already use AI in some way for HR. One major growth area is the use of tools like ChatGPT in the recruitment process as part of a company's strategies for attracting, engaging, and retaining talent.

As Barry demonstrated in Part 1 of the webinar, ChatGPT can be used to improve efficiency through the automation of tedious tasks, which ideally would free recruitment teams to focus on more strategic, big-picture goals.

Just by way of example, the uses of ChatGPT in the recruitment process include but aren't limited to sourcing, which means identifying best-fit candidates through job matching and scoring capabilities; screening, where AI is used to score and rank candidates based on defined criteria; scheduling, which is, as the name would suggest, assisting in the coordination of interviews and other meetings; and preparation, which might mean drafting appropriate job descriptions and interview questions for a relevant role.

And that's something that Barry ably demonstrated in Part 1 of the webinar, which I believe was for a knowledge worker at Legal-Island, which is a role that doesn't actually exist. But that doesn't stop ChatGPT from bringing up a detailed job description for it.

There's also an interesting role for AI to play in terms of bias. It may be that the use of AI can actually reduce bias in the recruitment process, which is critical to an organisation's commitment to diversity and inclusion.

Candidate fit scoring ensures that candidates are screened based on skill set and experience rather than on demographic characteristics. But I will come back to the question of bias in just a moment.

I think that these potential uses are really, really exciting and they could contribute to greater productivity, better quality hires, and higher levels of efficiency in terms of time and cost.

To give just one example, in 2019, Unilever reported that the use of AI had saved its human recruiters approximately 100,000 hours of interviewing time in that year, as well as an estimated £1 million. So there are huge benefits to be gained there.

But HR departments should take great care when implementing AI tools in the recruiting process. An over-reliance on AI when making recruitment decisions can see employers easily being wrong-footed and inadvertently breaching UK data protection and anti-discrimination laws.

In respect of data privacy, the UK GDPR restricts employers from making solely automated decisions that have a significant impact on job applicants, except in limited circumstances, such as where the decision is necessary for entering into or performing the employment contract, or where the data subject has consented.

An employer will make a solely automated decision where the decision is reached through AI without human scrutiny. Employers are unlikely to meet the UK GDPR exemptions and should always ensure that there is some human influence on the outcome in any employment decisions that involve AI. The ability to process a job applicant's health data in solely automated decisions is even more limited and must be avoided.
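
To illustrate that human-in-the-loop structure in code, here is a minimal Python sketch. The keyword-matching score is deliberately crude and every name and criterion in it is invented; the point is only the shape of the process: the algorithm recommends, and a named human reviews before any outcome is applied to a candidate.

    # Illustrative only: an AI (here, a crude keyword match) may score and
    # rank, but no outcome is applied until a named reviewer has actively
    # weighed the recommendation, keeping the decision from being
    # "solely automated" in UK GDPR terms.
    from dataclasses import dataclass
    from typing import Optional

    REQUIRED_SKILLS = {"python", "sql", "communication"}  # invented criteria

    @dataclass
    class Candidate:
        name: str
        cv_text: str
        ai_score: float = 0.0
        human_decision: Optional[bool] = None  # None until a person reviews

    def ai_screen(candidate: Candidate) -> None:
        words = set(candidate.cv_text.lower().split())
        candidate.ai_score = len(words & REQUIRED_SKILLS) / len(REQUIRED_SKILLS)

    def human_review(candidate: Candidate, reviewer: str, approved: bool) -> None:
        candidate.human_decision = approved
        print(f"{reviewer} reviewed {candidate.name}: "
              f"AI score {candidate.ai_score:.0%}, approved={approved}")

    c = Candidate("A. Applicant", "python and sql developer, strong communication")
    ai_screen(c)                                  # the tool recommends...
    human_review(c, "HR Manager", approved=True)  # ...a person decides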

Turning then to employment law, the use of AI can result in indirect discrimination claims where someone with a protected characteristic suffers a disadvantage as a result of an algorithm's output based on dataset analysis.

I just want to give a couple of examples here. In October 2018, an industry-leading retailer was reported to have scrapped an algorithm for recruiting new staff after the machine learning system was configured in a way that saw male candidates being preferred to female candidates, thereby creating bias.

It was reported that the reason for this was that the algorithm had been created using datasets based on patterns in CVs the company had previously received. As the overwhelming majority of those CVs had come from men, this inadvertently led to an algorithm which discriminated on the basis of sex.
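
A toy Python example makes the mechanism plain. The CVs and keywords below are entirely invented; the point is that a model scoring words by how often they appeared in past hires will penalise words that merely correlate with the under-represented group, even where the skills are identical.

    from collections import Counter

    # Invented historical decisions: (cv_keywords, hired). Because most past
    # hires happened to be men, words that merely correlate with male CVs
    # end up "predicting" success.
    history = [
        (["football", "captain"], True),
        (["football", "engineer"], True),
        (["engineer", "rugby"], True),
        (["netball", "engineer"], False),
        (["netball", "captain"], False),
    ]

    hired_words, rejected_words = Counter(), Counter()
    for words, hired in history:
        (hired_words if hired else rejected_words).update(words)

    def naive_score(cv_words):
        return sum(hired_words[w] - rejected_words[w] for w in cv_words)

    print(naive_score(["engineer", "football"]))  # 3: favoured
    print(naive_score(["engineer", "netball"]))   # -1: same skill, penalised
                                                  # purely for "netball"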

As another example, in September 2021, so slightly more recently, the UK campaign group Global Witness accused Facebook of discrimination in its job advertisements after an experiment which showed that certain jobs were predominantly being advertised to a specific sex.

Global Witness said that, as an example, it created two job advertisements on Facebook: of the people shown an advertisement for mechanics, 96% were men, whereas 95% of those shown a job advert for nursery nurses were women.

 Global Witness complained that Facebook's algorithm, which decided whom the advertisements were shown to using AI, showed a clear bias in its application.

So, having taken a whistle-stop tour of some of the dangers in terms of the use of AI in recruitment, HR departments, I think, should have their own defined, clear, and transparent policies and practices around the use of AI to make recruitment decisions.

They should also ensure that they've got fully trained, experienced individuals responsible for the development and use of AI to minimise the risk of bias.

HR departments should identify appropriate people to actively weigh up and interpret recommendations and decisions made by AI in the recruitment process before those decisions are applied to any individual.

I think that the most important thing to remember is that HR departments shouldn't solely rely on AI. It should be used as an element to streamline the process and assist in recruitment decisions.

So, the last practical step that I want to discuss today relates closely to an issue that I've alluded to a couple of times already in the webinar, and that's data protection.

As I've already said, ChatGPT is a language model that produces text based on the patterns it was trained on. It was trained on a huge number of words that have been sort of scraped from the internet, and that necessarily includes a large amount of personal data that had effectively been scraped without consent of the data subjects.

I think this is fairly scary, but I want to look at data privacy actually from the opposite perspective. Instead of considering the data protection implications of how the tool was trained, I want to primarily look here at how ChatGPT really uses user inputs to continue to develop.

So, every single conversation with ChatGPT has the potential to become part of the tool's learning database. This means that anything put into the chat can be used to further train the tool and (this is really important) be included in responses to other users' prompts and questions.

This is a bit of a nightmare from a data protection perspective. ChatGPT represents that it doesn't actually retain information provided in conversations. And whether that representation is true or not is another thorny question, but in any case, ChatGPT does learn from those conversations.

There is every possibility that employees could share personal data with the model, which could then be used to train it and form part of a response to another user somewhere else in the world. The implications of this are fairly worrying.

I've no doubt that the organisations here are well versed in the need to have a comprehensive privacy policy that complies with GDPR requirements, including the requirement to tell data subjects, where personal data are collected, how you will protect and use their data. However, I think it's fairly unlikely that the privacy policies organisations currently have in place adequately provide for sharing personal data in any form with AI tools like ChatGPT. That would effectively mean that an employee inputting personal data into ChatGPT could be considered a data breach.

Even if the privacy policy does contemplate the use of AI tools, the input of personal data may still be regarded as a data breach by regulatory authorities.

Just to reiterate, as I'm sure many of you are aware, if an organisation is found guilty of non-compliance with data protection laws or of a data breach by the Information Commissioner's Office in the UK, or its equivalent in the Republic of Ireland, the maximum penalties that could be imposed are £17.5 million or 4% of total annual worldwide turnover in the preceding financial year, whichever of those two figures is higher.
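
As a quick worked example of that "whichever is higher" cap (the turnover figure below is invented purely for illustration):

    turnover = 600_000_000                       # illustrative annual worldwide turnover (GBP)
    max_fine = max(17_500_000, 0.04 * turnover)  # the statutory maximum is the higher figure
    print(f"Maximum fine: £{max_fine:,.0f}")     # Maximum fine: £24,000,000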

This being the case, then, I would strongly recommend that employers include in employee confidentiality agreements, and in their policies and procedures as well, prohibitions on employees referring to or entering personal data into AI chatbots or language models like ChatGPT.
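
Some organisations might supplement those prohibitions with a simple technical guard. Below is a minimal Python sketch of a pre-submission check for obvious identifiers; the regular expressions are illustrative only and will miss most personal data, so this is an aid to, not a substitute for, the policies and training discussed here.

    import re

    # Crude, illustrative patterns for a few obvious UK identifiers. A real
    # deployment would need far more than regexes; this only demonstrates
    # the idea of checking prompts before they leave the organisation.
    PII_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone number": re.compile(r"(?:\+44|0)\d{9,10}\b"),
        "NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    }

    def check_prompt(prompt: str) -> list:
        """Return the kinds of personal data apparently present in a prompt."""
        return [label for label, pat in PII_PATTERNS.items() if pat.search(prompt)]

    found = check_prompt("Summarise the grievance raised by jo.bloggs@example.com")
    if found:
        print("Prompt blocked: appears to contain " + ", ".join(found))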

Furthermore, I think it would be prudent to prepare and deliver appropriate training for all staff covering these issues. This means that you would have something to refer to in the event of a data breach, given that the ICO, the Information Commissioner's Office, may wish to see evidence of recent training undertaken.

I'm aware that Legal-Island are offering a mini eLearning module designed to train employees on how to use ChatGPT in the workplace. I certainly think this is a great starting point, and I think Barry will be sharing details of the module in due course. Actually, in just a moment.

The last thing I want to say is that you should probably consider reviewing and updating your privacy policy to explicitly deal with the use of AI tools in the workplace.

Just as an aside and before I finish, because this is really outside the scope of today's webinar, I would also recommend that employers be wary of the opposite problem.

As ChatGPT has been trained on wide swathes of online information, employees might actually receive and use information from the tool that's considered personal data, that's trademarked, that's protected by copyright, or is otherwise the intellectual property or confidential information of some other person or entity.

There are a number of legal risks here, not least being sued for breach of intellectual property rights or confidentiality provisions. So, that's just another issue to be aware of.

So those are the practical steps that I wanted to bring to your attention today. And just by way of conclusion, I'm really excited by the potential of AI tools such as ChatGPT. I genuinely think that they're going to revolutionise how work is done in the same way as the internet did, as smartphones did, and even remote working more recently.

I think that they can improve efficiency, productivity, and decision-making, that they can be used to automate sort of tedious, repetitive tasks, and give employees more space for deep work and creativity.

More specifically, there are also a number of key benefits for HR departments who can spend more time ensuring that they attract and retain the best talent.

However, I do hope that you're now much more aware of the potential issues which range from risks of data breach to intellectual property infringement, discrimination, inaccurate work, and the litigation that might arise from that.

I want to reiterate that the steps I've suggested in today's webinar are a starting point. You should consider carefully which, if any, of them are most appropriate for your organisation, although I would say that most of those steps are appropriate in most circumstances.

I would also strongly recommend taking bespoke legal advice before putting these steps in place to ensure that you are fully protected from the potential consequences of unregulated use of ChatGPT and tools like it in the workplace.

Our contact details are available on the slides now, and I would be more than delighted to receive an email from you if I can provide any further advice. And I think, as well, Barry will be sharing both these slides and those contact details after the webinar as well. So I, again, would be delighted to hear from you if you require any further advice.

Lastly, I'd just like to thank you all very much for taking the time to attend the presentation today, and I hope you found it useful and instructive.

I think the questions have been coming in, Barry, and I think we're going to move to a live Q&A now, and we'll do our best to get through as many questions as possible. Thank you very much. I'll hand it back to Barry.

Barry:  Thank you, James. That's a really practical session full of very useful tips. If you could just advance to the next slide there, James, that'd be very useful.

 Just while we gather the questions coming into us, and we've had quite a few in, so thank you already to everybody that has put in a question, I just want to take a moment to say a little bit more about the online training that James has alluded to there that we've devised at Legal-Island.

Whilst the arrival of ChatGPT is very exciting on many levels, it does mean that almost overnight, workplace training on data management and protection has been rendered out of date or possibly even obsolete. The good news is that at Legal-Island, we have developed some eLearning training to take into account this new type of technology.

The eLearning module was signed off just on Monday of this week, so it's very up-to-date. It's a mini module, which is designed to be done by everyone in the workplace, taking about 10 to 15 minutes to complete.

It sets out clearly to every user how ChatGPT can be used, but more importantly, how it is not to be used, with a warning that inappropriate use of it may breach data protection measures.

After much discussion at Legal-Island, we've taken the decision to give this training away free of charge to everybody who is in this webinar today so that you all have the opportunity to feel that you've done everything possible to minimise the risks of litigation in this area.

I've directed that extra resources be put in place to handle this, but they'll be needed elsewhere soon, so we are limiting this to any organisation that is registering their interest by the end of the week.

To register, all you have to do is respond to an email that we'll be sending out after this webinar finishes. Alternatively, you can jump the queue by booking a 10-minute appointment with our digital team. Maria is going to drop a link to their appointments diary into your sidebar now, and again in a few minutes' time.

All they need to know from you is an answer to a few questions in order to get you and your employees up and running and doing the training as soon as you wish. These include how many employees you would like to be able to access the eLearning module, and whether you have your own LMS or would like to use ours. Either is fine by us. As soon as they have this information, they'll be able to get your organisation set up and the training rolled out at a time to suit you.

As I said, it's absolutely free for every employee of your organisation at the moment. There are no catches. The only thing we ask is that you register your interest before the end of this week whilst we still have the resources spared to deal with all the inquiries and the setup arrangements.

As for the module, we have three versions: one for GB, one for Northern Ireland, and one for the Republic of Ireland.

So, questions. We have so many that I don't think we're going to be able to get through them all today. So can I just say in advance here that I think what we'll need to do is ask the team at Carson McDowell to respond by email to the questions that we don't get through, and we'll send that out to everybody early next week.

We did have a question from a lady, Eileen, who was asking about Part 1 and when that was shown. Don't worry, we will send a link out to Part 1 as well in a follow-up email.

 So, the first question that is in, and it came in from Connor, "What are the potential dangers of employees inadvertently using material generated by ChatGPT that, for example, turns out to infringe the trademarks or copyright of a third party?"

Perhaps I can ask Dawn from the team just to pop up and to answer that. I guess, Dawn, just to illustrate that with an example, what we're looking at there. Let's say somebody drafts a press release and they use ChatGPT. ChatGPT pulls in a paragraph that is virtually identical to something that they've seen on a webpage somewhere, and the author of that webpage recognises that paragraph and thinks, "My work has been taken, stolen". So what are the legal implications of that?

Dawn:  Yeah, I think that James certainly alluded to this at the end of his presentation. There is a significant risk of intellectual property infringement associated with the use of this sort of technology. And again, as James said, I think we need to be mindful of how this tool was trained, which was literally by scraping over 300 billion words systematically from the internet.

And that came from a huge range of materials: books, articles, websites, blog posts, commentary, and numerous other sources. So, a significant amount of that information will, of course, be protected, whether by copyright, registered or even unregistered trademarks, and so on.

And good examples, in addition to what you've put forward there, Barry: if you prompt ChatGPT, it'll accurately reproduce for you the first couple of paragraphs of "Harry Potter and the Philosopher's Stone". Similarly, if you ask it about McDonald's slogan, it'll respond and say, "I'm lovin' it".

 But at the same time, the problem is that the tool doesn't inform the user that that material is actually subject to copyright or may be trademarked material, which means that if an employee went on to use it, they'd be inadvertently breaching the owner's prior rights.

And given how new this tool is, it's hard. You can see how, in certain examples like those, it would be obvious that the material was protected by copyright or trademark, but in lots of other uses there would be a blindness as to the source from which the information was obtained, which means that people could inadvertently be breaching a prior rightholder's interests.

It's a bit hard to predict at this point, given how early we are in the development of the technology. It's hard to predict exactly how the court would deal with the liability and damages that could arise in all instances. But what is clear is that the employer will be vicariously liable for breaches that the employee makes by use of stuff that they've obtained through the tool.

And at the same time, it's also hard to imagine how, as the owner of copyright material, a business could meaningfully monitor for potential breaches of its own IP rights by third parties who are using the tool. So there are two sides to this.

And again, as James alluded to, the law, as is usually the case, is seriously lagging behind here. It just simply cannot keep up with the pace of the technology.

I think I saw in one of my feeds this morning that there's another white paper due, possibly today, or at least this week. And the focus of that paper, amongst other things, will be on how to deal with the exponential pace at which the technology is developing, and come up with law that can match it.

I think in the absence of law, the best we can do at the moment is try and keep an eye on some of the case law that's going on. And we've already got cases out there such as the Getty Images claim against Stability AI.

As you probably know, Stability AI is the company behind a similar tool, except that it generates images instead of text. And in that particular case, Getty is arguing that Stability AI has infringed its copyright by training its tool on millions of images which were scraped, at least in part, from Getty's online database, which is protected by copyright.

So I think cases such as that, and the position the courts adopt on them in due course, will help us establish some general principles around all of this whilst we wait for the statute book to catch up.

Barry:  Dawn, yeah, I'd agree. I think every lawyer that I've talked to so far about ChatGPT has admitted that the law has just been completely overtaken by this or left behind, and it's going to take quite a while before we catch up, whether with legislation, codes of practice, or case law, as you say, that gives us some indication of how all of this is going to work.

Got an interesting contribution in from Gareth Moorhead. And Gareth has said that OpenAI's frequently asked questions said, "Submissions to ChatGPT are not used for training purposes", which is interesting. That goes against what a lot of the tech journalists are saying. And perhaps until we see absolute proof and assurance of that, perhaps the advice should still remain that you should not share with it information that is personal or commercially sensitive. But thank you, Gareth, for that.

Another question here. What are the potential implications of the recent ChatGPT data breach? I think that's probably a question for Laura. Laura, might you just take a moment to remind people or tell people for the first time what the ChatGPT data breach actually was?

Laura:  Yeah, absolutely. So, I think first and foremost, in addition to the data protection concerns that James outlined in his presentation in and around data collection, transparency, and data subject rights, a key concern with this technology is the potential for a significant data breach to occur. And that's particularly given the vast amounts of data being processed by the tool.

James did highlight a number of circumstances in which employers could be exposed to the danger of data breaches in relation to the use of ChatGPT. Now, those breaches that James talked about were primarily envisioning circumstances in which employees were entering personal data into the chat function during the course of their employment. And given that the tool learns through user inputs, there's obviously the chance that personal data could end up being made available to another user, which in itself would be a data breach.

Now, the recent and highly publicised data breach involving ChatGPT is different. It occurred in different circumstances. For anyone who hasn't seen the reports over the past couple of days, the breach actually centred around the disclosure of personal payment data of a number of users. This took the form of names, email addresses, the last four digits of credit card numbers, and credit card expiry dates, which became visible to other users during a nine-hour window.

Now, according to OpenAI, this was due to a software bug rather than being part of the tool's functionality. They, in their statement, said that they took immediate action to mitigate the breach, they took the tool offline, and they contacted any affected users.

Although this doesn't sit on all fours with the examples that James discussed and is slightly different, it nevertheless serves as a bit of a stark warning about the potential for serious data breaches to occur through the use of this tool.

As James pointed out, data breaches can have significant financial and reputational consequences for organisations. I know James spoke about the potential for the ICO to fine organisations up to £17.5 million or 4% of their global turnover, whichever is higher.

So although OpenAI have played this latest breach down and said that it's been contained, I think it's nonetheless important, and a timely reminder that users should exercise caution about inputting and sharing personal data when using this tool, in order to minimise the risk of a data breach.

Barry:  Sure. Okay. Laura, thank you. Onto a very interesting employment question. Should AI policies be made part of an employee's contract of employment? I think that's one for Sarah. Sarah, over to you.

Sarah:  Hi there. Thanks, Barry.

Barry:  Hi.

Sarah:  Yeah, thank you and good afternoon, everyone. I think that the short answer to that is that policies and procedures relating to AI and ChatGPT should be non-contractual. That would be our strong recommendation.

I think the world of AI is moving at such a rapid rate that changes and updates to policies and procedures are going to be needed quickly, and employers are going to want to be able to make those changes as quickly and as efficiently as possible.

 If a policy is contractual, then that means it forms part of an employee's terms and conditions, and in order to change that policy, an employer would have to consult with employees in relation to those changes. Consultation in that regard can be a long and laborious process, and it can be really problematic if you don't get agreement from the employees.

 So, non-contractual policies in this area, I think, are really, really important, particularly until it develops further and we're clear on the implications of it.

In order for it to be non-contractual, it really comes down to the language. You need to be very clear and say in the policy that it does not form part of the terms and conditions of employment, and that it can be amended at any time by the employer.

And just a final point on that, if I may. I think sometimes there can be concerns from employers that if a policy is non-contractual, it somehow holds less weight than other policies. That is not the case. The employees have an obligation and an implied duty to obey lawful and reasonable instructions.

So, we would tend to find and advise that the majority of policies should be non-contractual for the reasons I have already outlined, but it does not preclude the employer from relying on those policies in, say, disciplinary proceedings, by way of example.

Barry: Great. That's very helpful. Thank you, Sarah.

 Just got a message in from [Rosaree 00:58:52] McCullough, and her question is, "Is there a time deadline with regard to the free training that has been offered by Legal-Island?" And the answer to that is no, not in terms of the actual doing of the training. As long as you've registered your interest by Friday, then that is fine.

 Of course, I think most lawyers would say the sooner the training is done, the better it's going to be because you are going to be better protected having done the training.

 We do have quite a few more questions, but that is really our time now at the top of the hour. So I think what I will ask our legal team at Carson McDowell to do is just to look at the questions later today, work on answers, and then one of our admin team will send those out to you next week.

 So it just leaves me to wrap up and to say thank you to everybody for attending today. We've had some really good organisations throughout the Island of Ireland at this webinar. We had hundreds that signed up for it, which is an indication that this topic really is of great interest to an awful lot of people.

 It leaves me to say a big thank you to Carson McDowell, James for his really practical and useful presentation there, and also for making the other lawyers available to field a wide range of questions there. So a huge thank you to Carson McDowell for that.

And it just leaves me to wrap up there and to say a thank you to everybody for attending. I look forward to seeing you again at a future webinar very soon. Thank you and bye-bye.

Questions & Answers

Presumably ChatGPT can write your email for you? What kind of training and reminders might individuals need about use of AI, given that AI will develop more quickly than an employee's understanding?

Yes. ChatGPT can draft emails, assist with the production of a wide range of documents and carry out a huge number of other tasks. For a full overview of ChatGPT’s functionality, we would recommend reviewing the first part of the webinar provided by Legal Island. We would recommend training that:

  • acknowledges that AI has the potential to improve productivity, streamline processes and drive innovation, but that it can also pose risks if not used responsibly;
  • stresses that employees should take care when using AI in the workplace and ensure that their actions align with your company’s values and ethical standards;
  • highlights the importance of being mindful of the potential consequences of their actions and striving to use AI in a way that promotes fairness, transparency and respect for individuals; and
  • discusses the need for employees to familiarise themselves with all relevant policies.

The practical aspects of this training will certainly depend on your company’s workforce, workflow and industry. We recommend taking personalised legal advice tailored to your circumstances.

Is it the case that an employer could be held liable for discriminatory language etc if an untrained employee sends out ChatGPT-generated content?

Potentially. The doctrine of vicarious liability imposes strict liability on employers for the wrongdoings of their employees. Generally, an employer can be held liable for any wrongful act committed while an employee is conducting their duties. An employer may therefore be liable for the negligence or breach of statutory duty of its employees, workers or even contractors where harm is caused to a third party. The third party can then pursue a claim against the employer for the loss suffered. The employer can also be liable for, amongst other things, breach of copyright, or breach of trust and confidentiality. Various legal tests apply and we would recommend taking legal advice that is tailored to your circumstances.

If a company has a dedicated Legal department, would an AI policy be an HR-owned policy or Legal-owned?

As would usually be the case with HR policies, we would expect that HR and Legal work collaboratively. HR will be more familiar with the operational aspects of the company and its workflow, but there are a number of complex legal issues at play (including data protection and intellectual property) and the Legal Department will play an important role in ensuring that the provisions put in place by HR fully protect the company from a legal perspective. It is worth noting, in particular, that the new legislation which we will very likely see coming into force in the coming months and years will need to be reflected in the policy.

Will ChatGPT-4 become even more intelligent, given that it is only new and already at version 4?

From a review of the literature on ChatGPT-4, this is certainly the case. For example, ChatGPT-4 is stated to be 8x more powerful than ChatGPT-3, and can take on larger, more complex tasks. However, we are not equipped to deal with technical questions and would recommend familiarising yourself with the wealth of published material on ChatGPT-4 and its capabilities.

This OpenAI FAQ says submissions to ChatGPT are NOT used for training purposes - https://help.openai.com/en/articles/7039943-data-usage-for-consumer-services-faq - please comment.

We have noticed, when reviewing the literature in respect of ChatGPT, that a number of software journalists and academics in the industry appear to disagree with OpenAI’s FAQ and have suggested that user inputs may be used to train the model in some sense. We are not software experts and have approached this subject from a purely legal point of view. This being the case, we consider that caution is absolutely essential – if there is any doubt whatsoever about how user inputs may be used, and whether this could lead to breaches of copyright or data protection law, employers should take steps to protect themselves.

If a software developer entered several lines of proprietary code into ChatGPT, where could this information go to?

Please refer to our comment at question 5 above. While we are not software experts, from a legal point of view we urge caution and suggest that proprietary or confidential information should not be entered into ChatGPT.

Please comment on copyright issues e.g. text produced by ChatGPT which is covered by copyright.

Copyright is a legal right which protects the author of a work (which may include books, novels, technical reports, films, computer software, databases and many other things besides). As stated in the webinar, ChatGPT was trained on a huge amount of material systematically scraped from the internet, the majority of which will be protected by copyright. ChatGPT’s responses to user inputs necessarily draw on that copyrighted material. It is unclear what the legal position is on the reuse of such content, given that it may be derived from intellectual property belonging to a third party. While a number of cases are ongoing, it seems that the courts are fairly hostile to the idea of non-humans owning intellectual property, meaning that ChatGPT (an autonomous AI tool) likely cannot own the copyright in a given response. Things get more complicated when we consider material that is owned by existing authors. It may be possible for a third party to bring a claim for copyright infringement if a user incorporates ChatGPT-generated material into its own work. We expect that this issue will be litigated in the near future, and would recommend keeping a close watch on the Getty Images v Stability AI case. In the meantime, we would urge caution when using material generated by ChatGPT in works that are or may be available to the public.

PLEASE NOTE that these responses are intended to be used for information purposes only and should not be considered a substitute for legal advice tailored to a recipient’s circumstances.

ANY QUERIES?  PLEASE CONTACT:

Carson McDowell, Murray House, Murray Street, Belfast, BT1 6DN



Disclaimer: The information in this article is provided as part of Legal Island's Employment Law Hub. We regret we are not able to respond to requests for specific legal or HR queries and recommend that professional advice is obtained before relying on information supplied anywhere within this article. This article is correct at 30/03/2023