ChatGPT is likely to be a game-changer across multiple industries, but are businesses aware of its risks? Businesses seeking to increase efficiency and productivity are drawn to artificial intelligence (AI) that can generate, in a matter of seconds, content that would take a human several hours to create. This blog addresses several legal implications of using ChatGPT that every business should consider.
What is ChatGPT?
ChatGPT is an AI chatbot developed by OpenAI that can generate content in seconds. It is trained on a large dataset of over 45 terabytes of text data and continues to “learn” as more users engage with it.
ChatGPT is a powerful new tool, and its uses are limitless. It can revise an essay, draft a letter, come up with advertisements, provide recipes, or engage in conversation. For example:
• A realtor can ask ChatGPT to draft a listing for a particular address, and ChatGPT will spit one out in seconds.
• A computer programmer can enter software code and ask ChatGPT to find errors.
• A professor can ask it to write an entire exam.
What are the risks of ChatGPT?
ChatGPT comes with risks. Before a user enters a prompt, ChatGPT displays several of its limitations. These include occasionally generating incorrect information, producing harmful instructions or biased content, and having limited knowledge of the world and events after 2021. Other risks are present as well.
Privacy concerns arise because ChatGPT may collect and store whatever information a user inputs. Entering customer names and contact information, for example, risks exposing your customers' private information. Similarly, disclosing confidential information to ChatGPT could result in your company's trade secrets or other confidential information being shared with other users or losing legal protections.
A company also may risk reputational damage based on its use of ChatGPT. ChatGPT's responses may be perceived as offensive or may not align with your company's goals and values. And the fact that ChatGPT is being used at all may make a company look insensitive or insincere in certain contexts. For example, a Vanderbilt administrator sent an email to the student body addressing the tragic mass shooting at Michigan State University. The email was later revealed to have been drafted by ChatGPT.
Legal exposure is also a significant risk of using ChatGPT. While a blog post cannot be exhaustive, below are a few examples.
What are the legal considerations for using ChatGPT?
The use of ChatGPT raises a host of legal issues, from employment law to intellectual property law. Legal landmines abound for those who are not careful to understand the limitations and risks of generative AI programs, like ChatGPT.
1. Privacy Laws
Privacy laws apply to almost every business. For example:
• Covered entities under the Health Insurance Portability and Accountability Act (HIPAA) need to protect Protected Health Information, or PHI.
• Financial institutions are subject to the requirements of the Gramm-Leach-Bliley Act.
• Companies that violate their own privacy policies could face an enforcement action by the Federal Trade Commission (FTC) for engaging in unfair or deceptive acts or practices.
• State laws, such as the California Consumer Privacy Act, may also provide broad protections for consumers that businesses must abide by.
• And, companies operating in Europe, or even those outside Europe that still have European customers, need to comply with the General Data Protection Regulation (GDPR).
Because ChatGPT may store information that users input and then “trains” on such information, ChatGPT may disclose confidential information shared with it if it is responsive to a prompt. In fact, the “massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies” led Italy to temporarily ban ChatGPT until privacy concerns are addressed.
It is critical that businesses using ChatGPT not disclose customers' personal information or other confidential information subject to privacy laws. Doing so could create liability for violating privacy laws or a contractual confidentiality obligation.
2. Trade Secrets
The collection of, and “training” on, information input into ChatGPT has similar implications for a business’s valuable trade secrets and other confidential information. For an owner of information to maintain trade secret protection under the Defend Trade Secrets Act or similar state laws, the owner must take reasonable measures to keep that information secret. Disclosing confidential information to ChatGPT risks a court finding that the information is no longer a trade secret.
One large company has already accidentally leaked company secrets to ChatGPT. According to recent reports, three Samsung employees used ChatGPT as part of their work and disclosed confidential information.
One employee copied and pasted into ChatGPT source code for a particular program and asked ChatGPT to identify errors. Another employee also shared code with ChatGPT and requested “code optimization.” A third employee used ChatGPT to convert internal meeting notes, which contained confidential and sensitive information, into a presentation.
Not only does ChatGPT present a risk of losing trade secret protection through disclosure, but it also raises the risk that a user would be found to have not created the trade secret at all. If someone uses ChatGPT to develop a trade secret, it is unclear whether ChatGPT’s contribution to the creation means that the person using ChatGPT can claim ownership of it from a trade secret standpoint. It is likely that the extent to which ChatGPT was used will be a major factor, but these are uncharted waters.
3. Defamation
Businesses using ChatGPT also need to remain aware that the information ChatGPT generates may be inaccurate. It appears that ChatGPT aims to please and will create an answer even when one does not exist. In one experiment, a user input a fake link to a nonexistent New York Times article and asked ChatGPT to summarize it. ChatGPT proceeded to provide a summary of the nonexistent article.
In another instance, Professor Eugene Volokh at UCLA Law School asked ChatGPT, “What scandals have involved law professors? Please cite and quote newspaper articles.” Not only did ChatGPT falsely accuse another prominent law professor of sexually harassing a student, but it also cited a nonexistent Washington Post article as support. Instances of AI generating content that sounds plausible but is factually incorrect are referred to as “hallucinations.”
OpenAI, the organization behind ChatGPT, has already been threatened with a defamation lawsuit after ChatGPT falsely stated an Australian mayor was convicted of a crime. But it is also possible that users who reproduce false information generated by ChatGPT, even unknowingly, could also be liable for defamation. Accordingly, businesses must not assume everything that ChatGPT generates is correct and should fact check ChatGPT-generated content, especially if it could be considered defamatory.
4. Copyright
ChatGPT also has implications for copyright law. ChatGPT's large dataset includes a vast amount of publicly available information, including numerous copyrighted works. Because ChatGPT's responses are based on its dataset, its output could contain copyrighted material or be substantially similar to copyrighted material. A user, believing the content to be original, may publish ChatGPT-generated content and inadvertently infringe someone else's copyright.
Using ChatGPT can also affect your ability to copyright your own material. Consider an author who inputs a short story and asks ChatGPT to find spelling and grammatical errors or otherwise edit the story. The story may become part of ChatGPT's dataset and then be reproduced in response to someone else's prompt. If that person then registers a copyright in the material first, it could affect the original author's ability to claim copyright protection.
Use of ChatGPT also affects whether a work can be copyrighted at all. Under U.S. copyright law, works created solely by a computer cannot be copyrighted. Instead, while machines may assist in creating a work, “substantial human involvement” is required to obtain copyright protection.
The U.S. Copyright Office has issued guidance regarding the copyrightability of works containing material generated by AI. As one would expect, the degree to which a human had creative control over the work will determine whether such work will be granted copyright protection.
Asking ChatGPT to “write a poem about copyright law in the style of William Shakespeare” will not get protection. However, writing a poem yourself and asking ChatGPT to revise it might. Examples in between may be closer calls that courts will likely need to resolve. The U.S. Copyright Office also requires copyright applications to disclose whether a work includes AI-generated content and identify the level of human contribution to the work.
ChatGPT thus has implications for liability for copyright infringement as well as one’s own ability to copyright works.
5. Employment Discrimination
ChatGPT may be helpful in making employment decisions. It could evaluate resumes or answer questions about candidates’ experience. But its responses may be biased, either by the wording of a prompt or the underlying information it uses to generate a response.
Organizations that use AI must ensure that it does not produce employment decisions that are discriminatory or biased. Some jurisdictions require employers to disclose whether AI is used to make employment decisions. Others, such as New York City, go even further and require an AI tool to undergo a "bias audit" before it may be used to make employment decisions.
What can businesses do to protect themselves when using ChatGPT?
Some businesses may choose to avoid ChatGPT altogether to sidestep its risks, but that approach seems shortsighted. Such businesses will likely fall behind competitors who are using ChatGPT, and it is naïve to believe that employees will not be tempted to make their work easier with ChatGPT's assistance. Instead, most businesses should embrace ChatGPT, but be smart about it.
There are several steps that businesses seeking to capitalize on ChatGPT’s benefits can take to avoid exposure.
Businesses should update their existing policies to address the use of ChatGPT. It is also important to create new policies, such as an acceptable use policy, outlining which uses of ChatGPT and other AI are permitted or prohibited. For example, a business in the healthcare field may want to update its HIPAA policy to prohibit disclosures of PHI to ChatGPT and other AI programs. Depending on your business, you may also want to review the policies of your vendors or subcontractors and consider imposing requirements on them regarding the use of ChatGPT.
Training employees is another step businesses can take to protect themselves from exposure. In addition to writing or revising policies, businesses should train employees on those updates, on the risks of using ChatGPT, and on what types of information are appropriate to disclose to it.
Businesses can also work with their IT departments to impose security limitations. For example, you may want to restrict which employees are able to access ChatGPT on work devices. After the data leak mentioned above, Samsung limited its ChatGPT upload capacity to 1,024 bytes per person. It has since completely banned ChatGPT. Other limitations may also be appropriate.
Disclosure and consent are also important. A business that inputs potentially sensitive information into ChatGPT, such as customer information, should obtain consent to collect and use that information while ensuring compliance with applicable privacy laws. The company should also provide opt-ins or opt-outs for customers where appropriate. It may also be appropriate for a company to disclose that it is using ChatGPT to generate content, particularly if it is creating a deliverable for a customer or client.
Rather than using ChatGPT blindly, businesses should use it as a tool to assist someone in performing their work. It should not be used to create a final product with a few clicks. Content generated by ChatGPT should be reviewed and revised for accuracy to ensure that it aligns with the goals and values of your business.
Conclusion
Businesses should not shy away from the benefits that ChatGPT offers, but it is important to proceed with caution and to understand the legal implications. Privacy concerns, exposing confidential information, reputational damage, and legal liability are all risks that businesses need to consider when using ChatGPT. By understanding these risks and implementing appropriate measures, businesses can use ChatGPT effectively and without stepping on legal landmines.
As always, let us know if we can help.
Jeremy L. Kahn is a thoughtful and strategic litigator, with a creative approach. He enjoys crafting strategies to resolve difficult and legally challenging problems, always seeking to achieve his clients’ desired results in an efficient manner.