BFV Perspectives, Corporate Matters, Navigating HR, Noncompete & Trade Secrets | Apr 01, 2024

Why Your Business Needs an AI Policy

In April 2023, a series of inadvertent leaks of sensitive information by Samsung employees highlighted the potential risks associated with the use of generative AI.

One Samsung engineer input confidential source code and asked ChatGPT to check it for errors. Another employee shared additional code with ChatGPT and requested “code optimization.” A third fed a word-for-word transcription of a confidential company meeting into ChatGPT to create meeting minutes. In each instance, the employee inadvertently disclosed Samsung’s confidential information to the public.

How could Samsung have prevented this? With a well-written AI policy and employee training specifically on the acceptable uses of generative AI.

It doesn’t matter what industry you are in; what happened at Samsung could happen at your business. Generative AI programs such as ChatGPT, Google Gemini, Microsoft Copilot, and others are changing the way people do business across almost every industry. As just a few examples:

  • Retailers can use it to analyze customer data and preferences and provide more personalized shopping experiences.
  • Medical professionals can use generative AI to assist in medical image analysis and interpretation.
  • Manufacturers can use it in quality control and defect detection.
  • Musicians can generate song lyrics.
  • Financial investors can use AI to analyze historical market data and identify trends.
  • Those in tech can use it to optimize or even create code.
  • And I may know a fellow attorney or two who have used it to draft a demand letter or cross-examination.

The list goes on and on. Because it has what seem to be infinite use cases, it is hard to think of any industry that generative AI will not affect.

So why does your business need an AI policy?

If you are subject to any privacy laws, you need an AI policy

Privacy laws affect a vast array of businesses. Medical professionals are regulated by HIPAA. Financial firms are regulated by the Gramm-Leach-Bliley Act. Lawyers are regulated by their states’ Rules of Professional Conduct, which in every state require lawyers to protect their clients’ confidential information. Companies doing business in California are subject to the California Consumer Privacy Act (CCPA). And companies doing business in Europe or with European customers are subject to the GDPR.

If your business has a legal obligation to protect its customers’ private information, then you need an AI policy. That is because information input into an AI chatbot is no longer private.

First, any information input is disclosed to the company that owns the generative AI system, such as OpenAI for ChatGPT or Google for Gemini.

Second, many generative AI systems continue to “learn” based on the information that users input. That means information one user inputs as a prompt can make its way into the answer generated for another user. The first user may never even know about it.

For example, let’s consider a doctor who uses ChatGPT to assist in forming a diagnosis. The doctor would likely violate HIPAA, and be subject to liability and penalties, if the doctor input the patient’s protected health information. So might a psychologist who asked Microsoft Copilot to summarize therapy notes. An underwriter at a bank or other lending institution might run afoul of the Gramm-Leach-Bliley Act if he input a borrower’s personal information into an AI chatbot to assist in underwriting the loan.

Without training employees on a clear AI acceptable use policy, businesses risk employees violating privacy laws when using generative AI. This risk is present even when the employee is earnestly trying to help the business. It may simply not occur to employees that information input into a non-human chatbot is insecure, that entering it may violate privacy laws, or that it may be disclosed to other users around the world.

It is not enough for a business to have a general policy regarding the privacy laws that affect its industry. Think of a medical office with a HIPAA policy, for example. Businesses should revise such general policies to expressly incorporate rules on the use of AI, or draft a separate AI policy, so that employees are on notice.

If you have a publicly posted privacy policy, you need an AI policy

“But my business is not subject to any privacy laws,” you say. Well, you still need an AI policy if your business has a publicly posted privacy policy. Many businesses do. Simply scroll to the bottom of almost any company’s website and there will be a link to a privacy policy.

The privacy policy is important because of the Federal Trade Commission (FTC). The FTC is empowered under the Federal Trade Commission Act to sue businesses that engage in “unfair or deceptive acts or practices.” Its position is that a business’s violation of its own posted privacy policy is an unfair or deceptive act or practice. The logic makes sense. A business posting its privacy policy tells customers how it will or will not use their personal information. It would be unfair or deceptive for a business to turn around and use a customer’s personal information in a contrary way.

So what if your business is not subject to a specific privacy law? You still need to ensure that employees are not using generative AI in ways that conflict with your own privacy policy. The way to do that is to draft an AI acceptable use policy and then train employees on it.

If your company has trade secrets or other confidential information, you need an AI policy

Besides the privacy of their customers’ information, many companies have their own confidential information that is valuable and competitively sensitive.

Companies that have trade secrets or other confidential information likely have policies or employee contracts that govern their employees’ use of such information. But it may not be clear to employees that inputting that information into ChatGPT is a disclosure. After all, it was not clear to the three Samsung employees discussed above, even though they seemingly were trying to use ChatGPT for Samsung’s benefit.

Additionally, once a trade secret is disclosed, a business can’t un-ring the bell. A court may find that the information is no longer a trade secret. Therefore, it’s critical for businesses with confidential information, especially trade secrets, to form an AI policy that protects information from disclosure.

If accuracy is important to your business, you need an AI policy

Almost every business wants to be accurate. While generative AI systems can provide information and content with impressive speed, they are far from infallible. In fact, it is not uncommon for them to “hallucinate,” meaning they generate a response that seems plausible but is factually incorrect or inapposite to the context. Worse, it is not always obvious when this occurs.

Generative AI programs have generated cites to fake news articles where the newspaper is real but the article does not exist. Lawyers have gotten in trouble for having ChatGPT draft legal briefs that included citations to fake, non-existent cases. Ironically, even the technology news website CNET had to issue corrections after “facts” in dozens of its AI-written articles proved inaccurate.

Publishing inaccurate information has negative consequences ranging from embarrassment to legal liability. Inaccurate statements about people can lead to defamation lawsuits. An inaccurate advertisement can lead to liability for misrepresentations to consumers. And putting aside the legal ramifications, no company wants to lose the trust of its customers.

Because AI can generate inaccurate responses, businesses need policies that make employees aware of the risk of inaccurate information and that ensure factual information is verified rather than blindly relied upon.

If you hire employees, you need an AI policy

Unless you’re the sole employee of your own business, your business hires employees. Generative AI can be helpful in developing job descriptions, sorting résumés, and identifying promising candidates. But again, an AI policy is necessary to direct the appropriate uses of AI in making hiring decisions.

A generative AI program is only as good as the information it is trained on. And AI programs have been found to exhibit biases. Those biases usually are not deliberate.

For example, if an AI program’s training data shows that all or most of a company’s employees are white males, the program may inadvertently reinforce the selection of white males as likely successful candidates. Bias can also arise from overcorrection. Google’s attempts to avoid bias and incorporate diversity principles into Gemini resulted in an embarrassing overcorrection in which prompts for pictures of historical popes produced images of Black women and Native Americans as popes.

The bottom line is that even inadvertent biases inherent in an AI program’s model may lead to biased results. That has particular ramifications in the employment context. Businesses risk being sued for discrimination if their employment decisions are biased. Some jurisdictions, such as New York City, have passed laws that prohibit the use of AI in hiring decisions unless the AI system has undergone a bias audit. Once again, businesses can avoid trouble by developing an AI policy and training their employees on it.

If your business generates any content, you need an AI policy

If your business generates any content, and most businesses do in at least some way, you need an AI policy. The risks of inaccuracies are discussed above. But there are also risks of unknowingly committing copyright infringement, or even of losing copyright protection for your own works.

The data that generative AI systems are trained on includes numerous copyrighted works, such as books, songs, and films.

A generative AI system could create a response that includes copyrighted material without the user even knowing it. The user could then publish that content and be sued for copyright infringement. In fact, many authors have sued OpenAI (the creator and owner of ChatGPT) for copyright infringement. But it’s also possible that users themselves could be sued.

On the other side of the coin, content creators could risk losing copyright protection for their own works. Let’s say an editor inputs an author’s manuscript into an AI program to check the grammar or change the tone. That manuscript is now part of the data that the AI program is trained on and could be incorporated into responses to other users. Now let’s say another user asks the same AI program to revise a short story she wrote, and the program adds a character from the original author’s manuscript. Which work gets copyright protection may depend on who submits to the Copyright Office first. (There is also a host of issues regarding how much use of AI is too much for the Copyright Office to grant protection at all, but that could be a whole other article.)

Issues like these underscore the need for content-creating businesses to have an AI policy that advises employees about the appropriate and inappropriate uses of AI.

Conclusion

Generative AI is a revolutionary tool with the power to increase your business’s productivity and efficiency. But as with any shiny new technology, there are also significant risks.

If companies or their employees misuse AI, they could risk a Samsung-esque disclosure of confidential information. Or they could be fined for violating their customers’ privacy rights. Or be sued for copyright infringement. As with the helpful use cases, the parade of potential horribles goes on and on. It is therefore critical for businesses to implement an AI acceptable use policy and train their employees on the appropriate and inappropriate uses of AI.

As always, please let me know how I can help.

Jeremy L. Kahn

Jeremy L. Kahn is a thoughtful and strategic litigator, with a creative approach. He enjoys crafting strategies to resolve difficult and legally challenging problems, always seeking to achieve his clients’ desired results in an efficient manner.