The Ethics of Using Generative AI

Ellen Lockwood, ACP, RP

By now it seems everyone has heard of Artificial Intelligence (AI) and the ability of certain AI programs to do our writing for us. Legal software has used AI for decades, but with limited capabilities. It is only recently that generative AI programs have become accessible to most people and received widespread coverage in the media.

Generative AI programs such as ChatGPT have been described as able to write a student’s paper, draft legal documents such as contracts and briefs, and even render most writers obsolete. However, as with much new technology, the hype and the reality are quite different. While generative AI programs may be convenient and even useful in some circumstances, there are many issues with using these programs in the legal field.

As with any other software, it is imperative to understand how each AI program processes data. For example, ChatGPT was trained on billions of bits of information, the vast majority from the internet. It takes the prompt you submit and responds with strings of text, based on the data on which it was trained, that it predicts are the best answer. However, ChatGPT and other generative AI programs quite often get information wrong and sometimes generate false statements and assertions. There is even a term, “AI hallucinations,” for the tendency of generative AI programs to make statements that appear to be factual but have no basis in fact. Generative AI provides information that isn’t always factual because the programs are not copying and repeating information found on the internet; the software is simply predicting lines of text based on the information on which it was trained.

Since attorneys (and therefore paralegals) have a duty of competency, someone must ensure the accuracy of the output generated by AI software. Documents with factual errors and miscalculations are not only unacceptable; they also have the potential to cause real harm in a legal proceeding and to leave the attorney who files such a document vulnerable to sanctions and malpractice claims.

Generative AI also almost always includes biases. Depending upon the data on which the program was trained, these may include biases related to gender, age, race, culture, income, and more. Users should be aware of this tendency and review any AI-generated text carefully to remove biases.

Other issues to consider are privilege and confidentiality. Before inputting personal, privileged, or sensitive data into a generative AI program, users should determine whether that information will truly be kept confidential. For example, ChatGPT’s publisher reserves the right to have its staff review inputs for training purposes and safety monitoring. Uploading personal or sensitive data could also result in a violation of data protection laws or codes of conduct.

Plagiarism and copyright infringement are also risks when using generative AI. Because these applications are trained on content from the internet, usually without the authors’ permission or even awareness, authors may well object when their work is reproduced without their knowledge or consent.

A few courts have determined that some AI programs have violated laws against the unauthorized practice of law (UPL). In one situation, a generative AI program was completing legal forms on behalf of individuals who were not represented by counsel. Attorneys in other states have filed pleadings generated by AI that included citations to fictional case law; obviously, those attorneys did not verify the citations before filing. These situations have prompted judges in some jurisdictions to issue orders in response. For example, Judge Brantley Starr of the Northern District of Texas has ordered the following:

 All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

Some judges are requiring attorneys to submit a disclosure if any portion of a filing was created using generative AI. The Federal Rules of Civil Procedure already address some of these issues:

FRCP 11(b)

(b) Representations to the Court. By presenting to the court (whether by signing, filing, submitting, or later advocating) a pleading, written motion, or other paper, an attorney or unrepresented party is certifying that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:

(2) the claims, defenses, and other legal contentions therein are warranted by existing law or by a nonfrivolous argument for the extension, modification, or reversal of existing law or the establishment of new law.

Obviously, someone must review any AI-generated material to comply with this rule. There are a few private AI software applications designed specifically for the legal field, such as Henchman’s contract-drafting tool and Casetext’s CoCounsel. These focus on security, protect confidentiality, and use appropriate, known data sources. Such applications are generally safe to use, although they should be thoroughly vetted, just as you would vet any software.

Recommendation: for now, avoid most generative AI software other than programs developed specifically for the legal field. Other generative AI software is fraught with too many issues; you will spend more time reviewing its output for inaccuracies, AI hallucinations, biases, privilege and confidentiality problems, plagiarism, and copyright infringement than the software is worth.

Regulations and codes of conduct continue to struggle to keep up with advances in technology. Given the issues that generative AI programs raise for the legal field, we may expect case law, as well as new legislation and updates to codes of conduct, to address these matters sooner rather than later.

Ellen Lockwood, ACP, RP, is the chair of the Professional Ethics Committee of the Paralegal Division and a past president of the Division. She is a frequent speaker on paralegal ethics and intellectual property, and the lead author of the Division’s Paralegal Ethics Handbook published by Thomson Reuters. She may be contacted at ethics@txpd.org.

If you have any questions regarding any ethical issue, please contact the Professional Ethics Committee.

Originally published in the Texas Paralegal Journal © Copyright Paralegal Division, State Bar of Texas.