Reprinted from the “Shelton Shares” Column in Attorney at Law Magazine
While I am a strong proponent of AI and the exponentially increased efficiency it offers, I would be remiss if I didn’t cover the perils and pitfalls it can present to attorneys and their clients if it is not used properly, especially when attorneys neglect the final, critical step in its use: the Human Eyes Failsafe (HEF).
Attorney-Client Privilege & Factual Errors
Donata Stroink-Skillrud, who received her law degree in 2015 and went on to roles including in-house counsel and COO before founding Termageddon – a company that automates website privacy policies and updates them as laws change – agrees with me.
Stroink-Skillrud confirmed that there are serious risks to consider when attorneys use AI.
“Attorneys evaluating whether to use AI in their practice should be cognizant of the fact that such use may present ethical and reputational pitfalls for the attorneys, as well as negative outcomes for clients.
“The first concern that attorneys should consider is the potential compromise of attorney-client privilege, which can lead to malpractice claims or even suspension or disbarment.”
She continued, getting more specific. “Prompts that include information protected by the attorney-client privilege can lose that privilege if they are shared with third parties such as the ChatGPT trainers. Attorneys should, at the very least, remove all sensitive personal information and information protected by attorney-client privilege from prompts.”
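For the technically inclined, here is a minimal sketch of what such prompt scrubbing might look like. It is illustrative only: the Python patterns below catch a few obvious identifiers, the example client details are invented, and nothing here is a substitute for a human privilege review before anything is sent to a third-party tool.

```python
# Illustrative sketch only: scrub a few obvious sensitive identifiers
# from a prompt before it is sent to any third-party AI tool.
# These patterns are examples, not a complete safeguard.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # U.S. Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"), "[PHONE]"), # simple U.S. phone formats
]

def scrub_prompt(text: str) -> str:
    """Replace each match of a sensitive pattern with a placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical example input, for illustration only.
print(scrub_prompt("Client John Doe (SSN 123-45-6789, jdoe@example.com) asked..."))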
The risk was demonstrated all too well when Samsung employees accidentally leaked sensitive internal code through a generative AI tool, leading the company to ban the use of any generative AI company-wide to prevent future leaks.
Stroink-Skillrud offered a potential solution. “Attorneys should also consider hosting AI tools such as ChatGPT locally, meaning that all information input into the tool would be stored on your machine and not shared with OpenAI or any other third parties, reducing the likelihood of a compromise of data.”
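As an illustration of what local hosting can look like in practice, here is a minimal sketch that queries an open-source model through Ollama, a freely available tool that runs models entirely on your own machine, so no prompt text leaves it. The endpoint and model name are Ollama’s defaults, and the sketch assumes Ollama is running locally with a model already downloaded; it is not specific to any firm’s setup.

```python
# Illustrative sketch only: query a locally hosted open-source model
# via Ollama's local HTTP API, so prompts never leave your machine.
# Assumes Ollama is running locally and the "llama3" model is pulled.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # local endpoint, no third-party servers
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Hypothetical example prompt, for illustration only.
    print(ask_local_model("Summarize the elements of a breach-of-contract claim."))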
Stroink-Skillrud went on to address the other main concern about the use of generative AI by attorneys.
“The second concern is the fact that the information produced by AI tools may not be factually accurate. Any case citations or facts provided by AI tools should be double checked for accuracy using a primary source to ensure the information is correct, as providing an incorrect citation or incorrect facts could certainly result in the loss of an argument or even a malpractice case.”
Stroink-Skillrud offered me this quote before the now-infamous Avianca Airlines debacle became widely known and publicized. In that case, a New York attorney had ChatGPT write a 10-page brief that cited various cases and precedents – all of which were fictitious. The brief was tossed and the lawyer was sanctioned.
Shortly thereafter, a Texas judge issued an order requiring all attorneys appearing in his court to file an affidavit verifying either that they did not use ChatGPT, Bard, or other generative AI in any filings at all, or that if they did, they incorporated the final step of HEF (Human Eyes Failsafe) before finalizing anything filed with the court.
Shelton’s Crystal Ball: The U.S. government is overwhelmingly reactive. Sarbanes-Oxley wasn’t passed until after Enron. Dodd-Frank was passed only after the 2008 financial crisis. Thus it will be with AI. Unlike Europe, which is advancing its AI Act, and other jurisdictions already taking action, our government will wait until a black swan event occurs, and then begin the political posturing and grandstanding that will have both sides fighting to prove they warned of disaster first.
In the meantime, however, more judges will take proactive steps to ensure that debacles like the Avianca Airlines case are not widely replicated.
Supervise the Work of AI, as Rules 5.1 & 5.3 Require
I also discussed this with Ethics Professor David Grenardo from the University of St. Thomas School of Law. As expected, he offered insights that were both practical and erudite.
“AI can be misused by lawyers when they ask AI to draft a contract or motion but fail to check the work of AI. You hit the nail on the head when you mentioned Human Eyes Failsafe (HEF), which means that lawyers must check the work of AI, just like they would check and supervise the work of lawyers (ABA Model Rule 5.1) and non-lawyers (ABA Model Rule 5.3). To that end, using the assistance of AI should be added to Rule 5.3 to ensure that the work of AI that the lawyer uses is ‘compatible with the professional obligations of the lawyer.’ ABA Model Rule 5.3.
“Another misuse includes being overly reliant on AI,” he continued. “For example, a lawyer could ask AI to draft a contract, but that would not be enough if the lawyer fails to anticipate new trends or issues that might arise for the client and thus fails to add language in the contract to protect the client. Generative AI relies on past and existing data, but lawyers have a duty to communicate with their clients (ABA Model Rule 1.4) to determine what the client wants and needs. Generative AI can produce a similar contract to what your client has used in the past based on former and existing data, but a lawyer needs to talk to their client and anticipate what issues may arise in the future, and create contract language to protect the client.”
Shelton’s Crystal Ball: Using AI in the practice of law is inevitably going to become mainstream. There is simply no escaping it. So while state bars and the American Bar Association aren’t exactly nimble market adaptors, they will act more quickly than state and federal governments when it comes to regulating AI in their niche.
Eventually, this will include requiring attorneys to use AI, but that’s the subject of a future article.
Frederick Shelton is the CEO of Shelton & Steele and provides Rainmaking & Legal-Specific AI Consulting to lawyers and law firms. He can be reached at fs@sheltonsteele.com