Training Data: The information used to “teach” an AI model how to generate content or respond to inputs. For generative AI, training data includes vast collections of text, images, or other materials that help the model recognize patterns, language structures, and relationships between concepts. In legal contexts, the quality and type of training data affect how reliable or accurate the AI’s responses are.
Prompt Engineering: The practice of designing effective inputs or instructions (called “prompts”) to guide AI tools toward useful, accurate responses. In law, this could mean carefully phrasing a request for a case summary, a first draft of a memo, or the legal rules relevant to an issue.
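To illustrate (these prompts are hypothetical examples, not drawn from any particular tool’s documentation): a vague prompt such as “Tell me about negligence” invites a generic answer. A more carefully engineered prompt, such as “In two paragraphs, summarize the elements of common-law negligence under Texas law, and flag any point where you are uncertain rather than guessing,” specifies jurisdiction, scope, and format, and builds in a safeguard against fabricated authority.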
Large Language Model (LLM): A type of AI model trained on massive amounts of text to understand and generate human-like language. Well-known examples include the models behind OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. These models power most generative AI tools used in legal research and writing.
Hallucination: A term used when an AI generates incorrect or made-up information that appears believable. In legal research, hallucinations can be especially problematic if the AI invents fake case citations, statutes, or facts.
Lexis+ Protégé is a generative AI assistant developed by LexisNexis. It is designed to help legal professionals reduce time spent on repetitive tasks, assist with complex analysis, and support legal research and drafting. Protégé is just one example of how AI is being integrated into legal research platforms to enhance productivity.
For more information, visit the Lexis+ Protégé FAQs.
CoCounsel is a generative AI tool developed by Thomson Reuters that integrates with platforms like Westlaw, Practical Law, and Microsoft 365. It is designed to support legal professionals by streamlining tasks such as legal research, drafting, and document analysis. CoCounsel combines LLM capabilities with access to verified legal content and is supported by a team of AI and machine learning specialists.
For more information, visit the CoCounsel Help & Support page.
Generative artificial intelligence (AI) is a type of AI that uses complex models and algorithms to create new, original content, including text, images, code, or even legal documents. Unlike tools that simply summarize or classify existing information, generative AI can produce something entirely new, based on patterns it has learned from large sets of data.
In the legal field, generative AI has already begun to change how lawyers work. While not a replacement for legal analysis or professional judgment, it can be a helpful tool for legal research, drafting, and other routine tasks.
Generative AI offers several potential benefits in a law practice setting:
Helps Identify Relevant Sources: AI tools can assist with locating key cases, statutes, and secondary sources based on a legal issue.
Aids in Drafting: It can generate first drafts of documents like memos, contracts, or client emails, saving time on routine writing tasks.
Improves Efficiency: Tasks that normally take hours—like document review or summarizing cases—can be sped up with the help of AI.
Assists in Review: AI can help review a wide range of legal documents by flagging inconsistencies, summarizing content, and identifying key terms or potential issues.
Simplifies Complex Topics: Some tools can explain legal concepts in simpler terms, which can help with client communication.
However, AI tools are not perfect—and they introduce risks if not used carefully:
May Lack Contextual Understanding: AI doesn’t always understand the full legal context or jurisdiction-specific rules.
Can Reflect Bias: The content it generates may reflect biases present in the data it was trained on.
Legal and Ethical Concerns: Outputs will need careful review to ensure they meet legal and ethical standards.
Security and Confidentiality: Using AI tools could involve uploading sensitive information, which raises privacy concerns.
Overdependence: It’s important not to rely too heavily on AI. Legal analysis, reasoning, and ethical decision-making still require human judgment.
Outdated or Incomplete Results: Some tools aren’t connected to up-to-date legal databases and may miss recent developments.
Firms and Courts May Have Specific Rules: Law firms and courts may have their own policies on whether and how AI can be used in legal work.
As AI tools become more common in legal settings, it’s also important to understand the costs of using them and the intellectual property issues they raise.
Law firms and legal departments must consider more than just the price of an AI subscription:
Software & Licensing Fees
Training and Onboarding for Staff
Preparing and Cleaning Data for Use
Maintaining and Updating Tools
Cybersecurity and Data Protection
Hiring Experts or Consultants
Monitoring Outputs for Accuracy and Fairness
Providing Ongoing Support and Training for Users
AI-generated content can raise a number of legal questions, such as:
Who Owns the Content Created by AI?
Does the AI Output Infringe on Someone Else’s Copyright or Trademark?
Is the Use of AI Training Data Considered Fair Use?
Are Trade Secrets or Confidential Information Being Exposed?
What Rights or Limitations Come with the Software Itself (especially open-source tools)?
Does the Output Qualify for Legal Protection, like Patents or Copyright?
The use of generative AI in legal practice is growing rapidly. According to the American Bar Association’s 2024 Legal Technology Survey, AI adoption among law firms nearly tripled in a single year, from 11% in 2023 to 30% in 2024. Larger firms are leading the way, but even solo and small-firm practitioners are increasingly using AI tools.
The most commonly reported benefit is increased efficiency, especially in legal research, document review, and drafting. Tools like ChatGPT, Lexis+ Protégé, and CoCounsel are among the most widely used or considered. Smaller firms show a strong preference for general-purpose platforms like ChatGPT, while larger firms lean toward legal-specific solutions.
Still, concerns remain. Attorneys cite accuracy, reliability, and data privacy as their top worries. Despite these concerns, the profession sees AI as an inevitable part of the future—more than half of respondents believe AI will be mainstream in legal practice within the next three years.
So far, legal research is the top area where AI is being used, followed by case strategy, understanding judges, and even predicting outcomes. As AI becomes more common in law, attorneys are turning to continuing legal education (CLE) programs, legal publications, and peer discussions to learn how to use these tools responsibly and effectively.
As AI tools become more common in legal research, writing, and client service, lawyers must consider how their use fits within ethical and professional responsibility standards. One key obligation is the lawyer’s duty of technological competence.
Under Texas Disciplinary Rule of Professional Conduct 1.01, Comment 8, each lawyer should strive “to become and remain proficient and competent in the practice of law, including the benefits and risks associated with relevant technology.” This means staying informed about how AI is being used in the profession and understanding both its capabilities and its limitations.
Using generative AI in legal work also touches on several core ethical duties:
Duty of Competence: Attorneys must understand how to use AI tools appropriately and ensure that their use does not lead to errors, omissions, or reliance on incorrect information.
Duty of Supervision: Lawyers cannot delegate responsibility for legal judgment to a machine. Whether AI is used by staff, other non-lawyers, or attorneys themselves, a supervising lawyer must review all of its output.
Duty of Confidentiality: Using AI tools—especially those hosted on third-party platforms—requires careful attention to data privacy and client confidentiality.
Duty of Candor to the Court: Attorneys must ensure that filings generated with AI tools are accurate, truthful, and supported by legitimate legal authority.
Duty to Avoid Misrepresentation: Misstating the capabilities or limitations of AI tools (to clients, courts, or colleagues) could lead to ethical violations.
In short, using AI does not relax a lawyer’s ethical obligations; it raises new questions that require careful, informed judgment. Lawyers must become familiar with both the benefits and risks of AI tools, and keep an eye on evolving rules, guidance, and best practices from courts and bar associations.
As generative AI tools become more common in legal practice, courts are starting to respond with new expectations and rules. One of the most notable trends is the rise of AI-use certifications—statements lawyers must submit to the court confirming whether they used generative AI in drafting a filing, and if so, affirming that a human attorney has carefully reviewed and verified the content.
These certifications aim to prevent problems like inaccurate case citations or fabricated legal authorities, which some AI tools have been known to produce. In a now widely discussed 2023 case, Mata v. Avianca, lawyers who submitted a filing containing fake case law generated by ChatGPT were sanctioned. Since then, similar incidents have continued to arise, and courts across the country have begun adopting proactive policies to ensure that filings are accurate and attorney-reviewed.
These developments underscore a key principle: AI can assist with legal work, but it cannot replace the responsibility attorneys have to verify and stand behind their submissions. As AI continues to evolve, staying current with local court rules and professional standards will be critical.