Prompt Engineering 101 for Lawyers
Generative artificial intelligence has taken the world by storm. The advent of publicly and inexpensively available tools to create text, images, audio, video, and more has had an enormous impact on educators, authors, artists, and attorneys. Attorneys have long used technology built on advanced artificial intelligence, including eDiscovery, analytics, forecasting, and legal research tools. Generative artificial intelligence, however, is springing up everywhere and promises to free lawyers from mundane tasks so they can spend more time on high-touch, high-value activities.
How much of generative artificial intelligence (GAI) is just hype? Ryan McClead of the legal tech consultancy Sente Advisors summarizes the current moment: “We’re reaching a critical mass where [lawyers are] using it, finally, and saying: ‘But it doesn’t do what I thought it was going to do.’” Despite the attention, GAI is still in its infancy, but it is unlikely to go away. Lawyers have time to get up to speed with this technology. Assessing the products is certainly part of the equation, but lawyers will also need to learn prompt engineering.
What is Prompt Engineering?
Prompt engineering is the art and science of interfacing with a GAI tool to get the most reliable response. Long before GAI, law librarians came up with a mnemonic to help lawyers construct a legal research strategy – JUST ASK (a Law Librarian):
- Jurisdiction – Federal or state? Court or administrative, regulatory or legislative, or a combination?
- Useful Tips – Don’t recreate the wheel! Has this research been done?
- Scope of Research – How extensive should the search be?
- Terms of Art – Determine and define search terms, synonyms, and similar concepts
- Acronyms – Look them up and find out what they mean
- Sources – Any secondary treatments? A treatise, law review articles?
- Key Cost Restraints – How much can you bill the client?
A similar strategy must be deployed if lawyers are to get the most out of a GAI tool. Determine what output is needed – a spreadsheet, a chart, plain text, an image? What are the strengths and weaknesses of the tools you have access to? What is included in the data set? Does it include anchoring data like a firm’s document repository or a legal research database? When was the core data in the LLM (large language model) last updated? Does it include information from web indexes like Google? Will it cite sources? Asking these questions first and choosing the best tool for the job is imperative for the best possible output.
Prompting Pitfalls and Challenges
When crafting prompts for AI systems, legal professionals should be mindful of the following common pitfalls:
Using Overly Broad Prompts: Prompts that are too broad can lead to vague or irrelevant responses from the AI system. It’s important to be specific and clear in your prompts to guide the AI toward the desired output.
Assuming GAI Understands Context: AI systems do not have the same understanding of context as humans do. Avoid crafting prompts that rely heavily on implied context or unspoken assumptions.
Disregarding Confidentiality and Privacy: Users should ensure that the prompts they craft do not lead the AI to generate responses that could violate confidentiality or privacy rules.
Underestimating the Time and Effort Required: Crafting effective prompts can be time-consuming and require a significant amount of effort. It’s important to plan accordingly and allocate sufficient time for this task.
Precision: Legal language is highly precise and structured. Crafting prompts that can guide the AI to generate responses with the same level of precision can be challenging.
Avoiding Hallucinations: GAI systems can generate “hallucinations” or information that isn’t based on the input data. Crafting prompts that minimize these hallucinations is a significant challenge.
Limitations of GAI Tools: Not all GAI tools are the same; each has strengths and weaknesses. Just as you assess a legal research tool for reliability, you will need to assess the tools you use for assisting with work product. For instance, a study from June 2024 showed that many big AI models have recency bias – they “remember” the last thing they heard better than other information, even if it is not as important.
By being aware of these pitfalls and challenges, lawyers and legal professionals can craft more effective prompts, scrutinize outputs, and make better use of GAI systems in their work.
Priming
Once you determine which GAI tool is best suited for the task, consider spending some time priming the GAI. What is priming? Priming is a foundational technique in prompt engineering that enhances the interaction between users and GAI models by establishing clear context and guiding the model’s behavior from the outset. Priming consists of the initial prompts you input into the GAI to provide context, structure, and style. In the same way you would not ask a first-year associate to generate a pretrial motion without significant guidance and instruction, you must guide the GAI to get the best output.
Priming helps the LLM understand the context of the task at hand. By providing relevant information or instructions in the first prompt, users can influence how the model interprets subsequent prompts and generates responses. This is essential for achieving more accurate and relevant outputs.
Effective priming often involves multiple iterations of interaction before requesting the desired output. This iterative process allows users to refine the context and guide the model more effectively, enhancing the quality of the final response.
By carefully crafting the initial prompt, users can establish the structure and style of the conversation. This gives them fine-grained control over the AI’s responses, making it possible to tailor the output to specific needs or preferences.
Providing examples or detailed instructions within the priming prompt can significantly improve the model’s performance. The more specific and clear the initial prompt, the better the model can align its responses with user expectations.
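To make this concrete, here is a minimal priming sketch for readers comfortable with a little scripting. It assumes the OpenAI Python SDK, an illustrative model name, and an invented scenario, none of which is required: in a chat interface, the same priming text is simply the first message you type before asking for the work product.

```python
# A minimal priming sketch. The OpenAI Python SDK, the model name, and the
# scenario below are illustrative assumptions; any chat-based GAI tool works
# the same way if you paste the priming text in as your first message.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The priming message establishes role, audience, and style before any task is asked.
priming = (
    "You are assisting a civil litigation attorney. Write in plain English for a "
    "client audience, do not cite any authority you cannot name, and flag any "
    "point that needs attorney review."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; substitute the tool you actually use
    messages=[
        {"role": "system", "content": priming},
        {"role": "user", "content": "Draft a short client email explaining what a protective order is."},
    ],
)
print(response.choices[0].message.content)
```

The same priming text can then be followed by several rounds of refinement before you ask for the final output.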
Best Practices for Prompting
You have chosen the best GAI for the output you want, entered some priming prompts, and now you are ready to “engineer” your prompt. Remember, prompt engineering is both an art and a science. It requires creativity, testing, and a deep understanding of the AI system you’re working with.
Here are some best practices that can lead to more effective prompts:
Be Specific and Precise: Provide clear, detailed instructions in your prompt. The more specific you are, the less room there is for the AI to “fill in the gaps” with potentially incorrect information.
Provide Context: While AI systems don’t understand context in the same way humans do, providing relevant context in your prompt can help guide the AI’s response.
Use Clear Language: Avoid using jargon or overly complex language in your prompts. The clearer and simpler your language, the better the AI will understand what you’re asking for. If you need to, define terms.
Break Complex Queries into Steps: For complicated tasks, break them down into smaller, manageable steps. This allows you to guide the AI’s reasoning process more carefully. Many GAI tools will suggest further prompts. Sometimes these suggestions are helpful; other times they may lead the prompting in a different direction. Note the suggestions, but don’t let them divert you from your carefully crafted prompts.
Test and Refine: Prompt engineering is an iterative process. Test your prompts, assess the AI’s responses, and refine your prompts based on the results.
Use Few-Shot Prompting: Provide examples of the kind of response you’re looking for, as in the sketch below. This can help steer the AI toward the correct format and content.
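Here is a minimal few-shot sketch under the same assumptions as the priming example (the OpenAI Python SDK and an illustrative model name); in a chat window, the same worked examples can simply be pasted ahead of the real question. The clauses are invented for illustration.

```python
# A minimal few-shot prompting sketch. The SDK, model name, and clauses are
# illustrative assumptions; the point is that worked examples precede the task.
from openai import OpenAI

client = OpenAI()

# Two worked examples show the model the exact format expected before it
# handles the new input at the end of the prompt.
few_shot_prompt = """Summarize each clause in one plain-English sentence.

Clause: "Either party may terminate this Agreement upon thirty (30) days' written notice."
Summary: Either side can end the agreement by giving 30 days' written notice.

Clause: "This Agreement shall be governed by the laws of the State of North Carolina."
Summary: North Carolina law controls disputes under the agreement.

Clause: "The Receiving Party shall not disclose Confidential Information to any third party."
Summary:"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```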
Much prompt engineering guidance suggests that you tell the GAI to role-play (“you are a data scientist explaining prompt engineering”) and respond with a specific tone (“responses should be professional, courteous, and inclusive”). Using these tactics together is called RICE – Role, Instructions, Context, Expectations. See this excellent guide from the AI Law Librarians on prompt engineering for legal research: A Legal Research Prompting Guide and Generative AI System Comparison Exercise – AI Law Librarians.
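As a rough illustration, the sketch below assembles a RICE-structured prompt. The scenario and facts are invented; the assembled text can be pasted into whichever GAI tool you use, or sent through an API as in the earlier sketches.

```python
# A sketch of a RICE-structured prompt (Role, Instructions, Context, Expectations).
# The scenario and facts are invented for illustration.
role = "You are a paralegal experienced in residential landlord-tenant matters."
instructions = "Draft a demand letter seeking return of a residential security deposit."
context = (
    "The tenant vacated on June 30, left the unit clean, and the landlord has "
    "neither returned the deposit nor provided an itemized statement within 30 days."
)
expectations = (
    "Keep it to one page, use a professional but firm tone, do not cite any "
    "statute you cannot name, and close with a 10-day deadline to respond."
)

# Assemble the four parts into a single prompt, one labeled line per element.
rice_prompt = (
    f"Role: {role}\n"
    f"Instructions: {instructions}\n"
    f"Context: {context}\n"
    f"Expectations: {expectations}"
)
print(rice_prompt)
```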
Learning How to Prompt
There are many resources, tools, and training guides available to assist legal professionals with prompt engineering. Here are a few:
- Legal Prompting 101 – The legal practice of using generative AI chatbots
- Introducing AI Prompt Worksheets for the Legal Profession
- Fundamentals of Prompt Engineering for Lawyers (altaclaro.com)
- Prompt Engineering Skills for Legal Professionals | Udemy
Whether you take a course, use a prompt generator for help, or read lots of articles on the topic, the most important way to learn how to prompt in GAI systems is to practice. Record your prompt strings in a spreadsheet, noting which model/tool you used, what you asked, what the output was, and whether you considered it successful. Remember that the answer will be different each time you submit your prompt, so check to see how the second or third answer compares to the first. Try the same prompt on different LLMs; for example, test a prompt on Poe.com to see how responses compare.
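For readers who would rather script that record-keeping than maintain it by hand, here is a minimal logging sketch; the file name and columns are assumptions, and a plain spreadsheet works just as well.

```python
# A minimal prompt-logging sketch. The file name and columns are assumptions;
# the goal is simply a running record of model, prompt, output, and outcome.
import csv
from datetime import date

LOG_FILE = "prompt_log.csv"  # hypothetical file name

def log_prompt(model, prompt, output, successful):
    """Append one row per attempt: date, tool/model, what was asked,
    what came back, and whether you judged the result successful."""
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([date.today().isoformat(), model, prompt, output, successful])

# Example entries comparing the same prompt across two tools.
log_prompt("Tool A", "Summarize this NDA clause...", "Too vague on the first try.", "no")
log_prompt("Tool B", "Summarize this NDA clause...", "Clear one-sentence summary.", "yes")
```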
Conclusion
By incorporating these strategies, lawyers can craft prompts that are more likely to elicit accurate, nuanced, and reliable information from AI systems. However, it’s crucial to remember that while these techniques can significantly reduce hallucinations, they don’t eliminate them entirely. AI outputs should always be verified by legal professionals and never treated as authoritative legal advice.
In the future, reliance on complex prompting will likely diminish, just as the need for Boolean searching was reduced by natural language search. However, understanding how to build a good prompt will help you get the best results from different products.
Catherine Sanders Reach serves as director of the NCBA Center for Practice Management.