How AI is Reshaping Legal Skills Development and Practice

 

A recent empirical study, “AI Assistance in Legal Analysis: An Empirical Study”, by Jonathan H. Choi and Daniel Schwarcz, examines the impact of AI, specifically the GPT-4 model, on law school exam performance. The research offers insights into legal skills development, the future trajectory of legal practice, and the current capabilities and constraints of large language models. 

 

Key Takeaways: 

  1. AI’s Role in Exam Performance: The study revealed that AI assistance significantly enhanced students’ performance on multiple-choice law school exam questions. However, its effect on essay questions was markedly weaker. This suggests that while AI can aid quick, fact-based queries, its utility in complex analytical tasks remains uncertain. 
  2. Varied Impact Based on Skill Level: Interestingly, the benefits of AI were not uniform across all student skill levels. While it proved advantageous for lower-performing students, top-tier students witnessed a decline in performance. The decline in top-performing students’ results is attributed to a mix of over-reliance on AI, the challenge of integrating AI outputs, and GPT-4’s current limitations in processing complex prompts. This raises interesting questions about the optimal integration of AI in legal education. 
  3. GPT-4’s Standalone Capabilities: With proper prompting techniques, GPT-4 showcased its prowess by outperforming both the average unassisted student and students working with AI assistance. This underscores the importance of effective communication with AI models to harness their full potential. 
  4. Efficiency Boost: One of the undeniable advantages of AI assistance was the reduced time students took to complete exams. This indicates that AI can improve the speed and quality of legal tasks, allowing more time for deeper legal analysis. 

 

 

The Study Design:

  • The research was conducted at the University of Minnesota Law School with students who volunteered to participate. The authors acknowledge that the sample did not represent the full performance spectrum, with most participants falling in the average range.  
  • Methodology: Students attempted genuine exam segments both with and without AI support. Their results were then compared with historical student performance and with GPT-4 working alone to ascertain the impact. 
  • Prompting Techniques: The study used several AI prompting methods to gauge GPT-4’s performance. These included: 
      • Basic Prompting: Directly copying and pasting the exam question. 
      • Chain-of-Thought Prompting: Encouraging the AI to reason step by step before producing an answer. 
      • Few-Shot Prompting: Providing GPT-4 with exemplary responses to shape its answer. 
      • Grounded Prompting: Feeding the AI relevant lecture notes and other pertinent sources. 
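The four prompting styles above can be sketched as plain prompt-construction functions. This is a minimal illustration, not the authors’ actual prompts: the function names, instruction wording, and sample inputs are all assumptions made for the sake of the example.

```python
def basic_prompt(question: str) -> str:
    # Basic prompting: the exam question is passed through verbatim.
    return question


def chain_of_thought_prompt(question: str) -> str:
    # Chain-of-thought: ask the model to reason step by step
    # before committing to an answer.
    return f"{question}\n\nThink step by step before giving your answer."


def few_shot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: prepend exemplary question/answer pairs to shape the response.
    shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return f"{shots}\n\nQuestion: {question}\nAnswer:"


def grounded_prompt(question: str, sources: list[str]) -> str:
    # Grounded prompting: supply lecture notes or other course materials
    # as context the model should rely on.
    context = "\n\n".join(sources)
    return (
        "Use only the following course materials to answer.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

In practice, each string would be sent to the model via an API; the study’s finding was that the grounded variant, which packs course-specific material into the prompt, produced the strongest answers.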

 

 

Thought-Provoking Insights: 

  • AI’s Transformative Potential: AI will undoubtedly revolutionise legal practice and reshape the development of modern lawyers’ skills, though for now its ability to perform nuanced legal analysis remains limited. 
  • Human vs. AI: Given that GPT-4 alone outperformed both unassisted students and students assisted by AI, the study queries whether having a human in the loop is advantageous for specific tasks or whether a well-trained model can suffice. 
  • Clarity vs. Engagement: While AI assistance enhanced the clarity, reasoning, and accuracy of students’ analyses, it also led to organisational challenges, overlooked issues, and diminished engagement with specific cases. The authors hypothesise that students deferred to GPT-4’s output instead of using it as a springboard for their own critical evaluation. 
  • The Art of Prompting: The research underscores the significance of how AI is prompted. Grounded prompts, where the model was provided with lecture notes and other pertinent data, emerged as the most effective. 

The complete study can be accessed here. 

 

 

Final Thoughts: 

The art of prompting and a shift in mindset might be the key to harnessing AI’s full potential in legal education. By reallocating time from routine tasks to critical thinking and deeper engagement with legal nuances, we might not replace human endeavour but augment it. 

The study serves as a testament to the pivotal role of prompting, with grounded prompts proving the most effective approach. Yet it cannot be assumed that emerging lawyers, despite being digital natives, have the tech fluency needed to fully maximise this transformative technology’s potential.