Extending the use of open-source LLMs for context-specific legal understanding


Detailed Bibliography
Author: García Montero, Patricio Santiago
Material Type: masterThesis
Publication Info: 2025
Online Access: https://repositorio.yachaytech.edu.ec/handle/123456789/988
Other Information
Abstract: This study investigates the potential of open-source Large Language Models (LLMs) to enhance the accessibility and understanding of complex legal information. Legal texts often present linguistic and structural barriers for non-experts, highlighting the need for tools that can support natural language interaction with normative content. We propose a framework that integrates Fine-Tuning and Retrieval-Augmented Generation (RAG) techniques to adapt LLMs for domain-specific legal applications. Using synthetic, validated question-answer datasets, we train and evaluate multiple open-source models and introduce novel benchmarks to assess performance across different difficulty levels. Results show that the fine-tuned models significantly reduce perplexity (up to 10×) and, when combined with RAG, achieve accuracy levels exceeding 90%. To demonstrate adaptability, an additional use case involving travel-related legal norms in an environmentally protected insular region is included. This confirms the scalability of the approach to diverse legal scenarios, supporting its potential for broader adoption in legal information systems.
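The RAG step described in the abstract can be sketched in a few lines: retrieve the legal passages most relevant to a user's question, then assemble a grounded prompt for the (fine-tuned) model. This is a minimal illustrative sketch, not the thesis implementation: the corpus, the bag-of-words cosine scoring, and the prompt template are all assumptions standing in for the actual retriever and normative texts.

```python
# Hypothetical minimal RAG retrieval-and-prompting step.
# Real systems would use dense embeddings and an LLM call; here a
# stdlib-only bag-of-words cosine similarity keeps the sketch runnable.
import math
import re
from collections import Counter

def bow(text):
    """Lowercased bag-of-words vector (punctuation stripped)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, corpus, k=2):
    """Return the k passages most similar to the question."""
    q = bow(question)
    return sorted(corpus, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]

def build_prompt(question, passages):
    """Ground the model's answer in the retrieved normative text."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these legal provisions:\n{context}\nQuestion: {question}"

# Invented example corpus, loosely echoing the insular-region use case.
corpus = [
    "Article 12: Visitors to the protected insular region must hold a transit permit.",
    "Article 7: Commercial fishing requires a license issued by the ministry.",
    "Article 3: Waste disposal at sea is prohibited within the reserve.",
]
question = "Do tourists need a permit to visit the island reserve?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

Because the answer prompt is restricted to the retrieved provisions, the model's output stays tied to the source norms; this grounding is what the abstract credits for the accuracy gains over fine-tuning alone.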