LawGPT: A Chinese Legal Knowledge-Enhanced Large Language Model

Z Zhou, JX Shi, PX Song, XW Yang, YX Jin, LZ Guo, YF Li
arXiv preprint arXiv:2406.04614, 2024

Large language models (LLMs), both proprietary and open-source, have showcased remarkable capabilities in addressing a wide range of downstream tasks. Nonetheless, when it comes to practical Chinese legal tasks, these models fall short of real-world requirements: proprietary models do not ensure data privacy for sensitive legal cases, while open-source models deliver unsatisfactory performance due to their lack of legal knowledge. To address this problem, we introduce LawGPT, the first open-source model specifically designed for Chinese legal applications. LawGPT comprises two key components: legal-oriented pre-training and legal supervised fine-tuning. Specifically, we employ large-scale Chinese legal documents for legal-oriented pre-training to incorporate legal domain knowledge. To further improve the model's performance on downstream legal tasks, we create a knowledge-driven instruction dataset for legal supervised fine-tuning. Our experimental results demonstrate that LawGPT outperforms the open-source LLaMA 7B model. Our code and resources are publicly available at https://github.com/pengxiao-song/LaWGPT and have received 5.7K stars on GitHub.
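The abstract describes a two-stage recipe: continued (legal-oriented) pre-training on raw Chinese legal documents, followed by supervised fine-tuning on a knowledge-driven instruction dataset. The sketch below illustrates that pipeline with the Hugging Face Transformers Trainer. The base checkpoint name, file paths, prompt template, and hyperparameters are illustrative assumptions, not details taken from the paper or the LaWGPT repository.

```python
# Minimal two-stage sketch: (1) continued causal-LM pre-training on legal text,
# (2) instruction fine-tuning with the loss masked to response tokens only.
# Paths, hyperparameters, and the prompt template are assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

BASE = "huggyllama/llama-7b"  # assumed base; the paper compares against LLaMA 7B
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE)

# ---- Stage 1: legal-oriented pre-training on raw Chinese legal documents ----
corpus = load_dataset("text", data_files={"train": "legal_docs.txt"})  # hypothetical file

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

pt_data = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
Trainer(
    model=model,
    args=TrainingArguments(output_dir="lawgpt-pt", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=pt_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

# ---- Stage 2: legal supervised fine-tuning on instruction/response pairs ----
sft = load_dataset("json", data_files={"train": "legal_instructions.json"})  # hypothetical

def format_pair(ex):
    # Assumed prompt template; loss is computed only on the response tokens.
    prompt = f"问题：{ex['instruction']}\n回答："
    enc = tokenizer(prompt + ex["output"] + tokenizer.eos_token,
                    truncation=True, max_length=1024)
    # Approximate prompt length (tokenization boundaries may shift slightly).
    n_prompt = min(len(tokenizer(prompt)["input_ids"]), len(enc["input_ids"]))
    enc["labels"] = [-100] * n_prompt + enc["input_ids"][n_prompt:]
    return enc

sft_data = sft["train"].map(format_pair, remove_columns=sft["train"].column_names)
Trainer(
    model=model,
    args=TrainingArguments(output_dir="lawgpt-sft", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=sft_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, padding=True,
                                         label_pad_token_id=-100),
).train()
```

A real 7B-scale run would typically rely on parameter-efficient methods (e.g., LoRA) and distributed training rather than the full-parameter Trainer calls shown here; this is only a compact stand-in for the pipeline the abstract outlines.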