Authors: Dor Bernsohn, Gil Semo, Yaron Vazana, Gila Hayat, Ben Hagag, Joel Niklaus, Rohit Saha, Kyryl Truskovskyi
Published on: February 06, 2024
Impact Score: 8.05
arXiv code: arXiv:2402.04335
Summary
- What is new: Fine-tuning BERT-family models and running few-shot experiments with closed-source large language models (LLMs) on legal NLP tasks.
- Why this is important: Detecting legal violations in unstructured text, and associating them with the individuals they affect, is a difficult and high-value problem.
- What the research proposes: Two datasets constructed with LLMs and validated by expert annotators, focused specifically on class-action cases.
- Results: An F1-score of 62.69% for identifying legal violations and 81.02% for associating victims with those violations.
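For context on the reported scores, the following is a minimal sketch of how an entity-level F1 might be computed over predicted versus gold spans; the paper's exact evaluation scheme, span format, and labels are assumptions here, not taken from the source.

```python
def entity_f1(predicted, gold):
    """Compute precision, recall, and F1 over sets of predicted vs. gold entity spans."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)  # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical spans as (start, end, label) tuples
gold = [(0, 4, "VIOLATION"), (10, 15, "VIOLATED_BY")]
pred = [(0, 4, "VIOLATION"), (20, 25, "VIOLATION")]
p, r, f1 = entity_f1(pred, gold)
print(round(f1, 2))  # → 0.5
```

Exact-match span F1 is a common but strict choice; partial-overlap scoring would yield different numbers.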
Technical Details
Technological frameworks used: BERT-family models, open-source and closed-source LLMs
Models used: Fine-tuned BERT-family models and few-shot LLMs
Data used: Two datasets constructed for legal NLP tasks, validated by domain experts
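To illustrate the few-shot setup mentioned above, here is a minimal sketch of how a classification prompt for a closed-source LLM might be assembled. The prompt wording, labels, and example sentences are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical in-context examples for a violation-detection prompt.
FEW_SHOT_EXAMPLES = [
    ("The company continued selling the recalled product after the ban.", "VIOLATION"),
    ("The quarterly report was filed on time.", "NO_VIOLATION"),
]

def build_prompt(text: str) -> str:
    """Assemble a few-shot classification prompt from labeled examples plus the query."""
    lines = ["Classify each sentence as VIOLATION or NO_VIOLATION.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {example}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is expected to complete the final label.
    lines.append(f"Sentence: {text}")
    lines.append("Label:")
    return "\n".join(lines)

print(build_prompt("Customers were charged fees that were never disclosed."))
```

The prompt string would then be sent to an LLM API; the fine-tuned BERT-family baselines, by contrast, learn the label space directly from the annotated datasets.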
Potential Impact
Legal tech firms, law offices specializing in class-action lawsuits, legal documentation services
Want to implement this idea in a business?
We have generated a startup concept here: LegalTech Insight.