Dataset Card for French Legal Cases Dataset

Dataset Summary

This dataset combines French legal cases from multiple sources (INCA, JADE, CASS, CAPP) into a unified format with overlapping text triplets. It includes decisions from various French courts, processed to facilitate natural language processing tasks.
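
The dataset can be loaded with the Hugging Face datasets library. A minimal sketch, using the repository id from the citation below:

from datasets import load_dataset

# Download the single train split (see Data Splits below)
ds = load_dataset("la-mousse/combined-fr-caselaw", split="train")

# Or iterate lazily without downloading everything up front
ds_stream = load_dataset("la-mousse/combined-fr-caselaw", split="train", streaming=True)
print(next(iter(ds_stream))["case_number"])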

Supported Tasks and Leaderboards

  • Tasks:
    • Text Generation
    • Legal Document Analysis
    • Text Classification
    • Language Modeling

Languages

The dataset is monolingual (French).

Dataset Structure

Data Instances

Each instance contains:

  • Document identifiers (id, url, case_number)
  • Original text content (contenu)
  • Summaries (sommaire, sommaire_bis)
  • Metadata (metadata, a JSON-formatted string)
  • Source tag (dataset_source)
  • Text triplets (previous_text, current_text, next_text)
  • Position tracking (triplet_index, window_index)

Example:

{
    'id': 'CASS12345',
    'url': 'https://...',
    'case_number': '12-34567',
    'contenu': 'Full text...',
    'sommaire': 'Summary...',
    'sommaire_bis': 'Additional summary...',
    'metadata': '{"date_decision": "2023-01-01", ...}',
    'dataset_source': 'CASS',
    'previous_text': 'Previous chunk...',
    'current_text': 'Current chunk...',
    'next_text': 'Next chunk...',
    'triplet_index': 0,
    'window_index': 0
}
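
Because metadata is stored as a JSON-formatted string, it must be decoded before use. A minimal sketch, assuming ds was loaded as shown above (the date_decision key is taken from the example instance):

import json

example = ds[0]
# Decode the JSON string into a Python dict
meta = json.loads(example["metadata"])
print(meta.get("date_decision"))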

Data Fields

  • id: Unique identifier
  • url: Source URL
  • case_number: Case reference number
  • contenu: Full text content
  • sommaire: Primary summary
  • sommaire_bis: Secondary summary
  • metadata: JSON string containing additional metadata
  • dataset_source: Origin dataset (INCA/JADE/CASS/CAPP)
  • previous_text: Previous text chunk
  • current_text: Current text chunk
  • next_text: Next text chunk
  • triplet_index: Position in sequence of triplets
  • window_index: Window number for long texts
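
Using triplet_index and window_index, the chunks of a document can be put back in reading order. A minimal sketch, assuming both indices are 0-based and increase through each document (as in the example instance above):

from collections import defaultdict

# Group rows per document, then sort by window and triplet position
by_doc = defaultdict(list)
for row in ds:
    by_doc[row["id"]].append(row)

for doc_id, rows in by_doc.items():
    rows.sort(key=lambda r: (r["window_index"], r["triplet_index"]))
    chunks_in_order = [r["current_text"] for r in rows]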

Data Splits

  • Training split only

Dataset Creation

Curation Rationale

This dataset was created to provide a standardized format for French legal texts, with overlapping text chunks suitable for various NLP tasks.

Source Data

Initial Data Collection and Normalization

  • INCA: unpublished ("inédites") Court of Cassation decisions
  • JADE: Administrative court decisions
  • CASS: published Court of Cassation decisions
  • CAPP: Court of Appeal decisions

Preprocessing

  • Text chunking with 230-token chunks and 30-token overlap
  • Sliding window approach for long texts
  • Metadata preservation and standardization
  • Token count verification
  • JSON formatting for metadata
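
The exact tokenizer and chunking code are not published; the following sketch only illustrates the scheme described above, using whitespace tokens as a stand-in for the real tokenizer:

CHUNK_SIZE = 230   # tokens per chunk
OVERLAP = 30       # tokens shared between consecutive chunks
STRIDE = CHUNK_SIZE - OVERLAP  # the sliding window advances 200 tokens

def make_triplets(text):
    tokens = text.split()
    chunks = [" ".join(tokens[i:i + CHUNK_SIZE])
              for i in range(0, len(tokens), STRIDE)]
    triplets = []
    for i, current in enumerate(chunks):
        triplets.append({
            "previous_text": chunks[i - 1] if i > 0 else "",
            "current_text": current,
            "next_text": chunks[i + 1] if i + 1 < len(chunks) else "",
            "triplet_index": i,
        })
    return triplets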

Quality Control

  • Token length verification
  • Chunk coherence checks
  • Metadata validation
  • Error logging and handling
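
A token-length check in the same spirit, reusing the sketch above (again with whitespace tokens standing in for the real tokenizer):

for t in make_triplets("...some decision text..."):
    assert len(t["current_text"].split()) <= CHUNK_SIZE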

Considerations for Using the Data

Social Impact of Dataset

This dataset aims to improve access to and understanding of French legal decisions, potentially benefiting legal research and analysis.

Discussion of Biases

The dataset may reflect inherent biases in the French legal system and case selection/publication processes.

Other Known Limitations

  • Limited to published decisions
  • Varying detail levels across sources
  • Potential OCR errors in source texts

Additional Information

Dataset Curators

La-Mousse

Licensing Information

CC-BY-4.0

Citation Information

@misc{french-legal-cases-2024,
    title={French Legal Cases Dataset},
    author={La-Mousse},
    year={2024},
    publisher={HuggingFace},
    url={https://huggingface.co./datasets/la-mousse/combined-fr-caselaw}
}

Contributions

Thanks to @huggingface for the dataset hosting and infrastructure.
