# Model Card for EnvironmentalBERT-biodiversity
## Model Description
Based on the paper cited below, EnvironmentalBERT-biodiversity is a language model trained to classify biodiversity-related text in the ESG/nature domain. Starting from the EnvironmentalBERT-base model, it is further fine-tuned on a dataset of roughly 2.2k labeled samples to detect biodiversity text.
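For illustration, a fine-tuning setup of this kind might look like the sketch below. The dataset file, column names, and hyperparameters are placeholders, not the authors' exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "ESGBERT/EnvironmentalBERT-base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# "biodiversity.csv" is a placeholder for a ~2.2k-sample labeled dataset
# with "text" and "label" columns (label: 1 = biodiversity, 0 = not).
dataset = load_dataset("csv", data_files="biodiversity.csv")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3, per_device_train_batch_size=16),
    train_dataset=dataset["train"],
)
trainer.train()
```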
## How to Get Started With the Model
It is highly recommended to first classify a sentence as "environmental" or not with the EnvironmentalBERT-environmental model before classifying whether it is "biodiversity" or not.
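A minimal sketch of that two-stage setup follows; the "environmental" label string is an assumption about the EnvironmentalBERT-environmental model's output and should be checked against that model's config.

```python
from transformers import pipeline

# Two-stage screening: keep only sentences classified as "environmental",
# then check those for "biodiversity".
env_pipe = pipeline("text-classification", model="ESGBERT/EnvironmentalBERT-environmental")
bio_pipe = pipeline("text-classification", model="ESGBERT/EnvironmentalBERT-biodiversity")

sentence = "The majority of species are eliminated by modern agriculture techniques."
env_result = env_pipe(sentence, truncation=True)[0]
if env_result["label"] == "environmental":  # assumed label string; verify against the model card
    print(bio_pipe(sentence, truncation=True)[0])
```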
See these tutorials on Medium for a guide on model usage, large-scale analysis, and fine-tuning.
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

tokenizer_name = "ESGBERT/EnvironmentalBERT-biodiversity"
model_name = "ESGBERT/EnvironmentalBERT-biodiversity"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, model_max_length=512)  # max_len is deprecated
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)  # set device=0 to use a GPU
# See https://huggingface.co./docs/transformers/main_classes/pipelines#transformers.pipeline

print(pipe("The majority of species are eliminated by modern agriculture techniques.", padding=True, truncation=True))
```
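The pipeline returns a list of dictionaries with `label` and `score` keys, e.g. `[{'label': '...', 'score': 0.99}]`; the exact label strings are defined in the model's config.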
More details can be found in the paper:
```bibtex
@article{Schimanski23ExploringNature,
    title={{Exploring Nature: Datasets and Models for Analyzing Nature-Related Disclosures}},
    author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
    year={2023},
    journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```