
Model Card for EnvironmentalBERT-environmental

Model Description

Based on this paper, this is the EnvironmentalBERT-environmental language model, a language model trained to better classify environmental texts in the ESG domain.

Using the EnvironmentalBERT-base model as a starting point, the EnvironmentalBERT-environmental language model is additionally fine-tuned on a 2k environmental dataset to detect environmental text samples.
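
As a rough illustration of this fine-tuning step, the sketch below uses the Hugging Face Trainer on a small binary classification dataset; the file name, label mapping, and hyperparameters are assumptions for illustration, not the exact setup used to train this model.

# Minimal fine-tuning sketch (illustrative only): file name, label names,
# and hyperparameters are assumptions, not the actual training setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_name = "ESGBERT/EnvironmentalBERT-base"

# Hypothetical CSV with a "text" column and a binary "label" column
# (1 = environmental, 0 = none).
dataset = load_dataset("csv", data_files={"train": "environmental_2k.csv"})

tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForSequenceClassification.from_pretrained(base_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="environmentalbert-environmental",
    num_train_epochs=3,              # assumed value
    per_device_train_batch_size=16,  # assumed value
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()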

How to Get Started With the Model

See these tutorials on Medium for a guide on model usage, large-scale analysis, and fine-tuning.

You can use the model with a pipeline for text classification:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
 
tokenizer_name = "ESGBERT/EnvironmentalBERT-environmental"
model_name = "ESGBERT/EnvironmentalBERT-environmental"
 
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)
 
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer) # set device=0 to use GPU
 
# See https://huggingface.co./docs/transformers/main_classes/pipelines#transformers.pipeline
print(pipe("Scope 1 emissions are reported here on a like-for-like basis against the 2013 baseline and exclude emissions from additional vehicles used during repairs.", padding=True, truncation=True))
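
The pipeline returns a list with one dictionary per input, each holding a predicted label and a confidence score, e.g. [{'label': '...', 'score': 0.99}]; the exact label names are taken from the model's configuration.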

More details can be found in the paper:

@article{Schimanski23ESGBERT,
    title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
    author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
    year={2023},
    journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}