# Identifying and Analysing Political Quotes from the Danish Parliament Related to Climate Change Using NLP
**KlimaBERT** is a sequence classifier fine-tuned to predict whether political quotes are climate-related. For the positive class 1, "climate-related", the model achieves an F1-score of 0.97, a precision of 0.97, and a recall of 0.97. The negative class, 0, is defined as "non-climate-related".
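As a quick illustration of how the positive-class metrics above are defined, here is a minimal sketch using `scikit-learn`; the labels are toy data, not the actual evaluation set:

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy labels: 1 = "climate-related", 0 = "non-climate-related".
# These are illustrative only, not KlimaBERT's evaluation data.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]

# average="binary" reports precision/recall/F1 for the positive class (1).
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```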

KlimaBERT is fine-tuned from the pre-trained DaBERT-uncased model on a training set of 1,000 manually labelled data points. The training set contains both political quotes and summaries of bills from the [Danish Parliament](https://www.ft.dk/).

The model was created to identify political quotes related to climate change and performs best on official texts from the Danish Parliament.
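Usage with the 🤗 `transformers` text-classification pipeline could look like the sketch below; the hub id `jonahank/KlimaBERT` is a hypothetical placeholder, so substitute the actual checkpoint path for this model:

```python
from transformers import pipeline

# Hypothetical hub id; replace with the actual KlimaBERT checkpoint path.
MODEL_ID = "jonahank/KlimaBERT"

def classify_quotes(quotes, model_id=MODEL_ID):
    """Label Danish political quotes as climate-related (1) or not (0)."""
    clf = pipeline("text-classification", model=model_id)
    return clf(quotes)
```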

### Fine-tuning
To fine-tune a model similar to KlimaBERT, follow the [fine-tuning notebooks](https://github.com/jonahank/Vote-Prediction-Model/tree/main/climate_classifier).

### References
BERT:
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding.
https://arxiv.org/abs/1810.04805

DaBERT:
Certainly (2021). Certainly has trained the most advanced Danish BERT model to date.
https://www.certainly.io/blog/danish-bert-model/

### Acknowledgements
The resources were created as part of my Master's thesis, so I would like to thank my supervisors [Leon Derczynski](https://www.derczynski.com/itu/) and [Vedran Sekara](https://vedransekara.github.io/) for their great support throughout the project! And a HUGE thanks to [Gustav Gyrst](https://github.com/Gyrst) for great sparring and co-development of the tools you find in this repo.

---
language:
- da
tags:
- climate change
- climate-classifier
---