Text Classification
Transformers
PyTorch
English
deberta-v2
Inference Endpoints
ikrysinska committed
Commit 91c3a24
1 Parent(s): 3693df1

Update README.md


Tweak presentation

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
@@ -48,17 +48,25 @@ The model was trained to classify a pair of short texts: tweet and conspiracy th
### Out-of-Scope Use

**Spreading/Generating Tweets that support conspiracy theories:**
+
This model is explicitly designed to classify and understand tweets related to COVID-19 conspiracy theories, particularly to determine whether a tweet supports or denies a specific conspiracy theory. It is not intended to, and should not be used to, generate or propagate tweets that endorse or support any conspiracy theory. Any use of the model for such purposes is considered unethical and goes against the intended use case.
+
**Amplifying echo chambers of social subnetworks susceptible to conspiracy theories:**
+
While the model can help identify tweets that are related to conspiracy theories, it should not be used to target or amplify echo chambers or social subnetworks that are susceptible to believing in conspiracy theories. Ethical use of this model involves promoting responsible and unbiased information dissemination and discourages actions that may contribute to the spread of misinformation or polarization. Users should be cautious about using this model in ways that may further divide communities or promote harmful narratives.

## Bias, Risks, and Limitations

**Results may be distorted for conspiracy theories outside the training dataset:**
+
This model has been specifically fine-tuned to classify tweets related to a predefined set of COVID-19 conspiracy theories. As a result, its performance may be less reliable when applied to conspiracy theories or topics that were not included in the training data. Users should exercise caution and consider the potential for distorted results when applying this model to subjects beyond its training scope. The model may not perform well in categorizing or understanding content that falls outside the designated conspiracy theories.
+
**Unintentional stifling of legitimate public discourse:**
+
The model's primary purpose is to identify tweets related to COVID-19 conspiracy theories; it is not intended to stifle legitimate public discourse or eliminate discussions that merely resemble conspiracy theories. There is a risk that using this model inappropriately may lead to the suppression of valid conversations and the removal of content that is not explicitly conspiratorial but is flagged due to similarities in language or topic. Users should be aware of this limitation and use the model judiciously, ensuring that it does not impede the free exchange of ideas and discussions.
+
**Bias in decision making:**
+
Like many machine learning models, this model may exhibit bias in its decision-making process. Factors such as text style, which may reflect the socio-economic status of individuals, may inadvertently affect the model's classifications. The model's outputs may not always be entirely free from bias, and users should treat its predictions as supplementary information rather than definitive judgments.
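Since the card describes a sequence-pair classifier (a tweet paired with a conspiracy-theory statement) built on DeBERTa-v2 with Transformers/PyTorch, a minimal usage sketch follows for orientation. The repository id, example texts, and label names below are placeholders rather than values taken from this commit; the actual labels come from the checkpoint's `id2label` mapping. Passing the two texts as a sentence pair mirrors the setup noted in the hunk header ("a pair of short texts: tweet and conspiracy th...").

```python
# Minimal sketch of tweet/conspiracy-theory pair classification.
# The repository id and example texts are hypothetical placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "ikrysinska/covid-conspiracy-stance"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

tweet = "5G towers are making people sick."
theory = "COVID-19 symptoms are caused by 5G radiation."

# The card describes the input as a pair of short texts (tweet, conspiracy theory),
# so both are passed to the tokenizer as a sentence pair.
inputs = tokenizer(tweet, theory, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# Label names (e.g. support/deny) depend on the checkpoint's id2label mapping.
print(model.config.id2label[pred_id])
```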