---
license: mit
tags:
- textual-entailment
- logical-reasoning
- deberta
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
# DELTA: Description Logics with Transformers
DELTA is a transformer model fine-tuned for textual entailment over expressive contexts generated from description logic knowledge bases.
Specifically, the model is given a context (a set of facts and rules) and a question.
It should answer "True" if the question is logically entailed by the context, "False" if it contradicts the context, and "Unknown" if neither holds.
For more information, please see [our paper](https://arxiv.org/abs/2311.08941).
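Below is a minimal inference sketch using Hugging Face Transformers. The Hub model ID is a placeholder, and the sentence-pair input format and label names are assumptions based on the task description above; check the repository for the exact format used in the paper.

```python
# A minimal inference sketch. The Hub model ID below is a placeholder, and the
# context/question pairing and label names are assumptions; see the repository
# for the exact input format used in the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "angelosps/DELTA"  # placeholder; replace with this model's actual Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

context = (
    "Alice is a parent of Bob. "
    "If someone is a parent of a person, then they are an ancestor of that person."
)
question = "Alice is an ancestor of Bob."

# Encode as a sentence pair: context first, question second.
inputs = tokenizer(context, question, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # expected: one of "True", "False", "Unknown"
```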
## Model Details
### Model Description
DELTA<sub>M</sub> is a DeBERTaV3 large model fine-tuned on the DELTA<sub>D</sub> dataset.
- **License:** MIT
- **Finetuned from model:** `microsoft/deberta-v3-large`
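For orientation, here is a hedged sketch of what a three-way (True/False/Unknown) sentence-pair fine-tune from `microsoft/deberta-v3-large` could look like. The data files, column names, and hyperparameters are illustrative assumptions, not the training configuration from the paper.

```python
# A hedged fine-tuning sketch, NOT the paper's actual training configuration.
# The data file, column names ("context", "question", "label"), and all
# hyperparameters are assumptions for illustration only.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

def preprocess(batch):
    # NLI-style sentence-pair encoding: context as premise, question as hypothesis.
    return tokenizer(batch["context"], batch["question"], truncation=True)

# Hypothetical data file with "context", "question", and integer "label" fields.
dataset = load_dataset("json", data_files={"train": "delta_d_train.json"})
tokenized = dataset.map(preprocess, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="delta-m", per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```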
### Model Sources
- **Repository:** https://github.com/angelosps/DELTA
- **Paper:** [Transformers in the Service of Description Logic-based Contexts](https://arxiv.org/abs/2311.08941)
## Citation
**BibTeX:**
```bibtex
@misc{poulis2024transformers,
      title={Transformers in the Service of Description Logic-based Contexts},
      author={Angelos Poulis and Eleni Tsalapati and Manolis Koubarakis},
      year={2024},
      eprint={2311.08941},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```