replace q8 with q5_K_M
- app.py +1 -1
- article.md +1 -1
app.py
CHANGED
@@ -10,7 +10,7 @@ MODEL_PATH = snapshot_download("nopperl/emissions-extraction-lora-merged-GGUF")
 
 def predict(input_method, document_file, document_url):
     document_path = document_file if input_method == "File" else document_url
-    emissions = extract_emissions(document_path, MODEL_PATH, model_name="ggml-model-
+    emissions = extract_emissions(document_path, MODEL_PATH, model_name="ggml-model-Q5_K_M.gguf")
     return emissions.model_dump_json()
 
 with open("description.md", "r") as f:
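The input-selection logic in `predict()` above can be sketched in isolation: the file path is used when the user uploaded a file, otherwise the URL is used. The helper name `choose_document_path` is chosen here for illustration and does not appear in the repository.

```python
def choose_document_path(input_method, document_file, document_url):
    # Mirrors the selection in predict(): prefer the uploaded file path
    # when the input method is "File", otherwise fall back to the URL.
    return document_file if input_method == "File" else document_url


if __name__ == "__main__":
    print(choose_document_path("File", "report.pdf", "https://example.com/report.pdf"))
    print(choose_document_path("URL", None, "https://example.com/report.pdf"))
```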
article.md
CHANGED
@@ -1 +1 @@
-Technical overview: The system retrieves the relevant pages of the uploaded report using simple search. These pages are input into a finetuned [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) language model, which outputs a JSON object containing the emission information. The system achieves an emission extraction accuracy of
+Technical overview: The system retrieves the relevant pages of the uploaded report using simple search. These pages are input into a finetuned [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) language model, which outputs a JSON object containing the emission information. The system achieves an emission extraction accuracy of 65% and a source citation accuracy of 69% on the [corporate-emission-reports](https://huggingface.co/datasets/nopperl/corporate-emission-reports) dataset. Note that the model is quantized due to resource limitations.