intoxication committed on
Commit 3fe47db
1 Parent(s): 4d43e49

Upload 12 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Slides[[:space:]]-[[:space:]]How[[:space:]]to[[:space:]]Build[[:space:]]a[[:space:]]QA[[:space:]]Application[[:space:]]With[[:space:]]Haystack.pdf filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,14 @@
+ FROM python:3.10-slim
+
+ # copy code
+ COPY . /ui
+
+ # install as a package
+ RUN pip install --upgrade pip && \
+     pip install /ui/
+
+ WORKDIR /ui
+ EXPOSE 8501
+
+ # cmd for running the UI
+ CMD ["python", "-m", "streamlit", "run", "ui/webapp.py"]
LICENSE ADDED
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2021 deepset GmbH
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md CHANGED
@@ -1,10 +1,139 @@
- ---
- title: Wbrule
- emoji: 🐠
- colorFrom: red
- colorTo: yellow
- sdk: docker
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ## Question Answering Application for Healthcare
+
+ This is a streamlit-based NLP application powering a question answering demo on healthcare data. It's easy to change and extend and can be used to try out Haystack's capabilities.
+
+ A video presentation of this demo is available on [YouTube](https://www.youtube.com/watch?v=pOnkGdOvYfo). To get started with Haystack, please visit the [README](https://github.com/deepset-ai/haystack/tree/main#key-components) or check out our [tutorials](https://haystack.deepset.ai/tutorials/first-qa-system).
+
+ ## Usage
+
+ The easiest way to run the application is through [Docker compose](https://docs.docker.com/compose/).
+ From this folder, just run:
+
+ ```sh
+ docker compose up -d
+ ```
+
+ Docker will start three containers:
+ - `elasticsearch`, running an Elasticsearch instance with some data pre-loaded.
+ - `haystack-api`, running a pre-loaded Haystack pipeline behind a RESTful API.
+ - `ui`, running the streamlit application showing the UI and querying Haystack under the hood.
+
+ Once all the containers are up and running, you can open the user interface by pointing your
+ browser to [http://localhost:8501](http://localhost:8501).
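If the UI does not respond right away, the `haystack-api` container might still be loading its models. A minimal readiness check, assuming the REST API is published on `http://localhost:8000` as in `docker-compose.yml` and polling the same `initialized` status endpoint the UI itself uses:

```python
import time

import requests

API_ENDPOINT = "http://localhost:8000"  # default haystack-api address from docker-compose.yml


def wait_until_ready(timeout: int = 300) -> bool:
    """Poll the REST API status endpoint until it answers or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(f"{API_ENDPOINT}/initialized", timeout=2).status_code < 400:
                return True
        except requests.exceptions.ConnectionError:
            pass  # the container is still starting up
        time.sleep(2)
    return False


if __name__ == "__main__":
    print("Haystack API ready:", wait_until_ready())
```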
+
+ ## Screencast
+ https://user-images.githubusercontent.com/4181769/231965471-48d581a2-e1aa-4316-b3a4-990d9c86800e.mov
+
+ ## Evaluation Mode
+
+ The evaluation mode leverages the feedback REST API endpoint of Haystack. For each result, the user
+ can give feedback with the options "Correct answer", "Wrong answer and wrong passage" and
+ "Wrong answer, but correct passage".
+
+ In order to use the UI in evaluation mode, you need an Elasticsearch instance with pre-indexed files
+ and the Haystack REST API. You can set the environment up via docker images. For Elasticsearch, you
+ can check out our [documentation](https://haystack.deepset.ai/usage/document-store#initialisation),
+ and for setting up the REST API this [link](https://github.com/deepset-ai/haystack/blob/main/README.md#7-rest-api).
+
+ To enter the evaluation mode, select the checkbox "Evaluation mode" in the sidebar. The UI will load
+ the predefined questions from the file [`eval_labels_example.csv`](https://raw.githubusercontent.com/deepset-ai/haystack/main/ui/ui/eval_labels_example.csv).
+ The file needs to be prefilled with your data. This way, the user will get a random question from the
+ set and can give their feedback with the buttons below the questions. To load a new question, click
+ the button "Random question".
+
+ The file just needs to have two columns separated by a semicolon. You can add more columns, but the UI
+ will ignore them. Every line represents a question-answer pair. The column with the questions needs
+ to be named "Question Text" and the answer column "Answer" so that they can be loaded correctly.
+ Currently, the easiest way to create the file is manually by adding question-answer pairs.
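A minimal sketch of how the UI reads this file, mirroring the `pd.read_csv` call and column names used in `ui/webapp.py`; the file path here is just an example:

```python
import pandas as pd

# The UI loads the labels with a semicolon separator; extra columns are simply ignored.
df = pd.read_csv("ui/eval_labels_example.csv", sep=";")  # example path, relative to the repo root

# Pick a random question-answer pair, as the "Random question" button does.
row = df.sample(1)
print(row["Question Text"].values[0], "->", row["Answer"].values[0])
```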
+
+ The feedback can be exported with the API endpoint `export-doc-qa-feedback`. To learn more about
+ finetuning a model with user feedback, please check out our [docs](https://haystack.deepset.ai/usage/domain-adaptation#user-feedback).
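A sketch of pulling the collected feedback from the running API with `requests`; the endpoint name is the one stated above, while the HTTP method, host, and output file name are assumptions that may differ between Haystack versions:

```python
import json

import requests

API_ENDPOINT = "http://localhost:8000"  # assumed default from the compose setup

# Fetch all user feedback collected so far (assumed to be a plain GET endpoint).
response = requests.get(f"{API_ENDPOINT}/export-doc-qa-feedback", timeout=10)
response.raise_for_status()

# Store the export locally; the file name is arbitrary.
with open("feedback_export.json", "w") as f:
    json.dump(response.json(), f, indent=2)
```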
+
+ ## Query different data
+
+ If you want to use this application to query a different corpus, the easiest way is to build the
+ Elasticsearch image, load your own text data and then use the same Compose file to run all
+ three containers. This will require [Docker](https://docs.docker.com/get-docker/) to be
+ properly installed on your machine.
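One way to load your own text data into the running Elasticsearch container is sketched below, assuming farm-haystack 1.x is installed locally, the `elasticsearch` service from the Compose file is reachable on `localhost:9200`, and your files live in a `my_corpus/` directory (the directory name and preprocessing values are illustrative; the embedding model matches the pipeline YAML in this commit):

```python
from pathlib import Path

from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import TextConverter, PreProcessor, EmbeddingRetriever

# Connect to the Elasticsearch container started by docker-compose (port 9200).
document_store = ElasticsearchDocumentStore(host="localhost", port=9200)

converter = TextConverter()
preprocessor = PreProcessor(
    split_by="word", split_length=250, split_overlap=20, split_respect_sentence_boundary=True
)

docs = []
for path in Path("my_corpus").glob("*.txt"):  # directory name is an assumption
    converted = converter.convert(file_path=path, meta={"filename": path.name})
    docs.extend(preprocessor.process(converted))

document_store.write_documents(docs)

# The query pipeline uses an EmbeddingRetriever, so the new documents need embeddings as well.
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/multi-qa-mpnet-base-dot-v1",
)
document_store.update_embeddings(retriever)
```

Once the data is indexed, you can snapshot the container into your own Elasticsearch image and reference it as described in the next section.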
+
+ ### Running your custom build
+
+ Once done, modify the `elasticsearch` section in the `docker-compose.yml` file, changing this line:
+ ```yaml
+ image: "julianrisch/elasticsearch-healthcare"
+ ```
+
+ to:
+
+ ```yaml
+ image: "my-docker-acct/elasticsearch-custom"
+ ```
+
+ Finally, run the compose file as usual:
+ ```sh
+ docker-compose up
+ ```
+
+ ## Development
+
+ If you want to change the streamlit application, you need to set up your Python environment first.
+ From a virtual environment, run:
+ ```sh
+ pip install -e .
+ ```
+
+ The app requires the Haystack RESTful API to be ready and accepting connections at `http://localhost:8000`. You can use Docker compose to start only the required containers:
+
+ ```sh
+ docker-compose up elasticsearch haystack-api
+ ```
+
+ At this point you should be able to make changes and run the streamlit application with:
+
+ ```sh
+ streamlit run ui/webapp.py
+ ```
+
+ ## Using GPUs with Docker
+
+ Assuming you have [nvidia drivers installed](https://developer.nvidia.com/cuda-downloads) on your machine, you can configure Docker to use the GPU for the Haystack API container to speed it up.
+ First, configure the nvidia repository as described here: https://nvidia.github.io/nvidia-container-runtime/. For example:
+ ```sh
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
+   sudo apt-key add -
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
+   sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
+ sudo apt-get update
+ ```
+ Then, install nvidia-container-runtime as described here: https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu.
+ For example:
+ ```sh
+ sudo apt-get install nvidia-container-runtime
+ ```
+ Restart the Docker daemon (or simply the machine).
+ Finally, you can change the docker compose file `docker-compose.yml` so that a docker image prepared for usage with GPUs is used and one GPU is reserved for the Haystack API container:
+ ```yaml
+ haystack-api:
+   image: "deepset/haystack:gpu-v1.14.0"
+   ports:
+     - 8000:8000
+   restart: on-failure
+   volumes:
+     - ./haystack-api:/home/node/app
+   environment:
+     - DOCUMENTSTORE_PARAMS_HOST=elasticsearch
+     - PIPELINE_YAML_PATH=/home/node/app/pipelines_biobert.haystack-pipeline.yml
+   depends_on:
+     elasticsearch:
+       condition: service_healthy
+   deploy:
+     resources:
+       reservations:
+         devices:
+           - driver: nvidia
+             count: 1
+             capabilities: [gpu]
+ ```
Slides - How to Build a QA Application With Haystack.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e280926d06c88738dc87546feaa66973c35cd7689a520da208bbcc66217bb014
+ size 4073318
docker-compose.yml ADDED
@@ -0,0 +1,44 @@
+ version: "3"
+
+ services:
+   elasticsearch:
+     image: "julianrisch/elasticsearch-healthcare"
+     ports:
+       - 9200:9200
+     restart: on-failure
+     # The healthcheck lets the other services wait until Elasticsearch is up; on Apple M1, Elasticsearch might need longer to start
+     healthcheck:
+       test: ["CMD", "curl", "-f", "http://localhost:9200/_cat/health"]
+       interval: 10s
+       timeout: 1s
+       retries: 30
+       start_period: "30s"
+
+   haystack-api:
+     image: "deepset/haystack:cpu-v1.14.0"
+     ports:
+       - 8000:8000
+     restart: on-failure
+     volumes:
+       - ./haystack-api:/home/node/app
+     environment:
+       - DOCUMENTSTORE_PARAMS_HOST=elasticsearch
+       - PIPELINE_YAML_PATH=/home/node/app/pipelines_biobert.haystack-pipeline.yml
+     depends_on:
+       elasticsearch:
+         condition: service_healthy
+
+   ui:
+     image: "julianrisch/demo-healthcare"
+     ports:
+       - 8501:8501
+     restart: on-failure
+     environment:
+       - API_ENDPOINT=http://haystack-api:8000
+       # The value for the following variables will be read from the host, if present.
+       # They can also be temporarily set for docker-compose, for example:
+       # $ DISABLE_FILE_UPLOAD=1 DEFAULT_DOCS_FROM_RETRIEVER=5 docker-compose up
+       - DEFAULT_QUESTION_AT_STARTUP
+       - DEFAULT_DOCS_FROM_RETRIEVER
+       - DEFAULT_NUMBER_OF_ANSWERS
+     command: "/bin/bash -c 'sleep 15 && python -m streamlit run ui/webapp.py'"
haystack-api/pipelines_biobert.haystack-pipeline.yml ADDED
@@ -0,0 +1,75 @@
+ version: ignore
+
+ components:
+   - name: DocumentStore
+     type: ElasticsearchDocumentStore
+     params:
+       host: localhost
+   - name: Retriever  # Selects the most relevant documents from the document store and passes them on to the Reader
+     type: EmbeddingRetriever  # Uses a Transformer model to encode the document and the query
+     params:
+       document_store: DocumentStore
+       embedding_model: sentence-transformers/multi-qa-mpnet-base-dot-v1  # multi-qa-MiniLM-L6-dot-v1
+       embed_meta_fields:
+         - filename
+       top_k: 10  # The number of results to return
+   - name: BM25
+     type: BM25Retriever
+     params:
+       document_store: DocumentStore
+       top_k: 10
+
+   - name: Joiner
+     type: JoinDocuments
+     params:
+       join_mode: reciprocal_rank_fusion
+   - name: Reader  # The component that actually fetches answers from among the 20 documents returned by the retrievers
+     type: FARMReader  # Transformer-based reader, specializes in extractive QA
+     params:
+       model_name_or_path: dmis-lab/biobert-large-cased-v1.1-squad  # dmis-lab/biobert-base-cased-v1.1-squad
+       context_window_size: 700  # The size of the window around the answer span
+   - name: FileTypeClassifier  # Routes files based on their extension to appropriate converters, by default txt, pdf, md, docx, html
+     type: FileTypeClassifier
+   - name: TextConverter  # Converts files into documents
+     type: TextConverter
+   - name: PDFConverter  # Converts PDFs into documents
+     type: PDFToTextConverter
+   - name: Preprocessor  # Splits documents into smaller ones and cleans them up
+     type: PreProcessor
+     params:
+       # With a vector-based retriever, it's good to split your documents into smaller ones
+       split_by: word  # The unit by which you want to split the documents
+       split_length: 250  # The max number of words in a document
+       split_overlap: 20  # Enables the sliding window approach
+       split_respect_sentence_boundary: True  # Retains complete sentences in split documents
+       language: en  # Used by NLTK to best detect the sentence boundaries for that language
+
+
+ # Here you define how the nodes are organized in the pipelines
+ # For each node, specify its input
+ pipelines:
+   - name: query
+     nodes:
+       - name: Retriever
+         inputs: [Query]
+       - name: BM25
+         inputs: [Query]
+       - name: Joiner
+         inputs: [Retriever, BM25]
+       - name: Reader
+         inputs: [Joiner]
+   - name: indexing
+     nodes:
+       # Depending on the file type, we use a Text or PDF converter
+       - name: FileTypeClassifier
+         inputs: [File]
+       - name: TextConverter
+         inputs: [FileTypeClassifier.output_1]  # Ensures this converter receives TXT files
+       - name: PDFConverter
+         inputs: [FileTypeClassifier.output_2]  # Ensures this converter receives PDFs
+       - name: Preprocessor
+         inputs: [TextConverter, PDFConverter]
+       - name: Retriever
+         inputs: [Preprocessor]
+       - name: DocumentStore
+         inputs: [Retriever]
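For reference, a minimal sketch of how a pipeline defined in this YAML can be loaded and queried directly from Python, assuming farm-haystack 1.14 is installed and an Elasticsearch instance with indexed data is reachable on `localhost` (the example question comes from the eval labels file):

```python
from pathlib import Path

from haystack import Pipeline

# Load the query pipeline defined above; the indexing pipeline can be loaded the same way.
pipeline = Pipeline.load_from_yaml(
    Path("haystack-api/pipelines_biobert.haystack-pipeline.yml"), pipeline_name="query"
)

result = pipeline.run(
    query="What are the symptoms of cancer related fatigue?",
    params={"Retriever": {"top_k": 10}, "BM25": {"top_k": 10}, "Reader": {"top_k": 3}},
)
for answer in result["answers"]:
    print(answer.answer, answer.score)
```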
pyproject.toml ADDED
@@ -0,0 +1,73 @@
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [project]
+ name = "ui"
+ description = 'Minimal UI for Haystack (https://github.com/deepset-ai/haystack)'
+ readme = "README.md"
+ requires-python = ">=3.7"
+ license = "Apache-2.0"
+ keywords = []
+ authors = [
+   { name = "deepset.ai", email = "[email protected]" },
+ ]
+ classifiers = [
+   "Development Status :: 5 - Production/Stable",
+   "Intended Audience :: Science/Research",
+   "Topic :: Scientific/Engineering :: Artificial Intelligence",
+   "Operating System :: OS Independent",
+   "Programming Language :: Python",
+   "Programming Language :: Python :: 3.7",
+   "Programming Language :: Python :: 3.8",
+   "Programming Language :: Python :: 3.9",
+   "Programming Language :: Python :: 3.10",
+   "Programming Language :: Python :: Implementation :: CPython",
+ ]
+ dependencies = [
+   "streamlit >= 1.9.0, < 2",
+   "st-annotated-text >= 2.0.0, < 3",
+   "markdown >= 3.3.4, < 4"
+ ]
+ dynamic = ["version"]
+
+ [project.urls]
+ Documentation = "https://github.com/deepset-ai/haystack/tree/main/ui#readme"
+ Issues = "https://github.com/deepset-ai/haystack/issues"
+ Source = "https://github.com/deepset-ai/haystack/tree/main/ui"
+
+ [tool.hatch.version]
+ path = "ui/__about__.py"
+
+ [tool.hatch.build.targets.sdist]
+ [tool.hatch.build.targets.wheel]
+
+ [tool.hatch.envs.default]
+ dependencies = [
+   "pytest",
+   "pytest-cov",
+ ]
+ [tool.hatch.envs.default.scripts]
+ cov = "pytest --cov-report=term-missing --cov-config=pyproject.toml --cov=ui --cov=tests"
+ no-cov = "cov --no-cov"
+
+ [[tool.hatch.envs.test.matrix]]
+ python = ["37", "38", "39", "310"]
+
+ [tool.coverage.run]
+ branch = true
+ parallel = true
+ omit = [
+   "ui/__about__.py",
+ ]
+
+ [tool.coverage.report]
+ exclude_lines = [
+   "no cov",
+   "if __name__ == .__main__.:",
+   "if TYPE_CHECKING:",
+ ]
+
+ [tool.black]
+ line-length = 120
+ skip_magic_trailing_comma = true  # For compatibility with pydoc>=4.6, check if still needed.
ui/__about__.py ADDED
@@ -0,0 +1,10 @@
+ import logging
+
+ from pathlib import Path
+
+
+ __version__ = "0.0.0"
+ try:
+     __version__ = open(Path(__file__).parent.parent / "VERSION.txt", "r").read()
+ except Exception as e:
+     logging.exception("No VERSION.txt found!")
ui/__init__.py ADDED
File without changes
ui/eval_labels_example.csv ADDED
@@ -0,0 +1,10 @@
+ "Question Text";"Answer"
+ "What are treatments for oesophageal cancer?";"radical surgery"
+ "What are symptoms of an infusion reaction?";""
+ "What reduces inflammation in cancer patients?";"physical exercise"
+ "What are the symptoms of cancer related fatigue?";"diminished energy, increased need to rest"
+ "How can pain be treated in cancer patients?";"using opioid combination therapies and carefully dosed adjuvants"
+ "What side-effects can occur if pain is treated with opioids?";
+ "What symptoms may occur when using Ipilimumab?";"Pruritus, maculopapular rash, cough, shortness of breath, chills, rigors, facial flushing, chest, abdominal or back pain."
+ "How should a patient with an anaphylactic reaction be handled?"; "an observation period"
+ "Should stem-cell transplantation be offered to young adults?";"In the younger population, consolidation with autologous stem cell transplantation (ASCT) in patients achieving a CR has been shown to improve long-term outcomes"
ui/utils.py ADDED
@@ -0,0 +1,123 @@
+ # pylint: disable=missing-timeout
+
+ from typing import List, Dict, Any, Tuple, Optional
+
+ import os
+ import logging
+ from time import sleep
+
+ import requests
+ import streamlit as st
+
+
+ API_ENDPOINT = os.getenv("API_ENDPOINT", "http://localhost:8000")
+ STATUS = "initialized"
+ HS_VERSION = "hs_version"
+ DOC_REQUEST = "query"
+ DOC_FEEDBACK = "feedback"
+ DOC_UPLOAD = "file-upload"
+
+
+ def haystack_is_ready():
+     """
+     Used to show the "Haystack is loading..." message
+     """
+     url = f"{API_ENDPOINT}/{STATUS}"
+     try:
+         if requests.get(url).status_code < 400:
+             return True
+     except Exception as e:
+         logging.exception(e)
+         sleep(1)  # To avoid spamming a non-existing endpoint at startup
+     return False
+
+
+ def haystack_version():
+     """
+     Get the Haystack version from the REST API
+     """
+     url = f"{API_ENDPOINT}/{HS_VERSION}"
+     return requests.get(url, timeout=0.1).json()["hs_version"]
+
+
+ def query(query, filters={}, top_k_reader=5, top_k_retriever=5) -> Tuple[List[Dict[str, Any]], Dict[str, str]]:
+     """
+     Send a query to the REST API and parse the answer.
+     Returns both a ready-to-use representation of the results and the raw JSON.
+     """
+
+     url = f"{API_ENDPOINT}/{DOC_REQUEST}"
+     params = {"filters": filters, "Retriever": {"top_k": top_k_retriever}, "Reader": {"top_k": top_k_reader}}
+     req = {"query": query, "params": params}
+     response_raw = requests.post(url, json=req)
+
+     if response_raw.status_code >= 400 and response_raw.status_code != 503:
+         raise Exception(f"{vars(response_raw)}")
+
+     response = response_raw.json()
+     if "errors" in response:
+         raise Exception(", ".join(response["errors"]))
+
+     # Format response
+     results = []
+     answers = response["answers"]
+     for answer in answers:
+         if answer.get("answer", None):
+             results.append(
+                 {
+                     "context": "..." + answer["context"] + "...",
+                     "answer": answer.get("answer", None),
+                     "source": answer["meta"]["name"],
+                     "relevance": round(answer["score"] * 100, 2),
+                     "document": [doc for doc in response["documents"] if doc["id"] in answer["document_ids"]][0],
+                     "offset_start_in_doc": answer["offsets_in_document"][0]["start"],
+                     "_raw": answer,
+                 }
+             )
+         else:
+             results.append(
+                 {
+                     "context": None,
+                     "answer": None,
+                     "document": None,
+                     "relevance": round(answer["score"] * 100, 2),
+                     "_raw": answer,
+                 }
+             )
+     return results, response
+
+
+ def send_feedback(query, answer_obj, is_correct_answer, is_correct_document, document) -> None:
+     """
+     Send a feedback (label) to the REST API
+     """
+     url = f"{API_ENDPOINT}/{DOC_FEEDBACK}"
+     req = {
+         "query": query,
+         "document": document,
+         "is_correct_answer": is_correct_answer,
+         "is_correct_document": is_correct_document,
+         "origin": "user-feedback",
+         "answer": answer_obj,
+     }
+     response_raw = requests.post(url, json=req)
+     if response_raw.status_code >= 400:
+         raise ValueError(f"An error was returned [code {response_raw.status_code}]: {response_raw.json()}")
+
+
+ def upload_doc(file):
+     url = f"{API_ENDPOINT}/{DOC_UPLOAD}"
+     files = [("files", file)]
+     response = requests.post(url, files=files).json()
+     return response
+
+
+ def get_backlink(result) -> Tuple[Optional[str], Optional[str]]:
+     if result.get("document", None):
+         doc = result["document"]
+         if isinstance(doc, dict):
+             if doc.get("meta", None):
+                 if isinstance(doc["meta"], dict):
+                     if doc["meta"].get("url", None) and doc["meta"].get("title", None):
+                         return doc["meta"]["url"], doc["meta"]["title"]
+     return None, None
ui/webapp.py ADDED
@@ -0,0 +1,294 @@
+ import os
+ import sys
+ import logging
+ from pathlib import Path
+ from json import JSONDecodeError
+
+ import pandas as pd
+ import streamlit as st
+ from annotated_text import annotation
+ from markdown import markdown
+
+ from ui.utils import haystack_is_ready, query, send_feedback, upload_doc, haystack_version, get_backlink
+
+
+ # Adjust to a question that you would like users to see in the search bar when they load the UI:
+ DEFAULT_QUESTION_AT_STARTUP = os.getenv("DEFAULT_QUESTION_AT_STARTUP", "What are the symptoms of cancer related fatigue?")
+ DEFAULT_ANSWER_AT_STARTUP = os.getenv("DEFAULT_ANSWER_AT_STARTUP", "diminished energy, increased need to rest")
+
+ # Sliders
+ DEFAULT_DOCS_FROM_RETRIEVER = int(os.getenv("DEFAULT_DOCS_FROM_RETRIEVER", "3"))
+ DEFAULT_NUMBER_OF_ANSWERS = int(os.getenv("DEFAULT_NUMBER_OF_ANSWERS", "3"))
+
+ # Labels for the evaluation
+ EVAL_LABELS = os.getenv("EVAL_FILE", str(Path(__file__).parent / "eval_labels_example.csv"))
+
+ # Whether the file upload should be enabled or not
+ DISABLE_FILE_UPLOAD = bool(os.getenv("DISABLE_FILE_UPLOAD"))
+
+
+ def set_state_if_absent(key, value):
+     if key not in st.session_state:
+         st.session_state[key] = value
+
+
+ def main():
+
+     st.set_page_config(page_title="Haystack Demo", page_icon="https://haystack.deepset.ai/img/HaystackIcon.png")
+
+     # Persistent state
+     set_state_if_absent("question", DEFAULT_QUESTION_AT_STARTUP)
+     set_state_if_absent("answer", DEFAULT_ANSWER_AT_STARTUP)
+     set_state_if_absent("results", None)
+     set_state_if_absent("raw_json", None)
+     set_state_if_absent("random_question_requested", False)
+
+     # Small callback to reset the interface in case the text of the question changes
+     def reset_results(*args):
+         st.session_state.answer = None
+         st.session_state.results = None
+         st.session_state.raw_json = None
+
+     # Title
+     st.write("# Healthcare Demo")
+     st.markdown(
+         """
+ Ask a question and see if Haystack can find the correct answer to your query!
+
+ *Note: do not use keywords, but full-fledged questions.* The demo is not optimized to deal with keyword queries and might misunderstand you.
+ """,
+         unsafe_allow_html=True,
+     )
+
+     # Sidebar
+     st.sidebar.header("Options")
+     top_k_reader = st.sidebar.slider(
+         "Max. number of answers",
+         min_value=1,
+         max_value=10,
+         value=DEFAULT_NUMBER_OF_ANSWERS,
+         step=1,
+         on_change=reset_results,
+     )
+     top_k_retriever = st.sidebar.slider(
+         "Max. number of documents from retriever",
+         min_value=1,
+         max_value=10,
+         value=DEFAULT_DOCS_FROM_RETRIEVER,
+         step=1,
+         on_change=reset_results,
+     )
+     eval_mode = st.sidebar.checkbox("Evaluation mode")
+     debug = st.sidebar.checkbox("Show debug info")
+
+     # File upload block
+     if not DISABLE_FILE_UPLOAD:
+         st.sidebar.write("## File Upload:")
+         data_files = st.sidebar.file_uploader(
+             "upload", type=["pdf", "txt", "docx"], accept_multiple_files=True, label_visibility="hidden"
+         )
+         for data_file in data_files:
+             # Upload file
+             if data_file:
+                 try:
+                     raw_json = upload_doc(data_file)
+                     st.sidebar.write(str(data_file.name) + " &nbsp;&nbsp; βœ… ")
+                     if debug:
+                         st.subheader("REST API JSON response")
+                         st.sidebar.write(raw_json)
+                 except Exception as e:
+                     st.sidebar.write(str(data_file.name) + " &nbsp;&nbsp; ❌ ")
+                     st.sidebar.write("_This file could not be parsed, see the logs for more information._")
+
+
+     hs_version = ""
+     try:
+         hs_version = f" <small>(v{haystack_version()})</small>"
+     except Exception:
+         pass
+
+     st.sidebar.markdown(
+         f"""
+     <style>
+         a {{
+             text-decoration: none;
+         }}
+         .haystack-footer {{
+             text-align: center;
+         }}
+         .haystack-footer h4 {{
+             margin: 0.1rem;
+             padding:0;
+         }}
+         footer {{
+             opacity: 0;
+         }}
+     </style>
+     <div class="haystack-footer">
+         <hr />
+         <h4>Built with <a href="https://haystack.deepset.ai/">Haystack</a> 1.14.0</h4>
+         <p>Get it on <a href="https://github.com/deepset-ai/haystack/">GitHub</a> &nbsp;&nbsp; - &nbsp;&nbsp; Read the <a href="https://docs.haystack.deepset.ai/docs">Docs</a></p>
+     </div>
+     """,
+         unsafe_allow_html=True,
+     )
+
+     # Load csv into pandas dataframe
+     try:
+         df = pd.read_csv(EVAL_LABELS, sep=";")
+     except Exception:
+         st.error(
+             f"The eval file was not found. Please check the demo's [README](https://github.com/deepset-ai/haystack/tree/main/ui/README.md) for more information."
+         )
+         sys.exit(
+             f"The eval file was not found under `{EVAL_LABELS}`. Please check the README (https://github.com/deepset-ai/haystack/tree/main/ui/README.md) for more information."
+         )
+
+     # Search bar
+     question = st.text_input(
+         value=st.session_state.question,
+         max_chars=100,
+         on_change=reset_results,
+         label="question",
+         label_visibility="hidden",
+     )
+     col1, col2 = st.columns(2)
+     col1.markdown("<style>.stButton button {width:100%;}</style>", unsafe_allow_html=True)
+     col2.markdown("<style>.stButton button {width:100%;}</style>", unsafe_allow_html=True)
+
+     # Run button
+     run_pressed = col1.button("Run")
+
+     # Get next random question from the CSV
+     if col2.button("Random question"):
+         reset_results()
+         new_row = df.sample(1)
+         while (
+             new_row["Question Text"].values[0] == st.session_state.question
+         ):  # Avoid picking the same question twice (the change is not visible on the UI)
+             new_row = df.sample(1)
+         st.session_state.question = new_row["Question Text"].values[0]
+         st.session_state.answer = new_row["Answer"].values[0]
+         st.session_state.random_question_requested = True
+         # Re-runs the script setting the random question as the textbox value
+         # Unfortunately necessary as the Random Question button is _below_ the textbox
+         if hasattr(st, "scriptrunner"):
+             raise st.scriptrunner.script_runner.RerunException(
+                 st.scriptrunner.script_requests.RerunData(widget_states=None)
+             )
+         raise st.runtime.scriptrunner.script_runner.RerunException(
+             st.runtime.scriptrunner.script_requests.RerunData(widget_states=None)
+         )
+     st.session_state.random_question_requested = False
+
+     run_query = (
+         run_pressed or question != st.session_state.question
+     ) and not st.session_state.random_question_requested
+
+     # Check the connection
+     with st.spinner("βŒ›οΈ &nbsp;&nbsp; Haystack is starting..."):
+         if not haystack_is_ready():
+             st.error("🚫 &nbsp;&nbsp; Connection Error. Is Haystack running?")
+             run_query = False
+             reset_results()
+
+     # Get results for query
+     if run_query and question:
+         reset_results()
+         st.session_state.question = question
+
+         with st.spinner(
+             "🧠 &nbsp;&nbsp; Performing neural search on documents... \n "
+             "Do you want to optimize speed or accuracy? \n"
+             "Check out the docs: https://haystack.deepset.ai/usage/optimization "
+         ):
+             try:
+                 st.session_state.results, st.session_state.raw_json = query(
+                     question, top_k_reader=top_k_reader, top_k_retriever=top_k_retriever
+                 )
+             except JSONDecodeError as je:
+                 st.error("πŸ‘“ &nbsp;&nbsp; An error occurred reading the results. Is the document store working?")
+                 return
+             except Exception as e:
+                 logging.exception(e)
+                 if "The server is busy processing requests" in str(e) or "503" in str(e):
+                     st.error("πŸ§‘β€πŸŒΎ &nbsp;&nbsp; All our workers are busy! Try again later.")
+                 else:
+                     st.error("🐞 &nbsp;&nbsp; An error occurred during the request.")
+                 return
+
+     if st.session_state.results:
+
+         # Show the gold answer if we use a question of the given set
+         if eval_mode and st.session_state.answer:
+             st.write("## Correct answer:")
+             st.write(st.session_state.answer)
+
+         st.write("## Results:")
+
+         for count, result in enumerate(st.session_state.results):
+             if result["answer"]:
+                 answer, context = result["answer"], result["context"]
+                 start_idx = context.find(answer)
+                 end_idx = start_idx + len(answer)
+                 # Hack due to this bug: https://github.com/streamlit/streamlit/issues/3190
+                 st.write(
+                     markdown(context[:start_idx] + str(annotation(answer, "ANSWER", "#8ef")) + context[end_idx:]),
+                     unsafe_allow_html=True,
+                 )
+                 source = ""
+                 url, title = get_backlink(result)
+                 if url and title:
+                     source = f"[{result['document']['meta']['title']}]({result['document']['meta']['url']})"
+                 else:
+                     source = f"{result['source']}"
+                 st.markdown(f"**Relevance:** {result['relevance']} - **Source:** {source}")
+
+             else:
+                 st.info(
+                     "πŸ€” &nbsp;&nbsp; Haystack is unsure whether any of the documents contain an answer to your question. Try to reformulate it!"
+                 )
+                 st.write("**Relevance:** ", result["relevance"])
+
+             if eval_mode and result["answer"]:
+                 # Define columns for buttons
+                 is_correct_answer = None
+                 is_correct_document = None
+
+                 button_col1, button_col2, button_col3, _ = st.columns([1, 1, 1, 6])
+                 if button_col1.button("πŸ‘", key=f"{result['context']}{count}1", help="Correct answer"):
+                     is_correct_answer = True
+                     is_correct_document = True
+
+                 if button_col2.button("πŸ‘Ž", key=f"{result['context']}{count}2", help="Wrong answer and wrong passage"):
+                     is_correct_answer = False
+                     is_correct_document = False
+
+                 if button_col3.button(
+                     "πŸ‘ŽπŸ‘", key=f"{result['context']}{count}3", help="Wrong answer, but correct passage"
+                 ):
+                     is_correct_answer = False
+                     is_correct_document = True
+
+                 if is_correct_answer is not None and is_correct_document is not None:
+                     try:
+                         send_feedback(
+                             query=question,
+                             answer_obj=result["_raw"],
+                             is_correct_answer=is_correct_answer,
+                             is_correct_document=is_correct_document,
+                             document=result["document"],
+                         )
+                         st.success("✨ &nbsp;&nbsp; Thanks for your feedback! &nbsp;&nbsp; ✨")
+                     except Exception as e:
+                         logging.exception(e)
+                         st.error("🐞 &nbsp;&nbsp; An error occurred while submitting your feedback!")
+
+             st.write("___")
+
+         if debug:
+             st.subheader("REST API JSON response")
+             st.write(st.session_state.raw_json)
+
+
+ main()