CSAle committed on
Commit cd5d16e · 1 Parent(s): 695203b

Preparing for release

Files changed (2)
  1. .gitignore +3 -1
  2. README.md +229 -3
.gitignore CHANGED
@@ -1,2 +1,4 @@
  __pycache__/
- .chainlit/
+ .chainlit/
+ .venv/
+ .env

README.md CHANGED
@@ -10,16 +10,228 @@ license: apache-2.0
- In today's breakout rooms, we will be following the process that you saw during the challenge - for reference, the instructions for that are available [here](https://github.com/AI-Maker-Space/Beyond-ChatGPT/tree/main).
@@ -113,6 +325,20 @@ You just deployed Pythonic RAG!
- ## 🚧CHALLENGE MODE 🚧
- For more of a challenge, please reference [Building a Chainlit App](./BuildingAChainlitApp.md)!

# Deploying Pythonic Chat With Your Text File Application

In today's breakout rooms, we will be following the process that you saw during the challenge.

Today, we will repeat the same process - but powered by the Pythonic RAG implementation we created last week.

You'll notice a few differences in the `app.py` logic - as well as a few changes to the `aimakerspace` package to get things working smoothly with Chainlit.

> NOTE: If you want to run this locally - be sure to use `uv run chainlit run app.py` to start the application outside of Docker.

## Reference Diagram (It's Busy, but it works)

![image](https://i.imgur.com/IaEVZG2.png)

### Anatomy of a Chainlit Application

[Chainlit](https://docs.chainlit.io/get-started/overview) is a Python package, similar to Streamlit, that lets users write a backend and a frontend in a single (or multiple) Python file(s). It is mainly used for prototyping LLM-based chat-style applications - though it is used in production in some settings with millions of MAUs (Monthly Active Users).

The primary method of customizing and interacting with the Chainlit UI is through a few critical [decorators](https://blog.hubspot.com/website/decorators-in-python).

> NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug in" to Chainlit's functionality.

We'll be concerning ourselves with three main scopes (a minimal skeleton showing all three follows this list):

1. On application start - when we start the Chainlit application with a command like `chainlit run app.py`
2. On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
3. On message - when the user sends a message through the input text box in the Chainlit UI

Let's dig into each scope and see what we're doing!
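
Here is a minimal skeleton of those three scopes (an illustration only, not code taken from `app.py`): module-level code runs once at application start, while the decorated functions run per chat session and per message.

```python
import chainlit as cl

# Application start scope: module-level code runs once, when `chainlit run app.py` boots the app.
WELCOME_MESSAGE = "Welcome! Ask me anything."

@cl.on_chat_start  # chat start scope: runs each time a user opens (or refreshes) a chat session
async def on_chat_start():
    await cl.Message(content=WELCOME_MESSAGE).send()

@cl.on_message  # message scope: runs each time the user sends a message
async def on_message(message: cl.Message):
    await cl.Message(content=f"You said: {message.content}").send()
```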

### On Application Start:

The first thing you'll notice is that we have the traditional "wall of imports" - this is to ensure we have everything we need to run our application.

```python
import os
from typing import List
from chainlit.types import AskFileResponse
from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader, PDFLoader  # PDFLoader is used by process_file below
from aimakerspace.openai_utils.prompts import (
    UserRolePrompt,
    SystemRolePrompt,
    AssistantRolePrompt,
)
from aimakerspace.openai_utils.embedding import EmbeddingModel
from aimakerspace.vectordatabase import VectorDatabase
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
import chainlit as cl
```

Next up, we have some prompt templates. Since all sessions will use the same prompt templates without modification, and we don't need them to be specific to each session, we can set them up here - at the application scope.

```python
system_template = """\
Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
system_role_prompt = SystemRolePrompt(system_template)

user_prompt_template = """\
Context:
{context}

Question:
{question}
"""
user_role_prompt = UserRolePrompt(user_prompt_template)
```

> NOTE: You'll notice that these are the exact same prompt templates we used in the Pythonic RAG Notebook in Week 1 Day 2!
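
As a quick illustration (this snippet is not in `app.py`, and the question and context strings are made up), here's what filling in the user template looks like - it's the same `create_message` call our pipeline makes later in this walkthrough.

```python
# Hypothetical values, purely to show how the template gets formatted.
example_user_message = user_role_prompt.create_message(
    question="What is this document about?",
    context="chunk one text...\nchunk two text...\n",
)
print(example_user_message)  # the formatted user-role message that will be passed to the LLM
```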

Following that - we can create the Python class definition for our RAG pipeline - or *chain*, as we'll refer to it for the rest of this walkthrough.

Let's look at the definition first:

```python
class RetrievalAugmentedQAPipeline:
    def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
        self.llm = llm
        self.vector_db_retriever = vector_db_retriever

    async def arun_pipeline(self, user_query: str):
        ### RETRIEVAL
        context_list = self.vector_db_retriever.search_by_text(user_query, k=4)

        context_prompt = ""
        for context in context_list:
            context_prompt += context[0] + "\n"

        ### AUGMENTED
        formatted_system_prompt = system_role_prompt.create_message()

        formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)

        ### GENERATION
        async def generate_response():
            async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
                yield chunk

        return {"response": generate_response(), "context": context_list}
```

Notice a few things:

1. We have modified this `RetrievalAugmentedQAPipeline` from the initial notebook to support streaming.
2. In essence, our pipeline is *chaining* a few events together:
   1. We take our user query and chain it into our Vector Database to collect related chunks
   2. We take those contexts and our user's question and chain them into the prompt templates
   3. We take that formatted prompt and chain it into our LLM call
   4. We chain the response of the LLM call to the user
3. We are using a lot of `async` again - see the small consumption sketch below!
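
To see what streaming buys us, here is a minimal consumption sketch (assuming a pipeline has already been built, as shown later in this walkthrough) - each token can be handled the moment the LLM produces it, instead of waiting for the full answer.

```python
# Minimal sketch: consume the async generator returned by arun_pipeline.
async def demo(pipeline: RetrievalAugmentedQAPipeline) -> None:
    result = await pipeline.arun_pipeline("What is this document about?")
    async for token in result["response"]:
        print(token, end="", flush=True)  # handle each chunk as soon as it arrives
```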

Now, we're going to create a helper function for processing uploaded files (plain text or PDF).

First, we'll instantiate a shared `CharacterTextSplitter`.

```python
text_splitter = CharacterTextSplitter()
```

Now we can define our helper.

```python
def process_file(file: AskFileResponse):
    import tempfile
    import shutil

    print(f"Processing file: {file.name}")

    # Create a temporary file with the correct extension
    suffix = f".{file.name.split('.')[-1]}"
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as temp_file:
        # Copy the uploaded file content to the temporary file
        shutil.copyfile(file.path, temp_file.name)
        print(f"Created temporary file at: {temp_file.name}")

    # Create appropriate loader
    if file.name.lower().endswith('.pdf'):
        loader = PDFLoader(temp_file.name)
    else:
        loader = TextFileLoader(temp_file.name)

    try:
        # Load and process the documents
        documents = loader.load_documents()
        texts = text_splitter.split_texts(documents)
        return texts
    finally:
        # Clean up the temporary file
        try:
            os.unlink(temp_file.name)
        except Exception as e:
            print(f"Error cleaning up temporary file: {e}")
```

Simply put, this saves the upload to a temporary file, loads it with `TextFileLoader` (or `PDFLoader` for PDFs), splits it with our `CharacterTextSplitter`, and returns the resulting list of strings!

#### ❓ QUESTION #1:

Why do we want to support streaming? What about streaming is important, or useful?

### On Chat Start:

The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window or refreshes an existing chat window.

You'll see that our code is set up to immediately show the user a chat box asking them to upload a file.

```python
files = None  # keep asking until the user has uploaded a file
while files is None:
    files = await cl.AskFileMessage(
        content="Please upload a Text or PDF file to begin!",
        accept=["text/plain", "application/pdf"],
        max_size_mb=2,
        timeout=180,
    ).send()
```

Once we've obtained the text file - we'll use our processing helper function to process our text!
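
In code, that step is just a call to the helper we defined above - roughly like this (variable names here are illustrative rather than copied from `app.py`):

```python
file = files[0]             # AskFileMessage returns a list of uploaded files
texts = process_file(file)  # chunked strings, ready to be embedded
```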

After we have processed our text file - we'll need to create a `VectorDatabase` and populate it with our processed chunks and their related embeddings!

```python
vector_db = VectorDatabase()
vector_db = await vector_db.abuild_from_list(texts)
```

Once we have that piece completed - we can create the chain we'll be using to respond to user queries!

```python
retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
    vector_db_retriever=vector_db,
    llm=chat_openai  # an instance of ChatOpenAI created earlier in app.py
)
```

Now, we'll save that into our user session!

> NOTE: Chainlit has some great documentation about [User Session](https://docs.chainlit.io/concepts/user-session).
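
Saving the chain is a single call - sketched here with `"chain"` as the key, since that's the key we read back in the next section:

```python
cl.user_session.set("chain", retrieval_augmented_qa_pipeline)
```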

#### ❓ QUESTION #2:

Why are we using User Session here? What is it about Python that makes us need to use this? Why not just store everything in a global variable?

### On Message

First, we load our chain from the user session:

```python
chain = cl.user_session.get("chain")
```

Then, we run the chain on the content of the message - and stream it to the front end - that's it!

```python
msg = cl.Message(content="")
result = await chain.arun_pipeline(message.content)

async for stream_resp in result["response"]:
    await msg.stream_token(stream_resp)
```
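
Putting the whole message scope together, the handler looks roughly like this (a sketch of how the pieces above fit inside the decorator - depending on your Chainlit version, you finish the streamed message with `send` or `update`):

```python
@cl.on_message
async def main(message: cl.Message):
    chain = cl.user_session.get("chain")  # the pipeline we stored at chat start
    msg = cl.Message(content="")

    result = await chain.arun_pipeline(message.content)
    async for stream_resp in result["response"]:
        await msg.stream_token(stream_resp)  # push each token to the UI as it arrives

    await msg.send()  # finalize the streamed message (older Chainlit versions use `await msg.update()`)
```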

### 🎉

With that - you've turned our Pythonic RAG notebook into a working Chainlit application!

## Deploying the Application to Hugging Face Space

Due to the way the repository is created - it should be straightforward to deploy this to a Hugging Face Space!

Try uploading a text file and asking some questions!

#### Discussion Question #1:

Upload a PDF file of the recent DeepSeek-R1 paper and ask the following questions:

1. What is RL and how does it help reasoning?
2. What is the difference between DeepSeek-R1 and DeepSeek-R1-Zero?
3. What is this paper about?

Does this application pass your vibe check? Are there any immediate pitfalls you're noticing?

## 🚧 CHALLENGE MODE 🚧

For the challenge mode, please instead create a simple FastAPI backend with a React (or any other JS framework) frontend.

You can use the same prompt templates and RAG pipeline as we did here - but you'll need to modify the code to work with FastAPI and React. A minimal FastAPI starting point is sketched below.

Deploy this application to Hugging Face Spaces!
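
This sketch assumes the `RetrievalAugmentedQAPipeline` class from `app.py` is importable and that you have already populated a `VectorDatabase`; the file name, endpoint name, and wiring are illustrative, not a prescribed solution.

```python
# server.py - hypothetical CHALLENGE MODE starting point, not the required implementation.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

from aimakerspace.openai_utils.chatmodel import ChatOpenAI
from aimakerspace.vectordatabase import VectorDatabase
from app import RetrievalAugmentedQAPipeline  # the same chain we walked through above

app = FastAPI()

class QueryRequest(BaseModel):
    question: str

# In a real build you would populate the VectorDatabase from an uploaded or pre-indexed
# document (as in on_chat_start above) before serving queries.
vector_db = VectorDatabase()
pipeline = RetrievalAugmentedQAPipeline(llm=ChatOpenAI(), vector_db_retriever=vector_db)

@app.post("/query")
async def query(request: QueryRequest):
    # arun_pipeline returns {"response": <async generator>, "context": [...]}
    result = await pipeline.arun_pipeline(request.question)
    # StreamingResponse accepts an async generator, so tokens stream to the frontend as they arrive.
    return StreamingResponse(result["response"], media_type="text/plain")
```

You could serve it with something like `uv run uvicorn server:app --reload` and point your React frontend's `fetch` call at the `/query` endpoint.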