CSAle committed

Commit d40a83a • 1 Parent(s): 67bc0df

Updating App

Files changed (4):
  1. README.md +177 -2
  2. app.py +31 -9
  3. chainlit.md +2 -13
  4. tools.py +14 -1
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- title: GPT4.5TurboApp
+ title: GPT4TurboApp
  emoji: 📊
  colorFrom: blue
  colorTo: purple
@@ -7,4 +7,179 @@ sdk: docker
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ <p align="center" draggable="false"><img src="https://github.com/AI-Maker-Space/LLM-Dev-101/assets/37101144/d1343317-fa2f-41e1-8af1-1dbb18399719"
+ width="200px"
+ height="auto"/>
+ </p>
+
+
+ <h1 align="center" id="heading">:wave: Welcome to Beyond ChatGPT!!</h1>
+
+ ## 🤖 GPT-4 Turbo Application with DALL-E 3 Image Generation
+
+ > If you need an introduction to `git`, or information on how to set up API keys for the tools we'll be using in this repository, check out our [Interactive Dev Environment for LLM Development](https://github.com/AI-Maker-Space/Interactive-Dev-Environment-for-LLM-Development/tree/main), which has everything you need to get started!
+
+ In this repository, we'll walk you through the steps to create a Large Language Model (LLM) application using Chainlit, then containerize it using Docker, and finally deploy it on Hugging Face Spaces.
+
+ Are you ready? Let's get started!
+
+ <details>
+ <summary>🖥️ Accessing "gpt-3.5-turbo" (ChatGPT) like a developer</summary>
+
+ 1. Head to [this notebook](https://colab.research.google.com/drive/1mOzbgf4a2SP5qQj33ZxTz2a01-5eXqk2?usp=sharing) and follow along with the instructions!
+
+ 2. Complete the notebook and try out your own system/assistant messages!
+
+ That's it! Head to the next step and start building your application! A minimal sketch of the kind of call the notebook walks through is shown below.
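For reference, the heart of the notebook is a plain Chat Completions call. Here's a minimal sketch, assuming you have the `openai` Python package installed and `OPENAI_API_KEY` set in your environment (the prompts are purely illustrative):

```python
# A minimal sketch of calling "gpt-3.5-turbo" like a developer.
# Assumes: `pip install openai` and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # try swapping in your own system/assistant messages here
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain LLMs in one sentence."},
    ],
)

print(response.choices[0].message.content)
```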
+
+ </details>
+
+
+ <details>
+ <summary>🏗️ Building Your GPT-4 Turbo Application with DALL-E 3 Image Generation</summary>
+
+ 1. Clone [this](https://github.com/AI-Maker-Space/Beyond-ChatGPT/tree/main) repo.
+
+ ``` bash
+ git clone https://github.com/AI-Maker-Space/Beyond-ChatGPT.git
+ ```
+
+ 2. Navigate inside this repo.
+ ``` bash
+ cd Beyond-ChatGPT
+ ```
+
+ 3. Install the packages required for this Python environment from `requirements.txt`.
+ ``` bash
+ pip install -r requirements.txt
+ ```
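To confirm the environment is ready, you can quickly check that the core packages resolved (a sketch using only the standard library; the package names are assumed from this repo's imports):

```python
# Quick sanity check that the core packages installed - run inside your env.
from importlib.metadata import version

for pkg in ("chainlit", "openai", "langchain"):
    print(pkg, version(pkg))  # raises PackageNotFoundError if missing
```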
+
+ 4. Open your `.env` file and replace the `###` with your OpenAI API key, then save the file.
+ ``` bash
+ OPENAI_API_KEY=sk-###
+ ```
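If you're curious how the app picks this key up at runtime: a common pattern (and a reasonable assumption for this repo, though the exact loading code lives in `app.py`) is to read `.env` with `python-dotenv`:

```python
# A sketch of the usual .env loading pattern - assuming python-dotenv is
# available; check app.py for how this repo actually does it.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

openai_api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if unset
assert openai_api_key.startswith("sk-"), "That doesn't look like an OpenAI key!"
```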
+
+ 5. Let's try deploying it locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI, then run the app using Chainlit. This may take a minute to start.
+ ``` bash
+ chainlit run app.py -w
+ ```
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/54bcccf9-12e2-4cef-ab53-585c1e2b0fb5">
+ </p>
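The `-w` flag tells Chainlit to watch your files and auto-reload as you edit. If this is your first Chainlit app, the skeleton is tiny; here's a hedged, minimal example of the shape such an app takes (illustrative only, not this repo's actual `app.py`):

```python
# minimal_chainlit.py - a bare-bones Chainlit app, for orientation only.
# Run with: chainlit run minimal_chainlit.py -w
import chainlit as cl


@cl.on_chat_start
async def start():
    # runs once per user session, when the chat window opens
    await cl.Message(content="Hello! Ask me anything.").send()


@cl.on_message
async def main(message: cl.Message):
    # runs on every user message; echo it back for demonstration
    await cl.Message(content=f"You said: {message.content}").send()
```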
+
+ Great work! Let's see if we can interact with our chatbot.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/854e4435-1dee-438a-9146-7174b39f7c61">
+ </p>
+
+ Awesome! Time to throw it into a Docker container and prepare it for shipping!
+ </details>
+
+
+
+ <details>
+ <summary>🐳 Containerizing our App</summary>
+
+ 1. Let's build the Docker image. We'll tag our image as `llm-app` using the `-t` parameter. The `.` at the end sets the build context to our current directory, so the files here are available to the build.
+
+ ``` bash
+ docker build -t llm-app .
+ ```
+
+ 2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps the **host port** (to the left of the `:`) to the **container port** (to the right of the `:`).
+
+ ``` bash
+ docker run -p 7860:7860 llm-app
+ ```
+
+ 3. Visit http://localhost:7860 in your browser to see if the app runs correctly.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/2c764f25-09a0-431b-8d28-32246e0ca1b7">
+ </p>
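If you'd rather sanity-check the container from a script than a browser, a quick probe works too (assuming the `requests` package is available; the URL matches the `-p 7860:7860` mapping above):

```python
# A tiny smoke test for the containerized app - a hypothetical helper,
# not part of this repo.
import requests

resp = requests.get("http://localhost:7860", timeout=5)
print(resp.status_code)  # expect 200 if the Chainlit app is up
```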
+
+ Great! Time to ship!
+ </details>
+
+
+ <details>
+ <summary>🚀 Deploying Your First LLM App</summary>
+
+ 1. Let's create a new Hugging Face Space. Navigate to [Hugging Face](https://huggingface.co) and click on your profile picture on the top right, then click on `New Space`.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/f0656408-28b8-4876-9887-8f0c4b882bae">
+ </p>
+
+ 2. Set up your Space as shown below:
+
+ - Owner: Your username
+ - Space Name: `llm-app`
+ - License: `Openrail`
+ - Select the Space SDK: `Docker`
+ - Docker Template: `Blank`
+ - Space Hardware: `CPU basic - 2 vCPU - 16 GB - Free`
+ - Repo type: `Public`
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/8f16afd1-6b46-4d9f-b642-8fefe355c5c9">
+ </p>
+
+ 3. You should see something like this. We're now ready to send our files to our Hugging Face Space. After cloning the Space's repo, move your files into it and push them, including the Dockerfile that ships with this repo. You DO NOT need to create a new Dockerfile. Make sure NOT to push your `.env` file; it should be ignored automatically.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/cbf366e2-7613-4223-932a-72c67a73f9c6">
+ </p>
+
+ 4. After pushing all files, navigate to the settings in the top right to add your OpenAI API key.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/a1123a6f-abdd-4f76-bea4-39acf9928762">
+ </p>
+
+ 5. Scroll down to `Variables and secrets` and click on `New secret` on the top right.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/a8a4a25d-752b-4036-b572-93381370c2db">
+ </p>
+
+ 6. Set the name to `OPENAI_API_KEY` and add your OpenAI key under `Value`. Click save.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/0a897538-1779-48ff-bcb4-486af30f7a14">
+ </p>
+
+ 7. To ensure your key is being used, we recommend you `Restart this Space`.
+
+ <p align="center" draggable="false">
+ <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/fb1d83af-6ebe-4676-8bf5-b6d88f07c583">
+ </p>
+
+ 8. Congratulations! You just deployed your first LLM app! 🚀🚀🚀 Get on LinkedIn and post your results and experience! Make sure to tag us at #AIMakerspace!
+
+ Here's a template to get your post started!
+
+ ```
+ 🚀🎉 Exciting News! 🎉🚀
+
+ 🏗️ Today, I'm thrilled to announce that I've successfully built and shipped my first-ever LLM application using the powerful combination of Chainlit, Docker, and the OpenAI API! 🖥️
+
+ Check it out 👇
+ [LINK TO APP]
+
+ A big shoutout to @**AI Makerspace** for making this all possible. Couldn't have done it without the incredible community there. 🤗🙏
+
+ Looking forward to building with the community! 🙌✨ Here's to many more creations ahead! 🥂🎉
+
+ Who else is diving into the world of AI? Let's connect! 🌍💡
+
+ #FirstLLM #Chainlit #Docker #OpenAI #AIMakerspace
+ ```
+
+ </details>
+
+ <p></p>
+
+ ### That's it for now! And so it begins.... :)
app.py CHANGED
@@ -20,7 +20,8 @@ def rename(orig_author):
          "AgentExecutor": "The LLM Brain",
          "LLMChain": "The Assistant",
          "GenerateImage": "DALL-E 3",
-         "ChatOpenAI": "GPT-4.5 Turbo",
+         "ChatOpenAI": "GPT-4 Turbo",
+         "Chatbot": "Coolest App",
      }
      return mapping.get(orig_author, orig_author)

@@ -67,33 +68,54 @@ async def start():
  async def setup_agent(settings):
      print("Setup agent with following settings: ", settings)

+     # We set up our agent with the user-selected (or default) settings here.
      llm = ChatOpenAI(
          temperature=settings["Temperature"],
          streaming=settings["Streaming"],
          model=settings["Model"],
      )
+
+     # We get our memory here, which is used to track the conversation history.
      memory = get_memory()
+
+     # This suffix is used to provide the chat history to the prompt.
      _SUFFIX = "Chat history:\n{chat_history}\n\n" + SUFFIX

+     # We initialize our agent here, which is simply being used to decide between
+     # responding with text or an image.
      agent = initialize_agent(
-         llm=llm,
-         tools=[generate_image_tool],
-         agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
-         memory=memory,
+         llm=llm,  # our LLM (default is GPT-4 Turbo)
+         tools=[
+             generate_image_tool
+         ],  # our custom tool used to generate images with DALL-E 3
+         agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,  # the agent type we're using today
+         memory=memory,  # our memory!
          agent_kwargs={
-             "suffix": _SUFFIX,
+             "suffix": _SUFFIX,  # adding our chat history suffix
              "input_variables": ["input", "agent_scratchpad", "chat_history"],
          },
      )
-     cl.user_session.set("agent", agent)
+     cl.user_session.set("agent", agent)  # storing our agent in the user session


  @cl.on_message
  async def main(message: cl.Message):
-     agent = cl.user_session.get("agent")  # type: AgentExecutor
+     """
+     This function is going to intercept all messages sent by the user, and
+     move through our agent flow to generate a response.
+
+     There are ultimately two different options for the agent to respond with:
+     1. Text
+     2. Image
+
+     If the agent responds with text, we simply send the text back to the user.
+
+     If the agent responds with an image, we need to generate the image and send
+     it back to the user.
+     """
+     agent = cl.user_session.get("agent")
      cl.user_session.set("generated_image", None)

-     # No async implementation in the Stability AI client, fallback to sync
      res = await cl.make_async(agent.run)(
          input=message.content, callbacks=[cl.LangchainCallbackHandler()]
      )
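The docstring above describes a text-or-image response flow, but the hunk ends before the send-back logic. As rough orientation, a Chainlit handler that returns text plus an optional generated image often looks like the following (a hedged sketch; whether the session stores image bytes, a path, or a URL is an assumption here, not something this diff confirms):

```python
# Sketch of a "text or image" reply in Chainlit - illustrative, not the repo's code.
import chainlit as cl


async def send_agent_response(res: str):
    # The image tool is assumed to have stashed raw image bytes in the session.
    raw_image = cl.user_session.get("generated_image")
    elements = []
    if raw_image is not None:
        elements.append(
            cl.Image(content=raw_image, name="generated_image", display="inline")
        )
    await cl.Message(content=res, elements=elements).send()
```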
chainlit.md CHANGED
@@ -1,14 +1,3 @@
- # Welcome to Chainlit! 🚀🤖
-
- Hi there, Developer! 👋 We're excited to have you on board. Chainlit is a powerful tool designed to help you prototype, debug and share applications built on top of LLMs.
-
- ## Useful Links 🔗
-
- - **Documentation:** Get started with our comprehensive [Chainlit Documentation](https://docs.chainlit.io) 📚
- - **Discord Community:** Join our friendly [Chainlit Discord](https://discord.gg/ZThrUxbAYw) to ask questions, share your projects, and connect with other developers! 💬
-
- We can't wait to see what you create with Chainlit! Happy coding! 💻😊
-
- ## Welcome screen
-
- To modify the welcome screen, edit the `chainlit.md` file at the root of your project. If you do not want a welcome screen, just leave this file empty.
+ # Welcome to a GPT-4 Turbo application with DALL-E 3 Image Generation capabilities!
+
+ Hey! We're excited to provide you with a GPT-4 Turbo-powered application complete with Image Generation!
tools.py CHANGED
@@ -11,6 +11,10 @@ import chainlit as cl


  def get_image_name():
+     """
+     We need to keep track of images we generate, so we can reference them later
+     and display them correctly to our users.
+     """
      image_count = cl.user_session.get("image_count")
      if image_count is None:
          image_count = 0
@@ -23,6 +27,13 @@ def get_image_name():


  def _generate_image(prompt: str):
+     """
+     This function is used to generate an image from a text prompt using
+     DALL-E 3.
+
+     We use the OpenAI API to generate the image, and then store it in our
+     user session so we can reference it later.
+     """
      client = OpenAI()

      response = client.images.generate(
@@ -50,8 +61,10 @@ def generate_image(prompt: str):
      return f"Here is {image_name}."


+ # This is our tool - which is what allows our agent to generate images in the first place!
+ # The `description` field is of utmost importance, as it is what the LLM "brain" uses to
+ # determine which tool to use for a given input.
  generate_image_format = '{{"prompt": "prompt"}}'
-
  generate_image_tool = Tool.from_function(
      func=generate_image,
      name="GenerateImage",