Spaces: Running on CPU Upgrade
Commit • 3ca0269
Parent(s): f5d8038
new version, able to support Inference API models
- .env +30 -6
- README.md +28 -8
- public/favicon.ico +0 -0
- public/favicon/Icon/r +0 -0
- public/favicon/favicon-114-precomposed.png +0 -0
- public/favicon/favicon-120-precomposed.png +0 -0
- public/favicon/favicon-144-precomposed.png +0 -0
- public/favicon/favicon-152-precomposed.png +0 -0
- public/favicon/favicon-180-precomposed.png +0 -0
- public/favicon/favicon-192.png +0 -0
- public/favicon/favicon-32.png +0 -0
- public/favicon/favicon-36.png +0 -0
- public/favicon/favicon-48.png +0 -0
- public/favicon/favicon-57.png +0 -0
- public/favicon/favicon-60.png +0 -0
- public/favicon/favicon-72-precomposed.png +0 -0
- public/favicon/favicon-72.png +0 -0
- public/favicon/favicon-76.png +0 -0
- public/favicon/favicon-96.png +0 -0
- public/favicon/favicon.ico +0 -0
- public/favicon/index.html +133 -0
- public/favicon/manifest.json +41 -0
- public/icon.png +0 -0
- src/app/engine/caption.ts +2 -2
- src/app/engine/censorship.ts +2 -0
- src/app/engine/forbidden.ts +1 -1
- src/app/engine/render.ts +7 -7
- src/app/favicon.ico +0 -0
- src/app/icon.png +0 -0
- src/app/interface/top-menu/index.tsx +1 -0
- src/app/queries/predict.ts +44 -6
- src/components/ui/input.tsx +1 -1
- src/types.ts +11 -0
.env
CHANGED
@@ -1,6 +1,30 @@
+# ------------- IMAGE API CONFIG --------------
+# Supported values:
+# - VIDEOCHAIN
+RENDERING_ENGINE="VIDEOCHAIN"
+
+VIDEOCHAIN_API_URL="http://localhost:7860"
+VIDEOCHAIN_API_TOKEN=
+
+# Not supported yet
+REPLICATE_TOKEN=
+
+# ------------- LLM API CONFIG ----------------
+# Supported values:
+# - INFERENCE_ENDPOINT
+# - INFERENCE_API
+LLM_ENGINE="INFERENCE_ENDPOINT"
+
+# Hugging Face token (if you choose to use a custom Inference Endpoint or an Inference API model)
+HF_API_TOKEN=
+
+# URL to a custom text-generation Inference Endpoint of your choice
+# -> You can leave it empty if you decide to use an Inference API Model instead
+HF_INFERENCE_ENDPOINT_URL=
+
+# You can also use a model from the Inference API (not a custom inference endpoint)
+# -> You can leave it empty if you decide to use an Inference Endpoint URL instead
+HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"
+
+# Not supported yet
+OPENAI_TOKEN=
README.md
CHANGED
@@ -21,8 +21,8 @@ If you try to duplicate the project, you will see it requires some variables:
 
 - `HF_INFERENCE_ENDPOINT_URL`: This is the endpoint to call the LLM
 - `HF_API_TOKEN`: The Hugging Face token used to call the inference endpoint (if you intend to use an LLM hosted on Hugging Face)
+- `VIDEOCHAIN_API_URL`: This is the API that generates images
+- `VIDEOCHAIN_API_TOKEN`: Token used to call the rendering engine API (not used yet, but it's gonna be, because [💸](https://en.wikipedia.org/wiki/No_such_thing_as_a_free_lunch))
 
 This is the architecture for the current production AI Comic Factory.
 
@@ -32,17 +32,37 @@ This is the architecture for the current production AI Comic Factory.
 
 Currently the AI Comic Factory uses [Llama-2 70b](https://huggingface.co/blog/llama2) through an [Inference Endpoint](https://huggingface.co/docs/inference-endpoints/index).
 
-You have …
+You have three options:
 
-### Option 1: …
+### Option 1: Use an Inference API model
 
+This is a new option added recently, where you can use one of the models from the Hugging Face Hub. By default we suggest using CodeLlama.
 
-To …
+To activate it, create a `.env.local` configuration file:
 
+```bash
+HF_API_TOKEN="Your Hugging Face token"
+
+# "codellama/CodeLlama-7b-hf" is used by default, but you can change this
+# note: you should use a model able to generate JSON responses
+HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"
+```
+
+### Option 2: Use an Inference Endpoint URL
+
+If you would like to run the AI Comic Factory on a private LLM running on the Hugging Face Inference Endpoint service, create a `.env.local` configuration file:
+
+```bash
+HF_API_TOKEN="Your Hugging Face token"
+HF_INFERENCE_ENDPOINT_URL="path to your inference endpoint url"
+```
+
+To run this kind of LLM locally, you can use [TGI](https://github.com/huggingface/text-generation-inference) (please read [this post](https://github.com/huggingface/text-generation-inference/issues/726) for more information about the licensing).
+
+### Option 3: Fork and modify the code to use a different LLM system
+
-Another option could be to disable the LLM completely and replace it with a human-generated story instead (by returning mock or static data).
+Another option could be to disable the LLM completely and replace it with another LLM protocol and/or provider (e.g. OpenAI, Replicate), or a human-generated story instead (by returning mock or static data).
 
 ### Notes
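
As an illustration of that mock-data route from Option 3 (a sketch, not code from this commit — it assumes `predict()` returns the raw LLM text), the whole LLM call could be replaced by a pre-written story serialized in the `LLMResponse` shape from `src/types.ts`:

```ts
// hypothetical drop-in mock for Option 3
import { LLMResponse } from "@/types"

// a fixed two-panel story in the JSON shape the app expects the LLM to produce
const story: LLMResponse = [
  { panel: 1, instructions: "wide shot of a robot walking through a desert", caption: "Unit 7 had been walking for days." },
  { panel: 2, instructions: "close-up of the robot looking at a small flower", caption: "Then it found something alive." },
]

export async function predict(_inputs: string): Promise<string> {
  return JSON.stringify(story)
}
```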
public/favicon.ico
ADDED
public/favicon/Icon/r
ADDED
File without changes
public/favicon/favicon-114-precomposed.png
ADDED
public/favicon/favicon-120-precomposed.png
ADDED
public/favicon/favicon-144-precomposed.png
ADDED
public/favicon/favicon-152-precomposed.png
ADDED
public/favicon/favicon-180-precomposed.png
ADDED
public/favicon/favicon-192.png
ADDED
public/favicon/favicon-32.png
ADDED
public/favicon/favicon-36.png
ADDED
public/favicon/favicon-48.png
ADDED
public/favicon/favicon-57.png
ADDED
public/favicon/favicon-60.png
ADDED
public/favicon/favicon-72-precomposed.png
ADDED
public/favicon/favicon-72.png
ADDED
public/favicon/favicon-76.png
ADDED
public/favicon/favicon-96.png
ADDED
public/favicon/favicon.ico
ADDED
public/favicon/index.html
ADDED
@@ -0,0 +1,133 @@
+<!DOCTYPE html>
+<head>
+  <title>
+    Favicons
+  </title>
+  <meta charset="utf-8" />
+
+  <!-- For old IEs -->
+  <link rel="shortcut icon" href="favicon.ico" />
+
+  <!-- For new browsers multisize ico -->
+  <link rel="icon" type="image/x-icon" sizes="16x16 32x32" href="favicon.ico">
+
+  <!-- Chrome for Android -->
+  <link rel="icon" sizes="192x192" href="favicon-192.png">
+
+  <!-- For iPhone 6+ downscaled for other devices -->
+  <link rel="apple-touch-icon" sizes="180x180" href="favicon-180-precomposed.png">
+
+  <!-- For IE10 Metro -->
+  <meta name="msapplication-TileColor" content="#FFFFFF">
+  <meta name="msapplication-TileImage" content="favicon-114-precomposed.png">
+
+  <style>
+
+    body {
+      background-color: #f5f5f5;
+      border: 0px;
+      margin: 0px;
+      padding: 0px;
+      font-family: Consolas,Menlo,Monaco,Lucida Console,Liberation Mono,DejaVu Sans Mono,Bitstream Vera Sans Mono,Courier New,monospace,serif;
+      color: black;
+    }
+
+    pre {
+      margin: 0px;
+      color: black;
+      padding: 0px 5%;
+    }
+
+    code {
+
+    }
+
+    .container {
+      background-color: white;
+      max-width: 800px;
+      width: 100%;
+      margin: 0 auto;
+      padding: 1% 0;
+      height: 100%;
+    }
+
+    .comment {
+      color: gray;
+      padding: 0px;
+      margin: 0px;
+    }
+
+    hr {
+      width: 80%;
+      padding: 0 5%;
+      border-color: #f5f5f5;
+      background-color: #D1D1D1;
+    }
+
+    p {
+      padding: 1% 5%;
+    }
+
+  </style>
+
+</head>
+<body class="">
+
+  <div class="container">
+    <p>
+      To use the favicons, insert some of these tags into your head section, according to your needs.
+    </p>
+    <hr>
+    <pre>
+      <code>
+        <span class="comment"><!-- For old IEs --></span>
+        <link rel="shortcut icon" href="favicon.ico" />
+
+        <span class="comment"><!-- For new browsers - multisize ico --></span>
+        <link rel="icon" type="image/x-icon" sizes="16x16 32x32" href="favicon.ico">
+
+        <span class="comment"><!-- For iPad with high-resolution Retina display running iOS ≥ 7: --></span>
+        <link rel="apple-touch-icon" sizes="152x152" href="favicon-152-precomposed.png">
+
+        <span class="comment"><!-- For iPad with high-resolution Retina display running iOS ≤ 6: --></span>
+        <link rel="apple-touch-icon" sizes="144x144" href="favicon-144-precomposed.png">
+
+        <span class="comment"><!-- For iPhone with high-resolution Retina display running iOS ≥ 7: --></span>
+        <link rel="apple-touch-icon" sizes="120x120" href="favicon-120-precomposed.png">
+
+        <span class="comment"><!-- For iPhone with high-resolution Retina display running iOS ≤ 6: --></span>
+        <link rel="apple-touch-icon" sizes="114x114" href="favicon-114-precomposed.png">
+
+        <span class="comment"><!-- For iPhone 6+ --></span>
+        <link rel="apple-touch-icon" sizes="180x180" href="favicon-180-precomposed.png">
+
+        <span class="comment"><!-- For first- and second-generation iPad: --></span>
+        <link rel="apple-touch-icon" sizes="72x72" href="favicon-72-precomposed.png">
+
+        <span class="comment"><!-- For non-Retina iPhone, iPod Touch, and Android 2.1+ devices: --></span>
+        <link rel="apple-touch-icon" sizes="57x57" href="favicon-57.png">
+
+        <span class="comment"><!-- For Old Chrome --></span>
+        <link rel="icon" sizes="32x32" href="favicon-32.png" >
+
+        <span class="comment"><!-- For IE10 Metro --></span>
+        <meta name="msapplication-TileColor" content="#FFFFFF">
+        <meta name="msapplication-TileImage" content="favicon-144.png">
+        <meta name="theme-color" content="#ffffff">
+
+        <span class="comment"><!-- Chrome for Android --></span>
+        <link rel="manifest" href="manifest.json">
+        <link rel="icon" sizes="192x192" href="favicon-192.png">
+
+      </code>
+    </pre>
+
+    <hr>
+
+    <p>
+      For more information about favicons, consult <a href="https://github.com/audreyr/favicon-cheat-sheet">The Favicon Cheat Sheet</a> by Audrey Roy.
+    </p>
+
+  </div>
+
+</body>
public/favicon/manifest.json
ADDED
@@ -0,0 +1,41 @@
+{
+  "name": "pollo",
+  "icons": [
+    {
+      "src": "\/favicon-36.png",
+      "sizes": "36x36",
+      "type": "image\/png",
+      "density": 0.75
+    },
+    {
+      "src": "\/favicon-48.png",
+      "sizes": "48x48",
+      "type": "image\/png",
+      "density": 1
+    },
+    {
+      "src": "\/favicon-72.png",
+      "sizes": "72x72",
+      "type": "image\/png",
+      "density": 1.5
+    },
+    {
+      "src": "\/favicon-96.png",
+      "sizes": "96x96",
+      "type": "image\/png",
+      "density": 2
+    },
+    {
+      "src": "\/favicon-144.png",
+      "sizes": "144x144",
+      "type": "image\/png",
+      "density": 3
+    },
+    {
+      "src": "\/favicon-192.png",
+      "sizes": "192x192",
+      "type": "image\/png",
+      "density": 4
+    }
+  ]
+}
public/icon.png
ADDED
src/app/engine/caption.ts
CHANGED
@@ -2,7 +2,7 @@
 
 import { ImageAnalysisRequest, ImageAnalysisResponse } from "@/types"
 
-const apiUrl = `${process.env.…
+const apiUrl = `${process.env.VIDEOCHAIN_API_URL || ""}`
 
 export async function see({
   prompt,
@@ -33,7 +33,7 @@ export async function see({
     headers: {
       Accept: "application/json",
      "Content-Type": "application/json",
-      // Authorization: `Bearer ${process.env.…
+      // Authorization: `Bearer ${process.env.VIDEOCHAIN_API_TOKEN}`,
    },
    body: JSON.stringify(request),
    cache: 'no-store',
src/app/engine/censorship.ts
CHANGED
@@ -2,3 +2,5 @@
 
 // unfortunately due to abuse by some users, I have to add this NSFW filter
 const secretSalt = `${process.env.SECRET_CENSORSHIP_KEY || ""}`
+
+// TODO: the censorship is not implemented yet, actually
src/app/engine/forbidden.ts
CHANGED
@@ -2,5 +2,5 @@
 // the NSFW filter has to contain bad words, but doing so might get the code flagged
 // or attract unwanted attention, so we hash them
 export const forbidden = [
-
+  // TODO implement this
 ]
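
The comments above describe the intended scheme: the shipped list stores salted hashes, never the words themselves. A minimal sketch of how that could work (an assumption on my part — the filter is still marked TODO in this commit), reusing the `secretSalt` pattern from `censorship.ts`:

```ts
import { createHash } from "node:crypto"

const secretSalt = `${process.env.SECRET_CENSORSHIP_KEY || ""}`

// hash a word with the secret salt so the shipped list contains no readable bad words
function hashWord(word: string): string {
  return createHash("sha256").update(`${secretSalt}:${word.toLowerCase()}`).digest("hex")
}

// a prompt is rejected if any of its words hashes into the forbidden set
export function isForbidden(prompt: string, forbidden: readonly string[]): boolean {
  const set = new Set(forbidden)
  return prompt.toLowerCase().split(/\W+/).some((word) => set.has(hashWord(word)))
}
```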
src/app/engine/render.ts
CHANGED
@@ -1,12 +1,12 @@
 "use server"
 
-import { RenderRequest, RenderedScene } from "@/types"
+import { RenderRequest, RenderedScene, RenderingEngine } from "@/types"
+
+const renderingEngine = `${process.env.RENDERING_ENGINE || ""}` as RenderingEngine
 
 // note: there is no / at the end in the variable
 // so we have to add it ourselves if needed
-const apiUrl = process.env.…
-
-const cacheDurationInSec = 30 * 60 // 30 minutes
+const apiUrl = process.env.VIDEOCHAIN_API_URL
 
 export async function newRender({
   prompt,
@@ -44,7 +44,7 @@ export async function newRender({
     headers: {
       Accept: "application/json",
      "Content-Type": "application/json",
-      Authorization: `Bearer ${process.env.…
+      Authorization: `Bearer ${process.env.VIDEOCHAIN_API_TOKEN}`,
    },
    body: JSON.stringify({
      prompt,
@@ -114,7 +114,7 @@ export async function getRender(renderId: string) {
     headers: {
       Accept: "application/json",
      "Content-Type": "application/json",
-      Authorization: `Bearer ${process.env.…
+      Authorization: `Bearer ${process.env.VIDEOCHAIN_API_TOKEN}`,
    },
    cache: 'no-store',
    // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache)
@@ -166,7 +166,7 @@ export async function upscaleImage(image: string): Promise<{
     headers: {
       Accept: "application/json",
      "Content-Type": "application/json",
-      Authorization: `Bearer ${process.env.…
+      Authorization: `Bearer ${process.env.VIDEOCHAIN_API_TOKEN}`,
    },
    cache: 'no-store',
    body: JSON.stringify({ image, factor: 3 })
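
The trailing-slash note in `render.ts` is a classic footgun. A small helper like the following (a sketch, not part of the commit — the actual code presumably interpolates `${apiUrl}/...` directly) makes the joining rule explicit:

```ts
// hypothetical helper: join the VideoChain base URL and a path, tolerating stray slashes
const apiUrl = process.env.VIDEOCHAIN_API_URL || ""

export function apiPath(path: string): string {
  const base = apiUrl.endsWith("/") ? apiUrl.slice(0, -1) : apiUrl
  return `${base}/${path.startsWith("/") ? path.slice(1) : path}`
}

// usage: fetch(apiPath("render"), { headers: { Authorization: `Bearer ${process.env.VIDEOCHAIN_API_TOKEN}` } })
```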
src/app/favicon.ico
CHANGED
src/app/icon.png
ADDED
src/app/interface/top-menu/index.tsx
CHANGED
@@ -213,6 +213,7 @@ export function TopMenu() {
   className={cn(
     `rounded-l-none cursor-pointer`,
     `transition-all duration-200 ease-in-out`,
+    `bg-[rgb(59,134,247)] hover:bg-[rgb(69,144,255)] disabled:bg-[rgb(59,134,247)]`
   )}
   onClick={() => {
     handleSubmit()
src/app/queries/predict.ts
CHANGED
@@ -1,22 +1,60 @@
 "use server"
 
-import {…
+import { LLMEngine } from "@/types"
+import { HfInference, HfInferenceEndpoint } from "@huggingface/inference"
 
-const …
+const hf = new HfInference(process.env.HF_API_TOKEN)
+
+// note: we always try "inference endpoint" first
+const llmEngine = `${process.env.LLM_ENGINE || ""}` as LLMEngine
+const inferenceEndpoint = `${process.env.HF_INFERENCE_ENDPOINT_URL || ""}`
+const inferenceModel = `${process.env.HF_INFERENCE_API_MODEL || ""}`
+
+let hfie: HfInferenceEndpoint
+
+switch (llmEngine) {
+  case "INFERENCE_ENDPOINT":
+    if (inferenceEndpoint) {
+      console.log("Using a custom HF Inference Endpoint")
+      hfie = hf.endpoint(inferenceEndpoint)
+    } else {
+      const error = "No Inference Endpoint URL defined"
+      console.error(error)
+      throw new Error(error)
+    }
+    break;
+
+  case "INFERENCE_API":
+    if (inferenceModel) {
+      console.log("Using an HF Inference API Model")
+    } else {
+      const error = "No Inference API model defined"
+      console.error(error)
+      throw new Error(error)
+    }
+    break;
+
+  default:
+    const error = "No Inference Endpoint URL or Inference API Model defined"
+    console.error(error)
+    throw new Error(error)
+}
 
 export async function predict(inputs: string) {
 
   console.log(`predict: `, inputs)
 
+  const api = llmEngine === "INFERENCE_ENDPOINT" ? hfie : hf
+
   let instructions = ""
   try {
-    for await (const output of …
+    for await (const output of api.textGenerationStream({
+      model: llmEngine === "INFERENCE_ENDPOINT" ? undefined : (inferenceModel || undefined),
       inputs,
       parameters: {
         do_sample: true,
-
-        // hard limit for max_new_tokens is 1512
+        // we don't require a lot of tokens for our task
         max_new_tokens: 330, // 1150,
         return_full_text: false,
       }
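
For context on the streaming call above: `textGenerationStream` from `@huggingface/inference` yields one chunk per generated token, so the loop body (elided in this diff view) presumably appends `output.token.text` to `instructions`. A standalone usage sketch of the same API (the prompt here is a placeholder; the model is the default suggested in `.env`):

```ts
import { HfInference } from "@huggingface/inference"

const hf = new HfInference(process.env.HF_API_TOKEN)

let text = ""
for await (const output of hf.textGenerationStream({
  model: "codellama/CodeLlama-7b-hf",
  inputs: "Return a JSON array describing two comic panels.",
  parameters: { do_sample: true, max_new_tokens: 330, return_full_text: false },
})) {
  // each chunk carries the newly generated token text
  text += output.token.text
}
console.log(text)
```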
src/components/ui/input.tsx
CHANGED
@@ -11,7 +11,7 @@ const Input = React.forwardRef<HTMLInputElement, InputProps>(
   <input
     type={type}
     className={cn(
-      "flex h-10 w-full rounded-md border border-stone-200 bg-white px-3 py-2 text-sm ring-offset-white file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-stone-500 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-…
+      "flex h-10 w-full rounded-md border border-stone-200 bg-white px-3 py-2 text-sm ring-offset-white file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-stone-500 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-blue-[rgb(59,134,247)] focus-visible:ring-offset-0 disabled:cursor-not-allowed disabled:opacity-50 dark:border-stone-800 dark:bg-stone-950 dark:ring-offset-stone-950 dark:placeholder:text-stone-400 dark:focus-visible:ring-stone-800",
       className
     )}
     ref={ref}
src/types.ts
CHANGED
@@ -80,3 +80,14 @@ export interface ImageAnalysisResponse {
 }
 
 export type LLMResponse = Array<{panel: number; instructions: string; caption: string }>
+
+export type LLMEngine =
+  | "INFERENCE_API"
+  | "INFERENCE_ENDPOINT"
+  | "OPENAI"
+  | "REPLICATE"
+
+export type RenderingEngine =
+  | "VIDEOCHAIN"
+  | "OPENAI"
+  | "REPLICATE"