Xenova committed
Commit 59000b4
1 Parent(s): c30c600

Update README.md

Files changed (1)
  1. README.md +46 -1
README.md CHANGED
@@ -5,6 +5,51 @@ colorFrom: green
  colorTo: pink
  sdk: static
  pinned: false
+ license: apache-2.0
+ models:
+ - onnx-community/Llama-3.2-1B-Instruct-q4f16
+ short_description: A powerful AI chatbot that runs locally in your browser
+ thumbnail: https://huggingface.co/spaces/webml-community/llama-3.2-webgpu/resolve/main/banner.png
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Llama-3.2 WebGPU
+
+ A simple React + Vite application for running [Llama-3.2-1B-Instruct](https://huggingface.co/onnx-community/Llama-3.2-1B-Instruct-q4f16), a powerful small language model, locally in the browser using Transformers.js and WebGPU-acceleration.
+
+ ## Getting Started
+
+ Follow the steps below to set up and run the application.
+
+ ### 1. Clone the Repository
+
+ Clone the examples repository from GitHub:
+
+ ```sh
+ git clone https://github.com/huggingface/transformers.js-examples.git
+ ```
+
+ ### 2. Navigate to the Project Directory
+
+ Change your working directory to the `llama-3.2-webgpu` folder:
+
+ ```sh
+ cd transformers.js-examples/llama-3.2-webgpu
+ ```
+
+ ### 3. Install Dependencies
+
+ Install the necessary dependencies using npm:
+
+ ```sh
+ npm i
+ ```
+
+ ### 4. Run the Development Server
+
+ Start the development server:
+
+ ```sh
+ npm run dev
+ ```
+
+ The application should now be running locally. Open your browser and go to `http://localhost:5173` to see it in action.
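
For context on what the README above describes: the app's in-browser inference is driven by Transformers.js. A minimal sketch of loading this model with WebGPU might look like the following (the model ID is taken from the Space's metadata; this is an illustration of the general Transformers.js v3 pattern, not the app's actual source, and the exact options the app passes may differ):

```javascript
// Sketch only: assumes the Transformers.js v3 API with WebGPU support.
// Requires a WebGPU-capable browser; the model is downloaded on first use.
import { pipeline } from "@huggingface/transformers";

// Create a text-generation pipeline backed by the quantized ONNX model,
// running on the GPU via WebGPU.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Llama-3.2-1B-Instruct-q4f16",
  { device: "webgpu" },
);

// Chat-style input: an array of role/content messages.
const messages = [{ role: "user", content: "Tell me a joke." }];
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text);
```

Because this runs entirely client-side, no prompt data leaves the browser; the trade-off is the one-time model download and the WebGPU hardware requirement.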