joaoalvarenga committed
Commit 2598b16
1 Parent(s): aa70b2e

Update README.md

Files changed (1):
  1. README.md +5 -13
README.md CHANGED
```diff
@@ -46,19 +46,6 @@ language:
 - yo
 - zh
 - zu
-- C
-- C++
-- C#
-- Go
-- Java
-- JavaScript
-- Lua
-- PHP
-- Python
-- Ruby
-- Rust
-- Scala
-- TypeScript
 pipeline_tag: text-generation
 ---
 ### Quantized bigscience/bloom with 8-bit weights
@@ -68,6 +55,11 @@ Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit)
 Here, we also apply [LoRA (Low Rank Adapters)](https://arxiv.org/abs/2106.09685) to reduce model size. The original version takes \~353GB of memory; this version takes **\~180GB**.
 
 Our main goal is to generate a model compressed enough to be deployed in a traditional Kubernetes cluster.
+
+### How to fine-tune
+
+In this [notebook]() you can find an adaptation of [Hivemind's GPT-J 8-bit fine-tuning notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) for fine-tuning Bloom 8-bit on 3x NVIDIA A100 GPUs.
+
 ### How to use
 
 This model can be used by adapting Bloom's original implementation. This is an adaptation of [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb):
```
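
The updated README claims the 8-bit weights cut memory from \~353GB to \~180GB. For readers who want the gist of the technique, here is a minimal PyTorch sketch of a frozen linear layer that stores its weight matrix as int8 with one scale per output row and dequantizes on the fly. The class name and the per-row absmax scheme are illustrative assumptions, not the exact code from the linked notebook:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenInt8Linear(nn.Module):
    """Sketch of a linear layer with frozen int8 weights.

    Each output row keeps one float16 scale (absmax quantization), so the
    weight matrix costs ~1 byte per parameter instead of 2-4 bytes.
    """

    def __init__(self, linear: nn.Linear):
        super().__init__()
        w = linear.weight.data.float()
        # Per-row scale maps [-absmax, absmax] onto the int8 range [-127, 127].
        scale = w.abs().max(dim=1, keepdim=True).values.clamp(min=1e-8) / 127.0
        self.register_buffer("weight_int8", torch.round(w / scale).to(torch.int8))
        self.register_buffer("scale", scale.half())
        self.bias = linear.bias  # biases are tiny; keep them in full precision

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize just-in-time; only the int8 copy stays resident in memory.
        w = (self.weight_int8.half() * self.scale).to(x.dtype)
        b = self.bias.to(x.dtype) if self.bias is not None else None
        return F.linear(x, w, b)
```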
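LoRA then adds a small trainable low-rank update ΔW = B·A on top of the frozen quantized weight, which is what makes fine-tuning feasible without touching the int8 tensors. A sketch under the same assumptions (the rank and alpha defaults are placeholders):

```python
class LoRALinear(nn.Module):
    """Frozen int8 linear plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: FrozenInt8Linear, rank: int = 8, alpha: float = 32.0):
        super().__init__()
        self.base = base
        out_features, in_features = base.weight_int8.shape
        # Only these two small matrices receive gradients.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no-op at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x @ W_frozen^T + scaling * (x @ A^T) @ B^T
        update = F.linear(F.linear(x, self.lora_a.to(x.dtype)), self.lora_b.to(x.dtype))
        return self.base(x) + self.scaling * update
```

Because only `lora_a` and `lora_b` receive gradients, the optimizer state covers a tiny fraction of the parameters, which is plausibly why the fine-tuning setup above fits on 3x A100s.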
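Finally, "adapting Bloom's original implementation" boils down to walking the loaded model and swapping every `nn.Linear` for its quantized counterpart, in the spirit of the linked convert notebook. In this sketch `convert_to_int8` is a hypothetical helper; a real conversion of the 176B model would have to proceed shard by shard rather than first loading everything as below:

```python
from transformers import AutoModelForCausalLM

def convert_to_int8(module: nn.Module) -> None:
    """Recursively replace every nn.Linear with the int8 sketch above."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, FrozenInt8Linear(child))
        else:
            convert_to_int8(child)

# Illustrative only: loading bigscience/bloom in half precision still needs ~353GB.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", torch_dtype=torch.float16)
convert_to_int8(model)
print(model)  # every Linear is now a FrozenInt8Linear
```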