librarian-bot committed
Commit 45c3fe2
1 Parent(s): 7c82f4c

Librarian Bot: Add moe tag to model


This pull request aims to enrich the metadata of your model by adding an `moe` (Mixture of Experts) tag to the YAML block of your model's `README.md`.
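
If you would like to make the same kind of change to one of your own repositories, here is a minimal sketch using the `ModelCard` API from `huggingface_hub`. The repo id is a placeholder, and this is only an illustration of how such a metadata PR could be opened, not Librarian Bot's actual implementation.

```python
# Illustrative sketch only (not Librarian Bot's actual code): opening a similar
# metadata PR with the ModelCard API from huggingface_hub.
# Requires a write token (e.g. via `huggingface-cli login` or HF_TOKEN).
from huggingface_hub import ModelCard

repo_id = "your-username/your-moe-model"  # placeholder repo id

card = ModelCard.load(repo_id)        # fetch README.md and parse its YAML block
tags = card.data.tags or []
if "moe" not in tags:
    card.data.tags = tags + ["moe"]   # keep existing tags and append the new one
    card.push_to_hub(                 # open a pull request instead of pushing to main
        repo_id,
        create_pr=True,
        commit_message="Add moe tag to model metadata",
    )
```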

How did we find this information? We inferred that this model is a `moe` model based on the following criteria (a rough sketch of this kind of check follows the list):

- The model's name contains the string `moe`.
- The model indicates it uses a `moe` architecture.
- The model's base model is a `moe` model.
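
For illustration, the sketch below shows how checks like these could be written against Hub metadata using `huggingface_hub`. The function name, the exact fields inspected, and the recursive base-model check are assumptions made for the example; Librarian Bot's real implementation is not shown here and may differ.

```python
# Rough sketch of the heuristic described above, assuming the checks read Hub
# metadata through huggingface_hub; the real criteria may be implemented differently.
from huggingface_hub import model_info


def looks_like_moe(repo_id: str) -> bool:
    """Return True if the repo matches any of the three criteria listed above."""
    info = model_info(repo_id)

    # 1. The model's name contains the string "moe".
    if "moe" in repo_id.lower():
        return True

    # 2. The model indicates it uses a moe architecture,
    #    e.g. through its config's model_type or its existing tags.
    config = info.config or {}
    if "moe" in str(config.get("model_type", "")).lower() or "moe" in (info.tags or []):
        return True

    # 3. The model's base model is a moe model (checked here by name only; the
    #    base model's own metadata could be inspected instead).
    card = info.card_data.to_dict() if info.card_data else {}
    base_model = card.get("base_model")
    if isinstance(base_model, str) and looks_like_moe(base_model):
        return True

    return False
```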


**Why add this?** Enhancing your model's metadata in this way:
- **Boosts discoverability** - It becomes easier to find mixture of experts models on the Hub.
- **Helps map the ecosystem** - It becomes easier to understand the ecosystem of mixture of experts models on the Hub and how they are used.


This PR comes courtesy of [Librarian Bot](https://huggingface.co./librarian-bot). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to [@davanstrien](https://huggingface.co./davanstrien).

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -1,16 +1,17 @@
  ---
- inference: false
  language:
  - en
- library_name: transformers
  license: apache-2.0
+ library_name: transformers
+ tags:
+ - mistral
+ - mixtral
+ - moe
  model_name: Mixtral 8X7B - bnb 4-bit
+ inference: false
  model_type: mixtral
  pipeline_tag: text-generation
  quantized_by: ybelkada
- tags:
- - mistral
- - mixtral
  ---

  # Mixtral 8x7B Instruct-v0.1 - `bitsandbytes` 4-bit