---
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
library_name: adapters
datasets:
- awels/ocpvirt_admin_dataset
language:
- en
widget:
- text: Who are you, Thready ?
tags:
- awels
- redhat
---

# Thready Model Card

## Model Details
**Model Name:** Thready

**Model Type:** Transformer-based, built on Microsoft Phi-3-mini-128k-instruct (128k-token context)

**Publisher:** Awels Engineering

**License:** MIT

**Model Description:**
Thready is a fine-tuned model designed to act as an AI agent focused on the Red Hat OpenShift Virtualization solution. It was trained on the full documentation corpus of OCP Virt 4.16.
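
As a quick illustration, the adapter can be loaded on top of the base model. This is a minimal sketch assuming the adapter is PEFT-compatible; the repo id `awels/thready` is hypothetical and should be replaced with the actual one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
# "awels/thready" is a hypothetical adapter repo id.
model = PeftModel.from_pretrained(base, "awels/thready")
```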

## Dataset
**Dataset Name:** [awels/ocpvirt_admin_dataset](https://huggingface.co./datasets/awels/ocpvirt_admin_dataset)

**Dataset Source:** Hugging Face Datasets

**Dataset License:** MIT 

**Dataset Description:**
The dataset used to train Thready consists of all the public documentation available for Red Hat OpenShift Virtualization. It is curated to give comprehensive coverage of the typical administrative scenarios encountered in OpenShift Virtualization.

## Training Details

**Training Data:**
The training data comprises 70,000 question-and-answer pairs generated by the [Bonito LLM](https://github.com/BatsResearch/bonito). The dataset is split into training, test, and validation sets to ensure robust model performance; the splits can be inspected as shown below.
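
For illustration, the splits can be loaded with the `datasets` library; the split names are assumed to follow the standard train/test/validation layout described above.

```python
from datasets import load_dataset

# Load the Q&A dataset and inspect its splits and columns.
ds = load_dataset("awels/ocpvirt_admin_dataset")
print(ds)
```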

**Training Procedure:**
Thready was trained using supervised learning with cross-entropy loss and the Adam optimizer. The training involved 1 epoch, a batch size of 4, a learning rate of 5.0e-06, and a cosine learning rate scheduler with gradient checkpointing for memory efficiency.
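
A minimal sketch of these hyperparameters expressed with the Hugging Face `TrainingArguments` API; only the values come from the run described above, and the exact training stack is an assumption.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="thready-phi3",     # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=5.0e-06,
    lr_scheduler_type="cosine",
    gradient_checkpointing=True,   # memory efficiency, as noted above
)
```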

**Hardware:**
The model was trained on a single NVIDIA H100 SXM GPU.

**Framework:**
The training was conducted using PyTorch.

## Evaluation

**Evaluation Metrics:**
The following metrics were recorded on the training dataset:

```
epoch                    =         1.0
total_flos               = 273116814GF
train_loss               =      1.5825
train_runtime            =  1:33:44.28
train_samples_per_second =       9.803
train_steps_per_second   =       2.451
```

**Performance:**
The model achieved the following results on the evaluation dataset:

```
epoch                   =        1.0
eval_loss               =     1.3341
eval_runtime            = 0:04:02.02
eval_samples            =      11191
eval_samples_per_second =     56.469
eval_steps_per_second   =     14.118
```
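
Since training used cross-entropy loss, the reported eval loss corresponds to a perplexity of roughly exp(1.3341) ≈ 3.80:

```python
import math

# Perplexity implied by the cross-entropy eval loss above.
print(math.exp(1.3341))  # ≈ 3.80
```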


## Intended Use

**Primary Use Case:**
Thready is intended to be used locally in an agent swarm whose agents collaborate to solve Red Hat OpenShift Virtualization-related problems.
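
A quick smoke test mirroring the widget prompt; it assumes the hypothetical repo id `awels/thready` resolves to an adapter that `pipeline` can load directly (supported when PEFT is installed).

```python
from transformers import pipeline

# "awels/thready" is a hypothetical repo id; replace with the real one.
chat = pipeline("text-generation", model="awels/thready")
print(chat("Who are you, Thready ?", max_new_tokens=64)[0]["generated_text"])
```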

**Limitations:**
This 14B model is an up-scaled version of the 3B model. It achieves a markedly better loss than the 3B variant, so its results should be better.