---
base_model: microsoft/Phi-3-mini-128k-instruct
datasets:
- awels/maximo_admin_dataset
language:
- en
library_name: adapters
license: mit
tags:
- awels
- maximo
widget:
- text: Who are you, Maximus?
---

# Maximus Model Card

## Model Details
**Model Name:** Maximus

**Model Type:** Transformer-based causal language model, fine-tuned from Microsoft Phi-3-mini-128k-instruct (3.8B parameters, 128k-token context window)

**Publisher:** Awels Engineering

**License:** MIT

**Model Description:**
Maximus is a model designed to serve as an AI agent focused on the IBM Maximo Application Suite. It leverages supervised fine-tuning to provide efficient and accurate answers, and has been trained on the full document corpus of MAS 8.5.

## Dataset
**Dataset Name:** [awels/maximo_admin_dataset](https://huggingface.co./datasets/awels/maximo_admin_dataset)

**Dataset Source:** Hugging Face Datasets

**Dataset License:** MIT 

**Dataset Description:**
The dataset used to train Maximus consists of all the public documents available on Maximo application suite. This dataset is curated to ensure a comprehensive representation of typical administrative scenarios encountered in Maximo.

## Training Details

**Training Data:**
The training data comprises 67,000 question–answer pairs generated with the [Bonito LLM](https://github.com/BatsResearch/bonito). The dataset is split into training, validation, and test sets to support reliable evaluation.
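
As a minimal sketch, the dataset can be pulled from the Hugging Face Hub with the `datasets` library (the available split names are an assumption; check the dataset card for the actual configuration):

```python
from datasets import load_dataset

# Pull the Q&A dataset from the Hugging Face Hub; the split layout
# is an assumption, so inspect the printed structure to confirm.
dataset = load_dataset("awels/maximo_admin_dataset")
print(dataset)  # shows the available splits and their features
```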

**Training Procedure:**
Maximus was trained with supervised fine-tuning, using cross-entropy loss and the Adam optimizer. Training ran for 1 epoch with a batch size of 4, a learning rate of 5.0e-06, a cosine learning-rate scheduler, and gradient checkpointing for memory efficiency.
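
As a rough sketch, these hyperparameters map onto a Hugging Face `TrainingArguments` configuration; the output path is an illustrative placeholder, and any option not listed above is left at its default:

```python
from transformers import TrainingArguments

# Hyperparameters mirroring the values reported above; the output
# directory is a placeholder, not the actual training path.
training_args = TrainingArguments(
    output_dir="maximus-phi3-sft",   # hypothetical output path
    num_train_epochs=1,              # single epoch
    per_device_train_batch_size=4,   # batch size of 4
    learning_rate=5.0e-06,
    lr_scheduler_type="cosine",      # cosine LR schedule
    gradient_checkpointing=True,     # trade compute for memory
)
```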

**Hardware:**
The model was trained on a single NVIDIA RTX 4090 graphics card.

**Framework:**
The training was conducted using PyTorch.

## Evaluation

**Evaluation Metrics:**
The following statistics were recorded on the training set at the end of the run:

> epoch                    =        1.0  
> total_flos               = 64046138GF  
> train_loss               =     2.8079  
> train_runtime            = 0:37:48.33  
> train_samples_per_second =     21.066  
> train_steps_per_second   =      5.267  

**Performance:**
The model achieved the following results on the evaluation dataset:

> epoch                   =        1.0  
> eval_loss               =      2.288  
> eval_runtime            = 0:02:05.48  
> eval_samples            =      10773  
> eval_samples_per_second =     95.338  
> eval_steps_per_second   =     23.836  


## Intended Use

**Primary Use Case:**
Maximus is intended to run locally as part of an agent swarm, collaborating with other agents to solve Maximo Application Suite related problems.
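
A minimal inference sketch, assuming the fine-tuned weights are published as a PEFT-style adapter on top of the base model (the adapter repository id below is hypothetical; substitute the actual model id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "microsoft/Phi-3-mini-128k-instruct"
ADAPTER_ID = "awels/maximus"  # hypothetical repo id; replace with the real one

# Load the base model, then attach the fine-tuned adapter weights.
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, ADAPTER_ID)

prompt = "Who are you, Maximus?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```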

**Limitations:**
While Maximus is highly effective, its relatively small base model may limit answer quality on complex problems. An 8B model based on Llama 3 is used internally at Awels Engineering.