---
library_name: transformers
license: mit
language:
- en
metrics:
- accuracy
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: zero-shot-classification
---
# Model Card for Mistral 7B - Time Series Predictor
<!-- Provide a quick summary of what the model is/does. -->
The Mistral 7B - Time Series Predictor is a fine-tuned large language model designed to analyze server performance metrics and forecast potential failures. It processes time-series data and predicts failure probabilities, offering actionable insights for predictive maintenance and operational risk assessment.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sivakrishna Yaganti and Shankar Jayaratnam
- **Funded by:** Esperanto Technologies
- **Model type:** Causal Language Model, fine-tuned for time-series forecasting
- **Finetuned from model:** Mistral 7B
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model can be directly used to:
- Forecast server health based on time-series metrics like temperature, power consumption, utilization and throughput.
- Predict potential causes of failures using historical data.
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The model is ideal for integration into platforms such as Splunk and Grafana to:
- Monitor server health in real-time.
- Support decision-making in preventive maintenance.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
- This model is not designed for general time-series forecasting outside server health monitoring.
- It may not perform well on non-server-related data or domains significantly different from its training dataset.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
**Bias:**
1. Performance may vary on datasets with metrics significantly different from those in the training data.
2. Predictions are most accurate when used within the context of server health monitoring.
**Risks:**
1. Relying solely on the model without validating its predictions may result in inaccurate failure forecasts.
2. Model outputs are probabilistic and should be interpreted cautiously in critical systems.
**Limitations:**
1. Limited to time-series metrics related to server health (e.g., temperature, power, throughput).
2. Performance may degrade for very sparse or noisy datasets.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
1. Use the model in conjunction with other predictive maintenance tools.
2. Validate model predictions against domain knowledge to ensure accuracy.
## How to Get Started with the Model
The Mistral 7B - Time Series Predictor can process time-series queries such as server health metrics and predict failure probabilities and causes. The following Python script demonstrates how to load the model and generate responses.
### Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model_name = "Esperanto/Mistral-7B-TimeSeriesReasoner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Ask for a failure forecast for a given server and date
prompt = "What is the failure probability and Cause for Server 'x' on Date : [mm/dd/yy]?"
input_ids = tokenizer(prompt, return_tensors='pt')['input_ids']
output = model.generate(input_ids=input_ids, max_new_tokens=100)
response = tokenizer.decode(output[0])
print(response)
```
**Example Prompt**
- What is the failure probability and Cause for Server 'x' on Date : [mm/dd/yy]?
- *Expected Output*: The failure probability for ET-1 on 11th July is 0.72. The likely cause is overheating due to sustained high temperatures over the past week.
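For convenience, the steps above can be wrapped in a small helper. The sketch below is illustrative only: the `predict_failure` and `extract_probability` names are ours, and the regular expression assumes the response follows the phrasing of the expected output above, which may vary between runs.

```python
import re

def predict_failure(model, tokenizer, server_id, date, max_new_tokens=100):
    """Query the model for a failure probability and likely cause.
    Assumes `model` and `tokenizer` were loaded as shown above."""
    prompt = f"What is the failure probability and Cause for Server '{server_id}' on Date : [{date}]?"
    input_ids = tokenizer(prompt, return_tensors='pt')['input_ids']
    output = model.generate(input_ids=input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

def extract_probability(response):
    """Pull the first probability-like number out of the response.
    Fragile, illustrative post-processing; the model's wording is not
    guaranteed to match this pattern."""
    match = re.search(r"is\s+([01](?:\.\d+)?)\b", response)
    return float(match.group(1)) if match else None

response = predict_failure(model, tokenizer, server_id="x", date="mm/dd/yy")
print(response)
print("Parsed probability:", extract_probability(response))
```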
### Requirements
#### Dependencies
- `pip install torch transformers`
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
**Source:** Synthetic and real-world server metrics from Esperanto servers.
**Dataset:** Synthetic data generated with periodic patterns (e.g., cosine functions) combined with operational zones (green, yellow, red).
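The exact generation script is not included in this card; the snippet below is only a rough sketch of how periodic synthetic metrics with green/yellow/red operational zones could be produced. All constants and zone thresholds are illustrative assumptions, not the values used for training.

```python
import numpy as np
import pandas as pd

# Illustrative sketch only: a daily cosine temperature pattern plus noise,
# bucketed into operational zones. Thresholds are assumed for illustration.
hours = np.arange(24 * 30)  # 30 days of hourly readings
temperature = 55 + 10 * np.cos(2 * np.pi * hours / 24) + np.random.normal(0, 2, hours.size)

def zone(temp_c):
    # Assumed zone boundaries, not the actual training thresholds
    if temp_c < 60:
        return "green"
    elif temp_c < 70:
        return "yellow"
    return "red"

df = pd.DataFrame({
    "hour": hours,
    "temperature_c": temperature,
    "zone": [zone(t) for t in temperature],
})
print(df.head())
```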
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
##### Numerical to Textual Conversion:
All numerical metrics (e.g., temperature, power consumption, throughput) were converted into descriptive textual data to make them comprehensible to the language model. For example:
- Numerical Input: {"temperature": [40, 42, 43]}
- Converted Text: "The temperature increased steadily from 40°C to 43°C over the last three readings."
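A minimal sketch of this conversion step is shown below. The `metrics_to_text` helper and its wording are assumptions made for illustration; the actual preprocessing pipeline may phrase the summaries differently.

```python
def metrics_to_text(name, values, unit="°C"):
    """Turn a short series of readings into a descriptive sentence.
    Hypothetical helper mirroring the example above."""
    if values[-1] > values[0]:
        trend = "increased steadily"
    elif values[-1] < values[0]:
        trend = "decreased steadily"
    else:
        trend = "remained stable"
    return (
        f"The {name} {trend} from {values[0]}{unit} to {values[-1]}{unit} "
        f"over the last {len(values)} readings."
    )

print(metrics_to_text("temperature", [40, 42, 43]))
# -> "The temperature increased steadily from 40°C to 43°C over the last 3 readings."
```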
##### Domain-Specific Context:
Prompts were carefully designed to incorporate domain knowledge, guiding the model to focus on server health indicators and operational risks.
- Example prompts include:
1. "Analyze the following server performance metrics and predict potential failures."
2. "Based on the provided metrics, forecast failure probabilities and identify potential causes."
*These prompts ensured the model understood the critical relationships between input metrics and their operational implications.*
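Putting the two preprocessing ideas together, a complete prompt might be assembled roughly as follows. The template structure and the example summaries are assumptions based on the prompts above, not the exact format used for fine-tuning.

```python
instruction = (
    "Analyze the following server performance metrics and predict potential "
    "failures. Based on the provided metrics, forecast failure probabilities "
    "and identify potential causes."
)

# Textual summaries produced by a conversion step like metrics_to_text above
# (contents are illustrative placeholders).
metric_summaries = [
    "The temperature increased steadily from 40°C to 43°C over the last 3 readings.",
    "Power consumption stayed within the green operational zone throughout the period.",
]

prompt = instruction + "\n\n" + "\n".join(metric_summaries)
print(prompt)
```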
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- Training time: ~30 hours on NVIDIA A100 GPUs
- Model size: ~7B parameters
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
*Validation set:* 10% of synthetic and real-world server performance data.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
The model was evaluated for:
- Failure prediction accuracy, including identification of the likely cause.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
### Results
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6659207a17951b5bd11a91fa/UgK2hf8rK9gTw_1AAUuo7.png)
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
#### Hardware
Runs on both NVIDIA A100 GPUs and the Esperanto ET-SoC.
#### Software
PyTorch and the Hugging Face Transformers library.
## Citation [optional]
Esperanto Blog: [More Information Needed]
## Model Card Authors [optional]
Sivakrishna Yaganti and Shankar Jayaratnam
## Model Card Contact
[email protected]