update-readme-usage #2 by shchuro (opened)

README.md CHANGED
@@ -12,7 +12,9 @@ tags:
  # Chronos-Bolt⚡ (Base)

- Chronos-Bolt is a family of pretrained time series forecasting models which can be used for zero-shot forecasting. It is based on the [T5 encoder-decoder architecture](https://arxiv.org/abs/1910.10683) and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—a method known as direct multi-step forecasting. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the [original Chronos](https://arxiv.org/abs/2403.07815) models of the same size.

  The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.
@@ -43,12 +45,13 @@ Chronos-Bolt models are available in the following sizes.
  ## Usage

-
  ```
  pip install autogluon
  ```
-
  ```python
  from autogluon.timeseries import TimeSeriesPredictor, TimeSeriesDataFrame
@@ -64,6 +67,40 @@ predictor = TimeSeriesPredictor(prediction_length=48).fit(
  predictions = predictor.predict(df)
  ```

  ## Citation

  If you find Chronos or Chronos-Bolt models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):

  # Chronos-Bolt⚡ (Base)

+ Chronos-Bolt is a family of pretrained time series forecasting models which can be used for zero-shot forecasting. It is based on the [T5 encoder-decoder architecture](https://arxiv.org/abs/1910.10683) and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—a method known as direct multi-step forecasting. Chronos-Bolt models are **more accurate**, up to **250 times faster**, and **20 times more memory-efficient** than the [original Chronos](https://arxiv.org/abs/2403.07815) models of the same size.
+
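The patching step described above can be sketched in plain Python. This is illustrative only: the actual patch size, padding rules, and normalization used by Chronos-Bolt are internal to the model, and the function name below is made up.

```python
def chunk_into_patches(context, patch_size):
    """Split a 1-D history into consecutive fixed-size patches.

    Illustrative only: the real model's patch size and padding
    rules are internal to Chronos-Bolt and may differ.
    """
    values = list(context)
    pad = (-len(values)) % patch_size  # left-pad so length divides evenly
    padded = [None] * pad + values     # None marks padded positions
    return [padded[i:i + patch_size] for i in range(0, len(padded), patch_size)]

patches = chunk_into_patches(range(1, 7), patch_size=3)
# patches == [[1, 2, 3], [4, 5, 6]]
```

Each such patch becomes a single encoder input rather than one input per observation, which is what makes long contexts cheap to process.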
+ ## Performance

  The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.

  ## Usage

+ ### Zero-shot inference with Chronos-Bolt in AutoGluon
+
+ Install the required dependencies.
+
  ```
  pip install autogluon
  ```

+ Forecast with the Chronos-Bolt model.
+
  ```python
  from autogluon.timeseries import TimeSeriesPredictor, TimeSeriesDataFrame
  predictions = predictor.predict(df)
  ```
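The `df` passed to `fit` and `predict` is expected in AutoGluon's long format: one row per observation, with `item_id`, `timestamp`, and `target` columns. A minimal sketch with plain pandas (the values are made up; `TimeSeriesDataFrame.from_data_frame` is AutoGluon's constructor for wrapping such a frame):

```python
import pandas as pd

# Toy long-format data: one row per (item_id, timestamp) observation.
data = pd.DataFrame({
    "item_id": ["A"] * 4 + ["B"] * 4,
    "timestamp": list(pd.date_range("2024-01-01", periods=4, freq="h")) * 2,
    "target": [10.0, 12.0, 11.0, 13.0, 100.0, 98.0, 101.0, 99.0],
})
# df = TimeSeriesDataFrame.from_data_frame(data)  # ready for the predictor
```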

+ For more advanced features such as **fine-tuning** and **forecasting with covariates**, check out [this tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).
+
+ ### Deploying a Chronos-Bolt endpoint to SageMaker
+
+ First, update the SageMaker SDK to make sure that all the latest models are available.
+
+ ```
+ pip install -U sagemaker
+ ```
+
+ Deploy an inference endpoint to SageMaker.
+
+ ```python
+ from sagemaker.jumpstart.model import JumpStartModel
+
+ model = JumpStartModel(
+     model_id="autogluon-forecasting-chronos-bolt-base",
+     instance_type="ml.m5.2xlarge",
+ )
+ predictor = model.deploy()
+ ```
+
+ Now you can send time series data to the endpoint in JSON format.
+
+ ```python
+ import pandas as pd
+
+ df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
+
+ payload = {
+     "inputs": [
+         {"target": df["#Passengers"].tolist()}
+     ],
+     "parameters": {
+         "prediction_length": 12,
+     }
+ }
+ forecast = predictor.predict(payload)["predictions"]
+ ```
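The request body above can be assembled and sanity-checked offline before the endpoint exists. `build_chronos_payload` below is a hypothetical helper (not part of the SageMaker SDK) that mirrors the payload shape shown in the snippet:

```python
import json

def build_chronos_payload(series_list, prediction_length):
    """Hypothetical helper (not part of any SDK): build a request body
    in the shape shown above from raw per-series values."""
    return {
        "inputs": [{"target": [float(x) for x in series]} for series in series_list],
        "parameters": {"prediction_length": int(prediction_length)},
    }

payload = build_chronos_payload([[112, 118, 132, 129]], prediction_length=12)
body = json.dumps(payload)  # serialized form of the request body
```

Centralizing payload construction like this makes it easy to add further parameters the endpoint may accept without touching call sites.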
+
+ Chronos-Bolt models can be deployed to both CPU and GPU instances. These models also support **forecasting with covariates**. For more details about the endpoint API, check out the [example notebook](https://github.com/aws/amazon-sagemaker-examples/blob/default/generative_ai/sm-jumpstart_time_series_forecasting.ipynb).
+
  ## Citation

  If you find Chronos or Chronos-Bolt models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):