Commit b48c62e

Author: Abdul Fatir Ansari
Rename models
1 parent 46a1bbc commit b48c62e

1 file changed: README.md (8 additions, 8 deletions)

@@ -17,7 +17,7 @@

 ## 🚀 News

-- **25 Nov 2024**: 🚀 Chronos⚡️ (read: Chronos-Bolt) models released [on HuggingFace](https://huggingface.co/collections/amazon/chronos-models-65f1791d630a8d57cb718444). Chronos⚡️ models are more accurate (5% lower error) and 100x faster than the original Chronos models!
+- **25 Nov 2024**: ⚡️ Chronos-Bolt models released [on HuggingFace](https://huggingface.co/collections/amazon/chronos-models-65f1791d630a8d57cb718444). Chronos-Bolt models are more accurate (5% lower error), up to 250x faster and 20x more memory efficient than the original Chronos models of the same size!
 - **27 Jun 2024**: 🚀 [Released datasets](https://huggingface.co/datasets/autogluon/chronos_datasets) used in the paper and an [evaluation script](./scripts/README.md#evaluating-chronos-models) to compute the WQL and MASE scores reported in the paper.
 - **17 May 2024**: 🐛 Fixed an off-by-one error in bin indices in the `output_transform`. This simple fix significantly improves the overall performance of Chronos. We will update the results in the next revision on ArXiv.
 - **10 May 2024**: 🚀 We added the code for pretraining and fine-tuning Chronos models. You can find it in [this folder](./scripts/training). We also added [a script](./scripts/kernel-synth.py) for generating synthetic time series data from Gaussian processes (KernelSynth; see Section 4.2 in the paper for details). Check out the [usage examples](./scripts/).
@@ -62,19 +62,19 @@ The models in this repository are based on the [T5 architecture](https://arxiv.o

 ### Zero-Shot Results

-The following figure showcases the remarkable **zero-shot** performance of Chronos and Chronos⚡️ models on 27 datasets against local models, task-specific models and other pretrained models. For details on the evaluation setup and other results, please refer to [the paper](https://arxiv.org/abs/2403.07815).
+The following figure showcases the remarkable **zero-shot** performance of Chronos and Chronos-Bolt models on 27 datasets against local models, task-specific models and other pretrained models. For details on the evaluation setup and other results, please refer to [the paper](https://arxiv.org/abs/2403.07815).

 <p align="center">
 <img src="figures/zero_shot-agg_scaled_score.png" width="80%">
 <br />
 <span>
-Fig. 2: Performance of different models on Benchmark II, comprising 27 datasets <b>not seen</b> by Chronos and Chronos⚡️ models during training. This benchmark provides insights into the zero-shot performance of Chronos and Chronos⚡️ models against local statistical models, which fit parameters individually for each time series, task-specific models <i>trained on each task</i>, and pretrained models trained on a large corpus of time series. Pretrained Models (Other) indicates that some (or all) of the datasets in Benchmark II may have been in the training corpus of these models. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the Agg. Relative WQL and MASE, respectively.
+Fig. 2: Performance of different models on Benchmark II, comprising 27 datasets <b>not seen</b> by Chronos and Chronos-Bolt models during training. This benchmark provides insights into the zero-shot performance of Chronos and Chronos-Bolt models against local statistical models, which fit parameters individually for each time series, task-specific models <i>trained on each task</i>, and pretrained models trained on a large corpus of time series. Pretrained Models (Other) indicates that some (or all) of the datasets in Benchmark II may have been in the training corpus of these models. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the Agg. Relative WQL and MASE, respectively.
 </span>
 </p>

 ## 📈 Usage

-To perform inference with Chronos or Chronos⚡️ models, install this package by running:
+To perform inference with Chronos or Chronos-Bolt models, install this package by running:

 ```
 pip install git+https://github.com/amazon-science/chronos-forecasting.git
@@ -84,15 +84,15 @@ pip install git+https://github.com/amazon-science/chronos-forecasting.git

 ### Forecasting

-A minimal example showing how to perform forecasting using Chronos and Chronos⚡️ models:
+A minimal example showing how to perform forecasting using Chronos and Chronos-Bolt models:

 ```python
 import pandas as pd # requires: pip install pandas
 import torch
 from chronos import BaseChronosPipeline

 pipeline = BaseChronosPipeline.from_pretrained(
-    "amazon/chronos-t5-small", # use "amazon/chronos-bolt-small" for the corresponding Chronos⚡️ model
+    "amazon/chronos-t5-small", # use "amazon/chronos-bolt-small" for the corresponding Chronos-Bolt model
     device_map="cuda", # use "cpu" for CPU inference and "mps" for Apple Silicon
     torch_dtype=torch.bfloat16,
 )
@@ -105,7 +105,7 @@ df = pd.read_csv(
 # or a left-padded 2D tensor with batch as the first dimension
 # The original Chronos models generate forecast samples, so forecast has shape
 # [num_series, num_samples, prediction_length].
-# Chronos⚡️ models generate quantile forecasts, so forecast has shape
+# Chronos-Bolt models generate quantile forecasts, so forecast has shape
 # [num_series, num_quantiles, prediction_length].
 forecast = pipeline.predict(
     context=torch.tensor(df["#Passengers"]), prediction_length=12
@@ -118,7 +118,7 @@ More options for `pipeline.predict` can be found with:
 from chronos import ChronosPipeline, ChronosBoltPipeline

 print(ChronosPipeline.predict.__doc__) # for Chronos models
-print(ChronosBoltPipeline.predict.__doc__) # for Chronos⚡️ models
+print(ChronosBoltPipeline.predict.__doc__) # for Chronos-Bolt models
 ```

 We can now visualize the forecast:
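
The plotting code referenced here lies outside the changed lines of this commit. A minimal sketch of such a visualization, assuming matplotlib and the sample-based `forecast` tensor and `df` from the snippet above, might look like:

```python
import matplotlib.pyplot as plt  # requires: pip install matplotlib
import numpy as np

# Summarize the forecast samples (shape [num_series, num_samples, prediction_length])
# into a median and an 80% prediction interval for the first (and only) series.
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
forecast_index = range(len(df), len(df) + 12)

plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.show()
```

For a Chronos-Bolt model, whose output is already a set of quantiles, the median and interval would instead be taken along the quantile dimension of the forecast tensor.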
