Data download instructions #191

Merged (2 commits) on Nov 11, 2024
2 changes: 1 addition & 1 deletion README.md
@@ -4,7 +4,6 @@ Public notebooks, utilities, and serving components for working with Time Series
The core TSFM time series models have been made available on Hugging Face -- details can be found
[here](https://github.com/ibm-granite/granite-tsfm/wiki). Information on the services component can be found [here](services/inference/README.md).


## Python Version
The currently supported Python versions are 3.9, 3.10, 3.11, and 3.12.
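To check your interpreter against this list before installing, a minimal sketch (the `is_supported` helper is ours, not part of the repo):

```python
import sys

# Supported (major, minor) pairs per the README.
SUPPORTED = {(3, 9), (3, 10), (3, 11), (3, 12)}

def is_supported(version_info):
    """Return True if the interpreter's (major, minor) pair is supported."""
    return tuple(version_info[:2]) in SUPPORTED

print("supported" if is_supported(sys.version_info) else "unsupported")
```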

@@ -28,6 +27,7 @@ pip install ".[notebooks]"
- Transfer learning with `PatchTSMixer` [[Try it out]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/patch_tsmixer_transfer.ipynb)
- Transfer learning with `PatchTST` [[Try it out]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/patch_tst_transfer.ipynb)
- Getting started with `TinyTimeMixer (TTM)` [[Try it out]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
- `TTM` full benchmarking scripts and results are available [[here]](https://github.com/ibm-granite/granite-tsfm/tree/main/notebooks/hfdemo/tinytimemixer/full_benchmarking)

## 📗 Google Colab Tutorials
Run the TTM tutorial in Google Colab, and quickly build a forecasting application with the pre-trained TSFM models.
4 changes: 4 additions & 0 deletions notebooks/hfdemo/tinytimemixer/README.md
@@ -0,0 +1,4 @@
# Steps to run the M4 notebook

## Fetching the M4 data
The M4 data can be downloaded from the [Time-Series-Library](https://github.com/thuml/Time-Series-Library). The authors of that library have shared the data through this [download link](https://drive.google.com/drive/folders/15zio96o3NK4XOoR5L88oaWcJDVOiqQo9).
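After downloading from the Drive link, the notebook needs the files locally; a hedged sketch of one possible layout (the `data_root_path/m4` directory name is our assumption, not prescribed by the notebook):

```shell
# Create a local data directory and move the downloaded M4 CSVs into it.
mkdir -p data_root_path/m4
# mv ~/Downloads/M4/*.csv data_root_path/m4/   # adjust to your download location
ls data_root_path
```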
14 changes: 13 additions & 1 deletion notebooks/hfdemo/tinytimemixer/full_benchmarking/README.md
@@ -1,13 +1,25 @@
# Steps to run the full benchmarking

## Fetching the data
The evaluation data can be downloaded from the [Time-Series-Library](https://github.com/thuml/Time-Series-Library). The authors of that library have shared the data through this [download link](https://drive.google.com/drive/folders/1vE0ONyqPlym2JaaAoEe0XNDR8FS_d322). The ETT datasets can also be downloaded from [ETT-Github-Repository](https://github.com/zhouhaoyi/ETDataset).

Download and save the datasets in a single directory, for example `data_root_path`.
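For illustration, one possible on-disk layout under `data_root_path` (the `ETT-small/ETTh1.csv` path follows the ETDataset repo; the placeholder file below stands in for the real downloaded CSV):

```shell
# Sketch of the expected layout; replace the placeholder with real data.
mkdir -p data_root_path/ETT-small
touch data_root_path/ETT-small/ETTh1.csv
find data_root_path -type f
```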

## Running the scripts

1. In a terminal, run any one of the three bash scripts `granite-r2.sh`, `granite-r1.sh`, or `research-use-r2.sh`, passing the data directory as an argument.
2. Run `summarize_results.py`. For example,
```
sh granite-r2.sh data_root_path/
python summarize_results.py -rd=results-granite-r2/
```

This runs all the benchmarks and dumps the results to CSV files.
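Conceptually, the summarization step aggregates the per-run CSV files; a minimal sketch of that idea (this is not the actual `summarize_results.py`, and the column names here are assumptions):

```python
import glob
import os
import tempfile

import pandas as pd

# Illustrative results directory with two hypothetical per-dataset CSVs.
rd = tempfile.mkdtemp()
pd.DataFrame({"dataset": ["etth1"], "horizon": [96], "mse": [0.36]}).to_csv(
    os.path.join(rd, "etth1.csv"), index=False)
pd.DataFrame({"dataset": ["etth2"], "horizon": [96], "mse": [0.28]}).to_csv(
    os.path.join(rd, "etth2.csv"), index=False)

# Combine all CSVs and average MSE per dataset over forecast horizons.
combined = pd.concat(
    pd.read_csv(f) for f in sorted(glob.glob(os.path.join(rd, "*.csv"))))
avg = combined.groupby("dataset")["mse"].mean().round(3)
print(avg.to_dict())
```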


## Benchmarking Results
Note that, although a random seed has been set, the mean squared error (MSE) scores may not exactly match the scores below, depending on the runtime environment. The following results were obtained on a Unix-based machine equipped with one NVIDIA A100 GPU.

1. TTM-Research-Use model results:
- `combined_results-research-use-r2.csv`: Across all datasets, all TTM models, and all forecast horizons.
- `combined_avg_results-research-use-r2.csv`: Across all datasets and all TTM models, averaged over forecast horizons.