# PatchTST
|
## Overview
|
The PatchTST model was proposed in [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam.
|
At a high level, the model vectorizes a time series into patches of a given size and encodes the resulting sequence of vectors with a Transformer, which then outputs the forecast over the prediction length via an appropriate head. The model is illustrated in the following figure:
|
|
|
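To make the patching step concrete, here is a minimal sketch that splits a context window into non-overlapping patches with `torch.Tensor.unfold`; the window size, patch length, and stride are illustrative values, not the model's defaults.

```python
import torch

# Toy batch: 2 series, a 512-step context window, 7 channels (illustrative sizes).
past_values = torch.randn(2, 512, 7)

patch_length, patch_stride = 16, 16  # non-overlapping patches

# Unfold along the time dimension: each series becomes a sequence of patch "words".
patches = past_values.unfold(dimension=1, size=patch_length, step=patch_stride)

print(patches.shape)  # torch.Size([2, 32, 7, 16]): 32 patches of 16 steps per channel
```

Each patch is then embedded and fed to the Transformer as a token, which is what lets the model attend over a long history at a fraction of the attention cost of per-time-step tokens.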
The abstract from the paper is the following: |
|
*We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.*
|
This model was contributed by namctin, gsinthong, diepi, vijaye12, wmgifford, and kashif. The original code can be found [here](https://github.com/yuqinie98/PatchTST).
|
## Usage tips
|
The model can also be used for time series classification and time series regression. See the respective [`PatchTSTForClassification`] and [`PatchTSTForRegression`] classes.
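As a quick illustration, the sketch below instantiates randomly initialized forecasting and classification models from a [`PatchTSTConfig`] and runs a forward pass on dummy data. The configuration values are illustrative, and the output attribute names (`prediction_outputs`, `prediction_logits`) are our reading of the current API; see the class references below for the authoritative signatures.

```python
import torch
from transformers import (
    PatchTSTConfig,
    PatchTSTForClassification,
    PatchTSTForPrediction,
)

# Forecasting: predict 96 future steps from a 512-step context (illustrative sizes).
config = PatchTSTConfig(
    num_input_channels=7,
    context_length=512,
    patch_length=16,
    patch_stride=16,
    prediction_length=96,
)
model = PatchTSTForPrediction(config)
past_values = torch.randn(2, config.context_length, config.num_input_channels)
with torch.no_grad():
    outputs = model(past_values=past_values)
print(outputs.prediction_outputs.shape)  # expected: torch.Size([2, 96, 7])

# Classification: num_targets gives the number of classes
# (PatchTSTForRegression is set up analogously, with num_targets output dimensions).
clf_config = PatchTSTConfig(
    num_input_channels=7,
    context_length=512,
    patch_length=16,
    patch_stride=16,
    num_targets=4,
)
clf_model = PatchTSTForClassification(clf_config)
with torch.no_grad():
    clf_outputs = clf_model(past_values=past_values)
print(clf_outputs.prediction_logits.shape)  # expected: torch.Size([2, 4])
```

In practice you would load pretrained weights with `from_pretrained` rather than training from a random initialization.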
|
## Resources
|
|
|
- A blog post explaining PatchTST in depth can be found [here](https://huggingface.co/blog/patchtst). The blog can also be opened in Google Colab.
|
|
|
## PatchTSTConfig
|
[[autodoc]] PatchTSTConfig |
|
## PatchTSTModel

[[autodoc]] PatchTSTModel
    - forward
|
## PatchTSTForPrediction

[[autodoc]] PatchTSTForPrediction
    - forward
|
## PatchTSTForClassification

[[autodoc]] PatchTSTForClassification
    - forward
|
## PatchTSTForPretraining

[[autodoc]] PatchTSTForPretraining
    - forward
|
## PatchTSTForRegression

[[autodoc]] PatchTSTForRegression
    - forward