Graph ConvRNN in PyTorch

An end-to-end trainable Graph ConvRNN implemented in PyTorch, obtained by replacing the 2D CNN in ConvRNN with GNNs.

The network can be used for time series prediction of correlated data in non-Euclidean space (especially graph-structured data, e.g., a metro system).
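
To make the idea concrete, here is a minimal sketch of a single graph-convolutional LSTM cell, where the gates are computed by neighbourhood aggregation over the adjacency matrix instead of a 2D convolution. This is an illustrative simplification, not the exact implementation in this repo:

import torch
import torch.nn as nn

class GraphConvLSTMCell(nn.Module):
    """Sketch of an LSTM cell whose gates are produced by a one-hop
    graph convolution over [x, h] instead of a 2D convolution."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        # one linear map producing all four gates at once
        self.gates = nn.Linear(input_dim + hidden_dim, 4 * hidden_dim)

    def forward(self, x, adj, state):
        # x: [batch, num_nodes, input_dim], adj: [num_nodes, num_nodes]
        h, c = state  # each: [batch, num_nodes, hidden_dim]
        z = torch.cat([x, h], dim=-1)                # per-node features
        z = torch.einsum('ij,bjd->bid', adj, z)      # aggregate neighbours
        i, f, o, g = self.gates(z).chunk(4, dim=-1)  # split the four gates
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c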

So far, the following GNNs and RNNs are supported and can be combined freely (a small combination sketch follows the list):

  • Graph Neural Networks
    • GAT
    • GCN
  • Recurrent Neural Networks
    • LSTM
    • GRU
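
A minimal sketch of building all four GNN/RNN combinations. The constructor arguments follow the example in the next section; the lowercase mode strings 'gat' and 'gru' are assumed by analogy with the 'gcn'/'lstm' values shown there:

from graph_convrnn import Graph_ConvRNN

# one model per GNN/RNN combination
models = {}
for gnn in ('gcn', 'gat'):
    for rnn in ('lstm', 'gru'):
        models[(gnn, rnn)] = Graph_ConvRNN(
            input_dim=64,
            num_layers=2,
            gnn_hidden_dim=32,
            rnn_hidden_dim=16,
            num_nodes=50,
            rnn_mode=rnn,
            gnn_mode=gnn,
        )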

How to Use

Requires PyTorch 1.0+.

Setup

git clone https://github.com/DarkstartsUp/Graph-ConvRNN.pytorch.git
cd Graph-ConvRNN.pytorch
pip install -r requirements.txt

Graph ConvRNN module

The file graph_convrnn.py contains the implementation of Graph ConvRNN in PyTorch.

The Graph_ConvRNN module inherits from nn.Module, so it can be used like any other PyTorch module. Example usage:

import torch
from graph_convrnn import Graph_ConvRNN

device = torch.device("cuda:0")
# Model initialization
model = Graph_ConvRNN(
    input_dim=64,
    num_layers=2,
    gnn_hidden_dim=32,
    rnn_hidden_dim=16,
    num_nodes=50,
    rnn_mode='lstm',
    gnn_mode='gcn',
).to(device)

# Test input output
x = torch.rand((128, 10, 50, 64)).to(device)  # [batch_size, sequence_length, num_nodes, feature_dim]
adj = torch.ones((50, 50)).to(device)
_, last_states = model(x, adj)
h = last_states[0][0]  # 0 for layer index, 0 for h index
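
For time series prediction, the hidden state returned above can be fed into a small readout layer. The snippet below is an illustrative sketch only; the Linear head and the assumed shape of h, [batch_size, num_nodes, rnn_hidden_dim], are assumptions, not part of the repo's API:

import torch.nn as nn

# hypothetical readout head: map each node's hidden state to a scalar forecast
readout = nn.Linear(16, 1).to(device)   # rnn_hidden_dim -> 1
y_hat = readout(h)                      # assumed shape: [batch_size, num_nodes, 1]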

Experiments

Datasets

Two real-world datasets were used to demonstrate the effectiveness of the network. If you want to train and test on these two datasets, please download them from the link below and place the unzipped dataset folders in the data folder at the root of the project.

The unzipped file structure:

- data
    - Metro Ridership Dataset
    - Seattle_Loop_Dataset
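
A quick sanity check that the unzipped datasets are in place can look like the snippet below; the folder names follow the structure above, and the check itself is an illustrative addition, not part of the repo:

import os

for name in ('Metro Ridership Dataset', 'Seattle_Loop_Dataset'):
    path = os.path.join('data', name)
    print(path, 'ok' if os.path.isdir(path) else 'MISSING')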

Training code

Execute the following commands to train on the above datasets:

# train on Metro dataset
python metro_train.py

# train on Seattle-Loop dataset
python seattle_train.py

Results

Metro dataset

There are two sub-datasets in the Metro dataset: Metro-Hangzhou and Metro-Shanghai. In the paper, the authors use three adjacency matrices: the Physical Graph (CONN), the Similarity Graph (SML), and the Correlation Graph (CORR). In addition, the identity matrix (generated by np.eye(), EYE) and the all-ones matrix (generated by np.ones(), ONES) are also tested. Note that the input data is normalized to zero mean and unit standard deviation, and the experimental results are reported on this normalized scale.
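
To make the EYE/ONES baselines, the normalization, and the reported metrics concrete, here is a minimal sketch; the variable names and the metric implementations are illustrative and are not copied from this repo:

import numpy as np

num_nodes = 80                                  # illustrative number of metro stations
adj_eye = np.eye(num_nodes)                     # EYE: identity matrix (no cross-node edges)
adj_ones = np.ones((num_nodes, num_nodes))      # ONES: fully connected graph

# z-score normalization to zero mean and unit standard deviation
x = np.random.rand(1000, num_nodes)             # stand-in for the raw ridership series
x_norm = (x - x.mean()) / x.std()

# standard metric definitions, evaluated on the normalized scale
def mae(pred, target):
    return np.mean(np.abs(pred - target))

def rmse(pred, target):
    return np.sqrt(np.mean((pred - target) ** 2))

def mape(pred, target, eps=1e-8):
    return np.mean(np.abs((pred - target) / (target + eps))) * 100  # in percent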

Metro-Hangzhou GCN+LSTM

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.6056   4.2775   0.7736
ONES               0.6657   4.1435   0.8133
CONN               0.5673   4.1346   0.7196
CORR               0.6690   4.1589   0.8166
SML                0.5653   4.1608   0.7149

Metro-Hangzhou GCN+GRU

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.5919   3.3964   0.7424
ONES               0.6287   3.9515   0.7741
CONN               0.5487   4.6308   0.7002
CORR               0.5454   3.7265   0.6865
SML                0.5124   3.5896   0.6508

Metro-Hangzhou GAT+LSTM

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.7905   1.0187   0.9545
ONES               0.6300   3.5725   0.7648
CONN               0.7198   4.8176   0.8794
CORR               0.6454   4.6914   0.7946
SML                0.6074   4.0757   0.7586

Metro-Hangzhou GAT+GRU

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.7990   1.0198   0.9648
ONES               0.5853   2.7512   0.7321
CONN               0.7513   4.5726   0.9197
CORR               0.5858   3.8514   0.7175
SML                0.5384   3.1756   0.6686

Metro-Shanghai GCN+LSTM

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.5182   4.1083   0.7833
ONES               0.4488   3.4030   0.7370
CONN               0.3663   2.3310   0.6714
CORR               0.3533   1.9101   0.6674
SML                0.3620   2.3637   0.6705

Metro-Shanghai GCN+GRU

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.4161   3.5248   0.6788
ONES               0.3751   3.2090   0.6287
CONN               0.3134   2.0498   0.5819
CORR               0.2981   2.0499   0.5753
SML                0.3075   2.2292   0.5813

Seattle-Loop dataset

In the paper, the authors use adjacency matrices including the Physical Connection matrix (A) and the Loop Free-flow Reachability Matrix within an X-minute drive (X in {5, 10, 15, 20, 25}). In addition, the identity matrix (generated by np.eye(), EYE) and the all-ones matrix (generated by np.ones(), ONES) are also tested. Note that the input data is normalized to zero mean and unit standard deviation, and the experimental results are reported on this normalized scale.
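
One way to read the 5min-25min rows below: a free-flow reachability matrix can be obtained by thresholding a free-flow travel-time matrix at X minutes. The sketch below illustrates the idea only; the variable names are assumptions, and the actual construction used in the dataset and paper may differ:

import numpy as np

num_nodes = 323                                   # illustrative; actual detector count may differ
# free-flow travel time in minutes between detectors (illustrative stand-in data)
travel_time_min = np.random.uniform(0, 30, size=(num_nodes, num_nodes))
np.fill_diagonal(travel_time_min, 0.0)

def reachability(travel_time, x_minutes):
    """1 where node j is reachable from node i within x_minutes of free-flow driving."""
    return (travel_time <= x_minutes).astype(np.float32)

adj_5min = reachability(travel_time_min, 5)
adj_25min = reachability(travel_time_min, 25)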

Seattle-Loop GCN+LSTM

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.5221   3.9945   0.7812
ONES               0.4666   5.3441   0.7590
A                  0.4507   4.3034   0.7340
5min               0.4162   4.6766   0.7165
10min              0.4183   4.5386   0.7185
15min              0.4204   4.5217   0.7220
20min              0.4455   4.8245   0.7436
25min              0.4651   5.1576   0.7619

Seattle-Loop GCN+GRU

Adjacency Matrix   MAE      MAPE     RMSE
EYE                0.4394   4.8956   0.7127
ONES               0.4331   5.4537   0.7105
A                  0.4156   4.9964   0.6858
5min               0.3965   4.7380   0.6709
10min              0.3986   4.6266   0.6742
15min              0.4060   4.6954   0.6805
20min              0.4243   4.9668   0.7009
25min              0.4247   5.2689   0.7022

Acknowledgements

This repo borrows some code from implementations of the works cited below.

Citation

GCN

@inproceedings{kipf2017semi,
  title={Semi-Supervised Classification with Graph Convolutional Networks},
  author={Kipf, Thomas N. and Welling, Max},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2017}
}

GAT

@article{velickovic2018graph,
  title={Graph Attention Networks},
  author={Veli{\v{c}}kovi{\'{c}}, Petar and Cucurull, Guillem and Casanova, Arantxa and Romero, Adriana and Li{\`{o}}, Pietro and Bengio, Yoshua},
  journal={International Conference on Learning Representations},
  year={2018},
  url={https://openreview.net/forum?id=rJXMpikCZ},
  note={accepted as poster},
}

ConvLSTM

@inproceedings{xingjian2015convolutional,
  title={Convolutional LSTM network: A machine learning approach for precipitation nowcasting},
  author={Xingjian, SHI and Chen, Zhourong and Wang, Hao and Yeung, Dit-Yan and Wong, Wai-Kin and Woo, Wang-chun},
  booktitle={Advances in neural information processing systems},
  pages={802--810},
  year={2015}
}

Metro Dataset

@article{liu2020physical,
  title={Physical-Virtual Collaboration Modeling for Intra-and Inter-Station Metro Ridership Prediction},
  author={Liu, Lingbo and Chen, Jingwen and Wu, Hefeng and Zhen, Jiajie and Li, Guanbin and Lin, Liang},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  year={2020},
  publisher={IEEE}
}

Seattle-Loop Dataset

@article{cui2019traffic,
  title={Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting},
  author={Cui, Zhiyong and Henrickson, Kristian and Ke, Ruimin and Wang, Yinhai},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={21},
  number={11},
  pages={4883--4894},
  year={2019},
  publisher={IEEE}
}
