Future Transportation, Vol. 6, Pages 51: Dynamic Multi-Relation Learning with Multi-Scale Hypergraph Transformer for Multi-Modal Traffic Forecasting

Future Transportation doi: 10.3390/futuretransp6010051

Authors:
Juan Chen
Meiqing Shan

Accurate multi-modal traffic demand forecasting is key to optimizing intelligent transportation systems (ITSs). To overcome the shortcomings of existing methods in capturing dynamic high-order correlations between heterogeneous spatial units and in decoupling intra- and inter-mode dependencies at multiple time scales, this paper proposes Dynamic Multi-Relation Learning with a Multi-Scale Hypergraph Transformer (MST-Hyper Trans). The model integrates three novel modules. First, the Multi-Scale Temporal Hypergraph Convolutional Network (MSTHCN) constructs a multi-scale temporal hypergraph to achieve collaborative decoupling and capture the periodic and cross-modal temporal interactions of transportation demand at multiple granularities, such as time, day, and week. Second, the Dynamic Multi-Relationship Spatial Hypergraph Network (DMRSHN) integrates geographic proximity, passenger-flow similarity, and transportation connectivity to construct structural hyperedges, and combines the KNN and K-means algorithms to generate dynamic hyperedges, thereby accurately modeling the dynamically evolving high-order spatial correlations between heterogeneous nodes. Finally, the Conditional Meta Attention Gated Fusion Network (CMAGFN), a lightweight meta-network, introduces a gating mechanism based on multi-head cross-attention; it dynamically generates node features from the real-time traffic context and adaptively calibrates the fusion weights of multi-source information, enabling scene-aware prediction decisions. Experiments on three real-world datasets (NYC-Taxi, NYC-Bike, and NYC-Subway) demonstrate that MST-Hyper Trans achieves an average reduction of 7.6% in RMSE and 9.2% in MAE across all modes compared with the strongest baseline, while maintaining interpretability of spatiotemporal interactions. This study not only provides good model interpretability but also offers a reliable solution for collaborative multi-modal traffic management.
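To make the dynamic-hyperedge idea in DMRSHN concrete, the sketch below builds a node-by-hyperedge incidence matrix from current node features by combining KNN neighborhoods with K-means clusters. This is a minimal illustrative sketch, not the authors' implementation: the function name, array shapes, and the parameters k and n_clusters are assumptions.

```python
# Hypothetical sketch of dynamic hyperedge construction, loosely following the
# DMRSHN description above: KNN hyperedges group each node with its nearest
# neighbors in feature space, and K-means hyperedges group nodes that share a
# cluster. Shapes and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def dynamic_hyperedges(node_feats: np.ndarray, k: int = 5, n_clusters: int = 8) -> np.ndarray:
    """Build a node-by-hyperedge incidence matrix H from current node features.

    node_feats: (N, D) array of per-node demand features at the current step.
    Returns H of shape (N, N + n_clusters): one KNN hyperedge per node plus
    one hyperedge per K-means cluster.
    """
    n = node_feats.shape[0]

    # KNN hyperedges: each node anchors a hyperedge containing itself and its k neighbors.
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(node_feats)
    _, idx = nbrs.kneighbors(node_feats)           # (N, k+1); column 0 is the node itself
    H_knn = np.zeros((n, n))
    for i, neigh in enumerate(idx):
        H_knn[neigh, i] = 1.0

    # K-means hyperedges: all nodes assigned to the same cluster share one hyperedge.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(node_feats)
    H_km = np.zeros((n, n_clusters))
    H_km[np.arange(n), labels] = 1.0

    return np.concatenate([H_knn, H_km], axis=1)   # (N, N + n_clusters)
```

In a full model, such a dynamic incidence matrix would typically be combined with the structural hyperedges (geographic proximity, passenger-flow similarity, transportation connectivity) before hypergraph convolution, with the matrix recomputed as node features evolve over time.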


