Abstract (English)
We conducted a detailed investigation into how Reinforcement
Learning (RL) and Graph Attention Networks (GAT) can be integrated to optimize the asset allocation
process. The primary aim of this study is to improve the management
efficiency and performance of stock portfolios through these advanced techniques.
The proposed approach not only improves strategy performance but also
enhances the model’s responsiveness to market fluctuations and its predictive accuracy.
The introduction begins with an overview of Graph Attention Networks
(GAT). By modeling the interactions between stocks, GAT enables
stock-picking agents to capture hidden opportunities in the market and
effectively predict asset performance under various economic conditions.
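To make the attention mechanism concrete, the following is a minimal single-head GAT layer in plain NumPy, in the style of Veličković et al. The toy chain graph, variable names, and weights are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, A, W, a):
    """One single-head GAT layer.

    H: (N, F) node (stock) features; A: (N, N) adjacency with self-loops;
    W: (F, F2) shared projection; a: (2*F2,) attention vector.
    Returns updated features (N, F2) and attention weights (N, N).
    """
    Z = H @ W                                   # project features
    F2 = Z.shape[1]
    # e[i, j] = LeakyReLU(a . [z_i || z_j]), split into source/target halves
    e = leaky_relu((Z @ a[:F2])[:, None] + (Z @ a[F2:])[None, :])
    e = np.where(A > 0, e, -np.inf)             # attend only to neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over each row
    return alpha @ Z, alpha

rng = np.random.default_rng(0)
N, F, F2 = 4, 5, 3                              # 4 stocks, 5 raw features each
H = rng.normal(size=(N, F))
A = np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # chain graph
H2, alpha = gat_layer(H, A, rng.normal(size=(F, F2)), rng.normal(size=2 * F2))
```

Each row of `alpha` is a probability distribution over a stock's neighbors, which is what lets the agent weight related stocks differently when aggregating their features.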
Additionally, we employed a Temporal Convolutional Network Autoencoder
(TCN-AE) to compress features from daily trading data. This step is crucial
for processing transaction information efficiently and
improving predictive accuracy. Through the compression process, TCN-AE distills
large volumes of trading data into compact feature representations,
which improves the efficiency and effectiveness of subsequent model
training.
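The compression idea can be sketched with a toy example. The thesis does not reproduce its layer configuration here, so the kernel weights, dilations, and downsampling stride below are illustrative assumptions; only the overall TCN-AE shape (causal dilated convolutions, temporal downsampling to a latent code, upsampling back) follows Thill et al.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """y[t] depends only on x[t], x[t-d], x[t-2d], ... (no future leakage)."""
    pad = (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([np.dot(w, xp[t + pad - dilation * np.arange(len(w))])
                     for t in range(len(x))])

def encode(x, stride=4):
    """Two causal conv layers with growing dilation, then temporal downsampling."""
    h = np.tanh(causal_dilated_conv(x, np.array([0.5, 0.3, 0.2]), dilation=1))
    h = np.tanh(causal_dilated_conv(h, np.array([0.6, 0.4]), dilation=2))
    return h.reshape(-1, stride).mean(axis=1)   # latent code: len(x)/stride values

def decode(z, stride=4):
    """Nearest-neighbour upsampling back to the original length (toy decoder)."""
    return np.repeat(z, stride)

x = np.sin(np.arange(64) * 0.3)                 # stand-in for one daily price series
z = encode(x)                                   # 64 points compressed to 16
x_hat = decode(z)
```

The causal padding is the key property: each compressed feature at time t summarizes only the past, so the downstream agent never trains on leaked future information.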
Finally, the paper presents the application of reinforcement learning,
specifically the Proximal Policy Optimization (PPO) algorithm. PPO
is an advanced reinforcement learning method employed to train the stock-picking
and trading agents, automating and optimizing the decision-making process in
trading. The agents learn how to make optimal decisions in varying market
conditions and continuously adjust their strategies through an ongoing learning
process to adapt to market fluctuations.
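The core of PPO is the clipped surrogate objective of Schulman et al., which keeps each policy update close to the previous policy. A minimal NumPy rendering (the batch values are made up for illustration):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, adv, eps=0.2):
    """Clipped surrogate: mean of min(r * A, clip(r, 1-eps, 1+eps) * A)."""
    ratio = np.exp(logp_new - logp_old)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return np.minimum(unclipped, clipped).mean() # maximized by gradient ascent

# Toy batch: per-action log-probabilities and advantage estimates.
logp_old = np.array([-1.0, -2.0, -0.5])
logp_new = np.array([-0.8, -2.5, -0.5])
adv      = np.array([ 1.0, -0.5,  2.0])
obj = ppo_clip_objective(logp_new, logp_old, adv)
```

The clip removes the incentive to push the probability ratio beyond 1 ± eps when the advantage is positive, which is what makes PPO updates stable enough for noisy financial rewards.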
Integrating these technologies, the paper proposes an innovative asset allocation
system designed to optimize portfolio management through machine learning.
Experiments demonstrate that, compared to the largest globally managed ETFs,
the system achieves lower risk and higher returns.
This confirms the effectiveness and forward-looking nature of combining
reinforcement learning and graph attention mechanisms in asset allocation.