| Graduate student: | 陳建瑋 Chen, Jian-Wei |
|---|---|
| Thesis title: | 基於深度學習網路之農產品期貨價格預測模型 (Agricultural Futures Price Prediction Model based on Deep Learning Network) |
| Advisors: | 方瓊瑤 Fang, Chiung-Yao; 葉耀明 Yeh, Yao-Ming |
| Degree: | Master (碩士) |
| Department: | 資訊工程學系 Department of Computer Science and Information Engineering |
| Year of publication: | 2019 |
| Academic year of graduation: | 107 |
| Language: | Chinese |
| Pages: | 65 |
| Chinese keywords: | 短期價格預測、類神經網絡、時間序列分析、降噪自編碼器、集成學習、黃豆期貨、農產品期貨 |
| English keywords: | short-term price forecasting, neural network, time series analysis, denoising autoencoder, ensemble learning, soybean futures, agricultural futures |
| DOI: | http://doi.org/10.6345/NTNU201900222 |
| Document type: | Academic thesis |
| Views / downloads: | 177 / 0 |
In finance, forecasting the prices of financial instruments has long been a topic of interest to both researchers and industry. Fluctuations in international commodity prices have an outsized impact on Taiwan's import- and export-dependent economy, and the operations of large enterprises are heavily disturbed by swings in raw-material prices, forcing them to adopt hedging measures regularly to reduce interference from non-operational factors. To address this problem, this study collects commodity futures data from the Chicago Mercantile Exchange (CME) as samples, taking soybean futures as the example, and trains neural network models to solve the price prediction problem.

The results show that, for noise-laden financial time series, a denoising autoencoder can extract more representative latent features from futures price data. Such features help improve the price-prediction performance of neural network models. Furthermore, an ensemble learning approach that builds models over different time horizons and lets them decide jointly raises directional prediction accuracy and price-prediction performance, further improving the model.
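The denoising-autoencoder idea described above can be illustrated with a minimal numpy sketch. This is not the thesis's actual architecture: the synthetic price series, window width, hidden size, and noise level are all illustrative assumptions. The autoencoder is fed a corrupted price window and trained to reconstruct the clean window; its hidden activations then serve as denoised latent features for a downstream predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(series, width):
    """Slice a 1-D series into overlapping windows of the given width."""
    return np.stack([series[i:i + width] for i in range(len(series) - width + 1)])

# Synthetic "price" series: trend + seasonality + noise (placeholder for CME data).
t = np.arange(300)
prices = 900 + 0.2 * t + 15 * np.sin(t / 20) + rng.normal(0, 2, size=t.size)

width, hidden = 16, 8
X = make_windows(prices, width)
X = (X - X.mean()) / X.std()              # normalise for stable training

# Single-hidden-layer autoencoder: sigmoid encoder, linear decoder.
W1 = rng.normal(0, 0.1, (width, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, width)); b2 = np.zeros(width)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(200):
    noisy = X + rng.normal(0, 0.3, X.shape)  # corrupt the input ...
    H = sigmoid(noisy @ W1 + b1)             # ... encode it ...
    Y = H @ W2 + b2                          # ... and decode.
    err = Y - X                              # reconstruct the *clean* window
    # Backpropagation of the mean-squared reconstruction error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)
    gW1 = noisy.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Denoised latent features: one 8-dimensional vector per price window.
features = sigmoid(X @ W1 + b1)
print(features.shape)
```

In the thesis these latent features feed a deep price-prediction network; here the sketch stops at feature extraction, since the point is only how denoising training produces noise-robust representations.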
In the field of finance, forecasting the prices of financial instruments has always been a topic of interest to researchers and industry. Changes in commodity prices have a great impact on Taiwan's economic system, which relies on imports and exports. The operations of large enterprises are deeply disturbed by fluctuations in raw-material prices, so they must regularly adopt hedging measures to reduce interference from non-operational factors. To address this problem, this study collects agricultural futures data from the Chicago Mercantile Exchange (CME), taking soybean futures as an example, and trains neural network models to solve the price prediction problem.

The results show that, for financial data filled with noise, a denoising autoencoder can extract more representative latent features from the futures price time series. Such features help improve the price-prediction performance of neural network models. In addition, using ensemble learning to build a pool of models over different time horizons that make decisions jointly improves directional prediction accuracy and overall model performance.
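The ensemble decision step can be sketched as a majority vote among members that each look at a different time horizon. The momentum-style base predictors, the lookback windows, and the synthetic series below are illustrative assumptions, not the thesis's actual neural network pool; the sketch only shows the joint-decision mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "price" series standing in for soybean futures data.
t = np.arange(400)
prices = 900 + 0.3 * t + 10 * np.sin(t / 15) + rng.normal(0, 3, size=t.size)
returns = np.diff(prices)

def direction_votes(returns, lookbacks):
    """Each ensemble member predicts 'up' (+1) or 'down' (-1) from the mean
    return over its own lookback window; the ensemble takes the majority vote
    (an odd number of members avoids ties)."""
    votes = [1 if returns[-w:].mean() > 0 else -1 for w in lookbacks]
    majority = 1 if sum(votes) > 0 else -1
    return votes, majority

votes, decision = direction_votes(returns, lookbacks=[5, 10, 20, 40, 60])
print(votes, decision)
```

Members with short lookbacks react to recent swings while long-lookback members track the trend, so their joint vote is less sensitive to any single horizon's noise, which is the intuition behind the accuracy gain reported above.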