
Graduate Student: Chen, Jian-Wei
Thesis Title: Agricultural Futures Price Prediction Model Based on Deep Learning Network
Advisors: Fang, Chiung-Yao; Yeh, Yao-Ming
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2019
Graduation Academic Year: 107
Language: Chinese
Pages: 65
Keywords: short-term price forecasting, neural network, time series analysis, denoising autoencoder, ensemble learning, soybean futures, agricultural futures
DOI URL: http://doi.org/10.6345/NTNU201900222
Thesis Type: Academic thesis
Access count: 191 views, 0 downloads
In the field of finance, forecasting the prices of financial instruments has long been a topic of interest to both researchers and industry. Fluctuations in international commodity prices have a substantial impact on Taiwan's import- and export-dependent economy, and the operations of large enterprises are heavily disturbed by raw-material price swings, forcing them to hedge regularly to reduce interference from non-operational factors. To address this problem, this study collects commodity futures data from the Chicago Mercantile Exchange (CME) as its sample, taking soybean futures as the example, and trains neural network models to solve the price prediction problem.

The results show that, for noisy financial time series, a denoising autoencoder can extract more representative latent features from futures price data, and that such features improve the price prediction performance of neural network models. Furthermore, an ensemble learning approach that builds models over different time windows and lets them make decisions jointly raises both directional prediction accuracy and price prediction performance, further improving model effectiveness.
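The denoising-autoencoder step described in the abstract can be sketched as follows. This is an illustrative example rather than the thesis code: the synthetic random-walk "price" series, window length, latent dimension, noise level, and learning rate are all assumptions. The idea it demonstrates is the one the abstract states: corrupt the input windows with noise, train the network to reconstruct the clean windows, and use the hidden activations as denoised latent features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: sliding windows over a synthetic random-walk "price" series.
# Window length and layer sizes are illustrative assumptions, not thesis settings.
series = np.cumsum(rng.normal(0, 1, 500))
win = 20
X = np.stack([series[i:i + win] for i in range(len(series) - win)])
X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)

hidden = 8                                   # latent feature dimension
W1 = rng.normal(0, 0.1, (win, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, win)); b2 = np.zeros(win)

def forward(Xin):
    H = np.tanh(Xin @ W1 + b1)               # latent features
    return H, H @ W2 + b2                    # reconstruction

def clean_loss():
    _, R = forward(X)                        # reconstruction error on clean input
    return float(((R - X) ** 2).mean())

lr, noise = 0.05, 0.3
before = clean_loss()
for _ in range(200):
    Xn = X + rng.normal(0, noise, X.shape)   # corrupt the input ...
    H, R = forward(Xn)
    G = 2 * (R - X) / X.shape[0]             # ... but reconstruct the CLEAN target
    gW2 = H.T @ G; gb2 = G.sum(axis=0)
    GH = (G @ W2.T) * (1 - H ** 2)           # backprop through tanh
    gW1 = Xn.T @ GH; gb1 = GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
after = clean_loss()

# The hidden activations are the denoised latent features that would feed
# the downstream price prediction model.
latent, _ = forward(X)
```

After training, `after` should be well below `before`, and `latent` (one 8-dimensional vector per price window) plays the role of the "more representative latent features" mentioned in the abstract.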

Table of Contents:
Abstract (Chinese); Abstract (English); List of Tables; List of Figures
Chapter 1: Introduction
    1.1 Research Background and Motivation
    1.2 Research Objectives
    1.3 Experimental Goals
    1.4 Research Scope and Limitations
Chapter 2: Literature Review
    2.1 Financial Price Prediction Models and Time Series Analysis
    2.2 Deep Learning Price Prediction Models
    2.3 Ensemble Learning
Chapter 3: Research Methods and Procedures
    3.1 Data Preprocessing (dataset partitioning; sample and reference time lengths; labels as model learning targets; Z-score normalization; missing-value handling)
    3.2 Technical Indicator Features
    3.3 Denoising Autoencoder
Chapter 4: Price Prediction Network Model
    4.1 Long Short-Term Memory Recurrent Neural Network
    4.2 Price Prediction Model Architecture
    4.3 Model Output and Prediction Results
    4.4 Price Prediction Model Flowchart
    4.5 Ensemble Learning (the Bagging algorithm; Bagging experimental design; bias and variance)
Chapter 5: Experimental Environment
    5.1 Experimental Dataset Description
    5.2 Evaluation Methods
Chapter 6: Results and Analysis
    6.1 Model Training
    6.2 Price Prediction Performance
Chapter 7: Conclusion
    7.1 Conclusions
    7.2 Future Work
References
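The ensemble step the abstract describes (models built over different time windows making decisions jointly, covered in Chapter 4) can be sketched as a majority vote over simple autoregressive predictors. Everything here is an assumption for illustration: the synthetic trending price series, the choice of linear least-squares models in place of the thesis's LSTM networks, and the lookback lengths.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic price path: weak upward trend + cycle + noise (illustrative only).
t = np.arange(400)
price = 0.05 * t + np.sin(t / 15.0) + rng.normal(0, 0.3, t.size)

def make_xy(series, lookback):
    """Autoregressive samples: predict the next value from `lookback` past values."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return X, series[lookback:]

def fit_linear(X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])    # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, window):
    return np.append(window, 1.0) @ w

# One model per reference-window length, trained on the in-sample segment.
lookbacks = [5, 10, 20]
split = 300
models = []
for lb in lookbacks:
    X, y = make_xy(price[:split], lb)
    models.append((lb, fit_linear(X, y)))

# Joint decision on the out-of-sample segment: majority vote on direction.
correct = total = 0
for i in range(split, len(price) - 1):
    votes = []
    for lb, w in models:
        pred = predict(w, price[i - lb + 1:i + 1])
        votes.append(1 if pred > price[i] else -1)
    decision = 1 if sum(votes) > 0 else -1       # odd pool size: no ties
    actual = 1 if price[i + 1] > price[i] else -1
    correct += (decision == actual)
    total += 1
directional_accuracy = correct / total
```

The odd pool size guarantees the vote never ties; `directional_accuracy` is the same directional-accuracy metric the abstract reports the ensemble improving.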

Electronic full text embargoed until 2024/12/31; download unavailable.