
Author: 陳宜威 (Chen, I-Wei)
Title: 以深度學習對包含長文之資料集進行情感分析 (Sentiment Analysis for Datasets Containing Long Texts Using Deep Learning)
Advisor: 侯文娟 (Hou, Wen-Juan)
Committee members: 侯文娟 (Hou, Wen-Juan); 方瓊瑤 (Fang, Chiung-Yao); 郭俊桔 (Kuo, June-Jei)
Oral defense date: 2022/06/20
Degree: Master
Department: 資訊工程學系 (Department of Computer Science and Information Engineering)
Year of publication: 2022
Academic year of graduation: 110
Language: Chinese
Pages: 62
Chinese keywords: 自然語言處理、情感分析、深度學習、BERT
English keywords: Natural Language Processing, Sentiment Analysis, Deep Learning, BERT
Research method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202200688
Document type: Academic thesis

Abstract:
    With the vigorous development of the Internet, more and more information is transmitted across network platforms such as social networking sites, online shopping sites, and forums, and these messages often express people's opinions or evaluations. Monitoring such a huge volume of information manually is highly inefficient, so it is necessary for computers to take over this labor-intensive task.
    Natural Language Processing (NLP) is a technology that enables computers to understand human language, and sentiment analysis is a common application in the NLP domain. Sentiment analysis aims to identify the emotions expressed in text, for example analyzing the stance of online comments about products, celebrities, or events and classifying it as positive or negative.
    The experiments in this thesis perform sentiment analysis on the IMDB dataset, which contains long movie reviews labeled as positive or negative, and build deep learning models that judge whether the sentiment expressed by a review is positive or negative based on its content. In addition to the baseline LSTM and BERT models, the experiments also combine BERT with a second BERT or with an LSTM, aiming to improve accuracy by enriching the features the model obtains, and the results of the different models are compared.
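
    This record does not include the thesis's implementation, so the following is only a minimal sketch, assuming PyTorch and the Hugging Face transformers library, of the general "combine two BERT encoders through their pooled outputs" idea named in the outline below (Model 4). The class name, the choice of bert-base-uncased, and the input-splitting scheme are illustrative assumptions, not the thesis's exact architecture.

    # Sketch: concatenate the pooled outputs of two BERT encoders for binary
    # sentiment classification. Illustrative only; not the thesis's exact model.
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class DualBertClassifier(nn.Module):
        def __init__(self, num_labels=2):
            super().__init__()
            # Two separate encoders; each can read a different segment of a long review.
            self.bert_a = BertModel.from_pretrained("bert-base-uncased")
            self.bert_b = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert_a.config.hidden_size
            self.classifier = nn.Linear(hidden * 2, num_labels)

        def forward(self, ids_a, mask_a, ids_b, mask_b):
            # pooler_output is the transformed [CLS] vector of each encoder.
            pooled_a = self.bert_a(input_ids=ids_a, attention_mask=mask_a).pooler_output
            pooled_b = self.bert_b(input_ids=ids_b, attention_mask=mask_b).pooler_output
            # Concatenation corresponds to the "concatenate pooled outputs" variant;
            # element-wise addition would give the "add pooled outputs" variant.
            combined = torch.cat([pooled_a, pooled_b], dim=-1)
            return self.classifier(combined)

    Because BERT accepts at most 512 tokens per input, one plausible motivation for such a combination on long reviews is to let each encoder see a different segment of the text; whether the thesis splits inputs this way is not stated in this record.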

Table of Contents:
    Chapter 1: Introduction
        1.1 Research Background
        1.2 Research Objectives
        1.3 Thesis Organization
    Chapter 2: Literature Review
        2.1 Sentiment Analysis
        2.2 Deep Learning
        2.3 Word Embedding
            (1) Word2Vec
            (2) GloVe
        2.4 LSTM
        2.5 Self-attention
        2.6 Multi-head Self-attention
        2.7 BERT
            (1) Pre-training Tasks
            (2) BERT Embedding
    Chapter 3: Research Procedure and Methods
        3.1 Research Architecture
        3.2 Data Preprocessing (see the sketch after this outline)
            (1) Removing HTML tags
            (2) Removing URLs
            (3) Removing digits and punctuation
        3.3 LSTM
        3.4 BERT
        3.5 Combined BERT + (LSTM/BERT) Models
    Chapter 4: Experimental Results and Discussion
        4.1 Experimental Data
        4.2 Experimental Environment
        4.3 LSTM (Model 1)
        4.4 BERT (Model 2)
        4.5 BERT+LSTM (Model 3)
        4.6 BERT+BERT
            (1) Concatenating the Pooled Outputs (Model 4)
            (2) Adding the Pooled Outputs (Model 5)
            (3) Concatenating the Logits (Model 6)
            (4) Adding the Logits (Model 7)
            (5) Feeding the Sequence Outputs into Multi-head Self-attention (Model 8)
            (6) Feeding the Sequence Outputs into Multi-head Attention, then Adding the Vectors (Model 9)
            (7) Feeding the Sequence Outputs into a BERT Layer and Taking the Pooled Output (Model 10)
        4.7 Applying Data Preprocessing to BERT+BERT
    Chapter 5: Conclusions and Future Research Directions
    References
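
    Section 3.2 of the outline lists three preprocessing steps. Below is a minimal sketch of how they might be implemented in Python; the regular expressions and the example input are illustrative assumptions, not the thesis's exact patterns.

    # Sketch of the three preprocessing steps: remove HTML tags, URLs, and
    # digits/punctuation, then normalize whitespace.
    import re
    import string

    def preprocess(review: str) -> str:
        text = re.sub(r"<[^>]+>", " ", review)              # remove HTML tags such as <br />
        text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
        text = re.sub(r"\d+", " ", text)                    # remove digits
        text = text.translate(str.maketrans("", "", string.punctuation))  # remove punctuation
        return re.sub(r"\s+", " ", text).strip()            # collapse whitespace

    print(preprocess("Great movie!<br />Details at https://example.com - 10/10"))
    # -> "Great movie Details at"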


    Full text: Not authorized for public access (no download available).