
Graduate Student: Chen, Xin-Rui (陳信睿)
Thesis Title: Detecting Rumours on Social Media based on a Robust Language Model with External Features (使用加上額外特徵的語言模型進行謠言偵測)
Advisor: Hou, Wen-Juan (侯文娟)
Committee Members: 侯文娟 (Hou, Wen-Juan), 郭俊桔, 方瓊瑤
Oral Defense Date: 2021/08/23
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2021
Graduating Academic Year: 109 (2020–2021)
Language: Chinese
Number of Pages: 46
Keywords: Language Model, Deep Learning, Fake News, Rule-based System
Research Method: Experimental Design
DOI URL: http://doi.org/10.6345/NTNU202101203
Thesis Type: Academic Thesis
Usage Statistics: Views: 158, Downloads: 0
  • This thesis proposes a robust language model augmented with external features to address SemEval-2019 Task 7 (RumourEval: Determining Rumour Veracity and Support for Rumours), which comprises two subtasks: Task A, detecting users' stances toward a rumour, and Task B, determining whether a rumour is true, false, or unverified. The system exploits the tracking information of conversation branches. For Task A, a robustly pre-trained language model with term-frequency features is concatenated with a deep-learning pre-trained model that contributes additional features, and the predictions of the two models are combined for stance classification, reaching a Macro F1 of 62%. The veracity verification of Task B is then handled with a rule-based system, reaching a Macro F1 of 68% with the RMSE reduced to 0.5983.
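    To make the Task A recipe in the abstract concrete, here is a minimal sketch (not the thesis's actual code) of combining count-based TF-IDF features with sentence embeddings from the robustly pre-trained RoBERTa model and feeding the concatenation to a small MLP stance classifier. The toy tweets, SDQC labels, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: concatenate count-based (TF-IDF) features with RoBERTa
# sentence embeddings and classify the SDQC stance with an MLP.
# Toy data and hyperparameters are illustrative only.
import numpy as np
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from transformers import AutoModel, AutoTokenizer

tweets = [
    "Breaking: a bridge has collapsed in the city centre",
    "Is this confirmed? Any official source?",
    "This is false, the authorities already denied it",
    "So sad to hear about this",
]
stances = ["support", "query", "deny", "comment"]  # Task A SDQC labels

# Count-based view: TF-IDF vectors over the tweet texts.
tfidf = TfidfVectorizer(max_features=500)
tfidf_feats = tfidf.fit_transform(tweets).toarray()

# Contextual view: the first-token (<s>, CLS-like) embedding from RoBERTa.
tok = AutoTokenizer.from_pretrained("roberta-base")
lm = AutoModel.from_pretrained("roberta-base")
lm.eval()
with torch.no_grad():
    enc = tok(tweets, padding=True, truncation=True, return_tensors="pt")
    cls_feats = lm(**enc).last_hidden_state[:, 0, :].numpy()

# Concatenate the two feature views and train a small MLP classifier.
features = np.hstack([tfidf_feats, cls_feats])
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(features, stances)
print(clf.predict(features))  # predicted stance per reply tweet
```

    In the thesis itself, the predictions of two such models are additionally combined and the class imbalance of the stance labels is handled explicitly; Task B veracity is then derived through hand-written rules, so no sketch is given for that step.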

    Chapter 1: Introduction
      Section 1: Research Motivation
      Section 2: Task Description
      Section 3: Dataset
      Section 4: Research Questions
      Section 5: Thesis Organization
    Chapter 2: Background
      Section 1: Deep Learning
        1.1 Recurrent Neural Networks
        1.2 Long Short-Term Memory
      Section 2: Transformer
        2.1 Multi-Head Attention
        2.2 Positional Encoding
      Section 3: Contextualized Word Embeddings
    Chapter 3: Literature Review
      Section 1: Previous SemEval Shared Tasks
        1.1 SemEval-2017 Task 8
        1.2 SemEval-2019 Task 7
      Section 2: Combining Count-Based Features with Pre-trained Models for Stance Detection
        2.1 Term Frequency–Inverse Document Frequency
        2.2 RoBERTa
        2.3 Improved Model
    Chapter 4: Model Architecture
      Section 1: Preprocessing
        1.1 Sentence Similarity Computation
        1.2 Stance Feature Extraction
      Section 2: Stance Analysis Model Architecture
        2.1 Data Imbalance
        2.2 MLP Model
        2.3 Prediction
      Section 3: Rumour Detection Model Architecture
        3.1 Fine-tuning the Language Model
        3.2 Post-processing
    Chapter 5: Experimental Results and Discussion
      Section 1: Evaluation Metrics
      Section 2: Experimental Results
        2.1 Task A Results
        2.2 Task B Results
      Section 3: Discussion and Analysis
    Chapter 6: Conclusion and Future Work
      Section 1: Conclusion
      Section 2: Future Work
    References


    Full Text Availability: Not authorized for public access (download unavailable)