
Graduate Student: 高漢棋 (Gao, Han-Chi)
Thesis Title: 蜂巢式網路用戶與V2X通訊共存異質性網路之功率控制與資源分配演算法 (Power Adjustment and Resource Allocation of Heterogeneous Networks composed of Cellular Network User and V2X Communication)
Advisor: 王嘉斌 (Wang, Chia-Pin)
Committee Members: 方士豪 (Fang, Shih-Hau); 郭文興 (Kuo, Wen-Hsing)
Oral Defense Date: 2021/07/26
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2021
Academic Year of Graduation: 109 (ROC calendar)
Language: Chinese
Number of Pages: 59
Chinese Keywords: 強化式學習、深度學習、深度強化式學習、系統容量、波束成形
English Keywords: Reinforcement Learning, Deep Learning, Deep Reinforcement Learning, System Capacity, Beamforming
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202101237
Thesis Type: Academic thesis
Usage Counts: Views: 108; Downloads: 7
    In today's era of rapidly growing information, wireless networks are made up of a large number of Internet of Things and communication devices. For the cellular network users that base stations originally serve, the ever-increasing number of users each base station must handle leads to serious mutual interference between base stations. We therefore propose a downlink interference mitigation scheme that protects cellular network users while also ensuring that the other secondary users in the system are not interfered with; in this thesis, V2X communication represents the secondary users. We build an environment consisting of multiple multiple-input single-output (MISO) cells and place several autonomous vehicles using C-V2X communication in it. Combining the Deep Q-learning reinforcement learning model with beamforming, we propose a power adjustment and beamforming algorithm in which each base station acts as an agent with its own independent neural network and makes appropriate decisions according to its current environment. Our results show that this algorithm effectively protects the utility of cellular network users and steers beams away from the autonomous vehicles, thereby reducing interference and improving system performance.
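
    The abstract describes a multi-agent setup in which every base station runs its own Deep Q-learning agent and jointly picks a transmit power and a beam. As a minimal sketch only, not the thesis implementation, the Python code below shows what one such per-base-station agent could look like; the state dimension, the numbers of power levels and beam directions, the network widths, and the epsilon-greedy policy are all assumptions made for illustration.

import random

import torch
import torch.nn as nn

# Assumed, illustrative sizes; they are not taken from the thesis.
N_POWER_LEVELS = 4       # discrete transmit-power levels the agent can pick
N_BEAM_DIRECTIONS = 8    # candidate beam directions from a predefined codebook
N_ACTIONS = N_POWER_LEVELS * N_BEAM_DIRECTIONS
STATE_DIM = 10           # size of the local observation (e.g., CSI, interference)


class QNetwork(nn.Module):
    """Independent Q-network owned by a single base-station agent."""

    def __init__(self, state_dim: int, n_actions: int) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float):
    """Epsilon-greedy choice of a (power level, beam direction) pair."""
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax().item())
    # Decode the flat action index into the two decision variables.
    return divmod(action, N_BEAM_DIRECTIONS)


if __name__ == "__main__":
    agent = QNetwork(STATE_DIM, N_ACTIONS)     # one such network per base station
    observation = torch.randn(STATE_DIM)       # placeholder local observation
    power_idx, beam_idx = select_action(agent, observation, epsilon=0.1)
    print(f"chosen power level {power_idx}, beam direction {beam_idx}")

    A full training loop would additionally keep a replay buffer and a target network per agent and update each Q-network from a reward built around the cellular users' utility.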

    In today's era of rapidly growing information, wireless networks are composed of many Internet of Things and communication devices. For the cellular network users originally served by base stations, the ever-increasing number of users each base station must serve leads to serious mutual interference between base stations. For this reason, we propose a downlink interference mitigation scheme that protects the rights of cellular network users while also ensuring that other secondary users in the system are not interfered with.
    An environment with several multiple-input single-output (MISO) cells is established, and a number of unmanned vehicles using C-V2X communication are placed in it. Combining a reinforcement learning model from artificial intelligence with beamforming technology, we propose a power adjustment and beamforming algorithm. Each base station represents an agent and has an independent neural network that can make appropriate decisions based on the base station's current environment. The research results show that this algorithm can effectively protect the rights of cellular network users and avoid the unmanned vehicles through beamforming, thereby reducing interference and improving system performance.
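
    To make the interference-mitigation objective concrete, the following numpy sketch, under assumed antenna counts, transmit powers, noise level, and random Rayleigh channels, computes a cellular user's SINR and achievable rate under simple maximum-ratio beamforming in a multi-cell MISO downlink, together with the interference power leaked toward a C-V2X receiver. It is illustrative only; the thesis's actual beamforming and power-control rules may differ.

import numpy as np

rng = np.random.default_rng(0)
N_TX = 4           # assumed transmit antennas per base station (MISO downlink)
N_BS = 3           # assumed number of base stations (one agent each)
NOISE_POWER = 1e-3


def rayleigh_channel() -> np.ndarray:
    """One random flat-fading channel vector from a base station to a receiver."""
    return (rng.standard_normal(N_TX) + 1j * rng.standard_normal(N_TX)) / np.sqrt(2)


# h_user0[k]: channel from BS k to the cellular user served by BS 0.
# h_own[k]:   channel from BS k to its own served user (used to steer its beam).
# g_v2x[k]:   channel from BS k to a nearby C-V2X receiver (secondary user).
h_user0 = [rayleigh_channel() for _ in range(N_BS)]
h_own = [h_user0[0]] + [rayleigh_channel() for _ in range(1, N_BS)]
g_v2x = [rayleigh_channel() for _ in range(N_BS)]

# Each base station applies maximum-ratio transmission toward its own user,
# scaled by the transmit power its agent would select (placeholder values here).
tx_power = np.array([1.0, 0.5, 0.5])
w = [np.sqrt(p) * h / np.linalg.norm(h) for p, h in zip(tx_power, h_own)]

signal = np.abs(np.vdot(h_user0[0], w[0])) ** 2
interference = sum(np.abs(np.vdot(h_user0[k], w[k])) ** 2 for k in range(1, N_BS))
sinr = signal / (interference + NOISE_POWER)
rate = np.log2(1.0 + sinr)   # a simple utility proxy for the cellular user

# Interference power leaked toward the V2X receiver by all base stations; the
# proposed algorithm would aim to keep this small while keeping the SINR high.
v2x_leakage = sum(np.abs(np.vdot(g_v2x[k], w[k])) ** 2 for k in range(N_BS))

print(f"cellular SINR = {sinr:.2f}, rate = {rate:.2f} bit/s/Hz, "
      f"interference at V2X receiver = {v2x_leakage:.3e}")

    Lowering the power leaked toward the V2X receiver while keeping the cellular user's SINR high is the trade-off that the proposed power adjustment and beamforming algorithm is meant to balance.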

    Table of Contents:
    Acknowledgments i
    Abstract (Chinese) ii
    Abstract (English) iii
    List of Figures vii
    List of Tables ix
    Chapter 1  Introduction 1
      1.1 Research Motivation and Background 1
      1.2 Research Objectives 4
      1.3 Other Related Work 5
      1.4 Thesis Organization 7
    Chapter 2  Background 8
      2.1 Future Trends in Wireless Networks 8
      2.2 Reinforcement Learning 11
      2.3 Q-Learning 13
      2.4 Deep Q-Learning (DQN) 15
      2.5 Beamforming 16
      2.6 Cellular Vehicle-to-Everything (C-V2X) 18
      2.7 User Utility 20
    Chapter 3  Proposed Algorithm 22
      3.1 Overview of the Approach 22
      3.2 System Model 23
      3.3 Downlink Data Transmission Framework 29
      3.4 Proposed Algorithm 31
    Chapter 4  Simulation Parameters and Result Analysis 39
      4.1 Simulation Environment and Parameter Settings 39
      4.2 Simulation Results and Discussion 41
    Chapter 5  Conclusion 53
    References 54
    Autobiography 58
    Academic Achievements 59
