| Field | Value |
|---|---|
| Author | 張哲銘 Chang, Che-Ming |
| Title | 基於生成對抗網路的偽隨機數生成函式研究 (A Study on Generative Adversarial Network based Pseudorandom Number Generation Function) |
| Advisor | 紀博文 Chi, Po-Wen |
| Committee | 王銘宏 Wang, Ming-Hung; 曾一凡 Tseng, Yi-Fan; 官振傑 Guan, Albert; 紀博文 Chi, Po-Wen |
| Oral defense date | 2022/08/08 |
| Degree | Master |
| Department | 資訊工程學系 Department of Computer Science and Information Engineering |
| Year of publication | 2022 |
| Graduating academic year | 110 |
| Language | Chinese |
| Pages | 42 |
| Keywords (Chinese) | 人工智慧、深度學習、分類 |
| Keywords (English) | Artificial intelligence, Deep learning, Classification |
| Research method | Experimental design |
| DOI | http://doi.org/10.6345/NTNU202201398 |
| Document type | Academic thesis |
Generating secure and fast random sequences has long been a critical problem in cryptography. In this thesis, we show how to train a GAN (Generative Adversarial Network) on hardware noise and generate random sequences of similar quality. The hardware noise produced by /dev/random in the Linux operating system serves as the training set for our GAN. During training, we also apply techniques such as early stopping to prevent the model from overfitting. Finally, using 128,000,000 bits of random sequences, we compare our GAN with other PRNGs (pseudorandom number generators) under the NIST (National Institute of Standards and Technology) Special Publication 800-22 tests and the ENT tests. The results show that our GAN outperforms most PRNGs; its output closely resembles the /dev/random training set, and it generates random sequences at least 1,044 times faster than /dev/random. This demonstrates that a GAN, used as a neural-network PRNG, can imitate the hardware noise of a non-deterministic source, combining the high security of hardware noise with the speed advantage of a PRNG. Moreover, it shows that a secure but slow hardware device can be replaced while still producing random sequences of similar quality, offering a new approach in the field of cryptography.
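The early stopping mentioned above follows the usual patience-based pattern: halt training once the validation metric stops improving. The sketch below is a generic, framework-free illustration of that loop, not the thesis's actual training code; the function name and the stand-in loss values are ours.

```python
def train_with_early_stopping(losses, patience=3):
    """Generic patience-based early stopping.

    `losses` stands in for per-epoch validation losses (for a GAN,
    whatever validation metric is monitored). Training halts once the
    loss has not improved for `patience` consecutive epochs, and the
    epoch that achieved the best loss is returned.
    """
    best, best_epoch, waited = float("inf"), -1, 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0  # improvement: reset patience
        else:
            waited += 1
            if waited >= patience:
                break  # no improvement for `patience` epochs: stop
    return best_epoch

# Loss improves until epoch 3, then plateaus; training stops early.
print(train_with_early_stopping([0.9, 0.7, 0.5, 0.4, 0.41, 0.42, 0.45]))  # 3
```

In a real run, the model weights saved at `best_epoch` would be restored before evaluation.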
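As a concrete illustration of the NIST SP 800-22 suite used for evaluation, its first test, the frequency (monobit) test, checks whether ones and zeros are roughly balanced. This is a minimal generic sketch of that test, not the evaluation harness used in the thesis:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Maps 0/1 bits to -1/+1, sums them, normalizes by sqrt(n), and
    computes the p-value via the complementary error function.
    A p-value >= 0.01 means the sequence passes at the 1% level.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)  # partial sum of +/-1 values
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly balanced sequence yields the maximal p-value of 1.0.
balanced = [0, 1] * 50_000
print(monobit_p_value(balanced))  # 1.0
```

The full suite applies fifteen such tests; the monobit test is a prerequisite, since a sequence that fails it fails the others almost by definition.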
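The ENT suite's headline metric, bits of entropy per byte, is the Shannon entropy of the byte-value distribution. Again, a minimal sketch under the assumption of a plain byte stream, not the exact ENT tool invoked in the thesis:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of a byte stream, as reported by ENT.

    An ideal random source scores close to the maximum of 8.0
    bits per byte; structured data scores noticeably lower.
    """
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Every byte value appearing equally often gives exactly 8.0.
uniform = bytes(range(256)) * 16
print(entropy_bits_per_byte(uniform))  # 8.0
```

A constant stream scores 0.0, which is why ENT flags compressed or encrypted data as high-entropy and plain text as low-entropy.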