
Graduate Student: LEE, Kuan-Chun (李冠俊)
Thesis Title: CAPTCHAs Solver Buster: An Efficient Adversarial Attack-based Approach Without the Knowledge of Models
Advisor: Chi, Po-Wen (紀博文)
Committee Members: Wang, Ming-Hung (王銘宏); Guan, Albert (官振傑); Chi, Po-Wen (紀博文)
Oral Defense Date: 2024/01/22
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2024
Academic Year: 112
Language: English
Number of Pages: 52
Keywords: Adversarial attack, CAPTCHA protection, Multi-model ensemble attack
Research Method: Experimental design
DOI: http://doi.org/10.6345/NTNU202400333
Thesis Type: Academic thesis

    In today's increasingly interconnected online landscape, many websites present CAPTCHAs during login to verify that a user is human rather than an automated bot. Some users nevertheless rely on automated programs to log in, a practice that can be exploited for unauthorized access and lead to severe consequences such as data breaches. Safeguarding CAPTCHAs is therefore a significant challenge. To strengthen CAPTCHA security, this thesis combines CAPTCHAs with adversarial attacks: models are grouped in advance, adversarial examples are generated for each group, and the resulting perturbations are applied to the CAPTCHAs. Two grouping algorithms are presented. The first assesses the similarity of prediction results on adversarial examples: examples are generated for each model and evaluated against all models; after tabulating the probabilities of successful deception, models with similar mutual-influence probabilities are grouped together, and the grouping is used for an ensemble adversarial attack. The second method calculates the floating-point operations (FLOPs) of each model and groups models by that value; in our observations, models with similar FLOPs tend to produce similar predictions on image classification tasks. FLOPs quantify the computational load of a model's forward pass and can serve as a measure of model complexity. The first method divided the models into 7 groups, all achieving deception rates above 93%; the second method also produced 7 groups and achieved a 90% deception rate. Finally, the adversarial examples generated from the grouping results are applied to CAPTCHAs to enhance their security.
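    The first grouping method described above can be sketched as follows. This is a minimal illustration, not the thesis's exact procedure: it assumes a precomputed matrix `P` where `P[i][j]` is the fraction of adversarial examples crafted on model i that also fool model j, and it groups models greedily whenever their rows of mutual-influence probabilities are close. The threshold value and the L-infinity row distance are illustrative assumptions.

```python
import numpy as np

def group_by_mutual_influence(P, threshold=0.15):
    """Greedily group models whose rows of mutual-deception
    probabilities are close (L-infinity distance below threshold).

    P[i][j]: probability that adversarial examples generated
    on model i also fool model j.
    """
    n = len(P)
    groups = []
    assigned = [False] * n
    for i in range(n):
        if assigned[i]:
            continue
        group = [i]
        assigned[i] = True
        for j in range(i + 1, n):
            if not assigned[j] and np.max(np.abs(P[i] - P[j])) < threshold:
                group.append(j)
                assigned[j] = True
        groups.append(group)
    return groups

# Toy example: models 0 and 1 influence each other similarly,
# while model 2 behaves differently.
P = np.array([[1.0, 0.9, 0.2],
              [0.95, 1.0, 0.25],
              [0.3, 0.3, 1.0]])
print(group_by_mutual_influence(P, threshold=0.2))  # [[0, 1], [2]]
```

    Models placed in the same group would then be attacked jointly, e.g. by averaging their losses when generating an ensemble adversarial example.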

    Chapter 1 Introduction 1
      1.1 Contributions 6
      1.2 Organization 7
    Chapter 2 Related Works 8
      2.1 CAPTCHA 8
      2.2 Deep Learning Model 9
        2.2.1 Convolutional Neural Networks 10
        2.2.2 Recurrent Neural Networks 12
        2.2.3 Networks based on Transformers 13
        2.2.4 Impact on CAPTCHA 14
      2.3 Adversarial Attacks 14
        2.3.1 White-box Adversarial Attacks 15
        2.3.2 Black-box Adversarial Attacks 16
        2.3.3 Ensemble Adversarial Attack 19
    Chapter 3 CAPTCHAs Solver Buster 21
      3.1 Preliminary 21
        3.1.1 Projected Gradient Descent 21
        3.1.2 Floating Point Operations 22
      3.2 Overview of the Proposed Scheme 24
      3.3 Grouping Algorithms 24
        3.3.1 Group by Probabilities of Mutual Influence 24
        3.3.2 Group by FLOPs 26
      3.4 Threshold Evaluation 27
      3.5 Ensemble 28
      3.6 Applications on CAPTCHA 28
    Chapter 4 Evaluation 30
      4.1 Experimental Setup 30
        4.1.1 Dataset 30
        4.1.2 Models to Generate Adversarial Examples 31
      4.2 Evaluation of Grouping Algorithm 32
        4.2.1 Group by Probabilities of Mutual Influence 32
        4.2.2 Group by FLOPs 34
      4.3 Threshold Evaluation 35
      4.4 Applications on CAPTCHA 36
      4.5 Performance Evaluation 37
    Chapter 5 Conclusions and Future Works 44
    References 46
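    The abstract's second grouping method, by FLOPs, can be sketched as below. This is an illustration under stated assumptions, not the thesis's actual procedure or measurements: the FLOP counts in the example dictionary are rough public figures, and the rule of starting a new group once a model needs more than a fixed ratio of the FLOPs of the current group's smallest member is an assumed clustering criterion.

```python
def group_by_flops(flops, ratio=2.0):
    """Sort models by forward-pass FLOPs and start a new group
    whenever a model needs more than `ratio` times the FLOPs of
    the smallest model in the current group.

    flops: dict mapping model name -> forward-pass FLOPs.
    """
    ordered = sorted(flops.items(), key=lambda kv: kv[1])
    groups, current = [], []
    for name, f in ordered:
        if current and f > ratio * flops[current[0]]:
            groups.append(current)
            current = []
        current.append(name)
    if current:
        groups.append(current)
    return groups

# Illustrative FLOP counts (assumed values, not the thesis's data).
flops = {
    "squeezenet": 0.35e9,
    "shufflenet_v2": 0.6e9,
    "resnet18": 1.8e9,
    "resnet50": 4.1e9,
    "vgg16": 15.5e9,
}
print(group_by_flops(flops))
```

    Because the list is sorted first, each group contains models of comparable forward-pass cost, matching the observation that models with similar FLOPs tend to produce similar predictions.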

