
Graduate Student: 曾厚強 (Tseng, Hou-Chiang)
Thesis Title: 表徵學習法之文本可讀性 (Representation Learning for Text Readability)
Advisors: 陳柏琳 (Chen, Berlin); 宋曜廷 (Sung, Yao-Ting)
Degree: Doctoral
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Year of Publication: 2020
Academic Year of Graduation: 108 (2019-2020)
Language: English
Number of Pages: 106
English Keywords: fastText, StarSpace, BERT
DOI URL: http://doi.org/10.6345/NTNU202000173
Thesis Type: Academic thesis
Abstract:
Text readability refers to the degree to which a text can be understood by its readers: the higher the readability of a text, the better the comprehension and learning retention its readers can achieve. To help readers digest and comprehend documents, researchers have long sought readability models that automatically and accurately estimate text readability. Conventional approaches to readability classification infer a model from a set of handcrafted features, defined a priori and computed from the training documents, together with the readability levels of those documents. Developing handcrafted features, however, is not only labor-intensive and time-consuming but also demands expertise. With recent advances in representation learning techniques, salient features can be extracted from documents efficiently and without recourse to specialized expertise, which opens a promising avenue of research on readability classification. In view of this, this study proposes several novel readability models based on representation learning techniques, capable of effectively analyzing documents that belong to different domains and cover a wide variety of topics. Compared with a baseline using a traditional model, the new model improves accuracy by 39.55%, reaching 78.45%. We then combine different kinds of representation learning algorithms with general linguistic features, and accuracy improves by an even larger margin of 40.95%, reaching 79.85%. Finally, this study also explores character-level representations to develop a novel readability model, which successfully assesses the readability of Chinese-language texts with 78.66% accuracy.

All the above results indicate that the readability features developed in this study can be used both to train a readability model for leveling domain-specific texts and in combination with the more common linguistic features to enhance the model's efficacy. As future work, we will explore further training methods for constructing the semantic space and combine text summarization techniques, in order to distill the salient aspects of text content and further enhance the effectiveness of a readability model.
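To make the representation-based approach in the abstract concrete, the sketch below levels a text by comparing its vector representation against per-grade centroid vectors. It is an illustrative toy only, not the dissertation's actual method or data: character-bigram count vectors stand in for the learned representations (fastText, StarSpace, BERT), a nearest-centroid cosine rule stands in for the trained classifier, and the tiny leveled corpus is hypothetical.

```python
# Toy readability leveler: character-bigram vectors + nearest-grade-centroid.
# All corpus texts and grade labels below are hypothetical examples.
from collections import Counter
import math

def char_bigram_vector(text):
    """Represent a text by its character-bigram counts
    (a crude stand-in for learned character-level embeddings)."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def grade_centroids(labeled_texts):
    """Sum the bigram vectors of all training texts at each grade level,
    yielding one 'grade-level vector' per grade."""
    sums = {}
    for grade, text in labeled_texts:
        sums.setdefault(grade, Counter()).update(char_bigram_vector(text))
    return sums

def predict_grade(text, centroids):
    """Assign the grade whose centroid is most similar to the text."""
    vec = char_bigram_vector(text)
    return max(centroids, key=lambda g: cosine(vec, centroids[g]))

# Hypothetical leveled training corpus.
train = [
    (1, "the cat sat on the mat"),
    (1, "a dog ran to the red ball"),
    (6, "photosynthesis converts light energy into chemical energy"),
    (6, "cellular respiration releases energy stored in glucose"),
]
centroids = grade_centroids(train)
```

A real pipeline would replace the bigram counts with embeddings trained on a large corpus and the centroid rule with a supervised classifier, but the overall shape (represent each document as a vector, then map vectors to readability levels) is the same.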

Table of Contents

Chapter 1. Introduction
    1.1. Motivation
    1.2. Objective and Outline of the Dissertation
Chapter 2. Literature Review
    2.1. Development of Readability Assessment
    2.2. Studies on Domain-Specific Text Readability
Chapter 3. Traditional Representation Learning: Integrating LSA-Based Hierarchical Conceptual Space and Machine Learning Methods for Leveling the Readability of Domain-Specific Texts
    3.1. Proposed Method: Using Hierarchical Conceptual Space to Calculate Grade-Level Vectors for the Readability Assessment of Chinese Domain-Specific Texts
    3.2. Study 1: Constructing a Readability Model for Domain-Specific Texts with General Linguistic Features as a Baseline Study
        3.2.1. Methods
        3.2.2. Results
    3.3. Study 2: Constructing a Readability Model for Domain-Specific Texts through the Hierarchical Conceptual Space as Another Baseline Study
        3.3.1. Methods
        3.3.2. Results
    3.4. Study 3: Constructing and Validating Readability Models Using either Hierarchical Conceptual Space or Bag-of-Words-Based Features for Domain-Specific Texts
        3.4.1. Methods
        3.4.2. Results
    3.5. Discussion
Chapter 4. Distributed Representation Learning: Leveraging Distributed Representations to Develop a Generalized Readability Model
    4.1. Distributed Representation Learning
        4.1.1. LSA
        4.1.2. Word2vec
        4.1.3. fastText
        4.1.4. StarSpace
        4.1.5. CNN
    4.2. Methods
        4.2.1. Materials
        4.2.2. Procedure
        4.2.3. Results and Discussions on Corpus B
        4.2.4. Results and Discussions on Corpus A
Chapter 5. Distributed Representation Learning: Generalized Two-Stage Language Modeling Based Readability Assessment
    5.1. Two-Stage Language Model
        5.1.1. BERT
    5.2. Methods
        5.2.1. Materials
        5.2.2. Procedure
        5.2.3. Results and Discussions on Corpus B
Chapter 6. Conclusion and Future Work
References
Appendix A
Appendix B
Publication List, Award and Patent


Electronic full text: embargoed; to be released on 2025/12/31.