| Field | Value |
|---|---|
| Graduate student | 鄭瑞芝 Jui-chih Cheng |
| Thesis title | 九十八年學測閱讀測驗考生作答策略之初探 (Strategies in Response to the Reading Comprehension Items on the 2009 General Scholastic Ability English Test) |
| Advisor | 張武昌 (Chang, Wu-Chang) |
| Degree | Doctoral |
| Department | 英語學系 (Department of English) |
| Year of publication | 2012 |
| Graduating academic year | 100 (2011-2012) |
| Language | English |
| Pages | 252 |
| Keywords (Chinese) | 閱讀測驗、作答策略、閱讀策略、考題難度、高中低成就考生、命題效度、外語評量、學科能力測驗、中成就考生、低成就考生 |
| Keywords (English) | reading comprehension, test-taking strategy, reading strategy, item difficulty, high achiever, validity, second language assessment, GSAET, average achiever, low achiever |
| Document type | Academic thesis |
This study investigates the response processes and strategy use of 12th-grade examinees on the reading comprehension section of the 2009 General Scholastic Ability English Test (GSAET). It examines the examinees' strategy use on the reading comprehension items in general, on different item types, and on items of differing difficulty. The reading comprehension section of the 2009 GSAET consisted of four passages of approximately 248 to 298 words each. Eighteen 12th graders (six high, six average, and six low achievers) took part in the study within two days after the 2009 GSAET was administered. During data collection, the eighteen participants answered the same reading comprehension section and verbalized in detail their reading process, their response process for each item, and the grounds for their option choices. The researcher adapted and modified Cohen and Upton's (2006) classification of reading and test-taking strategies, coded the participants' verbal reports accordingly, and analyzed the coded data further.
The eighteen participants used a total of 18 reading strategies and 36 test-taking strategies in answering the 16 reading comprehension items of the 2009 GSAET. The high achievers selected or discarded options on the basis of the passage content on more than 80% of the items, the average achievers on nearly 50% of the items, and the low achievers on only about 30% of the items.
In answering the 16 reading comprehension items, most high achievers used a fairly consistent set of reading and response strategies across the different item types, whereas the average and low achievers resorted to a variety of other test-taking strategies.
For items on the main idea or purpose of a passage, most high achievers selected or discarded options directly on the basis of the gist of the passage. For items on details, word meaning, or reference, most high achievers read a portion of the passage carefully to search for clues and then selected or discarded options according to the textual meaning; they rarely resorted to surface-level test-wiseness techniques.
The most difficult item on the 2009 GSAET reading comprehension section (Q44) required examinees to integrate information across paragraphs and then judge the truth of the four options. After reading the question, most of the eighteen participants went back and read a portion of the passage carefully to search for clues; however, only the high achievers actually selected or discarded options on the basis of the textual meaning, and none of the average or low achievers did so. Unable to understand the passage and the wording of the options, the average and low achievers used a larger number of test-taking strategies, most often drawing on background knowledge and on surface-level key-word techniques.
One of the least difficult items (Q48) required examinees, after reading the question, to read a portion of the passage carefully for clues and then to determine the referent of a noun on the basis of the textual meaning. The participants used relatively few response strategies on this item, and all eighteen answered it correctly. Among the average and low achievers, however, the strategy of determining the referent from the textual meaning was used far less often than surface-level key-word matching. Half of the average achievers and all of the low achievers did not determine the referent from the textual meaning at all; instead they directly applied key-word matching and chose the option containing a key word from the passage. Judging from the participants' response strategies, the design of this item does not appear to measure examinees' ability to identify a noun's referent effectively.
The study found that examinees of different proficiency levels read and responded to the 2009 GSAET reading passages quite differently. The high achievers already possessed the English proficiency the test was designed to measure and were therefore able to read the passages efficiently and answer most of the items successfully. The low achievers had not yet reached that level of proficiency and could not understand the passages or the meaning of most options, so they relied more heavily on surface-level test-wiseness techniques.
The findings can serve as a reference for English teachers in Taiwan and for the College Entrance Examination Center in future item writing and review. Items whose correct option contains a key word from the passage appear to be easier than items whose correct option does not, while items whose distractors contain key words from the passage yet state something untrue appear to be more challenging than items whose distractors do not. Item writers should avoid items whose correct options or distractors can be judged true or false from examinees' background knowledge alone, and should attend to the ordering of items by difficulty, placing easier items before more difficult ones as far as possible so as to give examinees a sense of accomplishment and strengthen their motivation to read.
The findings also offer implications for teaching. English teachers can help average and low achievers overcome reading difficulties by strengthening their basic language ability and their use of reading strategies, and should select a wide variety of texts suited to students' proficiency levels for extensive reading, so that students' reading ability improves through reading itself.
This study describes the response strategies that test takers used on the reading comprehension subtest of the 2009 General Scholastic Ability English Test (GSAET). The investigation focused on the examinees' response strategies for (1) the 16 reading comprehension items in general, (2) the 7 types of reading comprehension items, and (3) the most and the least challenging test items. Verbal report data were collected from 18 12th graders across proficiency levels: 6 high achievers, 6 average achievers, and 6 low achievers. The participants worked on the reading comprehension subtest of the 2009 GSAET, which contained four passages of 248-298 words, each followed by four items. The participants' verbal reports were coded for strategy use with a modified version of Cohen and Upton's (2006) reading and test-taking strategy coding rubrics.
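As a rough illustration of how coded verbal-report data of this kind can be tallied, the Python sketch below counts strategy use by proficiency group. It is a hypothetical example only: the segment data, strategy labels, and group names are invented for illustration and are not taken from the thesis or from Cohen and Upton's (2006) rubrics.

```python
from collections import Counter, defaultdict

# Hypothetical coded verbal-report segments: (proficiency group, strategy label).
# The labels only loosely resemble the kinds of categories used in strategy
# coding schemes; the data are invented for this sketch.
coded_segments = [
    ("high", "select option through textual meaning"),
    ("high", "discard option through textual meaning"),
    ("average", "select option through key-word matching"),
    ("low", "select option through background knowledge"),
    ("low", "select option through key-word matching"),
]

# Tally strategy use per proficiency group.
tallies = defaultdict(Counter)
for group, strategy in coded_segments:
    tallies[group][strategy] += 1

# Report each group's strategies as a share of that group's coded segments.
for group, counter in tallies.items():
    total = sum(counter.values())
    for strategy, count in counter.most_common():
        print(f"{group:7s} {strategy:45s} {count / total:.0%}")
```

A tally of this kind is one way to arrive at group-level comparisons such as the proportions reported in the next paragraph.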
The participants used a total of 18 reading strategies and 36 test-taking strategies in response to the 16 reading comprehension items. The high achievers selected or discarded options on the basis of textual meaning on more than 80% of the items; the average achievers did so on nearly 50% of the items; and the low achievers on only about 30% of the items.
In response to the reading comprehension items, the high achievers generally showed a consistent pattern in their use of the main response strategies across different question types, whereas the average and low achievers resorted to a wider variety of test-taking strategies.
In response to global questions, e.g., questions on the main idea or purpose of the passage, the high achievers generally selected options on the basis of the overall meaning of the passage. In response to local questions, e.g., questions asking for vocabulary meaning, a specific referent, an inference, specific information, a cause-effect relationship, or details of the passage, the high achievers generally went back to the passage, read carefully for clues, and selected options on the basis of vocabulary, sentence, or paragraph meaning. They generally selected or discarded options through textual meaning and rarely resorted to test-wiseness strategies.
In response to the most challenging question, Q44, which required respondents to integrate information conveyed across paragraphs and judge the correctness of the statements in the four options, most of the participants read the question and then went back to the passage, carefully reading a portion of it to look for clues. While the high achievers generally selected or discarded options through textual meaning, none of the average and low achievers selected the option through textual meaning. Having difficulty comprehending the passage and wrestling with the meaning of the options, the average and low achievers used more test-taking strategies in response to this question, relying chiefly on their background knowledge and on the key-word association strategy in their option selection.
In response to one of the least challenging items, Q48, all of the participants used relatively few response strategies and selected the correct option. Among the average and low achievers, however, the strategy of verifying the referent against the text was used at a much lower frequency than the test-wiseness strategy of key-word matching. Half of the average achievers and all of the low achievers selected the option purely through key-word matching. The response strategies thus did not seem appropriate for the purpose of the item and provided only weak evidence for its theory-based validity.
This study showed that examinees of different proficiency levels processed the passages and tasks differently. The high achievers, whose English proficiency had reached the level required by the 2009 GSAET, were able to read the passages efficiently and completed most of the test items successfully. The low achievers, whose English proficiency had not yet reached the level needed to cope with most of the test items, wrestled with the meanings of the words in the passages and tasks and failed to process them globally.
The findings provide insights into the construction of L2 reading tests. They suggest that questions whose correct option contains key words from the passage are likely to be easier than questions whose correct option does not; questions whose distracters contain words from the passage but state something untrue of it are likely to be more challenging than those whose distracters do not; and options involving statements that can be judged wrong from the examinees' background knowledge do not make attractive distracters. The findings also confirm the importance of sequencing test items by difficulty, with easy items preceding challenging ones.
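As a minimal sketch of the sequencing recommendation, the snippet below orders items by a classical facility index (the proportion of examinees answering an item correctly) so that easier items come first. The item numbers and response counts are hypothetical and are not the published 2009 GSAET statistics.

```python
# Hypothetical item statistics: item number -> (number correct, number of examinees).
item_stats = {
    41: (1200, 2000),
    42: (900, 2000),
    43: (1500, 2000),
    44: (400, 2000),   # the hardest item in this invented data set
}

# Classical facility index p = proportion answering correctly; higher p = easier item.
facility = {item: correct / total for item, (correct, total) in item_stats.items()}

# Present easier items before harder ones, as the findings recommend.
for item in sorted(facility, key=facility.get, reverse=True):
    print(f"Q{item}: p = {facility[item]:.2f}")
```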
The findings also carry pedagogical implications, suggesting that L2 teachers may help learners cope with difficulties in reading by improving word-level competence, promoting the use of comprehension strategies, and encouraging extensive reading of texts appropriate to learners' proficiency levels.
大學入學考試中心 (2007a):學科能力測驗英文考科考試說明(適用於95 課綱)。取自http://www.ceec.edu.tw/95課綱考試說明/02-95學測英文考試說明_定稿_.pdf [College Entrance Examination Center. (2007a). Test specifications of the General Scholastic Ability English Test based on the 2006 Senior High School Curriculum Guidelines for the English Subject. Retrieved from http://www.ceec.edu.tw/95課綱考試說明/02-95學測英文考試說明_定稿_.pdf]
大學入學考試中心 (2007b):指定科目考試英文考科考試說明(適用於95 課綱)。取自http://www.ceec.edu.tw/95課綱考試說明/02-95指考英文考試說明_定稿_.pdf [College Entrance Examination Center. (2007b). Test specifications of the Department Required English Test based on the 2006 Senior High School Curriculum Guidelines for the English Subject. Retrieved from http://www.ceec.edu.tw/95課綱考試說明/02-95指考英文考試說明_定稿_.pdf]
大學入學考試中心 (2009a):九十八學年度學科能力測驗試題英文考科。取自http://www.ceec.edu.tw/AbilityExam/AbilityExamPaper/98SAT_Paper/98SAT_PaperIndex.htm [College Entrance Examination Center. (2009a). The 2009 General Scholastic Ability English Test. Retrieved from http://www.ceec.edu.tw/AbilityExam/AbilityExamPaper/98SAT_Paper/98SAT_PaperIndex.htm]
大學入學考試中心 (2009b):九十八學年度學科能力測驗統計圖表。取自http://www.ceec.edu.tw/AbilityExam/SatStat/98SATStat/98SATStatIndex.htm [College Entrance Examination Center. (2009b). Statistical analysis of the 2009 General Scholastic Ability English Test. Retrieved from http://www.ceec.edu.tw/AbilityExam/SatStat/98SATStat/98SATStatIndex.htm]
大學入學考試中心 (2009c):九十八學年度指定科目考試統計圖表。取自http://www.ceec.edu.tw/AppointExam/DrseStat/98DrseStat/98DrseStatIndex.htm [College Entrance Examination Center. (2009c). Statistical analysis of the 2009 Department Required English Test. Retrieved from http://www.ceec.edu.tw/AppointExam/DrseStat/98DrseStat/98DrseStatIndex.htm]
大學入學考試中心 (2009d):九十八學年度指定科目考試試題英文考科。取自http://www.ceec.edu.tw/AppointExam/AppointExamPaper/98DRSE_Paper/98DRSE_PaperIndex.htm [College Entrance Examination Center. (2009d). The 2009 Department Required English Test. Retrieved from http://www.ceec.edu.tw/AppointExam/AppointExamPaper/98DRSE_Paper/98DRSE_PaperIndex.htm]
大學入學考試中心 (2009e) :九十八學年度學科能力測驗英文考科試題解析。台北: 大學入學考試中心 [College Entrance Examination Center. (2009e). The analysis of the test items of the 2009 General Scholastic Ability English Test. Taipei: College Entrance Examination Center.]
大學入學考試中心 (2011a) :學科能力測驗英文考科考試說明(適用於99 課綱) 。取自http://www.ceec.edu.tw/99課綱考試說明/1000930/02-102學測英文考試說明_定稿_.pdf [College Entrance Examination Center. (2011a). Test specifications of the General Scholastic Ability English Test based on the 2010 Senior High School Curriculum Guidelines for the English Subject. Retrieved from http://www.ceec.edu.tw/99課綱考試說明/1000930/02-102學測英文考試說明_定稿_.pdf]
大學入學考試中心 (2011b) :指定科目考試英文考科考試說明 (適用於99 課綱)。取自http://www.ceec.edu.tw/99課綱考試說明/1000930/02-102指考英文考試說明_定稿_.pdf [College Entrance Examination Center. (2011b). Test specifications of the Department Required English Test based on the 2010 Senior High School Curriculum Guidelines for the English Subject. Retrieved from http://www.ceec.edu.tw/99課綱考試說明/1000930/02-102指考英文考試說明_定稿_.pdf]
林秀慧(2009)。九十八學年度學科能力測驗試題分析—英文考科。取自http://www.ceec.edu.tw/Research2/doc_980828/C%E5%AD%B898-2.pdf [Lin, H. H. (2009). The analysis of the test items of the 2009 General Scholastic Ability English Test. Retrieved from http://www.ceec.edu.tw/Research2/doc_980828/C%E5%AD%B898-2.pdf]
殷允美、楊泰雄、葉錫南、林秀慧、游春琪(2008)。95課綱試題研究工作計畫--英文科研究報告。台北:大學入學考試中心。[Yin, Y. M., Yang, T. H., Yeh, H. N., Lin, H. H., & Yo, C. C. (2008). Project report on item development based on the 2006 Senior High School Curriculum Guidelines for the English Subject. Taipei: College Entrance Examination Center.]
游春琪(2009):九十八學年度指定科目考試試題分析—英文考科。取自http://www.ceec.edu.tw/Research2/doc_990319/C%E7%B5%B198-2.pdf [Yo, C. C. (2009). The analysis of the test items of the 2009 Department Required English Test. Retrieved from http://www.ceec.edu.tw/Research2/doc_990319/C%E7%B5%B198-2.pdf]
鄭恆雄、張郇慧、程玉秀、顧英秀(2002)。大學入學考試中心高中英文參考詞彙表。取自http://www.ceec.edu.tw/research/paper_doc/ce37/4.pdf [Jeng, H. S., Chang, H. H., Cheng, Y. S., & Gu, Y. S. (2002). The CEEC English reference word list for high school students. Retrieved from http://www.ceec.edu.tw/research/paper_doc/ce37/4.pdf]
Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners’ test performance. Educational Measurement: Issues and Practice, 19, 16-26.
Abedi, J., Lord, C., & Plummer, J. R. (1997). Final report of language background as a variable in NAEP mathematics performance (CSE Technical Report No. 429). Los Angeles: Center for the Study of Evaluation.
Afflerbach, P. P. (1986). The influence of prior knowledge on expert readers’ importance assignment processes. In J. A. Niles & R. V. Lalik (Eds.), Solving problems in literacy: Learners, teachers and researchers (pp. 30-40). Thirty-fifth Yearbook of the National Reading Conference. Rochester, NY: National Reading Conference.
Afflerbach, P. (2000). Verbal reports and protocol analysis. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 163-179). Mahwah, NJ: Lawrence Erlbaum Associates.
Afflerbach, P. (2007). Understanding and using reading assessment K-12. Newark, DE: International Reading Association.
Alderson, J. C. (1984). Reading in a foreign language: A reading problem or a language problem? In J. C. Alderson & A. H. Urquhart (Eds.), Reading in a foreign language (pp. 1-24). Harlow: Longman.
Alderson, J. C. (2000). Assessing reading. Cambridge: Cambridge University Press.
Alderson, J. C. (2005). Diagnosing foreign language proficiency: The interface between learning and assessment. New York: Biddles Ltd.
Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge: Cambridge University Press.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Anderson, N. J. (1991). Individual differences in strategy use in second language reading and testing. Modern Language Journal, 75(4), 460-72.
Anderson, N. J., Bachman, L., Perkins, K. & Cohen, A. (1991). An exploratory study into the construct validity of a reading comprehension test: Triangulation of data sources. Language Testing, 8(1), 41-66.
Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading. In P. D. Pearson, R. Barr, M. L. Kamil, & P. Mosenthal (Eds.), Handbook of reading research (pp. 255-291). New York: Longman.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
Bernhardt, E. B. (1991). Reading development in a second language: Theoretical, empirical, and classroom perspectives. Norwood, NJ: Ablex.
Bernhardt, E. B. (2000). Second-language reading as a case study of reading scholarship in the 20th century. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 791-812). Mahwah, NJ: Lawrence Erlbaum Associates.
Bernhardt, E. B., & Kamil, M. L. (1995). Interpreting relationships between L1 and L2 reading: Consolidating the linguistic threshold and the linguistic interdependence hypotheses. Applied Linguistics, 16, 15-34.
Brisbois, J. E. (1995). Connections between first- and second-language reading. Journal of Reading Behavior, 27, 565-584.
Brown, J. D. (2005). Testing in language programs: A comprehensive guide to English language assessment. New York: McGraw-Hill College.
Carpenter, P., Miyake, A., & Just, M. (1994). Working memory constraints in comprehension: Evidence from individual differences, aphasia and aging. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 1075-1122). San Diego: Academic Press.
Carrell, P. L., Devine, J., & Eskey, D. (Eds.). (1988). Interactive approaches to
second language reading. New York: Cambridge University Press.
Carrell, P. L., & Grabe, W. (2002). Reading. In Schmitt, N., (Ed.), An Introduction to applied linguistics (pp. 233-250). London: Arnold.
Carver, R. (1994). Percentage of unknown vocabulary words in text as a function of the relative difficulty of the text: Implications for instruction. Journal of Reading Behavior, 26, 413-37.
Carver, R. (1997). Reading for one second, one minute, or one year from the perspective of reading theory. Scientific Studies of Reading, 1, 3-43.
Cheng, L., Fox, J., & Zheng, Y. (2007). Student accounts of the Ontario Secondary School Literacy Test: A case for validation. The Canadian Modern Language Review, 64(1), 69-98.
Clapham, C. (1996a). The development of IELTS: A study of the effect of background knowledge on reading comprehension. Cambridge: Cambridge University Press.
Clapham, C. (1996b). What makes an ESP reading test appropriate for its candidates? In A. Cumming & R. Berwich (Eds.), Validation in language testing (pp. 171-194). Clevedon: Multilingual Matters.
Clarke, M. A. (1979). Reading in Spanish and English: Evidence from adult ESL students. Language Learning, 29, 121-150.
Cohen, A. D. (1984). On taking language tests: What the students report. Language Testing, 1(1), 70-81.
Cohen, A. D. (1988). The use of verbal report data for a better understanding of test-taking processes. Australian Review of Applied Linguistics, 11, 30-42.
Cohen, A. D. (2000). Exploring strategies in test taking: Fine-tuning verbal reports from respondents. In G. Ekbatani & H. Pierson (Eds.), Learner-directed assessment in ESL (pp. 131-145). Mahwah, NJ: Erlbaum.
Cohen, A. D. & Upton, T. A. (2006). Strategies in responding to the new TOEFL reading tasks (TOEFL monograph series report No. 33). Princeton, NJ: Educational Testing Service.
Cohen, A. D. & Upton, T. A. (2007). ‘I want to go back to the text’: Response strategies on the reading subtest of the new TOEFL. Language Testing, 24(2), 209-250.
Cronbach, L. J. (1988). Five perspectives on validity argument. In H. Wainer & H. Braun (Eds.), Test validity (pp. 3-17). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cronbach, L. J. (1989). Construct validation after thirty years. In R. E. Linn (Ed.), Intelligence: Measurement, theory, and public policy (pp. 147-171). Urbana: University of Illinois Press.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.
Downing, S. M. (2006). Twelve steps for effective test development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 3-25). Mahwah, NJ: Lawrence Erlbaum Associates.
Ebel, R. L. (1979). Essentials of educational measurement (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Educational Testing Service. (2002). LanguEdge Courseware: Handbook for scoring speaking and writing. Princeton, NJ: Author.
Ellis, R. (2004). The definition and measurement of L2 explicit knowledge. Language Learning, 54, 227–275.
Enright, M. K., Grabe, W., Koda, K., Mosenthal, P., Mulcahy-Ernt, P., & Schedl, M. (2000). TOEFL 2000 reading framework: A working paper (TOEFL monograph series report No. 17). Princeton, NJ: Educational Testing Service.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Favreau, M., & Segalowitz, N. S. (1982). Second language reading in fluent bilinguals. Applied Psycholinguistics, 3, 329-341.
Freebody, P., & Anderson, R. C. (1983). Effects of vocabulary difficulty, text comprehension, and schema availability on reading comprehension. Reading Research Quarterly, 18(3), 277-294.
Freedle, R., & Kostin, I. (1993). The prediction of TOEFL reading item difficulty: Implications for construct validity. Language Testing, 10, 133-70.
Fulcher, G., & Davidson, F. (2007). Language testing and assessment: An advanced resource book. New York: Routledge.
Gass, S. M., & Mackey, A. (2000). Stimulated recall methodology in second language research. Mahwah, NJ: Lawrence Erlbaum.
Grabe, W. (1991). Current developments in second language reading research. TESOL Quarterly, 25, 375-406.
Grabe, W. (1999). Developments in reading research and their implications for computer-adaptive reading assessment. In M. Chalhoub-deVille (Ed.), Issues in computer-adaptive testing of reading proficiency (pp. 111-147). Cambridge: Cambridge University Press.
Grabe, W. (2000). Reading research and its implications for reading assessment. In A. Kunnan (Ed.), Fairness and validation in language assessment (pp. 226-260). Cambridge: Cambridge University Press.
Grabe, W. (2009). Reading in a second language: Moving from theory to practice. Cambridge: Cambridge University Press.
Grabe, W., & Stoller, F. L. (2002). Teaching and researching reading. Harlow: Pearson.
Green, A. (1998). Verbal protocol analysis in language testing research: A handbook. Cambridge, UK: Cambridge University Press.
Guthrie, J. T. (1988). Locating information in documents: Examination of a cognitive model. Reading Research Quarterly, 23, 178-199.
Haladyna, T. M. (2004). Developing and validating multiple-choice test items (3rd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Haladyna, T. M. (2006). Roles and importance of validity studies in test development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of Test Development (pp. 739-755). Mahwah, NJ: Lawrence Erlbaum Associates.
Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.
Haladyna, T. M., & Downing, S. M. (2004). Construct-irrelevant variance in high-stakes testing. Educational Measurement: Issues and Practice, 23(1), 17-27.
Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.
Haynes, M., & Carr, T. H. (1990). Writing system background and second language reading: A component skills analysis of English reading by native-speaking readers of Chinese. In T. H. Carr & B. A. Levy (Eds.), Reading and its development: Component skills approaches (pp. 375-421). San Diego, CA: Academic Press.
Horiba, Y. (1993). Second language readers’ memory for narrative texts: Evidence for structure-preserving top-down processing. Language Learning, 43, 345-72.
Hu, M., & Nation, P. (2000). Unknown vocabulary density and reading comprehension. Reading in a Foreign Language, 13(1), 403-430.
Hudson, J. A. (1990). Constructive processing in children’s event memory. Developmental Psychology, 26, 180-187.
Hudson, J., & Nelson, K. (1983). Effects of script structure on children’s story recall. Developmental Psychology, 19, 625-635.
Hughes, A. (2003). Testing for language teachers. Cambridge: Cambridge University Press.
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, 122-49.
Khalifa, H. (1997). A study in the construct validation of the reading module of an EAP proficiency test battery: Validation from a variety of perspectives. PhD Thesis. University of Reading.
Kintsch, W., & Yarborough, J. C. (1982). Role of rhetorical structure in text comprehension. Journal of Educational Psychology, 74, 828-34.
Koda, K. (1996). L2 word recognition research: A critical review. The Modern Language Journal, 80, 450-460.
Koda, K. (1997). Orthographic knowledge in L2 lexical processing: A cross-linguistic perspective. In J. Coady & T. Huckin (Eds.), Second language vocabulary acquisition: Rationale for pedagogy (pp. 35-52). New York: Cambridge University Press.
Koda, K. (2005). Insights into second language reading: A cross-linguistic approach. Cambridge: Cambridge University Press.
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.
Laufer, B. (1992). How much lexis is necessary for reading comprehension? In P. J. L. Arnaud & H. Bejoint, (Eds.), Vocabulary and applied linguistics (pp. 126-132). London: Macmillan.
Linn, R. L. (2006). The standards for educational and psychological testing: guidance in test development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 27-38). Mahwah, NJ: Lawrence Erlbaum Associates.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: American Council on Education and Macmillan.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
McAlpine, M. (2002). A summary of methods of item analysis. Retrieved from http://www.caacentre.ac.uk/dldocs/BP2final.pdf
Nation, I. S. P. (1990). Teaching and learning vocabulary. New York: Newbury House.
Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.
Perfetti, C. A. (1997). Sentences, individual differences, and multiple texts: Three issues in text comprehension. Discourse Processes, 23, 337-355.
Perkins, K. (1992). The effect of passage and topical structure types on ESL reading comprehension difficulty. Language Testing, 9(2), 163-72.
Phakiti, A. (2003). A closer look at the relationship of cognitive and metacognitive strategy use to EFL reading achievement test performance. Language Testing, 20(1), 26-56.
Pressley, M. (1998). Elementary reading instruction that works: Why balanced literacy instruction makes more sense than whole language or phonics and skills. New York: Guilford Press.
Pressley, M. (2000). What should comprehension instruction be the instruction of? In M. L. Kamil, P. B. Mosenthal, P. D. Pearson & R. Barr (Eds.), Handbook of Reading Research (Vol. 3, pp. 545-562). Mahwah, NJ: Lawrence Erlbaum Associates.
Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Lawrence Erlbaum Associates.
RAND Reading Study Group. (2002). Reading for understanding: Toward a R&D program in reading comprehension. Santa Monica, CA: Science and Technology Policy Institute, RAND Education.
Read, J. (2000). Assessing vocabulary. Cambridge: Cambridge University Press.
Roller, C. M. (1990). The interaction between knowledge and structure variables in the processing of expository prose. Reading Research Quarterly, 25, 79-89.
Rosenblatt, L. M. (1978). The reader, the text, the poem: The transactional theory of the literary work. Carbondale, IL: Southern Illinois University Press.
Rutherford, W. E. (1983). Language typology and language transfer. In S. Gass & L. Selinker (Eds.), Language transfer in language learning (pp. 358-370). Rowley, MA: Newbury House.
Schmitt, N., Schmitt, D., & Clapham, C. (2001). Developing and exploring the behavior of two new versions of the Vocabulary Levels Test. Language Testing, 18(1), 55-88.
Segalowitz, N. S. (1986). Skilled reading in the second language. In J. Vaid (Ed.), Language processing in bilinguals: Psycholinguistic and neurological perspectives (pp. 3-19). Hillsdale, NJ: Erlbaum.
Segalowitz, N. S., Poulsen, C., & Komoda, M. (1991). Lower level components of reading skill in higher level bilinguals: Implications for reading instruction. AILA Review, 8, 15-30.
Upton, T. A., & Lee-Thompson, L.-C. (2001). The role of the first language in second language reading. Studies in Second Language Acquisition, 23(4), 469-95.
Urquhart, A. H., & Weir, C. J. (1998). Reading in a second language: Process, product and practice. Harlow: Longman.
Weir, C. J. (1990). Communicative language testing. London: Prentice Hall.
Weir, C. J. (1993). Understanding and developing language tests. London: Prentice Hall.
Weir, C. J. (2005). Language testing and validation: An evidence-based approach. New York: Palgrave Macmillan.
Weir, C. J., Yang, H., & Jin, Y. (2000). An empirical investigation of the componentiality of L2 reading in English for academic purposes. Studies in Language Testing 12. Cambridge: Cambridge University Press.
Wendler, C. L. W., & Walker, M. E. (2006). Practical issues in designing and maintaining multiple test forms for large-scale programs. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 445-467). Mahwah, NJ: Lawrence Erlbaum Associates.
Williams, E., & Moran, C. (1989). Reading in a foreign language at intermediate and advanced levels with particular reference to English. Language Teaching, 22, 217-228.
Williams, J. P. (1993). Comprehension of students with and without learning disabilities: Identification of narrative themes and idiosyncratic text representations. Journal of Educational Psychology, 85, 631-641.
Williams, R., & Dallas, D. (1984). Aspects of vocabulary in the readability of content area L2 educational textbooks: A case study. In J. C. Alderson & A. H. Urquhart (Eds.), Reading in a foreign language (pp. 199-210). London: Longman.