
Author: 高欣
Kao, Hsin
Title: 視覺類比量尺的診斷分類模型
A Diagnostic Classification Model for Visual Analogue Scale
Advisor: 劉振維
Liu, Chen-Wei
Committee members: 劉振維
Liu, Chen-Wei
陳柏熹
Chen, Po-Hsi
陳俊宏
Chen, Jyun-Hong
Defense date: 2023/12/29
Degree: 碩士
Master
Department: 教育心理與輔導學系
Department of Educational Psychology and Counseling
Publication year: 2024
Graduation academic year: 112 (ROC calendar)
Language: Chinese
Number of pages: 115
Keywords (Chinese): 視覺類比量尺、診斷分類模型、連續性資料、馬可夫鏈蒙地卡羅
Keywords (English): visual analogue scale, diagnostic classification model, continuous data, Markov chain Monte Carlo
Research methods: simulation study, empirical data analysis
DOI URL: http://doi.org/10.6345/NTNU202401556
Thesis type: academic thesis
  • 視覺類比量尺(visual analogue scale, VAS)使受試者根據題目的敘述,在連續的視覺化量尺上進行標記,來反應受試者於試題欲測量潛在特質的傾向。由於VAS具有等距的特性,因此相較於間斷量尺(如李克特量尺),VAS在個體層面上得以提供更細緻的區辨度。鑒於目前所知的文獻中並未有針對VAS資料的診斷分類模型(diagnostic classification model, DCM),因此本研究旨在發展針對VAS資料的DCM。由於VAS資料為連續且具有雙邊界(doubly bounded)特性,本研究透過結合beta response model (BRM)以及log-linear cognitive diagnosis model(LCDM)組成針對連續雙邊界資料的beta diagnostic classification model (BDCM),並以馬可夫鏈蒙地卡羅(Markov chain Monte Carlo, MCMC)作為模型參數的估計方法。模擬研究中透過操弄特質數以及樣本數比較兩種模型:(1)應用BDCM於VAS資料以及(2)使用LCDM於二分資料,比較兩者之間試題參數回復以及分類準確率的差異。研究結果顯示,在試題參數回復上,BDCM所需的樣本小於LCDM,且在分類準確率上BDCM也優於LCDM。實徵研究針對Holland職業代碼(Holland code)發展的VAS職業興趣量表進行分析,並針對受試者的特質分類進行探討。
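    The abstract describes forming the BDCM by combining the beta response model with the LCDM. As a minimal illustrative sketch (not necessarily the thesis's exact parameterization), assuming a logit link for the beta mean and an item-level precision parameter, the response function for respondent i and item j could be written as

    \[
      X_{ij} \sim \mathrm{Beta}\bigl(\mu_{ij}\,\phi_j,\; (1-\mu_{ij})\,\phi_j\bigr), \qquad 0 < X_{ij} < 1,
    \]
    \[
      \operatorname{logit}(\mu_{ij}) = \lambda_{j,0} + \boldsymbol{\lambda}_{j}^{\top}\,\mathbf{h}(\boldsymbol{\alpha}_{i},\mathbf{q}_{j}),
    \]

    where \(X_{ij}\) is the VAS response rescaled to the unit interval, \(\mu_{ij}\) is its expectation, \(\phi_j > 0\) is a precision parameter from the BRM, and \(\mathbf{h}(\boldsymbol{\alpha}_{i},\mathbf{q}_{j})\) collects the attribute main effects and interactions permitted by the Q-matrix row \(\mathbf{q}_{j}\), as in the LCDM. The notation here is illustrative only.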

    The visual analogue scale (VAS) lets participants mark their responses on a continuous visual scale according to the item description, reflecting their standing on the latent trait the item is intended to measure. Because of its interval properties, the VAS provides finer-grained discrimination at the individual level than categorical scales (e.g., Likert scales). To the author's best knowledge, no diagnostic classification model (DCM) has yet been designed for VAS data; this study therefore aims to develop a DCM tailored to such data. Because VAS data are continuous and doubly bounded, this study combines the beta response model (BRM) and the log-linear cognitive diagnosis model (LCDM) into the beta diagnostic classification model (BDCM), which is suitable for continuous doubly bounded data. Markov chain Monte Carlo (MCMC) was employed to estimate the model parameters. The simulation study manipulated the number of attributes and the sample size and compared two approaches: (1) applying the BDCM to VAS data and (2) applying the LCDM to dichotomous data; the comparison focused on item parameter recovery and classification accuracy. The results suggest that the BDCM requires a smaller sample size than the LCDM for item parameter recovery and that the BDCM also outperforms the LCDM in classification accuracy. An empirical study analyzed a VAS career interest scale developed from the Holland codes and examined participants' attribute classifications.
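    To make the simulation design more concrete, the following is a hypothetical Python sketch (not the thesis's actual code) of how VAS-type responses could be generated under a beta/LCDM-style model and then dichotomized for the LCDM comparison condition. The number of attributes, the Q-matrix, and all parameter values are illustrative assumptions; MCMC estimation of the parameters is omitted.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative (hypothetical) settings: 2 attributes, 6 items, 500 respondents.
        N, K, J = 500, 2, 6
        Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0], [0, 1], [1, 1]])  # Q-matrix (J x K)
        alpha = rng.integers(0, 2, size=(N, K))                         # true attribute patterns

        # LCDM-style kernel on the logit scale: intercept, main effects, two-way interaction.
        lam0 = np.full(J, -1.5)           # item intercepts
        lam_main = np.full((J, K), 1.5)   # main effects for required attributes
        lam_int = np.full(J, 0.5)         # interaction for items requiring both attributes
        phi = np.full(J, 20.0)            # beta precision parameters

        def expected_vas(alpha_i):
            """Expected VAS response (beta mean) of one respondent on every item."""
            main = (lam_main * Q * alpha_i).sum(axis=1)
            both_required = Q.sum(axis=1) == 2
            inter = lam_int * np.prod(np.where(Q == 1, alpha_i, 1), axis=1) * both_required
            logit_mu = lam0 + main + inter
            return 1.0 / (1.0 + np.exp(-logit_mu))

        # Doubly bounded continuous responses (VAS marks rescaled to the unit interval)
        # drawn from a beta distribution, then dichotomized at the midpoint for the
        # LCDM comparison condition.
        mu = np.vstack([expected_vas(a) for a in alpha])
        X_vas = rng.beta(mu * phi, (1.0 - mu) * phi)
        X_dich = (X_vas >= 0.5).astype(int)

        print(X_vas[:3].round(3))
        print(X_dich[:3])

    In an analysis following this design, the continuous matrix X_vas would be fitted with the BDCM and the dichotomized matrix X_dich with the LCDM, after which item parameter recovery and classification accuracy could be compared between the two fits.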

    Chapter 1  Introduction  1
    Chapter 2  Literature Review  5
        Section 1  Diagnostic Classification Models  5
        Section 2  Beta Response Model (BRM)  14
        Section 3  Markov Chain Monte Carlo (MCMC)  16
    Chapter 3  Research Model  19
        Section 1  Beta Diagnostic Classification Model (BDCM)  19
        Section 2  Bayesian Estimation  21
    Chapter 4  Simulation Study  25
        Section 1  Data Settings and Generation  25
        Section 2  Parameter Estimation Settings  28
        Section 3  Evaluation Criteria  29
        Section 4  Simulation Results  30
        Section 5  Discussion of Simulation Results  32
    Chapter 5  Empirical Study  37
        Section 1  Empirical Data  37
        Section 2  Empirical Results and Discussion  38
    Chapter 6  Conclusions and Suggestions  43
        Section 1  Conclusions  43
        Section 2  Research Limitations and Suggestions  44
    References  47
    Appendices  57
        Appendix 1  Evaluation criteria for item parameter estimates of the LCDM and BDCM in the simulation study  57
        Appendix 2  Evaluation criteria for item parameter estimates of the LCDM in the simulation study  101
        Appendix 3  Classification proportions of respondents on the VAS career interest scale  115

    Aitken, R. C. (1969). Measurement of feelings using visual analogue scales. Proceedings of the Royal Society of Medicine, 62(10), 989–993.
    Baneshi, M. R., & Talei, A. (2011). Dichotomisation of continuous data: Review of methods, advantages, and disadvantages. Iranian Journal of Cancer Prevention, 4(1), 26–32.
    Bond, T., Yan, Z., & Heene, M. (2020). Applying the Rasch model: Fundamental measurement in the human sciences. Routledge. https://doi.org/10.4324/9781410614575
    Chen, J., & de la Torre, J. (2018). Introducing the general polytomous diagnosis modeling framework. Frontiers in Psychology, 9, Article 1474. https://doi.org/10.3389/fpsyg.2018.01474
    Costa, P. T., & McCrae, R. R. (2008). The Revised NEO Personality Inventory (NEO-PI-R). The SAGE handbook of personality theory and assessment, 2(2), 179–198. https://doi.org/10.4135/9781849200479.n9
    de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179–199. https://doi.org/10.1007/S11336-011-9207-7
    de la Torre, J., van der Ark, L. A., & Rossi, G. (2018). Analysis of clinical data from a cognitive diagnosis modeling framework. Measurement and Evaluation in Counseling and Development, 51(4), 281–296. https://doi.org/10.1080/07481756.2017.1327286
    de Valpine, P., Turek, D., Paciorek, C. J., Anderson-Bergman, C., Lang, D. T., & Bodik, R. (2017). Programming with models: Writing statistical algorithms for general model structures with NIMBLE. Journal of Computational and Graphical Statistics, 26(2), 403–413. https://doi.org/10.1080/10618600.2016.1172487
    DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, latent class sizes, and the Q-matrix. Applied Psychological Measurement, 35(1), 8–26. https://doi.org/10.1177/0146621610377081
    Deonovic, B., Chopade, P., Yudelson, M., de la Torre, J., & von Davier, A. A. (2019). Application of cognitive diagnostic models to learning and assessment systems. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models: Models and model extensions, applications, software packages (pp. 437–460). Springer International Publishing. https://doi.org/10.1007/978-3-030-05584-4_21
    Dias, J. G., & Wedel, M. (2004). An empirical comparison of EM, SEM and MCMC performance for problematic Gaussian mixture likelihoods. Statistics and Computing, 14, 323–332. https://doi.org/10.1023/B:STCO.0000039481.32211.5a
    Fang, G., Liu, J., & Ying, Z. (2019). On the identifiability of diagnostic classification models. Psychometrika, 84, 19–40. https://doi.org/10.1007/s11336-018-09658-x
    Ferrando, P. J. (2001). A nonlinear congeneric model for continuous item responses. British Journal of Mathematical and Statistical Psychology, 54(2), 293–313. https://doi.org/10.1348/000711001159573
    Funke, F., & Reips, U.-D. (2012). Why semantic differentials in web-based research should be made from visual analogue scales and not from 5-point scales. Field Methods, 24(3), 310–327. https://doi.org/10.1177/1525822X12444061
    Geweke, J. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. Federal Reserve Bank of Minneapolis.
    Gilks, W. R., Richardson, S., & Spiegelhalter, D. (1995). Markov chain Monte Carlo in practice. CRC Press.
    Goodman, D. P., & Huff, K. (2007). The demand for cognitive diagnostic assessment. In M. Gierl & J. Leighton (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 19–60). Cambridge University Press. https://doi.org/10.1017/CBO9780511611186.002
    Grant, S., Aitchison, T., Henderson, E., Christie, J., Zare, S., Mc Murray, J., & Dargie, H. (1999). A comparison of the reproducibility and the sensitivity to change of visual analogue scales, Borg scales, and Likert scales in normal subjects during submaximal exercise. Chest, 116(5), 1208–1217. https://doi.org/10.1378/chest.116.5.1208
    Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26(4), 301–321. https://doi.org/10.1111/j.1745-3984.1989.tb00336.x
    Hartz, S. M. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality [Unpublished doctoral dissertation]. University of Illinois at Urbana-Champaign.
    Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1), 97–109. https://doi.org/10.1093/biomet/57.1.97
    Hayes, M. (1921). Experimental development of the graphic rating method. Psychological Bulletin, 18, 98–99.
    Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109–1144. https://doi.org/10.1287/opre.31.6.1109
    Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210. https://doi.org/10.1007/s11336-008-9089-5
    Hilbert, S., Kuechenhoff, H., Sarubin, N., Nakagawa, T. T., & Buehner, M. (2016). The influence of the response format in a personality questionnaire: An analysis of a dichotomous, a Likert-type, and a visual analogue scale. TPM: Testing, Psychometrics, Methodology in Applied Psychology, 23(1). https://doi.org/10.4473/TPM23.1.1
    Holland, J. L. (1959). A theory of vocational choice. Journal of Counseling Psychology, 6(1), Article 35. https://doi.org/10.1037/h0040767
    Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Psychological Assessment Resources.
    Huang, Z., Kohler, I. V., & Kämpfen, F. (2020). A single-item visual analogue scale (VAS) measure for assessing depression among college students. Community Mental Health Journal, 56, 355–367. https://doi.org/10.1007/s10597-019-00469-7
    Jamieson, S. (2004). Likert scales: How to (ab)use them? Medical Education, 38(12), 1217–1218. https://doi.org/10.1111/j.1365-2929.2004.02012.x
    Jang, Y., & Cohen, A. S. (2020). The impact of Markov chain convergence on estimation of mixture IRT model parameters. Educational and Psychological Measurement, 80(5), 975–994. https://doi.org/10.1177/0013164419898228
    Jia, B., Zhu, Z., & Gao, H. (2021). International comparative study of statistics learning trajectories based on PISA data on cognitive diagnostic models. Frontiers in Psychology, 12, Article 657858. https://doi.org/10.3389/fpsyg.2021.657858
    Jiang, Z., & Carter, R. (2019). Using Hamiltonian Monte Carlo to estimate the log-linear cognitive diagnosis model via Stan. Behavior Research Methods, 51, 651–662. https://doi.org/10.3758/s13428-018-1069-9
    Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25(3), 258–272. https://doi.org/10.1177/01466210122032064
    Kloft, M., Hartmann, R., & Heck, D. W. (2022). The Dirichlet dual response model: An item response model for continuous bounded interval responses. Psychometrika, 88, 888–916. https://doi.org/10.1007/s11336-023-09924-7
    Kuhlmann, T., Dantlgraber, M., & Reips, U.-D. (2017). Investigating measurement equivalence of visual analogue scales and Likert-type scales in Internet-based personality questionnaires. Behavior Research Methods, 49, 2173–2181. https://doi.org/10.3758/s13428-016-0850-x
    Lambert, B. (2018). A student's guide to Bayesian statistics. Sage Publications Ltd.
    Lee, Y.-S., Park, Y. S., & Taylan, D. (2011). A cognitive diagnostic modeling of attribute mastery in Massachusetts, Minnesota, and the US national sample using the TIMSS 2007. International Journal of Testing, 11(2), 144–177. https://doi.org/10.1080/15305058.2010.534571
    Lee, Y.-W., & Sawaki, Y. (2009). Application of three cognitive diagnosis models to ESL reading and listening assessments. Language Assessment Quarterly, 6(3), 239–263. https://doi.org/10.1080/15434300903079562
    Lesage, F.-X., Berjot, S., & Deschamps, F. (2012). Clinical stress assessment using a visual analogue scale. Occupational Medicine, 62(8), 600–605. https://doi.org/10.1093/occmed/kqs140
    Liang, K., Tu, D., & Cai, Y. (2022). Using process data to improve classification accuracy of cognitive diagnosis model. Multivariate Behavioral Research, 58(5), 969–987. https://doi.org/10.1080/00273171.2022.2157788
    Liu, C.-W. (2022). Efficient Metropolis-Hastings Robbins-Monro algorithm for high-dimensional diagnostic classification models. Applied Psychological Measurement, 46(8), 662–674. https://doi.org/10.1177/01466216221123981
    Liu, C.-W. (2023). Multidimensional item response theory models for testlet-based doubly bounded data. Behavior Research Methods, 55(7), 1–45. https://doi.org/10.3758/s13428-023-02272-5
    Liu, J., Xu, G., & Ying, Z. (2011). Learning item-attribute relationship in Q-matrix based diagnostic classification models (Report: arXiv:1106.0721). https://doi.org/10.48550/arXiv.1106.0721
    Liu, X., & Johnson, M. S. (2019). Estimating CDMs using MCMC. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models: Models and model extensions, applications, software packages (pp. 629–646). Springer International Publishing. https://doi.org/10.1007/978-3-030-05584-4_31
    Derogatis, L. R., Lipman, R. S., & Covi, L. (1973). SCL-90: An outpatient psychiatric rating scale-preliminary report. Psychopharmacology Bulletin, 9(1), 13–28.
    Müller, H. (1987). A Rasch model for continuous ratings. Psychometrika, 52(2), 165–181. https://doi.org/10.1007/BF02294232
    Ma, W., & de la Torre, J. (2020). GDINA: An R package for cognitive diagnosis modeling. Journal of Statistical Software, 93(14), 1–26. https://doi.org/10.18637/jss.v093.i14
    Madison, M. J., & Bradshaw, L. P. (2015). The effects of Q-matrix design on classification accuracy in the log-linear cognitive diagnosis model. Educational and Psychological Measurement, 75(3), 491–511. https://doi.org/10.1177/0013164414539162
    McGlohen, M., & Chang, H.-H. (2008). Combining computer adaptive testing technology with cognitively diagnostic assessment. Behavior Research Methods, 40, 808–821. https://doi.org/10.3758/BRM.40.3.808
    McKelvie, S. J. (1978). Graphic rating scales—How many categories? British Journal of Psychology, 69(2), 185–202. https://doi.org/10.1111/j.2044-8295.1978.tb01647.x
    Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6), 1087–1092. https://doi.org/10.1063/1.1699114
    Minchen, N., & de la Torre, J. (2018). A general cognitive diagnosis model for continuous-response data. Measurement: Interdisciplinary Research and Perspectives, 16(1), 30–44. https://doi.org/10.1080/15366367.2018.1436817
    Minchen, N. D., de la Torre, J., & Liu, Y. (2017). A cognitive diagnosis model for continuous response. Journal of Educational and Behavioral Statistics, 42(6), 651–677. https://doi.org/10.3102/1076998617703
    Noel, Y., & Dauvier, B. (2007). A beta item response model for continuous bounded responses. Applied Psychological Measurement, 31(1), 47–73.
    Plummer, M., Best, N., Cowles, K., & Vines, K. (2006). CODA: Convergence diagnosis and output analysis for MCMC. R News, 6(1), 7–11.
    Ravand, H., & Robitzsch, A. (2018). Cognitive diagnostic model of best choice: A study of reading comprehension. Educational Psychology, 38(10), 1255–1277. https://doi.org/10.1080/01443410.2018.1489524
    Reips, U.-D., & Funke, F. (2008). Interval-level measurement with visual analogue scales in Internet-based research: VAS Generator. Behavior Research Methods, 40(3), 699–704. https://doi.org/10.3758/BRM.40.3.699
    Revuelta, J., Halty, L., & Ximénez, C. (2018). Validation of a questionnaire for personality profiling using cognitive diagnostic modeling. The Spanish Journal of Psychology, 21, E63. https://doi.org/10.1017/sjp.2018.62
    Royston, P., Altman, D. G., & Sauerbrei, W. (2006). Dichotomizing continuous predictors in multiple regression: A bad idea. Statistics in Medicine, 25(1), 127–141. https://doi.org/10.1002/sim.2331
    Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. The Annals of Statistics, 12(4), 1151–1172. http://www.jstor.org/stable/2240995
    Sinharay, S., Johnson, M. S., & Stern, H. S. (2006). Posterior predictive assessment of item response theory models. Applied Psychological Measurement, 30(4), 298–321. https://doi.org/10.1177/0146621605285517
    Stephens, M. (2000). Dealing with label switching in mixture models. Journal of the Royal Statistical Society Series B (Statistical Methodology), 62(4), 795–809. https://doi.org/10.1111/1467-9868.00265
    Streiner, D. L. (2002). Breaking up is hard to do: The heartbreak of dichotomizing continuous data. The Canadian Journal of Psychiatry, 47(3), 262–266. https://doi.org/10.1177/070674370204700307
    Sung, Y.-T., & Wu, J.-S. (2018). The visual analogue scale for rating, ranking and paired-comparison (VAS-RRP): A new technique for psychological measurement. Behavior Research Methods, 50, 1694–1715. https://doi.org/10.3758/s13428-018-1041-8
    Tamiya, N., Araki, S., Ohi, G., Inagaki, K., Urano, N., Hirano, W., & Daltroy, L. H. (2002). Assessment of pain, depression, and anxiety by visual analogue scale in Japanese women with rheumatoid arthritis. Scandinavian Journal of Caring Sciences, 16(2), 137–141. https://doi.org/10.1046/j.1471-6712.2002.00067.x
    Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification model estimates using Mplus. Educational Measurement: Issues and Practice, 32(2), 37–50. https://doi.org/10.1111/emip.12010
    Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11(3), Article 287. https://doi.org/10.1037/1082-989X.11.3.287
    Verkuilen, J., & Smithson, M. (2012). Mixed and mixture regression models for continuous bounded responses using the beta distribution. Journal of Educational and Behavioral Statistics, 37(1), 82–113. https://doi.org/10.3102/1076998610396895
    von Davier, M. (2008). A general diagnostic model applied to language testing data. British Journal of Mathematical and Statistical Psychology, 61(2), 287–307. https://doi.org/10.1348/000711007X193957
    von Davier, M. (2014). The log-linear cognitive diagnostic model (LCDM) as a special case of the general diagnostic model (GDM) (ETS Research Report No. RR-14-40). Educational Testing Service. https://doi.org/10.1002/ets2.12043
    Wang, J., Shi, N., Zhang, X., & Xu, G. (2022). Sequential Gibbs sampling algorithm for cognitive diagnosis models with many attributes. Multivariate Behavioral Research, 57(5), 840–858. https://doi.org/10.1080/00273171.2021.1896352
    Wang, T., & Zeng, L. (1998). Item parameter estimation for a continuous response model using an EM algorithm. Applied Psychological Measurement, 22(4), 333–344. https://doi.org/10.1177/014662169802200402
    Wright, B. D., & Masters, G. N. (1982). Rating scale analysis. MESA Press.
    Wu, H.-M. (2019). Online individualised tutor for improving mathematics learning: A cognitive diagnostic model approach. Educational Psychology, 39(10), 1218–1232. https://doi.org/10.1080/01443410.2018.1494819
    Wu, X., Wu, R., Chang, H.-H., Kong, Q., & Zhang, Y. (2020). International comparative study on PISA mathematics achievement test based on cognitive diagnostic models. Frontiers in Psychology, 11, Article 2230. https://doi.org/10.3389/fpsyg.2020.02230
    Xie, Q. (2017). Diagnosing university students’ academic writing in English: Is cognitive diagnostic modelling the way forward? Educational Psychology, 37(1), 26–47. https://doi.org/10.1080/01443410.2016.1202900
    Xu, G. (2017). Identifiability of restricted latent class models with binary responses. The Annals of Statistics, 45(2), 675–707. https://doi.org/10.1214/16-AOS1464
    Xu, G., & Shang, Z. (2018). Identifying latent structures in restricted latent class models. Journal of the American Statistical Association, 113(523), 1284–1295. https://doi.org/10.1080/01621459.2017.1340889
    Yamaguchi, K., & Templin, J. (2022). A Gibbs sampling algorithm with monotonicity constraints for diagnostic classification models. Journal of Classification, 39(1), 24–54. https://doi.org/10.1007/s00357-021-09392-7
    Yusoff, R., & Mohd Janor, R. (2014). Generation of an interval metric scale to measure attitude. Sage Open, 4(1), 1–16. https://doi.org/10.1177/21582440135167
    Zhan, P., Man, K., Wind, S. A., & Malone, J. (2022). Cognitive diagnosis modeling incorporating response times and fixation counts: Providing comprehensive feedback and accurate diagnosis. Journal of Educational and Behavioral Statistics, 47(6), 736–776. https://doi.org/10.3102/10769986221111
