Chinese Title | 教師教學創新的跨國性比較:三種跨國性大數據分析方法的比較與介紹 |
English Title | Cross-Country Comparison of Teacher Teaching Innovation: Comparison and Introduction of Three Big Data Analysis Methods |
Author | 曾明基 |
Chinese Abstract | This study uses the TALIS 2018 international database to conduct a cross-national comparison of teacher teaching innovation across 15 countries, and compares cross-national differences in teaching innovation with three different big data analysis methods: the Bayesian random effects model, Bayesian approximate measurement invariance, and calibration. The findings show that the conventional exact measurement invariance analysis is not suitable for cross-national comparisons of teacher teaching innovation, whereas the Bayesian random effects model, Bayesian approximate measurement invariance, and calibration methods can effectively estimate cross-national differences in teaching innovation and are worth promoting. In addition, the ranking comparison shows that Taiwanese teachers still have considerable room for improvement in teaching innovation. |
English Abstract | Research Motivation and Objective: The primary objective of this study is to conduct a cross-national comparison of teaching innovation among teachers across 15 countries using the TALIS 2018 international database, and to explore differences in teaching innovation across these countries. To this end, the study employs three big data analysis methods developed in recent years in the field of testing and assessment: Bayesian random effects models (Asparouhov & Muthén, 2016; De Jong et al., 2007; Fox, 2010; Verhagen & Fox, 2012), Bayesian approximate measurement invariance (Muthén & Asparouhov, 2012, 2018), and calibration (Asparouhov & Muthén, 2014, 2022; Muthén & Asparouhov, 2014, 2018). The study compares teaching innovation across countries and introduces the three methodological approaches as a reference for empirical researchers conducting cross-national comparisons.

Literature Review: The Bayesian random effects model uses a multilevel modelling approach to estimate random effects for the factor loadings and measurement intercepts of the test items. Bayesian imputation is then used to derive values of the latent teaching innovation factor for teachers in different countries, and these values are ranked and compared, which constitutes a two-step random effects model construction within a multilevel framework. Bayesian approximate measurement invariance testing and calibration, by contrast, are based on a one-step fixed effects model construction. Bayesian approximate measurement invariance testing requires prior parameters that constrain the subtle cross-country differences in factor loadings and measurement intercepts (a schematic prior specification is sketched after this record), and the choice of prior variances must be supported by previous simulation and empirical research. The calibration method does not require anchor items; however, as with exploratory factor analysis, calibration performs best when a small number of items show large measurement non-invariance and a large number of items show small non-invariance, rather than a moderate number of items showing intermediate non-invariance (Muthén & Asparouhov, 2014).

Research Methodology: This study focuses on primary school teachers from the TALIS 2018 international database, covering 15 countries. The initial sample comprised 51,782 teachers; after excluding teachers who did not respond to any item of the teaching innovation scale and retaining those who answered at least some items, the final analytic sample was 50,396 teachers. The research instrument is the TALIS 2018 teaching innovation scale, which contains four items scored on a four-point Likert scale (strongly disagree, disagree, agree, strongly agree) and measures the latent construct of teaching innovation. The analysis proceeds in two main steps. The first step is model estimation, using four analytical methods: exact measurement invariance testing, Bayesian random effects models, Bayesian approximate measurement invariance, and calibration. The second step is ranking, in which the average latent factor scores of teaching innovation for teachers in different countries are ranked and compared. All models are estimated with the Mplus software (Muthén & Muthén, 2021). Exact measurement invariance testing is estimated with robust maximum likelihood, while the Bayesian random effects, Bayesian approximate measurement invariance, and calibration models use Bayesian estimation with Gibbs sampling via Markov chain Monte Carlo (MCMC), with two chains and a minimum of 300,000 and a maximum of 1,000,000 iterations. Parameter convergence is monitored with the Potential Scale Reduction (PSR) criterion (Gelman et al., 2014), with the convergence criterion set at 0.05 (a sketch of the PSR computation follows this record).

Research Results: The random effects estimates from the Bayesian random effects model and the fixed effects estimates from the Bayesian approximate measurement invariance and calibration models yield clearly different cross-national rankings of teachers' teaching innovation. Nevertheless, across all three big data analysis methods the estimates and rankings for Taiwanese teachers' teaching innovation are notably consistent, suggesting that, compared with the other 14 countries, Taiwanese teachers still have considerable room for improvement in teaching innovation.

Discussion and Recommendations: This study adopts a big data analysis approach to compare teachers' teaching innovation across countries. The exact measurement invariance analysis shows that teaching innovation does not meet the criterion of exact measurement invariance: there is non-invariance in the factor loadings and measurement intercepts of the teaching innovation items, so the results of exact measurement invariance analysis cannot serve as a basis for cross-national comparison. The study therefore employs three different big data analysis methods: Bayesian random effects models, Bayesian approximate measurement invariance, and calibration. The Bayesian random effects model reveals random effects variance in the factor loadings and measurement intercepts of the teaching innovation items, indicating measurement non-invariance across countries. After this non-invariance is controlled for through the Bayesian approximate measurement invariance and calibration methods, the average latent factor scores of teaching innovation for teachers in the 15 countries can be reliably ranked and compared. Although the three big data analysis methods yield somewhat different rankings, the estimates for Taiwanese teachers' teaching innovation are quite consistent, again indicating that there is still much room for improvement in teaching innovation among Taiwanese teachers. |
Pages | 037-063 |
Keywords | TALIS 2018, Bayesian approximate measurement invariance, Bayesian random effects model, calibration, teacher teaching innovation |
Journal | 教育與心理研究 |
Issue | March 2025 (Vol. 48, No. 1) |
Publisher | College of Education, National Chengchi University |
DOI |
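The Literature Review above describes Bayesian approximate measurement invariance as constraining cross-country differences in item parameters through small-variance priors. The block below is a minimal sketch of that specification, assuming a single-factor model for the four teaching innovation items; the notation and the prior variance value are illustrative and are not taken from the study.

```latex
% Illustrative one-factor model for item j, teacher i, country g:
\[
  y_{ijg} = \nu_{jg} + \lambda_{jg}\,\eta_{ig} + \varepsilon_{ijg},
  \qquad \varepsilon_{ijg} \sim N(0,\ \theta_{jg}).
\]
% Approximate invariance: cross-country differences in intercepts and loadings
% receive zero-mean, small-variance priors rather than exact equality constraints.
\[
  \nu_{jg} - \nu_{jg'} \sim N(0,\ \sigma_{\nu}^{2}), \qquad
  \lambda_{jg} - \lambda_{jg'} \sim N(0,\ \sigma_{\lambda}^{2}),
  \qquad \text{for all } g \neq g',
\]
\[
  \text{with a small prior variance such as } \sigma_{\nu}^{2} = \sigma_{\lambda}^{2} = 0.01
  \text{ (illustrative value only).}
\]
```

Exact measurement invariance corresponds to forcing these differences to zero; the small but non-zero prior variance is what lets the model absorb the subtle cross-country differences discussed in the abstract.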
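The Research Methodology above monitors MCMC convergence with the Potential Scale Reduction (PSR; Gelman et al., 2014). The sketch below shows the standard Gelman-Rubin PSR computation for a single parameter tracked across multiple chains; the function name and the simulated draws are illustrative, and Mplus applies its own variant of this criterion, with PSR values close to 1 indicating convergence.

```python
import numpy as np

def potential_scale_reduction(chains):
    """Gelman-Rubin potential scale reduction (PSR / R-hat) for one parameter.

    `chains` is an (m, n) array: m chains with n post-burn-in draws each.
    Values close to 1 indicate that the chains have mixed well.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    if m < 2:
        raise ValueError("PSR needs at least two chains")
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)         # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    var_plus = (n - 1) / n * w + b / n      # pooled posterior variance estimate
    return np.sqrt(var_plus / w)

if __name__ == "__main__":
    rng = np.random.default_rng(2018)
    # Two well-mixed chains drawn from the same distribution -> PSR near 1.
    draws = rng.normal(loc=0.5, scale=1.0, size=(2, 5000))
    print(f"PSR = {potential_scale_reduction(draws):.4f}")
```

With well-mixed chains the printed PSR is very close to 1; values that remain clearly above 1 after the minimum number of iterations would indicate that more draws are needed before the estimates are interpreted.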