Multilevel meta-analysis of multiple regression coefficients from single-case experimental studies.

Laleh Jamshidi, Lies Declercq, Belén Fernández-Castilla, John M Ferron, Mariola Moeyaert, S Natasha Beretvas, Wim Van den Noortgate
Author Information
  1. Laleh Jamshidi: Faculty of Psychology and Educational Sciences & ITEC, imec research group at KU Leuven, Leuven, Belgium. laleh.jamshidi@uregina.ca.
  2. Lies Declercq: Faculty of Psychology and Educational Sciences & ITEC, imec research group at KU Leuven, Leuven, Belgium.
  3. Belén Fernández-Castilla: Faculty of Psychology and Educational Sciences & ITEC, imec research group at KU Leuven, Leuven, Belgium.
  4. John M Ferron: Department of Educational Measurement and Research, University of South Florida, Tampa, FL, USA.
  5. Mariola Moeyaert: Department of Educational and Counseling Psychology, University at Albany (State University of New York), Albany, NY, USA.
  6. S Natasha Beretvas: Department of Educational Psychology, University of Texas at Austin, Austin, TX, USA.
  7. Wim Van den Noortgate: Faculty of Psychology and Educational Sciences & ITEC, imec research group at KU Leuven, Leuven, Belgium.

Abstract

This study addresses the dependence among multiple regression coefficients representing treatment effects when meta-analyzing data from single-case experimental studies. We compare the results of three multilevel meta-analytic models for handling the dependent effect sizes: a univariate multilevel model that avoids the dependence, a multivariate multilevel model that ignores the covariance at higher levels, and a multivariate multilevel model that models the existing covariance. The results indicate better estimates of the overall treatment effects and variance components when a multivariate multilevel model is applied, regardless of whether the existing covariance is modeled or ignored. These findings confirm the robustness of multilevel modeling to misspecification of the covariance at the case and study levels when estimating overall treatment effects and variance components. The overall treatment effect estimates are unbiased regardless of the underlying model, but the between-case and between-study variance components are biased under certain conditions. In particular, the between-study variance estimates are biased when the number of studies is smaller than 40 (i.e., 10 or 20) and the true between-case variance is relatively large (i.e., 8). The observed bias is larger for the between-case variance estimates than for the between-study variance estimates when the true between-case variance is relatively small (i.e., 0.5).
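As a minimal illustration of the three-level structure the abstract refers to (effect sizes for cases nested within studies), the following Python sketch simulates case-specific effect sizes from an overall treatment effect plus between-study, between-case, and sampling deviations, then recovers the variance decomposition with crude moment estimators. All parameter values here are illustrative, not those used in the article's simulation study.

```python
import numpy as np

# Hypothetical three-level data-generating model:
#   es[j, k] = beta0 + u[j] + v[j, k] + e[j, k]
# with u ~ N(0, sigma2_study), v ~ N(0, sigma2_case), e ~ N(0, sigma2_e).
rng = np.random.default_rng(42)
n_studies, n_cases = 30, 5
beta0 = 2.0          # overall treatment effect
sigma2_study = 1.0   # between-study variance
sigma2_case = 0.5    # between-case variance
sigma2_e = 0.1       # sampling (residual) variance

u = rng.normal(0.0, np.sqrt(sigma2_study), n_studies)            # study deviations
v = rng.normal(0.0, np.sqrt(sigma2_case), (n_studies, n_cases))  # case deviations
e = rng.normal(0.0, np.sqrt(sigma2_e), (n_studies, n_cases))     # sampling error

es = beta0 + u[:, None] + v + e   # observed case-specific effect sizes

# Crude moment-based check of the variance decomposition:
study_means = es.mean(axis=1)
# var of study means includes (sigma2_case + sigma2_e) / n_cases; subtract it out
between_study = study_means.var(ddof=1) - (sigma2_case + sigma2_e) / n_cases
# pooled within-study variance approximates sigma2_case + sigma2_e
within_study = es.var(axis=1, ddof=1).mean()

print(round(es.mean(), 2), round(between_study, 2), round(within_study, 2))
```

With enough studies and cases, the grand mean approaches beta0, `between_study` approaches sigma2_study, and `within_study` approaches sigma2_case + sigma2_e; a real analysis would instead fit the three models compared in the article with restricted maximum likelihood.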

Keywords

Multilevel meta-analysis; Multivariate; Robust estimator; Single-case design

MeSH Terms

Bias
Multilevel Analysis
Multivariate Analysis

