Assessing agreement with repeated measures for random observers.

Chia-Cheng Chen, Huiman X Barnhart
Author Information
  1. Chia-Cheng Chen: Section of Quantitative Health Sciences, Department of Pediatrics, Medical College of Wisconsin, Milwaukee, WI 53226, USA. ccchen@mcw.edu

Abstract

Agreement studies often assess whether different observers measuring responses on the same subject or sample produce similar results. The concordance correlation coefficient (CCC) is a popular index for assessing the closeness of quantitative measurements among observers. Usually, the CCC is applied to data without or with replications, based on subject and observer effects only. This methodology, however, does not apply when repeated measurements rather than replications are collected. Although some CCC-type indices exist for assessing agreement with repeated measurements, none accommodates random observers and random time points. In this paper, we propose a new CCC for repeated measures in which both observers and time points are treated as random effects. A simulation study demonstrates the proposed methodology, and vertebral body data and image data are used for illustration.
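For context, the classic CCC the abstract builds on can be sketched for the simplest case of two observers with no replications. This is a minimal illustration of Lin-style concordance, assuming NumPy; the repeated-measures CCC proposed in the paper additionally requires variance components for random observer and time effects and is not shown here.

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between two observers'
    measurements on the same subjects (two raters, no replications).

    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2),
    penalizing both imprecision (low correlation) and location/scale shift.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    s_xy = np.mean((x - x.mean()) * (y - y.mean()))  # biased covariance
    return 2 * s_xy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

With identical readings the index equals 1; a constant shift between observers lowers it even though the Pearson correlation stays at 1, which is what distinguishes the CCC as an agreement (rather than association) measure.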

MeSH Terms

Biostatistics
Computer Simulation
Echocardiography
Humans
Monte Carlo Method
Observer Variation
Radiography
Random Allocation
Spine
Time Factors

