Q-Coh: A tool to screen the methodological quality of cohort studies in systematic reviews and meta-analyses

  1. Jarde, Alexander
  2. Losilla Vidal, Josep Maria
  3. Vives Brosa, Jaume
  4. Rodrigo Giménez, María Florencia
Journal:
International Journal of Clinical and Health Psychology

ISSN: 1697-2600

Year of publication: 2013

Volume: 13

Issue: 2

Pages: 138-146

Type: Article

DOI: 10.1016/S1697-2600(13)70017-6

Funding information

This research was supported by Grant PSI2010-16270 from the Spanish Ministry of Science and Innovation.

References

  • American Psychological Association (2010). Publication manual of the American Psychological Association (6th ed.). Washington, D.C.: American Psychological Association (APA).
  • Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and Kappa. Journal of Clinical Epidemiology, 46, 423-429.
  • Cicchetti, D. V., & Feinstein, A. R. (1990). High agreement but low Kappa: II. Resolving the paradoxes. Journal of Clinical Epidemiology, 43, 551-558.
  • Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
  • Deeks, J. J., Dinnes, J., D'Amico, R., Sowden, A. J., Sakarovitch, C., Song, F., & Petticrew, M. (2003). Evaluating non-randomised intervention studies. Health Technology Assessment, 7, 1-173.
  • Detsky, A. S., Naylor, C. D., O'Rourke, K., McGeer, A. J., & L'Abbé, K. A. (1992). Incorporating variations in the quality of individual randomized trials into meta-analysis. Journal of Clinical Epidemiology, 45, 255-265.
  • Dreier, M., Borutta, B., Stahmeyer, J., Krauth, C., & Walter, U. (2010). Vergleich von Bewertungsinstrumenten für die Studienqualität von Primär- und Sekundärstudien zur Verwendung für HTA-Berichte im deutschsprachigen Raum (HTA Bericht No. 102). Köln, Germany: Deutsche Agentur für Health Technology Assessment.
  • Feinstein, A. R., & Cicchetti, D. V. (1990). High agreement but low Kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology, 43, 543-549.
  • Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.
  • Fleiss, J. L., Cohen, J., & Everitt, B. S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72, 323-327.
  • Gamer, M., Lemon, J., Fellows, I., & Singh, P. (2012). Various coefficients of interrater reliability and agreement. Package «irr» for R [computer software]. Author.
  • Higgins, J. P., & Green, S. (2011). Cochrane handbook for systematic reviews of interventions (5.1.0 ed.). The Cochrane Collaboration. Available from: www.cochrane-handbook.org.
  • Jarde, A., Losilla, J. M., & Vives, J. (2012a). Methodological quality assessment tools of non-experimental studies: A systematic review. Anales de Psicología, 28, 617-628.
  • Jarde, A., Losilla, J. M., & Vives, J. (2012b). Suitability of three different tools for the assessment of methodological quality in ex post facto studies. International Journal of Clinical and Health Psychology, 12, 97-108.
  • Kendall, M. G. (1938). A new measure of rank correlation. Biometrika, 30, 81-93.
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
  • Lantz, C. A., & Nebenzahl, E. (1996). Behavior and interpretation of the kappa statistic: Resolution of the two paradoxes. Journal of Clinical Epidemiology, 49, 431-434.
  • Montero, I., & León, O. G. (2007). A guide for naming research studies in Psychology. International Journal of Clinical and Health Psychology, 7, 847-862.
  • Sanderson, S., Tatt, I. D., & Higgins, J. P. (2007). Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: A systematic review and annotated bibliography. International Journal of Epidemiology, 36, 666-676.
  • Shamliyan, T. A., Kane, R. L., Ansari, M. T., Raman, G., Berkman, N. D., & Grant, M. (2010). Development of quality criteria to evaluate nontherapeutic studies of incidence, prevalence, or risk factors of chronic diseases: Pilot study of new checklists (AHRQ Publication No. 11-EHC008-EF). Rockville, MD: Agency for Healthcare Research and Quality.
  • Shamliyan, T. A., Kane, R. L., & Dickinson, S. (2010). A systematic review of tools used to assess the quality of observational studies that examine incidence or prevalence and risk factors for diseases. Journal of Clinical Epidemiology, 63, 1061-1070.
  • Thompson, S., Ekelund, U., Jebb, S., Lindroos, A. K., Mander, A., Sharp, S., & Turner, R. (2010). A proposed method of bias adjustment for meta-analyses of published observational studies. International Journal of Epidemiology, 40, 765-777.
  • Uebersax, J. (2010). Statistical methods for rater and diagnostic agreement: Recommended methods. Available from: http://www.john-uebersax.com/stat/agree.htm [retrieved 3 Jul 2010].
  • Valentine, J. C., & Cooper, H. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13, 130-149.
  • Vandenbroucke, J. P., von Elm, E., Altman, D. G., Gøtzsche, P. C., Mulrow, C. D., & Pocock, S. J. (2007). Strengthening the reporting of observational studies in epidemiology (STROBE): Explanation and elaboration. Epidemiology, 18, 805-835.
  • Viswanathan, M., & Berkman, N. D. (2012). Development of the RTI item bank on risk of bias and precision of observational studies. Journal of Clinical Epidemiology, 65, 163-178.
  • West, S., King, V., Carey, T. S., Lohr, K. N., McKoy, N., & Sutton, S. F. (2002). Systems to rate the strength of scientific evidence (AHRQ Publication No. 02-E016). Rockville, MD: Agency for Healthcare Research and Quality.