Academic article

Pre-Statistical Considerations for Harmonization of Cognitive Instruments: Harmonization of ARIC, CARDIA, CHS, FHS, MESA, and NOMAS.
Document Type
article
Source
Journal of Alzheimer's Disease. 83(4)
Subject
Biomedical and Clinical Sciences
Biological Psychology
Clinical Sciences
Neurosciences
Psychology
Aging
Behavioral and Social Science
Clinical Research
Brain Disorders
Atherosclerosis
Genetics
Cardiovascular
Aetiology
2.4 Surveillance and distribution
Blood Pressure
Cognition
Cohort Studies
Data Interpretation, Statistical
Humans
Meta-Analysis as Topic
Neuropsychological Tests
Research Design
Surveys and Questionnaires
dementia
epidemiology
methods
Cognitive Sciences
Neurology & Neurosurgery
Clinical sciences
Biological psychology
Abstract
Background: Meta-analyses of individual-level cognitive data are increasingly used to investigate the biomedical, lifestyle, and sociocultural factors that influence cognitive decline and dementia risk. Pre-statistical harmonization of cognitive instruments is a critical methodological step for accurate cognitive data harmonization, yet specific approaches for this process are unclear.
Objective: To describe pre-statistical harmonization of cognitive instruments for an individual-level meta-analysis in the Blood Pressure and Cognition (BP COG) study.
Methods: We identified cognitive instruments from six cohorts (the Atherosclerosis Risk in Communities Study, Cardiovascular Health Study, Coronary Artery Risk Development in Young Adults study, Framingham Offspring Study, Multi-Ethnic Study of Atherosclerosis, and Northern Manhattan Study) and conducted an extensive review of each item's administration and scoring procedures and score distributions.
Results: We included 153 cognitive instrument items from 34 instruments across the six cohorts. Of these items, 42% were common across ≥2 cohorts, and 86% of common items showed differences across cohorts. We found administration, scoring, and coding differences for seemingly equivalent items. These differences corresponded to variability across cohorts in score distributions and ranges. We performed data augmentation to adjust for these differences.
Conclusion: Cross-cohort administration, scoring, and procedural differences for cognitive instruments are frequent and need to be assessed to address their potential impact on meta-analyses and cognitive data interpretation. Detecting and accounting for these differences is critical for accurate attributions of cognitive health across cohort studies.
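The kind of pre-statistical check the abstract describes — comparing seemingly equivalent items' score ranges across cohorts before pooling — can be sketched as follows. This is a minimal illustration, not the study's actual code: the cohort names come from the abstract, but the item names and score ranges are invented for the example.

```python
from collections import defaultdict

# Hypothetical item score ranges (min, max) per cohort; values are illustrative only.
items = {
    "ARIC":   {"word_list_recall": (0, 10), "digit_symbol": (0, 93)},
    "CHS":    {"word_list_recall": (0, 10), "digit_symbol": (0, 90)},
    "CARDIA": {"word_list_recall": (0, 15), "digit_symbol": (0, 133)},
}

def flag_range_differences(cohort_items):
    """Return items whose (min, max) score range is not identical across all cohorts."""
    ranges = defaultdict(set)
    for cohort, item_map in cohort_items.items():
        for item, rng in item_map.items():
            ranges[item].add(rng)
    # An item with more than one distinct range needs review before harmonization.
    return sorted(item for item, rngs in ranges.items() if len(rngs) > 1)

print(flag_range_differences(items))  # → ['digit_symbol', 'word_list_recall']
```

In practice, flagged items would then be reviewed for administration and scoring differences and, as the abstract notes, adjusted (e.g., via data augmentation) before any pooled analysis.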