Psychometric Assessment Issues and Potential Opportunities

Authors
Sprung, Manuel S.
Santos, Henrique
Rook, Kelsey
Pinheiro, Paulo
McGuinness, Deborah L.
Chorpita, Bruce F.
Issue Date
2024-05-07
Type
Presentation
Abstract
Psychometric assessment is essential for mental health care. However, which assessment instruments are best suited to particular use cases remains largely opaque to researchers, clinicians, and policymakers alike. Although metrics have been proposed to indicate the strength of evidence for assessment resources (SOEFA), the reporting of the research evidence needed for these metrics is currently inconsistent and highly fragmented, which complicates the evaluation of SOEFA. In an effort to improve the systematic collection and reporting of SOEFA, Hunsley and Mash (2008) and Youngstrom et al. (2017) proposed a standard set of evaluation criteria. Twelve categories are suggested for rating mental health assessment resources as adequate, good, or excellent: norms, internal consistency, interrater reliability, test-retest reliability (stability), repeatability, content validity, construct validity, discriminative validity, prescriptive validity, validity generalization, treatment sensitivity, and clinical utility.

In applying these criteria to a widely used measure of youth anxiety and depression, the Revised Child Anxiety and Depression Scale (https://rcads.ucla.edu/; Chorpita et al., 2000), we encountered a variety of challenges arising from the poor fit between the standards and the knowledge base represented in published research papers. First, it is difficult to map the proposed criteria onto inconsistent and fragmented research evidence, such as the varied use of criteria to determine validity or accuracy of measurement. Second, many assessment instruments exist in different versions, such as translations into different languages, derivatives (e.g., short forms), or respondent formats (e.g., youth or caregiver forms). The provenance of these versions (e.g., which items are newly created and which are reused from existing instruments) is highly opaque, and there is minimal guidance on how SOEFA metrics (indicators of the degree of uncertainty in the expected performance of a specific version) should be applied across resources with shared provenance. For example, one could assume that a given version inherits SOEFA (1) from the parent class of instruments, (2) from sibling classes, or (3) not at all. Third, psychometric assessment instruments are always used in a specific context, i.e., with a specific cohort (of a given age, gender, language, nationality, etc.) and for a specific purpose, such as screening, supporting diagnosis, or monitoring treatment progress. Informing end users about the potential suitability of an assessment resource therefore requires marshaling large amounts of metadata about the contexts and cohorts in which the SOEFA metrics were established for each measure.

Thus, despite the laudable aim of applying standardized metrics to inform users about the evidence supporting specific assessment resources, the practical implementation of these standards (or their evolution) requires a significant change to the knowledge infrastructure of psychometric assessment. In joint work that brings together experts in psychological clinical assessment and in semantic technology, we are exploring a semantic infrastructure that supports the encoding of SOEFA metrics in the context of mental health screening questionnaires. Our current effort involves modeling statistical evidence using standardized terminology from established ontologies (such as HAScO, STATO, and ECO), as illustrated in the sketch below.
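As a rough illustration of the kind of encoding we are exploring, the following Python sketch uses rdflib to record one piece of internal-consistency evidence for an RCADS version, together with the cohort and purpose in which it was established. Every IRI in the ex: namespace, every property name, and the STATO term ID are illustrative placeholders rather than the actual model; a real encoding would draw on HAScO, STATO, and ECO terms.

from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

# Real ontology namespaces; the specific term IDs used below are placeholders.
STATO = Namespace("http://purl.obolibrary.org/obo/STATO_")
ECO = Namespace("http://purl.obolibrary.org/obo/ECO_")
# Hypothetical project namespace, for illustration only.
EX = Namespace("http://example.org/soefa/")

g = Graph()
g.bind("ex", EX)

# The instrument version under evaluation (hypothetical IRI).
rcads = EX["RCADS-47-youth-en"]
g.add((rcads, RDF.type, EX.AssessmentInstrumentVersion))
g.add((rcads, RDFS.label, Literal("RCADS 47-item youth self-report (English)")))
g.add((rcads, EX.derivedFrom, EX["RCADS"]))  # provenance link to the parent instrument

# One piece of statistical evidence: internal consistency in a specific cohort.
ev = EX["evidence-001"]
g.add((ev, RDF.type, ECO["0000000"]))        # ECO root class "evidence"
g.add((ev, EX.aboutInstrument, rcads))
g.add((ev, EX.soefaCategory, EX.InternalConsistency))
g.add((ev, EX.statistic, STATO["0000XXX"]))  # placeholder for the relevant STATO term
g.add((ev, EX.value, Literal(0.91, datatype=XSD.decimal)))
g.add((ev, EX.rating, Literal("excellent")))

# Context in which the metric was established: cohort and assessment purpose.
cohort = EX["cohort-001"]
g.add((cohort, RDF.type, EX.Cohort))
g.add((cohort, EX.ageRange, Literal("8-18")))
g.add((cohort, EX.language, Literal("en")))
g.add((ev, EX.establishedIn, cohort))
g.add((ev, EX.assessmentPurpose, EX.Screening))

print(g.serialize(format="turtle"))

Separating the evidence record from the instrument version is what lets the same version accumulate SOEFA metrics from many studies, each tied to its own cohort and purpose.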
Our goal is to provide a provenance-aware, semantics-powered data portal that a broad set of users can use to understand important nuances of assessment instruments, to guide which instruments are best suited to which purposes, and to expose the reasons for (or against) such choices, in a way that aligns with the guidance of leading scholars in mental health assessment.
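Continuing the sketch above, a portal of this kind could answer suitability questions with SPARQL queries along the following lines; again, every ex: term is a placeholder, and this is only one possible shape for such a query.

# Hypothetical suitability query over the graph g built in the previous sketch:
# which instrument versions carry evidence rated "excellent" for internal
# consistency, established in an English-language cohort for screening?
query = """
PREFIX ex: <http://example.org/soefa/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?instrument ?label WHERE {
  ?ev ex:aboutInstrument ?instrument ;
      ex:soefaCategory ex:InternalConsistency ;
      ex:rating "excellent" ;
      ex:assessmentPurpose ex:Screening ;
      ex:establishedIn ?cohort .
  ?cohort ex:language "en" .
  ?instrument rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.instrument, row.label)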
Full Citation
Sprung, M.S., Santos, H., Rook, K., Pinheiro, P., McGuinness, D.L., Chorpita, B.F. 2024. Psychometric Assessment Issues and Potential Opportunities. In The Healthcare and Life Sciences Symposium (HCLS), co-located with The 2024 Knowledge Graph Conference (KGC). New York, NY.
Publisher
The Healthcare and Life Sciences Symposium (HCLS), co-located with The 2024 Knowledge Graph Conference (KGC)