A comparison of four computerized neurocognitive assessment tools to a traditional neuropsychological test battery in service members with and without mild traumatic brain injury

Arch Clin Neuropsychol. 2017 Apr;33(1):102-119.

Cole, W. R., Arrieux, J. P., Ivins, B. J., Schwab, K. A. and Qashu, F. M.


Abstract:

Objective: Computerized neurocognitive assessment tools (NCATs) are often used as screening tools to identify cognitive deficits after mild traumatic brain injury (mTBI). However, differing methodology across studies makes it difficult to reach a consensus regarding the validity of NCATs. Thus, studies in which multiple NCATs are administered to the same sample using the same methodology are warranted. Method: We investigated the validity of four NCATs: the ANAM4, CNS-VS, CogState, and ImPACT. Two randomly assigned NCATs and a battery of traditional neuropsychological (NP) tests were administered to healthy control active duty service members (n = 272) and to service members within 7 days of an mTBI (n = 231). Analyses included correlations between NCAT and NP test scores to investigate convergent and discriminant validity, and regression analyses to identify the unique variance in NCAT and NP scores attributable to group status. Effect sizes (Cohen's f²) were calculated to guide interpretation of the data. Results: Only 37 (0.6%) of the 5,655 correlations calculated between NCATs and NP tests are large (i.e., r ≥ 0.50). The majority of correlations are small (i.e., 0.30 > r ≥ 0.10), with no clear patterns suggestive of convergent or discriminant validity between the NCATs and NP tests. Though there are statistically significant group differences across most NCAT and NP test scores, the unique variance accounted for by group status is minimal (i.e., semi-partial R²
