Test-Retest Reliability Of Responses To The ImPACT Neurocognitive Program

Journal of Athletic Training. 43(3S):S-53-54.

Piland, S. G., T. E. Gould, A. Sumrall, K. Martin and J. Dixon.


Abstract:

Context: Serial testing of athletes with neuropsychological test batteries and composite self-report symptom measures is a common and recommended practice. This paradigm helps indicate the athlete's resolution from the concussive insult. With return-to-play decisions in the balance, it is imperative that clinicians understand that these decisions have consequences. It is therefore necessary to provide continued psychometric evidence regarding the use of these scores so that clinicians understand the level of confidence they should or should not place in score inferences. Accordingly, information regarding the test-retest (stability) reliability of scores is warranted. Ideally, scores should demonstrate high stability (R approaching 1) so that score fluctuation can be attributed to variation in concussive symptoms rather than to random error variance.

Objective: To analyze the 10-day test-retest reliability of responses used to calculate 5 of the composite scores generated by the ImPACT computer-based neurocognitive testing program.

Design: Prospective analysis involving two testing sessions separated by 10 days.

Setting: Data were collected in a laboratory at a southeastern Division I institution.

Patients or Other Participants: Healthy, physically active volunteer male students (N=27, age = 21.10 ± 2.0 years) enrolled at a southeastern Division I institution.

Interventions: Participants provided informed consent in accordance with the requirements of the involved institution and completed a brief health history questionnaire as well as the ImPACT computer-based neurocognitive testing program (version 6.0) on two occasions. The Shrout and Fleiss (1,1) method was used to calculate 5 intraclass correlation coefficients (ICCs) from the composite scores provided by the software.

Main Outcome Measures: Mean values of the composite scores provided by the ImPACT software.

Results: The composite self-report symptom score (time 1 = .59 ± 1.11, time 2 = 1.07 ± 2.33) yielded an ICC of R=.039; verbal memory composite (time 1 = .91 ± .08, time 2 = .89 ± .09), R=.43; visual memory composite (time 1 = .78 ± .13, time 2 = .76 ± .14), R=.78; visual motor speed composite (time 1 = 40.61 ± 11.16, time 2 = 39.93 ± 7.99), R=.91; and reaction time composite (time 1 = .53 ± .06, time 2 = .53 ± .06), R=.95.

Conclusions: Evidence of score stability is important to the clinician's ability to draw appropriate inferences from responses to any of the multi-faceted measures of concussion. Because there is no single biological marker for the injury, researchers must continue to evaluate the measurement properties of the individual facets of test scores. The low score stability observed for the composite symptom and verbal memory composite scores of the ImPACT program suggests that clinicians should continue to use a multi-faceted approach and understand the effects of poor measurement properties on score inferences. Conversely, the high score stability demonstrated by the visual memory, visual motor speed, and reaction time composites supports reliable score interpretations.
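For readers unfamiliar with the Shrout and Fleiss (1,1) model used above, the sketch below computes ICC(1,1) from a subjects-by-sessions score matrix using the standard one-way random-effects formula. It is an illustrative sketch only: the function name and the example reaction-time data are invented for demonstration and are not drawn from the study.

```python
import numpy as np

def icc_1_1(scores: np.ndarray) -> float:
    """ICC(1,1) per Shrout & Fleiss (1979), one-way random-effects model.

    scores: 2-D array of shape (n_subjects, k_sessions), e.g. two test
    administrations per participant.
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)

    # One-way ANOVA mean squares: between subjects and within subjects.
    ss_between = k * np.sum((subject_means - grand_mean) ** 2)
    ss_within = np.sum((scores - subject_means[:, None]) ** 2)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))

    # ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

if __name__ == "__main__":
    # Hypothetical reaction-time composites for 5 participants, 2 sessions.
    data = np.array([[0.52, 0.53],
                     [0.55, 0.56],
                     [0.49, 0.50],
                     [0.60, 0.58],
                     [0.47, 0.48]])
    print(f"ICC(1,1) = {icc_1_1(data):.2f}")
```

Under this model, an ICC near 1 indicates that most score variance reflects stable between-subject differences rather than session-to-session fluctuation, which is the sense in which the abstract interprets the reported R values.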
