2012 Annual Meeting Abstracts / Archives of Clinical Neuropsychology 27 (2012); 576–685

Objective: Past research by Broglio and colleagues (2007) documented relatively low test–retest reliability for three computer-based neurocognitive tests: ImPACT, Headminder's CRI, and CogState's Concussion Sentinel. However, their methodology has been criticized for administering three complete test batteries in a single session, and their test–retest data for ImPACT were lower than those documented at 1 year (Elbin et al., 2010) and 2 years (Schatz, 2009). We sought to document the test–retest reliability of ImPACT, administered independent of other measures, over a 1-month interval.

Methods: Participants were 26 college-age students recruited from a university human subjects pool. Varsity athletes were excluded, as were students with previous exposure to the ImPACT test or a diagnosis of a previous concussion. One participant was excluded due to test scores outside the range of age-adjusted norms. Participants completed the ImPACT test as a baseline and returned 4 weeks later for a second administration (analogous to a postconcussion test).

Results: Repeated-measures ANOVA with Bonferroni correction revealed a significant practice effect for the Motor Speed composite score (p < .001), with no significant differences noted for the other measures. Intraclass correlation coefficients (ICCs) were as follows: Verbal Memory (ICC = .788), Visual Memory (ICC = .597), Motor Speed (ICC = .876), Reaction Time (ICC = .767), and the Symptom Scale (ICC = .810).

Conclusions: Administration of ImPACT, independent of other neurocognitive test batteries, yielded considerably higher test–retest reliability coefficients than those reported by Broglio and colleagues (2007). The current results support the assertion that the ImPACT test demonstrates reliability across a 1-month, "clinically relevant" time interval.