Standard, well-established cognitive tasks that produce reliable effects in group comparisons can nonetheless yield unreliable measurements when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tasks with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. Across five experiments, we show that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, improving on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make these tasks freely available and discuss both the theoretical and applied implications for how cognitive testing of individual differences is carried out.
Funding
This research was funded by Australian Army Headquarters (Land Capability Division) and an Australian Research Council Discovery Grant to A.H., J.S., and M.P. (DP200100655). A.H. was supported by a Revesz Visiting Professor Fellowship from the University of Amsterdam. D.M. was supported by a Vidi grant (VI.Vidi.191.091) from the Dutch Research Council. T.K. was supported by an Australian Government Research Training Program Scholarship from the University of Tasmania.