In 2015, the Australian Government’s Excellence in Research for Australia (ERA) assessment of research quality declined to rate 1.5 per cent of submissions from universities. Public debate focused on gaming practices or ‘coding errors’ within university submissions as the reason for this outcome; the issue concerned the in/appropriate allocation of research activities to Fields of Research. This paper argues that such practices are only part of the explanation. With the support of statistical modelling, unrated outcomes are shown to have also arisen from particular evaluation practices within the discipline of Psychology and the associated Medical and Health Sciences Research Evaluation Committee. Given the high-stakes nature of unrated outcomes, and given that the evaluation process breaches public administration principles by being neither appealable nor appropriately transparent, the paper concludes with recommendations for strengthening ERA policy and procedures to enhance trust in future ERA processes.