Contrary to the predictions of the salience hypothesis, no regions in our ROI analysis (see Table S6) showed evidence that losses were treated as wins by win-tie classifiers or that wins were treated as losses by tie-loss classifiers, even at a liberal uncorrected significance level (two-tailed p < 0.1, binomial Z score compared with chance). Instead, the accumbens showed evidence of classifying losses as ties more often than predicted by chance (t[21] = −3.54, p = 0.002), but no other region showed a significant bias (p < 0.05, uncorrected). For the tie-loss classifier, seven regions showed a significant tendency to classify wins as ties (p < 0.05, Bonferroni corrected), and at a looser threshold (p < 0.05, uncorrected) 28 regions showed this tendency. Searchlight analysis for the win-tie classifier revealed very few clusters that significantly tended to classify losses as either wins or ties (Figure 7A). Only eight clusters survived the threshold
(p < 0.001, k = 10 cluster-corrected; see Table S5). Of these, only one cluster of 16 voxels showed the pattern predicted by the salience hypothesis (a portion of right middle occipital gyrus, BA19). The remaining seven clusters (Table S6) tended to classify losses as ties. As shown in Figure 7B, searchlight analysis revealed a widespread tendency for the tie-loss classifier to classify wins as ties rather than as losses. Clusters surviving the threshold (p < 0.001, k = 10) were too numerous to list (116 clusters encompassing 7658 voxels), but none of these clusters showed a tendency to classify wins as losses.
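To make the bias test concrete, the following is a minimal sketch (not the authors' code) of how one might test whether a win-tie classifier, applied to held-out loss trials, labels them as ties more often than expected by chance; the per-subject counts and variable names are hypothetical placeholders.

```python
# Minimal sketch: test whether held-out loss trials are labeled "tie"
# by a win-tie classifier more often than the 50% expected by chance.
import numpy as np
from scipy import stats

# Hypothetical per-subject counts: loss trials labeled "tie" out of all
# loss trials shown to the win-tie classifier (placeholder values).
losses_labeled_tie = np.array([34, 29, 31, 40, 27, 35])
losses_total       = np.array([60, 55, 58, 62, 50, 61])

# Per-subject binomial test against chance (p = 0.5).
per_subject_p = [
    stats.binomtest(int(k), int(n), p=0.5).pvalue
    for k, n in zip(losses_labeled_tie, losses_total)
]

# Group-level test: one-sample t test of the proportion of "tie" labels
# against the chance level of 0.5 across subjects.
proportions = losses_labeled_tie / losses_total
t, p = stats.ttest_1samp(proportions, popmean=0.5)
print(per_subject_p, t, p)
```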
Therefore, the results of the two-way classification analyses were not consistent with the salience hypothesis. Winning or losing in a simple competitive game reliably led to different states in widely distributed neural regions, including regions not often implicated in reward or penalty processing. These states were distinct and stable enough across the course of the experiment to be decodable via MVPA trained on separate runs, despite strategic shifts and stochastically changing reward expectations for individual stimuli or motor choices throughout the experiment. Widely distributed reward signals were observed in the four volumes (8 s) following outcome offset. While the primary source of reinforcement and punishment signals may still be a limited and specialized set of neural regions, our findings suggest that, whatever the generating source, signals related to decision outcomes are almost ubiquitously distributed in the brain. These ubiquitous reward signals cannot be attributed to the computer’s recent choice (the visual stimulus), the human’s recent choice (the motor response), or strategic variables (switches versus stays).
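As an illustration of the cross-run decoding described above, here is a minimal sketch using scikit-learn with a leave-one-run-out scheme; the trial-wise voxel patterns X, outcome labels y, and run labels are hypothetical stand-ins, and this is not the authors' pipeline.

```python
# Minimal sketch: leave-one-run-out decoding of outcome (win/tie/loss)
# from trial-wise voxel patterns, training and testing on separate runs.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 240, 500, 6
X = rng.standard_normal((n_trials, n_voxels))            # placeholder voxel patterns
y = rng.integers(0, 3, n_trials)                         # 0 = win, 1 = tie, 2 = loss
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)  # run label per trial

# Train on all-but-one run, test on the held-out run, so decoding must
# generalize across runs despite shifts in strategy and reward expectation.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print("mean leave-one-run-out accuracy:", scores.mean())
```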