Summary
Many neural networks for medical imaging generalize poorly to data unseen during training. Such behavior can be caused by overfitting easy-to-learn features while disregarding other potentially informative features. A recent implicit bias mitigation technique called spectral decoupling provably encourages neural networks to learn more features by regularizing the networks' unnormalized prediction scores with an L2 penalty. We show that spectral decoupling increases the networks' robustness to data distribution shifts and prevents overfitting on easy-to-learn features in medical images. To validate our findings, we train networks with and without spectral decoupling to detect prostate cancer on tissue slides and COVID-19 in chest radiographs. Networks trained with spectral decoupling achieve up to 9.5 percentage points higher performance on external datasets. Spectral decoupling alleviates generalization issues associated with neural networks and can be used to complement or replace computationally expensive explicit bias mitigation methods, such as stain normalization in histological images.

Highlights
• We evaluate the first implicit bias mitigation method in medical imaging
• Spectral decoupling increases robustness to distribution shifts and shortcut learning
• Complements or replaces explicit mitigation methods, such as color normalization
• Up to 9.5 percentage points higher accuracy on external datasets with one line of code

Keywords: Medical tests; Medical imaging; Algorithms; Artificial intelligence
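The mechanism the abstract describes — adding an L2 penalty on the network's unnormalized prediction scores (logits) to the usual classification loss — can be sketched as follows. This is a minimal dependency-free illustration, not the paper's implementation; the function name and the regularization weight `lam` are placeholders, and in a deep-learning framework the same idea is the advertised "one line of code" added to the loss.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def spectral_decoupling_loss(logits, target, lam=0.01):
    """Cross-entropy plus an L2 penalty on the raw (unnormalized) logits.

    The L2 term is the spectral decoupling regularizer: it discourages the
    network from inflating the logits by exploiting a single easy-to-learn
    feature, encouraging it to rely on a broader set of features.
    `lam` is an illustrative hyperparameter value, not one from the paper.
    """
    probs = softmax(logits)
    cross_entropy = -math.log(probs[target])
    sd_penalty = lam * sum(z * z for z in logits)
    return cross_entropy + sd_penalty
```

In a framework such as PyTorch, the corresponding change is a single added term, e.g. `loss = criterion(logits, targets) + lam * (logits ** 2).mean()`.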