
CROCODILE: Causality aids RObustness via COntrastive DIsentangled LEarning

Written by Carloni, G., Tsaftaris, S.A., and Colantonio, S.

Deep learning image classifiers often struggle with domain shift, leading to significant performance degradation in real-world applications. In this paper, we introduce our CROCODILE framework, showing how tools from causality can foster a model's robustness to domain shift via feature disentanglement, contrastive learning losses, and the injection of prior knowledge. This way, the model relies less on spurious correlations, better learns the mechanism mapping images to predictions, and outperforms baselines on out-of-distribution (OOD) data. We apply our method to multi-label lung disease classification from chest X-rays (CXRs), using over 750,000 images from four datasets. Our bias-mitigation method improves domain generalization, broadening the applicability and reliability of deep learning models for safer medical image analysis. Find our code at: https://github.com/gianlucarloni/crocodile.
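To give a flavor of what "contrastive disentangled learning" can look like in practice, here is a minimal, hypothetical PyTorch sketch; it is not the released CROCODILE implementation (see the GitHub repository above for that). It shows one generic way to combine the ingredients named in the abstract: two projection heads split a backbone feature into a label-relevant ("causal") subspace and a domain-related ("spurious") subspace, an InfoNCE-style contrastive loss pulls together causal embeddings of two views of the same image, and an orthogonality penalty discourages the two subspaces from sharing information. All names (CausalSpuriousHeads, lambda_ortho, etc.) are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSpuriousHeads(nn.Module):
    """Two projection heads over a shared backbone feature: one intended
    for label-relevant (causal) factors, one for domain-related (spurious)
    factors. Both outputs are L2-normalized for cosine-based losses."""
    def __init__(self, feat_dim=512, proj_dim=128):
        super().__init__()
        self.causal = nn.Linear(feat_dim, proj_dim)
        self.spurious = nn.Linear(feat_dim, proj_dim)

    def forward(self, feats):
        zc = F.normalize(self.causal(feats), dim=-1)
        zs = F.normalize(self.spurious(feats), dim=-1)
        return zc, zs

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE with in-batch negatives: row i of `positive` is the positive
    for row i of `anchor`; every other row serves as a negative."""
    logits = anchor @ positive.t() / temperature          # (B, B) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

def orthogonality_penalty(zc, zs):
    """Push the per-sample cosine similarity between the causal and
    spurious projections toward zero, encouraging disentanglement."""
    return (zc * zs).sum(dim=-1).pow(2).mean()

# Toy usage: backbone features for two augmented views of the same batch.
B, D = 8, 512
feats_view1 = torch.randn(B, D)
feats_view2 = torch.randn(B, D)

heads = CausalSpuriousHeads(feat_dim=D)
zc1, zs1 = heads(feats_view1)
zc2, zs2 = heads(feats_view2)

lambda_ortho = 0.1  # hypothetical weighting of the disentanglement term
loss = info_nce(zc1, zc2) + lambda_ortho * (
    orthogonality_penalty(zc1, zs1) + orthogonality_penalty(zc2, zs2)
)
loss.backward()
print(float(loss))
```

In a full pipeline, the causal embedding would additionally feed the multi-label disease classifier, so that predictions depend on the disentangled, label-relevant subspace rather than on domain-specific shortcuts.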

International Conference: UNSURE 2024 workshop, held in conjunction with MICCAI 2024, Marrakesh, Morocco.

Carloni, G., Tsaftaris, S.A., Colantonio, S. (2025). CROCODILE: Causality Aids RObustness via COntrastive DIsentangled LEarning. In: Sudre, C.H., Mehta, R., Ouyang, C., Qin, C., Rakic, M., Wells, W.M. (eds) Uncertainty for Safe Utilization of Machine Learning in Medical Imaging. UNSURE 2024. Lecture Notes in Computer Science, vol 15167. Springer, Cham. https://doi.org/10.1007/978-3-031-73158-7_10