Deep learning (DL) models have significantly transformed machine learning (ML), particularly through their prowess in classification tasks. However, these models struggle to differentiate between in-distribution (ID) and out-of-distribution (OOD) data at test time. This challenge has curtailed their deployment in sensitive fields like biotechnology, where misidentifying OOD data, such as unclear or unknown bacterial genomic sequences, as known ID classes could lead to dire consequences. To address this, we propose an approach that makes DL models OOD-sensitive by incorporating the configuration of the logit-space embeddings into the model’s decision-making process. Building on recent observations that the embeddings of ID and OOD data overlap only minimally, we use a density estimator to model the ID logit distribution based on the training data. This allows us to reliably flag data that do not match the ID distribution as OOD. Our methodology is independent of the specific data or model architecture and can seamlessly augment existing trained models without exposing them to OOD data. Testing our method on widely recognized image datasets, we achieve leading-edge results, including a substantial 10% improvement in the area under the receiver operating characteristic curve (AUCROC) on the Google genome dataset.
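The detection mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single multivariate Gaussian as the density estimator (the actual estimator may differ), and the logit arrays are synthetic placeholders standing in for logits produced by a trained classifier on ID training data and on test inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for classifier logits: ID logits cluster away
# from OOD logits, mimicking the minimal-overlap effect exploited here.
id_logits = rng.normal(loc=5.0, scale=1.0, size=(1000, 10))
ood_logits = rng.normal(loc=0.0, scale=1.0, size=(200, 10))

# Fit a simple Gaussian density estimator to the ID logit distribution.
mu = id_logits.mean(axis=0)
cov = np.cov(id_logits, rowvar=False) + 1e-6 * np.eye(id_logits.shape[1])
cov_inv = np.linalg.inv(cov)

def log_density(x):
    """Unnormalized Gaussian log-density of logit vectors under the ID fit."""
    d = x - mu
    return -0.5 * np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Calibrate a threshold on the ID training scores alone (no OOD data
# needed): logits scoring below the 5th ID percentile are flagged as OOD.
threshold = np.percentile(log_density(id_logits), 5)

def is_ood(x):
    return log_density(x) < threshold

print("OOD flagged:", is_ood(ood_logits).mean())
print("ID flagged:", is_ood(id_logits).mean())
```

Because the threshold is set purely from the ID score distribution, the detector augments an already-trained model without retraining or any exposure to OOD examples, as the abstract requires.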
Part of ISBN 9783031824838
QC 20250404