ABSTRACT
Gastric Intestinal Metaplasia (GIM) is one of the precancerous conditions in the gastric carcinogenesis cascade, and its optical diagnosis during endoscopic screening is challenging even for seasoned endoscopists. Several solutions leveraging pre-trained deep neural networks (DNNs) have recently been proposed to assist human diagnosis. In this paper, we present a comparative study of these architectures on a new dataset containing GIM and non-GIM Narrow-band imaging still frames. We find that the surveyed DNNs perform remarkably well on average, but we still measure sizeable inter-fold variability during cross-validation. An additional ad-hoc analysis suggests that these baseline architectures may not perform equally well at all scales when diagnosing GIM. Clinical relevance: Enhancing a clinician's ability to detect and localize intestinal metaplasia can be a crucial tool for gastric cancer management policies.
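The setup described here is standard transfer learning: an ImageNet pre-trained backbone is fine-tuned for binary GIM vs. non-GIM classification, and inter-fold variability is estimated with k-fold cross-validation. The following is a minimal sketch of that setup, not the paper's pipeline; the ResNet-50 backbone, fold count, and synthetic stand-in frames are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's exact pipeline): fine-tune a pre-trained DNN
# for binary GIM vs. non-GIM classification and measure inter-fold variability
# with stratified k-fold cross-validation. Backbone, folds, and data are assumed.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from torchvision import models


def build_gim_classifier() -> nn.Module:
    """Replace the head of an ImageNet pre-trained backbone with a binary head."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 2)  # GIM vs. non-GIM
    return model


def run_cross_validation(images: torch.Tensor, labels: np.ndarray, n_folds: int = 5):
    """Train and evaluate one model per fold, then report per-fold accuracy."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    criterion = nn.CrossEntropyLoss()
    fold_accuracies = []

    for fold, (train_idx, val_idx) in enumerate(skf.split(images, labels)):
        model = build_gim_classifier()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        # One illustrative training pass over the fold's training split.
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images[train_idx]), torch.as_tensor(labels[train_idx]))
        loss.backward()
        optimizer.step()

        # Evaluate on the held-out fold.
        model.eval()
        with torch.no_grad():
            preds = model(images[val_idx]).argmax(dim=1).numpy()
        acc = float((preds == labels[val_idx]).mean())
        fold_accuracies.append(acc)
        print(f"fold {fold}: accuracy = {acc:.3f}")

    print(f"mean ± std: {np.mean(fold_accuracies):.3f} ± {np.std(fold_accuracies):.3f}")


if __name__ == "__main__":
    # Synthetic stand-in for NBI still frames (the real dataset is not public here).
    frames = torch.randn(16, 3, 224, 224)
    targets = np.array([0, 1] * 8)
    run_cross_validation(frames, targets, n_folds=4)
```

Reporting the per-fold spread (rather than only the mean) is what exposes the inter-fold variability noted in the abstract.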
Subject(s)
Deep Learning, Precancerous Lesions, Humans, Gastroscopy/methods, Stomach/diagnostic imaging, Metaplasia, Precancerous Lesions/diagnosis

ABSTRACT
This work considers the problem of segmenting heart sounds into their fundamental components. We unify statistical and data-driven solutions by introducing Markov-based Neural Networks (MNNs), a hybrid end-to-end framework that exploits Markov models as statistical inductive biases for an Artificial Neural Network (ANN) discriminator. We show that an MNN leveraging a simple one-dimensional Convolutional ANN significantly outperforms two recent purely data-driven solutions for this task on two publicly available datasets: PhysioNet 2016 (Sensitivity: 0.947 ±0.02; Positive Predictive Value: 0.937 ±0.025) and CirCor DigiScope 2022 (Sensitivity: 0.950 ±0.008; Positive Predictive Value: 0.943 ±0.012). We also propose a novel gradient-based unsupervised learning algorithm that effectively makes the MNN adaptive to unseen data sampled from unknown distributions. We perform a cross-dataset analysis and show that, using this method, an MNN pre-trained on CirCor DigiScope 2022 benefits from an average improvement of 3.90% in Positive Predictive Value on unseen observations from the PhysioNet 2016 dataset.
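To make the hybrid idea concrete, the sketch below illustrates one way a Markov model can act as an inductive bias for a 1-D convolutional discriminator: the network scores each frame for the four heart-sound states (S1, systole, S2, diastole), and a cyclic transition matrix constrains decoding via Viterbi. This is not the authors' MNN implementation; the network depth, state ordering, and transition probabilities are illustrative assumptions.

```python
# A minimal sketch of a Markov prior combined with a 1-D CNN frame scorer
# (illustrative only; not the MNN described in the abstract).
import numpy as np
import torch
import torch.nn as nn

STATES = ["S1", "systole", "S2", "diastole"]


class FrameScorer(nn.Module):
    """Tiny 1-D CNN mapping an envelope sequence to per-frame state log-scores."""

    def __init__(self, in_channels: int = 1, n_states: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, n_states, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, time, n_states) log-probabilities
        return torch.log_softmax(self.net(x).transpose(1, 2), dim=-1)


def viterbi(log_emissions: np.ndarray, log_transitions: np.ndarray) -> np.ndarray:
    """Most likely state path given per-frame log-scores and a Markov prior."""
    T, S = log_emissions.shape
    delta = np.full((T, S), -np.inf)
    backptr = np.zeros((T, S), dtype=int)
    delta[0] = log_emissions[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_transitions  # rows: prev, cols: next
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path


if __name__ == "__main__":
    # Cyclic transition prior: each state mostly self-loops, otherwise advances
    # to the next state of the cardiac cycle (assumed values, not fitted).
    A = np.full((4, 4), 1e-6)
    for s in range(4):
        A[s, s] = 0.9
        A[s, (s + 1) % 4] = 0.1
    log_A = np.log(A / A.sum(axis=1, keepdims=True))

    scorer = FrameScorer()
    envelope = torch.randn(1, 1, 200)        # stand-in for a PCG envelope
    with torch.no_grad():
        log_em = scorer(envelope)[0].numpy()  # (time, states)
    decoded = viterbi(log_em, log_A)
    print([STATES[s] for s in decoded[:10]])
```

The transition matrix encodes the fixed S1 → systole → S2 → diastole cycle, which is the kind of statistical structure the abstract refers to as an inductive bias for the ANN discriminator.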