Results 1 - 20 of 38
1.
Brief Bioinform ; 23(4)2022 07 18.
Article in English | MEDLINE | ID: mdl-35709752

ABSTRACT

Unintended inhibition of the human ether-à-go-go-related gene (hERG) ion channel by small molecules leads to severe cardiotoxicity. Thus, hERG channel blockage is a significant concern in the development of new drugs. Several computational models have been developed to predict hERG channel blockage, including deep learning models; however, they lack robustness, reliability and interpretability. Here, we developed a graph-based Bayesian deep learning model for hERG channel blocker prediction, named BayeshERG, which has robust predictive power, high reliability and high resolution of interpretability. First, we applied transfer learning, pre-training on a large dataset of 300 000 molecules, to increase the predictive performance. Second, we implemented a Bayesian neural network with Monte Carlo dropout to calibrate the uncertainty of the prediction. Third, we utilized global multihead attentive pooling to provide high-resolution structural interpretability for the hERG channel blockers and nonblockers. We conducted both internal and external validations for stringent evaluation; in particular, we benchmarked most of the publicly available hERG channel blocker prediction models. We showed that our proposed model outperformed the existing models in both predictive performance and uncertainty calibration. Furthermore, we found that our model learned to focus on the essential substructures of hERG channel blockers via an attention mechanism. Finally, we validated the prediction results of our model by conducting in vitro experiments and confirmed its high validity. In summary, BayeshERG could serve as a versatile tool for discovering hERG channel blockers and helping maximize the possibility of successful drug discovery. The data and source code are available at our GitHub repository (https://github.com/GIST-CSBL/BayeshERG).
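The Monte Carlo dropout step described above can be sketched as follows. The toy stochastic classifier, the sample count, and the binary blocker/nonblocker setting are illustrative assumptions, not details of BayeshERG itself:

```python
import math
import random

def mc_dropout_predict(stochastic_forward, x, n_samples=50):
    """Average a binary classifier's output over stochastic forward
    passes (dropout kept active at test time) and report the predictive
    entropy of the averaged probability as an uncertainty measure."""
    probs = [stochastic_forward(x) for _ in range(n_samples)]
    mean_p = sum(probs) / n_samples
    eps = 1e-12
    entropy = -(mean_p * math.log(mean_p + eps)
                + (1 - mean_p) * math.log(1 - mean_p + eps))
    return mean_p, entropy

# Toy stochastic classifier standing in for a dropout network.
random.seed(0)
def noisy_net(x):
    return min(max(0.8 + random.gauss(0, 0.05), 0.0), 1.0)

blocker_prob, uncertainty = mc_dropout_predict(noisy_net, x=None)
```

A well-calibrated model would report high entropy exactly on the inputs where its stochastic passes disagree.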


Subjects
Deep Learning; Ether-A-Go-Go Potassium Channels; Bayes Theorem; Ether-A-Go-Go Potassium Channels/chemistry; Ether-A-Go-Go Potassium Channels/genetics; Humans; Potassium Channel Blockers/chemistry; Potassium Channel Blockers/pharmacology; Reproducibility of Results
2.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38888097

ABSTRACT

Convolutional neural networks (CNNs) provide flexible function approximations for a wide variety of applications when the input variables are in the form of images or spatial data. Although CNNs often outperform traditional statistical models in prediction accuracy, statistical inference, such as estimating the effects of covariates and quantifying the prediction uncertainty, is not trivial due to the highly complicated model structure and overparameterization. To address this challenge, we propose a new Bayesian approach by embedding CNNs within the generalized linear models (GLMs) framework. We use extracted nodes from the last hidden layer of CNN with Monte Carlo (MC) dropout as informative covariates in GLM. This improves accuracy in prediction and regression coefficient inference, allowing for the interpretation of coefficients and uncertainty quantification. By fitting ensemble GLMs across multiple realizations from MC dropout, we can account for uncertainties in extracting the features. We apply our methods to biological and epidemiological problems, which have both high-dimensional correlated inputs and vector covariates. Specifically, we consider malaria incidence data, brain tumor image data, and fMRI data. By extracting information from correlated inputs, the proposed method can provide an interpretable Bayesian analysis. The algorithm is broadly applicable to image regression and correlated data analysis, enabling fast and accurate Bayesian inference.
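A minimal sketch of the ensemble-GLM idea: fit one GLM per MC-dropout feature realization and pool the coefficients, so feature-extraction noise shows up in the coefficient uncertainty. The one-covariate least-squares model and the toy stochastic feature extractor are stand-ins for the CNN's last hidden layer:

```python
import random
import statistics

def fit_slope(xs, ys):
    """Ordinary least-squares slope for a single covariate, no intercept
    (a Gaussian GLM with identity link)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def ensemble_glm_over_dropout(extract_features, ys, n_realizations=30):
    """Fit one GLM per MC-dropout feature realization and pool the
    coefficient estimates across realizations."""
    slopes = [fit_slope(extract_features(), ys) for _ in range(n_realizations)]
    return statistics.mean(slopes), statistics.stdev(slopes)

random.seed(1)
base = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [2.0 * x for x in base]                    # noiseless toy response
def noisy_features():                           # one dropout realization
    return [x + random.gauss(0, 0.05) for x in base]

coef_mean, coef_sd = ensemble_glm_over_dropout(noisy_features, ys)
```

The pooled standard deviation `coef_sd` is the part of the coefficient uncertainty attributable to the stochastic feature extraction.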


Subjects
Bayes Theorem; Brain Neoplasms; Magnetic Resonance Imaging; Monte Carlo Method; Neural Networks, Computer; Humans; Linear Models; Magnetic Resonance Imaging/statistics & numerical data; Magnetic Resonance Imaging/methods; Malaria/epidemiology; Algorithms
3.
BMC Med Imaging ; 23(1): 162, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37858043

ABSTRACT

BACKGROUND: Deterministic deep learning models have achieved state-of-the-art performance in various medical image analysis tasks, including nuclei segmentation from histopathology images. These models focus on improving prediction accuracy without assessing the confidence in the predictions. METHODS: We propose a semantic segmentation model using Bayesian representation to segment nuclei from histopathology images and to further quantify the epistemic uncertainty. We employ Bayesian approximation with Monte Carlo (MC) dropout during inference to estimate the model's prediction uncertainty. RESULTS: We evaluate the performance of the proposed approach on the PanNuke dataset, which consists of 312 visual fields from 19 organ types. We compare the nuclei segmentation accuracy of our approach with that of a fully convolutional neural network, U-Net, SegNet, and the state-of-the-art Hover-net. We use F1-score and intersection over union (IoU) as the evaluation metrics. The proposed approach achieves a mean F1-score of 0.893 ± 0.008 and an IoU value of 0.868 ± 0.003 on the test set of the PanNuke dataset. These results outperform Hover-net, which has a mean F1-score of 0.871 ± 0.010 and an IoU value of 0.840 ± 0.032. CONCLUSIONS: The proposed approach, which incorporates Bayesian representation and Monte Carlo dropout, demonstrates superior performance in segmenting nuclei from histopathology images compared to existing models such as U-Net, SegNet, and Hover-net. By considering the epistemic uncertainty, our model provides a more reliable estimation of the prediction confidence. These findings highlight the potential of Bayesian deep learning for improving medical image analysis tasks and can contribute to the development of more accurate and reliable computer-aided diagnostic systems.


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Bayes Theorem; Neural Networks, Computer; Cell Nucleus
4.
Stat Sin ; 33(SI): 1507-1532, 2023 May.
Article in English | MEDLINE | ID: mdl-37409184

ABSTRACT

In Bayesian data analysis, it is often important to evaluate quantiles of the posterior distribution of a parameter of interest (e.g., to form posterior intervals). In multi-dimensional problems, when non-conjugate priors are used, this is often difficult, generally requiring an analytic or sampling-based approximation such as Markov chain Monte Carlo (MCMC), approximate Bayesian computation (ABC) or variational inference. We discuss a general approach that reframes this as a multi-task learning problem and uses recurrent deep neural networks (RNNs) to approximately evaluate posterior quantiles. As RNNs carry information along a sequence, this approach is particularly useful for time series. An advantage of this risk-minimization approach is that we do not need to sample from the posterior or calculate the likelihood. We illustrate the proposed approach in several examples.
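Quantile outputs of this kind are typically trained with the check (pinball) loss, whose minimiser over predictions is the target quantile. The brute-force grid search below stands in for the gradient-based training the paper's RNNs would use, and the loss choice is a standard assumption rather than the paper's exact objective:

```python
def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss; its minimiser over predictions is the
    tau-quantile, which is what lets a network output quantiles."""
    diff = y_true - y_pred
    return tau * diff if diff >= 0 else (tau - 1) * diff

def quantile_by_grid_search(samples, tau, grid):
    """Brute-force stand-in for gradient descent: pick the grid value
    minimising the average pinball loss over the samples."""
    avg_loss = lambda q: sum(pinball_loss(y, q, tau) for y in samples) / len(samples)
    return min(grid, key=avg_loss)

samples = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
grid = [v / 10 for v in range(0, 120)]
median = quantile_by_grid_search(samples, 0.5, grid)   # 0.5-quantile
q90 = quantile_by_grid_search(samples, 0.9, grid)      # 0.9-quantile
```

Because the loss is minimised rather than a likelihood evaluated, no posterior sampling is needed, which mirrors the risk-minimization advantage noted above.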

5.
Entropy (Basel) ; 25(3)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36981295

ABSTRACT

Click-through rate (CTR) prediction is a key task for evaluating recommendation systems and estimating ad traffic. Existing studies have shown that deep learning performs very well on prediction tasks, but most are based on deterministic models, which leave a large gap in capturing uncertainty. Modeling uncertainty is a major challenge when using machine learning to solve real-world problems across domains. To quantify model uncertainty and achieve accurate, reliable predictions, this paper designs a CTR prediction framework combining feature selection and feature interaction. Within this framework, a CTR prediction model based on Bayesian deep learning is proposed to quantify the uncertainty in the prediction model. On a parallel squeeze-network and DNN prediction architecture, the approximate posterior distribution of the model parameters is obtained using Monte Carlo dropout, yielding ensembled prediction results. Epistemic and aleatoric uncertainty are defined, and information entropy is adopted to calculate the sum of the two kinds of uncertainty, while epistemic uncertainty alone is measured by mutual information. Experimental results show that the proposed model is superior to other models in prediction performance and has the ability to quantify uncertainty.
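The entropy-based decomposition described above, splitting total predictive entropy into an aleatoric expected-entropy term and an epistemic mutual-information term, can be computed directly from MC dropout samples; the toy probability vectors are illustrative:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def decompose_uncertainty(mc_probs):
    """Split total predictive entropy H[mean p] into aleatoric
    (mean H[p], the expected entropy) and epistemic (their difference,
    the mutual information) parts from MC-dropout samples."""
    n, k = len(mc_probs), len(mc_probs[0])
    mean_p = [sum(p[c] for p in mc_probs) / n for c in range(k)]
    total = entropy(mean_p)
    aleatoric = sum(entropy(p) for p in mc_probs) / n
    return total, aleatoric, total - aleatoric

# Confident but disagreeing samples -> mostly epistemic uncertainty.
t1, a1, e1 = decompose_uncertainty([[0.95, 0.05], [0.05, 0.95]])
# Agreeing but uncertain samples -> purely aleatoric uncertainty.
t2, a2, e2 = decompose_uncertainty([[0.5, 0.5], [0.5, 0.5]])
```

The two toy cases show why the decomposition matters: both have the same total entropy, but only the first can be reduced by collecting more training data.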

6.
Entropy (Basel) ; 25(6)2023 May 31.
Article in English | MEDLINE | ID: mdl-37372228

ABSTRACT

Sequential Bayesian inference can be used for continual learning to prevent catastrophic forgetting of past tasks and provide an informative prior when learning new tasks. We revisit sequential Bayesian inference and assess whether using the previous task's posterior as a prior for a new task can prevent catastrophic forgetting in Bayesian neural networks. Our first contribution is to perform sequential Bayesian inference using Hamiltonian Monte Carlo. We propagate the posterior as a prior for new tasks by approximating the posterior via fitting a density estimator on Hamiltonian Monte Carlo samples. We find that this approach fails to prevent catastrophic forgetting, demonstrating the difficulty in performing sequential Bayesian inference in neural networks. From there, we study simple analytical examples of sequential Bayesian inference and continual learning and highlight the issue of model misspecification, which can lead to sub-optimal continual learning performance despite exact inference. Furthermore, we discuss how task data imbalances can cause forgetting. From these limitations, we argue that we need probabilistic models of the continual learning generative process rather than relying on sequential Bayesian inference over Bayesian neural network weights. Our final contribution is to propose a simple baseline called Prototypical Bayesian Continual Learning, which is competitive with the best-performing Bayesian continual learning methods on class incremental continual learning computer vision benchmarks.
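The posterior-to-prior chaining at the heart of sequential Bayesian inference can be made exact in a conjugate toy model. This Gaussian-mean example (with known noise variance, an assumption) illustrates the recursion that the paper approximates with density estimators over HMC samples:

```python
def gaussian_posterior(prior_mu, prior_var, data, noise_var):
    """Exact conjugate update for a Gaussian mean with known noise
    variance; the returned posterior becomes the prior for the next task."""
    post_var = 1 / (1 / prior_var + len(data) / noise_var)
    post_mu = post_var * (prior_mu / prior_var + sum(data) / noise_var)
    return post_mu, post_var

# Chain posterior -> prior across two "tasks".
mu0, var0 = 0.0, 10.0
mu1, var1 = gaussian_posterior(mu0, var0, [1.0, 1.2, 0.8], noise_var=1.0)
mu2, var2 = gaussian_posterior(mu1, var1, [2.0, 2.2, 1.8], noise_var=1.0)

# In the conjugate case, sequential updating matches a single batch update.
mu_batch, var_batch = gaussian_posterior(
    mu0, var0, [1.0, 1.2, 0.8, 2.0, 2.2, 1.8], noise_var=1.0)
```

In this exact setting nothing is forgotten, which is precisely why the paper's negative result for neural networks points to approximation error and model misspecification rather than to the Bayesian recursion itself.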

7.
Hum Brain Mapp ; 43(6): 1895-1916, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35023255

ABSTRACT

Post-hemorrhagic hydrocephalus (PHH) is a severe complication of intraventricular hemorrhage (IVH) in very preterm infants. PHH monitoring and treatment decisions rely heavily on manual and subjective two-dimensional measurements of the ventricles. Automatic and reliable three-dimensional (3D) measurements of the ventricles may provide a more accurate assessment of PHH, and lead to improved monitoring and treatment decisions. To accurately and efficiently obtain these 3D measurements, automatic segmentation of the ventricles can be explored. However, this segmentation is challenging due to the large ventricular anatomical shape variability in preterm infants diagnosed with PHH. This study aims to (a) propose a Bayesian U-Net method using 3D spatial concrete dropout for automatic brain segmentation (with uncertainty assessment) of preterm infants with PHH; and (b) compare the Bayesian method to three reference methods: DenseNet, U-Net, and ensemble learning using DenseNets and U-Nets. A total of 41 T2-weighted MRIs from 27 preterm infants were manually segmented into lateral ventricles, external CSF, white and cortical gray matter, brainstem, and cerebellum. These segmentations were used as ground truth for model evaluation. All methods were trained and evaluated using 4-fold cross-validation and segmentation endpoints, with additional uncertainty endpoints for the Bayesian method. In the lateral ventricles, segmentation endpoint values for the DenseNet, U-Net, ensemble learning, and Bayesian U-Net methods were mean Dice score = 0.814 ± 0.213, 0.944 ± 0.041, 0.942 ± 0.042, and 0.948 ± 0.034, respectively. Uncertainty endpoint values for the Bayesian U-Net were mean recall = 0.953 ± 0.037, mean negative predictive value = 0.998 ± 0.005, mean accuracy = 0.906 ± 0.032, and mean AUC = 0.949 ± 0.031. To conclude, the Bayesian U-Net showed the best segmentation results across all methods and provided accurate uncertainty maps. This method may be used in clinical practice for automatic brain segmentation of preterm infants with PHH, and lead to better PHH monitoring and more informed treatment decisions.


Subjects
Hydrocephalus; Infant, Premature; Bayes Theorem; Cerebral Hemorrhage/complications; Cerebral Hemorrhage/diagnostic imaging; Cerebral Ventricles/diagnostic imaging; Humans; Hydrocephalus/complications; Hydrocephalus/etiology; Infant; Infant, Newborn
8.
BMC Cancer ; 22(1): 1001, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36131239

ABSTRACT

BACKGROUND: Despite the fact that tumor microenvironment (TME) and gene mutations are the main determinants of progression of the deadliest cancer in the world - lung cancer, their interrelations are not well understood. Digital pathology data provides a unique insight into the spatial composition of the TME. Various spatial metrics and machine learning approaches have been proposed for prediction of either patient survival or gene mutations from this data. Still, these approaches are limited in the scope of analyzed features and in their explainability, and as such fail to transfer to clinical practice. METHODS: Here, we generated 23,199 image patches from 26 hematoxylin-and-eosin (H&E)-stained lung cancer tissue sections and annotated them into 9 different tissue classes. Using this dataset, we trained a deep neural network ARA-CNN. Next, we applied the trained network to segment 467 lung cancer H&E images from The Cancer Genome Atlas (TCGA) database. We used the segmented images to compute human-interpretable features reflecting the heterogeneous composition of the TME, and successfully utilized them to predict patient survival and cancer gene mutations. RESULTS: We achieved per-class AUC ranging from 0.72 to 0.99 for classifying tissue types in lung cancer with ARA-CNN. Machine learning models trained on the proposed human-interpretable features achieved a c-index of 0.723 in the task of survival prediction and AUC up to 73.5% for PDGFRB in the task of mutation classification. CONCLUSIONS: We presented a framework that accurately predicted survival and gene mutations in lung adenocarcinoma patients based on human-interpretable features extracted from H&E slides. Our approach can provide important insights for designing novel cancer treatments, by linking the spatial structure of the TME in lung adenocarcinoma to gene mutations and patient survival. It can also expand our understanding of the effects that the TME has on tumor evolutionary processes. Our approach can be generalized to different cancer types to inform precision medicine strategies.


Subjects
Adenocarcinoma of Lung; Carcinoma, Non-Small-Cell Lung; Deep Learning; Lung Neoplasms; Adenocarcinoma of Lung/genetics; Carcinoma, Non-Small-Cell Lung/genetics; Eosine Yellowish-(YS); Hematoxylin; Humans; Lung Neoplasms/genetics; Lung Neoplasms/pathology; Mutation; Receptor, Platelet-Derived Growth Factor beta; Tumor Microenvironment/genetics
9.
Inverse Probl ; 38(10): 104004, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-37745782

ABSTRACT

Deep learning-based image reconstruction approaches have demonstrated impressive empirical performance in many imaging modalities. These approaches usually require a large amount of high-quality paired training data, which is often not available in medical imaging. To circumvent this issue we develop a novel unsupervised knowledge-transfer paradigm for learned reconstruction within a Bayesian framework. The proposed approach learns a reconstruction network in two phases. The first phase trains a reconstruction network with a set of ordered pairs comprising ground truth images of ellipses and the corresponding simulated measurement data. The second phase fine-tunes the pretrained network to more realistic measurement data without supervision. By construction, the framework is capable of delivering predictive uncertainty information over the reconstructed image. We present extensive experimental results on low-dose and sparse-view computed tomography showing that the approach is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques. Moreover, for test data distributed differently from the training data, the proposed framework can significantly improve reconstruction quality not only visually, but also quantitatively in terms of PSNR and SSIM, when compared with learned methods trained on the synthetic dataset only.

10.
Sensors (Basel) ; 22(10)2022 May 16.
Article in English | MEDLINE | ID: mdl-35632183

ABSTRACT

Seismic response prediction is a challenging problem and is significant in every stage of a structure's life cycle. Deep neural networks have proven to be efficient tools for predicting structural response. However, a conventional neural network with deterministic parameters is unable to predict the random dynamic response of structures. In this paper, a deep Bayesian convolutional neural network is proposed to predict seismic response. The Bayes-backpropagation algorithm is applied to train the proposed Bayesian deep learning model. A numerical example of a three-dimensional building structure is utilized to validate the performance of the proposed model. The result shows that both acceleration and displacement responses can be predicted with a high level of accuracy by using the proposed method. The main statistical indices of prediction results agree closely with the results from finite element analysis. Furthermore, the influence of random parameters and the robustness of the proposed model are discussed.


Subjects
Deep Learning; Algorithms; Bayes Theorem; Neural Networks, Computer
11.
Sensors (Basel) ; 21(5)2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33668950

ABSTRACT

In addition to helping develop products that aid the disabled, brain-computer interface (BCI) technology can also become a modality of entertainment for all people. However, most BCI games cannot be widely promoted due to poor control performance or because they easily cause fatigue. In this paper, we propose a P300 brain-computer interface game (MindGomoku) to explore a feasible and natural way to play games by using electroencephalogram (EEG) signals in a practical environment. The novelty of this research is reflected in integrating the characteristics of game rules and the BCI system when designing BCI games and paradigms. Moreover, a simplified Bayesian convolutional neural network (SBCNN) algorithm is introduced to achieve high accuracy on limited training samples. To prove the reliability of the proposed algorithm and system control, 10 subjects were selected to participate in two online control experiments. The experimental results showed that all subjects successfully completed the game control with an average accuracy of 90.7% and played the MindGomoku an average of more than 11 min. These findings fully demonstrate the stability and effectiveness of the proposed system. This BCI system not only provides a form of entertainment for users, particularly the disabled, but also provides more possibilities for games.


Subjects
Brain-Computer Interfaces; Deep Learning; Bayes Theorem; Electroencephalography; Humans; Reproducibility of Results
12.
Entropy (Basel) ; 23(3)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802360

ABSTRACT

The 3D modelling of indoor environments and the generation of process simulations play an important role in factory and assembly planning. In brownfield planning cases, existing data are often outdated and incomplete especially for older plants, which were mostly planned in 2D. Thus, current environment models cannot be generated directly on the basis of existing data and a holistic approach on how to build such a factory model in a highly automated fashion is mostly non-existent. Major steps in generating an environment model of a production plant include data collection, data pre-processing and object identification as well as pose estimation. In this work, we elaborate on a methodical modelling approach, which starts with the digitalization of large-scale indoor environments and ends with the generation of a static environment or simulation model. The object identification step is realized using a Bayesian neural network capable of point cloud segmentation. We elaborate on the impact of the uncertainty information estimated by a Bayesian segmentation framework on the accuracy of the generated environment model. The steps of data collection and point cloud segmentation as well as the resulting model accuracy are evaluated on a real-world data set collected at the assembly line of a large-scale automotive production plant. The Bayesian segmentation network clearly surpasses the performance of the frequentist baseline and allows us to considerably increase the accuracy of the model placement in a simulation scene.

13.
J Med Internet Res ; 22(12): e18418, 2020 12 16.
Article in English | MEDLINE | ID: mdl-33325832

ABSTRACT

BACKGROUND: Despite excellent prediction performance, noninterpretability has undermined the value of applying deep-learning algorithms in clinical practice. To overcome this limitation, attention mechanism has been introduced to clinical research as an explanatory modeling method. However, potential limitations of using this attractive method have not been clarified to clinical researchers. Furthermore, there has been a lack of introductory information explaining attention mechanisms to clinical researchers. OBJECTIVE: The aim of this study was to introduce the basic concepts and design approaches of attention mechanisms. In addition, we aimed to empirically assess the potential limitations of current attention mechanisms in terms of prediction and interpretability performance. METHODS: First, the basic concepts and several key considerations regarding attention mechanisms were identified. Second, four approaches to attention mechanisms were suggested according to a two-dimensional framework based on the degrees of freedom and uncertainty awareness. Third, the prediction performance, probability reliability, concentration of variable importance, consistency of attention results, and generalizability of attention results to conventional statistics were assessed in the diabetic classification modeling setting. Fourth, the potential limitations of attention mechanisms were considered. RESULTS: Prediction performance was very high for all models. Probability reliability was high in models with uncertainty awareness. Variable importance was concentrated in several variables when uncertainty awareness was not considered. The consistency of attention results was high when uncertainty awareness was considered. The generalizability of attention results to conventional statistics was poor regardless of the modeling approach. CONCLUSIONS: The attention mechanism is an attractive technique with potential to be very promising in the future. However, it may not yet be desirable to rely on this method to assess variable importance in clinical settings. Therefore, along with theoretical studies enhancing attention mechanisms, more empirical studies investigating potential limitations should be encouraged.


Subjects
Deep Learning/standards; Diabetes Mellitus/epidemiology; Algorithms; Empirical Research; Humans; Reproducibility of Results; Republic of Korea; Research Design
14.
Sensors (Basel) ; 20(21)2020 Oct 23.
Article in English | MEDLINE | ID: mdl-33113927

ABSTRACT

We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while only adding very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. By representing the a posteriori uncertainty of the network parameters per network layer and depending on the estimated parameter expectation values, only very few additional parameters need to be optimized compared to a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected.
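A minimal sketch of the "very few extra parameters" idea: a linear layer whose Gaussian weight posterior shares a single scale parameter across the layer, sampled with the reparameterisation trick. The single shared scale is a simplifying assumption; the paper ties the scales to the estimated parameter expectation values per layer:

```python
import math
import random

class VariationalLayer:
    """Linear layer with a Gaussian posterior over its weights that
    shares one scale parameter across the layer, so modelling
    uncertainty adds only a single extra parameter to optimise."""

    def __init__(self, weight_means, log_sigma=-2.0):
        self.mu = weight_means      # usual (mean) parameters
        self.log_sigma = log_sigma  # one shared posterior scale

    def sample_output(self, x):
        # Reparameterisation trick: w = mu + sigma * eps, eps ~ N(0, 1).
        sigma = math.exp(self.log_sigma)
        w = [m + sigma * random.gauss(0, 1) for m in self.mu]
        return sum(wi * xi for wi, xi in zip(w, x))

random.seed(2)
layer = VariationalLayer(weight_means=[1.0, -0.5])
samples = [layer.sample_output([2.0, 2.0]) for _ in range(2000)]
mean_out = sum(samples) / len(samples)
```

The spread of `samples` around `mean_out` is what a credible interval for the network prediction would be built from.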

15.
Entropy (Basel) ; 23(1)2020 Dec 30.
Article in English | MEDLINE | ID: mdl-33396677

ABSTRACT

Probabilistic predictions with machine learning are important in many applications. These are commonly done with Bayesian learning algorithms. However, Bayesian learning methods are computationally expensive in comparison with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner but does not include predictive uncertainty. To represent predictive uncertainty in federated learning, our suggestion is to introduce uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution for the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach can achieve similar performance as the benchmark would achieve in a non-distributed setting.
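The aggregation idea, treating the clients' weight vectors as samples from a posterior over global weights and averaging predictive distributions rather than the weights themselves, can be sketched as follows; the scalar logistic model is illustrative:

```python
import math

def predict(w, x):
    """Toy one-parameter logistic model; each client's weight plays the
    role of one posterior draw for the global model."""
    p = 1 / (1 + math.exp(-w * x))
    return [1 - p, p]

def aggregate_as_posterior(local_weights, model_predict, x):
    """Treat the set of client weights as posterior samples for the
    global model: average predictive distributions instead of collapsing
    the weights to a single FedAvg-style point estimate."""
    preds = [model_predict(w, x) for w in local_weights]
    n_classes = len(preds[0])
    return [sum(p[c] for p in preds) / len(preds) for c in range(n_classes)]

client_weights = [0.8, 1.0, 1.2]   # weights learned on three local datasets
posterior_pred = aggregate_as_posterior(client_weights, predict, x=2.0)
```

Averaging in prediction space preserves the spread across clients as predictive uncertainty, which a single averaged weight vector would discard.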

16.
Med Image Anal ; 95: 103162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38593644

ABSTRACT

Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as commonly seen in the medical field. The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian Neural Network with Out-of-Distribution detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for the class imbalance, aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world Panda dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts which harm the performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. For both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled Panda data.
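A hypothetical acquisition score in the spirit of FocAL: reward class-weighted epistemic uncertainty while penalising aleatoric uncertainty (ambiguity) and the OoD score (artifacts). The linear combination and all numbers below are assumptions for illustration, not the paper's exact acquisition function:

```python
def acquisition_score(epistemic, aleatoric, ood_score, class_weight):
    """Illustrative FocAL-style score: high class-weighted epistemic
    uncertainty is informative; ambiguity and artifacts are penalised."""
    return class_weight * epistemic - aleatoric - ood_score

candidates = {
    "informative": dict(epistemic=0.9, aleatoric=0.1, ood_score=0.0, class_weight=1.5),
    "ambiguous":   dict(epistemic=0.4, aleatoric=0.9, ood_score=0.0, class_weight=1.0),
    "artifact":    dict(epistemic=0.8, aleatoric=0.1, ood_score=1.0, class_weight=1.0),
}
best = max(candidates, key=lambda name: acquisition_score(**candidates[name]))
```

A purely epistemic acquisition rule would have been "distracted" by the artifact candidate here; separating the three signals is what keeps the annotation budget on genuinely informative images.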


Subjects
Prostatic Neoplasms; Humans; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Male; Machine Learning; Bayes Theorem; Algorithms; Image Interpretation, Computer-Assisted/methods; Artifacts; Neural Networks, Computer
17.
Int J Comput Assist Radiol Surg ; 19(11): 2177-2186, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38282095

ABSTRACT

PURPOSE: Manual annotations for training deep learning models in auto-segmentation are time-intensive. This study introduces a hybrid representation-enhanced sampling strategy that integrates both density and diversity criteria within an uncertainty-based Bayesian active learning (BAL) framework to reduce annotation efforts by selecting the most informative training samples. METHODS: The experiments are performed on two lower extremity datasets of MRI and CT images, focusing on the segmentation of the femur, pelvis, sacrum, quadriceps femoris, hamstrings, adductors, sartorius, and iliopsoas, utilizing a U-net-based BAL framework. Our method selects uncertain samples with high density and diversity for manual revision, optimizing for maximal similarity to unlabeled instances and minimal similarity to existing training data. We assess the accuracy and efficiency using Dice and a proposed metric called reduced annotation cost (RAC), respectively. We further evaluate the impact of various acquisition rules on BAL performance and design an ablation study for effectiveness estimation. RESULTS: In MRI and CT datasets, our method was superior or comparable to existing ones, achieving a 0.8% Dice and 1.0% RAC increase in CT (statistically significant), and a 0.8% Dice and 1.1% RAC increase in MRI (not statistically significant) in volume-wise acquisition. Our ablation study indicates that combining density and diversity criteria enhances the efficiency of BAL in musculoskeletal segmentation compared to using either criterion alone. CONCLUSION: Our sampling method is proven efficient in reducing annotation costs in image segmentation tasks. The combination of the proposed method and our BAL framework provides a semi-automatic way for efficient annotation of medical image datasets.
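The density and diversity criteria can be sketched on scalar features: density favours candidates typical of the unlabelled pool, diversity favours candidates far from the existing training data. The similarity kernel and the equal weighting of the two terms are assumptions, not the paper's exact formulation:

```python
def hybrid_score(candidate, unlabeled_pool, labeled_pool):
    """Density: mean similarity to the unlabelled pool (prefer typical
    samples). Diversity: distance to the nearest labelled sample (prefer
    novel ones)."""
    density = sum(1 / (1 + abs(candidate - u)) for u in unlabeled_pool) / len(unlabeled_pool)
    diversity = min(abs(candidate - s) for s in labeled_pool)
    return density + diversity

unlabeled = [0.1, 0.2, 0.25, 0.3, 0.9]   # feature values of unlabelled scans
labeled = [0.85, 0.95]                   # already-annotated training data
# The candidate in the dense, under-labelled cluster is preferred.
best = max([0.2, 0.9], key=lambda c: hybrid_score(c, unlabeled, labeled))
```

In the BAL loop this score would be applied only to samples the model is already uncertain about, matching the "uncertain samples with high density and diversity" selection described above.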


Subjects
Bayes Theorem; Lower Extremity; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Humans; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Lower Extremity/diagnostic imaging; Deep Learning
18.
Heliyon ; 10(2): e24188, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38293520

ABSTRACT

Bayesian deep learning (BDL) has emerged as a powerful technique for quantifying uncertainty in classification tasks, surpassing the effectiveness of traditional models by aligning with the probabilistic nature of real-world data. This alignment allows for informed decision-making by not only identifying the most likely outcome but also quantifying the surrounding uncertainty. Such capabilities hold great significance in fields like medical diagnoses and autonomous driving, where the consequences of misclassification are substantial. To further improve uncertainty quantification, the research community has introduced Bayesian model ensembles, which combine multiple Bayesian models to enhance predictive accuracy and uncertainty quantification. These ensembles have exhibited superior performance compared to individual Bayesian models and even non-Bayesian counterparts. In this study, we propose a novel approach that leverages the power of Bayesian ensembles for enhanced uncertainty quantification. The proposed method exploits the disparity between predicted positive and negative classes and employs it as a ranking metric for model selection. For each instance or sample, the ensemble's output for each class is determined by selecting the top 'k' models based on this ranking. Experimental results on different medical image classifications demonstrate that the proposed method consistently outperforms or achieves comparable performance to conventional Bayesian ensembles. This investigation highlights the practical application of Bayesian ensemble techniques in refining predictive performance and enhancing uncertainty evaluation in image classification tasks.
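The top-k selection rule can be sketched for the binary case: rank ensemble members by the disparity between their predicted positive and negative class probabilities and average only the k most decisive ones. The concrete probabilities are illustrative:

```python
def topk_ensemble(member_probs, k):
    """Rank Bayesian ensemble members by the disparity between their
    predicted positive and negative class probabilities and average
    only the top-k most decisive members (binary case)."""
    ranked = sorted(member_probs, key=lambda p: abs(p[1] - p[0]), reverse=True)
    chosen = ranked[:k]
    return [sum(p[c] for p in chosen) / k for c in (0, 1)]

members = [[0.1, 0.9], [0.2, 0.8], [0.45, 0.55], [0.5, 0.5]]
pooled = topk_ensemble(members, k=2)   # keeps the two most decisive models
```

Note the ranking is recomputed per sample, so different instances may be scored by different subsets of the ensemble.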

19.
Med Image Anal ; 94: 103125, 2024 May.
Article in English | MEDLINE | ID: mdl-38428272

ABSTRACT

In this paper, we study pseudo-labelling. Pseudo-labelling employs raw inferences on unlabelled data as pseudo-labels for self-training. We elucidate the empirical successes of pseudo-labelling by establishing a link between this technique and the Expectation Maximisation algorithm. Through this, we realise that the original pseudo-labelling serves as an empirical estimation of its more comprehensive underlying formulation. Following this insight, we present a full generalisation of pseudo-labels under Bayes' theorem, termed Bayesian Pseudo Labels. Subsequently, we introduce a variational approach to generate these Bayesian Pseudo Labels, involving the learning of a threshold to automatically select high-quality pseudo labels. In the remainder of the paper, we showcase the applications of pseudo-labelling and its generalised form, Bayesian Pseudo-Labelling, in the semi-supervised segmentation of medical images. Specifically, we focus on: (1) 3D binary segmentation of lung vessels from CT volumes; (2) 2D multi-class segmentation of brain tumours from MRI volumes; (3) 3D binary segmentation of whole brain tumours from MRI volumes; and (4) 3D binary segmentation of prostate from MRI volumes. We further demonstrate that pseudo-labels can enhance the robustness of the learned representations. The code is released in the following GitHub repository: https://github.com/moucheng2017/EMSSL.
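Plain pseudo-labelling with a confidence threshold looks like the sketch below; Bayesian Pseudo Labels replace the fixed threshold with a variationally learned one, which is not reproduced here:

```python
def select_pseudo_labels(foreground_probs, threshold):
    """Keep only confident raw inferences as pseudo-labels for
    self-training; the threshold is fixed here for illustration,
    whereas Bayesian Pseudo Labels learn it variationally."""
    selected = []
    for idx, p in enumerate(foreground_probs):
        confidence = max(p, 1 - p)      # binary (e.g. vessel vs. background)
        if confidence >= threshold:
            selected.append((idx, 1 if p >= 0.5 else 0))
    return selected

probs = [0.97, 0.52, 0.03, 0.70]
pseudo = select_pseudo_labels(probs, threshold=0.9)
```

The selected (index, label) pairs would then be fed back as training targets for the next round of self-training.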


Subjects
Brain Neoplasms; Motivation; Male; Humans; Bayes Theorem; Algorithms; Brain; Image Processing, Computer-Assisted
20.
Comput Med Imaging Graph ; 113: 102352, 2024 04.
Article in English | MEDLINE | ID: mdl-38341947

ABSTRACT

Automated medical image segmentation plays a crucial role in diverse clinical applications. The high annotation costs of fully-supervised medical segmentation methods have spurred a growing interest in semi-supervised methods. Existing semi-supervised medical segmentation methods train the teacher segmentation network using labeled data to establish pseudo labels for unlabeled data. The quality of these pseudo labels is constrained as these methods fail to effectively address the significant bias in the data distribution learned from the limited labeled data. To address these challenges, this paper introduces an innovative Correspondence-based Generative Bayesian Deep Learning (C-GBDL) model. Built upon the teacher-student architecture, we design a multi-scale semantic correspondence method to aid the teacher model in generating high-quality pseudo labels. Specifically, our teacher model, embedded with the multi-scale semantic correspondence, learns a better-generalized data distribution from input volumes by feature matching with the reference volumes. Additionally, a double uncertainty estimation schema is proposed to further rectify the noisy pseudo labels. The double uncertainty estimation takes the predictive entropy as the first uncertainty estimation and takes the structural similarity between the input volume and its corresponding reference volumes as the second uncertainty estimation. Four groups of comparative experiments conducted on two public medical datasets demonstrate the effectiveness and the superior performance of our proposed model. Our code is available on https://github.com/yumjoo/C-GBDL.


Subjects
Deep Learning; Humans; Bayes Theorem; Entropy; Uncertainty