Results 1 - 8 of 8
1.
Health Data Sci ; 4: 0126, 2024.
Article in English | MEDLINE | ID: mdl-38645573

ABSTRACT

Background: Clinical trials are a crucial step in the development of a new therapy (e.g., a medication) and are remarkably expensive and time-consuming. Accurately forecasting the approval of clinical trials would enable us to circumvent trials destined to fail, thereby allowing us to allocate more resources to therapies with better chances. However, existing approval prediction algorithms neither quantify uncertainty nor provide interpretability, limiting their use in real-world clinical trial management. Methods: This paper quantifies uncertainty and improves interpretability in clinical trial approval prediction. We devised a selective classification approach and integrated it with the Hierarchical Interaction Network, the state-of-the-art clinical trial prediction model. Selective classification, encompassing a spectrum of methods for uncertainty quantification, empowers the model to withhold decision-making on samples marked by ambiguity or low confidence. This approach not only amplifies the accuracy of predictions for the instances it chooses to classify but also notably enhances the model's interpretability. Results: Comprehensive experiments demonstrate that incorporating uncertainty markedly enhances the model's performance. Specifically, the proposed method achieved 32.37%, 21.43%, and 13.27% relative improvements in area under the precision-recall curve over the base model (Hierarchical Interaction Network) on phase I, II, and III trial approval prediction, respectively. For phase III trials, our method reaches an area under the precision-recall curve of 0.9022. In addition, we present a case study of interpretability that helps domain experts understand the model's outcome. The code is publicly available at https://github.com/Vincent-1125/Uncertainty-Quantification-on-Clinical-Trial-Outcome-Prediction.
Conclusion: Our approach not only measures model uncertainty but also greatly improves interpretability and performance for clinical trial approval prediction.
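The selective-classification idea above — withhold a prediction when confidence is low, so accuracy on the retained samples rises — can be sketched as follows (the confidence measure, threshold, and interface are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def selective_classify(probs, threshold=0.8):
    """Selective classification sketch: keep a prediction only when the
    model's confidence (max class probability) reaches the threshold;
    abstain otherwise. Returns labels (-1 = abstain) and the coverage."""
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[confidence < threshold] = -1       # abstain on low-confidence samples
    coverage = float((preds != -1).mean())   # fraction of samples classified
    return preds, coverage
```

Raising the threshold trades coverage for accuracy on the classified subset, which is exactly the lever such methods tune.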

2.
Nat Commun ; 15(1): 976, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38302502

ABSTRACT

Early detection is critical to achieving improved treatment outcomes for pediatric patients with congenital heart diseases (CHDs). Therefore, developing effective CHD detection techniques using the low-cost and non-invasive pediatric electrocardiogram is highly desirable. We propose a deep learning approach for CHD detection, CHDdECG, which automatically extracts features from the pediatric electrocardiogram and its wavelet transformation characteristics, and integrates them with key human-concept features. Developed on 65,869 cases, CHDdECG achieved a ROC-AUC of 0.915 and specificity of 0.881 on a real-world test set covering 12,000 cases. Additionally, on two external test sets with 7,137 and 8,121 cases, the overall ROC-AUCs were 0.917 and 0.907, while specificities were 0.937 and 0.907. Notably, CHDdECG surpassed cardiologists in a CHD detection performance comparison, and feature importance scores suggested a greater influence of automatically extracted electrocardiogram features on CHD detection compared with human-concept features, implying that CHDdECG may grasp some knowledge beyond human cognition. Our study directly impacts CHD detection with the pediatric electrocardiogram and demonstrates its potential for broader benefits.
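The fusion of learned and wavelet-derived features that CHDdECG performs can be illustrated with a toy sketch (the one-level Haar transform and summary statistics here are generic stand-ins; the paper's actual feature extractors are learned networks):

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def fuse_features(learned, signal):
    """Concatenate network-derived features with wavelet summary statistics,
    mirroring the idea of joining learned and concept-level features."""
    approx, detail = haar_level(signal)
    stats = [approx.mean(), approx.std(), detail.mean(), detail.std()]
    return np.concatenate([np.asarray(learned, dtype=float), stats])
```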


Subject(s)
Deep Learning; Heart Defects, Congenital; Humans; Child; Heart Defects, Congenital/diagnosis; Electrocardiography; Cognition
3.
Article in English | MEDLINE | ID: mdl-38090818

ABSTRACT

As a common and critical medical image analysis task, deep learning-based biomedical image segmentation is hindered by its dependence on costly fine-grained annotations. To alleviate this data dependence, this paper proposes a novel approach, called Polygonal Approximation Learning (PAL), for convex object instance segmentation with only bounding-box supervision. The key idea behind PAL is that a detection model for convex objects already contains the information needed to segment them, since their convex hulls, which can be generated approximately by intersecting bounding boxes, are equivalent to the masks representing the objects. To extract this essential information from the detection model, we repeatedly apply detection to biomedical images at various rotation angles and use a Dice loss with the projection of the rotated detection results as a supervised signal for training our segmentation model. In biomedical imaging tasks involving convex objects, such as nuclei instance segmentation, PAL outperforms known models (e.g., BoxInst) that rely solely on box supervision. Furthermore, PAL achieves performance comparable to mask-supervised models, including Mask R-CNN and Cascade Mask R-CNN. Interestingly, PAL also demonstrates remarkable performance on non-convex object instance segmentation tasks, for example, surgical instrument and organ instance segmentation. Our code is available at https://github.com/shenmishajing/PAL.
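PAL's geometric premise — that the intersection of bounding boxes taken at many rotation angles approximates a convex object's hull — can be checked with a small sketch (the membership test below is illustrative geometry, not the paper's training procedure):

```python
import numpy as np

def inside_rotated_boxes(points, query, angles):
    """Test whether `query` lies inside the intersection of the object's
    bounding boxes over several rotation angles; as the angle set grows,
    this intersection converges to the convex hull of `points`."""
    points = np.asarray(points, dtype=float)
    query = np.asarray(query, dtype=float)
    for theta in angles:
        d = np.array([np.cos(theta), np.sin(theta)])
        proj_pts = points @ d            # object extent along this direction
        proj_q = query @ d
        if not (proj_pts.min() - 1e-9 <= proj_q <= proj_pts.max() + 1e-9):
            return False                 # outside one rotated box -> outside hull
    return True
```

For a triangle, a point inside the axis-aligned box but outside the hull is rejected as soon as a 45-degree rotation is included.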

4.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2434-2444, 2023.
Article in English | MEDLINE | ID: mdl-34990368

ABSTRACT

A large number of people suffer from life-threatening cardiac abnormalities, and electrocardiogram (ECG) analysis is beneficial for determining whether an individual is at risk of such abnormalities. Automatic ECG classification methods, especially deep learning-based ones, have been proposed to detect cardiac abnormalities using ECG records, showing good potential to improve clinical diagnosis and help early prevention of cardiovascular diseases. However, the predictions of known neural networks still do not satisfactorily meet the needs of clinicians, suggesting that some information used in clinical diagnosis may not be well captured and utilized by these methods. In this paper, we introduce rules into convolutional neural networks, which help bring clinical knowledge to deep learning-based ECG analysis, in order to improve automated ECG diagnosis performance. Specifically, we propose a Handcrafted-Rule-enhanced Neural Network (HRNN) for ECG classification with standard 12-lead ECG input, which consists of a rule inference module and a deep learning module. Experiments on two large-scale public ECG datasets show that our new approach considerably outperforms existing state-of-the-art methods. Further, our approach not only improves diagnosis performance but can also assist in detecting mislabelled ECG samples.
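The combination of a rule inference module with a deep learning module can be sketched minimally (the rule, thresholds, and fusion weight below are hypothetical examples; HRNN's actual rules and fusion are designed differently):

```python
def rule_enhanced_score(nn_prob, heart_rate, weight=0.3):
    """Illustrative fusion of a handcrafted clinical rule with a network
    probability. The rule flags abnormal heart rate: tachycardia above
    100 bpm, bradycardia below 60 bpm. `weight` balances the two signals."""
    rule_score = 1.0 if (heart_rate > 100 or heart_rate < 60) else 0.0
    return (1 - weight) * nn_prob + weight * rule_score
```

When the rule fires, the fused score is pulled upward even if the network alone was uncertain — the mechanism by which clinical knowledge can correct a purely learned prediction.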

5.
Article in English | MEDLINE | ID: mdl-35635817

ABSTRACT

Cervical lesion detection (CLD) using multi-modal colposcopic images (acetic and iodine) is critical to computer-aided diagnosis (CAD) systems for accurate, objective, and comprehensive cervical cancer screening. To robustly capture lesion features and conform to clinical diagnosis practice, we propose a novel corresponding region fusion network (CRFNet) for multi-modal CLD. CRFNet first extracts feature maps and generates proposals for each modality, then performs proposal shifting to obtain corresponding regions under the large position shifts between modalities, and finally fuses those region features with a new corresponding channel attention to detect lesion regions in both modalities. To evaluate CRFNet, we built a large multi-modal colposcopic image dataset collected from our collaborating hospital. We show that CRFNet surpasses known single-modal and multi-modal CLD methods and achieves state-of-the-art performance, especially in terms of Average Precision.
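The final fusion step — reweighting the channels of concatenated corresponding-region features — can be sketched as follows (a simplified, parameter-free stand-in for CRFNet's learned channel attention):

```python
import numpy as np

def channel_attention_fuse(feat_a, feat_b):
    """Concatenate region features from the two modalities along channels,
    then reweight channels with a softmax over their global means. CRFNet
    learns this attention; here it is computed directly for illustration."""
    fused = np.concatenate([feat_a, feat_b], axis=0)   # (C, H, W)
    means = fused.mean(axis=(1, 2))
    weights = np.exp(means - means.max())
    weights /= weights.sum()                           # softmax over channels
    return fused * weights[:, None, None]
```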

6.
IEEE Trans Med Imaging ; 40(10): 2575-2588, 2021 10.
Article in English | MEDLINE | ID: mdl-33606628

ABSTRACT

Many known supervised deep learning methods for medical image segmentation suffer from an expensive data annotation burden for model training. Recently, few-shot segmentation methods were proposed to alleviate this burden, but such methods often show poor adaptability to the target tasks. By prudently introducing interactive learning into the few-shot learning strategy, we develop a novel few-shot segmentation approach called Interactive Few-shot Learning (IFSL), which not only addresses the annotation burden of medical image segmentation models but also tackles common issues of known few-shot segmentation methods. First, we design a new few-shot segmentation structure, called Medical Prior-based Few-shot Learning Network (MPrNet), which uses only a few annotated samples (e.g., 10 samples) as support images to guide the segmentation of query images without any pre-training. Then, we propose an Interactive Learning-based Test Time Optimization Algorithm (IL-TTOA) to strengthen our MPrNet on the fly for the target task in an interactive fashion. To the best of our knowledge, our IFSL approach is the first to allow few-shot segmentation models to be optimized and strengthened on the target tasks in an interactive and controllable manner. Experiments on four few-shot segmentation tasks show that our IFSL approach outperforms the state-of-the-art methods by more than 20% on the DSC metric. Specifically, the interactive optimization algorithm (IL-TTOA) alone contributes an approximately 10% DSC improvement for the few-shot segmentation models.
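How a handful of annotated support images can guide query segmentation is conveyed by a generic prototype-based sketch (a common few-shot baseline, not MPrNet itself; the cosine-similarity threshold is an assumption):

```python
import numpy as np

def prototype_segment(support_feats, support_mask, query_feats):
    """Average the support features under the foreground mask into a class
    prototype, then label each query pixel by cosine similarity to it."""
    C = support_feats.shape[0]
    fg = support_feats.reshape(C, -1)[:, support_mask.reshape(-1) > 0]
    proto = fg.mean(axis=1)                               # (C,) prototype
    q = query_feats.reshape(C, -1)
    sim = (proto @ q) / (np.linalg.norm(proto) * np.linalg.norm(q, axis=0) + 1e-8)
    return (sim > 0.5).reshape(query_feats.shape[1:])     # binary query mask
```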


Subject(s)
Deep Learning; Simulation Training; Algorithms
7.
Article in English | MEDLINE | ID: mdl-32356757

ABSTRACT

Higher-resolution biopsy slice images reveal many details and are widely used in medical practice. However, taking high-resolution slice images is more costly than taking low-resolution ones. In this paper, we propose a joint framework containing a novel transfer learning strategy and a deep super-resolution framework to generate high-resolution slice images from low-resolution ones. The super-resolution framework, called SRFBN+, is built by modifying a state-of-the-art framework, SRFBN; specifically, the structure of SRFBN's feedback block is modified to be more flexible. Besides, it is challenging to use typical transfer learning strategies directly for tasks on slice images, as the patterns on different types of biopsy slice images vary. To this end, we propose a novel transfer learning strategy, called Channel Fusion Transfer Learning (CF-Trans). CF-Trans builds a middle domain by fusing the data manifolds of the source domain and the target domain, serving as a springboard for knowledge transfer: in the transfer learning setting, SRFBN+ is trained on the source domain, then on the middle domain, and finally on the target domain. Experiments on biopsy slice images validate that SRFBN+ works well in generating super-resolution slice images and that CF-Trans is an efficient transfer learning strategy.
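The middle-domain idea behind CF-Trans can be sketched as a blend of source and target images (the linear blend and the alpha value are illustrative assumptions; the paper fuses the two data manifolds channel-wise):

```python
import numpy as np

def middle_domain(source_batch, target_batch, alpha=0.5):
    """Build an intermediate training domain whose samples sit between the
    source and target distributions; training proceeds source -> middle ->
    target so each transfer step bridges a smaller domain gap."""
    src = np.asarray(source_batch, dtype=float)
    tgt = np.asarray(target_batch, dtype=float)
    return alpha * src + (1 - alpha) * tgt
```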


Subject(s)
Biopsy/methods; Deep Learning; Image Processing, Computer-Assisted/methods; Microscopy/methods; Algorithms; Colon/pathology; Computational Biology; Databases, Factual; Female; Humans; Ovary/pathology
8.
IEEE J Biomed Health Inform ; 25(10): 3700-3708, 2021 10.
Article in English | MEDLINE | ID: mdl-33232248

ABSTRACT

Colorectal cancer (CRC) is one of the most life-threatening malignancies. Colonoscopy pathology examination can identify cells of early-stage colon tumors in small tissue image slices, but such examination is time-consuming and exhausting on high-resolution images. In this paper, we present a new framework for colonoscopy pathology whole slide image (WSI) analysis, including lesion segmentation and tissue diagnosis. Our framework contains an improved U-shape network with a VGG net as the backbone, and two schemes for training and inference, respectively. Based on the characteristics of colonoscopy pathology WSIs, we introduce a specific sampling strategy for sample selection and a transfer learning strategy for model training in our training scheme. Besides, we propose a specific loss function, the class-wise DSC loss, to train the segmentation network. In our inference scheme, we apply a sliding-window-based sampling strategy for patch generation and a diploid ensemble (data ensemble and model ensemble) for the final prediction. We use the predicted segmentation mask to generate the classification probability for the likelihood of the WSI being malignant. To the best of our knowledge, DigestPath 2019 is the first challenge and the first public dataset available on colonoscopy tissue screening and segmentation, and our proposed framework yields good performance on this dataset: it achieved a DSC of 0.7789 and an AUC of 1 on the online test dataset, and we won the [Formula: see text] place in the DigestPath 2019 Challenge (task 2). Our code is available at https://github.com/bhfs9999/colonoscopy_tissue_screen_and_segmentation.
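A class-wise Dice (DSC) loss of the kind described can be sketched in a few lines (a standard per-class Dice formulation; the paper's exact smoothing and weighting may differ):

```python
import numpy as np

def classwise_dice_loss(pred, target, eps=1e-6):
    """Compute a Dice score per class channel and average (1 - Dice) over
    classes, so small classes contribute as much as large ones."""
    pred = np.asarray(pred, dtype=float)      # (C, H, W) soft predictions
    target = np.asarray(target, dtype=float)  # (C, H, W) one-hot masks
    inter = (pred * target).sum(axis=(1, 2))
    denom = pred.sum(axis=(1, 2)) + target.sum(axis=(1, 2))
    dice = (2 * inter + eps) / (denom + eps)
    return float((1 - dice).mean())
```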


Subject(s)
Deep Learning; Colonoscopy; Image Processing, Computer-Assisted; Neural Networks, Computer