1.
Sci Rep; 14(1): 2032, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263232

ABSTRACT

Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes polyp detection challenging. Moreover, colonoscopy surveillance and polyp removal are highly operator-dependent procedures performed in a highly complex organ topology, and miss rates and incomplete removal of colonic polyps remain high. To assist in clinical procedures and reduce miss rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test generalisability rigorously, we, together with expert gastroenterologists, curated a multi-centre, multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced Endoscopic Computer Vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic, real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
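The per-centre evaluation idea above can be made concrete with a small sketch. The code below is illustrative only (not the challenge's official evaluation suite), assumes binary masks as NumPy arrays, and scores a model separately on each acquisition centre so that the spread between centres exposes generalisation gaps:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def per_centre_dice(predictions, ground_truths, centre_ids):
    """Mean Dice per acquisition centre; a large spread across centres
    signals poor out-of-sample generalisation."""
    scores = {}
    for c in set(centre_ids):
        idx = [i for i, cid in enumerate(centre_ids) if cid == c]
        scores[c] = float(np.mean([dice(predictions[i], ground_truths[i])
                                   for i in idx]))
    return scores
```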


Subject(s)
Crowdsourcing, Deep Learning, Polyps, Humans, Colonoscopy, Computers
2.
Nutrition; 119: 112317, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38154396

ABSTRACT

OBJECTIVES: Cancer cachexia is a debilitating condition with widespread negative effects. The heterogeneity of clinical features among patients with cancer cachexia remains unclear. Identifying and prognostically characterizing diverse phenotypes of cancer cachexia may help develop individualized interventions to improve outcomes for vulnerable populations. The aim of this study was to show that a machine learning-based cancer cachexia classification model generalizes well on an external validation cohort. METHODS: This was a nationwide multicenter observational study conducted from October 2012 to April 2021 in China. Unsupervised consensus clustering analysis was applied to demographic, anthropometric, nutritional, oncological, and quality-of-life data. Key characteristics of each cluster were identified using the standardized mean difference. We used logistic and Cox regression analyses to evaluate 1-, 3-, and 5-year, as well as overall, mortality. RESULTS: A consensus clustering algorithm was applied to 4329 patients with cancer cachexia in the discovery cohort, and four clusters with distinct phenotypes were uncovered. From clusters 1 to 4, the clinical characteristics of patients transitioned from almost unimpaired to mildly, moderately, and severely impaired. Consistently, mortality increased from clusters 1 to 4. The overall mortality rates were 32%, 40%, 54%, and 68%, and the median overall survival times were 21.9, 18.0, 16.7, and 13.6 months for patients in clusters 1 to 4, respectively. Our machine learning-based model performed better at predicting mortality than the traditional model. External validation confirmed these results. CONCLUSIONS: Machine learning is valuable for phenotype classification of patients with cancer cachexia. Detecting clinically distinct clusters among cachectic patients assists in scheduling personalized treatment strategies and in patient selection for clinical trials.
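To make the clustering step concrete, the sketch below implements generic consensus clustering with scikit-learn: repeated k-means runs on random subsamples build a co-association matrix, which is then clustered hierarchically. Only k = 4 mirrors the four reported phenotypes; the subsample fraction, run count, and all names are assumptions rather than the study's protocol:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def consensus_cluster(X, k=4, n_runs=50, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    co = np.zeros((n, n))      # how often two patients share a cluster
    counts = np.zeros((n, n))  # how often two patients are sampled together
    for r in range(n_runs):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10, random_state=r).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        co[np.ix_(idx, idx)] += same
        counts[np.ix_(idx, idx)] += 1
    consensus = co / np.maximum(counts, 1)
    # Cluster the consensus matrix; 1 - consensus acts as a distance.
    # (scikit-learn >= 1.2 names this argument `metric`; older versions use `affinity`.)
    model = AgglomerativeClustering(n_clusters=k, metric="precomputed", linkage="average")
    return model.fit_predict(1.0 - consensus)
```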


Subject(s)
Cachexia, Neoplasms, Humans, Cachexia/etiology, Phenotype, Machine Learning, Algorithms, Neoplasms/complications
3.
Article in English | MEDLINE | ID: mdl-37796672

ABSTRACT

Unpaired medical image enhancement (UMIE) aims to transform a low-quality (LQ) medical image into a high-quality (HQ) one without relying on paired images for training. While most existing approaches are based on Pix2Pix/CycleGAN and are effective to some extent, they fail to explicitly use HQ information to guide the enhancement process, which can lead to undesired artifacts and structural distortions. In this article, we propose a novel UMIE approach that avoids this limitation by directly encoding HQ cues into the LQ enhancement process in a variational fashion, thus modelling the UMIE task under the joint distribution of the LQ and HQ domains. Specifically, we extract features from an HQ image and explicitly insert these features, which are expected to encode HQ cues, into the enhancement network to guide LQ enhancement via a variational normalization module. We train the enhancement network adversarially with a discriminator to ensure that the generated HQ image falls into the HQ domain. We further propose a content-aware loss that guides the enhancement process with wavelet-based pixel-level and multi-encoder-based feature-level constraints. Additionally, since a key motivation for image enhancement is to make the enhanced images serve downstream tasks better, we propose a bi-level learning scheme that optimizes the UMIE task and downstream tasks cooperatively, helping generate HQ images that are both visually appealing and favorable for downstream tasks. Experiments on three medical datasets verify that our method outperforms existing techniques in terms of both enhancement quality and downstream task performance. The code and the newly collected datasets are publicly available at https://github.com/ChunmingHe/HQG-Net.
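The HQ-cue injection can be pictured with a minimal PyTorch sketch. The module below is an AdaIN-style stand-in for the variational normalization module described above, not the authors' code: statistics predicted from HQ features modulate instance-normalized LQ features, with the scale sampled via the reparameterization trick. All names, and the assumption that the two feature maps share a shape, are illustrative:

```python
import torch
import torch.nn as nn

class HQGuidedNorm(nn.Module):
    """Normalize LQ features, then re-scale/shift them with statistics
    predicted from HQ features (scale sampled as in a VAE)."""
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_mu = nn.Conv2d(channels, channels, 1)      # mean of the HQ "style" code
        self.to_logvar = nn.Conv2d(channels, channels, 1)  # its log-variance
        self.to_shift = nn.Conv2d(channels, channels, 1)

    def forward(self, lq_feat: torch.Tensor, hq_feat: torch.Tensor) -> torch.Tensor:
        # Assumes lq_feat and hq_feat have the same (B, C, H, W) shape.
        mu, logvar = self.to_mu(hq_feat), self.to_logvar(hq_feat)
        # Reparameterization trick: sample the scale from the HQ-conditioned Gaussian.
        scale = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.norm(lq_feat) * (1.0 + scale) + self.to_shift(hq_feat)
```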

4.
Med Image Anal; 88: 102880, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37413792

ABSTRACT

Semi-supervised learning has greatly advanced medical image segmentation since it effectively alleviates the need to acquire abundant annotations from experts; the mean-teacher model, a milestone of perturbed consistency learning, commonly serves as a standard and simple baseline. Inherently, learning from consistency can be regarded as learning from stability under perturbations. Recent improvements lean toward more complex consistency learning frameworks, yet little attention is paid to the selection of consistency targets. Considering that ambiguous regions of unlabeled data contain more informative complementary clues, in this paper we extend the mean-teacher model to a novel ambiguity-consensus mean-teacher (AC-MT) model. In particular, we comprehensively introduce and benchmark a family of plug-and-play strategies for ambiguous target selection from the perspectives of entropy, model uncertainty, and label-noise self-identification, respectively. The estimated ambiguity map is then incorporated into the consistency loss to encourage consensus between the two models' predictions in these informative regions. In essence, AC-MT aims to identify the most worthwhile voxel-wise targets in the unlabeled data, so that the model learns especially from the perturbed stability of these informative regions. The proposed methods are extensively evaluated on left atrium segmentation and brain tumor segmentation. Encouragingly, our strategies bring substantial improvement over recent state-of-the-art methods. Ablation studies further support our hypothesis and show impressive results under various extreme annotation conditions.
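Of the plug-and-play selection strategies mentioned above, the entropy-based one is easiest to sketch. The function below (illustrative names and threshold, not the paper's exact recipe) restricts the mean-teacher consistency loss to the most ambiguous voxels, i.e., those where the teacher's predictive entropy is highest:

```python
import torch

def ambiguity_consistency_loss(student_logits, teacher_logits, top_fraction=0.2):
    """Mean-teacher consistency restricted to the most ambiguous voxels.
    Logits: (B, num_classes, *spatial)."""
    t_prob = torch.softmax(teacher_logits, dim=1)
    entropy = -(t_prob * torch.log(t_prob.clamp_min(1e-8))).sum(dim=1)  # (B, *spatial)
    # Keep only the top `top_fraction` highest-entropy (most ambiguous) voxels.
    k = max(1, int(top_fraction * entropy.numel()))
    threshold = entropy.flatten().topk(k).values.min()
    mask = (entropy >= threshold).float()
    s_prob = torch.softmax(student_logits, dim=1)
    per_voxel = ((s_prob - t_prob) ** 2).mean(dim=1)  # MSE consistency per voxel
    return (per_voxel * mask).sum() / mask.sum().clamp_min(1.0)
```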


Subject(s)
Benchmarking, Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging, Consensus, Entropy, Heart Atria, Supervised Machine Learning, Computer-Assisted Image Processing
5.
Hepatol Int; 16(5): 1188-1198, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36001229

ABSTRACT

INTRODUCTION: Microvascular invasion (MVI) is a known risk factor for prognosis after R0 liver resection for hepatocellular carcinoma (HCC). The aim of this study was to develop a deep learning prognostic prediction model by adding a new factor, the MVI area, to the other independent risk factors. METHODS: Consecutive patients with HCC who underwent R0 liver resection from January to December 2016 at the Eastern Hepatobiliary Surgery Hospital were included in this retrospective study. Patients with MVI detected on resected specimens were divided into two groups according to the size of the maximal MVI area: the small-MVI group and the large-MVI group. RESULTS: Of the 337 HCC patients, 193 had MVI; 130 of these formed the training cohort and 63 the validation cohort. Patients in the large-MVI group had worse overall survival (OS) than those in the small-MVI group (p = 0.009). A deep learning model was developed based on the independent risk factors found in this study: MVI stage, maximal MVI area, presence/absence of cirrhosis, and maximal tumor diameter. The areas under the receiver operating characteristic curve of the deep learning model for the 1-, 3-, and 5-year predictions of OS were 80.65, 74.04, and 79.44, respectively, outperforming the traditional Cox proportional hazards model. CONCLUSION: By incorporating the maximal MVI area as an additional prognostic factor alongside the previously known independent risk factors, the deep learning model more accurately predicted long-term postoperative OS for HCC patients with MVI after R0 liver resection.
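For readers who want to reproduce this style of evaluation, a minimal sketch of horizon-specific AUC computation follows. It is an assumption-laden simplification (patients censored before the horizon are simply dropped, and all names are illustrative), not the study's code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def horizon_auc(risk, survival_months, event, horizon_months):
    """AUC for 'death before horizon' against a predicted risk score.
    risk: higher = worse prognosis; event: 1 if death observed, 0 if censored.
    Patients censored before the horizon are excluded (a simplification)."""
    died_before = (survival_months <= horizon_months) & (event == 1)
    known = died_before | (survival_months > horizon_months)
    return roc_auc_score(died_before[known], risk[known])

# Example: AUCs at the 1-, 3-, and 5-year horizons reported above.
# aucs = {h: horizon_auc(risk, months, event, h) for h in (12, 36, 60)}
```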


Subject(s)
Hepatocellular Carcinoma, Deep Learning, Liver Neoplasms, Hepatocellular Carcinoma/pathology, Hepatectomy, Humans, Liver Neoplasms/pathology, Microvessels/pathology, Neoplasm Invasiveness/pathology, Prognosis, Retrospective Studies
6.
IEEE Trans Med Imaging; 41(11): 3062-3073, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35604969

ABSTRACT

Manually segmenting medical images is expertise-demanding, time-consuming, and laborious, and acquiring massive amounts of high-quality labeled data from experts is often infeasible. Unfortunately, without sufficient high-quality pixel-level labels, the usual data-driven learning-based segmentation methods often struggle with deficient training. As a result, we are often forced to collect additional labeled data from multiple sources with varying label quality. However, directly introducing additional data with low-quality noisy labels may mislead network training and undesirably offset the efficacy provided by the high-quality labels. To address this issue, we propose a Mean-Teacher-assisted Confident Learning (MTCL) framework, built on a teacher-student architecture and a label self-denoising process, to robustly learn segmentation from a small set of high-quality labeled data and plentiful low-quality noisy labeled data. In particular, this synergistic framework simultaneously and robustly exploits (i) the additional dark knowledge inside the images of the low-quality labeled set via perturbation-based unsupervised consistency, and (ii) the productive information in their low-quality noisy labels via explicit label refinement. Comprehensive experiments on left atrium segmentation with simulated noisy labels, and on hepatic and retinal vessel segmentation with real-world noisy labels, demonstrate the superior segmentation performance of our approach as well as its effectiveness at label denoising.
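The label self-denoising step can be sketched in a few lines. The snippet below is a hedged illustration for binary segmentation, not the MTCL implementation: where the teacher is highly confident and disagrees with a low-quality label, the label is refined toward the teacher's prediction. The 0.9 threshold is an illustrative choice:

```python
import torch

@torch.no_grad()
def refine_noisy_labels(teacher_probs, noisy_labels, conf_thresh=0.9):
    """Binary segmentation: teacher_probs in [0, 1], noisy_labels as
    float tensors in {0, 1} of the same shape."""
    teacher_pred = (teacher_probs > 0.5).float()
    # Confident wherever the teacher's probability is far from 0.5.
    confident = torch.maximum(teacher_probs, 1.0 - teacher_probs) > conf_thresh
    disagree = teacher_pred != noisy_labels
    refined = noisy_labels.clone()
    refined[confident & disagree] = teacher_pred[confident & disagree]
    return refined
```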

7.
IEEE J Biomed Health Inform; 26(7): 3174-3184, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35324450

ABSTRACT

Semi-supervised learning has substantially advanced medical image segmentation since it alleviates the heavy burden of acquiring costly expert-examined annotations. In particular, consistency-based approaches have attracted attention for their superior performance: the real labels only supervise their paired images via the supervised loss, while the unlabeled images are exploited by enforcing perturbation-based "unsupervised" consistency without explicit guidance from the real labels. Intuitively, however, the expert-examined real labels contain more reliable supervision signals. Observing this, we ask an unexplored but interesting question: can we exploit the unlabeled data via explicit real-label supervision for semi-supervised training? To this end, we discard the previous perturbation-based consistency and instead absorb the essence of non-parametric prototype learning. Building on prototypical networks, we propose a novel cyclic prototype consistency learning (CPCL) framework, constructed from a labeled-to-unlabeled (L2U) prototypical forward process and an unlabeled-to-labeled (U2L) backward process. These two processes synergistically enhance the segmentation network by encouraging more discriminative and compact features. In this way, our framework turns the previous "unsupervised" consistency into a new "supervised" consistency, giving our method its "all-around real label supervision" property. Extensive experiments on brain tumor segmentation from MRI and kidney segmentation from CT images show that CPCL can effectively exploit the unlabeled data and outperform other state-of-the-art semi-supervised medical image segmentation methods.
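The labeled-to-unlabeled (L2U) forward process can be illustrated with a short PyTorch sketch: class prototypes are pooled from labeled-image features using the real labels, and unlabeled features are then classified by cosine similarity to those prototypes. Shapes (2D features for brevity) and names are assumptions, and the reverse U2L pass is omitted:

```python
import torch
import torch.nn.functional as F

def l2u_prototype_predictions(lab_feat, lab_mask, unlab_feat, num_classes):
    """lab_feat/unlab_feat: (B, C, H, W) features; lab_mask: (B, H, W) int labels.
    Returns (B, num_classes, H, W) prototype-similarity maps for unlabeled data."""
    protos = []
    for c in range(num_classes):
        m = (lab_mask == c).unsqueeze(1).float()  # (B, 1, H, W) class mask
        # Masked average pooling of labeled features -> one prototype per class.
        proto = (lab_feat * m).sum(dim=(0, 2, 3)) / m.sum().clamp_min(1.0)
        protos.append(proto)
    protos = F.normalize(torch.stack(protos), dim=1)  # (K, C), unit norm
    u = F.normalize(unlab_feat, dim=1)                # cosine similarity setup
    return torch.einsum("bchw,kc->bkhw", u, protos)
```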


Subject(s)
Brain Neoplasms, Supervised Machine Learning, Humans, Kidney, Magnetic Resonance Imaging
8.
Comput Med Imaging Graph; 97: 102053, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35306442

ABSTRACT

BACKGROUND: Deep convolutional neural networks (CNNs) have yielded promising results in automatic whole slide image (WSI) processing for digital pathology in recent years. Training supervised CNNs usually requires a large number of annotated samples. However, manual annotation of gigapixel WSIs is labor-intensive and error-prone; the shortage of annotations has become the major bottleneck in developing WSI diagnosis models. In this work, we aim to develop a deep learning-based, self-supervised histopathology image analysis workflow that can classify tissues without any annotation. METHODS: Inspired by contrastive learning methods that have achieved state-of-the-art results in unsupervised representation learning for natural images, we adopt a self-supervised training scheme to generate discriminative embeddings from annotation-free WSI patches and simultaneously obtain initial clusters, which are further refined by a silhouette-coefficient-based recursive scheme that divides tissue-mixture clusters. A multi-scale encoder network is specifically designed to extract pathology-specific contextual features. A tissue dictionary composed of the tissue clusters is then built for cancer diagnosis. RESULTS: Experiments show that our method can identify different tissues under annotation-free conditions, with results (accuracy of 0.9364/0.9325 on human colorectal/sentinel lymph node WSIs) competitive with supervised methods (corresponding accuracy of 0.9806/0.9494), and it surpasses other unsupervised baselines. Our method was also evaluated in a cohort of 20 clinical patients and achieved an AUC of 0.99 in distinguishing benign from malignant polyps. CONCLUSION: Our proposed deep contrastive learning-based tissue clustering method can learn from raw WSIs without annotation to distinguish different tissues. The method was tested on three different datasets and shows potential, as a quantitative and qualitative tool, to help pathologists diagnose disease.
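The silhouette-coefficient-based recursive refinement can be approximated with scikit-learn, as in the hedged sketch below: a cluster of patch embeddings is split in two only while the split yields well-separated halves. The 0.1 silhouette threshold and minimum cluster size are illustrative, not the paper's values:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def recursive_split(embeddings, min_size=50, seed=0):
    """Recursively bisect clusters of embeddings while the bisection
    improves separation (silhouette coefficient)."""
    clusters, queue = [], [embeddings]
    while queue:
        X = queue.pop()
        if len(X) < 2 * min_size:
            clusters.append(X)
            continue
        labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
        # Keep the split only if the two halves are well separated.
        if silhouette_score(X, labels) > 0.1:
            queue.extend([X[labels == 0], X[labels == 1]])
        else:
            clusters.append(X)
    return clusters
```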


Subject(s)
Computer-Assisted Image Processing, Neural Networks (Computer), Cluster Analysis, Humans
9.
Article in English | MEDLINE | ID: mdl-37250854

ABSTRACT

To tackle the ill-posed nature of the image registration problem, regularization is often used to constrain the solution space. For most learning-based registration approaches, the regularization has a fixed weight and only constrains the spatial transformation. This convention has two limitations: (i) besides the laborious grid search for the optimal fixed weight, the regularization strength for a specific image pair should be associated with the content of the images, so the "one value fits all" training scheme is not ideal; (ii) only spatially regularizing the transformation may neglect some informative clues related to the ill-posedness. In this study, we propose a mean-teacher-based registration framework that incorporates an additional temporal consistency regularization term by encouraging the teacher model's prediction to be consistent with that of the student model. More importantly, instead of searching for a fixed weight, the teacher automatically adjusts the weights of the spatial regularization and the temporal consistency regularization by exploiting transformation uncertainty and appearance uncertainty. Extensive experiments on challenging abdominal CT-MRI registration show that our training strategy promisingly advances the original learning-based method in terms of efficient hyperparameter tuning and a better trade-off between accuracy and smoothness.
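One common way to realize such automatic weighting is through learnable homoscedastic-uncertainty terms, sketched below. This is a stand-in for the paper's transformation/appearance-uncertainty scheme, offered only to make the "no fixed lambda" idea concrete; all names are illustrative:

```python
import torch
import torch.nn as nn

class AdaptiveRegLoss(nn.Module):
    """Weights the smoothness and temporal-consistency terms by learnable
    log-variances, so the effective lambdas adapt during training."""
    def __init__(self):
        super().__init__()
        self.log_var_spatial = nn.Parameter(torch.zeros(()))
        self.log_var_temporal = nn.Parameter(torch.zeros(()))

    def forward(self, similarity, smoothness, temporal_consistency):
        # Each term is scaled by exp(-log_var) and penalized for inflating it.
        loss = similarity
        loss = loss + torch.exp(-self.log_var_spatial) * smoothness + self.log_var_spatial
        loss = loss + torch.exp(-self.log_var_temporal) * temporal_consistency + self.log_var_temporal
        return loss
```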

10.
Article in English | MEDLINE | ID: mdl-34367471

ABSTRACT

The loss function of an unsupervised multimodal image registration framework has two terms: a similarity metric and a regularization term. In the deep learning era, researchers have proposed many approaches to automatically learn the similarity metric, which have proven effective in improving registration performance. For the regularization term, however, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration to constrain the deformation field of multimodal registration. In experiments on abdominal CT-MR registration, the proposed method yields better results than conventional regularization methods, especially for severely deformed local regions.
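For context, the kind of hand-crafted formula the abstract refers to is typically a diffusion regularizer, an L2 penalty on the spatial gradients of the displacement field, as in this minimal sketch (which the proposed learned unimodal-cyclic prior would replace):

```python
import torch

def gradient_smoothness(disp: torch.Tensor) -> torch.Tensor:
    """Diffusion regularizer for a (B, 3, D, H, W) displacement field:
    squared finite differences along each spatial axis."""
    dz = (disp[:, :, 1:] - disp[:, :, :-1]) ** 2
    dy = (disp[:, :, :, 1:] - disp[:, :, :, :-1]) ** 2
    dx = (disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]) ** 2
    return dz.mean() + dy.mean() + dx.mean()
```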

11.
Article in English | MEDLINE | ID: mdl-34366715

ABSTRACT

Multimodal image registration (MIR) is a fundamental procedure in many image-guided therapies. Recently, unsupervised learning-based methods have demonstrated promising accuracy and efficiency in deformable image registration. However, the deformation fields estimated by existing methods rely entirely on the to-be-registered image pair. It is difficult for such networks to be aware of mismatched boundaries, resulting in unsatisfactory organ boundary alignment. In this paper, we propose a novel multimodal registration framework that leverages the deformation fields estimated from both (i) the original to-be-registered image pair and (ii) their corresponding gradient intensity maps, and adaptively fuses them with the proposed gated fusion module. With the help of auxiliary gradient-space guidance, the network can concentrate more on the spatial relationships at organ boundaries. Experimental results on two clinically acquired CT-MRI datasets demonstrate the effectiveness of our proposed approach.
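A gated fusion of two deformation fields can be sketched as below: a small convolution predicts a voxel-wise gate that mixes the field estimated from the image pair with the field estimated from the gradient maps. This illustrates the stated idea under assumed shapes and names; the actual module design may differ:

```python
import torch
import torch.nn as nn

class GatedFieldFusion(nn.Module):
    """Voxel-wise convex combination of two 3D deformation fields."""
    def __init__(self, dims: int = 3):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(2 * dims, dims, kernel_size=3, padding=1),
            nn.Sigmoid(),  # gate values in (0, 1)
        )

    def forward(self, field_img: torch.Tensor, field_grad: torch.Tensor) -> torch.Tensor:
        # Both fields: (B, 3, D, H, W).
        g = self.gate(torch.cat([field_img, field_grad], dim=1))
        return g * field_img + (1.0 - g) * field_grad
```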

12.
Int J Comput Assist Radiol Surg; 16(6): 923-932, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33939077

ABSTRACT

PURPOSE: Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most deep learning approaches use the so-called mono-stream high-to-low, low-to-high network structure and can achieve satisfactory overall registration results. However, accurate alignment of some severely deformed local regions, which is crucial for pinpointing surgical targets, is often overlooked. Consequently, these approaches are not sensitive to some hard-to-align regions, e.g., intra-patient registration of deformed liver lobes. METHODS: We propose a novel unsupervised registration network, the full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream takes advantage of full-resolution information to facilitate accurate voxel-level registration. The other stream learns deep multi-scale residual representations to obtain robust recognition. We also factorize the 3D convolution to reduce the number of training parameters and enhance network efficiency. RESULTS: We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory/expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches. CONCLUSION: By combining high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet achieves accurate overall and local registration. The run time for registering a pair of images is less than 3 s on a GPU. In future work, we will investigate how to cost-effectively process high-resolution information and fuse multi-scale representations.
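The 3D-convolution factorization mentioned in the METHODS can be sketched as follows: a full k×k×k kernel is replaced by an in-plane k×k convolution followed by a 1D convolution along the remaining axis, cutting the kernel footprint roughly from k³ to k² + k. The exact factorization used in F3RNet may differ:

```python
import torch.nn as nn

def factorized_conv3d(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """Approximate a k*k*k Conv3d with an in-plane (1, k, k) convolution
    followed by a (k, 1, 1) convolution along the depth axis."""
    p = k // 2
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k), padding=(0, p, p)),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1), padding=(p, 0, 0)),
    )
```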


Subject(s)
Algorithms, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, X-Ray Computed Tomography/methods, Humans
13.
Med Image Comput Comput Assist Interv; 12263: 222-232, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33283210

ABSTRACT

Deformable image registration between Computed Tomography (CT) and Magnetic Resonance (MR) images is essential for many image-guided therapies. In this paper, we propose a novel translation-based unsupervised deformable image registration method. Distinct from other translation-based methods that attempt to convert the multimodal problem (e.g., CT-to-MR) into a unimodal problem (e.g., MR-to-MR) via image-to-image translation, our method leverages the deformation fields estimated from both (i) the translated MR image and (ii) the original CT image in a dual-stream fashion, and automatically learns how to fuse them to achieve better registration performance. The multimodal registration network can be effectively trained with computationally efficient similarity metrics, without any ground-truth deformation. Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
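A representative "computationally efficient similarity metric" for training such networks without ground-truth deformations is local normalized cross-correlation (LNCC), sketched below for single-channel volumes. This is a standard choice in unsupervised registration, offered as an illustration rather than the paper's exact metric:

```python
import torch
import torch.nn.functional as F

def lncc(a: torch.Tensor, b: torch.Tensor, win: int = 9) -> torch.Tensor:
    """Local NCC of two (B, 1, D, H, W) volumes; higher means more similar."""
    kernel = torch.ones(1, 1, win, win, win, device=a.device) / win ** 3
    pad = win // 2
    # Local means, variances, and covariance via box filtering.
    mu_a = F.conv3d(a, kernel, padding=pad)
    mu_b = F.conv3d(b, kernel, padding=pad)
    var_a = F.conv3d(a * a, kernel, padding=pad) - mu_a ** 2
    var_b = F.conv3d(b * b, kernel, padding=pad) - mu_b ** 2
    cov = F.conv3d(a * b, kernel, padding=pad) - mu_a * mu_b
    return (cov ** 2 / (var_a * var_b + 1e-5)).mean()
```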

14.
PLoS One; 15(11): e0242629, 2020.
Article in English | MEDLINE | ID: mdl-33237926

ABSTRACT

Online shopping behavior is characterized by rich, fine-grained feature dimensions and by data sparsity, which makes purchase prediction a challenging task in e-commerce. Previous studies on user behavior prediction paid little attention to feature selection and ensemble design, both of which are important for improving the performance of machine learning algorithms. In this paper, we propose an SE-stacking model based on information fusion and ensemble learning for user purchase behavior prediction. After using an ensemble feature selection method to screen purchase-related factors, we use a stacking algorithm for user purchase behavior prediction. To reduce bias in the prediction results, we optimize the model by selecting ten different types of models as base learners and tuning their parameters individually. Experiments on a publicly available dataset show that the SE-stacking model achieves a 98.40% F1 score, approximately 0.09% higher than the best base models. The SE-stacking model is not only well suited to predicting user purchase behavior but also has practical value in real e-commerce settings, as well as significance for academic research and the development of this field.
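The stacking design can be sketched with scikit-learn's built-in estimators. The pipeline below shows the pattern (an ensemble feature-selection step feeding heterogeneous base learners and a meta-learner); the three base models stand in for the paper's ten, and all hyperparameters are illustrative:

```python
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Feature selection driven by a tree ensemble's importances.
selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))

# Heterogeneous base learners feeding a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions train the meta-learner
)
model = make_pipeline(selector, stack)
# Usage: model.fit(X_train, y_train); f1_score(y_test, model.predict(X_test))
```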


Subject(s)
Behavioral Sciences, Consumer Behavior, Machine Learning, Humans
15.
Comput Med Imaging Graph; 85: 101784, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32860972

ABSTRACT

Recent works have demonstrated that deep learning (DL) based compressed sensing (CS) implementations can accelerate Magnetic Resonance (MR) imaging by reconstructing MR images from sub-sampled k-space data. However, the network architectures adopted in previous methods are all hand-designed. Neural Architecture Search (NAS) algorithms can automatically build neural network architectures that have outperformed human-designed ones in several vision tasks. Inspired by this, we propose a novel and efficient network for MR image reconstruction built via NAS instead of manual design. In particular, a specific cell structure, integrated into the model-driven MR reconstruction pipeline, is automatically searched from a flexible pre-defined operation search space in a differentiable manner. Experimental results show that our searched network produces better reconstruction results than previous state-of-the-art methods in terms of PSNR and SSIM, with 4-6 times fewer computational resources. Extensive experiments were conducted to analyze how hyper-parameters affect reconstruction performance and the searched structures. The generalizability of the searched architecture was also evaluated on MR datasets of different organs. Our proposed method achieves a better trade-off between computation cost and reconstruction performance for the MR reconstruction problem, generalizes well, and offers insights for designing neural networks for other medical image applications. The evaluation code will be available at https://github.com/yjump/NAS-for-CSMRI.
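The differentiable search mentioned above typically relies on a softmax-weighted "mixed" operation, as in this DARTS-style sketch: architecture weights are learned by gradient descent alongside the network weights, and the strongest operation is kept after search. The candidate operation set here is illustrative, not the paper's search space:

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Softmax-weighted sum of candidate operations on one cell edge."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.Identity(),
        ])
        # Architecture weights, optimized jointly with the network weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.alpha, dim=0)
        return sum(w[i] * op(x) for i, op in enumerate(self.ops))
```

After search converges, the operation with the largest alpha on each edge is retained to form the final, discrete cell.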


Subject(s)
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Algorithms, Humans, Neural Networks (Computer), Research Design