Results 1 - 12 of 12
1.
Med Image Anal; 88: 102833, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both research and clinical contexts. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms at an international level. The challenge used the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, and deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks; the main differences between submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm, built on an asymmetrical U-Net architecture, performed significantly better than the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
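Challenges like FeTA typically score submissions with per-tissue overlap metrics such as the Dice Similarity Coefficient. A minimal sketch of that computation on toy label maps (not FeTA data):

```python
import numpy as np

def dice_score(pred, gt, label):
    """Dice overlap for one tissue label between predicted and ground-truth masks."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(p, g).sum() / denom

# Toy 2D example with two tissue labels
gt = np.array([[0, 1, 1], [2, 2, 0]])
pred = np.array([[0, 1, 0], [2, 2, 0]])
print(dice_score(pred, gt, 1))  # 2*1/(1+2) = 0.666...
print(dice_score(pred, gt, 2))  # 1.0
```

In a multi-tissue challenge the score is usually averaged over labels and cases.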


Subject(s)
Image Processing, Computer-Assisted; White Matter; Pregnancy; Female; Humans; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Head; Fetus/diagnostic imaging; Algorithms; Magnetic Resonance Imaging/methods
2.
IEEE Trans Med Imaging; 42(7): 2044-2056, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37021996

ABSTRACT

Federated learning (FL) allows the collaborative training of AI models without the need to share raw data. This capability makes it especially interesting for healthcare applications, where patient and data privacy are of utmost concern. However, recent work on inverting deep neural networks from model gradients has raised concerns about the ability of FL to prevent the leakage of training data. In this work, we show that the attacks presented in the literature are impractical in FL use cases where the clients' training involves updating the Batch Normalization (BN) statistics, and we provide a new baseline attack that works in such scenarios. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step toward establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoffs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics.
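The weight-sharing scheme with which FL replaces raw-data exchange can be sketched as dataset-size-weighted averaging of client model weights (a FedAvg-style aggregation; the paper's exact protocol may differ):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights by dataset-size-weighted averaging.
    Each client shares only its weight tensors, never its raw training data."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Two hypothetical clients, each with a single weight tensor
c1 = [np.array([1.0, 2.0])]
c2 = [np.array([3.0, 4.0])]
print(federated_average([c1, c2], [1, 3])[0])  # [2.5 3.5]
```

Only weights (or gradients) travel to the server, which is exactly why gradient-inversion attacks on those updates matter.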


Subject(s)
Neural Networks, Computer; Supervised Machine Learning; Humans; Privacy; Medical Informatics
3.
Med Image Anal; 70: 101992, 2021 May.
Article in English | MEDLINE | ID: mdl-33601166

ABSTRACT

The recent outbreak of Coronavirus Disease 2019 (COVID-19) has created urgent needs for reliable diagnosis and management of SARS-CoV-2 infection. Current guidelines recommend RT-PCR for testing. As a complementary diagnostic imaging tool, chest Computed Tomography (CT) has been shown to reveal visual patterns characteristic of COVID-19, which have definite value at several stages of the disease course. To facilitate CT analysis, recent efforts have focused on computer-aided characterization and diagnosis from chest CT scans, with promising results. However, domain shift of data across clinical centers poses a serious challenge when deploying learning-based models. A common way to alleviate this issue is to fine-tune the model locally with the target domain's data and annotations. Unfortunately, the availability and quality of local annotations vary due to heterogeneity in equipment and in the distribution of medical resources across the globe. This impact may be especially pronounced for COVID-19 detection, since the relevant patterns vary in size, shape, and texture. In this work, we address this challenge via federated and semi-supervised learning. A multinational database of 1704 scans from three countries is used to study the performance gap when a model trained on one dataset is applied to another. Expert radiologists manually delineated 945 scans for COVID-19 findings. To handle the variability in both data and annotations, a novel federated semi-supervised learning technique is proposed to fully utilize all available data, with or without annotations. Federated learning avoids sensitive data sharing, which makes it favorable for institutions and nations with strict regulatory policies on data privacy. Moreover, semi-supervision potentially reduces the annotation burden in a distributed setting. The proposed framework is shown to be effective compared to fully supervised scenarios that use conventional data sharing instead of model weight sharing.
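One common way to exploit unannotated scans in semi-supervised learning is confidence-thresholded pseudo-labeling. This is a generic sketch, not necessarily the specific technique proposed in the paper:

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Keep only high-confidence predictions as pseudo-labels for unannotated
    samples; low-confidence entries are marked -1 and excluded from the loss."""
    labels = probs.argmax(axis=-1)
    confident = probs.max(axis=-1) >= threshold
    labels[~confident] = -1
    return labels

# Hypothetical per-sample class probabilities from a teacher model
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40]])
print(pseudo_label(probs))  # [ 0 -1]
```

The retained pseudo-labels are then mixed with the expert annotations during training, so unannotated data still contributes supervision.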


Subject(s)
COVID-19/diagnostic imaging; Supervised Machine Learning; Tomography, X-Ray Computed; China; Humans; Italy; Japan
4.
Nat Commun; 11(1): 4080, 2020 Aug 14.
Article in English | MEDLINE | ID: mdl-32796848

ABSTRACT

Chest CT is emerging as a valuable diagnostic tool for the clinical management of COVID-19 associated lung disease. Artificial intelligence (AI) has the potential to aid in the rapid evaluation of CT scans for differentiation of COVID-19 findings from other clinical entities. Here we show that a series of deep learning algorithms, trained on a diverse multinational cohort of 1280 patients to localize parietal pleura/lung parenchyma and then classify COVID-19 pneumonia, can achieve up to 90.8% accuracy, with 84% sensitivity and 93% specificity, as evaluated on an independent test set (not included in training and validation) of 1337 patients. Normal controls included chest CTs from oncology, emergency, and pneumonia-related indications. The false positive rate in 140 patients with laboratory-confirmed other (non-COVID-19) pneumonias was 10%. AI-based algorithms can readily identify CT scans with COVID-19 associated pneumonia, as well as distinguish non-COVID-related pneumonias with high specificity in diverse patient populations.
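The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts. A sketch with illustrative counts (not the study's actual data) chosen to reproduce the quoted 84% sensitivity and 93% specificity:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only
acc, sens, spec = classification_metrics(tp=84, fp=7, tn=93, fn=16)
print(round(sens, 2), round(spec, 2))  # 0.84 0.93
```

Sensitivity and specificity are independent of disease prevalence in the test set, which is why they are reported alongside raw accuracy.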


Subject(s)
Artificial Intelligence; Clinical Laboratory Techniques/methods; Coronavirus Infections/diagnostic imaging; Pneumonia, Viral/diagnostic imaging; Tomography, X-Ray Computed/methods; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Betacoronavirus/isolation & purification; COVID-19; COVID-19 Testing; Child; Child, Preschool; Coronavirus Infections/diagnosis; Coronavirus Infections/virology; Deep Learning; Female; Humans; Imaging, Three-Dimensional/methods; Lung/diagnostic imaging; Male; Middle Aged; Pandemics; Pneumonia, Viral/virology; Radiographic Image Interpretation, Computer-Assisted/methods; SARS-CoV-2; Young Adult
5.
IEEE Trans Med Imaging; 39(7): 2531-2540, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32070947

ABSTRACT

Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, applying these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to domain shift across hospitals, scanner vendors, imaging protocols, and patient populations. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, which is restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that works uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations is applied to each image during network training. The underlying assumption is that the "expected" domain shift for a specific medical imaging modality can be simulated by applying extensive data augmentation on a single source domain, so that a deep model trained on the augmented "big" data (BigAug) generalizes well to unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to address the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three 3D segmentation tasks (prostate gland, left atrium, left ventricle) covering two medical imaging modalities (MRI and ultrasound) and eight publicly available challenge datasets. The results show that when training on a relatively small dataset (n = 10-32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade by an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (39% degradation) and a CycleGAN-based domain adaptation method (25% degradation); (ii) BigAug outperforms "shallower" stacked transforms (those with fewer transforms) on unseen domains and shows modest improvement over conventional augmentation on the source domain; (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to that of a model trained from scratch on that domain with the same number of training samples. When training on large datasets (n = 465 volumes) with BigAug, (iv) performance on unseen domains reaches that of state-of-the-art fully supervised models trained and tested on their own source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging and can inform the design of highly robust deep segmentation models for clinical deployment.
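The stacked-transformation idea can be sketched as a chain of independently triggered random transforms. The transforms below are hypothetical intensity-level examples applied to a scalar for illustration, not the nine actually used by BigAug:

```python
import random

def stacked_augment(image, transforms, apply_prob=0.5):
    """Deep stacking of augmentations: each transform in the sequence fires
    independently, compounding to simulate the expected domain shift."""
    for t in transforms:
        if random.random() < apply_prob:
            image = t(image)
    return image

# Hypothetical intensity-level transforms on a scalar "image"
transforms = [
    lambda x: x * random.uniform(0.9, 1.1),   # brightness scaling
    lambda x: x + random.uniform(-0.1, 0.1),  # intensity shift
    lambda x: x ** random.uniform(0.8, 1.2),  # gamma adjustment
]
random.seed(0)
print(stacked_augment(1.0, transforms))
```

In a real pipeline each transform would act on a full 3D volume, and the stack would also include spatial and resolution perturbations.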


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male
6.
Radiother Oncol; 127(2): 332-338, 2018 May.
Article in English | MEDLINE | ID: mdl-29526492

ABSTRACT

PURPOSE: To validate a novel deformable image registration (DIR) method for online adaptation of planning organ-at-risk (OAR) delineations to match daily anatomy during hypofractionated RT of abdominal tumors. MATERIALS AND METHODS: For 20 liver cancer patients, planning OAR delineations were adapted to daily anatomy by applying the DIR to corresponding repeat CTs. The DIR's accuracy was evaluated for the entire cohort by comparing adapted and expert-drawn OAR delineations using geometric (Dice Similarity Coefficient (DSC), Modified Hausdorff Distance (MHD), and Mean Surface Error (MSE)) and dosimetric (Dmax and Dmean) measures. RESULTS: For all OARs, the DIR achieved an average DSC, MHD, and MSE of 86%, 2.1 mm, and 1.7 mm, respectively, within 20 s per repeat CT. Compared to the baseline (translations), the average improvements ranged from 2% (heart) to 24% (spinal cord) in DSC, and from 25% (heart) to 44% (right kidney) in MHD and MSE. Furthermore, differences in dose statistics (Dmax, Dmean, and D2%) between delineations from an expert and from the proposed DIR were not statistically significant (p > 0.01). CONCLUSION: The validated DIR showed potential for online-adaptive radiotherapy of abdominal tumors, achieving high geometric and dosimetric correspondence with expert-drawn OAR delineations in a fraction of the time required by experts.
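Of the geometric measures used, the Modified Hausdorff Distance can be sketched as the larger of the two mean directed nearest-neighbour distances between surface point samples (a common MHD variant; the paper's exact definition may differ):

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff Distance between two point sets (e.g. sampled
    contour surfaces): max of the two mean directed NN distances."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Two toy 2D contours offset by 1 mm
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0]])
print(modified_hausdorff(a, b))  # 1.0
```

Unlike the classical Hausdorff distance, the mean-based variant is far less sensitive to a single outlier point on either surface.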


Subject(s)
Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/radiotherapy; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/radiotherapy; Organs at Risk/diagnostic imaging; Radiotherapy Planning, Computer-Assisted/methods; Abdomen/anatomy & histology; Abdomen/diagnostic imaging; Abdominal Neoplasms/diagnostic imaging; Abdominal Neoplasms/radiotherapy; Aged; Algorithms; Dose Fractionation, Radiation; Female; Humans; Male; Middle Aged; Organs at Risk/anatomy & histology; Tomography, X-Ray Computed/methods
7.
Med Phys; 45(4): 1329-1337, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29405307

ABSTRACT

PURPOSE: This study investigates the potential application of image-based motion tracking and real-time motion correction to a helical tomotherapy system. METHODS: A kV x-ray imaging system was added to a helical tomotherapy system, mounted 90 degrees offset from the MV treatment beam, and an optical camera system was mounted above the foot of the couch. This experimental system tracks target motion by acquiring an x-ray image every few seconds during gantry rotation. For respiratory (periodic) motion, software correlates internal target positions visible in the x-ray images with marker positions detected continuously by the camera and generates an internal-external correlation model to continuously determine the target position in three dimensions (3D). Motion correction is performed by continuously updating jaw positions and MLC leaf patterns to reshape (effectively re-point) the treatment beam to follow the 3D target motion. For motion due to processes other than respiration (e.g., digestion), no correlation model is used; instead, target tracking is achieved with the periodically acquired x-ray images alone, without correlation to a continuous camera signal. RESULTS: The system's ability to correct for respiratory motion was demonstrated using a helical treatment plan delivered to a small (10 mm diameter) target. The phantom was moved following a breathing trace with an amplitude of 15 mm. Film measurements of delivered dose without motion, with motion, and with motion correction were acquired. Without motion correction, dose differences within the target of up to 30% were observed; with motion correction enabled, dose differences in the moving target were less than 2%. Nonrespiratory performance was demonstrated using a helical treatment plan for a 55 mm diameter target following a prostate motion trace with up to 14 mm of motion. Without motion correction, dose differences of up to 16% and shifts greater than 5 mm were observed; motion correction reduced these to less than a 6% dose difference and shifts of less than 2 mm. CONCLUSIONS: Real-time motion tracking and correction is technically feasible on a helical tomotherapy system. In one experiment, dose differences due to respiratory motion were greatly reduced. Dose differences due to nonrespiratory motion were also reduced, although not as much as in the respiratory case because of less frequent tracking updates. In both cases, beam-on time was not increased by motion correction, since the system tracks and corrects for motion simultaneously with treatment delivery.
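The internal-external correlation model can be illustrated, in a much-simplified 1D form, as a least-squares fit predicting internal target position from the external marker signal (the clinical system is 3D and continuously updated; the names and data here are hypothetical):

```python
import numpy as np

def fit_correlation_model(marker_pos, target_pos):
    """Fit a linear internal-external correlation model: predict the internal
    target position (seen in sparse kV images) from the continuous external
    marker signal. Returns (slope, intercept)."""
    A = np.vstack([marker_pos, np.ones_like(marker_pos)]).T
    coeffs, *_ = np.linalg.lstsq(A, target_pos, rcond=None)
    return coeffs

# Hypothetical 1D training pairs: marker amplitude -> target position (mm)
marker = np.array([0.0, 5.0, 10.0, 15.0])
target = np.array([1.0, 8.5, 16.0, 23.5])
slope, intercept = fit_correlation_model(marker, target)
print(slope, intercept)  # 1.5 1.0
```

Between x-ray acquisitions, the continuous camera signal is pushed through the model to estimate the target position in real time.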


Subject(s)
Movement; Radiotherapy, Intensity-Modulated/methods; Diagnostic Imaging; Feasibility Studies; Humans; Male; Prostate/diagnostic imaging; Prostate/physiology; Prostate/radiation effects; Radiotherapy Planning, Computer-Assisted; Radiotherapy, Image-Guided/instrumentation; Radiotherapy, Intensity-Modulated/instrumentation; Respiration; Time Factors
8.
IEEE Trans Pattern Anal Mach Intell; 32(12): 2262-75, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20975122

ABSTRACT

Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, the large dimensionality of the point sets, noise, and outliers, make point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterizing the GMM centroid locations with rigid parameters and derive a closed-form solution for the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and use variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method's computational complexity to linear. We test the CPD algorithm on both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
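The E-step at the heart of a CPD-style registration assigns soft correspondences as GMM posterior probabilities. A minimal sketch (the full CPD also includes a uniform outlier component, omitted here):

```python
import numpy as np

def correspondence_probabilities(X, Y, sigma2):
    """E-step of a CPD-style EM: posterior probability that GMM centroid
    Y[m] (first point set) generated data point X[n] (second point set)."""
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(axis=-1)  # (M, N) squared distances
    P = np.exp(-d2 / (2.0 * sigma2))
    return P / P.sum(axis=0, keepdims=True)  # normalise over centroids

X = np.array([[0.0, 0.0], [1.0, 0.0]])  # data points
Y = np.array([[0.1, 0.0], [0.9, 0.0]])  # GMM centroids
P = correspondence_probabilities(X, Y, sigma2=0.05)
print(P[:, 0])  # first data point assigned almost entirely to the nearby centroid
```

In the M-step, these soft assignments weight the update of the transformation parameters: closed-form rigid parameters in the rigid case, a regularized displacement field in the nonrigid case.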

9.
IEEE Trans Med Imaging; 29(11): 1882-91, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20562036

ABSTRACT

Accurate definition of the similarity measure is a key component in image registration. Most commonly used intensity-based similarity measures rely on the assumptions of independence and stationarity of the intensities from pixel to pixel. Such measures cannot capture the complex interactions among the pixel intensities, and often result in less satisfactory registration performance, especially in the presence of spatially varying intensity distortions. We propose a novel similarity measure that accounts for intensity nonstationarities and complex spatially varying intensity distortions in mono-modal settings. We derive the similarity measure by analytically solving for the intensity correction field and its adaptive regularization. The final measure can be interpreted as one that favors a registration with minimum compression complexity of the residual image between the two registered images. A key advantage of the new similarity measure is its simplicity in terms of both computational complexity and implementation. The measure produces accurate registration results on both artificial and real-world problems we have tested, and outperforms other state-of-the-art similarity measures in these cases.


Subject(s)
Algorithms; Artificial Intelligence; Brain/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
10.
JACC Cardiovasc Imaging; 3(3): 227-34, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20223418

ABSTRACT

OBJECTIVES: To compute left ventricular (LV) twist from 3-dimensional (3D) echocardiography. BACKGROUND: LV twist is a sensitive index of cardiac performance. Conventional 2-dimensional methods of computing LV twist are cumbersome and subject to errors. METHODS: We studied 10 adult open-chest pigs. The pre-load to the heart was altered by temporary controlled occlusion of the inferior vena cava, and myocardial ischemia was produced by ligating the left anterior descending coronary artery. Full-volume 3D loops were reconstructed by stitching pyramidal volumes acquired from 7 consecutive heart beats with electrocardiography gating on a Philips IE33 system (Philips Medical Systems, Andover, Massachusetts) at baseline and other steady states. Polar coordinate data of the 3D images were entered into an envelope detection program implemented in MATLAB (The MathWorks, Inc., Natick, Massachusetts), and speckle motion was tracked using nonrigid image registration with spline-based transformation parameterization. The 3D displacement field was obtained, and rotation at the apical and basal planes was computed. LV twist was derived as the net difference between apical and basal rotation. Sonomicrometry data of cardiac motion were also acquired from crystals anchored to the epicardium in the apical and basal planes at all states. RESULTS: The 3D dense tracking slightly overestimated LV twist, but detected changes in LV twist at different states and showed good correlation (r = 0.89) with sonomicrometry-derived twist across all steady states. In open-chest pigs, peak cardiac twist increased from 6.25° ± 1.65° to 9.45° ± 1.95° with reduction of pre-load from inferior vena cava occlusion. With myocardial ischemia from left anterior descending coronary artery ligation, twist decreased to 4.90° ± 0.85° (r = 0.8759). CONCLUSIONS: Despite the lower spatiotemporal resolution of 3D echocardiography, LV twist and torsion can be computed accurately.
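The rotation-and-twist computation can be illustrated with a simplified rigid in-plane estimate: the mean angular change of tracked points about their centroid per plane, with twist as the apical-basal difference (the paper uses nonrigid registration of real speckle data; this toy version uses synthetically rotated point sets):

```python
import numpy as np

def plane_rotation_deg(before, after):
    """Mean in-plane rotation (degrees) of tracked points about their centroid,
    e.g. speckle positions in an apical or basal short-axis plane."""
    p0 = before - before.mean(axis=0)
    p1 = after - after.mean(axis=0)
    a0 = np.arctan2(p0[:, 1], p0[:, 0])
    a1 = np.arctan2(p1[:, 1], p1[:, 0])
    d = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi  # wrap differences to [-pi, pi)
    return np.degrees(d.mean())

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
apical = plane_rotation_deg(pts, pts @ rot(10.0).T)  # apex rotated +10 deg
basal = plane_rotation_deg(pts, pts @ rot(-3.0).T)   # base rotated -3 deg
print(apical - basal)  # net LV twist of about 13 deg
```

Wrapping the angle differences avoids spurious jumps for points that cross the ±180° branch cut between frames.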


Subject(s)
Echocardiography, Three-Dimensional; Heart Ventricles/diagnostic imaging; Image Interpretation, Computer-Assisted; Myocardial Ischemia/diagnostic imaging; Ventricular Dysfunction, Left/diagnostic imaging; Ventricular Function, Left; Animals; Disease Models, Animal; Female; Heart Ventricles/physiopathology; Male; Myocardial Ischemia/complications; Myocardial Ischemia/physiopathology; Reproducibility of Results; Swine; Torsion, Mechanical; Ventricular Dysfunction, Left/etiology; Ventricular Dysfunction, Left/physiopathology
11.
Med Image Comput Comput Assist Interv; 10(Pt 2): 428-35, 2007.
Article in English | MEDLINE | ID: mdl-18044597

ABSTRACT

Automated motion reconstruction of the left ventricle (LV) from 3D echocardiography provides insight into myocardial architecture and function. Low image quality and artifacts make 3D ultrasound image processing a challenging problem. We introduce an LV tracking method that combines textural and structural information to overcome these image quality limitations. Our method automatically reconstructs the motion of the LV contours (endocardium and epicardium) from a sequence of 3D ultrasound images.


Subject(s)
Echocardiography, Three-Dimensional/methods; Heart Ventricles/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Movement/physiology; Myocardial Contraction/physiology; Pattern Recognition, Automated/methods; Ventricular Function; Algorithms; Animals; Artificial Intelligence; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Swine
12.
Article in English | MEDLINE | ID: mdl-17354816

ABSTRACT

This paper explores the use of deformable meshes for the registration of microscopic iris image sequences. The registration, which stabilizes and rectifies images corrupted by motion artifacts, is a crucial step toward leukocyte tracking and motion characterization for the study of the immune system. The image sequences are characterized by locally nonlinear deformations for which an accurate analytical expression cannot be derived by modeling the image formation. We generalize the existing deformable mesh and formulate it in a probabilistic framework, which allows us to conveniently introduce local image similarity measures, to model image dynamics, and to maintain a well-defined mesh structure and smooth deformation through appropriate regularization. Experimental results demonstrate the effectiveness and accuracy of the algorithm.


Subject(s)
Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Iris/cytology; Microscopy, Video/methods; Ophthalmoscopy/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Algorithms; Computer Simulation; Humans; Image Enhancement/methods; Models, Biological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted