Results 1 - 16 of 16
1.
Diagnostics (Basel) ; 13(20), 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37892046

ABSTRACT

INTRODUCTION: A deep learning algorithm that quantifies steatosis from ultrasound images may change a subjective diagnosis into an objective quantification. We evaluated this algorithm in patients with weight changes. MATERIALS AND METHODS: Patients (N = 101) who experienced weight changes ≥ 5% were selected for the study, using serial ultrasound studies retrospectively collected from 2013 to 2021. After applying our exclusion criteria, 74 patients with 239 studies were included. We classified images into four scanning views and applied the algorithm. Mean values from 3-5 images in each group were used for the results and correlated against weight changes. RESULTS: Images from the left lobe (G1) were collected in 45 patients, the right intercostal view (G2) in 67 patients, and the subcostal view (G4) in 46 patients. In head-to-head comparisons, the G1 versus G2 and G2 versus G4 views showed closely matching steatosis scores (R² > 0.86, p < 0.001). Body weight and steatosis scores were significantly correlated (R² = 0.62, p < 0.001). Steatosis scores differed significantly between the highest and lowest body-weight timepoints (p < 0.001). Men showed a higher liver steatosis/BMI ratio than women (p = 0.026). CONCLUSIONS: The best scanning conditions are 3-5 images from the right intercostal view. The algorithm objectively quantified liver steatosis, which correlated with body weight changes and gender.
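As a rough illustration of the scoring protocol above (averaging the per-image algorithm outputs within each scanning view, then correlating against weight change), here is a minimal Python sketch; the function names, data layout, and numbers are invented for illustration, not taken from the study:

```python
# Minimal sketch: per-view averaging of algorithm scores, then correlation
# with weight change. All names and numbers are hypothetical.
import numpy as np
from scipy.stats import pearsonr

def view_mean_scores(scores_by_view: dict) -> dict:
    """Average the 3-5 per-image steatosis scores within each scanning view."""
    return {view: float(np.mean(s)) for view, s in scores_by_view.items()}

# Hypothetical serial studies for one patient (right intercostal view, G2).
study_a = view_mean_scores({"G2": [0.41, 0.44, 0.39]})
study_b = view_mean_scores({"G2": [0.29, 0.31, 0.30]})

weight_change_pct = [-7.2]                       # >= 5% weight loss
score_change = [study_b["G2"] - study_a["G2"]]
# Over a full cohort (many such pairs), the reported correlation would be:
# r, p = pearsonr(weight_change_pct, score_change)
```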

2.
World J Gastroenterol ; 29(14): 2188-2201, 2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37122600

ABSTRACT

BACKGROUND: Acoustic radiation force impulse (ARFI) imaging is used to measure liver fibrosis and predict outcomes. Elastography assesses fibrosis less accurately in hepatitis B virus (HBV) infection than in other etiologies of chronic liver disease. AIM: To evaluate the performance of ARFI for long-term outcome prediction across different etiologies of chronic liver disease. METHODS: Consecutive patients who underwent an ARFI study between 2011 and 2018 were enrolled. After excluding dual infection, alcoholism, autoimmune hepatitis, and cases with incomplete data, this retrospective cohort was divided into hepatitis B (HBV, n = 1064), hepatitis C (HCV, n = 507), and non-HBV, non-HCV (NBNC, n = 391) groups. The indexed cases were linked to cancer registration (1987-2020) and national mortality databases. Differences in morbidity and mortality among the groups were analyzed. RESULTS: At enrollment, the HBV group had more males (77.5%), a higher prevalence of pre-diagnosed hepatocellular carcinoma (HCC), and a lower prevalence of comorbidities than the other groups (P < 0.001). The HCV group was older and had a lower platelet count and a higher ARFI score than the other groups (P < 0.001). The NBNC group had a higher body mass index and platelet count, a higher prevalence of pre-diagnosed non-HCC cancers (P < 0.001), especially breast cancer, and a lower prevalence of cirrhosis. Male gender, ARFI score, and HBV were independent predictors of HCC. The 5-year risk of HCC was 5.9% and 9.8% for patients ARFI-graded with severe fibrosis and cirrhosis, respectively. ARFI alone had an area under the receiver operating characteristic curve (AUROC) of 0.742 for prediction of HCC within 5 years; the AUROC increased to 0.828 after adding etiology, gender, age, and platelet count. No difference in mortality rate was found among the groups. CONCLUSION: The HBV group showed a higher prevalence of HCC but lower comorbidity, making mortality similar among the groups. Patients with ARFI-graded severe fibrosis or cirrhosis should receive regular surveillance.
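The reported AUROC comparison (ARFI alone versus ARFI combined with etiology, gender, age, and platelets) can be reproduced in spirit with standard tooling; a hedged sketch on synthetic data follows, with all variables and effect sizes invented:

```python
# Hedged sketch of the AUROC comparison on synthetic data: ARFI alone vs.
# ARFI plus etiology, gender, age, and platelets. Everything here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(1.5, 0.5, n),    # ARFI shear-wave speed (m/s)
    rng.integers(0, 2, n),      # etiology flag (e.g., HBV vs. other)
    rng.integers(0, 2, n),      # gender
    rng.normal(55, 12, n),      # age (years)
    rng.normal(200, 60, n),     # platelet count (10^3/uL)
])
y = (X[:, 0] + rng.normal(0, 1, n) > 2.0).astype(int)   # synthetic HCC label

auc_arfi = roc_auc_score(y, X[:, 0])                    # ARFI score alone
model = LogisticRegression(max_iter=1000).fit(X, y)     # combined predictors
auc_all = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"ARFI alone: {auc_arfi:.3f}  combined: {auc_all:.3f}")
```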


Subject(s)
Carcinoma, Hepatocellular; Elasticity Imaging Techniques; Hepatitis C, Chronic; Hepatitis C; Liver Neoplasms; Humans; Male; Hepatitis B virus; Retrospective Studies; Hepatitis C, Chronic/pathology; Liver Cirrhosis/diagnostic imaging; Liver Cirrhosis/epidemiology; Comorbidity; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/epidemiology; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/epidemiology; Acoustics
3.
Nat Commun ; 13(1): 6137, 2022 10 17.
Article in English | MEDLINE | ID: mdl-36253346

ABSTRACT

Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.
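The Dice score used for the institutional evaluations above is a standard overlap metric; a brief generic reference implementation follows (not SOARS code):

```python
# Generic Dice overlap metric, as used for the per-OAR evaluation above.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) on boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (2.0 * (pred & gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

def dice_per_organ(pred_labels, gt_labels, organ_ids):
    """One Dice value per organ label in a multi-label volume."""
    return {o: dice(pred_labels == o, gt_labels == o) for o in organ_ids}
```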


Subject(s)
Head and Neck Neoplasms; Organs at Risk; Head and Neck Neoplasms/radiotherapy; Humans; Image Processing, Computer-Assisted/methods; Neck; Radiometry
4.
World J Gastroenterol ; 28(22): 2494-2508, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35979264

ABSTRACT

BACKGROUND: Hepatic steatosis is a major cause of chronic liver disease. Two-dimensional (2D) ultrasound is the most widely used non-invasive tool for screening and monitoring, but the associated diagnoses are highly subjective. AIM: To develop a scalable deep learning (DL) algorithm for quantitative scoring of liver steatosis from 2D ultrasound images. METHODS: Using multi-view ultrasound data comprising 3310 patients, 19,513 studies, and 228,075 images from a retrospective cohort of patients who received elastography, we trained a DL algorithm to diagnose steatosis stages (healthy, mild, moderate, or severe) from clinical ultrasound diagnoses. Performance was validated on two multi-scanner, histology-proven cohorts (147 and 112 patients), one unblinded and one initially blinded to the DL developer, with histopathology fatty-cell-percentage diagnoses and a subset with FibroScan diagnoses. We also quantified reliability across scanners and viewpoints. Results were evaluated using Bland-Altman and receiver operating characteristic (ROC) analyses. RESULTS: The DL algorithm demonstrated repeatable measurements with a moderate number of images (three per viewpoint) and high agreement across three premium ultrasound scanners. High diagnostic performance was observed across all viewpoints: areas under the ROC curve for classifying mild, moderate, and severe steatosis grades were 0.85, 0.91, and 0.93, respectively. The DL algorithm outperformed or performed at least comparably to the FibroScan controlled attenuation parameter (CAP), with statistically significant improvements for all levels on the unblinded histology-proven cohort and for "≥ severe" steatosis on the blinded histology-proven cohort. CONCLUSION: The DL algorithm provides a reliable quantitative steatosis assessment across views and scanners on two multi-scanner cohorts. Diagnostic performance was high, with performance comparable to or better than CAP.
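The Bland-Altman agreement analysis mentioned above is straightforward to compute; a minimal sketch on hypothetical paired measurements (e.g., DL score versus a rescaled reference) follows:

```python
# Minimal Bland-Altman sketch for paired measurements (hypothetical values).
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return mean bias and 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

dl_score = np.array([0.31, 0.55, 0.72, 0.40, 0.66])   # e.g., DL steatosis score
reference = np.array([0.28, 0.60, 0.70, 0.45, 0.61])  # e.g., rescaled CAP
bias, (lo, hi) = bland_altman(dl_score, reference)
print(f"bias={bias:+.3f}, 95% LoA=({lo:+.3f}, {hi:+.3f})")
```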


Subject(s)
Deep Learning; Elasticity Imaging Techniques; Fatty Liver; Non-alcoholic Fatty Liver Disease; Elasticity Imaging Techniques/methods; Fatty Liver/diagnostic imaging; Fatty Liver/pathology; Humans; Liver/diagnostic imaging; Liver/pathology; Non-alcoholic Fatty Liver Disease/diagnostic imaging; Non-alcoholic Fatty Liver Disease/pathology; ROC Curve; Reproducibility of Results; Retrospective Studies
5.
Hepatol Commun ; 6(10): 2901-2913, 2022 10.
Article in English | MEDLINE | ID: mdl-35852311

ABSTRACT

Hepatocellular carcinoma (HCC) can be potentially discovered from abdominal computed tomography (CT) studies under varied clinical scenarios (e.g., fully dynamic contrast-enhanced [DCE] studies, noncontrast [NC] plus venous phase [VP] abdominal studies, or NC-only studies). Each scenario presents its own clinical challenges that could benefit from computer-aided detection (CADe) tools. We investigate whether a single CADe model can be made flexible enough to handle different contrast protocols and whether this flexibility imparts performance gains. We developed a flexible three-dimensional deep algorithm, called heterophase volumetric detection (HPVD), that can accept any combination of contrast-phase inputs with adjustable sensitivity depending on the clinical purpose. We trained HPVD on 771 DCE CT scans to detect HCCs and evaluated it on 164 positives and 206 controls. We compared performance against six clinical readers, including two radiologists, two hepatopancreaticobiliary surgeons, and two hepatologists. The area under the curve of the localization receiver operating characteristic for NC-only, NC plus VP, and full DCE CT yielded 0.71 (95% confidence interval [CI], 0.64-0.77), 0.81 (95% CI, 0.75-0.87), and 0.89 (95% CI, 0.84-0.93), respectively. At a high-sensitivity operating point of 80% on DCE CT, HPVD achieved 97% specificity, which is comparable to measured physician performance. We also demonstrated performance improvements over more typical and less flexible nonheterophase detectors. Conclusion: A single deep-learning algorithm can be effectively applied to diverse HCC detection clinical scenarios, indicating that HPVD could serve as a useful clinical aid for at-risk and opportunistic HCC surveillance.
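One plausible way to realize "any combination of contrast-phase inputs" is to zero-fill absent phases and pass per-phase presence masks to a single network; the sketch below illustrates that idea with an invented toy architecture, not the published HPVD model:

```python
# Toy illustration of flexible contrast-phase inputs: absent phases are
# zero-filled and flagged with presence masks so one network can handle
# NC-only, NC+VP, or full DCE studies. Invented architecture, not HPVD.
import torch
import torch.nn as nn

PHASES = ["NC", "arterial", "VP", "delayed"]

class HeterophaseStem(nn.Module):
    def __init__(self, out_ch: int = 16):
        super().__init__()
        # One image channel plus one presence-mask channel per phase.
        self.conv = nn.Conv3d(2 * len(PHASES), out_ch, kernel_size=3, padding=1)

    def forward(self, phases: dict) -> torch.Tensor:
        ref = next(iter(phases.values()))
        vols, masks = [], []
        for name in PHASES:
            present = name in phases
            vols.append(phases[name] if present else torch.zeros_like(ref))
            masks.append(torch.full_like(ref, float(present)))
        return self.conv(torch.cat(vols + masks, dim=1))

# An NC + VP study: only two of the four phases are available.
study = {"NC": torch.randn(1, 1, 8, 32, 32), "VP": torch.randn(1, 1, 8, 32, 32)}
features = HeterophaseStem()(study)   # shape: (1, 16, 8, 32, 32)
```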


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Algorithms; Carcinoma, Hepatocellular/diagnosis; Contrast Media; Humans; Liver Neoplasms/diagnosis; Tomography, X-Ray Computed/methods
6.
IEEE Trans Med Imaging ; 41(10): 2658-2669, 2022 10.
Article in English | MEDLINE | ID: mdl-35442886

ABSTRACT

Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structure. Reliably locating the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle, landmark detection or semantic segmentation could be used for this task, but to work well these require large numbers of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates a semantic embedding for each image pixel that describes its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures both global and local anatomical information are encoded. Negative sample selection strategies are designed to enhance the embeddings' discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by a simple nearest-neighbor search. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely used registration algorithms while taking only 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM to whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can further be applied to improve image registration and initialize CNN weights.
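The template-matching step described above (label a point once, then locate it elsewhere by nearest-neighbor search over pixel embeddings) reduces to a small lookup; a toy sketch with random stand-in embeddings follows (the embedding network itself is not shown):

```python
# Toy SAM-style point matching: embed both images, then find the template
# point's nearest neighbor in the target embedding map. Random arrays stand
# in for the learned, L2-normalized embeddings.
import numpy as np

def match_point(template_emb, target_emb, yx):
    """template_emb, target_emb: (H, W, D) unit-norm embedding maps."""
    query = template_emb[yx[0], yx[1]]                        # (D,)
    sims = np.tensordot(target_emb, query, axes=([2], [0]))   # cosine similarity
    return np.unravel_index(np.argmax(sims), sims.shape)

rng = np.random.default_rng(1)
H, W, D = 64, 64, 32
t = rng.normal(size=(H, W, D)); t /= np.linalg.norm(t, axis=2, keepdims=True)
u = rng.normal(size=(H, W, D)); u /= np.linalg.norm(u, axis=2, keepdims=True)
print(match_point(t, u, (10, 20)))   # predicted (row, col) of the same body part
```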


Subject(s)
Imaging, Three-Dimensional; Tomography, X-Ray Computed; Algorithms; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Radiography; Supervised Machine Learning; Tomography, X-Ray Computed/methods
7.
IEEE Trans Med Imaging ; 40(1): 59-70, 2021 01.
Article in English | MEDLINE | ID: mdl-32894709

ABSTRACT

The acquisition of the large-scale medical image data needed to train machine learning algorithms is hampered by the associated expert-driven annotation costs. Mining hospital archives can address this problem, but labels are often incomplete or noisy; e.g., 50% of the lesions in DeepLesion are left unlabeled. Thus, effective label-harvesting methods are critical. This is the goal of our work, where we introduce Lesion-Harvester, a powerful system for harvesting missing annotations from lesion datasets at high precision. Accepting the need for some degree of expert labor, we use a small fully-labeled image subset to intelligently mine annotations from the remainder. To do this, we chain together a highly sensitive lesion proposal generator (LPG) and a very selective lesion proposal classifier (LPC). Using a new hard-negative suppression loss, the resulting harvested and hard-negative proposals are then employed to iteratively finetune our LPG. While our framework is generic, we optimize performance by proposing a new 3D contextual LPG and by using a global-local multi-view LPC. Experiments on DeepLesion demonstrate that Lesion-Harvester can discover an additional 9,805 lesions at a precision of 90%. We publicly release the harvested lesions, along with a new test set of completely annotated DeepLesion volumes. We also present a pseudo-3D IoU evaluation metric that corresponds much better to the real 3D IoU than current DeepLesion evaluation metrics. To quantify the downstream benefits of Lesion-Harvester, we show that augmenting the DeepLesion annotations with our harvested lesions allows state-of-the-art detectors to boost their average precision by 7 to 10%.
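The LPG-to-LPC chaining can be summarized as a harvest loop in which confident detections become new labels and confident rejections become hard negatives; the sketch below is schematic, with stub models and illustrative thresholds that are not the paper's values:

```python
# Schematic harvest loop: a sensitive proposer feeds a selective classifier;
# confident detections become new labels, confident rejections become hard
# negatives for LPG finetuning. Stub models and thresholds are illustrative.
def harvest(images, lpg, lpc, accept=0.95, reject=0.05):
    harvested, hard_negatives = [], []
    for img in images:
        for box in lpg.propose(img):        # high sensitivity: many candidates
            p = lpc.score(img, box)         # high selectivity: few survive
            if p >= accept:
                harvested.append((img, box))
            elif p <= reject:
                hard_negatives.append((img, box))
    return harvested, hard_negatives

class StubLPG:
    def propose(self, img):
        return [(10, 10, 30, 30), (40, 40, 60, 60)]

class StubLPC:
    def score(self, img, box):
        return 0.97 if box[0] < 20 else 0.02

print(harvest(["scan_001"], StubLPG(), StubLPC()))
```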


Subject(s)
Algorithms; Machine Learning
8.
Med Image Anal ; 68: 101909, 2021 02.
Article in English | MEDLINE | ID: mdl-33341494

ABSTRACT

Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. GTV defines the primary treatment area of the gross tumor, while CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are challenging for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods for each task in esophageal cancer radiotherapy, together providing a comprehensive solution for target contouring. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework that takes advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, which is highly context-dependent (it must encompass the GTV and involved lymph nodes while avoiding excessive exposure of the organs at risk), we formulate the task as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. For GTV segmentation, we additionally propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks. The results demonstrate that our GTV and CTV segmentation approaches significantly improve on previous state-of-the-art work, e.g., an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
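The "encoded spatial distances" used for CTV segmentation can be realized with signed distance transforms of the GTV and neighboring anatomy stacked as extra input channels; a small illustrative sketch follows (the data and channel layout are assumptions, not the authors' code):

```python
# Illustrative "encoded spatial distances": signed distance transforms of the
# GTV and a nearby organ at risk become extra input channels alongside the CT.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Negative inside the structure, positive outside, in voxel units."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

gtv = np.zeros((32, 32, 32), dtype=bool); gtv[12:20, 12:20, 12:20] = True
oar = np.zeros_like(gtv); oar[0:8, :, :] = True       # a hypothetical OAR
ct = np.random.rand(32, 32, 32).astype(np.float32)    # stand-in RTCT volume
model_input = np.stack([ct, signed_distance(gtv), signed_distance(oar)])
print(model_input.shape)   # (3, 32, 32, 32): appearance + distance encodings
```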


Subject(s)
Esophageal Neoplasms; Radiotherapy Planning, Computer-Assisted; Esophageal Neoplasms/diagnostic imaging; Esophageal Neoplasms/radiotherapy; Humans; Positron-Emission Tomography; Tomography, X-Ray Computed; Tumor Burden
9.
IEEE Trans Med Imaging ; 40(10): 2759-2770, 2021 10.
Article in English | MEDLINE | ID: mdl-33370236

ABSTRACT

Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to the annotation cost, datasets in medical imaging are often either partially labeled or small. For example, DeepLesion is a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations). When training a lesion detector on a partially-labeled dataset, the missing annotations generate incorrect negative signals and degrade performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets while other types are ignored. In this work, we aim to develop a universal lesion detection algorithm that detects a variety of lesions and tackles the problem of heterogeneous and partial labels. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially-labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually-labeled sub-volumes in DeepLesion. Our method brings a relative improvement of 49% over the current state-of-the-art approach in the metric of average sensitivity. We have publicly released our manual 3D annotations of DeepLesion at https://github.com/viggin/DeepLesion_manual_test_set.
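Proposal fusion across dataset-specific heads can be illustrated with a generic greedy suppression step; the sketch below is one plausible reading, not necessarily LENS's exact fusion rule:

```python
# Generic proposal fusion across detection heads: pool all proposals, then
# greedily suppress overlaps by score. One plausible reading, not LENS itself.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def fuse(proposals, iou_thresh=0.5):
    """proposals: list of (score, box) pooled from all heads."""
    kept = []
    for score, box in sorted(proposals, reverse=True):
        if all(iou(box, k) < iou_thresh for _, k in kept):
            kept.append((score, box))
    return kept

head_a = [(0.9, (10, 10, 40, 40))]                             # e.g., DeepLesion head
head_b = [(0.8, (12, 11, 42, 41)), (0.6, (80, 80, 100, 100))]  # e.g., LUNA head
print(fuse(head_a + head_b))   # the near-duplicate box is suppressed
```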


Subject(s)
Algorithms; Tomography, X-Ray Computed; Radiography
10.
IEEE Trans Med Imaging ; 40(10): 2672-2684, 2021 10.
Article in English | MEDLINE | ID: mdl-33290215

ABSTRACT

Accurate segmentation of anatomical structures is vital for medical image analysis. State-of-the-art accuracy is typically achieved by supervised learning methods, for which gathering the requisite expert-labeled image annotations in a scalable manner remains the main obstacle. Therefore, annotation-efficient methods that can produce accurate anatomical structure segmentations are highly desirable. In this work, we present the Contour Transformer Network (CTN), a one-shot anatomy segmentation method with a naturally built-in human-in-the-loop mechanism. We formulate anatomy segmentation as a contour evolution process and model the evolution behavior with graph convolutional networks (GCNs). Training the CTN model requires only one labeled image exemplar and leverages additional unlabeled data through newly introduced loss functions that measure the global shape and appearance consistency of contours. On segmentation tasks covering four different anatomies, we demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively with state-of-the-art fully supervised deep learning methods. With minimal human-in-the-loop editing feedback, the segmentation performance can be further improved to surpass the fully supervised methods.
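Contour evolution itself is a simple iterative update of vertex positions; in the toy sketch below, a hand-written shrinking rule stands in for CTN's GCN offset predictor:

```python
# Toy contour evolution: vertices move by predicted offsets for a fixed number
# of steps. A hand-written shrinking rule stands in for CTN's GCN predictor.
import numpy as np

def evolve(contour: np.ndarray, predict_offsets, steps: int = 10) -> np.ndarray:
    """contour: (N, 2) array of (y, x) vertices, updated iteratively."""
    for _ in range(steps):
        contour = contour + predict_offsets(contour)
    return contour

def toy_offsets(c):
    # Stand-in predictor: move each vertex 10% toward the contour centroid.
    return 0.1 * (c.mean(axis=0) - c)

theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
init = np.stack([50 + 30 * np.sin(theta), 50 + 30 * np.cos(theta)], axis=1)
final = evolve(init, toy_offsets)   # circle contracted toward (50, 50)
```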


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans
11.
Med Image Anal ; 66: 101811, 2020 12.
Article in English | MEDLINE | ID: mdl-32937229

ABSTRACT

Chest X-rays (CXRs) are a crucial and extraordinarily common diagnostic tool, prompting extensive research into computer-aided diagnosis (CAD) solutions. However, both high classification accuracy and meaningful model predictions that respect and incorporate clinical taxonomies are crucial for CAD usability. To this end, we present a deep hierarchical multi-label classification (HMLC) approach for CXR CAD. Unlike other hierarchical systems, we show that first training the network to model conditional probabilities directly and then refining it with unconditional probabilities is key to boosting performance. In addition, we formulate a numerically stable cross-entropy loss function for unconditional probabilities that provides concrete performance improvements. Finally, we demonstrate that HMLC can be an effective means of managing missing or incomplete labels. To the best of our knowledge, we are the first to apply HMLC to medical imaging CAD. We extensively evaluate our approach on detecting abnormality labels from the CXR arm of the Prostate, Lung, Colorectal and Ovarian (PLCO) dataset, which comprises over 198,000 manually annotated CXRs. When using complete labels, we report a mean area under the curve (AUC) of 0.887, the highest yet reported for this dataset. These results are supported by ancillary experiments on the PadChest dataset, where we also report significant improvements of 1.2% and 4.1% in AUC and average precision, respectively, over strong "flat" classifiers. Finally, we demonstrate that our HMLC approach handles incompletely labeled data much better. These performance improvements, combined with the inherent usefulness of taxonomic predictions, indicate that our approach represents a useful step forward for CXR CAD.
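The conditional-to-unconditional refinement rests on the identity P(leaf) = product of P(node | parent) along the taxonomy path, which stays numerically stable when accumulated in log space; a minimal sketch with an invented two-node taxonomy follows:

```python
# Minimal sketch of the conditional-to-unconditional identity: the network
# outputs logits for P(node | parent); the unconditional leaf probability is
# the product along the taxonomy path, accumulated in log space for stability.
# The two-node taxonomy here is invented for illustration.
import torch
import torch.nn.functional as F

def unconditional_logprob(cond_logits: dict, path: list) -> torch.Tensor:
    """log P(leaf) = sum over the path of log P(node | parent)."""
    return sum(F.logsigmoid(cond_logits[node]) for node in path)

logits = {"opacity": torch.tensor(2.0), "nodule": torch.tensor(-0.5)}
log_p = unconditional_logprob(logits, ["opacity", "nodule"])
print(float(log_p.exp()))   # P(nodule) = P(opacity) * P(nodule | opacity)
```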


Subject(s)
Lung; Tomography, X-Ray Computed; Diagnosis, Computer-Assisted; Humans; Lung/diagnostic imaging; Male; Radiography; X-Rays
12.
Med Image Anal ; 45: 94-107, 2018 04.
Article in English | MEDLINE | ID: mdl-29427897

ABSTRACT

Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both shape and volume. This prevents traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart, or kidneys. To fill this gap, we present an automated system for 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach: pancreas localization and pancreas segmentation. In the first step, we localize the pancreas within the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes recall. We show that our localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained from two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a Dice similarity coefficient (DSC) of 81.27 ± 6.27% (mean ± std. dev.) in validation, which significantly outperforms both a previous state-of-the-art method and a preliminary version of this work, which report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, on the same dataset.
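The localization stage (fuse per-view probability maps by pooling, then derive a recall-oriented bounding box) can be outlined compactly; the thresholds and margins below are illustrative only:

```python
# Simplified localization step: pool per-view probability maps, threshold,
# and pad the tight box with a margin to favor recall. Values illustrative.
import numpy as np

def fuse_and_box(prob_maps, thresh=0.8, margin=5):
    fused = np.mean(prob_maps, axis=0)          # pooling across the three views
    coords = np.argwhere(fused > thresh)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, fused.shape)
    return tuple(zip(lo, hi))                   # per-axis (start, stop)

views = [np.random.rand(64, 64, 64) for _ in range(3)]   # stand-in HNN maps
print(fuse_and_box(views))
```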


Subject(s)
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional; Neural Networks, Computer; Pancreas/anatomy & histology; Pancreas/diagnostic imaging; Tomography, X-Ray Computed; Algorithms; Automation; Deep Learning; Humans
14.
Med Phys ; 44(6): 2447-2452, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28332211

ABSTRACT

PURPOSE: We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. METHODS: Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch: for each small patch in the image, a patch-grouping step collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. RESULTS: Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much more tightly around their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth). CONCLUSION: We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach.
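Cross-channel decorrelation can be illustrated with a PCA across energy bins, concentrating correlated signal before per-channel filtering; this stands in for BM3D_PCCT's extra collaborative-filtering dimension and is not the published implementation:

```python
# Illustrative cross-channel decorrelation: a PCA across the four energy bins
# concentrates correlated signal before per-channel filtering. This stands in
# for BM3D_PCCT's extra collaborative-filtering dimension.
import numpy as np

def decorrelate(bins: np.ndarray):
    """bins: (C, H, W) energy-bin images -> (components, basis, channel means)."""
    flat = bins.reshape(bins.shape[0], -1)
    mean = flat.mean(axis=1, keepdims=True)
    _, vecs = np.linalg.eigh(np.cov(flat - mean))   # orthogonal cross-bin basis
    comps = vecs.T @ (flat - mean)
    return comps.reshape(bins.shape), vecs, mean

def recorrelate(comps, vecs, mean, shape):
    return (vecs @ comps.reshape(comps.shape[0], -1) + mean).reshape(shape)

bins = np.random.rand(4, 64, 64)                    # four energy thresholds
comps, vecs, mean = decorrelate(bins)
# ... denoise each decorrelated channel here (e.g., BM3D per channel) ...
restored = recorrelate(comps, vecs, mean, bins.shape)
assert np.allclose(restored, bins)                  # transform is invertible
```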


Subject(s)
Algorithms; Tomography, X-Ray Computed; Animals; Dogs; Humans; Phantoms, Imaging; Photons; Signal-To-Noise Ratio
15.
Med Image Comput Comput Assist Interv ; 16(Pt 3): 235-42, 2013.
Article in English | MEDLINE | ID: mdl-24505766

ABSTRACT

Automatic segmentation techniques, despite demonstrating excellent overall accuracy, can often produce inaccuracies in local regions. As a result, correcting segmentations remains an important task that is often laborious, especially when done manually for 3D datasets. This work presents a powerful tool called the Intelligent Learning-Based Editor of Segmentations (IntellEditS) that minimizes user effort and further improves segmentation accuracy. The tool couples interactive learning with an energy-minimization approach to editing. Based on interactive user input, a discriminative classifier is trained and applied to the edited 3D region to produce soft voxel labeling. The labels are integrated into a novel energy functional along with the existing segmentation and image data. Unlike the state of the art, IntellEditS is designed to correct segmentation results represented not only as masks but also as meshes. In addition, IntellEditS accepts intuitive boundary-based user interactions. The versatility and performance of IntellEditS are demonstrated on both MRI and CT datasets consisting of varied anatomical structures and resolutions.
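A highly simplified stand-in for the editing step is a per-voxel blend of the classifier's soft labels with the prior segmentation inside the edited region; the real method minimizes a richer energy functional with image and boundary terms:

```python
# Highly simplified stand-in for the editing step: blend classifier soft
# labels with the prior segmentation inside the edited region. The published
# method instead minimizes an energy functional with image/boundary terms.
import numpy as np

def edit_region(existing, soft_labels, region, alpha=0.7):
    """Relabel `region` by weighting classifier output against the prior."""
    blended = alpha * soft_labels + (1 - alpha) * existing.astype(float)
    out = existing.copy()
    out[region] = blended[region] > 0.5
    return out

seg = np.zeros((32, 32), dtype=bool); seg[8:20, 8:20] = True    # prior mask
soft = np.random.rand(32, 32)            # stand-in classifier probabilities
roi = np.zeros_like(seg); roi[16:28, 16:28] = True              # edited region
edited = edit_region(seg, soft, roi)
```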


Subject(s)
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Software; Tomography, X-Ray Computed/methods; Documentation/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
16.
IEEE Trans Pattern Anal Mach Intell ; 34(7): 1368-80, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22184255

ABSTRACT

Photometric stereo and depth-map estimation provide a way to construct a depth map from images of an object under one viewpoint but with varying illumination directions. While estimating surface normals using the Lambertian model of reflectance is well established, depth-map estimation is an ongoing field of research, and dealing with image noise is an active topic. Using the zero-mean Gaussian model of image noise, this paper introduces a method for maximum-likelihood depth-map estimation that accounts for the propagation of noise through all steps of the estimation process. Solving for maximum-likelihood depth-map estimates involves an independent sequence of nonlinear regression estimates, one for each pixel, followed by a single large and sparse linear regression estimate. The linear system employs anisotropic weights, which arise naturally and differ in value from those in related work. The new depth-map estimation method remains efficient and fast, making it practical for realistic image sizes. Experiments using synthetic images demonstrate the method's ability to robustly estimate depth maps under the noise model. The method's practical benefits in challenging imaging scenarios are illustrated by experiments using the Extended Yale Face Database B and an extensive dataset of 500 reflected-light microscopy image sequences.
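The final sparse linear stage (recovering depth from estimated surface gradients) can be sketched as a least-squares solve over finite-difference constraints; the version below uses uniform weights where the paper derives anisotropic maximum-likelihood weights:

```python
# Sketch of the sparse linear stage: recover depth z from estimated gradients
# p = dz/dx, q = dz/dy via least squares over finite-difference constraints.
# Uniform weights are used here where the paper derives anisotropic ML weights.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def depth_from_gradients(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    H, W = p.shape
    idx = lambda y, x: y * W + x
    A = lil_matrix((2 * H * W, H * W))
    b, r = [], 0
    for y in range(H):
        for x in range(W):
            if x + 1 < W:    # z[y, x+1] - z[y, x] = p[y, x]
                A[r, idx(y, x + 1)], A[r, idx(y, x)] = 1, -1
                b.append(p[y, x]); r += 1
            if y + 1 < H:    # z[y+1, x] - z[y, x] = q[y, x]
                A[r, idx(y + 1, x)], A[r, idx(y, x)] = 1, -1
                b.append(q[y, x]); r += 1
    z = lsqr(A.tocsr()[:r], np.array(b))[0]
    return z.reshape(H, W) - z.min()    # depth recovered up to a constant

# Synthetic slanted plane: constant gradients everywhere.
plane = depth_from_gradients(np.full((16, 16), 0.5), np.full((16, 16), -0.2))
```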
