Results 1 - 20 of 22
1.
Eur Radiol ; 33(9): 6020-6032, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37071167

ABSTRACT

OBJECTIVE: To assess the performance of convolutional neural networks (CNNs) for semiautomated segmentation of hepatocellular carcinoma (HCC) tumors on MRI. METHODS: This retrospective single-center study included 292 patients (237 M/55 F, mean age 61 years) with pathologically confirmed HCC who underwent MRI before surgery between 08/2015 and 06/2019. The dataset was randomly divided into training (n = 195), validation (n = 66), and test (n = 31) sets. Volumes of interest (VOIs) were manually placed on index lesions by 3 independent radiologists on different sequences (T2-weighted imaging [WI], T1WI pre- and post-contrast on arterial [AP], portal venous [PVP], delayed [DP, 3 min post-contrast], and hepatobiliary phases [HBP, when using gadoxetate], and diffusion-weighted imaging [DWI]). Manual segmentation was used as ground truth to train and validate a CNN-based pipeline. For semiautomated segmentation of tumors, we selected a random pixel inside the VOI, and the CNN provided two outputs: a single-slice and a volumetric output. Segmentation performance and inter-observer agreement were analyzed using the 3D Dice similarity coefficient (DSC). RESULTS: A total of 261 HCCs were segmented on the training/validation sets, and 31 on the test set. The median lesion size was 3.0 cm (IQR 2.0-5.2 cm). Mean DSC (test set) varied with the MRI sequence, ranging between 0.442 (ADC) and 0.778 (high b-value DWI) for single-slice segmentation, and between 0.305 (ADC) and 0.667 (T1WI pre) for volumetric segmentation. Comparison between the two models showed better performance for single-slice segmentation, with statistical significance on T2WI, T1WI-PVP, DWI, and ADC. Inter-observer reproducibility of segmentation showed a mean DSC of 0.71 in lesions between 1 and 2 cm, 0.85 in lesions between 2 and 5 cm, and 0.82 in lesions > 5 cm. CONCLUSION: CNN models have fair to good performance for semiautomated HCC segmentation, depending on the sequence and tumor size, with better performance for the single-slice approach. Refinement of volumetric approaches is needed in future studies. KEY POINTS: • Semiautomated single-slice and volumetric segmentation using convolutional neural network (CNN) models provided fair to good performance for hepatocellular carcinoma segmentation on MRI. • CNN segmentation accuracy for HCC depends on the MRI sequence and tumor size, with the best results on diffusion-weighted imaging and pre-contrast T1-weighted imaging, and for larger lesions.
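To illustrate the evaluation metric used above, here is a minimal sketch (not from the paper) of the 3D Dice similarity coefficient between a predicted and a reference binary tumor mask, assuming NumPy volumes of identical shape:

import numpy as np

def dice_3d(pred: np.ndarray, ref: np.ndarray) -> float:
    """3D Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

A DSC of 1.0 means perfect overlap; the 0.3-0.8 range reported above reflects partial overlap that varies with sequence and lesion size.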


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Humans, Middle Aged, Hepatocellular Carcinoma/diagnostic imaging, Hepatocellular Carcinoma/pathology, Retrospective Studies, Reproducibility of Results, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Neural Networks (Computer)
2.
Radiol Artif Intell ; 4(5): e210315, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204533

ABSTRACT

Purpose: To demonstrate the value of pretraining with millions of radiologic images, compared with ImageNet photographic images, on downstream medical applications when using transfer learning. Materials and Methods: This retrospective study included patients who underwent a radiologic study between 2005 and 2020 at an outpatient imaging facility. Key images and associated labels from the studies were retrospectively extracted from the original study interpretation. These images were used for RadImageNet model training with random weight initialization. The RadImageNet models were compared with ImageNet models using the area under the receiver operating characteristic curve (AUC) for eight classification tasks and using Dice scores for two segmentation problems. Results: The RadImageNet database consists of 1.35 million annotated medical images in 131 872 patients who underwent CT, MRI, and US for musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, abdominal, and pulmonary pathologic conditions. For transfer learning tasks on small datasets-thyroid nodules (US), breast masses (US), anterior cruciate ligament injuries (MRI), and meniscal tears (MRI)-the RadImageNet models demonstrated a significant advantage (P < .001) over ImageNet models (9.4%, 4.0%, 4.8%, and 4.5% AUC improvements, respectively). For larger datasets-pneumonia (chest radiography), COVID-19 (CT), SARS-CoV-2 (CT), and intracranial hemorrhage (CT)-the RadImageNet models also showed improved AUC (P < .001) by 1.9%, 6.1%, 1.7%, and 0.9%, respectively. Additionally, lesion localizations of the RadImageNet models were improved by 64.6% and 16.4% on the thyroid and breast US datasets, respectively. Conclusion: RadImageNet pretrained models demonstrated better interpretability compared with ImageNet models, especially for smaller radiologic datasets. Keywords: CT, MR Imaging, US, Head/Neck, Thorax, Brain/Brain Stem, Evidence-based Medicine, Computer Applications-General (Informatics). Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Cadrin-Chênevert in this issue.
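As a rough illustration of the transfer-learning setup being compared, the sketch below builds the same backbone with either ImageNet weights or weights loaded from a local RadImageNet-style checkpoint; the file name is a placeholder assumption, not an official distribution path:

import tensorflow as tf

# Hypothetical local checkpoint with RadImageNet-pretrained backbone weights (assumption).
RADIMAGENET_WEIGHTS = "RadImageNet-ResNet50_notop.h5"

def build_classifier(pretrained="imagenet", num_classes=2):
    """ResNet50 backbone initialized from ImageNet or RadImageNet-style weights."""
    weights = "imagenet" if pretrained == "imagenet" else None
    base = tf.keras.applications.ResNet50(
        include_top=False, weights=weights, input_shape=(224, 224, 3), pooling="avg")
    if pretrained == "radimagenet":
        base.load_weights(RADIMAGENET_WEIGHTS)  # transfer radiologic-image features
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)

Both variants are then fine-tuned on the downstream task, and the resulting AUCs are compared as described in the abstract.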

3.
IEEE Trans Med Imaging ; 41(12): 3509-3519, 2022 12.
Article in English | MEDLINE | ID: mdl-35767509

ABSTRACT

The recent success of learning-based algorithms can be greatly attributed to the immense amount of annotated data used for training. Yet, many datasets lack annotations due to the high costs associated with labeling, resulting in degraded performance of deep learning methods. Self-supervised learning is frequently adopted to mitigate the reliance on massive labeled datasets, since it exploits unlabeled data to learn relevant feature representations. In this work, we propose SS-StyleGAN, a self-supervised approach for image annotation and classification suitable for extremely small annotated datasets. This novel framework adds self-supervision to the StyleGAN architecture by integrating an encoder that learns the embedding into the StyleGAN latent space, which is well known for its disentangled properties. The learned latent space enables the smart selection of representatives from the data to be labeled for improved classification performance. We show that the proposed method attains strong classification results using small labeled datasets of sizes 50 and even 10. We demonstrate the superiority of our approach for the tasks of COVID-19 and liver tumor pathology identification.
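A minimal sketch of the kind of latent-space representative selection described above, assuming the encoder's latent vectors are already available as a NumPy array (the exact selection criterion in SS-StyleGAN may differ):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def select_representatives(latents: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` diverse samples (by index) from encoder latents for annotation."""
    kmeans = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(latents)
    # For each cluster center, take the closest real sample as its representative.
    idx, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, latents)
    return idx

Annotating only these representatives, rather than random samples, is what lets a classifier work with as few as 10-50 labels.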


Subjects
COVID-19, Data Curation, Humans, Algorithms, Supervised Machine Learning
4.
Cells ; 10(12)2021 11 29.
Article in English | MEDLINE | ID: mdl-34943859

ABSTRACT

We present a new classification approach for live cells, integrating the spatial and temporal fluctuation maps and the quantitative optical thickness map of the cell, as acquired by common-path quantitative-phase dynamic imaging and processed with a deep-learning framework. We demonstrate this approach by classifying between two types of cancer cell lines of different metastatic potential originating from the same patient. It is based on the fact that both the cancer-cell morphology and its mechanical properties, as indicated by the cell's temporal and spatial fluctuations, change over the disease progression. We tested different fusion methods for inputting both the morphological optical thickness maps and the coinciding spatio-temporal fluctuation maps of the cells to the classifying network framework. We show that the proposed integrated triple-path deep-learning architecture improves over deep-learning classification based only on the cell's morphological evaluation via its quantitative optical thickness map, demonstrating the benefit of acquiring the cells over time and extracting their spatio-temporal fluctuation maps as additional inputs to the classifying deep neural network.
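A minimal sketch of a triple-path, late-fusion classifier of the kind described above, assuming each cell is represented by three single-channel maps of equal size (the actual architecture and fusion strategy in the paper may differ):

import tensorflow as tf
from tensorflow.keras import layers

def branch(inp):
    """Small convolutional feature extractor applied to one input map."""
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    return layers.GlobalAveragePooling2D()(x)

def triple_path_model(shape=(128, 128, 1)):
    """Fuse optical-thickness, spatial- and temporal-fluctuation maps for binary classification."""
    thickness = tf.keras.Input(shape, name="optical_thickness")
    spatial = tf.keras.Input(shape, name="spatial_fluctuation")
    temporal = tf.keras.Input(shape, name="temporal_fluctuation")
    fused = layers.concatenate([branch(thickness), branch(spatial), branch(temporal)])
    out = layers.Dense(1, activation="sigmoid")(fused)  # e.g., low vs. high metastatic potential
    return tf.keras.Model([thickness, spatial, temporal], out)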


Subjects
Algorithms, Deep Learning, Neoplasms, Tumor Cell Line, Humans, Theoretical Models, Neoplasms/pathology, Time Factors
5.
Int J Comput Assist Radiol Surg ; 16(1): 133-140, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33211235

ABSTRACT

PURPOSE: Atrial fibrillation (AF), the most prevalent form of cardiac arrhythmia, afflicts millions worldwide. Here, we developed an imaging algorithm for the diagnosis and online guidance of radio-frequency ablation, which is currently the first line of treatment for AF and other arrhythmias. This requires the simultaneous mapping of the left atrium anatomy and the propagation of the electrical activation wave, and, for some arrhythmias, within a single heartbeat. METHODS: We constructed a multi-frequency ultrasonic system consisting of 64 elements mounted on a spherical basket, operated in a synthetic aperture mode, that allows instant localization of thousands of points on the endocardial surface and yields an MRI-like geometric reconstruction. RESULTS: The system and surface localization algorithm were extensively tested and validated in a series of in silico and in vitro experiments. We report considerable improvement over traditional methods, along with theoretical results that help refine the extracted shape. The results in a left atrium-shaped silicone phantom were accurate to within 4 mm. CONCLUSIONS: A novel catheter system consisting of a basket of splines with multiple multi-frequency ultrasonic elements allows 3D anatomical mapping and real-time tracking of the entire heart chamber within a single heartbeat. These design parameters achieve highly acceptable reconstruction accuracy.


Subjects
Atrial Fibrillation/diagnostic imaging, Catheter Ablation/methods, Atrial Fibrillation/surgery, Computer Simulation, Heart Atria/physiopathology, Heart Atria/surgery, Humans, Magnetic Resonance Imaging, Ultrasound
6.
Acad Radiol ; 26(5): 626-631, 2019 05.
Article in English | MEDLINE | ID: mdl-30097402

ABSTRACT

RATIONALE AND OBJECTIVES: The purpose of this paper is to describe the integration of a commercial chest CT computer-aided detection (CAD) system into the clinical radiology reporting workflow and to perform an initial investigation of its impact on radiologist efficiency. It seeks to complement research into the sensitivity and specificity of stand-alone CAD systems by focusing on report generation time when the CAD is integrated into the clinical workflow. MATERIALS AND METHODS: A commercial chest CT CAD software package that provides automated detection and measurement of lung nodules, the ascending and descending aorta, and pleural effusion was integrated with a commercial radiology report dictation application. The CAD system automatically prepopulated a radiology report template, thus offering the potential for increased efficiency. The integrated system was evaluated using 40 scans from a publicly available lung nodule database. Each scan was read using two methods: (1) without CAD analytics, i.e., a manually populated report with measurements made using electronic calipers, and (2) with CAD analytics to prepopulate the report for reader review and editing. Three radiologists participated as readers in this study. RESULTS: CAD assistance reduced reading times by 7%-44%, relative to the conventional manual method, for the three radiologists, measured from opening of the case to signing of the final report. CONCLUSION: This study provides an investigation of the impact of CAD detection and measurement on chest CT reporting within a clinical workflow. Prepopulation of a report with automated nodule and aorta measurements yielded substantial time savings relative to manual measurement and entry.


Subjects
Efficiency, Lung Neoplasms/diagnostic imaging, Multiple Pulmonary Nodules/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation, Radiology/organization & administration, Solitary Pulmonary Nodule/diagnostic imaging, Humans, Thoracic Radiography, Sensitivity and Specificity, Software, Time Factors, X-Ray Computed Tomography, Workflow
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 886-889, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946036

ABSTRACT

Training data is the key component in designing algorithms for medical image analysis, and in many cases it is the main bottleneck in achieving good results. Recent progress in image generation has enabled the training of neural network based solutions using synthetic data. A key factor in the generation of new samples is controlling the important appearance features and potentially being able to generate a new sample of a specific class with different variants. In this work we propose synthesizing new data by mixing the class-specified and class-unspecified representations of different factors in the training data, which are separated using a disentanglement-based scheme. Our experiments on liver lesion classification in CT show an average improvement of 7.4% in accuracy over the baseline training scheme.
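A minimal sketch of the mixing step described above, assuming a disentangling encoder has already produced a class-specified code and a class-unspecified (residual) code per training sample; a decoder (not shown) would map the mixed codes back to synthetic images:

import numpy as np

rng = np.random.default_rng(0)

def mix_representations(class_codes: np.ndarray, residual_codes: np.ndarray) -> np.ndarray:
    """Pair each class-specified code with a residual code drawn from a different sample.

    Both inputs are (N, d) arrays; the output rows are new latent codes whose
    class content is preserved while the unspecified factors are swapped.
    """
    perm = rng.permutation(len(residual_codes))
    return np.concatenate([class_codes, residual_codes[perm]], axis=1)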


Subjects
Liver/diagnostic imaging, X-Ray Computed Tomography, Algorithms, Humans, Liver Neoplasms, Neural Networks (Computer)
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 895-898, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946038

ABSTRACT

We present an automatic method for joint liver lesion segmentation and classification using a hierarchical fine-tuning framework. Our dataset is small, containing 332 2-D CT examinations with lesions annotated into 3 types: cysts, hemangiomas, and metastases. Using a cascaded U-net that performs segmentation and classification simultaneously, we trained a strong lesion segmentation model on the dataset of the MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge. We used the trained weights to fine-tune a slightly modified model and obtain improved lesion segmentation and classification on the smaller dataset. Since pre-training was done with similar data on a related task, we were able to learn more representative features (especially higher-level features in the U-Net's encoder) and improve pixel-wise classification results. We show an improvement of over 10% in Dice score and classification accuracy compared to a baseline model. We further improve the classification performance by hierarchically freezing the encoder part of the network, achieving an improvement of over 15% in Dice score and classification accuracy. We compare our results with an existing method and show an improvement of 14% in the success rate and 12% in the classification accuracy.
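A minimal sketch of the encoder-freezing step in such a fine-tuning scheme, assuming a Keras U-Net whose encoder layers share a common name prefix and a hypothetical LiTS-pretrained checkpoint file (both are assumptions, not the paper's actual code):

import tensorflow as tf

def fine_tune_from_lits(model: tf.keras.Model, freeze_prefix: str = "encoder") -> tf.keras.Model:
    """Load pretrained weights, freeze encoder layers, and recompile for fine-tuning."""
    model.load_weights("lits_pretrained.h5")  # hypothetical checkpoint path
    for layer in model.layers:
        if layer.name.startswith(freeze_prefix):
            layer.trainable = False  # keep features learned on the larger LiTS dataset fixed
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

Hierarchical variants freeze progressively more (or fewer) encoder blocks between fine-tuning rounds.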


Subjects
Liver Neoplasms, Humans, Liver Neoplasms/diagnostic imaging, X-Ray Computed Tomography
9.
Acad Radiol ; 24(12): 1501-1509, 2017 12.
Article in English | MEDLINE | ID: mdl-28778512

ABSTRACT

RATIONALE AND OBJECTIVES: This study aimed to provide decision support for the human expert in categorizing liver metastases by their primary cancer site. Currently, once a liver metastasis is detected, the process of finding the primary site is challenging, time-consuming, and requires multiple examinations. The proposed system can support the human expert in localizing the search for the cancer source by prioritizing the examinations toward probable cancer sites. MATERIALS AND METHODS: The suggested method is a learning-based approach using computed tomography (CT) data as the input source. Each metastasis is circumscribed by a radiologist in portal phase and in non-contrast CT images. Visual features are computed from these images, combined into feature vectors, and classified using support vector machine classification. A variety of different features were explored and tested. A leave-one-out cross-validation technique was conducted for classification evaluation. The methods were developed on a set of 50 lesion cases taken from 29 patients. RESULTS: Experiments were conducted on a separate set of 142 lesion cases taken from 71 patients with four different primary sites. Multiclass categorization (four classes) achieved low accuracy. However, the proposed system was found to provide promising results of 83% and 99% for the top-2 and top-3 classification tasks, respectively. Moreover, when compared to the experts' ability to distinguish the different metastases, the system showed improved results. CONCLUSIONS: Automated systems, such as the one proposed, show promising new results and demonstrate new capabilities that, in the future, will be able to provide decision and treatment support for radiologists and oncologists, toward more efficient detection and treatment of cancer.
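A minimal sketch of how the top-k evaluation described above could be computed with an SVM and leave-one-out cross-validation, assuming the per-lesion visual features have already been extracted (feature extraction itself is not shown):

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

def top_k_accuracy_loo(features: np.ndarray, sites: np.ndarray, k: int) -> float:
    """Leave-one-out top-k accuracy for predicting the primary cancer site."""
    clf = SVC(kernel="rbf", probability=True)
    proba = cross_val_predict(clf, features, sites, cv=LeaveOneOut(), method="predict_proba")
    classes = np.unique(sites)                      # column order used by cross_val_predict
    top_k = np.argsort(proba, axis=1)[:, -k:]       # indices of the k most probable classes
    true_idx = np.searchsorted(classes, sites)
    return float(np.mean([t in row for t, row in zip(true_idx, top_k)]))

Top-2 and top-3 accuracy (k = 2 and k = 3) correspond to the 83% and 99% figures reported above.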


Subjects
Algorithms, Decision Support Techniques, Computer-Assisted Image Processing, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/secondary, X-Ray Computed Tomography, Humans, Neoplasms of Unknown Primary Site, Support Vector Machine
10.
Cytometry A ; 91(5): 482-493, 2017 05.
Article in English | MEDLINE | ID: mdl-28426133

ABSTRACT

We present cytometric classification of live healthy and cancerous cells by using the spatial morphological and textural information found in the label-free quantitative phase images of the cells. We compare both healthy cells to primary tumor cells and primary tumor cells to metastatic cancer cells, where tumor biopsies and normal tissues were isolated from the same individuals. To mimic analysis of liquid biopsies by flow cytometry, the cells were imaged while unattached to the substrate. We used a low-coherence off-axis interferometric phase microscopy setup, which allows a single-exposure acquisition mode and thus is suitable for quantitative imaging of dynamic cells during flow. After acquisition, the optical path delay maps of the cells were extracted and then used to calculate 15 parameters derived from the cellular 3D morphology and texture. Upon analyzing tens of cells in each group, we found high statistical significance in the difference between the groups in most of the parameters calculated, with the same trends for all statistically significant parameters. Furthermore, a specially designed machine learning algorithm, applied to the features extracted from the phase maps, classified the correct cell type (healthy/cancer/metastatic) with 81-93% sensitivity and 81-99% specificity. The quantitative phase imaging approach for liquid biopsies presented in this paper could be the basis for advanced techniques of staging freshly isolated live cancer cells in imaging flow cytometers. © 2017 International Society for Advancement of Cytometry.


Subjects
Flow Cytometry/methods, Holography/methods, Microscopy/methods, Neoplasms/blood, Algorithms, Cell Count, Humans, Liquid Biopsy, Neoplasms/pathology
11.
IEEE Trans Biomed Eng ; 64(6): 1380-1392, 2017 06.
Article in English | MEDLINE | ID: mdl-27608447

ABSTRACT

OBJECTIVE: We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. METHODS: Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. RESULTS: We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign/malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-rays, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (p = 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p = 0.03). For liver lesion classification, an improvement of 6% in sensitivity and 2% in specificity was obtained (p = 0.001). CONCLUSION: We demonstrated that classification based on an informatively selected set of words results in significant improvement. SIGNIFICANCE: Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
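A minimal sketch of a mutual-information-based selection of task-relevant visual words, assuming BoVW count histograms are already computed for each training image (the exact criterion and weighting in the paper may differ):

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_task_driven_words(word_histograms: np.ndarray, labels: np.ndarray, n_words: int) -> np.ndarray:
    """Rank visual words by mutual information with the task labels and keep the top ones.

    `word_histograms` is an (n_images, n_total_words) matrix of BoVW counts;
    the returned indices define the task-driven sub-dictionary.
    """
    mi = mutual_info_classif(word_histograms, labels, discrete_features=True, random_state=0)
    return np.argsort(mi)[::-1][:n_words]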


Subjects
Machine Learning, Automated Pattern Recognition/methods, Computer-Assisted Radiographic Image Interpretation/methods, Thoracic Radiography/methods, X-Ray Computed Tomography/methods, Dictionaries as Topic, Humans, Reproducibility of Results, Sensitivity and Specificity, Subtraction Technique
12.
IEEE J Biomed Health Inform ; 20(6): 1585-1594, 2016 11.
Article in English | MEDLINE | ID: mdl-26372661

ABSTRACT

The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions ("dual dictionaries" of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastases, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.
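A minimal sketch of the dual-dictionary representation described above, assuming patch descriptors have already been extracted separately from the lesion interior and margin (k-means stands in for whatever dictionary-learning step was actually used):

import numpy as np
from sklearn.cluster import KMeans

def build_dictionaries(interior_patches: np.ndarray, margin_patches: np.ndarray, n_words: int = 100):
    """Learn separate visual-word dictionaries for interior and margin patches."""
    dict_in = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(interior_patches)
    dict_mg = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(margin_patches)
    return dict_in, dict_mg

def dual_bovw_histogram(interior_patches, margin_patches, dict_in, dict_mg) -> np.ndarray:
    """Concatenate normalized interior and margin visual-word histograms for one lesion."""
    h_in = np.bincount(dict_in.predict(interior_patches), minlength=dict_in.n_clusters)
    h_mg = np.bincount(dict_mg.predict(margin_patches), minlength=dict_mg.n_clusters)
    return np.concatenate([h_in / max(h_in.sum(), 1), h_mg / max(h_mg.sum(), 1)])

The concatenated histogram is the feature vector fed to the lesion classifier.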


Subjects
Liver Neoplasms/diagnostic imaging, Liver/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Algorithms, Humans
13.
IEEE Trans Med Imaging ; 35(2): 645-53, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26452277

ABSTRACT

Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we apply a multi-view classifier to the task. We describe a two-step classification method that is based on a view-level decision, implemented by a logistic regression classifier, followed by a stochastic combination of the two view-level indications into a single benign or malignant decision. The proposed method was evaluated on a large number of cases from the Digital Database for Screening Mammography (DDSM). Experimental results demonstrate the advantage of the proposed multi-view classification algorithm, which automatically learns the best way to combine the views.
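A minimal sketch of the two-step idea, assuming a feature vector per view (CC and MLO) for each microcalcification cluster; a simple weighted average stands in for the paper's learned stochastic combination:

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_view_classifiers(X_cc: np.ndarray, X_mlo: np.ndarray, y: np.ndarray):
    """Train one logistic-regression classifier per mammographic view."""
    return (LogisticRegression(max_iter=1000).fit(X_cc, y),
            LogisticRegression(max_iter=1000).fit(X_mlo, y))

def combine_views(clf_cc, clf_mlo, x_cc, x_mlo, w: float = 0.5) -> np.ndarray:
    """Combine the two view-level malignancy probabilities into a single decision."""
    p = w * clf_cc.predict_proba(x_cc)[:, 1] + (1 - w) * clf_mlo.predict_proba(x_mlo)[:, 1]
    return (p >= 0.5).astype(int)  # 1 = malignant, 0 = benign

In the paper the combination is learned from data rather than fixed, which is what "automatically learns the best way to combine the views" refers to.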


Subjects
Breast Neoplasms/diagnostic imaging, Calcinosis/diagnostic imaging, Mammography/methods, Computer-Assisted Radiographic Image Interpretation/methods, Algorithms, Breast/diagnostic imaging, Female, Humans
14.
J Med Imaging (Bellingham) ; 2(3): 034502, 2015 Jul.
Article in English | MEDLINE | ID: mdl-27014712

ABSTRACT

This paper presents a fully automated method for detection and segmentation of liver metastases in serial computed tomography (CT) examinations. Our method uses a given two-dimensional baseline segmentation mask to identify the lesion location in the follow-up CT and to locate surrounding tissues, using nonrigid image registration and template matching, in order to reduce the search area for segmentation. Adaptive region growing and mean-shift clustering are used to obtain the lesion segmentation. Our database contains 127 cases from the CT abdomen unit at Sheba Medical Center. Development of the methodology was conducted using 22 of the cases, and testing was conducted on the remaining 105 cases. Results show that 94 of the 105 lesions were detected, for an overall matching rate of 90%, with the correct RECIST 1.1 assessment made in 88% of the cases. The average Dice index was [Formula: see text], the average sensitivity was [Formula: see text], and the positive predictive value was [Formula: see text]. In 92% of the rated cases, the results were classified by the radiologists as acceptable or better. The segmentation performance, matching rate, and RECIST assessment results hence appear promising.
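A highly simplified sketch of the region-growing step, assuming the registration and template-matching stages have already produced a seed point inside the lesion on the follow-up scan (the paper's adaptive criteria and mean-shift refinement are omitted):

import numpy as np
from scipy import ndimage

def grow_lesion(volume: np.ndarray, seed: tuple, tol: float = 30.0) -> np.ndarray:
    """Keep the connected component of voxels within `tol` HU of the seed intensity."""
    candidate = np.abs(volume - volume[seed]) <= tol
    labels, _ = ndimage.label(candidate)
    return labels == labels[seed]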

15.
PLoS One ; 9(11): e113428, 2014.
Article in English | MEDLINE | ID: mdl-25409162

ABSTRACT

A splicing mutation in the ikbkap gene causes Familial Dysautonomia (FD), affecting IKAP protein expression levels and the proper development and function of the peripheral nervous system (PNS). Here we attempted to elucidate the role of IKAP in PNS development in the chick embryo and found that IKAP is required for proper axonal outgrowth, branching, and peripheral target innervation. Moreover, we demonstrate that IKAP colocalizes with activated JNK (pJNK), dynein, and β-tubulin at the axon terminals of dorsal root ganglia (DRG) neurons, and may be involved in the transport of specific target-derived signals required for transcription of JNK- and NGF-responsive genes in the nucleus. These results suggest a novel role for IKAP in neuronal transport and specific signaling-mediated transcription, and provide, for the first time, the basis for a molecular mechanism behind the FD phenotype.


Subjects
Carrier Proteins/metabolism, JNK Mitogen-Activated Protein Kinases/metabolism, Nerve Growth Factor/metabolism, Peripheral Nervous System/pathology, Animals, Axons/metabolism, Carrier Proteins/antagonists & inhibitors, Carrier Proteins/genetics, Cell Movement, Cultured Cells, Chick Embryo, Chickens, Dyneins/metabolism, Familial Dysautonomia/genetics, Familial Dysautonomia/pathology, Spinal Ganglia/cytology, Fluorescence Microscopy, Neurons/cytology, Neurons/metabolism, Peripheral Nervous System/growth & development, RNA Interference, Small Interfering RNA/metabolism, Signal Transduction, Tubulin/chemistry, Tubulin/metabolism
16.
Med Phys ; 39(9): 5405-18, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22957608

ABSTRACT

PURPOSE: To develop a method to quantify the margin sharpness of lesions on CT and to evaluate it in simulations and in CT scans of liver and lung lesions. METHODS: The authors computed two attributes of margin sharpness: the intensity difference between a lesion and its surroundings, and the sharpness of the intensity transition across the lesion boundary. These two attributes were extracted from sigmoid curves fitted along lines automatically drawn orthogonal to the lesion margin. The authors then represented the margin characteristics for each lesion by a feature vector containing histograms of these parameters. The authors created 100 simulated CT scans of lesions over a range of intensity difference and margin sharpness, and used the concordance correlation between the known parameter and the corresponding computed feature as a measure of performance. The authors also evaluated their method in 79 liver lesions (44 patients: 23 M, 21 F, mean age 61) and 58 lung nodules (57 patients: 24 M, 33 F, mean age 66). The methodology takes into consideration the boundary of the liver and lung during feature extraction in clinical images, to ensure that the margin features do not get contaminated by anatomy other than the normal organ surrounding the lesions. For evaluation in these clinical images, the authors created subjective independent reference standards for pairwise margin sharpness similarity in the liver and lung cohorts, and compared rank orderings of similarity obtained using their sharpness feature to those expected from the reference standards, using the mean normalized discounted cumulative gain (NDCG) over all query images. In addition, the authors compared their proposed feature with two existing techniques for lesion margin characterization using the simulated and clinical datasets. The authors also evaluated the robustness of their features against variations in delineation of the lesion margin by simulating five types of deformations of the lesion margin. Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. RESULTS: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (s.d.) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors' proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in the dataset containing liver lesions, and by 4.5% and 5% in the dataset containing lung nodules. Equivalence testing showed that the authors' feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. CONCLUSIONS: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.
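A minimal sketch of the sigmoid fit behind the two margin attributes described above, assuming a 1-D intensity profile sampled along a line orthogonal to the lesion boundary (the paper's initialization and outlier handling are omitted):

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, low, high, center, slope):
    """Sigmoid model of the intensity transition across the lesion boundary."""
    return low + (high - low) / (1.0 + np.exp(-slope * (x - center)))

def margin_attributes(profile: np.ndarray):
    """Return (intensity difference, transition sharpness) for one profile."""
    x = np.arange(profile.size, dtype=float)
    p0 = [float(profile.min()), float(profile.max()), profile.size / 2.0, 1.0]
    (low, high, _, slope), _ = curve_fit(sigmoid, x, profile, p0=p0, maxfev=5000)
    return high - low, abs(slope)

Histograms of these two values over many profiles per lesion form the margin feature vector.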


Subjects
Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Humans, Liver Neoplasms/diagnostic imaging, Lung Neoplasms/diagnostic imaging
17.
Med Phys ; 38(11): 5879-86, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22047352

ABSTRACT

PURPOSE: It is challenging to reproducibly measure and compare cancer lesions on numerous follow-up studies; the process is time-consuming and error-prone. In this paper, we show a method to automatically and reproducibly identify and segment abnormal lymph nodes in serial computed tomography (CT) exams. METHODS: Our method leverages initial identification of enlarged (abnormal) lymph nodes in the baseline scan. We then identify an approximate region for the node in the follow-up scans using nonrigid image registration. The baseline scan is also used to locate regions of normal, non-nodal tissue surrounding the lymph node and to map them onto the follow-up scans, in order to reduce the search space for locating the lymph node on the follow-up scans. Adaptive region-growing and clustering algorithms are then used to obtain the final contours for segmentation. We applied our method to 24 distinct enlarged lymph nodes at multiple time points from 14 patients. The scan at the earlier time point was used as the baseline scan for evaluating the follow-up scan, resulting in 70 total test cases (e.g., a series of scans obtained at 4 time points results in 3 test cases). For each of the 70 cases, a "reference standard" was obtained by manual segmentation by a radiologist. Our method was evaluated by comparing its assessment according to response evaluation criteria in solid tumors (RECIST) with the RECIST assessment made using the reference standard segmentations, and by calculating the node overlap ratio and Hausdorff distance between the computer- and radiologist-generated contours. RESULTS: Compared to the reference standard, our method made the correct RECIST assessment for all 70 cases. The average overlap ratio was 80.7% ± 9.7% s.d., and the average Hausdorff distance was 3.2 ± 1.8 mm s.d. The concordance correlation between automated and manual segmentations was 0.978 (95% confidence interval 0.962, 0.984). The 100% agreement in our sample between our method and the standard with regard to RECIST classification suggests that the true disagreement rate is no more than 6%. CONCLUSIONS: Our automated lymph node segmentation method achieves excellent overall segmentation performance and provides equivalent RECIST assessment. It potentially will be useful to streamline and improve cancer lesion measurement and tracking and to improve assessment of cancer treatment response.
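A minimal sketch of the two agreement metrics reported above, assuming binary 3-D masks with voxel spacing in millimetres; intersection over union stands in for the paper's node overlap ratio, and the Hausdorff distance is computed here over all foreground voxel coordinates rather than extracted contours:

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_and_hausdorff(auto_mask: np.ndarray, manual_mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Return (overlap ratio, symmetric Hausdorff distance in mm) for two binary masks."""
    overlap = (np.logical_and(auto_mask, manual_mask).sum()
               / np.logical_or(auto_mask, manual_mask).sum())
    a = np.argwhere(auto_mask) * np.asarray(spacing)    # voxel indices -> physical coordinates
    m = np.argwhere(manual_mask) * np.asarray(spacing)
    hd = max(directed_hausdorff(a, m)[0], directed_hausdorff(m, a)[0])
    return overlap, hd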


Subjects
Computer-Assisted Image Processing/methods, Lymphoma/diagnostic imaging, X-Ray Computed Tomography/methods, Automation, Humans, Lymph Nodes/diagnostic imaging, Time Factors
18.
Radiology ; 256(1): 243-52, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20505065

ABSTRACT

PURPOSE: To develop a system to facilitate the retrieval of radiologic images that contain similar-appearing lesions and to perform a preliminary evaluation of this system with a database of computed tomographic (CT) images of the liver and an external standard of image similarity. MATERIALS AND METHODS: Institutional review board approval was obtained for retrospective analysis of deidentified patient images. Thereafter, 30 portal venous phase CT images of the liver exhibiting one of three types of liver lesions (13 cysts, seven hemangiomas, 10 metastases) were selected. A radiologist used a controlled lexicon and a tool developed for complete and standardized description of lesions to identify and annotate each lesion with semantic features. In addition, this software automatically computed image features on the basis of image texture and boundary sharpness. Semantic and computer-generated features were weighted and combined into a feature vector representing each image. An independent reference standard was created for pairwise image similarity. This was used in a leave-one-out cross-validation to train weights that optimized the rankings of images in the database in terms of similarity to query images. Performance was evaluated by using precision-recall curves and normalized discounted cumulative gain (NDCG), a common measure of the usefulness of information retrieval. RESULTS: When used individually, groups of semantic, texture, and boundary features resulted in various levels of performance in retrieving relevant lesions. However, combining all features produced the best overall results. Mean precision was greater than 90% at all values of recall, and the mean, best-case, and worst-case retrieval accuracy by NDCG was greater than 95%, 100%, and greater than 78%, respectively. CONCLUSION: Preliminary assessment of this approach shows excellent retrieval results for three types of liver lesions visible on portal venous CT images, warranting continued development and validation in a larger and more comprehensive database.
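A minimal sketch of NDCG as used above, assuming each retrieved image carries a graded relevance taken from the independent reference standard of pairwise similarity:

import numpy as np

def ndcg(relevances, k=None) -> float:
    """Normalized discounted cumulative gain for one ranked retrieval result.

    `relevances` lists the reference-standard relevance of each retrieved image
    to the query, in the order returned by the system.
    """
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float(np.sum(rel * discounts))
    ideal = float(np.sum(np.sort(rel)[::-1] * discounts))
    return dcg / ideal if ideal > 0 else 0.0

Averaging this score over all query images gives the mean retrieval accuracy reported in the abstract.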


Subjects
Liver Diseases/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography, Automation, Cysts/diagnostic imaging, Hemangioma/diagnostic imaging, Humans, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/secondary, Portal Vein/diagnostic imaging, Reference Standards, Retrospective Studies, Software, Terminology as Topic
19.
IEEE Trans Med Imaging ; 29(2): 488-501, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20129849

ABSTRACT

This paper presents a procedure for automatic extraction and segmentation of a class-specific object (or region) by learning class-specific boundaries. We describe and evaluate the method with a specific focus on the detection of lesion regions in uterine cervix images. The watershed segmentation map of the input image is modeled using a Markov random field (MRF) in which watershed regions correspond to binary random variables indicating whether the region is part of the lesion tissue or not. The local pairwise factors on the arcs of the watershed map indicate whether the arc is part of the object boundary. The factors are based on supervised learning of a visual word distribution. The final lesion region segmentation is obtained using loopy belief propagation applied to the watershed arc-level MRF. Experimental results on real data show state-of-the-art segmentation results on this very challenging task; if necessary, the segmentation can be interactively refined.


Subjects
Cervix Uteri/anatomy & histology, Computer-Assisted Image Processing/methods, Photography/methods, Uterine Cervical Neoplasms/pathology, Algorithms, Artificial Intelligence, Cervix Uteri/cytology, Female, Humans, Markov Chains, Reproducibility of Results, Sensitivity and Specificity
20.
Comput Med Imaging Graph ; 33(3): 205-16, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19217754

ABSTRACT

This work is focused on the generation and utilization of a reliable ground truth (GT) segmentation for a large medical repository of digital cervicographic images (cervigrams) collected by the National Cancer Institute (NCI). NCI invited twenty experts to manually segment a set of 939 cervigrams into regions of medical and anatomical interest. Based on these unique data, the objectives of the current work are to: (1) automatically generate a multi-expert GT segmentation map; (2) use the GT map to automatically assess the complexity of a given segmentation task; and (3) use the GT map to evaluate the performance of an automated segmentation algorithm. The multi-expert GT map is generated via the STAPLE (Simultaneous Truth and Performance Level Estimation) algorithm, which is a well-known method for generating a GT segmentation from multiple observations. A new measure of segmentation complexity, which relies on the inter-observer variability within the GT map, is defined. This measure is used to identify images that the experts found difficult to segment and to compare the complexity of different segmentation tasks. An accuracy measure, which evaluates the performance of automated segmentation algorithms, is presented. Two algorithms for cervix boundary detection are compared using the proposed accuracy measure. The measure is shown to reflect the actual segmentation quality achieved by the algorithms. The methods and conclusions presented in this work are general and can be applied to different images and segmentation tasks. Here they are applied to the cervigram database, including a thorough analysis of the available data.
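A minimal sketch of an inter-observer variability score of the kind that could drive the complexity measure described above, assuming the expert segmentations are available as stacked binary masks (this is a generic stand-in, not the paper's exact definition):

import numpy as np

def segmentation_complexity(expert_masks: np.ndarray) -> float:
    """Fraction of the union of expert foregrounds on which the experts disagree.

    `expert_masks` has shape (n_experts, H, W) with binary values; values close
    to 1 mark images that the experts found hard to segment consistently.
    """
    agreement = expert_masks.mean(axis=0)          # per-pixel fraction of experts marking foreground
    disputed = (agreement > 0) & (agreement < 1)   # pixels without unanimous agreement
    union = (agreement > 0).sum()
    return float(disputed.sum() / union) if union else 0.0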


Subjects
Cervix Uteri/pathology, Computer-Assisted Decision Making, Diagnostic Imaging, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Algorithms, Female, Fuzzy Logic, Humans, Automated Pattern Recognition, Sensitivity and Specificity