Results 1 - 20 of 25
1.
bioRxiv ; 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36865216

ABSTRACT

Morphology-based classification of cells in the bone marrow aspirate (BMA) is a key step in the diagnosis and management of hematologic malignancies. However, it is time-intensive and must be performed by expert hematopathologists and laboratory professionals. We curated a large, high-quality dataset of 41,595 hematopathologist consensus-annotated single-cell images extracted from BMA whole slide images (WSIs) containing 23 morphologic classes from the clinical archives of the University of California, San Francisco. We trained a convolutional neural network, DeepHeme, to classify images in this dataset, achieving a mean area under the curve (AUC) of 0.99. DeepHeme was then externally validated on WSIs from Memorial Sloan Kettering Cancer Center, with a similar AUC of 0.98, demonstrating robust generalization. When compared to individual hematopathologists from three different top academic medical centers, the algorithm outperformed all three. Finally, DeepHeme reliably identified cell states such as mitosis, paving the way for image-based quantification of mitotic index in a cell-specific manner, which may have important clinical applications.
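The classifier described here is a standard supervised CNN; below is a minimal sketch (not the authors' code) of how a mean one-vs-rest AUC can be computed from such a model's softmax outputs. The 23-class setup comes from the abstract; the label and probability arrays are placeholders.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

n_classes = 23                                        # 23 morphologic classes (from the abstract)
rng = np.random.default_rng(0)
labels = rng.integers(0, n_classes, size=1000)        # placeholder ground-truth cell labels
probs = rng.dirichlet(np.ones(n_classes), size=1000)  # placeholder softmax outputs of the CNN

y_bin = label_binarize(labels, classes=np.arange(n_classes))
per_class_auc = [roc_auc_score(y_bin[:, k], probs[:, k]) for k in range(n_classes)]
print("mean one-vs-rest AUC:", np.mean(per_class_auc))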

2.
Diagnostics (Basel) ; 12(2)2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35204436

ABSTRACT

Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNNs) were applied to magnetic resonance imaging (MRI) data to predict the survival time of high-grade glioma patients, and multi-sequence MRI images were used to enhance survival prediction performance. A novel way of leveraging ensembles to overcome the limited availability of labeled medical images is shown. This classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work; each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases, while the test set included 46 cases. The best known prediction accuracy for this type of problem, 74%, was achieved on the unseen test set.
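A minimal sketch of the snapshot-ensemble idea, assuming each snapshot is a saved checkpoint of the same 3D CNN whose softmax outputs are averaged before assigning a long-/short-term survivor class; the class coding and the snapshot_probs input are illustrative assumptions, not the paper's pipeline.

import numpy as np

def ensemble_predict(snapshot_probs):
    """snapshot_probs: list of (n_patients, 2) softmax arrays, one per snapshot."""
    avg = np.mean(np.stack(snapshot_probs, axis=0), axis=0)  # average over snapshots
    return np.argmax(avg, axis=1)  # 0 = short-term, 1 = long-term (assumed coding)

# toy usage with three random snapshots for five patients
rng = np.random.default_rng(1)
snaps = [rng.dirichlet(np.ones(2), size=5) for _ in range(3)]
print(ensemble_predict(snaps))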

3.
IEEE Access ; 9: 72970-72979, 2021.
Article in English | MEDLINE | ID: mdl-34178559

ABSTRACT

A number of recent papers have presented experimental evidence suggesting that it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than previously used show that models have high accuracy on seen sources but poor accuracy on unseen sources. The reason for the disparity is that the convolutional neural network model, which learns features, can focus on differences in X-ray machines or in positioning within the machines, for example. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may differ from raw images. Some data sets were of pediatric cases with pneumonia, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we eliminated many confounding features by working with data as close to raw as possible. Still, deep-learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e., external sites). Our models achieved an AUC of 1.00 on seen data sources but, in the worst case, only 0.38 on unseen ones. This indicates that such models need further assessment and development before they can be broadly deployed clinically. An example of fine-tuning to improve performance at a new site is given.
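A minimal leave-one-source-out sketch of the seen/unseen evaluation described above, with placeholder features and a logistic-regression stand-in for the CNN; the source labels and data are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 50))           # placeholder image features
y = rng.integers(0, 2, size=300)         # COVID-19 vs pneumonia labels (toy)
source = rng.integers(0, 4, size=300)    # which site/X-ray machine each image came from

for held_out in np.unique(source):
    train, test = source != held_out, source == held_out
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
    print(f"source {held_out} held out: AUC = {auc:.2f}")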

4.
Tomography ; 6(2): 209-215, 2020 06.
Article in English | MEDLINE | ID: mdl-32548298

ABSTRACT

Noninvasive diagnosis of lung cancer in early stages is one task where radiomics helps. Clinical practice shows that the size of a nodule has high predictive power for malignancy. In the literature, convolutional neural networks (CNNs) have become widely used in medical image analysis. We study the ability of a CNN to capture nodule size in computed tomography images after the images are resized for CNN input. For our experiments, we used the National Lung Screening Trial data set. Nodules were labeled into two categories (small/large) based on the original size of the nodule. After all extracted patches were resampled into 100-by-100-pixel images, a CNN was able to classify test nodules into small- and large-size groups with high accuracy. To show the generality of our finding, we repeated the size classification experiments using the Common Objects in Context (COCO) data set, from which we selected three categories of images: bears, cats, and dogs. For all three categories, a 5 × 2-fold cross-validation was performed to classify them into small and large classes. The average area under the receiver operating characteristic curve is 0.954, 0.952, and 0.979 for the bear, cat, and dog categories, respectively. Thus, camera image rescaling also enables a CNN to discover the size of an object. The source code for the experiments with the COCO data set is publicly available on GitHub (https://github.com/VisionAI-USF/COCO_Size_Decoding/).


Subjects
Lung Neoplasms, Multiple Pulmonary Nodules, Animals, Cats, Dogs, Humans, Lung Neoplasms/diagnostic imaging, Multiple Pulmonary Nodules/diagnostic imaging, Neural Networks, Computer, Randomized Controlled Trials as Topic, Tomography, X-Ray Computed, Ursidae
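A minimal sketch of the resize step at the center of this experiment, assuming a hypothetical pixel-size threshold for the small/large labels and arbitrary cropped patches; this is not the study's protocol.

import numpy as np
from skimage.transform import resize

def make_size_dataset(patches, size_threshold_px=20):
    """patches: list of 2D arrays of varying size (one cropped nodule each)."""
    X, y = [], []
    for p in patches:
        y.append(int(max(p.shape) >= size_threshold_px))    # 0 = small, 1 = large
        X.append(resize(p, (100, 100), anti_aliasing=True))  # all inputs end up 100x100
    return np.stack(X), np.array(y)

# toy usage: two fake patches with different original sizes
rng = np.random.default_rng(3)
X, y = make_size_dataset([rng.normal(size=(12, 12)), rng.normal(size=(48, 48))])
print(X.shape, y)  # (2, 100, 100) [0 1]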
5.
Tomography ; 6(2): 250-260, 2020 06.
Article in English | MEDLINE | ID: mdl-32548303

ABSTRACT

Image acquisition parameters for computed tomography scans, such as slice thickness and field of view, may vary depending on tumor size and site. Recent studies have shown that some radiomics features are dependent on voxel size (pixel size × slice thickness) and that, with proper normalization, this voxel-size dependency can be reduced. Deep features from a convolutional neural network (CNN) have shown great promise in characterizing cancers. However, how do these deep features vary with changes in image acquisition parameters? To analyze the variability of deep features, a physical radiomics phantom with 10 different material cartridges was scanned on 8 different scanners, and we assessed scans from 3 of the cartridges (rubber, dense cork, and normal cork). Deep features from the penultimate layer of a pre-trained CNN, taken both before (pre-rectified linear unit) and after (post-rectified linear unit) applying the rectified linear unit activation function, were extracted using transfer learning. We studied both the interscanner and intrascanner dependency of the deep features, as well as their dependency across the 3 cartridges. We found that some deep features were dependent on pixel size and that, with appropriate normalization, this dependency could be reduced. A false discovery rate correction was applied for multiple comparisons to mitigate potentially optimistic results. We also used the stable deep features for prognostic analysis on a non-small cell lung cancer data set.


Subjects
Carcinoma, Non-Small-Cell Lung, Lung Neoplasms, Tomography, X-Ray Computed, Carcinoma, Non-Small-Cell Lung/diagnostic imaging, Humans, Neural Networks, Computer, Phantoms, Imaging
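A minimal sketch of extracting pre- and post-ReLU features from the penultimate fully connected layer of a pretrained-style CNN via forward hooks; the torchvision VGG16 architecture and layer indices are assumptions, not the network used in the study.

import torch
from torchvision import models

model = models.vgg16(weights=None).eval()   # weights=None keeps the sketch offline
feats = {}
# classifier[3] is the second 4096-d Linear layer; classifier[4] is its ReLU
model.classifier[3].register_forward_hook(
    lambda m, i, o: feats.update(pre_relu=o.detach().clone()))   # clone: ReLU is in-place
model.classifier[4].register_forward_hook(
    lambda m, i, o: feats.update(post_relu=o.detach().clone()))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))      # placeholder patch resized to 224x224 RGB

print(feats["pre_relu"].shape, feats["post_relu"].shape)  # both (1, 4096)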
6.
J Med Imaging (Bellingham) ; 7(2): 024502, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32280729

ABSTRACT

Purpose: Due to the high incidence and mortality rates of lung cancer worldwide, early detection of precancerous lesions is essential. Low-dose computed tomography is a commonly used technique for screening, diagnosis, and prognosis of non-small-cell lung cancer. Recently, convolutional neural networks (CNNs) have shown great potential in lung nodule classification. Clinical information (family history, gender, and smoking history) together with nodule size provides information about lung cancer risk; large nodules carry greater risk than small nodules. Approach: A subset of cases from the National Lung Screening Trial was chosen as the dataset for our study. We divided the nodules into large and small groups based on different clinical guideline thresholds and analyzed each group individually. Similarly, we divided the cases into groups by clinical features and analyzed them separately. CNNs were designed and trained on each of these groups. To our knowledge, this is the first study to incorporate nodule size and clinical features for classification using CNNs. We further built a hybrid model, an ensemble of the CNN models using clinical and size information, to enhance malignancy prediction. Results: We obtained an AUC of 0.90 and 83.12% accuracy, a significant improvement over our previous best results. Conclusions: We found that dividing the nodules by size and clinical information to build predictive models resulted in improved malignancy predictions. Our analysis also showed that appropriately integrating clinical information and size groups can further improve risk prediction.
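A minimal sketch of the hybrid idea, assuming hypothetical size-group CNN probabilities, a clinical-model probability, and simple averaging as the fusion rule; the study's actual ensemble may differ.

import numpy as np

def hybrid_malignancy_prob(nodule_size_mm, p_small_cnn, p_large_cnn, p_clinical,
                           size_threshold_mm=6.0):
    """Route to the CNN trained on the matching size group, then average with
    the clinical model's probability (threshold and fusion rule are assumptions)."""
    p_image = p_large_cnn if nodule_size_mm >= size_threshold_mm else p_small_cnn
    return 0.5 * (p_image + p_clinical)

print(hybrid_malignancy_prob(8.2, p_small_cnn=0.30, p_large_cnn=0.72, p_clinical=0.55))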

7.
Tomography ; 5(1): 192-200, 2019 03.
Article in English | MEDLINE | ID: mdl-30854457

ABSTRACT

Quantitative features are generated from a tumor phenotype by various data characterization and feature-extraction approaches and have been used successfully as biomarkers. These features describe a nodule through, for example, nodule size, pixel intensity, histogram-based information, and texture information from wavelets or a convolution kernel. Semantic features, on the other hand, are generated by an experienced radiologist and consist of common characteristics of a tumor, for example, the location of the tumor, fissure or pleural wall attachment, presence of fibrosis or emphysema, or a concave cut on the nodule surface. Such features have been derived for lung nodules by our group, and semantic features have also shown promise in predicting malignancy. Deep features are generally extracted from the last layers before the classification layer of a convolutional neural network (CNN). By training on different types of images, the CNN learns to recognize various patterns and textures; but when we extract deep features, there is no specific way to name them other than by the feature column number (the position of a neuron in a hidden layer). In this study, we tried to relate and explain deep features with respect to traditional quantitative features and semantic features. We found that 26 deep features from the Vgg-S neural network and 12 deep features from our trained CNN could be explained by semantic or traditional quantitative features. From this, we concluded that those deep features can be given a recognizable definition via semantic or quantitative features.


Subjects
Lung Neoplasms/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Solitary Pulmonary Nodule/diagnostic imaging, Algorithms, Deep Learning, Humans, Lung Neoplasms/pathology, Neural Networks, Computer, Semantics, Solitary Pulmonary Nodule/pathology, Tomography, X-Ray Computed/methods
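A minimal sketch of one way such an "explanation" can be operationalized: correlating each deep-feature column with a named traditional feature and flagging strongly correlated columns. The correlation cut-off and toy data are assumptions, not the paper's procedure.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
deep = rng.normal(size=(100, 512))        # deep features: one row per nodule (toy)
size = deep[:, 7] * 2.0 + rng.normal(scale=0.1, size=100)  # toy "nodule size" feature

explained = []
for j in range(deep.shape[1]):
    rho, _ = spearmanr(deep[:, j], size)
    if abs(rho) >= 0.8:                    # assumed cut-off for "explainable"
        explained.append(j)
print("deep feature columns explained by nodule size:", explained)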
8.
Cancer Med ; 7(12): 6340-6356, 2018 12.
Article in English | MEDLINE | ID: mdl-30507033

ABSTRACT

BACKGROUND: Current guidelines for lung cancer screening increased the positive scan threshold to a longest diameter of 6 mm. We extracted radiomic features from baseline and follow-up screens and performed size-specific analyses to predict lung cancer incidence using three nodule size classes (<6 mm [small], 6-16 mm [intermediate], and ≥16 mm [large]). METHODS: We extracted 219 features from baseline (T0) nodules and 219 delta features, defined as the change from T0 to the first follow-up (T1). Nodules were identified for 160 incidence cases diagnosed with lung cancer at T1 or the second follow-up screen (T2) and for 307 nodule-positive controls that had three consecutive positive screens without a lung cancer diagnosis. The cases and controls were split into training and test cohorts, and classifier models were used to identify the most predictive features. RESULTS: The final models showed modest improvements when baseline and delta features were combined, compared with baseline features alone. The AUROCs for small- and intermediate-sized nodules were 0.83 (95% CI 0.76-0.90) and 0.76 (95% CI 0.71-0.81) for baseline-only radiomic features, respectively, versus 0.84 (95% CI 0.77-0.90) and 0.84 (95% CI 0.80-0.88) for baseline plus delta features. When intermediate and large nodules were combined, the AUROC for baseline-only features was 0.80 (95% CI 0.76-0.84) compared with 0.86 (95% CI 0.83-0.89) for baseline plus delta features. CONCLUSIONS: We found modest improvements in predicting lung cancer incidence by combining baseline and delta radiomics. Radiomics could be used to improve current size-based screening guidelines.


Subjects
Early Detection of Cancer, Lung Neoplasms/diagnostic imaging, Mass Screening, Aged, Case-Control Studies, Female, Humans, Incidence, Lung Neoplasms/epidemiology, Male, Middle Aged, Radiography
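A minimal sketch of forming delta features as T1 minus T0 and scoring a classifier by AUROC, with placeholder feature matrices and a random-forest stand-in for the study's classifier models.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
f_t0 = rng.normal(size=(467, 219))             # 219 baseline (T0) features per nodule
f_t1 = f_t0 + rng.normal(scale=0.2, size=(467, 219))
delta = f_t1 - f_t0                            # delta features: change from T0 to T1
X = np.hstack([f_t0, delta])                   # baseline plus delta
y = np.r_[np.ones(160), np.zeros(307)]         # incidence cases vs nodule-positive controls

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))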
9.
J Med Imaging (Bellingham) ; 5(1): 011021, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29594181

ABSTRACT

Lung cancer has high incidence and mortality rates. Early detection and diagnosis of lung cancer is best achieved with low-dose computed tomography (CT). Classical radiomics features extracted from lung CT images have been shown to predict cancer incidence and prognosis. With the advancement of deep learning and convolutional neural networks (CNNs), deep features can also be identified to analyze lung CTs for prognosis prediction and diagnosis. Because the number of available images in the medical field is limited, transfer learning can be helpful. Using subsets of participants from the National Lung Screening Trial (NLST), we applied a transfer learning approach to differentiate lung cancer nodules from positive controls. We experimented with three different pretrained CNNs for extracting deep features and used five different classifiers; experiments were also conducted with deep features from different color channels of a pretrained CNN. Selected deep features were combined with radiomics features, and a CNN was also designed and trained. Combinations of features from pretrained CNNs, CNNs trained on NLST data, and classical radiomics were used to build classifiers. The best accuracy (76.79%) was obtained using feature combinations, and an area under the receiver operating characteristic curve of 0.87 was obtained using a CNN trained on an augmented NLST data cohort.

10.
J Magn Reson Imaging ; 46(1): 115-123, 2017 07.
Article in English | MEDLINE | ID: mdl-27678245

ABSTRACT

PURPOSE: Glioblastoma multiforme (GBM) is the most common malignant brain tumor in adults. Most GBMs exhibit extensive regional heterogeneity at the tissue, cellular, and molecular scales, but the clinical relevance of the observed spatial imaging characteristics remains unknown. We investigated pretreatment magnetic resonance imaging (MRI) scans of GBMs to identify tumor subregions and to quantify their image-based spatial characteristics associated with survival time. MATERIALS AND METHODS: We quantified tumor subregions (termed habitats) in GBMs, which are hypothesized to capture intratumoral characteristics, using multiple MRI sequences. As a proof of concept, we developed a computational framework that used intratumoral grouping and spatial mapping to identify GBM tumor subregions and yield habitat-based features. Using a feature selector and three classifiers, experimental results for survival group prediction are reported for two datasets: Dataset1, with 32 GBM patients (594 tumor slices), and Dataset2, with 22 GBM patients who did not undergo resection (261 tumor slices). RESULTS: Habitat-based features achieved 87.50% and 86.36% accuracy for survival group prediction in the two datasets, respectively, using leave-one-out cross-validation. Experimental results revealed that spatially correlated features between signal-enhanced subregions were effective for predicting survival groups (P < 0.05 for all three machine-learning classifiers). CONCLUSION: Quantitative spatially correlated features derived from MRI-defined tumor subregions in GBM could be effectively used to predict patient survival time. LEVEL OF EVIDENCE: 2. J. Magn. Reson. Imaging 2017;46:115-123.


Subjects
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/mortality, Glioblastoma/diagnostic imaging, Glioblastoma/mortality, Pattern Recognition, Automated/methods, Spatio-Temporal Analysis, Survival Analysis, Adolescent, Adult, Aged, Aged, 80 and over, Biomarkers, Brain Neoplasms/pathology, Female, Glioblastoma/pathology, Humans, Image Interpretation, Computer-Assisted/methods, Incidence, Machine Learning, Male, Middle Aged, Prognosis, Reproducibility of Results, Risk Factors, Sensitivity and Specificity, United States/epidemiology, Young Adult
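A minimal sketch of intratumoral grouping by clustering voxels on their multi-sequence intensities, which is one common way to define habitats. The study's grouping and spatial-mapping steps are more involved; the data, mask, and cluster count here are placeholders.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
t1ce, flair, t2 = (rng.normal(size=(64, 64)) for _ in range(3))  # toy co-registered slices
tumor_mask = np.zeros((64, 64), dtype=bool)
tumor_mask[20:44, 20:44] = True                                  # toy tumor region

voxels = np.column_stack([t1ce[tumor_mask], flair[tumor_mask], t2[tumor_mask]])
habitat_label = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(voxels)

habitats = np.full(tumor_mask.shape, -1)       # -1 outside the tumor
habitats[tumor_mask] = habitat_label
print(np.unique(habitats, return_counts=True))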
11.
Tomography ; 2(4): 388-395, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28066809

ABSTRACT

Lung cancer is the most common cause of cancer-related death in the USA. It can be detected and diagnosed using computed tomography images. For an automated classifier, identifying predictive features from medical images is a key concern. Deep feature extraction using pretrained convolutional neural networks (CNNs) has recently been applied successfully in some image domains. Here, we applied a pretrained CNN to extract deep features from 40 contrast-enhanced computed tomography images of non-small cell adenocarcinoma lung cancer, combined the deep features with traditional image features, and trained classifiers to predict short- and long-term survivors. We experimented with several pretrained CNNs and several feature selection strategies. The best previously reported accuracy using traditional quantitative features was 77.5% (area under the curve [AUC], 0.712), achieved by a decision tree classifier. The best accuracy from transfer learning and deep features was 77.5% (AUC, 0.713), also using a decision tree classifier. When the extracted deep neural network features were combined with traditional quantitative features, we obtained an accuracy of 90% (AUC, 0.935) with the 5 best post-rectified-linear-unit features extracted from a vgg-f pretrained CNN and the 5 best traditional features. The best results were achieved with the symmetric uncertainty feature ranking algorithm followed by a random forests classifier.
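A minimal sketch of symmetric-uncertainty feature ranking followed by a random forests classifier, the combination reported to work best above; the discretization, toy data, and top-5 cut-off are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mutual_info_score

def symmetric_uncertainty(x, y, bins=10):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), on a discretized feature x."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    i_xy = mutual_info_score(xd, y)
    h_x = mutual_info_score(xd, xd)            # entropy via self-information
    h_y = mutual_info_score(y, y)
    return 2.0 * i_xy / (h_x + h_y) if (h_x + h_y) > 0 else 0.0

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 60))                  # toy feature matrix (deep + traditional)
y = rng.integers(0, 2, size=40)                # short- vs long-term survivor labels (toy)
ranking = np.argsort([-symmetric_uncertainty(X[:, j], y) for j in range(X.shape[1])])
top5 = ranking[:5]                             # keep the 5 best-ranked features
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, top5], y)
print("top 5 feature indices:", top5)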

12.
J Magn Reson Imaging ; 42(5): 1421-30, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25884277

ABSTRACT

PURPOSE: To evaluate heterogeneity within tumor subregions or "habitats" via textural kinetic analysis of breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for the classification of two clinical prognostic features: 1) estrogen receptor (ER)-positive versus ER-negative tumors, and 2) tumors with four or more viable lymph node metastases after neoadjuvant chemotherapy versus tumors without nodal metastases. MATERIALS AND METHODS: Two separate volumetric DCE-MRI datasets were obtained at 1.5T, comprising bilateral axial dynamic 3D T1-weighted fat-suppressed gradient recalled echo pulse sequences obtained before and after gadolinium-based contrast administration. Representative image slices of breast tumors from 38 and 34 patients were used for ER status and lymph node classification, respectively. Four tumor habitats were defined based on their kinetic contrast-enhancement characteristics. The heterogeneity within each habitat was quantified using textural kinetic features, which were evaluated using two feature selectors and three classifiers. RESULTS: Textural kinetic features from the habitat with rapid delayed washout yielded classification accuracies of 84.44% (area under the curve [AUC] 0.83) for ER status and 88.89% (AUC 0.88) for lymph node status. The texture feature most often chosen in cross-validation, the information measure of correlation, measures heterogeneity and provides accuracy approximately equal to that of the best feature set. CONCLUSION: Heterogeneity within habitats with rapid washout is highly predictive of molecular tumor characteristics and clinical behavior.


Subjects
Breast Neoplasms/metabolism, Breast Neoplasms/pathology, Gadolinium, Image Enhancement, Magnetic Resonance Imaging, Receptors, Estrogen/metabolism, Adult, Aged, Area Under Curve, Breast/metabolism, Breast/pathology, Contrast Media, Female, Humans, Lymph Nodes/pathology, Lymphatic Metastasis, Middle Aged, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity
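A minimal sketch of computing gray-level co-occurrence texture features on a kinetic habitat crop, in the spirit of the textural kinetic analysis above. The habitat mask, quantization, and property list are assumptions; the paper's information measure of correlation is a GLCM-derived feature not available in graycoprops, so only standard properties are printed.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
slice_img = rng.integers(0, 64, size=(80, 80), dtype=np.uint8)  # quantized DCE-MRI slice (toy)
habitat = slice_img[30:60, 30:60]               # assumed rapid-washout habitat crop

glcm = graycomatrix(habitat, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "correlation", "energy"):
    print(prop, graycoprops(glcm, prop).mean())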
13.
Stat Methods Med Res ; 24(1): 68-106, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24919829

ABSTRACT

Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.


Subjects
Algorithms, Biomarkers, Diagnostic Imaging, Research Design, Statistics as Topic, Bias, Computer Simulation, Humans, Phantoms, Imaging, Reference Standards, Reproducibility of Results
14.
J Digit Imaging ; 27(6): 805-23, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24990346

ABSTRACT

Quantitative size, shape, and texture features derived from computed tomographic (CT) images may be useful as predictive, prognostic, or response biomarkers in non-small cell lung cancer (NSCLC). However, to be useful, such features must be reproducible, non-redundant, and have a large dynamic range. We developed a set of quantitative three-dimensional (3D) features to describe segmented tumors and evaluated their reproducibility to select features with high potential prognostic utility. Thirty-two patients with NSCLC underwent unenhanced thoracic CT scans acquired within 15 minutes of each other under an approved protocol. Primary lung cancer lesions were segmented using semi-automatic 3D region-growing algorithms. Following segmentation, 219 quantitative 3D features were extracted from each lesion, corresponding to size, shape, and texture, including features in transformed spaces (Laws, wavelets). The most informative features were selected using the concordance correlation coefficient across test-retest, the biological range, and a feature-independence measure. There were 66 (30.14%) features with a test-retest concordance correlation coefficient ≥ 0.90 and an acceptable dynamic range. Of these, 42 features were non-redundant after grouping features with R²Bet ≥ 0.95. These reproducible features were found to be predictive of radiological prognosis: the area under the curve (AUC) was 91% for a size-based feature and 92% for the texture features (runlength, Laws). We also tested the ability of the image features to predict a radiological prognostic score on an independent NSCLC sample (39 adenocarcinomas); the AUC for texture features (runlength emphasis, energy) was 0.84, while that for conventional size-based features (volume, longest diameter) was 0.80. Test-retest and correlation analyses thus identified non-redundant CT image features with both high intra-patient reproducibility and a large inter-patient biological range, making the case that quantitative image features are informative prognostic biomarkers for NSCLC.


Subjects
Image Processing, Computer-Assisted/methods, Lung Neoplasms/diagnostic imaging, Tomography, X-Ray Computed/methods, Adult, Aged, Aged, 80 and over, Algorithms, Area Under Curve, Female, Humans, Imaging, Three-Dimensional/methods, Lung/diagnostic imaging, Male, Middle Aged, Reproducibility of Results, Sensitivity and Specificity
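A minimal sketch of the test-retest reproducibility filter, implementing Lin's concordance correlation coefficient with the 0.90 cut-off named above; the paired feature matrices are placeholders, and the dynamic-range and redundancy screens are omitted.

import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between test (x) and retest (y)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(9)
test = rng.normal(size=(32, 219))                        # 32 patients x 219 features, scan 1
retest = test + rng.normal(scale=0.05, size=test.shape)  # scan 2, ~15 minutes later (toy)

keep = [j for j in range(test.shape[1])
        if concordance_cc(test[:, j], retest[:, j]) >= 0.90]
print(f"{len(keep)} of {test.shape[1]} features pass CCC >= 0.90")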
15.
Transl Oncol ; 7(1): 72-87, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24772210

ABSTRACT

We study the reproducibility of quantitative imaging features used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images depend on various scanning factors, and we focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and the quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability across the repeated experiment was measured by the test-retest concordance correlation coefficient (CCCTreT), and the natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, 29 features across segmentation methods had CCCTreT and DR ≥ 0.9 and R²Bet ≥ 0.95. These reproducible features were tested for predicting a radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were then tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046).

16.
Pattern Recognit ; 46(3): 692-702, 2013 Mar 01.
Article in English | MEDLINE | ID: mdl-23459617

ABSTRACT

A single-click ensemble segmentation (SCES) approach based on an existing "Click&Grow" algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs typically needed, which facilitates processing large numbers of cases. Evaluation was performed on a set of 129 CT lung tumor images using a similarity index (SI). The average SI was above 93% using 20 different start seeds, showing stability. The average SI between two different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm, and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77%, and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate, and automated.
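A minimal sketch of a Dice-style similarity index between two binary segmentations, of the kind used for the stability and reader comparisons above; whether the paper's SI is exactly the Dice coefficient is an assumption.

import numpy as np

def similarity_index(a, b):
    """Dice overlap of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg1 = np.zeros((50, 50), dtype=bool); seg1[10:30, 10:30] = True
seg2 = np.zeros((50, 50), dtype=bool); seg2[12:32, 12:32] = True
print(f"SI = {similarity_index(seg1, seg2):.2%}")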

17.
Radiother Oncol ; 105(2): 167-73, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23157978

ABSTRACT

PURPOSE: To assess the clinical relevance of a semiautomatic CT-based ensemble segmentation method by comparing it to pathology and to CT/PET manual delineations by five independent radiation oncologists in non-small cell lung cancer (NSCLC). MATERIALS AND METHODS: For 20 NSCLC patients (stages Ib-IIIb), the primary tumor was delineated manually on CT/PET scans by five independent radiation oncologists and segmented using a CT-based semiautomatic tool. Tumor volume and the overlap fractions between manual and semiautomatically segmented volumes were compared. All measurements were correlated with the maximal diameter on macroscopic examination of the surgical specimen. Imaging data are available on www.cancerdata.org. RESULTS: High overlap fractions were observed between the semiautomatically segmented volumes and both the intersection (92.5 ± 9.0, mean ± SD) and the union (94.2 ± 6.8) of the manual delineations. No statistically significant difference in tumor volume was observed between the semiautomatic segmentation (71.4 ± 83.2 cm³, mean ± SD) and the manual delineations (81.9 ± 94.1 cm³; p = 0.57). The maximal diameter of the semiautomatically segmented tumor correlated strongly with the macroscopic diameter of the primary tumor (r = 0.96). CONCLUSIONS: Semiautomatic segmentation of the primary tumor on CT showed high agreement with CT/PET manual delineations and correlated strongly with the macroscopic diameter considered the "gold standard". This method may be used routinely in clinical practice and could serve as a starting point for treatment planning, for target definition in multi-center clinical trials, or for high-throughput data-mining research. It is particularly suitable for peripherally located tumors.


Subjects
Lung Neoplasms/diagnostic imaging, Tomography, X-Ray Computed, Algorithms, Humans, Lung Neoplasms/pathology, Lung Neoplasms/surgery, Multimodal Imaging, Positron-Emission Tomography
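A minimal sketch of computing overlap fractions of a semi-automatic mask against the intersection and union of several manual delineations; normalizing by the reference volume is an assumption about the paper's exact definition, and the masks are toy data.

import numpy as np

def overlap_fraction(auto_mask, reference_mask):
    """Fraction of the reference volume covered by the semi-automatic mask."""
    ref = reference_mask.astype(bool)
    return np.logical_and(auto_mask.astype(bool), ref).sum() / ref.sum()

rng = np.random.default_rng(10)
manual = [rng.random((40, 40, 40)) > 0.5 for _ in range(5)]   # five raters (toy masks)
auto = rng.random((40, 40, 40)) > 0.5                          # semi-automatic mask (toy)

inter = np.logical_and.reduce(manual)
union = np.logical_or.reduce(manual)
print("overlap with intersection:", overlap_fraction(auto, inter))
print("overlap with union:", overlap_fraction(auto, union))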
18.
Magn Reson Imaging ; 30(9): 1234-48, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22898692

ABSTRACT

"Radiomics" refers to the extraction and analysis of large amounts of advanced quantitative imaging features with high throughput from medical images obtained with computed tomography, positron emission tomography or magnetic resonance imaging. Importantly, these data are designed to be extracted from standard-of-care images, leading to a very large potential subject pool. Radiomics data are in a mineable form that can be used to build descriptive and predictive models relating image features to phenotypes or gene-protein signatures. The core hypothesis of radiomics is that these models, which can include biological or medical data, can provide valuable diagnostic, prognostic or predictive information. The radiomics enterprise can be divided into distinct processes, each with its own challenges that need to be overcome: (a) image acquisition and reconstruction, (b) image segmentation and rendering, (c) feature extraction and feature qualification and (d) databases and data sharing for eventual (e) ad hoc informatics analyses. Each of these individual processes poses unique challenges. For example, optimum protocols for image acquisition and reconstruction have to be identified and harmonized. Also, segmentations have to be robust and involve minimal operator input. Features have to be generated that robustly reflect the complexity of the individual volumes, but cannot be overly complex or redundant. Furthermore, informatics databases that allow incorporation of image features and image annotations, along with medical and genetic data, have to be generated. Finally, the statistical approaches to analyze these data have to be optimized, as radiomics is not a mature field of study. Each of these processes will be discussed in turn, as well as some of their unique challenges and proposed approaches to solve them. The focus of this article will be on images of non-small-cell lung cancer.


Subjects
Image Processing, Computer-Assisted/methods, Algorithms, Carcinoma, Non-Small-Cell Lung/pathology, Humans, Lung Neoplasms/pathology, Magnetic Resonance Imaging/methods, Medical Informatics/methods, Multivariate Analysis, Pattern Recognition, Automated/methods, Phantoms, Imaging, Positron-Emission Tomography/methods, Radiation Oncology/methods, Reproducibility of Results, Risk Factors, Software, Tomography, X-Ray Computed/methods
19.
IEEE Trans Syst Man Cybern B Cybern ; 39(4): 989-1001, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19336328

ABSTRACT

Support vector machines (SVMs) can be trained to be very accurate classifiers and have been used in many applications. However, the training time and, to a lesser extent, prediction time of SVMs on very large data sets can be very long. This paper presents a fast compression method to scale up SVMs to large data sets. A simple bit-reduction method is applied to reduce the cardinality of the data by weighting representative examples. We then develop SVMs trained on the weighted data. Experiments indicate that bit-reduction SVM produces a significant reduction in the time required for both training and prediction with minimum loss in accuracy. It is also shown to typically be more accurate than random sampling when the data are not overcompressed.
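A minimal sketch of the bit-reduction idea: quantize the features, merge identical quantized examples into weighted representatives, and train a weighted SVM. The quantization scheme, weighting, and toy data are illustrative assumptions, not the paper's exact algorithm.

import numpy as np
from sklearn.svm import SVC

def bit_reduce(X, y, bits=4):
    """Collapse examples that share the same quantized feature vector and label."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    q = np.floor((X - lo) / (hi - lo + 1e-12) * (2 ** bits - 1)).astype(int)
    keyed = {}
    for row, label in zip(map(tuple, q), y):
        key = row + (label,)
        keyed[key] = keyed.get(key, 0) + 1             # count of collapsed examples
    keys = np.array(list(keyed))
    Xr, yr = keys[:, :-1].astype(float), keys[:, -1]
    w = np.array(list(keyed.values()), dtype=float)
    return Xr, yr, w

rng = np.random.default_rng(11)
X = rng.normal(size=(5000, 3)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xr, yr, w = bit_reduce(X, y, bits=3)
clf = SVC(kernel="rbf").fit(Xr, yr, sample_weight=w)   # weighted representatives
print("compressed from", len(X), "to", len(Xr), "weighted examples")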

20.
J Signal Process Syst ; 54(1-3): 183-203, 2009 Jan 01.
Article in English | MEDLINE | ID: mdl-20046893

ABSTRACT

A fast, accurate, and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well, allowing fast segmentation of fine-resolution images. It is based on modifications of the soft clustering algorithm fuzzy c-means that enable it to scale to large data sets. Two types of modifications that create incremental versions of fuzzy c-means are discussed. They are much faster than fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data, yet they are comparable in quality to applying fuzzy c-means to all of the data. The clustering algorithms, coupled with inhomogeneity correction and smoothing, are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters, and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data.
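A minimal sketch of the incremental idea: run fuzzy c-means on successive subsets of the intensity data, carrying the cluster centers forward as each subset's initialization. This is a simplified 1-D FCM on toy intensities, not the authors' framework, which also includes inhomogeneity correction and smoothing.

import numpy as np

def fcm_step(x, centers, m=2.0, n_iter=20):
    """Basic fuzzy c-means on 1-D data x, warm-started from `centers`."""
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9   # distances to each center
        u = 1.0 / (d ** (2 / (m - 1)))                      # memberships (unnormalized)
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers

rng = np.random.default_rng(12)
intensities = np.concatenate([rng.normal(30, 5, 40000), rng.normal(80, 5, 40000),
                              rng.normal(130, 5, 40000)])   # toy CSF/GM/WM-like intensities
centers = np.array([20.0, 90.0, 150.0])                     # initial guesses
for chunk in np.array_split(rng.permutation(intensities), 6):  # successive subsets
    centers = fcm_step(chunk, centers)
print("estimated tissue class centers:", np.sort(centers))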
