Results 1 - 19 of 19

1.
Oncologist ; 24(11): e1156-e1164, 2019 11.
Article in English | MEDLINE | ID: mdl-30936378

ABSTRACT

BACKGROUND: Lung adenocarcinoma (LADC) with epidermal growth factor receptor (EGFR) mutation is considered a subgroup of lung cancer sensitive to EGFR-targeted tyrosine kinase inhibitors. We aimed to develop and validate a computed tomography (CT)-based radiomics signature for prediction of EGFR mutation status in LADC appearing as a subsolid nodule. MATERIALS AND METHODS: A total of 467 eligible patients were divided into training and validation cohorts (n = 306 and 161, respectively). Radiomics features were extracted from unenhanced CT images using Pyradiomics. A CT-based radiomics signature for distinguishing EGFR mutation status was constructed using the random forest (RF) method in the training cohort and then tested in the validation cohort. A combination of the radiomics signature with a clinical factors model was also constructed using the RF method. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). RESULTS: In this study, 64.2% (300/467) of the patients showed EGFR mutations. The L858R mutation of exon 21 was the most common mutation type (185/301). We identified a CT-based radiomics signature that successfully discriminated between EGFR-positive and EGFR-negative status in the training cohort (AUC = 0.831) and the validation cohort (AUC = 0.789). The radiomics signature combined with the clinical factors model was not superior to the radiomics signature alone in either cohort (p > .05). CONCLUSION: As a noninvasive method, the CT-based radiomics signature can be used to predict the EGFR mutation status of LADC appearing as a subsolid nodule. IMPLICATIONS FOR PRACTICE: Lung adenocarcinoma (LADC) with epidermal growth factor receptor (EGFR) mutation is considered a subgroup of lung cancer that is sensitive to EGFR-targeted tyrosine kinase inhibitors.
However, some patients with inoperable subsolid LADC are unable to undergo tissue sampling by biopsy for molecular analysis in clinical practice. A computed tomography-based radiomics signature may serve as a noninvasive biomarker to predict the EGFR mutation status of subsolid LADCs when mutational profiling is not available or possible.
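The signature above is scored by the area under the ROC curve. As an illustrative sketch (toy labels and scores, not the study's cohort), AUC can be computed directly from the Mann-Whitney rank statistic:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative, with ties counted as half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: scores that mostly rank positives above negatives.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(y, s)
```

This rank formulation is equivalent to integrating the ROC curve and is how most libraries compute AUC for binary labels.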


Subject(s)
Adenocarcinoma of Lung/diagnostic imaging, Adenocarcinoma of Lung/genetics, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/genetics, Adenocarcinoma of Lung/pathology, Adolescent, Adult, Aged, Aged, 80 and over, Biomarkers, Tumor/genetics, Child, ErbB Receptors/genetics, Female, Humans, Lung Neoplasms/pathology, Male, Medical Informatics, Middle Aged, Models, Theoretical, Mutation, Reproducibility of Results, Retrospective Studies, Tomography, X-Ray Computed, Young Adult
2.
Med Image Anal ; 69: 101894, 2021 04.
Article in English | MEDLINE | ID: mdl-33421919

ABSTRACT

Deep learning for three-dimensional (3D) abdominal organ segmentation on high-resolution computed tomography (CT) is a challenging topic, in part due to the limited memory provided by graphics processing units (GPUs) and the large number of parameters in 3D fully convolutional networks (FCNs). Two prevalent strategies, lower resolution with a wider field of view and higher resolution with a limited field of view, have been explored with varying degrees of success. In this paper, we propose a novel patch-based network with random spatial initialization and statistical fusion on overlapping regions of interest (ROIs). We evaluate the proposed approach using three datasets consisting of 260 subjects with varying numbers of manual labels. Compared with the canonical "coarse-to-fine" baseline methods, the proposed method increases the performance on multi-organ segmentation from 0.799 to 0.856 in terms of mean DSC score (p-value < 0.01, paired t-test). The effect of different numbers of patches is evaluated by increasing the depth of coverage (expected number of patches evaluated per voxel). In addition, our method outperforms other state-of-the-art methods in abdominal organ segmentation. In conclusion, the approach provides a memory-conservative framework to enable 3D segmentation on high-resolution CT, and is compatible with many base network structures without substantially increasing complexity during inference. Graphical abstract: given a high-resolution CT scan, a low-resolution section (left panel) is trained with multi-channel segmentation; the low-resolution part applies down-sampling and normalization to preserve complete spatial information; interpolation and random patch sampling (middle panel) are used to collect patches; high-dimensional probability maps (right panel) are obtained by integrating all patches across fields of view.
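The statistical fusion over overlapping ROIs can be illustrated with a simplified mean-fusion variant (the patch coordinates and probabilities below are made up for demonstration; the paper's actual fusion rule may differ):

```python
import numpy as np

def fuse_patches(volume_shape, patches, n_classes):
    """Average per-class probability maps over overlapping patch
    predictions. `patches` is a list of (origin, prob) pairs where
    `prob` has shape (n_classes, *patch_shape). Voxels covered by
    several patches get the mean of their predictions; uncovered
    voxels fall back to a uniform prior."""
    acc = np.zeros((n_classes, *volume_shape))
    count = np.zeros(volume_shape)
    for (z, y, x), prob in patches:
        _, dz, dy, dx = prob.shape
        acc[:, z:z + dz, y:y + dy, x:x + dx] += prob
        count[z:z + dz, y:y + dy, x:x + dx] += 1
    covered = count > 0
    acc[:, covered] /= count[covered]
    acc[:, ~covered] = 1.0 / n_classes  # no evidence: uniform prior
    return acc

# Two overlapping patches on a tiny 1x1x3 volume, 2 classes.
p1 = ((0, 0, 0), np.array([[[[0.8, 0.6]]], [[[0.2, 0.4]]]]))  # covers x=0,1
p2 = ((0, 0, 1), np.array([[[[0.4, 0.2]]], [[[0.6, 0.8]]]]))  # covers x=1,2
fused = fuse_patches((1, 1, 3), [p1, p2], n_classes=2)
```

Averaging is the simplest statistical fusion; majority voting or weighted fusion drops into the same accumulation loop.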


Subject(s)
Imaging, Three-Dimensional, Neural Networks, Computer, Abdomen/diagnostic imaging, Humans, Image Processing, Computer-Assisted, Tomography, X-Ray Computed
3.
Med Phys ; 48(3): 1276-1285, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33410167

ABSTRACT

PURPOSE: Dynamic contrast-enhanced computed tomography (CT) is widely used to provide dynamic tissue contrast for diagnostic investigation and vascular identification. However, the phase information of contrast injection is typically recorded manually by technicians, which introduces missing or mislabeled phases. Hence, imaging-based contrast phase identification is appealing but challenging, due to large variations among contrast protocols, vascular dynamics, and metabolism, especially for clinically acquired CT scans. The purpose of this study is to perform imaging-based phase identification for dynamic abdominal CT across five representative contrast phases using a proposed adversarial learning framework. METHODS: A generative adversarial network (GAN) is proposed as a disentangled representation learning model. To explicitly model different contrast phases, a low-dimensional common representation and a class-specific code are fused in the hidden layer. The low-dimensional features are then reconstructed and followed by a discriminator and classifier. 36,350 CT slices from 400 subjects are used to evaluate the proposed method with fivefold cross-validation split on subjects. An additional 2,216 slices from 20 independent subjects are employed as independent testing data, evaluated using a multiclass normalized confusion matrix. RESULTS: The proposed network achieved significantly higher accuracy (0.93) than VGG, ResNet50, StarGAN, and 3DSE (0.59, 0.62, 0.72, and 0.90, respectively; P < 0.001, Stuart-Maxwell test on the normalized multiclass confusion matrix). CONCLUSION: We show that adversarial learning of the discriminator is beneficial for capturing contrast information across phases. The proposed discriminator from the disentangled network achieves promising results.
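The multiclass normalized confusion matrix used for evaluation can be sketched as follows (toy phase labels, not the study's data; rows are normalized so each true class sums to one):

```python
import numpy as np

def normalized_confusion(y_true, y_pred, n_classes):
    """Row-normalized confusion matrix: entry (i, j) is the fraction
    of true-class-i samples predicted as class j."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row = cm.sum(axis=1, keepdims=True)
    # Guard against empty classes: leave their rows at zero.
    return np.divide(cm, row, out=np.zeros_like(cm), where=row > 0)

# Toy 3-phase example (e.g. non-contrast / arterial / venous).
cm = normalized_confusion([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], 3)
```

The diagonal of this matrix gives per-class recall, which is robust to class imbalance across phases.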


Subject(s)
Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Abdomen, Humans, Tomography, Spiral Computed
4.
IEEE Trans Med Imaging ; 40(5): 1499-1507, 2021 05.
Article in English | MEDLINE | ID: mdl-33560981

ABSTRACT

Body part regression is a promising new technique that enables content navigation through self-supervised learning. Using this technique, a global quantitative spatial location for each axial slice is obtained from computed tomography (CT). However, it is challenging to define a unified global coordinate system for body CT scans due to the large variability in image resolution, contrast, sequences, and patient anatomy. Therefore, the widely used supervised learning approach cannot be easily deployed. To address these concerns, we propose an annotation-free method named blind-unsupervised-supervision network (BUSN). The contributions of the work are fourfold: (1) 1,030 multi-center CT scans are used to develop BUSN without any manual annotation; (2) the proposed BUSN corrects the predictions from unsupervised learning and uses the corrected results as the new supervision; (3) to improve the consistency of predictions, we propose a novel neighbor message passing (NMP) scheme that is integrated with BUSN as a statistical-learning-based correction; and (4) we introduce a new pre-processing pipeline that includes BUSN, validated on 3D multi-organ segmentation. The proposed method is trained on 1,030 whole-body CT scans (230,650 slices) from five datasets, as well as an independent external validation cohort with 100 scans. On body part regression, the proposed BUSN achieved a significantly higher median R-squared score (0.9089) than the state-of-the-art unsupervised method (0.7153). When BUSN is introduced as a preprocessing stage in volumetric segmentation, the proposed pipeline increases the total mean Dice score of 3D abdominal multi-organ segmentation from 0.7991 to 0.8145.
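As a much-simplified stand-in for the paper's neighbor message passing (NMP) scheme, the idea of enforcing consistency among neighboring slice predictions can be sketched with a plain moving-average correction (the actual NMP is a statistical-learning-based correction and is more involved than this):

```python
import numpy as np

def neighbor_smooth(scores, radius=1):
    """Smooth per-slice body-part scores by averaging each slice with
    its neighbors, damping outlier predictions along the z-axis."""
    scores = np.asarray(scores, dtype=float)
    out = np.empty_like(scores)
    for i in range(len(scores)):
        lo, hi = max(0, i - radius), min(len(scores), i + radius + 1)
        out[i] = scores[lo:hi].mean()
    return out

raw = [0.0, 0.1, 0.9, 0.3, 0.4]   # slice 2 is an outlier prediction
smooth = neighbor_smooth(raw)
```

The intuition carried over from the paper is that anatomically adjacent slices should receive nearby scores, so a slice's neighbors constrain its own prediction.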


Subject(s)
Human Body, Tomography, X-Ray Computed, Humans, Image Processing, Computer-Assisted
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 3040-3045, 2020 07.
Article in English | MEDLINE | ID: mdl-33018646

ABSTRACT

The success of deep learning (DL) methods in the Brain-Computer Interfaces (BCI) field for classification of electroencephalographic (EEG) recordings has been restricted by the lack of large datasets. Privacy concerns associated with EEG signals limit the possibility of constructing a large EEG-BCI dataset by the conglomeration of multiple small ones for jointly training machine learning models. Hence, in this paper, we propose a novel privacy-preserving DL architecture named federated transfer learning (FTL) for EEG classification that is based on the federated learning framework. Working with the single-trial covariance matrix, the proposed architecture extracts common discriminative information from multi-subject EEG data with the help of domain adaptation techniques. We evaluate the performance of the proposed architecture on the PhysioNet dataset for 2-class motor imagery classification. While avoiding the actual data sharing, our FTL approach achieves 2% higher classification accuracy in a subject-adaptive analysis. Also, in the absence of multi-subject data, our architecture provides 6% better accuracy compared to other state-of-the-art DL architectures.
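FTL builds on the federated learning framework, in which only model parameters, never raw EEG, leave a client. A minimal sketch of one aggregation round in the classic FedAvg style (an illustration of the framework, not the paper's exact architecture):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg round: weighted average of client model parameters
    by local dataset size. Raw data never leaves a client; only the
    parameter lists are communicated."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(avg, weights):
            a += (n / total) * w
    return avg

# Two clients, one weight tensor each; client B has 3x the data.
wA = [np.array([1.0, 3.0])]
wB = [np.array([3.0, 1.0])]
merged = fed_avg([wA, wB], client_sizes=[1, 3])
```

In a full system, each client would run local gradient steps between rounds; the server only ever sees the averaged parameters.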


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Imagery, Psychotherapy, Machine Learning, Privacy
6.
J Med Imaging (Bellingham) ; 7(4): 044002, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32775501

ABSTRACT

Purpose: Deep learning methods have become essential tools for quantitative interpretation of medical imaging data, but training these approaches is highly sensitive to biases and class imbalance in the available data. There is an opportunity to increase the available training data by combining across different data sources (e.g., distinct public projects); however, data collected under different scopes tend to differ in class balance, label availability, and subject demographics. Recent work has shown that importance sampling can be used to guide training selection. To date, these approaches have not considered imbalanced data sources with distinct labeling protocols. Approach: We propose a sampling policy, known as adaptive stochastic policy (ASP), inspired by reinforcement learning, that adapts training based on subject, data source, and dynamic use criteria. We apply ASP in the context of multiorgan abdominal computed tomography segmentation. Training was performed with cross validation on 840 subjects from 10 data sources. External validation was performed with 20 subjects from 1 data source. Results: Four alternative strategies were evaluated against the state-of-the-art upper confidence bound (UCB) baseline. ASP achieves an average Dice of 0.8261 compared to 0.8135 for UCB (p < 0.01, paired t-test) across fivefold cross validation. On withheld testing datasets, the proposed ASP achieved a mean Dice of 0.8265 versus 0.8077 for UCB (p < 0.01, paired t-test). Conclusions: ASP provides a flexible reweighting technique for training deep learning models. We conclude that the proposed method adapts sample importance to improve performance on a challenging multisite, multiorgan, and multisize segmentation task.
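A minimal loss-driven sampling policy (a hedged sketch of the general idea, not the published ASP) can be written as a softmax over recent per-source losses, so that sources the model currently handles poorly are drawn more often:

```python
import numpy as np

def sampling_policy(recent_losses, temperature=1.0):
    """Turn per-data-source losses into sampling probabilities via a
    softmax; higher-loss sources are sampled more frequently, and the
    temperature controls how aggressive the reweighting is."""
    losses = np.asarray(recent_losses, dtype=float)
    z = losses / temperature
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Three hypothetical data sources with different recent losses.
probs = sampling_policy([0.2, 0.8, 0.5])
```

In an adaptive scheme, the loss estimates would be refreshed each epoch, so the sampling distribution tracks the model's current weaknesses.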

7.
Article in English | MEDLINE | ID: mdl-34040277

ABSTRACT

Segmentation of abdominal computed tomography (CT) provides spatial context, morphological properties, and a framework for tissue-specific radiomics to guide quantitative radiological assessment. A 2015 MICCAI challenge spurred substantial innovation in multi-organ abdominal CT segmentation with both traditional and deep learning methods. Recent innovations in deep methods have driven performance toward levels at which clinical translation is appealing. However, continued cross-validation on open datasets presents the risk of indirect knowledge contamination and could result in circular reasoning. Moreover, "real world" segmentation can be challenging due to the wide variability of abdominal physiology across patients. Herein, we perform two data retrievals to capture clinically acquired deidentified abdominal CT cohorts and evaluate a recently published variation on 3D U-Net (baseline algorithm). First, we retrieved 2004 deidentified studies on 476 patients with diagnosis codes involving spleen abnormalities (cohort A). Second, we retrieved 4313 deidentified studies on 1754 patients without diagnosis codes involving spleen abnormalities (cohort B). We performed prospective evaluation of the existing algorithm on both cohorts, yielding failure rates of 13% and 8%, respectively. We then identified 51 subjects in cohort A with segmentation failures and manually corrected the liver and gallbladder labels. Re-training the model with the manual labels added reduced the failure rates to 9% and 6% for cohorts A and B, respectively. In summary, the performance of the baseline on the prospective cohorts was similar to that on previously published datasets. Moreover, adding data from the first cohort substantively improved performance when evaluated on the second withheld validation cohort.

8.
Article in English | MEDLINE | ID: mdl-34040279

ABSTRACT

Human-in-the-loop quality assurance (QA) is typically performed after medical image segmentation to ensure that systems perform as intended and to identify and exclude outliers. By performing QA on large-scale, previously unlabeled testing data, categorical QA scores (e.g., "successful" versus "unsuccessful") can be generated. Unfortunately, these resource-intensive human-in-the-loop QA scores are not typically reused in medical image machine learning, especially to train a deep neural network for image segmentation. Herein, we perform a pilot study to investigate whether QA labels can be used as supplementary supervision to augment the training process in a semi-supervised fashion. We propose a semi-supervised multi-organ segmentation deep neural network consisting of a traditional segmentation model generator and a QA-involved discriminator. An existing 3D abdominal segmentation network is employed as the generator, while a pre-trained ResNet-18 network is used as the discriminator. A large-scale dataset of 2027 volumes is used to train the generator; its 2D montage images and segmentation masks with QA scores are used to train the discriminator. To generate the QA scores, the 2D montage images were reviewed manually and coded 0 (success), 1 (errors consistent with published performance), or 2 (gross failure). The ResNet-18 network was then trained with 1623 montage images in equal distribution across the three code labels, achieving a classification accuracy of 94% on 404 withheld test montage images. To assess the value of QA supervision, the discriminator was used as a loss function in a multi-organ segmentation pipeline. Including the QA-loss function increased the number of successfully segmented cases in the unlabeled test dataset from 714 to 951 patients over the baseline model, while the number of failures decreased from 606 (29.90%) to 402 (19.83%).
The contributions of the proposed method are threefold: we show that (1) QA scores can be used as a loss function to perform semi-supervised learning on unlabeled data, (2) the discriminator is trained with graded QA scores rather than traditional true/false labels, and (3) multi-organ segmentation on unlabeled datasets can be fine-tuned to be more robust and accurate than the original baseline method. The use of QA-inspired loss functions represents a promising area of future research and may permit tighter integration of supervised and semi-supervised learning.

9.
Article in English | MEDLINE | ID: mdl-34526733

ABSTRACT

Dynamic contrast-enhanced computed tomography (CT) is an imaging technique that provides critical information on the relationship of vascular structure and dynamics in the context of underlying anatomy. A key challenge for image processing with contrast-enhanced CT is that phase discrepancies are latent in different tissues due to contrast protocols, vascular dynamics, and metabolic variance. Previous studies have proposed deep learning frameworks for classifying contrast enhancement with networks inspired by computer vision. Here, we revisit the challenge in the context of whole-abdomen contrast-enhanced CT. To capture and compensate for the complex contrast changes, we propose a novel discriminator in the form of a multi-domain disentangled representation learning network. The goal of this network is to learn an intermediate representation that separates contrast enhancement from anatomy and enables classification of images with varying contrast time. Briefly, the discriminator of our unpaired contrast-disentangling GAN (CD-GAN) follows the ResNet architecture to classify CT scans from different enhancement phases. To evaluate the approach, we trained the enhancement phase classifier on 21,060 slices from two clinical cohorts of 230 subjects. The scans were manually labeled with three independent enhancement phases (non-contrast, portal venous, and delayed). Testing was performed on 9,100 slices from 30 independent subjects who had been imaged with CT scans from all contrast phases. Performance was quantified in terms of the multi-class normalized confusion matrix. The proposed network achieved an accuracy of 0.91, significantly improving on baseline UNet, ResNet50, and StarGAN (accuracy scores of 0.54, 0.55, and 0.62, respectively; p < 0.0001, paired t-test for ResNet50 versus CD-GAN). The proposed discriminator from the disentangled network presents a promising technique that may allow deeper modeling of dynamic imaging against patient-specific anatomies.

10.
Article in English | MEDLINE | ID: mdl-33907347

ABSTRACT

Abdominal multi-organ segmentation of computed tomography (CT) images has been the subject of extensive research interest. It presents a substantial challenge in medical image processing, as the shape and distribution of abdominal organs can vary greatly among the population and within an individual over time. While continuous integration of novel datasets into the training set provides potential for better segmentation performance, collection of data at scale is not only costly but also impractical in some contexts. Moreover, it remains unclear what marginal value additional data have to offer. Herein, we propose a single-pass active learning method through human quality assurance (QA). We built on a pre-trained 3D U-Net model for abdominal multi-organ segmentation and augmented the dataset either with outlier data (e.g., exemplars for which the baseline algorithm failed) or inliers (e.g., exemplars for which the baseline algorithm worked). The new models were trained using the augmented datasets with 5-fold cross-validation (for outlier data) and withheld outlier samples (for inlier data). Manual labeling of outliers increased Dice scores by 0.130, compared to an increase of 0.067 with inliers (p < 0.001, two-tailed paired t-test). By adding 5 to 37 inliers or outliers to training, we find that the marginal value of adding outliers is higher than that of adding inliers. In summary, improvement in single-organ performance was obtained without diminishing multi-organ performance or significantly increasing training time. Hence, identification and correction of baseline failure cases present an effective and efficient method of selecting training data to improve algorithm performance.
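The outlier-selection step, i.e., choosing baseline failure cases for manual correction, can be sketched as a simple threshold-and-sort rule (the subject IDs, scores, and 0.7 threshold below are illustrative, not the study's):

```python
import numpy as np

def pick_outliers(subject_ids, dice_scores, threshold=0.7, budget=5):
    """Select up to `budget` subjects whose Dice falls below
    `threshold`, worst first, as candidates for manual correction."""
    dice_scores = np.asarray(dice_scores, dtype=float)
    order = np.argsort(dice_scores)               # ascending: worst first
    picked = [subject_ids[i] for i in order if dice_scores[i] < threshold]
    return picked[:budget]

ids = ["s1", "s2", "s3", "s4"]
dice = [0.91, 0.42, 0.65, 0.88]
failures = pick_outliers(ids, dice, threshold=0.7, budget=2)
```

Spending the annotation budget on the worst cases first is exactly the intuition the abstract reports: outlier corrections carry more marginal value than inlier additions.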

11.
Abdom Radiol (NY) ; 45(9): 2688-2697, 2020 09.
Article in English | MEDLINE | ID: mdl-32232524

ABSTRACT

PURPOSE: To evaluate whether a three-phase dynamic contrast-enhanced CT protocol, when combined with a deep learning model, has similar accuracy in differentiating hepatocellular carcinoma (HCC) from other focal liver lesions (FLLs) compared with a four-phase protocol. METHODS: Three hundred and forty-two patients (mean age 49.1 ± 10.5 years, range 19-86 years, 65.8% male) scanned with a four-phase CT protocol (precontrast, arterial, portal-venous and delayed phases) were retrospectively enrolled. A total of 449 FLLs were categorized into HCC and non-HCC groups based on the best available reference standard. Three convolutional dense networks (CDNs) with the input of four-phase CT images (model A), three-phase images without portal-venous phase (model B) and three-phase images without precontrast phase (model C) were trained on 80% of lesions and evaluated in the other 20% by receiver operating characteristics (ROC) and confusion matrix analysis. The DeLong test was performed to compare the areas under the ROC curves (AUCs) of A with B, B with C, and A with C. RESULTS: The diagnostic accuracy in differentiating HCC from other FLLs on test sets was 83.3% for model A, 81.1% for model B and 85.6% for model C, and the AUCs were 0.925, 0.862 and 0.920, respectively. The AUCs of models A and C did not differ significantly (p = 0.765), but the AUCs of models A and B (p = 0.038) and of models B and C (p = 0.028) did. CONCLUSIONS: When combined with a CDN, a three-phase CT protocol without precontrast showed similar diagnostic accuracy as a four-phase protocol in differentiating HCC from other FLLs, suggesting that the multiphase CT protocol for HCC diagnosis might be optimized by removing the precontrast phase to reduce radiation dose.


Subject(s)
Carcinoma, Hepatocellular, Deep Learning, Liver Neoplasms, Adult, Aged, Aged, 80 and over, Carcinoma, Hepatocellular/diagnostic imaging, Contrast Media, Female, Humans, Liver Neoplasms/diagnostic imaging, Male, Middle Aged, Retrospective Studies, Tomography, Spiral Computed, Young Adult
12.
Cancer Manag Res ; 12: 2979-2992, 2020.
Article in English | MEDLINE | ID: mdl-32425607

ABSTRACT

PURPOSE: The purpose of this study is to compare the detection performance of 3-dimensional convolutional neural network (3D CNN)-based computer-aided detection (CAD) models with that of radiologists of different levels of experience in detecting pulmonary nodules on thin-section computed tomography (CT). PATIENTS AND METHODS: We retrospectively reviewed 1109 consecutive patients who underwent follow-up thin-section CT at our institution. The 3D CNN model for nodule detection was re-trained and complemented by expert augmentation. The annotations of a consensus panel of two expert radiologists determined the ground truth. The detection performance of the re-trained CAD model and of three radiologists at different levels of experience was tested using free-response receiver operating characteristic (FROC) analysis in the test group. RESULTS: The detection performance of the re-trained CAD model was significantly better than that of the pre-trained network (sensitivity: 93.09% vs 38.44%). The re-trained CAD model also had significantly better detection performance than the radiologists (average sensitivity: 93.09% vs 50.22%), without significantly increasing the number of false positives per scan (1.64 vs 0.68). In the training set, 922 nodules less than 3 mm in size in 211 high-risk patients were recommended for follow-up CT according to the Fleischner Society guidelines. Fifteen of 101 solid nodules were confirmed to be lung cancer. CONCLUSION: The re-trained 3D CNN-based CAD model, complemented by expert augmentation, was an accurate and efficient tool for identifying incidental pulmonary nodules for subsequent management.

13.
IEEE Trans Pattern Anal Mach Intell ; 31(6): 989-1005, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19372605

ABSTRACT

A discriminant formulation of top-down visual saliency, intrinsically connected to the recognition problem, is proposed. The new formulation is shown to be closely related to a number of classical principles for the organization of perceptual systems, including infomax, inference by detection of suspicious coincidences, classification with minimal uncertainty, and classification with minimum probability of error. The implementation of these principles with computational parsimony, by exploitation of the statistics of natural images, is investigated. It is shown that Barlow's principle of inference by the detection of suspicious coincidences enables computationally efficient saliency measures which are nearly optimal for classification. This principle is adopted for the solution of the two fundamental problems in discriminant saliency, feature selection and saliency detection. The resulting saliency detector is shown to have a number of interesting properties and to act effectively as a focus-of-attention mechanism for the selection of interest points according to their relevance for visual recognition. Experimental evidence shows that the selected points perform well with respect to 1) the ability to localize objects embedded in significant amounts of clutter, 2) the ability to capture information relevant for image classification, and 3) the richness of the set of visual attributes that can be considered salient.


Subject(s)
Algorithms, Artificial Intelligence, Biomimetics/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Visual Perception, Discriminant Analysis, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
14.
J Med Imaging (Bellingham) ; 6(4): 044005, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31763353

ABSTRACT

Tissue window filtering has been widely used in deep learning for computed tomography (CT) image analysis to improve training performance (e.g., soft-tissue windows for abdominal CT). However, the effectiveness of tissue window normalization is questionable, since it may further harm the generalizability of the trained model, especially when such models are applied to new cohorts with different CT reconstruction kernels, contrast mechanisms, dynamic variations in acquisition, and physiological changes. We evaluate the effectiveness of training both with and without soft-tissue window normalization on multisite CT cohorts. Moreover, we propose a stochastic tissue window normalization (SWN) method to improve the generalizability of tissue window normalization. Unlike purely random sampling, the SWN method centers the randomization around the soft-tissue window to maintain specificity for abdominal organs. To evaluate the performance of the different strategies, 80 training and 453 validation and testing scans from six datasets are employed to perform multiorgan segmentation using a standard 2D U-Net. The six datasets cover three scenarios, in which the training and testing scans are from (1) the same scanner and same population, (2) the same CT contrast but different pathology, and (3) different CT contrast and pathology. The traditional soft-tissue window and nonwindowed approaches achieved better performance on (1). The proposed SWN achieved generally superior performance on (2) and (3) with statistical support, offering better generalizability for a trained model.
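A sketch of the SWN idea, randomizing the window center and width around an assumed soft-tissue default (center 50 HU, width 400 HU here; the paper's exact parameters may differ), might look like:

```python
import numpy as np

def stochastic_window(hu, center=50.0, width=400.0, jitter=0.2, rng=None):
    """Clip HU values to a soft-tissue window whose center and width
    are randomly perturbed on each call, then rescale to [0, 1].
    Randomizing *around* the soft-tissue window keeps abdominal
    specificity while adding robustness to acquisition variation."""
    if rng is None:
        rng = np.random.default_rng()
    c = center * (1 + rng.uniform(-jitter, jitter))
    w = width * (1 + rng.uniform(-jitter, jitter))
    lo, hi = c - w / 2, c + w / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

img = np.array([-1000.0, 0.0, 60.0, 300.0])  # air, water, tissue, bone-ish
norm = stochastic_window(img, rng=np.random.default_rng(0))
```

Applied per training sample, this acts as an intensity augmentation: the network never sees exactly the same window twice, which is the generalizability mechanism the abstract describes.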

15.
J Vis ; 8(7): 13.1-18, 2008 Jun 13.
Article in English | MEDLINE | ID: mdl-19146246

ABSTRACT

It has been suggested that saliency mechanisms play a role in perceptual organization. This work evaluates the plausibility of a recently proposed generic principle for visual saliency: that all saliency decisions are optimal in a decision-theoretic sense. The discriminant saliency hypothesis is combined with the classical assumption that bottom-up saliency is a center-surround process to derive a (decision-theoretic) optimal saliency architecture. Under this architecture, the saliency of each image location is equated to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround. The optimal saliency detector is derived for various stimulus modalities, including intensity, color, orientation, and motion, and shown to make accurate quantitative predictions of various psychophysics of human saliency for both static and motion stimuli. These include some classical nonlinearities of orientation and motion saliency and a Weber law that governs various types of saliency asymmetries. The discriminant saliency detectors are also applied to various saliency problems of interest in computer vision, including the prediction of human eye fixations on natural scenes, motion-based saliency in the presence of ego-motion, and background subtraction in highly dynamic scenes. In all cases, the discriminant saliency detectors outperform previously proposed methods from both the saliency and the general computer vision literatures.


Subject(s)
Discrimination, Psychological/physiology, Form Perception/physiology, Motion Perception/physiology, Visual Perception/physiology, Humans, Photic Stimulation, Psychophysics/methods
16.
Lung Cancer ; 125: 109-114, 2018 11.
Article in English | MEDLINE | ID: mdl-30429007

ABSTRACT

OBJECTIVES: Pulmonary granulomatous nodules (GNs) with a spiculated or lobulated appearance are indistinguishable from solid lung adenocarcinoma (SADC) based on CT morphological features and can partly produce false-positive findings on PET/CT. The objective of this study was to investigate the ability of quantitative CT radiomics to preoperatively differentiate solitary atypical GN from SADC. METHODS: 302 eligible patients (SADC = 209, GN = 93) were evaluated in this retrospective study and divided into training (n = 211) and validation cohorts (n = 91). Radiomics features were extracted from plain and vein-phase CT images. An L1-regularized logistic regression model was used to identify the optimal radiomics features for constructing a radiomics model to differentiate solitary GN from SADC. The performance of the constructed radiomics models was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). RESULTS: 16.7% (35/209) of SADCs were misdiagnosed as GN and 24.7% (23/93) of GNs were misdiagnosed as lung cancer before surgery. The AUCs of the models combining radiomics and clinical risk factors were 0.935, 0.902, and 0.923 in the training cohort for plain radiomics (PR), vein radiomics, and combined plain-and-vein radiomics, respectively, and 0.817, 0.835, and 0.841 in the validation cohort. PR combined with clinical risk factors (PRC) performed better than the simple radiomics models (p < 0.05). The diagnostic accuracy of PRC in the total cohort was similar to that of our radiologists (p ≥ 0.05). CONCLUSIONS: As a noninvasive method, PRC has the ability to distinguish SADC from GN with spiculation or lobulation.
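L1-regularized logistic regression selects features by driving uninformative coefficients to exactly zero. A self-contained sketch using proximal gradient descent (ISTA) on toy data, standing in for whatever solver the study actually used:

```python
import numpy as np

def l1_logistic(X, y, lam=0.1, lr=0.1, steps=2000):
    """L1-regularized logistic regression fit with proximal gradient
    descent (ISTA): a gradient step on the logistic loss followed by
    soft-thresholding, which zeroes out weak feature weights."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted probabilities
        grad = X.T @ (p - y) / n                # logistic loss gradient
        w -= lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Toy data: only feature 0 carries signal; feature 1 is pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)
w = l1_logistic(X, y, lam=0.05)
```

The surviving nonzero coefficients are the "optimal radiomics features" in the abstract's sense; the penalty strength plays the role of the feature-selection knob.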


Subject(s)
Adenocarcinoma of Lung/pathology , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/pathology , Area Under Curve , Female , Humans , Logistic Models , Male , Middle Aged , Positron Emission Tomography Computed Tomography/methods , ROC Curve , Retrospective Studies , Tomography, X-Ray Computed/methods
17.
J Thorac Dis ; 10(Suppl 7): S807-S819, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29780627

ABSTRACT

BACKGROUND: Lymph node metastasis (LNM) of lung cancer is an important factor related to survival and recurrence. The association between radiomics features of lung cancer and LNM remains unclear. We developed and validated a radiomics nomogram to predict LNM in solid lung adenocarcinoma. METHODS: A total of 159 eligible patients with solid lung adenocarcinoma were divided into training (n=106) and validation (n=53) cohorts. Radiomics features were extracted from venous-phase CT images. We built a radiomics nomogram using a multivariate logistic regression model combined with CT-reported lymph node (LN) status. The performance of the radiomics nomogram was evaluated using the area under the curve (AUC) of the receiver operating characteristic curve. We performed decision curve analysis (DCA) within the training and validation cohorts to assess the clinical usefulness of the nomogram. RESULTS: Fourteen radiomics features were chosen from 94 candidate features to build a radiomics signature that significantly correlated with LNM. The model showed good calibration and discrimination in the training cohort, with an AUC of 0.871 (95% CI: 0.804-0.937), sensitivity of 85.71%, and specificity of 77.19%. In the validation cohort, the AUC was 0.856 (95% CI: 0.745-0.966), sensitivity was 91.66%, and specificity was 82.14%. DCA demonstrated that the nomogram was clinically useful. The nomogram also showed good predictive ability in patients at high risk for LNM in the CT-reported LN-negative (cN0) subgroup. CONCLUSIONS: The radiomics nomogram, based on preoperative CT images, can be used as a noninvasive method to predict LNM in patients with solid lung adenocarcinoma.
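The decision curve analysis (DCA) mentioned above quantifies clinical usefulness as net benefit across threshold probabilities: the true-positive rate minus the false-positive rate weighted by the odds of the threshold. A minimal sketch of that quantity, with toy data rather than the study's predictions:

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Net benefit of a risk model at each threshold probability t:
    TP/n - (FP/n) * t/(1-t), the quantity plotted in a decision curve."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    out = []
    for t in thresholds:
        pred = y_prob >= t            # treat everyone above threshold
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - (fp / n) * t / (1 - t))
    return np.array(out)
```

A model is "clinically useful" over the threshold range where its curve sits above both the treat-all and treat-none (net benefit 0) reference strategies.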

18.
Med Image Comput Comput Assist Interv ; 13(Pt 2): 446-53, 2010.
Article in English | MEDLINE | ID: mdl-20879346

ABSTRACT

Image focus quality is of utmost importance in digital microscopes because the pathologist cannot accurately characterize the tissue state without focused images. We propose to train a classifier to measure the focus quality of microscopy scans based on an extensive set of image features. However, classifiers rely heavily on the quality and quantity of the training data, and collecting annotated data is tedious and expensive. We therefore propose a new method to automatically generate large amounts of training data using image stacks. Our experiments demonstrate that a classifier trained with the image stacks performs comparably with one trained with manually annotated data. The classifier is able to accurately detect out-of-focus regions, provide focus quality feedback to the user, and identify potential problems of the microscopy design.
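The auto-labeling idea above can be illustrated with a toy sketch: a sharpness feature (variance of a discrete Laplacian, one classic focus measure, not necessarily among the paper's feature set) is computed per slice of a z-stack, and labels come for free from each slice's distance to the known in-focus plane. The blur model and all parameters here are invented for the example.

```python
import numpy as np

def laplacian_focus_measure(img):
    """Variance of a discrete Laplacian -- a standard sharpness feature.
    Defocused slices have weak second derivatives, hence low variance."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(lap.var())

def box_blur(img, k):
    """Crude defocus model: k steps of 5-point averaging."""
    for _ in range(k):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

def label_stack(stack, in_focus_index, tol=1):
    """Auto-label a z-stack: slices within `tol` of the known in-focus
    plane become positive training examples, the rest negatives --
    no manual annotation needed."""
    return [(laplacian_focus_measure(s), abs(z - in_focus_index) <= tol)
            for z, s in enumerate(stack)]
```

Running this on a progressively blurred test pattern shows the feature decreasing monotonically with defocus, which is what makes the distance-derived labels usable as training data.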


Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Microscopy/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
19.
Neural Comput ; 21(1): 239-71, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19210172

ABSTRACT

A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
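The "standard architecture of V1" named above can be sketched as a toy 1-D pipeline: linear filtering with a small filter bank, rectification, divisive normalization across the bank, and spatial pooling. This is an illustrative caricature, not the paper's derivation; the filters, `sigma`, and pooling width are all arbitrary.

```python
import numpy as np

def saliency_cascade(stimulus, filters, sigma=0.1, pool=5):
    """Toy 1-D version of the cascade the abstract identifies with V1."""
    # 1) linear filtering with a small filter bank
    resp = np.array([np.convolve(stimulus, f, mode='same') for f in filters])
    # 2) rectification: half-wave rectify and square (an energy measure)
    resp = np.maximum(resp, 0.0) ** 2
    # 3) divisive normalization across the bank at each location:
    #    each channel's energy is divided by the pooled energy
    norm = resp / (sigma ** 2 + resp.sum(axis=0, keepdims=True))
    # 4) spatial pooling with a local average
    kernel = np.ones(pool) / pool
    return np.array([np.convolve(r, kernel, mode='same') for r in norm])
```

With an edge-sensitive filter in the bank and a step stimulus, the pooled response of that channel peaks at the step, a crude analogue of pop-out.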


Subject(s)
Decision Making/physiology , Discrimination, Psychological/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Psychological Theory , Animals , Attention , Humans , Models, Neurological , Neurophysiology , Photic Stimulation/methods , Psychophysics/methods