Results 1 - 20 of 31
1.
Neuroimage ; 111: 562-79, 2015 May 01.
Article in English | MEDLINE | ID: mdl-25652394

ABSTRACT

Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n=30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.


Subject(s)
Algorithms , Alzheimer Disease/diagnosis , Cognitive Dysfunction/diagnosis , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Aged , Aged, 80 and over , Alzheimer Disease/classification , Cognitive Dysfunction/classification , Diagnosis, Computer-Assisted/standards , Female , Humans , Image Interpretation, Computer-Assisted/standards , Magnetic Resonance Imaging/standards , Male , Middle Aged , Sensitivity and Specificity
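The multi-class evaluation protocol described in this abstract (accuracy plus AUC over three diagnostic groups on a blinded test set) can be reproduced with standard tooling. The following Python sketch assumes scikit-learn and uses synthetic stand-in data; the feature set and the Random Forest classifier are placeholders, not any of the challenge algorithms.

# Multi-class accuracy and one-vs-rest AUC for a three-class problem
# (AD, MCI, control), mirroring the challenge-style evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(30, 20)), rng.integers(0, 3, size=30)   # small training set, as in the challenge
X_test, y_test = rng.normal(size=(354, 20)), rng.integers(0, 3, size=354)   # stand-in for the blinded test set

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)

acc = accuracy_score(y_test, proba.argmax(axis=1))
auc = roc_auc_score(y_test, proba, multi_class="ovr", average="macro")      # one-vs-rest, macro-averaged
print(f"accuracy={acc:.3f}  multi-class AUC={auc:.3f}")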
2.
Adv Neurobiol ; 36: 469-486, 2024.
Article in English | MEDLINE | ID: mdl-38468048

ABSTRACT

This chapter discusses multifractal texture estimation and characterization of brain lesions (necrosis, edema, enhanced tumor, non-enhanced tumor, etc.) in magnetic resonance (MR) images. This work formulates the complex texture of tumors in MR images using a stochastic model known as multifractional Brownian motion (mBm). Mathematical derivations of the mBm model and the corresponding algorithm to extract the spatially varying multifractal texture feature are discussed. The extracted multifractal texture feature is fused with other effective features to enhance the tissue characteristics. Segmentation of the tissues is performed using a feature-based classification method. The efficacy of the mBm texture feature in segmenting different abnormal tissues is demonstrated using a large-scale publicly available clinical dataset. Experimental results and performance of the methods confirm the efficacy of the proposed technique in the automatic segmentation of abnormal tissues in multimodal (T1, T2, FLAIR, and T1-contrast) brain MRIs.


Subject(s)
Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Algorithms , Neuroimaging , Brain/diagnostic imaging , Brain/pathology
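The spatially varying texture feature described in this abstract is built on a locally estimated Hurst exponent. The following Python sketch is a simplified increment-scaling estimator of a local Hurst map on a 2D image, assuming only NumPy; it illustrates the idea of mBm-style texture, not the authors' exact derivation.

# Local Hurst-exponent map: in each window, fit E|I(x+s) - I(x)| ~ s**H
# on a log-log scale; the slope approximates the local Hurst exponent.
import numpy as np

def local_hurst_map(img, lags=(1, 2, 4, 8), win=16):
    img = img.astype(float)
    H = np.zeros((img.shape[0] // win, img.shape[1] // win))
    log_s = np.log(lags)
    for i in range(H.shape[0]):
        for j in range(H.shape[1]):
            patch = img[i*win:(i+1)*win + max(lags), j*win:(j+1)*win + max(lags)]
            log_m = []
            for s in lags:
                dx = np.abs(patch[s:, :] - patch[:-s, :]).mean()
                dy = np.abs(patch[:, s:] - patch[:, :-s]).mean()
                log_m.append(np.log((dx + dy) / 2 + 1e-12))
            H[i, j] = np.polyfit(log_s, log_m, 1)[0]          # slope ~ local Hurst exponent
    return H

rng = np.random.default_rng(0)
texture = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # smooth random surrogate image
print(local_hurst_map(texture).round(2))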
3.
Cancers (Basel) ; 15(18), 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37760604

ABSTRACT

Recent clinical research describes a subset of glioblastoma patients who exhibit REP prior to the start of radiation therapy. The current literature has thus far described this population using clinicopathologic features. To our knowledge, this study is the first to investigate the potential of conventional radiomics, sophisticated multi-resolution fractal texture features, and different molecular features (MGMT, IDH mutations) as diagnostic and prognostic tools for distinguishing REP from non-REP cases using computational and statistical modeling methods. The radiation-planning T1 post-contrast (T1C) MRI sequences of 70 patients are analyzed. An ensemble method with 5-fold cross-validation over 1000 iterations offers an AUC of 0.793 ± 0.082 for REP versus non-REP classification. In addition, copula-based modeling under dependent censoring (where a subset of the patients may not be followed up until death) identifies significant features (p-value < 0.05) for survival probability and prognostic grouping of patient cases. The prediction of survival for the patient cohort produces a precision of 0.881 ± 0.056. The prognostic index (PI) calculated using the fused features shows that 84.62% of REP cases fall into the poor prognostic group, suggesting the potential of fused features for predicting a higher percentage of REP cases. The experimental results further show that multi-resolution fractal texture features perform better than conventional radiomics features for prediction of REP and survival outcomes.
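The cross-validated classification protocol mentioned in this abstract (an ensemble evaluated with repeated 5-fold cross-validation, reported as mean ± standard deviation AUC) can be sketched in Python with scikit-learn as below. Features and labels are synthetic stand-ins, the gradient-boosting ensemble is a placeholder, and the 1000 iterations are reduced to 20 repeats.

# Repeated stratified 5-fold cross-validation, scored by ROC AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 40))        # 70 patients x radiomic/fractal features (placeholder)
y = rng.integers(0, 2, size=70)      # REP vs. non-REP labels (placeholder)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y,
                         scoring="roc_auc", cv=cv)
print(f"AUC = {scores.mean():.3f} +/- {scores.std():.3f}")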

4.
IEEE Trans Neural Netw Learn Syst ; 33(11): 6215-6225, 2022 11.
Article in English | MEDLINE | ID: mdl-33900927

ABSTRACT

Efficient processing of large-scale time-series data is an intricate problem in machine learning. Conventional sensor signal processing pipelines with hand-engineered feature extraction often involve huge computational costs with high-dimensional data. Deep recurrent neural networks have shown promise in automated feature learning for improved time-series processing. However, generic deep recurrent models grow in scale and depth with the increased complexity of the data. This is particularly challenging in the presence of high-dimensional data with temporal and spatial characteristics. Consequently, this work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to efficiently process complex multidimensional time-series data with spatial information. The cellular recurrent architecture in the proposed model allows for location-aware synchronous processing of time-series data from spatially distributed sensor signal sources. Extensive trainable parameter sharing due to cellularity in the proposed architecture ensures efficiency in the use of recurrent processing units with high-dimensional inputs. This study also investigates the versatility of the proposed DCRNN model for the classification of multiclass time-series data from different application domains. Consequently, the proposed DCRNN architecture is evaluated using two time-series data sets: a multichannel scalp electroencephalogram (EEG) data set for seizure detection, and a machine fault detection data set obtained in-house. The results suggest that the proposed architecture achieves state-of-the-art performance while utilizing substantially fewer trainable parameters than comparable methods in the literature.


Subject(s)
Neural Networks, Computer , Signal Processing, Computer-Assisted , Humans , Electroencephalography/methods , Machine Learning , Seizures
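The key idea in this abstract is a small recurrent unit whose trainable weights are shared across spatially distributed sensor locations. The PyTorch sketch below illustrates that sharing with a single GRU applied per channel, followed by pooling and a linear classifier; the layer sizes, channel count, and pooling choice are assumptions, not the authors' DCRNN implementation.

# One GRU reused across all channels -> heavy parameter sharing over locations.
import torch
import torch.nn as nn

class SharedCellRecurrentNet(nn.Module):
    def __init__(self, hidden=32, n_classes=2):
        super().__init__()
        self.cell = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                                   # x: (batch, channels, time)
        b, c, t = x.shape
        x = x.reshape(b * c, t, 1)                          # every channel goes through the same GRU
        _, h = self.cell(x)                                 # h: (1, batch*channels, hidden)
        h = h.squeeze(0).reshape(b, c, -1).mean(dim=1)      # pool over channels/locations
        return self.head(h)

eeg = torch.randn(4, 22, 256)                               # 4 segments, 22 EEG channels, 256 samples
print(SharedCellRecurrentNet()(eeg).shape)                  # -> torch.Size([4, 2])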
5.
Article in English | MEDLINE | ID: mdl-36157884

ABSTRACT

The concept of weight pruning has shown success in neural network model compression with marginal loss in classification performance. However, similar concepts have not been well recognized in improving unsupervised learning. To the best of our knowledge, this paper proposes one of the first studies on weight pruning in unsupervised autoencoder models using non-imaging data points. We adapt the weight pruning concept to investigate the dynamic behavior of weights while reconstructing data using an autoencoder and propose a deterministic model perturbation algorithm based on the weight statistics. The model perturbation at periodic intervals resets a percentage of weight values using a binary weight mask. Experiments across eight non-imaging data sets, ranging from gene sequence to swarm behavior data, show that only a few periodic perturbations of weights improve the data reconstruction accuracy of autoencoders and additionally introduce model compression. All data sets yield a small portion (<5%) of weights that are substantially higher than the mean weight value. These weights are found to be much more informative than a substantial portion (>90%) of the weights with negative values. In general, the perturbation of low or negative weight values at periodic intervals has improved the data reconstruction loss for most data sets when compared to the case without perturbation. The proposed approach may help explain and correct the dynamic behavior of neural network models in a deterministic way for data reconstruction and obtaining a more accurate representation of latent variables using autoencoders.
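The perturbation step described in this abstract resets a percentage of weights at periodic intervals using a binary mask derived from the weight statistics. The PyTorch sketch below shows one simplified version of that loop; the autoencoder size, the 10% quantile threshold, and the 10-epoch interval are assumptions rather than the paper's settings.

# Periodic, statistics-driven weight perturbation during autoencoder training.
import torch
import torch.nn as nn

def perturb_low_weights(layer, quantile=0.10):
    # Zero out the lowest `quantile` fraction of weights via a binary mask.
    with torch.no_grad():
        w = layer.weight
        mask = (w > torch.quantile(w, quantile)).float()
        w.mul_(mask)

ae = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
data = torch.randn(512, 64)                                 # non-imaging data points (placeholder)

for epoch in range(50):
    loss = nn.functional.mse_loss(ae(data), data)           # reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
    if epoch % 10 == 9:                                     # periodic perturbation interval
        for m in ae:
            if isinstance(m, nn.Linear):
                perturb_low_weights(m)
print(f"final reconstruction loss: {loss.item():.4f}")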

6.
Front Oncol ; 11: 668694, 2021.
Article in English | MEDLINE | ID: mdl-34277415

ABSTRACT

Gliomas are primary brain tumors that originate from glial cells. Classification and grading of these tumors are critical to prognosis and treatment planning. The current criteria for glioma classification in the central nervous system (CNS) were introduced by the World Health Organization (WHO) in 2016 and require the integration of histology with genomics. In 2017, the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW) was established to provide up-to-date recommendations for CNS tumor classification, which the WHO is expected to adopt in its upcoming edition. In this work, we propose a novel glioma analytical method that, for the first time in the literature, integrates a cellularity feature, derived from the digital analysis of brain histopathology images, with molecular features following the latest WHO criteria. We first propose a novel over-segmentation strategy for region-of-interest (ROI) selection in large histopathology whole slide images (WSIs). A Deep Neural Network (DNN)-based classification method then fuses molecular features with cellularity features to improve tumor classification performance. We evaluate the proposed method with 549 patient cases from The Cancer Genome Atlas (TCGA) dataset. The cross-validated classification accuracies are 93.81% for lower-grade glioma (LGG) versus high-grade glioma (HGG) using a regular DNN, and 73.95% for LGG II versus LGG III using a residual neural network (ResNet) DNN. Our experiments suggest that the type of deep learning has a significant impact on tumor subtype discrimination between LGG II and LGG III. These results outperform state-of-the-art methods in classifying LGG II vs. LGG III and offer competitive performance in distinguishing LGG vs. HGG in the literature. In addition, we also investigate molecular subtype classification using pathology images and cellularity information. Finally, for the first time in the literature, this work shows promise for cellularity quantification to predict brain tumor grading for LGGs with IDH mutations.
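The fusion step described in this abstract concatenates cellularity descriptors from histopathology ROIs with molecular features before a DNN classifier. The PyTorch sketch below shows that concatenation-based fusion; the feature dimensions, layer widths, and two-class output are assumptions, not the authors' network.

# Concatenation fusion of cellularity and molecular features in a small DNN.
import torch
import torch.nn as nn

class FusionDNN(nn.Module):
    def __init__(self, n_cellularity=32, n_molecular=8, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cellularity + n_molecular, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, cellularity, molecular):
        fused = torch.cat([cellularity, molecular], dim=1)  # simple feature concatenation
        return self.net(fused)

cell = torch.randn(16, 32)            # per-case cellularity descriptors (placeholder)
mol = torch.randn(16, 8)              # per-case molecular markers, e.g. mutation indicators (placeholder)
print(FusionDNN()(cell, mol).shape)   # -> torch.Size([16, 2])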

7.
Front Med (Lausanne) ; 8: 705071, 2021.
Article in English | MEDLINE | ID: mdl-34490297

ABSTRACT

RNA sequencing (RNAseq) is a recent technology that profiles gene expression by measuring the relative frequency of the RNAseq reads. RNAseq read count data are increasingly used in oncologic care, while radiology features (radiomics) have also been gaining utility in radiology practice for tasks such as disease diagnosis, monitoring, and treatment planning. However, contemporary literature lacks appropriate RNA-radiomics (henceforth, radiogenomics) joint modeling in which the RNAseq distribution is adaptive and the count nature of RNAseq data is preserved for glioma grading and prediction. The Negative Binomial (NB) distribution may be useful for modeling RNAseq read count data and addresses these potential shortcomings. In this study, we propose a novel radiogenomics-NB model for glioma grading and prediction. Our radiogenomics-NB model is developed based on differentially expressed RNAseq and selected radiomics/volumetric features which characterize tumor volume and sub-regions. The NB distribution is fitted to the RNAseq count data, and a log-linear regression model is assumed to link the estimated NB mean with the radiomics features. Three radiogenomics-NB molecular mutation models (IDH mutation, 1p/19q codeletion, and ATRX mutation) are investigated. Additionally, we explore gender-specific effects on the radiogenomics-NB models. Finally, we compare the performance of the proposed three mutation-prediction radiogenomics-NB models with well-known methods in the literature: Negative Binomial Linear Discriminant Analysis (NBLDA), differentially expressed RNAseq with Random Forest (RF-genomics), radiomics and differentially expressed RNAseq with Random Forest (RF-radiogenomics), and Voom-based count transformation combined with the nearest shrinkage classifier (VoomNSC). Our analysis shows that the proposed radiogenomics-NB model significantly outperforms the competing models in the literature (ANOVA test, p < 0.05) for prediction of IDH and ATRX mutations and offers similar performance for prediction of 1p/19q codeletion.
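The NB log-linear link described in this abstract can be written compactly as follows; the notation is illustrative (y_{ij} is the read count of gene j in patient i, x_{ik} the k-th radiomic/volumetric feature), and the exact parameterization used by the authors is not given in the abstract.

y_{ij} \sim \mathrm{NB}(\mu_{ij}, \phi_j), \qquad \log \mu_{ij} = \beta_{0j} + \sum_{k} \beta_{kj}\, x_{ik}, \qquad \operatorname{Var}(y_{ij}) = \mu_{ij} + \frac{\mu_{ij}^{2}}{\phi_j},

so the radiomics features enter through the log of the NB mean, while the dispersion \phi_j preserves the overdispersed count nature of the RNAseq data.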

8.
Front Artif Intell ; 4: 718950, 2021.
Article in English | MEDLINE | ID: mdl-35047766

ABSTRACT

This work investigates the efficacy of deep learning (DL) for classifying C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a large, high-power continuous wave recirculating linac that utilizes 418 SRF cavities to accelerate electrons up to 12 GeV. Recent upgrades to CEBAF include installation of 11 new cryomodules (88 cavities) equipped with a low-level RF system that records RF time-series data from each cavity at the onset of an RF failure. Typically, subject matter experts (SME) analyze this data to determine the fault type and identify the cavity of origin. This information is subsequently utilized to identify failure trends and to implement corrective measures on the offending cavity. Manual inspection of large-scale time-series data generated by frequent system failures is tedious and time-consuming, and thereby motivates the use of machine learning (ML) to automate the task. This study extends work on a previously developed system based on traditional ML methods (Tennant, Carpenter, Powers, Shabalina Solopova, Vidyaratne, and Iftekharuddin, Phys. Rev. Accel. Beams, 2020, 23, 114601), and investigates the effectiveness of deep learning approaches. The transition to a DL model is driven by the goal of developing a system with sufficiently fast inference that it could be used to predict a fault event and provide actionable information before the onset (on the order of a few hundred milliseconds). Because features are learned, rather than explicitly computed, DL offers a potential advantage over traditional ML. Specifically, two seminal DL architecture types are explored: deep recurrent neural networks (RNN) and deep convolutional neural networks (CNN). We provide a detailed analysis of the performance of individual models using an RF waveform dataset built from past operational runs of CEBAF. In particular, the performance of RNN models incorporating long short-term memory (LSTM) is analyzed along with the CNN performance. Furthermore, comparing these DL models with a state-of-the-art fault ML model shows that DL architectures obtain similar performance for cavity identification, do not perform quite as well for fault classification, but provide an advantage in inference speed.
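As a counterpart to the LSTM models discussed in this abstract, a CNN classifier for multichannel RF waveform records can be sketched in PyTorch as below. The number of waveform signals per record, the window length, and the two output heads (cavity identification and fault type) are assumptions, not the CEBAF data format or the authors' architecture.

# 1D CNN over multichannel waveforms with separate cavity and fault heads.
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    def __init__(self, n_signals=17, n_cavities=8, n_faults=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_signals, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.cavity_head = nn.Linear(64, n_cavities)        # which cavity faulted
        self.fault_head = nn.Linear(64, n_faults)           # what type of fault occurred

    def forward(self, x):                                   # x: (batch, signals, time)
        z = self.features(x).squeeze(-1)
        return self.cavity_head(z), self.fault_head(z)

waveforms = torch.randn(2, 17, 8192)
cav_logits, fault_logits = WaveformCNN()(waveforms)
print(cav_logits.shape, fault_logits.shape)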

9.
Appl Opt ; 49(10): B92-103, 2010 Apr 01.
Article in English | MEDLINE | ID: mdl-20357846

ABSTRACT

In this work, we propose a novel technique for face recognition with +/-90 degrees pose variations in image sequences using a cellular simultaneous recurrent network (CSRN). We formulate the recognition problem with such large-pose variations as an implicit temporal prediction task for CSRN. We exploit a face extraction algorithm based on the scale-space method and facial structural knowledge as a preprocessing step. Further, to reduce computational cost, we obtain eigenfaces for a set of image sequences for each person and use these reduced pattern vectors as the input to CSRN. CSRN learns how to associate each face class/person in the training phase. A modified distance metric between successive frames of test and training output pattern vectors indicates either a match or mismatch between the two corresponding face classes. We extensively evaluate our CSRN-based face recognition technique using the publicly available VidTIMIT Audio-Video face dataset. Our simulation shows that for this dataset with large-scale pose variations, we can obtain an overall 77% face recognition rate.
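The eigenface preprocessing mentioned in this abstract reduces each face frame to a short coefficient vector before the CSRN. The Python sketch below shows that reduction with scikit-learn PCA on synthetic frames; the image size and component count are illustrative, and the CSRN itself is not shown.

# Eigenface-style reduction: PCA on vectorized face frames.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.random((200, 32, 32))            # 200 face frames (stand-in for VidTIMIT crops)
X = frames.reshape(len(frames), -1)           # vectorize each frame

pca = PCA(n_components=20, whiten=True).fit(X)
coeffs = pca.transform(X)                     # reduced pattern vectors fed to the recognizer
eigenfaces = pca.components_.reshape(-1, 32, 32)
print(coeffs.shape, eigenfaces.shape)         # (200, 20) (20, 32, 32)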

10.
Appl Opt ; 49(10): B1-8, 2010 Apr 01.
Article in English | MEDLINE | ID: mdl-20357836

ABSTRACT

We propose a new hierarchical architecture for visual pattern classification. The new architecture consists of a set of fixed, directional filters and a set of adaptive filters arranged in a cascade structure. The fixed filters are used to extract primitive features such as orientations and edges that are present in a wide range of objects, whereas the adaptive filters can be trained to find complex features that are specific to a given object. Both types of filter are based on the biological mechanism of shunting inhibition. The proposed architecture is applied to two problems: pedestrian detection and car detection. Evaluation results on benchmark data sets demonstrate that the proposed architecture outperforms several existing ones.


Subject(s)
Pattern Recognition, Automated , Computer Systems , Humans , Optical Phenomena , Pattern Recognition, Visual
11.
Sci Rep ; 10(1): 19726, 2020 11 12.
Article in English | MEDLINE | ID: mdl-33184301

ABSTRACT

A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context-aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context-aware deep learning method that considers uncertainty of tumor location in the radiology mMRI image sub-regions to obtain tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) on the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid method of deep learning and machine learning. To evaluate the performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice score coefficient, Hausdorff distance at percentile 95 (HD95), classification accuracy, and mean square error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification results in this work ranked second in the testing phase of the 2019 CPM-RadPath global challenge.


Subject(s)
Brain Neoplasms/classification , Brain Neoplasms/mortality , Deep Learning , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Algorithms , Brain Neoplasms/pathology , Female , Humans , Male , Prognosis , Survival Rate
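The two segmentation metrics named in this abstract, the Dice score coefficient and HD95, can be computed on binary masks as in the NumPy/SciPy sketch below. This is a plain reference-style implementation using whole-mask (rather than boundary-only) distances, not the BraTS evaluation code.

# Dice coefficient and 95th-percentile symmetric distance (HD95-style) on 3D masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b):
    # 95th percentile of distances from each foreground voxel of one mask to the
    # other mask (a common simplification of the boundary-based HD95).
    a, b = a.astype(bool), b.astype(bool)
    d_to_b = distance_transform_edt(~b)[a]
    d_to_a = distance_transform_edt(~a)[b]
    return np.percentile(np.hstack([d_to_b, d_to_a]), 95)

gt = np.zeros((64, 64, 64), bool); gt[20:40, 20:40, 20:40] = True
pred = np.zeros_like(gt);          pred[22:42, 20:40, 20:40] = True
print(f"Dice={dice(pred, gt):.3f}  HD95={hd95(pred, gt):.2f} voxels")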
12.
Article in English | MEDLINE | ID: mdl-34354762

ABSTRACT

This work proposes a novel framework for brain tumor segmentation prediction in longitudinal multi-modal MRI scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with a tumor cell density feature to obtain tumor segmentation predictions in follow-up timepoints using data from the baseline pre-operative timepoint. The cell density feature is obtained by solving the 3D reaction-diffusion equation for biophysical tumor growth modelling using the Lattice-Boltzmann method. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method, known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). We quantitatively evaluate both proposed methods using the Dice Similarity Coefficient (DSC) in longitudinal scans of 9 patients from the public BraTS 2015 multi-institutional dataset. The evaluation results for the feature-based fusion method show improved tumor segmentation prediction for the whole tumor (DSC WT = 0.314, p = 0.1502), tumor core (DSC TC = 0.332, p = 0.0002), and enhancing tumor (DSC ET = 0.448, p = 0.0002) regions. The feature-based fusion shows some improvement in tumor prediction for longitudinal brain tumor tracking, whereas the JLF offers statistically significant improvement in the actual segmentation of WT and ET (DSC WT = 0.85 ± 0.055, DSC ET = 0.837 ± 0.074) and also improves the results of GB. The novelty of this work is two-fold: (a) exploiting tumor cell density as a feature to predict brain tumor segmentation, using a stochastic multi-resolution RF-based method, and (b) improving the performance of another successful tumor segmentation method, GB, by fusing with the RF-based segmentation labels.
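The biophysical growth model referenced in this abstract is a reaction-diffusion equation; a standard form (the specific parameterization used by the authors is not given in the abstract) is

\frac{\partial c}{\partial t} = \nabla \cdot \left( D(\mathbf{x})\, \nabla c \right) + \rho\, c\,(1 - c),

where c(\mathbf{x}, t) is the normalized tumor cell density, D(\mathbf{x}) the tissue-dependent diffusion coefficient, and \rho the proliferation rate; the Lattice-Boltzmann method is one way to advance this equation in time on a 3D grid.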

13.
Front Neurosci ; 13: 966, 2019.
Article in English | MEDLINE | ID: mdl-31619949

ABSTRACT

Glioblastoma is recognized as a World Health Organization (WHO) grade IV glioma with an aggressive growth pattern. The current clinical practice in diagnosis and prognosis of glioblastoma using MRI involves multiple steps, including manual tumor sizing. Accurate identification and segmentation of multiple abnormal tissues within the tumor volume in MRI is essential for precise survival prediction. Manual tumor and abnormal tissue detection and sizing are tedious, and subject to inter-observer variability. Consequently, this work proposes a fully automated MRI-based glioblastoma and abnormal tissue segmentation and survival prediction framework. The framework includes radiomics feature-guided deep neural network methods for tumor tissue segmentation, followed by survival regression and classification using these abnormal tumor tissue segments and other relevant clinical features. The proposed multiple abnormal tumor tissue segmentation step effectively fuses feature-based and feature-guided deep radiomics information in structural MRI. The survival prediction step includes two representative survival prediction pipelines that combine different feature selection and regression approaches. The framework is evaluated using two recent widely used benchmark datasets from the Brain Tumor Segmentation (BraTS) global challenges in 2017 and 2018. The best overall survival pipeline in the proposed framework achieves leave-one-out cross-validation (LOOCV) accuracy of 0.73 for training datasets and 0.68 for validation datasets, respectively. These training and validation accuracies for tumor patient survival prediction are among the highest reported in the literature. Finally, a critical analysis of the radiomics features and the efficacy of these features in segmentation and survival prediction performance is presented as lessons learned.

14.
J Med Imaging (Bellingham) ; 6(2): 024501, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31037246

ABSTRACT

A glioma grading method using conventional structural magnetic resonance image (MRI) and molecular data from patients is proposed. The noninvasive grading of glioma tumors is obtained using multiple radiomic texture features, including dynamic texture analysis, multifractal detrended fluctuation analysis, and multiresolution fractal Brownian motion, in structural MRI. The proposed method is evaluated using two multicenter MRI datasets: (1) the brain tumor segmentation (BRATS-2017) challenge for high-grade versus low-grade (LG) and (2) the cancer imaging archive (TCIA) repository for glioblastoma (GBM) versus LG glioma grading. The grading performance using MRI is compared with that of digital pathology (DP) images in the cancer genome atlas (TCGA) data repository. The results show that the mean area under the receiver operating characteristic curve (AUC) is 0.88 for the BRATS dataset. The classification of tumor grades using MRI and DP images in TCIA/TCGA yields mean AUCs of 0.90 and 0.93, respectively. This work further evaluates and compares tumor grading performance using molecular alterations (IDH1/2 mutations) along with MRI and DP data, following the most recent World Health Organization grading criteria. The overall grading performance demonstrates the efficacy of the proposed noninvasive glioma grading approach using structural MRI.

15.
IEEE Trans Neural Netw Learn Syst ; 29(10): 4905-4916, 2018 10.
Article in English | MEDLINE | ID: mdl-29993957

ABSTRACT

Facial expression recognition is a challenging task that involves detection and interpretation of complex and subtle changes in facial muscles. Recent advances in feed-forward deep neural networks (DNNs) have offered improved object recognition performance. Sparse feature learning in feed-forward DNN models offers further improvement in performance when compared to the earlier handcrafted techniques. However, the depth of the feed-forward DNNs and the computational complexity of the models increase in proportion to the challenges posed by the facial expression recognition problem. The feed-forward DNN architectures do not exploit another important learning paradigm, known as recurrency, which is ubiquitous in the human visual system. Consequently, this paper proposes a novel biologically relevant sparse-deep simultaneous recurrent network (S-DSRN) for robust facial expression recognition. The feature sparsity is obtained by adopting dropout learning in the proposed DSRN, as opposed to the usual handcrafting of additional penalty terms for the sparse representation of data. Theoretical analysis of S-DSRN shows that dropout learning offers desirable properties such as sparsity and prevents the model from overfitting. Experimental results also suggest that the proposed method yields better accuracy, requires fewer parameters, and offers lower computational complexity than previously reported state-of-the-art feed-forward DNNs on two of the most widely used publicly available facial expression data sets. Furthermore, we show that combining the proposed neural architecture with a state-of-the-art metric learning technique significantly improves the overall recognition performance. Finally, a graphical processing unit (GPU)-based implementation of S-DSRN is obtained for real-time applications.

16.
Brainlesion ; 10670: 358-368, 2018.
Article in English | MEDLINE | ID: mdl-30016377

ABSTRACT

Glioblastoma is a stage IV highly invasive astrocytoma tumor. Its heterogeneous appearance in MRI poses critical challenges in diagnosis, prognosis and survival prediction. This work extracts a total of 1207 different types of texture and other features, tests their significance and prognostic values, and then utilizes the most significant features with a Random Forest regression model to perform survival prediction. We use 163 cases from the BraTS17 training dataset for evaluation of the proposed model. A 10-fold cross-validation offers a normalized root mean square error of 30% on the training dataset and a cross-validated accuracy of 63%.
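The survival-regression evaluation in this abstract (a Random Forest regressor scored by 10-fold cross-validation with a normalized root mean square error) can be sketched in Python with scikit-learn as below. The 1207-feature matrix and survival targets are synthetic stand-ins, and the RMSE is normalized by the target range, which is one of several possible normalizations.

# Random Forest survival regression with 10-fold CV and range-normalized RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(163, 1207))     # 163 cases x 1207 texture/other features (placeholder)
y = rng.uniform(30, 1500, size=163)  # overall survival in days (placeholder)

pred = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0),
                         X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / (y.max() - y.min())
print(f"normalized RMSE: {nrmse:.2%}")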

17.
VipIMAGE 2017 (2017) ; 27: 10-18, 2018.
Article in English | MEDLINE | ID: mdl-29239394

ABSTRACT

This paper presents an integrated quantitative MR image analysis framework that includes all necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection, and multimodality abnormal brain tissue segmentation. We first derive a mathematical algorithm to compute a novel Generalized multifractional Brownian motion (GmBm) texture feature. We then demonstrate the efficacy of multiple multiresolution texture features, including regular fractal dimension (FD) texture and stochastic textures such as multifractional Brownian motion (mBm) and GmBm features, for robust tumor and other abnormal tissue segmentation in brain MRI. We evaluate these texture and associated intensity features to effectively delineate multiple abnormal tissues within and around the tumor core, and stroke lesions, using large-scale public and private datasets.

18.
IEEE Trans Neural Syst Rehabil Eng ; 26(2): 353-361, 2018 02.
Article in English | MEDLINE | ID: mdl-29432106

ABSTRACT

Autism spectrum disorder (ASD) is a neurodevelopmental disability with atypical traits in behavioral and physiological responses. These atypical traits in individuals with ASD may be too subtle and subjective to measure visually using tedious methods of scoring. Alternatively, the use of intrusive sensors in the measurement of psychophysical responses in individuals with ASD may cause inhibition and bias. This paper proposes a novel experimental protocol for non-intrusive sensing and analysis of facial expression, visual scanning, and eye-hand coordination to investigate behavioral markers for ASD. An institutional review board-approved pilot study is conducted to collect the response data from two groups of subjects (ASD and control) while they engage in the tasks of visualization, recognition, and manipulation. For the first time in the ASD literature, the facial action coding system is used to classify spontaneous facial responses. Statistical analyses reveal a significantly (p < 0.01) higher prevalence of smile expression for the group with ASD, with the eye gaze significantly averted (p < 0.05) from viewing the face in the visual stimuli. This uncontrolled manifestation of smile without proper visual engagement suggests impairment in reciprocal social communication, e.g., social smile. The group with ASD also reveals poor correlation between eye-gaze and hand movement data, suggesting deficits in motor coordination while performing a dynamic manipulation task. The simultaneous sensing and analysis of multimodal response data may provide useful quantitative insights into ASD to facilitate early detection of symptoms for effective intervention planning.


Subject(s)
Autistic Disorder/psychology , Behavior , Facial Expression , Movement , Psychomotor Performance , Adolescent , Algorithms , Autistic Disorder/diagnosis , Biomarkers , Child , Feasibility Studies , Female , Fixation, Ocular , Humans , Male , Photic Stimulation , Pilot Projects , Social Behavior , Young Adult
19.
IEEE Trans Neural Syst Rehabil Eng ; 25(11): 2146-2156, 2017 11.
Article in English | MEDLINE | ID: mdl-28459693

ABSTRACT

This paper proposes a novel patient-specific method for real-time automatic epileptic seizure onset detection, using both scalp and intracranial electroencephalogram (EEG). The proposed technique obtains harmonic multiresolution and self-similarity-based fractal features from EEG for robust seizure onset detection. A fast wavelet decomposition method, known as the harmonic wavelet packet transform (HWPT), is computed based on the Fourier transform to achieve higher frequency resolutions without recursive calculations. Similarly, fractal dimension (FD) estimates are obtained to capture self-similar repetitive patterns in the EEG signal. Both FD and HWPT energy features across all EEG channels at each epoch are organized according to the spatial information due to electrode placement on the skull. The final feature vector combines the feature configurations of each epoch within the specified moving window to reflect the temporal information of the EEG. Finally, a relevance vector machine is used to classify the feature vectors, owing to its efficiency in classifying sparse yet high-dimensional data sets. The algorithm is evaluated using two publicly available databases: long-term scalp EEG (data set A) and short-term intracranial and scalp EEG (data set B). The proposed algorithm is effective in seizure onset detection, with 96% sensitivity, a median false detection rate of 0.1 per hour, and an average detection latency of 1.89 s. Results obtained from analyzing the short-term data offer 99.8% classification accuracy. These results demonstrate that the proposed method is effective with both short- and long-term EEG signal analyses recorded in either scalp or intracranial modes. Finally, the use of less computationally intensive feature extraction techniques enables faster seizure onset detection when compared with similar techniques in the literature, indicating potential usage in real-time applications.


Subject(s)
Electrocorticography/methods , Epilepsy/diagnosis , Seizures/diagnosis , Algorithms , Computer Systems , Electrodes , Epilepsy/classification , Fourier Analysis , Fractals , Humans , Reproducibility of Results , Scalp , Seizures/classification , Skull , Support Vector Machine , Wavelet Analysis
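The per-epoch features described in this abstract combine wavelet-packet subband energies with a fractal dimension estimate for every EEG channel. In the Python sketch below, a standard dyadic wavelet packet from PyWavelets stands in for the harmonic wavelet packet transform and Higuchi's method stands in for the paper's FD estimator; the channel count, epoch length, and decomposition level are assumptions.

# Wavelet-packet energies + Higuchi fractal dimension per channel for one epoch.
import numpy as np
import pywt

def wp_energies(x, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")])

def higuchi_fd(x, kmax=8):
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k))
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    return np.polyfit(log_k, log_l, 1)[0]       # slope = fractal dimension estimate

rng = np.random.default_rng(0)
epoch = rng.normal(size=(23, 512))              # one 2 s epoch: 23 channels x 512 samples (placeholder)
features = np.hstack([np.r_[wp_energies(ch), higuchi_fd(ch)] for ch in epoch])
print(features.shape)                           # 23 channels x (16 subband energies + 1 FD) = (391,)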
20.
Proc SPIE Int Soc Opt Eng ; 10134, 2017 Feb 11.
Article in English | MEDLINE | ID: mdl-28642629

ABSTRACT

In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns that are generated from tumor growth modeling. In order to model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling obtains the cell density distribution that potentially indicates the predicted tissue locations in the brain over time. The density patterns are then considered as novel features, along with other texture features (such as fractal and multifractional Brownian motion (mBm)) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method with about one hundred longitudinal MRI scans from five patients obtained from the public BRATS 2015 data set, validated against the ground truth. The results show significant improvement in complete tumor segmentation, based on ANOVA analysis, for the five patients in longitudinal MR images.
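The growth model used in this abstract solves a reaction-diffusion equation with the Lattice-Boltzmann method. The NumPy sketch below advances a 2D logistic reaction-diffusion model with a simple D2Q5 BGK Lattice-Boltzmann scheme; the grid size, diffusion coefficient, proliferation rate, and initial seed are illustrative values, not the authors' model or parameters.

# D2Q5 Lattice-Boltzmann scheme for dc/dt = D * laplacian(c) + rho * c * (1 - c).
import numpy as np

nx = ny = 128
D, rho, steps = 0.1, 0.02, 500
tau = 3.0 * D + 0.5                                  # BGK relaxation time (c_s^2 = 1/3)
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])              # D2Q5 lattice weights
e = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]       # lattice velocities

c = np.zeros((nx, ny)); c[60:68, 60:68] = 1.0        # initial tumor seed (cell density)
f = w[:, None, None] * c                             # start from equilibrium distributions

for _ in range(steps):
    c = f.sum(axis=0)                                # macroscopic cell density
    growth = rho * c * (1.0 - c)                     # logistic proliferation source term
    feq = w[:, None, None] * c
    f += -(f - feq) / tau + w[:, None, None] * growth
    for i, (ex, ey) in enumerate(e):                 # streaming step (periodic boundaries)
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)

c = f.sum(axis=0)
print(f"total tumor burden: {c.sum():.1f}, peak density: {c.max():.3f}")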
