Results 1 - 9 of 9
1.
Epilepsia; 53(11): e189-92, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22967005

ABSTRACT

Interictal electroencephalography (EEG) has clinically meaningful limitations in its sensitivity and specificity in the diagnosis of epilepsy because of its dependence on the occurrence of epileptiform discharges. We have developed a computer-aided diagnostic (CAD) tool that operates on the absolute spectral energy of the routine EEG and achieves substantially higher sensitivity and negative predictive value than the identification of interictal epileptiform discharges. Our approach used a multilayer perceptron to classify 156 patients admitted for video-EEG monitoring. The patient population was diagnostically diverse; 87 were diagnosed with either generalized or focal seizures. The remainder were diagnosed with nonepileptic seizures. The sensitivity was 92% (95% confidence interval [CI] 85-97%) and the negative predictive value was 82% (95% CI 67-92%). We discuss how these findings suggest that this CAD tool can be used to supplement event-based analysis by trained epileptologists.
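The headline metrics above can be reproduced from a confusion matrix. Below is a minimal sketch; the counts are hypothetical (chosen only to sum to a 156-patient cohort), not the study's actual confusion matrix, and the Wilson score interval is one common choice for the 95% CI:

```python
def sensitivity_npv(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); negative predictive value = TN / (TN + FN)."""
    return tp / (tp + fn), tn / (tn + fn)

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% when z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return center - half, center + half

# Hypothetical counts for a 156-patient cohort (87 with epilepsy)
sens, npv = sensitivity_npv(tp=80, fp=15, tn=54, fn=7)
lo, hi = wilson_ci(80, 87)  # interval around the sensitivity estimate
```

Here `sens` comes out near 0.92, matching the scale of the reported figure, though the individual cell counts are invented.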


Subject(s)
Diagnosis, Computer-Assisted/methods, Electroencephalography/methods, Epilepsy/diagnosis, Epilepsy/physiopathology, Humans
2.
Yale J Biol Med; 85(4): 541-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23239953

ABSTRACT

The technology of fluoro-deoxyglucose positron emission tomography (PET) has drastically increased our ability to visualize the metabolic process of numerous neurological diseases. The relationship between the methodological noise sources inherent to PET technology and the resulting noise in the reconstructed image is complex. In this study, we use Monte Carlo simulations to examine the effect of Poisson noise in the PET signal on the noise in reconstructed space for two pervasive reconstruction algorithms: the historical filtered back-projection (FBP) and the more modern expectation maximization (EM). We confirm previous observations that FBP reconstruction biases all intensity values toward the mean, likely due to spatial spreading of high intensity voxels. However, we demonstrate that in both algorithms the variance from high intensity voxels spreads to low intensity voxels and obliterates their signal-to-noise ratio. This finding has profound implications for the clinical interpretation of hypometabolic lesions. Our results suggest that PET is relatively insensitive for detecting and quantifying changes in hypometabolic tissue. Further, the images reconstructed with EM visually match the original images more closely, but more detailed analysis reveals as much as a 40 percent decrease in the signal-to-noise ratio for high intensity voxels relative to the FBP. This suggests that even though the apparent spatial resolution of EM outperforms FBP, the signal-to-noise ratio of the intensity of each voxel may be higher in the FBP. Therefore, EM may be most appropriate for manual visualization of pathology, but FBP should be used when analyzing quantitative markers of the PET signal. This suggestion that different reconstruction algorithms should be used for quantification versus visualization represents a major paradigm shift in the analysis and interpretation of PET images.
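The variance-spreading effect described above can be illustrated with a toy Monte Carlo sketch. The two-voxel "image", the intensities, and the mixing matrix standing in for reconstruction blur are all hypothetical; this is not the study's simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.array([1000.0, 10.0])  # one high- and one low-intensity voxel
n_trials = 5000

# Poisson realizations of the true counts (the noise source in the PET signal)
counts = rng.poisson(truth, size=(n_trials, 2)).astype(float)

# Crude stand-in for reconstruction: mix 10% of each voxel into its neighbor
mix = np.array([[0.9, 0.1],
                [0.1, 0.9]])
recon = counts @ mix

# Variance spilling over from the high-intensity voxel inflates the noise of
# the low-intensity voxel after "reconstruction", degrading its SNR.
raw_std_low = counts[:, 1].std()
recon_std_low = recon[:, 1].std()
```

Even this crude mixing step makes the low-intensity voxel's noise dominated by its bright neighbor, echoing the paper's observation about hypometabolic tissue.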


Subject(s)
Noise, Algorithms, Humans, Monte Carlo Method, Positron-Emission Tomography
3.
Yale J Biol Med; 85(3): 363-77, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23012584

ABSTRACT

The electronic health record mandate within the American Recovery and Reinvestment Act of 2009 will have a far-reaching effect on medicine. In this article, we provide an in-depth analysis of how this mandate is expected to stimulate the production of large-scale, digitized databases of patient information. There is evidence to suggest that millions of patients and the National Institutes of Health will fully support the mining of such databases to better understand the process of diagnosing patients. This data mining will likely reaffirm and quantify known risk factors for many diagnoses. This quantification may be leveraged to further develop computer-aided diagnostic tools that weigh risk factors and provide decision support for health care providers. We expect that creation of these databases will stimulate the development of computer-aided diagnostic support tools that will become an integral part of modern medicine.


Subject(s)
Data Mining, Databases, Factual, Diagnosis, Computer-Assisted/trends, Diagnostic Techniques and Procedures/trends, American Recovery and Reinvestment Act, Computational Biology/methods, Database Management Systems, Decision Support Systems, Clinical, Decision Support Techniques, Diagnosis, Computer-Assisted/methods, Electronic Health Records/legislation & jurisprudence, Electronic Health Records/organization & administration, Humans, Mandatory Programs/organization & administration, National Institutes of Health (U.S.)/organization & administration, Risk Factors, United States
4.
eNeuro; 3(3), 2016.
Article in English | MEDLINE | ID: mdl-27482534

ABSTRACT

Variants at 21 genetic loci have been associated with an increased risk for Alzheimer's disease (AD). An important unresolved question is whether multiple genetic risk factors can be combined to increase the power to detect changes in neuroimaging biomarkers for AD. We acquired high-resolution structural images of the hippocampus in 66 healthy, older human subjects. For 45 of these subjects, longitudinal 2-year follow-up data were also available. We calculated an additive AD genetic risk score for each participant and contrasted this with a weighted risk score (WRS) approach. Each score included APOE (apolipoprotein E), CLU (clusterin), PICALM (phosphatidylinositol binding clathrin assembly protein), and family history of AD. Both the unweighted risk score (URS) and the WRS correlated strongly with the percentage change in thickness across the whole hippocampal complex (URS: r = -0.40; p = 0.003; WRS: r = -0.25, p = 0.048), driven by a strong relationship to entorhinal cortex thinning (URS: r = -0.35; p = 0.009; WRS: r = -0.35, p = 0.009). By contrast, at baseline the risk scores showed no relationship to thickness in any hippocampal complex subregion. These results provide compelling evidence that polygenic AD risk scores may be especially sensitive to structural change over time in regions affected early in AD, like the hippocampus and adjacent entorhinal cortex. This work also supports the paradigm of studying genetic risk for disease in healthy volunteers. Together, these findings will inform clinical trial design by supporting the idea that genetic prescreening in healthy control subjects can be useful to maximize the ability to detect an effect on a longitudinal neuroimaging endpoint, like hippocampal complex cortical thickness.
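The two scoring schemes contrasted above can be sketched in a few lines. The allele counts and weights below are placeholders for illustration, not the study's genotypes or effect sizes:

```python
def unweighted_risk_score(risk_allele_counts):
    """Additive (unweighted) score: simply count risk alleles across loci."""
    return sum(risk_allele_counts.values())

def weighted_risk_score(risk_allele_counts, weights):
    """Weighted score: each locus contributes in proportion to its weight."""
    return sum(weights[locus] * n for locus, n in risk_allele_counts.items())

# Hypothetical participant: allele counts at the loci named in the abstract,
# with family history coded 0/1; the weights are invented for the example.
alleles = {"APOE": 2, "CLU": 1, "PICALM": 0, "family_history": 1}
weights = {"APOE": 3.0, "CLU": 1.2, "PICALM": 1.1, "family_history": 1.5}

urs = unweighted_risk_score(alleles)         # 2 + 1 + 0 + 1 = 4
wrs = weighted_risk_score(alleles, weights)  # 2*3.0 + 1*1.2 + 0 + 1*1.5 = 8.7
```

The URS treats every locus as equally informative, while the WRS lets a high-impact locus such as APOE dominate; the abstract's correlations compare exactly these two choices.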


Subject(s)
Aging/genetics, Aging/pathology, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/genetics, Genetic Predisposition to Disease, Hippocampus/diagnostic imaging, Aging/psychology, Alzheimer Disease/psychology, Apolipoproteins E/genetics, Clinical Trials as Topic, Clusterin/genetics, Female, Follow-Up Studies, Hippocampus/pathology, Humans, Longitudinal Studies, Magnetic Resonance Imaging, Male, Mental Status Schedule, Middle Aged, Monomeric Clathrin Assembly Proteins/genetics, Multifactorial Inheritance, Multivariate Analysis, Neuropsychological Tests, Organ Size, Prodromal Symptoms, White People
5.
Neuroimage Clin; 11: 210-223, 2016.
Article in English | MEDLINE | ID: mdl-26955516

ABSTRACT

The underlying mechanisms of alpha band (8-12 Hz) neural oscillations are of importance to the functioning of attention control systems as well as to neuropsychiatric conditions that are characterized by deficits of that system, such as attention deficit hyperactivity disorder (ADHD). The objectives of the present study were to test whether visual encoding-related alpha event-related desynchronization (ERD) correlates with fronto-parieto-occipital connectivity, and whether this is disrupted in ADHD during spatial working memory (SWM) performance. We acquired EEG concurrently with fMRI in thirty boys (12-16 years old, 15 with ADHD), during SWM encoding. Psychophysiological connectivity analyses indicated that alpha ERD during SWM encoding was associated with both occipital activation and fronto-parieto-occipital functional connectivity, a finding that expands on prior associations between alpha ERD and occipital activation. This finding provides novel support for the interpretation of alpha ERD (and the associated changes in occipital activation) as a phenomenon that involves, and perhaps arises as a result of, top-down network interactions. In ADHD, alpha ERD was less strongly associated with occipital activity but more strongly associated with fronto-parieto-occipital connectivity, consistent with a compensatory attentional response. Additionally, we illustrate that degradation of EEG data quality by MRI-amplified motion artifacts is robust to existing cleaning algorithms and is significantly correlated with hyperactivity symptoms and the ADHD Combined Type diagnosis. We conclude that persistent motion-related MR artifacts in EEG data can increase variance and introduce bias in interpretation of group differences in populations characterized by hypermobility, a clear limitation of the current state of EEG-fMRI methodology.


Subject(s)
Alpha Rhythm/physiology, Attention Deficit Disorder with Hyperactivity, Cerebral Cortex/diagnostic imaging, Electroencephalography, Magnetic Resonance Imaging, Memory, Short-Term/physiology, Adolescent, Analysis of Variance, Attention Deficit Disorder with Hyperactivity/diagnostic imaging, Attention Deficit Disorder with Hyperactivity/pathology, Attention Deficit Disorder with Hyperactivity/physiopathology, Child, Humans, Image Processing, Computer-Assisted, Male, Nerve Net/diagnostic imaging, Oxygen/blood, Psychiatric Status Rating Scales, Spatial Learning/physiology
6.
Article in English | MEDLINE | ID: mdl-25311448

ABSTRACT

In medication-resistant seizure disorder, the definitive diagnosis of the type of epilepsy, if any, is based on the efficient combination of clinical information, long-term video-electroencephalography (EEG) and neuroimaging. Diagnoses are reached by a consensus panel that combines these diverse modalities using clinical wisdom and experience. Here we compare two methods of multimodal computer-aided diagnosis, vector concatenation (VC) and conditional dependence (CD), using clinical archive data from 645 patients with medication-resistant seizure disorder, confirmed by video-EEG. CD models the clinical decision process, whereas VC allows for statistical modeling of cross-modality interactions. Due to the nature of clinical data, not all information was available for all patients. To overcome this, we multiply-imputed the missing data. Using a C4.5 decision tree, single modality classifiers achieved 53.1%, 51.5% and 51.1% average accuracy for MRI, clinical information and FDG-PET, respectively, for the discrimination between non-epileptic seizures, temporal lobe epilepsy, other focal epilepsies and generalized-onset epilepsy (vs. chance, p<0.01). Using VC, the average accuracy was significantly lower (39.2%). In contrast, the CD classifier that classified with MRI then clinical information achieved an average accuracy of 58.7% (vs. VC, p<0.01). The decrease in accuracy of VC compared to the MRI classifier illustrates how the addition of more informative features does not improve performance monotonically. The superiority of conditional dependence over vector concatenation suggests that the structure imposed by conditional dependence improved our ability to model the underlying diagnostic trends in the multimodality data.
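The multiple-imputation step mentioned above can be sketched as follows. This is a simple hot-deck draw from observed values of the same column, standing in for whatever imputation model the study actually used; the patient rows are invented:

```python
import random

def multiply_impute(rows, m=5, seed=0):
    """Return m completed copies of `rows`, filling each None entry by
    sampling from the observed values of the same column."""
    rng = random.Random(seed)
    n_cols = len(rows[0])
    observed = [[r[j] for r in rows if r[j] is not None] for j in range(n_cols)]
    copies = []
    for _ in range(m):
        copies.append([
            [v if v is not None else rng.choice(observed[j])
             for j, v in enumerate(row)]
            for row in rows
        ])
    return copies

# Hypothetical patients x (MRI finding, clinical finding), with missing entries
data = [[1, None], [0, 1], [None, 0], [1, 1]]
completed = multiply_impute(data, m=5)
```

A downstream classifier (a C4.5 tree in the study) would then be trained on each completed copy and the results pooled, so that the uncertainty from the missing entries is reflected in the final accuracy estimate.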

7.
Front Neurol; 4: 31, 2013.
Article in English | MEDLINE | ID: mdl-23565107

ABSTRACT

Interictal FDG-PET (iPET) is a core tool for localizing the epileptogenic focus, potentially before structural MRI, that does not require rare and transient epileptiform discharges or seizures on EEG. The visual interpretation of iPET is challenging and requires years of epilepsy-specific expertise. We have developed an automated computer-aided diagnostic (CAD) tool that has the potential to work both independently of and synergistically with expert analysis. Our tool operates on distributed metabolic changes across the whole brain measured by iPET to both diagnose and lateralize temporal lobe epilepsy (TLE). When diagnosing left TLE (LTLE) or right TLE (RTLE) vs. non-epileptic seizures (NES), our accuracy in reproducing the results of the gold standard long-term video-EEG monitoring was 82% [95% confidence interval (CI) 69-90%] or 88% (95% CI 76-94%), respectively. The classifier that both diagnosed and lateralized the disease had overall accuracy of 76% (95% CI 66-84%), where 89% (95% CI 77-96%) of patients correctly identified with epilepsy were correctly lateralized. When identifying LTLE, our CAD tool utilized metabolic changes across the entire brain. By contrast, only the temporal regions and the right frontal lobe cortex were needed to identify RTLE accurately, a finding consistent with clinical observations and indicative of a potential pathophysiological difference between RTLE and LTLE. The goal of CAD tools is to complement, not replace, expert analysis. In our dataset, the accuracy of manual analysis (MA) of iPET (∼80%) was similar to CAD. The squared correlation between our CAD tool and MA, however, was only 30%, indicating that our CAD tool does not recreate MA. The addition of clinical information to our CAD, however, did not substantively change performance. These results suggest that automated analysis might provide clinically valuable information to focus treatment more effectively.

8.
Article in English | MEDLINE | ID: mdl-25302313

ABSTRACT

The application of machine learning to epilepsy can be used both to develop clinically useful computer-aided diagnostic tools, and to reveal pathologically relevant insights into the disease. Such studies most frequently use neurologically normal patients as the control group to maximize the pathologic insight yielded from the model. This practice yields potentially inflated accuracy because the groups are quite dissimilar. A few manuscripts, however, opt to mimic the clinical comparison of epilepsy to non-epileptic seizures, an approach we believe to be more clinically realistic. In this manuscript, we describe the relative merits of each control group. We demonstrate that in our clinical-quality FDG-PET database the performance achieved was similar with either control group. Based on these results, we find that the choice of control group likely does not bias the reported performance. We argue that clinically applicable computer-aided diagnostic tools for epilepsy must directly address the clinical challenge of distinguishing patients with epilepsy from those with non-epileptic seizures.

9.
Article in English | MEDLINE | ID: mdl-25241830

ABSTRACT

Developing EEG-based computer-aided diagnostic (CAD) tools would allow identification of epilepsy in individuals who have experienced possible seizures, yet such an algorithm requires efficient identification of meaningful features from potentially more than 35,000 features of EEG activity. Mutual information can be used to identify a subset of minimally redundant and maximally relevant (mRMR) features but requires a priori selection of two parameters: the number of features of interest and the number of quantization levels into which the continuous features are binned. Here we characterize the variance of cross-validation accuracy with respect to changes in these parameters for four classes of machine learning (ML) algorithms. This characterizes the efficiency of combining mRMR with each of these algorithms by identifying when the variance of cross-validation accuracy is minimized, and demonstrates how naive parameter selection may artificially depress accuracy. Our results can be used to improve the understanding of how feature selection interacts with four classes of ML algorithms and provide guidance for better a priori parameter selection in situations where an overwhelming number of redundant, noisy features are available for classification.
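The mRMR procedure described above can be sketched compactly: bin each continuous feature into q levels, estimate relevance I(feature; label) and redundancy I(feature; selected feature) with a plug-in mutual-information estimate, and select greedily. The equal-width binning and the difference-form greedy criterion are simplifying assumptions, not necessarily the exact variant used in the paper; note that `k` and `q` are precisely the two a priori parameters the abstract discusses:

```python
import numpy as np

def quantize(x, q):
    """Bin a continuous feature into q equal-width levels (0..q-1)."""
    edges = np.linspace(x.min(), x.max(), q + 1)[1:-1]
    return np.digitize(x, edges)

def mutual_info(a, b):
    """Plug-in mutual information (in nats) between two discrete arrays."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

def mrmr(X, y, k, q):
    """Greedy mRMR: maximize relevance I(f; y) minus mean redundancy I(f; g)
    over the already-selected features g. Returns k column indices."""
    disc = [quantize(X[:, j], q) for j in range(X.shape[1])]
    relevance = [mutual_info(f, y) for f in disc]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(disc[j], disc[s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

With a duplicated informative column, the redundancy penalty steers the second pick toward a weaker but non-redundant feature, which is the behavior that makes mRMR attractive when tens of thousands of EEG features are highly correlated.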
