ABSTRACT
OBJECTIVES: Around 30% of patients undergoing surgical resection for drug-resistant mesial temporal lobe epilepsy (MTLE) do not obtain seizure freedom. The success of anterior temporal lobe resection (ATLR) depends critically on careful selection of surgical candidates, aiming to optimize seizure freedom while minimizing postoperative morbidity. Structural MRI and FDG-PET neuroimaging are routinely used in presurgical assessment and guide the decision to proceed to surgery. In this study, we evaluate the potential of machine learning techniques applied to standard presurgical MRI and PET imaging features to provide enhanced prognostic value relative to current practice. METHODS: Eighty-two patients with drug-resistant MTLE were scanned with FDG-PET before surgery and with T1-weighted MRI before and after surgery. From these images the following features of interest were derived: volume of temporal lobe (TL) hypometabolism, % of extratemporal hypometabolism, presence of contralateral TL hypometabolism, presence of hippocampal sclerosis, laterality of seizure onset, volume of tissue resected, and % of TL hypometabolism resected. These measures were used as predictor variables in logistic regression, support vector machines, random forests, and artificial neural networks. RESULTS: In the study cohort, 24 of 82 patients (29.3%) who underwent ATLR for drug-resistant MTLE did not achieve an Engel Class I (i.e., free of disabling seizures) outcome at a minimum of 2 years of postoperative follow-up. Machine learning approaches correctly identified up to 73% of the 24 patients who did not achieve a Class I outcome, at the cost of incorrectly predicting a poor outcome for up to 31% of patients who did achieve a Class I outcome. Overall accuracies ranged from 70% to 80%, with an area under the receiver operating characteristic curve (AUC) of .75-.81. Information on the overall extent of both total and significantly hypometabolic tissue resected was crucial to predictive performance: with presurgical information alone, the AUC dropped to .59-.62. Neither incorporating the laterality of seizure onset nor the choice of machine learning algorithm significantly changed predictive performance. SIGNIFICANCE: Collectively, these results indicate that "acceptable" to "good" patient-specific prognostication for drug-resistant MTLE surgery is feasible with machine learning approaches using commonly collected imaging modalities, but that information on the surgical resection region is critical for optimal prognostication.
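To make the modeling setup above concrete, the following minimal Python sketch shows one way the seven imaging-derived predictors could be fed to the four classifier families and scored by cross-validated AUC. The file name, column names, hyperparameters, and cross-validation scheme are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the outcome-prediction setup described above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed table: one row per patient, imaging-derived predictors plus outcome.
df = pd.read_csv("mtle_features.csv")  # hypothetical file
features = [
    "tl_hypometabolism_volume",     # volume of TL hypometabolism
    "pct_extratemporal_hypometab",  # % extratemporal hypometabolism
    "contralateral_tl_hypometab",   # presence (0/1)
    "hippocampal_sclerosis",        # presence (0/1)
    "seizure_onset_laterality",     # left/right coded 0/1
    "resected_tissue_volume",       # volume of tissue resected
    "pct_tl_hypometab_resected",    # % of TL hypometabolism resected
]
X, y = df[features], df["engel_class_I"]  # 1 = free of disabling seizures

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm": make_pipeline(StandardScaler(), SVC(probability=True)),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "mlp": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    prob = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, prob):.2f}")
```

Dropping the resection-related columns from `features` would reproduce the "presurgical information alone" comparison described in the abstract.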
Subject(s)
Drug Resistant Epilepsy , Epilepsy, Temporal Lobe , Drug Resistant Epilepsy/diagnostic imaging , Drug Resistant Epilepsy/surgery , Epilepsy, Temporal Lobe/diagnostic imaging , Epilepsy, Temporal Lobe/surgery , Fluorodeoxyglucose F18 , Humans , Machine Learning , Magnetic Resonance Imaging , Seizures , Treatment Outcome
ABSTRACT
Purpose To examine Generative Visual Rationales (GVRs) as a tool for visualizing neural network learning of chest radiograph features in congestive heart failure (CHF). Materials and Methods A total of 103 489 frontal chest radiographs in 46 712 patients acquired from January 1, 2007, to December 31, 2016, were divided into a labeled data set (with a B-type natriuretic peptide [BNP] result as a marker of CHF) and an unlabeled data set (without a BNP result). A generative model was trained on the unlabeled data set, and a neural network was trained on the encoded representations of the labeled data set to estimate BNP. The model was used to visualize how a radiograph with high estimated BNP would look without disease (a "healthy" radiograph). An overfitted model was developed for comparison, and 100 GVRs were blindly assessed by two experts for features of CHF. Area under the receiver operating characteristic curve (AUC), κ coefficient, and mixed-effects logistic regression were used for statistical analyses. Results At a cutoff BNP of 100 ng/L as a marker of CHF, the correctly trained model achieved an AUC of 0.82. Assessment of GVRs revealed that the correctly trained model highlighted conventional radiographic features of CHF as reasons for an elevated BNP prediction more frequently than did the overfitted model, including cardiomegaly (153 [76.5%] of 200 assessments vs 64 [32%] of 200; P < .001) and pleural effusions (47 [23.5%] of 200 vs 16 [8%] of 200; P = .003). Conclusion Features of congestive heart failure on chest radiographs learned by neural networks can be identified using Generative Visual Rationales, enabling detection of bias and overfitted models. © RSNA, 2018 See also the editorial by Ngo in this issue.
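As a rough illustration of the counterfactual idea behind the GVRs described above, the sketch below (PyTorch) optimizes the latent code of a high-BNP radiograph toward a lower predicted BNP and decodes the result as a "healthy" image, with the difference serving as the rationale. The `encoder`, `decoder`, and `bnp_head` modules are hypothetical pretrained placeholders, and the loss weighting and optimizer settings are assumptions rather than the published method.

```python
# Conceptual sketch of a Generative Visual Rationale: push the latent code of
# a high-BNP radiograph toward a low predicted BNP, decode the "healthy"
# counterfactual, and keep the difference image as the rationale.
import torch

def generative_visual_rationale(x, encoder, decoder, bnp_head,
                                steps=200, lr=0.05, l2_penalty=0.01):
    # Assumed pretrained modules: encoder/decoder from the generative model,
    # bnp_head regressing BNP from the latent code (placeholders, not the
    # authors' architecture).
    z = encoder(x).detach().clone().requires_grad_(True)
    z0 = z.detach().clone()
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Lower the predicted BNP while staying close to the original code,
        # so only disease-relevant features are altered.
        loss = bnp_head(z).mean() + l2_penalty * ((z - z0) ** 2).sum()
        loss.backward()
        opt.step()
    healthy = decoder(z)
    rationale = x - healthy  # regions driving the elevated BNP estimate
    return healthy.detach(), rationale.detach()
```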
Subject(s)
Heart Failure/diagnostic imaging , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Databases, Factual , Female , Heart Failure/blood , Humans , Infant , Infant, Newborn , Male , Middle Aged , Natriuretic Peptide, Brain/blood , ROC Curve , Thorax/diagnostic imaging , Young Adult
ABSTRACT
Integrating neurons into digital systems may enable performance infeasible with silicon alone. Here, we develop DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks from human or rodent origins are integrated with in silico computing via a high-density multielectrode array. Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game "Pong." Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions. Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time. Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, which we term synthetic biological intelligence. Future applications may provide further insights into the cellular correlates of intelligence.
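As a purely schematic illustration of the closed-loop principle described above, the toy Python loop below couples simulated "motor" activity to paddle movement, encodes the ball position as sensory stimulation, and delivers predictable feedback after a hit but unpredictable feedback after a miss. The random stand-ins for culture activity and stimulation are assumptions for illustration only; they do not represent the DishBrain hardware, electrode layout, or API.

```python
# Toy closed-loop sketch: structured (predictable) feedback after a hit,
# unpredictable feedback after a miss, as in the principle described above.
import random

def read_motor_activity():
    """Stand-in for spike counts from the two motor electrode regions."""
    return random.randint(0, 10), random.randint(0, 10)

def stimulate(kind):
    """Stand-in for patterned electrical stimulation of sensory electrodes."""
    print(f"stimulus: {kind}")

paddle, ball = 0.5, random.random()
for step in range(100):
    stimulate(f"sensory: ball position {ball:.2f}")  # ball position as input
    up, down = read_motor_activity()
    paddle = min(1.0, max(0.0, paddle + 0.05 * (up - down)))  # motor decoding
    ball = min(1.0, max(0.0, ball + random.uniform(-0.05, 0.05)))
    if abs(paddle - ball) < 0.1:
        stimulate("predictable")    # structured feedback after a hit
    else:
        stimulate("unpredictable")  # noisy feedback after a miss
        ball = random.random()      # rally restarts after a missed ball
```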