Results 1 - 20 of 24
1.
J Med Signals Sens ; 13(4): 272-279, 2023.
Article in English | MEDLINE | ID: mdl-37809016

ABSTRACT

Background: Diagnosing emotional states would make human-computer interaction (HCI) systems more effective in practice. Correlations between electroencephalography (EEG) signals and emotions have been shown in various studies; therefore, EEG signal-based methods are the most accurate and informative. Methods: In this study, three Convolutional Neural Network (CNN) models appropriate for processing EEG signals, EEGNet, ShallowConvNet and DeepConvNet, are applied to diagnose emotions. We use baseline-removal preprocessing to improve classification accuracy. Each network is assessed in two settings: subject-dependent and subject-independent. We adapt the selected CNN model to be lightweight and implementable on a Raspberry Pi processor. Emotional states are recognized for every three-second epoch of the received signals on the embedded system, enabling real-time use in practice. Results: Average classification accuracies of 99.10% (valence) and 99.20% (arousal) in the subject-dependent setting, and 90.76% (valence) and 90.94% (arousal) in the subject-independent setting, were achieved on the well-known DEAP dataset. Conclusion: Comparison with related work shows that a highly accurate model, implementable in practice, has been achieved.
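The two evaluation settings come down to how epochs are partitioned across subjects; a minimal sketch (the data layout and function names are illustrative, not from the paper):

```python
# Illustrative sketch of subject-dependent vs. subject-independent evaluation.
# Epoch records are (subject_id, features, label); all names are made up.
import random

def subject_dependent_split(epochs, test_frac=0.2, seed=0):
    """Every subject may contribute epochs to both train and test."""
    rng = random.Random(seed)
    shuffled = epochs[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

def subject_independent_split(epochs, held_out_subjects):
    """Held-out subjects appear only in the test set (leave-subjects-out)."""
    train = [e for e in epochs if e[0] not in held_out_subjects]
    test = [e for e in epochs if e[0] in held_out_subjects]
    return train, test

# 4 toy subjects, 5 epochs each
epochs = [(s, [0.0], s % 2) for s in range(4) for _ in range(5)]
train, test = subject_independent_split(epochs, held_out_subjects={3})
assert all(e[0] != 3 for e in train) and all(e[0] == 3 for e in test)
```

The subject-independent setting is the harder one: the model never sees any epoch from the test subjects, which is why its reported accuracies are lower.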

2.
Comput Med Imaging Graph ; 90: 101883, 2021 06.
Article in English | MEDLINE | ID: mdl-33895622

ABSTRACT

PURPOSE: Lung cancer is the leading cause of cancer mortality in the US, responsible for more deaths than breast, prostate, colon and pancreas cancer combined, and large population studies have indicated that low-dose computed tomography (CT) screening of the chest can significantly reduce this death rate. Recently, the usefulness of Deep Learning (DL) models for lung cancer risk assessment has been demonstrated. However, in many cases model performance is evaluated on small/medium-size test sets, thus not providing the strong generalization and stability guarantees necessary for clinical adoption. In this work, our goal is to contribute towards clinical adoption by investigating a deep learning framework on larger and heterogeneous datasets while also comparing to state-of-the-art models. METHODS: Three low-dose CT lung cancer screening datasets were used: National Lung Screening Trial (NLST, n = 3410), Lahey Hospital and Medical Center (LHMC, n = 3154) data, and Kaggle competition data (from both stages, n = 1397 + 505), along with the University of Chicago data (UCM, a subset of NLST, annotated by radiologists, n = 132). At the first stage, our framework employs a nodule detector; in the second stage, we use both the image context around the nodules and nodule features as inputs to a neural network that estimates the malignancy risk for the entire CT scan. We trained our algorithm on part of the NLST dataset and validated it on the other datasets. Special care was taken to ensure there was no patient overlap between the train and validation sets.
RESULTS AND CONCLUSIONS: The proposed deep learning model is shown to: (a) generalize well across all three datasets, achieving an AUC between 86% and 94%, with our external test set (LHMC) being at least twice as large as in other works; (b) perform better than the widely accepted PanCan Risk Model, achieving 6% and 9% better AUC scores in our two test sets; (c) improve on the state of the art represented by the winners of the Kaggle Data Science Bowl 2017 competition on lung cancer screening; and (d) have comparable performance to radiologists in estimating cancer risk at a patient level.


Subject(s)
Deep Learning , Lung Neoplasms , Early Detection of Cancer , Humans , Lung , Lung Neoplasms/diagnostic imaging , Male , Radiologists , Risk Assessment , Tomography, X-Ray Computed
3.
JCO Clin Cancer Inform ; 4: 865-874, 2020 10.
Article in English | MEDLINE | ID: mdl-33006906

ABSTRACT

PURPOSE: Literature on clinical note mining has highlighted the superiority of machine learning (ML) over hand-crafted rules. Nevertheless, most studies assume the availability of large training sets, which is rarely the case. For this reason, in the clinical setting, rules are still common. We suggest 2 methods to leverage the knowledge encoded in pre-existing rules to inform ML decisions and obtain high performance, even with scarce annotations. METHODS: We collected 501 prostate pathology reports from 6 American hospitals. Reports were split into 2,711 core segments, annotated with 20 attributes describing the histology, grade, extension, and location of tumors. The data set was split by institutions to generate a cross-institutional evaluation setting. We assessed 4 systems, namely a rule-based approach, an ML model, and 2 hybrid systems integrating the previous methods: a Rule as Feature model and a Classifier Confidence model. Several ML algorithms were tested, including logistic regression (LR), support vector machine (SVM), and eXtreme gradient boosting (XGB). RESULTS: When training on data from a single institution, LR lags behind the rules by 3.5% (F1 score: 92.2% v 95.7%). Hybrid models, instead, obtain competitive results, with Classifier Confidence outperforming the rules by +0.5% (96.2%). When a larger amount of data from multiple institutions is used, LR improves by +1.5% over the rules (97.2%), whereas hybrid systems obtain +2.2% for Rule as Feature (97.7%) and +2.6% for Classifier Confidence (98.3%). Replacing LR with SVM or XGB yielded similar performance gains. CONCLUSION: We developed methods to use pre-existing handcrafted rules to inform ML algorithms. These hybrid systems obtain better performance than either rules or ML models alone, even when training data are limited.
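The "Rule as Feature" hybrid can be sketched by appending a rule's output to the feature vector, so the learner weighs it like any other signal. The keyword rule, toy features, and perceptron below are illustrative stand-ins, not the paper's models:

```python
# Illustrative "Rule as Feature" hybrid: a hand-crafted rule's prediction is
# appended to the feature vector before training a simple linear classifier.

def rule(text):
    # stand-in hand-crafted rule: crude keyword match
    return 1.0 if "carcinoma" in text else 0.0

def featurize(text):
    base = [float("gleason" in text), float("benign" in text)]
    return base + [rule(text)]          # rule output as an extra feature

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = [0.0] * (len(X[0]) + 1)         # last weight is the bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(a * b for a, b in zip(w, xi + [1.0])) > 0 else 0
            for j, v in enumerate(xi + [1.0]):
                w[j] += lr * (yi - pred) * v
    return w

docs = ["invasive carcinoma, gleason 7", "benign tissue",
        "carcinoma present", "benign gland"]
labels = [1, 0, 1, 0]
w = train_perceptron([featurize(d) for d in docs], labels)
predict = lambda d: 1 if sum(a * b for a, b in zip(w, featurize(d) + [1.0])) > 0 else 0
assert [predict(d) for d in docs] == labels
```

The point of the design is that when annotations are scarce, the rule feature carries most of the signal, and as data grows the learner can down-weight or override it.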


Subject(s)
Machine Learning , Prostate , Algorithms , Humans , Logistic Models , Male , Support Vector Machine , United States
4.
AMIA Jt Summits Transl Sci Proc ; 2019: 212-221, 2019.
Article in English | MEDLINE | ID: mdl-31258973

ABSTRACT

Electronic Health Records contain a wealth of clinical information that can potentially be used for a variety of clinical tasks. Clinical narratives describe both the presence and the absence of medical conditions and clinical findings. It is essential to be able to distinguish between the two, since negated and non-negated events often have very different prognostic value. In this paper, we present a feature-enriched neural network-based model for negation scope detection in biomedical texts. The system achieves robust, high performance on two different types of text, scientific abstracts and radiology reports: it sets a new state-of-the-art result on the scientific-abstracts part of the BioScope corpus without requiring gold cue information for the negation scope detection task, and achieves a competitive result on the radiology report corpus.

5.
AMIA Jt Summits Transl Sci Proc ; 2019: 232-241, 2019.
Article in English | MEDLINE | ID: mdl-31258975

ABSTRACT

During a radiology reading session, it is common that the radiologist refers back to the prior history of the patient for comparison. As a result, structuring of radiology report content for seamless, fast, and accurate access is in high demand in Radiology Information Systems (RIS). A common approach for defining a structure is based on the anatomical sites of radiological observations. Nevertheless, the language used for referring to and describing anatomical regions varies quite significantly among radiologists. Conventional approaches relying on ontology-based keyword matching fail to achieve acceptable precision and recall in anatomical phrase labeling in radiology reports due to such variation in language. In this work, a novel context-driven anatomical labeling framework is proposed. The proposed framework consists of two parallel Recurrent Neural Networks (RNN), one for inferring the context of a sentence and the other for word (token)-level labeling. The proposed framework was trained on a large set of radiology reports from a clinical site and evaluated on reports from two other clinical sites. The proposed framework outperformed the state-of-the-art approaches, especially in correctly labeling ambiguous cases.

6.
AMIA Jt Summits Transl Sci Proc ; 2019: 285-294, 2019.
Article in English | MEDLINE | ID: mdl-31258981

ABSTRACT

Radiology reports contain descriptions of radiological observations followed by diagnosis and follow up recommendations, transcribed by radiologists while reading medical images. One of the most challenging tasks in a radiology workflow is to extract, characterize and structure such content to be able to pair each observation with an appropriate action. This requires classification of the findings based on the provided characterization. In most clinical setups, this is done manually, which is tedious, time-consuming and prone to human error yet of great importance as various types of findings in the reports require different follow-up decision supports and draw different levels of attention. In this work, we present a framework for detection and classification of change characteristics of pulmonary nodular findings in radiology reports. We combine a pre-trained word embedding model with a deep learning based sentence encoder. To overcome the challenge of access to limited labeled data for training, we apply Siamese network with pairwise inputs, which enforces the similarities between findings under the same category. The proposed multitask neural network classifier was evaluated and compared against state-of-the-art approaches and demonstrated promising performance.
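One standard way a Siamese setup "enforces similarity" between same-category pairs is a contrastive loss over a shared encoder; the toy linear encoder, vectors, and margin below are illustrative assumptions, not the paper's trained model:

```python
# Sketch of the pairwise idea behind a Siamese network: the SAME encoder maps
# both findings, and a contrastive loss pulls same-category pairs together
# while pushing different-category pairs at least `margin` apart.
import math

def encode(vec, weights):
    # shared encoder applied to both inputs (here: a fixed linear map)
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def contrastive_loss(a, b, same, margin=1.0):
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if same:
        return d ** 2                    # pull similar findings together
    return max(0.0, margin - d) ** 2     # push dissimilar ones apart

W = [[1.0, 0.0], [0.0, 1.0]]             # toy 2x2 encoder weights
a = encode([0.1, 0.2], W)
b = encode([0.1, 0.25], W)               # near-duplicate finding
c = encode([2.0, 2.0], W)                # very different finding
assert contrastive_loss(a, b, same=True) < contrastive_loss(a, c, same=True)
assert contrastive_loss(a, c, same=False) == 0.0   # already beyond the margin
```

Training on pairs rather than single examples is what makes the approach attractive with limited labeled data: n labeled findings yield on the order of n² training pairs.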

7.
J Digit Imaging ; 32(1): 6-18, 2019 02.
Article in English | MEDLINE | ID: mdl-30076490

ABSTRACT

In today's radiology workflow, free-text reporting is established as the most common medium to capture, store, and communicate clinical information. Radiologists routinely refer to prior radiology reports of a patient to recall critical information for new diagnosis, which is quite tedious, time consuming, and prone to human error. Automatic structuring of report content is desired to facilitate such inquiry of information. In this work, we propose an unsupervised machine learning approach to automatically structure radiology reports by detecting and normalizing anatomical phrases based on the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) ontology. The proposed approach combines word embedding-based semantic learning with ontology-based concept mapping to derive the desired concept normalization. The word embedding model was trained using a large corpus of unlabeled radiology reports. Fifty-six anatomical labels were extracted from SNOMED CT as class labels of the whole human anatomy. The proposed framework was compared against a number of state-of-the-art supervised and unsupervised approaches. Radiology reports from three different clinical sites were manually labeled for testing. The proposed approach outperformed other techniques yielding an average precision of 82.6%. The proposed framework boosts the coverage and performance of conventional approaches for concept normalization, by applying word embedding techniques in semantic learning, while avoiding the challenge of having access to a large amount of annotated data, which is typically required for training classifiers.
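The embedding-plus-ontology mapping can be sketched as nearest-label lookup by cosine similarity; the tiny vectors and the two SNOMED CT-style labels below are made-up stand-ins for embeddings trained on a radiology corpus:

```python
# Minimal sketch of embedding-based concept normalization: an anatomical
# phrase vector is mapped to the class label whose embedding is closest by
# cosine similarity. Vectors and labels are illustrative, not trained values.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

label_embeddings = {                 # stand-ins for SNOMED CT class labels
    "Lung structure": [0.9, 0.1, 0.0],
    "Liver structure": [0.1, 0.9, 0.2],
}

def normalize(phrase_vec, labels=label_embeddings):
    return max(labels, key=lambda name: cosine(phrase_vec, labels[name]))

# a phrase embedded near the "lung" direction resolves to the lung label
assert normalize([0.8, 0.2, 0.1]) == "Lung structure"
```

Because the lookup needs only unlabeled text to train the embeddings plus the ontology's label set, no annotated corpus is required, which is the unsupervised appeal described above.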


Subject(s)
Electronic Health Records , Radiology/methods , Terminology as Topic , Unsupervised Machine Learning , Humans , Workflow
8.
IEEE Trans Med Imaging ; 37(12): 2695-2703, 2018 12.
Article in English | MEDLINE | ID: mdl-29994471

ABSTRACT

Temporal enhanced ultrasound (TeUS), comprising the analysis of variations in backscattered signals from a tissue over a sequence of ultrasound frames, has been previously proposed as a new paradigm for tissue characterization. In this paper, we propose to use deep recurrent neural networks (RNN) to explicitly model the temporal information in TeUS. By investigating several RNN models, we demonstrate that long short-term memory (LSTM) networks achieve the highest accuracy in separating cancer from benign tissue in the prostate. We also present algorithms for in-depth analysis of LSTM networks. Our in vivo study includes data from 255 prostate biopsy cores of 157 patients. We achieve area under the curve, sensitivity, specificity, and accuracy of 0.96, 0.76, 0.98, and 0.93, respectively. Our result suggests that temporal modeling of TeUS using RNN can significantly improve cancer detection accuracy over previously presented works.
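A single LSTM step, written in plain Python with scalar weights, illustrates the gating that lets such a network carry information across a TeUS time series; the weights and input sequence are illustrative, not trained values:

```python
# One LSTM cell step for a 1-D input and 1-D hidden state, with toy weights.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, p):
    f = sigmoid(p["wf"] * x + p["uf"] * h + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h + p["bg"])  # candidate state
    c = f * c + i * g                                   # cell state update
    return o * math.tanh(c), c

params = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                           "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 1.0]:          # a toy TeUS time series
    h, c = lstm_step(x, h, c, params)
assert -1.0 < h < 1.0               # hidden state stays bounded by tanh/sigmoid
```

The forget gate `f` is the mechanism that lets the cell accumulate slow temporal trends over many frames, which is what distinguishes the LSTM from a memoryless per-frame classifier.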


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Prostatic Neoplasms/diagnostic imaging , Algorithms , Humans , Male , Prostate/diagnostic imaging , Ultrasonography
9.
Article in English | MEDLINE | ID: mdl-29505407

ABSTRACT

Temporal-enhanced ultrasound (TeUS) is a novel noninvasive imaging paradigm that captures information from a temporal sequence of backscattered US radio frequency data obtained from a fixed tissue location. This technology has been shown to be effective for classification of various in vivo and ex vivo tissue types, including prostate cancer from benign tissue. Our previous studies have indicated two primary phenomena that influence TeUS: 1) changes in tissue temperature due to acoustic absorption and 2) micro-vibrations of tissue due to physiological vibrations. In this paper, first, a theoretical formulation for TeUS is presented. Next, a series of simulations are carried out to investigate micro-vibration as a source of tissue-characterizing information in TeUS. The simulations include finite element modeling of micro-vibration in synthetic phantoms, followed by US image generation during TeUS imaging. The simulations are performed on two media: a sparse array of scatterers and a medium with pathology-mimicking scatterers that match the nuclei distribution extracted from a prostate digital pathology data set. Statistical analysis of the simulated TeUS data shows its ability to accurately classify tissue types. Our experiments suggest that TeUS can capture microstructural differences in tissues, including scatterer density, as they react to micro-vibrations.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Ultrasonography/methods , Computer Simulation , Databases, Factual , Finite Element Analysis , Humans , Male , Phantoms, Imaging , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging
10.
Int J Comput Assist Radiol Surg ; 13(8): 1201-1209, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29589258

ABSTRACT

PURPOSE: We have previously proposed temporal enhanced ultrasound (TeUS) as a new paradigm for tissue characterization. TeUS is based on analyzing a sequence of ultrasound data with deep learning and has been demonstrated to be successful for detection of cancer in ultrasound-guided prostate biopsy. Our aim is to enable the dissemination of this technology to the community for large-scale clinical validation. METHODS: In this paper, we present a unified software framework demonstrating near-real-time analysis of ultrasound data stream using a deep learning solution. The system integrates ultrasound imaging hardware, visualization and a deep learning back-end to build an accessible, flexible and robust platform. A client-server approach is used in order to run computationally expensive algorithms in parallel. We demonstrate the efficacy of the framework using two applications as case studies. First, we show that prostate cancer detection using near-real-time analysis of RF and B-mode TeUS data and deep learning is feasible. Second, we present real-time segmentation of ultrasound prostate data using an integrated deep learning solution. RESULTS: The system is evaluated for cancer detection accuracy on ultrasound data obtained from a large clinical study with 255 biopsy cores from 157 subjects. It is further assessed with an independent dataset with 21 biopsy targets from six subjects. In the first study, we achieve area under the curve, sensitivity, specificity and accuracy of 0.94, 0.77, 0.94 and 0.92, respectively, for the detection of prostate cancer. In the second study, we achieve an AUC of 0.85. CONCLUSION: Our results suggest that TeUS-guided biopsy can be potentially effective for the detection of prostate cancer.
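The client-server split can be mimicked in miniature with a thread and a queue: a client streams frames while a server consumes them in parallel. The "model" here is a placeholder, not the actual deep-learning back-end:

```python
# Toy analogue of the client-server framework described above: a client
# streams ultrasound "frames" into a queue while a server thread runs the
# (expensive) model in parallel and collects per-frame scores.
import queue
import threading

frames = queue.Queue()
results = []

def model(frame):
    return sum(frame) / len(frame)      # stand-in for the deep-learning model

def server():
    while True:
        frame = frames.get()
        if frame is None:               # sentinel: stream finished
            break
        results.append(model(frame))

worker = threading.Thread(target=server)
worker.start()
for i in range(3):                      # client streaming frames
    frames.put([float(i)] * 4)
frames.put(None)                        # signal end of stream
worker.join()
assert results == [0.0, 1.0, 2.0]
```

Decoupling acquisition from inference through a queue is what allows "near-real-time" behavior: the client never blocks on the model, and the expensive computation runs in parallel.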


Subject(s)
Image-Guided Biopsy/methods , Prostatic Neoplasms/diagnosis , Ultrasonography, Interventional/methods , Algorithms , Biopsy, Large-Core Needle , Computer Systems , Humans , Male , Sensitivity and Specificity
11.
Int J Comput Assist Radiol Surg ; 12(8): 1293-1305, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28634789

ABSTRACT

PURPOSE: Temporal Enhanced Ultrasound (TeUS) has been proposed as a new paradigm for tissue characterization based on a sequence of ultrasound radio frequency (RF) data. We previously used TeUS to successfully address the problem of prostate cancer detection in fusion biopsies. METHODS: In this paper, we use TeUS to address the problem of grading prostate cancer in a clinical study of 197 biopsy cores from 132 patients. Our method involves capturing high-level latent features of TeUS with a deep learning approach, followed by distribution learning to cluster aggressive cancer in a biopsy core. In this hypothesis-generating study, we utilize deep learning-based feature visualization as a means to obtain insight into the physical phenomenon governing the interaction of temporal ultrasound with tissue. RESULTS: Based on the evidence derived from our feature visualization, and the structure of tissue from digital pathology, we built a simulation framework for studying the physical phenomenon underlying TeUS-based tissue characterization. CONCLUSION: Results from simulation and feature visualization corroborate the hypothesis that micro-vibrations of tissue microstructure, captured by low-frequency spectral features of TeUS, can be used for detection of prostate cancer.


Subject(s)
Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Ultrasonography, Interventional/methods , Humans , Image-Guided Biopsy/methods , Imaging, Three-Dimensional , Male , Neoplasm Staging , Neural Networks, Computer , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/pathology , Sensitivity and Specificity
12.
Int J Comput Assist Radiol Surg ; 12(7): 1111-1121, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28349507

ABSTRACT

PURPOSE: We present a method for prostate cancer (PCa) detection using temporal enhanced ultrasound (TeUS) data obtained either from radiofrequency (RF) ultrasound signals or B-mode images. METHODS: For the first time, we demonstrate that by applying domain adaptation and transfer learning methods, a tissue classification model trained on TeUS RF data (source domain) can be deployed for classification using TeUS B-mode data alone (target domain), where both are obtained on the same ultrasound scanner. This is a critical step for the clinical translation of tissue classification techniques that primarily rely on access to RF data, since this imaging modality is not readily available on all commercial scanners in clinics. Proof of concept is provided for in vivo characterization of PCa using TeUS B-mode data, where different nonlinear processing filters in the pipeline of the RF to B-mode conversion result in a distribution shift between the two domains. RESULTS: Our in vivo study includes data obtained in an MRI-guided targeted procedure for prostate biopsy. We achieve comparable area under the curve using TeUS RF and B-mode data for medium to large tumor sizes in biopsy cores (>4 mm). CONCLUSION: Our results suggest that the proposed adaptation technique is successful in reducing the divergence between TeUS RF and B-mode data.
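The abstract does not spell out the adaptation method, so as a purely illustrative stand-in, simple first-and-second-moment matching shows how target-domain (B-mode) features could be shifted toward source-domain (RF) statistics to reduce divergence:

```python
# Illustrative moment-matching adaptation (NOT the paper's method): rescale
# target-domain features so their mean and standard deviation match the
# source domain before applying the source-trained classifier.
import math

def moments(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(v)

def match_moments(target, source):
    mt, st = moments(target)
    ms, ss = moments(source)
    return [(x - mt) / st * ss + ms for x in target]

rf = [0.0, 1.0, 2.0, 3.0]           # toy source-domain feature samples
bmode = [10.0, 12.0, 14.0, 16.0]    # same signal, shifted and rescaled
adapted = match_moments(bmode, rf)
assert all(abs(a - b) < 1e-9 for a, b in zip(moments(adapted), moments(rf)))
```

Real domain-adaptation methods go well beyond first and second moments, but the goal is the same: remove the distribution shift introduced by the RF-to-B-mode processing filters.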


Subject(s)
Magnetic Resonance Imaging , Prostatic Neoplasms/diagnostic imaging , Ultrasonography/methods , Biopsy, Needle , Humans , Male , Prostatic Neoplasms/pathology , Radio Waves , Reproducibility of Results
13.
Int J Comput Assist Radiol Surg ; 11(6): 947-56, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27059021

ABSTRACT

PURPOSE: This paper presents the results of a large study involving fusion prostate biopsies to demonstrate that temporal ultrasound can be used to accurately classify tissue labels identified in multi-parametric magnetic resonance imaging (mp-MRI) as suspicious for cancer. METHODS: We use deep learning to analyze temporal ultrasound data obtained from 255 cancer foci identified in mp-MRI. Each target is sampled in axial and sagittal planes. A deep belief network is trained to automatically learn the high-level latent features of temporal ultrasound data. A support vector machine classifier is then applied to differentiate cancerous versus benign tissue, verified by histopathology. Data from 32 targets are used for training, while the remaining 223 targets are used for testing. RESULTS: Our results indicate that the distance between the biopsy target and the prostate boundary, and the agreement between axial and sagittal histopathology of each target, impact the classification accuracy. In 84 test cores that are 5 mm or farther from the prostate boundary and have consistent pathology outcomes in axial and sagittal biopsy planes, we achieve an area under the curve of 0.80. In contrast, all of these targets were labeled as moderately suspicious in mp-MRI. CONCLUSION: Using temporal ultrasound data in a fusion prostate biopsy study, we achieved a high classification accuracy specifically for moderately scored mp-MRI targets. These targets are clinically common and contribute to the high false-positive rates associated with mp-MRI for prostate cancer detection. Temporal ultrasound data combined with mp-MRI have the potential to reduce the number of unnecessary biopsies in fusion biopsy settings.


Subject(s)
Image-Guided Biopsy/methods , Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnosis , Ultrasonography/methods , Aged , Feasibility Studies , Humans , Male , Middle Aged
14.
Int J Comput Assist Radiol Surg ; 10(6): 727-35, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25843948

ABSTRACT

PURPOSE: In recent years, fusion of multi-parametric MRI (mp-MRI) with transrectal ultrasound (TRUS)-guided biopsy has enabled targeted prostate biopsy with improved cancer yield. Target identification is solely based on information from mp-MRI, which is subsequently transferred to the subject coordinates through an image registration approach. mp-MRI has been shown to be highly sensitive for detecting higher-grade prostate cancer, but suffers from a high rate of false positives for lower-grade cancer, leading to unnecessary biopsies. This paper utilizes a machine-learning framework to further improve the sensitivity of targeted biopsy through analyzing temporal ultrasound data backscattered from the prostate tissue. METHODS: Temporal ultrasound data were acquired during targeted fusion prostate biopsy from suspicious cancer foci identified in mp-MRI. Several spectral features, representing the signature of the backscattered signal from the tissue, were extracted from the temporal ultrasound data. A supervised support vector machine classification model was trained to relate the features to the result of histopathology analysis of biopsy cores obtained from cancer foci. The model was used to predict the label of biopsy cores for mp-MRI-identified targets in an independent group of subjects. RESULTS: Training of the classifier was performed on data obtained from 35 biopsy cores. A fivefold cross-validation strategy was utilized to examine the consistency of the selected features from the temporal ultrasound data, where we achieved a classification accuracy of 94% and an area under the receiver operating characteristic curve of 0.98. Subsequently, an independent group of 25 biopsy cores, in which mp-MRI had identified suspicious cancer foci, was used for validation of the model. Using the trained model, we predicted the tissue pathology from temporal ultrasound data: 16 out of 17 benign cores, as well as all three higher-grade cancer cores, were correctly identified.
CONCLUSION: The results show that temporal analysis of ultrasound data is potentially an effective approach to complement mp-MRI-TRUS-guided prostate cancer biopsy, especially to reduce the number of unnecessary biopsies and to reliably identify higher-grade cancers.
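Spectral features of a temporal ultrasound sequence can be sketched with a plain DFT power spectrum; the frame count and the low-frequency band kept below are illustrative choices, not the paper's parameters:

```python
# Illustrative spectral-feature extraction: power of the low-frequency DFT
# components of a temporal signal, as one plausible "signature" of the
# backscattered sequence. All sizes here are toy values.
import cmath
import math

def power_spectrum(signal):
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(signal))) ** 2 / n
            for k in range(n // 2 + 1)]

def spectral_features(signal, n_bins=4):
    return power_spectrum(signal)[:n_bins]   # keep the low-frequency band

# a slow one-cycle oscillation concentrates power in bin 1, not bin 0
sig = [math.sin(2 * math.pi * t / 8) for t in range(8)]
feats = spectral_features(sig)
assert feats[1] == max(feats)
```

The resulting low-dimensional feature vector is the kind of input a support vector machine can be trained on against the histopathology labels.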


Subject(s)
Magnetic Resonance Imaging/methods , Prostate/pathology , Prostatic Neoplasms/pathology , Ultrasonography, Interventional/methods , Feasibility Studies , Humans , Image-Guided Biopsy/methods , Male , Neoplasm Grading , Prostate/ultrastructure , Prostatic Neoplasms/diagnostic imaging
15.
Neuroimage Clin ; 7: 114-21, 2015.
Article in English | MEDLINE | ID: mdl-25610773

ABSTRACT

Computational neuroanatomical techniques that are used to evaluate the structural correlates of disorders in the brain typically measure regional differences in gray matter or white matter, or measure regional differences in the deformation fields required to warp individual datasets to a standard space. Our aim in this study was to combine measurements of regional tissue composition and of deformations in order to characterize a particular brain disorder (here, major depressive disorder). We use structural Magnetic Resonance Imaging (MRI) data from young adults in a first episode of depression, and from an age- and sex-matched group of non-depressed individuals, and create population gray matter (GM) and white matter (WM) tissue average templates using DARTEL groupwise registration. We obtained GM and WM tissue maps in the template space, along with the deformation fields required to co-register the DARTEL template and the GM and WM maps in the population. These three features, reflecting tissue composition and shape of the brain, were used within a joint independent-components analysis (jICA) to extract spatially independent joint sources and their corresponding modulation profiles. Coefficients of the modulation profiles were used to capture differences between depressed and non-depressed groups. The combination of hippocampal shape deformations and local composition of tissue (but neither shape nor local composition of tissue alone) was shown to discriminate reliably between individuals in a first episode of depression and healthy controls, suggesting that brain structural differences between depressed and non-depressed individuals do not simply reflect chronicity of the disorder but are there from the very outset.


Subject(s)
Brain/pathology , Depression/pathology , Image Interpretation, Computer-Assisted/methods , Adolescent , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
16.
PLoS Genet ; 10(8): e1004523, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25122193

ABSTRACT

Face expressions are a rich source of social signals. Here we estimated the proportion of phenotypic variance in the brain response to facial expressions explained by common genetic variance captured by ∼500,000 single nucleotide polymorphisms. Using genomic-relationship-matrix restricted maximum likelihood (GREML), we related this global genetic variance to that in the brain response to facial expressions, as assessed with functional magnetic resonance imaging (fMRI) in a community-based sample of adolescents (n = 1,620). Brain response to facial expressions was measured in 25 regions constituting a face network, as defined previously. In 9 out of these 25 regions, common genetic variance explained a significant proportion of phenotypic variance (40-50%) in their response to ambiguous facial expressions; this was not the case for angry facial expressions. Across the network, the strength of the genotype-phenotype relationship varied as a function of the inter-individual variability in the number of functional connections possessed by a given region (R² = 0.38, p < 0.001). Furthermore, this variability showed an inverted U relationship with both the number of observed connections (R² = 0.48, p < 0.001) and the magnitude of brain response (R² = 0.32, p < 0.001). Thus, a significant proportion of the brain response to facial expressions is predicted by common genetic variance in a subset of regions constituting the face network. These regions show the highest inter-individual variability in the number of connections with other network nodes, suggesting that the genetic model captures variations across the adolescent brains in co-opting these regions into the face network.


Subject(s)
Brain/physiology , Facial Expression , Genetic Variation , Polymorphism, Single Nucleotide/genetics , Adolescent , Brain/metabolism , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male
17.
Cereb Cortex ; 22(7): 1593-603, 2012 Jul.
Article in English | MEDLINE | ID: mdl-21893681

ABSTRACT

Whereas low-level sensory processes can be linked to macroanatomy with great confidence, the degree to which high-level cognitive processes map onto anatomy is less clear. If function respects anatomy, more accurate intersubject anatomical registration should result in better functional alignment. Here, we use auditory functional magnetic resonance imaging and compare the effectiveness of affine and nonlinear registration methods for aligning anatomy and functional activation across subjects. Anatomical alignment was measured using normalized cross-correlation within functionally defined regions of interest. Functional overlap was assessed using t-statistics from the group analyses and the degree to which group statistics predict high and consistent signal change in individual data sets. In regions related to early stages of auditory processing, nonlinear registration resulted in more accurate anatomical registration and stronger functional overlap among subjects compared with affine. In frontal and temporal areas reflecting high-level processing of linguistic meaning, nonlinear registration also improved the accuracy of anatomical registration. However, functional overlap across subjects was not enhanced in these regions. Therefore, functional organization, relative to anatomy, is more variable in the frontal and temporal areas supporting meaning-based processes than in areas devoted to sensory/perceptual auditory processing. This demonstrates for the first time that functional variability increases systematically between regions supporting lower and higher cognitive processes.
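Normalized cross-correlation, the anatomical-alignment measure used in the study above, has a compact form; a minimal sketch on toy 1-D intensity patches:

```python
# Normalized cross-correlation (NCC) between two equally sized intensity
# patches: mean-center each patch, then take the cosine of the residuals.
# The patches below are toy data, not imaging values.
import math

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    denom = (math.sqrt(sum(x * x for x in da)) *
             math.sqrt(sum(x * x for x in db)))
    return sum(x * y for x, y in zip(da, db)) / denom

patch = [1.0, 2.0, 3.0, 4.0]
assert abs(ncc(patch, [2.0, 4.0, 6.0, 8.0]) - 1.0) < 1e-12  # linear rescale: perfect
assert ncc(patch, [4.0, 3.0, 2.0, 1.0]) < 0                 # reversed: anti-correlated
```

Mean-centering and normalization are what make NCC insensitive to intensity offset and gain, which is why it is a reasonable anatomical-similarity measure across subjects with different scanner intensity profiles.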


Subject(s)
Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Pattern Recognition, Physiological/physiology , Adolescent , Adult , Brain Mapping/methods , Female , Humans , Male , Statistics as Topic , Young Adult
18.
Hum Brain Mapp ; 33(4): 938-57, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21416563

ABSTRACT

Large-scale magnetic resonance (MR) studies of the human brain offer unique opportunities for identifying genetic and environmental factors shaping the human brain. Here, we describe a dataset collected in the context of a multi-centre study of the adolescent brain, namely the IMAGEN Study. We focus on one of the functional paradigms included in the project to probe the brain network underlying processing of ambiguous and angry faces. Using functional MR (fMRI) data collected in 1,110 adolescents, we constructed probabilistic maps of the neural network engaged consistently while viewing the ambiguous or angry faces; 21 brain regions responding to faces with high probability were identified. We were also able to address several methodological issues, including the minimal sample size yielding a stable location of a test region, namely the fusiform face area (FFA), as well as the effect of acquisition site (eight sites) and scanner (four manufacturers) on the location and magnitude of the fMRI response to faces in the FFA. Finally, we provided a comparison between male and female adolescents in terms of the effect sizes of sex differences in brain response to the ambiguous and angry faces in the 21 regions of interest. Overall, we found a stronger neural response to the ambiguous faces in several cortical regions, including the fusiform face area, in female (vs. male) adolescents, and a slightly stronger response to the angry faces in the amygdala of male (vs. female) adolescents.


Subject(s)
Brain Mapping/methods , Brain/physiology , Emotions/physiology , Face , Visual Perception/physiology , Adolescent , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male , Sex Characteristics
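A probabilistic map of the kind constructed in the IMAGEN study records, at each voxel, the fraction of subjects whose response exceeds a threshold. A minimal sketch (the function name, threshold, and toy statistic maps are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def probabilistic_map(stat_maps, threshold):
    """Voxel-wise probability map: fraction of subjects whose
    statistic exceeds `threshold` at each voxel."""
    binarized = np.stack([m > threshold for m in stat_maps], axis=0)
    return binarized.mean(axis=0)

# Toy example: 5 subjects, 4-voxel "brain"
maps = [np.array([3.1, 0.2, 2.8, 1.0]),
        np.array([2.9, 0.1, 3.5, 2.7]),
        np.array([3.3, 0.0, 0.5, 2.9]),
        np.array([2.6, 0.3, 3.0, 0.4]),
        np.array([3.0, 0.2, 2.7, 2.8])]
pmap = probabilistic_map(maps, threshold=2.5)
# pmap → [1.0, 0.0, 0.8, 0.6]: voxel 0 responds in all subjects
```

With ~1,110 subjects, regions where this fraction is high (such as the 21 face-responsive regions reported) can be identified with considerable stability, which is also what makes the minimal-sample-size question for the FFA tractable.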
19.
Article in English | MEDLINE | ID: mdl-23367153

ABSTRACT

In prostate brachytherapy procedures, combining high-resolution endorectal coil (ERC)-MRI with Computed Tomography (CT) images has been shown to improve diagnostic specificity for malignant tumors. Despite this advantage, fusion of the two imaging modalities is complicated by the deformation of the prostate shape in ERC-MRI. Conventionally, nonlinear deformable registration techniques have been used to account for this deformation. In this work, we present a model-based technique for accounting for the deformation of the prostate gland in ERC-MR imaging, in which a unique deformation vector is estimated for every point within the prostate gland. Modes of deformation for every point in the prostate are statistically identified using an MR-based training set (with and without ERC-MRI). Deformation of the prostate from a deformed (ERC-MRI) to a non-deformed state in a different modality (CT) is then realized by first calculating partial deformation information for a limited number of points (such as surface points or anatomical landmarks) and then using the deformation calculated at this subset of points to determine the coefficient values for the modes of deformation provided by the statistical deformation model. Using leave-one-out cross-validation, our results demonstrated a mean estimation error of 1 mm for MR-to-MR registration.


Subject(s)
Magnetic Resonance Imaging/methods , Prostate/abnormalities , Prostatic Neoplasms/pathology , Humans , Male , Rectum
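The core idea of the statistical deformation model described above, learning principal modes of deformation from training fields and then fitting mode coefficients from partial observations (e.g. surface points), can be sketched as follows. All names, dimensions, and the synthetic training data are assumptions for illustration; the paper's actual model construction is not reproduced here:

```python
import numpy as np

# Training deformation fields: n_train subjects, each a flattened
# (n_points x 3) displacement vector field
rng = np.random.default_rng(1)
n_train, n_points = 20, 50
fields = rng.normal(size=(n_train, n_points * 3))

# Statistical deformation model: mean field + principal modes (PCA via SVD)
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
n_modes = 5
modes = Vt[:n_modes]              # each row is one mode of deformation

# Deformation known only at a subset of points (e.g. surface landmarks)
known_pts = np.arange(10)
known_idx = np.ravel([[3 * p, 3 * p + 1, 3 * p + 2] for p in known_pts])
true_coeffs = rng.normal(size=n_modes)
full_def = mean + true_coeffs @ modes           # deformation in model space
partial = full_def[known_idx]

# Least-squares fit of mode coefficients from the partial observations,
# then reconstruction of the dense deformation at every point
A = modes[:, known_idx].T
coeffs, *_ = np.linalg.lstsq(A, partial - mean[known_idx], rcond=None)
dense_estimate = mean + coeffs @ modes
```

Because the unknown deformation is constrained to the span of a few statistically learned modes, a small number of observed points suffices to recover a plausible dense deformation for the whole gland.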
20.
Neuroimage ; 50(2): 532-44, 2010 Apr 01.
Article in English | MEDLINE | ID: mdl-20036334

ABSTRACT

Probabilistic maps are useful in functional neuroimaging research for anatomical labeling and for data analysis. The degree to which a probability map can accurately estimate the location of a structure of interest in a new individual depends on many factors, including variability in the morphology of the structure of interest over subjects, the registration (normalization procedure and template) applied to align the brains among individuals for constructing a probability map, and the registration used to map a new subject's data set to the frame of the probabilistic map. Here, we take Heschl's gyrus (HG) as our structure of interest, and explore the impact of different registration methods on the accuracy with which a probabilistic map of HG can approximate HG in a new individual. We assess and compare the goodness of fit of probability maps generated using five different registration techniques, as well as that of a previously published probabilistic map of HG generated using affine registration (Penhune et al., 1996). The five registration techniques are: three groupwise registration techniques (implicit reference-based or IRG, DARTEL, and BSpline-based); a high-dimensional pairwise registration (HAMMER); and a segmentation-based registration (unified segmentation of SPM5). The accuracy of the resulting maps in labeling HG was assessed using evidence-based diagnostic measures within a leave-one-out cross-validation framework. Our results demonstrated that IRG and DARTEL outperformed the other registration techniques in terms of sensitivity, specificity, and positive predictive value (PPV). All the techniques displayed relatively low sensitivity despite high PPV, indicating that the generated probability maps provide accurate but conservative estimates of the location and extent of HG in new individuals.


Subject(s)
Auditory Cortex/anatomy & histology , Brain Mapping/methods , Image Interpretation, Computer-Assisted/methods , Adolescent , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Sensitivity and Specificity , Young Adult
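The diagnostic measures used to score the probability maps (sensitivity, specificity, PPV) reduce to voxel-wise counts of true/false positives and negatives against a ground-truth label map. A minimal sketch with toy binary maps (the function name and example maps are illustrative assumptions):

```python
import numpy as np

def label_metrics(predicted, truth):
    """Voxel-wise sensitivity, specificity and positive predictive
    value of a binary predicted label map against a ground truth."""
    tp = np.sum(predicted & truth)
    fp = np.sum(predicted & ~truth)
    fn = np.sum(~predicted & truth)
    tn = np.sum(~predicted & ~truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)   # true HG extent
pred  = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)   # conservative map
sens, spec, ppv = label_metrics(pred, truth)
# sens → 0.5 (misses half of HG), spec → 1.0, ppv → 1.0
```

The toy example mirrors the paper's finding: a map that labels only voxels it is confident about achieves high PPV and specificity while sacrificing sensitivity, i.e. an accurate but conservative estimate of the structure's extent.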