Results 1 - 19 of 19

1.
Hum Brain Mapp ; 45(5): e26599, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38520360

ABSTRACT

While neurological manifestations are core features of Fabry disease (FD), quantitative neuroimaging biomarkers for measuring brain involvement are lacking. We used deep learning and the brain-age paradigm to assess whether FD patients' brains appear older than normal and to validate brain-predicted age difference (brain-PAD) as a possible disease severity biomarker. MRI scans of FD patients and healthy controls (HCs) from a single institution were retrospectively studied. The Fabry stabilization index (FASTEX) was recorded as a measure of disease severity. Using minimally preprocessed 3D T1-weighted brain scans of healthy subjects from eight publicly available sources (N = 2160; mean age = 33 years [range 4-86]), we trained a model predicting chronological age based on a DenseNet architecture and used it to generate brain-age predictions in the internal cohort. Within a linear modeling framework, brain-PAD was tested for age/sex-adjusted associations with diagnostic group (FD vs. HC), FASTEX score, and both global and voxel-level neuroimaging measures. We studied 52 FD patients (40.6 ± 12.6 years; 28F) and 58 HCs (38.4 ± 13.4 years; 28F). The brain-age model achieved accurate out-of-sample performance (mean absolute error = 4.01 years, R2 = .90). FD patients had significantly higher brain-PAD than HCs (estimated marginal means: 3.1 vs. -0.1, p = .01). Brain-PAD was associated with FASTEX score (B = 0.10, p = .02), brain parenchymal fraction (B = -153.50, p = .001), white matter hyperintensities load (B = 0.85, p = .01), and tissue volume reduction throughout the brain. We demonstrated that FD patients' brains appear older than normal. Brain-PAD correlates with FD-related multi-organ damage and is influenced by both global brain volume and white matter hyperintensities, offering a comprehensive biomarker of (neurological) disease severity.
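
A minimal sketch of the linear-modelling step described above, on simulated data: brain-PAD is computed as model-predicted age minus chronological age and tested for an age/sex-adjusted association with diagnostic group. The DenseNet brain-age predictions are replaced here by a synthetic placeholder, so nothing below is the authors' code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 110  # 52 FD patients + 58 controls, as in the study
df = pd.DataFrame({
    "age": rng.uniform(18, 70, n),
    "sex": rng.integers(0, 2, n),
    "group": ["FD"] * 52 + ["HC"] * 58,
})
# predicted_age would come from the trained brain-age model; simulated here
df["predicted_age"] = df["age"] + rng.normal(0, 4, n) + np.where(df["group"] == "FD", 3, 0)
df["brain_pad"] = df["predicted_age"] - df["age"]  # brain-predicted age difference

# Age/sex-adjusted group effect on brain-PAD
model = smf.ols("brain_pad ~ C(group) + age + C(sex)", data=df).fit()
print(model.summary())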


Subjects
Deep Learning; Fabry Disease; Leukoaraiosis; Humans; Child, Preschool; Child; Adolescent; Young Adult; Adult; Middle Aged; Aged; Aged, 80 and over; Fabry Disease/diagnostic imaging; Retrospective Studies; Brain/diagnostic imaging; Magnetic Resonance Imaging; Biomarkers
2.
Eur Radiol ; 32(8): 5382-5391, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35284989

ABSTRACT

OBJECTIVES: To stratify patients with multiple sclerosis (pwMS) based on brain MRI-derived volumetric features using unsupervised machine learning. METHODS: The 3-T brain MRIs of relapsing-remitting pwMS, including 3D-T1w and FLAIR-T2w sequences, were retrospectively collected, along with Expanded Disability Status Scale (EDSS) scores and long-term (10 ± 2 years) clinical outcomes (EDSS, cognition, and progressive course). From the MRIs, volumes of demyelinating lesions and 116 atlas-defined gray matter regions were automatically segmented and expressed as z-scores referenced to external populations. Following feature selection, baseline MRI-derived biomarkers entered the Subtype and Stage Inference (SuStaIn) algorithm, which estimates subgroups characterized by distinct patterns of biomarker evolution and stages within subgroups. The trained model was then applied to longitudinal MRIs. Stability of subtypes and stage change over time were assessed via Krippendorff's α and multilevel linear regression models, respectively. The prognostic relevance of SuStaIn classification was assessed with ordinal/logistic regression analyses. RESULTS: We selected 425 pwMS (35.9 ± 9.9 years; F/M: 301/124), corresponding to 1129 MRI scans, along with healthy controls (N = 148; 35.9 ± 13.0 years; F/M: 77/71) and external pwMS (N = 80; 40.4 ± 11.9 years; F/M: 56/24) as reference populations. Based on 11 biomarkers surviving feature selection, two subtypes were identified, designated as "deep gray matter (DGM)-first" subtype (N = 238) and "cortex-first" subtype (N = 187) according to the atrophy pattern. Subtypes were consistent over time (α = 0.806), with significant annual stage increase (b = 0.20; p < 0.001). EDSS was associated with stage and DGM-first subtype (p ≤ 0.02). Baseline stage predicted long-term disability, transition to progressive course, and cognitive impairment (p ≤ 0.03), with the latter also associated with DGM-first subtype (p = 0.005). CONCLUSIONS: Unsupervised learning modelling of brain MRI-derived volumetric features provides a biologically reliable and prognostically meaningful stratification of pwMS. KEY POINTS: • The unsupervised modelling of brain MRI-derived volumetric features can provide a single-visit stratification of multiple sclerosis patients. • The resulting classification tends to be consistent over time and captures disease-related brain damage progression, supporting the biological reliability of the model. • Baseline stratification predicts long-term clinical disability, cognition, and transition to secondary progressive course.
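
An illustrative sketch, on simulated arrays, of the feature preparation described above: regional volumes expressed as z-scores referenced to an external control population before entering SuStaIn. The SuStaIn fitting itself (e.g. via the pySuStaIn package) is not shown, and all numbers below are placeholders.

import numpy as np

rng = np.random.default_rng(1)
n_patients, n_regions = 425, 11                      # 11 biomarkers survived selection
controls = rng.normal(10.0, 1.5, (148, n_regions))   # external reference population
patients = rng.normal(9.2, 1.8, (n_patients, n_regions))

mu, sigma = controls.mean(axis=0), controls.std(axis=0, ddof=1)
z_scores = (patients - mu) / sigma                   # biomarkers referenced to controls
# For atrophy, abnormality increases as volume decreases, so the sign
# can be flipped before staging: z_abnormality = -z_scores
print(z_scores.shape)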


Subjects
Multiple Sclerosis, Relapsing-Remitting; Multiple Sclerosis; Brain/diagnostic imaging; Brain/pathology; Disease Progression; Humans; Magnetic Resonance Imaging; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Multiple Sclerosis, Relapsing-Remitting/pathology; Reproducibility of Results; Retrospective Studies; Unsupervised Machine Learning
3.
J Med Biol Eng ; 36(4): 449-459, 2016.
Article in English | MEDLINE | ID: mdl-27656117

ABSTRACT

We performed a systematic review of several pattern analysis approaches for classifying breast lesions using dynamic, morphological, and textural features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Several machine learning approaches, namely artificial neural networks (ANN), support vector machines (SVM), linear discriminant analysis (LDA), tree-based classifiers (TC), and Bayesian classifiers (BC), and features used for classification are described. The findings of a systematic review of 26 studies are presented. The sensitivity and specificity are respectively 91 and 83 % for ANN, 85 and 82 % for SVM, 96 and 85 % for LDA, 92 and 87 % for TC, and 82 and 85 % for BC. The sensitivity and specificity are respectively 82 and 74 % for dynamic features, 93 and 60 % for morphological features, 88 and 81 % for textural features, 95 and 86 % for a combination of dynamic and morphological features, and 88 and 84 % for a combination of dynamic, morphological, and other features. LDA and TC have the best performance. A combination of dynamic and morphological features gives the best performance.

4.
Artif Intell Med ; 149: 102774, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462278

ABSTRACT

Alzheimer's Disease is the most common cause of dementia, whose progression spans different stages, from very mild cognitive impairment to mild and severe conditions. In clinical trials, Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are mostly used for the early diagnosis of neurodegenerative disorders since they provide volumetric and metabolic function information of the brain, respectively. In recent years, Deep Learning (DL) has been employed in medical imaging with promising results. Moreover, the use of deep neural networks, especially Convolutional Neural Networks (CNNs), has enabled the development of DL-based solutions in domains that need to leverage information coming from multiple data sources, giving rise to Multimodal Deep Learning (MDL). In this paper, we conduct a systematic analysis of MDL approaches for dementia severity assessment exploiting MRI and PET scans. We propose a Multi Input-Multi Output 3D CNN whose training iterations change according to the characteristics of the input, as it is able to handle incomplete acquisitions in which one image modality is missing. Experiments performed on the OASIS-3 dataset show satisfactory results for the implemented network, which outperforms both approaches exploiting a single image modality and different MDL fusion techniques.
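
A hedged sketch, not the authors' network, of a multi-input 3D CNN with one encoder per modality whose forward pass tolerates a missing acquisition by zero-filling the corresponding features; layer sizes and class count are illustrative assumptions.

import torch
import torch.nn as nn

def encoder():
    # Tiny 3D convolutional encoder producing a 16-dimensional feature vector
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    )

class MultiModalNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.mri_enc, self.pet_enc = encoder(), encoder()
        self.head = nn.Linear(32, n_classes)

    def forward(self, mri=None, pet=None):
        # When one acquisition is missing, its features are replaced by zeros
        batch = (mri if mri is not None else pet).shape[0]
        f_mri = self.mri_enc(mri) if mri is not None else torch.zeros(batch, 16)
        f_pet = self.pet_enc(pet) if pet is not None else torch.zeros(batch, 16)
        return self.head(torch.cat([f_mri, f_pet], dim=1))

logits = MultiModalNet()(mri=torch.randn(2, 1, 32, 32, 32))  # PET missing for this batch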


Subjects
Alzheimer Disease; Cognitive Dysfunction; Humans; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Positron-Emission Tomography/methods; Alzheimer Disease/diagnostic imaging; Cognitive Dysfunction/diagnostic imaging
5.
IEEE J Biomed Health Inform ; 27(5): 2512-2523, 2023 05.
Article in English | MEDLINE | ID: mdl-37022917

ABSTRACT

In Biomedical Named Entity Recognition (BioNER), the use of current cutting-edge deep learning-based methods, such as deep bidirectional transformers (e.g. BERT, GPT-3), can be substantially hampered by the absence of publicly accessible annotated datasets. When the BioNER system is required to annotate multiple entity types, various challenges arise because the majority of current publicly available datasets contain annotations for just one entity type: for example, mentions of disease entities may not be annotated in a dataset specialized in the recognition of drugs, resulting in a poor ground truth when using the two datasets to train a single multi-task model. In this work, we propose TaughtNet, a knowledge distillation-based framework allowing us to fine-tune a single multi-task student model by leveraging both the ground truth and the knowledge of single-task teachers. Our experiments on the recognition of mentions of diseases, chemical compounds and genes show the appropriateness and relevance of our approach w.r.t. strong state-of-the-art baselines in terms of precision, recall and F1 scores. Moreover, TaughtNet allows us to train smaller and lighter student models, which are easier to use in real-world scenarios, where they have to be deployed on limited-memory hardware devices and guarantee fast inference, and shows a high potential to provide explainability. We publicly release both our code on GitHub and our multi-task model on the Hugging Face repository.
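
An illustrative sketch of the kind of knowledge-distillation loss such a framework builds on: the student is trained against both the hard labels and the temperature-softened predictions of a teacher. Function names, temperature and weighting are assumptions for illustration, not the released TaughtNet code.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: KL divergence between temperature-softened distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: standard cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(4, 7), torch.randn(4, 7), torch.randint(0, 7, (4,)))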


Subjects
Deep Learning; Humans; Knowledge Bases
6.
J Imaging ; 9(3)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36976112

ABSTRACT

The United Nations Framework Convention on Climate Change (UNFCCC) has recently established the Reducing Emissions from Deforestation and forest Degradation (REDD+) program, which requires countries to report their carbon emissions and sink estimates through national greenhouse gas inventories (NGHGI). Thus, developing automatic systems capable of estimating the carbon absorbed by forests without in situ observation becomes essential. To support this critical need, in this work, we introduce ReUse, a simple but effective deep learning approach to estimate the carbon absorbed by forest areas based on remote sensing. The proposed method's novelty is in using the public above-ground biomass (AGB) data from the European Space Agency's Climate Change Initiative Biomass project as ground truth to estimate the carbon sequestration capacity of any portion of land on Earth using Sentinel-2 images and a pixel-wise regressive UNet. The approach has been compared with two literature proposals using a private dataset and human-engineered features. The results show a stronger generalization ability of the proposed approach, with a decrease in Mean Absolute Error and Root Mean Square Error over the runner-up of 16.9 and 14.3 in the area of Vietnam, 4.7 and 5.1 in the area of Myanmar, and 8.0 and 1.4 in the area of Central Europe, respectively. As a case study, we also report an analysis made for the Astroni area, a World Wildlife Fund (WWF) natural reserve struck by a large fire, which produced predictions consistent with values found by experts in the field after in situ investigations. These results further support the use of such an approach for the early detection of AGB variations in urban and rural areas.
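
A small sketch of the evaluation reported above: pixel-wise MAE and RMSE between a predicted above-ground-biomass map and a reference map, on placeholder arrays (the real reference would be the ESA CCI Biomass product).

import numpy as np

pred = np.random.rand(256, 256) * 300.0     # predicted AGB map, illustrative units
ref = np.random.rand(256, 256) * 300.0      # reference AGB map
valid = ref >= 0                            # e.g. mask out no-data pixels

err = pred[valid] - ref[valid]
mae = np.abs(err).mean()
rmse = np.sqrt((err ** 2).mean())
print(f"MAE={mae:.1f}  RMSE={rmse:.1f}")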

7.
J Imaging ; 8(12)2022 Dec 03.
Article in English | MEDLINE | ID: mdl-36547486

ABSTRACT

Glioblastoma Multiforme (GBM) is considered one of the most aggressive malignant tumors, characterized by a tremendously low survival rate. Despite alkylating chemotherapy being typically adopted to fight this tumor, it is known that O(6)-methylguanine-DNA methyltransferase (MGMT) enzyme repair abilities can antagonize the cytotoxic effects of alkylating agents, strongly limiting tumor cell destruction. However, it has been observed that MGMT promoter regions may be subject to methylation, a biological process preventing MGMT enzymes from removing the alkyl agents. As a consequence, the presence of the methylation process in GBM patients can be considered a predictive biomarker of response to therapy and a prognosis factor. Unfortunately, identifying signs of methylation is a non-trivial matter, often requiring expensive, time-consuming, and invasive procedures. In this work, we propose to address MGMT promoter methylation identification by analyzing Magnetic Resonance Imaging (MRI) data with a Deep Learning (DL)-based approach. In particular, we propose a Convolutional Neural Network (CNN) operating on suspicious regions of the FLAIR series, pre-selected through an unsupervised Knowledge-Based filter leveraging both FLAIR and T1-weighted series. The experiments, run on two different publicly available datasets, show that the proposed approach can obtain results comparable to (and in some cases better than) the considered competitor approach while using less than 0.29% of its parameters. Finally, we perform an eXplainable AI (XAI) analysis to take a further step toward the clinical usability of a DL-based approach for MGMT promoter detection in brain MRI.
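
A hedged illustration of an unsupervised, knowledge-based pre-selection of suspicious voxels (hyperintense on FLAIR, not hyperintense on T1); the thresholds and the rule itself are assumptions for illustration, not the filter used in the paper.

import numpy as np

flair = np.random.rand(64, 64, 64)          # placeholder FLAIR volume
t1 = np.random.rand(64, 64, 64)             # placeholder T1-weighted volume

flair_z = (flair - flair.mean()) / flair.std()
t1_z = (t1 - t1.mean()) / t1.std()
suspicious = (flair_z > 1.5) & (t1_z < 0.0)  # candidate mask handed to the CNN
print(suspicious.sum(), "candidate voxels")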

8.
Cancers (Basel) ; 14(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36230497

ABSTRACT

BACKGROUND: The axillary lymph node status (ALNS) is one of the most important prognostic factors in breast cancer (BC) patients, and it is currently evaluated by invasive procedures. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) highlights the physiological and morphological characteristics of primary tumor tissue. Deep learning approaches (DL), such as convolutional neural networks (CNNs), are able to autonomously learn the set of features directly from images for a specific task. MATERIALS AND METHODS: A total of 155 malignant BC lesions evaluated via DCE-MRI were included in the study. For each patient, the clinical data, tumor histological and MRI characteristics, and ALNS were assessed. The ALNS was considered the final label and dichotomized (LN+ (27 patients) vs. LN- (128 patients)). Based on the concept that peritumoral tissue contains valuable information about tumor aggressiveness, in this work, we analyze the contributions of six different tumor bounding options to predict the ALNS using a CNN. These bounding boxes include a single fixed-size box (SFB), a single variable-size box (SVB), a single isotropic-size box (SIB), a single lesion variable-size box (SLVB), a single lesion isotropic-size box (SLIB), and a two-dimensional slice (2DS) option. According to the characteristics of the volumes considered as inputs, three different CNNs were investigated: the SFB-NET (for the SFB), the VB-NET (for the SVB, SIB, SLVB, and SLIB), and the 2DS-NET (for the 2DS). All the experiments were run in 10-fold cross-validation. The performance of each CNN was evaluated in terms of accuracy, sensitivity, specificity, the area under the ROC curve (AUC), and Cohen's kappa coefficient (K). RESULTS: The best accuracy and AUC are obtained by the 2DS-NET (78.63% and 77.86%, respectively). The 2DS-NET also showed the highest specificity, whilst the highest sensitivity was attained by the VB-NET based on the SVB and SIB bounding options. CONCLUSION: We have demonstrated that a selective inclusion of the DCE-MRI's peritumoral tissue increases accuracy in lymph node status prediction in BC patients using CNNs as a DL approach.
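
A sketch, under stated assumptions, of two of the bounding options named above: a tight variable-size box around a lesion mask (SVB-like) and an isotropic cube centred on it (SIB-like). The toy mask and the exact cropping rules are illustrative only.

import numpy as np

mask = np.zeros((128, 128, 64), dtype=bool)
mask[40:60, 50:75, 20:35] = True                    # toy lesion

idx = np.argwhere(mask)
lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
variable_box = tuple(slice(a, b) for a, b in zip(lo, hi))   # tight, variable-size box

side = (hi - lo).max()                              # cube side = largest lesion extent
centre = (lo + hi) // 2
start = np.clip(centre - side // 2, 0, np.array(mask.shape) - side)
isotropic_box = tuple(slice(s, s + side) for s in start)    # isotropic (cubic) box
print(mask[variable_box].shape, mask[isotropic_box].shape)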

9.
Cancers (Basel) ; 15(1)2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36612033

ABSTRACT

BACKGROUND: The incidence of breast cancer metastasis has decreased over the years. However, 20-30% of patients with early breast cancer still die from metastases. The purpose of this study is to evaluate the performance of a Deep Learning Convolutional Neural Network (CNN) model in predicting the risk of distant metastasis using 3T-MRI DCE (Dynamic Contrast-Enhanced) sequences. METHODS: A total of 157 breast cancer patients who underwent staging 3T-MRI examinations from January 2011 to July 2022 were retrospectively examined. Patient data, tumor histological and MRI characteristics, and clinical and imaging follow-up examinations of up to 7 years were collected. Of the 157 MRI examinations, 39/157 patients (40 lesions) had distant metastases, while 118/157 patients (120 lesions) were negative for distant metastases (control group). We analyzed the role of the Deep Learning technique using a single variable-size bounding box (SVB) option and employed a Voxel Based (VB) NET CNN model. The CNN performance was evaluated in terms of accuracy, sensitivity, specificity, and area under the ROC curve (AUC). RESULTS: The VB-NET model obtained a sensitivity, specificity, accuracy, and AUC of 52.50%, 80.51%, 73.42%, and 68.56%, respectively. A significant correlation was found between the risk of distant metastasis and tumor size, and the expression of PgR and HER2. CONCLUSIONS: We demonstrated a currently insufficient ability of the Deep Learning approach to predict distant metastasis status in patients with BC using CNNs.
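
A minimal sketch of the reported evaluation metrics — sensitivity, specificity, accuracy and AUC — computed with scikit-learn on simulated binary predictions (not the study data).

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 160)                       # metastasis yes/no
y_prob = np.clip(y_true * 0.3 + rng.random(160) * 0.7, 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
print(sensitivity, specificity, accuracy, roc_auc_score(y_true, y_prob))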

10.
Diagnostics (Basel) ; 12(7)2022 Jun 28.
Article in English | MEDLINE | ID: mdl-35885471

ABSTRACT

The Prostate Imaging Reporting and Data System (PI-RADS) classification is based on a scale of values from 1 to 5. The value is assigned according to the probability that a finding is a malignant tumor (prostate carcinoma) and is calculated by evaluating the signal behavior in morphological, diffusion, and post-contrast sequences. A PI-RADS score of 3 indicates an equivocal likelihood of clinically significant prostate cancer, making its diagnosis very challenging. While PI-RADS values of 4 and 5 make biopsy necessary, it is very hard to establish whether to perform a biopsy or not in patients with a PI-RADS score of 3. In recent years, machine learning algorithms have been proposed for a wide range of applications in medical fields, thanks to their ability to extract hidden information and to learn from a set of data without prior explicit programming. In this paper, we evaluate machine learning approaches for detecting prostate cancer in patients with PI-RADS score 3 lesions by considering clinical-radiological characteristics. A total of 109 patients were included in this study. We collected data on body mass index (BMI), location of suspicious PI-RADS 3 lesions, serum prostate-specific antigen (PSA) level, prostate volume, PSA density, and histopathology results. The implemented classifiers exploit a patient's clinical and radiological information to generate a probability of malignancy that could help physicians in diagnostic decisions, including the need for a biopsy.
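
A hedged sketch of the kind of clinical-feature classifier described above: a scikit-learn pipeline producing a probability of malignancy for PI-RADS 3 lesions. Feature names and values are illustrative placeholders, not the study cohort, and logistic regression is only one of the possible classifiers.

import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = pd.DataFrame({
    "bmi": rng.normal(27, 4, 109),
    "psa": rng.normal(7, 3, 109),
    "prostate_volume": rng.normal(50, 15, 109),
})
X["psa_density"] = X["psa"] / X["prostate_volume"]
y = rng.integers(0, 2, 109)                     # histopathology: cancer yes/no (simulated)

clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
prob_malignant = clf.predict_proba(X)[:, 1]     # probability offered to the clinician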

11.
J Imaging ; 7(12)2021 Dec 14.
Article in English | MEDLINE | ID: mdl-34940743

ABSTRACT

The recent spread of Deep Learning (DL) in medical imaging is pushing researchers to explore its suitability for lesion segmentation in Dynamic Contrast-Enhanced Magnetic-Resonance Imaging (DCE-MRI), a complementary imaging procedure increasingly used in breast-cancer analysis. Despite some promising proposed solutions, we argue that a "naive" use of DL may have limited effectiveness, as the presence of a contrast agent results in the acquisition of multimodal 4D images requiring thorough processing before training a DL model. We thus propose a pipelined approach where each stage is intended to deal with, or to leverage, a peculiar characteristic of breast DCE-MRI data: the use of breast-masking pre-processing to remove non-breast tissues; the use of Three-Time-Points (3TP) slices to effectively highlight the contrast agent time course; the application of a motion-correction technique to deal with patients' involuntary movements; the use of a modified U-Net architecture tailored to the problem; and the introduction of a new "Eras/Epochs" training strategy to handle the unbalanced dataset while performing strong data augmentation. We compared our pipelined solution against several works from the literature. The results show that our approach outperforms the competitors by a large margin (+9.13% over our previous solution) while also showing a higher generalization ability.
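
A short sketch of the Three-Time-Points (3TP) idea mentioned above: a 4D DCE series is collapsed into a three-channel volume built from one pre-contrast and two post-contrast time points. The chosen time indices are assumptions, not the protocol used in the paper.

import numpy as np

dce = np.random.rand(10, 96, 96, 40)               # (time, x, y, z) DCE-MRI series
t0, t1, t2 = 0, 2, 7                               # pre-, early and late post-contrast
three_tp = np.stack([dce[t0], dce[t1], dce[t2]], axis=-1)  # (x, y, z, 3)
print(three_tp.shape)                              # three channels fed to the U-Net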

12.
Artif Intell Med ; 103: 101781, 2020 03.
Article in English | MEDLINE | ID: mdl-32143788

ABSTRACT

Nowadays, Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI) has proven to be a valid complementary diagnostic tool for early detection and diagnosis of breast cancer. However, without a CAD (Computer Aided Detection) system, manual DCE-MRI examination can be difficult and error-prone. The early stage of breast tissue segmentation, in a typical CAD, is crucial to increase reliability and reduce the computational effort by reducing the number of voxels to analyze and removing foreign tissues and air. In recent years, deep convolutional neural networks (CNNs) have enabled substantial improvements in the automation of many visual tasks, such as image classification and object recognition. These advances have also reached radiomics, enabling high-throughput extraction of quantitative features and resulting in a strong improvement in automatic diagnosis through medical imaging. Machine learning and, in particular, deep learning approaches are thus gaining popularity in the radiomics field for tissue segmentation. This work aims to accurately segment breast parenchyma from the air and other tissues (such as chest-wall) by applying an ensemble of deep CNNs on 3D MR data. The novelty, besides applying cutting-edge techniques in the radiomics field, is a multi-planar combination of U-Net CNNs via a suitable projection-fusing approach, enabling multi-protocol applications. The proposed approach has been validated over two different datasets for a total of 109 DCE-MRI studies with histopathologically proven lesions and two different acquisition protocols. The median Dice similarity index for the two datasets is 96.60% (±0.30%) and 95.78% (±0.51%), respectively, with p < 0.05 and 100% coverage of neoplastic lesions.
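
A hedged sketch of the multi-planar combination described above: a 2D segmenter is applied slice-wise along each orthogonal plane and the resulting probability volumes are fused by averaging. segment_slice is a hypothetical stand-in for a trained U-Net, and the fusion rule is an assumption.

import numpy as np

def segment_slice(sl):                              # placeholder for a trained 2D U-Net
    return (sl > sl.mean()).astype(float)

def segment_along_axis(vol, axis):
    slices = [segment_slice(s) for s in np.moveaxis(vol, axis, 0)]
    return np.moveaxis(np.stack(slices), 0, axis)

volume = np.random.rand(64, 64, 48)                 # placeholder 3D MR volume
fused = np.mean([segment_along_axis(volume, ax) for ax in range(3)], axis=0)
breast_mask = fused >= 0.5                          # consensus of the three planar passes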


Subjects
Breast/diagnostic imaging; Image Processing, Computer-Assisted/methods; Machine Learning; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Deep Learning; Early Detection of Cancer; Humans; Reproducibility of Results
13.
Artif Intell Med ; 97: 71-78, 2019 06.
Article in English | MEDLINE | ID: mdl-30503016

ABSTRACT

BACKGROUND AND OBJECTIVE: The indirect immunofluorescence (IIF) on HEp-2 cells is the recommended technique for the detection of antinuclear antibodies. However, it is burdened by some limitations, as it is time consuming and subjective, and it requires trained personnel. In other fields the adoption of deep neural networks has provided an effective high-level abstraction of the raw data, resulting in the ability to automatically generate optimized high-level features. METHODS: To alleviate IIF limitations, this paper presents a computer-aided diagnosis (CAD) system classifying HEp-2 fluorescence intensity: it represents each image using an Invariant Scattering Convolutional Network (Scatnet), which is locally translation invariant and stable to deformations, a characteristic that is useful for HEp-2 samples. To cope with the inter-observer discrepancies found in the dataset, we also introduce a method for gold standard computation that assigns a label and a reliability score to each HEp-2 sample on the basis of annotations provided by expert physicians. Features by Scatnet and gold standard information are then used to train a Support Vector Machine. RESULTS: The proposed CAD is tested on a new dataset of 1771 images annotated by three independent medical centers. The performance achieved by our CAD in recognizing positive, weak positive and negative samples is also compared against that obtained by two other approaches presented so far in the literature. The same system trained on this new dataset is then tested on two public datasets, namely MIVIA and I3Asel. CONCLUSIONS: The results confirm the effectiveness of our proposal, also revealing that it achieves the same performance as medical experts.
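
An illustrative sketch of a gold-standard computation in the spirit described above: each sample receives a consensus label and a reliability score from the agreement of three annotators. The majority-vote rule shown here is an assumption, not necessarily the authors' exact procedure.

import numpy as np
from collections import Counter

annotations = np.array([                     # rows: samples, columns: 3 medical centres
    ["positive", "positive", "weak"],
    ["negative", "negative", "negative"],
    ["weak", "positive", "weak"],
])

labels, reliability = [], []
for row in annotations:
    label, count = Counter(row).most_common(1)[0]
    labels.append(label)
    reliability.append(count / len(row))     # fraction of agreeing annotators
print(list(zip(labels, reliability)))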


Subjects
Diagnosis, Computer-Assisted/methods; Antibodies, Antinuclear/analysis; Cell Line; Datasets as Topic; Fluorescence; Fluorescent Antibody Technique, Indirect; Humans; Neural Networks, Computer; Reproducibility of Results
14.
PLoS One ; 13(10): e0202397, 2018.
Article in English | MEDLINE | ID: mdl-30335753

ABSTRACT

BACKGROUND AND AIM: Lung ultrasound has been used to describe common respiratory diseases both by visual and computer-assisted gray scale analysis. In the present paper, we compare both methods in assessing neonatal respiratory status, using two oxygenation indexes as reference standards. PATIENTS AND METHODS: Neonates admitted to the NICU for respiratory distress were enrolled. Two neonatologists not attending the patients performed a lung scan, built a single-frame database and rated the images with a standardized score. The same dataset was processed using gray scale analysis implemented with textural features and machine learning analysis. Both the oxygenation ratio (PaO2/FiO2) and the alveolar-arterial oxygen gradient (A-a) were kept as reference standards. RESULTS: Seventy-five neonates with different respiratory status were enrolled in the study and a dataset of 600 ultrasound frames was built. Visual assessment of respiratory status correlated significantly with PaO2/FiO2 (r = -0.55; p<0.0001) and the A-a gradient (r = 0.59; p<0.0001) with a strong interobserver agreement (K = 0.91). A significant correlation was also found between both oxygenation indexes and the gray scale analysis of lung ultrasound scans using regions of interest of 50K (r = -0.42, p<0.002 for PaO2/FiO2; r = 0.46, p<0.001 for A-a) and 100K (r = -0.35, p<0.01 for PaO2/FiO2; r = 0.58, p<0.0001 for A-a) pixels. CONCLUSIONS: A semi-quantitative estimate of the degree of neonatal respiratory distress was demonstrated both by a validated scoring system and by computer-assisted analysis of the ultrasound scans. These data may help to implement point-of-care ultrasound diagnostics in the NICU.
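
A small sketch of the computer-assisted analysis above: the mean gray level of a region of interest correlated with an oxygenation index via Pearson's r, on simulated values rather than the study data.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
mean_gray = rng.uniform(40, 200, 75)               # one ROI gray-level value per neonate
pao2_fio2 = 500 - 1.5 * mean_gray + rng.normal(0, 40, 75)  # simulated oxygenation ratio

r, p = pearsonr(mean_gray, pao2_fio2)
print(f"r = {r:.2f}, p = {p:.4f}")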


Subjects
Lung/diagnostic imaging; Respiratory Distress Syndrome, Newborn/diagnosis; Ultrasonography; Blood Gas Analysis; Female; Humans; Infant, Newborn; Lung/physiopathology; Male; Oxygen/metabolism; Respiratory Distress Syndrome, Newborn/diagnostic imaging; Respiratory Distress Syndrome, Newborn/physiopathology
15.
Front Oncol ; 8: 294, 2018.
Article in English | MEDLINE | ID: mdl-30175071

ABSTRACT

Radiomics leverages existing image datasets to provide non-visible data extraction via image post-processing, with the aim of identifying prognostic and predictive imaging features at a sub-region-of-interest level. However, the application of radiomics is hampered by several challenges, such as a lack of image acquisition/analysis method standardization, impeding generalizability. As of yet, radiomics remains intriguing, but not clinically validated. We aimed to test the feasibility of a non-custom-constructed platform for disseminating existing large, standardized databases across institutions for promoting radiomics studies. Hence, the University of Texas MD Anderson Cancer Center organized two public radiomics challenges in the head and neck radiation oncology domain. This was done in conjunction with a MICCAI 2016 satellite symposium using Kaggle-in-Class, a machine-learning and predictive analytics platform. We drew on clinical data matched to radiomics data derived from diagnostic contrast-enhanced computed tomography (CECT) images in a dataset of 315 patients with oropharyngeal cancer. Contestants were tasked to develop models for (i) classifying patients according to their human papillomavirus status, or (ii) predicting local tumor recurrence, following radiotherapy. Data were split into training and test sets. Seventeen teams from various professional domains participated in one or both of the challenges. This review paper is based on the contestants' feedback, provided by only 8 contestants (47%). Six contestants (75%) incorporated extracted radiomics features into their predictive model building, either alone (n = 5; 62.5%), as was the case with the winner of the "HPV" challenge, or in conjunction with matched clinical attributes (n = 2; 25%). Only 23% of contestants, notably including the winner of the "local recurrence" challenge, built their model relying solely on clinical data. In addition to the value of the integration of machine learning into clinical decision-making, our experience sheds light on challenges in sharing and directing existing datasets toward clinical applications of radiomics, including the hyper-dimensionality of the clinical/imaging data attributes. Our experience may help guide researchers to create a framework for sharing and reuse of already published data that we believe will ultimately accelerate the pace of clinical applications of radiomics, in both challenge and clinical settings.

16.
Eur Radiol Exp ; 1(1): 10, 2017.
Article in English | MEDLINE | ID: mdl-29708202

ABSTRACT

BACKGROUND: In breast magnetic resonance imaging (MRI) analysis for lesion detection and classification, radiologists agree that both morphological and dynamic features are important to differentiate benign from malignant lesions. We propose a multiple classifier system (MCS) to classify breast lesions on dynamic contrast-enhanced MRI (DCE-MRI) combining morphological features and dynamic information. METHODS: The proposed MCS combines the results of two classifiers trained with dynamic and morphological features separately. Twenty-six malignant and 22 benign breast lesions, histologically proven, were analysed. The lesions were subdivided into two groups: training set (14 benign and 18 malignant) and testing set (8 benign and 8 malignant). Volumes of interest were extracted both manually and automatically. We initially considered a feature set including 54 morphological features and 98 dynamic features. These were reduced by means of a selection procedure to delete redundant parameters. The performance of each of the two classifiers and of the overall MCS was compared with pathological classification. RESULTS: We obtained an accuracy of 91.7% on the testing set using automatic segmentation and combining the best classifier for morphological features (decision tree) and for dynamic information (Bayesian classifier). With implementation of the MCS, an increase in accuracy of 12.5% and of 31.3% was obtained compared with the accuracy of the Bayesian classifier tested with dynamic features and with that of the decision tree tested with morphological parameters, respectively. CONCLUSIONS: An MCS can optimise the accuracy for breast lesion classification combining morphological features and dynamic information.
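
A hedged sketch of a multiple classifier system in the spirit described above: a decision tree on morphological features and a Bayesian (naive Bayes) classifier on dynamic features, combined by averaging their predicted probabilities. Feature matrices are simulated and the combination rule is an assumption, not the exact MCS of the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
X_morph, X_dyn = rng.random((48, 10)), rng.random((48, 8))   # two feature views
y = rng.integers(0, 2, 48)                                   # benign vs malignant

tree = DecisionTreeClassifier(max_depth=3).fit(X_morph, y)   # morphological classifier
bayes = GaussianNB().fit(X_dyn, y)                           # dynamic-feature classifier

p_combined = (tree.predict_proba(X_morph) + bayes.predict_proba(X_dyn)) / 2
y_mcs = p_combined.argmax(axis=1)                            # decision of the combined system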

17.
Comput Methods Programs Biomed ; 121(3): 127-36, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26143963

ABSTRACT

Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to Heart Rate Variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work is aimed at analysing the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we have also considered the influence of training set size, classifier, classifier ensemble, as well as the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects within the Physionet open access database. Public domain software was used for fiducial point detection. Results suggested that QT correction is indeed required to improve the performance. However, there is no clear choice among the seven explored approaches for QT correction (identification rate between 0.97 and 0.99). Multilayer Perceptron and Support Vector Machine classifiers seemed to have better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No comparably strong influence of the training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
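
A minimal example of two population-model QT corrections of the kind compared above (Bazett and Fridericia); the paper evaluates seven such formulas, which are not all reproduced here, and the interval values below are placeholders.

import numpy as np

qt = np.array([0.38, 0.40, 0.36])             # QT intervals, seconds
rr = np.array([0.80, 1.00, 0.70])             # preceding RR intervals, seconds

qtc_bazett = qt / np.sqrt(rr)                 # Bazett: QTc = QT / RR^(1/2)
qtc_fridericia = qt / np.cbrt(rr)             # Fridericia: QTc = QT / RR^(1/3)
print(qtc_bazett, qtc_fridericia)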


Subjects
Electrocardiography/methods; Heart/physiology; Heart Rate; Humans
18.
IEEE Trans Pattern Anal Mach Intell ; 26(10): 1367-72, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15641723

ABSTRACT

We present an algorithm for graph isomorphism and subgraph isomorphism suited for dealing with large graphs. A first version of the algorithm was presented in a previous paper, where we examined its performance for the isomorphism of small and medium-size graphs. The algorithm is improved here to reduce its spatial complexity and to achieve a better performance on large graphs; its features are analyzed in detail, with special reference to time and memory requirements. The results of tests performed on a publicly available database of synthetically generated graphs and on graphs from a real application dealing with technical drawings are presented, confirming the effectiveness of the approach, especially when working with large graphs.
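
For experimentation, NetworkX ships a matcher based on the VF2 algorithm described here; a brief usage sketch of graph and subgraph isomorphism checking (not the authors' original implementation):

import networkx as nx
from networkx.algorithms import isomorphism

G1 = nx.cycle_graph(5)
G2 = nx.relabel_nodes(nx.cycle_graph(5), {i: chr(97 + i) for i in range(5)})
G3 = nx.path_graph(3)                          # a 3-node path occurs inside a 5-cycle

gm = isomorphism.GraphMatcher(G1, G2)
print(gm.is_isomorphic())                      # True: same structure, relabelled nodes
print(isomorphism.GraphMatcher(G1, G3).subgraph_is_isomorphic())  # True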


Subjects
Algorithms; Artificial Intelligence; Computer Graphics; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Cluster Analysis; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
19.
J Healthc Eng ; 4(4): 465-504, 2013.
Article in English | MEDLINE | ID: mdl-24287428

ABSTRACT

Computer systems for Electrocardiogram (ECG) analysis support the clinician in tedious tasks (e.g., Holter ECG monitored in Intensive Care Units) or in the prompt detection of dangerous events (e.g., ventricular fibrillation). Together with clinical applications (arrhythmia detection and heart rate variability analysis), ECG is currently being investigated in biometrics (human identification), an emerging area receiving increasing attention. Methodologies for clinical applications can have both differences and similarities with respect to biometrics. This paper reviews methods of ECG processing from a pattern recognition perspective. In particular, we focus on features commonly used for heartbeat classification. Considering the vast literature in the field and the limited space of this review, we dedicate a detailed discussion only to a few classifiers (Artificial Neural Networks and Support Vector Machines) because of their popularity; however, other techniques such as Hidden Markov Models and Kalman Filtering will also be mentioned.
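
A simple sketch of one family of features discussed in the review: RR-interval features derived from detected R peaks, with scipy's generic peak detector standing in for a real QRS detector and a synthetic signal standing in for an ECG recording.

import numpy as np
from scipy.signal import find_peaks

fs = 250                                        # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 63    # sharp periodic peaks, ~72 beats/min

peaks, _ = find_peaks(ecg_like, height=0.5, distance=fs // 3)
rr = np.diff(peaks) / fs                        # RR intervals, seconds
features = {"mean_rr": rr.mean(), "sdnn": rr.std(ddof=1), "heart_rate": 60 / rr.mean()}
print(features)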


Subjects
Electrocardiography/methods; Neural Networks, Computer; Pattern Recognition, Automated; Support Vector Machine; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/physiopathology; Heart Rate/physiology; Humans