Results 1 - 20 of 43
1.
Med Phys ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38758744

ABSTRACT

BACKGROUND: In laparoscopic liver surgery, accurately predicting the displacement of key intrahepatic anatomical structures is crucial for informing the surgeon's intraoperative decision-making. However, because of the constrained surgical perspective, only a partial surface of the liver is typically visible, so non-rigid volume-to-surface registration methods become essential. Traditional registration methods, however, lack the necessary accuracy and cannot meet real-time requirements. PURPOSE: To achieve high-precision liver registration from only partial surface information and estimate the displacement of internal liver tissues in real time. METHODS: We propose a novel neural network architecture tailored for real-time non-rigid liver volume-to-surface registration. The network uses a voxel-based method, integrating sparse convolution with the newly proposed points-of-interest (POI) linear attention module, which computes attention only on the previously extracted POI. Additionally, we identified the most suitable normalization method, RMSINorm. RESULTS: We evaluated our proposed network and competing networks on a dataset generated from real liver models and on two real datasets. Our method achieves an average error of 4.23 mm and a mean frame rate of 65.4 fps on the generated dataset, and an average error of 8.29 mm on the human breathing-motion dataset. CONCLUSIONS: Our network outperforms CNN-based networks and other attention networks in both accuracy and inference speed.
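The abstract names RMSINorm as the normalization it settled on but gives no formula. As a rough sketch of the family it belongs to, here is standard RMSNorm in NumPy; the assumption that RMSINorm builds on this form is ours, not the paper's:

```python
import numpy as np

def rms_norm(x, gamma=None, eps=1e-6):
    """Root-mean-square layer normalization.

    Scales each feature vector by the reciprocal of its RMS; unlike
    LayerNorm it does not subtract the mean. The paper's RMSINorm
    variant is not specified in the abstract, so this is only the
    standard RMSNorm it presumably builds on.
    """
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    y = x / rms
    if gamma is not None:
        y = y * gamma  # optional learnable per-feature gain
    return y

x = np.array([[3.0, 4.0]])  # RMS = sqrt((9 + 16) / 2) = sqrt(12.5)
y = rms_norm(x)
```

After normalization the mean squared activation is 1, which is what stabilizes training regardless of input scale.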

2.
J Imaging Inform Med ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587770

ABSTRACT

Uptake segmentation and classification on PSMA PET/CT are important for automating whole-body tumor burden determination. We developed and evaluated an automated deep learning (DL)-based framework that segments and classifies uptake on PSMA PET/CT. We identified 193 [18F]DCFPyL PET/CT scans of patients with biochemically recurrent prostate cancer from two institutions: 137 scans for training and internal testing, and 56 scans from another institution for external testing. Two radiologists segmented and labelled foci as suspicious or non-suspicious for malignancy. A DL-based segmentation model was developed with two independent CNNs, and anatomical prior guidance was applied to make the framework focus on PSMA-avid lesions. Segmentation performance was evaluated by Dice, IoU, precision, and recall. The classification model was constructed as a multi-modal decision fusion framework and evaluated by accuracy, AUC, F1 score, precision, and recall. Automatic segmentation of suspicious lesions improved under prior guidance, with mean Dice, IoU, precision, and recall of 0.700, 0.566, 0.809, and 0.660 on the internal test set and 0.680, 0.548, 0.749, and 0.740 on the external test set. Our multi-modal decision fusion framework outperformed single-modal and multi-modal CNNs in distinguishing suspicious from non-suspicious foci, with accuracy, AUC, F1 score, precision, and recall of 0.764, 0.863, 0.844, 0.841, and 0.847 on the internal test set and 0.796, 0.851, 0.865, 0.814, and 0.923 on the external test set. DL-based lesion segmentation on PSMA PET is facilitated by our anatomical prior guidance strategy, and our classification framework differentiates suspicious from non-suspicious foci with good accuracy.
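For readers unfamiliar with the four segmentation metrics quoted above (Dice, IoU, precision, recall), a minimal NumPy sketch of how they are computed from a pair of binary masks:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Voxel-wise overlap metrics for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

pred = np.array([1, 1, 0, 0, 1])   # toy 1-D "masks"; real ones are 3-D
truth = np.array([1, 0, 0, 1, 1])
m = overlap_metrics(pred, truth)
```

Dice weights true positives twice, so it is always at least as large as IoU on the same masks, which is why papers typically report both.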

3.
Comput Methods Programs Biomed ; 250: 108125, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38631130

ABSTRACT

BACKGROUND AND OBJECTIVES: Automatic tumor segmentation plays a crucial role in cancer diagnosis and treatment planning. Computed tomography (CT) and positron emission tomography (PET) are widely employed for their complementary medical information. However, existing methods ignore bilateral cross-modal interaction of global features during feature extraction and underutilize multi-stage tumor boundary features. METHODS: To address these limitations, we propose a dual-branch tumor segmentation network based on global cross-modal interaction and boundary guidance in PET/CT images (DGCBG-Net). DGCBG-Net consists of 1) a global cross-modal interaction module that extracts global contextual information from PET/CT images and promotes bilateral cross-modal interaction of global features; 2) a shared multi-path downsampling module that learns complementary features from the PET/CT modalities to mitigate the impact of misleading features and reduce the loss of discriminative features during downsampling; and 3) a boundary prior-guided branch that extracts potential boundary features from CT images at multiple stages, assisting the semantic segmentation branch in improving the accuracy of tumor boundary segmentation. RESULTS: Extensive experiments were conducted on the STS and Hecktor 2022 datasets. The average Dice scores of DGCBG-Net on the two datasets are 80.33% and 79.29%, with average IoU scores of 67.64% and 70.18%; it outperformed the current state-of-the-art methods by 1.77% in Dice and 2.12% in IoU. CONCLUSIONS: Extensive experimental results demonstrate that DGCBG-Net outperforms existing segmentation methods and is competitive with the state of the art.


Subject(s)
Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Neoplasms/diagnostic imaging , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
4.
Comput Biol Med ; 172: 108246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38471350

ABSTRACT

Diabetic retinopathy (DR) is a severe ocular complication of diabetes that can lead to vision damage and even blindness. Currently, traditional deep convolutional neural networks (CNNs) used for DR grading face two primary challenges: (1) insensitivity to minority classes due to imbalanced data distribution, and (2) neglect of the relationship between the left and right eyes, since the fundus image of only one eye is used for training without differentiating between them. To tackle these challenges, we propose the DRGCNN (DR Grading CNN) model. To address the imbalanced data distribution, our model adopts a more balanced strategy by allocating an equal number of channels to the feature maps representing each DR category. Furthermore, we introduce a CAM-EfficientNetV2-M encoder dedicated to encoding input retinal fundus images for feature vector generation. Our encoder has 52.88 M parameters, fewer than RegNet_y_16gf (80.57 M) and EfficientNetB7 (63.79 M), yet it achieves a higher kappa value. Additionally, to take advantage of the binocular relationship, we input fundus images from both of the patient's eyes into the network for feature fusion during the training phase. We achieved a kappa value of 86.62% on the EyePACS dataset and 86.16% on the Messidor-2 dataset. Experimental results on these representative diabetic retinopathy datasets demonstrate the exceptional performance of our DRGCNN model, establishing it as a highly competitive intelligent classification model in the field of DR. The code is available at https://github.com/Fat-Hai/DRGCNN.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Neural Networks, Computer , Fundus Oculi
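The kappa values quoted in the abstract above are the usual agreement metric for ordinal DR grades. The paper does not print its formula; a quadratic-weighted Cohen's kappa, the standard choice for five-grade DR scoring (assumed here), can be sketched as:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic-weighted Cohen's kappa for ordinal labels.

    1.0 means perfect agreement, 0.0 chance-level agreement; far-apart
    grade confusions are penalized quadratically more than near misses.
    """
    O = np.zeros((n_classes, n_classes))       # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # expected under independence
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1 - (W * O).sum() / (W * E).sum()

y_true = [0, 1, 2, 3, 4, 2]
k_perfect = quadratic_weighted_kappa(y_true, y_true, 5)      # agreement -> 1
k_offbyone = quadratic_weighted_kappa(y_true, [0, 1, 2, 3, 4, 3], 5)
```

The quadratic weights are what make a grade-0 vs grade-4 confusion cost sixteen times a grade-2 vs grade-3 confusion.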
5.
J Imaging Inform Med ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514595

ABSTRACT

Deep learning models have demonstrated great potential in medical imaging but are limited by the expensive, large volume of annotations required. To address this, we compared active learning strategies that train models on subsets of the most informative images, using real-world clinical datasets for brain tumor segmentation, and propose a framework that minimizes the data needed while maintaining performance. In total, 638 multi-institutional brain tumor magnetic resonance imaging scans were used to train three-dimensional U-Net models and compare active learning strategies. Uncertainty estimation techniques, including Bayesian estimation with dropout, bootstrapping, and margin sampling, were compared to random query; strategies to avoid annotating similar images were also considered. We determined the minimum data necessary to achieve performance equivalent to the model trained on the full dataset (α = 0.05). Bayesian approximation with dropout at training and testing matched the full-data model (target) with around 30% of the training data needed by random query to achieve target performance (p = 0.018). Annotation redundancy restriction techniques reduced the training data needed by random query to achieve target performance by a further 20%. We investigated various active learning strategies to minimize the annotation burden for three-dimensional brain tumor segmentation; dropout uncertainty estimation achieved target performance with the least annotated data.

6.
Comput Methods Programs Biomed ; 247: 108114, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38447315

ABSTRACT

BACKGROUND AND OBJECTIVE: Recurrent major depressive disorder (rMDD) has a high recurrence rate, and symptoms often worsen with each episode. Classifying rMDD using functional magnetic resonance imaging (fMRI) can enhance understanding of brain activity and aid the diagnosis and treatment of this disorder. METHODS: We developed a Residual Denoising Autoencoder (Res-DAE) framework for the classification of rMDD. Functional connectivity (FC) was extracted from fMRI data as features. The framework addresses site heterogeneity by employing the ComBat method to harmonize feature distribution differences. A feature selection method based on Fisher scores was used to reduce redundant information, and a data augmentation strategy using a Synthetic Minority Over-sampling Technique algorithm based on the Extended Frobenius Norm measure was incorporated to increase the sample size. Furthermore, a residual module was integrated into the autoencoder network to preserve important features and improve classification accuracy. RESULTS: We tested our framework on a large-scale, multisite fMRI dataset comprising 189 rMDD patients and 427 healthy controls. The Res-DAE achieved an average accuracy of 75.1% (sensitivity = 69%, specificity = 77.8%) in cross-validation, outperforming comparison methods. On a larger dataset that also includes first-episode depression (832 MDD patients and 779 healthy controls), the accuracy reached 70%. CONCLUSIONS: We propose a deep learning framework that can effectively classify rMDD and identify the altered FC associated with it. Our study may reveal changes in brain function associated with rMDD and assist in its diagnosis and treatment.


Subject(s)
Depressive Disorder, Major , Humans , Depressive Disorder, Major/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping , Algorithms , Brain/diagnostic imaging
7.
Hum Brain Mapp ; 45(1): e26542, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38088473

ABSTRACT

Major depressive disorder (MDD) is one of the most common psychiatric disorders worldwide, with a high recurrence rate. Identifying MDD patients, particularly those with recurrent episodes, using resting-state fMRI may reveal the relationship between MDD and brain function. We propose a Transformer encoder model that uses functional connectivity extracted from large-scale multisite rs-fMRI datasets to classify MDD patients and healthy controls (HC). The model discards the Transformer's decoder, reducing complexity and the number of parameters to suit the limited sample size; it requires no complex feature selection process and achieves end-to-end classification. Additionally, our model is suitable for classifying data combined from multiple brain atlases and has an optional unsupervised pre-training module to acquire good initial parameters and speed up training. The model's performance was tested on a large-scale multisite dataset, and brain regions affected by MDD were identified using the Grad-CAM method. Under five-fold cross-validation, our model achieved an average classification accuracy of 68.61% on a dataset of 1611 samples, and 78.11% on the selected recurrent MDD dataset. Abnormalities were detected in the frontal gyri and cerebral cortex of MDD patients in both datasets, and the brain regions identified in the recurrent MDD dataset generally contributed more to the model's performance.


Subject(s)
Depressive Disorder, Major , Humans , Depressive Disorder, Major/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Cerebral Cortex , Brain Mapping/methods
8.
Sensors (Basel) ; 23(23)2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38067860

ABSTRACT

Websites can improve their security and protect against harmful Internet attacks by incorporating CAPTCHA verification, which helps distinguish human users from robots. Among the various types of CAPTCHA, the most prevalent involves text-based challenges intentionally designed to be easily understood by humans while remaining difficult for machines to recognize. Nevertheless, owing to significant advances in deep learning, constructing convolutional neural network (CNN)-based models capable of effectively recognizing text-based CAPTCHAs has become considerably simpler. We present a CAPTCHA recognition method that creates multiple duplicates of the original CAPTCHA image and generates separate binary images encoding the exact locations of each group of CAPTCHA characters. These replicated images are then fed into a well-trained CNN, one after another, to obtain the final output characters. The model has a straightforward architecture with a relatively small storage footprint and eliminates the need to segment the CAPTCHA into individual characters. Following training and testing of the proposed CNN model, the experimental results demonstrate its effectiveness in accurately recognizing CAPTCHA characters.

9.
Comput Biol Med ; 166: 107466, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37742417

ABSTRACT

OBJECTIVE: To promote research on knowledge extraction and knowledge graph construction from chest discomfort medical cases in Traditional Chinese Medicine (TCM), this paper focuses on their named entity recognition (NER). The recognition task includes six entity types: "syndrome", "symptom", "etiology and pathogenesis", "treatment method", "medication", and "prescription". METHODS: We annotated data in the BIO (B-begin, I-inside, O-outside) manner. To suit the characteristics of medical case texts, we propose a custom dictionary method for word segmentation that can be dynamically updated. To compare the effect of the method on the experimental results, we applied it in the BiLSTM-CRF model and the IDCNN-CRF model, respectively. RESULTS: The models using custom dictionaries (BiLSTM-CRF-Loaded and IDCNN-CRF-Loaded) outperformed the models without them (BiLSTM-CRF and IDCNN-CRF) in accuracy, precision, recall, and F1 score. The BiLSTM-CRF-Loaded model yielded F1 scores of 92.59% and 93.23% on the test and validation sets, respectively, surpassing the BERT-BiLSTM-CRF model by 3.59% and 4.87%. Furthermore, when analyzing the results for the six entity categories separately, we found that custom dictionaries have a marked impact, with "etiology and pathogenesis" and "syndrome" showing the most noticeable improvements. Comparing F1 scores, the entity category "medication" performed best, reaching 96.04% and 96.48% on the test and validation sets, respectively. CONCLUSION: We propose a word segmentation method based on a dynamically updated custom dictionary. Combined with the BiLSTM-CRF and IDCNN-CRF models, it enhances their ability to recognize domain-specific terms and new entities, and it can be widely applied to complex text structures and texts containing domain-specific terms.
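The BIO scheme mentioned above assigns B- to the first token of an entity, I- to the remaining tokens, and O elsewhere. A minimal sketch of converting token-level entity spans to BIO tags (the sentence and span below are hypothetical English stand-ins; the paper annotates Chinese medical case text):

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end, type) entity spans to BIO tags.

    `end` is exclusive. B- marks the first token of an entity, I- the
    rest, O everything else -- the scheme the paper's annotators used.
    """
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

# Hypothetical example; "prescription" is one of the paper's six types.
tokens = ["patient", "given", "Gua", "Lou", "decoction"]
spans = [(2, 5, "prescription")]
tags = spans_to_bio(tokens, spans)
```

The B-/I- distinction is what lets a sequence labeller separate two adjacent entities of the same type.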

10.
Comput Methods Programs Biomed ; 242: 107783, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716220

ABSTRACT

BACKGROUND: With the outbreak and spread of COVID-19 worldwide, limited ventilators fail to meet the surging demand for mechanical ventilation in the ICU. Clinical models based on structured data that have been proposed to rationalize ventilator allocation often suffer from poor extensibility due to fixed fields and laborious normalization processes. The advent of pre-trained models and downstream fine-tuning methods allows large amounts of unstructured clinical text to be learned for different tasks, but the hardware requirements of large-scale pre-trained models and the lack of purpose-built downstream networks have limited their adoption in the clinical domain. OBJECTIVE: In this study, an innovative task-driven predictive model architecture is proposed, and a Task-driven Gated Recurrent Attention Pool model (TGRA-P) is developed based on it. TGRA-P predicts early mortality risk from the clinical notes of mechanically ventilated ICU patients, assisting clinicians in diagnosis and decision-making. METHODS: Specifically, a Task-Specific Embedding Module is proposed to fine-tune the embedding with task labels and save it as static files for downstream calls; this serves the task better and prevents GPU overload. The Gated Recurrent Attention Unit (GRA) is proposed to further strengthen the dependencies between preceding and following parts of the text sequence with fewer parameters. In addition, we propose a Residual Max Pool (RMP) that incorporates all word-level features of the notes for prediction, avoiding the neglect of individual words common in text classification. Finally, a fully connected decoding network is used as a classifier to predict mortality risk.
RESULTS: The proposed model shows very promising results, with an AUROC of 0.8245±0.0096, an AUPRC of 0.7532±0.0115, an accuracy of 0.7422±0.0028, and an F1-score of 0.6612±0.0059 for 90-day mortality prediction using clinical notes of ICU mechanically ventilated patients in the MIMIC-III dataset, all better than previous studies. Moreover, the superiority of the proposed model over other baselines is statistically validated through calculated Cohen's d effect sizes. CONCLUSION: The experimental results show that TGRA-P, based on the innovative task-driven prognostic architecture, obtains state-of-the-art performance. In future work, we will build upon the provided code and investigate its applicability to different datasets. The model balances performance and efficiency, reducing the cost of early mortality risk prediction while helping physicians make timely clinical interventions and decisions. By incorporating textual records that are challenging for clinicians to utilize, the model serves as a valuable complement to physicians' judgment, enhancing their decision-making.


Subject(s)
COVID-19 , Respiration, Artificial , Humans , Electronic Health Records , Intensive Care Units
11.
Comput Biol Med ; 165: 107396, 2023 10.
Article in English | MEDLINE | ID: mdl-37703717

ABSTRACT

Structural magnetic resonance imaging (sMRI), which can reflect cerebral atrophy, plays an important role in the early detection of Alzheimer's disease (AD). However, the information provided by analyzing only the morphological changes in sMRI is relatively limited, and assessment of the atrophy degree is subjective. It is therefore meaningful to combine sMRI with other clinical information to acquire complementary diagnostic information and achieve more accurate AD classification. Nevertheless, fusing such multi-modal data effectively remains challenging. In this paper, we propose DE-JANet, a unified AD classification network that integrates sMRI image data with non-image clinical data, such as age and Mini-Mental State Examination (MMSE) score, for more effective multi-modal analysis. DE-JANet consists of three key components: (1) a dual encoder module that extracts low-level features from image and non-image data according to modality-specific encoding regularities, (2) a joint attention module that fuses the multi-modal features, and (3) a token classification module that performs AD-related classification on the fused features. DE-JANet is evaluated on the ADNI dataset, achieving mean accuracies of 0.9722 for AD classification and 0.9538 for mild cognitive impairment (MCI) classification, superior to existing methods and indicating advanced performance on AD-related diagnosis tasks.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Humans , Alzheimer Disease/diagnostic imaging , Atrophy , Cognitive Dysfunction/diagnostic imaging
12.
J Affect Disord ; 339: 511-519, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37467800

ABSTRACT

BACKGROUND: Major depressive disorder (MDD) has a high rate of recurrence. Identifying patients with recurrent MDD is advantageous for adopting prevention strategies that reduce the disabling effects of depression. METHODS: We propose a novel feature extraction method that includes dynamic temporal information and feeds the extracted features into a graph convolutional network (GCN) to classify recurrent MDD. We extract average time series using an atlas from resting-state functional magnetic resonance imaging (fMRI) data. Pearson correlation was calculated between brain region sequences at each time point, representing the functional connectivity at that time point. The connectivity is used as the adjacency matrix and the brain region sequences as node features for a GCN model that classifies recurrent MDD. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to analyze the contribution of different brain regions to the model; regions contributing more to classification were considered regions with altered brain function in recurrent MDD. RESULTS: We achieved a classification accuracy of 75.8% for recurrent MDD on the multi-site Rest-meta-MDD dataset and identified the brain regions closely related to recurrent MDD. LIMITATIONS: The pre-processing stage may affect the final classification performance, and harmonizing site differences may improve it. CONCLUSION: The experimental results demonstrate that the proposed method can effectively classify recurrent MDD and extract dynamic changes in brain activity patterns in recurrent depression.


Subject(s)
Depressive Disorder, Major , Humans , Depressive Disorder, Major/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Time Factors , Brain/diagnostic imaging
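The entry above builds a per-time-point Pearson-correlation adjacency matrix for its GCN. The abstract does not give the exact windowing, so the common sliding-window recipe (an assumption on our part) looks like:

```python
import numpy as np

def dynamic_fc(ts, win=30, step=5):
    """Sliding-window dynamic functional connectivity.

    ts: (T, R) array of T fMRI time points for R brain regions.
    Returns an (n_windows, R, R) stack of Pearson-correlation
    matrices, each usable as the adjacency matrix of one GCN graph.
    """
    T, R = ts.shape
    mats = []
    for s in range(0, T - win + 1, step):
        mats.append(np.corrcoef(ts[s:s + win].T))  # (R, R) per window
    return np.stack(mats)

rng = np.random.default_rng(1)
ts = rng.normal(size=(120, 8))  # toy data: 120 volumes, 8 regions
A = dynamic_fc(ts)              # 19 windows of 8x8 connectivity
```

Each slice of `A` is symmetric with a unit diagonal, the standard form expected by graph convolution layers after optional thresholding.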
13.
IEEE J Biomed Health Inform ; 27(8): 4052-4061, 2023 08.
Article in English | MEDLINE | ID: mdl-37204947

ABSTRACT

Segmentation of the liver from CT scans is essential in computer-aided liver disease diagnosis and treatment. However, 2D CNNs ignore the 3D context, while 3D CNNs suffer from numerous learnable parameters and high computational cost. To overcome this limitation, we propose an Attentive Context-Enhanced Network (AC-E Network) consisting of 1) an attentive context encoding module (ACEM) that can be integrated into a 2D backbone to extract 3D context without a sharp increase in the number of learnable parameters, and 2) a dual segmentation branch with a complementary loss that makes the network attend to both the liver region and its boundary, yielding highly accurate segmented liver surfaces. Extensive experiments on the LiTS and 3D-IRCADb datasets demonstrate that our method outperforms existing approaches and is competitive with the state-of-the-art 2D-3D hybrid method in balancing segmentation precision against the number of model parameters.


Subject(s)
Abdomen , Liver Neoplasms , Humans , Tomography, X-Ray Computed/methods , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
14.
Cerebellum ; 22(5): 781-789, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35933493

ABSTRACT

Major depressive disorder (MDD) is a serious and widespread psychiatric disorder. Previous studies mainly focused on cerebral functional connectivity, with relatively small sample sizes; moreover, functional connectivity is undirected, and there is increasing evidence that the cerebellum is also involved in emotion and cognitive processing and contributes substantially to the symptomology and pathology of depression. Therefore, we used a large sample of resting-state functional magnetic resonance imaging (rs-fMRI) data to investigate altered effective connectivity (EC) between the cerebellum and the cerebral cortex in patients with MDD. From a data-driven perspective, we used two different atlases to divide the whole brain into regions and analyzed the alterations of EC and EC networks in the MDD group compared with the healthy control group (HCs). Compared with HCs, MDD patients showed significantly altered EC in the cerebellum-neocortex and cerebellum-basal ganglia circuits, implying that the cerebellum may be a potential biomarker of depressive disorders. The alterations of EC brain networks in MDD patients may provide new insights into the pathophysiological mechanisms of depression.


Subject(s)
Cerebrum , Depressive Disorder, Major , Humans , Depressive Disorder, Major/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain , Cerebrum/diagnostic imaging , Cerebellum/diagnostic imaging
15.
Diabetes Metab Syndr ; 16(9): 102589, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35995029

ABSTRACT

BACKGROUND AND AIMS: Computer-aided diagnosis and prognosis rely heavily on fully automatic segmentation of abdominal fat tissue in tomography images. The identification of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in abdominal fat faces two main challenges: (1) the great difficulty of multi-stage semantic segmentation (VAT and SAT), and (2) the subtle differences due to the high similarity of the two classes and the complicated distribution of VAT. METHODS: We built an automated convolutional neural network (A-CNN) for segmenting abdominal adipose tissue (AAT) from radiology images. RESULTS: We developed a point-to-point design for the A-CNN learning process, wherein the representative features are learned together with a hybrid feature extraction technique. We tested the proposed model on a CT dataset and compared it with existing CNN models. Our approach, A-CNN, outperformed existing deep learning methods in segmentation outcomes, notably on the AAT segment. CONCLUSIONS: The proposed method is extremely fast, performs remarkably on limited-scale low-dose CT scans, and provides an efficient computer-aided tool for segmentation of AAT in the clinic.


Subject(s)
Abdominal Fat , Neural Networks, Computer , Humans , Abdominal Fat/diagnostic imaging , Intra-Abdominal Fat/diagnostic imaging , Subcutaneous Fat/diagnostic imaging , Tomography, X-Ray Computed
16.
Behav Brain Res ; 435: 114058, 2022 10 28.
Article in English | MEDLINE | ID: mdl-35995263

ABSTRACT

BACKGROUND: The current diagnosis of major depressive disorder (MDD) is based mainly on the patient's self-report and clinical symptoms. Machine learning methods can identify MDD using resting-state functional magnetic resonance imaging (rs-fMRI) data. However, because of large site differences in multisite rs-fMRI data and the difficulty of sample collection, most machine learning studies use small rs-fMRI datasets to detect alterations in functional connectivity (FC) or network attributes (NA), which may affect the reliability of the results. METHODS: Multisite rs-fMRI data were used to increase the sample size, and FC and NA features were extracted from 1611 rs-fMRI scans (832 patients with MDD (MDDs) and 779 healthy controls (HCs)). The ComBat algorithm was used to harmonize data variances caused by the multisite effect, and multivariate linear regression was used to remove age and sex covariates. A two-sample t-test and wrapper-based feature selection methods (support vector machine recursive feature elimination with cross-validation (SVM-RFECV) and LightGBM's "feature_importances_" function) were used to select important features. The Shapley additive explanations (SHAP) method was used to assign feature contributions in the best-performing model. RESULTS: The best result was obtained from a LinearSVM model trained with the 136 important features selected by SVM-RFECV. In nested five-fold cross-validation (an outer and an inner loop of five-fold cross-validation) on the 1611 scans, the model achieved an accuracy, sensitivity, and specificity of 68.90%, 71.75%, and 65.84%, respectively. The 136 important features were also tested on a small dataset and obtained excellent classification results after balancing the ratio of depression patients to HCs.
CONCLUSIONS: The combined use of FC and NA features is effective for classifying MDDs and HCs. The important FC and NA features extracted from the large-sample dataset show some generalization performance and may serve as a reference for altered brain functional connectivity networks in MDD.


Subject(s)
Depressive Disorder, Major , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain Mapping/methods , Depressive Disorder, Major/diagnostic imaging , Humans , Machine Learning , Magnetic Resonance Imaging/methods , Reproducibility of Results
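The first stage of the feature selection described above is a two-sample t-test between patients and controls. A minimal NumPy sketch of such a filter (Welch's t statistic; the paper's exact thresholding rule is not stated, so keeping the top-k features is an assumption):

```python
import numpy as np

def ttest_filter(X, y, k):
    """Rank features by two-sample (Welch's) t statistic between the
    patient group (y == 1) and control group (y == 0), keeping the k
    most discriminative feature indices."""
    a, b = X[y == 1], X[y == 0]
    na, nb = len(a), len(b)
    t = (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / na + b.var(0, ddof=1) / nb
    )
    keep = np.argsort(-np.abs(t))[:k]
    return np.sort(keep)

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))          # 60 subjects, 10 toy features
y = np.array([1] * 30 + [0] * 30)
X[y == 1, 3] += 2.0                    # plant one genuinely separable feature
idx = ttest_filter(X, y, k=2)
```

In the paper this filter only pre-screens features; the wrapper methods (SVM-RFECV, LightGBM importances) then refine the set inside cross-validation to avoid selection bias.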
17.
Phys Eng Sci Med ; 45(3): 867-882, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35849323

ABSTRACT

Dynamic causal modeling (DCM) is a tool for effective connectivity (EC) estimation in neuroimaging analysis. However, it is a model-driven method, and the structure of the EC network must be determined in advance from a large amount of prior knowledge, which makes DCM difficult to apply to exploratory brain network analysis. Exploratory analysis with DCM can be approached from two directions: reducing the computational cost of the model, or reducing the model space. From the perspective of model space reduction, we propose a model space exploration strategy comprising two algorithms: GreedyEC, which starts from the full model and prunes EC, and GreedyROI, which starts from a one-node model and adds EC. The two algorithms were applied to task-state functional magnetic resonance imaging (fMRI) data from visual object recognition, and the best DCM model was selected by Bayesian model comparison. Results show that combining the results of the two algorithms further improves DCM exploratory analysis. For convenience, the algorithms were encapsulated as MATLAB functions based on SPM to help neuroscience researchers analyze brain causal information flow networks. The strategy provides a model space exploration tool that may obtain the best model in terms of model comparison and lowers the threshold for DCM analysis.


Subject(s)
Brain Mapping, Magnetic Resonance Imaging, Bayes Theorem, Brain/diagnostic imaging, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Neurological Models
18.
Sci Rep ; 12(1): 7924, 2022 05 13.
Article in English | MEDLINE | ID: mdl-35562532

ABSTRACT

With modern management of primary liver cancer shifting towards non-invasive diagnostics, accurate tumor classification on medical imaging is increasingly critical for disease surveillance and appropriate targeting of therapy. Recent advancements in machine learning raise the possibility of automated tools that can accelerate workflow, enhance performance, and increase the accessibility of artificial intelligence to clinical researchers. We explore the use of an automated Tree-Based Optimization Tool that leverages a genetic programming algorithm to differentiate the two common primary liver cancers on multiphasic MRI. Manual and automated analyses were performed to select an optimal machine learning model, with an accuracy of 73-75% (95% CI 0.59-0.85), sensitivity of 70-75% (95% CI 0.48-0.89), and specificity of 71-79% (95% CI 0.52-0.90) for manual optimization, and an accuracy of 73-75% (95% CI 0.59-0.85), sensitivity of 65-75% (95% CI 0.43-0.89), and specificity of 75-79% (95% CI 0.56-0.90) for automated machine learning. We found that automated machine learning performed similarly to manual optimization and could classify hepatocellular carcinoma and intrahepatic cholangiocarcinoma with a sensitivity and specificity comparable to those of radiologists. However, automated machine learning performance was poor on the subset of scans that met LI-RADS criteria for LR-M. Exploration of additional feature selection and classifier methods with automated machine learning to improve performance on LR-M cases, as well as prospective validation in the clinical setting, is needed prior to implementation.
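The sensitivity and specificity reported above are simply the recall of the positive and negative classes. A small self-contained helper (illustrative only, not the authors' code) shows how both are derived from paired labels:

```python
def sensitivity_specificity(y_true, y_pred, positive=1):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    computed from paired ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 1 = hepatocellular carcinoma, 0 = intrahepatic cholangiocarcinoma
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```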


Subject(s)
Bile Duct Neoplasms, Hepatocellular Carcinoma, Cholangiocarcinoma, Liver Neoplasms, Artificial Intelligence, Bile Duct Neoplasms/diagnostic imaging, Intrahepatic Bile Ducts, Hepatocellular Carcinoma/diagnostic imaging, Cholangiocarcinoma/diagnostic imaging, Contrast Media, Humans, Liver Neoplasms/diagnostic imaging, Machine Learning, Magnetic Resonance Imaging, Retrospective Studies, Sensitivity and Specificity
19.
Neuro Oncol ; 24(2): 289-299, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34174070

ABSTRACT

BACKGROUND: Longitudinal measurement of tumor burden with magnetic resonance imaging (MRI) is an essential component of response assessment in pediatric brain tumors. We developed a fully automated pipeline for the segmentation of tumors in pediatric high-grade gliomas, medulloblastomas, and leptomeningeal seeding tumors. We further developed an algorithm for automatic 2D and volumetric size measurement of tumors. METHODS: The preoperative and postoperative cohorts were randomly split into training and testing sets in a 4:1 ratio. A 3D U-Net neural network was trained to automatically segment the tumor on T1 contrast-enhanced and T2/FLAIR images. The product of the maximum bidimensional diameters according to the RAPNO (Response Assessment in Pediatric Neuro-Oncology) criteria (AutoRAPNO) was determined. Performance was compared to that of 2 expert human raters who performed assessments independently. Volumetric measurements of predicted and expert segmentations were computationally derived and compared. RESULTS: A total of 794 preoperative MRIs from 794 patients and 1003 postoperative MRIs from 122 patients were included. There was excellent agreement of volumes between preoperative and postoperative predicted and manual segmentations, with intraclass correlation coefficients (ICCs) of 0.912 and 0.960 for the 2 preoperative and 0.947 and 0.896 for the 2 postoperative models. There was high agreement between AutoRAPNO scores on predicted segmentations and manually calculated scores based on manual segmentations (Rater 2 ICC = 0.909; Rater 3 ICC = 0.851). Lastly, the performance of AutoRAPNO was superior in repeatability to that of human raters for MRIs with multiple lesions. CONCLUSIONS: Our automated deep learning pipeline demonstrates potential utility for response assessment in pediatric brain tumors. The tool should be further validated in prospective studies.
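Volume agreement between predicted and manual segmentations above is summarized with intraclass correlation coefficients. The abstract does not state which ICC form was used, so the sketch below assumes ICC(2,1) (two-way random effects, absolute agreement, single measurement), computed from the standard mean squares:

```python
def icc2_1(Y):
    """ICC(2,1): two-way random-effects, absolute-agreement,
    single-measurement intraclass correlation.
    Y is a list of rows, one row per subject, one column per rater."""
    n, k = len(Y), len(Y[0])
    grand = sum(sum(row) for row in Y) / (n * k)
    subj = [sum(row) / k for row in Y]                        # subject means
    rater = [sum(Y[i][j] for i in range(n)) / n for j in range(k)]  # rater means
    msr = k * sum((s - grand) ** 2 for s in subj) / (n - 1)   # between subjects
    msc = n * sum((m - grand) ** 2 for m in rater) / (k - 1)  # between raters
    mse = sum(
        (Y[i][j] - subj[i] - rater[j] + grand) ** 2
        for i in range(n) for j in range(k)
    ) / ((n - 1) * (k - 1))                                   # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # perfect agreement -> 1.0
print(icc2_1([[1, 2], [2, 3], [3, 4]]))  # constant rater offset lowers agreement
```

Because ICC(2,1) measures absolute agreement, a systematic offset between raters reduces the coefficient even when their rankings agree perfectly, which is the desirable behavior when comparing automated and manual volume measurements.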


Subject(s)
Cerebellar Neoplasms, Deep Learning, Glioma, Medulloblastoma, Child, Glioma/diagnostic imaging, Glioma/pathology, Glioma/surgery, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Medulloblastoma/diagnostic imaging, Medulloblastoma/surgery, Prospective Studies, Tumor Burden