Results 1 - 20 of 32,365
1.
BMC Med Imaging ; 24(1): 201, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095688

ABSTRACT

Skin cancer stands as one of the foremost challenges in oncology, and its early detection is crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of skin lesions, this study introduces a CNN model with an optimized layer configuration tailored to the nuanced task of skin lesion classification. The model's architecture comprises multiple convolutional, pooling, and dense layers designed to capture the complex visual features of skin lesions. To address class imbalance within the dataset, a data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. The model's learning process is optimized using the Adam optimizer, with parameters fine-tuned over 50 epochs and a batch size of 128 to enhance the model's ability to discern subtle patterns in the image data. A ModelCheckpoint callback preserves the best model iteration for future use. The proposed model demonstrates an accuracy of 97.78%, with a precision of 97.9%, recall of 97.9%, and an F2 score of 97.8%, underscoring its potential as a robust tool in the early detection and classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
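The abstract above reports precision, recall, and an F2 score. As a quick illustration (not the study's code), the F-beta family of scores can be computed directly from precision and recall; beta = 2 weights recall more heavily than precision:

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score: beta > 1 favors recall, beta < 1 favors precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With precision and recall both at 0.979 (as reported), F2 is also 0.979;
# the scores diverge only when precision and recall differ.
print(round(f_beta(0.979, 0.979), 3))  # 0.979
print(round(f_beta(0.90, 0.70), 3))    # 0.733
```

This also shows why the reported F2 (97.8%) sits so close to precision and recall: the two inputs are nearly equal.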


Subjects
Deep Learning , Dermoscopy , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods
2.
BMC Plant Biol ; 24(1): 738, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095689

ABSTRACT

Automated detection and identification of vegetable diseases can enhance vegetable quality and increase profits. Images of greenhouse-grown vegetable diseases often feature complex backgrounds, a diverse array of diseases, and subtle symptomatic differences. Previous studies have grappled with accurately pinpointing lesion positions and quantifying infection degrees, resulting in overall low recognition rates. To tackle the challenges posed by insufficient validation datasets and low detection and recognition rates, this study capitalizes on the geographical advantage of Shouguang, renowned as the "Vegetable Town," to establish a self-built vegetable base for data collection and validation experiments. Concentrating on a broad spectrum of fruit and vegetable crops afflicted with various diseases, we conducted on-site collection of greenhouse disease images, compiled a large-scale dataset, and introduced the Space-Time Fusion Attention Network (STFAN). STFAN integrates multi-source information on vegetable disease occurrences, bolstering the model's resilience. Additionally, we proposed the Multilayer Encoder-Decoder Feature Fusion Network (MEDFFN) to counteract feature disappearance in deep convolutional blocks, complemented by the Boundary Structure Loss function to guide the model in acquiring more detailed and accurate boundary information. By devising a detection and recognition model that extracts high-resolution feature representations from multiple sources, precise disease detection and identification were achieved. This study offers technical backing for the holistic prevention and control of vegetable diseases, thereby advancing smart agriculture. Results indicate that, on our self-built VDGE dataset, compared to YOLOv7-tiny, YOLOv8n, and YOLOv9, the proposed model (Multisource Information Fusion Method for Vegetable Disease Detection, MIFV) has improved mAP by 3.43%, 3.02%, and 2.15%, respectively, showcasing significant performance advantages. 
The MIFV model parameters stand at 39.07 M, with a computational complexity of 108.92 GFLOPS, highlighting outstanding real-time performance and detection accuracy compared to mainstream algorithms. This research suggests that the proposed MIFV model can swiftly and accurately detect and identify vegetable diseases in greenhouse environments at a reduced cost.


Subjects
Plant Diseases , Vegetables , Plant Diseases/prevention & control , Crops, Agricultural
3.
BMC Med Res Methodol ; 24(1): 167, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095707

ABSTRACT

PURPOSE: Propensity score matching is vital in epidemiological studies using observational data, yet its estimates rely on correct model specification. This study assesses supervised deep learning models and unsupervised autoencoders for propensity score estimation, comparing them with traditional methods for bias and variance accuracy in treatment effect estimation. METHODS: Utilizing a plasmode simulation based on the Right Heart Catheterization dataset, under a variety of settings, we evaluated (1) a supervised deep learning architecture and (2) an unsupervised autoencoder, alongside two traditional methods, logistic regression and a spline-based method, for estimating propensity scores for matching. Performance metrics included bias, standard errors, and coverage probability. The analysis was also extended to real-world data, with estimates compared to those obtained via a doubly robust approach. RESULTS: The analysis revealed that supervised deep learning models outperformed unsupervised autoencoders in variance estimation while maintaining comparable levels of bias. These results were supported by analyses of real-world data, where the supervised model's estimates closely matched those derived from conventional methods. Additionally, deep learning models performed well compared to traditional methods in settings where exposure was rare. CONCLUSION: Supervised deep learning models hold promise in refining propensity score estimations in epidemiological research, offering nuanced confounder adjustment, especially in complex datasets. We endorse integrating supervised deep learning into epidemiological research and share reproducible codes for widespread use and methodological transparency.
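As a sketch of the matching step that the estimated propensity scores feed into (illustrative only, with hypothetical unit ids; the study's own implementation is not shown), greedy 1:1 nearest-neighbor matching with a caliper can be written as:

```python
def greedy_match(treated: dict, control: dict, caliper: float = 0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated/control map unit id -> estimated propensity score. Each control
    is used at most once; pairs farther apart than the caliper are skipped.
    """
    pairs = []
    available = dict(control)
    # Match treated units in descending score order (a common convention).
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs

pairs = greedy_match({"t1": 0.81, "t2": 0.42}, {"c1": 0.80, "c2": 0.44, "c3": 0.10})
print(pairs)  # [('t1', 'c1'), ('t2', 'c2')]
```

Treatment effects are then estimated on the matched pairs; the quality of the match depends entirely on how well the propensity model was specified, which is the point the abstract makes.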


Subjects
Deep Learning , Propensity Score , Humans , Supervised Machine Learning , Logistic Models , Cardiac Catheterization/methods , Cardiac Catheterization/statistics & numerical data , Algorithms , Computer Simulation
4.
NMR Biomed ; : e5230, 2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39097976

ABSTRACT

Native T1 mapping is a non-invasive technique used for early detection of diffuse myocardial abnormalities, and it provides baseline tissue characterization. Post-contrast T1 mapping enhances tissue differentiation, enables extracellular volume (ECV) calculation, and improves myocardial viability assessment. Accurate and precise segmentation of the left ventricular (LV) myocardium on T1 maps is crucial for assessing myocardial tissue characteristics and diagnosing cardiovascular diseases (CVD). This study presents a deep learning (DL)-based pipeline for automatic segmentation of the LV myocardium on T1 maps and automatic computation of radial T1 and ECV values. The study employs a multicentric dataset consisting of retrospective multiparametric MRI data of 332 subjects to develop and assess the performance of the proposed method. The study compared the DL architectures U-Net and Deep Res U-Net for LV myocardium segmentation, which achieved Dice similarity coefficients of 0.84 ± 0.43 and 0.85 ± 0.03, respectively. The Dice similarity coefficients computed for radial sub-segmentation of the LV myocardium on basal, mid-cavity, and apical slices were 0.77 ± 0.21, 0.81 ± 0.17, and 0.61 ± 0.14, respectively. The t-test performed between ground-truth and predicted values of native T1, post-contrast T1, and ECV showed no statistically significant difference (p > 0.05) for any of the radial sub-segments. The proposed DL method leverages quantitative T1 maps for automatic LV myocardium segmentation and accurate computation of radial T1 and ECV values, highlighting its potential for assisting radiologists in objective cardiac assessment and, hence, in CVD diagnostics.
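The Dice similarity coefficient reported above measures the overlap between a predicted and a ground-truth segmentation mask. A minimal illustration on toy binary masks (not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (nested lists).

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    a = [v for row in mask_a for v in row]  # flatten
    b = [v for row in mask_b for v in row]
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * intersection / total if total else 1.0

pred  = [[1, 1, 0],
         [1, 0, 0]]
truth = [[1, 1, 0],
         [0, 1, 0]]
print(round(dice(pred, truth), 3))  # 0.667
```

Because the score is normalized by the total foreground of both masks, it penalizes both over- and under-segmentation, which is why it is the standard headline metric for myocardium segmentation.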

5.
Phys Med ; 125: 104486, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098106

ABSTRACT

Artificial intelligence can standardize and automate highly demanding procedures, such as manual segmentation, especially in an anatomical site as common as the pelvis. This study investigated four automated segmentation tools on computed tomography (CT) images in female and male pelvic radiotherapy (RT), ranging from simpler and well-known atlas-based methods to the most recent neural network-based algorithms. The evaluation included quantitative, qualitative, and time-efficiency assessments. A mono-institutional consecutive series of 40 cervical cancer and 40 prostate cancer structure sets was retrospectively selected. After a preparatory phase, the remaining 20 test sets per site were auto-segmented by the atlas-based model STAPLE, a Random Forest-based model, and two Deep Learning-based (DL) tools, MVision and LimbusAI. Setting manual segmentation as the ground truth, 200 structure sets were compared in terms of Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and Distance-to-Agreement Portion (DAP). Automated segmentation and manual correction durations were recorded. Expert clinicians performed a qualitative evaluation. In cervical cancer CTs, DL outperformed the other tools with higher quantitative metrics, qualitative scores, and shorter correction times. In prostate cancer CTs, on the other hand, the performance of all the analyzed tools was comparable in terms of both quantitative and qualitative metrics. This discrepancy could be explained by the wide range of anatomical variability in cervical cancer, compared with the strict bladder and rectum filling preparation in prostate Stereotactic Body Radiation Therapy (SBRT). Decreasing segmentation times can reduce the burden of routine pelvic radiation therapy in an automated workflow.
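Of the quantitative metrics used above, the Hausdorff distance is the larger of the two directed nearest-point distances between two contours. A toy sketch on 2-D point sets (illustrative, not the study's implementation, which would operate on full 3-D contours):

```python
from math import dist  # Euclidean distance between two points (Python >= 3.8)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(p, q):
        # For each point in p, find its nearest point in q; take the worst case.
        return max(min(dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

auto   = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
manual = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
print(hausdorff(auto, manual))  # 1.0
```

Unlike DSC, which averages over the whole structure, HD reports the single worst local disagreement, which is why the two metrics are usually reported together.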

6.
Comput Biol Med ; 180: 108957, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098236

ABSTRACT

The tremors of Parkinson's disease (PD) and essential tremor (ET) are known to have overlapping characteristics that make it complicated for clinicians to distinguish them. While deep learning is robust in detecting features unnoticeable to humans, an opaque trained model is impractical in clinical scenarios, as coincidental correlations in the training data may be used by the model to make classifications, which may result in misdiagnosis. This work aims to overcome this challenge of deep learning models by introducing a multilayer BiLSTM network with explainable AI (XAI) that can better explain tremulous characteristics and quantify the discovered important regions in tremor differentiation. The proposed network classifies PD, ET, and normal tremors during drinking actions and derives the contribution of the tremor characteristics (i.e., time, frequency, amplitude, and actions) utilized in the classification task. The analysis shows that the XAI-BiLSTM marks regions with high tremor amplitude as important in classification, which is verified by a high correlation between the relevance distribution and tremor displacement amplitude. The XAI-BiLSTM discovered that the transition phase from arm resting to lifting (during the drinking cycle) is the most important action for classifying tremors. Additionally, the XAI-BiLSTM reveals frequency ranges that contribute to the classification of only one tremor class, which may be potential distinctive features for overcoming the problem of overlapping frequencies. By revealing critical timing and frequency patterns unique to PD and ET tremors, the proposed XAI-BiLSTM model enables clinicians to make more informed classifications, potentially reducing misclassification rates and improving treatment outcomes.

7.
Comput Biol Med ; 180: 108979, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098237

ABSTRACT

In Alzheimer's disease (AD) assessment, traditional deep learning approaches have often employed separate methodologies to handle the diverse modalities of input data. Recognizing the critical need for a cohesive and interconnected analytical framework, we propose the AD-Transformer, a novel transformer-based unified deep learning model. This framework integrates structural magnetic resonance imaging (sMRI), clinical, and genetic data from the extensive Alzheimer's Disease Neuroimaging Initiative (ADNI) database, encompassing 1651 subjects. By employing a Patch-CNN block, the AD-Transformer efficiently transforms image data into image tokens, while a linear projection layer converts non-image data into corresponding tokens. As the core, a transformer block learns comprehensive representations of the input data, capturing the intricate interplay between modalities. The AD-Transformer sets a new benchmark in AD diagnosis and Mild Cognitive Impairment (MCI) conversion prediction, achieving average area under the curve (AUC) values of 0.993 and 0.845, respectively, surpassing those of traditional image-only models and non-unified multimodal models. Our experimental results confirmed the potential of the AD-Transformer as a potent tool in AD diagnosis and MCI conversion prediction. By providing a unified framework that jointly learns holistic representations of both image and non-image data, the AD-Transformer paves the way for more effective and precise clinical assessments, offering a clinically adaptable strategy for leveraging diverse data modalities in the battle against AD.

8.
Data Brief ; 55: 110738, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39100778

ABSTRACT

This paper presents a comprehensive network slicing dataset designed to empower artificial intelligence (AI), and data-based performance prediction applications, in 5G and beyond (B5G) networks. The dataset, generated through a packet-level simulator, captures the complexities of network slicing considering the three main network slice types defined by 3GPP: Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Internet of Things (mIoT). It includes a wide range of network scenarios with varying topologies, slice instances, and traffic flows. The included scenarios consist of transport networks, excluding the Radio Access Network (RAN) infrastructure. Each sample consists of pairs of a network scenario and the associated performance metrics: the network configuration includes network topology, traffic characteristics, routing configurations, while the performance metrics are the delay, jitter, and loss for each flow. The dataset is generated with a custom network slicing admission control module, enabling the simulation of scenarios in multiple situations of over and underprovisioning. This network slicing dataset is a valuable asset for the research community, unlocking opportunities for innovations in 5G and B5G networks.

9.
MethodsX ; 13: 102843, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39101121

ABSTRACT

Disastrous scenarios are actively discussed on microblogging platforms like Twitter, which can lead to chaotic situations. In the era of machine learning and deep learning, these chaotic situations can be effectively managed by developing efficient methods and models that can classify real and fake tweets. In this research article, an efficient method, a BERT embedding based CNN model with the RMSProp optimizer, is proposed to classify tweets related to disastrous scenarios. Tweet classification is first carried out with popular machine learning algorithms such as logistic regression and decision tree classifiers. Noting the low accuracy of these machine learning models, a Convolutional Neural Network (CNN)-based deep learning model is selected as the primary classification method. The CNN's performance is improved by optimizing its parameters with gradient-based optimizers. To further raise accuracy and capture contextual semantics from the text data, BERT embeddings are included in the proposed model. The proposed method, the BERT embedding based CNN model with the RMSProp optimizer, achieved an F1 score of 0.80 and an accuracy of 0.83.
The methodology presented in this research article comprises the following key contributions:
• Identification of a suitable text classification model that can effectively capture complex patterns when dealing with large vocabularies or nuanced language structures in disaster management scenarios.
• Exploration of gradient-based optimization techniques such as the Adam optimizer, Stochastic Gradient Descent (SGD), AdaGrad, and RMSProp to identify the optimizer best suited to the characteristics of the dataset and the CNN model architecture.
• "BERT Embedding based CNN model with RMSProp Optimizer": a method to classify disaster tweets and capture semantic representations by leveraging BERT embeddings with appropriate feature selection, validated through comparative analysis against the other models.
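As background for the optimizer comparison above, the RMSProp update divides each gradient step by a running root-mean-square of past gradients, so the effective step size adapts per parameter. A minimal scalar sketch (not the article's code; the learning rate and decay values are illustrative defaults):

```python
def rmsprop_step(w, grad, cache, lr=0.05, decay=0.9, eps=1e-8):
    """One RMSProp update on a scalar parameter.

    cache is an exponential moving average of squared gradients; dividing by
    its square root normalizes the step size regardless of gradient scale.
    """
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (cache ** 0.5 + eps)
    return w, cache

# Minimize f(w) = w^2 (gradient 2w) starting from w = 5.0.
w, cache = 5.0, 0.0
for _ in range(1000):
    w, cache = rmsprop_step(w, 2 * w, cache)
print(round(w, 2))  # settles near 0
```

The same normalization is what makes RMSProp behave well on the ragged loss surfaces of text CNNs, which is the property the comparative study exploits.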

10.
Adv Sci (Weinh) ; : e2304305, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39101275

ABSTRACT

Self-supervised neural language models have recently achieved unprecedented success, from natural language processing to learning the languages of biological sequences and organic molecules. These models have demonstrated superior performance in generation, structure classification, and functional prediction for proteins and molecules with learned representations. However, most masking-based pre-trained language models are not designed for generative design, and their black-box nature makes it difficult to interpret their design logic. Here a Blank-filling Language Model for Materials (BLMM) Crystal Transformer is proposed, a neural network-based probabilistic generative model for generative and tinkering design of inorganic materials. The model is built on the blank-filling language model for text generation and has demonstrated unique advantages in learning the "materials grammars" together with high-quality generation, interpretability, and data efficiency. It can generate chemically valid materials compositions with as high as 89.7% charge neutrality and 84.8% balanced electronegativity, more than four and eight times higher, respectively, than a pseudo-random sampling baseline. The probabilistic generation process of BLMM allows it to recommend materials tinkering operations based on learned materials chemistry, which makes it useful for materials doping. The model is applied to discover a set of new materials, validated using Density Functional Theory (DFT) calculations. This work thus brings unsupervised transformer language model-based generative artificial intelligence to inorganic materials. A user-friendly web app for tinkering materials design has been developed and can be accessed freely at www.materialsatlas.org/blmtinker.

11.
Article in English | MEDLINE | ID: mdl-39101555

ABSTRACT

Neuropathologic changes of Alzheimer disease (AD) including Aβ accumulation and neuroinflammation are frequently observed in the cerebral cortex of patients with idiopathic normal pressure hydrocephalus (iNPH). We created an automated analysis platform to quantify Aβ load and reactive microglia in the vicinity of Aβ plaques and to evaluate their association with cognitive outcome in cortical biopsies of patients with iNPH obtained at the time of shunting. Aiforia Create deep learning software was used on whole slide images of Iba1/4G8 double immunostained frontal cortical biopsies of 120 shunted iNPH patients to identify Iba1-positive microglia somas and Aβ areas, respectively. Dementia, AD clinical syndrome (ACS), and Clinical Dementia Rating Global score (CDR-GS) were evaluated retrospectively after a median follow-up of 4.4 years. Deep learning artificial intelligence yielded excellent (>95%) precision for tissue, Aβ, and microglia somas. Using an age-adjusted model, higher Aβ coverage predicted the development of dementia, the diagnosis of ACS, and more severe memory impairment by CDR-GS, whereas measured microglial densities and Aβ-related microglia did not correlate with cognitive outcome in these patients. Therefore, cognitive outcome seems to be hampered by higher Aβ coverage in cortical biopsies in shunted iNPH patients but is not correlated with densities of surrounding microglia.

12.
Cytometry A ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39101554

ABSTRACT

Imaging flow cytometry, which combines the advantages of flow cytometry and microscopy, has emerged as a powerful tool for cell analysis in various biomedical fields such as cancer detection. In this study, we develop multiplex imaging flow cytometry (mIFC) by employing a spatial wavelength division multiplexing technique. Our mIFC can simultaneously obtain brightfield and multi-color fluorescence images of individual cells in flow, which are excited by a metal halide lamp and measured by a single detector. Statistical analysis of multiplex imaging experiments with a resolution test lens, a magnification test lens, and fluorescent microspheres validates the operation of the mIFC, with good imaging channel consistency and micron-scale differentiation capabilities. A deep learning method is designed for multiplex image processing that consists of three deep learning networks (U-net, very deep super resolution, and visual geometry group 19). It is demonstrated that the cluster of differentiation 24 (CD24) imaging channel is more sensitive than the brightfield, nucleus, or cancer antigen 125 (CA125) imaging channel in classifying the three types of ovarian cell lines (IOSE80 normal cells, A2780, and OVCAR3 cancer cells). An average accuracy rate of 97.1% is achieved for the classification of these three types of cells by deep learning analysis when all four imaging channels are considered. Our single-detector mIFC is promising for the development of future imaging flow cytometers and for automatic single-cell analysis with deep learning in various biomedical fields.

13.
Article in English | MEDLINE | ID: mdl-39101603

ABSTRACT

OBJECTIVES: The objective of this study is to assess accuracy, time-efficiency and consistency of a novel artificial intelligence (AI)-driven automated tool for cone-beam computed tomography (CBCT) and intraoral scan (IOS) registration compared with manual and semi-automated approaches. MATERIALS AND METHODS: A dataset of 31 intraoral scans (IOSs) and CBCT scans was used to validate automated IOS-CBCT registration (AR) when compared with manual (MR) and semi-automated registration (SR). CBCT scans were conducted by placing cotton rolls between the cheeks and teeth to facilitate gingival delineation. The time taken to perform multimodal registration was recorded in seconds. A qualitative analysis was carried out to assess the correspondence between hard and soft tissue anatomy on IOS and CBCT. In addition, a quantitative analysis was conducted by measuring median surface deviation (MSD) and root mean square (RMS) differences between registered IOSs. RESULTS: AR was the most time-efficient, taking 51.4 ± 17.2 s, compared with MR (840 ± 168.9 s) and SR approaches (274.7 ± 100.7 s). Both AR and SR resulted in significantly higher qualitative scores, favoring perfect IOS-CBCT registration, compared with MR (p = .001). Additionally, AR demonstrated significantly superior quantitative performance compared with SR, as indicated by low MSD (0.04 ± 0.07 mm) and RMS (0.19 ± 0.31 mm). In contrast, MR exhibited a significantly higher discrepancy compared with both AR (MSD = 0.13 ± 0.20 mm; RMS = 0.32 ± 0.14 mm) and SR (MSD = 0.11 ± 0.15 mm; RMS = 0.40 ± 0.30 mm). CONCLUSIONS: The novel AI-driven method provided an accurate, time-efficient, and consistent multimodal IOS-CBCT registration, encompassing both soft and hard tissues. This approach stands as a valuable alternative to manual and semi-automated registration approaches in the presurgical implant planning workflow.

14.
Phys Eng Sci Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39101991

ABSTRACT

Intensity-modulated radiation therapy (IMRT) has been widely used in treating head and neck tumors. However, due to the complex anatomical structures in the head and neck region, it is challenging for the plan optimizer to rapidly generate clinically acceptable IMRT treatment plans. A novel deep learning multi-scale Transformer (MST) model was developed in the current study, aiming to accelerate IMRT planning for head and neck tumors while generating more precise prediction of the voxel-level dose distribution. The proposed end-to-end MST model employs the shunted Transformer to capture multi-scale features and learn a global dependency, and utilizes 3D deformable convolution bottleneck blocks to extract shape-aware features and compensate for the loss of spatial information in the patch merging layers. Moreover, data augmentation and self-knowledge distillation are used to further improve the prediction performance of the model. The MST model was trained and evaluated on the OpenKBP Challenge dataset. Its prediction accuracy was compared with three previous dose prediction models: C3D, TrDosePred, and TSNet. The predicted dose distributions of the proposed MST model in the tumor region are closest to the original clinical dose distribution. The MST model achieves a dose score of 2.23 Gy and a DVH score of 1.34 Gy on the test dataset, outperforming the other three models by 8%-17%. For clinically relevant DVH dosimetric metrics, the prediction accuracy in terms of mean absolute error (MAE) is 2.04% for D99, 1.54% for D95, 1.87% for D1, 1.87% for Dmean, and 1.89% for D0.1cc, superior to the other three models. The quantitative results demonstrated that the proposed MST model achieved more accurate voxel-level dose prediction than the previous models for head and neck tumors. The MST model has great potential to be applied to other disease sites to further improve the quality and efficiency of radiotherapy planning.
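The DVH dosimetric metrics above are scored by mean absolute error between predicted and clinical values. A minimal illustration with hypothetical per-patient D95 doses (not taken from the study):

```python
def mean_absolute_error(pred, true):
    """Mean absolute error between paired predictions and reference values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Hypothetical D95 values in Gy for three patients (illustrative only).
predicted = [68.2, 70.1, 66.5]
clinical  = [69.0, 69.5, 67.0]
print(round(mean_absolute_error(predicted, clinical), 3))  # 0.633
```

In the study the same error is expressed as a percentage of the prescription dose, which is how figures such as "1.54% for D95" arise.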

15.
Sci Rep ; 14(1): 17777, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090145

ABSTRACT

Disasters caused by mine water inflows significantly threaten the safety of coal mining operations. Deep mining complicates the acquisition of hydrogeological parameters, the mechanics of water inrush, and the prediction of sudden changes in mine water inflow. Traditional models and singular machine learning approaches often fail to accurately forecast abrupt shifts in mine water inflows. This study introduces a novel coupled decomposition-optimization-deep learning model that integrates Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Northern Goshawk Optimization (NGO), and Long Short-Term Memory (LSTM) networks. We evaluate three types of mine water inflow forecasting methods: a singular time series prediction model, a decomposition-prediction coupled model, and a decomposition-optimization-prediction coupled model, assessing their ability to capture sudden changes in data trends and their prediction accuracy. Results show that the singular prediction model is optimal with a sliding input step of 3 and a maximum of 400 epochs. Compared to the CEEMDAN-LSTM model, the CEEMDAN-NGO-LSTM model demonstrates superior performance in predicting local extreme shifts in mine water inflow volumes. Specifically, the CEEMDAN-NGO-LSTM model achieves scores of 96.578 in MAE, 1.471% in MAPE, 122.143 in RMSE, and 0.958 in NSE, representing average performance improvements of 44.950% and 19.400% over the LSTM model and CEEMDAN-LSTM model, respectively. Additionally, this model provides the most accurate predictions of mine water inflow volumes over the next five days. Therefore, the decomposition-optimization-prediction coupled model presents a novel technical solution for the safety monitoring of smart mines, offering significant theoretical and practical value for ensuring safe mining operations.
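The "sliding input step of 3" mentioned above refers to framing the inflow series as fixed-length input windows, each paired with the next value to predict, before it is fed to the LSTM. A minimal sketch with illustrative numbers (not the study's data):

```python
def sliding_windows(series, step=3):
    """Frame a series as (input window, next value) pairs for sequence models."""
    pairs = []
    for i in range(len(series) - step):
        pairs.append((series[i:i + step], series[i + step]))
    return pairs

# Hypothetical daily inflow volumes (illustrative only).
inflow = [12.0, 13.5, 11.8, 14.2, 15.0, 14.6]
for x, y in sliding_windows(inflow):
    print(x, "->", y)
```

In the decomposition-optimization variant, each CEEMDAN sub-sequence is windowed this way separately, and the per-component forecasts are recombined into the final inflow prediction.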

16.
Sci Rep ; 14(1): 17841, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090177

ABSTRACT

The precise forecasting of air quality is of great significance as an integral component of early warning systems. It remains a formidable challenge owing to the limited information on emission sources and the considerable uncertainties inherent in dynamic processes. To improve the accuracy of air quality forecasting, this work proposes a new spatiotemporal hybrid deep learning model based on variational mode decomposition (VMD), graph attention networks (GAT), and bi-directional long short-term memory (BiLSTM), referred to as VMD-GAT-BiLSTM. The proposed model initially employs VMD to decompose the original PM2.5 data into a series of relatively stable sub-sequences, thus reducing the influence of unknown factors on the model's prediction capabilities. For each sub-sequence, a GAT is then designed to explore deep spatial relationships among different monitoring stations. Next, a BiLSTM is utilized to learn the temporal features of each decomposed sub-sequence. Finally, the forecasting results for the decomposed sub-sequences are aggregated and summed as the final air quality prediction. Experimental results on the collected Beijing air quality dataset show that the proposed model outperforms the other evaluated methods on both short-term and long-term air quality forecasting tasks.

17.
Insights Imaging ; 15(1): 188, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090456

ABSTRACT

OBJECTIVES: To explore the predictive performance of tumor and multiple peritumoral regions on dynamic contrast-enhanced magnetic resonance imaging (MRI), and to identify optimal regions of interest for developing a preoperative predictive model for the grade of microvascular invasion (MVI). METHODS: A total of 147 patients who were surgically diagnosed with hepatocellular carcinoma and had a maximum tumor diameter ≤ 5 cm were recruited and subsequently divided into a training set (n = 117) and a testing set (n = 30) based on the date of surgery. We utilized a pre-trained AlexNet to extract deep learning features from seven different regions of the maximum transverse cross-section of tumors in various MRI sequence images. Subsequently, an extreme gradient boosting (XGBoost) classifier was employed to construct the MVI grade prediction model, with evaluation based on the area under the curve (AUC). RESULTS: The XGBoost classifier trained with data from the 20-mm peritumoral region showed superior AUC compared to the tumor region alone. AUC values consistently increased when utilizing data from 5-mm, 10-mm, and 20-mm peritumoral regions. Combining arterial and delayed-phase data yielded the highest predictive performance, with micro- and macro-average AUCs of 0.78 and 0.74, respectively. Integration of clinical data further improved the AUC values to 0.83 and 0.80. CONCLUSION: Compared with those of the tumor region, the deep learning features of the peritumoral region provide more important information for predicting the grade of MVI. Combining the tumor region and the 20-mm peritumoral region resulted in a relatively ideal and accurate region within which the grade of MVI can be predicted. CLINICAL RELEVANCE STATEMENT: The 20-mm peritumoral region holds more significance than the tumor region in predicting MVI grade.
Deep learning features can indirectly predict MVI by extracting information from the tumor region and can directly capture MVI information from the peritumoral region. KEY POINTS: We investigated the tumor region and different peritumoral regions, as well as their fusion. MVI predominantly occurs in the peritumoral region, which is a superior predictor compared to the tumor region. The 20-mm peritumoral region is a reasonable choice for accurately predicting the three-grade MVI.
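As a concrete illustration of the peritumoral regions compared above, the following NumPy sketch derives an n-mm peritumoral ring from a binary tumor mask by disk dilation. This is a common construction for such regions; the function names and the pixel-spacing interface are assumptions, not the paper's code:

```python
import numpy as np

def disk_offsets(radius_px):
    """Integer (dy, dx) offsets inside a disk of the given pixel radius."""
    r = radius_px
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]

def peritumoral_ring(tumor_mask, margin_mm, spacing_mm):
    """Mask of the ring extending margin_mm beyond the tumor boundary
    (the 5/10/20-mm peritumoral regions compared in the paper)."""
    radius_px = int(round(margin_mm / spacing_mm))
    h, w = tumor_mask.shape
    dilated = np.zeros((h, w), dtype=bool)
    # Dilate: stamp a disk at every tumor pixel (fine for a sketch).
    ys, xs = np.nonzero(tumor_mask)
    for y, x in zip(ys, xs):
        for dy, dx in disk_offsets(radius_px):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                dilated[ny, nx] = True
    # The ring is the dilation minus the tumor itself.
    return dilated & ~tumor_mask.astype(bool)
```

For example, a single tumor pixel with 1-mm spacing and a 2-mm margin yields a 12-pixel ring (the 13-pixel disk minus the center).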

18.
BMC Med Imaging ; 24(1): 198, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090546

ABSTRACT

In the realm of disease prognosis and diagnosis, a plethora of medical images are utilized. These images are typically stored either on the local on-premises servers of healthcare providers or within cloud storage infrastructures. However, this conventional storage approach often incurs high infrastructure costs and results in sluggish information retrieval, ultimately leading to delays in diagnosis and wasting valuable time for patients. The methodology proposed in this paper offers a solution to expedite the diagnosis of medical conditions while simultaneously reducing the infrastructure costs associated with data storage. Through this study, a high-speed biomedical image processing approach is designed to facilitate rapid prognosis and diagnosis. The proposed framework includes a deep learning QR code technique with an optimized database design aimed at alleviating the burden of intensive on-premises database requirements. The work evaluates the proposed approach on medical datasets from the Crawford Image and Data Archive and Duke CIVM using different performance metrics, and compares it with previous research, further demonstrating the system's efficiency. By providing healthcare providers with high-speed access to medical records, this system enables swift retrieval of comprehensive patient details, thereby improving diagnostic accuracy and supporting informed decision-making.
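The retrieval idea above, a lightweight scannable pointer into an off-premises archive rather than a bulky local copy, can be sketched as a compact, integrity-checked payload of the kind a QR code would carry. This is a minimal standard-library sketch; the field names and truncated-digest scheme are assumptions, not the paper's design:

```python
import hashlib
import json

def make_record_payload(patient_id, study_uid, archive_url):
    """Build a compact locator string a QR code could carry, pointing
    at an off-premises image archive, plus a short integrity digest."""
    record = {"pid": patient_id, "study": study_uid, "url": archive_url}
    # Canonical JSON so the digest is reproducible.
    body = json.dumps(record, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(body.encode()).hexdigest()[:16]
    return body + "|" + digest

def verify_record_payload(payload):
    """Re-derive the digest before trusting a scanned payload."""
    body, _, digest = payload.rpartition("|")
    return hashlib.sha256(body.encode()).hexdigest()[:16] == digest
```

In a full system the returned string would be rendered as a QR image and the URL would resolve to the archived study, so only a short code, not the image data itself, needs local storage.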


Subjects
Deep Learning, Humans, Image Processing, Computer-Assisted/methods, Diagnostic Imaging/methods, Databases, Factual, Information Storage and Retrieval/methods
19.
BMC Med Imaging ; 24(1): 199, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090563

ABSTRACT

PURPOSE: In pediatric medicine, precise estimation of bone age is essential for skeletal maturity evaluation, growth disorder diagnosis, and therapeutic intervention planning. Conventional techniques for determining bone age depend on radiologists' subjective judgments, which may lead to non-negligible differences in the estimated bone age. This study proposes a deep learning-based model utilizing a fully connected convolutional neural network (CNN) to predict bone age from left-hand radiographs. METHODS: The data set used in this study, consisting of 473 patients, was retrospectively retrieved from the PACS (Picture Archiving and Communication System) of a single institution. We developed a fully connected CNN consisting of four convolutional blocks, three fully connected layers, and a single output neuron. The model was trained and validated on 80% of the data using the mean-squared error as a cost function to minimize the difference between the predicted and reference bone age values through the Adam optimization algorithm. Data augmentation was applied to the training and validation sets, doubling the number of data samples. The performance of the trained model was evaluated on a test data set (20%) using various metrics, including the mean absolute error (MAE), median absolute error (MedAE), root-mean-squared error (RMSE), and mean absolute percentage error (MAPE). The code of the developed model for predicting bone age in this study is publicly available on GitHub at https://github.com/afiosman/deep-learning-based-bone-age-estimation .
RESULTS: Experimental results demonstrate the sound capabilities of our model in predicting bone age from left-hand radiographs: in the majority of cases, the predicted and reference bone ages are close to each other, with a calculated MAE of 2.3 [1.9, 2.7; 0.95 confidence level] years, MedAE of 2.1 years, RMSE of 3.0 [1.5, 4.5; 0.95 confidence level] years, and MAPE of 0.29 (29%) on the test data set. CONCLUSION: These findings highlight the feasibility of estimating bone age from left-hand radiographs, helping radiologists to verify their own results while considering the model's margin of error. The performance of our proposed model could be improved with additional refinement and validation.
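The four test-set metrics reported above are standard regression error measures and can be computed in a few lines of NumPy (the function name is illustrative, not from the paper's repository):

```python
import numpy as np

def bone_age_metrics(y_true, y_pred):
    """Compute the four test-set metrics the paper reports for
    predicted vs. reference bone ages (both in years)."""
    err = y_pred - y_true
    return {
        "MAE": float(np.mean(np.abs(err))),           # mean absolute error
        "MedAE": float(np.median(np.abs(err))),       # median absolute error
        "RMSE": float(np.sqrt(np.mean(err ** 2))),    # root-mean-squared error
        "MAPE": float(np.mean(np.abs(err) / np.abs(y_true))),  # fractional
    }
```

Note that MAPE is reported here as a fraction (0.29 in the abstract corresponds to 29%), and RMSE penalizes large individual errors more heavily than MAE, which is why it is the larger of the two (3.0 vs. 2.3 years).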


Subjects
Age Determination by Skeleton, Deep Learning, Humans, Retrospective Studies, Age Determination by Skeleton/methods, Child, Female, Male, Saudi Arabia, Adolescent, Child, Preschool, Infant, Neural Networks, Computer, Hand Bones/diagnostic imaging, Hand Bones/growth & development
20.
Cancer Imaging ; 24(1): 101, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090668

ABSTRACT

OBJECTIVES: The roles of magnetic resonance imaging (MRI)-based radiomics and deep learning approaches in cervical adenocarcinoma (AC) have not been explored. Herein, we aim to develop prognosis-predictive models based on MRI radiomics and clinical features for AC patients. METHODS: Clinical and pathological information from one hundred and ninety-seven patients with cervical AC was collected and analyzed. For each patient, 107 radiomics features were extracted from T2-weighted MRI images. Feature selection was performed using Spearman correlation and random forest (RF) algorithms, and predictive models were built using the support vector machine (SVM) technique. Deep learning models were also trained with T2-weighted MRI images and clinicopathological features through a Convolutional Neural Network (CNN). Kaplan-Meier curves were analyzed using significant features. In addition, information from another group of 56 AC patients was used for independent validation. RESULTS: A total of 107 radiomics features and 6 clinicopathological features (age, FIGO stage, differentiation, invasion depth, lymphovascular space invasion (LVSI), and lymph node metastasis (LNM)) were included in the analysis. When predicting the 3-year, 4-year, and 5-year DFS, the model trained solely on radiomics features achieved AUC values of 0.659 (95%CI: 0.620-0.716), 0.791 (95%CI: 0.603-0.922), and 0.853 (95%CI: 0.745-0.912), respectively. However, the combined model, incorporating both radiomics and clinicopathological features, outperformed the radiomics model with AUC values of 0.934 (95%CI: 0.885-0.981), 0.937 (95%CI: 0.867-0.995), and 0.916 (95%CI: 0.857-0.970), respectively. For the deep learning models, the MRI-based models achieved AUCs of 0.857, 0.777, and 0.828 for 3-year, 4-year, and 5-year DFS prediction, respectively, and the combined deep learning models achieved improved performance, with AUCs of 0.903, 0.862, and 0.969.
In the independent test set, the combined model achieved an AUC of 0.873, 0.858 and 0.914 for 3-year DFS, 4-year DFS and 5-year DFS prediction, respectively. CONCLUSIONS: We demonstrated the prognostic value of integrating MRI-based radiomics and clinicopathological features in cervical adenocarcinoma. Both radiomics and deep learning models showed improved predictive performance when combined with clinical data, emphasizing the importance of a multimodal approach in patient management.
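The Spearman-correlation step of the feature selection described above can be sketched as a redundancy filter over the radiomics feature matrix: highly rank-correlated features carry near-duplicate information, so only one of each correlated pair is kept. This is a minimal NumPy sketch; the greedy strategy and the 0.9 threshold are assumptions, not the paper's exact procedure:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation for tie-free samples:
    Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

def drop_redundant(features, threshold=0.9):
    """Greedy filter over columns of an (n_patients, n_features) matrix:
    drop any feature whose |rho| with an already-kept feature exceeds
    the threshold; return indices of the kept features."""
    kept = []
    for j in range(features.shape[1]):
        if all(abs(spearman_rho(features[:, j], features[:, k])) <= threshold
               for k in kept):
            kept.append(j)
    return kept
```

A subsequent model (RF importance ranking, then an SVM, in the paper's pipeline) would be trained only on the kept columns.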


Subjects
Adenocarcinoma, Deep Learning, Magnetic Resonance Imaging, Radiomics, Uterine Cervical Neoplasms, Adult, Aged, Female, Humans, Middle Aged, Adenocarcinoma/diagnostic imaging, Adenocarcinoma/pathology, Adenocarcinoma/surgery, Lymphatic Metastasis/diagnostic imaging, Magnetic Resonance Imaging/methods, Neoplasm Staging, Prognosis, Retrospective Studies, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/pathology