Results 1-20 of 1,246
1.
Chin Clin Oncol ; 13(Suppl 1): AB093, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39295411

ABSTRACT

BACKGROUND: Central nervous system (CNS) tumours, especially gliomas, are complex diseases, and many challenges are encountered in their treatment. Artificial intelligence (AI) has made a colossal impact in many walks of life at low cost. However, this avenue still needs to be explored in healthcare settings, demanding investment of resources towards growth in this area. We aim to develop machine learning (ML) algorithms to facilitate the accurate diagnosis and precise mapping of brain tumours. METHODS: We queried data from 2019 to 2022 and extracted brain magnetic resonance imaging (MRI) scans of glioma patients. Images that had both T1-contrast and T2-fluid-attenuated inversion recovery (T2-FLAIR) volume sequences available were included. MRI images were annotated by a team supervised by a neuroradiologist. The extracted MRIs were then fed to a preprocessing pipeline that performed brain extraction using SynthStrip. They were further fed to deep learning-based semantic segmentation pipelines using a UNet-based architecture with a convolutional neural network (CNN) backbone. Subsequently, the algorithm was tested to assess its efficacy in the pixel-wise diagnosis of tumours. RESULTS: In total, 69 samples of low-grade glioma (LGG) were used, of which 62 were used for fine-tuning a model pre-trained on the Brain Tumor Segmentation (BraTS) 2020 dataset and 7 were used for testing. For the evaluation of the model, the Dice coefficient was used as the metric. The average Dice coefficient on the 7 test samples was 0.94. CONCLUSIONS: With the advent of technology, AI continues to modify our lifestyles. It is critical to adopt this technology in healthcare with the aim of improving the provision of patient care. We present our preliminary data for the use of ML algorithms in the diagnosis and segmentation of glioma. The promising result, with comparable accuracy, highlights the importance of early adoption of this nascent technology.
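As an illustration of the Dice coefficient used as the evaluation metric above, the following is a minimal NumPy sketch with toy masks (the values are illustrative, not study data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2*|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a predicted and a ground-truth tumour mask on a 4x4 grid
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3 / (4+3) ≈ 0.857
```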


Subject(s)
Deep Learning, Glioma, Magnetic Resonance Imaging, Humans, Glioma/classification, Glioma/pathology, Magnetic Resonance Imaging/methods, Male, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Brain Neoplasms/pathology, Female
2.
Int J Geogr Inf Sci ; 38(10): 2061-2082, 2024.
Article in English | MEDLINE | ID: mdl-39318700

ABSTRACT

Cartographic map generalization involves complex rules, and full automation has still not been achieved despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, DNNs are still used as black-box models in previous studies. We argue that integrating explainable AI (XAI) into a deep learning (DL)-based map generalization process can give more insights to develop and refine the DNNs by understanding exactly what cartographic knowledge is learned. Following an XAI framework for an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of input features for the predictions of a pre-trained ResU-Net model. This experimental case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to the building boundaries than to the interior parts of the buildings. We thus suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for evaluating raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI into future DL-based map generalization development frameworks.
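As a rough sketch of the boundary-based metric suggested above, boundary IoU can be computed by restricting the intersection-over-union to a thin band around each mask's boundary. The version below, assuming SciPy's binary erosion and a configurable boundary width, is an illustration rather than the authors' exact definition:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_iou(pred, target, width=1):
    """IoU computed only on the boundary pixels of two binary masks.

    The boundary is the mask minus its erosion; `width` (erosion
    iterations) controls the boundary thickness.
    """
    def boundary(mask):
        eroded = binary_erosion(mask, iterations=width)
        return mask & ~eroded

    pb = boundary(pred.astype(bool))
    tb = boundary(target.astype(bool))
    inter = np.logical_and(pb, tb).sum()
    union = np.logical_or(pb, tb).sum()
    return inter / union if union > 0 else 1.0
```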

3.
Magn Reson Med ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39270056

ABSTRACT

PURPOSE: To shorten CEST acquisition time by leveraging Z-spectrum undersampling combined with deep learning for CEST map construction from undersampled Z-spectra. METHODS: Fisher information gain analysis identified optimal frequency offsets (termed "Fisher offsets") for the multi-pool fitting model, maximizing information gain for the amplitude and the FWHM parameters. These offsets guided initial subsampling levels. A U-NET, trained on undersampled brain CEST images from 18 volunteers, produced CEST maps at 3 T with varied undersampling levels. Feasibility was first tested using retrospective undersampling at three levels, followed by prospective in vivo undersampling (15 of 53 offsets), reducing scan time significantly. Additionally, glioblastoma grade IV pathology was simulated to evaluate network performance in patient-like cases. RESULTS: Traditional multi-pool models failed to quantify CEST maps from undersampled images (structural similarity index [SSIM] <0.2, peak SNR <20, Pearson r <0.1). Conversely, U-NET fitting successfully addressed undersampled data challenges. The study suggests CEST scan time reduction is feasible by undersampling 15, 25, or 35 of 53 Z-spectrum offsets. Prospective undersampling cut scan time by 3.5 times, with a maximum mean squared error of 4.4e-4, r = 0.82, and SSIM = 0.84, compared to the ground truth. The network also reliably predicted CEST values for simulated glioblastoma pathology. CONCLUSION: The U-NET architecture effectively quantifies CEST maps from undersampled Z-spectra at various undersampling levels.
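A minimal sketch of the retrospective-undersampling idea: only a subset of the acquired Z-spectrum offsets is retained and passed on for fitting. The Lorentzian pools, offset range, and random selection below are illustrative assumptions, not the study's Fisher-information-derived offsets:

```python
import numpy as np

# Full acquisition: 53 saturation frequency offsets (ppm), illustrative values
full_offsets = np.linspace(-6, 6, 53)

def lorentzian(offsets, center, amplitude, fwhm):
    """Simple Lorentzian line used to build a toy Z-spectrum."""
    return amplitude / (1 + ((offsets - center) / (fwhm / 2)) ** 2)

# Toy Z-spectrum: direct water saturation plus one illustrative CEST pool
z_spectrum = (1
              - lorentzian(full_offsets, 0.0, 0.80, 1.5)   # water pool
              - lorentzian(full_offsets, 3.5, 0.05, 2.0))  # amide-like pool

# Retrospective undersampling: keep 15 of the 53 offsets
keep = np.sort(np.random.choice(53, size=15, replace=False))
undersampled_offsets = full_offsets[keep]
undersampled_z = z_spectrum[keep]
print(undersampled_offsets.shape, undersampled_z.shape)  # (15,) (15,)
```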

4.
bioRxiv ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39314387

ABSTRACT

Motivation: Cryogenic electron microscopy (cryo-EM) is a core experimental technique used to determine the structure of macromolecules such as proteins. However, the effectiveness of cryo-EM is often hindered by the noise and missing density values in cryo-EM density maps caused by experimental conditions such as low contrast and conformational heterogeneity. Although various global and local map sharpening techniques are widely employed to improve cryo-EM density maps, it is still challenging to efficiently improve their quality for building better protein structures from them. Results: In this study, we introduce CryoTEN - a three-dimensional U-Net style transformer to improve cryo-EM maps effectively. CryoTEN is trained using a diverse set of 1,295 cryo-EM maps as inputs and their corresponding simulated maps generated from known protein structures as targets. An independent test set containing 150 maps is used to evaluate CryoTEN, and the results demonstrate that it can robustly enhance the quality of cryo-EM density maps. In addition, automatic de novo protein structure modeling shows that the protein structures built from the density maps processed by CryoTEN have substantially better quality than those built from the original maps. Compared to existing state-of-the-art deep learning methods for enhancing cryo-EM density maps, CryoTEN ranks second in improving the quality of density maps, while running more than 10 times faster and requiring much less GPU memory. Availability and implementation: The source code and data are freely available at https://github.com/jianlin-cheng/cryoten.

5.
Front Artif Intell ; 7: 1376546, 2024.
Article in English | MEDLINE | ID: mdl-39315244

ABSTRACT

Background: This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation. Methods: The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance. Results: Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis. Discussion: The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells. Conclusion: This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.
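One way to instantiate the top-performing combination named above (a UNet++ decoder with a ResNet34 encoder) is via the segmentation_models_pytorch library; this is a hedged sketch, and the library choice, input size, and hyperparameters are assumptions rather than the authors' exact configuration:

```python
import torch
import segmentation_models_pytorch as smp

# UNet++ decoder with a ResNet34 encoder; binary output for sperm vs. background
model = smp.UnetPlusPlus(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
    activation=None,          # raw logits; apply a sigmoid at inference
)

frame = torch.randn(1, 3, 256, 256)   # one (toy) video frame
with torch.no_grad():
    mask_logits = model(frame)
print(mask_logits.shape)              # torch.Size([1, 1, 256, 256])
```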

6.
Sensors (Basel) ; 24(18)2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39338791

ABSTRACT

There are two widely used methods to measure the cardiac cycle and obtain heart rate measurements: the electrocardiogram (ECG) and the photoplethysmogram (PPG). The sensors used in these methods have gained great popularity in wearable devices, which have extended cardiac monitoring beyond the hospital environment. However, the continuous monitoring of ECG signals via mobile devices is challenging, as it requires users to keep their fingers pressed on the device during data collection, making it unfeasible in the long term. The PPG, on the other hand, does not have this limitation. However, clinical familiarity with diagnosing cardiac anomalies from this signal is limited, since the ECG is the signal studied and used in the literature as the gold standard. To minimize this problem, this work proposes a method, PPG2ECG, that uses the correlation between the domains of PPG and ECG signals to infer the waveform of the ECG signal from the PPG signal. PPG2ECG maps between domains by applying a set of convolution filters, learning to transform a PPG input signal into an ECG output signal using a U-Net Inception neural network architecture. We assessed our proposed method using two evaluation strategies based on personalized and generalized models and achieved mean error values of 0.015 and 0.026, respectively. Our method overcomes the limitations of previous approaches by providing an accurate and feasible method for the continuous monitoring of ECG signals through PPG signals. The short distances between the inferred ECG and the original ECG demonstrate the feasibility and potential of our method to assist in the early identification of heart diseases.
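To illustrate the signal-to-signal mapping idea, here is a minimal 1D encoder-decoder in PyTorch that maps a PPG window to an ECG window of the same length. It is a simplified stand-in for the U-Net Inception model described above, and all layer sizes and window lengths are illustrative:

```python
import torch
import torch.nn as nn

class TinyPPG2ECG(nn.Module):
    """Minimal 1D encoder-decoder mapping a PPG window to an ECG window."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, ppg):
        return self.decoder(self.encoder(ppg))

ppg_window = torch.randn(8, 1, 512)        # batch of 8 toy PPG segments
ecg_estimate = TinyPPG2ECG()(ppg_window)
print(ecg_estimate.shape)                  # torch.Size([8, 1, 512])
```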


Subject(s)
Electrocardiography, Heart Rate, Neural Networks, Computer, Photoplethysmography, Signal Processing, Computer-Assisted, Humans, Electrocardiography/methods, Photoplethysmography/methods, Heart Rate/physiology, Algorithms, Wearable Electronic Devices
7.
Bioinform Biol Insights ; 18: 11779322241272387, 2024.
Article in English | MEDLINE | ID: mdl-39246684

ABSTRACT

Objectives: This article focuses on the detection of cells in low-contrast brightfield microscopy images; in our case, chronic lymphocytic leukaemia cells. The automatic detection of cells from brightfield time-lapse microscopic images brings new opportunities in cell morphology and migration studies; to achieve the desired results, it is advisable to use state-of-the-art image segmentation methods that not only detect the cell but also detect its boundaries with the highest possible accuracy, thus defining its shape and dimensions. Methods: We compared eight state-of-the-art neural network architectures with different backbone encoders for image data segmentation, namely U-net, U-net++, the Pyramid Attention Network, the Multi-Attention Network, LinkNet, the Feature Pyramid Network, DeepLabV3, and DeepLabV3+. The training process involved training each of these networks for 1000 epochs using the PyTorch and PyTorch Lightning libraries. For instance segmentation, the watershed algorithm and three-class image semantic segmentation were used. We also used StarDist, a deep learning-based tool for object detection with star-convex shapes. Results: The optimal combination for semantic segmentation was the U-net++ architecture with a ResNeSt-269 backbone, with a dataset intersection-over-union score of 0.8902. For the cell characteristics examined (area, circularity, solidity, perimeter, radius, and shape index), the difference in mean value using different chronic lymphocytic leukaemia cell segmentation approaches appeared to be statistically significant (Mann-Whitney U test, P < .0001). Conclusion: We found that, overall, the algorithms demonstrate equal agreement with ground truth, but the comparison shows that the different approaches prefer different morphological features of the cells. Consequently, choosing the most suitable method for instance-based cell segmentation depends on the particular application, namely, the specific cellular traits being investigated.
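The instance-segmentation step mentioned above (splitting touching cells from a semantic mask with the watershed algorithm) can be sketched with scikit-image as below; the distance-transform seeding and the `min_distance` parameter are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_cells(semantic_mask, min_distance=10):
    """Split a binary cell mask into labelled instances via a
    distance-transform-seeded watershed."""
    mask = semantic_mask.astype(bool)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```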

8.
Data Brief ; 56: 110852, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39281010

ABSTRACT

Detecting and screening clouds is the first step in most optical remote sensing analyses. Cloud formation is diverse, presenting many shapes, thicknesses, and altitudes. This variety poses a significant challenge to the development of effective cloud detection algorithms, as most datasets lack an unbiased representation. To address this issue, we have built CloudSEN12+, a significant expansion of the CloudSEN12 dataset. This new dataset doubles the expert-labeled annotations, making it the largest cloud and cloud shadow detection dataset for Sentinel-2 imagery to date. We have carefully reviewed and refined our previous annotations to ensure maximum trustworthiness. We expect CloudSEN12+ will be a valuable resource for the cloud detection research community.

9.
Heliyon ; 10(17): e36248, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39286137

ABSTRACT

This proposed work explores how machine learning can be used to diagnose conjunctivitis, a common eye ailment. The main goal of the study is to capture eye images using camera-based systems, perform image pre-processing, and employ image segmentation techniques, particularly the U-Net and U-Net++ models. Additionally, the study involves extracting features from the relevant areas within the segmented images and using convolutional neural networks for classification. All of this is carried out using TensorFlow, a well-known machine-learning platform. The research involves thorough training and assessment of both the U-Net and U-Net++ segmentation models. A comprehensive analysis is conducted, focusing on their accuracy and performance. The study goes further to evaluate these models using both the UBIRIS dataset and a custom dataset created for this specific research. The experimental results show a substantial improvement in the quality of segmentation achieved by the U-Net++ model, which achieved an overall accuracy of 97.07%. Furthermore, the U-Net++ architecture displays better accuracy in comparison to the traditional U-Net model. These outcomes highlight the potential of U-Net++ as a valuable advancement in the field of machine learning-based conjunctivitis diagnosis.

10.
Comput Biol Med ; 182: 109139, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39270456

ABSTRACT

We developed a method for the automated detection of motion and noise artifacts (MNA) in electrodermal activity (EDA) signals, based on a one-dimensional U-Net architecture. EDA has been widely employed in diverse applications to assess sympathetic function. However, EDA signals can be easily corrupted by MNA, which frequently occur in wearable systems, particularly those used for ambulatory recording. MNA can lead to false decisions, resulting in inaccurate assessment and diagnosis. Several approaches have been proposed for MNA detection; however, questions remain regarding their generalizability and the feasibility of implementing the algorithms in real time, especially those involving deep learning. In this work, we propose a deep learning approach based on a one-dimensional U-Net architecture using spectrograms of EDA for MNA detection. We developed our method using four distinct datasets, including two independent testing datasets, with a total of 9602 128-s EDA segments from 104 subjects. Our proposed scheme, including data augmentation, spectrogram computation, and a 1D U-Net, yielded balanced accuracies of 80.0 ± 13.7 % and 75.0 ± 14.0 % for the two independent test datasets; these results are better than or comparable to those of five other state-of-the-art methods. Additionally, the computation time of our feature computation and machine learning classification was significantly lower than that of the other methods (p < .001). The model requires only 0.28 MB of memory, which is far smaller than that of the two deep learning approaches (4.93 and 54.59 MB) used as comparisons in our study. Our model can be implemented in real time in embedded systems, even with limited memory and an inefficient microprocessor, without compromising the accuracy of MNA detection.
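The spectrogram-computation step can be sketched as follows with SciPy; the EDA sampling rate and STFT parameters here are assumptions for illustration, not the study's settings:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4.0                                   # assumed EDA sampling rate (Hz)
segment = np.random.randn(int(128 * fs))   # one 128-s EDA segment (toy data)

# Short-time spectrogram to be fed to a 1D U-Net style detector
freqs, times, sxx = spectrogram(segment, fs=fs, nperseg=64, noverlap=32)
log_sxx = np.log1p(sxx)                    # compress dynamic range
print(freqs.shape, times.shape, log_sxx.shape)
```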

11.
Strahlenther Onkol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283345

ABSTRACT

BACKGROUND: Our study considers the hypothesis that changing network layers, rather than expanding their dimensions (which requires complex calculations), can increase the accuracy of dose distribution prediction. MATERIALS AND METHODS: A total of 137 prostate cancer patients treated with the tomotherapy technique were split into 80% for training and validation and 20% for testing, for both the nested UNet and UNet architectures. Mean absolute error (MAE) was used to measure the dosimetry indices of dose-volume histograms (DVHs), and geometry indices, including the structural similarity index measure (SSIM), Dice similarity coefficient (DSC), and Jaccard similarity coefficient (JSC), were used to evaluate the similarity of the predicted isodose volumes (IVs). To verify a statistically significant difference, the two-sided Wilcoxon test was used at a significance level of 0.05 (p < 0.05). RESULTS: Use of a nested UNet architecture reduced the MAE of the predicted dose for the DVH indices. The MAE for the planning target volume (PTV), bladder, rectum, and right and left femur were D98% = 1.11 ± 0.90; D98% = 2.27 ± 2.85, Dmean = 0.84 ± 0.62; D98% = 1.47 ± 12.02, Dmean = 0.77 ± 1.59; D2% = 0.65 ± 0.70, Dmean = 0.96 ± 2.82; and D2% = 1.18 ± 6.65, Dmean = 0.44 ± 1.13, respectively. Additionally, the greatest geometric similarity was observed in the mean SSIM for UNet and nested UNet (0.91 vs. 0.94, respectively). CONCLUSION: The nested UNet network can be considered a suitable network due to its ability to improve the accuracy of dose distribution prediction compared to the UNet network within an acceptable time.

12.
Ultrasound Med Biol ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39244483

ABSTRACT

OBJECTIVE: As metabolic dysfunction-associated steatotic liver disease (MASLD) becomes more prevalent worldwide, it is imperative to create more accurate technologies that make it easy to assess the liver in a point-of-care setting. The aim of this study is to test the performance of a new software tool implemented in Velacur (Sonic Incytes), a liver stiffness and ultrasound attenuation measurement device, on patients with MASLD. This tool employs a deep learning-based method to detect and segment shear waves in the liver tissue for subsequent analysis, to improve tissue characterization for patient diagnosis. METHODS: The new tool consists of a deep learning-based algorithm, which was trained on 15,045 expert-segmented images from 103 patients using a U-Net architecture. The algorithm was then tested on 4429 images from 36 volunteers and patients with MASLD. Test subjects were scanned at different clinics by different Velacur operators. Evaluation was performed both on individual images (image-based) and averaged across all images collected from a patient (patient-based). Ground truth was defined by expert segmentation of the shear waves within each image. For evaluation, the sensitivity and specificity of correct wave detection in the image were calculated. For images containing waves, the Dice coefficient was calculated. A prototype of the software tool was also implemented on Velacur and assessed by operators in real-world settings. RESULTS: The wave detection algorithm had a sensitivity of 81% and a specificity of 84%, with Dice coefficients of 0.74 and 0.75 for image-based and patient-based averages, respectively. The implementation of this software tool as an overlay on the B-mode ultrasound improved the quality of exams collected by operators. CONCLUSION: The shear wave algorithm performed well on a test set of volunteers and patients with metabolic dysfunction-associated steatotic liver disease. The addition of this software tool, implemented on the Velacur system, improved the quality of liver assessments performed in a real-world, point-of-care setting.

13.
Technol Health Care ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39240595

ABSTRACT

BACKGROUND: Liver cancer poses a significant health challenge due to its high incidence rates and complexities in detection and treatment. Accurate segmentation of liver tumors using medical imaging plays a crucial role in early diagnosis and treatment planning. OBJECTIVE: This study proposes a novel approach combining U-Net and ResNet architectures with the Adam optimizer and sigmoid activation function. The method leverages ResNet's deep residual learning to address training issues in deep neural networks. At the same time, U-Net's structure facilitates capturing local and global contextual information essential for precise tumor characterization. The model aims to enhance segmentation accuracy by effectively capturing intricate tumor features and contextual details by integrating these architectures. The Adam optimizer expedites model convergence by dynamically adjusting the learning rate based on gradient statistics during training. METHODS: To validate the effectiveness of the proposed approach, segmentation experiments are conducted on a diverse dataset comprising 130 CT scans of liver cancers. Furthermore, a state-of-the-art fusion strategy is introduced, combining the robust feature learning capabilities of the UNet-ResNet classifier with Snake-based Level Set Segmentation. RESULTS: Experimental results demonstrate impressive performance metrics, including an accuracy of 0.98 and a minimal loss of 0.10, underscoring the efficacy of the proposed methodology in liver cancer segmentation. CONCLUSION: This fusion approach effectively delineates complex and diffuse tumor shapes, significantly reducing errors.

14.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227537

ABSTRACT

Thermography is a non-invasive and non-contact method for detecting cancer in its initial stages by examining the temperature variation between the two breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, a modified U-Net architecture (DTCWAU-Net) that uses the dual-tree complex wavelet transform (DTCWT) and an attention gate for breast thermal image segmentation of frontal and lateral view thermograms, aiming to outline the ROI for potential tumor detection, was proposed. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Classification of breast thermograms into healthy or cancerous categories was carried out by extracting texture- and histogram-based features and deep features from the segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. When compared to other state-of-the-art approaches for detecting breast cancer from thermograms, the proposed methodology showed a higher accuracy of 99.90% for VGG16 deep features with NCA and a Random Forest classifier. Simulation results indicate that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.
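The NCA-plus-classifier stage described above can be sketched with scikit-learn, where NeighborhoodComponentsAnalysis is used as a supervised transform ahead of a Random Forest. The feature matrix below is synthetic, and the component count and hyperparameters are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for deep/texture features of segmented thermograms
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # 200 thermograms, 64 features
y = rng.integers(0, 2, size=200)          # healthy (0) vs. cancerous (1)

clf = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=16, random_state=0),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```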

15.
Skeletal Radiol ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39230576

ABSTRACT

OBJECTIVE: A fully automated laminar cartilage composition (MRI-based T2) analysis method was technically and clinically validated by comparing radiographically normal knees with (CL-JSN) and without contralateral joint space narrowing or other signs of radiographic osteoarthritis (OA; CL-noROA). MATERIALS AND METHODS: 2D U-Nets were trained from manually segmented femorotibial cartilages (n = 72) from all 7 echoes (AllE), or from the 1st echo only (1stE), of multi-echo spin-echo (MESE) MRIs acquired by the Osteoarthritis Initiative (OAI). Because of its greater accuracy, only the AllE U-Net was then applied to knees from the OAI healthy reference cohort (n = 10), CL-JSN (n = 39), and (1:1) matched CL-noROA knees (n = 39) that all had manual expert segmentation, and to 982 non-matched CL-noROA knees without expert segmentation. RESULTS: The agreement (Dice similarity coefficient) between automated vs. manual expert cartilage segmentation was between 0.82 ± 0.05/0.79 ± 0.06 (AllE/1stE) and 0.88 ± 0.03/0.88 ± 0.03 (AllE/1stE) across femorotibial cartilage plates. The deviation between automated vs. manually derived laminar T2 reached up to -2.2 ± 2.6 ms / +4.1 ± 10.2 ms (AllE/1stE). The AllE U-Net showed a similar sensitivity to cross-sectional laminar T2 differences between CL-JSN and CL-noROA knees in the matched (Cohen's D ≤ 0.54) and the non-matched (D ≤ 0.54) comparisons as the matched manual analyses (D ≤ 0.48). Longitudinally, the AllE U-Net also showed a similar sensitivity to CL-JSN vs. CL-noROA differences in the matched (D ≤ 0.51) and the non-matched (D ≤ 0.43) comparisons as the matched manual analyses (D ≤ 0.41). CONCLUSION: The fully automated T2 analysis showed high agreement, acceptable accuracy, and similar sensitivity to cross-sectional and longitudinal laminar T2 differences in an early OA model, compared with manual expert analysis. TRIAL REGISTRATION: Clinicaltrials.gov identification: NCT00080171.
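The effect size (Cohen's D) used above to compare laminar T2 between groups can be computed as in the following sketch; the numbers are toy values, not study data:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Toy laminar T2 values (ms) for two groups of knees
print(cohens_d([48.2, 50.1, 51.3, 49.8], [46.5, 47.2, 48.0, 47.9]))
```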

16.
Article in English | MEDLINE | ID: mdl-39230610

ABSTRACT

BACKGROUND: Diagnosing and treating tonsillitis pose no significant challenge for otolaryngologists; however, it can increase the infection risk for healthcare professionals amid the coronavirus pandemic. In recent years, with the advancement of artificial intelligence (AI), its application in medical imaging has also thrived. This research aims to identify the optimal convolutional neural network (CNN) algorithm for accurate diagnosis of tonsillitis and early precision treatment. METHODS: Semi-supervised learning with pseudo-labels used for self-training was adopted to train our CNNs, with the algorithms including UNet, PSPNet, and FPN. A total of 485 pharyngoscopic images from 485 participants were included, comprising healthy individuals (133 cases), patients with the common cold (295 cases), and patients with tonsillitis (57 cases). Both color and texture features were extracted from the 485 images for analysis. RESULTS: UNet outperformed PSPNet and FPN in automatically and accurately segmenting oropharyngeal anatomy, with an average Dice coefficient of 97.74% and a pixel accuracy of 98.12%, making it suitable for enhancing the diagnosis of tonsillitis. Normal tonsils generally have more uniform and smooth textures and a pinkish color, similar to the surrounding mucosal tissues, whereas tonsillitis, particularly the antibiotic-requiring type, shows white or yellowish pus-filled spots or patches and a more granular or lumpy texture, indicating inflammation and changes in tissue structure. After training with 485 cases, our algorithm with UNet achieved accuracy rates of 93.75%, 97.1%, and 91.67% in differentiating the three tonsil groups, demonstrating excellent results. CONCLUSION: Our research highlights the potential of using UNet for fully automated semantic segmentation of oropharyngeal structures, which aids subsequent feature extraction and machine learning and enables accurate AI diagnosis of tonsillitis. This innovation shows promise for enhancing both the accuracy and speed of tonsillitis assessments.

17.
Sci Rep ; 14(1): 21298, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266655

ABSTRACT

Learning operators with deep neural networks is an emerging paradigm for scientific computing. Deep Operator Network (DeepONet) is a modular operator learning framework that allows for flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. Moreover, our U-DeepONet is significantly more efficient in training times than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet.
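For context, the core DeepONet construction pairs a branch network (which encodes the input function sampled at fixed sensor locations) with a trunk network (which encodes the query coordinates), combining them with a dot product. The sketch below shows that generic construction in PyTorch; it does not reproduce the paper's U-DeepONet, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class MinimalDeepONet(nn.Module):
    """Branch net encodes the input function u sampled at m sensors;
    trunk net encodes query coordinates y; outputs their dot product."""
    def __init__(self, m_sensors=100, coord_dim=2, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.Tanh(),
                                   nn.Linear(128, p))

    def forward(self, u, y):
        # u: (batch, m_sensors), y: (batch, n_points, coord_dim)
        b = self.branch(u)                       # (batch, p)
        t = self.trunk(y)                        # (batch, n_points, p)
        return torch.einsum("bp,bnp->bn", b, t)  # (batch, n_points)

u = torch.randn(4, 100)       # 4 input functions sampled at 100 sensors
y = torch.randn(4, 50, 2)     # 50 query points per function
print(MinimalDeepONet()(u, y).shape)  # torch.Size([4, 50])
```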

18.
Heliyon ; 10(16): e35933, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39258194

ABSTRACT

The growing interest in subseasonal-to-seasonal (S2S) prediction data across different industries underscores its potential for understanding weather patterns and extreme conditions and for supporting important sectors such as agriculture and energy management. However, concerns about its accuracy have been raised, and enhancing the precision of rainfall predictions remains challenging in S2S forecasts. This study enhanced S2S prediction skill for precipitation amount and occurrence over the East Asian region by employing deep learning-based post-processing techniques. As the deep learning model, we utilized a modified U-Net architecture that wraps all of its convolutional layers with TimeDistributed layers. For the training datasets, the precipitation prediction data of six S2S climate models and their multi-model ensemble (MME) were constructed, and daily precipitation occurrence was obtained from three threshold values: 0 % of daily precipitation for no-rain events, <33 % for light rain, and >67 % for heavy rain. Based on the precipitation amount prediction skill of the six climate models, deep learning-based post-processing outperformed post-processing using multiple linear regression (MLR) at lead times of weeks 2-4. The prediction accuracy of precipitation occurrence with MLR-based post-processing did not significantly improve, whereas deep learning-based post-processing enhanced the prediction accuracy across all lead times, demonstrating superiority over MLR. We thus enhanced the accuracy of forecasting the amount and occurrence of precipitation in individual climate models using deep learning-based post-processing.
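A minimal Keras sketch of the TimeDistributed wrapping described above: the same 2D convolutional filters are applied independently at each step of the forecast sequence. The input shape, filter counts, and pooling choices are illustrative assumptions, not the study's modified U-Net:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: a sequence of daily precipitation fields (time, height, width, channels)
inputs = layers.Input(shape=(7, 64, 64, 1))

# Each convolution is wrapped with TimeDistributed so the same 2D filters
# are applied independently at every time step of the sequence.
x = layers.TimeDistributed(layers.Conv2D(16, 3, padding="same",
                                         activation="relu"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same",
                                         activation="relu"))(x)
x = layers.TimeDistributed(layers.UpSampling2D(2))(x)
outputs = layers.TimeDistributed(layers.Conv2D(1, 1, activation="sigmoid"))(x)

model = models.Model(inputs, outputs)
model.summary()
```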

19.
Diagnostics (Basel) ; 14(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39272696

ABSTRACT

The aim of the research is to develop an automated diagnosis system for the prediction of rheumatoid arthritis (RA) based on artificial intelligence (AI) and quantum computing for hand radiographs and thermal images. The hand radiographs and thermal images were segmented using a UNet++ model and a color-based k-means clustering technique, respectively. Attributes from the segmented regions were generated using the Speeded-Up Robust Features (SURF) feature extractor, and classification was performed using k-star and Hoeffding classifiers. For the ground truth and the predicted test images, the UNet++ segmentation achieved a pixel-wise accuracy of 98.75%, an intersection over union (IoU) of 0.87, and a Dice coefficient of 0.86, indicating a high level of similarity. The custom RA X-ray/thermal imaging network (XTNet) surpassed all the other models for the detection of RA, with classification accuracies of 90% and 93% for the X-ray and thermal imaging modalities, respectively. Furthermore, the study employed a quantum support vector machine (QSVM) as a quantum computing approach, which yielded accuracies of 93.75% and 87.5% for the detection of RA from hand X-ray and thermal images. In addition, a vision transformer (ViT) was employed to classify RA, obtaining an accuracy of 80% for hand X-rays and 90% for thermal images. Thus, based on the performance measures, the RA-XTNet model can be used as an effective automated diagnostic method to diagnose RA accurately and rapidly in hand radiographs and thermal images.

20.
Sensors (Basel) ; 24(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39275364

ABSTRACT

Different types of rural settlement agglomerations have formed and mixed in space during the implementation of the rural revitalization strategy in China. Discriminating them in remote sensing images is of great significance for rural land planning and living environment improvement. Currently, there is a lack of automatic methods for obtaining information on rural settlement differentiation. In this paper, an improved encoder-decoder network structure, ASCEND-UNet, was designed based on the original UNet. It was implemented to segment and classify dispersed and clustered rural settlement buildings from high-resolution satellite images. The ASCEND-UNet model incorporated three components: first, the atrous spatial pyramid pooling (ASPP) multi-scale feature fusion module was added to the encoder; second, the spatial and channel squeeze and excitation (scSE) block was embedded at the skip connections; third, the hybrid dilated convolution (HDC) block was utilized in the decoder. In our proposed framework, the ASPP and HDC were used as multiple dilated convolution blocks to expand the receptive field by introducing a series of convolutions with different dilation rates. The scSE block is an attention mechanism focusing on features in both the spatial and channel dimensions. A series of model comparisons and accuracy assessments with the original UNet, PSPNet, DeepLabV3+, and SegNet verified the effectiveness of our proposed model. Compared with the original UNet model, ASCEND-UNet achieved improvements of 4.67%, 2.80%, 3.73%, and 6.28% in precision, recall, F1-score, and MIoU, respectively. The contributions of the HDC, ASPP, and scSE modules were examined in ablation experiments. Our proposed model obtained more accurate and stable results by integrating multiple dilated convolution blocks with an attention mechanism. This novel model enriches the automatic methods for semantic segmentation of different rural settlements from remote sensing images.
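As one concrete reading of the scSE attention block mentioned above, the sketch below follows the common concurrent spatial-and-channel squeeze-and-excitation formulation in PyTorch; the channel reduction ratio and placement are assumptions, not necessarily ASCEND-UNet's exact design:

```python
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE).

    Channel SE re-weights feature channels via global pooling; spatial SE
    re-weights positions via a 1x1 convolution; the two paths are summed.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

feat = torch.randn(2, 64, 128, 128)    # e.g. a skip-connection feature map
print(SCSEBlock(64)(feat).shape)       # torch.Size([2, 64, 128, 128])
```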
