Results 1-20 of 1,003
1.
Neural Netw ; 181: 106765, 2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39357269

ABSTRACT

Spiking neural networks (SNNs) are gaining popularity in AI research as a low-power alternative in deep learning due to their sparse properties and biological interpretability. Using SNNs for dense prediction tasks is becoming an important research area. In this paper, we first propose a novel modification of the conventional Spiking U-Net architecture by adjusting the firing positions of neurons. The modified network model, named Analog Spiking U-Net (AS U-Net), is capable of incorporating the Convolutional Block Attention Module (CBAM) into the domain of SNNs. This is the first successful implementation of CBAM in SNNs, which has the potential to improve the SNN model's segmentation performance while decreasing information loss. The proposed AS U-Net (with CBAM&ViT) is then trained by direct encoding on a comprehensive dataset obtained by merging several diabetic retinal vessel segmentation datasets. Based on the experimental results, the proposed SNN model achieves the highest segmentation accuracy in retinal vessel segmentation for diabetes mellitus, surpassing other SNN-based models and most ANN-based related models. In addition, under the same structure, our model demonstrates comparable performance to the ANN model. The novel model also achieves state-of-the-art (SOTA) results in comparative experiments when both accuracy and energy consumption are considered (Fig. 1). At the same time, the ablative analysis of CBAM further confirms its feasibility and effectiveness in SNNs, which means that a novel approach could be provided for subsequent deployment and hardware chip application. Finally, we conduct extensive generalization experiments on the same type of segmentation task (ISBI and ISIC), the more complex multi-segmentation task (Synapse), and a series of image generation tasks (MNIST, Day2night, Maps, Facades) in order to visually demonstrate the generality of the proposed method.
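For readers unfamiliar with CBAM, the following is a minimal PyTorch sketch of the standard (non-spiking) CBAM block as commonly defined in the literature; it is only an orientation aid, and the analog-spiking integration described in the abstract differs from this conventional form.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx)[:, :, None, None] * x

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(pooled)) * x

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied to a feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

feats = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feats).shape)   # torch.Size([2, 64, 32, 32])
```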

2.
Front Oncol ; 14: 1433225, 2024.
Article in English | MEDLINE | ID: mdl-39351348

ABSTRACT

Purpose: The 3D U-Net deep neural network structure is widely employed for dose prediction in radiotherapy. However, the attention paid to network depth and its impact on the accuracy and robustness of dose prediction remains inadequate. Methods: 92 cervical cancer patients who underwent Volumetric Modulated Arc Therapy (VMAT) were geometrically augmented to investigate the effects of network depth on dose prediction by training and testing three different 3D U-Net structures with depths of 3, 4, and 5. Results: For the planning target volume (PTV), the differences between predicted and true values of D98, D99, and Homogeneity were 1.00 ± 0.23, 0.32 ± 0.72, and -0.02 ± 0.02, respectively, for the model with a depth of 5; these values were also better than those of the other two models. For most of the organs at risk, the mean and maximum differences between the predicted and true values for the model with a depth of 5 were better than those for the other two models. Conclusions: The results reveal that the network model with a depth of 5 exhibits superior performance, albeit at the expense of the longest training time and largest computational memory among the three models. A small server with two NVIDIA GeForce RTX 3090 GPUs with 24 GB of memory was employed for this training. Because a 3D U-Net model with a depth of more than 5 cannot be supported due to insufficient training memory, a 3D U-Net with a depth of 5 is the commonly used and optimal choice for small servers.
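To make the notion of network depth concrete, here is a hedged sketch of a generic 3D U-Net whose number of resolution levels is a constructor parameter; the channel widths, normalization, and upsampling choices are illustrative assumptions rather than the study's configuration, and the printed parameter counts apply only to this toy setup.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16, depth=5):
        super().__init__()
        chs = [base * 2 ** i for i in range(depth)]   # e.g. depth=5 -> [16, 32, 64, 128, 256]
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ModuleList()
        self.decoders = nn.ModuleList()
        for c in reversed(chs[:-1]):
            self.up.append(nn.ConvTranspose3d(prev, c, 2, stride=2))
            self.decoders.append(conv_block(2 * c, c))   # skip connection doubles the channels
            prev = c
        self.head = nn.Conv3d(prev, out_ch, 1)

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)

# Rough illustration of how parameter count grows with depth (toy channel widths).
for d in (3, 4, 5):
    n = sum(p.numel() for p in UNet3D(depth=d).parameters())
    print(f"depth={d}: {n / 1e6:.1f} M parameters")
```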

3.
Radiat Oncol J ; 42(3): 181-191, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39354821

ABSTRACT

PURPOSE: To generate and investigate a supervised deep learning algorithm for creating synthetic computed tomography (sCT) images from kilovoltage cone-beam computed tomography (kV-CBCT) images for adaptive radiation therapy (ART) in head and neck cancer (HNC). MATERIALS AND METHODS: This study generated the supervised U-Net deep learning model using 3,491 image pairs from planning computed tomography (pCT) and kV-CBCT datasets obtained from 40 HNC patients. The dataset was split into 80% for training and 20% for testing. The evaluation of the sCT images compared to pCT images focused on three aspects: Hounsfield unit accuracy, assessed using mean absolute error (MAE) and root mean square error (RMSE); image quality, evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) between sCT and pCT images; and dosimetric accuracy, encompassing 3D gamma passing rates for dose distribution and percentage dose difference. RESULTS: MAE, RMSE, PSNR, and SSIM improved from their initial values of 53.15 ± 40.09, 153.99 ± 79.78, 47.91 ± 4.98 dB, and 0.97 ± 0.02 to 41.47 ± 30.59, 130.39 ± 78.06, 49.93 ± 6.00 dB, and 0.98 ± 0.02, respectively. Regarding dose evaluation, 3D gamma analysis of the dose distribution within sCT images under 2%/2 mm, 3%/2 mm, and 3%/3 mm criteria yielded passing rates of 92.1% ± 3.8%, 93.8% ± 3.0%, and 96.9% ± 2.0%, respectively. The sCT images exhibited minor variations in the percentage dose distribution of the investigated target and structure volumes. However, it is worth noting that the sCT images exhibited anatomical variations when compared to the pCT images. CONCLUSION: These findings highlight the potential of the supervised U-Net deep learning model in generating kV-CBCT-based sCT images for ART in patients with HNC.
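A brief sketch of how the reported image metrics (MAE, RMSE, PSNR, SSIM) can be computed between a synthetic CT and the planning CT; the array contents and the Hounsfield-unit data range are illustrative assumptions, not values from the study.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def sct_metrics(sct: np.ndarray, pct: np.ndarray, data_range: float = 2000.0) -> dict:
    """Hounsfield-unit accuracy and image-quality metrics between registered sCT and pCT arrays."""
    diff = sct.astype(np.float64) - pct.astype(np.float64)
    return {
        "MAE": float(np.mean(np.abs(diff))),
        "RMSE": float(np.sqrt(np.mean(diff ** 2))),
        "PSNR": peak_signal_noise_ratio(pct, sct, data_range=data_range),
        "SSIM": structural_similarity(pct, sct, data_range=data_range),
    }

# Random arrays standing in for a registered pCT/sCT slice pair.
rng = np.random.default_rng(0)
pct = rng.normal(0, 200, size=(64, 64))
sct = pct + rng.normal(0, 20, size=pct.shape)
print(sct_metrics(sct, pct))
```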

4.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227537

ABSTRACT

Thermography is a non-invasive and non-contact method for detecting cancer in its initial stages by examining the temperature variation between both breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, a modified U-Net architecture (DTCWAU-Net) that uses the dual-tree complex wavelet transform (DTCWT) and attention gates was proposed for breast thermal image segmentation of frontal and lateral view thermograms, aiming to outline the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Classification of breast thermograms into healthy or cancerous categories was carried out by extracting texture- and histogram-based features and deep features from segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. When compared to other state-of-the-art approaches for detecting breast cancer using thermograms, the proposed methodology showed a higher accuracy of 99.90% for VGG16 deep features with NCA and a Random Forest classifier. Simulation results indicate that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.

5.
Skeletal Radiol ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39230576

ABSTRACT

OBJECTIVE: A fully automated laminar cartilage composition (MRI-based T2) analysis method was technically and clinically validated by comparing radiographically normal knees with contra-lateral joint space narrowing (CL-JSN) and those without contra-lateral joint space narrowing or other signs of radiographic osteoarthritis (OA, CL-noROA). MATERIALS AND METHODS: 2D U-Nets were trained from manually segmented femorotibial cartilages (n = 72) from all 7 echoes (AllE), or from the 1st echo only (1stE), of multi-echo spin-echo (MESE) MRIs acquired by the Osteoarthritis Initiative (OAI). Because of its greater accuracy, only the AllE U-Net was then applied to knees from the OAI healthy reference cohort (n = 10), CL-JSN (n = 39), and (1:1) matched CL-noROA knees (n = 39) that all had manual expert segmentation, and to 982 non-matched CL-noROA knees without expert segmentation. RESULTS: The agreement (Dice similarity coefficient) between automated vs. manual expert cartilage segmentation was between 0.82 ± 0.05/0.79 ± 0.06 (AllE/1stE) and 0.88 ± 0.03/0.88 ± 0.03 (AllE/1stE) across femorotibial cartilage plates. The deviation between automated vs. manually derived laminar T2 reached up to -2.2 ± 2.6 ms / +4.1 ± 10.2 ms (AllE/1stE). The AllE U-Net showed a similar sensitivity to cross-sectional laminar T2 differences between CL-JSN and CL-noROA knees in the matched (Cohen's D ≤ 0.54) and the non-matched (D ≤ 0.54) comparison as the matched manual analyses (D ≤ 0.48). Longitudinally, the AllE U-Net also showed a similar sensitivity to CL-JSN vs. CL-noROA differences in the matched (D ≤ 0.51) and the non-matched (D ≤ 0.43) comparison as the matched manual analyses (D ≤ 0.41). CONCLUSION: The fully automated T2 analysis showed high agreement, acceptable accuracy, and similar sensitivity to cross-sectional and longitudinal laminar T2 differences in an early OA model, compared with manual expert analysis. TRIAL REGISTRATION: Clinicaltrials.gov identification: NCT00080171.
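The two quantities driving this validation, the Dice similarity coefficient and Cohen's D, can be computed as in the short NumPy sketch below; the function names and toy data are illustrative, not taken from the study's pipeline.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's D effect size between two groups, using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

rng = np.random.default_rng(1)
auto = rng.random((128, 128)) > 0.5
manual = auto.copy(); manual[:8] = ~manual[:8]          # slightly perturbed "expert" mask
print(round(dice(auto, manual), 3))
print(round(cohens_d(rng.normal(40, 3, 39), rng.normal(41, 3, 39)), 2))
```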

6.
Comput Biol Med ; 182: 109139, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39270456

ABSTRACT

We developed a method for automated detection of motion and noise artifacts (MNA) in electrodermal activity (EDA) signals, based on a one-dimensional U-Net architecture. EDA has been widely employed in diverse applications to assess sympathetic functions. However, EDA signals can be easily corrupted by MNA, which frequently occur in wearable systems, particularly those used for ambulatory recording. MNA can lead to false decisions, resulting in inaccurate assessment and diagnosis. Several approaches have been proposed for MNA detection; however, questions remain regarding the generalizability of these algorithms and the feasibility of implementing them in real time, especially those involving deep learning approaches. In this work, we propose a deep learning approach based on a one-dimensional U-Net architecture using spectrograms of EDA for MNA detection. We developed our method using four distinct datasets, including two independent testing datasets, with a total of 9602 128-s EDA segments from 104 subjects. Our proposed scheme, including data augmentation, spectrogram computation, and 1D U-Net, yielded balanced accuracies of 80.0 ± 13.7 % and 75.0 ± 14.0 % for the two independent test datasets; these results are better than or comparable to those of five other state-of-the-art methods. Additionally, the computation time of our feature computation and machine learning classification was significantly lower than that of other methods (p < .001). The model requires only 0.28 MB of memory, far less than the two deep learning approaches (4.93 and 54.59 MB) used as comparisons in our study. Our model can be implemented in real time in embedded systems, even with limited memory and an inefficient microprocessor, without compromising the accuracy of MNA detection.
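A hedged sketch of the spectrogram step described above, using SciPy on a synthetic 128-s EDA segment; the sampling rate, window length, and log compression are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8.0                                    # assumed EDA sampling rate in Hz
t = np.arange(0, 128, 1 / fs)               # one 128-s segment, as in the abstract
eda = 2 + 0.05 * np.sin(2 * np.pi * 0.05 * t) + 0.01 * np.random.randn(t.size)

# Short-time spectrogram used as the 2D input representation for the U-Net.
f, frames, Sxx = spectrogram(eda, fs=fs, nperseg=64, noverlap=32)
log_spec = np.log1p(Sxx)                    # compress dynamic range before feeding the network
print(log_spec.shape)                       # (frequency bins, time frames)
```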

7.
Heliyon ; 10(16): e35933, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39258194

ABSTRACT

The growing interest in Subseasonal to Seasonal (S2S) prediction data across different industries underscores its potential use in comprehending weather patterns, extreme conditions, and important sectors such as agriculture and energy management. However, concerns about its accuracy have been raised, and enhancing the precision of rainfall predictions remains challenging in S2S forecasts. This study enhanced S2S prediction skills for precipitation amount and occurrence over the East Asian region by employing deep learning-based post-processing techniques. We utilized a modified U-Net architecture that wraps all of its convolutional layers with TimeDistributed layers as the deep learning model. For the training datasets, the precipitation prediction data of six S2S climate models and their multi-model ensemble (MME) were constructed, and the daily precipitation occurrence was derived from three threshold values: 0% of the daily precipitation for no-rain events, <33% for light rain, and >67% for heavy rain. Based on the precipitation amount prediction skills of the six climate models, deep learning-based post-processing outperformed post-processing using multiple linear regression (MLR) at lead times of weeks 2-4. The prediction accuracy of precipitation occurrence with MLR-based post-processing did not significantly improve, whereas deep learning-based post-processing enhanced the prediction accuracy across all lead times, demonstrating superiority over MLR. We thus enhanced the prediction accuracy in forecasting the amount and occurrence of precipitation in individual climate models using deep learning-based post-processing.
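A minimal Keras sketch of the wrapping idea described above: each convolutional layer is enclosed in a TimeDistributed layer so the same spatial weights are applied at every lead-time step. The shapes, filter counts, and the absence of skip connections are simplifications; this is not the paper's full U-Net.

```python
import tensorflow as tf
from tensorflow.keras import layers

time_steps, H, W, C = 4, 64, 64, 7   # e.g. lead-time steps x lat x lon x input channels (assumed)
inputs = layers.Input(shape=(time_steps, H, W, C))
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Conv2D(64, 3, padding="same", activation="relu"))(x)
x = layers.TimeDistributed(layers.UpSampling2D(2))(x)
outputs = layers.TimeDistributed(layers.Conv2D(1, 1, activation="linear"))(x)

model = tf.keras.Model(inputs, outputs)   # predicted precipitation field per lead-time step
model.summary()
```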

8.
Sci Rep ; 14(1): 21298, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266655

ABSTRACT

Learning operators with deep neural networks is an emerging paradigm for scientific computing. Deep Operator Network (DeepONet) is a modular operator learning framework that allows for flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. Moreover, our U-DeepONet is significantly more efficient in training times than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet.
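For orientation, a minimal PyTorch sketch of the plain DeepONet combination of a branch network (encoding the sampled input function) and a trunk network (encoding query coordinates); in U-DeepONet a U-Net presumably takes the place of one of these plain MLPs, and all sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Operator output = inner product of branch and trunk embeddings over a latent dimension p."""
    def __init__(self, m_sensors: int = 100, coord_dim: int = 2, p: int = 64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.ReLU(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.ReLU(), nn.Linear(128, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_samples, y_coords):
        b = self.branch(u_samples)   # (batch, p): encodes the input function samples
        t = self.trunk(y_coords)     # (n_points, p): encodes the query locations
        return b @ t.T + self.bias   # (batch, n_points): operator evaluated at the queries

net = DeepONet()
u = torch.randn(8, 100)   # input function sampled at 100 sensor locations
y = torch.rand(50, 2)     # 50 query coordinates
print(net(u, y).shape)    # torch.Size([8, 50])
```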

9.
Res Sq ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39281859

ABSTRACT

Developmental toxicity (DevTox) tests evaluate the adverse effects of chemical exposures on an organism's development. While large animal tests are currently heavily relied on, the development of new approach methodologies (NAMs) is encouraging industries and regulatory agencies to evaluate these novel assays. Several practical advantages have made C. elegans a useful model for rapid toxicity testing and studying developmental biology. Although the potential to study DevTox is promising, current low-resolution and labor-intensive methodologies prohibit the use of C. elegans for sub-lethal DevTox studies at high throughputs. With the recent availability of a large-scale microfluidic device, vivoChip, we can now rapidly collect 3D high-resolution images of ~1,000 C. elegans from 24 different populations. In this paper, we demonstrate DevTox studies using a 2.5D U-Net architecture (vivoBodySeg) that can precisely segment C. elegans in images obtained from vivoChip devices, achieving an average Dice score of 97.80. The fully automated platform can analyze 36 GB of data from each device to phenotype multiple body parameters within 35 min on a desktop PC, at speeds ~140x faster than manual analysis. Highly reproducible DevTox parameters (4-8% CV) and additional autofluorescence-based phenotypes allow us to assess the toxicity of chemicals with high statistical power.
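A hedged NumPy sketch of the generic "2.5D" idea, stacking neighbouring slices as channels so a 2D network sees local 3D context; the function and parameters are illustrative and not taken from the vivoBodySeg implementation.

```python
import numpy as np

def to_25d_slabs(volume: np.ndarray, context: int = 1) -> np.ndarray:
    """Turn a (Z, H, W) volume into (Z, 2*context+1, H, W) slabs so a 2D network
    receives neighbouring slices as extra input channels (edge slices are clamped)."""
    z = volume.shape[0]
    idx = np.clip(np.arange(z)[:, None] + np.arange(-context, context + 1)[None, :], 0, z - 1)
    return volume[idx]   # fancy indexing gathers each slice together with its neighbours

vol = np.random.rand(40, 96, 96).astype(np.float32)
print(to_25d_slabs(vol, context=2).shape)   # (40, 5, 96, 96)
```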

10.
Front Artif Intell ; 7: 1376546, 2024.
Article in English | MEDLINE | ID: mdl-39315244

ABSTRACT

Background: This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation. Methods: The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance. Results: Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis. Discussion: The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells. Conclusion: This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.
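One common way to instantiate UNet++ with a ResNet34 encoder is via the segmentation_models_pytorch package, as in the hedged sketch below; the abstract does not state the authors' implementation, so every argument here is an assumption.

```python
import segmentation_models_pytorch as smp
import torch

model = smp.UnetPlusPlus(
    encoder_name="resnet34",      # the top-performing encoder reported above
    encoder_weights="imagenet",   # assumed pretraining, not stated in the abstract
    in_channels=3,                # RGB video frames from VISEM (assumed)
    classes=1,                    # binary sperm vs. background mask (assumed)
)

x = torch.randn(1, 3, 256, 256)   # input size must be divisible by 32 for this encoder
with torch.no_grad():
    print(model(x).shape)         # torch.Size([1, 1, 256, 256])
```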

11.
Neuroimage ; 300: 120872, 2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39349149

ABSTRACT

In this study, we introduce MGA-Net, a novel mask-guided attention neural network, which extends the U-net model for precision neonatal brain imaging. MGA-Net is designed to extract the brain from other structures and reconstruct high-quality brain images. The network employs a common encoder and two decoders: one for brain mask extraction and the other for brain region reconstruction. A key feature of MGA-Net is its high-level mask-guided attention module, which leverages features from the brain mask decoder to enhance image reconstruction. To enable the same encoder and decoder to process both MRI and ultrasound (US) images, MGA-Net integrates sinusoidal positional encoding. This encoding assigns distinct positional values to MRI and US images, allowing the model to effectively learn from both modalities. Consequently, features learned from a single modality can aid in learning a modality with less available data, such as US. We extensively validated the proposed MGA-Net on diverse and independent datasets from varied clinical settings and neonatal age groups. The metrics used for assessment included the DICE similarity coefficient, recall, and accuracy for image segmentation; structural similarity for image reconstruction; and root mean squared error for total brain volume estimation from 3D ultrasound images. Our results demonstrate that MGA-Net significantly outperforms traditional methods, offering superior performance in brain extraction and segmentation while achieving high precision in image reconstruction and volumetric analysis. Thus, MGA-Net represents a robust and effective preprocessing tool for MRI and 3D ultrasound images, marking a significant advance in neuroimaging that enhances both research and clinical diagnostics in the neonatal period and beyond.
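A short sketch of standard transformer-style sinusoidal positional encoding, which could be used to tag each modality with a distinct positional value as described above; the dimension, modality IDs, and the way the code is combined with features are assumptions for illustration.

```python
import torch

def sinusoidal_encoding(position: int, dim: int) -> torch.Tensor:
    """Standard sinusoidal code for a single integer position, interleaving sin and cos."""
    i = torch.arange(dim // 2, dtype=torch.float32)
    angles = position / torch.pow(10000.0, 2 * i / dim)
    enc = torch.zeros(dim)
    enc[0::2] = torch.sin(angles)
    enc[1::2] = torch.cos(angles)
    return enc

# Hypothetical use: assign MRI and US distinct positional values and add the resulting
# code to the shared encoder's feature channels so it can tell the two modalities apart.
MODALITY_ID = {"mri": 0, "us": 1}
code = sinusoidal_encoding(MODALITY_ID["us"], dim=64)
print(code.shape)   # torch.Size([64])
```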

12.
Strahlenther Onkol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283345

ABSTRACT

BACKGROUND: The hypothesis that changing network layers can increase the accuracy of dose distribution prediction, instead of expanding their dimensions, which requires complex calculations, was considered in our study. MATERIALS AND METHODS: A total of 137 prostate cancer patients treated with the tomotherapy technique were split into 80% for training and validation and 20% for testing for the nested UNet and UNet architectures. Mean absolute error (MAE) was used to measure the dosimetry indices of dose-volume histograms (DVHs), and geometry indices, including the structural similarity index measure (SSIM), Dice similarity coefficient (DSC), and Jaccard similarity coefficient (JSC), were used to evaluate the isodose volume (IV) similarity prediction. To verify statistically significant differences, a two-sided Wilcoxon test was used at a significance level of 0.05 (p < 0.05). RESULTS: Use of a nested UNet architecture reduced the predicted dose MAE in DVH indices. The MAE for the planning target volume (PTV), bladder, rectum, and right and left femur were D98% = 1.11 ± 0.90; D98% = 2.27 ± 2.85, Dmean = 0.84 ± 0.62; D98% = 1.47 ± 12.02, Dmean = 0.77 ± 1.59; D2% = 0.65 ± 0.70, Dmean = 0.96 ± 2.82; and D2% = 1.18 ± 6.65, Dmean = 0.44 ± 1.13, respectively. Additionally, the greatest geometric similarity was observed in the mean SSIM for UNet and nested UNet (0.91 vs. 0.94, respectively). CONCLUSION: The nested UNet network can be considered a suitable network due to its ability to improve the accuracy of dose distribution prediction compared to the UNet network in an acceptable time.
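The DVH point indices reported above (D98%, D2%, Dmean) and their MAE can be computed from per-structure voxel doses as in this hedged NumPy sketch; the percentile-based definitions and the toy data are illustrative.

```python
import numpy as np

def dvh_indices(dose: np.ndarray) -> dict:
    """Point DVH metrics for one structure's voxel doses (Gy)."""
    return {
        "D98%": float(np.percentile(dose, 2)),    # dose covering 98% of the volume
        "D2%": float(np.percentile(dose, 98)),    # near-maximum dose
        "Dmean": float(dose.mean()),
    }

def dvh_mae(pred: np.ndarray, true: np.ndarray) -> dict:
    """Absolute error of each DVH index between predicted and planned dose arrays."""
    p, t = dvh_indices(pred), dvh_indices(true)
    return {k: abs(p[k] - t[k]) for k in p}

rng = np.random.default_rng(0)
true_dose = rng.normal(50, 2, 5000)              # voxel doses in a PTV, Gy (illustrative)
pred_dose = true_dose + rng.normal(0, 1, 5000)
print(dvh_mae(pred_dose, true_dose))
```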

13.
Sensors (Basel) ; 24(17)2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275756

ABSTRACT

Liver cancer is one of the malignancies with high mortality rates worldwide, and its timely detection and accurate diagnosis are crucial for improving patient prognosis. To address the limitations of traditional image segmentation techniques and the U-Net network in capturing fine image features, this study proposes an improved model based on the U-Net architecture, named RHEU-Net. By replacing traditional convolution modules in the encoder and decoder with improved residual modules, the network's feature extraction capabilities and gradient stability are enhanced. A Hybrid Gated Attention (HGA) module is integrated before the skip connections, enabling the parallel processing of channel and spatial attentions, optimizing the feature fusion strategy, and effectively replenishing image details. A Multi-Scale Feature Enhancement (MSFE) layer is introduced at the bottleneck, utilizing multi-scale feature extraction technology to further enhance the expression of receptive fields and contextual information, improving the overall feature representation effect. Testing on the LiTS2017 dataset demonstrated that RHEU-Net achieved Dice scores of 95.72% for liver segmentation and 70.19% for tumor segmentation. These results validate the effectiveness of RHEU-Net and underscore its potential for clinical application.


Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Neural Networks, Computer; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Algorithms; Liver/diagnostic imaging; Liver/pathology
14.
Magn Reson Med ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39270056

ABSTRACT

PURPOSE: To shorten CEST acquisition time by leveraging Z-spectrum undersampling combined with deep learning for CEST map construction from undersampled Z-spectra. METHODS: Fisher information gain analysis identified optimal frequency offsets (termed "Fisher offsets") for the multi-pool fitting model, maximizing information gain for the amplitude and the FWHM parameters. These offsets guided initial subsampling levels. A U-NET, trained on undersampled brain CEST images from 18 volunteers, produced CEST maps at 3 T with varied undersampling levels. Feasibility was first tested using retrospective undersampling at three levels, followed by prospective in vivo undersampling (15 of 53 offsets), reducing scan time significantly. Additionally, glioblastoma grade IV pathology was simulated to evaluate network performance in patient-like cases. RESULTS: Traditional multi-pool models failed to quantify CEST maps from undersampled images (structural similarity index [SSIM] <0.2, peak SNR <20, Pearson r <0.1). Conversely, U-NET fitting successfully addressed undersampled data challenges. The study suggests CEST scan time reduction is feasible by undersampling 15, 25, or 35 of 53 Z-spectrum offsets. Prospective undersampling cut scan time by 3.5 times, with a maximum mean squared error of 4.4e-4, r = 0.82, and SSIM = 0.84, compared to the ground truth. The network also reliably predicted CEST values for simulated glioblastoma pathology. CONCLUSION: The U-NET architecture effectively quantifies CEST maps from undersampled Z-spectra at various undersampling levels.

15.
bioRxiv ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39314387

ABSTRACT

Motivation: Cryogenic Electron Microscopy (cryo-EM) is a core experimental technique used to determine the structure of macromolecules such as proteins. However, the effectiveness of cryo-EM is often hindered by the noise and missing density values in cryo-EM density maps caused by experimental conditions such as low contrast and conformational heterogeneity. Although various global and local map sharpening techniques are widely employed to improve cryo-EM density maps, it is still challenging to efficiently improve their quality for building better protein structures from them. Results: In this study, we introduce CryoTEN - a three-dimensional U-Net style transformer to improve cryo-EM maps effectively. CryoTEN is trained using a diverse set of 1,295 cryo-EM maps as inputs and their corresponding simulated maps generated from known protein structures as targets. An independent test set containing 150 maps is used to evaluate CryoTEN, and the results demonstrate that it can robustly enhance the quality of cryo-EM density maps. In addition, automatic de novo protein structure modeling shows that the protein structures built from the density maps processed by CryoTEN have substantially better quality than those built from the original maps. Compared to existing state-of-the-art deep learning methods for enhancing cryo-EM density maps, CryoTEN ranks second in improving the quality of density maps, while running >10 times faster and requiring much less GPU memory. Availability and implementation: The source code and data are freely available at https://github.com/jianlin-cheng/cryoten.

16.
Int J Geogr Inf Sci ; 38(10): 2061-2082, 2024.
Article in English | MEDLINE | ID: mdl-39318700

ABSTRACT

Cartographic map generalization involves complex rules, and full automation has still not been achieved, despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, DNNs have still been used as black-box models in previous studies. We argue that integrating explainable AI (XAI) into a deep learning (DL)-based map generalization process can give more insights for developing and refining the DNNs by understanding what cartographic knowledge exactly is learned. Following an XAI framework for an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of input features regarding the prediction of a pre-trained ResU-Net model. This experimental case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to the building boundaries than to the interior parts of the buildings. We thus suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for assessing raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI as part of future DL-based map generalization development frameworks.
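A hedged sketch of the boundary intersection-over-union metric suggested above: the IoU is computed only over pixels near each mask's boundary, extracted by binary erosion; the boundary width and implementation details are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary(mask: np.ndarray, width: int = 1) -> np.ndarray:
    """Pixels of `mask` within `width` erosion steps of its boundary."""
    mask = mask.astype(bool)
    eroded = binary_erosion(mask, iterations=width)
    return mask & ~eroded

def boundary_iou(pred: np.ndarray, gt: np.ndarray, width: int = 1) -> float:
    """IoU restricted to the boundary bands of the predicted and reference masks."""
    pb, gb = boundary(pred, width), boundary(gt, width)
    union = np.logical_or(pb, gb).sum()
    return 1.0 if union == 0 else np.logical_and(pb, gb).sum() / union

gt = np.zeros((64, 64), bool); gt[16:48, 16:48] = True
pred = np.zeros((64, 64), bool); pred[17:49, 16:48] = True    # shifted by one pixel
print(round(boundary_iou(pred, gt, width=2), 3))
```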

17.
Diagnostics (Basel) ; 14(18)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39335778

ABSTRACT

Background/Objective: This study aims to utilize advanced artificial intelligence (AI) image recognition technologies to establish a robust system for identifying features in lung computed tomography (CT) scans, thereby detecting respiratory infections such as SARS-CoV-2 pneumonia. Specifically, the research focuses on developing a new model called Residual-Dense-Attention Gates U-Net (RDAG U-Net) to improve accuracy and efficiency in identification. Methods: This study employed Attention U-Net, Attention Res U-Net, and the newly developed RDAG U-Net model. RDAG U-Net extends the U-Net architecture by incorporating ResBlock and DenseBlock modules in the encoder to retain training parameters and reduce computation time. The training dataset includes 3,520 CT scans from an open database, augmented to 10,560 samples through data enhancement techniques. The research also focused on optimizing convolutional architectures, image preprocessing, interpolation methods, data management, and extensive fine-tuning of training parameters and neural network modules. Result: The RDAG U-Net model achieved an outstanding accuracy of 93.29% in identifying pulmonary lesions, with a 45% reduction in computation time compared to other models. The study demonstrated that RDAG U-Net performed stably during training and exhibited good generalization capability by evaluating loss values, model-predicted lesion annotations, and validation-epoch curves. Furthermore, using ITK-Snap to convert 2D predictions into 3D lung and lesion segmentation models, the results delineated lesion contours, enhancing interpretability. Conclusion: The RDAG U-Net model showed significant improvements in accuracy and efficiency in the analysis of CT images for SARS-CoV-2 pneumonia, achieving a 93.29% recognition accuracy and reducing computation time by 45% compared to other models. These results indicate the potential of the RDAG U-Net model in clinical applications, as it can accelerate the detection of pulmonary lesions and effectively enhance diagnostic accuracy. Additionally, the 2D and 3D visualization results allow physicians to understand lesions' morphology and distribution better, strengthening decision support capabilities and providing valuable medical diagnosis and treatment planning tools.

18.
Brain Inform ; 11(1): 24, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39325110

ABSTRACT

Light Sheet Fluorescence Microscopy (LSFM) is increasingly popular in neuroimaging for its ability to capture high-resolution 3D neural data. However, the presence of stripe noise significantly degrades image quality, particularly in complex 3D stripes with varying widths and brightness, posing challenges in neuroscience research. Existing stripe removal algorithms excel in suppressing noise and preserving details in 2D images with simple stripes but struggle with the complexity of 3D stripes. To address this, we propose a novel 3D U-net model for Stripe Removal in Light sheet fluorescence microscopy (USRL). This approach directly learns and removes stripes in 3D space across different scales, employing a dual-resolution strategy to effectively handle stripes of varying complexities. Additionally, we integrate a nonlinear mapping technique to normalize high dynamic range and unevenly distributed data before applying the stripe removal algorithm. We validate our method on diverse datasets, demonstrating substantial improvements in peak signal-to-noise ratio (PSNR) compared to existing algorithms. Moreover, our algorithm exhibits robust performance when applied to real LSFM data. Through extensive validation experiments, both on test sets and real-world data, our approach outperforms traditional methods, affirming its effectiveness in enhancing image quality. Furthermore, the adaptability of our algorithm extends beyond LSFM applications to encompass other imaging modalities. This versatility underscores its potential to enhance image usability across various research disciplines.
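One generic way to realize the nonlinear mapping described above (normalizing high-dynamic-range, unevenly distributed intensities before stripe removal) is a log transform followed by robust percentile rescaling, as sketched below; the exact mapping used in USRL may differ.

```python
import numpy as np

def normalize_hdr(volume: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.9) -> np.ndarray:
    """Compress high dynamic range with a log transform, then rescale to [0, 1]
    using robust percentiles. A generic choice, not the paper's exact mapping."""
    v = np.log1p(np.clip(volume, 0, None).astype(np.float32))
    lo, hi = np.percentile(v, [low_pct, high_pct])
    return np.clip((v - lo) / (hi - lo + 1e-8), 0.0, 1.0)

vol = np.random.gamma(shape=0.5, scale=5000.0, size=(32, 128, 128))   # skewed, HDR-like intensities
print(normalize_hdr(vol).min(), normalize_hdr(vol).max())
```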

19.
Bioengineering (Basel) ; 11(9)2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39329607

ABSTRACT

The precise segmentation of different regions of the prostate is crucial in the diagnosis and treatment of prostate-related diseases. However, the scarcity of labeled prostate data poses a challenge for the accurate segmentation of its different regions. We perform the segmentation of different regions of the prostate using U-Net- and Vision Transformer (ViT)-based architectures. We use five semi-supervised learning methods, including entropy minimization, cross pseudo-supervision, mean teacher, uncertainty-aware mean teacher (UAMT), and interpolation consistency training (ICT) to compare the results with the state-of-the-art prostate semi-supervised segmentation network uncertainty-aware temporal self-learning (UATS). The UAMT method improves the prostate segmentation accuracy and provides stable prostate region segmentation results. ICT plays a more stable role in the prostate region segmentation results, which provides strong support for the medical image segmentation task, and demonstrates the robustness of U-Net for medical image segmentation. UATS is still more applicable to the U-Net backbone and has a very significant effect on a positive prediction rate. However, the performance of ViT in combination with semi-supervision still requires further optimization. This comparative analysis applies various semi-supervised learning methods to prostate zonal segmentation. It guides future prostate segmentation developments and offers insights into utilizing limited labeled data in medical imaging.
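For reference, the core of the mean teacher scheme named above is an exponential moving average (EMA) of the student weights; the sketch below shows that update in PyTorch with a toy network, and all names and the EMA decay are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)          # the teacher is never updated by gradients

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, alpha: float = 0.99) -> None:
    """Teacher weights track an exponential moving average of the student weights;
    the unlabeled consistency loss is computed against the teacher's predictions."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s.detach(), alpha=1.0 - alpha)

ema_update(teacher, student)         # called once per training step after the optimizer update
```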

20.
Sensors (Basel) ; 24(18)2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39338791

ABSTRACT

There are two widely used methods to measure the cardiac cycle and obtain heart rate measurements: the electrocardiogram (ECG) and the photoplethysmogram (PPG). The sensors used in these methods have gained great popularity in wearable devices, which have extended cardiac monitoring beyond the hospital environment. However, the continuous monitoring of ECG signals via mobile devices is challenging, as it requires users to keep their fingers pressed on the device during data collection, making it unfeasible in the long term. The PPG, on the other hand, does not have this limitation. However, the medical knowledge needed to diagnose these anomalies from the PPG signal is limited, since the ECG is the signal studied and used in the literature as the gold standard. To minimize this problem, this work proposes a method, PPG2ECG, that uses the correlation between the domains of PPG and ECG signals to infer the waveform of the ECG signal from the PPG signal. PPG2ECG performs a mapping between domains by applying a set of convolution filters, learning to transform a PPG input signal into an ECG output signal using a U-net inception neural network architecture. We assessed our proposed method using two evaluation strategies based on personalized and generalized models and achieved mean error values of 0.015 and 0.026, respectively. Our method overcomes the limitations of previous approaches by providing an accurate and feasible method for continuous monitoring of ECG signals through PPG signals. The small differences between the inferred ECG and the original ECG demonstrate the feasibility and potential of our method to assist in the early identification of heart diseases.


Subjects
Electrocardiography; Heart Rate; Neural Networks, Computer; Photoplethysmography; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Photoplethysmography/methods; Heart Rate/physiology; Algorithms; Wearable Electronic Devices