Results 1 - 20 of 1,004
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38860738

ABSTRACT

Picking protein particles in cryo-electron microscopy (cryo-EM) micrographs is a crucial step in cryo-EM-based structure determination. However, existing methods, trained on limited amounts of cryo-EM data, still cannot accurately pick protein particles from noisy cryo-EM images. General foundation models for image segmentation, such as Meta's Segment Anything Model (SAM), cannot segment protein particles well because their training data do not include cryo-EM images. Here, we present CryoSegNet, a novel approach that integrates SAM with an attention-gated U-shaped network (U-Net) specially designed and trained for cryo-EM particle picking. The U-Net is first trained on a large cryo-EM image dataset and then used to generate input from original cryo-EM images for SAM to pick particles. CryoSegNet shows both high precision and high recall in segmenting protein particles from cryo-EM micrographs, irrespective of protein type, shape, and size. On several independent datasets of various protein types, CryoSegNet outperforms two top machine learning particle pickers, crYOLO and Topaz, as well as SAM itself. The average resolution of density maps reconstructed from the particles picked by CryoSegNet is 3.33 Å, 7% better than Topaz's 3.58 Å and 14% better than crYOLO's 3.87 Å. CryoSegNet is publicly available at https://github.com/jianlin-cheng/CryoSegNet.
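As a quick arithmetic check, the percentage improvements quoted in the abstract follow directly from the reported map resolutions (the three resolutions are taken from the abstract; the helper name is ours):

```python
# Sanity-check the resolution improvements reported for CryoSegNet
# (3.33 Å) against Topaz (3.58 Å) and crYOLO (3.87 Å). Percentages
# are relative to the competing picker's resolution.

def relative_improvement(ours: float, other: float) -> float:
    """Fractional resolution improvement of `ours` over `other`."""
    return (other - ours) / other

vs_topaz = relative_improvement(3.33, 3.58)   # ~0.07  -> "7% better"
vs_cryolo = relative_improvement(3.33, 3.87)  # ~0.14  -> "14% better"
```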


Subject(s)
Cryoelectron Microscopy , Image Processing, Computer-Assisted , Cryoelectron Microscopy/methods , Image Processing, Computer-Assisted/methods , Proteins/chemistry , Artificial Intelligence , Algorithms , Databases, Protein
2.
Neuroimage ; 300: 120872, 2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39349149

ABSTRACT

In this study, we introduce MGA-Net, a novel mask-guided attention neural network, which extends the U-net model for precision neonatal brain imaging. MGA-Net is designed to extract the brain from other structures and reconstruct high-quality brain images. The network employs a common encoder and two decoders: one for brain mask extraction and the other for brain region reconstruction. A key feature of MGA-Net is its high-level mask-guided attention module, which leverages features from the brain mask decoder to enhance image reconstruction. To enable the same encoder and decoder to process both MRI and ultrasound (US) images, MGA-Net integrates sinusoidal positional encoding. This encoding assigns distinct positional values to MRI and US images, allowing the model to effectively learn from both modalities. Consequently, features learned from a single modality can aid in learning a modality with less available data, such as US. We extensively validated the proposed MGA-Net on diverse and independent datasets from varied clinical settings and neonatal age groups. The metrics used for assessment included the DICE similarity coefficient, recall, and accuracy for image segmentation; structural similarity for image reconstruction; and root mean squared error for total brain volume estimation from 3D ultrasound images. Our results demonstrate that MGA-Net significantly outperforms traditional methods, offering superior performance in brain extraction and segmentation while achieving high precision in image reconstruction and volumetric analysis. Thus, MGA-Net represents a robust and effective preprocessing tool for MRI and 3D ultrasound images, marking a significant advance in neuroimaging that enhances both research and clinical diagnostics in the neonatal period and beyond.
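The sinusoidal positional encoding that lets the shared encoder distinguish MRI from US inputs can be sketched as follows; the Transformer-style formula and the distinct positional values 0/1 are assumptions for illustration, not details taken from MGA-Net:

```python
import numpy as np

# Hypothetical sketch of the modality tag described in the abstract: a
# sinusoidal positional encoding whose position differs for MRI vs. US,
# so one encoder/decoder pair can tell the two modalities apart.

def sinusoidal_encoding(position: int, dim: int) -> np.ndarray:
    """Standard sin/cos positional encoding for a single position."""
    i = np.arange(dim // 2)
    angles = position / np.power(10000.0, 2 * i / dim)
    enc = np.empty(dim)
    enc[0::2] = np.sin(angles)   # even slots: sine
    enc[1::2] = np.cos(angles)   # odd slots: cosine
    return enc

MRI_POS, US_POS = 0, 1           # assumed distinct positional values per modality
mri_tag = sinusoidal_encoding(MRI_POS, 64)
us_tag = sinusoidal_encoding(US_POS, 64)
```

The two tags are deterministic and distinct, so the network can condition its behavior on which modality it is processing.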

3.
Development ; 148(21)2021 11 01.
Article in English | MEDLINE | ID: mdl-34739029

ABSTRACT

Genome editing simplifies the generation of new animal models for congenital disorders. However, the detailed and unbiased phenotypic assessment of altered embryonic development remains a challenge. Here, we explore how deep learning (U-Net) can automate segmentation tasks in various imaging modalities, and we quantify phenotypes of altered renal, neural and craniofacial development in Xenopus embryos in comparison with normal variability. We demonstrate the utility of this approach in embryos with polycystic kidneys (pkd1 and pkd2) and craniofacial dysmorphia (six1). We highlight how in toto light-sheet microscopy facilitates accurate reconstruction of brain and craniofacial structures within X. tropicalis embryos upon dyrk1a and six1 loss of function or treatment with retinoic acid inhibitors. These tools increase the sensitivity and throughput of evaluating developmental malformations caused by chemical or genetic disruption. Furthermore, we provide a library of pre-trained networks and detailed instructions for applying deep learning to the reader's own datasets. We demonstrate the versatility, precision and scalability of deep neural network phenotyping on embryonic disease models. By combining light-sheet microscopy and deep learning, we provide a framework for higher-throughput characterization of embryonic model organisms. This article has an associated 'The people behind the papers' interview.


Subject(s)
Deep Learning , Embryonic Development/genetics , Phenotype , Animals , Craniofacial Abnormalities/embryology , Craniofacial Abnormalities/genetics , Craniofacial Abnormalities/pathology , Disease Models, Animal , Image Processing, Computer-Assisted , Mice , Microscopy , Mutation , Neural Networks, Computer , Neurodevelopmental Disorders/genetics , Neurodevelopmental Disorders/pathology , Polycystic Kidney Diseases/embryology , Polycystic Kidney Diseases/genetics , Polycystic Kidney Diseases/pathology , Xenopus Proteins/genetics , Xenopus laevis
4.
J Synchrotron Radiat ; 31(Pt 1): 136-149, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38095668

ABSTRACT

Bone material contains a hierarchical network of micro- and nano-cavities and channels, known as the lacuna-canalicular network (LCN), that is thought to play an important role in mechanobiology and turnover. The LCN comprises micrometer-sized lacunae, voids that house osteocytes, and submicrometer-sized canaliculi that connect bone cells. Characterization of this network in three dimensions is crucial for many bone studies. To quantify X-ray Zernike phase-contrast nanotomography data, deep learning is used to isolate and assess porosity in artifact-laden tomograms of zebrafish bone. A technical solution is proposed to overcome the halo and shade-off artifact domains in order to reliably obtain the distribution and morphology of the LCN in the tomographic data. Convolutional neural network (CNN) models are utilized with increasing numbers of images, repeatedly validated by "error loss" and "accuracy" metrics. U-Net and Sensor3D CNN models were trained on data obtained from two different synchrotron Zernike phase-contrast transmission X-ray microscopes: the ANATOMIX beamline at SOLEIL (Paris, France) and the P05 beamline at PETRA III (Hamburg, Germany). The Sensor3D CNN model with a smaller batch size of 32 and a training data size of 70 images showed the best performance (accuracy 0.983 and error loss 0.032). The analysis procedures, validated by comparison with human-identified ground-truth images, correctly identified the voids within the bone matrix. The proposed approach may be further applied to classify structures in volumetric images that contain non-linear artifacts degrading image quality and hindering feature identification.
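Once the network has labelled the LCN voids, porosity reduces to a voxel-counting step. A minimal sketch on a synthetic binary mask (the ~5% void fraction and mask size are illustrative, not values from the study):

```python
import numpy as np

# Toy 3D segmentation mask: 1 = void/LCN voxel, 0 = mineralized matrix.
rng = np.random.default_rng(0)
mask = (rng.random((32, 32, 32)) < 0.05).astype(np.uint8)  # ~5% voids

# Porosity = labelled-void fraction of the imaged bone volume.
porosity = mask.sum() / mask.size
```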


Subject(s)
Deep Learning , Animals , Humans , Artifacts , Porosity , Zebrafish , Bone and Bones/diagnostic imaging , Image Processing, Computer-Assisted/methods
5.
Magn Reson Med ; 91(3): 1149-1164, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37929695

ABSTRACT

PURPOSE: Preclinical MR fingerprinting (MRF) suffers from long acquisition times for organ-level coverage due to demanding image resolution and limited undersampling capacity. This study aims to develop a deep learning-assisted fast MRF framework for sub-millimeter T1 and T2 mapping of the entire macaque brain on a preclinical 9.4 T MR system. METHODS: Three-dimensional MRF images were reconstructed by singular value decomposition (SVD) compressed reconstruction. T1 and T2 mapping for each axial slice exploited a self-attention assisted residual U-Net to suppress aliasing-induced quantification errors, and transmit-field (B1+) measurements for robustness against B1+ inhomogeneity. Supervised network training used MRF images simulated via virtual parametric maps and the desired undersampling scheme. This strategy bypassed the difficulty of acquiring fully sampled preclinical MRF data to guide network training. The proposed fast MRF framework was tested on experimental data acquired from ex vivo and in vivo macaque brains. RESULTS: The trained network showed reasonable adaptability to experimental MRF images, enabling robust delineation of various T1 and T2 distributions in the brain tissues. Further, the proposed MRF framework outperformed several existing fast MRF methods in handling aliasing artifacts and capturing detailed cerebral structures in the mapping results. Parametric mapping of the entire macaque brain at a nominal resolution of 0.35 × 0.35 × 1 mm³ can be realized via a 20-min 3D MRF scan, sixfold faster than the baseline protocol. CONCLUSION: Introducing deep learning to the MRF framework paves the way for efficient organ-level high-resolution quantitative MRI in preclinical applications.


Subject(s)
Deep Learning , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
6.
Magn Reson Med ; 91(5): 2044-2056, 2024 May.
Article in English | MEDLINE | ID: mdl-38193276

ABSTRACT

PURPOSE: Subject movement during the MR examination is inevitable; it not only causes image artifacts but also deteriorates the homogeneity of the main magnetic field (B0), which is a prerequisite for high quality data. Thus, characterizing changes to B0, for example those induced by patient movement, is important for MR applications that are prone to B0 inhomogeneities. METHODS: We propose a deep learning-based method to predict such changes within the brain from the change of head position, to facilitate retrospective or even real-time correction. A 3D U-Net was trained on in vivo gradient-echo brain 7 T MRI data. The input consisted of B0 maps and anatomical images at an initial position, and anatomical images at a different head position (obtained by applying a rigid-body transformation to the initial anatomical image). The output consisted of B0 maps at the new head positions. We further fine-tuned the network weights for each subject by measuring a limited number of head positions of the given subject and training the U-Net with these data. RESULTS: Our approach was compared to established dynamic B0 field mapping via interleaved navigators, which suffers from limited spatial resolution and the need for undesirable sequence modifications. Qualitative and quantitative comparison showed similar performance between the navigator-based method and the proposed method. CONCLUSION: It is feasible to predict B0 maps from rigid subject movement and, when combined with external tracking hardware, this information could be used to improve the quality of MR acquisitions without the use of navigators.
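The rigid-body transformation applied to the initial anatomical image is a rotation plus a translation; a minimal sketch on 3D point coordinates (the rotation axis, angle, and shift below are illustrative, not values from the study):

```python
import numpy as np

def rigid_transform(points: np.ndarray, yaw_rad: float, shift: np.ndarray) -> np.ndarray:
    """Rotate points about the z-axis by yaw_rad, then translate by shift."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T + shift

pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 2.0]])
# 90-degree yaw plus a 1 mm shift along z (illustrative head repositioning).
moved = rigid_transform(pts, np.pi / 2, np.array([0.0, 0.0, 1.0]))
```

A rigid transform preserves all pairwise distances, which is the defining property exploited when resampling the anatomical image to the new head position.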


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Retrospective Studies , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Motion , Movement , Image Processing, Computer-Assisted/methods , Artifacts
7.
Magn Reson Med ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39270056

ABSTRACT

PURPOSE: To shorten CEST acquisition time by leveraging Z-spectrum undersampling combined with deep learning for CEST map construction from undersampled Z-spectra. METHODS: Fisher information gain analysis identified optimal frequency offsets (termed "Fisher offsets") for the multi-pool fitting model, maximizing information gain for the amplitude and the FWHM parameters. These offsets guided initial subsampling levels. A U-NET, trained on undersampled brain CEST images from 18 volunteers, produced CEST maps at 3 T with varied undersampling levels. Feasibility was first tested using retrospective undersampling at three levels, followed by prospective in vivo undersampling (15 of 53 offsets), reducing scan time significantly. Additionally, glioblastoma grade IV pathology was simulated to evaluate network performance in patient-like cases. RESULTS: Traditional multi-pool models failed to quantify CEST maps from undersampled images (structural similarity index [SSIM] <0.2, peak SNR <20, Pearson r <0.1). Conversely, U-NET fitting successfully addressed undersampled data challenges. The study suggests CEST scan time reduction is feasible by undersampling 15, 25, or 35 of 53 Z-spectrum offsets. Prospective undersampling cut scan time by 3.5 times, with a maximum mean squared error of 4.4e-4, r = 0.82, and SSIM = 0.84, compared to the ground truth. The network also reliably predicted CEST values for simulated glioblastoma pathology. CONCLUSION: The U-NET architecture effectively quantifies CEST maps from undersampled Z-spectra at various undersampling levels.
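Assuming scan time scales linearly with the number of acquired offsets, the prospective protocol's 15 of 53 Z-spectrum offsets reproduces roughly the reported 3.5× scan-time reduction:

```python
# Quick arithmetic check of the abstract's prospective undersampling:
# 53 offsets in the full protocol, 15 acquired prospectively.
full_offsets, kept_offsets = 53, 15
acceleration = full_offsets / kept_offsets   # ~3.53
```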

8.
Magn Reson Med ; 92(6): 2616-2630, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39044635

ABSTRACT

PURPOSE: To develop a deep learning-based approach to reduce the scan time of multipool CEST MRI for Parkinson's disease (PD) while maintaining sufficient prediction accuracy. METHOD: A deep learning approach based on a modified one-dimensional U-Net, termed Z-spectral compressed sensing (CS), was proposed to recover dense Z-spectra from sparse ones. The neural network was trained using simulated Z-spectra generated by the Bloch equation with various parameter settings. Its feasibility and effectiveness were validated through numerical simulations and in vivo rat brain experiments, compared with commonly used linear, pchip, and Lorentzian interpolation methods. The proposed method was applied to detect metabolism-related changes in the 6-hydroxydopamine PD model with multipool CEST MRI, including APT, CEST@2 ppm, nuclear Overhauser enhancement, direct saturation, and magnetization transfer, and the prediction performance was evaluated by area under the curve. RESULTS: The numerical simulations and in vivo rat-brain experiments demonstrated that the proposed method could yield superior fidelity in retrieving dense Z-spectra compared with existing methods. Significant differences were observed in APT, CEST@2 ppm, nuclear Overhauser enhancement, and direct saturation between the striatum regions of wild-type and PD models, whereas magnetization transfer exhibited no significant difference. Receiver operating characteristic analysis demonstrated that multipool CEST achieved better predictive performance compared with individual pools. Combined with Z-spectral CS, the scan time of multipool CEST MRI can be reduced to 33% without distinctly compromising prediction accuracy. CONCLUSION: The integration of Z-spectral CS with multipool CEST MRI can enhance the prediction accuracy of PD and maintain the scan time within a reasonable range.
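The linear-interpolation baseline the authors compare against can be sketched on a synthetic Z-spectrum (a single Lorentzian direct-saturation dip; the amplitude, width, and sampling pattern are illustrative):

```python
import numpy as np

# Dense "ground truth" Z-spectrum: a Lorentzian dip centered at 0 ppm.
offsets_dense = np.linspace(-10, 10, 101)                        # ppm
z_dense = 1.0 - 0.8 / (1.0 + (offsets_dense / 1.5) ** 2)

# Sparse acquisition: keep every 5th offset (1 ppm spacing).
sparse_idx = np.arange(0, 101, 5)
offsets_sparse = offsets_dense[sparse_idx]
z_sparse = z_dense[sparse_idx]

# Linear-interpolation recovery of the dense spectrum.
z_recovered = np.interp(offsets_dense, offsets_sparse, z_sparse)
max_err = np.max(np.abs(z_recovered - z_dense))
```

The interpolation error concentrates around the sharp dip, which is exactly where multipool quantification is most sensitive; this is the gap the learned Z-spectral compressed-sensing recovery is designed to close.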


Subject(s)
Brain , Deep Learning , Magnetic Resonance Imaging , Parkinson Disease , Animals , Rats , Parkinson Disease/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Rats, Sprague-Dawley , Image Processing, Computer-Assisted/methods , Male , Algorithms , Computer Simulation
9.
Strahlenther Onkol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283345

ABSTRACT

BACKGROUND: Our study considers the hypothesis that changing network layers can increase the accuracy of dose distribution prediction, instead of expanding their dimensions, which requires complex calculations. MATERIALS AND METHODS: A total of 137 prostate cancer patients treated with the tomotherapy technique were split into 80% for training and validation and 20% for testing of the nested UNet and UNet architectures. Mean absolute error (MAE) was used to measure the dosimetry indices of dose-volume histograms (DVHs), and geometry indices, including the structural similarity index measure (SSIM), Dice similarity coefficient (DSC), and Jaccard similarity coefficient (JSC), were used to evaluate isodose volume (IV) similarity prediction. To verify statistically significant differences, the two-sided Wilcoxon test was used at a level of 0.05 (p < 0.05). RESULTS: Use of a nested UNet architecture reduced the predicted dose MAE in DVH indices. The MAE for the planning target volume (PTV), bladder, rectum, and right and left femur were D98% = 1.11 ± 0.90; D98% = 2.27 ± 2.85, Dmean = 0.84 ± 0.62; D98% = 1.47 ± 12.02, Dmean = 0.77 ± 1.59; D2% = 0.65 ± 0.70, Dmean = 0.96 ± 2.82; and D2% = 1.18 ± 6.65, Dmean = 0.44 ± 1.13, respectively. Additionally, the greatest geometric similarity was observed in the mean SSIM for UNet versus nested UNet (0.91 vs. 0.94, respectively). CONCLUSION: The nested UNet network can be considered a suitable network due to its ability to improve the accuracy of dose distribution prediction compared to the UNet network within an acceptable time.
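The MAE used for the DVH comparison is straightforward to compute; the D98% values below are made up for illustration, not taken from the study:

```python
import numpy as np

def mae(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute error between predicted and reference DVH indices."""
    return float(np.mean(np.abs(predicted - reference)))

# Hypothetical planned vs. predicted PTV D98% values (Gy), per patient.
planned_d98 = np.array([74.0, 73.5, 75.2, 74.8])
predicted_d98 = np.array([73.2, 74.6, 74.9, 75.9])

error = mae(predicted_d98, planned_d98)
```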

10.
Eur J Nucl Med Mol Imaging ; 51(7): 1937-1954, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38326655

ABSTRACT

PURPOSE: Total metabolic tumor volume (TMTV) segmentation has significant value in enabling quantitative imaging biomarkers for lymphoma management. In this work, we tackle the challenging task of automated tumor delineation in lymphoma from PET/CT scans using a cascaded approach. METHODS: Our study included 1418 2-[18F]FDG PET/CT scans from four different centers. The dataset was divided into 900 scans for the development/validation/testing phases and 518 for multi-center external testing. The former consisted of 450 lymphoma, lung cancer, and melanoma scans, along with 450 negative scans, while the latter consisted of lymphoma patients from different centers with diffuse large B cell, primary mediastinal large B cell, and classic Hodgkin lymphoma cases. Our approach involves resampling PET/CT images into different voxel sizes in the first step, followed by training multi-resolution 3D U-Nets on each resampled dataset using a fivefold cross-validation scheme. The models trained on different data splits were ensembled. After applying soft voting to the predicted masks, in the second step, we input the probability-averaged predictions, along with the input imaging data, into another 3D U-Net. Models were trained with a semi-supervised loss. We additionally considered the effectiveness of using test-time augmentation (TTA) to improve segmentation performance after training. In addition to quantitative analysis, including Dice score (DSC) and TMTV comparisons, a qualitative evaluation was also conducted by nuclear medicine physicians. RESULTS: Our cascaded soft-voting-guided approach achieved an average DSC of 0.68 ± 0.12 for the internal test data from the development dataset, and an average DSC of 0.66 ± 0.18 on the multi-site external data (n = 518), significantly outperforming (p < 0.001) state-of-the-art (SOTA) approaches including nnU-Net and SWIN UNETR. While TTA yielded enhanced performance gains for some of the comparator methods, its impact on our cascaded approach was negligible (DSC: 0.66 ± 0.16). Our approach reliably quantified TMTV, with a correlation of 0.89 with the ground truth (p < 0.001). Furthermore, in terms of visual assessment, concordance between quantitative evaluations and clinician feedback was observed in the majority of cases. The average relative error (ARE) and absolute error (AE) in TMTV prediction on the external multi-centric dataset were ARE = 0.43 ± 0.54 and AE = 157.32 ± 378.12 mL for all the external test data (n = 518), and ARE = 0.30 ± 0.22 and AE = 82.05 ± 99.78 mL when the 10% outliers (n = 53) were excluded. CONCLUSION: TMTV-Net demonstrates strong performance and generalizability in TMTV segmentation across multi-site external datasets, encompassing various lymphoma subtypes. A negligible reduction of 2% in overall performance during testing on external data highlights robust model generalizability across different centers and cancer types, likely attributable to its training with resampled inputs. Our model is publicly available, allowing easy multi-site evaluation and generalizability analysis on datasets from different institutions.
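Two ingredients named above, soft voting across folds and the Dice score used for evaluation, can be sketched minimally (toy 1D "masks" for illustration; the real pipeline operates on 3D volumes):

```python
import numpy as np

def soft_vote(prob_maps: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Average per-fold probability maps, then binarize."""
    return (prob_maps.mean(axis=0) >= threshold).astype(np.uint8)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Three folds' probability maps over four voxels.
folds = np.array([[0.9, 0.2, 0.6, 0.1],
                  [0.8, 0.4, 0.4, 0.2],
                  [0.7, 0.3, 0.8, 0.1]])
pred = soft_vote(folds)        # -> [1, 0, 1, 0]
truth = np.array([1, 0, 0, 0])
score = dice(pred, truth)      # 2*1 / (2+1) = 0.666...
```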


Subject(s)
Image Processing, Computer-Assisted , Lymphoma , Positron Emission Tomography Computed Tomography , Tumor Burden , Humans , Positron Emission Tomography Computed Tomography/methods , Lymphoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Fluorodeoxyglucose F18 , Automation , Male , Female
11.
Calcif Tissue Int ; 115(4): 362-372, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39017691

ABSTRACT

This study evaluated the feasibility of acquiring vertebral height from chest low-dose computed tomography (LDCT) images using an artificial intelligence (AI) system based on 3D U-Net vertebral segmentation technology, and examined the correlation of vertebral morphology with sex and age in the Chinese population. Patients who underwent chest LDCT between September 2020 and April 2023 were enrolled. Bland-Altman and Pearson correlation analyses were used to compare the consistency and correlation between the AI software and manual measurements of vertebral height. The anterior height (Ha), middle height (Hm), posterior height (Hp), and vertebral height ratios (VHRs) (Ha/Hp and Hm/Hp) were measured from T1 to L2 using the AI system. The VHR is the ratio of Ha to Hp or of Hm to Hp of a vertebra, which can reflect anterior wedge and biconcave vertebral shapes. Changes in these parameters, particularly the VHR, were analysed at different vertebral levels in different age and sex groups. The results of the AI method were highly consistent and correlated with manual measurements; the Pearson correlation coefficients were 0.855, 0.919, and 0.846, respectively. The trend of VHRs showed troughs at T7 and T11 and a peak at T9, whereas Hm/Hp showed only slight fluctuations. Regarding the VHR, significant sex differences were found at L1 and L2 in all age bands. This study focuses on opportunistic analysis of vertebral morphology in the mainland Chinese population, and on how vertebral morphology changes with ageing, using chest LDCT aided by an AI system based on 3D U-Net vertebral segmentation technology. The AI system demonstrates the potential to automatically perform opportunistic vertebral morphology analyses using LDCT scans obtained during lung cancer screening. We advocate the use of age-, sex-, and vertebral level-specific criteria for the morphometric evaluation of vertebral osteoporotic fractures, for a more accurate diagnosis of vertebral fractures and spinal pathologies.


Subject(s)
Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Male , Female , Middle Aged , Tomography, X-Ray Computed/methods , Aged , Adult , Spine/diagnostic imaging , Spine/anatomy & histology , Asian People , China , Aged, 80 and over , Imaging, Three-Dimensional/methods , East Asian People
12.
J Magn Reson Imaging ; 59(2): 587-598, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37220191

ABSTRACT

BACKGROUND: The delineation of brain arteriovenous malformations (bAVMs) is crucial for subsequent treatment planning. Manual segmentation is time-consuming and labor-intensive. Applying deep learning to automatically detect and segment bAVMs might help to improve clinical practice efficiency. PURPOSE: To develop an approach for detecting bAVMs and segmenting their nidus on time-of-flight magnetic resonance angiography using deep learning methods. STUDY TYPE: Retrospective. SUBJECTS: 221 bAVM patients aged 7-79 years who underwent radiosurgery from 2003 to 2020, split into 177 training, 22 validation, and 22 test cases. FIELD STRENGTH/SEQUENCE: 1.5 T, time-of-flight magnetic resonance angiography based on 3D gradient echo. ASSESSMENT: The YOLOv5 and YOLOv8 algorithms were utilized to detect bAVM lesions, and the U-Net and U-Net++ models were used to segment the nidus from the bounding boxes. The mean average precision, F1, precision, and recall were used to assess model performance on bAVM detection. To evaluate performance on nidus segmentation, the Dice coefficient and balanced average Hausdorff distance (rbAHD) were employed. STATISTICAL TESTS: The Student's t-test was used to test the cross-validation results (P < 0.05). The Wilcoxon rank test was applied to compare the medians of the reference values and the model inference results (P < 0.05). RESULTS: The detection results demonstrated that the model with pretraining and augmentation performed optimally. The U-Net++ with the random dilation mechanism yielded higher Dice and lower rbAHD than the variant without it, across varying dilated bounding box conditions (P < 0.05). When combining detection and segmentation, the Dice and rbAHD were statistically different from the references calculated using the detected bounding boxes (P < 0.05). For the detected lesions in the test dataset, the model achieved its highest Dice of 0.82 and lowest rbAHD of 5.3%. DATA CONCLUSION: This study showed that pretraining and data augmentation improved YOLO detection performance. Properly limiting lesion ranges allows for adequate bAVM segmentation. LEVEL OF EVIDENCE: 4 TECHNICAL EFFICACY STAGE: 1.
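Detection quality in a YOLO-style stage is commonly scored via the intersection-over-union of predicted and reference bounding boxes (the mAP reported above is built on IoU thresholds). A minimal sketch with boxes as (x_min, y_min, x_max, y_max); the boxes below are illustrative:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

iou = box_iou((0, 0, 10, 10), (5, 5, 15, 15))   # 25 / 175
```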


Subject(s)
Deep Learning , Intracranial Arteriovenous Malformations , Humans , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Intracranial Arteriovenous Malformations/diagnostic imaging , Intracranial Arteriovenous Malformations/surgery , Magnetic Resonance Angiography , Magnetic Resonance Imaging , Retrospective Studies , Child , Adolescent , Young Adult , Adult , Middle Aged , Aged
13.
Mult Scler ; 30(6): 687-695, 2024 May.
Article in English | MEDLINE | ID: mdl-38469809

ABSTRACT

BACKGROUND: Loss of brain gray matter fractional volume predicts multiple sclerosis (MS) progression and is associated with worsening physical and cognitive symptoms. Within deep gray matter, thalamic damage is evident in early stages of MS and correlates with physical and cognitive impairment. Natalizumab is a highly effective treatment that reduces disease progression and the number of inflammatory lesions in patients with relapsing-remitting MS (RRMS). OBJECTIVE: To evaluate the effect of natalizumab on gray matter and thalamic atrophy. METHODS: A combination of deep learning-based image segmentation and data augmentation was applied to MRI data from the AFFIRM trial. RESULTS: This post hoc analysis identified a reduction of 64.3% (p = 0.0044) and 64.3% (p = 0.0030) in mean percentage gray matter volume loss from baseline at treatment years 1 and 2, respectively, in patients treated with natalizumab versus placebo. The reduction in thalamic fraction volume loss from baseline with natalizumab versus placebo was 57.0% at year 2 (p < 0.0001) and 41.2% at year 1 (p = 0.0147). Similar findings resulted from analyses of absolute gray matter and thalamic fraction volume loss. CONCLUSION: These analyses represent the first placebo-controlled evidence supporting a role for natalizumab treatment in mitigating gray matter and thalamic fraction atrophy among patients with RRMS. CLINICALTRIALS.GOV IDENTIFIER: NCT00027300; URL: https://clinicaltrials.gov/ct2/show/NCT00027300.


Subject(s)
Atrophy , Gray Matter , Immunologic Factors , Magnetic Resonance Imaging , Multiple Sclerosis, Relapsing-Remitting , Natalizumab , Thalamus , Humans , Multiple Sclerosis, Relapsing-Remitting/drug therapy , Multiple Sclerosis, Relapsing-Remitting/pathology , Multiple Sclerosis, Relapsing-Remitting/diagnostic imaging , Natalizumab/pharmacology , Natalizumab/therapeutic use , Gray Matter/pathology , Gray Matter/diagnostic imaging , Gray Matter/drug effects , Adult , Thalamus/pathology , Thalamus/diagnostic imaging , Thalamus/drug effects , Male , Female , Immunologic Factors/pharmacology , Atrophy/pathology , Middle Aged , Deep Learning
14.
Biomed Eng Online ; 23(1): 31, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38468262

ABSTRACT

BACKGROUND: Ultrasound three-dimensional visualization, a cutting-edge technology in medical imaging, enhances diagnostic accuracy by providing a more comprehensive and readable portrayal of anatomical structures compared to traditional two-dimensional ultrasound. Crucial to this visualization is the segmentation of multiple targets. However, multi-target segmentation of ultrasound images faces challenges such as noise interference, inaccurate boundaries, and difficulty segmenting small structures. Using neck ultrasound images, this study concentrates on multi-target segmentation methods for the thyroid and surrounding tissues. METHOD: We improved Unet++ to propose PA-Unet++, which enhances multi-target segmentation accuracy for the thyroid and its surrounding tissues by addressing ultrasound noise interference. This involves integrating multi-scale feature information using a pyramid pooling module to facilitate segmentation of structures of various sizes, and applying an attention gate mechanism to each decoding layer to progressively highlight target tissues and suppress the impact of background pixels. RESULTS: Video data obtained from serial 2D ultrasound scans of the thyroid served as the dataset for this paper. 4600 images containing 23,000 annotated regions were divided into training and test sets at a ratio of 9:1. Compared with U-net++, the Dice of our model increased from 78.78% to 81.88% (+3.10%), the mIOU increased from 73.44% to 80.35% (+6.91%), and the PA index increased from 92.95% to 94.79% (+1.84%). CONCLUSIONS: Accurate segmentation is fundamental for various clinical applications, including disease diagnosis, treatment planning, and monitoring. This study will have a positive impact on the improvement of 3D visualization capabilities, clinical decision-making, and research in the context of ultrasound imaging.
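The quoted improvement deltas can be checked directly from the before/after figures in the abstract:

```python
# Each rounded delta (PA-Unet++ minus U-net++) should match the quoted "+x%".
reported = {
    "Dice": (78.78, 81.88, 3.10),   # (U-net++, PA-Unet++, quoted delta)
    "mIOU": (73.44, 80.35, 6.91),
    "PA":   (92.95, 94.79, 1.84),
}
deltas = {name: round(new - old, 2) for name, (old, new, _) in reported.items()}
```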


Subject(s)
Imaging, Three-Dimensional , Thyroid Gland , Thyroid Gland/diagnostic imaging , Research Design , Technology , Image Processing, Computer-Assisted
15.
Network ; 35(2): 134-153, 2024 May.
Article in English | MEDLINE | ID: mdl-38050997

ABSTRACT

Accurate retinal vessel segmentation is the prerequisite for early recognition and treatment of retina-related diseases. However, segmenting retinal vessels is still challenging due to the intricate vessel tree in fundus images, which has a significant number of tiny vessels, low contrast, and lesion interference. For this task, the u-shaped architecture (U-Net) has become the de facto standard and has achieved considerable success. However, U-Net is a pure convolutional network, which usually shows limitations in global modelling. In this paper, we propose a novel Cross-scale U-Net with Semantic-position Dependencies (CS-UNet) for retinal vessel segmentation. In particular, we first designed a Semantic-position Dependencies Aggregator (SPDA) and incorporated it into each layer of the encoder to better capture global contextual information by integrating the relationship between semantics and position. To endow the model with the capability of cross-scale interaction, the Cross-scale Relation Refine Module (CSRR) is designed to dynamically select the information associated with the vessels, which helps guide the up-sampling operation. Finally, we evaluated CS-UNet on three public datasets: DRIVE, CHASE_DB1, and STARE. Compared to most existing state-of-the-art methods, CS-UNet demonstrated better performance.


Subject(s)
Retinal Diseases; Semantics; Animals; Retinal Vessels/diagnostic imaging; Abomasum; Fundus Oculi; Recognition, Psychology; Algorithms
16.
Network ; : 1-22, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38345038

ABSTRACT

Retinal haemorrhage stands as an early indicator of diabetic retinopathy, necessitating accurate detection for timely diagnosis. Addressing this need, this study proposes an enhanced machine-based diagnostic test for diabetic retinopathy through an updated UNet framework, adept at scrutinizing fundus images for signs of retinal haemorrhages. The customized UNet underwent GPU training using the IDRiD database and was validated against the publicly available DIARETDB1 and IDRiD datasets. Emphasizing the complexity of segmentation, the study employed preprocessing techniques to augment image quality and data integrity. The trained neural network subsequently showed a remarkable performance boost, accurately identifying haemorrhage regions with 80% sensitivity, 99.6% specificity, and 98.6% accuracy. The experimental findings solidify the network's reliability, demonstrating its potential to significantly alleviate ophthalmologists' workload. Notably, achieving an Intersection over Union (IoU) of 76.61% and a Dice coefficient of 86.51% underscores the system's competence. The study's outcomes signify substantial enhancements in diagnosing critical diabetic retinal conditions, promising profound improvements in diagnostic accuracy and efficiency, thereby marking a significant advancement in automated retinal haemorrhage detection for diabetic retinopathy.
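The pixel-level figures reported above follow the standard confusion-matrix definitions; a minimal sketch for binary masks (the mask values here are hypothetical, not from the paper's data):

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Sensitivity, specificity, and accuracy for binary (0/1) masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # haemorrhage pixels correctly found
    tn = np.sum(~pred & ~gt)  # background pixels correctly rejected
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + tn + fp + fn)}
```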

17.
Phytopathology ; 114(9): 2045-2054, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38831567

ABSTRACT

Net blotch disease caused by Drechslera teres is a major fungal disease that affects barley (Hordeum vulgare) plants and can result in significant crop losses. In this study, we developed a deep learning model to quantify net blotch disease symptoms on different days postinfection on seedling leaves using Cascade R-CNN (region-based convolutional neural network) and U-Net (a convolutional neural network) architectures. We used a dataset of barley leaf images with annotations of net blotch disease to train and evaluate the model. The model achieved an accuracy of 95% for Cascade R-CNN in net blotch disease detection and a Jaccard index score of 0.99, indicating high accuracy in disease quantification and location. The combination of Cascade R-CNN and U-Net architectures improved the detection of small and irregularly shaped lesions in the images at 4 days postinfection, leading to better disease quantification. To validate the model developed, we compared the results obtained by automated measurement with a classical method (necrosis diameter measurement) and a pathogen detection by real-time PCR. The proposed deep learning model could be used in automated systems for disease quantification and to screen the efficacy of potential biocontrol agents to protect against disease.
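The Jaccard index used above to validate disease quantification is the intersection-over-union of predicted and ground-truth lesion masks; a minimal sketch with hypothetical masks:

```python
import numpy as np

def jaccard_index(pred, gt):
    """Jaccard index (IoU) between two binary lesion masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.sum(pred & gt)
    union = np.sum(pred | gt)
    return inter / union if union else 1.0  # two empty masks agree perfectly
```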


Asunto(s)
Ascomicetos , Aprendizaje Profundo , Hordeum , Enfermedades de las Plantas , Hojas de la Planta , Hordeum/microbiología , Enfermedades de las Plantas/microbiología , Enfermedades de las Plantas/prevención & control , Ascomicetos/fisiología , Hojas de la Planta/microbiología , Producción de Cultivos/métodos , Redes Neurales de la Computación , Productos Agrícolas/microbiología
18.
MAGMA ; 37(2): 283-294, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38386154

ABSTRACT

PURPOSE: PROPELLER fast-spin-echo diffusion magnetic resonance imaging (FSE-dMRI) is essential for the diagnosis of cholesteatoma. However, at clinical 1.5 T MRI, its signal-to-noise ratio (SNR) remains relatively low. To gain sufficient SNR, signal averaging (number of excitations, NEX) is usually used at the cost of prolonged scan time. In this work, we leveraged the benefits of Locally Low Rank (LLR) constrained reconstruction to enhance the SNR. Furthermore, we enhanced both speed and SNR by employing Convolutional Neural Networks (CNNs) for accelerated PROPELLER FSE-dMRI on a 1.5 T clinical scanner. METHODS: A Residual U-Net (RU-Net) was found to be efficient for PROPELLER FSE-dMRI data. It was trained to predict 2-NEX images obtained by LLR-constrained reconstruction, using 1-NEX images obtained via simplified reconstruction as inputs. Brain scans from healthy volunteers and patients with cholesteatoma were performed for model training and testing. The performance of the trained networks was evaluated with normalized root-mean-square error (NRMSE), structural similarity index measure (SSIM), and peak SNR (PSNR). RESULTS: For 4× under-sampled data with 7 blades, online reconstruction provided suboptimal images: some small details were missing due to high noise interference. Offline LLR suppressed the noise and recovered some small structures. RU-Net demonstrated further improvement over LLR, increasing PSNR by 18.87% and SSIM by 2.11% and reducing NRMSE by 53.84%. Moreover, RU-Net is about 1500× faster than LLR (0.03 vs. 47.59 s/slice). CONCLUSION: LLR remarkably enhances the SNR compared to online reconstruction. Moreover, RU-Net improves PROPELLER FSE-dMRI as reflected in PSNR, SSIM, and NRMSE. It requires only 1-NEX data, allowing a 2× reduction in scan time. In addition, it is approximately 1500 times faster than LLR-constrained reconstruction.
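The NRMSE and PSNR figures above follow standard image-quality definitions; this sketch uses the common conventions (the paper's exact normalization and peak value are assumptions):

```python
import numpy as np

def nrmse(ref, img):
    """Normalized root-mean-square error of img against a reference image."""
    return np.linalg.norm(img - ref) / np.linalg.norm(ref)

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```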


Subject(s)
Cholesteatoma; Diffusion Magnetic Resonance Imaging; Humans; Diffusion Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
19.
BMC Med Imaging ; 24(1): 95, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654162

ABSTRACT

OBJECTIVE: In radiation therapy, cancerous region segmentation in magnetic resonance images (MRI) is a critical step. For rectal cancer, the automatic segmentation of rectal tumors from an MRI is a great challenge. There are two main shortcomings in existing deep learning-based methods that lead to incorrect segmentation: 1) there are many organs surrounding the rectum, and the shape of some organs is similar to that of rectal tumors; 2) high-level features extracted by conventional neural networks often do not contain enough high-resolution information. Therefore, an improved U-Net segmentation network based on attention mechanisms is proposed to replace the traditional U-Net network. METHODS: The overall framework of the proposed method is based on traditional U-Net. A ResNeSt module was added to extract the overall features, and a shape module was added after the encoder layer. We then combined the outputs of the shape module and the decoder to obtain the results. Moreover, the model used different types of attention mechanisms, so that the network learned information to improve segmentation accuracy. RESULTS: We validated the effectiveness of the proposed method using 3773 2D MRI slices from 304 patients. The results showed that the proposed method achieved 0.987, 0.946, 0.897, and 0.899 for Dice, MPA, MIoU, and FWIoU, respectively; these values are significantly better than those of other existing methods. CONCLUSION: Due to time savings, the proposed method can help radiologists segment rectal tumors effectively and enable them to focus on patients whose cancerous regions are difficult for the network to segment. SIGNIFICANCE: The proposed method can help doctors segment rectal tumors, thereby ensuring good diagnostic quality and accuracy.
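MPA, MIoU, and FWIoU can all be derived from a single confusion matrix over the predicted and ground-truth label maps; this sketch uses the usual definitions (the paper's exact averaging conventions may differ):

```python
import numpy as np

def segmentation_scores(pred, gt, n_classes):
    """MPA, MIoU, and FWIoU for integer label maps of the same shape."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)  # confusion matrix
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)           # rows: truth, cols: prediction
    tp = np.diag(cm).astype(float)
    per_class_acc = tp / cm.sum(axis=1)                    # per-class pixel accuracy
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp)      # per-class IoU
    freq = cm.sum(axis=1) / cm.sum()                       # class pixel frequency
    return {"MPA": per_class_acc.mean(),
            "MIoU": iou.mean(),
            "FWIoU": float(np.sum(freq * iou))}
```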


Subject(s)
Deep Learning; Magnetic Resonance Imaging; Rectal Neoplasms; Rectal Neoplasms/diagnostic imaging; Rectal Neoplasms/pathology; Humans; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Male
20.
BMC Med Imaging ; 24(1): 158, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38914942

ABSTRACT

BACKGROUND: The assessment of in vitro wound healing images is critical for determining the efficacy of a therapy of interest that may influence the wound healing process. Existing methods suffer significant limitations, such as user dependency, time consumption, and lack of sensitivity, paving the way for automated analysis approaches. METHODS: Here, three structurally different U-Net variants based on convolutional neural networks (CNNs) were implemented for the segmentation of in vitro wound healing microscopy images. The models were trained on two independent datasets after preprocessing and a novel augmentation method aimed at more sensitive analysis of edges. Predicted masks were then used for accurate calculation of wound areas. Finally, the wound areas, which indicate therapy efficacy, were thoroughly compared with those from well-known tools such as ImageJ and TScratch. RESULTS: The average Dice similarity coefficient (DSC) scores of the U-Net-based deep learning models ranged from 0.958 to 0.968. The averaged absolute percentage errors (PE) of predicted wound areas relative to ground truth were 6.41%, 3.70%, and 3.73% for U-Net, U-Net++, and Attention U-Net, respectively, while ImageJ and TScratch had considerably higher averaged error rates of 22.59% and 33.88%, respectively. CONCLUSIONS: Comparative analyses revealed that the developed models outperformed the conventional approaches in analysis time and segmentation sensitivity. The models also hold great promise for predicting the in vitro wound area regardless of the therapy of interest, cell line, microscope magnification, or other application-dependent parameters.
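The percentage-error comparison above reduces to counting mask pixels; a minimal sketch in which the pixel-to-area scale is an assumed parameter, not a value from the paper:

```python
import numpy as np

def wound_area_pct_error(pred_mask, gt_mask, px_area=1.0):
    """Absolute percentage error of the predicted wound area against the
    ground-truth mask; px_area converts pixel counts to physical area."""
    pred_area = np.sum(pred_mask > 0) * px_area
    gt_area = np.sum(gt_mask > 0) * px_area
    return 100.0 * abs(pred_area - gt_area) / gt_area
```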


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Microscopy; Wound Healing; Microscopy/methods; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer