Results 1 - 20 of 942
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38860738

ABSTRACT

Picking protein particles in cryo-electron microscopy (cryo-EM) micrographs is a crucial step in cryo-EM-based structure determination. However, existing methods trained on limited amounts of cryo-EM data still cannot accurately pick protein particles from noisy cryo-EM images. General foundational artificial intelligence-based image segmentation models such as Meta's Segment Anything Model (SAM) cannot segment protein particles well because their training data do not include cryo-EM images. Here, we present a novel approach (CryoSegNet) that integrates SAM with an attention-gated U-shaped network (U-Net) specially designed and trained for cryo-EM particle picking. The U-Net is first trained on a large cryo-EM image dataset and then used to generate input from original cryo-EM images for SAM to pick particles. CryoSegNet shows both high precision and high recall in segmenting protein particles from cryo-EM micrographs, irrespective of protein type, shape, and size. On several independent datasets of various protein types, CryoSegNet outperforms two top machine learning particle pickers, crYOLO and Topaz, as well as SAM itself. The average resolution of density maps reconstructed from the particles picked by CryoSegNet is 3.33 Å, 7% better than Topaz's 3.58 Å and 14% better than crYOLO's 3.87 Å. It is publicly available at https://github.com/jianlin-cheng/CryoSegNet.
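The pipeline described above feeds U-Net output into SAM. A minimal sketch of one plausible glue step — turning a U-Net probability map into point prompts for a promptable segmenter — where the function name, threshold, and connected-component approach are illustrative assumptions, not CryoSegNet's actual code:

```python
import numpy as np

def particle_prompts(prob, threshold=0.5, min_area=4):
    """Turn a segmentation probability map into point prompts (particle
    centroids) that a promptable segmenter such as SAM could refine.
    All parameters here are illustrative, not from the paper."""
    mask = prob >= threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    prompts = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # flood-fill one 4-connected component
                stack, comp = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_area:  # drop specks below the area floor
                    ys, xs = zip(*comp)
                    prompts.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return prompts

# Two synthetic "particles" in a 16x16 probability map
prob = np.zeros((16, 16))
prob[2:6, 2:6] = 0.9
prob[10:14, 10:14] = 0.8
print(particle_prompts(prob))  # [(3.5, 3.5), (11.5, 11.5)]
```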


Subjects
Cryoelectron Microscopy; Image Processing, Computer-Assisted; Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/methods; Proteins/chemistry; Artificial Intelligence; Algorithms; Databases, Protein
2.
Development ; 148(21)2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34739029

ABSTRACT

Genome editing simplifies the generation of new animal models for congenital disorders. However, the detailed and unbiased phenotypic assessment of altered embryonic development remains a challenge. Here, we explore how deep learning (U-Net) can automate segmentation tasks in various imaging modalities, and we quantify phenotypes of altered renal, neural and craniofacial development in Xenopus embryos in comparison with normal variability. We demonstrate the utility of this approach in embryos with polycystic kidneys (pkd1 and pkd2) and craniofacial dysmorphia (six1). We highlight how in toto light-sheet microscopy facilitates accurate reconstruction of brain and craniofacial structures within X. tropicalis embryos upon dyrk1a and six1 loss of function or treatment with retinoic acid inhibitors. These tools increase the sensitivity and throughput of evaluating developmental malformations caused by chemical or genetic disruption. Furthermore, we provide a library of pre-trained networks and detailed instructions for applying deep learning to the reader's own datasets. We demonstrate the versatility, precision and scalability of deep neural network phenotyping on embryonic disease models. By combining light-sheet microscopy and deep learning, we provide a framework for higher-throughput characterization of embryonic model organisms. This article has an associated 'The people behind the papers' interview.


Subjects
Deep Learning; Embryonic Development/genetics; Phenotype; Animals; Craniofacial Abnormalities/embryology; Craniofacial Abnormalities/genetics; Craniofacial Abnormalities/pathology; Disease Models, Animal; Image Processing, Computer-Assisted; Mice; Microscopy; Mutation; Neural Networks, Computer; Neurodevelopmental Disorders/genetics; Neurodevelopmental Disorders/pathology; Polycystic Kidney Diseases/embryology; Polycystic Kidney Diseases/genetics; Polycystic Kidney Diseases/pathology; Xenopus Proteins/genetics; Xenopus laevis
3.
J Synchrotron Radiat ; 31(Pt 1): 136-149, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38095668

ABSTRACT

Bone material contains a hierarchical network of micro- and nano-cavities and channels, known as the lacuna-canalicular network (LCN), that is thought to play an important role in mechanobiology and turnover. The LCN comprises micrometer-sized lacunae, voids that house osteocytes, and submicrometer-sized canaliculi that connect bone cells. Characterization of this network in three dimensions is crucial for many bone studies. To quantify X-ray Zernike phase-contrast nanotomography data, deep learning is used to isolate and assess porosity in artifact-laden tomograms of zebrafish bones. A technical solution is proposed to overcome the halo and shade-off artifacts in order to reliably obtain the distribution and morphology of the LCN in the tomographic data. Convolutional neural network (CNN) models are trained with increasing numbers of images and repeatedly validated using 'error loss' and 'accuracy' metrics. U-Net and Sensor3D CNN models were trained on data obtained from two different synchrotron Zernike phase-contrast transmission X-ray microscopes, the ANATOMIX beamline at SOLEIL (Paris, France) and the P05 beamline at PETRA III (Hamburg, Germany). The Sensor3D CNN model with a smaller batch size of 32 and a training set of 70 images showed the best performance (accuracy 0.983, error loss 0.032). The analysis procedures, validated by comparison with human-identified ground-truth images, correctly identified the voids within the bone matrix. The proposed approach may be further applied to classify structures in volumetric images containing non-linear artifacts that degrade image quality and hinder feature identification.
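Once voids are segmented, the porosity readout itself is simple. A minimal sketch, under the assumption that porosity is reported as the void-voxel fraction of the volume (the function name and example volume are illustrative):

```python
import numpy as np

def porosity(void_mask):
    """Fraction of voxels segmented as void (lacunae and canaliculi)."""
    return float(void_mask.sum()) / void_mask.size

vol = np.zeros((10, 10, 10), dtype=bool)
vol[:2] = True  # 200 of 1000 voxels marked as void
print(porosity(vol))  # 0.2
```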


Subjects
Deep Learning; Animals; Humans; Artifacts; Porosity; Zebrafish; Bone and Bones/diagnostic imaging; Image Processing, Computer-Assisted/methods
4.
Magn Reson Med ; 91(3): 1149-1164, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37929695

ABSTRACT

PURPOSE: Preclinical MR fingerprinting (MRF) suffers from long acquisition times for organ-level coverage due to demanding image resolution and limited undersampling capacity. This study aims to develop a deep learning-assisted fast MRF framework for sub-millimeter T1 and T2 mapping of the entire macaque brain on a preclinical 9.4 T MR system. METHODS: Three-dimensional MRF images were reconstructed by singular value decomposition (SVD) compressed reconstruction. T1 and T2 mapping for each axial slice exploited a self-attention-assisted residual U-Net to suppress aliasing-induced quantification errors, together with transmit-field (B1+) measurements for robustness against B1+ inhomogeneity. Supervised network training used MRF images simulated via virtual parametric maps and the desired undersampling scheme. This strategy bypassed the difficulty of acquiring fully sampled preclinical MRF data to guide network training. The proposed fast MRF framework was tested on experimental data acquired from ex vivo and in vivo macaque brains. RESULTS: The trained network showed reasonable adaptability to experimental MRF images, enabling robust delineation of various T1 and T2 distributions in the brain tissues. Further, the proposed MRF framework outperformed several existing fast MRF methods in handling aliasing artifacts and capturing detailed cerebral structures in the mapping results. Parametric mapping of the entire macaque brain at a nominal resolution of 0.35 × 0.35 × 1 mm³ can be realized via a 20-min 3D MRF scan, sixfold faster than the baseline protocol. CONCLUSION: Introducing deep learning to the MRF framework paves the way for efficient organ-level high-resolution quantitative MRI in preclinical applications.
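SVD-compressed reconstruction projects the MRF time series onto a low-rank temporal subspace. A toy sketch of the idea with an illustrative random dictionary (the shapes and rank here are assumptions, not the study's actual parameters):

```python
import numpy as np

# Illustrative shapes: 500 time points x 1000 dictionary atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((500, 1000))

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 5                 # rank of the retained temporal subspace
Uk = U[:, :k]         # orthonormal temporal basis
D_c = Uk.T @ D        # compressed dictionary: k x atoms

# An image series reconstructed in this subspace needs only k coefficient
# maps instead of 500 time frames, which is what shortens reconstruction.
print(D_c.shape)
```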


Subjects
Deep Learning; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional/methods; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
5.
Magn Reson Med ; 91(5): 2044-2056, 2024 May.
Article in English | MEDLINE | ID: mdl-38193276

ABSTRACT

PURPOSE: Subject movement during the MR examination is inevitable; it not only causes image artifacts but also deteriorates the homogeneity of the main magnetic field (B0), which is a prerequisite for high-quality data. Thus, characterizing changes to B0, for example those induced by patient movement, is important for MR applications that are prone to B0 inhomogeneities. METHODS: We propose a deep learning-based method to predict such changes within the brain from the change in head position, to facilitate retrospective or even real-time correction. A 3D U-Net was trained on in vivo gradient-echo brain 7 T MRI data. The input consisted of B0 maps and anatomical images at an initial position, and anatomical images at a different head position (obtained by applying a rigid-body transformation to the initial anatomical image). The output consisted of B0 maps at the new head positions. We further fine-tuned the network weights for each subject by measuring a limited number of head positions of the given subject and training the U-Net with these data. RESULTS: Our approach was compared to established dynamic B0 field mapping via interleaved navigators, which suffers from limited spatial resolution and requires undesirable sequence modifications. Qualitative and quantitative comparison showed similar performance between an interleaved navigator-equivalent method and the proposed method. CONCLUSION: It is feasible to predict B0 maps from rigid subject movement and, when combined with external tracking hardware, this information could be used to improve the quality of MR acquisitions without the use of navigators.
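Generating "anatomy at a new head position" inputs amounts to resampling the initial image under a rigid-body transform. A simplified nearest-neighbour sketch, where the function and the convention x_in = R·(x_out − c) + c + t are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def rigid_resample(vol, R, t):
    """Nearest-neighbour resampling of a volume under a rigid-body
    transform about the volume centre c: x_in = R @ (x_out - c) + c + t."""
    c = (np.array(vol.shape) - 1) / 2
    out = np.zeros_like(vol)
    idx = np.indices(vol.shape).reshape(3, -1)          # all output coords
    src = R @ (idx - c[:, None]) + c[:, None] + np.asarray(t, float)[:, None]
    src = np.rint(src).astype(int)                      # nearest neighbour
    ok = np.all((src >= 0) & (src < np.array(vol.shape)[:, None]), axis=0)
    out.reshape(-1)[ok] = vol[tuple(src[:, ok])]        # pull from source
    return out

vol = np.zeros((9, 9, 9))
vol[4, 4, 6] = 1.0
# 90-degree rotation about the first axis, no translation
a = np.deg2rad(90)
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a),  np.cos(a)]])
moved = rigid_resample(vol, R, (0, 0, 0))
print(np.argwhere(moved == 1.0))  # hot voxel relocated by the rotation
```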


Subjects
Brain; Magnetic Resonance Imaging; Humans; Retrospective Studies; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Motion; Movement; Image Processing, Computer-Assisted/methods; Artifacts
6.
Eur J Nucl Med Mol Imaging ; 51(7): 1937-1954, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38326655

ABSTRACT

PURPOSE: Total metabolic tumor volume (TMTV) segmentation has significant value in enabling quantitative imaging biomarkers for lymphoma management. In this work, we tackle the challenging task of automated tumor delineation in lymphoma from PET/CT scans using a cascaded approach. METHODS: Our study included 1418 2-[18F]FDG PET/CT scans from four different centers. The dataset was divided into 900 scans for the development/validation/testing phases and 518 for multi-center external testing. The former consisted of 450 lymphoma, lung cancer, and melanoma scans, along with 450 negative scans, while the latter consisted of lymphoma patients from different centers with diffuse large B cell, primary mediastinal large B cell, and classic Hodgkin lymphoma cases. Our approach involves resampling PET/CT images into different voxel sizes in the first step, followed by training multi-resolution 3D U-Nets on each resampled dataset using a fivefold cross-validation scheme. The models trained on different data splits were ensembled. After applying soft voting to the predicted masks, in the second step, we input the probability-averaged predictions, along with the input imaging data, into another 3D U-Net. Models were trained with a semi-supervised loss. We additionally considered the effectiveness of using test-time augmentation (TTA) to improve segmentation performance after training. In addition to quantitative analysis, including Dice score (DSC) and TMTV comparisons, a qualitative evaluation was also conducted by nuclear medicine physicians. RESULTS: Our cascaded soft-voting-guided approach resulted in an average DSC of 0.68 ± 0.12 for the internal test data from the development dataset, and an average DSC of 0.66 ± 0.18 on the multi-site external data (n = 518), significantly outperforming (p < 0.001) state-of-the-art (SOTA) approaches including nnU-Net and SWIN UNETR.
While TTA yielded enhanced performance gains for some of the comparator methods, its impact on our cascaded approach was found to be negligible (DSC: 0.66 ± 0.16). Our approach reliably quantified TMTV, with a correlation of 0.89 with the ground truth (p < 0.001). Furthermore, in terms of visual assessment, concordance between quantitative evaluations and clinician feedback was observed in the majority of cases. The average relative error (ARE) and the absolute error (AE) in TMTV prediction on external multi-centric dataset were ARE = 0.43 ± 0.54 and AE = 157.32 ± 378.12 (mL) for all the external test data (n = 518), and ARE = 0.30 ± 0.22 and AE = 82.05 ± 99.78 (mL) when the 10% outliers (n = 53) were excluded. CONCLUSION: TMTV-Net demonstrates strong performance and generalizability in TMTV segmentation across multi-site external datasets, encompassing various lymphoma subtypes. A negligible reduction of 2% in overall performance during testing on external data highlights robust model generalizability across different centers and cancer types, likely attributable to its training with resampled inputs. Our model is publicly available, allowing easy multi-site evaluation and generalizability analysis on datasets from different institutions.
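The soft-voting step described above averages fold-wise probability maps before thresholding, and DSC is the headline metric. A minimal sketch with toy 2×2 maps (all values illustrative):

```python
import numpy as np

def soft_vote(prob_maps):
    """Average fold-wise probability maps before thresholding."""
    return np.mean(prob_maps, axis=0)

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

p1 = np.array([[0.9, 0.2], [0.4, 0.8]])
p2 = np.array([[0.7, 0.4], [0.2, 0.6]])
avg = soft_vote([p1, p2])                # [[0.8, 0.3], [0.3, 0.7]]
pred = avg >= 0.5
gt = np.array([[1, 0], [0, 1]], dtype=bool)
print(dice(pred, gt))                    # ~1.0: perfect overlap here
```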


Subjects
Image Processing, Computer-Assisted; Lymphoma; Positron Emission Tomography Computed Tomography; Tumor Burden; Humans; Positron Emission Tomography Computed Tomography/methods; Lymphoma/diagnostic imaging; Image Processing, Computer-Assisted/methods; Fluorodeoxyglucose F18; Automation; Male; Female
7.
Calcif Tissue Int ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39017691

ABSTRACT

This study evaluated the feasibility of acquiring vertebral height from chest low-dose computed tomography (LDCT) images using an artificial intelligence (AI) system based on 3D U-Net vertebral segmentation technology, as well as the correlation and features of vertebral morphology with sex and age in the Chinese population. Patients who underwent chest LDCT between September 2020 and April 2023 were enrolled. Altman and Pearson correlation analyses were used to compare the correlation and consistency between the AI software and manual measurements of vertebral height. The anterior height (Ha), middle height (Hm), posterior height (Hp), and vertebral height ratios (VHRs) (Ha/Hp and Hm/Hp) were measured from T1 to L2 using the AI system. The VHR is the ratio of Ha to Hp or of Hm to Hp of a vertebra, which can reflect the shape of anterior wedge and biconcave vertebrae. Changes in these parameters, particularly the VHR, were analysed at different vertebral levels in different age and sex groups. The results of the AI method were highly consistent and correlated with manual measurements; the Pearson correlation coefficients were 0.855, 0.919, and 0.846, respectively. The trend of the VHRs showed troughs at T7 and T11 and a peak at T9, whereas Hm/Hp showed only slight fluctuations. Regarding the VHR, significant sex differences were found at L1 and L2 in all age bands. This study focuses on vertebral morphology for opportunistic analysis in the mainland Chinese population and on the distribution tendency of vertebral morphology with ageing, using chest LDCT aided by an AI system based on 3D U-Net vertebral segmentation technology. The AI system demonstrates the potential to automatically perform opportunistic vertebral morphology analyses using LDCT scans obtained during lung cancer screening.
We advocate the use of age-, sex-, and vertebral level-specific criteria for the morphometric evaluation of vertebral osteoporotic fractures for a more accurate diagnosis of vertebral fractures and spinal pathologies.
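The VHRs above are plain ratios of the three measured heights. A minimal sketch with illustrative height values in millimetres (not patient data):

```python
def vertebral_height_ratios(ha, hm, hp):
    """Ha/Hp and Hm/Hp; values well below 1 suggest anterior wedge or
    biconcave deformity (diagnostic thresholds are application-specific)."""
    return ha / hp, hm / hp

# Illustrative heights (mm)
ha_hp, hm_hp = vertebral_height_ratios(18.0, 19.0, 24.0)
print(round(ha_hp, 3), round(hm_hp, 3))  # 0.75 0.792
```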

8.
J Magn Reson Imaging ; 59(2): 587-598, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37220191

ABSTRACT

BACKGROUND: The delineation of brain arteriovenous malformations (bAVMs) is crucial for subsequent treatment planning. Manual segmentation is time-consuming and labor-intensive. Applying deep learning to automatically detect and segment bAVMs might help improve the efficiency of clinical practice. PURPOSE: To develop an approach for detecting bAVMs and segmenting the nidus on time-of-flight magnetic resonance angiography using deep learning methods. STUDY TYPE: Retrospective. SUBJECTS: 221 bAVM patients aged 7-79 who underwent radiosurgery from 2003 to 2020, split into 177 training, 22 validation, and 22 test cases. FIELD STRENGTH/SEQUENCE: 1.5 T, time-of-flight magnetic resonance angiography based on 3D gradient echo. ASSESSMENT: The YOLOv5 and YOLOv8 algorithms were utilized to detect bAVM lesions, and the U-Net and U-Net++ models to segment the nidus from the bounding boxes. Mean average precision, F1, precision, and recall were used to assess model performance on bAVM detection. To evaluate performance on nidus segmentation, the Dice coefficient and balanced average Hausdorff distance (rbAHD) were employed. STATISTICAL TESTS: The Student's t-test was used to test the cross-validation results (P < 0.05). The Wilcoxon rank test was applied to compare the medians of the reference values and the model inference results (P < 0.05). RESULTS: The detection results demonstrated that the model with pretraining and augmentation performed optimally. The U-Net++ with the random dilation mechanism yielded higher Dice and lower rbAHD than the model without it, across varying dilated bounding box conditions (P < 0.05). When combining detection and segmentation, the Dice and rbAHD were statistically different from the references calculated using the detected bounding boxes (P < 0.05). For the detected lesions in the test dataset, the model showed the highest Dice of 0.82 and the lowest rbAHD of 5.3%.
DATA CONCLUSION: This study showed that pretraining and data augmentation improved YOLO detection performance. Properly limiting lesion ranges allows for adequate bAVM segmentation. LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY: Stage 1.
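The dilated-bounding-box step between detection and segmentation can be pictured as expanding each detected box by a margin before cropping the patch passed to the segmentation model. A minimal sketch (the box format and margin are illustrative assumptions):

```python
def dilate_box(box, margin, h, w):
    """Expand a detector box (x1, y1, x2, y2) by a margin, clipped to the
    image bounds, before cropping the patch for the segmentation network."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(w, x2 + margin), min(h, y2 + margin))

# Illustrative box in a 64x64 image
print(dilate_box((10, 12, 40, 44), 8, 64, 64))  # (2, 4, 48, 52)
```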


Subjects
Deep Learning; Intracranial Arteriovenous Malformations; Humans; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Intracranial Arteriovenous Malformations/diagnostic imaging; Intracranial Arteriovenous Malformations/surgery; Magnetic Resonance Angiography; Magnetic Resonance Imaging; Retrospective Studies; Child; Adolescent; Young Adult; Adult; Middle Aged; Aged
9.
Mult Scler ; 30(6): 687-695, 2024 May.
Article in English | MEDLINE | ID: mdl-38469809

ABSTRACT

BACKGROUND: Loss of brain gray matter fractional volume predicts multiple sclerosis (MS) progression and is associated with worsening physical and cognitive symptoms. Within deep gray matter, thalamic damage is evident in early stages of MS and correlates with physical and cognitive impairment. Natalizumab is a highly effective treatment that reduces disease progression and the number of inflammatory lesions in patients with relapsing-remitting MS (RRMS). OBJECTIVE: To evaluate the effect of natalizumab on gray matter and thalamic atrophy. METHODS: A combination of deep learning-based image segmentation and data augmentation was applied to MRI data from the AFFIRM trial. RESULTS: This post hoc analysis identified a reduction of 64.3% (p = 0.0044) and 64.3% (p = 0.0030) in mean percentage gray matter volume loss from baseline at treatment years 1 and 2, respectively, in patients treated with natalizumab versus placebo. The reduction in thalamic fraction volume loss from baseline with natalizumab versus placebo was 41.2% at year 1 (p = 0.0147) and 57.0% at year 2 (p < 0.0001). Similar findings resulted from analyses of absolute gray matter and thalamic fraction volume loss. CONCLUSION: These analyses represent the first placebo-controlled evidence supporting a role for natalizumab treatment in mitigating gray matter and thalamic fraction atrophy among patients with RRMS. CLINICALTRIALS.GOV IDENTIFIER: NCT00027300; URL: https://clinicaltrials.gov/ct2/show/NCT00027300.
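The headline figures above are "percent reduction in volume loss versus placebo". A sketch of that arithmetic with made-up volumes, chosen only to reproduce the shape of the calculation (none of these numbers are trial data):

```python
def percent_volume_change(baseline, followup):
    """Percentage volume change from baseline; negative means atrophy."""
    return 100.0 * (followup - baseline) / baseline

# Illustrative volumes (mL)
placebo = percent_volume_change(700.0, 686.0)  # -2.0 % loss
treated = percent_volume_change(700.0, 695.0)  # about -0.71 % loss
reduction = 100.0 * (1 - treated / placebo)    # % reduction in loss vs placebo
print(round(reduction, 1))
```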


Subjects
Atrophy; Gray Matter; Immunologic Factors; Magnetic Resonance Imaging; Multiple Sclerosis, Relapsing-Remitting; Natalizumab; Thalamus; Humans; Multiple Sclerosis, Relapsing-Remitting/drug therapy; Multiple Sclerosis, Relapsing-Remitting/pathology; Multiple Sclerosis, Relapsing-Remitting/diagnostic imaging; Natalizumab/pharmacology; Natalizumab/therapeutic use; Gray Matter/pathology; Gray Matter/diagnostic imaging; Gray Matter/drug effects; Adult; Thalamus/pathology; Thalamus/diagnostic imaging; Thalamus/drug effects; Male; Female; Immunologic Factors/pharmacology; Atrophy/pathology; Middle Aged; Deep Learning
10.
Biomed Eng Online ; 23(1): 31, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38468262

ABSTRACT

BACKGROUND: Ultrasound three-dimensional visualization, a cutting-edge technology in medical imaging, enhances diagnostic accuracy by providing a more comprehensive and readable portrayal of anatomical structures than traditional two-dimensional ultrasound. Crucial to this visualization is the segmentation of multiple targets. However, multi-target segmentation of ultrasound images faces challenges such as noise interference, inaccurate boundaries, and difficulty segmenting small structures. Using neck ultrasound images, this study investigates multi-target segmentation methods for the thyroid and surrounding tissues. METHOD: We improved Unet++ to propose PA-Unet++, enhancing multi-target segmentation accuracy for the thyroid and its surrounding tissues by addressing ultrasound noise interference. This involves integrating multi-scale feature information using a pyramid pooling module to facilitate segmentation of structures of various sizes. Additionally, an attention gate mechanism is applied to each decoding layer to progressively highlight target tissues and suppress the impact of background pixels. RESULTS: Video data obtained from 2D serial ultrasound scans of the thyroid served as the dataset for this paper. A total of 4600 images containing 23,000 annotated regions were divided into training and test sets at a ratio of 9:1. Compared with the results of Unet++, the Dice of our model increased from 78.78% to 81.88% (+3.10%), the mIoU increased from 73.44% to 80.35% (+6.91%), and the PA index increased from 92.95% to 94.79% (+1.84%). CONCLUSIONS: Accurate segmentation is fundamental for various clinical applications, including disease diagnosis, treatment planning, and monitoring. This study should have a positive impact on 3D visualization capabilities, clinical decision-making, and research in the context of ultrasound imaging.
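A pyramid pooling module aggregates context at several grid scales. A NumPy toy showing the pool-and-flatten idea on a single-channel feature map (bin sizes are illustrative; a real module operates on CNN feature tensors and re-upsamples the pooled maps):

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Average-pool a square feature map into several grid sizes and
    flatten, yielding multi-scale context like a pyramid pooling module."""
    h, w = feat.shape
    out = []
    for b in bins:
        # mean over each (h//b, w//b) block of the b x b grid
        pooled = feat.reshape(b, h // b, b, w // b).mean(axis=(1, 3))
        out.append(pooled.ravel())
    return np.concatenate(out)

feat = np.arange(16, dtype=float).reshape(4, 4)
v = pyramid_pool(feat)
print(v.shape)  # (21,) = 1 + 4 + 16 pooled values
```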


Subjects
Imaging, Three-Dimensional; Thyroid Gland; Thyroid Gland/diagnostic imaging; Research Design; Technology; Image Processing, Computer-Assisted
11.
Network ; 35(2): 134-153, 2024 May.
Article in English | MEDLINE | ID: mdl-38050997

ABSTRACT

Accurate retinal vessel segmentation is the prerequisite for early recognition and treatment of retina-related diseases. However, segmenting retinal vessels is still challenging due to the intricate vessel tree in fundus images, which has a significant number of tiny vessels, low contrast, and lesion interference. For this task, the u-shaped architecture (U-Net) has become the de-facto standard and has achieved considerable success. However, U-Net is a pure convolutional network, which usually shows limitations in global modelling. In this paper, we propose a novel Cross-scale U-Net with Semantic-position Dependencies (CS-UNet) for retinal vessel segmentation. In particular, we first designed a Semantic-position Dependencies Aggregator (SPDA) and incorporate it into each layer of the encoder to better focus on global contextual information by integrating the relationship of semantic and position. To endow the model with the capability of cross-scale interaction, the Cross-scale Relation Refine Module (CSRR) is designed to dynamically select the information associated with the vessels, which helps guide the up-sampling operation. Finally, we have evaluated CS-UNet on three public datasets: DRIVE, CHASE_DB1, and STARE. Compared to most existing state-of-the-art methods, CS-UNet demonstrated better performance.


Subjects
Retinal Diseases; Semantics; Animals; Retinal Vessels/diagnostic imaging; Abomasum; Fundus Oculi; Recognition, Psychology; Algorithms
12.
Network ; : 1-22, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38345038

ABSTRACT

Retinal haemorrhage stands as an early indicator of diabetic retinopathy, necessitating accurate detection for timely diagnosis. Addressing this need, this study proposes an enhanced machine-based diagnostic test for diabetic retinopathy through an updated UNet framework, adept at scrutinizing fundus images for signs of retinal haemorrhages. The customized UNet underwent GPU training using the IDRiD database, validated against the publicly available DIARETDB1 and IDRiD datasets. Emphasizing the complexity of segmentation, the study employed preprocessing techniques, augmenting image quality and data integrity. Subsequently, the trained neural network showcased a remarkable performance boost, accurately identifying haemorrhage regions with 80% sensitivity, 99.6% specificity, and 98.6% accuracy. The experimental findings solidify the network's reliability, showcasing potential to alleviate ophthalmologists' workload significantly. Notably, achieving an Intersection over Union (IoU) of 76.61% and a Dice coefficient of 86.51% underscores the system's competence. The study's outcomes signify substantial enhancements in diagnosing critical diabetic retinal conditions, promising profound improvements in diagnostic accuracy and efficiency, thereby marking a significant advancement in automated retinal haemorrhage detection for diabetic retinopathy.
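The sensitivity, specificity, accuracy, IoU, and Dice figures reported above all derive from pixel-level confusion counts. A minimal sketch with illustrative counts (not the study's data):

```python
def seg_metrics(tp, fp, tn, fn):
    """Standard pixel-wise metrics from confusion counts."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    iou = tp / (tp + fp + fn)              # intersection over union
    dice = 2 * tp / (2 * tp + fp + fn)     # Dice coefficient
    return sens, spec, acc, iou, dice

# Illustrative counts for one image
print(seg_metrics(80, 5, 900, 20))
```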

13.
MAGMA ; 37(2): 283-294, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38386154

ABSTRACT

PURPOSE: PROPELLER fast-spin-echo diffusion magnetic resonance imaging (FSE-dMRI) is essential for the diagnosis of cholesteatoma. However, at clinical 1.5 T MRI, its signal-to-noise ratio (SNR) remains relatively low. To gain sufficient SNR, signal averaging (number of excitations, NEX) is usually used, at the cost of prolonged scan time. In this work, we leveraged the benefits of Locally Low Rank (LLR) constrained reconstruction to enhance the SNR. Furthermore, we enhanced both speed and SNR by employing convolutional neural networks (CNNs) for accelerated PROPELLER FSE-dMRI on a 1.5 T clinical scanner. METHODS: A Residual U-Net (RU-Net) was found to be efficient for PROPELLER FSE-dMRI data. It was trained to predict 2-NEX images obtained by LLR constrained reconstruction, using 1-NEX images obtained via simplified reconstruction as inputs. Brain scans from healthy volunteers and patients with cholesteatoma were performed for model training and testing. The performance of the trained networks was evaluated with normalized root-mean-square error (NRMSE), structural similarity index measure (SSIM), and peak SNR (PSNR). RESULTS: For 4× undersampled data with 7 blades, online reconstruction appears to provide suboptimal images: some small details are missing due to high noise interference. Offline LLR enables suppression of noise and recovery of some small structures. RU-Net demonstrated further improvement over LLR, increasing PSNR by 18.87%, SSIM by 2.11%, and reducing NRMSE by 53.84%. Moreover, RU-Net is about 1500× faster than LLR (0.03 vs. 47.59 s/slice). CONCLUSION: LLR remarkably enhances the SNR compared to online reconstruction. Moreover, RU-Net improves PROPELLER FSE-dMRI as reflected in PSNR, SSIM, and NRMSE. It requires only 1-NEX data, which allows a 2× scan-time reduction. In addition, its speed is approximately 1500 times faster than that of LLR-constrained reconstruction.
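NRMSE and PSNR have standard definitions. A small NumPy sketch against a synthetic reference image (the range-normalized NRMSE and reference-max PSNR conventions here are assumptions about the exact formulas used):

```python
import numpy as np

def nrmse(ref, x):
    """Root-mean-square error normalized by the reference intensity range."""
    return np.sqrt(np.mean((ref - x) ** 2)) / (ref.max() - ref.min())

def psnr(ref, x):
    """Peak SNR in dB, with the reference maximum as the peak value."""
    mse = np.mean((ref - x) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Synthetic reference ramp image and a uniformly biased "noisy" copy
ref = np.linspace(0, 1, 64).reshape(8, 8)
noisy = ref + 0.1
print(round(psnr(ref, noisy), 2))  # 20.0
```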


Subjects
Cholesteatoma; Diffusion Magnetic Resonance Imaging; Humans; Diffusion Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
14.
BMC Med Imaging ; 24(1): 95, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654162

ABSTRACT

OBJECTIVE: In radiation therapy, segmentation of cancerous regions in magnetic resonance images (MRI) is a critical step. For rectal cancer, automatic segmentation of rectal tumors from MRI is a great challenge. Two main shortcomings of existing deep learning-based methods lead to incorrect segmentation: 1) many organs surround the rectum, and the shape of some organs is similar to that of rectal tumors; 2) high-level features extracted by conventional neural networks often do not contain enough high-resolution information. Therefore, an improved U-Net segmentation network based on attention mechanisms is proposed to replace the traditional U-Net network. METHODS: The overall framework of the proposed method is based on the traditional U-Net. A ResNeSt module was added to extract the overall features, and a shape module was added after the encoder layer. We then combined the outputs of the shape module and the decoder to obtain the results. Moreover, the model used different types of attention mechanisms so that the network learned information to improve segmentation accuracy. RESULTS: We validated the effectiveness of the proposed method using 3773 2D MRI datasets from 304 patients. The proposed method achieved 0.987, 0.946, 0.897, and 0.899 for Dice, MPA, MIoU, and FWIoU, respectively; these values are significantly better than those of other existing methods. CONCLUSION: By saving time, the proposed method can help radiologists segment rectal tumors effectively and enable them to focus on patients whose cancerous regions are difficult for the network to segment. SIGNIFICANCE: The proposed method can help doctors segment rectal tumors, thereby ensuring good diagnostic quality and accuracy.


Subjects
Deep Learning; Magnetic Resonance Imaging; Rectal Neoplasms; Rectal Neoplasms/diagnostic imaging; Rectal Neoplasms/pathology; Humans; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Male
15.
BMC Med Imaging ; 24(1): 158, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38914942

ABSTRACT

BACKGROUND: The assessment of in vitro wound healing images is critical for determining the efficacy of a therapy of interest that may influence the wound healing process. Existing methods suffer from significant limitations, such as user dependency, time-consuming operation, and lack of sensitivity, paving the way for automated analysis approaches. METHODS: Three structurally different variations of U-Net architectures based on convolutional neural networks (CNNs) were implemented for the segmentation of in vitro wound healing microscopy images. The developed models were fed two independent datasets after applying a novel augmentation method aimed at more sensitive analysis of edges after preprocessing. The predicted masks were then utilized for accurate calculation of wound areas. Finally, the therapy-efficacy-indicating wound areas were thoroughly compared with those from current well-known tools such as ImageJ and TScratch. RESULTS: The average Dice similarity coefficient (DSC) scores were 0.958 to 0.968 for the U-Net-based deep learning models. The average absolute percentage errors (PE) of predicted wound areas relative to ground truth were 6.41%, 3.70%, and 3.73% for U-Net, U-Net++, and Attention U-Net, respectively, while ImageJ and TScratch had considerably higher average error rates of 22.59% and 33.88%, respectively. CONCLUSIONS: Comparative analyses revealed that the developed models outperformed the conventional approaches in terms of analysis time and segmentation sensitivity. The developed models also hold great promise for predicting the in vitro wound area, regardless of the therapy of interest, cell line, microscope magnification, or other application-dependent parameters.
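The percentage-error comparison above reduces to a one-liner once the wound areas are measured. A minimal sketch with illustrative areas (units arbitrary, values not from the study):

```python
def wound_area_pe(pred_area, true_area):
    """Absolute percentage error of a predicted wound area vs. ground truth."""
    return 100.0 * abs(pred_area - true_area) / true_area

# Illustrative: predicted 96.3 units^2 against a 100.0 units^2 ground truth
print(round(wound_area_pe(96.3, 100.0), 2))  # 3.7
```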


Subjects
Deep Learning , Image Processing, Computer-Assisted , Microscopy , Wound Healing , Microscopy/methods , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
16.
BMC Med Imaging ; 24(1): 102, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724896

ABSTRACT

Precision and intelligence in evaluating the complexities of middle ear structures are required to diagnose auriculotemporal and ossicle-related diseases in otolaryngology. Because of the complexity of the anatomical details and the varied etiologies of conditions such as trauma, chronic otitis media, and congenital anomalies, traditional diagnostic procedures may not yield accurate diagnoses. This research aims to enhance the diagnosis of diseases of the auriculotemporal region and ossicles by combining High-Resolution Spiral Computed Tomography (HRSCT) scanning with Deep Learning Techniques (DLT). The study employs a deep learning method, Convolutional Neural Network-UNet (CNN-UNet), to extract sub-pixel information from medical images, equipping clinicians and researchers with improved tools for patient care. The core of the work is the interaction between the CNN-UNet model and high-resolution Computed Tomography (CT) scans, automating tasks including ossicle segmentation, fracture detection, and classification of the cause of disruption, thereby accelerating the diagnostic process and supporting clinical decision-making. The proposed HRSCT-DLT model integrates high-resolution spiral CT scans with the CNN-UNet model, fine-tuned to address the nuances of auriculotemporal and ossicular diseases. This combination improves diagnostic efficiency and our overall understanding of these intricate diseases. The results highlight the promise of combining high-resolution CT scanning with the CNN-UNet model in otolaryngology, paving the way for more accurate diagnoses and more individualized treatment plans for patients with auriculotemporal and ossicle-related disruptions.


Subjects
Ear Ossicles , Tomography, Spiral Computed , Humans , Tomography, Spiral Computed/methods , Ear Ossicles/diagnostic imaging , Deep Learning , Ear Diseases/diagnostic imaging , Temporal Bone/diagnostic imaging , Adult , Neural Networks, Computer
17.
BMC Med Imaging ; 24(1): 38, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38331800

ABSTRACT

Deep learning has recently achieved advances in the segmentation of medical images. In this regard, U-Net is the predominant deep neural network, and its architecture is the most widely used in the medical imaging community. Experiments conducted on difficult datasets led us to the conclusion that the traditional U-Net framework is deficient in certain respects, despite its overall excellence in segmenting multimodal medical images. We therefore propose several modifications to the existing state-of-the-art U-Net model. The technical approach applies a Multi-Dimensional U-Convolutional Neural Network to achieve accurate segmentation of multimodal biomedical images, enhancing precision and comprehensiveness in identifying and analyzing structures across diverse imaging modalities. As a result of these enhancements, we propose a novel framework, the Multi-Dimensional U-Convolutional Neural Network (MDU-CNN), as a potential successor to U-Net. On a large set of multimodal medical images, we compared the proposed MDU-CNN to the classical U-Net: improvements were small on clean images but substantial on difficult ones. We tested the model on five distinct datasets, each presenting unique challenges, and found that it improved performance by 1.32%, 5.19%, 4.50%, 10.23%, and 0.87%, respectively.
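The U-Net family of models discussed in this record all share the same structural idea: an encoder that downsamples, a decoder that upsamples, and skip connections that carry fine detail across. A minimal, layer-free sketch of that idea on a 1-D signal (all names and the toy signal are illustrative, not any paper's actual model):

```python
import numpy as np

def max_pool(x: np.ndarray) -> np.ndarray:
    """2x max pooling: the encoder's downsampling step."""
    return x.reshape(-1, 2).max(axis=1)

def upsample(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour upsampling: the decoder's expansion step."""
    return np.repeat(x, 2)

signal = np.array([1.0, 3.0, 2.0, 8.0, 5.0, 4.0, 7.0, 6.0])

encoded = max_pool(signal)          # coarse features, half resolution
decoded = upsample(encoded)         # back to full resolution, detail lost
skip = np.stack([decoded, signal])  # skip connection restores the detail

print(encoded)      # [3. 8. 5. 7.]
print(skip.shape)   # (2, 8): decoder path stacked with the skip path
```

In a real U-Net the pooling and upsampling are interleaved with learned convolutions and the stack becomes a channel-wise concatenation, but the information flow is the same.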


Subjects
Neural Networks, Computer , Societies, Medical , Humans , Image Processing, Computer-Assisted
18.
Acta Radiol ; 65(1): 41-48, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37071506

ABSTRACT

BACKGROUND: Both computed tomography (CT) and magnetic resonance imaging (MRI) are indicated for preoperative planning, which may complicate diagnosis and place a burden on patients with lumbar disc herniation. PURPOSE: To compare the diagnostic potential of MRI-based synthetic CT with conventional CT in the diagnosis of lumbar disc herniation. MATERIAL AND METHODS: After institutional review board approval, 19 patients who underwent conventional and synthetic CT imaging were enrolled in this prospective study. Synthetic CT images were generated from the MRI data using a U-net. The two sets of images were compared and analyzed qualitatively by two musculoskeletal radiologists and rated on a 4-point scale for subjective quality. Agreement between the conventional and synthetic images for a diagnosis of lumbar disc herniation was determined independently using the kappa statistic. The diagnostic performance of conventional and synthetic CT images was evaluated for sensitivity, specificity, and accuracy, with consensus results based on T2-weighted imaging as the reference standard. RESULTS: Inter-reader and intra-reader agreement were at least moderate for all evaluated modalities (κ = 0.57-0.79 and 0.47-0.75, respectively). The sensitivity, specificity, and accuracy for detecting lumbar disc herniation were similar for synthetic and conventional CT images (synthetic vs. conventional, reader 1: sensitivity = 91% vs. 81%, specificity = 83% vs. 100%, accuracy = 87% vs. 91%; P < 0.001; reader 2: sensitivity = 84% vs. 81%, specificity = 85% vs. 98%, accuracy = 84% vs. 90%; P < 0.001). CONCLUSION: Synthetic CT images can be used in the diagnosis of lumbar disc herniation.
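The statistics quoted in this record (sensitivity, specificity, accuracy, and Cohen's kappa for reader agreement) can be worked out from a 2x2 table. A sketch in plain Python; the counts below are invented for illustration and are not the study's data:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Kappa for two readers: a, d = agreements; b, c = disagreements."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Expected chance agreement from each reader's marginal rates.
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    p_expected = p_yes + p_no
    return (p_observed - p_expected) / (1 - p_expected)

sens, spec, acc = diagnostic_metrics(tp=29, fp=5, fn=3, tn=25)
print(round(sens, 2), round(spec, 2), round(acc, 2))       # 0.91 0.83 0.87
print(round(cohens_kappa(a=29, b=5, c=3, d=25), 2))        # 0.74
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which is why a κ of 0.57-0.79 indicates genuinely moderate-to-substantial reader agreement rather than coincidence.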


Subjects
Intervertebral Disc Displacement , Humans , Intervertebral Disc Displacement/diagnostic imaging , Prospective Studies , Feasibility Studies , Lumbar Vertebrae/diagnostic imaging , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods
19.
Urol Int ; 108(2): 100-107, 2024.
Article in English | MEDLINE | ID: mdl-38081150

ABSTRACT

INTRODUCTION: Bladder cancer (BC) is a major health concern that poses a significant threat to the population, with an increasing incidence rate and a high risk of recurrence and progression. The primary clinical method for diagnosing BC is cystoscopy, but owing to the limitations of traditional white-light cystoscopy and the limited clinical experience of junior physicians, its detection rate for bladder tumors, especially small and flat lesions, is relatively low. Recent years, however, have seen remarkable advances in the application of artificial intelligence (AI) in medicine, leading to numerous AI algorithms that have been successfully integrated into medical practice and provide valuable assistance to clinicians. The purpose of this study is to develop a cystoscopy algorithm that is real-time, cost-effective, high-performing, and accurate, with the aim of enhancing the detection rate of bladder tumors during cystoscopy. MATERIALS AND METHODS: A dataset of 3,500 cystoscopic images obtained from 100 patients diagnosed with BC was collected, and a deep learning model based on the U-Net convolutional neural network was developed and trained. RESULTS: The 3,500 images from the 100 BC patients were randomly divided into training and validation groups, and each patient's pathology result was confirmed. In the validation group, the accuracy of tumor recognition by the U-Net algorithm reached 98%, exceeding that of primary urologists in both accuracy and detection speed. CONCLUSION: This study highlights the potential of U-Net-based deep learning techniques in the detection of bladder tumors. The establishment and optimization of the U-Net model is a significant breakthrough and provides a valuable reference for future research in medical image processing.


Subjects
Artificial Intelligence , Urinary Bladder Neoplasms , Humans , Urinary Bladder Neoplasms/diagnosis , Urinary Bladder Neoplasms/pathology , Cystoscopy/methods , Neural Networks, Computer , Algorithms
20.
Skeletal Radiol ; 53(3): 537-545, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37698626

ABSTRACT

BACKGROUND: The rotator cuff (RC) is a crucial anatomical element of the shoulder joint, facilitating an extensive range of motion while maintaining joint stability. Composed of the subscapularis, infraspinatus, supraspinatus, and teres minor muscles, the RC plays an integral role in shoulder function. RC injuries are prevalent, incapacitating conditions that affect approximately 8% of the adult population in the USA. Segmentation of these muscles provides valuable anatomical information for evaluating muscle quality and allows for better treatment planning. MATERIALS AND METHODS: We developed a model based on a residual deep convolutional encoder-decoder U-net to segment RC muscles on oblique sagittal T1-weighted MRI images. Our data consisted of shoulder MRIs from a cohort of 157 individuals: those without an RC tendon tear (N=79) and patients with a partial RC tendon tear (N=78). We evaluated different modeling approaches, and the performance of the models was assessed by calculating the Dice coefficient on the held-out test set. RESULTS: The best-performing model's median Dice coefficient was 89% (Q1: 85%, Q3: 96%) for the supraspinatus, 86% (Q1: 82%, Q3: 88%) for the subscapularis, 86% (Q1: 82%, Q3: 90%) for the infraspinatus, and 78% (Q1: 70%, Q3: 81%) for the teres minor, indicating a satisfactory level of accuracy. CONCLUSION: Our computational models demonstrated the capability to delineate RC muscles with a level of precision akin to that of experienced radiologists. As hypothesized, the proposed algorithm exhibited superior performance when segmenting muscles with well-defined boundaries, including the supraspinatus, subscapularis, and infraspinatus.
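The median (Q1, Q3) summaries reported in this record can be reproduced from a list of per-case Dice scores with NumPy's percentile function. The scores below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical per-patient Dice scores for one muscle.
dice_scores = np.array([0.70, 0.78, 0.81, 0.85, 0.89, 0.92, 0.96])

# One call gives the first quartile, median, and third quartile.
q1, median, q3 = np.percentile(dice_scores, [25, 50, 75])
print(f"median {median:.2f} (Q1: {q1:.2f}, Q3: {q3:.2f})")
```

Reporting quartiles alongside the median, as the study does, conveys the spread of per-patient performance that a single mean Dice would hide.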


Subjects
Rotator Cuff Injuries , Shoulder Joint , Adult , Humans , Rotator Cuff/diagnostic imaging , Shoulder , Rotator Cuff Injuries/diagnostic imaging , Magnetic Resonance Imaging/methods