Results 1 - 20 of 258
1.
Dentomaxillofac Radiol ; 53(5): 325-335, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38696751

ABSTRACT

OBJECTIVES: Currently, there is no reliable automated method for measuring changes in the condylar process after orthognathic surgery. Therefore, this study proposes an automated method to measure condylar changes in patients with skeletal class II malocclusion following surgical-orthodontic treatment. METHODS: Cone-beam CT (CBCT) scans from 48 patients were segmented using the nnU-Net network for automated maxillary and mandibular delineation. Regions unaffected by orthognathic surgery were selectively cropped. Automated registration yielded condylar displacement and volume calculations, each repeated three times for precision. Logistic regression and linear regression were used to analyse the correlation between condylar position changes at different time points. RESULTS: The Dice score for the automated segmentation of the condyle was 0.971. The intraclass correlation coefficients (ICCs) for all repeated measurements ranged from 0.93 to 1.00. The automated measurements showed that 83.33% of patients exhibited condylar resorption six months or more after surgery. Logistic regression and linear regression indicated a positive correlation between counterclockwise rotation in the pitch plane and condylar resorption (P < .01), and a positive correlation between the rotational angles in all three planes and changes in condylar volume at six months after surgery (P ≤ .04). CONCLUSIONS: This study's automated method for measuring condylar changes shows excellent repeatability. Patients with skeletal class II malocclusion may experience condylar resorption after bimaxillary orthognathic surgery, and this is correlated with counterclockwise rotation in the sagittal plane. ADVANCES IN KNOWLEDGE: This study proposes an innovative multi-step CBCT-based registration method and establishes an automated approach for quantitatively measuring condylar changes after orthognathic surgery. This method opens up new possibilities for studying condylar morphology.
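The Dice score reported above is the standard overlap metric for validating automated segmentations against a manual reference. As a minimal illustration (the toy binary masks and the function name `dice_score` are ours, not from the paper):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient of two flat binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))   # voxels where both masks are 1
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0        # both masks empty: perfect match

# 3 overlapping voxels, 4 voxels in each mask -> Dice = 6 / 8 = 0.75
score = dice_score([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
```

A Dice of 0.971, as reported for the condyle, thus means near-total overlap with the reference delineation.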


Subject(s)
Cone-Beam Computed Tomography , Malocclusion, Angle Class II , Mandibular Condyle , Orthognathic Surgical Procedures , Humans , Cone-Beam Computed Tomography/methods , Malocclusion, Angle Class II/diagnostic imaging , Malocclusion, Angle Class II/surgery , Mandibular Condyle/diagnostic imaging , Female , Male , Adult , Adolescent , Young Adult
2.
Neuroimage ; 295: 120652, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38797384

ABSTRACT

Accurate processing and analysis of non-human primate (NHP) brain magnetic resonance imaging (MRI) serves an indispensable role in understanding brain evolution, development, aging, and diseases. Despite the accumulation of diverse NHP brain MRI datasets at various developmental stages and from various imaging sites/scanners, existing computational tools designed for human MRI typically perform poorly on NHP data, owing to large differences in brain size, morphology, and imaging appearance across species, sites, and ages, highlighting the need for NHP-specialized MRI processing tools. To address this issue, we present a robust, generic, and fully automated computational pipeline, the non-human primate Brain Extraction and Segmentation Toolbox (nBEST), whose main functionality includes brain extraction, non-cerebrum removal, and tissue segmentation. Building on cutting-edge deep learning techniques, employing lifelong learning to flexibly integrate data from diverse NHP populations, and introducing a novel 3D U-NeXt architecture, nBEST robustly handles structural NHP brain MR images across species, sites, and developmental stages (from neonates to the elderly). We extensively validated nBEST on, to our knowledge, the largest assembled dataset in NHP brain studies, encompassing 1,469 scans of 11 species (e.g., rhesus macaques, cynomolgus macaques, chimpanzees, marmosets, and squirrel monkeys) from 23 independent datasets. Compared to alternative tools, nBEST excels in precision, applicability, robustness, comprehensiveness, and generalizability, greatly benefiting downstream longitudinal, cross-sectional, and cross-species quantitative analyses. We have released nBEST as an open-source toolbox (https://github.com/TaoZhong11/nBEST) and are committed to its continual refinement, through lifelong learning on incoming data, to contribute to the research field.


Subject(s)
Brain , Deep Learning , Magnetic Resonance Imaging , Animals , Brain/diagnostic imaging , Brain/anatomy & histology , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Macaca mulatta , Neuroimaging/methods , Pan troglodytes/anatomy & histology , Aging/physiology
3.
Article in English | MEDLINE | ID: mdl-38669174

ABSTRACT

Accurate segmentation of brain structures is crucial for analyzing longitudinal changes in children's brains. However, existing methods are mostly based on models established at a single time point, due to the difficulty of obtaining annotated data and the dynamic variation of tissue intensity. The main problem with such approaches is that, when conducting longitudinal analysis, images from different time points are segmented by different models, leading to significant variation in estimated development trends. In this paper, we propose a novel unified model with a co-registration framework to segment children's brain images from neonates to preschoolers, formulated in two stages. First, to overcome the shortage of annotated data, we propose building gold-standard segmentations with a co-registration framework guided by longitudinal data. Second, we construct a unified segmentation model tailored to brain images at 0-6 years of age by introducing a convolutional network (named SE-VB-Net), which combines our previously proposed VB-Net with a Squeeze-and-Excitation (SE) block. Moreover, unlike existing methods that require both T1- and T2-weighted MR images as inputs, our model also accepts a single T1-weighted MR image as input. The proposed method is evaluated on the main dataset (320 longitudinal subjects with an average of 2 time points) and two external datasets (10 cases at 6 months of age and 40 cases at 20-45 weeks, respectively). Results demonstrate that our proposed method achieves high performance (>92%) even from a single time point. This means it is suitable for brain image analysis with large appearance variation, and it greatly broadens the application scenarios.
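The SE block named above recalibrates feature channels: each channel is squeezed to a scalar by global average pooling, excited through a small fully-connected bottleneck, and the resulting sigmoid gate rescales the channel. A toy pure-Python sketch (channels as flat lists; the tiny weight matrices `w1`/`w2` are illustrative, not learned weights from SE-VB-Net):

```python
import math

def se_block(feature_maps, w1, w2):
    # Squeeze: global average pooling -> one scalar per channel.
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid bottleneck over channel scalars.
    h = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * hj for w, hj in zip(row, h)))) for row in w2]
    # Rescale: gate every value of channel c by its learned importance s[c].
    return [[v * sc for v in ch] for ch, sc in zip(feature_maps, s)]

# Two 4-voxel channels; identity weights make the gates easy to read.
out = se_block([[1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0]],
               w1=[[1.0, 0.0], [0.0, 1.0]],
               w2=[[1.0, 0.0], [0.0, 1.0]])
```

In SE-VB-Net this gating sits inside the convolutional backbone on 3D feature maps; flat lists are used here only for clarity.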

4.
Med Image Anal ; 94: 103148, 2024 May.
Article in English | MEDLINE | ID: mdl-38554550

ABSTRACT

Deep learning methods show great potential for the efficient and precise estimation of quantitative parameter maps from multiple magnetic resonance (MR) images. Current deep learning-based MR parameter mapping (MPM) methods are mostly trained and tested on data with specific acquisition settings. However, scan protocols in practice vary across centers, scanners, and studies. Thus, deep learning methods applicable to MPM with varying acquisition settings are highly desirable but still rarely investigated. In this work, we develop a model-based deep network termed MMPM-Net for robust MPM under varying acquisition settings. A deep learning-based denoiser is introduced to construct the regularization term in the nonlinear inversion problem of MPM. The alternating direction method of multipliers is used to solve the optimization problem and is then unrolled to construct MMPM-Net. The variation in acquisition parameters is handled by the data fidelity component of MMPM-Net. Extensive experiments on R2 mapping and R1 mapping datasets with substantial variations in acquisition settings demonstrate that the proposed MMPM-Net outperforms other state-of-the-art MR parameter mapping methods both qualitatively and quantitatively.
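For context on what a parameter map encodes: classical R2 mapping fits a mono-exponential decay S(TE) = S0·exp(-R2·TE) to each voxel's signal across echo times. A textbook log-linear least-squares baseline (a classical method shown for intuition, not the paper's unrolled MMPM-Net) can be sketched as:

```python
import math

def fit_r2(tes, signals):
    # Linearize: log S = log S0 - R2 * TE, then ordinary least squares.
    ys = [math.log(s) for s in signals]
    n = len(tes)
    mx, my = sum(tes) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(tes, ys))
             / sum((x - mx) ** 2 for x in tes))
    return math.exp(my - slope * mx), -slope   # (S0, R2)

tes = [0.010, 0.020, 0.030, 0.040]              # echo times in seconds (illustrative)
signals = [math.exp(-20.0 * te) for te in tes]  # noiseless decay: S0 = 1, R2 = 20 1/s
s0, r2 = fit_r2(tes, signals)
```

Varying the echo times `tes` between scans is exactly the kind of acquisition-setting change the paper's data fidelity term is designed to absorb.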


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Methacrylates , Humans , Image Processing, Computer-Assisted/methods , Brain , Magnetic Resonance Imaging/methods
5.
IEEE Trans Med Imaging ; 43(7): 2522-2536, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38386579

ABSTRACT

Automatic vertebral osteophyte recognition in digital radiography is of great importance for the early prediction of degenerative disease, but it remains challenging because of the small size of osteophytes and the high inter-class similarity between normal and osteophytic vertebrae. Meanwhile, common sampling strategies in convolutional neural networks can cause loss of detailed context. All of this can lead to incorrect localization. In this paper, based on important pathological priors, we define a set of potential lesions for each vertebra and propose a novel Pathological Priors Inspired Network (PPIN) to achieve accurate osteophyte recognition. PPIN comprises a backbone feature extractor integrated with a Wavelet Transform Sampling module for extracting high-frequency detailed context, a detection branch for locating all potential lesions, and a classification branch for producing the final osteophyte recognition. The Anatomical Map-guided Filter between the two branches helps the network focus on specific anatomical regions via the heatmaps of potential lesions generated in the detection branch, addressing the incorrect localization problem. To reduce the inter-class similarity, a Bilateral Augmentation Module based on the graph relationship is proposed to imitate the clinical diagnosis process and to extract discriminative contextual information between adjacent vertebrae in the classification branch. Experiments on two osteophyte-specific datasets collected from the public VinDr-Spine database show that the proposed PPIN achieves the best recognition performance among multitask frameworks and shows strong generalization. Results on a private dataset demonstrate the potential for clinical application. The Class Activation Maps also show the powerful localization capability of PPIN. The source code is available at https://github.com/Phalo/PPIN.
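The Wavelet Transform Sampling module addresses the detail loss of plain pooling by keeping high-frequency sub-bands alongside the low-frequency ones. One level of the 1D Haar transform illustrates the idea (the module itself operates on 2D feature maps; this is a minimal sketch):

```python
import math

def haar_step(signal):
    # Low-pass averages keep coarse context; high-pass differences keep the
    # fine detail that plain 2x downsampling would discard. Orthonormal scaling.
    lo = [(a + b) / math.sqrt(2.0) for a, b in zip(signal[::2], signal[1::2])]
    hi = [(a - b) / math.sqrt(2.0) for a, b in zip(signal[::2], signal[1::2])]
    return lo, hi

# A flat pair yields zero detail; an edge pair yields a nonzero detail coefficient.
lo, hi = haar_step([4.0, 4.0, 2.0, 0.0])
```

Both sub-bands together are lossless: each input pair can be reconstructed from its `lo` and `hi` coefficients, which is why wavelet sampling preserves the tiny structures that osteophyte recognition depends on.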


Subject(s)
Osteophyte , Humans , Osteophyte/diagnostic imaging , Algorithms , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Spine/diagnostic imaging , Wavelet Analysis
6.
IEEE Trans Med Imaging ; 43(5): 1958-1971, 2024 May.
Article in English | MEDLINE | ID: mdl-38206779

ABSTRACT

Breast cancer is a significant global health challenge, causing hundreds of thousands of deaths annually. Magnetic Resonance Imaging (MRI) provides various sequences for characterizing tumor morphology and internal patterns, making it an effective tool for the detection and diagnosis of breast tumors. However, previous deep-learning-based tumor segmentation methods for multi-parametric MRI still have limitations in exploiting inter-modality information and focusing on task-informative modalities. To address these shortcomings, we propose a Modality-Specific Information Disentanglement (MoSID) framework that extracts both inter- and intra-modality attention maps as prior knowledge for guiding tumor segmentation. Specifically, by disentangling modality-specific information, the MoSID framework provides complementary clues for the segmentation task, generating modality-specific attention maps to guide modality selection and inter-modality evaluation. Our experiments on two 3D breast datasets and one 2D prostate dataset demonstrate that the MoSID framework outperforms other state-of-the-art multi-modality segmentation methods, even when modalities are missing. Based on the segmented lesions, we further train a classifier to predict the patients' response to radiotherapy. The prediction accuracy is comparable to that achieved using manually segmented tumors for treatment outcome prediction, indicating the robustness and effectiveness of the proposed segmentation method. The code is available at https://github.com/Qianqian-Chen/MoSID.


Subject(s)
Breast Neoplasms , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Humans , Breast Neoplasms/diagnostic imaging , Female , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Algorithms , Deep Learning , Breast/diagnostic imaging , Databases, Factual , Prostatic Neoplasms/diagnostic imaging
7.
IEEE Trans Biomed Eng ; 71(1): 183-194, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37432838

ABSTRACT

Early diagnosis and timely intervention are significantly beneficial to patients with autism spectrum disorder (ASD). Although structural magnetic resonance imaging (sMRI) has become an essential tool to facilitate the diagnosis of ASD, sMRI-based approaches still have the following issues. The heterogeneity and subtle anatomical changes place high demands on effective feature descriptors. Additionally, the original features are usually high-dimensional, while most existing methods select feature subsets in the original space, where noise and outliers may hinder the discriminative ability of the selected features. In this article, we propose a margin-maximized norm-mixed representation learning framework for ASD diagnosis with multi-level flux features extracted from sMRI. Specifically, a flux feature descriptor is devised to quantify comprehensive gradient information of brain structures at both local and global levels. For the multi-level flux features, we learn latent representations in an assumed low-dimensional space, in which a self-representation term is incorporated to characterize the relationships among features. We also introduce mixed norms to finely select the original flux features used to construct the latent representations while preserving the low-rankness of those representations. Furthermore, a margin maximization strategy is applied to enlarge the inter-class distance of samples, thereby increasing the discriminative ability of the latent representations. Extensive experiments on several datasets show that our proposed method achieves promising classification performance (the average area under the curve, accuracy, specificity, and sensitivity on the studied ASD datasets are 0.907, 0.896, 0.892, and 0.908, respectively) and also identifies potential biomarkers for ASD diagnosis.


Subject(s)
Autism Spectrum Disorder , Humans , Autism Spectrum Disorder/diagnostic imaging , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Learning
8.
Med Image Anal ; 92: 103045, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38071865

ABSTRACT

Automatic and accurate dose distribution prediction plays an important role in radiotherapy planning. Although previous methods can provide promising performance, most did not consider the beam-shaped radiation delivery used in clinical practice. This leads to inaccurate predictions, especially along beam paths. To solve this problem, we propose a beam-wise dose composition learning (BDCL) method for dose prediction in the context of head and neck (H&N) radiotherapy planning. Specifically, a global dose network is first utilized to predict coarse dose values in the whole-image space. Then, we generate individual beam masks to decompose the coarse dose distribution into multiple field doses, called beam voters, which are further refined by a subsequent beam dose network and reassembled to form the final dose distribution. In particular, we design an overlap consistency module to keep high-level features similar in regions where different beam voters overlap. To make the predicted dose distribution more consistent with real radiotherapy plans, we also propose a dose-volume histogram (DVH) calibration process to facilitate feature learning in clinically important regions. We further apply an edge enhancement procedure to strengthen the learning of features extracted from dose falloff regions. Experimental results on a public H&N cancer dataset from the AAPM OpenKBP challenge show that our method outperforms other state-of-the-art approaches by significant margins. The source code is released at https://github.com/TL9792/BDCLDosePrediction.
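A dose-volume histogram like the one underlying the DVH calibration reports, for each dose level, the fraction of a structure's voxels receiving at least that dose. A minimal cumulative-DVH sketch (the toy dose values in Gy and the bin choice are ours, for illustration only):

```python
def dvh(doses, bins):
    # Cumulative DVH: fraction of voxels receiving at least d Gy, per threshold d.
    n = len(doses)
    return [sum(1 for v in doses if v >= d) / n for d in bins]

# Four voxels at 10/20/30/40 Gy -> all exceed 0 Gy, 3/4 exceed 15 Gy, 1/4 exceed 35 Gy.
curve = dvh([10.0, 20.0, 30.0, 40.0], bins=[0.0, 15.0, 35.0])   # -> [1.0, 0.75, 0.25]
```

Matching such curves between predicted and planned doses is what makes a prediction clinically meaningful, not just voxel-wise accurate.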


Subject(s)
Head and Neck Neoplasms , Radiotherapy, Intensity-Modulated , Humans , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Head and Neck Neoplasms/radiotherapy
9.
IEEE Trans Med Imaging ; 43(2): 794-806, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37782590

ABSTRACT

The superiority of magnetic resonance (MR)-only radiotherapy treatment planning (RTP) has been well demonstrated, benefiting from the synthesis of computed tomography (CT) images, which supplies electron density information and eliminates the registration errors of multi-modal images. An increasing number of methods have been proposed for MR-to-CT synthesis. However, synthesizing CT images of different anatomical regions from MR images with different sequences using a single model is challenging, due to the large differences between these regions and the limitations of convolutional neural networks in capturing global context. In this paper, we propose a multi-scale tokens-aware Transformer network (MTT-Net) for multi-region and multi-sequence MR-to-CT synthesis in a single model. Specifically, we develop a multi-scale image tokens Transformer to capture multi-scale global spatial information between different anatomical structures in different regions. Besides, to address the limited attention areas of tokens in the Transformer, we introduce a multi-shape window self-attention to enlarge the receptive fields for learning multi-directional spatial representations. Moreover, we adopt a domain classifier in the generator to introduce domain knowledge for distinguishing MR images of different regions and sequences. The proposed MTT-Net was evaluated on a multi-center dataset and an unseen region, achieving remarkable performance: MAE of 69.33 ± 10.39 HU, SSIM of 0.778 ± 0.028, and PSNR of 29.04 ± 1.32 dB in the head & neck region, and MAE of 62.80 ± 7.65 HU, SSIM of 0.617 ± 0.058, and PSNR of 25.94 ± 1.02 dB in the abdomen. The proposed MTT-Net outperforms state-of-the-art methods in both accuracy and visual quality.
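The reported MAE and PSNR quantify intensity error between synthetic and real CT. A minimal sketch of both metrics (the toy HU values and the assumed peak/dynamic-range value are illustrative, not from the paper):

```python
import math

def mae(a, b):
    # Mean absolute error, e.g. in Hounsfield units for synthetic CT.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak):
    # Peak signal-to-noise ratio in dB; `peak` is the assumed dynamic range.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100.0, 200.0, 300.0]   # toy "real CT" intensities
syn = [110.0, 190.0, 300.0]   # toy "synthetic CT" intensities
err = mae(ref, syn)           # (10 + 10 + 0) / 3
```

Lower MAE and higher PSNR are better, which is why head & neck (MAE 69.33 HU, PSNR 29.04 dB) and abdomen (MAE 62.80 HU, PSNR 25.94 dB) are not strictly ordered: the abdomen has lower absolute error but also a harder noise structure.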


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed , Neural Networks, Computer , Magnetic Resonance Spectroscopy
10.
Comput Med Imaging Graph ; 111: 102319, 2024 01.
Article in English | MEDLINE | ID: mdl-38147798

ABSTRACT

Image registration plays a crucial role in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), serving as a fundamental step for the subsequent diagnosis of benign and malignant tumors. However, the registration process encounters significant challenges due to the substantial intensity changes observed among different time points, resulting from the injection of contrast agents. Furthermore, previous studies have often overlooked the alignment of small structures, such as tumors and vessels. In this work, we propose a novel DCE-MRI registration framework that can effectively align the DCE-MRI time series. Specifically, our framework consists of two steps: a de-enhancement synthesis step and a coarse-to-fine registration step. In the de-enhancement synthesis step, a disentanglement network separates DCE-MRI images into a content component representing the anatomical structures and a style component indicating the presence or absence of contrast agents. This step generates synthetic images with the contrast agents removed, alleviating the negative effects of intensity changes on the subsequent registration. In the registration step, we utilize a coarse registration network followed by a refined registration network. These two networks estimate the coarse and refined displacement vector fields (DVFs) in a pairwise and groupwise registration manner, respectively. In addition, to enhance the alignment accuracy for small structures, a voxel-wise constraint is further imposed by assessing the smoothness of the time-intensity curves (TICs). Experimental results on liver DCE-MRI demonstrate that our proposed method outperforms state-of-the-art approaches, offering more robust and accurate alignment results.
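The voxel-wise TIC constraint rewards alignments whose time-intensity curves vary smoothly across the registered series. One simple smoothness measure of this kind is the sum of squared second differences, shown below as our illustrative stand-in (the paper's exact formulation may differ):

```python
def tic_roughness(curve):
    # Sum of squared second differences: zero for linear enhancement,
    # large when a voxel's time-intensity curve jitters from misalignment.
    return sum((curve[i - 1] - 2.0 * curve[i] + curve[i + 1]) ** 2
               for i in range(1, len(curve) - 1))

smooth = tic_roughness([0.0, 1.0, 2.0, 3.0])   # linear uptake at a well-aligned voxel
jitter = tic_roughness([0.0, 2.0, 1.0, 3.0])   # the same voxel, misaligned in time
```

Minimizing such a roughness term over the DVFs pushes each voxel to trace a physiologically plausible enhancement curve.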


Subject(s)
Contrast Media , Neoplasms , Humans , Image Interpretation, Computer-Assisted/methods , Algorithms , Reproducibility of Results , Magnetic Resonance Imaging/methods , Liver/diagnostic imaging
11.
Eur Radiol ; 34(7): 4287-4299, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38127073

ABSTRACT

OBJECTIVES: To develop an ensemble multi-task deep learning (DL) framework for the automatic and simultaneous detection, segmentation, and classification of primary bone tumors (PBTs) and bone infections based on multi-center, multi-parametric MRI. METHODS: This retrospective study divided 749 patients with PBTs or bone infections from two hospitals into a training set (N = 557), an internal validation set (N = 139), and an external validation set (N = 53). The ensemble framework was constructed using T1-weighted images (T1WI), T2-weighted images (T2WI), and clinical characteristics for binary (PBTs/bone infections) and three-category (benign/intermediate/malignant PBTs) classification. Detection and segmentation performance was evaluated using Intersection over Union (IoU) and the Dice score. Classification performance was evaluated using the receiver operating characteristic (ROC) curve and compared with radiologist interpretations. RESULTS: On the external validation set, the single T1WI-based and T2WI-based multi-task models obtained IoUs of 0.71 ± 0.25/0.65 ± 0.30 for detection and Dice scores of 0.75 ± 0.26/0.70 ± 0.33 for segmentation. The framework achieved AUCs of 0.959 (95%CI, 0.955-1.000)/0.900 (95%CI, 0.773-1.000) and accuracies of 90.6% (95%CI, 79.7-95.9%)/78.3% (95%CI, 58.1-90.3%) for the binary/three-category classification. Meanwhile, for the three-category classification, the framework was superior to three junior radiologists (accuracies: 65.2%, 69.6%, and 69.6%) and comparable to two senior radiologists (accuracies: 78.3% and 78.3%). CONCLUSION: The MRI-based ensemble multi-task framework shows promising performance in automatically and simultaneously detecting, segmenting, and classifying PBTs and bone infections, outperforming junior radiologists.
CLINICAL RELEVANCE STATEMENT: Compared with junior radiologists, the ensemble multi-task deep learning framework effectively improves differential diagnosis for patients with primary bone tumors or bone infections. This finding may help physicians make treatment decisions and enable timely treatment of patients. KEY POINTS: • The ensemble framework fusing multi-parametric MRI and clinical characteristics effectively improves the classification ability of single-modality models. • The ensemble multi-task deep learning framework performed well in detecting, segmenting, and classifying primary bone tumors and bone infections. • The ensemble framework achieves an optimal classification performance superior to junior radiologists' interpretations, assisting the clinical differential diagnosis of primary bone tumors and bone infections.
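Detection above is scored with Intersection over Union (IoU), the companion metric to the Dice score (for binary masks, Dice = 2·IoU / (1 + IoU)). A minimal sketch for flat binary masks (toy values, not from the study):

```python
def iou(pred, truth):
    # Intersection over Union of two flat binary masks.
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# 2 shared voxels out of 4 in the union -> IoU = 0.5
score = iou([1, 1, 0, 1], [1, 0, 1, 1])
```

Because IoU penalizes mismatch more harshly than Dice, the reported IoU values (0.71/0.65) being below the Dice values (0.75/0.70) is expected.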


Subject(s)
Bone Neoplasms , Deep Learning , Humans , Bone Neoplasms/diagnostic imaging , Female , Retrospective Studies , Male , Middle Aged , Adult , Magnetic Resonance Imaging/methods , Aged , Adolescent , Image Interpretation, Computer-Assisted/methods , Bone Diseases, Infectious/diagnostic imaging , Young Adult , Child
12.
Artif Intell Med ; 146: 102720, 2023 12.
Article in English | MEDLINE | ID: mdl-38042604

ABSTRACT

Automatic segmentation of the three substructures of the glomerular filtration barrier (GFB) in transmission electron microscopy (TEM) images holds immense potential for aiding pathologists in renal disease diagnosis. However, the labor-intensive nature of manual annotation limits the training data available for fully-supervised deep learning models. Addressing this, our study harnesses self-supervised representation learning (SSRL) to exploit vast unlabeled data and mitigate annotation scarcity. Our innovation, GCLR, is a hybrid pixel-level pretext task tailored for GFB segmentation, integrating two subtasks: global clustering (GC) and local restoration (LR). GC captures the overall GFB by learning global context representations, while LR refines the three substructures by learning local detail representations. Experiments on 18,928 unlabeled glomerular TEM images for self-supervised pre-training and 311 labeled images for fine-tuning demonstrate that our proposed GCLR obtains state-of-the-art segmentation results for all three substructures of the GFB, with Dice similarity coefficients of 86.56 ± 0.16%, 75.56 ± 0.36%, and 79.41 ± 0.16%, respectively, compared with other representative self-supervised pretext tasks. GCLR also outperforms fully-supervised pre-training methods based on three large-scale public datasets (MitoEM, COCO, and ImageNet) with less training data and time.


Subject(s)
Glomerular Filtration Barrier , Kidney Glomerulus , Cluster Analysis , Microscopy, Electron, Transmission , Supervised Machine Learning , Image Processing, Computer-Assisted
13.
IEEE J Biomed Health Inform ; 27(12): 5883-5894, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37792661

ABSTRACT

Automatic delineation of the lumen and vessel contours in intravascular ultrasound (IVUS) images is crucial for subsequent IVUS-based analysis. Existing methods usually address this task through mask-based segmentation, which cannot effectively enforce the anatomical plausibility of the lumen and external elastic lamina (EEL) contours and thus limits their performance. In this article, we propose a contour-encoding-based method, the coupled contour regression network (CCRNet), to directly predict lumen and EEL contour pairs. The lumen and EEL contours are resampled, coupled, and embedded into a low-dimensional space to learn a compact contour representation. We then employ a convolutional backbone to predict the coupled contour signatures and reconstruct the object contours from the signatures with a linear decoder. Assisted by the implicit anatomical prior of the paired lumen and EEL contours in the signature space and contour decoder, CCRNet avoids producing unreasonable results. We evaluated the proposed method on a large IVUS dataset consisting of 7204 cross-sectional frames from 185 pullbacks. CCRNet can rapidly extract contours at 100 fps. Without any post-processing, all produced contours are anatomically reasonable in the 19 test pullbacks. The mean Dice similarity coefficients of CCRNet for the lumen and EEL are 0.940 and 0.958, comparable to mask-based models. In terms of the contour metric Hausdorff distance, CCRNet achieves 0.258 mm for the lumen and 0.268 mm for the EEL, outperforming the mask-based models.
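The contour-signature idea, resampling a closed contour and embedding it in a low-dimensional space with a linear decoder, has a classical analogue in truncated Fourier descriptors, sketched here purely for intuition (CCRNet learns its embedding from paired lumen/EEL contours instead of using a fixed Fourier basis):

```python
import cmath

def contour_signature(points, k):
    # Keep only the k lowest-frequency Fourier descriptors of a closed
    # contour (points given as complex numbers x + iy): a compact code.
    n = len(points)
    return [sum(p * cmath.exp(-2j * cmath.pi * f * t / n)
                for t, p in enumerate(points)) / n
            for f in range(k)]

def decode(sig, n):
    # Linear decoder: inverse DFT restricted to the kept frequencies.
    return [sum(c * cmath.exp(2j * cmath.pi * f * t / n)
                for f, c in enumerate(sig))
            for t in range(n)]

# A circular contour is perfectly captured by its first two descriptors.
pts = [cmath.exp(2j * cmath.pi * t / 8) for t in range(8)]
sig = contour_signature(pts, 2)
rec = decode(sig, 8)
```

The appeal in both cases is the same: every point in the low-dimensional signature space decodes to a smooth, closed curve, so implausible jagged contours are excluded by construction.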


Subject(s)
Ultrasonography, Interventional , Humans , Cross-Sectional Studies , Ultrasonography, Interventional/methods , Ultrasonography
14.
Comput Biol Med ; 165: 107373, 2023 10.
Article in English | MEDLINE | ID: mdl-37611424

ABSTRACT

Motion artifacts in magnetic resonance imaging (MRI) have always been a serious issue because they can affect subsequent diagnosis and treatment. Supervised deep learning methods have been investigated for the removal of motion artifacts; however, they require paired data that are difficult to obtain in clinical settings. Although unsupervised methods are widely proposed to fully use clinical unpaired data, they generally focus on the anatomical structures represented in the spatial domain while ignoring the phase errors (deviations or inaccuracies in phase information, possibly caused by rigid motion during image acquisition) available in the frequency domain. In this study, a 2D unsupervised deep learning method named unsupervised disentangled dual-domain network (UDDN) was proposed to effectively disentangle and remove unwanted rigid motion artifacts from images. In UDDN, a dual-domain encoding module captures different types of information from the spatial and frequency domains to enrich the representation. Moreover, a cross-domain attention fusion module is proposed to effectively fuse information from different domains, reduce information redundancy, and improve motion artifact removal. UDDN was validated on a publicly available dataset and a clinical dataset. Qualitative and quantitative experimental results showed that our method can effectively remove motion artifacts and reconstruct image details. Moreover, the performance of UDDN surpasses that of several state-of-the-art unsupervised methods and is comparable with that of the supervised method. Therefore, our method has great potential for clinical application in MRI, such as real-time removal of rigid motion artifacts.
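The frequency-domain branch of a dual-domain encoder views the image through its spectrum, where rigid motion manifests as phase errors. A naive 1D DFT makes the spatial/frequency duality concrete (real pipelines use an FFT over 2D k-space; this toy is for illustration only):

```python
import cmath

def dft(signal):
    # Naive discrete Fourier transform: the frequency-domain view of a signal.
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal))
            for k in range(n)]

spec = dft([1.0, 0.0, -1.0, 0.0])        # a sampled cosine at frequency 1
mags = [abs(c) for c in spec]            # magnitude: energy concentrates at frequency 1
phases = [cmath.phase(c) for c in spec]  # phase: where rigid-motion errors would appear
```

A spatial-only network sees the four samples; the frequency branch additionally sees that all the energy sits at one frequency with a particular phase, which is exactly the information motion corrupts.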


Subject(s)
Artifacts , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Motion , Image Processing, Computer-Assisted/methods
15.
Bioengineering (Basel) ; 10(7)2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37508857

ABSTRACT

Accurate segmentation of interstitial lung disease (ILD) patterns from computed tomography (CT) images is an essential prerequisite to treatment and follow-up. However, it is highly time-consuming for radiologists to segment ILD patterns pixel by pixel from CT scans with hundreds of slices. Consequently, it is hard to obtain large amounts of well-annotated data, which poses a huge challenge for data-driven deep learning-based methods. To alleviate this problem, we propose an end-to-end semi-supervised learning framework for the segmentation of ILD patterns (ESSegILD) from CT images via self-training with selective re-training. The proposed ESSegILD model is trained using a large CT dataset with slice-wise sparse annotations, i.e., only a few slices in each CT volume are labeled with ILD patterns. Specifically, we adopt a popular semi-supervised framework, Mean-Teacher, which consists of a teacher model and a student model and uses consistency regularization to encourage consistent outputs from the two models under different perturbations. Furthermore, we introduce the latest self-training technique with a selective re-training strategy to select reliable pseudo-labels generated by the teacher model, which are used to expand the training samples and promote the student model during iterative training. By leveraging consistency regularization and self-training with selective re-training, ESSegILD can effectively utilize unlabeled data from a partially annotated dataset to progressively improve segmentation performance. Experiments are conducted on a dataset of 67 pneumonia patients with incomplete annotations, containing over 11,000 CT images with eight different ILD lung patterns, and the results indicate that our proposed method is superior to state-of-the-art methods.
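In the Mean-Teacher framework adopted above, the teacher is not trained by gradient descent: its weights track an exponential moving average (EMA) of the student's weights across training steps. A sketch with weights as flat lists (the momentum value `alpha` shown is illustrative, not the paper's setting):

```python
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights are an exponential moving average of student weights;
    # alpha close to 1 makes the teacher a slow, stable ensemble of students.
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

# One update step with alpha = 0.9 moves each teacher weight 10% toward the student.
teacher = ema_update([0.0, 1.0], [1.0, 1.0], alpha=0.9)   # -> approx [0.1, 1.0]
```

The slowly-moving teacher then produces the pseudo-labels that the selective re-training step filters before feeding them back to the student.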

16.
Heliyon ; 9(7): e17651, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37449128

ABSTRACT

Accurate segmentation of the mandibular canal is essential in dental implant and maxillofacial surgery, as it helps prevent damage to the nerve and vessels inside the canal. The task is challenging because of the low contrast of CBCT scans and the small scale of the mandibular canal. Several innovative methods have been proposed for mandibular canal segmentation with promising performance. However, most of them segment the canal from sliding patches, which can compromise the morphological integrity of the tubular structure. In this study, we propose whole-mandibular-canal segmentation using a dental CBCT volume transformed into the Frenet frame. Considering the connectivity of the mandibular canal, we transform the CBCT volume, based on the Frenet frame, into a sub-volume that contains the whole canal and thus preserves complete 3D structural information. Moreover, to further improve segmentation performance, we use the clDice loss to preserve the integrity of the canal's tubular structure during segmentation. Experimental results on our CBCT dataset show that integrating the proposed Frenet-frame volume transformation into other state-of-the-art methods yields a 0.5%∼12.1% improvement in Dice. Our proposed method achieves a Dice of 0.865 (±0.035) and a clDice of 0.971 (±0.020), suggesting that it segments the mandibular canal with superior performance.
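The clDice measure reported above combines topology precision (how much of the predicted skeleton lies inside the ground-truth mask) and topology sensitivity (how much of the ground-truth skeleton lies inside the predicted mask). The following is a didactic sketch assuming the skeletons are already available as voxel sets; in practice they come from a (soft-)skeletonization step that is not shown here.

```python
def cl_dice(pred_mask, true_mask, pred_skel, true_skel):
    """clDice: harmonic mean of topology precision and topology sensitivity.

    All four arguments are sets of voxel coordinates (tuples).
    """
    tprec = len(pred_skel & true_mask) / len(pred_skel)  # topology precision
    tsens = len(true_skel & pred_mask) / len(true_skel)  # topology sensitivity
    if tprec + tsens == 0:
        return 0.0
    return 2 * tprec * tsens / (tprec + tsens)

# Toy 2D example: half of the predicted skeleton misses the true mask,
# while the whole true skeleton is covered by the prediction.
pred_mask = {(0, 0), (1, 0)}
true_mask = {(0, 0), (2, 0)}
pred_skel = {(0, 0), (1, 0)}
true_skel = {(0, 0)}
score = cl_dice(pred_mask, true_mask, pred_skel, true_skel)  # 2/3
```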

17.
Nat Commun ; 14(1): 3741, 2023 06 23.
Article in English | MEDLINE | ID: mdl-37353501

ABSTRACT

Cardiovascular disease is a major global public-health problem, and intelligent diagnostic approaches play an increasingly important role in the analysis of electrocardiograms (ECGs). Convenient wearable ECG devices enable the detection of transient arrhythmias and improve patient health by making it possible to intervene during continuous monitoring. We collected 658,486 wearable 12-lead ECGs, of which 164,538 were annotated and the remaining 493,948 lacked diagnostic labels. We present four data-augmentation operations and a self-supervised learning classification framework that recognizes 60 ECG diagnostic terms. Our model achieves an average area under the receiver-operating characteristic curve (AUROC) of 0.975 and an average F1 score of 0.575 on the offline test. The average sensitivity, specificity, and F1 score during the 2-month online test are 0.736, 0.954, and 0.468, respectively. This approach offers real-time intelligent diagnosis and detects abnormal segments in long-term ECG monitoring in the clinical setting for further diagnosis by cardiologists.
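The abstract does not name its four augmentation operations, so the sketch below shows common ECG-style signal augmentations (amplitude scaling, circular time shift, Gaussian noise, segment masking) purely as illustrative stand-ins for self-supervised pre-training, not the authors' exact set.

```python
import random

def scale(sig, factor):
    """Amplitude scaling of the whole signal."""
    return [factor * x for x in sig]

def shift(sig, k):
    """Circularly shift the signal right by k samples."""
    k %= len(sig)
    return sig[-k:] + sig[:-k] if k else list(sig)

def add_noise(sig, std, rng):
    """Add zero-mean Gaussian noise drawn from the given random.Random."""
    return [x + rng.gauss(0.0, std) for x in sig]

def mask(sig, start, length):
    """Zero out a contiguous segment, mimicking lead dropout."""
    return [0.0 if start <= i < start + length else x
            for i, x in enumerate(sig)]

sig = [0.0, 1.0, 2.0, 3.0]
views = [scale(sig, 2.0), shift(sig, 1), mask(sig, 1, 2),
         add_noise(sig, 0.1, random.Random(0))]
```

A self-supervised objective would then push the encoder to produce similar representations for these augmented views of the same ECG.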


Subject(s)
Arrhythmias, Cardiac , Wearable Electronic Devices , Humans , Arrhythmias, Cardiac/diagnosis , Electrocardiography , Algorithms , Supervised Machine Learning
18.
Pharm Biol ; 61(1): 737-745, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37129023

ABSTRACT

CONTEXT: Protocatechuic acid (PCA) has a protective effect against alcoholic liver injury, but its role in type 2 diabetes-induced liver injury is not well known. OBJECTIVES: This study explores the therapeutic effect and potential mechanism of PCA on type 2 diabetes-induced liver injury. MATERIALS AND METHODS: An insulin resistance/type 2 diabetic (IR/D) model was established by a 4-week high-fat diet plus streptozotocin (35 mg/kg; i.p.) in male Wistar rats pretreated with or without PCA (15 or 30 mg/kg for 6 days). RESULTS: PCA at 15 and 30 mg/kg significantly upregulated body weight (BW; 230.2, 257.8 g), high-density lipids (22.68, 34.78 mg/dL), glutathione (10.24, 16.21 nmol/mg), superoxide dismutase (21.62, 29.34 U/mg), glucagon-like peptide-1, glucose transporter-4, Wnt1, and β-catenin, while downregulating liver weight (LW; 9.4, 6.7 g), LW/BW (4.1, 2.6%), serum glucose (165, 120 mg/dL), serum insulin (13.46, 8.67 µIU/mL), homeostatic model assessment of insulin resistance (HOMA-IR; 5.48, 2.57), total cholesterol (68.52, 54.31 mg/dL), triglycerides (72.15, 59.64 mg/dL), low-density lipids (42.18, 30.71), aspartate aminotransferase (54.34, 38.68 U/L), alanine aminotransferase (42.87, 29.98 U/L), alkaline phosphatase (210.16, 126.47 U/L), malondialdehyde (16.52, 10.35), pro-inflammatory markers (tumor necrosis factor α (TNF-α; 149.67, 120.33 pg/mg), IL-6 (89.79, 73.69 pg/mg), and IL-1β (49.67, 38.73 pg/mg)), and nuclear factor kappa B (NF-κB), and ameliorated the abnormal pathological changes in IR/D rats. DISCUSSION AND CONCLUSION: PCA mitigates the insulin resistance, lipid accumulation, oxidative stress, and inflammation in liver tissues of IR/D rats by modulating the NF-κB and Wnt1/β-catenin pathways.
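The HOMA-IR values quoted above (5.48 and 2.57) follow the standard formula HOMA-IR = fasting glucose (mg/dL) × fasting insulin (µIU/mL) / 405, which can be checked directly against the reported glucose and insulin values:

```python
def homa_ir(glucose_mg_dl, insulin_uiu_ml):
    """Homeostatic model assessment of insulin resistance.

    HOMA-IR = fasting glucose (mg/dL) * fasting insulin (uIU/mL) / 405.
    """
    return glucose_mg_dl * insulin_uiu_ml / 405.0

# Reported values for the 15 mg/kg and 30 mg/kg PCA groups:
print(round(homa_ir(165, 13.46), 2))  # 5.48
print(round(homa_ir(120, 8.67), 2))   # 2.57
```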


Subject(s)
Chemical and Drug Induced Liver Injury, Chronic , Diabetes Mellitus, Type 2 , Insulin Resistance , Male , Rats , Animals , beta Catenin/metabolism , NF-kappa B/metabolism , Diabetes Mellitus, Type 2/metabolism , Chemical and Drug Induced Liver Injury, Chronic/metabolism , Rats, Wistar , Liver , Oxidative Stress , Tumor Necrosis Factor-alpha/metabolism , Triglycerides
19.
Comput Med Imaging Graph ; 107: 102245, 2023 07.
Article in English | MEDLINE | ID: mdl-37245416

ABSTRACT

Automatic segmentation of vertebral bodies (VBs) and intervertebral discs (IVDs) in 3D magnetic resonance (MR) images is vital for diagnosing and treating spinal diseases. However, segmenting the VBs and IVDs simultaneously is not trivial, and several problems arise, including blurry segmentation caused by anisotropic resolution, high computational cost, inter-class similarity and intra-class variability, and data imbalance. We propose a two-stage algorithm, named semi-supervised hybrid spine network (SSHSNet), that addresses these problems by achieving accurate simultaneous VB and IVD segmentation. In the first stage, we construct a 2D semi-supervised DeepLabv3+ using cross pseudo supervision to obtain intra-slice features and a coarse segmentation. In the second stage, a 3D full-resolution patch-based DeepLabv3+ is built to extract inter-slice information and combine it with the coarse segmentation and intra-slice features provided by the first stage. Moreover, a cross tri-attention module is applied to compensate for the loss of inter-slice and intra-slice information separately generated by the 2D and 3D networks, thereby improving feature-representation ability and achieving satisfactory segmentation results. The proposed SSHSNet was validated on a publicly available spine MR image dataset and achieved remarkable segmentation performance; the results also show that the method has great potential for dealing with data imbalance. Few previous studies have combined a semi-supervised learning strategy with a cross-attention mechanism for spine segmentation, so the proposed method may provide a useful tool for spine segmentation and aid clinical diagnosis and treatment of spinal diseases. Code is publicly available at: https://github.com/Meiyan88/SSHSNet.
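Cross pseudo supervision, which the first stage uses, trains two networks so that each treats the other's hard pseudo-label as its target on unlabeled pixels. The following is a minimal single-pixel sketch under assumed names, not the SSHSNet code:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cps_loss(logits_a, logits_b):
    """CE(A, argmax B) + CE(B, argmax A) for one unlabeled pixel."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    pseudo_a = max(range(len(pa)), key=pa.__getitem__)  # A's hard pseudo-label
    pseudo_b = max(range(len(pb)), key=pb.__getitem__)  # B's hard pseudo-label
    return -math.log(pa[pseudo_b]) - math.log(pb[pseudo_a])

# Disagreement between the two networks is penalized more than agreement:
agree = cps_loss([2.0, 0.0], [2.0, 0.0])
disagree = cps_loss([2.0, 0.0], [0.0, 2.0])
```

Averaging this loss over all unlabeled pixels yields the semi-supervised term added to the usual supervised loss on labeled slices.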


Subject(s)
Magnetic Resonance Imaging , Spine , Spine/diagnostic imaging , Magnetic Resonance Imaging/methods , Algorithms , Supervised Machine Learning , Image Processing, Computer-Assisted/methods
20.
Article in English | MEDLINE | ID: mdl-37247314

ABSTRACT

Isocitrate dehydrogenase (IDH) is one of the most important genotypes in patients with glioma because it affects treatment planning. Machine learning-based methods have been widely used to predict IDH status (denoted IDH prediction). However, learning discriminative features for IDH prediction remains challenging because gliomas are highly heterogeneous in MRI. In this paper, we propose a multi-level feature exploration and fusion network (MFEFnet) to comprehensively explore discriminative IDH-related features and fuse them at multiple levels for accurate IDH prediction from MRI. First, a segmentation-guided module, established by incorporating a segmentation task, guides the network to exploit features that are highly related to the tumor. Second, an asymmetry-magnification module detects the T2-FLAIR mismatch sign at the image and feature levels; the mismatch-related features can be magnified at different levels to strengthen the feature representations. Finally, a dual-attention feature-fusion module fuses and exploits the relationships among different features at the intra- and inter-slice fusion levels. The proposed MFEFnet is evaluated on a multi-center dataset and shows promising performance on an independent clinical dataset. The interpretability of the different modules is also evaluated to illustrate the effectiveness and credibility of the method. Overall, MFEFnet shows great potential for IDH prediction.
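The T2-FLAIR mismatch sign targeted by the asymmetry-magnification module is, roughly, tissue that is bright on T2 but relatively suppressed on FLAIR. The toy map below is a didactic approximation of that idea on 1D intensity lists (assuming non-constant images), not the paper's module:

```python
def min_max_norm(img):
    """Rescale intensities to [0, 1]; assumes the image is not constant."""
    lo, hi = min(img), max(img)
    return [(x - lo) / (hi - lo) for x in img]

def mismatch_map(t2, flair):
    """Positive where T2 is relatively brighter than FLAIR after normalization."""
    t2n, fn = min_max_norm(t2), min_max_norm(flair)
    return [max(0.0, a - b) for a, b in zip(t2n, fn)]

# Third voxel is bright on T2 but relatively dark on FLAIR -> mismatch:
m = mismatch_map([0.0, 10.0, 20.0], [0.0, 20.0, 10.0])  # [0.0, 0.0, 0.5]
```

In the paper, such mismatch evidence is extracted and magnified at both the image and feature levels rather than by this simple subtraction.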
