Results 1 - 20 of 60
1.
Clin Exp Hypertens ; 45(1): 2228518, 2023 Dec 31.
Article in English | MEDLINE | ID: mdl-37366048

ABSTRACT

OBJECTIVE: To explore the association of renal surface nodularity (RSN) with increased adverse vascular event (AVE) risk in patients with arterial hypertension. METHODS: This cross-sectional study included patients with arterial hypertension aged 18-60 years who underwent contrast-enhanced computed tomography (CT) of the kidney from January 2012 to December 2020. The subjects were classified into AVE and non-AVE groups matched for age (≤5 years) and sex. Their CT images were analyzed using qualitative (semiRSN) and quantitative RSN (qRSN) methods, respectively. Their clinical characteristics included age, sex, systolic blood pressure (SBP), diastolic blood pressure, hypertension course, diabetes history, hyperlipidemia, and estimated glomerular filtration rate (eGFR). RESULTS: Compared with the non-AVE group (n = 91), the AVE group (n = 91) was younger, had higher SBP, and had lower rates of diabetes and hyperlipidemia history (all P < .01). The rate of positive semiRSN was higher in AVE than non-AVE (49.45% vs 14.29%, P < .001). qRSN was larger in AVE than non-AVE [1.03 (0.85, 1.33) vs 0.86 (0.75, 1.03), P < .001]. Increased AVE was associated with semiRSN (odds ratio = 7.04, P < .001) and qRSN (odds ratio = 5.09, P = .003), respectively. For distinguishing AVE from non-AVE, the area under the receiver operating characteristic curve was larger for the models combining the clinical characteristics with either semiRSN or qRSN than for semiRSN or qRSN alone (P ≤ .01). CONCLUSION: Among patients with arterial hypertension aged 18-60 years, CT imaging-based RSN was associated with increased AVE risk.
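As context for the odds ratios reported above, a minimal sketch of how an odds ratio is derived from a 2x2 exposure/outcome table; the counts below are illustrative only, not the study's data:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio for a 2x2 contingency table: (a/b) / (c/d),
    where a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Made-up counts: a marker present in many cases but few controls
# yields an odds ratio well above 1.
example_or = odds_ratio(45, 13, 45, 78)
```

An odds ratio above 1 indicates the marker (here, positive RSN) is associated with higher odds of the outcome; the study's logistic models additionally adjust for clinical covariates, which this sketch does not.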


Subject(s)
Hypertension, Humans, Cross-Sectional Studies, Hypertension/complications, Kidney/diagnostic imaging, Blood Pressure, Glomerular Filtration Rate, Risk Factors
2.
J Digit Imaging ; 32(1): 183-197, 2019 02.
Article in English | MEDLINE | ID: mdl-30187316

ABSTRACT

Ophthalmic medical images, such as optical coherence tomography (OCT) images and color fundus photographs, provide valuable information for the clinical diagnosis and treatment of ophthalmic diseases. In this paper, we introduce a software system specially oriented to ophthalmic image processing, analysis, and visualization (OIPAV) to assist users. OIPAV is a cross-platform system built on a set of powerful and widely used toolkit libraries. Based on a plugin mechanism, the system has an extensible framework. It provides rich functionality including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization. With OIPAV, users can easily access ophthalmic image data produced by different imaging devices, streamline workflows for processing ophthalmic images, and improve quantitative evaluations. With satisfying scalability and expandability, the software is applicable for both ophthalmic researchers and clinicians.


Subject(s)
Eye Diseases/diagnostic imaging, Fluorescein Angiography/methods, Computer-Assisted Image Interpretation/methods, Optical Coherence Tomography/methods, Eye/diagnostic imaging, Humans
3.
Psychiatry Res Neuroimaging ; 337: 111762, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043369

ABSTRACT

PURPOSE: This study explores the subcortices and their intrinsic functional connectivity (iFC) in adults with autism spectrum disorder (ASD) and investigates their relationship with clinical severity. METHODS: Resting-state functional magnetic resonance imaging (rs-fMRI) data were acquired from 74 ASD patients and 63 gender- and age-matched typically developing (TD) adults. Independent component analysis (ICA) was conducted to evaluate subcortical patterns of the basal ganglia (BG) and thalamus. These two brain areas were treated as regions of interest to further calculate whole-brain FC. In addition, we employed multivariate machine learning on subcortices-based FC brain patterns and clinical scores to distinguish ASD adults from TD subjects. RESULTS: In ASD individuals, the autism diagnostic observation schedule (ADOS) score was negatively correlated with the BG network. Similarly, the social responsiveness scale (SRS) score was negatively correlated with the thalamus network. The BG-based iFC analysis revealed lower FC in adults with ASD versus TD, and BG FC with the right medial temporal lobe (MTL) was positively correlated with SRS and ADOS separately. ASD could be predicted with a balanced accuracy of around 60.0% using brain patterns and 84.7% using clinical variables. CONCLUSION: Our results reveal that abnormal subcortical iFC may be related to autism symptoms.
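The balanced accuracy reported above is the mean of sensitivity and specificity, which is robust to the mild class imbalance between the 74 ASD and 63 TD subjects. A minimal generic sketch (labels are hypothetical: 1 = ASD, 0 = TD), not the study's pipeline:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on class 1) and specificity (recall on class 0)
    for binary label sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = sum(1 for t in y_true if t == 0)
    return 0.5 * (tp / pos + tn / neg)
```

With this definition a classifier that always predicts the majority class scores 0.5, unlike plain accuracy, which is why it is preferred for unbalanced cohorts.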


Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Adult, Humans, Autism Spectrum Disorder/diagnostic imaging, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging
4.
Med Phys ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39241262

ABSTRACT

BACKGROUND: In clinical anesthesia, precise segmentation of muscle layers from abdominal ultrasound images is crucial for accurately identifying nerve block locations. Despite advances in deep learning, challenges persist in segmenting muscle layers with correct topology due to pseudo and weak edges caused by acoustic artifacts in ultrasound imagery. PURPOSE: To assist anesthesiologists in locating nerve block areas, we have developed a novel deep learning algorithm that can accurately segment muscle layers in abdominal ultrasound images with interference. METHODS: We propose a comprehensive approach emphasizing the preservation of the segmentation's low-rank property to ensure correct topology. Our methodology integrates a Semantic Feature Extraction (SFE) module for redundant encoding, a Low-rank Reconstruction (LR) module to compress this encoding, and an Edge Reconstruction (ER) module to refine segmentation boundaries. Our evaluation involved rigorous testing on clinical datasets, comparing our algorithm against seven established deep learning-based segmentation methods using metrics such as Mean Intersection-over-Union (MIoU) and Hausdorff distance (HD). Statistical rigor was ensured through effect size quantification with Cliff's Delta, Multivariate Analysis of Variance (MANOVA) for multivariate analysis, and application of the Holm-Bonferroni method for multiple comparisons correction. RESULTS: We demonstrate that our method outperforms other industry-recognized deep learning approaches on both MIoU and HD metrics, achieving the best outcomes with 88.21%/4.98 ($p_{max} = 0.1893$) on the standard test set and 85.48%/6.98 ($p_{max} = 0.0448$) on the challenging test set. The best and worst results for the other models were 87.20%/5.72 and 83.69%/8.12 on the standard test set, and 81.25%/10.00 and 71.74%/16.82 on the challenging test set. Ablation studies further validate the distinct contributions of the proposed modules, which synergistically achieve a balance between maintaining topological integrity and edge precision. CONCLUSIONS: Our findings validate the effective segmentation of muscle layers with accurate topology in complex ultrasound images, leveraging low-rank constraints. The proposed method not only advances the field of medical image segmentation but also offers practical benefits for clinical anesthesia by improving the reliability of nerve block localization.
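A minimal sketch of the two evaluation metrics named above, Intersection-over-Union (averaged into MIoU) and the symmetric Hausdorff distance, for binary masks represented as sets of (row, col) pixel coordinates; this is a generic illustration, not the paper's implementation:

```python
def iou(pred, gt):
    """Intersection-over-Union of two binary masks given as pixel-coordinate sets."""
    union = pred | gt
    return len(pred & gt) / len(union) if union else 1.0

def mean_iou(pairs):
    """Mean IoU over a list of (pred, gt) mask pairs."""
    return sum(iou(p, g) for p, g in pairs) / len(pairs)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty pixel sets (Euclidean).
    Large values flag a boundary point far from the other mask."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(xs, ys):
        return max(min(dist(p, q) for q in ys) for p in xs)
    return max(directed(a, b), directed(b, a))
```

The two metrics are complementary: IoU measures overall overlap, while HD is sensitive to the single worst boundary error, which matters for the topology claims above.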

5.
IEEE Trans Biomed Eng; PP, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39320994

ABSTRACT

OBJECTIVE: Multi-modal MR/CT image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to acquire aligned multi-modal images of a patient in clinical practice due to the high cost and specific allergic reactions to contrast agents. To address these issues, a task complementation framework is proposed to enable unpaired multi-modal image complementation learning in the training stage and single-modal image segmentation in the inference stage. METHOD: To fuse unpaired dual-modal images in the training stage and allow single-modal image segmentation in the inference stage, a synthesis-segmentation task complementation network is constructed to mutually facilitate cross-modal image synthesis and segmentation, since the same content feature can be used to perform both the image segmentation task and the image synthesis task. To maintain the consistency of the target organ across varied shapes, a curvature consistency loss is proposed to align the segmentation predictions of the original image and the cross-modal synthesized image. To segment small lesions or substructures, a regression-segmentation task complementation network is constructed to utilize the auxiliary feature of the target organ. RESULTS: Comprehensive experiments have been performed with an in-house dataset and a publicly available dataset. The experimental results have demonstrated the superiority of our framework over state-of-the-art methods. CONCLUSION: The proposed method can fuse dual-modal CT/MR images in the training stage and needs only single-modal CT/MR images in the inference stage. SIGNIFICANCE: The proposed method can be used in routine clinical settings when only a single-modal CT/MR image is available for a patient.

6.
IEEE Trans Biomed Eng ; 71(9): 2789-2799, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38662563

ABSTRACT

OBJECTIVE: Optical Coherence Tomography (OCT) images can provide non-invasive visualization of fundus lesions; however, scanners from different OCT manufacturers vary largely from each other, which often leads to model deterioration on unseen OCT scanners due to domain shift. METHODS: To produce the T-styles of the potential target domain, an Orthogonal Style Space Reparameterization (OSSR) method is proposed to apply orthogonal constraints in the latent orthogonal style space to the sampled marginal styles. To leverage the high-level features of multi-source domains and potential T-styles in the graph semantic space, a Graph Adversarial Network (GAN) is constructed to align the generated samples with the source domain samples. To align features with the same label based on the semantic feature in the graph semantic space, Graph Semantic Alignment (GSA) is performed to focus on the shape and morphological differences between the lesions and their surrounding regions. RESULTS: Comprehensive experiments have been performed on two OCT image datasets. Compared to state-of-the-art methods, the proposed method achieves better segmentation. CONCLUSION: The proposed fundus lesion segmentation method can be trained with labeled OCT images from multiple manufacturers' scanners and tested on an unseen manufacturer's scanner with better domain generalization. SIGNIFICANCE: The proposed method can be used in routine clinical settings when only an unseen manufacturer's OCT image is available for a patient.


Subject(s)
Algorithms, Computer-Assisted Image Interpretation, Optical Coherence Tomography, Optical Coherence Tomography/methods, Humans, Computer-Assisted Image Interpretation/methods, Fundus Oculi, Factual Databases, Retinal Diseases/diagnostic imaging
7.
IEEE Trans Image Process ; 33: 4882-4895, 2024.
Article in English | MEDLINE | ID: mdl-39236126

ABSTRACT

Unsupervised domain adaptation medical image segmentation aims to segment unlabeled target domain images with labeled source domain images. However, different medical imaging modalities lead to large domain shift between their images, and well-trained models from one imaging modality often fail to segment images from another imaging modality. In this paper, to mitigate the domain shift between the source and target domains, a style consistency unsupervised domain adaptation image segmentation method is proposed. First, a local phase-enhanced style fusion method is designed to mitigate domain shift and produce locally enhanced organs of interest. Second, a phase consistency discriminator is constructed to distinguish the phase consistency of domain-invariant features between the source and target domains, so as to enhance the disentanglement of the domain-invariant and style encoders and the removal of domain-specific features from the domain-invariant encoder. Third, a style consistency estimation method is proposed to obtain inconsistency maps from intermediate synthesized target domain images with different styles to measure difficult regions, mitigate the domain shift between synthesized and real target domain images, and improve the integrity of the organs of interest. Fourth, style consistency entropy is defined for target domain images to further improve the integrity of the organs of interest by concentrating on the inconsistent regions. Comprehensive experiments have been performed with an in-house dataset and a publicly available dataset. The experimental results have demonstrated the superiority of our framework over state-of-the-art methods.


Subject(s)
Algorithms, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Unsupervised Machine Learning, X-Ray Computed Tomography/methods
8.
IEEE Trans Med Imaging; PP, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39167524

ABSTRACT

CT and MR are currently the most common imaging techniques for pancreatic cancer diagnosis. Accurate segmentation of the pancreas in CT and MR images can provide significant help in the diagnosis and treatment of pancreatic cancer. Traditional supervised segmentation methods require a large amount of labeled CT and MR training data, which is usually time-consuming and laborious. Meanwhile, due to domain shift, traditional segmentation networks are difficult to deploy on datasets of different imaging modalities. Cross-domain segmentation can utilize labeled source domain data to assist unlabeled target domains in solving the above problems. In this paper, a cross-domain pancreas segmentation algorithm is proposed based on Moment-Consistent Contrastive Cycle Generative Adversarial Networks (MC-CCycleGAN). MC-CCycleGAN is a style transfer network, in which the encoder of its generator is used to extract features from real images and style transfer images, constrain feature extraction through a contrastive loss, and fully extract structural features of input images during style transfer while eliminating redundant style features. The multi-order central moments of the pancreas are proposed to describe its anatomy in high dimensions, and a contrastive loss is also proposed to constrain moment consistency, so as to maintain the consistency of pancreatic structure and shape before and after style transfer. A multi-teacher knowledge distillation framework is proposed to transfer the knowledge from multiple teachers to a single student, so as to improve the robustness and performance of the student network. The experimental results have demonstrated the superiority of our framework over state-of-the-art domain adaptation methods.

9.
IEEE Trans Biomed Eng ; 71(9): 2557-2567, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38512744

ABSTRACT

OBJECTIVE: Multi-modal magnetic resonance (MR) image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to obtain multiple modalities for a single patient in clinical applications. To address these issues, a cross-modal consistency framework is proposed for single-modal MR image segmentation. METHODS: To enable single-modal MR image segmentation in the inference stage, a weighted cross-entropy loss and a pixel-level feature consistency loss are proposed to train the target network with the guidance of the teacher network and the auxiliary network. To fuse dual-modal MR images in the training stage, the cross-modal consistency is measured according to a Dice similarity entropy loss and a Dice similarity contrastive loss, so as to maximize the prediction similarity of the teacher network and the auxiliary network. To reduce the difference in image contrast between different MR images of the same organs, a contrast alignment network is proposed to align input images with different contrasts to reference images with good contrast. RESULTS: Comprehensive experiments have been performed on a publicly available prostate dataset and an in-house pancreas dataset to verify the effectiveness of the proposed method. Compared to state-of-the-art methods, the proposed method achieves better segmentation. CONCLUSION: The proposed image segmentation method can fuse dual-modal MR images in the training stage and needs only single-modal MR images in the inference stage. SIGNIFICANCE: The proposed method can be used in routine clinical settings when only a single-modal MR image with variable contrast is available for a patient.


Subject(s)
Algorithms, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Male, Computer-Assisted Image Processing/methods, Prostate/diagnostic imaging, Computer-Assisted Image Interpretation/methods, Prostatic Neoplasms/diagnostic imaging, Pancreas/diagnostic imaging
10.
Phys Med Biol ; 69(7)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38394676

ABSTRACT

Objective. Neovascular age-related macular degeneration (nAMD) and polypoidal choroidal vasculopathy (PCV) present many similar clinical features. However, there are significant differences in the progression of nAMD and PCV, and accurate diagnosis is crucial for treatment. In this paper, we propose a structure-radiomic fusion network (DRFNet) to differentiate PCV and nAMD in optical coherence tomography (OCT) images. Approach. The subnetwork (RIMNet) is designed to automatically segment the lesions of nAMD and PCV. Another subnetwork (StrEncoder) is designed to extract deep structural features of the segmented lesion. A third subnetwork (RadEncoder) is designed to extract radiomic features from the segmented lesions. 305 eyes (155 with nAMD and 150 with PCV) with manually annotated CNV regions are included in this study. The proposed method was trained and evaluated by 4-fold cross validation using the collected data and was compared with advanced differentiation methods. Main results. The proposed method achieved high classification performance for nAMD/PCV differentiation in OCT images, an improvement of 4.68 over the next best method. Significance. The presented structure-radiomic fusion network (DRFNet) performs well in diagnosing nAMD and PCV and has high clinical value by using OCT instead of indocyanine green angiography.


Subject(s)
Choroid, Polypoidal Choroidal Vasculopathy, Humans, Choroid/blood supply, Optical Coherence Tomography/methods, Radiomics, Fluorescein Angiography/methods, Retrospective Studies
11.
IEEE J Biomed Health Inform ; 27(3): 1237-1248, 2023 03.
Article in English | MEDLINE | ID: mdl-35759605

ABSTRACT

Lung tumor segmentation in PET-CT images plays an important role in assisting physicians to accurately diagnose and treat lung cancer in clinical applications. However, it is still a challenging task in the medical image processing field. Due to respiration and movement, the lung tumor varies largely between PET images and CT images. Even when the two images are collected almost simultaneously and registered, the shape and size of lung tumors in PET-CT images differ from each other. To address these issues, a modality-specific segmentation network (MoSNet) is proposed for lung tumor segmentation in PET-CT images. MoSNet can simultaneously segment the modality-specific lung tumor in PET images and CT images. MoSNet learns a modality-specific representation to describe the inconsistency between PET images and CT images and a modality-fused representation to encode the common features of lung tumors in PET images and CT images. An adversarial method is proposed to minimize an approximate modality discrepancy through an adversarial objective with respect to a modality discriminator and preserve the modality-common representation. This improves the representation power of the network for modality-specific lung tumor segmentation in PET images and CT images. The novelty of MoSNet is its ability to produce a modality-specific map that explicitly quantifies the modality-specific weights for the features in each modality. To demonstrate the superiority of our method, MoSNet is validated on 126 PET-CT images with NSCLC. Experimental results show that MoSNet outperforms state-of-the-art lung tumor segmentation methods.


Subject(s)
Non-Small Cell Lung Carcinoma, Lung Neoplasms, Humans, Positron Emission Tomography Computed Tomography/methods, Lung Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods
12.
Comput Methods Programs Biomed ; 233: 107454, 2023 May.
Article in English | MEDLINE | ID: mdl-36921468

ABSTRACT

BACKGROUND AND OBJECTIVE: Retinal vessel segmentation plays an important role in automatic retinal disease screening and diagnosis. How to segment thin vessels and maintain the connectivity of vessels are the key challenges of the retinal vessel segmentation task. Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. To make full use of this high resolution, a new end-to-end transformer-based network named OCT2Former (OCT-a Transformer) is proposed to segment retinal vessels accurately in OCTA images. METHODS: The proposed OCT2Former is based on an encoder-decoder structure, which mainly includes a dynamic transformer encoder and a lightweight decoder. The dynamic transformer encoder consists of a dynamic token aggregation transformer and an auxiliary convolution branch, in which the multi-head dynamic token aggregation attention based dynamic token aggregation transformer is designed to capture global retinal vessel context information from the first layer throughout the network, and the auxiliary convolution branch is proposed to compensate for the transformer's lack of inductive bias and assist in efficient feature extraction. A convolution-based lightweight decoder is proposed to decode features efficiently and reduce the complexity of the proposed OCT2Former. RESULTS: The proposed OCT2Former is validated on three publicly available datasets, i.e., OCTA-SS, ROSE-1, and OCTA-500 (subsets OCTA-6M and OCTA-3M). The Jaccard indexes of the proposed OCT2Former on these datasets are 0.8344, 0.7855, 0.8099 and 0.8513, respectively, outperforming the best convolution-based network by 1.43%, 1.32%, 0.75% and 1.46%, respectively. CONCLUSION: The experimental results have demonstrated that the proposed OCT2Former can achieve competitive performance on retinal OCTA vessel segmentation tasks.
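The Jaccard index reported above is the same quantity as Intersection-over-Union; a minimal generic sketch on flat 0/1 vessel masks (illustrative only, not the paper's evaluation code):

```python
def jaccard(pred, gt):
    """Jaccard index |A ∩ B| / |A ∪ B| for binary masks
    given as flat 0/1 sequences of equal length."""
    inter = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    union = sum(1 for p, g in zip(pred, gt) if p == 1 or g == 1)
    return inter / union if union else 1.0
```

Because thin vessels contribute few pixels, a single broken vessel segment can lower the Jaccard index noticeably, which is why the metric is a reasonable proxy for the connectivity challenge the abstract highlights.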


Subject(s)
Mass Screening, Retinal Vessels, Retinal Vessels/diagnostic imaging, Fluorescein Angiography/methods, Optical Coherence Tomography/methods
13.
Phys Med Biol ; 68(9)2023 05 03.
Article in English | MEDLINE | ID: mdl-37054733

ABSTRACT

Objective. Corneal confocal microscopy (CCM) is a rapid and non-invasive ophthalmic imaging technique that can reveal corneal nerve fibers. The automatic segmentation of corneal nerve fibers in CCM images is vital for subsequent abnormality analysis, which is the main basis for the early diagnosis of degenerative neurological systemic diseases such as diabetic peripheral neuropathy. Approach. In this paper, a U-shape encoder-decoder structure based multi-scale and local feature guidance neural network (MLFGNet) is proposed for automatic corneal nerve fiber segmentation in CCM images. Three novel modules, including a multi-scale progressive guidance (MFPG) module, a local feature guided attention (LFGA) module, and a multi-scale deep supervision (MDS) module, are proposed and applied in the skip connections, the bottom of the encoder, and the decoder path respectively; they are designed from both multi-scale information fusion and local information extraction perspectives to enhance the network's ability to discriminate the global and local structure of nerve fibers. The proposed MFPG module solves the imbalance between semantic information and spatial information, the LFGA module enables the network to capture attention relationships on local feature maps, and the MDS module fully utilizes the relationship between high-level and low-level features for feature reconstruction in the decoder path. Main results. The proposed MLFGNet is evaluated on three CCM image datasets; the Dice coefficients reach 89.33%, 89.41%, and 88.29% respectively. Significance. The proposed method has excellent segmentation performance for corneal nerve fibers and outperforms other state-of-the-art methods.
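The Dice coefficient used for evaluation above is the standard overlap measure for segmentation masks; a minimal generic sketch (not the paper's implementation), with masks as flat 0/1 sequences:

```python
def dice(pred, gt):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) for binary masks
    given as flat 0/1 sequences of equal length."""
    inter = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0
```

Dice is monotonically related to the Jaccard index (Dice = 2J / (1 + J)) but weights the intersection more heavily, so it is somewhat more forgiving on thin structures such as nerve fibers.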


Asunto(s)
Ojo , Cara , Almacenamiento y Recuperación de la Información , Fibras Nerviosas , Redes Neurales de la Computación , Procesamiento de Imagen Asistido por Computador
14.
IEEE Trans Med Imaging ; 42(3): 713-725, 2023 03.
Article in English | MEDLINE | ID: mdl-36260572

ABSTRACT

Accurate segmentation of retinal images can assist ophthalmologists in determining the degree of retinopathy and diagnosing other systemic diseases. However, the structure of the retina is complex, and different anatomical structures often affect the segmentation of fundus lesions. In this paper, a new segmentation strategy called a dual stream segmentation network embedded into a conditional generative adversarial network is proposed to improve the accuracy of retinal lesion segmentation. First, a dual stream encoder is proposed to utilize the capabilities of two different networks and extract more feature information. Second, a multiple level fuse block is proposed to decode the richer and more effective features from the two different parallel encoders. Third, the proposed network is further trained in a semi-supervised adversarial manner to leverage labeled images and unlabeled images with highly confident pseudo labels, which are selected by the dual stream Bayesian segmentation network. An annotation discriminator is further proposed to reduce the negative effect of predictions tending to become increasingly similar to the inaccurate predictions of unlabeled images. The proposed method is cross-validated on 384 clinical fundus fluorescein angiography images and 1040 optical coherence tomography images. Compared to state-of-the-art methods, the proposed method achieves better segmentation of the retinal capillary non-perfusion region and choroidal neovascularization.


Subject(s)
Retina, Retinal Diseases, Humans, Bayes Theorem, Fundus Oculi, Retina/diagnostic imaging, Optical Coherence Tomography
15.
IEEE J Biomed Health Inform ; 27(7): 3467-3477, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099475

ABSTRACT

Skin wound segmentation in photographs allows non-invasive analysis of wounds that supports dermatological diagnosis and treatment. In this paper, we propose a novel feature augment network (FANet) to achieve automatic segmentation of skin wounds, and design an interactive feature augment network (IFANet) to provide interactive adjustment of the automatic segmentation results. The FANet contains the edge feature augment (EFA) module and the spatial relationship feature augment (SFA) module, which can make full use of the notable edge information and the spatial relationship information between the wound and the skin. The IFANet, with FANet as the backbone, takes the user interactions and the initial result as inputs, and outputs the refined segmentation result. The proposed networks were tested on a dataset composed of miscellaneous skin wound images and a public foot ulcer segmentation challenge dataset. The results indicate that the FANet gives good segmentation results while the IFANet can effectively improve them based on simple marking. Comprehensive comparative experiments show that our proposed networks outperform other existing automatic or interactive segmentation methods, respectively.


Subject(s)
Polysorbates, Skin, Humans, Computer-Assisted Image Processing, Skin/diagnostic imaging
16.
Med Phys ; 50(3): 1586-1600, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36345139

ABSTRACT

BACKGROUND: Medical image segmentation is an important task in the diagnosis and treatment of cancers. The low contrast and highly flexible anatomical structure make it challenging to accurately segment the organs or lesions. PURPOSE: To improve the segmentation accuracy of organs or lesions in magnetic resonance (MR) images, which can be useful in the clinical diagnosis and treatment of cancers. METHODS: First, a selective feature interaction (SFI) module is designed to selectively extract similar features of the sequence images based on similarity interaction. Second, a multi-scale guided feature reconstruction (MGFR) module is designed to reconstruct low-level semantic features and focus on small targets and the edges of the pancreas. Third, to reduce manual annotation of large amounts of data, a semi-supervised training method is also proposed. Uncertainty estimation is used to further improve the segmentation accuracy. RESULTS: Three hundred ninety-five 3D MR images from 395 patients with pancreatic cancer, 259 3D MR images from 259 patients with brain tumors, and a four-fold cross-validation strategy are used to evaluate the proposed method. Compared to state-of-the-art deep learning segmentation networks, the proposed method achieves better segmentation of the pancreas or tumors in MR images. CONCLUSIONS: SFI-Net can fuse dual-sequence MR images for abnormal pancreas or tumor segmentation. The proposed semi-supervised strategy can further improve the performance of SFI-Net.
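Several entries above evaluate with four-fold cross-validation: the cohort is split into four folds, and each fold serves as the test set once while the remainder trains the model. A minimal generic sketch of the index bookkeeping (round-robin fold assignment; in practice splits are usually done per patient and often shuffled, which this sketch omits):

```python
def kfold_indices(n, k=4):
    """Yield (train, test) index lists for k-fold cross-validation
    over samples 0..n-1; every sample is tested exactly once."""
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Reported metrics are then averaged over the k test folds, which gives every patient a held-out prediction without needing a separate test cohort.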


Subject(s)
Brain Neoplasms, Pancreatic Neoplasms, Humans, Magnetic Resonance Imaging/methods, Pancreatic Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods
17.
IEEE Trans Biomed Eng ; 70(7): 2013-2024, 2023 07.
Article in English | MEDLINE | ID: mdl-37018248

ABSTRACT

Macular hole (MH) and cystoid macular edema (CME) are two common retinal pathologies that cause vision loss. Accurate segmentation of MH and CME in retinal OCT images can greatly aid ophthalmologists in evaluating the relevant diseases. However, it is still challenging due to the complicated pathological features of MH and CME in retinal OCT images, such as the diversity of morphologies, low imaging contrast, and blurred boundaries. In addition, the lack of pixel-level annotation data is one of the important factors hindering further improvement of segmentation accuracy. Focusing on these challenges, we propose a novel self-guided optimization semi-supervised method termed Semi-SGO for joint segmentation of MH and CME in retinal OCT images. Aiming to improve the model's ability to learn the complicated pathological features of MH and CME, while alleviating the feature learning tendency problem that may be caused by the introduction of skip connections in U-shaped segmentation architectures, we develop a novel dual-decoder dual-task fully convolutional neural network (D3T-FCN). Meanwhile, based on our proposed D3T-FCN, we introduce a knowledge distillation technique to further design a novel semi-supervised segmentation method called Semi-SGO, which can leverage unlabeled data to further improve segmentation accuracy. Comprehensive experimental results show that our proposed Semi-SGO outperforms other state-of-the-art segmentation networks. Furthermore, we also develop an automatic method for measuring the clinical indicators of MH and CME to validate the clinical significance of our proposed Semi-SGO. The code will be released on GitHub.


Subject(s)
Macular Edema , Retinal Perforations , Humans , Macular Edema/diagnostic imaging , Retinal Perforations/complications , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Neural Networks, Computer
18.
Phys Med Biol ; 67(12)2022 06 15.
Article in English | MEDLINE | ID: mdl-35613604

ABSTRACT

Objective. Retinal fluid mainly includes intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelial detachment (PED), whose accurate segmentation in optical coherence tomography (OCT) images is of great importance to the diagnosis and treatment of the related fundus diseases. Approach. In this paper, a novel two-stage multi-class retinal fluid joint segmentation framework based on cascaded convolutional neural networks is proposed. In the pre-segmentation stage, a U-shaped encoder-decoder network is adopted to acquire the retinal mask and generate a retinal relative distance map, which provides spatial prior information for the subsequent fluid segmentation. In the fluid segmentation stage, an improved context attention and fusion network (termed ICAF-Net), based on a context shrinkage encoding module and a multi-scale, multi-category semantic supervision module, is proposed to jointly segment IRF, SRF, and PED. Main results. The proposed segmentation framework was evaluated on the RETOUCH challenge dataset. The average Dice similarity coefficient, intersection over union, and accuracy reach 76.39%, 64.03%, and 99.32%, respectively. Significance. The proposed framework achieves good performance in the joint segmentation of multi-class fluid in retinal OCT images and outperforms several state-of-the-art segmentation networks.
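The Dice similarity coefficient and intersection over union reported above are standard overlap metrics between a predicted mask and the ground truth. A self-contained numpy sketch of the usual definitions (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and intersection-over-union for
    binary masks (0/1 or boolean numpy arrays)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-12)  # 2|A∩B|/(|A|+|B|)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-12)  # |A∩B|/|A∪B|
    return float(dice), float(iou)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])
d, i = dice_iou(pred, gt)  # intersection = 2, |pred| = |gt| = 3, union = 4
print(round(d, 3), round(i, 3))  # 0.667 0.5
```

The two metrics are monotonically related (IoU = Dice / (2 - Dice)), which is why the reported 76.39% Dice corresponds to a lower 64.03% IoU on the same predictions.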


Subject(s)
Neural Networks, Computer , Retina , Image Processing, Computer-Assisted/methods , Retina/diagnostic imaging , Tomography, Optical Coherence/methods
19.
Int J Hypertens ; 2022: 1553700, 2022.
Article in English | MEDLINE | ID: mdl-35284141

ABSTRACT

Background: This study sought to explore the association between quantitative classification of renal surface nodularity (qRSN) based on computed tomography (CT) imaging and early renal injury (ERI) in patients with arterial hypertension. Methods: A total of 143 patients with a history of hypertension were retrospectively enrolled; clinical information (age, sex, hypertension grade, and hypertension course), laboratory tests, and qRSN were collected or assessed. The subjects were divided into an ERI group (n = 60) or a control group (CP, n = 83) according to an ERI diagnosis based on the criterion cystatin C > 1.02 mg/L. Univariate analysis and multiple logistic regression were used to assess the association between ERI and qRSN. Receiver operating characteristic (ROC) curve analysis was performed to compare multiple logistic regression models with and without qRSN for differentiating the ERI group from the control group. Results: In univariate analysis, hypertension grade, hypertension course, triglycerides (TG), and qRSN were related to ERI in patients with arterial hypertension (all P < 0.1), with strong interrater agreement for qRSN. Multiple logistic regression analysis showed an area under the ROC curve of 0.697 for the model without qRSN and 0.790 for the model with qRSN, a significant difference (Z = 2.314, P = 0.021). Conclusion: CT imaging-based qRSN was associated with ERI in patients with arterial hypertension and may be an imaging biomarker of early renal injury.
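The model comparison above rests on the area under the ROC curve, which equals the Mann-Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal numpy sketch of that computation (the toy scores and the "with/without qRSN" labels are illustrative, not the study's data; the paper's Z-test for comparing the two correlated AUCs, typically DeLong's method, is not reproduced here):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive outscores a random
    negative, counting ties as one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy risk scores from two hypothetical models on the same 6 subjects
y     = np.array([0, 0, 0, 1, 1, 1])
base  = np.array([0.2, 0.6, 0.4, 0.5, 0.7, 0.3])  # model without the extra marker
plus  = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.4])  # model with the extra marker
print(auc(base, y), auc(plus, y))
```

A higher AUC for the augmented model (here the second score set) mirrors the study's 0.697 vs 0.790 comparison; judging whether such a gap is significant requires a paired test on the same subjects.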

20.
Phys Med Biol ; 67(22)2022 11 07.
Article in English | MEDLINE | ID: mdl-36220014

ABSTRACT

Although positron emission tomography-computed tomography (PET-CT) imaging is widely used, accurately segmenting lung tumors remains challenging. Respiration, movement, and differences between imaging modalities lead to large discrepancies in the appearance of lung tumors between PET and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumor segmentations from PET and CT images. The proposed network can fuse complementary information while preserving the modality-specific features of PET and CT images. Because the two modalities are complementary, they should be fused for automatic lung tumor segmentation; therefore, cross-modality decoding blocks are designed to extract modality-specific features of PET and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the problem of blurred boundaries in PET and CT images. The proposed method is tested on 126 PET-CT scans of non-small cell lung cancer, and the Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images, respectively. Extensive comparisons with state-of-the-art lung tumor segmentation methods further demonstrate the superiority of the proposed network.
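The abstract does not specify how the edge consistency loss is computed, but the general idea of such losses is to penalize disagreement between the boundary maps of the two branches' predictions. A minimal numpy sketch under that assumption, using finite-difference gradients as a stand-in edge extractor (function names and toy masks are illustrative, not from the paper):

```python
import numpy as np

def edge_map(img):
    """Edge magnitude from horizontal/vertical finite differences
    (a simple stand-in for the paper's unspecified edge extraction)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)   # horizontal gradient
    gy[:-1, :] = np.diff(img, axis=0)   # vertical gradient
    return np.hypot(gx, gy)

def edge_consistency_loss(pred_pet, pred_ct):
    """Mean L1 distance between the edge maps of the PET- and CT-branch
    predictions, encouraging the two branches to agree on tumor boundaries."""
    return float(np.abs(edge_map(pred_pet) - edge_map(pred_ct)).mean())

pet = np.zeros((6, 6)); pet[2:4, 2:4] = 1.0   # square "tumor" mask
ct_same = pet.copy()                           # identical boundary
ct_bigger = np.zeros((6, 6)); ct_bigger[2:5, 2:5] = 1.0  # enlarged boundary
print(edge_consistency_loss(pet, ct_same))     # identical masks -> 0.0
print(edge_consistency_loss(pet, ct_bigger) > 0.0)
```

In training, this term would be added to the per-modality segmentation losses so that blurred boundaries in one modality are regularized by the sharper boundary evidence in the other.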


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Lung Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods