Results 1 - 4 of 4
1.
Phys Med Biol; 69(5), 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38406849

ABSTRACT

MRI image segmentation is widely used in clinical practice as a prerequisite and key step in diagnosing brain tumors. The search for an accurate automated segmentation method for brain tumor images, aimed at easing clinicians' workload, has attracted significant attention as a research focal point. Despite the success of fully supervised methods in brain tumor segmentation, challenges remain. Because annotating medical images is costly, the datasets available for training fully supervised methods are very limited. Additionally, medical images are prone to noise and motion artifacts, which degrade image quality. In this work, we propose MAPSS, a motion-artifact-augmented pseudo-label network for semi-supervised segmentation. Our method combines motion-artifact data augmentation with a pseudo-label semi-supervised training framework. We conduct several experiments under different semi-supervised settings on the publicly available BraTS2020 brain tumor segmentation dataset. The experimental results show that MAPSS achieves accurate brain tumor segmentation with only a small amount of labeled data and remains robust on motion-artifact-affected images. We also assess the generalization performance of MAPSS on the Left Atrium dataset. The algorithm has practical value for assisting doctors in formulating treatment plans and improving treatment quality.
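The abstract does not describe the implementation; below is a minimal PyTorch sketch of how motion-artifact augmentation can be combined with pseudo-label training in the way described. The function names, the k-space line-corruption model, and the hyperparameters (lambda_u, conf_thresh) are illustrative assumptions, not details taken from the MAPSS paper.

# Sketch: pseudo-label semi-supervised step with simulated motion artifacts.
# All names and settings here are illustrative assumptions, not from the paper.
import torch
import torch.nn.functional as F

def simulate_motion_artifact(img, max_shift=0.3, n_lines=8):
    """Corrupt a 2D MR slice by giving random k-space lines a random phase,
    a crude way to mimic rigid motion during acquisition."""
    k = torch.fft.fftshift(torch.fft.fft2(img))            # image -> k-space
    h = k.shape[-2]
    lines = torch.randint(0, h, (n_lines,))
    phase = torch.exp(2j * torch.pi * max_shift * torch.rand(n_lines))
    k[..., lines, :] = k[..., lines, :] * phase.view(-1, 1)
    return torch.fft.ifft2(torch.fft.ifftshift(k)).real    # back to image space

def semi_supervised_step(model, opt, x_lab, y_lab, x_unlab,
                         lambda_u=1.0, conf_thresh=0.9):
    model.eval()
    with torch.no_grad():                                   # pseudo-labels from clean images
        probs = torch.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf > conf_thresh).float()                 # keep only confident pixels

    model.train()
    x_unlab_aug = simulate_motion_artifact(x_unlab)         # artifact-augmented view
    loss_sup = F.cross_entropy(model(x_lab), y_lab)
    loss_unsup = (F.cross_entropy(model(x_unlab_aug), pseudo,
                                  reduction="none") * mask).mean()
    loss = loss_sup + lambda_u * loss_unsup
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()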


Subject(s)
Artifacts; Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Algorithms; Heart Atria; Motion; Image Processing, Computer-Assisted
2.
J Magn Reson Imaging; 59(3): 1083-1092, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37367938

ABSTRACT

BACKGROUND: Conventional MRI staging can be challenging in the preoperative assessment of rectal cancer. Deep learning methods based on MRI have shown promise in cancer diagnosis and prognostication. However, the value of deep learning in rectal cancer T-staging is unclear. PURPOSE: To develop a deep learning model based on preoperative multiparametric MRI for evaluation of rectal cancer and to investigate its potential to improve T-staging accuracy. STUDY TYPE: Retrospective. POPULATION: After cross-validation, 260 patients (123 with T-stage T1-2 and 134 with T-stage T3-4) with histopathologically confirmed rectal cancer were randomly divided into training (N = 208) and test (N = 52) sets. FIELD STRENGTH/SEQUENCE: 3.0 T/dynamic contrast-enhanced (DCE), T2-weighted (T2W), and diffusion-weighted (DWI) imaging. ASSESSMENT: A deep learning (DL) model based on a multiparametric (DCE, T2W, and DWI) convolutional neural network was constructed for preoperative evaluation. The pathological findings served as the reference standard for T-stage. For comparison, single-parameter DL models, a logistic regression model built from clinical features, and the subjective assessment of radiologists were used. STATISTICAL TESTS: The receiver operating characteristic (ROC) curve was used to evaluate the models, Fleiss' kappa was used for the intercorrelation coefficients, and the DeLong test was used to compare the diagnostic performance of the ROCs. P-values less than 0.05 were considered statistically significant. RESULTS: The area under the curve (AUC) of the multiparametric DL model was 0.854, which was significantly higher than the radiologists' assessment (AUC = 0.678), the clinical model (AUC = 0.747), and the single-parameter DL models, including the T2W model (AUC = 0.735), DWI model (AUC = 0.759), and DCE model (AUC = 0.789). DATA CONCLUSION: In the evaluation of rectal cancer patients, the proposed multiparametric DL model outperformed the radiologists' assessment, the clinical model, and the single-parameter models. The multiparametric DL model has the potential to assist clinicians by providing more reliable and precise preoperative T-staging. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
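As a rough illustration of the "multiparametric convolutional neural network" idea, the sketch below fuses separate DCE, T2W, and DWI encoder branches for a binary T1-2 vs T3-4 prediction. The three-branch late-fusion design and all layer sizes are assumptions made for illustration, not the architecture used in the study.

# Sketch: late fusion of per-sequence CNN branches for binary T-staging.
import torch
import torch.nn as nn

def conv_branch(in_ch=1, width=32):
    """Small per-sequence encoder: two conv blocks followed by global pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.BatchNorm2d(width), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(width, width * 2, 3, padding=1), nn.BatchNorm2d(width * 2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultiparametricNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.dce, self.t2w, self.dwi = (conv_branch(width=width) for _ in range(3))
        self.head = nn.Sequential(
            nn.Linear(3 * width * 2, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, 2),                       # logits for T1-2 vs T3-4
        )

    def forward(self, dce, t2w, dwi):
        feats = torch.cat([self.dce(dce), self.t2w(t2w), self.dwi(dwi)], dim=1)
        return self.head(feats)

# Usage with dummy single-channel slices of shape [batch, 1, H, W]
model = MultiparametricNet()
logits = model(torch.randn(2, 1, 128, 128),
               torch.randn(2, 1, 128, 128),
               torch.randn(2, 1, 128, 128))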


Subject(s)
Deep Learning; Multiparametric Magnetic Resonance Imaging; Rectal Neoplasms; Humans; Magnetic Resonance Imaging/methods; Multiparametric Magnetic Resonance Imaging/methods; Retrospective Studies
3.
Comput Biol Med; 159: 106884, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37071938

ABSTRACT

Breast cancer is the most common cancer in women. Ultrasound is a widely used screening tool owing to its portability and ease of operation, while DCE-MRI can highlight lesions more clearly and reveal tumor characteristics. Both are noninvasive, radiation-free modalities for the assessment of breast cancer. Doctors make diagnoses and plan further treatment from the sizes, shapes, and textures of the breast masses shown on medical images, so automatic tumor segmentation via deep neural networks can, to some extent, assist doctors. To address challenges faced by popular deep neural networks, such as large numbers of parameters, lack of interpretability, and overfitting, we propose a segmentation network named Att-U-Node, which uses attention modules to guide a neural ODE-based framework. Specifically, the network uses ODE blocks to form an encoder-decoder structure, with feature modeling by a neural ODE completed at each level. In addition, we propose an attention module that computes attention coefficients and generates refined attention features for the skip connections. Three publicly available breast ultrasound image datasets (BUSI, BUS, and OASBUD) and a private breast DCE-MRI dataset are used to assess the efficiency of the proposed model; we also extend the model to 3D for tumor segmentation with data selected from the public QIN Breast DCE-MRI collection. The experiments show that the proposed model achieves competitive results compared with related methods while mitigating the common problems of deep neural networks.
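The abstract gives only a high-level description of the ODE blocks and the attention-guided skip connections. The PyTorch sketch below shows one plausible reading, with a fixed-step Euler solver standing in for a neural ODE solver and an Attention U-Net-style gate producing the skip coefficients; names and settings are illustrative, not the Att-U-Node implementation.

# Sketch: an Euler-integrated ODE block and an attention gate for skips.
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """Treats a small conv network as the derivative f(h) and integrates it
    with a few explicit Euler steps: h <- h + dt * f(h)."""
    def __init__(self, ch, n_steps=4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.n_steps = n_steps

    def forward(self, h):
        dt = 1.0 / self.n_steps
        for _ in range(self.n_steps):
            h = h + dt * self.f(h)          # weights are shared across steps
        return h

class AttentionGate(nn.Module):
    """Computes a per-pixel coefficient from the decoder gating signal g and
    the encoder skip feature x, then rescales x before concatenation.
    Assumes g has already been resized to x's spatial resolution."""
    def __init__(self, ch_g, ch_x, ch_mid):
        super().__init__()
        self.wg = nn.Conv2d(ch_g, ch_mid, 1)
        self.wx = nn.Conv2d(ch_x, ch_mid, 1)
        self.psi = nn.Conv2d(ch_mid, 1, 1)

    def forward(self, g, x):
        alpha = torch.sigmoid(self.psi(torch.relu(self.wg(g) + self.wx(x))))
        return x * alpha                    # refined skip feature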


Subject(s)
Breast Neoplasms; Mammary Neoplasms, Animal; Female; Humans; Animals; Breast Neoplasms/diagnostic imaging; Breast; Neural Networks, Computer; Image Processing, Computer-Assisted
4.
Neuroimage; 244: 118568, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34508895

ABSTRACT

The annotation of brain lesion images is a key step in the clinical diagnosis and treatment of a wide spectrum of brain diseases. In recent years, segmentation methods based on deep learning have gained unprecedented popularity, leveraging large amounts of data with high-quality voxel-level annotations. However, because clinicians have limited time for the cumbersome task of manual image segmentation, semi-supervised medical image segmentation methods offer an alternative, as they require only a few labeled samples for training. In this paper, we propose a novel semi-supervised segmentation framework that combines an improved mean teacher and an adversarial network. Specifically, our framework consists of (i) a student model and a teacher model for segmenting the target and generating the signed distance maps of object surfaces, and (ii) a discriminator network for extracting hierarchical features and distinguishing the signed distance maps of labeled and unlabeled data. In addition, based on two different adversarial learning processes, a multi-scale feature consistency loss derived from the student and teacher models is proposed, and a shape-aware embedding scheme is integrated into our framework. We evaluated the proposed method on the public brain lesion datasets from ISBI 2015, ISLES 2015, and BRATS 2018 for multiple sclerosis lesion, ischemic stroke lesion, and brain tumor segmentation, respectively. Experiments demonstrate that our method can effectively leverage unlabeled data while outperforming the supervised baseline and other state-of-the-art semi-supervised methods trained with the same labeled data. The proposed framework is suitable for joint training on limited labeled data and additional unlabeled data, which is expected to reduce the effort of obtaining annotated images.
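For readers unfamiliar with the mean-teacher component, the sketch below shows the standard exponential-moving-average (EMA) teacher update and a consistency loss between student and teacher predictions on unlabeled scans. The adversarial discriminator, signed-distance-map heads, and multi-scale consistency from the paper are omitted; all names and values are illustrative.

# Sketch: EMA teacher update and prediction-consistency loss (mean teacher).
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """The teacher starts as a frozen copy of the student."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1 - alpha)

def consistency_loss(student, teacher, x_unlab, noise_std=0.1):
    """Student and teacher see differently perturbed copies of the same scan;
    their softmax outputs are pulled together with an MSE penalty."""
    noisy_s = x_unlab + noise_std * torch.randn_like(x_unlab)
    noisy_t = x_unlab + noise_std * torch.randn_like(x_unlab)
    with torch.no_grad():
        target = torch.softmax(teacher(noisy_t), dim=1)
    pred = torch.softmax(student(noisy_s), dim=1)
    return F.mse_loss(pred, target)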


Subject(s)
Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging; Deep Learning; Multiple Sclerosis/diagnostic imaging; Stroke/diagnostic imaging; Datasets as Topic; Humans; Magnetic Resonance Imaging; Research Design; Students