Results 1 - 6 of 6
1.
J Labelled Comp Radiopharm; 65(9): 234-243, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35748089

ABSTRACT

Deuterated reagents are used in many research fields. As the characteristic parameter of a deuterated reagent, its isotope abundance must be quantified precisely. Based on quantitative nuclear magnetic resonance (qNMR), a novel method combining 1H NMR and 2H NMR was systematically established to determine the isotopic abundance of deuterated reagents. The results showed that the isotopic abundance of partially and fully labeled compounds calculated by the new method was more accurate than that obtained by classical 1H NMR and mass spectrometry (MS) methods. In brief, the new method is a robust strategy for determining isotope abundance in large-scale deuterated reagents.
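The core arithmetic behind such a qNMR determination can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: it assumes the 1H and 2H integrals for the labeled position have been normalized against the same internal standard so they are directly comparable, and the function name is hypothetical.

```python
def isotopic_abundance(i_2h, i_1h):
    """Estimate deuterium abundance (as a fraction) at one labeled site.

    i_2h : integrated 2H NMR signal for the deuterated species
    i_1h : integrated 1H NMR signal for the residual protio species
    Both integrals are assumed to be normalized against the same
    internal standard so that they are on a common scale.
    """
    total = i_2h + i_1h
    if total == 0:
        raise ValueError("no signal at the labeled position")
    return i_2h / total

# e.g. a 2H integral of 98.0 against a residual 1H integral of 2.0
print(isotopic_abundance(98.0, 2.0))  # -> 0.98, i.e. 98% D
```

Combining both nuclei in this way is what lets the method check partially labeled sites that a 1H-only measurement would under-determine.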


Subjects
Magnetic Resonance Imaging, Deuterium/chemistry, Magnetic Resonance Spectroscopy/methods, Mass Spectrometry
2.
Entropy (Basel); 24(11), 2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36421515

ABSTRACT

Radiotherapy is one of the main treatments for localized head and neck (HN) cancer. To design a personalized treatment with reduced radiation-induced toxicity, accurate delineation of organs at risk (OARs) is a crucial step. Manual delineation is time-consuming and labor-intensive, as well as observer-dependent. Deep learning (DL) based segmentation has been shown to overcome some of these limitations, but it requires large databases of homogeneously contoured image sets for robust training. These are not easily obtained from standard clinical protocols, as the OARs delineated may vary depending on the patient's tumor site and specific treatment plan, resulting in incomplete or partially labeled data. This paper presents a solution for training a robust DL-based automated segmentation tool by exploiting a clinical partially labeled dataset. We propose a two-step workflow for OAR segmentation: first, we developed longitudinal OAR-specific 3D segmentation models for pseudo-contour generation, completing the missing contours for some patients; then, with all OARs available, we trained a multi-class 3D convolutional neural network (nnU-Net) for final OAR segmentation. Results obtained on 44 independent datasets showed superior performance of the proposed methodology for the segmentation of fifteen OARs, with an average Dice similarity coefficient of 80.59% and a surface Dice similarity coefficient of 88.74%. We demonstrated that the model can be straightforwardly integrated into the clinical workflow for standard and adaptive radiotherapy.
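The label-completion step of the two-step workflow can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: `complete_labels` and its arguments are hypothetical names, and the per-organ model callables stand in for the single-OAR 3D segmentation networks described in the abstract.

```python
import numpy as np

def complete_labels(label_map, available, organ_models, image, n_organs):
    """Fill in missing organ contours with pseudo-labels (step 1).

    label_map    : integer array, 0 = background, k = organ k
    available    : set of organ ids the clinician actually contoured
    organ_models : organ id -> callable returning a binary pseudo-mask
    """
    completed = label_map.copy()
    for organ in range(1, n_organs + 1):
        if organ in available:
            continue  # keep the clinician's contour untouched
        pseudo = organ_models[organ](image)
        # write the pseudo-contour only where no label exists yet
        completed[(pseudo > 0) & (completed == 0)] = organ
    return completed
```

With every case thus completed, the multi-class nnU-Net in step 2 can be trained with an ordinary fully supervised loss.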

3.
Med Image Anal; 95: 103156, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38603844

ABSTRACT

State-of-the-art multi-organ CT segmentation relies on deep learning models, which generalize only when trained on large samples of carefully curated data. However, it is challenging to train a single model that can segment all organs and tumor types, since most large datasets are partially labeled or are acquired across multiple institutes that may differ in their acquisition protocols. A possible solution is federated learning, which is often used to train models on multi-institutional datasets where the data are not shared across sites. However, the predictions of a federated model can become unreliable after it is locally updated at a site, due to 'catastrophic forgetting'. Here, we address this issue by using knowledge distillation (KD), so that local training is regularized with the knowledge of a global model and of pre-trained organ-specific segmentation models. We implement the models in a multi-head U-Net architecture that learns a shared embedding space for the different organ segmentations, thereby obtaining multi-organ predictions without repeated processing. We evaluate the proposed method using 8 publicly available abdominal CT datasets covering 7 different organs. Of those datasets, 889 CT volumes were used for training, 233 for internal testing, and 30 for external testing. Experimental results verified that our proposed method substantially outperforms other state-of-the-art methods in terms of accuracy, inference time, and number of parameters.
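The KD regularizer that keeps a locally updated model close to its teachers can be sketched with the standard temperature-scaled KL divergence. The abstract does not give the exact formulation, so this is a minimal numpy version under common KD conventions; the function names are illustrative.

```python
import numpy as np

def softmax(z, t=1.0, axis=-1):
    """Temperature-scaled softmax, computed stably."""
    z = z / t
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) on temperature-softened class
    distributions, averaged over voxels. During local training this
    term is added to the segmentation loss so the site-specific update
    does not drift from the global / organ-specific teachers."""
    p = softmax(teacher_logits, t)  # soft targets from the teacher
    q = softmax(student_logits, t)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * t * t)  # t^2 restores gradient scale
```

A local objective would then look like `seg_loss + lam * distillation_loss(...)` with a weight `lam` tuned per site.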


Subjects
Deep Learning, Tomography, X-Ray Computed, Humans, Datasets as Topic, Databases, Factual
4.
Comput Biol Med; 160: 106995, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37187134

ABSTRACT

Despite the significant performance improvements achieved on multi-organ segmentation with supervised deep learning-based methods, their label-hungry nature hinders their application in practical disease diagnosis and treatment planning. Because expert-level, densely annotated multi-organ datasets are difficult to obtain, label-efficient segmentation, such as partially supervised segmentation trained on partially labeled datasets or semi-supervised medical image segmentation, has attracted increasing attention recently. However, most of these methods neglect or underestimate the challenging unlabeled regions during model training. To this end, we propose a novel Context-aware Voxel-wise Contrastive Learning method, referred to as CVCL, to take full advantage of both labeled and unlabeled information in label-scarce datasets and improve multi-organ segmentation performance. Experimental results demonstrate that our proposed method outperforms other state-of-the-art methods.
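The abstract does not spell out CVCL's loss, but voxel-wise contrastive learning generally builds on an InfoNCE-style objective over voxel embeddings. The sketch below shows that generic building block only, for a single anchor voxel; the context-aware machinery of the paper is not represented, and the function name is hypothetical.

```python
import numpy as np

def voxel_infonce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor voxel embedding.

    anchor, positive : (d,) unit-normalized embedding vectors
    negatives        : (k, d) unit-normalized embeddings of voxels
                       that should be pushed away from the anchor
    tau              : temperature controlling the sharpness
    """
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    # low loss when the anchor is close to its positive and far
    # from all negatives in the embedding space
    return float(-np.log(pos / (pos + neg)))
```

In a segmentation setting, positives would typically be voxels of the same (pseudo-)class and negatives voxels of other classes, with the loss averaged over sampled anchors.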

5.
Comput Biol Med; 136: 104658, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34311262

ABSTRACT

Accurate and robust multiorgan abdominal CT segmentation plays a significant role in numerous clinical applications, such as treatment planning and treatment delivery. Almost all existing segmentation networks rely on fully annotated data with strong supervision. However, fully annotating multiorgan CT images is both laborious and time-consuming, whereas massive partially labeled datasets are usually easily accessible. In this paper, we propose a conditional nnU-Net trained on the union of partially labeled datasets for multiorgan segmentation. The deep model employs the state-of-the-art nnU-Net as the backbone and introduces a conditioning strategy that feeds auxiliary information into the decoder as an additional input layer. The model leverages this prior conditional information to identify the organ class at the pixel level and encourages the recovery of organs' spatial information. Furthermore, we adopt a deep supervision mechanism to refine the outputs at different scales and apply a combination of Dice loss and Focal loss to optimize the training. Our proposed method is evaluated on seven publicly available datasets of the liver, pancreas, spleen, and kidney, on which promising segmentation performance is achieved. The proposed conditional nnU-Net breaks down the barriers between non-overlapping labeled datasets and further alleviates the problem of data hunger in multiorgan segmentation.
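The Dice + Focal combination used to optimize training can be sketched for the binary case. The abstract does not state the relative weighting of the two terms, so the version below simply sums them with conventional defaults; the function name and parameters are illustrative.

```python
import numpy as np

def dice_focal_loss(probs, target, gamma=2.0, alpha=0.5, eps=1e-6):
    """Combined Dice + Focal loss on one binary organ mask.

    probs  : array of predicted foreground probabilities in [0, 1]
    target : array of {0, 1} ground-truth labels, same shape
    """
    # Dice term: penalizes poor region overlap
    inter = (probs * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    # Focal term: down-weights easy voxels via the (1 - pt)^gamma factor
    pt = np.where(target == 1, probs, 1.0 - probs)
    focal = (-alpha * (1.0 - pt) ** gamma * np.log(pt + eps)).mean()
    return float(dice + focal)
```

For the multiorgan setting this would be averaged over organ channels, and with deep supervision it would be evaluated at each decoder scale.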

6.
Med Image Anal; 70: 101979, 2021 May.
Article in English | MEDLINE | ID: mdl-33636451

ABSTRACT

Annotating multiple organs in medical images is both costly and time-consuming; therefore, existing labeled multi-organ datasets are often small and mostly partially labeled, that is, a dataset has a few organs labeled but not all. In this paper, we investigate how to learn a single multi-organ segmentation network from a union of such datasets. To this end, we propose two novel types of loss function designed specifically for this scenario: (i) marginal loss and (ii) exclusion loss. Because the background label of a partially labeled image is in fact a 'merged' label of all unlabeled organs and the 'true' background (in the sense of full labels), the probability of this merged background label is a marginal probability, obtained by summing the relevant probabilities before merging. This marginal probability can be plugged into any existing loss function (such as cross-entropy loss or Dice loss) to form a marginal loss. Leveraging the fact that organs do not overlap, we propose the exclusion loss to gauge the dissimilarity between labeled organs and the estimated segmentation of unlabeled organs. Experiments on a union of five benchmark datasets for multi-organ segmentation of the liver, spleen, left and right kidneys, and pancreas demonstrate that our newly proposed loss functions bring a conspicuous performance improvement to state-of-the-art methods without introducing any extra computation.
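The marginal-loss construction, applied to cross-entropy, can be sketched directly from the description above. This is a minimal numpy version, assuming class 0 is the true background and that the network already outputs softmax probabilities over the full label space; the function name and argument shapes are illustrative.

```python
import numpy as np

def marginal_ce(probs, target, labeled_classes, eps=1e-12):
    """Marginal cross-entropy for voxels of a partially labeled image.

    probs  : (n_voxels, n_classes) softmax outputs over the FULL label
             space, with class 0 as the true background
    target : (n_voxels,) labels in the PARTIAL label space, where 0 is
             the 'merged' background (true background + unlabeled organs)
    labeled_classes : class ids actually annotated in this dataset
    """
    unlabeled = [c for c in range(probs.shape[1])
                 if c != 0 and c not in labeled_classes]
    # Marginal probability of the merged background label: sum the
    # relevant class probabilities before merging.
    merged_bg = probs[:, [0] + unlabeled].sum(axis=1)
    per_class = probs[np.arange(len(target)), target]
    losses = np.where(target == 0,
                      -np.log(merged_bg + eps),
                      -np.log(per_class + eps))
    return float(losses.mean())
```

The same marginalization can be substituted into a Dice loss; the exclusion loss additionally penalizes overlap between predicted unlabeled organs and the annotated ones.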


Subjects
Spleen, Tomography, X-Ray Computed, Humans, Kidney/diagnostic imaging, Liver/diagnostic imaging, Probability