Results 1 - 9 of 9
1.
Article in English | MEDLINE | ID: mdl-38758290

ABSTRACT

PURPOSE: Body composition measurements from routine abdominal CT can yield personalized risk assessments for asymptomatic and diseased patients. In particular, attenuation and volume measures of muscle and fat are associated with important clinical outcomes, such as cardiovascular events, fractures, and death. This study evaluates the reliability of an internal tool for the segmentation of muscle and fat (subcutaneous and visceral) as compared to the well-established public TotalSegmentator tool. METHODS: We assessed the tools across 900 CT series from the publicly available SAROS dataset, focusing on muscle, subcutaneous fat, and visceral fat. The Dice score was employed to assess accuracy in subcutaneous fat and muscle segmentation. Due to the lack of ground truth segmentations for visceral fat, Cohen's Kappa was utilized to assess segmentation agreement between the tools. RESULTS: Our internal tool achieved a 3% higher Dice (83.8 vs. 80.8) for subcutaneous fat and a 5% improvement (87.6 vs. 83.2) for muscle segmentation. A Wilcoxon signed-rank test showed the differences were statistically significant (p < 0.01). For visceral fat, the Cohen's Kappa score of 0.856 indicated near-perfect agreement between the two tools. Our internal tool also showed very strong correlations for muscle volume (R²=0.99), muscle attenuation (R²=0.93), and subcutaneous fat volume (R²=0.99), with a moderate correlation for subcutaneous fat attenuation (R²=0.45). CONCLUSION: Our findings indicated that our internal tool outperformed TotalSegmentator in measuring subcutaneous fat and muscle. The high Cohen's Kappa score for visceral fat suggests a reliable level of agreement between the two tools. These results demonstrate the potential of our tool in advancing the accuracy of body composition analysis.
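The two statistics named in this abstract, the Dice score and Cohen's kappa between two binary masks, can be sketched in a few lines. The snippet below is a generic illustration rather than the authors' implementation: the function names are invented, and it assumes binary NumPy masks and scikit-learn.

```python
# Generic sketch: Dice overlap and Cohen's kappa between two binary
# segmentation masks (assumed to be numpy arrays of 0/1 voxels).
import numpy as np
from sklearn.metrics import cohen_kappa_score

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|), used here against a ground-truth mask."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def segmentation_kappa(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Voxel-wise Cohen's kappa, usable when no ground truth exists (e.g. visceral fat)."""
    return cohen_kappa_score(mask_a.ravel(), mask_b.ravel())
```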

2.
ArXiv; 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38410656

ABSTRACT

Purpose: Body composition measurements from routine abdominal CT can yield personalized risk assessments for asymptomatic and diseased patients. In particular, attenuation and volume measures of muscle and fat are associated with important clinical outcomes, such as cardiovascular events, fractures, and death. This study evaluates the reliability of an internal tool for the segmentation of muscle and fat (subcutaneous and visceral) as compared to the well-established public TotalSegmentator tool. Methods: We assessed the tools across 900 CT series from the publicly available SAROS dataset, focusing on muscle, subcutaneous fat, and visceral fat. The Dice score was employed to assess accuracy in subcutaneous fat and muscle segmentation. Due to the lack of ground truth segmentations for visceral fat, Cohen's Kappa was utilized to assess segmentation agreement between the tools. Results: Our internal tool achieved a 3% higher Dice (83.8 vs. 80.8) for subcutaneous fat and a 5% improvement (87.6 vs. 83.2) for muscle segmentation. A Wilcoxon signed-rank test showed the differences were statistically significant (p < 0.01). For visceral fat, the Cohen's Kappa score of 0.856 indicated near-perfect agreement between the two tools. Our internal tool also showed very strong correlations for muscle volume (R²=0.99), muscle attenuation (R²=0.93), and subcutaneous fat volume (R²=0.99), with a moderate correlation for subcutaneous fat attenuation (R²=0.45). Conclusion: Our findings indicated that our internal tool outperformed TotalSegmentator in measuring subcutaneous fat and muscle. The high Cohen's Kappa score for visceral fat suggests a reliable level of agreement between the two tools. These results demonstrate the potential of our tool in advancing the accuracy of body composition analysis.

3.
Radiology; 309(1): e231147, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37815442

ABSTRACT

Background Large language models (LLMs) such as ChatGPT, though proficient in many text-based tasks, are not suitable for use with radiology reports due to patient privacy constraints. Purpose To test the feasibility of using an alternative LLM (Vicuna-13B) that can be run locally for labeling radiography reports. Materials and Methods Chest radiography reports from the MIMIC-CXR and National Institutes of Health (NIH) data sets were included in this retrospective study. Reports were examined for 13 findings. Outputs reporting the presence or absence of the 13 findings were generated by Vicuna by using a single-step or multistep prompting strategy (prompts 1 and 2, respectively). Agreements between Vicuna outputs and CheXpert and CheXbert labelers were assessed using Fleiss κ. Agreement between Vicuna outputs from three runs under a hyperparameter setting that introduced some randomness (temperature, 0.7) was also assessed. The performance of Vicuna and the labelers was assessed in a subset of 100 NIH reports annotated by a radiologist with use of area under the receiver operating characteristic curve (AUC). Results A total of 3269 reports from the MIMIC-CXR data set (median patient age, 68 years [IQR, 59-79 years]; 161 male patients) and 25 596 reports from the NIH data set (median patient age, 47 years [IQR, 32-58 years]; 1557 male patients) were included. Vicuna outputs with prompt 2 showed, on average, moderate to substantial agreement with the labelers on the MIMIC-CXR (κ median, 0.57 [IQR, 0.45-0.66] with CheXpert and 0.64 [IQR, 0.45-0.68] with CheXbert) and NIH (κ median, 0.52 [IQR, 0.41-0.65] with CheXpert and 0.55 [IQR, 0.41-0.74] with CheXbert) data sets, respectively. Vicuna with prompt 2 performed at par (median AUC, 0.84 [IQR, 0.74-0.93]) with both labelers on nine of 11 findings. Conclusion In this proof-of-concept study, outputs of the LLM Vicuna reporting the presence or absence of 13 findings on chest radiography reports showed moderate to substantial agreement with existing labelers. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Cai in this issue.


Subjects
Camelids, New World; Radiology; United States; Humans; Male; Animals; Aged; Middle Aged; Privacy; Feasibility Studies; Retrospective Studies; Language
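As a rough illustration of the agreement statistic used in the record above, the sketch below computes Fleiss' κ from a matrix of per-report binary labels. The rater columns and toy values are invented, and it assumes the statsmodels package; the study's actual pairing of Vicuna with each labeler may differ.

```python
# Sketch: Fleiss' kappa across several labelers (e.g. an LLM, CheXpert, CheXbert)
# for one finding. Rows are reports, columns are raters, values are 0/1.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

labels = np.array([
    [1, 1, 1],   # all raters report the finding as present
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],   # all raters report it as absent
])

counts, _ = aggregate_raters(labels)        # per-report counts of each category
print(fleiss_kappa(counts, method="fleiss"))
```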
4.
Biomed Opt Express; 14(2): 533-549, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36874499

ABSTRACT

Retina fundus imaging for diagnosing diabetic retinopathy (DR) is an efficient and patient-friendly modality, where many high-resolution images can be easily obtained for accurate diagnosis. With the advancements of deep learning, data-driven models may facilitate the process of high-throughput diagnosis, especially in areas with less availability of certified human experts. Many datasets of DR already exist for training learning-based models. However, most are often unbalanced, do not have a large enough sample count, or both. This paper proposes a two-stage pipeline for generating photo-realistic retinal fundus images based on either artificially generated or free-hand drawn semantic lesion maps. The first stage uses a conditional StyleGAN to generate synthetic lesion maps based on a DR severity grade. The second stage then uses GauGAN to convert the synthetic lesion maps into high-resolution fundus images. We evaluate the photo-realism of generated images using the Fréchet inception distance (FID), and show the efficacy of our pipeline through downstream tasks such as dataset augmentation for automatic DR grading and lesion segmentation.
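For context, the photo-realism metric mentioned above (FID) compares Inception feature statistics of real and generated images. The sketch below assumes the torchmetrics package; the random uint8 tensors merely stand in for batches of real fundus images and GauGAN outputs, and far more samples would be needed for a stable estimate.

```python
# Sketch: Fréchet inception distance between real and synthetic image batches.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64)  # small feature dim keeps the toy example cheap

real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)       # stand-in for real fundus images
synthetic = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)  # stand-in for generated images

fid.update(real, real=True)
fid.update(synthetic, real=False)
print(fid.compute())
```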

5.
IEEE Trans Med Imaging; 41(10): 2728-2738, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35468060

ABSTRACT

Detecting Out-of-Distribution (OoD) data is one of the greatest challenges in the safe and robust deployment of machine learning algorithms in medicine. When the algorithms encounter cases that deviate from the distribution of the training data, they often produce incorrect and over-confident predictions. OoD detection algorithms aim to catch erroneous predictions in advance by analysing the data distribution and detecting potential instances of failure. Moreover, flagging OoD cases may support human readers in identifying incidental findings. Due to the increased interest in OoD algorithms, benchmarks for different domains have recently been established. In the medical imaging domain, for which reliable predictions are often essential, an open benchmark has been missing. We introduce the Medical Out-of-Distribution Analysis Challenge (MOOD) as an open, fair, and unbiased benchmark for OoD methods in the medical imaging domain. The analysis of the submitted algorithms shows that performance has a strong positive correlation with the perceived difficulty, and that all algorithms show a high variance for different anomalies, making it difficult, as yet, to recommend them for clinical practice. We also see a strong correlation between challenge ranking and performance on a simple toy test set, indicating that this might be a valuable addition as a proxy dataset during anomaly detection algorithm development.


Subjects
Benchmarking; Machine Learning; Algorithms; Humans
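For readers unfamiliar with the task, one common OoD heuristic is to score a sample by the reconstruction error of an autoencoder trained on in-distribution data; a high error flags a potential outlier. The sketch below is a generic illustration under assumed shapes and architecture, not a MOOD submission or the benchmark's evaluation code.

```python
# Sketch: reconstruction-error OoD score with a tiny fully connected autoencoder.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim: int = 64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def ood_score(model: TinyAutoencoder, image: torch.Tensor) -> float:
    """Mean squared reconstruction error of a flattened image; higher = more anomalous."""
    with torch.no_grad():
        flat = image.reshape(1, -1)
        return nn.functional.mse_loss(model(flat), flat).item()

score = ood_score(TinyAutoencoder(), torch.rand(64, 64))  # untrained model, toy input
```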
6.
IEEE Trans Med Imaging; 38(12): 2755-2767, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31021795

ABSTRACT

Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions using learning-based algorithms is challenging because pixel-wise ground truth annotation of acoustic shadows is subjective and time consuming. In this paper, we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions. Our method is able to generate a dense shadow-focused confidence map. In our method, a shadow-seg module is built to learn general shadow features for shadow segmentation, based on global image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is introduced to extend the obtained binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This network is able to predict shadow confidence maps directly from input images during inference. We use evaluation metrics such as DICE and inter-class correlation to verify the effectiveness of our method. Our method is more consistent than human annotation and outperforms the state-of-the-art quantitatively in shadow segmentation and qualitatively in confidence estimation of shadow regions. Furthermore, we demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion, and automated biometric measurements.


Subjects
Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Ultrasonography, Prenatal/methods; Algorithms; Deep Learning; Female; Fetus/diagnostic imaging; Humans; Pregnancy
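The transfer function described above, which extends a binary shadow segmentation into a reference confidence map, could for example be approximated by a distance-based decay. The sketch below is a hypothetical stand-in: the exponential form and its scale are assumptions, not the paper's definition.

```python
# Sketch: soften a binary shadow mask into a [0, 1] confidence map.
import numpy as np
from scipy.ndimage import distance_transform_edt

def shadow_confidence(binary_shadow: np.ndarray, decay: float = 10.0) -> np.ndarray:
    """binary_shadow: 2-D array of 0/1; returns per-pixel shadow confidence."""
    # Distance (in pixels) from every background pixel to the nearest shadow pixel;
    # pixels inside the shadow get distance 0 and hence confidence 1.
    dist_to_shadow = distance_transform_edt(binary_shadow == 0)
    return np.exp(-dist_to_shadow / decay)
```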
7.
Med Image Anal; 53: 156-164, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30784956

ABSTRACT

Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain, and cardiac magnetic resonance imaging (MRI). The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with a large field of view and noisy background, such as in cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the search process by a factor of 4-5.


Subjects
Anatomic Landmarks; Brain/diagnostic imaging; Deep Learning; Head/diagnostic imaging; Heart/diagnostic imaging; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Adult; Female; Head/embryology; Humans; Pregnancy
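A minimal sketch of the coarse-to-fine search idea described above: an agent moves a 3-D point through a volume with six discrete actions and halves its step size when it starts revisiting positions. The Q-value function, refinement rule, and parameters are placeholders, not the paper's DQN.

```python
# Sketch: multi-scale greedy landmark search driven by a Q-value function.
import numpy as np

ACTIONS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def run_agent(q_values_fn, start, volume_shape, step=8, min_step=1, max_iters=200):
    """q_values_fn(position, step) -> 6 Q-values (a trained DQN in practice)."""
    pos, visited = tuple(start), set()
    for _ in range(max_iters):
        q = q_values_fn(np.array(pos), step)
        move = step * ACTIONS[int(np.argmax(q))]
        new_pos = tuple(np.clip(np.array(pos) + move, 0, np.array(volume_shape) - 1))
        if new_pos in visited:            # oscillation: refine the search scale
            if step <= min_step:
                break
            step //= 2
            visited.clear()
        visited.add(new_pos)
        pos = new_pos
    return np.array(pos)

# Toy usage with random Q-values in a 128³ volume:
landmark = run_agent(lambda p, s: np.random.rand(6), (64, 64, 64), (128, 128, 128))
```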
8.
IEEE Trans Med Imaging; 37(8): 1737-1750, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29994453

ABSTRACT

Limited capture range and the requirement to provide high-quality initialization for optimization-based 2-D/3-D image registration methods can significantly degrade the performance of 3-D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios that contain significant subject motion, such as fetal in-utero imaging, complicate the 3-D image and volume reconstruction process. In this paper, we present a learning-based image registration method capable of predicting 3-D rigid transformations of arbitrarily oriented 2-D image slices with respect to a learned canonical atlas coordinate system. Only image slice intensity information is used to perform registration and canonical alignment; no spatial transform initialization is required. To find image transformations, we utilize a convolutional neural network architecture to learn the regression function capable of mapping 2-D image slices to a 3-D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated magnetic resonance imaging (MRI) fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data, where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning-based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2-D/3-D registration initialization problem and is suitable for real-time scenarios.


Subjects
Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Algorithms; Brain/diagnostic imaging; Female; Fetus/diagnostic imaging; Humans; Machine Learning; Movement; Pregnancy
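The core idea above, a network regressing a rigid transformation (3 rotations and 3 translations) directly from slice intensities, can be sketched as follows. The architecture, slice size, and parameterization are illustrative assumptions, not the authors' model.

```python
# Sketch: CNN mapping a 2-D slice to 6 rigid-transform parameters in atlas space.
import torch
import torch.nn as nn

class SliceToRigidNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 6)  # 3 Euler angles + 3 translations

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(slices).flatten(1))

params = SliceToRigidNet()(torch.randn(2, 1, 128, 128))  # (batch, 6) rigid parameters
```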
9.
Annu Int Conf IEEE Eng Med Biol Soc; 2017: 189-192, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29059842

ABSTRACT

This paper describes the development of an array of individually addressable pH-sensitive microneedles using injection moulding and their integration within a portable device for real-time wireless recording of pH distributions in biological samples. The fabricated microneedles are subjected to gold patterning followed by electrodeposition of iridium oxide to sensitize them to pH changes of 0.07 units. Miniaturised electronics suitable for sensor readout, analog-to-digital conversion, and wireless transmission of the potentiometric data are embedded within the device, enabling it to measure the real-time pH of soft biological samples such as muscle. In this paper, real-time recording of the cardiac pH distribution during ischemia followed by reperfusion cycles in the cardiac muscle of male Wistar rats is demonstrated using the microneedle array.


Subjects
Needles; Animals; Electroplating; Hydrogen-Ion Concentration; Injections; Male; Potentiometry; Rats; Rats, Wistar; Time Factors; Wireless Technology
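As a worked example of turning the potentiometric readout into a pH value, a linear two-point calibration is commonly used for iridium oxide electrodes. The calibration potentials and buffers below are invented for illustration and are not taken from the paper.

```python
# Sketch: convert a measured electrode potential (mV) to pH via two calibration buffers.
def potential_to_ph(potential_mv, cal_mv=(350.0, 100.0), cal_ph=(4.0, 7.0)):
    """Linear interpolation/extrapolation between two calibration points."""
    slope = (cal_ph[1] - cal_ph[0]) / (cal_mv[1] - cal_mv[0])  # pH per mV (negative for IrOx)
    return cal_ph[0] + slope * (potential_mv - cal_mv[0])

print(potential_to_ph(180.0))  # pH estimate for a 180 mV reading
```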