1.
Comput Biol Med; 136: 104716, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34364262

ABSTRACT

BACKGROUND: Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, and gathering enough data within a single institution can be particularly challenging. In this study, we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models, and we evaluated whether such models can match the performance of models trained on the same data pooled in a single centralized database.

METHODS: We propose a privacy-preserving distributed learning framework that learns sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated using four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics, and Stage III NSCLC).

FINDINGS: The proposed framework achieved predictive performance comparable to a centralized learning approach. Pairwise DeLong tests showed no significant difference between the compared pairs for each dataset.

INTERPRETATION: Distributed learning helps preserve medical data privacy. We foresee that this technology will increase the number of collaborative opportunities to develop robust AI and become the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning allows institutions with small but clinically valuable datasets to collaboratively train predictive AI while preserving the privacy of their patients, and the resulting models perform similarly to models built on a larger central dataset.
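
As a rough, hypothetical sketch of the sequential scheme described in this abstract (not the authors' published code), the snippet below trains a single logistic-regression model one simulated institution at a time using scikit-learn's partial_fit, so that only model parameters, never raw records, move between sites. The synthetic data and the three-way split are placeholders for the siloed clinical datasets.

# Illustrative sketch only: sequential (institution-by-institution) training of a
# logistic-regression model, passing only model parameters between sites.
# The data and the three-way split below are synthetic placeholders, not the
# study's Breast cancer / Indian liver / NSCLC datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
silos = np.array_split(np.arange(len(y)), 3)  # simulate three small, siloed datasets

model = SGDClassifier(loss="log_loss", random_state=0)  # logistic regression via SGD
classes = np.unique(y)
for site_idx in silos:
    # Each site updates the shared model on its local data only;
    # raw patient records never leave the institution.
    for _ in range(20):  # a few local passes per site
        model.partial_fit(X[site_idx], y[site_idx], classes=classes)

print("training accuracy:", model.score(X, y))

The same loop could carry the shared weights through a linear SVM (loss="hinge") or a perceptron (loss="perceptron"), mirroring the three algorithms evaluated in the study.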


Subjects
Artificial Intelligence; Privacy; Algorithms; Humans; Machine Learning; Neural Networks, Computer
2.
J Med Imaging (Bellingham); 7(2): 022412, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32341935

ABSTRACT

Purpose: Accurate detection of cancer lesions in positron emission tomography (PET) is fundamental to achieving favorable clinical outcomes. Therefore, image reconstruction, processing, visualization, and interpretation techniques must be optimized for this task. The objectives of this work were (1) to develop and validate an efficient method to generate well-characterized synthetic lesions in real patient data and (2) to apply these lesions in a human perception experiment to establish baseline measurements of the limits of lesion detection as a function of lesion size and contrast using current imaging technologies.

Approach: A fully integrated software package for synthesizing well-characterized lesions in real patient PET was developed using a vendor-provided PET image reconstruction toolbox (REGRECON5, General Electric Healthcare, Waukesha, Wisconsin). Lesion characteristics were validated experimentally for geometric accuracy, activity accuracy, and absence of artifacts. The Lesion Synthesis Toolbox was used to generate a library of 133 synthetic lesions of varying sizes (n = 7) and contrast levels (n = 19) in manually defined locations in the livers of 37 patient studies. A lesion-localization perception study was performed with seven observers to determine the limits of detection with regard to lesion size and contrast using our web-based perception study tool.

Results: The Lesion Synthesis Toolbox was validated for accurate lesion placement and size. Lesion intensities were deemed accurate, with slightly elevated activities (5% at 2:1 lesion-to-background contrast) in small lesions (Ø = 15 mm spheres) and no bias in large lesions (Ø = 22.5 mm). Bed-stitching artifacts were not observed, and lesion attenuation correction bias was small (−1.6 ± 1.2%). The 133 liver lesions were synthesized in ~50 h, and readers were able to complete the perception study of these lesions in 12 ± 3 min with consistent limits of detection among all readers.

Conclusions: Our open-source utilities can be employed by nonexperts to generate well-characterized synthetic lesions in real patient PET images and to administer perception studies on clinical workstations without the need to install proprietary software.
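
The published toolbox inserts lesions through the vendor reconstruction chain (REGRECON5), which is not reproduced here; the following is a deliberately simplified, image-space sketch of the basic idea of placing a spherical lesion with a chosen diameter and lesion-to-background contrast into a volume. The function name, array shapes, and voxel sizes are illustrative assumptions.

# Simplified, image-space illustration of adding a uniform spherical "lesion"
# at a chosen lesion-to-background contrast. The actual Lesion Synthesis
# Toolbox works in the projection/reconstruction domain, not on the final image.
import numpy as np

def add_spherical_lesion(volume, center_vox, diameter_mm, contrast, voxel_mm):
    """Return a copy of `volume` with a uniform sphere added.
    `contrast` is the target lesion-to-background ratio, e.g. 2.0 for 2:1."""
    zz, yy, xx = np.indices(volume.shape)
    dist_mm = np.sqrt(((zz - center_vox[0]) * voxel_mm[0]) ** 2 +
                      ((yy - center_vox[1]) * voxel_mm[1]) ** 2 +
                      ((xx - center_vox[2]) * voxel_mm[2]) ** 2)
    mask = dist_mm <= diameter_mm / 2.0
    background = volume[mask].mean()                  # local background activity
    lesioned = volume.copy()
    lesioned[mask] += background * (contrast - 1.0)   # raise sphere to contrast x background
    return lesioned

# Tiny synthetic example: uniform background of 1.0, 15 mm sphere at 2:1 contrast.
vol = np.ones((64, 64, 64))
out = add_spherical_lesion(vol, center_vox=(32, 32, 32), diameter_mm=15,
                           contrast=2.0, voxel_mm=(2.0, 2.0, 2.0))
print(out.max())  # ~2.0 inside the lesion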

3.
IEEE Trans Med Imaging; 36(1): 132-141, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28055829

ABSTRACT

Simple and robust techniques are lacking to assess the performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment exchange phantom and an image analysis tool, and we validated and demonstrated the utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT systems and a dedicated cardiac SPECT system (with and without attenuation and scatter corrections). A two-compartment exchange model was fit to image-derived time-activity curves to quantify flow rates. Flowmeter-measured flow rates (20-300 mL/min) were set prior to imaging and used as the reference truth to which image-derived flow rates were compared. Both PET cameras had excellent agreement with truth ([Formula: see text]). High-end PET had no significant bias (p > 0.05), while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ([Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software, and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for the quantification of kinetic rate parameters using dynamic imaging.
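
As a hedged illustration of the kinetic analysis step, the sketch below fits a simple two-compartment (one-tissue) exchange model, dC_t/dt = K1*C_in(t) − k2*C_t(t), to a time-activity curve with scipy.optimize.curve_fit. The exact model parameterisation, input function, and phantom geometry used in the paper may differ, and the data here are synthetic.

# Hedged sketch: fit a simple two-compartment exchange model to a
# time-activity curve; input function and "measured" data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 120, 121)                 # seconds
c_in = np.exp(-t / 30.0) * (t / 10.0)        # assumed input function (arbitrary shape)

def tissue_curve(t, K1, k2):
    # C_t(t) = K1 * (C_in convolved with exp(-k2*t)), via discrete convolution.
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(c_in, kernel)[: len(t)] * dt

# Synthetic "measured" curve with noise; true K1 = 0.8, k2 = 0.05.
measured = tissue_curve(t, 0.8, 0.05) + np.random.default_rng(1).normal(0, 0.02, t.size)

(K1_fit, k2_fit), _ = curve_fit(tissue_curve, t, measured, p0=[0.5, 0.1], bounds=(0, np.inf))
print(f"wash-in K1 = {K1_fit:.3f}, wash-out k2 = {k2_fit:.3f}")

In the paper's setup, the fitted wash-in and wash-out rates would then be regressed against the flowmeter-set flow rates to obtain the slope and bias figures quoted above.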


Subjects
Positron Emission Tomography Computed Tomography; Tomography, Emission-Computed, Single-Photon; Multimodal Imaging; Phantoms, Imaging