Results 1 - 2 of 2
1.
Neurocomputing (Amst) ; 485: 36-46, 2022 May 07.
Article in English | MEDLINE | ID: mdl-35185296

ABSTRACT

The front-line imaging modalities computed tomography (CT) and X-ray play important roles in triaging COVID patients. Thoracic CT is accepted to have higher sensitivity than chest X-ray for COVID diagnosis. However, considering limited access to resources (both hardware and trained personnel) and issues related to decontamination, CT may not be ideal for triaging suspected subjects. An artificial intelligence (AI)-assisted, X-ray-based application for triage and monitoring, which helps experienced radiologists identify COVID patients in a timely manner and additionally delineates and quantifies the disease region, is seen as a promising solution for widespread clinical use. Our proposed solution differs from existing solutions presented by industry and academic communities. We demonstrate a functional AI model that triages by classifying and segmenting a single chest X-ray image, while the AI model is trained using both X-ray and CT data. We report on how such a multi-modal training process improves the solution compared to single-modality (X-ray only) training. The multi-modal solution increases the AUC (area under the receiver operating characteristic curve) from 0.89 to 0.93 for binary classification between COVID-19 and non-COVID-19 cases. It also improves the Dice coefficient (from 0.59 to 0.62) for localizing the COVID-19 pathology. To compare the performance of experienced readers to the AI model, a reader study was also conducted. The AI model showed good consistency with respect to radiologists: the Dice score between two radiologists on the COVID group was 0.53, while the AI model achieved Dice values of 0.52 and 0.55 against the segmentations of the two radiologists, respectively. From a classification perspective, the AUCs of the two readers were 0.87 and 0.81, while the AUC of the AI model was 0.93 on the reader-study dataset. We also conducted a generalization study by comparing our method to state-of-the-art methods on independent datasets.
The results show better performance from the proposed method. Leveraging multi-modal information during development benefits single-modality inference.
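As context for the Dice scores reported in this abstract, a minimal sketch of the Dice coefficient for binary segmentation masks (an illustrative example, not the paper's code; the function name and toy masks are hypothetical):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Two toy 2x2 masks that overlap in exactly one pixel
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 1]])
print(dice_coefficient(a, b))  # 0.5  (2*1 / (2+2))
```

A Dice of 0.53 between two expert radiologists, as reported above, illustrates how much inter-reader variability exists in COVID-19 lesion delineation; the AI's 0.52–0.55 against each reader sits within that range.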

2.
J Vasc Interv Radiol ; 16(4): 493-505, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15802449

ABSTRACT

PURPOSE: To assess the feasibility of using preprocedural imaging for guide wire, catheter, and needle navigation with electromagnetic tracking in phantom and animal models. MATERIALS AND METHODS: An image-guided intervention software system was developed based on open-source software components. Catheters, needles, and guide wires were constructed with small position and orientation sensors in their tips. A tetrahedral-shaped weak electromagnetic field generator was placed in proximity to an abdominal vascular phantom or three pigs on the angiography table. Preprocedural computed tomographic (CT) images of the phantom or pig were loaded into custom-developed tracking, registration, navigation, and rendering software. Devices were manipulated within the phantom or pig with guidance from the previously acquired CT scan and simultaneous real-time angiography. Navigation within positron emission tomography (PET) and magnetic resonance (MR) volumetric datasets was also performed. External and endovascular fiducials were used for registration in the phantom, and registration error and tracking error were estimated. RESULTS: The CT-scan position of the devices within phantoms and pigs was accurately determined during angiography and biopsy procedures, with error manageable for some applications. Preprocedural CT depicted the anatomy in the region of the devices with real-time position updating and minimal registration and tracking error (<5 mm). PET can also be used with this system to guide percutaneous biopsies to the most metabolically active region of a tumor. CONCLUSIONS: Previously acquired CT, MR, or PET data can be accurately codisplayed during procedures, with imaging reconstructed based on the position and orientation of catheters, guide wires, or needles. The system makes multimodality interventions feasible by allowing real-time, position-updated display of previously acquired functional or morphologic imaging during angiography, biopsy, and ablation.
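The fiducial-based registration described above is commonly solved as a least-squares rigid alignment between corresponding point sets, after which a fiducial registration error (FRE) quantifies residual misalignment. Below is a minimal sketch using the standard Kabsch/SVD solution (an illustrative example under that assumption, not the authors' software; all names and the toy points are hypothetical):

```python
import numpy as np

def rigid_register(fixed: np.ndarray, moving: np.ndarray):
    """Least-squares rigid transform (Kabsch/SVD) mapping moving -> fixed.
    fixed, moving: (N, 3) arrays of corresponding fiducial coordinates."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

def fre(fixed, moving, R, t):
    """Fiducial registration error: RMS residual after applying (R, t)."""
    residual = fixed - (moving @ R.T + t)
    return np.sqrt((residual ** 2).sum(axis=1).mean())

# Toy example: moving fiducials are the fixed ones rotated 90 degrees
# about z and translated; registration should recover this exactly.
fixed = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
moving = fixed @ Rz.T + np.array([5.0, 2.0, 1.0])
R, t = rigid_register(fixed, moving)
print(fre(fixed, moving, R, t) < 1e-9)  # True for noise-free points
```

With real fiducials, sensor noise and tissue motion keep the FRE above zero; the <5 mm figure reported above is such a residual, not an exact recovery as in this noise-free toy case.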


Subjects
Diagnostic Imaging/methods , Electromagnetic Phenomena/instrumentation , Image Processing, Computer-Assisted/methods , Radiology, Interventional/methods , Angiography , Animals , Biopsy, Needle/methods , Catheterization/instrumentation , Electronics, Medical/instrumentation , Equipment Design , Feasibility Studies , Humans , Magnetic Resonance Imaging , Models, Animal , Needles , Phantoms, Imaging , Positron-Emission Tomography , Radiography, Interventional , Radiology, Interventional/instrumentation , Software , Swine , Tomography, X-Ray Computed