Results 1 - 4 of 4
1.
IEEE Trans Med Imaging ; PP, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717880

ABSTRACT

The integration of Computer-Aided Diagnosis (CAD) with Large Language Models (LLMs) presents a promising frontier in clinical applications, notably in automating diagnostic processes akin to those performed by radiologists and providing consultations similar to a virtual family doctor. Despite this promising potential, current works face at least two limitations: (1) From the perspective of a radiologist, existing studies typically cover a restricted range of imaging domains and thus fail to meet the diagnostic needs of different patients; moreover, the insufficient diagnostic capability of LLMs further undermines the quality and reliability of the generated medical reports. (2) Current LLMs lack the requisite depth of medical expertise, rendering them less effective as virtual family doctors because the advice provided during patient consultations may be unreliable. To address these limitations, we introduce ChatCAD+, designed to be universal and reliable. It features two main modules: (1) Reliable Report Generation and (2) Reliable Interaction. The Reliable Report Generation module interprets medical images from diverse domains and generates high-quality medical reports via our proposed hierarchical in-context learning. Concurrently, the interaction module leverages up-to-date information from reputable medical websites to provide reliable medical advice. Together, these modules synergize to closely align with the expertise of human medical professionals, offering enhanced consistency and reliability for interpretation and advice. The source code is available at GitHub.
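
As a rough illustration of how a report-refinement step like the one described above can be wired to an LLM, the Python sketch below composes a prompt from in-context example reports arranged from coarse to fine and asks a text-completion model to rewrite a CAD-generated draft. The function names (build_hierarchical_prompt, refine_report), the prompt wording, and the level structure are illustrative assumptions; the paper's actual hierarchical in-context learning strategy is not detailed in this abstract.

# Minimal sketch of prompt composition for LLM-based report refinement.
# All names and the prompt layout are hypothetical, not the ChatCAD+ code.
from typing import Callable, List


def build_hierarchical_prompt(draft_report: str,
                              example_reports: List[List[str]]) -> str:
    """Present in-context example reports level by level, from coarse
    (report-level templates) to fine (sentence-level phrasing), then ask
    the model to rewrite the CAD draft in the same style."""
    sections = []
    for level, examples in enumerate(example_reports):
        joined = "\n".join(f"- {e}" for e in examples)
        sections.append(f"Level {level} reference reports:\n{joined}")
    context = "\n\n".join(sections)
    return (f"{context}\n\n"
            f"Draft findings from the CAD model:\n{draft_report}\n\n"
            "Rewrite the draft as a fluent, clinically styled report, "
            "following the structure of the reference reports.")


def refine_report(draft_report: str,
                  example_reports: List[List[str]],
                  llm: Callable[[str], str]) -> str:
    """Refine a CAD-generated draft with any text-completion LLM callable."""
    return llm(build_hierarchical_prompt(draft_report, example_reports))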

2.
IEEE Trans Med Imaging ; 43(1): 517-528, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37751352

ABSTRACT

In digital dentistry, cone-beam computed tomography (CBCT) can provide complete 3D tooth models, yet it raises long-standing concerns about excessive radiation dose and higher expense. Reconstructing 3D tooth models from a 2D panoramic X-ray image is therefore more cost-effective and has attracted great interest in clinical applications. In this paper, we propose a novel dual-space framework, namely DTR-Net, to reconstruct 3D tooth models from 2D panoramic X-ray images in both the image and geometric spaces. Specifically, in the image space, we apply a 2D-to-3D generative model to recover the intensities of the CBCT image, guided by a task-oriented tooth segmentation network in a collaborative training manner. Meanwhile, in the geometric space, we benefit from an implicit function network that learns, in continuous space, from sampled points to capture complicated tooth shapes and their geometric properties. Experimental results demonstrate that our proposed DTR-Net achieves state-of-the-art performance both quantitatively and qualitatively in 3D tooth model reconstruction, indicating its potential for application in dental practice.
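
For readers unfamiliar with implicit function networks, the PyTorch sketch below shows the general idea behind a geometric-space branch of this kind: an MLP maps 3D query points, conditioned on a latent shape code, to occupancy probabilities, so a tooth surface can be represented in continuous space. The layer sizes, conditioning scheme, and class name ImplicitToothShape are assumptions for illustration, not the DTR-Net architecture.

# Minimal sketch of an implicit (occupancy) shape network; illustrative only.
import torch
import torch.nn as nn


class ImplicitToothShape(nn.Module):
    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) query coordinates; latent: (B, latent_dim) shape code.
        latent = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.mlp(torch.cat([points, latent], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # (B, N) occupancy in [0, 1]


# Usage: sample points around ground-truth tooth surfaces and train with
# binary cross-entropy on inside/outside labels.
pts = torch.rand(2, 1024, 3)            # query points in the unit cube
code = torch.randn(2, 128)              # latent shape codes from an encoder
occ = ImplicitToothShape()(pts, code)   # (2, 1024) occupancy probabilities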


Subject(s)
Image Processing, Computer-Assisted ; Tooth ; X-Rays ; Image Processing, Computer-Assisted/methods ; Tooth/diagnostic imaging ; Radiography, Panoramic/methods ; Cone-Beam Computed Tomography/methods
3.
Med Image Anal ; 92: 103045, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38071865

ABSTRACT

Automatic and accurate dose distribution prediction plays an important role in radiotherapy planning. Although previous methods can provide promising performance, most do not consider the beam-shaped radiation of treatment delivery in clinical practice, which leads to inaccurate predictions, especially along beam paths. To solve this problem, we propose a beam-wise dose composition learning (BDCL) method for dose prediction in the context of head and neck (H&N) radiotherapy planning. Specifically, a global dose network is first utilized to predict coarse dose values over the whole image space. Then, we generate individual beam masks to decompose the coarse dose distribution into multiple field doses, called beam voters, which are further refined by a subsequent beam dose network and reassembled to form the final dose distribution. In particular, we design an overlap consistency module to keep the high-level features of different beam voters similar in their overlapping regions. To make the predicted dose distribution more consistent with the real radiotherapy plan, we also propose a dose-volume histogram (DVH) calibration process to facilitate feature learning in clinically relevant regions. We further apply an edge enhancement procedure to strengthen the learning of features extracted from dose fall-off regions. Experimental results on a public H&N cancer dataset from the AAPM OpenKBP challenge show that our method outperforms other state-of-the-art approaches by significant margins. Source code is released at https://github.com/TL9792/BDCLDosePrediction.
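
The decompose-refine-reassemble idea can be pictured with the small NumPy sketch below: a coarse dose volume is split into per-beam "voters" using binary beam masks, each voter is refined (a placeholder function stands in for the beam dose network), and the voters are averaged where beams overlap. The mask generation and the refinement step are illustrative assumptions, not the released implementation.

# Minimal sketch of beam-wise decomposition and reassembly; illustrative only.
import numpy as np


def compose_beam_doses(coarse_dose: np.ndarray,
                       beam_masks: np.ndarray,
                       refine=lambda d: d) -> np.ndarray:
    """coarse_dose: (D, H, W) dose volume; beam_masks: (B, D, H, W) binary
    beam apertures; refine: stand-in for the per-beam refinement network."""
    voters = np.stack([refine(coarse_dose * m) for m in beam_masks])  # (B, D, H, W)
    coverage = beam_masks.sum(axis=0)                                 # beams per voxel
    # Average refined voters where beams overlap; keep the coarse dose elsewhere.
    composed = voters.sum(axis=0) / np.clip(coverage, 1, None)
    return np.where(coverage > 0, composed, coarse_dose)


dose = np.random.rand(8, 64, 64)                            # toy coarse dose
masks = (np.random.rand(5, 8, 64, 64) > 0.5).astype(float)  # toy beam masks
final = compose_beam_doses(dose, masks)                     # (8, 64, 64) result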


Subject(s)
Head and Neck Neoplasms ; Radiotherapy, Intensity-Modulated ; Humans ; Radiotherapy Dosage ; Radiotherapy Planning, Computer-Assisted/methods ; Radiotherapy, Intensity-Modulated/methods ; Head and Neck Neoplasms/radiotherapy
4.
Nat Commun ; 13(1): 2096, 2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35440592

ABSTRACT

Accurate delineation of individual teeth and alveolar bones from dental cone-beam CT (CBCT) images is an essential step in digital dentistry for precision dental healthcare. In this paper, we present an AI system for efficient, precise, and fully automatic segmentation of real-patient CBCT images. Our AI system is evaluated on the largest dataset so far, i.e., a dataset of 4,215 patients (with 4,938 CBCT scans) from 15 different centers. This fully automatic AI system achieves a segmentation accuracy comparable to that of experienced radiologists (e.g., a 0.5% improvement in average Dice similarity coefficient) while being far more efficient (i.e., 500 times faster). In addition, it consistently obtains accurate results on challenging cases with various dental abnormalities, with average Dice scores of 91.5% and 93.0% for tooth and alveolar bone segmentation, respectively. These results demonstrate its potential as a powerful system to boost clinical workflows in digital dentistry.
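
The reported 91.5% and 93.0% figures use the Dice similarity coefficient, which for binary masks is twice the overlap divided by the sum of the two mask sizes. The NumPy sketch below shows that standard definition; it is not code from the paper's system.

# Standard Dice similarity coefficient for binary segmentation masks.
import numpy as np


def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks pred and gt."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))


# Example: two overlapping 30x30 squares give a Dice of about 0.871.
p = np.zeros((64, 64), dtype=bool); p[10:40, 10:40] = True
g = np.zeros((64, 64), dtype=bool); g[12:42, 12:42] = True
print(round(dice_coefficient(p, g), 3))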


Subject(s)
Image Processing, Computer-Assisted ; Tooth ; Artificial Intelligence ; Cone-Beam Computed Tomography/methods ; Humans ; Image Processing, Computer-Assisted/methods ; Tooth/diagnostic imaging ; Workflow