Results 1 - 4 of 4
1.
Ann Surg ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38881457

ABSTRACT

OBJECTIVE: To assess ChatGPT's capability to grade postoperative complications using the Clavien-Dindo classification (CDC) via artificial intelligence (AI) with natural language processing (NLP). BACKGROUND: The CDC standardizes the grading of postoperative complications. However, consistent and precise application in dynamic clinical settings is challenging. AI offers a potential solution for efficient automated grading. METHODS: ChatGPT's accuracy in defining the CDC, generating clinical examples, grading complications from existing scenarios, and interpreting complications from fictional clinical summaries was tested. RESULTS: ChatGPT 4 precisely mirrored the CDC, outperforming version 3.5. In generating clinical examples, ChatGPT 4 showed 99% agreement, with minor errors regarding urinary catheterization. For single complications, it achieved 97% accuracy. ChatGPT was able to accurately extract, grade, and analyze complications from free-text fictional discharge summaries. It demonstrated near-perfect performance when confronted with real-world discharge summaries: comparison between human and ChatGPT 4 grading showed a κ value of 0.92 (95% CI 0.82-1) (P<0.001). CONCLUSIONS: ChatGPT 4 demonstrates promising proficiency and accuracy in applying the CDC. In the future, AI has the potential to become the mainstay tool to accurately capture, extract, and analyze CDC data from clinical datasets.
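The human-versus-model agreement reported above is a Cohen's κ for two raters over the same items. A minimal sketch of that computation, using invented Clavien-Dindo grades (the example data is hypothetical, not the study's):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical Clavien-Dindo grades from a human reviewer and the model
human = ["I", "II", "IIIa", "II", "IVa", "I", "II", "IIIb"]
model = ["I", "II", "IIIa", "II", "IVa", "I", "I",  "IIIb"]
print(round(cohens_kappa(human, model), 2))  # → 0.84
```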

2.
Article in English | MEDLINE | ID: mdl-38252362

ABSTRACT

PURPOSE: Virtual reality (VR) allows for an immersive and interactive analysis of imaging data such as computed tomography (CT) and magnetic resonance imaging (MRI). The aim of this study is to assess the comprehensibility of VR anatomy and its value in assessing resectability of pancreatic ductal adenocarcinoma (PDAC). METHODS: This study assesses exposure to VR anatomy and evaluates the potential role of VR in assessing resectability of PDAC. First, volumetric abdominal CT and MRI data were displayed in an immersive VR environment. Volunteering physicians were asked to identify anatomical landmarks in VR. In the second stage, experienced clinicians were asked to identify vascular involvement in a total of 12 CT and MRI scans displaying PDAC (2 resectable, 2 borderline resectable, and 2 locally advanced tumours per modality). Results were compared to standard 2D PACS viewing. RESULTS: In VR visualisation of CT and MRI, all abdominal anatomical landmarks were recognised by all participants except the pancreas (30/34) in VR CT and the splenic artery (31/34) and common hepatic artery (18/34) in VR MRI. In VR CT, resectable, borderline resectable, and locally advanced PDAC were correctly identified in 22/24, 20/24, and 19/24 scans, respectively; in VR MRI, in 19/24, 19/24, and 21/24 scans, respectively. Interobserver agreement as measured by Fleiss' κ was 0.7 for CT and 0.4 for MRI (p < 0.001). Scans were assessed significantly more accurately in VR CT than in standard 2D PACS CT, with a median of 5.5 (IQR 4.75-6) versus 3 (IQR 2-3) correctly assessed out of 6 scans (p < 0.001). CONCLUSION: VR-enhanced visualisation of abdominal CT and MRI scan data provides intuitive handling and understanding of anatomy, might allow for more accurate staging of PDAC, and could thus become a valuable adjunct in PDAC resectability assessment in the future.
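With more than two raters, interobserver agreement is usually measured with Fleiss' κ rather than Cohen's. A minimal sketch of the statistic, with an invented ratings matrix (4 hypothetical raters classifying 5 hypothetical scans as resectable, borderline, or locally advanced):

```python
def fleiss_kappa(matrix):
    """Fleiss' kappa. matrix[i][j] = number of raters assigning item i to category j."""
    n_items = len(matrix)
    n_raters = sum(matrix[0])  # each row must sum to the number of raters
    n_categories = len(matrix[0])
    # Overall proportion of assignments falling into each category
    p_j = [sum(row[j] for row in matrix) / (n_items * n_raters)
           for j in range(n_categories)]
    # Per-item observed agreement among rater pairs
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in matrix]
    p_bar = sum(p_i) / n_items          # mean observed agreement
    p_e = sum(p * p for p in p_j)       # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Columns: resectable, borderline, locally advanced (hypothetical data)
ratings = [
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [0, 1, 3],
    [0, 0, 4],
]
print(f"{fleiss_kappa(ratings):.2f}")  # → 0.70
```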

3.
Dig Dis ; 42(1): 70-77, 2024.
Article in English | MEDLINE | ID: mdl-37956655

ABSTRACT

INTRODUCTION: Chronic pancreatitis (CP) is a relevant chronic medical problem whereby delayed presentation and poor patient understanding can cause adverse effects. The quality of patient information available on the internet about CP is not known. METHODS: A systematic review of the information about CP available online was conducted using the search term "chronic pancreatitis" in the Google search engine. The quality of the top 100 websites returned from this search term was analysed using the validated Ensuring Quality Information for Patients (EQIP) tool (maximum score 36). Additional items specific to CP were included in the website analysis. RESULTS: In total, 45 websites were eligible for analysis. The median EQIP score of the websites was 16 (interquartile range 12-19.5). The majority of websites originated from the USA and the United Kingdom, with 31 and 11 websites, respectively. Provision of additional information was inconsistent, with most websites covering information regarding aetiology and advocating alcohol and tobacco cessation, but only a few reporting on more complex issues. CONCLUSION: Internet-available information about CP is of limited quality. There is an immediate need for high-quality, patient-targeted, and informative literature accessible on the internet about this topic.
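The median and interquartile range used to summarise EQIP scores can be computed with the standard library; the scores below are invented for illustration, and quartile conventions vary between statistics packages:

```python
import statistics

# Hypothetical EQIP totals (maximum 36) for a handful of websites
eqip_scores = [14, 15, 16, 18, 19]

median = statistics.median(eqip_scores)
# quantiles(n=4) returns the three quartiles; default method is "exclusive"
q1, _, q3 = statistics.quantiles(eqip_scores, n=4)
print(f"median {median} (IQR {q1}-{q3})")  # → median 16 (IQR 14.5-18.5)
```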

4.
J Med Internet Res ; 25: e47479, 2023 06 30.
Article in English | MEDLINE | ID: mdl-37389908

ABSTRACT

BACKGROUND: ChatGPT-4 is the latest release of a novel artificial intelligence (AI) chatbot able to answer freely formulated and complex questions. In the near future, ChatGPT could become the new standard for health care professionals and patients to access medical information. However, little is known about the quality of the medical information provided by the AI. OBJECTIVE: We aimed to assess the reliability of medical information provided by ChatGPT. METHODS: Medical information provided by ChatGPT-4 on the 5 hepato-pancreatico-biliary (HPB) conditions with the highest global disease burden was measured with the Ensuring Quality Information for Patients (EQIP) tool. The EQIP tool is used to measure the quality of internet-available information and consists of 36 items divided into 3 subsections. In addition, 5 guideline recommendations per analyzed condition were rephrased as questions and input to ChatGPT, and agreement between the guidelines and the AI answer was measured by 2 authors independently. All queries were repeated 3 times to measure the internal consistency of ChatGPT. RESULTS: Five conditions were identified (gallstone disease, pancreatitis, liver cirrhosis, pancreatic cancer, and hepatocellular carcinoma). The median EQIP score across all conditions was 16 (IQR 14.5-18) out of a total of 36 items. Divided by subsection, median scores for content, identification, and structure data were 10 (IQR 9.5-12.5), 1 (IQR 1-1), and 4 (IQR 4-5), respectively. Agreement between guideline recommendations and answers provided by ChatGPT was 60% (15/25). Interrater agreement as measured by Fleiss' κ was 0.78 (P<.001), indicating substantial agreement. Internal consistency of the answers provided by ChatGPT was 100%. CONCLUSIONS: ChatGPT provides medical information of comparable quality to available static internet information. Although currently of limited quality, large language models could become the future standard for patients and health care professionals to gather medical information.


Subject(s)
Artificial Intelligence , Health Personnel , Humans , Reproducibility of Results , Internet , Language