Results 1 - 20 of 98
1.
Eur Radiol ; 34(1): 330-337, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37505252

ABSTRACT

OBJECTIVES: Provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports with natural language processing (NLP) machine learning models. METHODS: We evaluate seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports. RESULTS: Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1-score of 83.97% and an exact match score of 71.63% for answerable questions and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulations of questions and in dealing with questions on topics excluded from the training set. CONCLUSIONS: General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose to integrate our approach into the workflow of medical practitioners and researchers to extract information from radiology reports. CLINICAL RELEVANCE STATEMENT: By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients. KEY POINTS: • BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1-score) on question answering for radiology reports. • The best-performing model achieves an F1-score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer. • Additional radiology-specific pre-training of all investigated BERT models improves their performance.
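The F1 and exact-match figures quoted above are the standard SQuAD-style evaluation metrics. As a hedged illustration (the normalization details below follow a common convention and are not necessarily the authors' exact code), the two metrics can be computed like this:

```python
import re
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation, and collapse whitespace (SQuAD-style normalization)
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1 if the normalized prediction equals the normalized reference, else 0."""
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-overlap F1 between a predicted answer span and the reference answer."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)  # both empty counts as a match
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical question-answering output on a radiology report
print(exact_match("left lower lobe", "Left lower lobe."))            # 1
print(round(f1_score("the left lower lobe", "left lower lobe"), 2))  # 0.86
```

In a SQuAD 2.0 setting these per-question scores are averaged over the dataset, with unanswerable questions scored by whether the model correctly abstains.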


Subjects
Information Storage and Retrieval; Radiology; Humans; Language; Machine Learning; Natural Language Processing
2.
BMC Infect Dis ; 24(1): 799, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39118057

ABSTRACT

BACKGROUND: Assessment of artificial intelligence (AI)-based models across languages is crucial to ensure equitable access and accuracy of information in multilingual contexts. This study aimed to compare AI model efficiency in English and Arabic for infectious disease queries. METHODS: The study employed the METRICS checklist for the design and reporting of AI-based studies in healthcare. The AI models tested included ChatGPT-3.5, ChatGPT-4, Bing, and Bard. The queries comprised 15 questions on HIV/AIDS, tuberculosis, malaria, COVID-19, and influenza. The AI-generated content was assessed by two bilingual experts using the validated CLEAR tool. RESULTS: In comparing AI models' performance in English and Arabic for infectious disease queries, variability was noted. English queries showed consistently superior performance, with Bard leading, followed by Bing, ChatGPT-4, and ChatGPT-3.5 (P = .012). The same trend was observed in Arabic, albeit without statistical significance (P = .082). Stratified analysis revealed higher scores for English in most CLEAR components, notably in completeness, accuracy, appropriateness, and relevance, especially with ChatGPT-3.5 and Bard. Across the five infectious disease topics, English outperformed Arabic, except for flu queries in Bing and Bard. The four AI models' performance in English was rated as "excellent", significantly outperforming their "above-average" Arabic counterparts (P = .002). CONCLUSIONS: A disparity in AI model performance was observed between English and Arabic in response to infectious disease queries. This language variation can negatively impact the quality of health content delivered by AI models to native speakers of Arabic. We recommend that AI developers address this issue, with the ultimate goal of enhancing health outcomes.


Subjects
Artificial Intelligence; Communicable Diseases; Language; Humans; COVID-19
3.
J Med Internet Res ; 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39240144

ABSTRACT

BACKGROUND: FHIR (Fast Healthcare Interoperability Resources) has been proposed to enable health data interoperability. So far, its applicability has been demonstrated for selected research projects with limited data. OBJECTIVE: Here, we designed and implemented a conceptual medical intelligence framework to leverage real-world care data for clinical decision-making. METHODS: A Python package for the utilization of multimodal FHIR data (FHIRPACK) was developed and pioneered in five real-world clinical use cases, i.e., myocardial infarction (MI), stroke, diabetes, sepsis, and prostate cancer (PC). Patients were identified based on ICD-10 codes, and outcomes were derived from laboratory tests, prescriptions, procedures, and diagnostic reports. Results were provided as browser-based dashboards. RESULTS: For 2022, 1,302,988 patient encounters were analyzed. MI: In 72.7% of cases (N=261) medication regimens fulfilled guideline recommendations. Stroke: Out of 1,277 patients, 165 patients received thrombolysis and 108 thrombectomy. Diabetes: In 443,866 serum glucose and 16,180 HbA1c measurements from 35,494 unique patients, the prevalence of dysglycemic findings was 39% (N=13,887). Among those with dysglycemia, diagnosis was coded in 44.2% (N=6,138) of the patients. Sepsis: In 1,803 patients, Staphylococcus epidermidis was the primarily isolated pathogen (N=773, 28.9%) and piperacillin/tazobactam was the primarily prescribed antibiotic (N=593, 37.2%). PC: Three out of 54 patients who received radical prostatectomy were identified as cases with PSA persistence or biochemical recurrence. CONCLUSIONS: Leveraging FHIR data through large-scale analytics can enhance healthcare quality and improve patient outcomes across five clinical specialties. 
We identified i) sepsis patients requiring less broad antibiotic therapy, ii) patients with myocardial infarction who could benefit from statin and antiplatelet therapy, iii) stroke patients with longer than recommended times to intervention, iv) patients with hyperglycemia who could benefit from specialist referral and v) PC patients with early increases in cancer markers.
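FHIRPACK's API is not detailed in the abstract, but the underlying idea (identifying patient cohorts from FHIR resources by ICD-10 code) can be illustrated with a minimal, self-contained sketch over a toy FHIR Bundle. The resource contents and patient references below are hypothetical example data, not from the study:

```python
import json

# A minimal FHIR Bundle with two Condition resources (hypothetical example data)
bundle_json = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Condition",
                  "subject": {"reference": "Patient/123"},
                  "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10",
                                       "code": "I21.4"}]}}},
    {"resource": {"resourceType": "Condition",
                  "subject": {"reference": "Patient/456"},
                  "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10",
                                       "code": "E11.9"}]}}}
  ]
}
"""

def patients_with_icd10(bundle, code_prefix):
    """Return subject references of Condition resources whose ICD-10 code starts with code_prefix."""
    patients = set()
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        if resource.get("resourceType") != "Condition":
            continue
        for coding in resource.get("code", {}).get("coding", []):
            if "icd-10" in coding.get("system", "") and coding.get("code", "").startswith(code_prefix):
                patients.add(resource["subject"]["reference"])
    return patients

bundle = json.loads(bundle_json)
print(patients_with_icd10(bundle, "I21"))  # {'Patient/123'}  (the MI cohort)
```

In practice such Bundles would come from a FHIR server's search endpoint, and outcomes would then be joined from Observation, MedicationRequest, and DiagnosticReport resources, as the study describes.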

4.
BMC Med Educ ; 24(1): 250, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38500112

ABSTRACT

OBJECTIVE: The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS: An interdisciplinary team reflecting the various specialties involved in the treatment of head and neck cancer developed a porcine pseudotumor model of the tongue in which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants who each resected four pseudotumors on a tongue, resulting in a total of 40 resected pseudotumors. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Resection margins (minimum and maximum) were assessed macroscopically and compared, alongside self-assessed margins and resection time, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS: The model haptically resembles OC (3.0 ± 0.5; 4-point Likert scale), can be visualized with medical imaging, and can be macroscopically evaluated immediately after resection, providing feedback. Although participants (3.2 ± 0.4) tended to agree that they had resected the pseudotumor with an ideal safety margin (10 mm), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. Simultaneously, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection.
Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, and they could see it being implemented in training (3.7 ± 0.5). CONCLUSION: The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is ideal for training in OC biopsy and resection and could be incorporated into dental, medical, or oncologic surgery curricula. Future studies should evaluate the long-term training effects using this model and its potential impact on improving patient outcomes.


Subjects
Margins of Excision; Mouth Neoplasms; Animals; Humans; Biopsy; Cadaver; Head; Mouth Neoplasms/surgery; Mouth Neoplasms/pathology; Swine
5.
Clin Oral Investig ; 28(7): 381, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38886242

ABSTRACT

OBJECTIVES: Tooth extraction is one of the most frequently performed medical procedures. The indication is based on the combination of clinical and radiological examination and individual patient parameters and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision. Moreover, visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could be used as a decision support tool to provide a score of tooth extractability. MATERIAL AND METHODS: Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped with different margins from PANs and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via class activation mapping using CAMERAS. RESULTS: The ROC-AUC for the best AI model to discriminate teeth worthy of preservation was 0.901 with a 2% margin on dental images. In contrast, the average ROC-AUC for dentists was only 0.797. At a tooth-extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, while the dentist evaluation reached only 0.589. CONCLUSION: AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and AI performance improves with increasing contextual information. CLINICAL RELEVANCE: AI could help monitor at-risk teeth and reduce errors in indications for extractions.
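The ROC-AUC values reported above can be read as the probability that a randomly chosen extraction-worthy tooth receives a higher score than a randomly chosen preservable one. A minimal illustration of that pairwise definition (the labels and scores below are made up, not study data):

```python
def roc_auc(labels, scores):
    """ROC-AUC as the probability that a random positive outranks a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical extraction-worthiness scores for six teeth (1 = extraction-worthy)
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(roc_auc(labels, scores))  # 8/9 ≈ 0.889
```

The same counting view explains why PR-AUC, unlike ROC-AUC, depends on the 19.1% prevalence mentioned in the abstract: precision mixes positives and negatives at their actual class ratio.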


Subjects
Artificial Intelligence; Radiography, Panoramic; Tooth Extraction; Humans; Dentists; Female; Male; Adult
6.
J Med Syst ; 48(1): 55, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780820

ABSTRACT

Designing implants for large and complex cranial defects is a challenging task, even for professional designers. Current efforts to automate the design process have focused mainly on convolutional neural networks (CNNs), which have produced state-of-the-art results in reconstructing synthetic defects. However, existing CNN-based methods have been difficult to translate to clinical practice in cranioplasty, as their performance on large and complex cranial defects remains unsatisfactory. In this paper, we present a statistical shape model (SSM) built directly on the segmentation masks of the skulls, represented as binary voxel occupancy grids, and evaluate it on several cranial implant design datasets. Results show that, while CNN-based approaches outperform the SSM on synthetic defects, they are inferior to the SSM when it comes to large, complex, and real-world defects. Experienced neurosurgeons evaluated the implants generated by the SSM as feasible for clinical use after minor manual corrections. Datasets and the SSM model are publicly available at https://github.com/Jianningli/ssm .
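The core of a statistical shape model is a mean shape plus principal modes of variation learned from training shapes; a defective skull is then projected onto the model and back, which fills the defect with statistically plausible anatomy. A toy PCA-based sketch of that idea (the grid size, random data, and number of modes are arbitrary; this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "skulls": flattened 8x8x8 occupancy grids (hypothetical stand-ins for segmentation masks)
n_samples, n_voxels = 20, 8 * 8 * 8
base = (rng.random(n_voxels) > 0.5).astype(float)
data = np.clip(base + 0.1 * rng.standard_normal((n_samples, n_voxels)), 0, 1)

# Statistical shape model: mean shape + principal modes of variation (via SVD)
mean_shape = data.mean(axis=0)
centered = data - mean_shape
_, _, components = np.linalg.svd(centered, full_matrices=False)
k = 5  # number of retained modes

def reconstruct(shape, n_modes=k):
    """Project a (possibly defective) shape onto the model's modes and back."""
    coeffs = components[:n_modes] @ (shape - mean_shape)
    return mean_shape + components[:n_modes].T @ coeffs

defective = data[0].copy()
defective[:100] = 0.0       # simulate a cranial defect
restored = reconstruct(defective)
print(restored.shape)       # (512,)
```

The implant would then be the difference between the restored shape and the defective input, thresholded back to a binary mask.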


Subjects
Neural Networks, Computer; Skull; Humans; Skull/surgery; Skull/anatomy & histology; Skull/diagnostic imaging; Models, Statistical; Image Processing, Computer-Assisted/methods; Plastic Surgery Procedures/methods; Prostheses and Implants
7.
Eur J Nucl Med Mol Imaging ; 50(7): 2196-2209, 2023 06.
Article in English | MEDLINE | ID: mdl-36859618

ABSTRACT

PURPOSE: The aim of this study was to systematically evaluate the effect of thresholding algorithms used in computer vision for the quantification of prostate-specific membrane antigen positron emission tomography (PET) derived tumor volume (PSMA-TV) in patients with advanced prostate cancer. The results were validated with respect to the prognostication of overall survival in patients with advanced-stage prostate cancer. MATERIALS AND METHODS: A total of 78 patients who underwent [177Lu]Lu-PSMA-617 radionuclide therapy from January 2018 to December 2020 were retrospectively included in this study. [68Ga]Ga-PSMA-11 PET images, acquired prior to radionuclide therapy, were used for the analysis of thresholding algorithms. All PET images were first analyzed semi-automatically using a pre-evaluated, proprietary software solution as the baseline method. Subsequently, five histogram-based thresholding methods and two local adaptive thresholding methods that are well established in computer vision were applied to quantify molecular tumor volume. The resulting whole-body molecular tumor volumes were validated with respect to the prognostication of overall patient survival as well as their statistical correlation to the baseline methods and their performance on standardized phantom scans. RESULTS: The whole-body PSMA-TVs, quantified using different thresholding methods, demonstrate a high positive correlation with the baseline methods. We observed the highest correlation with generalized histogram thresholding (GHT) (Pearson r (r), p value (p): r = 0.977, p < 0.001) and Sauvola thresholding (r = 0.974, p < 0.001) and the lowest correlation with Multiotsu (r = 0.877, p < 0.001) and Yen thresholding methods (r = 0.878, p < 0.001). The median survival time of all patients was 9.87 months (95% CI [9.3 to 10.13]). 
Stratification by median whole-body PSMA-TV resulted in a median survival time from 11.8 to 13.5 months for the patient group with lower tumor burden and 6.5 to 6.6 months for the patient group with higher tumor burden. The patient group with lower tumor burden had significantly higher probability of survival (p < 0.00625) in eight out of nine thresholding methods (Fig. 2); those methods were SUVmax50 (p = 0.0038), SUV ≥3 (p = 0.0034), Multiotsu (p = 0.0015), Yen (p = 0.0015), Niblack (p = 0.001), Sauvola (p = 0.0001), Otsu (p = 0.0053), and Li thresholding (p = 0.0053). CONCLUSION: Thresholding methods commonly used in computer vision are promising tools for the semiautomatic quantification of whole-body PSMA-TV in [68Ga]Ga-PSMA-11-PET. The proposed algorithm-driven thresholding strategy is less arbitrary and less prone to biases than thresholding with predefined values, potentially improving the application of whole-body PSMA-TV as an imaging biomarker.
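Of the histogram-based methods compared above, Otsu's is the most widely known: it picks the threshold that maximizes the between-class variance of the intensity histogram. A compact sketch of how such a threshold could be derived from SUV-like values (the data below are synthetic and purely illustrative):

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: choose the histogram cut maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)             # weight of the lower class at each candidate cut
    mu = np.cumsum(p * centers)   # cumulative mean
    mu_t = mu[-1]                 # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    between = np.nan_to_num(between)  # undefined at w0 = 0 or 1
    return centers[np.argmax(between)]

# Bimodal SUV-like data: background uptake vs. lesion uptake (synthetic)
rng = np.random.default_rng(1)
suv = np.concatenate([rng.normal(1.0, 0.3, 5000), rng.normal(6.0, 1.0, 500)])
t = otsu_threshold(suv)
tumor_volume_voxels = int((suv > t).sum())
print(t, tumor_volume_voxels)
```

Multiplying the supra-threshold voxel count by the voxel volume then yields the molecular tumor volume; local adaptive methods such as Sauvola or Niblack compute an analogous threshold per neighborhood rather than globally.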


Subjects
Prostatic Neoplasms, Castration-Resistant; Prostatic Neoplasms; Humans; Male; Gallium Radioisotopes; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography; Prostate-Specific Antigen; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology; Prostatic Neoplasms, Castration-Resistant/pathology; Retrospective Studies; Tumor Burden
8.
BMC Med Imaging ; 23(1): 174, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907876

ABSTRACT

BACKGROUND: With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to show a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS: Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) images were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images. The change in tumor volume can be calculated with this map. The neural networks were a form of Wasserstein GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip the co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS: The model achieved an AUC score of 0.87 for tumor change. We also introduced modified RANO criteria, for which an accuracy of 66% was achieved. CONCLUSIONS: We show a novel deep learning approach that uses data from just one patient to train deep neural networks to monitor tumor change. Using two different datasets to evaluate the results shows the method's potential to generalize.
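The downstream step, turning tumor masks at two timepoints into a volume change, is straightforward once voxel spacing is known. A hedged sketch (the masks and spacing below are synthetic, and the per-patient GAN itself is not reproduced here):

```python
import numpy as np

def tumor_volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Tumor volume in millilitres from a binary mask and voxel spacing in mm (1 mL = 1000 mm^3)."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Synthetic masks at two timepoints: a 10^3-voxel cube growing to 12^3 voxels
t0 = np.zeros((50, 50, 50), dtype=bool); t0[20:30, 20:30, 20:30] = True
t1 = np.zeros((50, 50, 50), dtype=bool); t1[18:30, 18:30, 18:30] = True
change = tumor_volume_ml(t1) - tumor_volume_ml(t0)
print(round(change, 3))  # 0.728
```

With anisotropic MRI spacing (e.g. 1 x 1 x 3 mm slices) the same function applies unchanged, which is why the spacing argument matters in practice.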


Subjects
Glioblastoma; Neural Networks, Computer; Humans; Magnetic Resonance Imaging; Brain; Glioblastoma/diagnostic imaging; Image Processing, Computer-Assisted/methods
9.
Eur Radiol ; 32(12): 8769-8776, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35788757

ABSTRACT

OBJECTIVES: Over the course of their treatment, patients often switch hospitals, requiring staff at the new hospital to import external imaging studies to their local database. In this study, the authors present MOdality Mapping and Orchestration (MOMO), a Deep Learning-based approach to automate this mapping process by combining metadata analysis and a neural network ensemble. METHODS: A set of 11,934 imaging series with existing anatomical labels was retrieved from the PACS database of the local hospital to train an ensemble of neural networks (DenseNet-161 and ResNet-152), which process radiological images and predict the type of study they belong to. We developed an algorithm that automatically extracts relevant metadata from imaging studies, regardless of their structure, and combines it with the neural network ensemble, forming a powerful classifier. A set of 843 anonymized external studies from 321 hospitals was hand-labeled to assess performance. We tested several variations of this algorithm. RESULTS: MOMO achieves 92.71% accuracy and 2.63% minor errors (at 99.29% predictive power) on the external study classification task, outperforming both a commercial product (82.86% accuracy, 1.36% minor errors, 96.20% predictive power) and a pure neural network ensemble (72.69% accuracy, 10.3% minor errors, 99.05% predictive power) performing the same task. We find that the highest performance is achieved by an algorithm that combines all information into one vote-based classifier. CONCLUSION: Deep Learning combined with metadata matching is a promising and flexible approach for the automated classification of external DICOM studies for PACS archiving. KEY POINTS: • The algorithm can successfully identify 76 medical study types across seven modalities (CT, X-ray angiography, radiographs, MRI, PET (+CT/MRI), ultrasound, and mammograms). • The algorithm outperforms a commercial product performing the same task by a significant margin (> 9% accuracy gain). 
• The performance of the algorithm increases through the application of Deep Learning techniques.
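The abstract does not specify how MOMO weighs metadata matches against network predictions, so as a general illustration of a vote-based classifier, the two sources might be combined into one weighted tally like this (the weights and study-type labels are hypothetical):

```python
from collections import Counter

def vote_classify(metadata_votes, network_votes, weights=(2.0, 1.0)):
    """Combine metadata-derived and network-derived study-type votes into one weighted tally.

    The weighting is a hypothetical choice for illustration; MOMO's actual
    combination rule is not described in the abstract.
    """
    tally = Counter()
    for label in metadata_votes:
        tally[label] += weights[0]
    for label in network_votes:
        tally[label] += weights[1]
    if not tally:
        return None
    return tally.most_common(1)[0][0]

# DICOM header keywords suggest an abdominal CT; two ensemble members disagree
print(vote_classify(["CT Abdomen"], ["CT Abdomen", "CT Thorax"]))  # CT Abdomen
```

Giving metadata a higher weight reflects the abstract's finding that the metadata-plus-ensemble combination outperforms the pure neural network ensemble.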


Subjects
Deep Learning; Humans; Neural Networks, Computer; Algorithms; Databases, Factual; Magnetic Resonance Imaging/methods
10.
J Digit Imaging ; 35(2): 340-355, 2022 04.
Article in English | MEDLINE | ID: mdl-35064372

ABSTRACT

Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of anatomical structures and pathologies of all types. In this context, we introduce Studierfenster ( www.studierfenster.at ): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing these with two ground-truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture, which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields in applying the already existing functionalities or future additional implementations of further image processing applications. An example could be the processing of medical acquisitions such as CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more and more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
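Of the metrics mentioned above, the Dice score is the simplest: twice the overlap of two binary masks divided by the sum of their sizes. A minimal reference implementation (the toy masks are for illustration only):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two overlapping 6x6 squares on a 10x10 grid (36 px each, 25 px overlap)
gt = np.zeros((10, 10), dtype=bool); gt[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:9, 3:9] = True
print(round(dice(gt, pred), 3))  # 0.694
```

The Hausdorff distance complements it by measuring the worst-case boundary disagreement rather than volumetric overlap.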


Subjects
Cloud Computing; Image Processing, Computer-Assisted; Humans; Magnetic Resonance Imaging; Neural Networks, Computer; Tomography, X-Ray Computed
11.
J Digit Imaging ; 32(6): 1008-1018, 2019 12.
Article in English | MEDLINE | ID: mdl-31485953

ABSTRACT

As is common routine in tumor resections, surgeons rely on local examinations of the removed tissues and on the swiftly made microscopy findings of the pathologist, which are based on intraoperatively taken tissue probes. This approach may imply an extended duration of the operation, increased effort for the medical staff, and longer occupancy of the operating room (OR). Mixed reality technologies, and particularly augmented reality, have already been applied in surgical scenarios with positive initial outcomes. Nonetheless, these methods have used manual or marker-based registration. In this work, we design an application for marker-less registration of PET-CT information for a patient. The algorithm combines facial landmarks extracted from an RGB video stream and the so-called Spatial Mapping API provided by the Microsoft HoloLens HMD. The accuracy of the system is compared with a marker-based approach, and the opinions of field specialists were collected during a demonstration. A survey based on the standard ISO-9241/110 was designed for this purpose. The measurements show an average positioning error along the three axes of (x, y, z) = (3.3 ± 2.3, -4.5 ± 2.9, -9.3 ± 6.1) mm. Compared with the marker-based approach, this shows an increase in the positioning error of approx. 3 mm along two dimensions (x, y), which might be due to the absence of explicit markers. The application was positively evaluated by the specialists; they showed interest in continuing the work and contributed to the development process with constructive criticism.


Subjects
Augmented Reality; Imaging, Three-Dimensional/methods; Positron Emission Tomography Computed Tomography/methods; Surgery, Computer-Assisted/methods; Surgery, Oral/methods; Algorithms; Humans; Pilot Projects; Reproducibility of Results
12.
Clin Nephrol ; 90(2): 125-141, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29578402

ABSTRACT

AIMS: New chemotherapeutic agents prolong survival of patients with pancreatic ductal adenocarcinoma (PDAC). Although their incidence is rising, patients with end-stage renal disease (ESRD) requiring hemodialysis (HD) are not included in the phase III trials evaluating the effects of these chemotherapies. Many experts recommend applying chemotherapy after HD using a reduced dose. Alternatively, the concept of prior dosing allows for the application of dialyzable chemotherapeutic drugs using a normal dose, with an HD followed shortly after to mimic normal renal function. In this work, we provide guidance for clinicians on how to use chemotherapy in patients with PDAC on HD and how to identify substances suitable for prior dosing. MATERIALS AND METHODS: We systematically searched PubMed, from inception to September 2016, for published studies describing patients with ESRD on HD who received chemotherapies commonly applied in PDAC, including gemcitabine, fluorouracil (5-FU), capecitabine, oxaliplatin, irinotecan, docetaxel, erlotinib, sunitinib, S-1, and afatinib. Applied dosages, described toxicities, application time relative to HD, and pharmacokinetic measurements of the drug and its metabolites were assessed. Quantitative analysis of the drug plasma concentrations, including half-life during and in between HD and fraction of the drug eliminated during HD, were assessed. RESULTS: We identified 56 studies describing 128 patients with ESRD undergoing HD during chemotherapeutic treatment. Quantitative pharmacokinetic analysis revealed that the following substances are dialyzable and thus suitable for application using the prior-dosing method: gemcitabine, 5-FU, oxaliplatin, irinotecan, and S-1. CONCLUSION: This work supports the application of dialyzable chemotherapeutic agents in patients with PDAC in standard dose when HD is performed shortly after the infusion.


Subjects
Antineoplastic Agents/therapeutic use; Antineoplastic Combined Chemotherapy Protocols/therapeutic use; Carcinoma, Pancreatic Ductal/drug therapy; Kidney Failure, Chronic/therapy; Pancreatic Neoplasms/drug therapy; Renal Dialysis; Afatinib; Camptothecin/analogs & derivatives; Camptothecin/therapeutic use; Carcinoma, Pancreatic Ductal/complications; Deoxycytidine/analogs & derivatives; Deoxycytidine/therapeutic use; Docetaxel; Fluorouracil/therapeutic use; Humans; Irinotecan; Kidney Failure, Chronic/complications; Organoplatinum Compounds/therapeutic use; Oxaliplatin; Pancreatic Neoplasms/complications; Quinazolines/therapeutic use; Taxoids/therapeutic use; Gemcitabine
13.
J Biomed Inform ; 55: 124-31, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25882923

ABSTRACT

Surgical navigation systems have experienced tremendous development over the past decades, minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In an AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels, and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of surgery. With the use of this system, which involves the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion-tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were 0.809 ± 0.05 mm and 1.038° ± 0.05°, respectively, which is sufficient to meet clinical requirements.


Subjects
Computer Graphics/instrumentation; Image Enhancement/instrumentation; Imaging, Three-Dimensional/instrumentation; Surgery, Computer-Assisted/instrumentation; User-Computer Interface; Equipment Design; Equipment Failure Analysis; Head; Head Protective Devices; Humans; Reproducibility of Results; Sensitivity and Specificity
14.
J Imaging Inform Med ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862851

ABSTRACT

3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine. While the fast development of AI enhances imaging and analysis, commonly used visualization methods lag far behind. Recent research used extended reality (XR) for perceiving 3D images with visual depth perception and touch, but used restrictive haptic devices. While unrestricted touch benefits volumetric data examination, implementing natural haptic interaction with XR is challenging. The research question is whether a multisensory XR application with intuitive haptic interaction adds value and should be pursued. In a study, 24 experts in biomedical imaging from research and medicine explored 3D medical shapes with 3 applications: a multisensory virtual reality (VR) prototype using haptic gloves, a simple VR prototype using controllers, and a standard PC application. Results of standardized questionnaires showed no significant differences among the application types regarding usability and no significant difference between the two VR applications regarding presence. Participants agreed with statements that VR visualizations provide better depth information, that using the hands instead of controllers simplifies data exploration, that the multisensory VR prototype allows intuitive data exploration, and that it is beneficial over traditional data examination methods. While most participants named manual interaction as the best aspect, they also found it the most in need of improvement. We conclude that a multisensory XR application with improved manual interaction adds value for volumetric biomedical data examination. We will proceed with our open-source research project ISH3DE (Intuitive Stereoptic Haptic 3D Data Exploration) to serve medical education, therapeutic decisions, surgery preparations, or research data analysis.

15.
Comput Methods Programs Biomed ; 245: 108013, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262126

ABSTRACT

The recent release of ChatGPT, a chat bot research project/product of natural language processing (NLP) by OpenAI, stirs up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience in artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications, for general readers, healthcare professionals as well as NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. It is found through the review that the current release of ChatGPT has achieved only moderate or 'passing' performance in a variety of tests, and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.


Subjects
Artificial Intelligence, Delivery of Health Care, Natural Language Processing, Humans
16.
Int J Numer Method Biomed Eng ; : e3860, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39209324

ABSTRACT

The article presents a semi-automatic approach to generating structured hexahedral meshes of patient-specific aortas afflicted by aortic dissection. The condition manifests itself as the formation of two blood flow channels in the aorta as a result of a tear in the inner layers of the aortic wall. Subsequently, the morphology of the aorta is greatly impacted, making the task of domain discretization highly challenging. The meshing algorithm presented herein is automatic for the individual lumina, whereas the tears require user interaction. Starting from an input (triangle) surface mesh, we construct an implicit surface representation as well as a topological skeleton, which provides a basis for the generation of a block-structure. Thereafter, the mesh generation is performed via transfinite maps. The meshes are structured and fully hexahedral, exhibit good quality, and reliably match the original surface. As they are generated with computational fluid dynamics in mind, a fluid flow simulation is performed to verify their usefulness. Moreover, since the approach is based on valid block-structures, the meshes can be made very coarse (around 1000 elements for an entire aortic dissection domain) and thus promote the use of solvers based on the geometric multigrid method, which typically relies on a hierarchy of coarser meshes.
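The transfinite maps mentioned in this abstract are standard block parameterizations. A minimal 2D sketch (a Coons patch, with illustrative boundary curves chosen here and not taken from the article) shows how a structured grid on the unit square is mapped onto a curved block, analogous to one cross-sectional block of a lumen:

```python
import numpy as np

# Transfinite (Coons) interpolation: maps the unit square onto a region
# bounded by four parametric curves. The boundary curves below are
# illustrative assumptions, not geometry from the article.

def coons_patch(c_bottom, c_top, c_left, c_right, u, v):
    """Evaluate a Coons patch at parameters u, v in [0, 1].

    c_* are callables returning 2D points; corners must be consistent,
    e.g. c_bottom(0) == c_left(0) and c_bottom(1) == c_right(0).
    """
    # Linear blends of the two pairs of opposite boundary curves
    ruled_uv = (1 - v) * c_bottom(u) + v * c_top(u)
    ruled_vu = (1 - u) * c_left(v) + u * c_right(v)
    # Bilinear correction built from the four corner points
    corners = ((1 - u) * (1 - v) * c_bottom(0) + u * (1 - v) * c_bottom(1)
               + (1 - u) * v * c_top(0) + u * v * c_top(1))
    return ruled_uv + ruled_vu - corners

# Example: map a 4x4 structured grid onto a quarter annulus
bottom = lambda u: np.array([1 + u, 0.0])                 # radial edge on the x-axis
top    = lambda u: np.array([0.0, 1 + u])                 # radial edge on the y-axis
left   = lambda v: np.array([np.cos(v * np.pi / 2), np.sin(v * np.pi / 2)])      # inner arc
right  = lambda v: 2 * np.array([np.cos(v * np.pi / 2), np.sin(v * np.pi / 2)])  # outer arc

grid = np.array([[coons_patch(bottom, top, left, right, u, v)
                  for u in np.linspace(0, 1, 4)]
                 for v in np.linspace(0, 1, 4)])
```

In a full hexahedral mesher the same construction is applied per block of the block-structure, in three parameters instead of two.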

17.
Article in English | MEDLINE | ID: mdl-39213271

ABSTRACT

Interactive segmentation is a crucial research area in medical image analysis aiming to boost the efficiency of costly annotations by incorporating human feedback. This feedback takes the form of clicks, scribbles, or masks and allows for iterative refinement of the model output so as to efficiently guide the system towards the desired behavior. In recent years, deep learning-based approaches have propelled results to a new level causing a rapid growth in the field with 121 methods proposed in the medical imaging domain alone. In this review, we provide a structured overview of this emerging field featuring a comprehensive taxonomy, a systematic review of existing methods, and an in-depth analysis of current practices. Based on these contributions, we discuss the challenges and opportunities in the field. For instance, we find that there is a severe lack of comparison across methods which needs to be tackled by standardized baselines and benchmarks.
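The iterative feedback loop these methods share can be illustrated with a toy sketch (not any specific method from the review): a simulated annotator repeatedly clicks the first remaining error, and a small brush corrects the prediction there until it matches the ground truth.

```python
import numpy as np

# Toy model of click-based interactive refinement. Real methods feed the
# clicks into a network that re-predicts the whole mask; this sketch only
# models the user-in-the-loop correction cycle itself.

def brush_correct(pred, target, y, x, radius=2):
    """Correct the prediction inside a disk around the user's click.

    The simulated annotator paints only over wrong pixels, as a careful
    human with a brush tool would.
    """
    yy, xx = np.mgrid[:pred.shape[0], :pred.shape[1]]
    disk = (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    out = pred.copy()
    fix = disk & (pred != target)
    out[fix] = target[fix]
    return out

def next_click(pred, target):
    """Simulated user: click the first remaining error (row-major order)."""
    errors = np.argwhere(pred != target)
    return tuple(errors[0]) if len(errors) else None

target = np.zeros((20, 20), dtype=int)
target[5:15, 5:15] = 1                 # ground-truth square
pred = np.zeros_like(target)           # empty initial prediction

clicks = 0
while (click := next_click(pred, target)) is not None:
    pred = brush_correct(pred, target, *click)
    clicks += 1                        # each click strictly reduces the error
```

Since every click fixes at least the clicked pixel and never introduces new errors, the loop terminates in far fewer clicks than dense pixel-wise annotation would require.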

18.
Med Image Anal ; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose a self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies stating that cross-scale associations exist in the image patterns between the same case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive from CT images the "gold standard" information contained in the corresponding pathological images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals) to compare our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.


Subjects
Lung Neoplasms, Tomography, X-Ray Computed, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/classification, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted/methods, Algorithms
19.
JMIR Serious Games ; 12: e52785, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292499

ABSTRACT

At the Worldwide Developers Conference in June 2023, Apple introduced the Vision Pro. The Apple Vision Pro (AVP) is a mixed reality headset; more specifically, it is a virtual reality device with an additional video see-through capability. The video see-through capability turns the AVP into an augmented reality (AR) device. The AR feature is enabled by streaming the real world via cameras on the (virtual reality) screens in front of the user's eyes. This is, of course, not unique and is similar to other devices, such as the Varjo XR-3 (Varjo Technologies Oy). Nevertheless, the AVP has some interesting features, such as an inside-out screen that can show the headset wearer's eyes to "outsiders," and a button on the top, called the "digital crown," that allows a seamless blend of digital content with the user's physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile, compared to the Varjo XR-3. This could actually come closer to "The Ultimate Display," which Ivan Sutherland had already sketched in 1965. After a great response from the media and social networks to the release, we were able to test and review the new AVP ourselves in March 2024. Including an expert survey with 13 of our colleagues after testing the AVP in our institute, this Viewpoint explores whether the AVP can overcome clinical challenges that AR especially still faces in the medical domain; we also go beyond this and discuss whether the AVP could support clinicians in essential tasks to allow them to spend more time with their patients.

20.
Comput Methods Programs Biomed ; 243: 107912, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37981454

ABSTRACT

BACKGROUND AND OBJECTIVE: We present a novel deep learning-based skull stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich, complex-valued k-space. METHODS: Using four datasets from different institutions with a total of around 200,000 MRI slices, we show that our network can perform skull stripping on the raw data of MRIs while preserving the phase information, which no other skull stripping algorithm is able to work with. For two of the datasets, skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain is used as the ground truth, whereas the third and fourth datasets come with hand-annotated brain segmentations. RESULTS: Results on all four datasets were very similar to the ground truth (DICE scores of 92%-99% and Hausdorff distances of under 5.5 pixels). Results on slices above the eye region reach DICE scores of up to 99%, whereas the accuracy drops in regions around and below the eyes, with partially blurred output. The output of k-Strip often has smoothed edges at the demarcation to the skull. Binary masks are created with an appropriate threshold. CONCLUSION: With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Besides preserving valuable information for further diagnostics, this approach makes immediate anonymization of patient data possible, even before it is transformed into the image domain. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
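The central point of the abstract, that operating on complex-valued k-space preserves phase information which magnitude-image pipelines discard, can be sketched in a few lines. The circular "brain mask" below is a toy stand-in and has nothing to do with the paper's network:

```python
import numpy as np

# An MRI slice is acquired as complex-valued k-space data. A method that
# predicts masked k-space directly keeps the phase intact; here we emulate
# the ground-truth pipeline (mask in image domain, re-transform) to show
# that the complex data round-trips losslessly.

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# Forward transform: image domain -> k-space (centred, as commonly stored)
kspace = np.fft.fftshift(np.fft.fft2(image))

# Toy circular "brain mask", applied in the image domain
yy, xx = np.mgrid[:64, :64]
brain_mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
masked_image = image * brain_mask
masked_kspace = np.fft.fftshift(np.fft.fft2(masked_image))

# Reconstructing from the masked k-space recovers magnitude AND phase
recon = np.fft.ifft2(np.fft.ifftshift(masked_kspace))
phase = np.angle(recon[brain_mask])   # phase survives, unlike in
                                      # magnitude-only skull stripping
```

A network trained to map `kspace` to `masked_kspace` would thus strip the skull without ever leaving the complex frequency domain, which is the feasibility this study demonstrates.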


Subjects
Algorithms, Skull, Humans, Skull/diagnostic imaging, Brain/diagnostic imaging, Brain/pathology, Image Processing, Computer-Assisted/methods, Head, Magnetic Resonance Imaging/methods