1.
Stud Health Technol Inform ; 313: 215-220, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682533

ABSTRACT

BACKGROUND: Tele-ophthalmology is gaining recognition for its role in improving eye care accessibility via cloud-based solutions. The Google Cloud Platform (GCP) Healthcare API enables secure and efficient management of medical image data such as high-resolution ophthalmic images. OBJECTIVES: This study investigates cloud-based solutions' effectiveness in tele-ophthalmology, with a focus on GCP's role in data management, annotation, and integration for a novel imaging device. METHODS: Leveraging the Integrating the Healthcare Enterprise (IHE) Eye Care profile, the cloud platform was utilized as a PACS and integrated with the Open Health Imaging Foundation (OHIF) Viewer for image display and annotation capabilities for ophthalmic images. RESULTS: The setup of a GCP DICOM storage and the OHIF Viewer facilitated remote image data analytics. Prolonged loading times and relatively large individual image file sizes indicated system challenges. CONCLUSION: Cloud platforms have the potential to ease distributed data analytics, as needed for efficient tele-ophthalmology scenarios in research and clinical practice, by providing scalable and secure image management solutions.


Subject(s)
Cloud Computing , Ophthalmology , Telemedicine , Humans , Radiology Information Systems , Information Storage and Retrieval/methods
2.
Radiology ; 311(1): e232133, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38687216

ABSTRACT

Background The performance of publicly available large language models (LLMs) remains unclear for complex clinical tasks. Purpose To evaluate the agreement between human readers and LLMs for Breast Imaging Reporting and Data System (BI-RADS) categories assigned based on breast imaging reports written in three languages and to assess the impact of discordant category assignments on clinical management. Materials and Methods This retrospective study included reports for women who underwent MRI, mammography, and/or US for breast cancer screening or diagnostic purposes at three referral centers. Reports with findings categorized as BI-RADS 1-5 and written in Italian, English, or Dutch were collected between January 2000 and October 2023. Board-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), assigned BI-RADS categories using only the findings described by the original radiologists. Agreement between human readers and LLMs for BI-RADS categories was assessed using the Gwet agreement coefficient (AC1 value). Frequencies were calculated for changes in BI-RADS category assignments that would affect clinical management (ie, BI-RADS 0 vs BI-RADS 1 or 2 vs BI-RADS 3 vs BI-RADS 4 or 5) and compared using the McNemar test. Results Across 2400 reports, agreement between the original and reviewing radiologists was almost perfect (AC1 = 0.91), while agreement between the original radiologists and GPT-4, GPT-3.5, and Bard was moderate (AC1 = 0.52, 0.48, and 0.42, respectively). 
Across human readers and LLMs, differences were observed in the frequency of BI-RADS category upgrades or downgrades that would result in changed clinical management (118 of 2400 [4.9%] for human readers, 611 of 2400 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinical management (37 of 2400 [1.5%] for human readers, 435 of 2400 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001). Conclusion LLMs achieved moderate agreement with human reader-assigned BI-RADS categories across reports written in three languages but also yielded a high percentage of discordant BI-RADS categories that would negatively impact clinical management. © RSNA, 2024 Supplemental material is available for this article.


Subject(s)
Breast Neoplasms , Humans , Female , Retrospective Studies , Breast Neoplasms/diagnostic imaging , Middle Aged , Radiology Information Systems/statistics & numerical data , Magnetic Resonance Imaging/methods , Mammography/methods , Breast/diagnostic imaging , Aged , Adult , Language , Ultrasonography, Mammary/methods
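The Gwet agreement coefficient (AC1) reported in this study can be computed for two raters with a short routine. The following is an illustrative sketch only, not code from the study; the function name and toy labels are invented for the example.

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement between two raters (nominal scale)."""
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)
    # Observed agreement: share of items given the same category by both raters.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement per Gwet: (1/(q-1)) * sum_k pi_k * (1 - pi_k),
    # where pi_k is the mean marginal proportion of category k across the two raters.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    pe = sum(
        ((count_a[k] + count_b[k]) / (2 * n))
        * (1 - (count_a[k] + count_b[k]) / (2 * n))
        for k in categories
    ) / (q - 1)
    return (pa - pe) / (1 - pe)

# Toy BI-RADS categories for two readers: identical labels give AC1 = 1.0.
print(gwet_ac1([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # → 1.0
```

Unlike Cohen's kappa, AC1 stays stable when category prevalence is highly skewed, which is one reason it is used for agreement studies like this one.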
4.
Comput Methods Programs Biomed ; 248: 108113, 2024 May.
Article in English | MEDLINE | ID: mdl-38479148

ABSTRACT

BACKGROUND AND OBJECTIVE: In recent years, Artificial Intelligence (AI), and in particular Deep Neural Networks (DNN), became a relevant research topic in biomedical image segmentation due to the availability of more and more datasets along with the establishment of well-known competitions. Despite the popularity of DNN-based segmentation on the research side, these techniques are almost unused in daily clinical practice, even though they could effectively support the physician during the diagnostic process. Apart from the issues related to the explainability of the predictions of a neural model, such systems are not integrated in the diagnostic workflow, and a standardization of their use is needed to achieve this goal. METHODS: This paper presents IODeep, a new DICOM Information Object Definition (IOD) aimed at storing both the weights and the architecture of a DNN already trained on a particular image dataset that is labeled with respect to the acquisition modality, the anatomical region, and the disease under investigation. RESULTS: The IOD architecture is presented along with a DNN selection algorithm from the PACS server based on the labels outlined above, and a simple PACS viewer purposely designed for demonstrating the effectiveness of the DICOM integration, while no modifications are required on the PACS server side. A service-based architecture in support of the entire workflow has also been implemented. CONCLUSION: IODeep ensures full integration of a trained AI model in a DICOM infrastructure, and it also enables a scenario where a trained model can be either fine-tuned with hospital data or trained in a federated learning scheme shared by different hospitals. In this way, AI models can be tailored to the real data produced by a Radiology ward, thus improving the physician's decision-making process. Source code is freely available at https://github.com/CHILab1/IODeep.git.


Subject(s)
Deep Learning , Radiology Information Systems , Artificial Intelligence , Computers , Software
5.
Nihon Hoshasen Gijutsu Gakkai Zasshi ; 80(4): 385-389, 2024 Apr 20.
Article in Japanese | MEDLINE | ID: mdl-38403594

ABSTRACT

The Ministry of Health, Labor and Welfare mandated the creation of a business continuity plan (BCP) for disaster key hospitals on March 31, 2017. Because a failure of the hospital information system (HIS) would also disrupt the picture archiving and communication system (PACS), we assumed that building a new network for radiological examination images was necessary. The purpose of this study was to investigate whether building a new network for radiological examination images is necessary in an emergency. Using wireless fidelity (Wi-Fi), a new network consisting of one image server and two tablet terminals, A and B, was constructed. The study measured the portable image transfer time at various stages of the network. The results were as follows: transfer time from the mobile X-ray unit to the image server was 4.12±0.86 s, from the image server to tablet device A was 5.14±0.71 s, and from the image server to tablet device B was 7.32±1.66 s. Therefore, the new network configuration can provide a reliable means of accessing radiological images during emergencies in which the HIS and PACS may be disrupted or fail.


Subject(s)
Radiology Information Systems , Disasters , Hospital Information Systems , Disaster Planning/methods , Humans
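Transfer times such as 4.12±0.86 s are a mean ± sample standard deviation over repeated trials. A minimal sketch of collecting and summarizing such timings, assuming a hypothetical `send` callable standing in for the actual DICOM push:

```python
import statistics
import time

def time_transfer(send, payload, trials=10):
    """Time a transfer callable over several trials; return (mean, sample SD) in seconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        send(payload)  # hypothetical stand-in for the image push over Wi-Fi
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# With a no-op sender the measured times are near zero; real use would wrap the
# actual transfer from the mobile X-ray unit to the image server or tablets.
mean_s, sd_s = time_transfer(lambda _: None, b"image bytes")
```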
6.
IEEE J Biomed Health Inform ; 28(5): 3079-3089, 2024 May.
Article in English | MEDLINE | ID: mdl-38421843

ABSTRACT

Medical imaging-based report writing for effective diagnosis in radiology is time-consuming and can be error-prone for inexperienced radiologists. Automatic reporting helps radiologists avoid missed diagnoses and saves valuable time. Recently, transformer-based medical report generation has become prominent in capturing long-term dependencies of sequential data with its attention mechanism. Nevertheless, input features obtained from the traditional visual extractor of conventional transformers do not capture the spatial and semantic information of an image. As a result, the transformer is unable to capture fine-grained details and may not produce detailed descriptive reports of radiology images. Therefore, we propose a spatio-semantic visual extractor (SSVE) to capture multi-scale spatial and semantic information from radiology images. Here, we incorporate two types of networks into a ResNet 101 backbone architecture: (i) a deformable network at the intermediate layer of ResNet 101, which utilizes deformable convolutions to obtain spatially invariant features, and (ii) a semantic network at the final layer of the backbone architecture, which uses dilated convolutions to extract rich multi-scale semantic information. These network representations are then fused to encode fine-grained details of radiology images. Our proposed model outperforms existing works on two radiology report datasets, i.e., IU X-ray and MIMIC-CXR.


Subject(s)
Semantics , Humans , Radiology Information Systems , Neural Networks, Computer , Algorithms
7.
Jpn J Radiol ; 42(5): 476-486, 2024 May.
Article in English | MEDLINE | ID: mdl-38291269

ABSTRACT

AIM: To retrospectively explore whether systematic training in the use of Liver Imaging Reporting and Data System (LI-RADS) v2018 on computed tomography (CT) can improve the interobserver agreements and performances in LR categorization for focal liver lesions (FLLs) among different radiologists. MATERIALS AND METHODS: A total of 18 visiting radiologists and the liver multiphase CT images of 70 hepatic observations in 63 patients at high risk of HCC were included in this study. The LI-RADS v2018 training procedure included three thematic lectures, with an interval of 1 month. After each seminar, the radiologists had 1 month to adopt the algorithm into their daily work. The interobserver agreements and performances in LR categorization for FLLs among the radiologists before and after training were compared. RESULTS: After training, the interobserver agreements in classifying the LR categories for all radiologists were significantly increased for most LR categories (P < 0.001), except for LR-1 (P = 0.053). After systematic training, the areas under the curve (AUCs) for LR categorization performance for all participants were significantly increased for most LR categories (P < 0.001), except for LR-1 (P = 0.062). CONCLUSION: Systematic training in the use of the LI-RADS can improve the interobserver agreements and performances in LR categorization for FLLs among radiologists with different levels of experience.


Subject(s)
Liver Neoplasms , Observer Variation , Tomography, X-Ray Computed , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods , Liver Neoplasms/diagnostic imaging , Female , Male , Middle Aged , Aged , Radiology Information Systems , Liver/diagnostic imaging , Radiologists , Carcinoma, Hepatocellular/diagnostic imaging , Adult , Reproducibility of Results
8.
Curr Probl Diagn Radiol ; 53(3): 329-331, 2024.
Article in English | MEDLINE | ID: mdl-38246794

ABSTRACT

The inclusion of comparison studies within radiology reports is an important, standard practice. Despite this, we identified that after-hours preliminary reports rendered by trainees within our institution often omitted reference to comparison studies for pediatric inpatient portable radiographs. We addressed this issue through a quality improvement project targeting pediatric radiographs. Key interventions included modifying the structured reports by removing default text in the comparison field, designating the comparison field as mandatory, and restructuring the report templates to remove extraneous information. We also initiated a targeted educational campaign. 392 reports before and 267 reports after intervention (total 732 reports) were evaluated to determine the number of reports lacking comparison information when comparisons were available. Following the interventions, there was a statistically significant decrease in incomplete reports from 12.5% to 6%. This project highlights the success of utilizing structured reporting to improve the quality of trainee reports.


Subject(s)
Radiology Information Systems , Research Report , Child , Humans , Quality Improvement , Documentation
9.
Radiol Artif Intell ; 6(2): e230205, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38265301

ABSTRACT

This study evaluated the ability of generative large language models (LLMs) to detect speech recognition errors in radiology reports. A dataset of 3233 CT and MRI reports was assessed by radiologists for speech recognition errors. Errors were categorized as clinically significant or not clinically significant. Performances of five generative LLMs (GPT-3.5-turbo, GPT-4, text-davinci-003, Llama-v2-70B-chat, and Bard) were compared in detecting these errors, using manual error detection as the reference standard. Prompt engineering was used to optimize model performance. GPT-4 demonstrated high accuracy in detecting clinically significant errors (precision, 76.9%; recall, 100%; F1 score, 86.9%) and not clinically significant errors (precision, 93.9%; recall, 94.7%; F1 score, 94.3%). Text-davinci-003 achieved F1 scores of 72% and 46.6% for clinically significant and not clinically significant errors, respectively. GPT-3.5-turbo obtained 59.1% and 32.2% F1 scores, while Llama-v2-70B-chat scored 72.8% and 47.7%. Bard showed the lowest accuracy, with F1 scores of 47.5% and 20.9%. GPT-4 effectively identified challenging errors of nonsense phrases and internally inconsistent statements. Longer reports, resident dictation, and overnight shifts were associated with higher error rates. In conclusion, advanced generative LLMs show potential for automatic detection of speech recognition errors in radiology reports. Keywords: CT, Large Language Model, Machine Learning, MRI, Natural Language Processing, Radiology Reports, Speech, Unsupervised Learning Supplemental material is available for this article.


Subject(s)
Camelids, New World , Radiology Information Systems , Radiology , Speech Perception , Animals , Speech , Speech Recognition Software , Reproducibility of Results
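The F1 scores reported above are the harmonic mean of precision and recall. A small helper (not from the study) reproduces, for example, GPT-4's clinically significant F1 of 86.9% from its precision of 76.9% and recall of 100%:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# GPT-4, clinically significant errors: precision 76.9%, recall 100% -> F1 ≈ 86.9%.
print(round(f1_score(0.769, 1.00), 3))  # → 0.869
```

The same formula recovers the 94.3% F1 quoted for not clinically significant errors from precision 93.9% and recall 94.7%.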
10.
AJR Am J Roentgenol ; 222(4): e2330573, 2024 04.
Article in English | MEDLINE | ID: mdl-38230901

ABSTRACT

GPT-4 outperformed a radiology domain-specific natural language processing model in classifying imaging findings from chest radiograph reports, both with and without predefined labels. Prompt engineering for context further improved performance. The findings indicate a role for large language models to accelerate artificial intelligence model development in radiology by automating data annotation.


Subject(s)
Natural Language Processing , Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Radiology Information Systems
11.
Clin Imaging ; 107: 110069, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38237327

ABSTRACT

In a traditionally male-dominated field, the journey of Dr. Andriole represents a pioneering path in the realms of radiology and medical imaging informatics. Her career has not only reshaped the landscape of radiology but also championed diversity, equity, and inclusion in healthcare technology. Through a comprehensive exploration of Dr. Andriole's career trajectory, we navigate her transition from analog to digital radiology, her influential role in pioneering picture archiving communication systems (PACS), and her dedication to mentorship and education in the field. Dr. Andriole's journey underscores the growing influence of women in radiology and informatics, exemplified by her Gold Medal accolades from esteemed organizations. Dr. Andriole's career serves as a beacon for aspiring radiologists and informaticians, emphasizing the significance of passion, mentorship, and collaborative teamwork in advancing the fields of radiology and informatics.


Subject(s)
Medical Informatics , Radiology Information Systems , Radiology , Male , Female , Humans , Radiology/education , Radiography , Medical Informatics/methods , Diagnostic Imaging
13.
Acad Radiol ; 31(5): 1799-1804, 2024 May.
Article in English | MEDLINE | ID: mdl-38103973

ABSTRACT

Large language models (LLMs) such as ChatGPT and Bard have emerged as powerful tools in medicine, showcasing strong results in tasks such as radiology report translations and research paper drafting. While their implementation in clinical practice holds promise, their response accuracy remains variable. This study aimed to evaluate the accuracy of ChatGPT and Bard in clinical decision-making based on the American College of Radiology Appropriateness Criteria for various cancers. Both LLMs were evaluated in terms of their responses to open-ended (OE) and select-all-that-apply (SATA) prompts. Furthermore, the study incorporated prompt engineering (PE) techniques to enhance the accuracy of LLM outputs. The results revealed similar performances between ChatGPT and Bard on OE prompts, with ChatGPT exhibiting marginally higher accuracy in SATA scenarios. The introduction of PE also marginally improved LLM outputs in OE prompts but did not enhance SATA responses. The results highlight the potential of LLMs in aiding clinical decision-making processes, especially when guided by optimally engineered prompts. Future studies in diverse clinical situations are imperative to better understand the impact of LLMs in radiology.


Subject(s)
Algorithms , Early Detection of Cancer , Humans , Early Detection of Cancer/methods , Clinical Decision-Making/methods , Neoplasms/diagnostic imaging , Radiology Information Systems
14.
Curr Probl Diagn Radiol ; 53(1): 1-16, 2024.
Article in English | MEDLINE | ID: mdl-37783620

ABSTRACT

The surging demand for diagnostic imaging has highlighted inefficiencies with traditional input devices. Radiologists, using conventional mice and keyboards, grapple with cumbersome shortcuts leading to fatigue, errors, and possible injuries. Gaming keyboards, designed for gamers' precision and adaptability, feature customizable keys that simplify complex tasks into single-touch actions, offering radiologists a more efficient workflow with less physical and mental strain. Incorporating these keyboards could revolutionize radiologists' engagement with PACS. The customizable feature significantly trims time spent searching, ushering in swifter, ergonomic interactions. This manuscript delineates a guide for adapting a Logitech gaming keyboard to radiology needs, from profile creations and shortcut mapping to intricate macro setups. Although the guide uses a Logitech gaming keyboard for demonstration, it is designed to be intuitive, helping users adapt to their unique needs across different modalities, subspecialties, and various radiology viewer software. Furthermore, its fundamental concepts are transferrable to other mouse brands or models with similar customization software. As radiology pivots toward utmost efficiency, gaming keyboards emerge as invaluable assets, promising significant workflow enhancements.


Subject(s)
Radiology Information Systems , Radiology , Video Games , Humans , Workflow , Ergonomics , Software
15.
Eur J Radiol ; 168: 111134, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37806192

ABSTRACT

RATIONALE AND OBJECTIVES: This study aims to validate a new radiology reporting style using eye tracking to maximize radiologist interpretation time, increase accuracy, and minimize dictation time, ultimately providing a clinically relevant, concise, and accurate reporting style. MATERIALS AND METHODS: The positive findings only dictation style using a podcast stand-alone microphone (n = 76) was compared with the standard check-list dictation style using a handheld microphone (n = 81). Experienced board-certified radiologists used each style for various imaging modalities. The number of voice recognition corrections per case was tracked. Eye-tracking glasses captured eye movement to document dictation, interpretation, and total examination times. This device also generated thermal heat maps for each style. The statistical difference between the two methods was assessed via descriptive analysis and inferential statistics. RESULTS: Eye tracking revealed that the new positive findings dictation style led to a noteworthy shift in radiologists' visual attention, with reduced heat map overlaying the reporting software compared to the standard check-list style, indicating greater focus on medical images. Cases with at least one voice recognition correction significantly decreased using the positive findings dictation style versus the standard check-list style (5.26% vs. 14.81%; p = 0.0240). The positive findings dictation style significantly decreased average dictation time (16.54 s vs. 29.39 s; p = 0.0003) without impacting interpretation time (70.90 s vs. 64.30 s; p = 0.7799) or total examination time (87.45 s vs. 93.69 s; p = 0.3756) compared to the standard style. CONCLUSION: Positive findings only dictation style significantly decreased dictation time and enhanced accuracy without compromising total interpretation time.


Subject(s)
Eye-Tracking Technology , Radiology Information Systems , Humans , Software , Radiologists , Time
16.
J Xray Sci Technol ; 31(6): 1315-1332, 2023.
Article in English | MEDLINE | ID: mdl-37840464

ABSTRACT

BACKGROUND: Dental panoramic imaging plays a pivotal role in dentistry for diagnosis and treatment planning. However, correctly positioning patients can be challenging for technicians due to the complexity of the imaging equipment and variations in patient anatomy, leading to positioning errors. These errors can compromise image quality and potentially result in misdiagnoses. OBJECTIVE: This research aims to develop and validate a deep learning model capable of accurately and efficiently identifying multiple positioning errors in dental panoramic imaging. METHODS AND MATERIALS: This retrospective study used 552 panoramic images selected from a hospital Picture Archiving and Communication System (PACS). We defined six types of errors (E1-E6) namely, (1) slumped position, (2) chin tipped low, (3) open lip, (4) head turned to one side, (5) head tilted to one side, and (6) tongue against the palate. First, six Convolutional Neural Network (CNN) models were employed to extract image features, which were then fused using transfer learning. Next, a Support Vector Machine (SVM) was applied to create a classifier for multiple positioning errors, using the fused image features. Finally, the classifier performance was evaluated using 3 indices of precision, recall rate, and accuracy. RESULTS: Experimental results show that the fusion of image features with six binary SVM classifiers yielded high accuracy, recall rates, and precision. Specifically, the classifier achieved an accuracy of 0.832 for identifying multiple positioning errors. CONCLUSIONS: This study demonstrates that six SVM classifiers effectively identify multiple positioning errors in dental panoramic imaging. The fusion of extracted image features and the employment of SVM classifiers improve diagnostic precision, suggesting potential enhancements in dental imaging efficiency and diagnostic accuracy. Future research should consider larger datasets and explore real-time clinical application.


Subject(s)
Deep Learning , Radiology Information Systems , Humans , Retrospective Studies , Diagnostic Imaging , Neural Networks, Computer
17.
Curr Probl Diagn Radiol ; 52(6): 456-463, 2023.
Article in English | MEDLINE | ID: mdl-37783619

ABSTRACT

The increasing demand for diagnostic imaging has added to the radiologists' workload, highlighting the shortcomings of conventional computer mice. Radiologists grapple with inefficiencies from frequent mouse clicks and keyboard shortcuts required for various PACS functions. These inefficiencies contribute to cognitive strain, errors, and repetitive strain injuries. High-performance gaming mice, known for their precision in the gaming world, offer multiple custom buttons and superior tracking. These features can streamline radiology tasks. Utilizing a gaming mouse tailored for radiology tasks can substantially enhance efficiency. Our guide offers a step-by-step approach to harnessing the gaming mouse's capabilities for radiology tasks, ensuring radiologists can enhance their workflow and minimize injury risks. Although the guide uses a Logitech gaming mouse for demonstration, it is designed to be intuitive, helping users adapt to their unique needs across different modalities, subspecialties, and various radiology viewer software. Importantly, its fundamental concepts are transferrable to other mouse brands or models with similar customization software.


Subject(s)
Radiology Information Systems , Radiology , Video Games , Humans , Workflow , Radiography
18.
Tomography ; 9(5): 1829-1838, 2023 10 06.
Article in English | MEDLINE | ID: mdl-37888737

ABSTRACT

Digital Imaging and Communications in Medicine (DICOM) is an international standard that defines a format for storing medical images and a protocol to enable and facilitate data communication among medical imaging systems. The DICOM standard has been instrumental in transforming the medical imaging world over the last three decades. Its adoption has been a significant experience for manufacturers, healthcare users, and research scientists. In this review, thirty years after introducing the standard, we discuss the innovation, advantages, and limitations of adopting the DICOM and its possible future directions.


Subject(s)
Radiology Information Systems , Software , Diagnostic Imaging
19.
Comput Methods Programs Biomed ; 242: 107787, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37717524

ABSTRACT

BACKGROUND AND MOTIVATION: Digital pathology has been evolving over the last years, offering significant workflow advantages that have fostered its adoption in professional environments. Patient clinical and image data are readily available in remote data banks that can be consumed efficiently over standard communication technologies. The appearance of new imaging techniques and advanced artificial intelligence algorithms has significantly reduced the burden on medical professionals by speeding up the screening process. Despite these advancements, the usage of digital pathology in professional environments has been slowed down by poor interoperability between services, resulting from a lack of standard interfaces and integrative solutions. This work addresses this issue by proposing a cloud-based digital pathology platform built on standard and open interfaces. METHODS: The work proposes and describes a vendor-neutral platform that provides interfaces for managing digital slides and medical reports, and for integrating digital image analysis services compatible with existing standards. The solution integrates the open-source, plugin-based Dicoogle PACS for interoperability and extensibility, which grants the proposed solution great feature customization. RESULTS: The solution was developed in collaboration with iPATH research project partners, including validation by medical pathologists. The result is a pure Web collaborative framework that supports both research and production environments. A total of 566 digital slides from different pathologies were successfully uploaded to the platform. Using the integration interfaces, a mitosis detection algorithm was successfully installed into the platform, and it was trained with 2400 annotations collected from breast carcinoma images. CONCLUSION: Interoperability is a key factor when discussing digital pathology solutions, as it facilitates their integration into existing institutions' information systems.
Moreover, it improves data sharing and integration of third-party services such as image analysis services, which have become relevant in today's digital pathology workflow. The proposed solution fully embraces the DICOM standard for digital pathology, presenting an interoperable cloud-based solution that provides great feature customization thanks to its extensible architecture.


Subject(s)
Hospital Information Systems , Radiology Information Systems , Humans , Artificial Intelligence , Diagnostic Imaging , Algorithms