1.
Radiology; 311(2): e233270, 2024 May.
Article in English | MEDLINE | ID: mdl-38713028

ABSTRACT

Background Generating radiologic findings from chest radiographs is pivotal in medical image analysis. The emergence of OpenAI's generative pretrained transformer, GPT-4 with vision (GPT-4V), has opened new perspectives on the potential for automated image-text pair generation. However, the application of GPT-4V to real-world chest radiography is yet to be thoroughly examined. Purpose To investigate the capability of GPT-4V to generate radiologic findings from real-world chest radiographs. Materials and Methods In this retrospective study, 100 chest radiographs with free-text radiology reports were annotated by a cohort of radiologists (two attending physicians and three residents) to establish a reference standard. Of 100 chest radiographs, 50 were randomly selected from the National Institutes of Health (NIH) chest radiographic data set, and 50 were randomly selected from the Medical Imaging and Data Resource Center (MIDRC). The performance of GPT-4V at detecting imaging findings from each chest radiograph was assessed in the zero-shot setting (where it operates without prior examples) and few-shot setting (where it operates with two examples). Its outcomes were compared with the reference standard with regard to clinical conditions and their corresponding codes in the International Statistical Classification of Diseases, Tenth Revision (ICD-10), including the anatomic location (hereafter, laterality). Results In the zero-shot setting, in the task of detecting ICD-10 codes alone, GPT-4V attained an average positive predictive value (PPV) of 12.3%, average true-positive rate (TPR) of 5.8%, and average F1 score of 7.3% on the NIH data set, and an average PPV of 25.0%, average TPR of 16.8%, and average F1 score of 18.2% on the MIDRC data set. 
When both the ICD-10 codes and their corresponding laterality were considered, GPT-4V produced an average PPV of 7.8%, average TPR of 3.5%, and average F1 score of 4.5% on the NIH data set, and an average PPV of 10.9%, average TPR of 4.9%, and average F1 score of 6.4% on the MIDRC data set. With few-shot learning, GPT-4V showed improved performance on both data sets. When contrasting zero-shot and few-shot learning, there were improved average TPRs and F1 scores in the few-shot setting, but there was not a substantial increase in the average PPV. Conclusion Although GPT-4V has shown promise in understanding natural images, it had limited effectiveness in interpreting real-world chest radiographs. © RSNA, 2024 Supplemental material is available for this article.
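The per-report metrics quoted above combine in the standard way; a minimal sketch (the counts below are hypothetical, not taken from the study):

```python
def detection_metrics(tp, fp, fn):
    """PPV (precision), TPR (recall), and F1 from per-report counts of
    true-positive, false-positive, and false-negative findings."""
    ppv = tp / (tp + fp) if tp + fp else 0.0
    tpr = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * tpr / (ppv + tpr) if ppv + tpr else 0.0
    return ppv, tpr, f1

# Hypothetical report: 4 predicted ICD-10 codes, 1 matching a reference
# standard that lists 5 codes
ppv, tpr, f1 = detection_metrics(tp=1, fp=3, fn=4)
print(round(ppv, 3), round(tpr, 3), round(f1, 3))  # 0.25 0.2 0.222
```

The study averages such per-report values across each data set.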


Subjects
Radiography, Thoracic; Humans; Radiography, Thoracic/methods; Retrospective Studies; Female; Male; Middle Aged; Radiographic Image Interpretation, Computer-Assisted/methods; Aged; Adult
2.
Can Assoc Radiol J; 75(1): 82-91, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37439250

ABSTRACT

Purpose: To develop and evaluate machine learning models that automatically identify the body part(s) imaged, the axis of imaging, and the presence of intravenous contrast material in a CT series of images. Methods: This retrospective study included 6955 series from 1198 studies (501 female, 697 male, mean age 56.5 years) obtained between January 2010 and September 2021. Each series was annotated by a trained board-certified radiologist with labels consisting of 16 body parts, 3 imaging axes, and whether an intravenous contrast agent was used. The studies were randomly assigned to the training, validation, and testing sets in proportions of 70%, 20%, and 10%, respectively, to develop a 3D deep neural network for each classification task. External validation was conducted with a total of 35,272 series from 7 publicly available datasets. The classification accuracy for each series was independently assessed for each task to evaluate model performance. Results: The accuracies for identifying the body parts, imaging axes, and the presence of intravenous contrast were 96.0% (95% CI: 94.6%, 97.2%), 99.2% (95% CI: 98.5%, 99.7%), and 97.5% (95% CI: 96.4%, 98.5%), respectively. The generalizability of the models was demonstrated through external validation, with accuracies of 89.7%-97.8%, 98.6%-100%, and 87.8%-98.6% for the same tasks. Conclusions: The developed models demonstrated high performance on both internal and external testing in identifying key aspects of a CT series.
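The 70/20/10 assignment described above is done at the study level, so series from one study never straddle splits; one way this could be sketched (the seed and helper name are illustrative assumptions):

```python
import random

def split_studies(study_ids, fractions=(0.7, 0.2, 0.1), seed=42):
    """Shuffle and split at the study level so that all series belonging
    to one study land in the same partition (no train/test leakage)."""
    ids = list(study_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(fractions[0] * len(ids))
    n_val = int(fractions[1] * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_studies(range(1198))
print(len(train), len(val), len(test))  # 838 239 121
```

Splitting on study IDs rather than on individual series is what keeps the evaluation honest when a study contributes several series.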


Subjects
Deep Learning; Male; Humans; Female; Middle Aged; Retrospective Studies; Human Body; Machine Learning; Tomography, X-Ray Computed/methods; Contrast Media
3.
J Magn Reson Imaging; 58(4): 1153-1160, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36645114

ABSTRACT

BACKGROUND: Total kidney volume (TKV) is an important biomarker for assessing kidney function, especially for autosomal dominant polycystic kidney disease (ADPKD). However, TKV measurements from a single MRI pulse sequence have limited reproducibility, ± ~5%, similar to ADPKD annual kidney growth rates. PURPOSE: To improve TKV measurement reproducibility on MRI by extending artificial intelligence algorithms to automatically segment kidneys on T1-weighted, T2-weighted, and steady state free precession (SSFP) sequences in axial and coronal planes and averaging measurements. STUDY TYPE: Retrospective training, prospective testing. SUBJECTS: Three hundred ninety-seven patients (356 with ADPKD, 41 without): 75% for training and 25% for validation, 40 ADPKD patients for testing, and 17 ADPKD patients for assessing reproducibility. FIELD STRENGTH/SEQUENCE: T2-weighted single-shot fast spin echo (T2), SSFP, and T1-weighted 3D spoiled gradient echo (T1) at 1.5 and 3 T. ASSESSMENT: A 2D U-Net segmentation algorithm was trained on images from all sequences. Five observers independently measured each kidney volume manually on axial T2 and using model-assisted segmentations on all sequences and image plane orientations for two MRI exams in two sessions separated by 1-3 weeks to assess reproducibility. Manual and model-assisted segmentation times were recorded. STATISTICAL TESTS: Bland-Altman, Shapiro-Wilk (normality assessment), Pearson's chi-squared (categorical variables); Dice similarity coefficient, intraclass correlation coefficient, and concordance correlation coefficient for analyzing TKV reproducibility. A P value < 0.05 was considered statistically significant. RESULTS: In 17 ADPKD subjects, model-assisted segmentations of axial T2 images were significantly faster than manual segmentations (2:49 minutes vs. 11:34 minutes), with no significant absolute percent difference in TKV (5.9% vs. 5.3%, P = 0.88) between scans 1 and 2. 
Absolute percent differences between the two scans for model-assisted segmentations on other sequences were 5.5% (axial T1), 4.5% (axial SSFP), 4.1% (coronal SSFP), and 3.2% (coronal T2). Averaging measurements from all five model-assisted segmentations significantly reduced absolute percent difference to 2.5%, further improving to 2.1% after excluding an outlier. DATA CONCLUSION: Measuring TKV on multiple MRI pulse sequences in coronal and axial planes is practical with deep learning model-assisted segmentations and can improve TKV measurement reproducibility more than 2-fold in ADPKD. EVIDENCE LEVEL: 2 TECHNICAL EFFICACY: Stage 1.
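Averaging the five per-sequence TKV estimates before comparing scans is the reproducibility step this abstract hinges on; a toy sketch with hypothetical volumes (not study data):

```python
def averaged_tkv(per_sequence_tkvs):
    """Average TKV (mL) across model-assisted segmentations of the five
    sequence/plane combinations."""
    return sum(per_sequence_tkvs) / len(per_sequence_tkvs)

def abs_percent_diff(tkv_scan1, tkv_scan2):
    """Absolute percent difference between two scans, relative to their mean."""
    return abs(tkv_scan1 - tkv_scan2) / ((tkv_scan1 + tkv_scan2) / 2) * 100

# Hypothetical per-sequence TKVs (mL): axial T2, axial T1, axial SSFP,
# coronal SSFP, coronal T2, for two MRI exams of the same subject
scan1 = [1510, 1495, 1520, 1488, 1502]
scan2 = [1470, 1530, 1498, 1515, 1492]
print(round(abs_percent_diff(averaged_tkv(scan1), averaged_tkv(scan2)), 2))  # 0.13
```

Independent per-sequence errors partially cancel in the average, which is why the pooled estimate is more reproducible than any single sequence.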


Subjects
Polycystic Kidney, Autosomal Dominant; Humans; Polycystic Kidney, Autosomal Dominant/diagnostic imaging; Retrospective Studies; Prospective Studies; Reproducibility of Results; Artificial Intelligence; Kidney/diagnostic imaging; Magnetic Resonance Imaging/methods
4.
J Biomed Inform; 132: 104139, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35811026

ABSTRACT

Accurate identification of the presence, absence, or possibility of relevant entities in clinical notes is important for healthcare professionals to quickly understand crucial clinical information. This introduces the task of assertion classification: correctly identifying the assertion status of an entity in unstructured clinical notes. Recent rule-based and machine-learning approaches suffer from labor-intensive pattern engineering and severe class bias toward majority classes. To solve this problem, we propose a prompt-based learning approach that treats the assertion classification task as a masked language auto-completion problem. We evaluated the model on six datasets. Our prompt-based method achieved a micro-averaged F1 of 0.954 on the i2b2 2010 assertion dataset, a ∼1.8% improvement over previous work. In particular, our model excelled at detecting classes with few instances (few-shot). Evaluations on five external datasets showcase the strong generalizability of the prompt-based method to unseen data. To examine the rationality of our model, we further introduced two rationale faithfulness metrics: comprehensiveness and sufficiency. The results reveal that, compared with the "pre-train, fine-tune" procedure, our prompt-based model has a stronger capability of identifying comprehensive (∼63.93%) and sufficient (∼11.75%) linguistic features from free text. We further evaluated the model-agnostic explanations using LIME. The results imply better rationale agreement between our model and humans (∼71.93% average F1), demonstrating the superior trustworthiness of our model.
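The masked-language auto-completion framing can be sketched as follows; the template, verbalizer, and the `score_mask` stand-in are all illustrative assumptions, since a real system would score the mask position with a pretrained masked language model rather than a keyword heuristic:

```python
# Toy sketch of prompt-based assertion classification: the entity's
# assertion status is framed as a cloze (fill-in-the-[MASK]) completion.
# score_mask is a runnable stand-in for MLM token probabilities.

TEMPLATE = "{sentence} The {entity} is [MASK]."
VERBALIZER = {"present": "present", "absent": "absent", "possible": "possible"}

def score_mask(prompt, candidates):
    # Stand-in for an MLM's probability of each candidate verbalizer token.
    text = prompt.lower()
    scores = dict.fromkeys(candidates, 0.0)
    if "no " in text or "denies" in text:
        scores["absent"] = 1.0
    elif "possible" in text or "may " in text:
        scores["possible"] = 1.0
    else:
        scores["present"] = 1.0
    return scores

def classify_assertion(sentence, entity):
    prompt = TEMPLATE.format(sentence=sentence, entity=entity)
    scores = score_mask(prompt, list(VERBALIZER))
    return max(scores, key=scores.get)

print(classify_assertion("Patient denies chest pain.", "chest pain"))  # absent
```

The appeal of the prompt formulation is that the pretrained model's language head does the classification, which helps on classes with few training instances.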


Subjects
Electronic Health Records; Natural Language Processing; Humans; Linguistics; Machine Learning
5.
J Digit Imaging; 35(2): 335-339, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35018541

ABSTRACT

Preparing radiology examinations for interpretation requires prefetching relevant prior examinations and implementing hanging protocols to optimally display the examination along with comparisons. Body part is a critical piece of information to facilitate both prefetching and hanging protocols, but body part information encoded using the Digital Imaging and Communications in Medicine (DICOM) standard is widely variable, error-prone, not granular enough, or missing altogether. This results in inappropriate examinations being prefetched or relevant examinations left behind; hanging protocol optimization suffers as well. Modern artificial intelligence (AI) techniques, particularly when harnessing federated deep learning techniques, allow for highly accurate automatic detection of body part based on the image data within a radiological examination; this allows for much more reliable implementation of this categorization and workflow. Additionally, new avenues to further optimize examination viewing such as dynamic hanging protocol and image display can be implemented using these techniques.


Subjects
Artificial Intelligence; Deep Learning; Human Body; Humans; Radiography; Workflow
6.
Radiology; 299(1): E204-E213, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33399506

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic is a global health care emergency. Although reverse-transcription polymerase chain reaction testing is the reference standard method to identify patients with COVID-19 infection, chest radiography and CT play a vital role in the detection and management of these patients. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making. However, inadequate availability of a diverse annotated data set has limited the performance and generalizability of existing models. To address this unmet need, the RSNA and Society of Thoracic Radiology collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multinational, expert-annotated COVID-19 imaging data set. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. Pixel-level volumetric segmentation with clinical annotations was performed by thoracic radiology subspecialists for all COVID-19-positive thoracic CT scans. The labeling schema was coordinated with other international consensus panels and COVID-19 data annotation efforts, including the European Society of Medical Imaging Informatics, the American College of Radiology, and the American Association of Physicists in Medicine. Study-level COVID-19 classification labels for chest radiographs were annotated by three radiologists, with majority vote adjudication by board-certified radiologists. RICORD consists of 240 thoracic CT scans and 1000 chest radiographs contributed from four international sites. It is anticipated that RICORD will lead to prediction models that can demonstrate sustained performance across populations and health care systems.


Subjects
COVID-19/diagnostic imaging; Databases, Factual/statistics & numerical data; Global Health/statistics & numerical data; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods; Humans; Internationality; Radiography, Thoracic; Radiology; SARS-CoV-2; Societies, Medical; Tomography, X-Ray Computed/statistics & numerical data
7.
J Digit Imaging; 33(2): 490-496, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31768897

ABSTRACT

Pneumothorax is a potentially life-threatening condition that requires prompt recognition and often urgent intervention. In the ICU setting, large numbers of chest radiographs are performed and must be interpreted daily, which may delay diagnosis of this entity. Development of artificial intelligence (AI) techniques to detect pneumothorax could help expedite detection as well as localize and potentially quantify pneumothorax. Open image analysis competitions are useful in advancing state-of-the-art AI algorithms but generally require large expert-annotated datasets. We have annotated and adjudicated a large dataset of chest radiographs to be made public with the goal of sparking innovation in this space. Because of the cumbersome and time-consuming nature of image labeling, we explored the value of using AI models to generate annotations for review. Utilization of this machine learning annotation (MLA) technique appeared to expedite our annotation process with relatively high sensitivity at the expense of specificity. Further research is required to confirm and better characterize the value of MLAs. Our adjudicated dataset is now available for public consumption in the form of a challenge.
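The sensitivity/specificity trade-off mentioned for the MLA pre-annotations reduces to the usual confusion-matrix definitions; a sketch with hypothetical counts (not the study's figures):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical agreement between MLA pre-annotations and adjudicated labels
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=720, fp=180)
print(sens, spec)  # 0.95 0.8
```

A high-sensitivity, lower-specificity annotator is a reasonable design for pre-labeling: reviewers mostly delete false positives rather than hunt for missed pneumothoraces.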


Subjects
Crowdsourcing; Pneumothorax; Artificial Intelligence; Datasets as Topic; Humans; Machine Learning; Pneumothorax/diagnostic imaging; X-Rays
9.
Radiology; 290(2): 498-503, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30480490

ABSTRACT

Purpose The Radiological Society of North America (RSNA) Pediatric Bone Age Machine Learning Challenge was created to show an application of machine learning (ML) and artificial intelligence (AI) in medical imaging, promote collaboration to catalyze AI model creation, and identify innovators in medical imaging. Materials and Methods The goal of this challenge was to solicit individuals and teams to create an algorithm or model using ML techniques that would accurately determine skeletal age in a curated data set of pediatric hand radiographs. The primary evaluation measure was the mean absolute distance (MAD) in months, calculated as the mean of the absolute values of the difference between the model estimates and the reference standard bone ages. Results A data set consisting of 14,236 hand radiographs (12,611 training set, 1425 validation set, 200 test set) was made available to registered challenge participants. A total of 260 individuals or teams registered on the challenge website. A total of 105 submissions were uploaded from 48 unique users during the training, validation, and test phases. Almost all methods used deep neural network techniques based on one or more convolutional neural networks (CNNs). The five best results based on MAD were 4.2, 4.4, 4.4, 4.5, and 4.5 months. Conclusion The RSNA Pediatric Bone Age Machine Learning Challenge showed how a coordinated approach to solving a medical imaging problem can be successfully conducted. Future ML challenges will catalyze collaboration and development of ML tools and methods that can potentially improve diagnostic accuracy and patient care. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Siegel in this issue.
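The MAD evaluation measure described above is straightforward to compute; a sketch with hypothetical bone-age estimates (not challenge data):

```python
def mean_absolute_distance(estimates, reference):
    """MAD in months: mean absolute difference between model estimates
    and the reference-standard bone ages."""
    assert len(estimates) == len(reference)
    return sum(abs(e - r) for e, r in zip(estimates, reference)) / len(estimates)

# Hypothetical estimates vs. reference bone ages (months)
print(mean_absolute_distance([120, 96, 150, 60], [116, 100, 148, 63]))  # 3.25
```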


Subjects
Age Determination by Skeleton/methods; Image Interpretation, Computer-Assisted/methods; Machine Learning; Radiography/methods; Algorithms; Child; Databases, Factual; Female; Hand Bones/diagnostic imaging; Humans; Male
10.
J Digit Imaging; 32(1): 81-90, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30264216

ABSTRACT

Residents have a limited time to be trained. Although having a highly variable caseload should be beneficial for resident training, residents do not necessarily get a uniform distribution of cases. By developing a dashboard where residents and their attendings can track the procedures they have performed and the cases they have seen, we hope to give residents greater insight into their training and into where gaps in their training may be occurring. By taking advantage of modern advances in natural language processing (NLP) techniques, we process medical records and generate statistics describing each resident's progress so far. We have built the described system, which is live within the NYP ecosystem. By creating better tracking, we hope that caseloads can be shifted to better close any individual gaps in training. One of the educational pain points for radiology residency is the assignment of cases to match a well-balanced curriculum. By illuminating a resident's historical cases, we can better assign future cases for a better educational experience.


Subjects
Curriculum; Education, Medical, Graduate/methods; Electronic Health Records; Internship and Residency/methods; Radiology/education; Humans
11.
J Digit Imaging; 32(5): 897, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30771051

ABSTRACT

The paper below was originally published without open access but has been republished with open access.

13.
Radiology; 287(3): 816-823, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29533723

ABSTRACT

Purpose To determine whether the increased dentate nucleus signal intensity following six or more doses of a linear gadolinium-based contrast agent (GBCA) (gadopentetate dimeglumine) changes at follow-up examinations performed with a macrocyclic GBCA (gadobutrol). Materials and Methods This retrospective study included 13 patients with increased dentate nucleus signal intensity following at least six (range, 6-18) gadopentetate dimeglumine administrations who then underwent at least 12 months of follow-up imaging with multiple (range, 3-29) gadobutrol-enhanced magnetic resonance (MR) examinations. Dentate nucleus-to-pons and dentate nucleus-to-cerebellar peduncle signal intensity ratios were measured by two radiologists blinded to all patient information, and changes were analyzed by using the paired t test and linear regression. Results The mean dentate nucleus-to-pons and dentate nucleus-to-cerebellar peduncle signal intensity ratios increased after gadopentetate dimeglumine administration, from 0.98 ± 0.03 to 1.10 ± 0.03 (P < .0001) and from 0.98 ± 0.03 to 1.09 ± 0.02 (P < .0001), respectively. With gadobutrol, the mean dentate nucleus-to-pons and dentate nucleus-to-cerebellar peduncle signal intensity ratios decreased to 1.03 ± 0.03 and 1.02 ± 0.04, respectively (P < .0001). Using a mixed-effects linear regression that allowed each patient a different y-intercept, the mean dentate nucleus-to-pons and dentate nucleus-to-cerebellar peduncle signal intensity ratios decreased with follow-up time (dentate nucleus-to-pons: slope = -0.2% per month [95% confidence interval: -0.0024, -0.0015], R2 = 0.58, P < .0001 for nonzero slope; dentate nucleus-to-cerebellar peduncle: slope = -0.2% per month [95% confidence interval: -0.0024, -0.0015], R2 = 0.61, P < .0001 for nonzero slope). 
Conclusion Dentate signal intensity increased with at least six gadopentetate dimeglumine-enhanced MR examinations and decreased after switching from a linear (gadopentetate dimeglumine) to a macrocyclic (gadobutrol) GBCA. © RSNA, 2018 Online supplemental material is available for this article.
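The reported decline of roughly 0.2% per month is a regression slope over follow-up time; a single-patient ordinary-least-squares sketch (the study used a mixed-effects model, and the ratios below are hypothetical, perfectly linear values):

```python
def slope_per_month(months, ratios):
    """Ordinary least-squares slope of a signal-intensity ratio vs. time."""
    n = len(months)
    mx, my = sum(months) / n, sum(ratios) / n
    num = sum((x - mx) * (y - my) for x, y in zip(months, ratios))
    den = sum((x - mx) ** 2 for x in months)
    return num / den

# Hypothetical dentate-to-pons ratios for one patient over follow-up
months = [0, 6, 12, 18, 24]
ratios = [1.100, 1.088, 1.076, 1.064, 1.052]
print(round(slope_per_month(months, ratios) * 100, 2))  # -0.2 (% per month)
```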


Subjects
Cerebellar Nuclei/diagnostic imaging; Contrast Media; Gadolinium DTPA; Image Enhancement/methods; Magnetic Resonance Imaging/methods; Organometallic Compounds; Adult; Aged; Aged, 80 and over; Female; Follow-Up Studies; Humans; Male; Middle Aged; Reproducibility of Results; Retrospective Studies; Young Adult
16.
J Ultrasound Med; 37(11): 2537-2544, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29574913

ABSTRACT

OBJECTIVES: The aim of the study was to investigate the feasibility of using ultrasound shear wave elastography to quantify mechanical properties and movement symmetry of false vocal folds positioned in adduction and abduction. METHODS: We prospectively measured the shear wave velocity (SWV) within the bilateral false vocal folds in 10 healthy adults using acoustic radiation force impulse imaging. From a transcutaneous approach at the level of the thyroid cartilage, 5 SWV measurements were obtained within each side of the false vocal folds twice in adduction and again in abduction for each participant. Configuration-related differences in the SWV within false vocal folds were compared between adduction and abduction, in addition to differences between the right and left false vocal folds and between men and women, by a paired t test. We developed an SWV index [(SWV_greater - SWV_lesser)/SWV_greater] to assess movement symmetry between the right and left false vocal folds. Intraobserver agreement on repeated measures was examined by the intraclass correlation coefficient. RESULTS: The 10 participants included 5 men and 5 women. We observed that the SWV within false vocal folds was significantly higher in adduction than in abduction (P < .001). The SWV within false vocal folds in adduction was also significantly higher in women than in men (P < .001). There was no significant difference in the SWV between the right and left false vocal folds in adduction or in abduction, or between men and women in abduction (P > .05). The mean SWV index was 0.05 (range, 0.03-0.07). The intraclass correlation coefficient for intraobserver agreement was 0.89 (P < .001). CONCLUSIONS: Shear wave elastography appears feasible for quantifying mechanical properties and evaluating the symmetry of false vocal folds in healthy adults.
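The SWV index defined in the abstract can be computed directly; a sketch with hypothetical velocities (not study measurements):

```python
def swv_index(swv_a, swv_b):
    """(SWV_greater - SWV_lesser) / SWV_greater: 0 means perfect symmetry
    between the right and left false vocal folds."""
    greater, lesser = max(swv_a, swv_b), min(swv_a, swv_b)
    return (greater - lesser) / greater

# Hypothetical mean SWV values (m/s), right vs. left, in adduction
print(round(swv_index(2.10, 2.00), 3))  # 0.048
```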


Subjects
Elasticity Imaging Techniques/methods; Vocal Cords/abnormalities; Adult; Aged; Feasibility Studies; Female; Humans; Male; Middle Aged; Prospective Studies; Reference Values; Reproducibility of Results; Vocal Cords/diagnostic imaging; Young Adult
17.
J Digit Imaging; 31(1): 124-132, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28842816

ABSTRACT

The LOINC-RSNA Radiology Playbook represents the future direction of standardization for radiology procedure names. We developed a software solution ("RadMatch") utilizing Python 2.7 and FuzzyWuzzy, an open-source fuzzy string matching algorithm created by SeatGeek, to implement the LOINC-RSNA Radiology Playbook for adult abdomen and pelvis CT and MR procedures performed at our institution. Execution of this semi-automated method resulted in the assignment of appropriate LOINC numbers to 86% of local CT procedures. For local MR procedures, appropriate LOINC numbers were assigned to 75% of these procedures whereas 12.5% of local MR procedures could only be partially mapped. For the standardized local procedures, only 63% of CT and 71% of MR procedures had corresponding RadLex Playbook identifier (RPID) codes in the LOINC-RSNA Radiology Playbook, which limited the utility of RPID codes. RadMatch is a semi-automated open-source software tool that can assist radiology departments seeking to standardize their radiology procedures via implementation of the LOINC-RSNA Radiology Playbook.
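The fuzzy-matching step can be illustrated with the standard library's SequenceMatcher as a stand-in for FuzzyWuzzy's Levenshtein-based ratio; the procedure names below are invented for illustration and are not any institution's actual catalog:

```python
from difflib import SequenceMatcher

def best_match(local_name, playbook_names):
    """Return the Playbook name most similar to a local procedure name."""
    scored = [(SequenceMatcher(None, local_name.lower(), name.lower()).ratio(), name)
              for name in playbook_names]
    return max(scored)[1]

# Invented names for illustration only
playbook = [
    "CT Abdomen and Pelvis with contrast",
    "CT Abdomen without contrast",
    "MR Pelvis without and with contrast",
]
print(best_match("CT ABD PELVIS W CONTRAST", playbook))
```

In a semi-automated workflow like RadMatch's, such top-ranked candidates are reviewed by a human before a LOINC number is assigned.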


Subjects
Abdomen/diagnostic imaging; Logical Observation Identifiers Names and Codes; Magnetic Resonance Imaging/methods; Pelvis/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans; North America; Societies, Medical; Software
18.
J Digit Imaging; 31(3): 275-282, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29476392

ABSTRACT

Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images, as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.


Subjects
Diagnostic Imaging; Image Processing, Computer-Assisted; Machine Learning; Research; Cooperative Behavior; Electronic Health Records; Goals; Humans; Workflow
19.
J Digit Imaging; 31(3): 283-289, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29725961

ABSTRACT

Machine learning, and notably deep learning, has recently surged in popularity in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and we provide code that can help those new to the field begin their informatics projects.


Subjects
Deep Learning; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Radiology/education; Humans
20.
Radiology; 282(2): 516-525, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27513848

ABSTRACT

Purpose To explore the extent of signal hyperintensity in the brain on unenhanced T1-weighted magnetic resonance (MR) images with increasing gadolinium-based contrast agent (GBCA) doses in patients who received 35 or more linear GBCA administrations. Materials and Methods In this institutional review board-approved HIPAA-compliant retrospective study, picture archiving and communication systems of two tertiary referral hospitals were searched to identify patients who received 35 or more linear GBCA administrations. Unenhanced T1-weighted images of the brain in patients after six, 12, and 24 GBCA administrations and after the final GBCA administration were independently reviewed by three radiologists to identify sites where T1 signal intensity was increasing. Areas identified by all three observers as increasing in T1 signal intensity when compared with baseline images were further analyzed with a quantitative region-of-interest analysis measuring the rate of signal increase per injection and the total change after 24 linear GBCA administrations relative to reference tissues that did not show T1 shortening. Results Qualitative analysis of 13 patients with 39-59 linear GBCA administrations showed visually detectable T1 shortening in the dentate nucleus (n = 13), globus pallidus (n = 13), substantia nigra (n = 13), posterior thalamus (n = 12), red nucleus (n = 10), colliculi (n = 10), superior cerebellar peduncle (n = 7), caudate nucleus (n = 4), whole thalamus (n = 3), and putamen (n = 2). Quantitative analysis confirmed signal intensity increases on unenhanced T1-weighted images relative to reference tissues in the dentate nucleus (0.53% signal intensity increase per injection, P < .001), globus pallidus (0.23% increase, P = .009), posterior thalamus (0.26% increase, P < .001), substantia nigra (0.25% increase, P = .01), red nucleus (0.25% increase, P = .01), cerebellar peduncle (0.19% increase, P = .001), and colliculi (0.21% increase, P = .02). 
Conclusion Increased signal intensity on unenhanced T1-weighted images was seen in the posterior thalamus, substantia nigra, red nucleus, cerebellar peduncle, colliculi, dentate nucleus, and globus pallidus. © RSNA, 2016.


Subjects
Brain Diseases/diagnostic imaging; Brain/drug effects; Brain/diagnostic imaging; Contrast Media/administration & dosage; Gadolinium/administration & dosage; Magnetic Resonance Imaging/methods; Adult; Female; Humans; Male; Middle Aged; Retrospective Studies