Results 1 - 20 of 107

1.
J Pathol; 262(3): 310-319, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38098169

ABSTRACT

Deep learning applied to whole-slide histopathology images (WSIs) has the potential to enhance precision oncology and alleviate the workload of experts. However, developing these models necessitates large amounts of data with ground truth labels, which can be both time-consuming and expensive to obtain. Pathology reports are typically unstructured or poorly structured texts, and efforts to implement structured reporting templates have been unsuccessful, as these efforts lead to perceived extra workload. In this study, we hypothesised that large language models (LLMs), such as the generative pre-trained transformer 4 (GPT-4), can extract structured data from unstructured plain language reports using a zero-shot approach without requiring any re-training. We tested this hypothesis by utilising GPT-4 to extract information from histopathological reports, focusing on two extensive sets of pathology reports for colorectal cancer and glioblastoma. We found a high concordance between LLM-generated structured data and human-generated structured data. Consequently, LLMs could potentially be employed routinely to extract ground truth data for machine learning from unstructured pathology reports in the future. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
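As a rough illustration of the zero-shot extraction workflow described in this abstract, the sketch below sends a free-text report to GPT-4 and asks for a fixed set of JSON fields. It assumes the OpenAI Python client; the report text, field names, and prompt wording are illustrative, not those used in the study.

```python
# Minimal sketch of zero-shot structured extraction with GPT-4 (illustrative only;
# the prompt, fields, and report text are hypothetical, not the study's code).
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

report = "Moderately differentiated adenocarcinoma of the sigmoid colon, pT3 pN1 ..."
fields = ["tumor_type", "grade", "pT_stage", "pN_stage"]

prompt = (
    "Extract the following fields from the pathology report and return valid JSON "
    f"with exactly these keys: {fields}. Use null if a field is not mentioned.\n\n"
    f"Report:\n{report}"
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # deterministic output helps reproducible extraction
    messages=[{"role": "user", "content": prompt}],
)

# The model is asked to return JSON only; parsing may need a fallback in practice.
structured = json.loads(response.choices[0].message.content)
print(structured)
```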


Subject(s)
Glioblastoma , Precision Medicine , Humans , Machine Learning , United Kingdom
2.
Eur Radiol; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627289

ABSTRACT

OBJECTIVES: Large language models (LLMs) have shown potential in radiology, but their ability to aid radiologists in interpreting imaging studies remains unexplored. We investigated the effects of a state-of-the-art LLM (GPT-4) on the radiologists' diagnostic workflow. MATERIALS AND METHODS: In this retrospective study, six radiologists of different experience levels read 40 selected radiographic [n = 10], CT [n = 10], MRI [n = 10], and angiographic [n = 10] studies unassisted (session one) and assisted by GPT-4 (session two). Each imaging study was presented with demographic data, the chief complaint, and associated symptoms, and diagnoses were registered using an online survey tool. The impact of Artificial Intelligence (AI) on diagnostic accuracy, confidence, user experience, input prompts, and generated responses was assessed. False information was registered. Linear mixed-effects models were used to quantify the factors (fixed: experience, modality, AI assistance; random: radiologist) influencing diagnostic accuracy and confidence. RESULTS: When assessing if the correct diagnosis was among the top-3 differential diagnoses, diagnostic accuracy improved slightly from 181/240 (75.4%, unassisted) to 188/240 (78.3%, AI-assisted). Similar improvements were found when only the top differential diagnosis was considered. AI assistance was used in 77.5% of the readings. Three hundred nine prompts were generated, primarily involving differential diagnoses (59.1%) and imaging features of specific conditions (27.5%). Diagnostic confidence was significantly higher when readings were AI-assisted (p < 0.001). Twenty-three responses (7.4%) were classified as hallucinations, while two (0.6%) were misinterpretations. CONCLUSION: Integrating GPT-4 in the diagnostic process improved diagnostic accuracy slightly and diagnostic confidence significantly. Potentially harmful hallucinations and misinterpretations call for caution and highlight the need for further safeguarding measures. CLINICAL RELEVANCE STATEMENT: Using GPT-4 as a virtual assistant when reading images made six radiologists of different experience levels feel more confident and provide more accurate diagnoses; yet, GPT-4 gave factually incorrect and potentially harmful information in 7.4% of its responses.
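The linear mixed-effects analysis mentioned above (fixed effects: experience, modality, AI assistance; random effect: radiologist) can be sketched with statsmodels roughly as follows; the data frame below is synthetic and the column names are assumptions, not the authors' code.

```python
# Sketch of a linear mixed-effects model with a random intercept per radiologist.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 480  # e.g., 6 radiologists x 40 studies x 2 sessions (placeholder)
df = pd.DataFrame({
    "radiologist": rng.integers(1, 7, n),                    # random effect: reader
    "experience":  rng.integers(1, 15, n),                   # fixed effect: years of experience
    "modality":    rng.choice(["XR", "CT", "MRI", "Angio"], n),
    "ai_assisted": rng.integers(0, 2, n),                    # fixed effect: session with/without GPT-4
})
df["confidence"] = 3 + 0.5 * df["ai_assisted"] + rng.normal(0, 1, n)  # toy outcome

model = smf.mixedlm("confidence ~ experience + C(modality) + ai_assisted",
                    data=df, groups=df["radiologist"])
print(model.fit().summary())
```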

3.
Br J Clin Pharmacol; 90(3): 649-661, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37728146

ABSTRACT

AIMS: To explore international undergraduate pharmacy students' views on integrating artificial intelligence (AI) into pharmacy education and practice. METHODS: This cross-sectional institutional review board-approved multinational, multicentre study comprised an anonymous online survey of 14 multiple-choice items to assess pharmacy students' preferences for AI events in the pharmacy curriculum, the current state of AI education, and students' AI knowledge and attitudes towards using AI in the pharmacy profession, supplemented by 8 demographic queries. Subgroup analyses were performed considering sex, study year, tech-savviness, and prior AI knowledge and AI events in the curriculum using the Mann-Whitney U-test. Variances were reported for responses in Likert scale format. RESULTS: The survey gathered 387 pharmacy student opinions across 16 faculties and 12 countries. Students showed predominantly positive attitudes towards AI in medicine (58%, n = 225) and expressed a strong desire for more AI education (72%, n = 276). However, they reported limited general knowledge of AI (63%, n = 242) and felt inadequately prepared to use AI in their future careers (51%, n = 197). Male students showed more positive attitudes towards increasing efficiency through AI (P = .011), while tech-savvy and advanced-year students expressed heightened concerns about potential legal and ethical issues related to AI (P < .001/P = .025, respectively). Students who had AI courses as part of their studies reported better AI knowledge (P < .001) and felt more prepared to apply it professionally (P < .001). CONCLUSIONS: Our findings underline the generally positive attitude of international pharmacy students towards AI application in medicine and highlight the necessity for a greater emphasis on AI education within pharmacy curricula.
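A minimal sketch of the subgroup analysis named above (Mann-Whitney U-test on Likert-scale responses), with placeholder data and hypothetical variable names:

```python
# Compare Likert responses (1-5) between two subgroups with the Mann-Whitney U-test.
from scipy.stats import mannwhitneyu

# Placeholder responses to an "AI will increase efficiency" item, split by sex
male_scores   = [4, 5, 3, 4, 4, 5, 2, 4]
female_scores = [3, 4, 3, 3, 4, 2, 3, 4]

stat, p = mannwhitneyu(male_scores, female_scores, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```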


Subject(s)
Students, Pharmacy , Humans , Male , Cross-Sectional Studies , Artificial Intelligence , Surveys and Questionnaires , Curriculum
4.
J Med Internet Res; 26: e54948, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691404

ABSTRACT

This study demonstrates that GPT-4V outperforms GPT-4 across radiology subspecialties in analyzing 207 cases with 1312 images from the Radiological Society of North America Case Collection.


Subject(s)
Radiology , Radiology/methods , Radiology/statistics & numerical data , Humans , Image Processing, Computer-Assisted/methods
5.
Arch Gynecol Obstet; 309(4): 1543-1549, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37975899

ABSTRACT

PURPOSE: The market and application possibilities for artificial intelligence are currently growing rapidly and are increasingly finding their way into gynecology. While the medical side is well represented in the current literature, the patient's perspective is still lagging behind. Therefore, the aim of this study was to have experts evaluate ChatGPT's recommendations in response to patient inquiries about the possible therapy of leading gynecological symptoms in a palliative situation. METHODS: Case vignettes were constructed for 10 common concomitant symptoms of gynecologic oncology tumors in a palliative setting, and patient queries regarding therapy of these symptoms were generated as prompts for ChatGPT. Five experts in palliative care and gynecologic oncology evaluated the responses with respect to guideline adherence and applicability and identified advantages and disadvantages. RESULTS: The overall rating of ChatGPT responses averaged 4.1 (5 = strongly agree; 1 = strongly disagree). The experts rated the guideline conformity of the therapy recommendations at an average of 4.0. ChatGPT sometimes omits relevant therapies and does not provide an individual assessment of the suggested therapies, but it does indicate that a physician consultation is additionally necessary. CONCLUSIONS: Language models such as ChatGPT can, even in their freely available and thus generally accessible version, provide valid and largely guideline-compliant therapy recommendations for our patients. For a complete therapy recommendation, however, including an evaluation of the suggested therapies, their individual adjustment, and the filtering out of possibly incorrect recommendations, a medical expert's opinion remains indispensable.


Subject(s)
Genital Neoplasms, Female , Gynecology , Humans , Female , Artificial Intelligence , Genital Neoplasms, Female/drug therapy , Patient Compliance , Guideline Adherence
6.
Clin Oral Investig; 28(7): 381, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38886242

ABSTRACT

OBJECTIVES: Tooth extraction is one of the most frequently performed medical procedures. The indication is based on the combination of clinical and radiological examination and individual patient parameters and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision. Moreover, visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could be used as a decision support tool to provide a score of tooth extractability. MATERIAL AND METHODS: Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped with different margins from PANs and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via a class activation mapping using CAMERAS. RESULTS: The ROC-AUC for the best AI model to discriminate teeth worthy of preservation was 0.901 with a 2% margin on dental images. In contrast, the average ROC-AUC for dentists was only 0.797. At a tooth extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, while the dentist evaluation only reached 0.589. CONCLUSION: AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and the AI performance improves with increasing contextual information. CLINICAL RELEVANCE: AI could help monitor at-risk teeth and reduce errors in indications for extractions.
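A rough sketch of the classification setup described above, i.e. a ResNet50 with a single-logit head for extraction-worthy vs. preservable teeth, evaluated by ROC-AUC and PR-AUC. It assumes the torchvision weight-enum API and omits data loading, training loops, and the CAMERAS explainability step.

```python
# Sketch: ResNet50 binary classifier for tooth extractability plus the reported metrics.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.metrics import roc_auc_score, average_precision_score

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)   # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 1)              # single logit: extract vs. preserve

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def evaluate(logits: torch.Tensor, labels: torch.Tensor) -> None:
    """Report ROC-AUC and PR-AUC on a held-out test set."""
    probs = torch.sigmoid(logits).numpy()
    y = labels.numpy()
    print("ROC-AUC:", roc_auc_score(y, probs))
    print("PR-AUC :", average_precision_score(y, probs))
```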


Subject(s)
Artificial Intelligence , Radiography, Panoramic , Tooth Extraction , Humans , Dentists , Female , Male , Adult
7.
Radiology; 307(5): e222223, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37278629

ABSTRACT

Background Deep learning (DL) models can potentially improve prognostication of rectal cancer but have not been systematically assessed. Purpose To develop and validate an MRI DL model for predicting survival in patients with rectal cancer based on segmented tumor volumes from pretreatment T2-weighted MRI scans. Materials and Methods DL models were trained and validated on retrospectively collected MRI scans of patients with rectal cancer diagnosed between August 2003 and April 2021 at two centers. Patients were excluded from the study if there were concurrent malignant neoplasms, prior anticancer treatment, incomplete course of neoadjuvant therapy, or no radical surgery performed. The Harrell C-index was used to determine the best model, which was applied to internal and external test sets. Patients were stratified into high- and low-risk groups based on a fixed cutoff calculated in the training set. A multimodal model was also assessed, which used DL model-computed risk score and pretreatment carcinoembryonic antigen level as input. Results The training set included 507 patients (median age, 56 years [IQR, 46-64 years]; 355 men). In the validation set (n = 218; median age, 55 years [IQR, 47-63 years]; 144 men), the best algorithm reached a C-index of 0.82 for overall survival. The best model reached hazard ratios of 3.0 (95% CI: 1.0, 9.0) in the high-risk group in the internal test set (n = 112; median age, 60 years [IQR, 52-70 years]; 76 men) and 2.3 (95% CI: 1.0, 5.4) in the external test set (n = 58; median age, 57 years [IQR, 50-67 years]; 38 men). The multimodal model further improved the performance, with a C-index of 0.86 and 0.67 for the validation and external test set, respectively. Conclusion A DL model based on preoperative MRI was able to predict survival of patients with rectal cancer. The model could be used as a preoperative risk stratification tool. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Langs in this issue.
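The survival evaluation described above (Harrell C-index of a deep learning risk score, then hazard ratios for high- vs. low-risk groups split at a training-set cutoff) might look roughly like the following lifelines sketch; the data are synthetic and the cutoff value is a placeholder.

```python
# Sketch: C-index of a DL risk score and hazard ratio for a fixed-cutoff stratification.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 112  # size of an internal test set (placeholder data, not the study's)
df = pd.DataFrame({
    "risk_score": rng.uniform(0, 1, n),     # model output per patient
    "event": rng.integers(0, 2, n),         # 1 = death observed, 0 = censored
})
df["time"] = rng.exponential(60 * (1.5 - df["risk_score"]))  # toy survival times in months

# Harrell C-index: higher risk should pair with shorter survival, hence the minus sign
print("C-index:", concordance_index(df["time"], -df["risk_score"], df["event"]))

# Stratify at a cutoff fixed on the training set (placeholder) and estimate the hazard ratio
df["high_risk"] = (df["risk_score"] >= 0.5).astype(int)
cph = CoxPHFitter().fit(df[["time", "event", "high_risk"]], duration_col="time", event_col="event")
print(cph.hazard_ratios_)
```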


Subject(s)
Deep Learning , Rectal Neoplasms , Male , Humans , Middle Aged , Retrospective Studies , Rectal Neoplasms/diagnostic imaging , Rectal Neoplasms/therapy , Magnetic Resonance Imaging , Risk Factors
8.
Radiology; 307(3): e222211, 2023 May.
Article in English | MEDLINE | ID: mdl-36943080

ABSTRACT

Background Reducing the amount of contrast agent needed for contrast-enhanced breast MRI is desirable. Purpose To investigate if generative adversarial networks (GANs) can recover contrast-enhanced breast MRI scans from unenhanced images and virtual low-contrast-enhanced images. Materials and Methods In this retrospective study of breast MRI performed from January 2010 to December 2019, simulated low-contrast images were produced by adding virtual noise to the existing contrast-enhanced images. GANs were then trained to recover the contrast-enhanced images from the simulated low-contrast images (approach A) or from the unenhanced T1- and T2-weighted images (approach B). Two experienced radiologists were tasked with distinguishing between real and synthesized contrast-enhanced images using both approaches. Image appearance and conspicuity of enhancing lesions on the real versus synthesized contrast-enhanced images were independently compared and rated on a five-point Likert scale. P values were calculated by using bootstrapping. Results A total of 9751 breast MRI examinations from 5086 patients (mean age, 56 years ± 10 [SD]) were included. Readers who were blinded to the nature of the images could not distinguish real from synthetic contrast-enhanced images (average accuracy of differentiation: approach A, 52 of 100; approach B, 61 of 100). The test set included images with and without enhancing lesions (29 enhancing masses and 21 nonmass enhancement; 50 total). When readers who were not blinded compared the appearance of the real versus synthetic contrast-enhanced images side by side, approach A image ratings were significantly higher than those of approach B (mean rating, 4.6 ± 0.1 vs 3.0 ± 0.2; P < .001), with the noninferiority margin met by synthetic images from approach A (P < .001) but not B (P > .99). Conclusion Generative adversarial networks may be useful to enable breast MRI with reduced contrast agent dose. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Bahl in this issue.
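The abstract states that P values were calculated by bootstrapping; the sketch below shows one common way such a paired bootstrap comparison of reader ratings can be set up. The data and the exact resampling scheme are assumptions, not the study's code.

```python
# Minimal sketch of a bootstrapped comparison of paired Likert ratings (approach A vs. B).
import numpy as np

rng = np.random.default_rng(0)
ratings_a = rng.integers(3, 6, size=50)  # placeholder ratings, approach A
ratings_b = rng.integers(2, 5, size=50)  # placeholder ratings, approach B

diffs = ratings_a - ratings_b
observed = diffs.mean()

# Resample the paired differences and derive a two-sided percentile-style p-value
boot = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                 for _ in range(10_000)])
p = 2 * min((boot <= 0).mean(), (boot >= 0).mean())
print(f"mean difference = {observed:.2f}, bootstrap p = {p:.4f}")
```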


Subject(s)
Contrast Media , Magnetic Resonance Imaging , Humans , Middle Aged , Retrospective Studies , Magnetic Resonance Imaging/methods , Breast , Machine Learning
9.
Radiology; 309(1): e230806, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37787671

ABSTRACT

Background Clinicians consider both imaging and nonimaging data when diagnosing diseases; however, current machine learning approaches primarily consider data from a single modality. Purpose To develop a neural network architecture capable of integrating multimodal patient data and compare its performance to models incorporating a single modality for diagnosing up to 25 pathologic conditions. Materials and Methods In this retrospective study, imaging and nonimaging patient data were extracted from the Medical Information Mart for Intensive Care (MIMIC) database and an internal database comprising chest radiographs and clinical parameters of inpatients in the intensive care unit (ICU) (January 2008 to December 2020). The MIMIC and internal data sets were each split into training (n = 33 893, n = 28 809), validation (n = 740, n = 7203), and test (n = 1909, n = 9004) sets. A novel transformer-based neural network architecture was trained to diagnose up to 25 conditions using nonimaging data alone, imaging data alone, or multimodal data. Diagnostic performance was assessed using area under the receiver operating characteristic curve (AUC) analysis. Results The MIMIC and internal data sets included 36 542 patients (mean age, 63 years ± 17 [SD]; 20 567 male patients) and 45 016 patients (mean age, 66 years ± 16; 27 577 male patients), respectively. The multimodal model showed improved diagnostic performance for all pathologic conditions. For the MIMIC data set, the mean AUC was 0.77 (95% CI: 0.77, 0.78) when both chest radiographs and clinical parameters were used, compared with 0.70 (95% CI: 0.69, 0.71; P < .001) for only chest radiographs and 0.72 (95% CI: 0.72, 0.73; P < .001) for only clinical parameters. These findings were confirmed on the internal data set. Conclusion A model trained on imaging and nonimaging data outperformed models trained on only one type of data for diagnosing multiple diseases in patients in an ICU setting. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Kitamura and Topol in this issue.
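A heavily simplified sketch of the fusion idea described above: an image encoder for radiographs and a small MLP for clinical parameters, whose embeddings are concatenated before a 25-label head. The published model is transformer-based, so this is only an architectural analogue with assumed dimensions.

```python
# Sketch: multimodal fusion of imaging and non-imaging ICU data (simplified analogue).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalClassifier(nn.Module):
    def __init__(self, n_clinical: int, n_labels: int = 25):
        super().__init__()
        self.image_encoder = resnet18(weights=None)
        self.image_encoder.fc = nn.Identity()           # 512-dim radiograph embedding
        self.clinical_encoder = nn.Sequential(           # embed clinical parameters
            nn.Linear(n_clinical, 128), nn.ReLU(), nn.Linear(128, 128)
        )
        self.head = nn.Linear(512 + 128, n_labels)       # one logit per condition

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_encoder(image), self.clinical_encoder(clinical)], dim=1)
        return self.head(fused)

model = MultimodalClassifier(n_clinical=20)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20))
print(logits.shape)  # (2, 25)
```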


Subject(s)
Deep Learning , Humans , Male , Middle Aged , Aged , Retrospective Studies , Radiography , Databases, Factual , Inpatients
10.
Radiology; 307(1): e220510, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36472534

ABSTRACT

Background Supine chest radiography for bedridden patients in intensive care units (ICUs) is one of the most frequently ordered imaging studies worldwide. Purpose To evaluate the diagnostic performance of a neural network-based model that is trained on structured semiquantitative radiologic reports of bedside chest radiographs. Materials and Methods For this retrospective single-center study, children and adults in the ICU of a university hospital who had been imaged using bedside chest radiography from January 2009 to December 2020 were reported by using a structured and itemized template. Ninety-eight radiologists rated the radiographs semiquantitatively for the severity of disease patterns. These data were used to train a neural network to identify cardiomegaly, pulmonary congestion, pleural effusion, pulmonary opacities, and atelectasis. A held-out internal test set (100 radiographs from 100 patients) that was assessed independently by an expert panel of six radiologists provided the ground truth. Individual assessments by each of these six radiologists, by two nonradiologist physicians in the ICU, and by the neural network were compared with the ground truth. Separately, the nonradiologist physicians assessed the images without and with preliminary readings provided by the neural network. The weighted Cohen κ coefficient was used to measure agreement between the readers and the ground truth. Results A total of 193 566 radiographs in 45 016 patients (mean age, 66 years ± 16 [SD]; 61% men) were included and divided into training (n = 122 294; 64%), validation (n = 31 243; 16%), and test (n = 40 029; 20%) sets. The neural network exhibited higher agreement with a majority vote of the expert panel (κ = 0.86) than each individual radiologist compared with the majority vote of the expert panel (κ = 0.81 to ≤0.84). When the neural network provided preliminary readings, the reports of the nonradiologist physicians improved considerably (aided vs unaided, κ = 0.87 vs 0.79, respectively; P < .001). Conclusion A neural network trained with structured semiquantitative bedside chest radiography reports allowed nonradiologist physicians to provide improved interpretations, as measured against the consensus reading of expert radiologists. © RSNA, 2022 Supplemental material is available for this article. See also the editorial by Wielpütz in this issue.
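The agreement metric used above, a weighted Cohen κ between a reader and the expert-panel ground truth, can be computed as in the sketch below; the quadratic weighting and the example ratings are assumptions.

```python
# Sketch: weighted Cohen kappa for ordinal severity ratings vs. the panel majority vote.
from sklearn.metrics import cohen_kappa_score

ground_truth = [0, 1, 2, 3, 2, 1, 0, 3]   # expert-panel majority vote per radiograph
reader       = [0, 1, 2, 2, 2, 1, 1, 3]   # e.g., neural network or ICU physician

kappa = cohen_kappa_score(ground_truth, reader, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```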


Subject(s)
Artificial Intelligence , Radiography, Thoracic , Male , Adult , Child , Humans , Aged , Female , Retrospective Studies , Radiography, Thoracic/methods , Lung , Radiography
11.
Gastric Cancer; 26(2): 264-274, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36264524

ABSTRACT

BACKGROUND: Computational pathology uses deep learning (DL) to extract biomarkers from routine pathology slides. Large multicentric datasets improve performance, but such datasets are scarce for gastric cancer. This limitation could be overcome by Swarm Learning (SL). METHODS: Here, we report the results of a multicentric retrospective study of SL for prediction of molecular biomarkers in gastric cancer. We collected tissue samples with known microsatellite instability (MSI) and Epstein-Barr Virus (EBV) status from four patient cohorts from Switzerland, Germany, the UK and the USA, storing each dataset on a physically separate computer. RESULTS: On an external validation cohort, the SL-based classifier reached an area under the receiver operating curve (AUROC) of 0.8092 (± 0.0132) for MSI prediction and 0.8372 (± 0.0179) for EBV prediction. The centralized model, which was trained on all datasets on a single computer, reached a similar performance. CONCLUSIONS: Our findings demonstrate the feasibility of SL-based molecular biomarkers in gastric cancer. In the future, SL could be used for collaborative training and, thus, improve the performance of these biomarkers. This may ultimately result in clinical-grade performance and generalizability.
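Swarm Learning keeps each cohort's data on its own machine and merges model weights without a central data pool via a dedicated framework. The sketch below is only a conceptual stand-in, a plain weight-averaging round across sites, not the Swarm Learning API used in the study.

```python
# Conceptual sketch: decentralized training with periodic weight averaging across sites.
# Each site keeps its own data loader; only model weights are exchanged and merged.
import copy
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    """Element-wise average of model parameters from the participating sites."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def local_epoch(model, loader, lr=1e-4):
    """One local training pass on a single site's private data (data never leaves the site)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y.float())
        loss.backward()
        opt.step()

def sync_round(site_models, site_loaders):
    """Train locally at every site, then let all sites adopt the averaged weights."""
    for model, loader in zip(site_models, site_loaders):
        local_epoch(model, loader)
    merged = average_state_dicts([m.state_dict() for m in site_models])
    for model in site_models:
        model.load_state_dict(merged)
```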


Subject(s)
Epstein-Barr Virus Infections , Stomach Neoplasms , Humans , Herpesvirus 4, Human/genetics , Retrospective Studies , Stomach Neoplasms/pathology , Microsatellite Instability , Biomarkers, Tumor/genetics
12.
J Pathol; 256(1): 50-60, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34561876

ABSTRACT

Deep learning is a powerful tool in computational pathology: it can be used for tumor detection and for predicting genetic alterations based on histopathology images alone. Conventionally, tumor detection and prediction of genetic alterations are two separate workflows. Newer methods have combined them, but require complex, manually engineered computational pipelines, restricting reproducibility and robustness. To address these issues, we present a new method for simultaneous tumor detection and prediction of genetic alterations: The Slide-Level Assessment Model (SLAM) uses a single off-the-shelf neural network to predict molecular alterations directly from routine pathology slides without any manual annotations, improving upon previous methods by automatically excluding normal and non-informative tissue regions. SLAM requires only standard programming libraries and is conceptually simpler than previous approaches. We have extensively validated SLAM for clinically relevant tasks using two large multicentric cohorts of colorectal cancer patients, Darmkrebs: Chancen der Verhütung durch Screening (DACHS) from Germany and Yorkshire Cancer Research Bowel Cancer Improvement Programme (YCR-BCIP) from the UK. We show that SLAM yields reliable slide-level classification of tumor presence with an area under the receiver operating curve (AUROC) of 0.980 (confidence interval 0.975, 0.984; n = 2,297 tumor and n = 1,281 normal slides). In addition, SLAM can detect microsatellite instability (MSI)/mismatch repair deficiency (dMMR) or microsatellite stability/mismatch repair proficiency with an AUROC of 0.909 (0.888, 0.929; n = 2,039 patients) and BRAF mutational status with an AUROC of 0.821 (0.786, 0.852; n = 2,075 patients). The improvement with respect to previous methods was validated in a large external testing cohort in which MSI/dMMR status was detected with an AUROC of 0.900 (0.864, 0.931; n = 805 patients). In addition, SLAM provides human-interpretable visualization maps, enabling the analysis of multiplexed network predictions by human experts. In summary, SLAM is a new simple and powerful method for computational pathology that could be applied to multiple disease contexts. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
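The core idea of slide-level prediction, scoring a whole slide from its tile representations with a single simple network, can be illustrated as below; this mean-pooling aggregation is a simplified analogue for illustration, not the published SLAM implementation.

```python
# Sketch: slide-level classification from a bag of tile feature vectors.
import torch
import torch.nn as nn

class SlideClassifier(nn.Module):
    """Scores a whole slide from per-tile feature vectors (e.g., CNN embeddings)."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.tile_scorer = nn.Linear(feat_dim, 1)

    def forward(self, tile_features: torch.Tensor) -> torch.Tensor:
        # tile_features: (n_tiles, feat_dim) for one slide
        tile_logits = self.tile_scorer(tile_features)   # per-tile evidence
        return tile_logits.mean(dim=0)                  # pooled slide-level logit

slide = torch.randn(800, 512)        # 800 tiles from one slide (placeholder features)
print(SlideClassifier()(slide))      # slide-level logit, e.g., MSI vs. MSS
```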


Subject(s)
Brain Neoplasms/genetics , Brain Neoplasms/pathology , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology , Microsatellite Instability , Mutation/genetics , Neoplastic Syndromes, Hereditary/genetics , Neoplastic Syndromes, Hereditary/pathology , Adult , Aged , Aged, 80 and over , Brain Neoplasms/diagnosis , Cohort Studies , Colorectal Neoplasms/diagnosis , Deep Learning , Female , Genotype , Humans , Male , Middle Aged , Neoplastic Syndromes, Hereditary/diagnosis , Reproducibility of Results
13.
J Med Internet Res; 25: e43110, 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36927634

ABSTRACT

Generative models, such as DALL-E 2 (OpenAI), could represent promising future tools for image generation, augmentation, and manipulation for artificial intelligence research in radiology, provided that these models have sufficient medical domain knowledge. Herein, we show that DALL-E 2 has learned relevant representations of x-ray images, with promising capabilities in terms of zero-shot text-to-image generation of new images, the continuation of an image beyond its original boundaries, and the removal of elements; however, its capabilities for the generation of images with pathological abnormalities (eg, tumors, fractures, and inflammation) or computed tomography, magnetic resonance imaging, or ultrasound images are still limited. The use of generative models for augmenting and generating radiological data thus seems feasible, even if the further fine-tuning and adaptation of these models to their respective domains are required first.
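The two capabilities discussed above, zero-shot text-to-image generation and mask-based editing of an existing image, map onto the OpenAI images API roughly as follows; the prompts, file names, and sizes are illustrative, and this is not the study's evaluation code.

```python
# Sketch: text-to-image generation and masked editing with DALL-E 2 via the OpenAI client.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Zero-shot text-to-image generation
gen = client.images.generate(
    model="dall-e-2",
    prompt="a frontal chest x-ray",
    n=1,
    size="512x512",
)
print(gen.data[0].url)

# Edit within a masked region of an existing image (e.g., to remove or replace an element);
# "xray.png" and "mask.png" are hypothetical local files (PNG, mask with transparent region).
edit = client.images.edit(
    model="dall-e-2",
    image=open("xray.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="a frontal chest x-ray",
    n=1,
    size="512x512",
)
print(edit.data[0].url)
```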


Subject(s)
Artificial Intelligence , Radiology , Humans , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Ultrasonography
14.
Eur Radiol; 32(11): 7430-7438, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35524784

ABSTRACT

OBJECTIVES: Levonorgestrel-releasing intrauterine contraceptive devices (LNG-IUDs) are designed to exhibit only local hormonal effects. There is an ongoing debate on whether LNG-IUDs can have side effects similar to systemic hormonal medication. Benign background parenchymal enhancement (BPE) in dynamic contrast-enhanced (DCE) MRI has been established as a sensitive marker of hormonal stimulation of the breast. We investigated the association between LNG-IUD use and BPE in breast MRI to further explore possible systemic effects of LNG-IUDs. METHODS: Our hospital database was searched to identify premenopausal women without personal history of breast cancer, oophorectomy, and hormone replacement or antihormone therapy, who had undergone standardized DCE breast MRI at least twice, once with and without an LNG-IUD in place. To avoid confounding aging-related effects on BPE, half of included women had their first MRI without, the other half with, LNG-IUD in place. Degree of BPE was analyzed according to the ACR categories. Wilcoxon-matched-pairs signed-rank test was used to compare the distribution of ACR categories with vs. without LNG-IUD. RESULTS: Forty-eight women (mean age, 46 years) were included. In 24/48 women (50% [95% CI: 35.9-64.1%]), ACR categories did not change with vs. without LNG-IUDs. In 23/48 women (48% [33.9-62.1%]), the ACR category was higher with vs. without LNG-IUDs; in 1/48 (2% [0-6%]), the ACR category was lower with vs. without LNG-IUDs. The change of ACR category depending on the presence or absence of an LNG-IUD proved highly significant (p < 0.001). CONCLUSION: The use of an LNG-IUD can be associated with increased BPE in breast MRI, providing further evidence that LNG-IUDs do have systemic effects. KEY POINTS: • The use of levonorgestrel-releasing intrauterine contraceptive devices is associated with increased background parenchymal enhancement in breast MRI. • This suggests that hormonal effects of these devices are not only confined to the uterine cavity, but may be systemic. • Potential systemic effects of levonorgestrel-releasing intrauterine contraceptive devices should therefore be considered.
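The paired comparison described above, ACR BPE categories for the same women with vs. without an LNG-IUD analyzed with the Wilcoxon matched-pairs signed-rank test, can be sketched as follows with placeholder data:

```python
# Sketch: paired Wilcoxon signed-rank test on ordinal ACR BPE categories (coded 1-4).
from scipy.stats import wilcoxon

bpe_without_iud = [1, 2, 1, 3, 2, 1, 2, 2]   # placeholder ratings, MRI without LNG-IUD
bpe_with_iud    = [2, 3, 1, 3, 3, 2, 2, 3]   # placeholder ratings, MRI with LNG-IUD

stat, p = wilcoxon(bpe_with_iud, bpe_without_iud)
print(f"W = {stat}, p = {p:.4f}")
```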


Subject(s)
Intrauterine Devices, Copper , Intrauterine Devices, Medicated , Female , Humans , Middle Aged , Levonorgestrel/adverse effects , Intrauterine Devices, Medicated/adverse effects , Intrauterine Devices, Copper/adverse effects , Breast/diagnostic imaging , Magnetic Resonance Imaging
15.
J Vasc Interv Radiol; 32(6): 836-842.e2, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33689835

ABSTRACT

PURPOSE: To compare hepatic hypertrophy in the contralateral lobe achieved by unilobar transarterial radioembolization (TARE) versus portal vein embolization (PVE) in a swine model. METHODS: After an escalation study to determine the optimum dose to achieve hypertrophy after unilobar TARE in 4 animals, 16 pigs were treated by TARE (yttrium-90 resin microspheres) or PVE (lipiodol/n-butyl cyanoacrylate). Liver volume was calculated based on CT before treatment and during 6 months of follow-up. Independent t-test (P < .05) was used to compare hypertrophy. The relationship between hypertrophy after TARE and absorbed dose was calculated using the Pearson correlation. RESULTS: At 2 and 4 weeks after treatment, a significantly higher degree of future liver remnant hypertrophy was observed in the PVE group versus the TARE group, with a median volume gain of 31% (interquartile range [IQR]: 16%-66%) for PVE versus 23% (IQR: 6%-36%) for TARE after 2 weeks and 51% (IQR: 47%-69%) for PVE versus 29% (IQR: 20%-50%) for TARE after 4 weeks. After 3 and 6 months, hypertrophy converged without a statistically significant difference, with a volume gain of 103% (IQR: 86%-119%) for PVE versus 82% (IQR: 70%-96%) for TARE after 3 months and 115% (IQR: 70%-46%) for PVE versus 86% (IQR: 58%-111%) for TARE after 6 months. A strong correlation was observed between radiation dose (median 162 Gy, IQR: 139-175) and hypertrophy. CONCLUSIONS: PVE resulted in rapid hypertrophy within 1 month of the procedure, followed by a plateau, whereas TARE resulted in comparable hypertrophy by 3-6 months. TARE-induced hypertrophy correlated with radiation absorbed dose.


Subject(s)
Embolization, Therapeutic , Enbucrilate/administration & dosage , Ethiodized Oil/administration & dosage , Hepatic Artery , Liver Regeneration , Liver/blood supply , Portal Vein , Radiopharmaceuticals/administration & dosage , Yttrium Radioisotopes/administration & dosage , Animals , Embolization, Therapeutic/adverse effects , Enbucrilate/toxicity , Ethiodized Oil/toxicity , Female , Hepatic Artery/diagnostic imaging , Hypertrophy , Injections, Intra-Arterial , Injections, Intravenous , Liver/diagnostic imaging , Liver/pathology , Models, Animal , Portal Vein/diagnostic imaging , Radiopharmaceuticals/adverse effects , Swine , Swine, Miniature , Time Factors , Yttrium Radioisotopes/toxicity
16.
Magn Reson Med; 83(4): 1192-1207, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31631385

ABSTRACT

PURPOSE: Magnetic resonance fingerprinting (MRF) with spiral readout enables rapid quantification of tissue relaxation times. However, it is prone to blurring because of off-resonance effects. Hence, fat blurring into adjacent regions might prevent identification of small tumors by their quantitative T1 and T2 values. This study aims to correct for the blurring artifacts, thereby enabling fast quantitative mapping in the female breast. METHODS: The impact of fat blurring on spiral MRF results was first assessed by simulations. Then, MRF was combined with 3-point Dixon water-fat separation and spiral blurring correction based on conjugate phase reconstruction. The approach was assessed in phantom experiments and compared to Cartesian reference measurements, namely inversion recovery (IR), multi-echo spin echo (MESE), and Cartesian MRF, by normalized root-mean-square error (NRMSE) and SD calculations. Feasibility is further demonstrated in vivo for quantitative breast measurements of 6 healthy female volunteers, age range 24-31 y. RESULTS: In the phantom experiment, the blurring correction reduced the NRMSE per phantom vial on average from 16% to 8% for T1 and from 18% to 11% for T2 when comparing spiral MRF to IR/MESE sequences. When comparing to Cartesian MRF, the NRMSE reduced from 15% to 8% for T1 and from 12% to 7% for T2. Furthermore, SDs decreased. In vivo, the blurring correction removed fat bias on T1/T2 from a rim of ~7-8 mm width adjacent to fatty structures. CONCLUSION: The blurring correction for spiral MRF yields improved quantitative maps in the presence of water and fat.
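The NRMSE metric used above can be computed as in the sketch below; normalizing the RMSE by the mean of the reference values, and the example T1 values, are assumptions for illustration.

```python
# Sketch: normalized root-mean-square error between an MRF map and its Cartesian reference.
import numpy as np

def nrmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """RMSE normalized by the mean of the reference values (normalization convention assumed)."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / np.mean(reference)

t1_ref = np.array([800.0, 1200.0, 1500.0])   # ms, reference per vial (e.g., inversion recovery)
t1_mrf = np.array([760.0, 1150.0, 1580.0])   # ms, spiral MRF after blurring correction
print(f"NRMSE = {100 * nrmse(t1_mrf, t1_ref):.1f}%")
```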


Subject(s)
Image Processing, Computer-Assisted , Water , Adult , Algorithms , Female , Humans , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy , Phantoms, Imaging , Young Adult
17.
MAGMA; 33(6): 839-854, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32314105

ABSTRACT

OBJECTIVE: Beyond static assessment, functional techniques are increasingly applied in magnetic resonance imaging (MRI) studies. Stress MRI techniques bring together MRI and mechanical loading to study knee joint and tissue functionality, yet prototypical axial compressive loading devices are bulky and complex to operate. This study aimed to design and validate an MRI-compatible pressure-controlled varus-valgus loading device that applies loading along the joint line. METHODS: Following the device's thorough validation, we demonstrated proof of concept by subjecting a structurally intact human cadaveric knee joint to serial imaging in unloaded and loaded configurations, i.e. to varus and valgus loading at 7.5 kPa (= 73.5 N), 15 kPa (= 147.1 N), and 22.5 kPa (= 220.6 N). Following clinical standard (PDw fs) and high-resolution 3D water-selective cartilage (WATSc) sequences, we performed manual segmentations and computations of morphometric cartilage measures. We used CT and radiography (to quantify joint space widths) and histology and biomechanics (to assess tissue quality) as references. RESULTS: We found (sub)regional decreases in cartilage volume, thickness, and mean joint space widths reflective of areal pressurization of the medial and lateral femorotibial compartments. DISCUSSION: Once substantiated by larger sample sizes, varus-valgus loading may provide a powerful alternative stress MRI technique.


Subject(s)
Cartilage, Articular , Biomechanical Phenomena , Cartilage, Articular/diagnostic imaging , Humans , Knee Joint/diagnostic imaging , Magnetic Resonance Imaging , Weight-Bearing
18.
Radiology; 290(2): 290-297, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30422086

ABSTRACT

Purpose To compare the diagnostic performance of radiomic analysis (RA) and a convolutional neural network (CNN) to radiologists for classification of contrast agent-enhancing lesions as benign or malignant at multiparametric breast MRI. Materials and Methods Between August 2011 and August 2015, 447 patients with 1294 enhancing lesions (787 malignant, 507 benign; median size, 15 mm ± 20) were evaluated. Lesions were manually segmented by one breast radiologist. RA was performed by using L1 regularization and principal component analysis. CNN used a deep residual neural network with 34 layers. All algorithms were also retrained on half the number of lesions (n = 647). Machine interpretations were compared with prospective interpretations by three breast radiologists. Standard of reference was histologic analysis or follow-up. Areas under the receiver operating curve (AUCs) were used to compare diagnostic performance. Results CNN trained on the full cohort was superior to training on the half-size cohort (AUC, 0.88 vs 0.83, respectively; P = .01), but there was no difference for RA and L1 regularization (AUC, 0.81 vs 0.80, respectively; P = .76) or RA and principal component analysis (AUC, 0.78 vs 0.78, respectively; P = .93). By using the full cohort, CNN performance (AUC, 0.88; 95% confidence interval: 0.86, 0.89) was better than RA and L1 regularization (AUC, 0.81; 95% confidence interval: 0.79, 0.83; P < .001) and RA and principal component analysis (AUC, 0.78; 95% confidence interval: 0.76, 0.80; P < .001). However, CNN was inferior to breast radiologist interpretation (AUC, 0.98; 95% confidence interval: 0.96, 0.99; P < .001). Conclusion A convolutional neural network was superior to radiomic analysis for classification of enhancing lesions as benign or malignant at multiparametric breast MRI. Both approaches were inferior to radiologists' performance; however, more training data will further improve performance of convolutional neural network, but not that of radiomics algorithms. © RSNA, 2018 Online supplemental material is available for this article.
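A rough sketch of the radiomic-analysis arm described above: hand-crafted lesion features classified either with L1-regularized logistic regression or after principal component analysis, compared by AUC. Feature extraction itself is omitted and the arrays are placeholders.

```python
# Sketch: two radiomics pipelines (L1 regularization vs. PCA) evaluated by ROC-AUC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1294, 100)        # radiomic features per lesion (placeholder)
y = np.random.randint(0, 2, 1294)    # 1 = malignant, 0 = benign (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

l1_model = make_pipeline(StandardScaler(),
                         LogisticRegression(penalty="l1", solver="liblinear"))
pca_model = make_pipeline(StandardScaler(), PCA(n_components=20), LogisticRegression())

for name, clf in [("RA + L1", l1_model), ("RA + PCA", pca_model)]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```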


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Female , Humans , Prospective Studies
19.
J Magn Reson Imaging; 49(6): 1676-1683, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30623506

ABSTRACT

BACKGROUND: Fat-fraction has been established as a relevant marker for the assessment and diagnosis of neuromuscular diseases. For computing this metric, segmentation of muscle tissue in MR images is a first crucial step. PURPOSE: To tackle the high degree of variability in combination with the high annotation effort for training supervised segmentation models (such as fully convolutional neural networks). STUDY TYPE: Prospective. SUBJECTS: In all, 41 patients consisting of 20 patients showing fatty infiltration and 21 healthy subjects. FIELD STRENGTH/SEQUENCE: The T1-weighted MR pulse sequences were acquired on a 1.5T scanner. ASSESSMENT: To increase performance with limited training data, we propose a domain-specific technique for simulating fatty infiltrations (i.e., texture augmentation) in nonaffected subjects' MR images in combination with shape augmentation. For simulating the fatty infiltrations, we make use of an architecture comprising several competing networks (generative adversarial networks) that facilitate a realistic artificial conversion between healthy and infiltrated MR images. Finally, we assess the segmentation accuracy (Dice similarity coefficient). STATISTICAL TESTS: A Wilcoxon signed rank test was performed to assess whether differences in segmentation accuracy are significant. RESULTS: The mean Dice similarity coefficients significantly increase from 0.84 to 0.88 (P < 0.01) using data augmentation if training is performed with mixed data and from 0.59 to 0.87 (P < 0.001) if training is conducted with healthy subjects only. DATA CONCLUSION: Domain-specific data adaptation is highly suitable for facilitating neural network-based segmentation of thighs with feasible manual effort for creating training data. The results even suggest an approach completely bypassing manual annotations. LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY: Stage 3.
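The Dice similarity coefficient used above as the segmentation accuracy measure can be computed as in the following small sketch with toy binary masks:

```python
# Sketch: Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64), dtype=bool);  pred[10:40, 10:40] = True    # toy prediction
truth = np.zeros((64, 64), dtype=bool); truth[12:42, 12:42] = True   # toy reference
print(f"Dice = {dice(pred, truth):.3f}")
```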


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neuromuscular Diseases/diagnostic imaging , Subcutaneous Fat/diagnostic imaging , Thigh/diagnostic imaging , Algorithms , Computer Simulation , Databases, Factual , Female , Healthy Volunteers , Humans , Male , Neural Networks, Computer , Prospective Studies , Reproducibility of Results
20.
Eur Radiol; 34(2): 1176-1178, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37580599