Results 1 - 20 of 96
1.
Emerg Radiol ; 31(2): 167-178, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38302827

ABSTRACT

PURPOSE: The AAST Organ Injury Scale is widely adopted for splenic injury severity but suffers from only moderate inter-rater agreement. This work assesses SpleenPro, a prototype interactive explainable artificial intelligence/machine learning (AI/ML) diagnostic aid to support AAST grading, for effects on radiologist dwell time, agreement, clinical utility, and user acceptance. METHODS: Two trauma radiology ad hoc expert panelists independently performed timed AAST grading on 76 admission CT studies with blunt splenic injury, first without AI/ML assistance, and after a 2-month washout period and randomization, with AI/ML assistance. To evaluate user acceptance, three versions of the SpleenPro user interface with increasing explainability were presented to four independent expert panelists with four example cases each. A structured interview consisting of Likert scales and free responses was conducted, with specific questions regarding dimensions of diagnostic utility (DU); mental support (MS); effort, workload, and frustration (EWF); trust and reliability (TR); and likelihood of future use (LFU). RESULTS: SpleenPro significantly decreased interpretation times for both raters. Weighted Cohen's kappa increased from 0.53 to 0.70 with AI/ML assistance. During user acceptance interviews, increasing explainability was associated with improvement in Likert scores for MS, EWF, TR, and LFU. Expert panelists indicated the need for a combined early notification and grading functionality, PACS integration, and report autopopulation to improve DU. CONCLUSIONS: SpleenPro was useful for improving objectivity of AAST grading and increasing mental support. Formative user research identified generalizable concepts including the need for a combined detection and grading pipeline and integration with the clinical workflow.


Subject(s)
Tomography, X-Ray Computed ; Wounds, Nonpenetrating ; Humans ; Tomography, X-Ray Computed/methods ; Artificial Intelligence ; Reproducibility of Results ; Machine Learning
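
The headline agreement statistic here is a weighted Cohen's kappa (0.53 without and 0.70 with AI/ML assistance). A minimal sketch of how such a statistic is computed with scikit-learn, using hypothetical grade vectors; the abstract does not state whether linear or quadratic weights were used, so the weighting below is an assumption:

```python
# Weighted Cohen's kappa for ordinal AAST grades (I-V encoded as 1-5).
# The grade vectors are hypothetical, not data from the study.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 2, 3, 3, 4, 5, 2, 1, 4, 3]
rater_b = [1, 3, 3, 2, 4, 5, 2, 2, 5, 3]

# Quadratic weighting penalizes large ordinal disagreements more heavily;
# linear weighting is also common for AAST-style ordinal scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```
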
2.
Alzheimers Dement ; 20(4): 3074-3079, 2024 04.
Article in English | MEDLINE | ID: mdl-38324244

ABSTRACT

This perspective outlines the Artificial Intelligence and Technology Collaboratories (AITC) at Johns Hopkins University, University of Pennsylvania, and University of Massachusetts, highlighting their roles in developing AI-based technologies for older adult care, particularly targeting Alzheimer's disease (AD). These National Institute on Aging (NIA) centers foster collaboration among clinicians, gerontologists, ethicists, business professionals, and engineers to create AI solutions. Key activities include identifying technology needs, stakeholder engagement, training, mentoring, data integration, and navigating ethical challenges. The objective is to apply these innovations effectively in real-world scenarios, including in rural settings. In addition, the AITC focuses on developing best practices for AI application in the care of older adults, facilitating pilot studies, and addressing ethical concerns related to technology development for older adults with cognitive impairment, with the ultimate aim of improving the lives of older adults and their caregivers. HIGHLIGHTS: Addressing the complex needs of older adults with Alzheimer's disease (AD) requires a comprehensive approach, integrating medical and social support. Current gaps in training, techniques, tools, and expertise hinder uniform access across communities and health care settings. Artificial intelligence (AI) and digital technologies hold promise in transforming care for this demographic. Yet, transitioning these innovations from concept to marketable products presents significant challenges, often stalling promising advancements in the developmental phase. The Artificial Intelligence and Technology Collaboratories (AITC) program, funded by the National Institute on Aging (NIA), presents a viable model. These Collaboratories foster the development and implementation of AI methods and technologies through projects aimed at improving care for older Americans, particularly those with AD, and promote the sharing of best practices in AI and technology integration. Why Does This Matter? The National Institute on Aging (NIA) Artificial Intelligence and Technology Collaboratories (AITC) program's mission is to accelerate the adoption of artificial intelligence (AI) and new technologies for the betterment of older adults, especially those with dementia. By bridging scientific and technological expertise, fostering clinical and industry partnerships, and enhancing the sharing of best practices, this program can significantly improve the health and quality of life for older adults with Alzheimer's disease (AD).


Subject(s)
Alzheimer Disease ; Isothiocyanates ; United States ; Humans ; Aged ; Alzheimer Disease/therapy ; Artificial Intelligence ; Geroscience ; Quality of Life ; Technology
3.
Ophthalmology ; 130(6): 631-639, 2023 06.
Article in English | MEDLINE | ID: mdl-36754173

ABSTRACT

PURPOSE: To compare the accuracy of detecting moderate and rapid rates of glaucoma worsening over a 2-year period with different numbers of OCT scans and visual field (VF) tests in a large sample of glaucoma and glaucoma suspect eyes. DESIGN: Descriptive and simulation study. PARTICIPANTS: The OCT sample comprised 12 150 eyes from 7392 adults with glaucoma or glaucoma suspect status followed up at the Wilmer Eye Institute from 2013 through 2021. The VF sample comprised 20 583 eyes from 10 958 adults from the same database. All eyes had undergone at least 5 measurements over follow-up from the Zeiss Cirrus OCT or Humphrey Field Analyzer. METHODS: Within-eye rates of change in retinal nerve fiber layer (RNFL) thickness and mean deviation (MD) were measured using linear regression. For each measured rate, simulated measurements of RNFL thickness and MD were generated using the distributions of residuals. Simulated rates of change for different numbers of OCT scans and VF tests over a 2-year period were used to estimate the accuracy of detecting moderate (75th percentile) and rapid (90th percentile) worsening for OCT and VF. Accuracy was defined as the percentage of simulated eyes in which the true rate of worsening (the rate without measurement error) was at or less than a criterion rate (e.g., 75th or 90th percentile). MAIN OUTCOME MEASURES: The accuracy of diagnosing moderate and rapid rates of glaucoma worsening for different numbers of OCT scans and VF tests over a 2-year period. RESULTS: Accuracy was less than 50% for both OCT and VF when diagnosing worsening after a 2-year period. OCT accuracy was 5 to 10 percentage points higher than VF accuracy at detecting moderate worsening and 10 to 15 percentage points higher for rapid worsening. Accuracy increased by more than 17 percentage points when using both OCT and VF to detect worsening, that is, when relying on either OCT or VF to be accurate. CONCLUSIONS: More frequent OCT scans and VF tests are needed to improve the accuracy of diagnosing glaucoma worsening. Accuracy greatly increases when relying on both OCT and VF to detect worsening. FINANCIAL DISCLOSURE(S): The author(s) have no proprietary or commercial interest in any materials discussed in this article.


Subject(s)
Glaucoma ; Visual Fields ; Adult ; Humans ; Tomography, Optical Coherence/methods ; Retinal Ganglion Cells ; Nerve Fibers ; Glaucoma/diagnosis ; Visual Field Tests/methods ; Intraocular Pressure
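
The simulation logic described in METHODS can be sketched compactly: fit a true rate per eye, corrupt simulated measurements with residual noise, and compute the share of flagged eyes whose true rate really is at or below the criterion. All distribution parameters below are illustrative assumptions, not the study's fitted values:

```python
# Residual-noise simulation of detection accuracy:
# P(true slope <= criterion | measured slope <= criterion).
import numpy as np

rng = np.random.default_rng(0)
n_eyes, n_visits = 10_000, 5
t = np.linspace(0, 2, n_visits)                # years: 5 tests over 2 years
true_slopes = rng.normal(-0.05, 0.4, n_eyes)   # dB/year, hypothetical
criterion = np.percentile(true_slopes, 25)     # "moderate" worsening cutoff
resid_sd = 1.5                                 # dB test-retest noise, assumed

# OLS slope = true slope + noise projected onto centered time points.
X = t - t.mean()
measured = true_slopes + (rng.normal(0, resid_sd, (n_eyes, n_visits)) @ X) / (X @ X)

flagged = measured <= criterion
accuracy = np.mean(true_slopes[flagged] <= criterion)
print(f"Accuracy among flagged eyes: {accuracy:.0%}")
```
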
4.
Ophthalmology ; 130(1): 39-47, 2023 01.
Article in English | MEDLINE | ID: mdl-35932839

ABSTRACT

PURPOSE: To estimate the number of OCT scans necessary to detect moderate and rapid rates of retinal nerve fiber layer (RNFL) thickness worsening at different levels of accuracy using a large sample of glaucoma and glaucoma-suspect eyes. DESIGN: Descriptive and simulation study. PARTICIPANTS: Twelve thousand one hundred fifty eyes from 7392 adult patients with glaucoma or glaucoma-suspect status followed up at the Wilmer Eye Institute from 2013 through 2021. All eyes had at least 5 measurements of RNFL thickness on the Cirrus OCT (Carl Zeiss Meditec) with signal strength of 6 or more. METHODS: Rates of RNFL worsening for average RNFL thickness and for the 4 quadrants were measured using linear regression. Simulations were used to estimate the accuracy of detecting worsening (defined as the percentage of patients in whom the true rate of RNFL worsening was at or less than different criterion rates of worsening when the OCT-measured rate was also at or less than these criterion rates) for two different measurement strategies: evenly spaced (equal time intervals between measurements) and clustered (approximately half the measurements at each end point of the period). MAIN OUTCOME MEASURES: The 75th percentile (moderate) and 90th percentile (rapid) rates of RNFL worsening for average RNFL thickness and the accuracy of diagnosing worsening at these moderate and rapid rates. RESULTS: The 75th and 90th percentile rates of worsening for average RNFL thickness were -1.09 µm/year and -2.35 µm/year, respectively. Simulations showed that, for the average measurement frequency in our sample of approximately 3 OCT scans over a 2-year period, moderate and rapid RNFL worsening were diagnosed accurately only 47% and 40% of the time, respectively. Estimates for the number of OCT scans needed to achieve a range of accuracy levels are provided. For example, 60% accuracy requires 7 measurements to detect both moderate and rapid worsening within a 2-year period if the more efficient clustered measurement strategy is used. CONCLUSIONS: To diagnose RNFL worsening more accurately, the number of OCT scans must be increased compared with current clinical practice. A clustered measurement strategy reduces the number of scans required compared with evenly spacing measurements.


Subject(s)
Glaucoma ; Ocular Hypertension ; Optic Disk ; Optic Nerve Diseases ; Adult ; Humans ; Tomography, Optical Coherence/methods ; Optic Nerve Diseases/diagnosis ; Intraocular Pressure ; Visual Fields ; Retinal Ganglion Cells ; Nerve Fibers ; Glaucoma/diagnosis
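
The advantage of the clustered strategy follows directly from ordinary least squares: the standard error of a fitted slope is σ/√(Σ(tᵢ − t̄)²), which shrinks as measurements move toward the endpoints of the window. A small sketch, with the per-scan noise level assumed rather than taken from the paper:

```python
# Slope standard error for evenly spaced vs endpoint-clustered scan schedules.
import numpy as np

def slope_sd(times, sigma=2.0):  # sigma: per-scan RNFL noise in um (assumed)
    t = np.asarray(times, dtype=float)
    return sigma / np.sqrt(np.sum((t - t.mean()) ** 2))

n, T = 7, 2.0                                    # 7 scans over 2 years
evenly = np.linspace(0, T, n)
clustered = np.array([0, 0, 0, T / 2, T, T, T])  # ~half at each endpoint

print(f"slope SD, evenly spaced: {slope_sd(evenly):.2f} um/year")    # ~1.13
print(f"slope SD, clustered:     {slope_sd(clustered):.2f} um/year") # ~0.82
```
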
5.
Ophthalmology ; 130(8): 854-862, 2023 08.
Article in English | MEDLINE | ID: mdl-37003520

ABSTRACT

PURPOSE: To identify visual field (VF) worsening from longitudinal OCT data using a gated transformer network (GTN) and to examine how GTN performance varies for different definitions of VF worsening and different stages of glaucoma severity at baseline. DESIGN: Retrospective longitudinal cohort study. PARTICIPANTS: A total of 4211 eyes (2666 patients) followed up at the Johns Hopkins Wilmer Eye Institute with at least 5 reliable VF results and 1 reliable OCT scan within 1 year of each reliable VF test. METHODS: For each eye, we used 3 trend-based methods (mean deviation [MD] slope, VF index slope, and pointwise linear regression) and 3 event-based methods (Guided Progression Analysis, Collaborative Initial Glaucoma Treatment Study scoring system, and Advanced Glaucoma Intervention Study [AGIS] scoring system) to define VF worsening. Additionally, we developed a "majority of 6" algorithm (M6) that classifies an eye as worsening if 4 or more of the 6 aforementioned methods classified the eye as worsening. Using these 7 reference standards for VF worsening, we trained 7 GTNs that accept a series of at least 5 OCT scans as input and output a probability of VF worsening. Gated transformer network performance was compared with that of non-deep learning models with the same serial OCT input from previous studies (linear mixed-effects models [MEMs] and naive Bayes classifiers [NBCs]), using the same training sets and reference standards as the GTNs. MAIN OUTCOME MEASURES: Area under the receiver operating characteristic curve (AUC). RESULTS: The M6 labeled 63 eyes (1.50%) as worsening. The GTN achieved an AUC of 0.97 (95% confidence interval, 0.88-1.00) when trained with M6. Gated transformer networks trained and optimized with the other 6 reference standards showed an AUC ranging from 0.78 (MD slope) to 0.89 (AGIS). The 7 GTNs outperformed the corresponding 7 MEMs and 7 NBCs. Gated transformer network performance was worse for eyes with more severe glaucoma at baseline. CONCLUSIONS: Gated transformer network models trained with OCT data may be used to identify VF worsening. After further validation, implementing such models in clinical practice may allow us to track functional worsening of glaucoma with less onerous structural testing. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found after the references.


Subject(s)
Glaucoma ; Visual Fields ; Humans ; Retrospective Studies ; Bayes Theorem ; Tomography, Optical Coherence ; Longitudinal Studies ; Vision Disorders/diagnosis ; Glaucoma/diagnosis ; Visual Field Tests/methods ; Intraocular Pressure ; Disease Progression
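
The M6 reference standard itself is a one-line rule; a sketch using the six method names from the abstract and hypothetical per-eye votes:

```python
# "Majority of 6" (M6): an eye is labeled worsening when at least 4 of the
# 6 trend- and event-based methods agree. Per-eye booleans are hypothetical.
def m6_label(votes: dict[str, bool]) -> bool:
    assert set(votes) == {"md_slope", "vfi_slope", "plr", "gpa", "cigts", "agis"}
    return sum(votes.values()) >= 4

eye = {"md_slope": True, "vfi_slope": True, "plr": False,
       "gpa": True, "cigts": True, "agis": False}
print(m6_label(eye))  # True: 4 of 6 methods call this eye worsening
```
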
6.
Emerg Radiol ; 30(1): 41-50, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36371579

ABSTRACT

BACKGROUND: The American Association for the Surgery of Trauma (AAST) splenic organ injury scale (OIS) is the most frequently used CT-based grading system for blunt splenic trauma. However, reported inter-rater agreement is modest, and an algorithm that objectively automates grading based on transparent and verifiable criteria could serve as a high-trust diagnostic aid. PURPOSE: To pilot the development of an automated interpretable multi-stage deep learning-based system to predict AAST grade from admission trauma CT. METHODS: Our pipeline includes 4 parts: (1) automated splenic localization, (2) Faster R-CNN-based detection of pseudoaneurysms (PSA) and active bleeds (AB), (3) nnU-Net segmentation and quantification of splenic parenchymal disruption (SPD), and (4) a directed graph that infers AAST grades from detection and segmentation results. Training and validation were performed on a dataset of adult patients (age ≥ 18) with voxelwise labeling, consensus AAST grading, and hemorrhage-related outcome data (n = 174). RESULTS: AAST classification agreement (weighted κ) between automated and consensus AAST grades was substantial (0.79). High-grade (IV and V) injuries were predicted with accuracy, positive predictive value, and negative predictive value of 92%, 95%, and 89%. The area under the curve for predicting hemorrhage control intervention was comparable between expert consensus and automated AAST grading (0.83 vs 0.88). The mean combined inference time for the pipeline was 96.9 s. CONCLUSIONS: The method produced rapid, verifiable results, with high agreement between automated and expert consensus grades. Diagnosis of high-grade lesions and prediction of hemorrhage control intervention produced accurate results in adult patients.


Subject(s)
Tomography, X-Ray Computed ; Wounds, Nonpenetrating ; Adult ; Humans ; United States ; Tomography, X-Ray Computed/methods ; Predictive Value of Tests ; Wounds, Nonpenetrating/surgery ; Spleen/injuries ; Hemorrhage ; Retrospective Studies
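
The final stage, a directed graph mapping detections and SPD quantification to an AAST grade, can be sketched as rule-based branching over the upstream outputs. The thresholds and rules below are invented for illustration and are not the study's calibrated criteria:

```python
# Schematic of a rule-based grading stage fed by detection/segmentation
# outputs. Thresholds are made up; only the structure mirrors the abstract.
from dataclasses import dataclass

@dataclass
class SpleenFindings:
    has_psa_or_bleed: bool   # Faster R-CNN: pseudoaneurysm or active bleed
    active_bleed: bool       # active extravasation specifically
    spd_fraction: float      # nnU-Net: fraction of parenchyma disrupted

def infer_aast_grade(f: SpleenFindings) -> int:
    if f.active_bleed and f.spd_fraction > 0.5:   # hypothetical grade-V rule
        return 5
    if f.has_psa_or_bleed:                        # vascular injury -> IV
        return 4
    if f.spd_fraction > 0.25:
        return 3
    return 2 if f.spd_fraction > 0.1 else 1

print(infer_aast_grade(SpleenFindings(True, False, 0.2)))  # -> 4
```
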
7.
Ophthalmology ; 129(1): 35-44, 2022 01.
Article in English | MEDLINE | ID: mdl-34506846

ABSTRACT

PURPOSE: To estimate the effect of achieving target intraocular pressure (IOP) values on visual field (VF) worsening in a treated clinical population. DESIGN: Retrospective analysis of longitudinal data. PARTICIPANTS: A total of 2852 eyes of 1688 patients with glaucoma-related diagnoses treated in a tertiary care practice. All included eyes had at least 5 reliable VF tests and 5 IOP measures on separate visits along with at least 1 target IOP defined by a clinician on the first or second visit. METHODS: The primary dependent variable was the slope of the mean deviation (MD) over time (decibels [dB]/year). The primary independent variable was mean target difference (measured IOP - target IOP). We created simple linear regression models and mixed-effects linear models to study the relationship between MD slope and mean target difference for individual eyes. In the mixed-effects models, we included an interaction term to account for disease severity (mild/suspect, moderate, or advanced) and a spline term to account for the differing effects of achieving target IOP (target difference ≤0) and failing to achieve target IOP (target difference >0). MAIN OUTCOME MEASURES: Rate of change in MD slope (changes in dB/year) per 1 mmHg change in target difference at different stages of glaucoma severity. RESULTS: Across all eyes, a simple linear regression model demonstrated that a 1 mmHg increase in target difference had a -0.018 dB/year (confidence interval [CI], -0.026 to -0.011; P < 0.05) effect on MD slope. The mixed-effects model shows that eyes with moderate disease that fail to achieve their target IOP experience the largest effects, with a 1 mmHg increase in target difference resulting in a -0.119 dB/year (CI, -0.168 to -0.070; P < 0.05) worse MD slope. The effects of missing target IOP on VF worsening were more pronounced than the effect of absolute level of IOP on VF worsening, where a 1 mmHg increase in IOP had a -0.004 dB/year (CI, -0.011 to 0.003; P > 0.05) effect on the MD slope. CONCLUSIONS: In treated patients, failing to achieve target IOP was associated with more rapid VF worsening. Eyes with moderate glaucoma experienced the greatest VF worsening from failing to achieve target IOP.


Subject(s)
Glaucoma, Open-Angle/physiopathology ; Intraocular Pressure/physiology ; Vision Disorders/physiopathology ; Visual Fields/physiology ; Aged ; Aged, 80 and over ; Corneal Pachymetry ; Disease Progression ; Female ; Glaucoma, Open-Angle/diagnosis ; Humans ; Male ; Middle Aged ; Ocular Hypertension/diagnosis ; Ocular Hypertension/physiopathology ; Retrospective Studies ; Risk Factors ; Severity of Illness Index ; Tonometry, Ocular ; Vision Disorders/diagnosis ; Visual Field Tests
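
The "spline term" at a target difference of 0 is a hinge covariate that lets achieving versus missing target IOP carry different slopes. The study fit mixed-effects models; the sketch below only shows the hinge construction, using simulated data and ordinary least squares, with all coefficients invented:

```python
# Hinge (piecewise-linear) term: max(target_diff, 0) is active only when
# measured IOP exceeds target IOP. Data and effect sizes are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
target_diff = rng.normal(0, 3, 500)            # measured IOP - target IOP, mmHg
hinge = np.maximum(target_diff, 0)             # "missed target" excess
md_slope = -0.02 - 0.10 * hinge + rng.normal(0, 0.2, 500)  # dB/year

X = sm.add_constant(np.column_stack([target_diff, hinge]))
fit = sm.OLS(md_slope, X).fit()
print(fit.params)  # intercept, overall IOP effect, extra effect above target
```
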
8.
Emerg Radiol ; 29(6): 995-1002, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35971025

ABSTRACT

PURPOSE: We employ nnU-Net, a state-of-the-art self-configuring deep learning-based semantic segmentation method for quantitative visualization of hemothorax (HTX) in trauma patients, and assess performance using a combination of overlap and volume-based metrics. The accuracy of hemothorax volumes for predicting a composite of hemorrhage-related outcomes - massive transfusion (MT) and in-hospital mortality (IHM) not related to traumatic brain injury - is assessed and compared to subjective expert consensus grading by an experienced chest and emergency radiologist. MATERIALS AND METHODS: The study included manually labeled admission chest CTs from 77 consecutive adult patients with non-negligible (≥ 50 mL) traumatic HTX between 2016 and 2018 from one trauma center. DL results of ensembled nnU-Net were determined from fivefold cross-validation and compared to individual 2D, 3D, and cascaded 3D nnU-Net results using the Dice similarity coefficient (DSC) and volume similarity index. Pearson's r, intraclass correlation coefficient (ICC), and mean bias were also determined for the best performing model. Manual and automated hemothorax volumes and subjective hemothorax volume grades were analyzed as predictors of MT and IHM using AUC comparison. Volume cut-offs yielding sensitivity or specificity ≥ 90% were determined from ROC analysis. RESULTS: Ensembled nnU-Net achieved a mean DSC of 0.75 (SD: ± 0.12), and mean volume similarity of 0.91 (SD: ± 0.10), Pearson r of 0.93, and ICC of 0.92. Mean overmeasurement bias was only 1.7 mL despite a range of manual HTX volumes from 35 to 1503 mL (median: 178 mL). AUC of automated volumes for the composite outcome was 0.74 (95%CI: 0.58-0.91), compared to 0.76 (95%CI: 0.58-0.93) for manual volumes, and 0.76 (95%CI: 0.62-0.90) for consensus expert grading (p = 0.93). Automated volume cut-offs of 77 mL and 334 mL predicted the outcome with 93% sensitivity and 90% specificity respectively. CONCLUSION: Automated HTX volumetry had high method validity, yielded interpretable visual results, and had similar performance for the hemorrhage-related outcomes assessed compared to manual volumes and expert consensus grading. The results suggest promising avenues for automated HTX volumetry in research and clinical care.


Subject(s)
Deep Learning ; Thoracic Injuries ; Adult ; Humans ; Hemothorax/diagnostic imaging ; Pilot Projects ; Thoracic Injuries/complications ; Thoracic Injuries/diagnostic imaging ; Tomography, X-Ray Computed/methods
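
The two headline segmentation metrics, overlap (DSC) and volume agreement, are easy to compute for binary masks. VS = 1 − |Va − Vb| / (Va + Vb) is a standard definition of the volume similarity index, assumed here to match the paper's usage:

```python
# Dice similarity coefficient vs volume similarity on toy binary masks:
# identical volumes can still overlap imperfectly, which is why both matter.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_similarity(a: np.ndarray, b: np.ndarray) -> float:
    va, vb = a.sum(), b.sum()
    return 1.0 - abs(va - vb) / (va + vb)

pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True
gt   = np.zeros((4, 4), bool); gt[1:3, 0:3] = True
print(f"DSC: {dice(pred, gt):.2f}, VS: {volume_similarity(pred, gt):.2f}")
# DSC: 0.67, VS: 1.00 -- same volume, shifted location
```
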
9.
J Neuroophthalmol ; 41(3): 368-374, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34415271

ABSTRACT

BACKGROUND: To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS: Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve [AUC-ROC]) were used for evaluation. RESULTS: During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION: In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which if successful could be beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists and ophthalmologists in general in the near future.


Subject(s)
Algorithms ; Deep Learning ; Diagnostic Techniques, Ophthalmological ; Optic Disk/abnormalities ; Optic Nerve Diseases/diagnosis ; Humans ; Optic Disk/diagnostic imaging ; ROC Curve
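
The transfer-learning setup described (an ImageNet-pretrained ResNet-152 with a new binary head) takes only a few lines in torchvision. Whether the backbone was frozen is not stated in the abstract, so the freezing step below is an assumption, and the training loop is omitted:

```python
# Minimal transfer-learning skeleton for binary normal/abnormal optic disc
# classification; training details are not taken from the paper.
import torch.nn as nn
from torchvision import models

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
for p in model.parameters():       # optionally freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable 2-class head
```
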
10.
J Digit Imaging ; 34(1): 53-65, 2021 02.
Article in English | MEDLINE | ID: mdl-33479859

ABSTRACT

Admission trauma whole-body CT is routinely employed as a first-line diagnostic tool for characterizing pelvic fracture severity. Tile AO/OTA grade based on the presence or absence of rotational and translational instability corresponds with the need for interventions including massive transfusion and angioembolization. An automated method could be highly beneficial for point-of-care triage in this critical time-sensitive setting. A dataset of 373 trauma whole-body CTs collected from two busy level 1 trauma centers with consensus Tile AO/OTA grading by three trauma radiologists was used to train and test a triplanar parallel concatenated network incorporating orthogonal full-thickness multiplanar reformat (MPR) views as input with a ResNeXt-50 backbone. Input pelvic images were first derived using an automated registration and cropping technique. Performance of the network for classification of rotational and translational instability was compared with that of (1) an analogous triplanar architecture incorporating an LSTM RNN network, (2) a previously described 3D autoencoder-based method, and (3) grading by a fourth independent blinded radiologist with trauma expertise. Confusion matrix results were derived, anchored to peak Matthews correlation coefficient (MCC). Associations with clinical outcomes were determined using Fisher's exact test. The triplanar parallel concatenated method had the highest accuracies for discriminating translational and rotational instability (85% and 74%, respectively), with specificity, recall, and F1 score of 93.4%, 56.5%, and 0.63 for translational instability and 71.7%, 75.7%, and 0.77 for rotational instability. Accuracy of this method was equivalent to the single radiologist read for rotational instability (74.0% versus 76.7%, p = 0.40), but significantly higher for translational instability (85.0% versus 75.1%, p = 0.0007). Mean inference time was < 0.1 s per test image. Translational instability determined with this method was associated with need for angioembolization and massive transfusion (p = 0.002-0.008). Saliency maps demonstrated that the network focused on the sacroiliac complex and pubic symphysis, in keeping with the AO/OTA grading paradigm. A multiview concatenated deep network leveraging 3D information from orthogonal thick-MPR images predicted rotationally and translationally unstable pelvic fractures with accuracy comparable to an independent reader with trauma radiology expertise. Model output demonstrated significant association with key clinical outcomes.


Subject(s)
Deep Learning ; Fractures, Bone ; Pelvic Bones ; Fractures, Bone/diagnostic imaging ; Humans ; Pelvic Bones/diagnostic imaging ; Pelvis ; Tomography, X-Ray Computed
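
A sketch of what a triplanar parallel concatenated design can look like: three orthogonal thick-MPR views each pass through a ResNeXt-50 backbone and the pooled features are concatenated before a classification head. Whether backbones were shared, the feature dimensions, and the head are assumptions rather than the published configuration:

```python
# Triplanar parallel concatenation with ResNeXt-50 backbones (sketch).
import torch
import torch.nn as nn
from torchvision import models

class TriplanarNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        def backbone():
            m = models.resnext50_32x4d(weights=None)
            m.fc = nn.Identity()          # expose 2048-d pooled features
            return m
        self.axial, self.coronal, self.sagittal = backbone(), backbone(), backbone()
        self.head = nn.Linear(3 * 2048, n_classes)

    def forward(self, ax, co, sa):
        feats = torch.cat([self.axial(ax), self.coronal(co), self.sagittal(sa)], dim=1)
        return self.head(feats)

net = TriplanarNet()
out = net(*(torch.randn(1, 3, 224, 224) for _ in range(3)))
print(out.shape)  # torch.Size([1, 2])
```
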
11.
Surg Innov ; 28(2): 208-213, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33980097

ABSTRACT

As the scope and scale of the COVID-19 pandemic became clear in early March of 2020, the faculty of the Malone Center engaged in several projects aimed at addressing both immediate and long-term implications of COVID-19. In this article, we briefly outline the processes that we engaged in to identify areas of need, the projects that emerged, and the results of those projects. As we write, some of these projects have reached a natural termination point, whereas others continue. We identify some of the factors that led to projects that moved to implementation, as well as factors that led projects to fail to progress or to be abandoned.


Subject(s)
Biomedical Engineering ; COVID-19/prevention & control ; Biomedical Engineering/instrumentation ; Biomedical Engineering/methods ; Biomedical Engineering/organization & administration ; Databases, Factual ; Humans ; Nebraska ; Pandemics ; SARS-CoV-2
12.
Proc IEEE Inst Electr Electron Eng ; 108(1): 198-214, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31920208

ABSTRACT

Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy, and speed that are already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; and how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making, ultimately producing more precise and reliable interventions.

13.
Int J Comput Assist Radiol Surg ; 19(6): 1165-1173, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38619790

ABSTRACT

PURPOSE: The expanding capabilities of surgical systems bring with them increasing complexity in the interfaces that humans use to control them. Robotic C-arm X-ray imaging systems, for instance, often require manipulation of independent axes via joysticks, while higher-level control options hide inside device-specific menus. The complexity of these interfaces hinders "ready-to-hand" use of high-level functions. Natural language offers a flexible, familiar interface for surgeons to express their desired outcome rather than remembering the steps necessary to achieve it, enabling direct access to task-aware, patient-specific C-arm functionality. METHODS: We present an English language voice interface for controlling a robotic X-ray imaging system with task-aware functions for pelvic trauma surgery. Our fully integrated system uses a large language model (LLM) to convert natural spoken commands into machine-readable instructions, enabling low-level commands like "Tilt back a bit" to increase the angular tilt, or patient-specific directions like "Go to the obturator oblique view of the right ramus," based on automated image analysis. RESULTS: We evaluated our system with 212 prompts provided by an attending physician, in which the system performed a satisfactory action 97% of the time. To test the fully integrated system, we conducted a real-time study in which an attending physician placed orthopedic hardware along desired trajectories through an anthropomorphic phantom, interacting solely with the X-ray system via voice. CONCLUSION: Voice interfaces offer a convenient, flexible way for surgeons to manipulate C-arms based on desired outcomes rather than device-specific processes. As LLMs grow increasingly capable, so too will their applications in supporting higher-level interactions with surgical assistance systems.


Subject(s)
Robotic Surgical Procedures ; Humans ; Robotic Surgical Procedures/methods ; Robotic Surgical Procedures/instrumentation ; User-Computer Interface ; Pelvis/surgery ; Natural Language Processing
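
The voice-to-command layer can be sketched as: transcribe speech, then ask an LLM to emit a machine-readable instruction that a C-arm controller can dispatch. The JSON schema and the `llm_complete` call below are placeholders, not the system's actual interface:

```python
# Sketch of LLM-based command parsing for a C-arm controller.
import json

SYSTEM_PROMPT = """Convert the surgeon's request into JSON:
{"action": "tilt"|"rotate"|"goto_view", "degrees": float|null, "view": str|null}
Respond with JSON only."""

def llm_complete(system: str, user: str) -> str:
    raise NotImplementedError("call your LLM provider here")  # placeholder

def parse_command(transcript: str) -> dict:
    reply = llm_complete(SYSTEM_PROMPT, transcript)
    cmd = json.loads(reply)                      # reject non-JSON replies
    assert cmd["action"] in {"tilt", "rotate", "goto_view"}
    return cmd

# e.g. "Tilt back a bit" -> {"action": "tilt", "degrees": -5.0, "view": None}
```
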
14.
Int J Comput Assist Radiol Surg ; 19(7): 1301-1312, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38709423

ABSTRACT

PURPOSE: Specialized robotic and surgical tools are increasing the complexity of operating rooms (ORs), requiring elaborate preparation especially when techniques or devices are to be used for the first time. Spatial planning can improve efficiency and identify procedural obstacles ahead of time, but real ORs offer little availability to optimize space utilization. Methods for creating reconstructions of physical setups, i.e., digital twins, are needed to enable immersive spatial planning of such complex environments in virtual reality. METHODS: We present a neural rendering-based method to create immersive digital twins of complex medical environments and devices from casual video capture that enables spatial planning of surgical scenarios. To evaluate our approach we recreate two operating rooms and ten objects through neural reconstruction, then conduct a user study with 21 graduate students carrying out planning tasks in the resulting virtual environment. We analyze task load, presence, and perceived utility, as well as exploration and interaction behavior, compared to low visual complexity versions of the same environments. RESULTS: Results show significantly increased perceived utility and presence using the neural reconstruction-based environments, combined with higher perceived workload and more exploratory behavior. There was no significant difference in interactivity. CONCLUSION: We explore the feasibility of using modern reconstruction techniques to create digital twins of complex medical environments and objects. Without requiring expert knowledge or specialized hardware, users can create, explore and interact with objects in virtual environments. Results indicate benefits such as high perceived utility while remaining technically approachable, which suggests promise of this approach for spatial planning and beyond.


Subject(s)
Operating Rooms ; Virtual Reality ; Humans ; User-Computer Interface ; Female ; Male ; Adult ; Feasibility Studies ; Robotic Surgical Procedures/methods
15.
IEEE Trans Med Imaging ; 43(1): 275-285, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549070

ABSTRACT

Image-based 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shape similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE) and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4 mm with a SR of 65.6% in simulation, and 2.2 mm with a SR of 73.2% in real data. The CMAES SRs without using ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.


Subject(s)
Imaging, Three-Dimensional ; Pelvis ; Imaging, Three-Dimensional/methods ; Fluoroscopy/methods ; Software ; Algorithms
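
One way to read the "double backward gradient-driven loss": penalize the mismatch between the gradient of the learned similarity with respect to pose and the gradient of a true pose error, which requires a second backward pass through the first gradient. The network interface and the target-gradient construction below are assumptions, not the published loss; see the repository linked above for the actual implementation:

```python
# Double-backward training step (sketch): the loss lives on d(similarity)/d(pose).
import torch

def double_backward_step(net, pose, drr_fixed, target_pose, opt):
    pose = pose.clone().requires_grad_(True)
    sim = net(drr_fixed, pose)                        # learned similarity (stand-in API)
    g_sim, = torch.autograd.grad(sim.sum(), pose, create_graph=True)
    g_err = 2 * (pose - target_pose)                  # grad of squared pose error
    loss = torch.nn.functional.mse_loss(              # align gradient directions
        g_sim / g_sim.norm(), g_err / g_err.norm())
    opt.zero_grad(); loss.backward(); opt.step()      # second backward pass
    return loss.item()
```
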
16.
Mach Learn Med Imaging ; 14349: 205-213, 2024.
Article in English | MEDLINE | ID: mdl-38617846

ABSTRACT

The synergy of long-range dependencies from transformers and local representations of image content from convolutional neural networks (CNNs) has led to advanced architectures and increased performance for various medical image analysis tasks due to their complementary benefits. However, compared with CNNs, transformers require considerably more training data, due to a larger number of parameters and an absence of inductive bias. The need for increasingly large datasets continues to be problematic, particularly in the context of medical imaging, where both annotation efforts and data protection result in limited data availability. In this work, inspired by the human decision-making process of correlating new "evidence" with previously memorized "experience", we propose a Memorizing Vision Transformer (MoViT) to alleviate the need for large-scale datasets to successfully train and deploy transformer-based architectures. MoViT leverages an external memory structure to cache history attention snapshots during the training stage. To prevent overfitting, we incorporate an innovative memory update scheme, attention temporal moving average, to update the stored external memories with the historical moving average. For inference speedup, we design a prototypical attention learning method to distill the external memory into smaller representative subsets. We evaluate our method on a public histology image dataset and an in-house MRI dataset, demonstrating that MoViT applied to varied medical image analysis tasks, can outperform vanilla transformer models across varied data regimes, especially in cases where only a small amount of annotated data is available. More importantly, MoViT can reach a competitive performance of ViT with only 3.0% of the training data. In conclusion, MoViT provides a simple plug-in for transformer architectures which may contribute to reducing the training data needed to achieve acceptable models for a broad range of medical image analysis tasks.
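
The attention temporal moving average can be sketched as an exponential moving average over cached key/value snapshots, so history is blended rather than overwritten. The decay constant and storage layout below are assumptions:

```python
# External attention memory refreshed with an exponential moving average.
import torch

class AttentionMemory:
    def __init__(self, n_slots: int, dim: int, decay: float = 0.99):
        self.k = torch.zeros(n_slots, dim)   # cached attention keys
        self.v = torch.zeros(n_slots, dim)   # cached attention values
        self.decay = decay

    @torch.no_grad()
    def update(self, new_k: torch.Tensor, new_v: torch.Tensor):
        # attention temporal moving average: blend history with the new snapshot
        self.k.mul_(self.decay).add_(new_k, alpha=1 - self.decay)
        self.v.mul_(self.decay).add_(new_v, alpha=1 - self.decay)
```
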

17.
Sci Rep ; 14(1): 599, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38182701

ABSTRACT

To develop and evaluate the performance of a deep learning model (DLM) that predicts eyes at high risk of surgical intervention for uncontrolled glaucoma based on multimodal data from an initial ophthalmology visit. Longitudinal, observational, retrospective study. 4898 unique eyes from 4038 adult glaucoma or glaucoma-suspect patients who underwent surgery for uncontrolled glaucoma (trabeculectomy, tube shunt, Xen, or diode surgery) between 2013 and 2021, or did not undergo glaucoma surgery but had 3 or more ophthalmology visits. We constructed a DLM to predict the occurrence of glaucoma surgery within various time horizons from a baseline visit. Model inputs included spatially oriented visual field (VF) and optical coherence tomography (OCT) data as well as clinical and demographic features. Separate DLMs with the same architecture were trained to predict the occurrence of surgery within 3 months, within 3-6 months, within 6 months-1 year, within 1-2 years, within 2-3 years, within 3-4 years, and within 4-5 years from the baseline visit. Included eyes were randomly split into 60%, 20%, and 20% for training, validation, and testing. DLM performance was measured using area under the receiver operating characteristic curve (AUC) and precision-recall curve (PRC). Shapley additive explanations (SHAP) were utilized to assess the importance of different features. Model prediction of surgery for uncontrolled glaucoma within 3 months had the best AUC of 0.92 (95% CI 0.88, 0.96). DLMs achieved clinically useful AUC values (> 0.8) for all models that predicted the occurrence of surgery within 3 years. According to SHAP analysis, all 7 models placed intraocular pressure (IOP) within the five most important features in predicting the occurrence of glaucoma surgery. Mean deviation (MD) and average retinal nerve fiber layer (RNFL) thickness were listed among the top 5 most important features by 6 of the 7 models. DLMs can successfully identify eyes requiring surgery for uncontrolled glaucoma within specific time horizons. Predictive performance decreases as the time horizon for forecasting surgery increases. Implementing prediction models in a clinical setting may help identify patients who should be referred to a glaucoma specialist for surgical evaluation.


Subject(s)
Deep Learning ; Glaucoma ; Ophthalmology ; Trabeculectomy ; Adult ; Humans ; Retrospective Studies ; Glaucoma/surgery ; Retina
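
A sketch of the SHAP feature-ranking step for a model with tabular inputs such as IOP, MD, and average RNFL thickness; the model wrapper and data are placeholders, and multi-output models return differently shaped SHAP arrays:

```python
# Rank features by mean absolute SHAP value (scalar-output model assumed).
import numpy as np
import shap  # pip install shap

def top_features(model, X: np.ndarray, names: list[str], k: int = 5):
    explainer = shap.Explainer(model, X)
    values = explainer(X).values                 # (n_samples, n_features)
    ranking = np.abs(values).mean(axis=0).argsort()[::-1]
    return [names[i] for i in ranking[:k]]
```
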
18.
PLoS One ; 19(1): e0296674, 2024.
Article in English | MEDLINE | ID: mdl-38215176

ABSTRACT

Linear regression of optical coherence tomography measurements of peripapillary retinal nerve fiber layer thickness is often used to detect glaucoma progression and forecast future disease course. However, current measurement frequencies suggest that clinicians often apply linear regression to a relatively small number of measurements (e.g., less than a handful). In this study, we estimate the accuracy of linear regression in predicting the next reliable measurement of average retinal nerve fiber layer thickness using Zeiss Cirrus optical coherence tomography measurements of average retinal nerve fiber layer thickness from a sample of 6,471 eyes with glaucoma or glaucoma-suspect status. Linear regression is compared to two null models: no glaucoma worsening, and worsening due to aging. Linear regression on the first M ≥ 2 measurements was significantly worse at predicting a reliable (M+1)st measurement for 2 ≤ M ≤ 6. This range was reduced to 2 ≤ M ≤ 5 when retinal nerve fiber layer thickness measurements were first "corrected" for scan quality. Simulations based on measurement frequencies in our sample (on average 393 ± 190 days between consecutive measurements) show that linear regression outperforms both null models when M ≥ 5 and the goal is to forecast moderate (75th percentile) worsening, and when M ≥ 3 for rapid (90th percentile) worsening. If linear regression is used to assess disease trajectory with a small number of measurements over short time periods (e.g., 1-2 years), as is often the case in clinical practice, the number of optical coherence tomography examinations needs to be increased.


Subject(s)
Glaucoma ; Tomography, Optical Coherence ; Humans ; Tomography, Optical Coherence/methods ; Linear Models ; Retinal Ganglion Cells ; Glaucoma/diagnostic imaging ; Nerve Fibers ; Intraocular Pressure
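
The three predictors being compared can be written down directly: extrapolate an OLS fit, carry the mean forward ("no worsening"), or apply a fixed age-related decline. The aging rate below is a commonly cited approximation, not the paper's value, and the visit times are illustrative:

```python
# Predict the (M+1)-th RNFL measurement from the first M measurements.
import numpy as np

AGING_RATE = -0.5  # um/year; assumed normal age-related RNFL loss

def predict_next(t: np.ndarray, y: np.ndarray, t_next: float) -> dict:
    slope, intercept = np.polyfit(t, y, 1)      # OLS on the first M points
    return {
        "linear_regression": slope * t_next + intercept,
        "no_worsening": y.mean(),
        "aging_only": y.mean() + AGING_RATE * (t_next - t.mean()),
    }

t = np.array([0.0, 1.1, 2.0, 3.2])       # years since baseline (illustrative)
y = np.array([92.0, 91.0, 90.5, 88.9])   # um, average RNFL thickness
print(predict_next(t, y, t_next=4.1))
```
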
19.
Ophthalmol Glaucoma ; 7(3): 222-231, 2024.
Article in English | MEDLINE | ID: mdl-38296108

ABSTRACT

PURPOSE: Develop and evaluate the performance of a deep learning model (DLM) that forecasts eyes with low future visual field (VF) variability, and study the impact of using this DLM on sample size requirements for neuroprotective trials. DESIGN: Retrospective cohort and simulation study. METHODS: We included 1 eye per patient with baseline reliable VFs, OCT, clinical measures (demographics, intraocular pressure, and visual acuity), and 5 subsequent reliable VFs to forecast VF variability using DLMs and perform sample size estimates. We estimated sample size for 3 groups of eyes: all eyes (AE), low variability eyes (LVE: the subset of AE with a standard deviation of mean deviation [MD] slope residuals in the bottom 25th percentile), and DLM-predicted low variability eyes (DLPE: the subset of AE predicted to be low variability by the DLM). Deep learning models using only baseline VF/OCT/clinical data as input (DLM1), or also using a second VF (DLM2) were constructed to predict low VF variability (DLPE1 and DLPE2, respectively). Data were split 60/10/30 into train/val/test. Clinical trial simulations were performed only on the test set. We estimated the sample size necessary to detect treatment effects of 20% to 50% in MD slope with 80% power. Power was defined as the percentage of simulated clinical trials where the MD slope was significantly worse than the control. Clinical trials were simulated with visits every 3 months with a total of 10 visits. RESULTS: A total of 2817 eyes were included in the analysis. Deep learning models 1 and 2 achieved an area under the receiver operating characteristic curve of 0.73 (95% confidence interval [CI]: 0.68, 0.76) and 0.82 (95% CI: 0.78, 0.85) in forecasting low VF variability. When compared with including AE, using DLPE1 and DLPE2 reduced sample size to achieve 80% power by 30% and 38% for 30% treatment effect, and 31% and 38% for 50% treatment effect. CONCLUSIONS: Deep learning models can forecast eyes with low VF variability using data from a single baseline clinical visit. This can reduce sample size requirements, and potentially reduce the burden of future glaucoma clinical trials. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.


Subject(s)
Deep Learning ; Intraocular Pressure ; Visual Fields ; Humans ; Visual Fields/physiology ; Retrospective Studies ; Intraocular Pressure/physiology ; Female ; Male ; Clinical Trials as Topic ; Glaucoma/physiopathology ; Glaucoma/diagnosis ; Visual Acuity/physiology ; Aged ; Visual Field Tests/methods ; Middle Aged ; Tomography, Optical Coherence/methods
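
The power computation lends itself to a compact simulation: draw per-eye MD slopes for control and treated arms, add measurement noise across the 10 visits, fit per-eye slopes, and count trials with a significant benefit. All distribution parameters below are illustrative assumptions, not the study's estimates:

```python
# Trial simulation: power for detecting a treatment effect on MD slope.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_power(n_per_arm, effect=0.3, slope_mu=-0.5, slope_sd=0.5,
                   noise_sd=1.0, n_trials=500):
    t = np.linspace(0, 2.25, 10)          # 10 visits, every 3 months
    X = t - t.mean()

    def fitted_slopes(mu):                # per-eye OLS slope estimates
        s = rng.normal(mu, slope_sd, n_per_arm)
        noise = rng.normal(0, noise_sd, (n_per_arm, t.size))
        return s + (noise @ X) / (X @ X)

    hits = 0
    for _ in range(n_trials):
        ctl = fitted_slopes(slope_mu)
        trt = fitted_slopes(slope_mu * (1 - effect))   # e.g. 30% slower worsening
        res = stats.ttest_ind(trt, ctl)
        hits += (res.pvalue < 0.05) and (trt.mean() > ctl.mean())
    return hits / n_trials

# Sample size estimate: scan n_per_arm upward until simulate_power(n) >= 0.8.
```
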
20.
Med Phys ; 51(6): 4158-4180, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733602

ABSTRACT

PURPOSE: Interventional Cone-Beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics that are based on first-order image properties and lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric, $VIF_{DL}$, that quantifies the amount of motion degradation as well as the realism of the structural anatomical content in the image. METHODS: The proposed $VIF_{DL}$ was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric, visual information fidelity (VIF). The deep CNN acted on motion-corrupted images, providing an estimation of the spatial VIF map that would be obtained against a motion-free reference, capturing motion distortion and anatomic plausibility. The deep CNN featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest. A second contextual, low-resolution branch provided features associated with anatomical context for disentanglement of motion effects and anatomical appearance. The deep CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol involving 120 kV and 9.90 mGy. The performance of $VIF_{DL}$ was evaluated via metrics of correlation with the ground truth $VIF$ and with the underlying deformable motion field in simulated data with deformable motion fields with amplitude ranging from 5 to 20 mm and frequency from 2.4 up to 4 cycles/scan. Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90-120 kV) and dose (1.19-39.59 mGy). Further validation was obtained on experimental studies with a deformable phantom. Final validation was obtained via integration of $VIF_{DL}$ into an autofocus compensation framework, applied to motion compensation on experimental datasets and evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity. RESULTS: The magnitude and spatial map of $VIF_{DL}$ showed consistent and high correlation levels with the ground truth in both simulation and real data, yielding average normalized cross correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, $VIF_{DL}$ achieved good correlation values with the underlying motion field, with average NCC of 0.90. In experimental phantom studies, $VIF_{DL}$ properly reflected the change in motion amplitudes and frequencies: voxel-wise averaging of the local $VIF_{DL}$ across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using $VIF_{DL}$ resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft tissue and high-contrast structures, resulting in reduction of edge spread function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in an increase of 9.64% in vessel sharpness. CONCLUSION: The proposed $VIF_{DL}$, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate of structural similarity to quantify motion-induced degradation of image quality and anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric poses a powerful alternative to conventional motion estimation metrics, and a step forward for application of deep autofocus motion compensation for guidance in clinical interventional procedures.


Subject(s)
Cone-Beam Computed Tomography ; Image Processing, Computer-Assisted ; Motion ; Cone-Beam Computed Tomography/methods ; Image Processing, Computer-Assisted/methods ; Humans
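
The NCC values quoted above compare predicted and reference VIF maps; a minimal zero-mean, unit-variance implementation for arrays of any matching shape:

```python
# Normalized cross correlation between two maps (e.g., predicted vs ground
# truth VIF). Returns 1.0 for identical maps up to affine intensity scaling.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```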