Results 1 - 20 of 96
1.
Emerg Radiol; 31(2): 167-178, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38302827

ABSTRACT

PURPOSE: The AAST Organ Injury Scale is widely adopted for splenic injury severity but suffers from only moderate inter-rater agreement. This work assesses SpleenPro, a prototype interactive explainable artificial intelligence/machine learning (AI/ML) diagnostic aid to support AAST grading, for effects on radiologist dwell time, agreement, clinical utility, and user acceptance. METHODS: Two trauma radiology ad hoc expert panelists independently performed timed AAST grading on 76 admission CT studies with blunt splenic injury, first without AI/ML assistance, and after a 2-month washout period and randomization, with AI/ML assistance. To evaluate user acceptance, three versions of the SpleenPro user interface with increasing explainability were presented to four independent expert panelists with four example cases each. A structured interview consisting of Likert scales and free responses was conducted, with specific questions regarding dimensions of diagnostic utility (DU); mental support (MS); effort, workload, and frustration (EWF); trust and reliability (TR); and likelihood of future use (LFU). RESULTS: SpleenPro significantly decreased interpretation times for both raters. Weighted Cohen's kappa increased from 0.53 to 0.70 with AI/ML assistance. During user acceptance interviews, increasing explainability was associated with improvement in Likert scores for MS, EWF, TR, and LFU. Expert panelists indicated the need for a combined early notification and grading functionality, PACS integration, and report autopopulation to improve DU. CONCLUSIONS: SpleenPro was useful for improving objectivity of AAST grading and increasing mental support. Formative user research identified generalizable concepts including the need for a combined detection and grading pipeline and integration with the clinical workflow.
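The headline agreement result here is weighted Cohen's kappa rising from 0.53 to 0.70 with AI/ML assistance. As a rough illustration of how that statistic is computed (not the study's actual analysis code), a minimal pure-Python sketch using hypothetical AAST grades from two raters:

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, n_categories, weight="quadratic"):
    """Weighted Cohen's kappa for two raters on an ordinal 0..n-1 scale."""
    n = len(rater_a)
    # Observed agreement matrix as proportions
    observed = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1.0 / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    num = den = 0.0
    for i in range(n_categories):
        for j in range(n_categories):
            if weight == "quadratic":
                w = ((i - j) ** 2) / ((n_categories - 1) ** 2)
            else:  # linear weights
                w = abs(i - j) / (n_categories - 1)
            expected = (marg_a[i] / n) * (marg_b[j] / n)  # chance agreement
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den

# Hypothetical grades (0-4 standing in for AAST I-V); perfect agreement -> 1.0
print(weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))  # 1.0
```

Quadratic weights penalize disagreements by squared grade distance, which suits ordinal scales like AAST I-V where a one-grade disagreement matters less than a three-grade one.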


Subjects
Tomography, X-Ray Computed; Wounds, Nonpenetrating; Humans; Tomography, X-Ray Computed/methods; Artificial Intelligence; Reproducibility of Results; Machine Learning
2.
Alzheimers Dement; 20(4): 3074-3079, 2024 04.
Article in English | MEDLINE | ID: mdl-38324244

ABSTRACT

This perspective outlines the Artificial Intelligence and Technology Collaboratories (AITC) at Johns Hopkins University, University of Pennsylvania, and University of Massachusetts, highlighting their roles in developing AI-based technologies for older adult care, particularly targeting Alzheimer's disease (AD). These National Institute on Aging (NIA) centers foster collaboration among clinicians, gerontologists, ethicists, business professionals, and engineers to create AI solutions. Key activities include identifying technology needs, stakeholder engagement, training, mentoring, data integration, and navigating ethical challenges. The objective is to apply these innovations effectively in real-world scenarios, including in rural settings. In addition, the AITC focuses on developing best practices for AI application in the care of older adults, facilitating pilot studies, and addressing ethical concerns related to technology development for older adults with cognitive impairment, with the ultimate aim of improving the lives of older adults and their caregivers. HIGHLIGHTS: Addressing the complex needs of older adults with Alzheimer's disease (AD) requires a comprehensive approach, integrating medical and social support. Current gaps in training, techniques, tools, and expertise hinder uniform access across communities and health care settings. Artificial intelligence (AI) and digital technologies hold promise in transforming care for this demographic. Yet, transitioning these innovations from concept to marketable products presents significant challenges, often stalling promising advancements in the developmental phase. The Artificial Intelligence and Technology Collaboratories (AITC) program, funded by the National Institute on Aging (NIA), presents a viable model. 
These Collaboratories foster the development and implementation of AI methods and technologies through projects aimed at improving care for older Americans, particularly those with AD, and promote the sharing of best practices in AI and technology integration. Why Does This Matter? The National Institute on Aging (NIA) Artificial Intelligence and Technology Collaboratories (AITC) program's mission is to accelerate the adoption of artificial intelligence (AI) and new technologies for the betterment of older adults, especially those with dementia. By bridging scientific and technological expertise, fostering clinical and industry partnerships, and enhancing the sharing of best practices, this program can significantly improve the health and quality of life for older adults with Alzheimer's disease (AD).


Subjects
Alzheimer Disease; Isothiocyanates; United States; Humans; Aged; Alzheimer Disease/therapy; Artificial Intelligence; Geroscience; Quality of Life; Technology
3.
Ophthalmology; 130(6): 631-639, 2023 06.
Article in English | MEDLINE | ID: mdl-36754173

ABSTRACT

PURPOSE: To compare the accuracy of detecting moderate and rapid rates of glaucoma worsening over a 2-year period with different numbers of OCT scans and visual field (VF) tests in a large sample of glaucoma and glaucoma suspect eyes. DESIGN: Descriptive and simulation study. PARTICIPANTS: The OCT sample comprised 12 150 eyes from 7392 adults with glaucoma or glaucoma suspect status followed up at the Wilmer Eye Institute from 2013 through 2021. The VF sample comprised 20 583 eyes from 10 958 adults from the same database. All eyes had undergone at least 5 measurements over follow-up from the Zeiss Cirrus OCT or Humphrey Field Analyzer. METHODS: Within-eye rates of change in retinal nerve fiber layer (RNFL) thickness and mean deviation (MD) were measured using linear regression. For each measured rate, simulated measurements of RNFL thickness and MD were generated using the distributions of residuals. Simulated rates of change for different numbers of OCT scans and VF tests over a 2-year period were used to estimate the accuracy of detecting moderate (75th percentile) and rapid (90th percentile) worsening for OCT and VF. Accuracy was defined as the percentage of simulated eyes in which the true rate of worsening (the rate without measurement error) was at or less than a criterion rate (e.g., 75th or 90th percentile). MAIN OUTCOME MEASURES: The accuracy of diagnosing moderate and rapid rates of glaucoma worsening for different numbers of OCT scans and VF tests over a 2-year period. RESULTS: Accuracy was less than 50% for both OCT and VF when diagnosing worsening after a 2-year period. OCT accuracy was 5 to 10 percentage points higher than VF accuracy at detecting moderate worsening and 10 to 15 percentage points higher for rapid worsening. Accuracy increased by more than 17 percentage points when using both OCT and VF to detect worsening, that is, when relying on either OCT or VF to be accurate. 
CONCLUSIONS: More frequent OCT scans and VF tests are needed to improve the accuracy of diagnosing glaucoma worsening. Accuracy greatly increases when relying on both OCT and VF to detect worsening. FINANCIAL DISCLOSURE(S): The author(s) have no proprietary or commercial interest in any materials discussed in this article.


Subjects
Glaucoma; Visual Fields; Adult; Humans; Tomography, Optical Coherence/methods; Retinal Ganglion Cells; Nerve Fibers; Glaucoma/diagnosis; Visual Field Tests/methods; Intraocular Pressure
4.
Ophthalmology; 130(1): 39-47, 2023 01.
Article in English | MEDLINE | ID: mdl-35932839

ABSTRACT

PURPOSE: To estimate the number of OCT scans necessary to detect moderate and rapid rates of retinal nerve fiber layer (RNFL) thickness worsening at different levels of accuracy using a large sample of glaucoma and glaucoma-suspect eyes. DESIGN: Descriptive and simulation study. PARTICIPANTS: Twelve thousand one hundred fifty eyes from 7392 adult patients with glaucoma or glaucoma-suspect status followed up at the Wilmer Eye Institute from 2013 through 2021. All eyes had at least 5 measurements of RNFL thickness on the Cirrus OCT (Carl Zeiss Meditec) with signal strength of 6 or more. METHODS: Rates of RNFL worsening for average RNFL thickness and for the 4 quadrants were measured using linear regression. Simulations were used to estimate the accuracy of detecting worsening-defined as the percentage of patients in whom the true rate of RNFL worsening was at or less than different criterion rates of worsening when the OCT-measured rate was also at or less than these criterion rates-for two different measurement strategies: evenly spaced (equal time intervals between measurements) and clustered (approximately half the measurements at each end point of the period). MAIN OUTCOME MEASURES: The 75th percentile (moderate) and 90th percentile (rapid) rates of RNFL worsening for average RNFL thickness and the accuracy of diagnosing worsening at these moderate and rapid rates. RESULTS: The 75th and 90th percentile rates of worsening for average RNFL thickness were -1.09 µm/year and -2.35 µm/year, respectively. Simulations showed that, for the average measurement frequency in our sample of approximately 3 OCT scans over a 2-year period, moderate and rapid RNFL worsening were diagnosed accurately only 47% and 40% of the time, respectively. Estimates for the number of OCT scans needed to achieve a range of accuracy levels are provided. 
For example, 60% accuracy requires 7 measurements to detect both moderate and rapid worsening within a 2-year period if the more efficient clustered measurement strategy is used. CONCLUSIONS: To diagnose RNFL worsening more accurately, the number of OCT scans must be increased compared with current clinical practice. A clustered measurement strategy reduces the number of scans required compared with evenly spacing measurements.
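The within-eye rates of RNFL worsening described above are ordinary least-squares slopes of thickness against time. A minimal sketch of that fit, using made-up scan times and thicknesses rather than data from the study:

```python
def ols_slope(times_years, thickness_um):
    """Least-squares slope (um/year) of RNFL thickness vs. time."""
    n = len(times_years)
    mean_t = sum(times_years) / n
    mean_y = sum(thickness_um) / n
    # Slope = covariance(t, y) / variance(t)
    sxy = sum((t - mean_t) * (y - mean_y)
              for t, y in zip(times_years, thickness_um))
    sxx = sum((t - mean_t) ** 2 for t in times_years)
    return sxy / sxx

# Hypothetical eye: 5 scans over 2 years thinning at exactly -2 um/year
times = [0.0, 0.5, 1.0, 1.5, 2.0]
rnfl = [90.0, 89.0, 88.0, 87.0, 86.0]
print(ols_slope(times, rnfl))  # -2.0
```

With the paper's 75th/90th percentile cut-offs (-1.09 and -2.35 µm/year), this toy eye would fall between "moderate" and "rapid" worsening; the study's point is that with only ~3 noisy scans such a slope estimate is unreliable.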


Subjects
Glaucoma; Ocular Hypertension; Optic Disk; Optic Nerve Diseases; Adult; Humans; Tomography, Optical Coherence/methods; Optic Nerve Diseases/diagnosis; Intraocular Pressure; Visual Fields; Retinal Ganglion Cells; Nerve Fibers; Glaucoma/diagnosis
5.
Ophthalmology; 130(8): 854-862, 2023 08.
Article in English | MEDLINE | ID: mdl-37003520

ABSTRACT

PURPOSE: To identify visual field (VF) worsening from longitudinal OCT data using a gated transformer network (GTN) and to examine how GTN performance varies for different definitions of VF worsening and different stages of glaucoma severity at baseline. DESIGN: Retrospective longitudinal cohort study. PARTICIPANTS: A total of 4211 eyes (2666 patients) followed up at the Johns Hopkins Wilmer Eye Institute with at least 5 reliable VF results and 1 reliable OCT scan within 1 year of each reliable VF test. METHODS: For each eye, we used 3 trend-based methods (mean deviation [MD] slope, VF index slope, and pointwise linear regression) and 3 event-based methods (Guided Progression Analysis, Collaborative Initial Glaucoma Treatment Study scoring system, and Advanced Glaucoma Intervention Study [AGIS] scoring system) to define VF worsening. Additionally, we developed a "majority of 6" algorithm (M6) that classifies an eye as worsening if 4 or more of the 6 aforementioned methods classified the eye as worsening. Using these 7 reference standards for VF worsening, we trained 7 GTNs that accept as input a series of at least 5 OCT scans and provide as output a probability of VF worsening. Gated transformer network performance was compared with that of non-deep learning models with the same serial OCT input from previous studies (linear mixed-effects models [MEMs] and naive Bayes classifiers [NBCs]), using the same training sets and reference standards as for the GTN. MAIN OUTCOME MEASURES: Area under the receiver operating characteristic curve (AUC). RESULTS: The M6 labeled 63 eyes (1.50%) as worsening. The GTN achieved an AUC of 0.97 (95% confidence interval, 0.88-1.00) when trained with M6. Gated transformer networks trained and optimized with the other 6 reference standards showed AUCs ranging from 0.78 (MD slope) to 0.89 (AGIS). The 7 GTNs outperformed the corresponding 7 MEMs and 7 NBCs.
Gated transformer network performance was worse for eyes with more severe glaucoma at baseline. CONCLUSIONS: Gated transformer network models trained with OCT data may be used to identify VF worsening. After further validation, implementing such models in clinical practice may allow us to track functional worsening of glaucoma with less onerous structural testing. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found after the references.
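Model comparison in this abstract is reported as AUC. For intuition, AUC equals the probability that a randomly chosen positive (worsening) eye receives a higher score than a randomly chosen negative one; a small pure-Python sketch on toy labels and scores (illustrative only, not the study's evaluation code):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs ranked correctly, with ties counted as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]          # 1 = worsening per the reference standard
scores = [0.1, 0.4, 0.35, 0.8]  # model-output probabilities
print(auc(labels, scores))  # 0.75
```

Note that with only 63 of 4211 eyes labeled as worsening by M6, AUC is the natural choice here, since plain accuracy would be dominated by the negative class.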


Subjects
Glaucoma; Visual Fields; Humans; Retrospective Studies; Bayes Theorem; Tomography, Optical Coherence; Longitudinal Studies; Vision Disorders/diagnosis; Glaucoma/diagnosis; Visual Field Tests/methods; Intraocular Pressure; Disease Progression
6.
Emerg Radiol; 30(1): 41-50, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36371579

ABSTRACT

BACKGROUND: The American Association for the Surgery of Trauma (AAST) splenic organ injury scale (OIS) is the most frequently used CT-based grading system for blunt splenic trauma. However, reported inter-rater agreement is modest, and an algorithm that objectively automates grading based on transparent and verifiable criteria could serve as a high-trust diagnostic aid. PURPOSE: To pilot the development of an automated, interpretable, multi-stage deep learning-based system to predict AAST grade from admission trauma CT. METHODS: Our pipeline includes 4 parts: (1) automated splenic localization, (2) Faster R-CNN-based detection of pseudoaneurysms (PSA) and active bleeds (AB), (3) nnU-Net segmentation and quantification of splenic parenchymal disruption (SPD), and (4) a directed graph that infers AAST grades from detection and segmentation results. Training and validation are performed on a dataset of adult patients (age ≥ 18) with voxelwise labeling, consensus AAST grading, and hemorrhage-related outcome data (n = 174). RESULTS: AAST classification agreement (weighted κ) between automated and consensus AAST grades was substantial (0.79). High-grade (IV and V) injuries were predicted with accuracy, positive predictive value, and negative predictive value of 92%, 95%, and 89%, respectively. The area under the curve for predicting hemorrhage control intervention was comparable between expert consensus and automated AAST grading (0.83 vs 0.88). The mean combined inference time for the pipeline was 96.9 s. CONCLUSIONS: The results of our method were rapid and verifiable, with high agreement between automated and expert consensus grades. Diagnosis of high-grade lesions and prediction of hemorrhage control intervention produced accurate results in adult patients.


Subjects
Tomography, X-Ray Computed; Wounds, Nonpenetrating; Adult; Humans; United States; Tomography, X-Ray Computed/methods; Predictive Value of Tests; Wounds, Nonpenetrating/surgery; Spleen/injuries; Hemorrhage; Retrospective Studies
7.
Ophthalmology; 129(1): 35-44, 2022 01.
Article in English | MEDLINE | ID: mdl-34506846

ABSTRACT

PURPOSE: To estimate the effect of achieving target intraocular pressure (IOP) values on visual field (VF) worsening in a treated clinical population. DESIGN: Retrospective analysis of longitudinal data. PARTICIPANTS: A total of 2852 eyes of 1688 patients with glaucoma-related diagnoses treated in a tertiary care practice. All included eyes had at least 5 reliable VF tests and 5 IOP measures on separate visits along with at least 1 target IOP defined by a clinician on the first or second visit. METHODS: The primary dependent variable was the slope of the mean deviation (MD) over time (decibels [dB]/year). The primary independent variable was mean target difference (measured IOP - target IOP). We created simple linear regression models and mixed-effects linear models to study the relationship between MD slope and mean target difference for individual eyes. In the mixed-effects models, we included an interaction term to account for disease severity (mild/suspect, moderate, or advanced) and a spline term to account for the differing effects of achieving target IOP (target difference ≤0) and failing to achieve target IOP (target difference >0). MAIN OUTCOME MEASURES: Rate of change in MD slope (changes in dB/year) per 1 mmHg change in target difference at different stages of glaucoma severity. RESULTS: Across all eyes, a simple linear regression model demonstrated that a 1 mmHg increase in target difference had a -0.018 dB/year (confidence interval [CI], -0.026 to -0.011; P < 0.05) effect on MD slope. The mixed-effects model shows that eyes with moderate disease that fail to achieve their target IOP experience the largest effects, with a 1 mmHg increase in target difference resulting in a -0.119 dB/year (CI, -0.168 to -0.070; P < 0.05) worse MD slope. 
The effects of missing target IOP on VF worsening were more pronounced than the effect of absolute level of IOP on VF worsening, where a 1 mmHg increase in IOP had a -0.004 dB/year (CI, -0.011 to 0.003; P > 0.05) effect on the MD slope. CONCLUSIONS: In treated patients, failing to achieve target IOP was associated with more rapid VF worsening. Eyes with moderate glaucoma experienced the greatest VF worsening from failing to achieve target IOP.


Subjects
Glaucoma, Open-Angle/physiopathology; Intraocular Pressure/physiology; Vision Disorders/physiopathology; Visual Fields/physiology; Aged; Aged, 80 and over; Corneal Pachymetry; Disease Progression; Female; Glaucoma, Open-Angle/diagnosis; Humans; Male; Middle Aged; Ocular Hypertension/diagnosis; Ocular Hypertension/physiopathology; Retrospective Studies; Risk Factors; Severity of Illness Index; Tonometry, Ocular; Vision Disorders/diagnosis; Visual Field Tests
8.
Emerg Radiol; 29(6): 995-1002, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35971025

ABSTRACT

PURPOSE: We employ nnU-Net, a state-of-the-art self-configuring deep learning-based semantic segmentation method for quantitative visualization of hemothorax (HTX) in trauma patients, and assess performance using a combination of overlap and volume-based metrics. The accuracy of hemothorax volumes for predicting a composite of hemorrhage-related outcomes - massive transfusion (MT) and in-hospital mortality (IHM) not related to traumatic brain injury - is assessed and compared to subjective expert consensus grading by an experienced chest and emergency radiologist. MATERIALS AND METHODS: The study included manually labeled admission chest CTs from 77 consecutive adult patients with non-negligible (≥ 50 mL) traumatic HTX between 2016 and 2018 from one trauma center. DL results of ensembled nnU-Net were determined from fivefold cross-validation and compared to individual 2D, 3D, and cascaded 3D nnU-Net results using the Dice similarity coefficient (DSC) and volume similarity index. Pearson's r, intraclass correlation coefficient (ICC), and mean bias were also determined for the best performing model. Manual and automated hemothorax volumes and subjective hemothorax volume grades were analyzed as predictors of MT and IHM using AUC comparison. Volume cut-offs yielding sensitivity or specificity ≥ 90% were determined from ROC analysis. RESULTS: Ensembled nnU-Net achieved a mean DSC of 0.75 (SD: ± 0.12), and mean volume similarity of 0.91 (SD: ± 0.10), Pearson r of 0.93, and ICC of 0.92. Mean overmeasurement bias was only 1.7 mL despite a range of manual HTX volumes from 35 to 1503 mL (median: 178 mL). AUC of automated volumes for the composite outcome was 0.74 (95%CI: 0.58-0.91), compared to 0.76 (95%CI: 0.58-0.93) for manual volumes, and 0.76 (95%CI: 0.62-0.90) for consensus expert grading (p = 0.93). Automated volume cut-offs of 77 mL and 334 mL predicted the outcome with 93% sensitivity and 90% specificity respectively. 
CONCLUSION: Automated HTX volumetry had high method validity, yielded interpretable visual results, and had similar performance for the hemorrhage-related outcomes assessed compared to manual volumes and expert consensus grading. The results suggest promising avenues for automated HTX volumetry in research and clinical care.
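The overlap and volume metrics reported above (Dice similarity coefficient and volume similarity) have standard definitions; a sketch on toy binary masks, assuming the common formulations DSC = 2|A∩B| / (|A| + |B|) and VS = 1 - |Va - Vb| / (Va + Vb):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

def volume_similarity(mask_a, mask_b):
    """Agreement of segmented volumes only, ignoring spatial overlap."""
    va, vb = sum(mask_a), sum(mask_b)
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0

# Toy 1D "masks": same volume (3 voxels each) but shifted by one voxel
a = [1, 1, 1, 0, 0, 0]
b = [0, 1, 1, 1, 0, 0]
print(dice(a, b))               # 0.666...
print(volume_similarity(a, b))  # 1.0
```

The toy example shows why the paper reports both: a segmentation can match the target volume perfectly (VS = 1.0) while only partially overlapping it (DSC ≈ 0.67), and it is the volume that drives the downstream outcome prediction here.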


Subjects
Deep Learning; Thoracic Injuries; Adult; Humans; Hemothorax/diagnostic imaging; Pilot Projects; Thoracic Injuries/complications; Thoracic Injuries/diagnostic imaging; Tomography, X-Ray Computed/methods
9.
J Neuroophthalmol; 41(3): 368-374, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34415271

ABSTRACT

BACKGROUND: To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS: Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve [AUC-ROC]) were used for evaluation. RESULTS: During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION: In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies.
As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department. If successful, this could prove beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists, and of ophthalmologists in general, in the near future.


Subjects
Algorithms; Deep Learning; Diagnostic Techniques, Ophthalmological; Optic Disk/abnormalities; Optic Nerve Diseases/diagnosis; Humans; Optic Disk/diagnostic imaging; ROC Curve
10.
J Digit Imaging; 34(1): 53-65, 2021 02.
Article in English | MEDLINE | ID: mdl-33479859

ABSTRACT

Admission trauma whole-body CT is routinely employed as a first-line diagnostic tool for characterizing pelvic fracture severity. Tile AO/OTA grade based on the presence or absence of rotational and translational instability corresponds with need for interventions including massive transfusion and angioembolization. An automated method could be highly beneficial for point-of-care triage in this critical time-sensitive setting. A dataset of 373 trauma whole-body CTs collected from two busy level 1 trauma centers with consensus Tile AO/OTA grading by three trauma radiologists was used to train and test a triplanar parallel concatenated network incorporating orthogonal full-thickness multiplanar reformat (MPR) views as input with a ResNeXt-50 backbone. Input pelvic images were first derived using an automated registration and cropping technique. Performance of the network for classification of rotational and translational instability was compared with that of (1) an analogous triplanar architecture incorporating an LSTM RNN network, (2) a previously described 3D autoencoder-based method, and (3) grading by a fourth independent blinded radiologist with trauma expertise. Confusion matrix results were derived, anchored to peak Matthews correlation coefficient (MCC). Associations with clinical outcomes were determined using Fisher's exact test. The triplanar parallel concatenated method had the highest accuracies for discriminating translational and rotational instability (85% and 74%, respectively), with specificity, recall, and F1 score of 93.4%, 56.5%, and 0.63 for translational instability and 71.7%, 75.7%, and 0.77 for rotational instability. Accuracy of this method was equivalent to the single radiologist read for rotational instability (74.0% versus 76.7%, p = 0.40), but significantly higher for translational instability (85.0% versus 75.1%, p = 0.0007). Mean inference time was < 0.1 s per test image.
Translational instability determined with this method was associated with need for angioembolization and massive transfusion (p = 0.002-0.008). Saliency maps demonstrated that the network focused on the sacroiliac complex and pubic symphysis, in keeping with the AO/OTA grading paradigm. A multiview concatenated deep network leveraging 3D information from orthogonal thick-MPR images predicted rotationally and translationally unstable pelvic fractures with accuracy comparable to an independent reader with trauma radiology expertise. Model output demonstrated significant association with key clinical outcomes.
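Confusion-matrix results in this study are anchored to the peak Matthews correlation coefficient (MCC), a single-number summary that stays informative under class imbalance. Its standard definition from raw counts, sketched with made-up numbers rather than the paper's data:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for a good-but-imperfect instability classifier
print(mcc(tp=45, fp=5, fn=10, tn=40))
```

Unlike accuracy, MCC uses all four cells of the confusion matrix, which is why thresholding model output at peak MCC is a reasonable operating point when unstable fractures are a minority class.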


Subjects
Deep Learning; Fractures, Bone; Pelvic Bones; Fractures, Bone/diagnostic imaging; Humans; Pelvic Bones/diagnostic imaging; Pelvis; Tomography, X-Ray Computed
11.
Surg Innov; 28(2): 208-213, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33980097

ABSTRACT

As the scope and scale of the COVID-19 pandemic became clear in early March of 2020, the faculty of the Malone Center engaged in several projects aimed at addressing both immediate and long-term implications of COVID-19. In this article, we briefly outline the processes that we engaged in to identify areas of need, the projects that emerged, and the results of those projects. As we write, some of these projects have reached a natural termination point, whereas others continue. We identify some of the factors that led to projects that moved to implementation, as well as factors that led projects to fail to progress or to be abandoned.


Subjects
Biomedical Engineering; COVID-19/prevention & control; Biomedical Engineering/instrumentation; Biomedical Engineering/methods; Biomedical Engineering/organization & administration; Databases, Factual; Humans; Nebraska; Pandemics; SARS-CoV-2
12.
Proc IEEE Inst Electr Electron Eng; 108(1): 198-214, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31920208

ABSTRACT

Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy, and speed that are already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors, and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; and how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making, ultimately producing more precise and reliable interventions.

13.
Int J Comput Assist Radiol Surg; 19(6): 1165-1173, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38619790

ABSTRACT

PURPOSE: The expanding capabilities of surgical systems bring with them increasing complexity in the interfaces that humans use to control them. Robotic C-arm X-ray imaging systems, for instance, often require manipulation of independent axes via joysticks, while higher-level control options hide inside device-specific menus. The complexity of these interfaces hinders "ready-to-hand" use of high-level functions. Natural language offers a flexible, familiar interface for surgeons to express their desired outcome rather than remembering the steps necessary to achieve it, enabling direct access to task-aware, patient-specific C-arm functionality. METHODS: We present an English-language voice interface for controlling a robotic X-ray imaging system with task-aware functions for pelvic trauma surgery. Our fully integrated system uses a large language model (LLM) to convert natural spoken commands into machine-readable instructions, enabling low-level commands like "Tilt back a bit" to increase the angular tilt, or patient-specific directions like "Go to the obturator oblique view of the right ramus" based on automated image analysis. RESULTS: We evaluate our system with 212 prompts provided by an attending physician, in which the system performed satisfactory actions 97% of the time. To test the fully integrated system, we conducted a real-time study in which an attending physician placed orthopedic hardware along desired trajectories through an anthropomorphic phantom, interacting solely with the X-ray system via voice. CONCLUSION: Voice interfaces offer a convenient, flexible way for surgeons to manipulate C-arms based on desired outcomes rather than device-specific processes. As LLMs grow increasingly capable, so too will their applications in supporting higher-level interactions with surgical assistance systems.


Subjects
Robotic Surgical Procedures; Humans; Robotic Surgical Procedures/methods; Robotic Surgical Procedures/instrumentation; User-Computer Interface; Pelvis/surgery; Natural Language Processing
14.
Int J Comput Assist Radiol Surg; 19(7): 1301-1312, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38709423

ABSTRACT

PURPOSE: Specialized robotic and surgical tools are increasing the complexity of operating rooms (ORs), requiring elaborate preparation, especially when techniques or devices are to be used for the first time. Spatial planning can improve efficiency and identify procedural obstacles ahead of time, but real ORs offer little availability to optimize space utilization. Methods for creating reconstructions of physical setups, i.e., digital twins, are needed to enable immersive spatial planning of such complex environments in virtual reality. METHODS: We present a neural rendering-based method to create immersive digital twins of complex medical environments and devices from casual video capture that enables spatial planning of surgical scenarios. To evaluate our approach, we recreate two operating rooms and ten objects through neural reconstruction, then conduct a user study with 21 graduate students carrying out planning tasks in the resulting virtual environment. We analyze task load, presence, and perceived utility, as well as exploration and interaction behavior, compared with low-visual-complexity versions of the same environments. RESULTS: Results show significantly increased perceived utility and presence using the neural reconstruction-based environments, combined with higher perceived workload and exploratory behavior. There was no significant difference in interactivity. CONCLUSION: We explore the feasibility of using modern reconstruction techniques to create digital twins of complex medical environments and objects. Without requiring expert knowledge or specialized hardware, users can create, explore, and interact with objects in virtual environments. The results indicate benefits such as high perceived utility while remaining technically approachable, which suggests promise for this approach in spatial planning and beyond.


Subjects
Operating Rooms, Virtual Reality, Humans, User-Computer Interface, Female, Male, Adult, Feasibility Studies, Robotic Surgical Procedures/methods
15.
IEEE Trans Med Imaging ; 43(1): 275-285, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549070

ABSTRACT

Image-based 2D/3D registration is a critical technique for fluoroscopically guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shaped similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters and is trained using an innovative double-backward, gradient-driven loss function. We compare against the most popular learning-based pose regression methods in the literature and use the well-established CMA-ES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMA-ES is 4.4 mm with an SR of 65.6% in simulation, and 2.2 mm with an SR of 73.2% on real data. The CMA-ES SRs without ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function that vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe the unique differentiability of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
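To illustrate why a convex-shaped similarity extends the capture range, the following toy sketch (an assumption for illustration only, not the ProST implementation, which differentiates a full projection operator with respect to six rigid pose parameters) runs gradient descent on a quadratic surrogate objective; convergence is then guaranteed from arbitrarily large initial pose offsets, whereas a multi-modal hand-crafted similarity would trap the optimizer in local minima:

```python
# Toy sketch: gradient descent on a convex (quadratic) pose-similarity
# surrogate converges from a wide capture range. The pose target and
# parameter names here are hypothetical placeholders.

def convex_similarity(pose, target):
    """Convex surrogate: squared distance of pose parameters to the target."""
    return sum((p - t) ** 2 for p, t in zip(pose, target))

def register(pose, target, lr=0.1, steps=200):
    """Gradient descent on the quadratic surrogate (analytic gradient)."""
    for _ in range(steps):
        grad = [2.0 * (p - t) for p, t in zip(pose, target)]
        pose = [p - lr * g for p, g in zip(pose, grad)]
    return pose

# Large initial offset (e.g., tens of mm/degrees) still converges.
estimate = register([30.0, -25.0, 40.0], [0.0, 0.0, 0.0])
```

In contrast, an intensity-based objective with local minima would require initialization already near the solution, which is precisely the capture-range limitation the abstract describes.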


Subjects
Three-Dimensional Imaging, Pelvis, Three-Dimensional Imaging/methods, Fluoroscopy/methods, Software, Algorithms
16.
Mach Learn Med Imaging ; 14349: 205-213, 2024.
Article in English | MEDLINE | ID: mdl-38617846

ABSTRACT

The synergy of long-range dependencies from transformers and local representations of image content from convolutional neural networks (CNNs) has led to advanced architectures and increased performance on various medical image analysis tasks due to their complementary benefits. However, compared with CNNs, transformers require considerably more training data, owing to a larger number of parameters and the absence of inductive bias. The need for increasingly large datasets continues to be problematic, particularly in medical imaging, where both annotation effort and data protection limit data availability. In this work, inspired by the human decision-making process of correlating new "evidence" with previously memorized "experience", we propose a Memorizing Vision Transformer (MoViT) to alleviate the need for large-scale datasets to successfully train and deploy transformer-based architectures. MoViT leverages an external memory structure to cache historical attention snapshots during the training stage. To prevent overfitting, we incorporate an innovative memory update scheme, attention temporal moving average, which updates the stored external memories with their historical moving average. To speed up inference, we design a prototypical attention learning method to distill the external memory into smaller representative subsets. We evaluate our method on a public histology image dataset and an in-house MRI dataset, demonstrating that MoViT, applied to varied medical image analysis tasks, can outperform vanilla transformer models across varied data regimes, especially when only a small amount of annotated data is available. More importantly, MoViT reaches performance competitive with ViT using only 3.0% of the training data. In conclusion, MoViT provides a simple plug-in for transformer architectures that may help reduce the training data needed to achieve acceptable models for a broad range of medical image analysis tasks.
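The core of the "attention temporal moving average" update can be sketched as an exponential moving average blending cached attention snapshots with new ones, so the external memory tracks training history smoothly rather than being overwritten each step. This is a minimal sketch of that idea under our own assumptions (the momentum value and flat-list memory layout are hypothetical, not the paper's implementation):

```python
# Minimal sketch of an EMA-style external-memory update: each cached entry
# is blended with the corresponding entry of a new attention snapshot.
# `momentum` controls how much training history the memory retains.

def ema_update(memory, snapshot, momentum=0.9):
    """Blend a new attention snapshot into the cached memory entry-wise."""
    return [momentum * m + (1.0 - momentum) * s
            for m, s in zip(memory, snapshot)]

# Repeated updates with similar snapshots converge the memory toward them,
# damping the influence of any single noisy training step.
memory = [0.0, 0.0, 0.0]
for snapshot in ([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]):
    memory = ema_update(memory, snapshot)
```

In a real transformer, each memory entry would be a key/value tensor rather than a scalar, but the update rule is the same element-wise blend.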

17.
PLoS One ; 19(1): e0296674, 2024.
Article in English | MEDLINE | ID: mdl-38215176

ABSTRACT

Linear regression of optical coherence tomography measurements of peripapillary retinal nerve fiber layer thickness is often used to detect glaucoma progression and forecast future disease course. However, current measurement frequencies suggest that clinicians often apply linear regression to a relatively small number of measurements (e.g., fewer than a handful). In this study, we estimate the accuracy of linear regression in predicting the next reliable measurement of average retinal nerve fiber layer thickness using Zeiss Cirrus optical coherence tomography measurements of average retinal nerve fiber layer thickness from a sample of 6,471 eyes with glaucoma or glaucoma-suspect status. Linear regression is compared to two null models: no glaucoma worsening, and worsening due to aging. Linear regression on the first M ≥ 2 measurements was significantly worse at predicting a reliable (M+1)st measurement for 2 ≤ M ≤ 6. This range was reduced to 2 ≤ M ≤ 5 when retinal nerve fiber layer thickness measurements were first "corrected" for scan quality. Simulations based on the measurement frequencies in our sample (on average 393 ± 190 days between consecutive measurements) show that linear regression outperforms both null models when M ≥ 5 and the goal is to forecast moderate (75th percentile) worsening, and when M ≥ 3 for rapid (90th percentile) worsening. If linear regression is used to assess disease trajectory with a small number of measurements over short time periods (e.g., 1-2 years), as is often the case in clinical practice, the number of optical coherence tomography examinations needs to be increased.
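The evaluation described above reduces to a simple procedure: fit ordinary least squares to the first M (time, thickness) pairs, extrapolate to the next visit date, and compare the error against the "no worsening" null that simply repeats the last value. A minimal sketch, with hypothetical visit dates and thickness values (not data from the study):

```python
# Illustrative sketch: least-squares extrapolation of average RNFL
# thickness vs. the "no worsening" null model. Visit times and values
# below are made-up examples.

def fit_line(times, values):
    """Least-squares slope and intercept for thickness (um) vs. time (days)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var
    return slope, mean_v - slope * mean_t

def predict_next(times, values, next_time):
    """Extrapolate the fitted line to the next visit date."""
    slope, intercept = fit_line(times, values)
    return slope * next_time + intercept

# Hypothetical history: roughly one scan per year, mild thinning.
times = [0, 390, 770, 1160]        # days since baseline
rnfl = [92.0, 90.5, 91.0, 89.0]    # average RNFL thickness, um

linear_pred = predict_next(times, rnfl, 1550)
null_pred = rnfl[-1]               # "no worsening" null model
```

With only a handful of noisy points, the fitted slope is dominated by measurement noise, which is consistent with the finding that the null models win for small M.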


Subjects
Glaucoma, Optical Coherence Tomography, Humans, Optical Coherence Tomography/methods, Linear Models, Retinal Ganglion Cells, Glaucoma/diagnostic imaging, Nerve Fibers, Intraocular Pressure
18.
Sci Rep ; 14(1): 599, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38182701

ABSTRACT

To develop and evaluate the performance of a deep learning model (DLM) that predicts eyes at high risk of surgical intervention for uncontrolled glaucoma based on multimodal data from an initial ophthalmology visit. Longitudinal, observational, retrospective study. 4898 unique eyes from 4038 adult glaucoma or glaucoma-suspect patients who underwent surgery for uncontrolled glaucoma (trabeculectomy, tube shunt, Xen, or diode surgery) between 2013 and 2021, or did not undergo glaucoma surgery but had 3 or more ophthalmology visits. We constructed a DLM to predict the occurrence of glaucoma surgery within various time horizons from a baseline visit. Model inputs included spatially oriented visual field (VF) and optical coherence tomography (OCT) data as well as clinical and demographic features. Separate DLMs with the same architecture were trained to predict the occurrence of surgery within 3 months, 3-6 months, 6 months-1 year, 1-2 years, 2-3 years, 3-4 years, and 4-5 years from the baseline visit. Included eyes were randomly split 60%/20%/20% for training, validation, and testing. DLM performance was measured using the area under the receiver operating characteristic curve (AUC) and the precision-recall curve (PRC). Shapley additive explanations (SHAP) were used to assess the importance of different features. Model prediction of surgery for uncontrolled glaucoma within 3 months had the best AUC of 0.92 (95% CI 0.88, 0.96). DLMs achieved clinically useful AUC values (> 0.8) for all models that predicted the occurrence of surgery within 3 years. According to SHAP analysis, all 7 models placed intraocular pressure (IOP) among the five most important features in predicting the occurrence of glaucoma surgery. Mean deviation (MD) and average retinal nerve fiber layer (RNFL) thickness were listed among the top 5 most important features by 6 of the 7 models. DLMs can successfully identify eyes requiring surgery for uncontrolled glaucoma within specific time horizons. Predictive performance decreases as the time horizon for forecasting surgery increases. Implementing prediction models in a clinical setting may help identify patients who should be referred to a glaucoma specialist for surgical evaluation.
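The AUC reported above has a direct rank-based interpretation: the probability that a randomly chosen positive case (an eye that went on to surgery) receives a higher predicted risk than a randomly chosen negative case. A minimal sketch of that computation (illustrative only; the study would have used a standard statistics package):

```python
# Rank-based AUC: fraction of (positive, negative) pairs where the
# positive case is scored higher; ties count as half a win.

def auc(scores, labels):
    """AUC from predicted risk scores and binary labels (1 = surgery)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.92 for the 3-month model therefore means a randomly selected surgical eye outranks a randomly selected non-surgical eye 92% of the time.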


Subjects
Deep Learning, Glaucoma, Ophthalmology, Trabeculectomy, Adult, Humans, Retrospective Studies, Glaucoma/surgery, Retina
19.
Int J Comput Assist Radiol Surg ; 19(6): 1213-1222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38642297

ABSTRACT

PURPOSE: Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS: We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS: Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased (p < 0.05), indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION: Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.


Subjects
Patient Care Team, Virtual Reality, Humans, Female, Male, Curriculum, Clinical Competence, Adult, Computer-Assisted Surgery/methods, Computer-Assisted Surgery/education, Surgeons/education, Surgeons/psychology
20.
Med Image Anal ; 97: 103254, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38968908

ABSTRACT

The present standard of care for unresectable liver cancer is transarterial chemoembolization (TACE), which uses chemotherapeutic particles to selectively embolize the arteries supplying hepatic tumors. Accurate volumetric identification of intricate fine vascularity is crucial for selective embolization. Three-dimensional imaging, particularly cone-beam CT (CBCT), aids visualization and targeting of small vessels in such highly variable anatomy, but long image acquisition times result in intra-scan patient motion, which distorts vascular structures and tissue boundaries. To improve the clarity of vascular anatomy and intra-procedural utility, this work proposes a targeted motion estimation and compensation framework that removes the need for prior information, external tracking, and user interaction. Motion estimation is performed in two stages: (i) a target identification stage that segments arteries and catheters in the projection domain using a multi-view convolutional neural network to construct a coarse 3D vascular mask; and (ii) a targeted motion estimation stage that iteratively solves for the time-varying motion field via optimization of a vessel-enhancing objective function computed over the target vascular mask. The vessel-enhancing objective is derived from the eigenvalues of the local image Hessian to emphasize bright tubular structures. Motion compensation is achieved via spatial transformer operators that apply time-dependent deformations to partial angle reconstructions, allowing efficient minimization via gradient backpropagation. The framework was trained and evaluated on anatomically realistic simulated motion-corrupted CBCTs mimicking TACE of hepatic tumors, at intermediate (3.0 mm) and large (6.0 mm) motion magnitudes. Motion compensation substantially improved the median vascular Dice score (from 0.30 to 0.59 for large motion), image SSIM (from 0.77 to 0.93 for large motion), and vessel sharpness (from 0.189 mm⁻¹ to 0.233 mm⁻¹ for large motion) in simulated cases. Motion compensation also demonstrated increased vessel sharpness (from 0.188 mm⁻¹ to 0.205 mm⁻¹) and reconstructed vessel length (median increased from 37.37 to 41.00 mm) on a clinical interventional CBCT. The proposed anatomy-aware motion compensation framework presents a promising approach for improving the utility of CBCT for intra-procedural vascular imaging, facilitating selective embolization procedures.
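The intuition behind a Hessian-eigenvalue vesselness measure is that, locally, a bright tube has one strongly negative eigenvalue (sharp intensity drop across the vessel) and one near zero (little change along it). A hedged 2D sketch of that idea (the paper operates on 3D reconstructions; the exact score below is an assumption, in the spirit of Frangi-type filters, not the paper's objective):

```python
import math

def hessian_eigenvalues_2d(hxx, hxy, hyy):
    """Eigenvalues of the symmetric 2x2 Hessian [[hxx, hxy], [hxy, hyy]]."""
    mean = 0.5 * (hxx + hyy)
    delta = math.sqrt(((hxx - hyy) * 0.5) ** 2 + hxy ** 2)
    return mean - delta, mean + delta

def tubularness(hxx, hxy, hyy):
    """Score near 1 for a bright tube, 0 for a blob or a dark structure."""
    lo, hi = sorted(hessian_eigenvalues_2d(hxx, hxy, hyy), key=abs)
    if hi >= 0.0:
        return 0.0                     # bright ridges need a negative large eigenvalue
    return 1.0 - abs(lo) / abs(hi)     # 1 for a perfect tube, 0 for an isotropic blob
```

Summing such a score over the coarse vascular mask yields an objective that increases as motion-blurred vessels sharpen, which is what the iterative motion estimation maximizes.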
