ABSTRACT
The present study aimed to summarize and report data on errors related to treatment planning, which were collected by medical physicists. The following analyses were performed on the 10-year error report data: (1) a listing of the high-risk errors that occurred, (2) the relationship between the number of treatments and error rates, (3) the usefulness of the Automated Plan Checking System (APCS) with the Eclipse Scripting Application Programming Interface, and (4) the relationship between human factors and error rates. Differences in error rates were observed before and after the use of APCS: APCS reduced the error rate by ~1% for high-risk errors and 3% for low-risk errors. The number of treatments was negatively correlated with error rates. We therefore examined the relationship between the workload of medical physicists and error occurrence and found that a very large workload may contribute to overlooking errors. Meanwhile, an increase in the number of medical physicists may lead to the detection of more errors. The number of errors was also correlated with the number of physicians with less clinical experience; error rates were higher when there were more physicians with less experience, likely because of a lack of training among clinically inexperienced physicians. An environment that provides adequate training is important, as inexperience in clinical practice can easily and directly lead to errors. In any environment, additional plan checkers are essential for eliminating errors.
Subjects
Medical Errors, Radiotherapy Planning, Computer-Assisted, Humans, Medical Errors/prevention & control, Workload
ABSTRACT
Medical datasets may be imbalanced and contain errors due to subjective test results and clinical variability. The poor quality of the original data affects classification accuracy and reliability. Hence, detecting abnormal samples in a dataset can help clinicians make better decisions. In this study, we propose an unsupervised error detection method using patterns discovered by the Pattern Discovery and Disentanglement (PDD) model developed in our earlier work. Applied to a large dataset, the eICU Collaborative Research Database for sepsis risk assessment, the proposed algorithm can effectively discover statistically significant association patterns, generate an interpretable knowledge base, cluster samples in an unsupervised manner, and detect abnormal samples in the dataset. As the experimental results show, our method outperformed K-Means by 38% on the full dataset and 47% on the reduced dataset for unsupervised clustering, and multiple supervised classifiers improved accuracy by an average of 4% after abnormal samples were removed by the proposed error detection approach. The proposed algorithm therefore provides a robust and practical solution for unsupervised clustering and error detection in healthcare data.
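As a rough illustration of the evaluation workflow described above (not of the PDD model itself, which is the authors' own method), the following Python sketch clusters unlabeled samples, flags low-consistency samples as "abnormal" using a silhouette-based stand-in criterion, and checks whether removing them improves a supervised classifier; the synthetic dataset, 5% contamination fraction, and random-forest classifier are illustrative assumptions:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, silhouette_samples
from sklearn.model_selection import train_test_split

# Imbalanced, noisily labeled synthetic data standing in for a clinical dataset
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           flip_y=0.05, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Unsupervised clustering (the K-Means baseline mentioned in the abstract)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xtr)

# Flag "abnormal" samples: poor fit to their own cluster (an illustrative stand-in
# for the pattern-based confidence a PDD-style model would provide)
sil = silhouette_samples(Xtr, km.labels_)
abnormal = sil < np.percentile(sil, 5)          # assumed 5% contamination

# Supervised accuracy before vs. after removing the flagged samples
base = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
clean = RandomForestClassifier(random_state=0).fit(Xtr[~abnormal], ytr[~abnormal])
print(f"accuracy before/after removal: "
      f"{accuracy_score(yte, base.predict(Xte)):.3f} / "
      f"{accuracy_score(yte, clean.predict(Xte)):.3f}")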
ABSTRACT
BACKGROUND: Electronic medical record (EMR) systems provide timely access to clinical information and have been shown to improve medication safety. However, EMRs can also create opportunities for error, including system-related errors or errors that were unlikely or not possible with the use of paper medication charts. This study aimed to determine the detection and mitigation strategies adopted by a health district in Australia to target system-related errors and to explore stakeholder views on strategies needed to prevent future system-related errors from emerging. METHODS: A qualitative descriptive study design was used, comprising semi-structured interviews. Data were collected from three hospitals within a health district in Sydney, Australia, between September 2020 and May 2021. Interviews were conducted with EMR users and other key stakeholders (e.g. clinical informatics team members). Participants were asked to reflect on how system-related errors changed over time and to describe approaches taken by their organisation to detect and mitigate these errors. Thematic analysis was conducted iteratively using a general inductive approach, with codes assigned as themes emerged from the data. RESULTS: Interviews were conducted with 25 stakeholders. Participants reported that most system-related errors were detected by front-line clinicians. Following error detection, clinicians either reported system-related errors directly to the clinical informatics team or submitted reports to the incident information management system. System-related errors were also reported to be detected via reports run within the EMR, or during organisational processes such as incident investigations or system enhancement projects. EMR redesign was the main approach described by participants for mitigating system-related errors; however, other strategies, such as regular user education and minimising the use of hybrid systems, were also reported. CONCLUSIONS: Initial detection of system-related errors relies heavily on front-line clinicians; however, other organisational strategies that are proactive and layered can improve the systemic detection, investigation, and management of errors. Together with EMR design changes, complementary error mitigation strategies, including targeted staff education, can support safe EMR use and development.
Subjects
Electronic Health Records, Qualitative Research, Humans, Australia, Medical Errors/prevention & control, Interviews as Topic, Medication Errors/prevention & control, Patient Safety
ABSTRACT
Objective. High-dose-rate (HDR) brachytherapy lacks routinely available treatment verification methods. Real-time tracking of the radiation source during HDR brachytherapy can enhance treatment verification capabilities. Recent developments in source tracking allow for measurement of dwell times and source positions with high accuracy. However, more clinically relevant information, such as dose discrepancies, is still needed. To address this, a real-time dose calculation implementation was developed to provide more relevant information from source tracking data. A proof-of-principle of the developed tool was shown using source tracking data obtained from a 3D-printed anthropomorphic phantom. Approach. Software was developed to calculate dose-volume histograms (DVH) and clinical dose metrics from experimental HDR prostate treatment source tracking data, measured in a realistic pelvic phantom. Uncertainty estimation was performed using repeat measurements to assess the inherent dose measuring uncertainty of the in vivo dosimetry (IVD) system. Using a novel approach, the measurement uncertainty can be incorporated in the dose calculation, and used for evaluation of cumulative dose and clinical dose-volume metrics after every dwell position, enabling real-time treatment verification. Main results. The dose calculated from source tracking measurements aligned with the generated uncertainty bands, validating the approach. Simulated shifts of 3 mm in 5/17 needles in a single plan caused DVH deviations beyond the uncertainty bands, indicating errors occurred during treatment. Clinical dose-volume metrics could be monitored in a time-resolved approach, enabling early detection of treatment plan deviations and prediction of their impact on the final dose that will be delivered in real time. Significance. Integrating dose calculation with source tracking enhances the clinical relevance of IVD methods. Phantom measurements show that the developed tool aids in tracking treatment progress, detecting errors in real time and post-treatment evaluation. In addition, it could be used to define patient-specific action limits and error thresholds, while taking the uncertainty of the measurement system into consideration.
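A minimal Python sketch of the core idea, under simplifying assumptions: dose from measured dwell positions and dwell times is accumulated after every dwell and a DVH metric is recomputed, so that deviations can be flagged mid-treatment. A bare inverse-square point-source term is used instead of the full TG-43 formalism, and the geometry, dwell times, and dose-rate constant are made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
voxels = rng.uniform(-25, 25, size=(5000, 3))     # mm, sample points in a target volume
dwells = rng.uniform(-20, 20, size=(17, 3))       # mm, measured source dwell positions
times = rng.uniform(5, 20, size=17)               # s, measured dwell times
S = 4.0e4                                         # arbitrary dose-rate constant

def dvh(dose, bins=100):
    # Cumulative DVH: fraction of volume receiving at least each dose level
    levels = np.linspace(0, dose.max(), bins)
    volume = np.array([(dose >= d).mean() for d in levels])
    return levels, volume

dose = np.zeros(len(voxels))
for p, t in zip(dwells, times):                           # accumulate dose dwell by dwell
    r2 = np.sum((voxels - p) ** 2, axis=1).clip(min=1.0)  # avoid the singularity at r = 0
    dose += S * t / r2                                    # inverse-square accumulation
    levels, volume = dvh(dose)                            # DVH after this dwell
    d90 = np.interp(0.9, volume[::-1], levels[::-1])      # dose covering 90% of the volume
    # compare d90 (and other metrics) to the planned value +/- its uncertainty band here
print(f"D90 after all dwells: {d90:.1f} (arbitrary units)")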
Subjects
Brachytherapy, Phantoms, Imaging, Radiation Dosage, Radiotherapy Dosage, Brachytherapy/methods, Brachytherapy/instrumentation, Uncertainty, Humans, Time Factors, Radiotherapy Planning, Computer-Assisted/methods, Prostatic Neoplasms/radiotherapy, Proof of Concept Study, Male
ABSTRACT
Background: Volumetric-modulated arc therapy (VMAT) is an efficient method of delivering intensity-modulated radiotherapy beams. The Delta4 device was employed to examine patient data. Aims and Objectives: This study investigated the utility of the Delta4 device in identifying errors for patient-specific quality assurance of VMAT plans. Materials and Methods: Intentional errors were introduced as collimator rotation errors, gantry rotation errors, multileaf collimator (MLC) position displacements, and increases in the number of monitor units (MU). Results: When the characteristics of the treatment plans were changed, the gamma passing rate (GPR) decreased. The largest percentage of error detection was seen for the increased number of MU, with GPRs ranging from 41% to 92%. Gamma analysis with the 2%/2 mm criteria was used to compare the dose distributions of the original and intentional-error plans. The percentage of dose errors (DEs) in the dose-volume histogram (DVH) was also analyzed, and the statistical association was assessed using logistic regression. A modest association (Pearson's R-values: 0.12-0.67) was seen between the DE and GPR in all intentional plans, indicating a moderate association between DVH and GPR. The data show that Delta4 is effective in detecting errors in treatment plans for head-and-neck cancer as well as lung cancer. Conclusion: The results also imply that Delta4 can detect errors in VMAT plans, depending on the details of the defects and the treatment plans employed.
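For readers unfamiliar with the GPR metric used here, the following is a minimal one-dimensional global gamma-index sketch in Python at the 2%/2 mm criterion; a clinical Delta4 analysis is three-dimensional and vendor-implemented, so the profiles and thresholds below are purely illustrative:

import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dose_tol=0.02, dist_tol=2.0, cutoff=0.1):
    # Fraction of reference points with gamma <= 1 (global dose normalisation)
    d_norm = dose_tol * dose_ref.max()
    passed = evaluated = 0
    for xi, di in zip(x, dose_ref):
        if di < cutoff * dose_ref.max():                  # skip the low-dose region
            continue
        evaluated += 1
        gamma_sq = ((x - xi) / dist_tol) ** 2 + ((dose_eval - di) / d_norm) ** 2
        if np.sqrt(gamma_sq.min()) <= 1.0:
            passed += 1
    return passed / evaluated

x = np.linspace(-50, 50, 501)                             # mm
reference = np.exp(-x**2 / (2 * 15**2))                   # idealised measured profile
erroneous = 1.03 * np.exp(-(x - 1.5)**2 / (2 * 15**2))    # shifted, scaled "error" plan
print(f"GPR (2%/2 mm): {100 * gamma_pass_rate(x, reference, erroneous):.1f}%")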
ABSTRACT
The emerging integration of Brain-Computer Interfaces (BCIs) in human-robot collaboration holds promise for dynamic adaptive interaction. The use of electroencephalogram (EEG)-measured error-related potentials (ErrPs) for online error detection in assistive devices offers a practical method for improving the reliability of such devices. However, continuous online error detection faces challenges such as developing efficient and lightweight classification techniques for quick predictions, reducing false alarms from artifacts, and dealing with the non-stationarity of EEG signals. Further research is essential to address the complexities of continuous classification in online sessions. With this study, we demonstrated a comprehensive approach for continuous online EEG-based machine error detection, which emerged as the winner of a competition at the 32nd International Joint Conference on Artificial Intelligence. The competition consisted of two stages: an offline stage for model development using pre-recorded, labeled EEG data, and an online stage 3 months after the offline stage, where these models were tested live on continuously streamed EEG data to detect errors in orthosis movements in real time. Our approach incorporates two temporal-derivative features with an effect size-based feature selection technique for model training, together with a lightweight noise filtering method for online sessions without recalibration of the model. The model trained in the offline stage not only resulted in a high average cross-validation accuracy of 89.9% across all participants, but also demonstrated remarkable performance during the online session 3 months after the initial data collection without further calibration, maintaining a low overall false alarm rate of 1.7% and swift response capabilities. Our research makes two significant contributions to the field. Firstly, it demonstrates the feasibility of integrating two temporal derivative features with an effect size-based feature selection strategy, particularly in online EEG-based BCIs. Secondly, our work introduces an innovative approach designed for continuous online error prediction, which includes a straightforward noise rejection technique to reduce false alarms. This study serves as a feasibility investigation into a methodology for seamless error detection that promises to transform practical applications in the domain of neuroadaptive technology and human-robot interaction.
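A compact Python sketch of the feature pipeline described above, under assumed data shapes (trials × channels × samples) and illustrative hyperparameters: first- and second-order temporal derivatives serve as features, an effect-size (Cohen's d) filter selects the most discriminative ones, and a lightweight shrinkage-LDA classifier is trained. The competition's actual preprocessing and noise-rejection steps are not reproduced here:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8, 200))     # 300 trials, 8 channels, 200 samples (synthetic)
y = rng.integers(0, 2, 300)                # 1 = ErrP present, 0 = correct

def derivative_features(epochs):
    d1 = np.diff(epochs, n=1, axis=-1)     # first temporal derivative
    d2 = np.diff(epochs, n=2, axis=-1)     # second temporal derivative
    return np.concatenate([d1.reshape(len(epochs), -1),
                           d2.reshape(len(epochs), -1)], axis=1)

def cohens_d(feat, labels):
    a, b = feat[labels == 1], feat[labels == 0]
    pooled = np.sqrt((a.var(0, ddof=1) + b.var(0, ddof=1)) / 2) + 1e-12
    return np.abs(a.mean(0) - b.mean(0)) / pooled

F = derivative_features(X)
d = cohens_d(F, y)
top = np.argsort(d)[-200:]                 # keep the 200 highest-effect-size features (assumed)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(F[:, top], y)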
ABSTRACT
BACKGROUND: Quality assurance (QA) of patient-specific treatment plans for intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) necessitates prior validation. However, the standard methodology has deficiencies and lacks sensitivity in the analysis of positional dose distribution data, making it difficult to accurately identify the reasons for plan verification failure. This issue complicates QA tasks and impedes their efficiency. PURPOSE: The primary aim of this research is to utilize deep learning algorithms for the extraction of 3D dose distribution maps and the creation of a predictive model for error classification across multiple machine models, treatment methodologies, and tumor locations. METHOD: We devised five categories of validation plans (normal, gantry error, collimator error, couch error, and dose error), conforming to tolerance limits of different accuracy levels and employing 3D dose distribution data from a sample of 94 tumor patients. A CNN model was then constructed to predict the different error types, with predictions compared against the gamma pass rate (GPR) standard at distinct thresholds (3%, 3 mm; 3%, 2 mm; 2%, 2 mm) to evaluate the model's performance. We also appraised the model's robustness by assessing its performance across different accelerators. RESULTS: The accuracy, precision, recall, and F1 score of the CNN model were 0.907, 0.925, 0.907, and 0.908, respectively; on another device, the corresponding values were 0.900, 0.918, 0.900, and 0.898. In addition, the CNN model achieved better results than the GPR method in predicting the different types of errors. CONCLUSION: Compared with the GPR methodology, the CNN model exhibits superior predictive capability for classification in radiation therapy plan validation on different devices. Using this model, plan validation failures can be detected more rapidly and efficiently, minimizing the time required for QA tasks and serving as a valuable adjunct to overcome the constraints of the GPR method.
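As a rough illustration of what a classifier over 3D dose distributions can look like, here is a minimal 3D CNN in Python/PyTorch with five output classes (normal, gantry, collimator, couch, dose error); the architecture, input grid size, and channel counts are assumptions and do not reproduce the paper's model:

import torch
import torch.nn as nn

class DoseErrorCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, D, H, W) dose grid
        return self.classifier(self.features(x).flatten(1))

model = DoseErrorCNN()
dose = torch.randn(2, 1, 64, 64, 64)            # two synthetic 3D dose maps
logits = model(dose)                            # (2, 5) class scores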
Subjects
Algorithms, Deep Learning, Quality Assurance, Health Care, Radiotherapy Dosage, Radiotherapy Planning, Computer-Assisted, Radiotherapy, Intensity-Modulated, Radiotherapy Planning, Computer-Assisted/methods, Humans, Radiotherapy, Intensity-Modulated/methods, Quality Assurance, Health Care/standards, Neoplasms/radiotherapy, Organs at Risk/radiation effects
ABSTRACT
PURPOSE: This study aimed to develop a hybrid multi-channel network to detect multileaf collimator (MLC) positional errors using dose difference (DD) maps and gamma maps generated from low-resolution detectors in patient-specific quality assurance (QA) for Intensity Modulated Radiation Therapy (IMRT). METHODS: A total of 68 plans with 358 beams of IMRT were included in this study. The MLC leaf positions of all control points in the original IMRT plans were modified to simulate four types of errors: shift error, opening error, closing error, and random error. These modified plans were imported into the treatment planning system (TPS) to calculate the predicted dose, while the PTW seven29 phantom was utilized to obtain the measured dose distributions. Based on the measured and predicted dose, DD maps and gamma maps, both with and without errors, were generated, resulting in a dataset with 3222 samples. The network's performance was evaluated using various metrics, including accuracy, sensitivity, specificity, precision, F1-score, ROC curves, and a normalized confusion matrix. In addition, other baseline methods, such as a single-channel hybrid network, ResNet-18, and Swin Transformer, were also evaluated as a comparison. RESULTS: The experimental results showed that the multi-channel hybrid network outperformed the other methods, demonstrating higher average precision, accuracy, sensitivity, specificity, and F1-scores, with values of 0.87, 0.89, 0.85, 0.97, and 0.85, respectively. The multi-channel hybrid network also achieved higher AUC values in the random-error (0.964) and error-free (0.946) categories. Although the average accuracy of the multi-channel hybrid network was only marginally better than that of ResNet-18 and Swin Transformer, it significantly outperformed them regarding precision in the error-free category. CONCLUSION: The proposed multi-channel hybrid network exhibits a high level of accuracy in identifying MLC errors using low-resolution detectors. The method offers an effective and reliable solution for promoting the quality and safety of IMRT QA.
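A minimal Python/PyTorch sketch in the same spirit: the dose-difference map and the gamma map enter as two channels of a small 2D CNN with five output classes (four error types plus error-free). Layer sizes are illustrative, the 27 × 27 input matches the PTW seven29 chamber grid, and the study's actual hybrid architecture is not reproduced here:

import torch
import torch.nn as nn

class MultiChannelMLCNet(nn.Module):
    def __init__(self, n_classes=5):                      # shift/opening/closing/random/error-free
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, dd_map, gamma_map):
        x = torch.stack([dd_map, gamma_map], dim=1)       # (batch, 2, H, W)
        return self.head(self.backbone(x).flatten(1))

net = MultiChannelMLCNet()
dd = torch.randn(4, 27, 27)                               # dose-difference maps
gm = torch.rand(4, 27, 27)                                # gamma maps
print(net(dd, gm).shape)                                  # torch.Size([4, 5])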
Subjects
Phantoms, Imaging, Quality Assurance, Health Care, Radiotherapy Dosage, Radiotherapy Planning, Computer-Assisted, Radiotherapy, Intensity-Modulated, Humans, Radiotherapy, Intensity-Modulated/methods, Quality Assurance, Health Care/standards, Radiotherapy Planning, Computer-Assisted/methods, Algorithms, Organs at Risk/radiation effects, Neoplasms/radiotherapy, Radiotherapy Setup Errors/prevention & control
ABSTRACT
The U.S. Food and Drug Administration (FDA) has broadly supported quality by design initiatives for clinical trials, including monitoring and data validation, by releasing two related guidance documents (FDA 2013 and 2019). Centralized statistical monitoring (CSM) can be a component of a quality by design process. In this article, we describe our experience with a CSM platform as part of a Cooperative Research and Development Agreement between CluePoints and FDA. This agreement's approach to CSM is based on many statistical tests performed on all relevant subject-level data submitted in order to identify outlying sites. An overall data inconsistency score is calculated to assess the inconsistency of data from one site compared to data from all sites, and sites are ranked by this score (−log10 p, where p is an aggregated p-value). Results from a deidentified trial demonstrate the typical data anomaly findings produced by Statistical Monitoring Applied to Research Trials analyses. Sensitivity analyses were performed after excluding laboratory data and questionnaire data. Graphics from deidentified subject-level trial data illustrate abnormal data patterns. The analyses were performed separately by site, country/region, and patient. Key risk indicator analyses were conducted for the selected endpoints. Potential data anomalies and their possible causes are discussed. This data-driven approach can be effective and efficient in selecting sites that exhibit data anomalies, and it provides insights to statistical reviewers for conducting sensitivity analyses, subgroup analyses, and site-by-treatment effect explorations. Messy data, data failing to conform to standards, and other disruptions (e.g. the COVID-19 pandemic) can pose challenges.
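To make the inconsistency score concrete, the following Python sketch compares each site's variables against all other sites, aggregates the per-variable p-values, and ranks sites by −log10 p. The actual tests and aggregation used by the commercial CSM platform are not public, so Mann-Whitney tests and Fisher's combination are used here purely as stand-ins on synthetic data:

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "site": rng.choice([f"S{i:02d}" for i in range(10)], size=800),
    "sbp": rng.normal(130, 15, 800),      # illustrative subject-level variables
    "alt": rng.normal(30, 10, 800),
})
df.loc[df["site"] == "S03", "sbp"] += 12  # inject an anomalous site

scores = {}
for site, grp in df.groupby("site"):
    rest = df[df["site"] != site]
    pvals = [stats.mannwhitneyu(grp[v], rest[v]).pvalue for v in ["sbp", "alt"]]
    _, p_agg = stats.combine_pvalues(pvals, method="fisher")
    scores[site] = -np.log10(max(p_agg, 1e-300))          # data inconsistency score

print(pd.Series(scores).sort_values(ascending=False).head(3))   # most outlying sites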
ABSTRACT
The construction of medical knowledge graphs (MKGs) is steadily progressing from manual to automatic methods, which inevitably introduce noise that can impair the performance of downstream healthcare applications. Existing error detection approaches depend on the topological structure and external labels of entities in MKGs to improve their quality. Nevertheless, owing to the cost of manual annotation and the imperfection of automatic algorithms, precise entity labels in MKGs cannot be readily obtained. To address these issues, we propose an approach named Enhancing error detection on Medical knowledge graphs via intrinsic labEL (EMKGEL). In the absence of a hyper-view KG, we establish a hyper-view KG and a triplet-level KG for implicit label information and neighborhood information, respectively. Inspired by the success of graph attention networks (GATs), we introduce a hyper-view GAT to incorporate label messages and neighborhood information into representation learning. We leverage a confidence score that combines local and global trustworthiness to assess the triplets. To validate the effectiveness of our approach, we conducted experiments on three publicly available MKGs, namely PharmKG-8k, DiseaseKG, and DiaKG. Compared with the baseline models, the Precision@K value improved by 0.7%, 6.1%, and 3.6%, respectively, on these datasets. Furthermore, our method significantly outperformed the baseline on a general knowledge graph, Nell-995.
ABSTRACT
Investigating the cognitive control processes and error detection mechanisms involved in risk-taking behaviors is essential for understanding risk propensity. This study investigated the relationship between risk propensity and cognitive control processes using an event-related potentials (ERP) approach. The study employed a Cued Go/Nogo paradigm to elicit ERP components related to cognitive control processes, including the contingent negative variation (CNV), P300, error-related negativity (ERN), and error positivity (Pe). Healthy participants were categorized into high-risk and low-risk groups based on their performance in the Balloon Analogue Risk Task (BART). The results revealed that risk-taking behavior influenced CNV amplitudes, indicating heightened response preparation and inhibition in the high-risk group. In contrast, the P300 component showed no group differences but exhibited enhanced amplitudes in Nogo trials, particularly in the high-risk group. Furthermore, despite the lack of difference in the Pe component, the high-risk group exhibited smaller ERN amplitudes than the low-risk group, suggesting reduced sensitivity to error detection. These findings imply that risk-taking behaviors may be associated with a hypoactive avoidance system rather than impaired response inhibition. Understanding the neural mechanisms underlying risk propensity and cognitive control processes can contribute to the development of interventions aimed at reducing risky behaviors and promoting better decision-making.
Subjects
Electroencephalography, Evoked Potentials, Humans, Reaction Time/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Event-Related Potentials, P300/physiology, Cognition/physiology
ABSTRACT
BACKGROUND: Clinical laboratories frequently implement the same tests and internal quality control (QC) rules on identical instruments. It is unclear whether individual QC targets for each analyser or ones that are common to all instruments are preferable. This study modelled how common QC targets influence assay error detection before examining their effect on real-world data. METHODS: The effect of variable bias and imprecision on error detection and false rejection rates when using common or individual QC targets on two instruments was simulated. QC data from tests run on two identical Beckman instruments (6-month period, same QC lot, n > 100 points for each instrument) determined likely real-world consequences. RESULTS: Compared to individual QC targets, common targets had an asymmetrical effect on systematic error detection, with one instrument assay losing detection power more than the other gained. If individual in-control assay standard deviations (SDs) differed, then common targets led to one assay failing QC more frequently. Applied to two analysers (95 QC levels and 45 tests), common targets reduced one instrument's error detection by ≥ 0.4 sigma on 15/45 (33%) of tests. Such targets also meant 14/45 (31%) of assays on one in-control instrument would fail over twice as frequently as the other (median ratio 1.62, IQR 1.20-2.39) using a 2SD rule. CONCLUSIONS: Compared to instrument-specific QC targets, common targets can reduce the probability of detecting changes in individual assay performance and cause one in-control assay to fail QC more frequently than another. Any impact on clinical care requires further investigation.
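The asymmetry the study describes can be reproduced with a small Python simulation: two in-control analysers run the same assay but differ in SD, and QC points are judged with a simple ±2 SD rule against either instrument-specific or common (pooled) targets. The numbers below are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
inst_a = rng.normal(10.0, 0.20, n)           # instrument A: in-control SD 0.20
inst_b = rng.normal(10.0, 0.30, n)           # instrument B: in-control SD 0.30

# Individual targets: each instrument judged against its own mean/SD
rej_a_ind = np.mean(np.abs(inst_a - 10.0) > 2 * 0.20)
rej_b_ind = np.mean(np.abs(inst_b - 10.0) > 2 * 0.30)

# Common targets: both judged against the pooled mean/SD
pooled_sd = np.sqrt((0.20**2 + 0.30**2) / 2)
rej_a_com = np.mean(np.abs(inst_a - 10.0) > 2 * pooled_sd)
rej_b_com = np.mean(np.abs(inst_b - 10.0) > 2 * pooled_sd)

print(f"individual targets: A {rej_a_ind:.1%}, B {rej_b_ind:.1%}")   # both about 4.6%
print(f"common targets:     A {rej_a_com:.1%}, B {rej_b_com:.1%}")   # asymmetric rejection

With common targets, the tighter instrument rejects far less often (losing power to detect real shifts) while the noisier one fails QC much more frequently, mirroring the behaviour reported above.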
Subjects
Quality Control, Humans, Laboratories, Clinical/standards, Computer Simulation
ABSTRACT
The ability to recognize and correct errors in one's explanatory understanding is critically important for learning. However, little is known about the mechanisms that determine when and under what circumstances errors are detected and how they are corrected. The present study investigated thought experiments as a potential tool that can reveal errors and trigger belief revision in the service of error correction. Across two experiments, 1149 participants engaged in reasoning about force and motion (a domain with well-documented misconceptions) in a pre-training-training-post-training design. The two experiments varied the type of mental model manipulated in the thought experiments (i.e., whether participants reasoned about forces acting on their own bodies vs. on external objects), as well as the level of relational and argumentative reasoning about the outcomes of the thought experiments. The results showed that: (i) thought experiments can serve as a tool to elicit inconsistencies in one's representations; (ii) the level of relational and argumentative reasoning determines the level of belief revision in the service of error correction; and (iii) the type of mental model manipulated in a thought experiment determines its outcome and its potential to initiate belief revision. Thought experiments can serve as a valuable teaching and learning tool, and they can help us better understand the nature of error detection and correction systems.
Subjects
Learning, Problem Solving, Humans, Motion
ABSTRACT
Coordinated interactions between the central and autonomic nervous systems are crucial for survival because human behavior is inherently error-prone. In our ever-changing environment, mistakes can have life-threatening consequences. In response to errors, specific reactions occur in both brain activity and heart rate that serve to detect and correct them. Specifically, there are two brain-related indicators of error detection and awareness, known as the error-related negativity and the error positivity. Conversely, error-related cardiac deceleration denotes a momentary slowing of heart rate following an error, signaling an autonomic response. But what is the connection between the brain and the heart during error processing? In this review, we discuss the functional and neuroanatomical connections between the brain and heart markers of error processing, exploring the experimental conditions in which they covary. Given the current limitations of available data, future research will continue to investigate the neurobiological factors governing the brain-heart interaction, aiming to utilize them as combined markers for assessing cognitive control in healthy and pathological conditions.
Subjects
Deceleration, Electroencephalography, Humans, Reaction Time/physiology, Brain, Autonomic Nervous System/physiology, Psychomotor Performance/physiology, Evoked Potentials/physiology
ABSTRACT
The escalating demand for artificial intelligence (AI) systems that can monitor and supervise human errors and abnormalities in healthcare presents unique challenges. Recent advances in vision-language models reveal the challenges of monitoring AI by understanding both visual and textual concepts and their semantic correspondences. However, there has been limited success in applying vision-language models in the medical domain. Current vision-language models and learning strategies for photographic images and captions call for a web-scale data corpus of image and text pairs, which is often not feasible in the medical domain. To address this, we present a model named medical cross-attention vision-language model (Medical X-VL), which leverages key components tailored to the medical domain: self-supervised unimodal models in the medical domain and a fusion encoder to bridge them, momentum distillation, sentence-wise contrastive learning for medical reports, and sentence similarity-adjusted hard negative mining. We experimentally demonstrated that our model enables various zero-shot tasks for monitoring AI, ranging from zero-shot classification to zero-shot error correction. Our model outperformed current state-of-the-art models on two medical image datasets, suggesting a novel clinical application of our monitoring AI model to alleviate human errors. Our method demonstrates a more specialized capacity for fine-grained understanding, which presents a distinct advantage particularly applicable to the medical domain.
Subjects
Artificial Intelligence, Radiology, Humans, Radiography, Learning, Language
ABSTRACT
This paper presents an innovative approach for predicting timing errors tailored to near-/sub-threshold operations, addressing the energy-efficient requirements of digital circuits in applications such as IoT devices and wearables. The method involves assessing deep path activity within an adjustable window prior to the root clock's rising edge. By dynamically adapting the prediction window and supply voltage based on error detection outcomes, the approach effectively mitigates false predictions, an essential concern in low-voltage prediction techniques. The efficacy of this strategy is demonstrated through its implementation in a near-/sub-threshold 32-bit microprocessor system. The approach incurs only a modest 6.84% area overhead attributed to well-engineered lightweight design methodologies. Furthermore, with the integration of clock gating, the system functions seamlessly across a voltage range of 0.4 V-1.2 V (5-100 MHz), effectively catering to adaptive energy efficiency. Empirical results highlight the potential of the proposed strategy, achieving a significant 46.95% energy reduction at the Minimum Energy Point (MEP, 15 MHz) compared to signoff margins. Additionally, a 19.75% energy decrease is observed compared to the zero-margin operation, demonstrating successful realization of negative margins.
ABSTRACT
Objective. While brain-machine interfaces (BMIs) are promising technologies that could provide direct pathways for controlling the external world and thus regaining motor capabilities, their effectiveness is hampered by decoding errors. Previous research has demonstrated the detection and correction of BMI outcome errors, which occur at the end of trials. Here we focus on continuous detection and correction of BMI execution errors, which occur during real-time movements. Approach. Two adult male rhesus macaques were implanted with Utah arrays in the motor cortex. The monkeys performed single or two-finger group BMI tasks where a Kalman filter decoded binned spiking-band power into intended finger kinematics. Neural activity was analyzed to determine how it depends not only on the kinematics of the fingers, but also on the distance of each finger group to its target. We developed a method to detect erroneous movements, i.e. consistent movements away from the target, from the same neural activity used by the Kalman filter. Detected errors were corrected by a simple stopping strategy, and the effect on performance was evaluated. Main results. First we show that including distance to target explains significantly more variance of the recorded neural activity. Then, for the first time, we demonstrate that neural activity in motor cortex can be used to detect execution errors during BMI controlled movements. Keeping the false positive rate below 5%, it was possible to achieve a mean true positive rate of 28.1% online. Despite requiring 200 ms to detect and react to suspected errors, we were able to achieve a significant improvement in task performance via reduced orbiting time of one finger group. Significance. Neural activity recorded in motor cortex for BMI control can be used to detect and correct BMI errors and thus to improve performance. Further improvements may be obtained by enhancing classification and correction strategies.
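In the study, errors were detected from the same neural activity the Kalman filter used; as a purely conceptual Python sketch of what counts as an execution error, the snippet below flags a trial once the decoded finger position moves away from its target for several consecutive decoder bins, which is the kind of event the stopping strategy would react to. The threshold, bin count, and data are invented for illustration:

import numpy as np

def detect_execution_error(positions, target, n_consecutive=4):
    # positions: decoded 1D finger positions per bin; returns the bin index at which
    # a sustained movement away from the target is flagged, or None if none occurs
    dist = np.abs(np.asarray(positions) - target)
    moving_away = np.diff(dist) > 0                     # distance to target increased
    run = 0
    for k, away in enumerate(moving_away, start=1):
        run = run + 1 if away else 0
        if run >= n_consecutive:                        # sustained movement away
            return k                                    # bin at which to trigger the stop
    return None

trace = [0.10, 0.12, 0.16, 0.21, 0.27, 0.34]            # drifting away from target 0.0
print(detect_execution_error(trace, target=0.0))        # -> 4 (illustrative)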
Subjects
Brain-Computer Interfaces, Animals, Male, Macaca mulatta, Electrodes, Implanted, Fingers, Movement
ABSTRACT
Objective: To investigate: (1) what automated search methods are used to identify wrong-patient order entry (WPOE), (2) what data are being captured and how they are being used, (3) the causes of WPOE, and (4) how providers identify their own errors. Materials and Methods: A systematic scoping review of the empirical literature was performed using the databases CINAHL, Embase, and MEDLINE, covering the period from database inception until 2021. Search terms were related to the use of automated searches for WPOE when using an electronic prescribing system. Data were extracted and thematic analysis was performed to identify patterns or themes within the data. Results: Fifteen papers were included in the review. Several automated search methods were identified, with the retract-and-reorder (RAR) method and the Void Alert Tool (VAT) being the most prevalent. Included studies used automated search methods to identify background error rates in isolation or in the context of an intervention. Risk factors for WPOE were identified, with technological factors and interruptions deemed the biggest risks. Minimal data on how providers identify their own errors were found. Discussion: RAR is the most widely used method to identify WPOE, with a good positive predictive value (PPV) of 76.2%; however, it will not currently identify other error types. The VAT is nonspecific for WPOE, with mean PPVs of 78%-93.1%, but the accuracy of the recorded voiding reason varies considerably. Conclusion: Automated search methods are powerful tools to identify WPOE that would otherwise go unnoticed. Further research is required around self-identification of errors.
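A hedged Python sketch of the RAR query logic referenced above: an order that is placed, retracted within a short window, and then re-entered by the same provider for a different patient shortly afterwards is flagged as a probable wrong-patient order. Published RAR definitions differ in their exact windows and matching rules; the column names and 10-minute windows below are assumptions:

import pandas as pd

def find_rar_events(orders, retract_min=10, reorder_min=10):
    # orders: DataFrame with provider_id, patient_id, drug, placed_at, retracted_at
    o = orders.sort_values("placed_at")
    retracted = o[(o["retracted_at"] - o["placed_at"]) <= pd.Timedelta(minutes=retract_min)]
    events = []
    for _, r in retracted.iterrows():
        later = o[(o["provider_id"] == r["provider_id"]) &
                  (o["drug"] == r["drug"]) &
                  (o["patient_id"] != r["patient_id"]) &
                  (o["placed_at"] > r["retracted_at"]) &
                  (o["placed_at"] <= r["retracted_at"] + pd.Timedelta(minutes=reorder_min))]
        if not later.empty:
            events.append((r["patient_id"], later.iloc[0]["patient_id"], r["drug"]))
    return events

In practice, flagged events would still be reviewed manually, since the PPV figures above indicate that not every RAR event corresponds to a true wrong-patient error.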
ABSTRACT
BACKGROUND: Electromagnetic tracking (EMT) is a promising technology that holds great potential to advance patient-specific pre-treatment verification in interstitial brachytherapy (iBT). It allows easy determination of the implant geometry without line-of-sight restrictions and without dose exposure to the patient. What it cannot provide, however, is a link to anatomical landmarks, such as the exit points of catheters or needles on the skin surface. These landmarks are required for the registration of EMT data with other imaging modalities and for the detection of treatment errors such as incorrect indexer lengths and catheter or needle shifts. PURPOSE: To develop an easily applicable method to detect reference points in the positional data of the trajectory of an EMT sensor, specifically the exit points of catheters in breast iBT, and to apply the approach to pre-treatment error detection. METHODS: Small metal objects were attached to catheter fixation buttons that rest against the breast surface to intentionally induce a local, spatially limited perturbation of the magnetic field on which the working principle of EMT relies. This perturbation can be sensed by the EMT sensor as it passes by, allowing it to localize the metal object and thus the catheter exit point. For the proof-of-concept, different small metal objects (magnets, washers, and bushes) and EMT sensor drive speeds were used to find the optimal parameters. The approach was then applied to treatment error detection and validated in vitro on a phantom. Lastly, the in vivo feasibility of the approach was tested on a cohort of four patients to assess the impact on the clinical workflow. RESULTS: All investigated metal objects were able to measurably perturb the magnetic field, which resulted in missing sensor readings, that is, two data gaps: one as the sensor moved towards the tip end and one as it retracted from there. The size of the resulting data gaps varied depending on the choice of gap points used to calculate the gap size; the start points of the gaps in both directions showed the smallest variability. The median size of the data gaps was ≤8 mm for all tested materials and sensor drive speeds. The variability of the determined object position was ≤0.5 mm at a speed of 1.0 cm/s and ≤0.7 mm at 2.5 cm/s, increasing up to 2.3 mm at 5.0 cm/s. The in vitro validation of the error detection yielded a 100% detection rate for catheter shifts of ≥2.2 mm, and all simulated wrong indexer lengths were correctly identified. The in vivo feasibility assessment showed that the metal objects did not interfere with the routine clinical workflow. CONCLUSIONS: The developed approach was able to successfully detect reference points in EMT data, which can be used for registration to other imaging modalities and for treatment error detection. It can thus advance the automation of patient-specific, pre-treatment quality assurance in iBT.
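As a small illustration of the reference-point extraction, the Python sketch below scans one catheter's EMT trajectory for the data gap caused by the metal object and returns the last valid position before the gap (the gap start, which the abstract found most reproducible). The data layout, gap length, and step size are assumptions:

import numpy as np

def gap_start_position(positions):
    # positions: (n, 3) array of sensor readings along one catheter, NaN rows where
    # the tracker reported no valid reading; returns the last valid position before
    # the first gap, or None if no gap is present
    invalid = np.isnan(positions).any(axis=1)
    gap_idx = np.flatnonzero(invalid)
    if gap_idx.size == 0 or gap_idx[0] == 0:
        return None
    return positions[gap_idx[0] - 1]

track = np.linspace([0.0, 0.0, 0.0], [0.0, 0.0, 120.0], 240)   # 120 mm insertion, ~0.5 mm steps
track[100:116] = np.nan                                        # ~8 mm gap induced by the magnet
print(gap_start_position(track))                               # approx. [0. 0. 49.7] mm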