Results 1 - 20 of 174
1.
Brief Bioinform; 23(6), 2022 Nov 19.
Article in English | MEDLINE | ID: mdl-36266246

ABSTRACT

Nucleotide and protein sequences stored in public databases are the cornerstone of many bioinformatics analyses. The records containing these sequences are prone to a wide range of errors, including incorrect functional annotation, sequence contamination and taxonomic misclassification. One source of information that can help to detect errors is the strong interdependency between records. Novel sequences in one database draw their annotations from existing records, may generate new records in multiple other locations and will have varying degrees of similarity with existing records across a range of attributes. A network perspective of these relationships between sequence records, within and across databases, offers new opportunities to detect, or even correct, erroneous entries and more broadly to make inferences about record quality. Here, we describe this novel perspective of sequence database records as a rich network, which we call the sequence database network, and illustrate the opportunities this perspective offers for quantifying database quality and detecting spurious entries. We provide an overview of the relevant databases and describe how the interdependencies between sequence records across these databases can be exploited by network analyses. We review the process of sequence annotation and provide a classification of sources of error, highlighting propagation as a major source. We illustrate the value of a network perspective through three case studies that use network analysis to detect errors, and explore the quality and quantity of critical relationships that would inform such network analyses. This systematic description of a network perspective of sequence database records provides a novel direction to combat the proliferation of errors within these critical bioinformatics resources.
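A minimal sketch of the record-network idea, assuming networkx: records are nodes, cross-database and similarity relationships are edges, and a record whose annotation disagrees with most of its linked records is flagged as suspect. The record IDs, annotations, and the 0.7 threshold are hypothetical.

```python
import networkx as nx

G = nx.Graph()
G.add_node("NC_001", annotation="kinase")       # hypothetical nucleotide record
G.add_node("P12345", annotation="kinase")       # hypothetical protein record
G.add_node("WP_999", annotation="transporter")  # hypothetical derived record
G.add_edges_from([("NC_001", "P12345"), ("NC_001", "WP_999")])

def suspect_records(graph, threshold=0.7):
    """Flag records whose annotation differs from most linked records."""
    flagged = []
    for node, data in graph.nodes(data=True):
        neighbours = list(graph.neighbors(node))
        if not neighbours:
            continue
        disagree = sum(graph.nodes[n]["annotation"] != data["annotation"]
                       for n in neighbours)
        if disagree / len(neighbours) >= threshold:
            flagged.append(node)
    return flagged

print(suspect_records(G))  # ['WP_999']
```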


Subject(s)
Computational Biology, Nucleic Acid Databases, Amino Acid Sequence
2.
Brain Cogn; 175: 106136, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38301366

ABSTRACT

Investigating the cognitive control processes and error detection mechanisms involved in risk-taking behaviors is essential for understanding risk propensity. This study investigated the relationship between risk propensity and cognitive control processes using an event-related potentials (ERP) approach. The study employed a Cued Go/Nogo paradigm to elicit ERP components related to cognitive control processes, including contingent negative variation (CNV), P300, error-related negativity (ERN), and error positivity (Pe). Healthy participants were categorized into high-risk and low-risk groups based on their performance in the Balloon Analogue Risk Task (BART). The results revealed that risk-taking behavior influenced CNV amplitudes, indicating heightened response preparation and inhibition in the high-risk group. In contrast, the P300 component showed no group differences but revealed enhanced amplitudes in Nogo trials, particularly in the high-risk group. Furthermore, despite the lack of difference in the Pe component, the high-risk group exhibited smaller ERN amplitudes compared to the low-risk group, suggesting reduced sensitivity to error detection. These findings imply that risk-taking behaviors may be associated with a hypoactive avoidance system rather than impaired response inhibition. Understanding the neural mechanisms underlying risk propensity and cognitive control processes can contribute to the development of interventions aimed at reducing risky behaviors and promoting better decision-making.


Subject(s)
Electroencephalography, Evoked Potentials, Humans, Reaction Time/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Event-Related Potentials, P300/physiology, Cognition/physiology
3.
J Biopharm Stat; 1-7, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38549510

ABSTRACT

The U.S. Food and Drug Administration (FDA) has broadly supported quality by design initiatives for clinical trials, including monitoring and data validation, by releasing two related guidance documents (FDA 2013 and 2019). Centralized statistical monitoring (CSM) can be a component of a quality by design process. In this article, we describe our experience with a CSM platform as part of a Cooperative Research and Development Agreement between CluePoints and FDA. This agreement's approach to CSM is based on many statistical tests performed on all relevant subject-level data submitted, in order to identify outlying sites. An overall data inconsistency score is calculated to assess the inconsistency of data from one site compared to data from all sites. Sites are ranked by the data inconsistency score (-log10 p, where p is an aggregated p-value). Results from a deidentified trial demonstrate the typical data anomaly findings through Statistical Monitoring Applied to Research Trials analyses. Sensitivity analyses were performed after excluding laboratory data and questionnaire data. Graphics from deidentified subject-level trial data illustrate abnormal data patterns. The analyses were performed by site, country/region, and patient separately. Key risk indicator analyses were conducted for the selected endpoints. Potential data anomalies and their possible causes are discussed. This data-driven approach can be effective and efficient in selecting sites that exhibit data anomalies, and it provides insights to statistical reviewers for conducting sensitivity analyses, subgroup analyses, and site-by-treatment effect explorations. Messy data, data failing to conform to standards, and other disruptions (e.g. the COVID-19 pandemic) can pose challenges.
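A minimal sketch of the inconsistency score described above, assuming Fisher's method as the aggregation rule (the platform may weight or combine tests differently): many per-test p-values per site are collapsed into one, and the site is scored as -log10(p).

```python
import numpy as np
from scipy.stats import combine_pvalues

site_pvalues = {  # hypothetical p-values from many per-site statistical tests
    "site_A": [0.40, 0.55, 0.31, 0.62],
    "site_B": [0.01, 0.03, 0.20, 0.002],  # an outlying site
}

scores = {}
for site, pvals in site_pvalues.items():
    _, p_agg = combine_pvalues(pvals, method="fisher")
    scores[site] = -np.log10(p_agg)           # data inconsistency score

for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{site}: inconsistency score = {score:.2f}")
```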

4.
BMC Health Serv Res; 24(1): 839, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39049093

ABSTRACT

BACKGROUND: Electronic medical record (EMR) systems provide timely access to clinical information and have been shown to improve medication safety. However, EMRs can also create opportunities for error, including system-related errors or errors that were unlikely or not possible with the use of paper medication charts. This study aimed to determine the detection and mitigation strategies adopted by a health district in Australia to target system-related errors and to explore stakeholder views on strategies needed to prevent future system-related errors from emerging. METHODS: A qualitative descriptive study design was used comprising semi-structured interviews. Data were collected from three hospitals within a health district in Sydney, Australia, between September 2020 and May 2021. Interviews were conducted with EMR users and other key stakeholders (e.g. clinical informatics team members). Participants were asked to reflect on how system-related errors changed over time, and to describe approaches taken by their organisation to detect and mitigate these errors. Thematic analysis was conducted iteratively using a general inductive approach, with codes assigned as themes emerged from the data. RESULTS: Interviews were conducted with 25 stakeholders. Participants reported that most system-related errors were detected by front-line clinicians. Following error detection, clinicians either reported system-related errors directly to the clinical informatics team or submitted reports to the incident information management system. System-related errors were also reported to be detected via reports run within the EMR, or during organisational processes such as incident investigations or system enhancement projects. EMR redesign was the main approach described by participants for mitigating system-related errors; however, other strategies, like regular user education and minimising the use of hybrid systems, were also reported. CONCLUSIONS: Initial detection of system-related errors relies heavily on front-line clinicians; however, other organisational strategies that are proactive and layered can improve the systemic detection, investigation, and management of errors. Together with EMR design changes, complementary error mitigation strategies, including targeted staff education, can support safe EMR use and development.


Subject(s)
Electronic Health Records, Qualitative Research, Humans, Australia, Medical Errors/prevention & control, Interviews as Topic, Medication Errors/prevention & control, Patient Safety
5.
J Appl Clin Med Phys; 25(8): e14372, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38709158

ABSTRACT

BACKGROUND: Quality assurance (QA) of patient-specific treatment plans for intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) necessitates prior validation. However, the standard methodology exhibits deficiencies and lacks sensitivity in the analysis of positional dose distribution data, leading to difficulties in accurately identifying the reasons for plan verification failure. This issue complicates QA tasks and impedes their efficiency. PURPOSE: The primary aim of this research is to utilize deep learning algorithms to extract information from 3D dose distribution maps and to create a predictive model for error classification across multiple machine models, treatment methodologies, and tumor locations. METHOD: We devised five categories of validation plans (normal, gantry error, collimator error, couch error, and dose error), conforming to tolerance limits of different accuracy levels and employing 3D dose distribution data from a sample of 94 tumor patients. A CNN model was then constructed to predict the diverse error types, with predictions compared against the gamma pass rate (GPR) standard employing distinct thresholds (3%, 3 mm; 3%, 2 mm; 2%, 2 mm) to evaluate the model's performance. Furthermore, we appraised the model's robustness by assessing its functionality across diverse accelerators. RESULTS: The accuracy, precision, recall, and F1 scores of the CNN model were 0.907, 0.925, 0.907, and 0.908, respectively. The corresponding performance on another device was 0.900, 0.918, 0.900, and 0.898. In addition, compared to the GPR method, the CNN model achieved better results in predicting different types of errors. CONCLUSION: When juxtaposed with the GPR methodology, the CNN model exhibits superior predictive capability for classification in the validation of radiation therapy plans on different devices. By using this model, plan validation failures can be detected more rapidly and efficiently, minimizing the time required for QA tasks and serving as a valuable adjunct to overcome the constraints of the GPR method.
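A minimal sketch, assuming PyTorch and a 64^3 dose grid, of the general shape of such a classifier: a 3D CNN mapping a dose distribution to one of the five plan categories. The layer sizes and class labels are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

CLASSES = ["normal", "gantry", "collimator", "couch", "dose"]

class DoseErrorCNN(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16 * 16, n_classes)

    def forward(self, x):                   # x: (batch, 1, 64, 64, 64)
        return self.classifier(self.features(x).flatten(1))

logits = DoseErrorCNN()(torch.randn(2, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```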


Subject(s)
Algorithms, Deep Learning, Quality Assurance of Health Care, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning, Intensity-Modulated Radiotherapy, Computer-Assisted Radiotherapy Planning/methods, Humans, Intensity-Modulated Radiotherapy/methods, Quality Assurance of Health Care/standards, Neoplasms/radiotherapy, Organs at Risk/radiation effects
6.
J Appl Clin Med Phys; 25(6): e14327, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38488663

ABSTRACT

PURPOSE: This study aimed to develop a hybrid multi-channel network to detect multileaf collimator (MLC) positional errors using dose difference (DD) maps and gamma maps generated from low-resolution detectors in patient-specific quality assurance (QA) for Intensity Modulated Radiation Therapy (IMRT). METHODS: A total of 68 plans with 358 beams of IMRT were included in this study. The MLC leaf positions of all control points in the original IMRT plans were modified to simulate four types of errors: shift error, opening error, closing error, and random error. These modified plans were imported into the treatment planning system (TPS) to calculate the predicted dose, while the PTW seven29 phantom was utilized to obtain the measured dose distributions. Based on the measured and predicted dose, DD maps and gamma maps, both with and without errors, were generated, resulting in a dataset with 3222 samples. The network's performance was evaluated using various metrics, including accuracy, sensitivity, specificity, precision, F1-score, ROC curves, and a normalized confusion matrix. Other baseline methods, such as a single-channel hybrid network, ResNet-18, and Swin-Transformer, were also evaluated for comparison. RESULTS: The experimental results showed that the multi-channel hybrid network outperformed the other methods, demonstrating higher average precision, accuracy, sensitivity, specificity, and F1-scores, with values of 0.87, 0.89, 0.85, 0.97, and 0.85, respectively. The multi-channel hybrid network also achieved higher AUC values in the random error (0.964) and error-free (0.946) categories. Although the average accuracy of the multi-channel hybrid network was only marginally better than that of ResNet-18 and Swin-Transformer, it significantly outperformed them regarding precision in the error-free category. CONCLUSION: The proposed multi-channel hybrid network exhibits a high level of accuracy in identifying MLC errors using low-resolution detectors. The method offers an effective and reliable solution for promoting the quality and safety of IMRT QA.
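A minimal sketch of assembling the two input channels named above: a dose-difference (DD) map, measured minus predicted, stacked with a gamma map. All values are synthetic, and the 27 x 27 grid mirrors a seven29-style detector array; this is not the authors' preprocessing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
measured = rng.random((27, 27)) * 2.0                   # Gy, synthetic
predicted = measured + rng.normal(0.0, 0.02, (27, 27))  # synthetic TPS dose
gamma_map = rng.random((27, 27))    # placeholder for a real gamma analysis

dd_map = measured - predicted           # dose-difference channel
sample = np.stack([dd_map, gamma_map])  # shape (2, 27, 27): one network input
print(sample.shape)
```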


Subject(s)
Imaging Phantoms, Quality Assurance of Health Care, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning, Intensity-Modulated Radiotherapy, Humans, Intensity-Modulated Radiotherapy/methods, Quality Assurance of Health Care/standards, Computer-Assisted Radiotherapy Planning/methods, Algorithms, Organs at Risk/radiation effects, Neoplasms/radiotherapy, Radiotherapy Setup Errors/prevention & control
7.
J Appl Clin Med Phys; 24(8): e14001, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37086428

ABSTRACT

PURPOSE: Developed as a plan-specific pre-treatment QA tool, Varian portal dosimetry promises a fast, high-resolution, and integrated QA solution. In this study, the agreement between predicted fluence and measured cumulative portal dose was determined for the first 140 patient plans at our Halcyon linear accelerator. Furthermore, the capability of portal dosimetry to detect incorrect plan delivery was compared to that of a common QA phantom. Finally, tolerance criteria for verification of VMAT plan delivery with Varian portal dosimetry were derived. METHODS: All patient plans and the corresponding verification plans were generated within the Eclipse treatment planning system. Four representative plans of different treatment sites (prostate, prostate with lymphatic drainage, rectum, and head & neck) were intentionally altered to model incorrect plan delivery. Investigated errors included both systematic and random errors. Gamma analysis was conducted on both portal dose (criteria γ2%/2mm, γ2%/1mm, and γ1%/1mm) and ArcCHECK measurements (criteria γ3%/3mm, γ3%/2mm, and γ2%/2mm) with a 10% low-dose threshold. Performance assessment of various acceptance criteria for plan-specific treatment QA utilized receiver operating characteristic (ROC) analysis. RESULTS: Predicted and acquired portal dosimetry fluences demonstrated high agreement, evidenced by average gamma passing rates for the clinical patient plans of 99.90%, 96.64%, and 91.87% for γ2%/2mm, γ2%/1mm, and γ1%/1mm, respectively. The ROC analysis demonstrated a very high capability of detecting erroneous plan delivery for portal dosimetry (area under the curve (AUC) > 0.98), which in this regard outperforms QA with the ArcCHECK phantom (AUC ≈ 0.82). With the suggested optimum decision thresholds, excellent sensitivity and specificity are simultaneously achievable. CONCLUSIONS: Owing to the high achievable spatial resolution, portal dosimetry at the Halcyon can reliably be deployed as a plan-specific pre-treatment QA tool to screen for errors. It is recommended to support the fluence-integrated portal dosimetry QA with independent phantom-based measurements of a random sample of treatment plans.
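A simplified 1D sketch of the gamma analysis used above, with a global dose-difference criterion, a distance-to-agreement (DTA) criterion, and the 10% low-dose threshold; real portal images are 2D, so this only illustrates the metric itself on a synthetic profile.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dd_pct=2.0, dta_mm=2.0, thresh=0.10):
    cutoff = thresh * ref.max()            # 10% low-dose threshold
    dd_norm = dd_pct / 100.0 * ref.max()   # global dose normalisation
    x = np.arange(len(ref)) * spacing_mm
    passed, total = 0, 0
    for i, d_ref in enumerate(ref):
        if d_ref < cutoff:
            continue
        # gamma = min over evaluated points of sqrt((dr/DTA)^2 + (dD/DD)^2)
        gamma = np.sqrt(((x - x[i]) / dta_mm) ** 2
                        + ((evl - d_ref) / dd_norm) ** 2).min()
        passed += gamma <= 1.0
        total += 1
    return 100.0 * passed / total

ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 200.0)  # synthetic dose profile
print(f"{gamma_pass_rate(ref, ref * 1.01, spacing_mm=1.0):.1f}%")  # 100.0%
```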


Subject(s)
Intensity-Modulated Radiotherapy, Male, Humans, Computer-Assisted Radiotherapy Planning, Radiometry, Radiotherapy Dosage, Sensitivity and Specificity, Quality Assurance of Health Care
8.
Sensors (Basel); 23(17), 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37687954

ABSTRACT

This paper presents an innovative approach for predicting timing errors tailored to near-/sub-threshold operation, addressing the energy-efficiency requirements of digital circuits in applications such as IoT devices and wearables. The method involves assessing deep path activity within an adjustable window prior to the root clock's rising edge. By dynamically adapting the prediction window and supply voltage based on error detection outcomes, the approach effectively mitigates false predictions, an essential concern in low-voltage prediction techniques. The efficacy of this strategy is demonstrated through its implementation in a near-/sub-threshold 32-bit microprocessor system. The approach incurs only a modest 6.84% area overhead, attributed to well-engineered lightweight design methodologies. Furthermore, with the integration of clock gating, the system functions seamlessly across a voltage range of 0.4 V-1.2 V (5-100 MHz), effectively catering to adaptive energy efficiency. Empirical results highlight the potential of the proposed strategy, achieving a significant 46.95% energy reduction at the Minimum Energy Point (MEP, 15 MHz) compared to signoff margins. Additionally, a 19.75% energy decrease is observed compared to zero-margin operation, demonstrating successful realization of negative margins.
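A toy control loop conveys the adaptive idea; the thresholds, step sizes, and policy below are invented for illustration and do not reflect the actual circuit.

```python
def adapt(window_ns, vdd, timing_error, false_prediction):
    if timing_error:              # real timing error: add voltage margin
        vdd = min(vdd + 0.05, 1.2)
    elif false_prediction:        # predictor too pessimistic: shrink window
        window_ns = max(window_ns - 0.1, 0.2)
    else:                         # stable: harvest energy margin
        vdd = max(vdd - 0.05, 0.4)
    return window_ns, vdd

state = (1.0, 0.8)                # (prediction window in ns, supply in V)
for err, fp in [(True, False), (False, True), (False, False)]:
    state = adapt(*state, err, fp)
    print(state)
```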

9.
Cogn Affect Behav Neurosci; 22(6): 1231-1249, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35915335

ABSTRACT

Error detection and error significance form essential mechanisms that influence error processing and action adaptation. Error detection often is assessed by an immediate self-evaluation of accuracy. Our study used cognitive neuroscience methods to elucidate whether self-evaluation itself influences error processing by increasing error significance in the context of a complex response selection process. In a novel eight-alternative response task, our participants responded to eight symbol stimuli with eight different response keys and a specific stimulus-response assignment. In the first part of the experiment, the participants merely performed the task. In the second part, they also evaluated their response accuracy on each trial. We replicated variations in early and later stages of error processing and action adaptation as a function of error detection. The additional self-evaluation enhanced error processing at later stages, probably reflecting error evidence accumulation, whereas earlier error monitoring processes were not amplified. Multivariate pattern analysis revealed that self-evaluation influenced brain activity patterns preceding and following the response onset, independent of response accuracy. The classifier successfully differentiated between responses from the self-evaluation and the no-self-evaluation condition several hundred milliseconds before response onset. Subsequent exploratory analyses indicated that both self-evaluation and time on task contributed to these differences in brain activity patterns. This suggests that in addition to its effect on error processing, self-evaluation in a complex choice task seems to influence early and general processing mechanisms (e.g., the quality of attention and stimulus encoding), an influence that is amplified by time on task.


Subject(s)
Attention, Psychomotor Performance, Humans, Reaction Time/physiology, Psychomotor Performance/physiology, Attention/physiology, Electroencephalography, Evoked Potentials/physiology
10.
Conscious Cogn; 99: 103284, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35168038

ABSTRACT

This research is a replication study that sought to verify whether the readability of a font affects detection of the Moses illusion. The illusion task is designed to stimulate information retrieval from memory and to confound that retrieval with a text's erroneous wording. Undergraduates aged 19-30 (N = 87, 80% women) were presented with two questions, one of which contained distorted information. We assumed that a difficult-to-read font would facilitate error detection, as it increases the focus of attention on the text. However, unlike the original study, we were unable to find support for this hypothesis, as font readability did not significantly affect error detection. In the difficult-to-read condition, 43% of participants reported an error, while in the easy-to-read condition, errors were detected by 37% of the participants. Unlike the original study, our results do not support the hypothesis that the visual presentation of a text affects the automatic retrieval of information from memory. This study clarifies the effect of text readability on error detection, taking into consideration the role of long-term memory and visual perception.
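A quick plausibility check of the reported null result, assuming a roughly even split of the N = 87 participants across conditions (the actual group sizes are not stated in the abstract): a chi-squared test on 43% vs. 37% detection is far from significance.

```python
from scipy.stats import chi2_contingency

hard, easy = 44, 43                        # assumed group sizes
detected = [round(0.43 * hard), round(0.37 * easy)]
table = [detected, [hard - detected[0], easy - detected[1]]]
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")   # p well above 0.05
```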


Subject(s)
Comprehension, Illusions, Adult, Female, Humans, Male, Reading, Visual Perception, Young Adult
11.
Sensors (Basel); 22(16), 2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36015801

ABSTRACT

The Industrial Revolution 4.0 (IR 4.0) has drastically impacted how the world operates. The Internet of Things (IoT), of which Wireless Sensor Networks (WSNs) form a significant part, is an important component of IR 4.0. WSNs are a good demonstration of the ambient intelligence vision, in which the environment becomes intelligent and aware of its surroundings. WSNs have unique features that create distinct network attributes, and they are deployed widely for critical real-time applications that impose stringent requirements on fault handling to avoid or tolerate catastrophic outcomes. Thus, the underlying Fault Tolerance (FT) structure is a critical requirement that needs to be considered when designing any algorithm for WSNs. Moreover, with the exponential evolution of IoT systems, substantial enhancements of current FT mechanisms will be needed to ensure that systems constantly provide high network reliability and integrity. Fault tolerance structures contain three fundamental stages: error detection, error diagnosis, and error recovery. The emergence of analytics, and the depth to which it can be harnessed, has led to the development of new fault-tolerant structures and strategies based on artificial intelligence and cloud computing. This survey provides an elaborate classification and analysis of fault tolerance structures and their essential components, and categorizes errors from several perspectives. Subsequently, an extensive analysis of existing fault tolerance techniques based on eight constraints is presented. Many prior studies have provided classifications for fault tolerance systems. However, this research enhances those reviews by proposing an extensively enhanced categorization that depends on new and additional metrics, including the number of sensor nodes engaged, the overall performance of the fault-tolerant approach, and the placement of the principal algorithm responsible for eliminating network errors. A new comparative taxonomy that also extensively reviews previous surveys and state-of-the-art scientific articles based on different factors is discussed, and it provides the basis for the proposed open issues.

12.
Sensors (Basel); 22(15), 2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35898028

ABSTRACT

The integrated satellite multiple terrestrial relay network (ISMTRN) is a new network architecture that combines satellite communication with terrestrial communication, utilizing the advantages of the two systems while overcoming their shortcomings. However, security issues inevitably arise in the ISMTRN, resulting from the broad coverage of satellite beams and the openness of wireless communication. One promising method to achieve secure transmission is covert communication technology, which has been a hot topic of discussion in recent years. In this paper, we investigate the performance of covert communication in the ISMTRN with partial relay selection. In particular, when the satellite transmits its signal to the user, we consider the scenario in which the selected relay opportunistically sends covert information to the destination. Furthermore, the closed-form error detection probability and average covert communication rate are derived. Finally, numerical simulation results are provided to reveal the impact of critical parameters on system covert performance.

13.
Entropy (Basel); 24(7), 2022 Jul 06.
Article in English | MEDLINE | ID: mdl-35885159

ABSTRACT

Error detection is a critical step in data cleaning. Most traditional error detection methods are based on rules and external information, which carry high cost, especially when dealing with large-scale data. Recently, with the advances of deep learning, some researchers have focused their attention on learning the semantic distribution of data for error detection; however, the low error rate in real datasets makes it hard to collect negative samples for training supervised deep learning models. Most existing deep-learning-based error detection algorithms solve the class imbalance problem by data augmentation. Due to inadequate sampling of negative samples, the features learned by those methods may be biased. In this paper, we propose an AEGAN (Auto-Encoder Generative Adversarial Network)-based deep learning model named SAT-GAN (Self-Attention Generative Adversarial Network) to detect errors in relational datasets. Combining the self-attention mechanism with a pre-trained language model, our model can capture semantic features of the dataset, specifically the functional dependencies between attributes, so that no rules or constraints are needed for SAT-GAN to identify inconsistent data. To cope with the lack of negative samples, we propose to train our model via zero-shot learning. As a clean-data-tailored model, SAT-GAN tries to recognize erroneous data as outliers by learning the latent features of clean data. In our evaluation, SAT-GAN achieves an average F1-score of 0.95 on five datasets, which yields at least a 46.2% F1-score improvement over rule-based methods and outperforms state-of-the-art deep learning approaches in the absence of rules and negative samples.
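A minimal sketch of the clean-data-tailored idea, using a plain autoencoder as a deliberate simplification of the AEGAN architecture described above: train on clean rows only, then flag rows the model cannot reconstruct. All data and the 3x threshold are synthetic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
clean = torch.randn(512, 8)                      # synthetic "clean" tuples
ae = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                             # fit the clean distribution
    opt.zero_grad()
    loss = ((ae(clean) - clean) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    train_err = ((ae(clean) - clean) ** 2).mean(dim=1)
threshold = 3 * train_err.max().item()

def is_error(row):
    """Rows the autoencoder cannot reconstruct are treated as errors."""
    with torch.no_grad():
        return ((ae(row) - row) ** 2).mean().item() > threshold

print(is_error(clean[0]), is_error(clean[0] + 10.0))  # False True
```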

14.
Neuroimage; 232: 117888, 2021 May 15.
Article in English | MEDLINE | ID: mdl-33647498

ABSTRACT

The concurrent execution of temporally overlapping tasks leads to considerable interference between the subtasks. This also impairs control processes associated with the detection of performance errors. In the present study, we investigated how the human brain adapts to this interference between task representations in such multitasking scenarios. In Experiment 1, participants worked on a dual-tasking paradigm with partially overlapping execution of two tasks (T1 and T2), while we recorded error-related scalp potentials. The error positivity (Pe), a correlate of higher-level error evaluation, was reduced after T1 errors but occurred after a correct T2-response instead. MVPA-based and regression-based single-trial analyses revealed that the immediate Pe and deferred Pe are negatively correlated, suggesting a trial-wise trade-off between immediate and postponed error processing. Experiment 2 confirmed this finding and additionally showed that the result is not due to credit-assignment errors in which a T1 error is falsely attributed to T2. Reporting, for the first time, a Pe that is temporally detached from its eliciting error event by a considerable amount of time, this study illustrates how reliable error detection in dual-tasking is maintained by a mechanism that adaptively schedules error processing, demonstrating a remarkable flexibility of the human brain when adapting to multitasking situations.


Subject(s)
Physiological Adaptation/physiology, Brain/physiology, Discrimination Learning/physiology, Multitasking Behavior/physiology, Reaction Time/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Female, Humans, Male, Photic Stimulation/methods, Psychomotor Performance/physiology, Young Adult
15.
Strahlenther Onkol; 197(7): 633-643, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33594471

ABSTRACT

PURPOSE: To investigate critical aspects and the effectiveness of in vivo dosimetry (IVD) tests obtained with an electronic portal imaging device (EPID) in a multicenter and multisystem context. MATERIALS AND METHODS: Eight centers using three commercial systems (SoftDiso [SD, Best Medical Italy, Chianciano, Italy], Dosimetry Check [DC, Math Resolution, LCC], and PerFRACTION [PF, Sun Nuclear Corporation, SNC, Melbourne, FL]) collected IVD results for a total of 2002 patients and 32,276 tests. Data are summarized by IVD software, radiotherapy technique, and anatomical site. Every center reported the number of patients and tests analyzed, and the percentage of tests outside the tolerance level (OTL%). OTL% was categorized as being due to incorrect patient setup, incorrect use of immobilization devices, incorrect dose computation, anatomical variations, or unknown causes. RESULTS: The three systems use different approaches and customized alert indices based on local protocols. For Volumetric Modulated Arc Therapy (VMAT) treatments, OTL% mean values were up to 8.9% for SD, 18.0% for DC, and 16.0% for PF. Errors due to "anatomical variations" for head and neck treatments were up to 9.0% for the SD and DC systems and 8.0% for the PF system, while for abdomen and pelvis/prostate treatments they were up to 9.0%, 17.0%, and 9.0% for SD, DC, and PF, respectively. The comparison among techniques gave 3% for Stereotactic Body Radiation Therapy, 7.0% (range 4.7-8.9%) for VMAT, 10.4% (range 7.0-12.2%) for Intensity Modulated Radiation Therapy, and 13.2% (range 8.8-21.0%) for 3D Conformal Radiation Therapy. CONCLUSION: The results obtained with different IVD software and among centers were consistent and showed acceptable homogeneity. EPID IVD was effective in intercepting important errors.


Subject(s)
In Vivo Dosimetry/methods, Humans, Radiosurgery, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning, Intensity-Modulated Radiotherapy, Software
16.
Sensors (Basel); 21(16), 2021 Aug 22.
Article in English | MEDLINE | ID: mdl-34451097

ABSTRACT

Currently, cryptographic algorithms are widely applied to communications systems to guarantee data security. For instance, in an emerging automotive environment where connectivity is a core part of autonomous and connected cars, it is essential to guarantee secure communications both inside and outside the vehicle. The AES algorithm has been widely applied to protect communications in onboard networks and outside the vehicle. Hardware implementations use techniques such as iterative, parallel, unrolled, and pipelined architectures. Nevertheless, the use of AES does not by itself guarantee secure communication, because previous works have proved that hardware implementations of secret key cryptosystems, such as AES, are sensitive to differential fault analysis. Moreover, it has been demonstrated that even a single fault during encryption or decryption can cause a large number of errors in the encrypted or decrypted data. Although techniques such as iterative and parallel architectures have been explored for fault detection to protect AES encryption and decryption, it is necessary to explore other techniques such as pipelining. Furthermore, balancing high throughput, low power consumption, and reduced hardware resource usage in a pipeline design is a great challenge, made more difficult when fault detection and correction are considered. In this research, we propose a novel hybrid pipeline hardware architecture focusing on error and fault detection for the AES cryptographic algorithm. The architecture is hybrid because it combines hardware and time redundancy through a pipeline structure, analyzing and balancing the critical path and distributing the processing elements within each stage. The main contributions are a pipeline structure that ciphers the same data blocks five times, a voting module that verifies whether an error occurred or the output carries correct cipher data, optimization of the process, and a decision tree that reduces the complexity of evaluating all required combinations. The architecture is analyzed and implemented on several FPGA technologies, reporting a throughput of 0.479 Gbps and an efficiency of 0.336 Mbps/LUT on a Virtex-7.
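A software sketch of the five-fold ciphering and voting idea; hashlib stands in for the hardware AES stages here, since the voting logic, not the cipher, is the point of the example.

```python
import hashlib
from collections import Counter

def cipher_unit(block: bytes, fault: bytes = b"") -> bytes:
    """Stand-in for one redundant AES unit; `fault` simulates a bit upset."""
    return hashlib.sha256(block + fault).digest()[:16]

def vote(outputs):
    """Majority vote over redundant outputs: detects and corrects faults."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count == len(outputs):
        return winner, "no fault"
    return winner, f"fault corrected ({len(outputs) - count} unit(s) disagreed)"

block = b"sixteen-byte-blk"
outputs = [cipher_unit(block) for _ in range(4)] + [cipher_unit(block, b"X")]
print(vote(outputs)[1])  # fault corrected (1 unit(s) disagreed)
```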

17.
Sensors (Basel); 21(21), 2021 Oct 29.
Article in English | MEDLINE | ID: mdl-34770488

ABSTRACT

Fault tolerance in IoT systems is challenging to achieve due to their complexity, dynamicity, and heterogeneity. IoT systems are typically designed and constructed in layers. Every layer has its own requirements and fault tolerance strategies. However, errors in one layer can propagate and affect others. Thus, it is impractical to consider a centralized fault tolerance approach for an entire system. Consequently, it is vital to consider multiple layers in order to enable collaboration and information exchange when addressing fault tolerance. The purpose of this study is to propose a multi-layer fault tolerance approach, granting interconnection among IoT system layers and allowing information exchange and collaboration in order to attain the property of dependability. To this end, we define an event-driven framework called FaTEMa (Fault Tolerance Event Manager) that creates a dedicated fault-related communication channel to propagate events across the levels of the system. The implemented framework assists with error detection and continued service. Additionally, it offers extension points to support heterogeneous communication protocols and to evolve new capabilities. Our empirical results show that introducing FaTEMa improved error detection and error resolution times, consequently improving system availability. In addition, the use of FaTEMa improved reliability and reduced the number of failures produced.
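A minimal sketch of a dedicated fault-event channel of the kind FaTEMa provides, as a small publish/subscribe bus; the topic names and handlers are illustrative, not the framework's actual API.

```python
from collections import defaultdict

class FaultEventChannel:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)                 # propagate the fault across layers

channel = FaultEventChannel()
channel.subscribe("sensor.fault",
                  lambda e: print(f"application layer: rerouting around {e}"))
channel.publish("sensor.fault", {"node": 12, "error": "timeout"})
```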

18.
Sensors (Basel); 21(10), 2021 May 12.
Article in English | MEDLINE | ID: mdl-34066193

ABSTRACT

A piston error detection method is proposed based on the broadband intensity distribution on the image plane, using a back-propagation (BP) artificial neural network. By setting a mask with a sparse circular clear multi-subaperture configuration in the exit pupil plane of a segmented telescope to fragment the pupil, the relation between the piston error of segments and the amplitude of the modulation transfer function (MTF) sidelobes is strictly derived according to Fourier optics principles. The BP artificial neural network is then utilized to establish the mapping relation between them, where the amplitudes of the MTF sidelobes, directly calculated from the theoretical relationship, and the introduced piston errors are used as inputs and outputs, respectively, to train the network. With the well-trained network, piston errors are measured to good precision using a single in-focus broadband image, without defocus division, as input, and a capture range reaching the coherence length of the broadband light is achieved. Simulations demonstrate the effectiveness and accuracy of the proposed method; the results show that the trained network has high measurement accuracy, a wide detection range, and good noise immunity and generalization ability. This method provides a feasible and easily implemented way to measure piston error and can simultaneously detect the multiple piston errors of the entire aperture of the segmented telescope.
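A heavily simplified toy of the mapping the network learns, assuming a single subaperture pair: the sidelobe amplitude varies as |cos(pi * piston / lambda)|, which is monotonic for pistons in [0, lambda/2], so a small regressor can invert it. The real method uses several sidelobes and broadband light to reach a capture range of the coherence length.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lam = 0.6328                                  # wavelength in microns (HeNe)
piston = rng.uniform(0.0, lam / 2, 2000)      # restricted, unambiguous range
sidelobe = np.abs(np.cos(np.pi * piston / lam))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(sidelobe.reshape(-1, 1), piston)      # sidelobe amplitude -> piston
true_piston = 0.10                            # microns
print(net.predict([[np.abs(np.cos(np.pi * true_piston / lam))]]))  # ~[0.10]
```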

19.
Sensors (Basel); 21(24), 2021 Dec 08.
Article in English | MEDLINE | ID: mdl-34960300

ABSTRACT

Accidentally clicking on a link is a type of human error known as a slip, in which a user performs an action they did not intend. The risk is significant: such errors occur with non-negligible probability, can have substantial effects, and even experienced individuals are susceptible to them. Phishing attacks take advantage of slip-based human error by exploiting psychological aspects of users that lead to unintentional clicks on phishing links. Such actions may lead to installing tracking software, downloading malware or viruses, or stealing private, sensitive information, to list a few. Therefore, a system is needed that detects whether a click on a link is intentional or unintentional and, if unintentional, can prevent it. This paper proposes a micro-behavioral accidental click detection system (ACDS) to prevent slip-based human error. A within-subject experiment was conducted with 20 participants to test the potential of the proposed system. The results reveal statistical significance between the two cases of intentional vs. unintentional clicks on a smartphone. Random tree, random forest, and support vector machine classifiers were used, exhibiting 82.6%, 87.2%, and 91.6% accuracy, respectively, in detecting unintentional clicks.
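A sketch of the final classification step; the two micro-behavioural features and their distributions below are invented for illustration, as the paper's actual feature set is richer.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# columns: touch duration (ms), pre-tap motion (px) -- synthetic distributions
intentional = np.column_stack([rng.normal(120, 20, 200), rng.normal(5, 2, 200)])
accidental = np.column_stack([rng.normal(60, 25, 200), rng.normal(25, 8, 200)])
X = np.vstack([intentional, accidental])
y = np.r_[np.ones(200), np.zeros(200)]

clf = SVC(kernel="rbf")
print(cross_val_score(clf, X, y, cv=5).mean())  # high accuracy on toy data
```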


Subject(s)
Computer Security, Software, Accidents, Data Collection, Humans
20.
Entropy (Basel); 23(4), 2021 Apr 16.
Article in English | MEDLINE | ID: mdl-33923611

ABSTRACT

For an industrial process, estimation of the feeding composition is important for analyzing production status and making control decisions. However, random errors, or even gross ones, inevitably contaminate the actual measurements. Feeding composition is conventionally obtained via discrete, low-rate manual testing. To address these problems, a feeding composition estimation approach based on a data reconciliation procedure is developed. To improve variable accuracy, a novel robust M-estimator is first proposed. Then, an iterative robust hierarchical data reconciliation and estimation strategy is applied to estimate the feeding composition. The feasibility and effectiveness of the estimation approach are verified on a fluidized bed roaster. The proposed M-estimator showed better overall performance.
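An illustration of the robust estimation step, with the classical Huber function standing in for the paper's novel M-estimator: iteratively reweighted least squares downweights large residuals so a gross error does not drag the reconciled estimate.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    a = np.abs(residuals)
    return np.where(a <= k, 1.0, k / a)

def robust_mean(x, iters=20):
    est = np.median(x)                                       # robust start
    for _ in range(iters):
        scale = 1.4826 * np.median(np.abs(x - est)) + 1e-12  # robust MAD scale
        w = huber_weights((x - est) / scale)
        est = np.sum(w * x) / np.sum(w)                      # reweighted estimate
    return est

x = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 25.0])             # one gross error
print(robust_mean(x))  # ~10.06, barely influenced by the outlier at 25
```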
