ABSTRACT
BACKGROUND: Smoking is a major risk factor for cardiovascular diseases, notably peripheral arterial disease (PAD). Despite this link, research on smoking cessation interventions in PAD patients remains scarce, and the evidence on their efficacy is inconclusive. Clarifying their efficacy is therefore crucial, and studies should address both individuals who smoke and are motivated to quit and heavy smokers who lack the motivation to quit. METHODS/DESIGN: The Aachen Smoking Cessation and Harm Reduction (ASCHR) trial is a prospective randomized controlled trial (RCT) on the benefits of telemedical-psychological support for smoking cessation in patients with PAD, funded by the "Innovation Fund" of the Joint Federal Committee in Germany. The trial aims to scientifically assess the efficacy, feasibility, acceptance, and efficiency of a multi-stage smoking cessation program, based on the recommendations of the German guideline for smoking cessation and tailored to patients with PAD, compared to a control group receiving no intervention. Central to the program is psychological counseling using motivational interviewing techniques, delivered telemedically via video consultations. The primary endpoint of the ASCHR trial is the smoking cessation rate after 8 months of intervention; a secondary endpoint evaluates sustained abstinence at a further 6-month follow-up. Smoking cessation is defined as a carbon monoxide level in exhaled air of less than 6 ppm. We hypothesize that the group receiving the multi-stage cessation program will achieve a cessation rate at least 10 percentage points higher than that of usual care. Anticipating a dropout rate of around 35%, the planned sample size is at least N = 1032 participants. DISCUSSION: Should the trial demonstrate significant positive outcomes, efforts should be made to integrate the program into routine care in Germany, potentially offering a promising basis for future smoking cessation support among PAD patients.
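For orientation, the arithmetic behind a sample size of this kind can be sketched with a standard two-proportion power calculation. The control-group cessation rate below is a hypothetical placeholder (the abstract does not state one); only the 10-percentage-point difference and the 35% dropout inflation come from the trial description, so the result will not reproduce the protocol's N = 1032.

```python
# Hedged sketch: two-proportion sample-size calculation with dropout inflation.
# The 10% usual-care cessation rate is a HYPOTHETICAL placeholder; only the
# 10-percentage-point difference and the 35% dropout come from the abstract.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.10            # assumed usual-care cessation rate (placeholder)
p_intervention = 0.20       # at least 10 percentage points higher
dropout = 0.35              # anticipated dropout rate

effect = proportion_effectsize(p_intervention, p_control)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
n_total = 2 * n_per_group / (1 - dropout)   # inflate for expected dropout
# Prints roughly 300 under these placeholder rates; the protocol's own
# assumptions lead to the larger N >= 1032 stated above.
print(f"approx. total N needed: {n_total:.0f}")
```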
Subjects
Smoking Cessation, Telemedicine, Humans, Smoking Cessation/methods, Smoking Cessation/psychology, Germany, Prospective Studies, Harm Reduction, Peripheral Arterial Disease/therapy, Peripheral Arterial Disease/psychology, Counseling/methods, Female, Male, Motivational Interviewing, Adult, Middle Aged
ABSTRACT
BACKGROUND AND OBJECTIVE: Despite recent performance advancements, deep learning models are not yet widely adopted in clinical practice. The intrinsic opacity of such systems is commonly cited as one major reason for this reluctance, which has motivated methods that aim to explain model functioning. Known limitations of feature-based explanations have led to increased interest in concept-based interpretability. Testing with Concept Activation Vectors (TCAV) employs human-understandable, abstract concepts to explain model behavior. The method has previously been applied in the medical domain to electronic health records, retinal fundus images, and magnetic resonance imaging. METHODS: We explore the use of TCAV for building interpretable models on physiological time series, using the example of abnormality detection in electroencephalography (EEG). For this purpose, we adopt the XceptionTime model, which is suitable for multi-channel physiological data of variable size. The model provides state-of-the-art performance on raw EEG data and is publicly available. We propose and test several ideas for concept definition through metadata mining, using additional labeled EEG data and extracting interpretable signal characteristics in the form of frequencies. By including our own hospital data with analogous labeling, we further evaluate the robustness of our approach. RESULTS: The tested concepts show a TCAV score distribution in line with clinical expectations, i.e., concepts known to have strong links with EEG pathologies (such as epileptiform discharges) received higher scores than neutral concepts (e.g., sex). The scores were consistent across the applied concept-generation strategies. CONCLUSIONS: TCAV has the potential to improve the interpretability of deep learning applied to multi-channel signals and to detect possible biases in the data. Still, further work on strategies for concept definition and validation on clinical physiological time series is needed to better understand how to extract clinically relevant information from the concept sensitivity scores.
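TCAV's core computation is compact enough to sketch: fit a linear classifier separating the network's activations for concept examples from those for random examples, take its normal vector as the concept activation vector (CAV), and report the fraction of inputs whose class logit increases along that direction. The sketch below uses random arrays as stand-ins for real activations and gradients, so it illustrates the mechanics only.

```python
# Hedged sketch of the TCAV score computation (Kim et al., 2018).
# Random arrays stand in for real layer activations and logit gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64                                          # width of the chosen layer
acts_concept = rng.normal(0.5, 1.0, (100, d))   # activations for concept examples
acts_random = rng.normal(0.0, 1.0, (100, d))    # activations for random examples

# 1) CAV = normal vector of a linear classifier separating the two sets.
X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# 2) TCAV score = fraction of class inputs whose directional derivative of
#    the class logit w.r.t. the layer activations is positive along the CAV.
grads = rng.normal(0.0, 1.0, (500, d))   # stand-in for per-input gradients
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")   # ~0.5 here, since everything is random
```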
ABSTRACT
INTRODUCTION: Process Mining (PM) has emerged as a transformative tool in healthcare, facilitating the enhancement of process models and the prediction of potential anomalies. However, the widespread application of PM in healthcare is hindered by the lack of structured event logs and by specific data privacy regulations. CONCEPT: This paper introduces a pipeline that converts routine healthcare data into PM-compatible event logs, leveraging the newly available permissions under the Health Data Utilization Act. IMPLEMENTATION: Our system exploits the Core Data Sets (CDS) provided by Data Integration Centers (DICs). It converts routine data into Fast Healthcare Interoperability Resources (FHIR), stores them locally, and subsequently transforms them into standardized PM event logs through FHIR queries applicable to any DIC. This facilitates the extraction of detailed, actionable insights across various healthcare settings without altering existing DIC infrastructures. LESSONS LEARNED: Challenges encountered include handling the variability and quality of the data and overcoming network and computational constraints. Our pipeline demonstrates how PM can be applied even in complex systems like healthcare by providing a standardized yet flexible and widely applicable analysis pipeline. The successful application emphasizes the critical role of tailored event log generation and data querying capabilities in enabling effective PM applications and, in turn, evidence-based improvements in healthcare processes.
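The core transformation, independent of any particular DIC, is to map FHIR resources onto the minimal event-log triple of case ID, activity, and timestamp. The sketch below does this for a few hand-written resource dictionaries; the resource fields follow FHIR R4 conventions, but the mapping rules are illustrative assumptions, not the pipeline's actual configuration.

```python
# Hedged sketch: flattening FHIR resources into a PM event log
# (case_id, activity, timestamp). The mapping rules are illustrative only.
import pandas as pd

fhir_resources = [
    {"resourceType": "Encounter", "subject": {"reference": "Patient/1"},
     "period": {"start": "2024-01-05T08:12:00"}, "class": {"code": "EMER"}},
    {"resourceType": "Procedure", "subject": {"reference": "Patient/1"},
     "performedDateTime": "2024-01-05T09:40:00", "code": {"text": "Appendectomy"}},
    {"resourceType": "Encounter", "subject": {"reference": "Patient/2"},
     "period": {"start": "2024-01-06T11:02:00"}, "class": {"code": "AMB"}},
]

def to_event(res):
    """Map one FHIR resource to an event-log row (illustrative rules)."""
    case = res["subject"]["reference"]
    if res["resourceType"] == "Encounter":
        return case, f"Encounter:{res['class']['code']}", res["period"]["start"]
    if res["resourceType"] == "Procedure":
        return case, res["code"]["text"], res["performedDateTime"]
    return None

rows = [e for e in map(to_event, fhir_resources) if e]
log = pd.DataFrame(rows, columns=["case_id", "activity", "timestamp"])
log["timestamp"] = pd.to_datetime(log["timestamp"])
log = log.sort_values(["case_id", "timestamp"])   # standard event-log ordering
print(log)
```

A frame in this shape can then be handed to standard PM tooling for process discovery and conformance checking.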
Subjects
Data Mining, Data Mining/methods, Medical Informatics, Humans, Electronic Health Records
ABSTRACT
INTRODUCTION: The configuration of electronic data capture (EDC) systems has a relevant impact on data quality in studies and patient registries. The objective was to develop a method to visualise the configuration of an EDC system in order to check the completeness and correctness of the data definition and rules. METHODS: Step 1: transformation of the EDC data model into a graphical model; step 2: checking the completeness and consistency of the data model; step 3: correction of identified findings. This process model was evaluated on the patient registry EpiReg. RESULTS: Using the graphical visualisation as a basis, 21 problems in the EDC configuration were identified, discussed with an interdisciplinary team, and corrected. CONCLUSION: The tested methodological approach enables an improvement in data quality by optimising the underlying EDC configuration.
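One way to realize steps 1 and 2 is to render forms, fields, and rules as a directed graph and scan it for structural findings such as rules that reference undefined fields. The toy data model below is invented for illustration; real EDC exports will differ in structure and naming.

```python
# Hedged sketch: structural check of an EDC configuration modeled as a graph.
# The toy data model is invented; real EDC exports look different.
import networkx as nx

g = nx.DiGraph()
g.add_edge("Form:Baseline", "Field:age")
g.add_edge("Form:Baseline", "Field:diagnosis")
g.add_edge("Rule:age_range", "Field:age")          # rule -> field it constrains
g.add_edge("Rule:dx_required", "Field:diagnosys")  # dangling reference (typo)

# A field referenced by a rule but never defined on any form is a finding.
form_fields = {v for u, v in g.edges if u.startswith("Form:")}
for u, v in g.edges:
    if u.startswith("Rule:") and v not in form_fields:
        print(f"finding: {u} references undefined {v}")
```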
Subjects
Data Accuracy, Electronic Health Records, Registries, Humans
ABSTRACT
We developed and validated a statistical prediction model using 2.5 million electronic health records from 24 German emergency departments (EDs) to estimate treatment timeliness at triage. The model's moderate fit and reliance on interoperable, routine data suggest its potential for implementation in ED crowding management.
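As a hedged illustration of the modeling task (not the study's actual model or variables), a binary "treated in time" outcome can be regressed on routinely available triage features:

```python
# Hedged sketch: predicting treatment timeliness from routine triage data.
# Features and data are invented stand-ins for the registry variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(1, 6, n),    # triage level (1 = most urgent)
    rng.integers(0, 24, n),   # hour of arrival
    rng.poisson(30, n),       # patients currently in the ED (crowding proxy)
])
# Synthetic ground truth: urgency and crowding drive timeliness.
p = 1 / (1 + np.exp(-(2.0 - 0.4 * X[:, 0] - 0.03 * X[:, 2])))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
```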
Subjects
Electronic Health Records, Hospital Emergency Service, Triage, Humans, Germany, Statistical Models, Crowding
ABSTRACT
OBJECTIVE: To develop a generic model that visualizes a registry's potential for use and further development, in order to assess the suitability of the registry for a specific purpose. METHODS: A multi-stage community approach. RESULTS: The maturity model comprises 9 categories with 105 items. The purpose of the registry is mapped via potential usage dimensions. CONCLUSION: The appropriateness of the requirements in relation to the registry's purposes is important for acceptance.
Subjects
Registries, Humans, Electronic Health Records, Organizational Models
ABSTRACT
The growing number of genes identified in relation to epilepsy represents a major breakthrough in diagnosis and treatment, but experts face the challenge of efficiently accessing and consolidating the vast amount of genetic data available. Therefore, we present the process of transforming data from different sources and formats into an Entity-Attribute-Value (EAV) model database. Combined with the use of standard coding systems, this approach will provide a scalable and adaptable database to present the data in a comprehensive way to experts via a dashboard.
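The EAV layout itself is easy to demonstrate: one row per (entity, attribute, value) triple, so heterogeneous gene records share a single table and new attributes require no schema change. The schema and sample rows below are illustrative, not the project's actual database:

```python
# Hedged sketch of an Entity-Attribute-Value (EAV) store for gene records.
# Schema and sample data are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE eav (
        entity    TEXT NOT NULL,   -- e.g. a gene symbol
        attribute TEXT NOT NULL,   -- e.g. a coded property
        value     TEXT NOT NULL
    )
""")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("SCN1A", "phenotype", "Dravet syndrome"),
    ("SCN1A", "inheritance", "autosomal dominant"),
    ("KCNQ2", "phenotype", "neonatal epileptic encephalopathy"),
])

# New attributes need no schema change -- just more rows:
con.execute("INSERT INTO eav VALUES ('SCN1A', 'treatment_note', "
            "'avoid sodium channel blockers')")

for row in con.execute("SELECT attribute, value FROM eav WHERE entity = 'SCN1A'"):
    print(row)
```

In practice, the attribute column would hold codes from standard coding systems rather than free text, which is what keeps the design scalable and adaptable.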
Subjects
Epilepsy, Epilepsy/genetics, Epilepsy/diagnosis, Epilepsy/drug therapy, Humans, Genetic Databases
ABSTRACT
BACKGROUND AND OBJECTIVES: The increasing amount of open-access medical data provides new opportunities to gain clinically relevant information without recruiting new patients. We developed an open-source computational pipeline that utilizes the publicly available electroencephalographic (EEG) data of the Temple University Hospital to identify EEG profiles associated with the use of neuroactive medications. It facilitates access to the data and ensures consistency in data processing and analysis, thus reducing the risk of errors and producing comparable, reproducible results. Using this pipeline, we analyze the influence of common neuroactive medications on brain activity. METHODS: The pipeline is constructed from easily controlled modules. The user defines the medications of interest and the comparison groups. The data is downloaded and preprocessed, spectral features are extracted, and a statistical group comparison with visualization through a topographic EEG map is performed. The pipeline is adjustable to answer a variety of research questions. Here, the effects of carbamazepine and risperidone were statistically compared with control data and with other medications from the same classes (anticonvulsants and antipsychotics). RESULTS: The comparison between carbamazepine and the control group showed an increase in absolute and relative power for delta and theta, and a decrease in relative power for alpha, beta, and gamma. Compared to other antiseizure medications, carbamazepine showed an increase in alpha and theta for absolute powers, and, for relative powers, an increase in alpha and theta and a decrease in gamma and delta. Risperidone compared with the control group showed a decrease in absolute and relative power for alpha and beta and an increase in theta for relative power. Compared to other antipsychotic medications, risperidone showed a decrease in delta for absolute powers. These results show good agreement with state-of-the-art research. The database allows the creation of large groups for many different medications. Additionally, it provides a collection of records labeled as "normal" after expert assessment, which is convenient for creating control groups. CONCLUSIONS: The pipeline allows fast testing of different hypotheses regarding links between medications and the EEG spectrum through ecological use of readily available data. It can be utilized to make informed decisions about the design of new clinical studies.
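The spectral-feature step of such a pipeline is easy to illustrate: estimate the power spectral density per channel and integrate it over the canonical EEG bands. The sketch below runs on synthetic data; the band boundaries are common conventions, and the pipeline's actual parameters may differ.

```python
# Hedged sketch: EEG band-power features via Welch's method (synthetic data).
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic channel: a 10 Hz alpha rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
abs_power = {name: trapezoid(psd[(freqs >= lo) & (freqs < hi)],
                             freqs[(freqs >= lo) & (freqs < hi)])
             for name, (lo, hi) in bands.items()}
total = sum(abs_power.values())
rel_power = {name: p / total for name, p in abs_power.items()}
print({k: round(v, 3) for k, v in rel_power.items()})  # alpha should dominate
```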
Subjects
Data Mining, Electroencephalography, Humans, Electroencephalography/methods, Data Mining/methods, Carbamazepine/therapeutic use, Carbamazepine/pharmacology, Risperidone, Antipsychotic Agents/pharmacology, Anticonvulsants/pharmacology, Anticonvulsants/therapeutic use, Brain/drug effects
ABSTRACT
BACKGROUND: The use of triage systems such as the Manchester Triage System (MTS) is a standard procedure to determine the sequence of treatment in emergency departments (EDs). When the MTS is used, time targets for treatment are determined and commonly displayed to ED staff in the ED information system (EDIS). Using measurements as targets has been associated with a decline in meeting those targets. OBJECTIVE: This study investigated the impact of displaying time targets for treatment to physicians on processing times in the ED. METHODS: We analyzed the effects of displaying time targets to ED staff on waiting times in a prospective crossover study during the introduction of a new EDIS in a large regional hospital in Germany. The old information system version used a module that showed the time target determined by the MTS, while the new system version used a priority list instead. The evaluation was based on 35,167 routinely collected electronic health records from the preintervention period and 10,655 records from the postintervention period. Electronic health records were extracted from the EDIS, and data were analyzed using descriptive statistics and generalized additive models. We evaluated the effects of the intervention on waiting times and on the odds of achieving timely treatment according to the time targets set by the MTS. RESULTS: The average ED length of stay and waiting times increased when the EDIS that did not display time targets was used (time from admission to treatment: preintervention phase, median 15, IQR 6-39 min; postintervention phase, median 11, IQR 5-23 min). However, severe cases with high acuity (as indicated by the triage score) benefited from lower waiting times (0.15 times as high as in the preintervention period for MTS level 1, and 0.49 times as high for MTS level 2). Furthermore, these patients were less likely to receive delayed treatment, and we observed reduced odds of late treatment when crowding occurred. CONCLUSIONS: Our results suggest that it is beneficial to use a priority list instead of displaying time targets to ED personnel, as such targets may create false incentives. Our work highlights that working better is not the same as working faster.
Subjects
Cross-Over Studies, Hospital Emergency Service, Triage, Triage/methods, Triage/statistics & numerical data, Humans, Hospital Emergency Service/statistics & numerical data, Prospective Studies, Female, Male, Time Factors, Germany, Middle Aged, Adult, Aged
ABSTRACT
BACKGROUND AND OBJECTIVE: Cell segmentation in bright-field histological slides is a crucial topic in medical image analysis. Access to accurate segmentation allows researchers to examine the relationship between cellular morphology and clinical observations. Unfortunately, most segmentation methods known today are limited to nuclei and cannot segment the cytoplasm. METHODS: We present a new network architecture, Cyto R-CNN, that can accurately segment whole cells (both nucleus and cytoplasm) in bright-field images. We also present a new dataset, CytoNuke, consisting of several thousand manual annotations of head and neck squamous cell carcinoma cells. Using this dataset, we compared the performance of Cyto R-CNN to other popular cell segmentation algorithms, including QuPath's built-in algorithm, StarDist, Cellpose, and a multi-scale Attention Deeplabv3+. To evaluate segmentation performance, we calculated AP50 and AP75 and measured 17 morphological and staining-related features for all detected cells. We compared these measurements to the gold standard of manual segmentation using the Kolmogorov-Smirnov test. RESULTS: Cyto R-CNN achieved an AP50 of 58.65% and an AP75 of 11.56% in whole-cell segmentation, outperforming all other methods (QuPath 19.46%/0.91%; StarDist 45.33%/2.32%; Cellpose 31.85%/5.61%; Deeplabv3+ 3.97%/1.01%). Cell features derived from Cyto R-CNN showed the best agreement with the gold standard (D̄ = 0.15), outperforming QuPath (D̄ = 0.22), StarDist (D̄ = 0.25), Cellpose (D̄ = 0.23), and Deeplabv3+ (D̄ = 0.33). CONCLUSION: Our newly proposed Cyto R-CNN architecture outperforms current algorithms in whole-cell segmentation while providing more reliable cell measurements than any other model. This could improve digital pathology workflows, potentially leading to improved diagnosis. Moreover, our published dataset can be used to develop further models in the future.
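The feature-agreement metric reported above (D̄) is the mean Kolmogorov-Smirnov statistic across the measured features. A minimal version of that comparison, on synthetic feature distributions standing in for the real measurements, looks like this:

```python
# Hedged sketch: comparing per-cell feature distributions against a manual
# gold standard via the Kolmogorov-Smirnov statistic, averaged over features.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
features = ["area", "perimeter", "mean_intensity"]  # stand-ins for the 17 features

gold = {f: rng.normal(100, 10, 500) for f in features}    # manual segmentation
model = {f: rng.normal(102, 11, 480) for f in features}   # model output

d_values = [ks_2samp(gold[f], model[f]).statistic for f in features]
print(f"mean KS distance = {np.mean(d_values):.2f}")  # lower = better agreement
```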
Subjects
Algorithms, Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Computer-Assisted Image Processing/methods, Cell Nucleus, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/pathology, Head and Neck Squamous Cell Carcinoma/diagnostic imaging, Head and Neck Squamous Cell Carcinoma/pathology, Cytoplasm, Reproducibility of Results, Squamous Cell Carcinoma/diagnostic imaging, Squamous Cell Carcinoma/pathology
ABSTRACT
BACKGROUND: Early detection of oral cancer (OC) or its precursors is the most effective measure to improve outcomes. The reasons these lesions are missed on conventional oral examination (COE), and possible countermeasures, are still unclear. METHODS: In this randomized controlled trial, we investigated the effects of a standardized oral examination (SOE) compared to COE. Forty-nine dentists, specialists, and dental students wearing an eye tracker had to detect 10 simulated oral lesions drawn into a volunteer's oral cavity. RESULTS: SOE achieved a higher detection rate, at 85.4% sensitivity compared to 78.8% in the control group (p = 0.017), owing to greater completeness of the examination (p < 0.001). The detection rate correlated with examination duration (p = 0.002). CONCLUSIONS: A standardized approach can improve the systematics, and thereby the detection rates, of oral examinations; it should take at least 5 min. Perceptual and cognitive errors and improper technique cause oral lesions to be missed. Wide implementation of SOE could be an additional strategy to enhance early detection of OC.
ABSTRACT
OBJECTIVE: The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS: An interdisciplinary team reflecting the various specialties involved in head and neck oncology developed a porcine pseudotumor model of the tongue on which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants, each of whom resected four pseudotumors on a tongue, for a total of 40 resected pseudotumors. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Resection margins (minimum and maximum) were assessed macroscopically and compared, alongside self-assessed margins and resection times, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS: The model haptically resembles OC (3.0 ± 0.5; 4-point Likert scale), can be visualized with medical imaging, and can be evaluated macroscopically immediately after resection, providing direct feedback. Although participants tended to agree that they had resected the pseudotumor with an ideal safety margin of 10 mm (3.2 ± 0.4), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. At the same time, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection. Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, who could see it being implemented in training (3.7 ± 0.5). CONCLUSION: The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is well suited for training in OC biopsy and resection and could be incorporated into dental, medical, or oncologic surgery curricula. Future studies should evaluate the long-term training effects of this model and its potential impact on patient outcomes.
Subjects
Margins of Excision, Mouth Neoplasms, Animals, Humans, Biopsy, Cadaver, Head, Mouth Neoplasms/surgery, Mouth Neoplasms/pathology, Swine
ABSTRACT
PURPOSE: Efficient and precise surgical skills are essential for ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, the same cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery. METHODS: We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking yielded an accuracy and F1 score of up to 75% and 72%, respectively, which is similar to the performance achieved by the individual raters in terms of inter-rater agreement and rater variability. We present the first end-to-end trained approach for skill assessment in open surgery training. CONCLUSION: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based evaluation of classic surgical techniques, with the potential to improve patients' surgical outcomes.
Subjects
Clinical Competence, Suture Techniques, Video Recording, Humans, Suture Techniques/education, Benchmarking
ABSTRACT
BACKGROUND: Autopsies have long been considered the gold standard for quality assurance in medicine, yet their significance in basic research has been relatively overlooked. The COVID-19 pandemic underscored the potential of autopsies for understanding pathophysiology, therapy, and disease management. In response, the German Registry for COVID-19 Autopsies (DeRegCOVID) was established in April 2020, followed by the DEFEAT PANDEMIcs consortium (2020-2021), which evolved into the National Autopsy Network (NATON). DEREGCOVID: DeRegCOVID collected and analyzed autopsy data from COVID-19 decedents in Germany over three years, making it the largest national multicenter autopsy study. Its results identified crucial factors in severe and fatal cases, such as pulmonary vascular thromboemboli and the intricate virus-immune interplay. DeRegCOVID served as a central hub for data analysis, research inquiries, and public communication, playing a vital role in informing policy changes and responding to health authorities. NATON: Initiated by the Network University Medicine (NUM), NATON emerged as a sustainable infrastructure for autopsy-based research. NATON aims to provide a data and method platform, fostering collaboration across pathology, neuropathology, and legal medicine. Its structure supports a swift feedback loop between research, patient care, and pandemic management. CONCLUSION: DeRegCOVID has contributed significantly to the understanding of COVID-19 pathophysiology, leading to the establishment of NATON. The National Autopsy Registry (NAREG), as its successor, takes a modular and adaptable approach, aiming to enhance autopsy-based research collaboration nationally and, potentially, internationally.
Subjects
Autopsy, COVID-19, Registries, Humans, COVID-19/epidemiology, COVID-19/pathology, Germany/epidemiology, Pandemics, SARS-CoV-2
ABSTRACT
BACKGROUND: The radial forearm free flap (RFFF) serves as a workhorse for a variety of reconstructions. Although there are several surgical techniques for donor site closure after RFFF raising, the most common are closure with a split-thickness skin graft (STSG) or a full-thickness skin graft (FTSG). Closure can result in wound complications and in functional and aesthetic compromise of the forearm and hand. The aim of the planned systematic review and meta-analysis is to compare the wound-, function-, and aesthetics-related outcomes associated with FTSG and STSG in RFFF donor site closure. METHODS: A systematic review and meta-analysis will be conducted, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Electronic databases and platforms (PubMed, Embase, Scopus, Web of Science, the Cochrane Central Register of Controlled Trials (CENTRAL), and the China National Knowledge Infrastructure (CNKI)) and clinical trial registries (ClinicalTrials.gov, the German Clinical Trials Register, the ISRCTN registry, and the International Clinical Trials Registry Platform) will be searched using predefined search terms until 15 January 2024. The search will be rerun within 12 months before publication of the review. Eligible studies should report on the occurrence of donor site complications after raising an RFFF and closure of the defect. Included closure techniques are those using full-thickness or split-thickness skin grafts; primary wound closure without a skin graft is excluded. Outcomes are considered wound-, function-, and aesthetics-related. Randomized controlled trials (RCTs) and prospective and retrospective comparative cohort studies will be included. Case-control studies, studies without a control group, animal studies, and cadaveric studies will be excluded. Screening will be performed in a blinded fashion by two reviewers per study, with a third reviewer resolving discrepancies. The risk of bias in the original studies will be assessed using the ROBINS-I and RoB 2 tools. Data synthesis will be done using Review Manager (RevMan) 5.4.1. If appropriate, a meta-analysis will be conducted. Between-study variability will be assessed using the I² index. If necessary, R will be used. The quality of evidence for outcomes will be assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. DISCUSSION: This study's findings may help us understand the complication rates of both closure techniques and may have important implications for future guidelines on RFFF donor site management. If the available data are limited and several questions remain unanswered, additional comparative studies will be needed. SYSTEMATIC REVIEW REGISTRATION: The protocol was developed in line with the PRISMA-P extension for protocols and was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 17 September 2023 (registration number CRD42023351903).
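The I² index mentioned for between-study variability has a closed form: with Cochran's Q over k studies, I² = max(0, (Q − (k − 1))/Q). A small self-contained computation, with invented effect sizes purely for illustration, looks like this:

```python
# Hedged sketch: fixed-effect pooling and the I^2 heterogeneity index.
# Effect sizes and variances are invented for illustration only.
import numpy as np

effects = np.array([0.30, 0.10, 0.45, 0.25])    # per-study effect estimates
variances = np.array([0.02, 0.03, 0.04, 0.02])  # their sampling variances

w = 1 / variances                               # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - pooled) ** 2)         # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100               # % variability beyond chance
print(f"pooled effect = {pooled:.3f}, Q = {q:.2f}, I^2 = {i2:.1f}%")
```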
Subjects
Forearm, Free Tissue Flaps, Skin Transplantation, Systematic Reviews as Topic, Humans, Skin Transplantation/methods, Forearm/surgery, Plastic Surgery Procedures/methods, Meta-Analysis as Topic, Transplant Donor Site/surgery, Wound Closure Techniques, Wound Healing
ABSTRACT
BACKGROUND: As part of the German government's digitization initiative, the paper-based documentation still present in many intensive care units is to be replaced by digital patient data management systems (PDMS). To simplify the implementation of such systems, standards for the basic functionalities that should be part of any baseline PDMS configuration would be of great value. PURPOSE: This paper describes functional requirements for PDMS in several categories. METHODS: The authors defined criteria for standardized data documentation and classified the derived functional requirements into two priority categories. RESULTS: Overall, general technical requirements, functionalities for intensive care patient care, and additional PDMS functionalities were defined and prioritized. DISCUSSION: Using this paper as a starting point for a discussion of basic PDMS functionalities, the plan is to develop, and obtain consensus on, definitive standards together with representatives of medical societies, medical informatics, and PDMS manufacturers.
Subjects
Critical Care, Data Management, Humans, Intensive Care Units, Documentation
ABSTRACT
In the field of neuroscience, a considerable number of commercial data acquisition and processing solutions rely on proprietary formats for data storage. This often leads to data being locked up in formats that are accessible only with the original software, which can cause interoperability problems. Indeed, data access can be lost entirely if the software becomes unsupported, altered, or otherwise unavailable. To ensure FAIR data management, strategies should be established to enable long-term, independent, and unified access to data in proprietary formats. In this work, we present PyDapsys, a solution providing open access to data acquired with the proprietary recording system DAPSYS. PyDapsys enables us to open the recorded files directly in Python and to save them as NIX files, a format commonly used for open research in the electrophysiology domain. PyDapsys thus secures efficient and open access to existing and prospective data. The manuscript demonstrates the complete process of reverse engineering a proprietary electrophysiological format, using the example of microneurography data collected for studies on pain and itch signaling in peripheral neural fibers.
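The final persistence step, writing a recording into the NIX format, can be sketched with the nixio Python package. The block and array names below are illustrative, and PyDapsys' actual export layout may differ.

```python
# Hedged sketch: persisting a signal as a NIX file with the nixio package.
# Names and layout are illustrative; PyDapsys' real export may differ.
import numpy as np
import nixio

fs = 10_000.0                              # assumed sampling rate in Hz
signal = np.random.default_rng(0).standard_normal(5000)  # stand-in trace

nf = nixio.File.open("recording.nix", nixio.FileMode.Overwrite)
block = nf.create_block("recording_1", "microneurography.session")
da = block.create_data_array("nerve_signal", "electrophysiology.trace",
                             data=signal)
da.unit = "mV"
dim = da.append_sampled_dimension(1.0 / fs)  # evenly sampled time axis
dim.unit = "s"
dim.label = "time"
nf.close()
```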
ABSTRACT
Metadata is essential for handling medical data according to FAIR principles. Standards are well-established for many types of electrophysiological methods but are still lacking for microneurographic recordings of peripheral sensory nerve fibers in humans. Developing a new concept to enhance laboratory workflows is a complex process. We propose a standard for structuring and storing microneurography metadata based on odML and odML-tables. Further, we present an extension to the odML-tables GUI that enables user-friendly search functionality of the database. With our open-source repository, we encourage other microneurography labs to incorporate odML-based metadata into their experimental routines.
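To illustrate the kind of structure an odML document imposes (hierarchical sections holding typed key-value properties) without relying on a specific library version, here is a plain-Python mirror of that hierarchy; the section and property names are invented examples for a microneurography recording.

```python
# Hedged sketch: an odML-like section/property hierarchy as plain Python,
# serialized to JSON. Section and property names are invented examples.
import json

metadata = {
    "Recording": {                        # section
        "type": "microneurography",
        "properties": {"date": "2024-03-01", "protocol": "itch_stimulation"},
        "Subject": {                      # nested section
            "type": "person",
            "properties": {"age": 34, "sex": "F"},
        },
        "Electrode": {
            "type": "hardware",
            "properties": {"impedance_megaohm": 1.2, "site": "peroneal nerve"},
        },
    }
}
print(json.dumps(metadata, indent=2))
```

The odML library expresses the same tree with Document, Section, and Property objects, and odML-tables adds tabular views and, with the GUI extension described above, database search on top.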
Subjects
Interior Design and Furnishings, Metadata, Humans, Factual Databases, Laboratories, Workflow
ABSTRACT
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
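Of the three algorithms, R-R Interval Correlation is the easiest to sketch: reduce each recording to its sequence of inter-beat intervals and slide one sequence over the other to find the beat offset with maximal correlation. The implementation below is a minimal, assumed version of that idea, not the authors' reference code.

```python
# Hedged sketch: aligning two ECG recordings via R-R interval correlation.
# A minimal, assumed version of the idea -- not the authors' reference code.
import numpy as np

def rr_offset(rpeaks_a, rpeaks_b):
    """Beat offset of recording b relative to a maximizing R-R correlation."""
    rr_a, rr_b = np.diff(rpeaks_a), np.diff(rpeaks_b)
    n = min(len(rr_a), len(rr_b)) // 2          # window of intervals to compare
    best, best_lag = -np.inf, 0
    for lag in range(len(rr_a) - n):
        r = np.corrcoef(rr_a[lag:lag + n], rr_b[:n])[0, 1]
        if r > best:
            best, best_lag = r, lag
    return best_lag, best

# Demo: device B starts 5 beats after device A, with slight timing jitter.
rng = np.random.default_rng(0)
peaks_a = np.cumsum(rng.normal(0.8, 0.05, 200))    # R-peak times, ~75 bpm
peaks_b = peaks_a[5:] + rng.normal(0, 0.002, 195)  # offset plus jitter
lag, corr = rr_offset(peaks_a, peaks_b)
print(f"estimated offset: {lag} beats (r = {corr:.3f})")  # expect 5
```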
Subjects
Data Accuracy, Electrocardiography, Algorithms, Time Factors
ABSTRACT
Despite developments in wearable devices for detecting various bio-signals, continuous measurement of breathing rate (BR) remains a challenge. This work presents an early proof of concept that employs a wearable patch to estimate BR. We propose combining techniques for calculating BR from electrocardiogram (ECG) and accelerometer (ACC) signals, applying decision rules based on the signal-to-noise ratio (SNR) to fuse the estimates for improved accuracy.
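The fusion idea can be sketched simply: compute a BR estimate and an SNR per modality, discard estimates below an SNR floor, and SNR-weight the rest. The threshold and weighting below are illustrative assumptions, not the study's tuned rules.

```python
# Hedged sketch: fusing ECG- and accelerometer-derived breathing-rate (BR)
# estimates with SNR-based decision rules. Threshold/weights are illustrative.
def fuse_br(br_ecg, snr_ecg, br_acc, snr_acc, min_snr=2.0):
    """Return a fused BR estimate, or None if neither signal is usable."""
    usable = [(br, snr) for br, snr in [(br_ecg, snr_ecg), (br_acc, snr_acc)]
              if snr >= min_snr]
    if not usable:
        return None                        # reject the window entirely
    total = sum(snr for _, snr in usable)
    return sum(br * snr for br, snr in usable) / total  # SNR-weighted mean

print(fuse_br(br_ecg=15.2, snr_ecg=4.0, br_acc=16.0, snr_acc=2.5))  # ~15.5
print(fuse_br(br_ecg=14.8, snr_ecg=3.1, br_acc=22.0, snr_acc=0.4))  # ECG only
```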