1.
BMC Neurol; 23(1): 2, 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36597038

ABSTRACT

BACKGROUND: Although of high individual and socioeconomic relevance, a reliable prediction model for the prognosis of juvenile stroke (age 18-55 years) is missing. The study presented in this protocol therefore aims to prospectively validate the discriminatory power of a prediction score for the 3-month functional outcome after juvenile stroke or transient ischemic attack (TIA) that was derived from an independent retrospective study using standard clinical workup data. METHODS: PREDICT-Juvenile-Stroke is a multi-centre (n = 4) prospective observational cohort study collecting standard clinical workup data and data on treatment success at 3 months after acute ischemic stroke or TIA, with the aim of validating a new prediction score for juvenile stroke. The prediction score was developed from a single-centre retrospective analysis of 340 juvenile stroke patients. The score determines the patient's individual probability of treatment success, defined as a modified Rankin Scale (mRS) score of 0-2 or return to the pre-stroke baseline mRS 3 months after stroke or TIA. This probability will be compared to the observed clinical outcome at 3 months using the area under the receiver operating characteristic curve. The primary objective is to validate the clinical potential of the new prediction score for a favourable outcome 3 months after juvenile stroke or TIA. Secondary objectives are to determine to what extent predictive factors in juvenile stroke or TIA patients differ from those in older patients and to determine the predictive accuracy of the juvenile stroke prediction score for other clinical and paraclinical endpoints. A minimum of 430 juvenile patients (< 55 years) with acute ischemic stroke or TIA, and the same number of older patients, will be enrolled in the prospective validation study.
DISCUSSION: The juvenile stroke prediction score has the potential to enable personalised counselling, provision of appropriate information regarding the prognosis, and identification of patients who benefit from specific treatments. TRIAL REGISTRATION: The study was registered at https://drks.de on March 31, 2022 (DRKS00024407).
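The validation step described above compares the score's predicted probabilities against observed 3-month outcomes using the area under the receiver operating characteristic curve (AUC). A minimal sketch of that statistic on invented data (the score's actual predictors and coefficients are not given in the abstract):

```python
# Sketch: AUC via the Mann-Whitney formulation - the probability that a
# randomly chosen favourable-outcome patient received a higher predicted
# probability than a randomly chosen unfavourable-outcome patient.
# All numbers below are hypothetical, not taken from the study.

def auc(scores_pos, scores_neg):
    """Compare every favourable/unfavourable pair; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Predicted probabilities of treatment success (mRS 0-2 at 3 months):
favourable = [0.9, 0.8, 0.75, 0.6]   # patients who reached the endpoint
unfavourable = [0.7, 0.4, 0.3]       # patients who did not

print(round(auc(favourable, unfavourable), 3))
```

An AUC of 0.5 indicates no discrimination; values approaching 1.0 indicate that the score reliably ranks favourable-outcome patients above unfavourable ones.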


Subjects
Ischemic Attack, Transient, Ischemic Stroke, Stroke, Humans, Young Adult, Aged, Ischemic Attack, Transient/diagnosis, Ischemic Attack, Transient/epidemiology, Ischemic Attack, Transient/complications, Ischemic Stroke/complications, Retrospective Studies, Stroke/diagnosis, Stroke/epidemiology, Stroke/complications, Prognosis, Predictive Value of Tests, Observational Studies as Topic
2.
Emerg Infect Dis; 28(3): 572-581, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35195515

ABSTRACT

Hospital staff are at high risk for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection during the coronavirus disease (COVID-19) pandemic. This cross-sectional study aimed to determine the prevalence of SARS-CoV-2 infection in hospital staff at the University Hospital rechts der Isar in Munich, Germany, and to identify modulating factors. Overall seroprevalence of SARS-CoV-2 IgG in 4,554 participants was 2.4%. Staff engaged in direct patient care, including those working in COVID-19 units, had a similar probability of being seropositive as non-patient-facing staff. An increased probability of infection was observed in staff reporting interactions with SARS-CoV-2-infected coworkers or private contacts, or exposure to COVID-19 patients without appropriate personal protective equipment. Analysis of spatiotemporal trajectories showed that distinct hotspots for SARS-CoV-2-positive staff and patients only partially overlap. Patient-facing work in a healthcare facility during the SARS-CoV-2 pandemic might be safe as long as adequate personal protective equipment is used and infection prevention practices are followed inside and outside the hospital.
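A point estimate like the reported 2.4% seroprevalence is usually accompanied by a confidence interval. The sketch below computes a 95% Wilson interval, assuming roughly 109 positives among the 4,554 participants (the exact positive count is not stated in the abstract):

```python
# Sketch: 95% Wilson score confidence interval for a proportion.
# The count of 109 positives is an assumption consistent with 2.4% of 4,554.
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval; better behaved than the normal
    approximation for proportions near 0 or 1."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

lo, hi = wilson_interval(109, 4554)
print(f"{lo:.3%} - {hi:.3%}")
```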


Subjects
COVID-19, SARS-CoV-2, Cross-Sectional Studies, Germany/epidemiology, Health Personnel, Hospitals, University, Humans, Immunoglobulin G, Infection Control, Personnel, Hospital, Prevalence, Seroepidemiologic Studies
3.
Eur J Public Health; 32(3): 422-428, 2022 Jun 01.
Article in English | MEDLINE | ID: mdl-35165720

ABSTRACT

BACKGROUND: Heterozygous familial hypercholesterolemia (FH) is the most frequent monogenic disorder, with an estimated prevalence of 1:250 in the general population. Diagnosis during childhood enables early initiation of preventive measures, reducing the risk of severe subsequent atherosclerotic manifestations. Nevertheless, population-based screening programs for FH are scarce. METHODS: In the VRONI study, children aged 5-14 years in Bavaria are invited to participate in an FH screening program during regular pediatric visits. The screening is based on low-density lipoprotein cholesterol (LDL-C) measurements from capillary blood. If the value exceeds 130 mg/dl (3.34 mmol/l), i.e. the expected 95th percentile in this age group, subsequent molecular genetic analysis for FH is performed. Children with FH pathogenic variants enter a registry and are treated by specialized pediatricians. Furthermore, qualified training centers offer FH-focused training courses to affected families. For first-degree relatives, reverse cascade screening is recommended to identify and treat affected family members. RESULTS: Implementation of VRONI required intensive preparations to address ethical, educational, data-safety, legal and organizational aspects, which are outlined in this article. Recruitment started in early 2021; within the first months, more than 380 pediatricians screened over 5,200 children. Approximately 50,000 children are expected to be enrolled in the VRONI study by 2024. CONCLUSIONS: VRONI aims to test the feasibility of population-based screening for FH in children in Bavaria, intending to set the stage for a nationwide FH screening infrastructure. Furthermore, we aim to validate genetic variants of unclear significance, detect novel causative mutations and contribute to polygenic risk indices (DRKS00022140; August 2020).
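The two-stage screening logic described above (capillary LDL-C above the 130 mg/dl cut-off triggers molecular genetic testing) can be sketched as a simple decision rule; the function name and return strings here are invented for illustration and are not part of the study's software:

```python
# Sketch of the VRONI screening decision rule, not the study's actual code.
# Cut-off from the abstract: 130 mg/dl, the expected 95th percentile
# for children aged 5-14.

LDL_CUTOFF_MG_DL = 130.0

def next_step(ldl_mg_dl):
    """Return the follow-up action for a capillary LDL-C measurement."""
    if ldl_mg_dl > LDL_CUTOFF_MG_DL:
        return "molecular genetic analysis for FH"
    return "no further FH work-up"

print(next_step(154.0))
print(next_step(102.0))
```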


Subjects
Hyperlipoproteinemia Type II, Aged, 80 and over, Child, Early Diagnosis, Humans, Hyperlipoproteinemia Type II/diagnosis, Hyperlipoproteinemia Type II/epidemiology, Hyperlipoproteinemia Type II/genetics, Mass Screening
4.
Dis Esophagus; 32(8), 2019 Aug 01.
Article in English | MEDLINE | ID: mdl-31329831

ABSTRACT

Risk stratification of patients with Barrett's esophagus (BE) to prevent the development of esophageal adenocarcinoma (EAC) is an unsolved task. The incidence of EAC and BE is increasing, and individual patients' risk of progression remains unknown. BarrettNET is an ongoing multicenter prospective cohort study initiated to identify and validate molecular and clinical biomarkers that allow a more personalized surveillance strategy for patients with BE. For BarrettNET, participants are recruited in 20 study centers throughout Germany and followed for progression to dysplasia (low-grade or high-grade dysplasia) or EAC for >10 years. The study instruments comprise self-administered epidemiological questionnaires (covering demographics, lifestyle factors, and health) as well as biological specimens, i.e., blood-based samples, esophageal tissue biopsies, and feces and saliva samples. Sample collection is repeated at follow-up visits according to each participant's individual surveillance plan. Standardized collection and processing of the specimens ensure high sample quality. Inclusion, epidemiological data, and pathological disease status are documented in a mobile-accessible database. Currently the BarrettNET registry includes 560 participants (23.1% women and 76.9% men, aged 22-92 years) with a median follow-up of 951 days. Both the design and the size of BarrettNET offer the advantage of answering research questions regarding potential causes of disease progression from BE to EAC. Here all the integrated methods and materials of BarrettNET are presented and reviewed to introduce this valuable German registry.


Subjects
Adenocarcinoma/diagnosis, Barrett Esophagus/complications, Early Detection of Cancer/methods, Esophageal Neoplasms/diagnosis, Population Surveillance/methods, Risk Assessment/methods, Adenocarcinoma/etiology, Adult, Aged, Aged, 80 and over, Biomarkers/analysis, Clinical Decision Rules, Disease Progression, Esophageal Neoplasms/etiology, Female, Germany, Humans, Male, Middle Aged, Prospective Studies, Registries, Risk Factors, Young Adult
5.
BMC Med Inform Decis Mak; 19(1): 178, 2019 Sep 04.
Article in English | MEDLINE | ID: mdl-31484555

ABSTRACT

BACKGROUND: The collection of data and biospecimens which characterize patients and probands in depth is a core element of modern biomedical research. Such data must be considered highly sensitive and need to be protected from unauthorized use and re-identification. In this context, laws, regulations, guidelines and best practices often recommend or mandate pseudonymization, which means that directly identifying data of subjects (e.g. names and addresses) are stored separately from the data primarily needed for scientific analyses. DISCUSSION: When (authorized) re-identification of subjects is not an exceptional but a common procedure, e.g. due to longitudinal data collection, implementing pseudonymization can significantly increase the complexity of software solutions. For example, data stored in distributed databases need to be dynamically combined with each other, which requires additional interfaces for communication between the various subsystems. This increased complexity may lead to new attack vectors for intruders. Obviously, this is in contrast to the objective of improving data protection. What is lacking is a standardized process for evaluating and reporting risks, threats and countermeasures, which can be used to test whether integrating pseudonymization methods into data collection systems actually improves upon the degree of protection provided by system designs that simply follow common IT security best practices and implement fine-grained role-based access control models. To demonstrate that the methods used to describe systems employing pseudonymized data management are currently heterogeneous and ad hoc, we examined the extent to which twelve recent studies address each of the six basic security properties defined by the International Organization for Standardization (ISO) 27000 standard series. We show inconsistencies across the studies, with most of them failing to mention one or more security properties.
CONCLUSION: We discuss the degree of privacy protection provided by implementing pseudonymization into research data collection processes. We conclude that (1) more research is needed on the interplay of pseudonymity, information security and data protection, (2) problem-specific guidelines for evaluating and reporting risks, threats and countermeasures should be developed and that (3) future work on pseudonymized research data collection should include the results of such structured and integrated analyses.


Subjects
Anonyms and Pseudonyms, Biomedical Research, Confidentiality, Computer Communication Networks, Computer Security/standards, Humans
6.
BMC Med Inform Decis Mak; 17(1): 30, 2017 Mar 23.
Article in English | MEDLINE | ID: mdl-28330491

ABSTRACT

BACKGROUND: Translational researchers need robust IT solutions to access a range of data types, varying from public data sets to pseudonymised patient information with restricted access granted on a case-by-case basis. The reason for this complication is that managing access policies for sensitive human data must consider issues of data confidentiality, identifiability, extent of consent, and data usage agreements. All these ethical, social and legal aspects must be incorporated into differential management of restricted access to sensitive data. METHODS: In this paper we present a pilot system that uses several common open-source software components in a novel combination to coordinate access to heterogeneous biomedical data repositories containing open data (open access) as well as sensitive data (restricted access) in the domain of biobanking and biosample research. Our approach is based on a digital identity federation and software to manage resource access entitlements. RESULTS: Open-source software components were assembled and configured in such a way that they allow for different modes of restricted access according to the protection needs of the data. We have tested the resulting pilot infrastructure and assessed its performance, feasibility and reproducibility. CONCLUSIONS: Common open-source software components are sufficient to allow for the creation of a secure system for differential access to sensitive data. The implementation of this system is exemplary for researchers facing similar requirements for restricted-access data. Here we report experience and lessons learnt from our pilot implementation, which may be useful for similar use cases. Furthermore, we discuss possible extensions for more complex scenarios.
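The core idea of differential access, open resources are readable by anyone while restricted resources require an explicit entitlement, can be sketched as a toy policy check. The data structures and names below are invented; the actual pilot federates identities with off-the-shelf components rather than re-implementing this logic:

```python
# Toy model of differential access to open vs. restricted resources.
# "Entitlements" stand in for what a real system would derive from a
# federated identity and resource-entitlement management software.

def may_access(resource, user_entitlements):
    """Open data is accessible to all; restricted data requires an
    explicit entitlement for that specific resource."""
    if resource["access"] == "open":
        return True
    return resource["id"] in user_entitlements

open_dataset = {"id": "DS-1", "access": "open"}
biobank_data = {"id": "DS-2", "access": "restricted"}

print(may_access(open_dataset, set()))       # anonymous user, open data
print(may_access(biobank_data, set()))       # anonymous user, restricted data
print(may_access(biobank_data, {"DS-2"}))    # entitled researcher
```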


Subjects
Biological Specimen Banks/standards, Biomedical Research/standards, Computer Security/standards, Datasets as Topic, Translational Research, Biomedical/standards, Humans, Pilot Projects
7.
BMC Med Inform Decis Mak; 16: 49, 2016 Apr 30.
Article in English | MEDLINE | ID: mdl-27130179

ABSTRACT

BACKGROUND: Privacy must be protected when sensitive biomedical data is shared, e.g. for research purposes. Data de-identification is an important safeguard, where datasets are transformed to meet two conflicting objectives: minimizing re-identification risks while maximizing data quality. Typically, de-identification methods search a solution space of possible data transformations to find a good solution to a given de-identification problem. In this process, parts of the search space must be excluded to maintain scalability. OBJECTIVES: The set of transformations which are solution candidates is typically narrowed down by storing the results obtained during the search process and then using them to predict properties of the output of other transformations in terms of privacy (first objective) and data quality (second objective). However, due to the exponential growth of the size of the search space, previous implementations of this method are not well-suited when datasets contain many attributes which need to be protected. As this is often the case with biomedical research data, e.g. as a result of longitudinal collection, we have developed a novel method. METHODS: Our approach combines the mathematical concept of antichains with a data structure inspired by prefix trees to represent properties of a large number of data transformations while requiring only a minimal amount of information to be stored. To analyze the improvements which can be achieved by adopting our method, we have integrated it into an existing algorithm and we have also implemented a simple best-first branch and bound search (BFS) algorithm as a first step towards methods which fully exploit our approach. We have evaluated these implementations with several real-world datasets and the k-anonymity privacy model. 
RESULTS: When integrated into existing de-identification algorithms for low-dimensional data, our approach reduced memory requirements by up to one order of magnitude and execution times by up to 25%. This allowed us to increase the size of solution spaces which could be processed by almost a factor of 10. When using the simple BFS method, we were able to further increase the size of the solution space by a factor of three. When used as a heuristic strategy for high-dimensional data, the BFS approach outperformed a state-of-the-art algorithm by up to 12% in terms of the quality of output data. CONCLUSIONS: This work shows that implementing methods of data de-identification for real-world applications is a challenging task. Our approach solves a problem often faced by data custodians: a lack of scalability of de-identification software when used with datasets having realistic schemas and volumes. The method described in this article has been implemented in ARX, an open-source de-identification software for biomedical data.
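The k-anonymity privacy model used in the evaluation above has a compact definition: a table is k-anonymous with respect to a set of quasi-identifiers if every combination of quasi-identifier values occurs at least k times. A minimal check (illustrative only, unrelated to the antichain/prefix-tree internals the paper describes):

```python
# Minimal k-anonymity check on a list of record dictionaries.
# Data and attribute names are invented for illustration.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every quasi-identifier value combination occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

rows = [
    {"age": "20-29", "zip": "811**", "diagnosis": "flu"},
    {"age": "20-29", "zip": "811**", "diagnosis": "asthma"},
    {"age": "30-39", "zip": "805**", "diagnosis": "flu"},
]
print(is_k_anonymous(rows, ["age", "zip"], 2))  # the 30-39 group has size 1
```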


Subjects
Algorithms, Confidentiality, Medical Informatics/methods, Models, Statistical, Humans
8.
J Biomed Inform; 58: 37-48, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26385376

ABSTRACT

OBJECTIVE: With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with both generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. METHODS: In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that can be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. RESULTS: This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression.
Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. CONCLUSION: Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models that are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data.
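The coding model discussed above, generalize attribute values first, then suppress the few records whose generalized group is still too small, can be sketched on toy data. The generalization hierarchy (age to decade interval) and the dataset are invented; ARX's real algorithm is far more sophisticated:

```python
# Sketch: generalization followed by record suppression to reach k-anonymity.
# Suppression removes outlier records instead of forcing coarser
# generalization on the whole dataset, which preserves more information.
from collections import Counter

def generalize_age(age):
    """Map an exact age to a decade interval, e.g. 23 -> '20-29'."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def anonymize(records, k):
    generalized = [{**r, "age": generalize_age(r["age"])} for r in records]
    sizes = Counter(r["age"] for r in generalized)
    # Keep only records whose generalized group reaches size k.
    return [r for r in generalized if sizes[r["age"]] >= k]

data = [{"age": 23}, {"age": 27}, {"age": 24}, {"age": 61}]
print(anonymize(data, k=2))
```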


Subjects
Information Services/economics, Information Services/standards, Computer Security, Models, Theoretical, Privacy
9.
BMC Med Inform Decis Mak; 15: 100, 2015 Nov 30.
Article in English | MEDLINE | ID: mdl-26621059

ABSTRACT

BACKGROUND: Collaborative collection and sharing of data have become a core element of biomedical research. Typical applications are multi-site registries which collect sensitive person-related data prospectively, often together with biospecimens. To secure these sensitive data, national and international data protection laws and regulations demand that identifying data be separated from biomedical data and that pseudonyms be introduced. Neither the formulations in laws and regulations nor existing pseudonymization concepts, however, are precise enough to directly provide an implementation guideline. We therefore describe core requirements as well as implementation options for registries and study databases with sensitive biomedical data. METHODS: We first analyze existing concepts and compile a set of fundamental requirements for pseudonymized data management. Then we derive a system architecture that fulfills these requirements. Next, we provide a comprehensive overview and comparison of different technical options for an implementation. Finally, we develop a generic software solution for managing pseudonymized data and show its feasibility by describing how we have used it to realize two research networks. RESULTS: We have found that pseudonymization models are highly heterogeneous, already on a conceptual level. We have compiled a set of requirements from different pseudonymization schemes. We propose an architecture and present an overview of technical options. Based on a selection of technical elements, we suggest a generic solution. It supports the multi-site collection and management of biomedical data. Security measures are multi-tier pseudonymity and physical separation of data over independent backend servers. Integrated views are provided by a web-based user interface. Our approach has been successfully used to implement a national and an international rare disease network.
CONCLUSIONS: We were able to identify a set of core requirements from several pseudonymization models. Considering various implementation options, we realized a generic solution which was implemented and deployed in research networks. Still, further conceptual work on pseudonymity is needed. Specifically, it remains unclear how exactly data should be separated into distributed subsets. Moreover, a thorough risk and threat analysis is needed.
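The basic separation the architecture above enforces, identifying data and biomedical data linked only through a pseudonym, can be illustrated in a few lines. This is a deliberately simplified single-tier sketch with invented names; the paper's solution uses multi-tier pseudonyms over physically separated backend servers:

```python
# Illustrative separation of identifying data from medical data.
# In a real deployment the two stores live on independent servers;
# here two dictionaries stand in for them.
import secrets

identity_store = {}   # pseudonym -> identifying data ("server A")
research_store = {}   # pseudonym -> medical data      ("server B")

def enroll(name, birth_date, medical_record):
    """Register a subject: identifying and medical data are stored
    separately, linked only by a random, non-derivable pseudonym."""
    pseudonym = secrets.token_hex(8)
    identity_store[pseudonym] = {"name": name, "birth_date": birth_date}
    research_store[pseudonym] = medical_record
    return pseudonym

p = enroll("Jane Doe", "1980-04-02", {"diagnosis": "rare disease X"})
# Researchers query only the research store; authorized re-identification
# additionally requires access to the identity store.
print(p in identity_store and p in research_store)
```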


Subjects
Biomedical Research/standards, Confidentiality/standards, Datasets as Topic/standards, Guidelines as Topic/standards, Registries/standards, Humans
10.
J Biomed Inform; 50: 62-76, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24333850

ABSTRACT

Sensitive biomedical data is often collected from distributed sources, involving different information systems and different organizational units. Local autonomy and legal constraints lead to a need for privacy-preserving integration concepts. In this article, we focus on anonymization, which plays an important role in the re-use of clinical data and the sharing of research data. We present a flexible solution for anonymizing distributed data in the semi-honest model. Prior to the anonymization procedure, an encrypted global view of the dataset is constructed by means of a secure multi-party computation (SMC) protocol. This global representation can then be anonymized. Our approach is not limited to specific anonymization algorithms but provides pre- and postprocessing for a broad spectrum of algorithms and many privacy criteria. We present an extensive analytical and experimental evaluation and discuss which types of methods and criteria are supported. Our prototype demonstrates the approach by implementing k-anonymity, ℓ-diversity, t-closeness and δ-presence with a globally optimal de-identification method in horizontally and vertically distributed setups. The experiments show that our method provides highly competitive performance and offers a practical and flexible solution for anonymizing distributed biomedical datasets.
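Of the privacy criteria the prototype above supports, ℓ-diversity is easy to state: within each group of records sharing the same quasi-identifier values, the sensitive attribute must take at least ℓ distinct values. A minimal check of that property (distinct ℓ-diversity, on invented data, independent of the SMC machinery described in the abstract):

```python
# Minimal distinct l-diversity check: every quasi-identifier group must
# contain at least l distinct sensitive values, so an attacker who locates
# a subject's group still cannot infer the sensitive attribute.
from collections import defaultdict

def is_l_diverse(records, quasi_identifiers, sensitive, l):
    groups = defaultdict(set)
    for r in records:
        key = tuple(r[q] for q in quasi_identifiers)
        groups[key].add(r[sensitive])
    return all(len(values) >= l for values in groups.values())

rows = [
    {"zip": "811**", "diagnosis": "flu"},
    {"zip": "811**", "diagnosis": "asthma"},
    {"zip": "805**", "diagnosis": "flu"},
    {"zip": "805**", "diagnosis": "flu"},
]
print(is_l_diverse(rows, ["zip"], "diagnosis", 2))  # 805** has one value only
```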


Subjects
Medical Records Systems, Computerized, Privacy, Algorithms, Models, Theoretical