ABSTRACT
PURPOSE: Our objective is to describe how the U.S. Food and Drug Administration (FDA)'s Sentinel System implements best practices to ensure trust in drug safety studies using real-world data from disparate sources. METHODS: We present a stepwise schematic for Sentinel's data harmonization, data quality checking, query design and implementation, and reporting practices, and describe approaches to enhancing the transparency, reproducibility, and replicability of studies at each step. CONCLUSIONS: Each Sentinel data partner converts its source data into the Sentinel Common Data Model. The transformed data undergo rigorous quality checks before they can be used for Sentinel queries. The Sentinel Common Data Model framework, data transformation code for several data sources, and data quality assurance packages are publicly available. Designed to run against the Sentinel Common Data Model, Sentinel's querying system comprises a suite of pre-tested, parametrizable computer programs that allow users to perform sophisticated descriptive and inferential analyses without having to exchange individual-level data across sites. Detailed documentation of the programs' capabilities, as well as the code and information required to execute them, is publicly available on the Sentinel website. Sentinel also provides public training and online resources to facilitate use of its data model and querying system. Its study specifications conform to established reporting frameworks aimed at facilitating reproducibility and replicability of real-world data studies. Reports from Sentinel queries and the associated design and analytic specifications are available for download on the Sentinel website. Sentinel is an example of how real-world data can be used to generate regulatory-grade evidence at scale through a transparent, reproducible, and replicable process.
Subjects
Pharmacoepidemiology, United States Food and Drug Administration, Pharmacoepidemiology/methods, Reproducibility of Results, United States Food and Drug Administration/standards, Humans, United States, Data Accuracy, Adverse Drug Reaction Reporting Systems/statistics & numerical data, Adverse Drug Reaction Reporting Systems/standards, Drug-Related Side Effects and Adverse Reactions/epidemiology, Databases, Factual/standards, Research Design/standards
ABSTRACT
BACKGROUND: When data are distributed across multiple sites, sharing individual-level information among sites may be difficult. In such multi-site studies, the propensity score model can be fitted with data from each site or with data from all sites when using inverse probability-weighted Cox regression to estimate the overall hazard ratio. However, when there is unknown heterogeneity of covariates across sites, either approach may lead to bias or reduced efficiency. In this study, we propose a method that estimates the propensity score based on a covariate balance-related criterion and estimates the overall hazard ratio while overcoming data-sharing constraints across sites. METHODS: The proposed propensity score is generated by choosing between the global propensity score, fitted in the entire population, and the local propensity score, fitted within each site, according to a covariate balance-related criterion. We used this propensity score to estimate the overall hazard ratio for distributed survival data across multiple sites, requiring only summary-level information from each site. We conducted simulation studies to evaluate the performance of the proposed method, and we applied it to real-world data to examine the effect of radiation therapy on time to death among breast cancer patients. RESULTS: The simulation studies showed that the proposed method improved estimation of the overall hazard ratio compared with the global and local propensity score methods, regardless of the number of sites and the sample size in each site. Similar results were observed under both homogeneous and heterogeneous settings. Moreover, the proposed method yielded results identical to those of the pooled individual-level data analysis.
The real-world data analysis indicated that the proposed method was more likely to detect a significant effect of radiation therapy on mortality than either the global or the local propensity score method. CONCLUSIONS: For multi-site distributed survival data, the proposed covariate balance-related propensity score outperformed both the global propensity score estimated from the entire population and the local propensity score estimated within each site in estimating the overall hazard ratio. The proposed approach can be performed without individual-level data transfer between sites and yields the same results as the corresponding pooled individual-level data analysis.
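The selection step described above can be sketched in a few lines: weight the data with each candidate propensity score and keep whichever yields better covariate balance. This is an illustrative sketch of the general idea, not the authors' implementation; the specific balance criterion used here (maximum weighted standardized mean difference under inverse probability weighting) is an assumption.

```python
import numpy as np

def weighted_smd(x, treat, w):
    """Weighted standardized mean difference of one covariate."""
    m1 = np.average(x[treat == 1], weights=w[treat == 1])
    m0 = np.average(x[treat == 0], weights=w[treat == 0])
    v1 = np.average((x[treat == 1] - m1) ** 2, weights=w[treat == 1])
    v0 = np.average((x[treat == 0] - m0) ** 2, weights=w[treat == 0])
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2)

def choose_ps(x, treat, ps_global, ps_local):
    """Pick whichever propensity score gives better covariate balance
    at this site under inverse probability weighting."""
    scores = {}
    for name, ps in [("global", ps_global), ("local", ps_local)]:
        w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))  # IPW weights
        scores[name] = max(weighted_smd(x[:, j], treat, w)
                           for j in range(x.shape[1]))
    return min(scores, key=scores.get)
```

Each site can evaluate this criterion locally, so the choice between global and local scores requires no individual-level data transfer.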
Subjects
Information Dissemination, Humans, Propensity Score, Proportional Hazards Models, Computer Simulation, Information Dissemination/methods, Bias
ABSTRACT
An important mission of biology and medical science is to find disease-related genes, and recent research uses gene/protein networks to do so. Because these networks contain false positive interactions, the results are often inaccurate and unreliable. Integrating multiple gene/protein networks can overcome this drawback, producing a network with fewer false positive interactions. The integration method plays a crucial role in the quality of the constructed network. In this paper, we integrate several sources to build a reliable heterogeneous network, i.e., a network that includes nodes of different types. From the different gene/protein sources, four gene-gene similarity networks are first constructed and then integrated by applying a type-II fuzzy voter scheme. The resulting gene-gene network is linked to a disease-disease similarity network (itself the outcome of integrating four sources) through a bipartite disease-gene network. We propose a novel algorithm, random walk with restart on the heterogeneous network with fuzzy fusion (RWRHN-FF). By running RWRHN-FF over the heterogeneous network, disease-related genes are determined. Experimental results using leave-one-out cross-validation indicate that RWRHN-FF outperforms existing methods. The proposed algorithm can be applied to find new genes for prostate, breast, gastric, and colon cancers. Because RWRHN-FF converges slowly on large heterogeneous networks, we also propose a parallel implementation of the algorithm on the Apache Spark platform for high-throughput, reliable network inference. Experiments on heterogeneous networks of different sizes show faster convergence than non-distributed implementations.
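The core iteration of a random-walk-with-restart ranking can be sketched as follows. This is a generic single-network version for illustration only; the paper's RWRHN-FF additionally fuses heterogeneous networks with a type-II fuzzy voter scheme, which is not reproduced here.

```python
import numpy as np

def random_walk_with_restart(adj, seeds, restart=0.7, tol=1e-10, max_iter=1000):
    """Rank nodes by steady-state visiting probability of a random walk
    that restarts at known (seed) disease genes with probability `restart`."""
    # Column-normalize the adjacency matrix so each column is a
    # transition probability distribution.
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    p0 = np.zeros(adj.shape[0])
    p0[seeds] = 1 / len(seeds)          # restart vector over seed genes
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next
```

Non-seed nodes with high steady-state probability are the candidate disease-related genes.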
Subjects
Computational Biology, Gene Regulatory Networks, Algorithms, Humans, Male
ABSTRACT
Distributed data networks enable large-scale epidemiologic studies, but protecting privacy while adequately adjusting for a large number of covariates continues to pose methodological challenges. Using 2 empirical examples within a 3-site distributed data network, we tested combinations of 3 aggregate-level data-sharing approaches (risk-set, summary-table, and effect-estimate), 4 confounding adjustment methods (matching, stratification, inverse probability weighting, and matching weighting), and 2 summary scores (propensity score and disease risk score) for binary and time-to-event outcomes. We assessed the performance of combinations of these data-sharing and adjustment methods by comparing their results with results from the corresponding pooled individual-level data analysis (reference analysis). For both types of outcomes, the method combinations examined yielded results identical or comparable to the reference results in most scenarios. Within each data-sharing approach, comparability between aggregate- and individual-level data analysis depended on adjustment method; for example, risk-set data-sharing with matched or stratified analysis of summary scores produced identical results, while weighted analysis showed some discrepancies. Across the adjustment methods examined, risk-set data-sharing generally performed better, while summary-table and effect-estimate data-sharing more often produced discrepancies in settings with rare outcomes and small sample sizes. Valid multivariable-adjusted analysis can be performed in distributed data networks without sharing of individual-level data.
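As one concrete instance of the summary-table data-sharing approach, each site can send per-stratum 2×2 event counts (for example, within propensity score strata), and the analysis center can combine them without ever seeing individual-level data. The Mantel-Haenszel odds ratio below is a minimal sketch of this idea, standing in for the fuller set of adjustment methods examined in the study.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel odds ratio from per-stratum 2x2 summary tables,
    each given as (exposed events a, exposed non-events b,
    unexposed events c, unexposed non-events d)."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n   # weight discordant pairs by stratum size
        den += b * c / n
    return num / den
```

Because only the stratum-level counts leave each site, this reproduces a stratified pooled analysis while sharing no patient rows.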
Subjects
Confidentiality/standards, Data Aggregation, Epidemiologic Research Design, Information Dissemination/methods, Information Services, Humans, Multivariate Analysis, Privacy, Propensity Score
ABSTRACT
Networks of longitudinal observational databases, often electronic medical records, transactional insurance claims, or both, are increasingly being used to study the effects of medicinal products in real-world use. Such databases are frequently configured as distributed networks: patient-level data are kept behind firewalls and not communicated outside the data vendor other than in aggregate form. Instead, data are standardized across the network, queries are executed locally by data partners, and summary results are provided to one or more central research partners for amalgamation, aggregation, and summarization. Such networks can be huge, covering years of data on upwards of 100 million patients. Examples include the FDA Sentinel Network, ASPEN, CNODES, and EU-ADR. As this is an emerging field, we note in this paper the conceptual similarities and differences between the analysis of distributed networks and the now well-established field of meta-analysis of randomized clinical trials (RCTs). We recommend, wherever appropriate, applying lessons learned from meta-analysis to guide the development of distributed network analyses of longitudinal observational databases.
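The meta-analysis analogy can be made concrete: when each data partner returns only an effect estimate (say, a log hazard ratio) and its standard error, the center can pool them by inverse-variance weighting, exactly as in a fixed-effect meta-analysis of RCTs. A minimal sketch:

```python
import math

def fixed_effect_meta(log_hrs, ses):
    """Inverse-variance fixed-effect pooling of site-level log hazard
    ratios, mirroring effect-estimate sharing in a distributed network."""
    weights = [1 / se ** 2 for se in ses]           # precision weights
    pooled = sum(w * b for w, b in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se
```

Random-effects pooling, heterogeneity statistics, and small-sample corrections from the meta-analysis literature carry over to distributed networks in the same way.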
Subjects
Computer Communication Networks/statistics & numerical data, Data Mining/statistics & numerical data, Databases, Factual/statistics & numerical data, Meta-Analysis as Topic, Observational Studies as Topic/statistics & numerical data, Randomized Controlled Trials as Topic/statistics & numerical data, Research Design/statistics & numerical data, Adverse Drug Reaction Reporting Systems/statistics & numerical data, Angioedema/chemically induced, Angioedema/diagnosis, Angioedema/epidemiology, Angiotensin-Converting Enzyme Inhibitors/adverse effects, Data Accuracy, Data Interpretation, Statistical, Data Mining/methods, Humans, Observational Studies as Topic/methods, Randomized Controlled Trials as Topic/methods, Risk Assessment, Risk Factors
ABSTRACT
Big data analysis raises the expectation that computerized algorithms can extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago, especially omics technologies such as genomics, transcriptomics, and cytomics. Some basic methods of data analysis are provided along with the technology, but functional analysis and interpretation require adapting existing software tools or developing new ones. For these steps, structuring and evaluating results according to the biological context is extremely important and not merely a mathematical problem. This aspect matters far more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured first by the applied technology and exhibit quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data from the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes in individual patients also generates personal big data and requires new management strategies to develop data-driven, individualized interpretation concepts. With this perspective in mind, translating information derived from molecular big data will also require new specifications for education and professional competence.
Subjects
Big Data, Molecular Diagnostic Techniques/methods, Rheumatology/methods, Algorithms, Datasets as Topic/trends, Forecasting, Germany, Humans, Medical Records Systems, Computerized/trends, Molecular Diagnostic Techniques/trends, Patient Generated Health Data/trends, Rheumatology/trends, Software/trends
ABSTRACT
Serial and time-resolved macromolecular crystallography are on the rise. However, beam time at X-ray free-electron lasers is limited and most third-generation synchrotron-based macromolecular crystallography beamlines do not offer the necessary infrastructure yet. Here, a new setup is demonstrated, based on the JUNGFRAU detector and Jungfraujoch data-acquisition system, that enables collection of kilohertz serial crystallography data at fourth-generation synchrotrons. More importantly, it is shown that this setup is capable of collecting multiple-time-point time-resolved protein dynamics at kilohertz rates, allowing the probing of microsecond to second dynamics at synchrotrons in a fraction of the time needed previously. A high-quality complete X-ray dataset was obtained within 1 min from lysozyme microcrystals, and the dynamics of the light-driven sodium-pump membrane protein KR2 with a time resolution of 1 ms could be demonstrated. To make the setup more accessible for researchers, downstream data handling and analysis will be automated to allow on-the-fly spot finding and indexing, as well as data processing.
ABSTRACT
BACKGROUND: In clinical research, important variables may be collected from multiple data sources. Physical pooling of patient-level data from multiple sources often raises several challenges, including proper protection of patient privacy and proprietary interests. We previously developed an SAS-based package to perform distributed regression-a suite of privacy-protecting methods that perform multivariable-adjusted regression analysis using only summary-level information-with horizontally partitioned data, a setting where distinct cohorts of patients are available from different data sources. We integrated the package with PopMedNet, an open-source file transfer software, to facilitate secure file transfer between the analysis center and the data-contributing sites. The feasibility of using PopMedNet to facilitate distributed regression analysis (DRA) with vertically partitioned data, a setting where the data attributes from a cohort of patients are available from different data sources, was unknown. OBJECTIVE: The objective of the study was to describe the feasibility of using PopMedNet and enhancements to PopMedNet to facilitate automatable vertical DRA (vDRA) in real-world settings. METHODS: We gathered the statistical and informatic requirements of using PopMedNet to facilitate automatable vDRA. We enhanced PopMedNet based on these requirements to improve its technical capability to support vDRA. RESULTS: PopMedNet can enable automatable vDRA. We identified and implemented two enhancements to PopMedNet that improved its technical capability to perform automatable vDRA in real-world settings. The first was the ability to simultaneously upload and download multiple files, and the second was the ability to directly transfer summary-level information between the data-contributing sites without a third-party analysis center. 
CONCLUSIONS: PopMedNet can be used to facilitate automatable vDRA to protect patient privacy and support clinical research in real-world settings.
ABSTRACT
BACKGROUND: Local nodes on federated research and data networks (FR&DNs) provide enabling infrastructure for collaborative clinical and translational research. Studies in other fields note that infrastructuring, that is, work to identify and negotiate relationships among people, technologies, and organizations, is invisible, unplanned, and undervalued. This may explain the limited literature on nodes in FR&DNs in health care. METHODS: A retrospective case study of one PCORnet® node explored 3 questions: (1) how were components of infrastructure assembled; (2) what specific work was required; and (3) what theoretically grounded, pragmatic questions should be considered when infrastructuring a node for sustainability. Artifacts, work efforts, and interviews generated during node development and implementation were reviewed. A sociotechnical lens was applied to the analysis. Validity was established with internal and external partners. RESULTS: Resources, services, and expertise needed to establish the node existed within the organization, but were scattered across work units. Aligning, mediating, and institutionalizing for sustainability among network and organizational teams, governance, and priorities consumed more work effort than deploying the technical aspects of the node. A theoretically based set of questions relevant to infrastructuring a node was developed and organized within a framework of infrastructuring emphasizing enacting technology, organizing work, and institutionalizing. CONCLUSIONS: FR&DNs are expanding; we provide a sociotechnical perspective on infrastructuring a node. Future research should evaluate the applicability of the framework and questions to other node and network configurations, and more broadly the infrastructuring required to enable and support federated clinical and translational science.
ABSTRACT
Access to health data, important for population health planning, basic and clinical research and health industry utilization, remains problematic. Legislation intended to improve access to personal data across national borders has proven to be a double-edged sword, where complexity and implications from misinterpretations have paradoxically resulted in data becoming more siloed. As a result, the potential for development of health specific AI and clinical decision support tools built on real-world data have yet to be fully realized. In this perspective, we propose federated networks as a solution to enable access to diverse data sets and tackle known and emerging health problems. The perspective draws on experience from the World Economic Forum Breaking Barriers to Health Data project, the Personal Health Train and Vantage6 infrastructures, and industry insights. We first define the concept of federated networks in a healthcare context, present the value they can bring to multiple stakeholders, and discuss their establishment, operation and implementation. Challenges of federated networks in healthcare are highlighted, as well as the resulting need for and value of an independent orchestrator for their safe, sustainable and scalable implementation.
Subjects
Delivery of Health Care, Privacy, United States
ABSTRACT
BACKGROUND: A distributed data network approach combined with distributed regression analysis (DRA) can reduce the risk of disclosing sensitive individual and institutional information in multicenter studies. However, software that facilitates large-scale and efficient implementation of DRA is limited. OBJECTIVE: This study aimed to assess the precision and operational performance of a DRA application comprising a SAS-based DRA package and a file transfer workflow developed within the open-source distributed networking software PopMedNet in a horizontally partitioned distributed data network. METHODS: We executed the SAS-based DRA package to perform distributed linear, logistic, and Cox proportional hazards regression analysis on a real-world test case with 3 data partners. We used PopMedNet to iteratively and automatically transfer highly summarized information between the data partners and the analysis center. We compared the DRA results with the results from standard SAS procedures executed on the pooled individual-level dataset to evaluate the precision of the SAS-based DRA package. We computed the execution time of each step in the workflow to evaluate the operational performance of the PopMedNet-driven file transfer workflow. RESULTS: All DRA results were precise (differences <10⁻¹²), and DRA model fit curves were identical or similar to those obtained from the corresponding pooled individual-level data analyses. All regression models required less than 20 min for full end-to-end execution. CONCLUSIONS: We integrated a SAS-based DRA package with PopMedNet and successfully tested the new capability within an active distributed data network. The study demonstrated the validity and feasibility of using DRA to enable more privacy-protecting analysis in multicenter studies.
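The statistical idea behind iterative DRA for models such as logistic regression can be sketched in a few lines: at each iteration, every site returns only its gradient and Hessian contributions at the current coefficient values, and the analysis center sums them for a Newton-Raphson update. The toy Python sketch below illustrates why the distributed fit matches the pooled fit exactly; it is not the SAS-based Sentinel package, and the function names are hypothetical.

```python
import numpy as np

def site_contributions(X, y, beta):
    """One site's summary-level contribution to a Newton-Raphson step of
    logistic regression: gradient and Hessian only, no patient rows."""
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    return grad, hess

def distributed_logistic(sites, n_coef, n_iter=25):
    """Analysis center: sum site contributions and update beta."""
    beta = np.zeros(n_coef)
    for _ in range(n_iter):
        grads, hessians = zip(*(site_contributions(X, y, beta) for X, y in sites))
        beta = beta + np.linalg.solve(sum(hessians), sum(grads))
    return beta
```

Because the pooled gradient and Hessian are exact sums of the site-level pieces, each iteration reproduces the pooled Newton-Raphson step, which is why DRA results can match pooled results to near machine precision.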
ABSTRACT
The inverse probability weighted Cox proportional hazards model can be used to estimate the marginal hazard ratio. In multi-site studies, it may be infeasible to pool individual-level datasets due to privacy and other considerations. We propose three methods for making inference on hazard ratios without the need for pooling individual-level datasets across sites. The first method requires a summary-level eight-column risk-set table to produce the same hazard ratio estimate and robust sandwich variance estimate as those from the corresponding pooled individual-level data analysis (reference analysis). The second and third methods, which are based on two bootstrap re-sampling strategies, require a summary-level four-column risk-set table and bootstrap-based risk-set tables from each site to produce the same hazard ratio and bootstrap variance estimates as those from their reference analyses. All three methods require only one file transfer between the data-contributing sites and the analysis center. We justify these methods theoretically, illustrate their use, and demonstrate their statistical performance using both simulated and real-world data.
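To give the flavor of risk-set sharing, the sketch below builds a summary-level risk-set table: one row per distinct event time with weighted event and at-risk counts, overall and in the treated arm. The column layout here is illustrative only and does not reproduce the paper's exact eight- or four-column formats.

```python
def risk_set_table(times, events, treat, weights):
    """Summary-level risk-set table for weighted survival data.
    Each row: (event time, weighted events, weighted treated events,
    weighted number at risk, weighted treated at risk)."""
    rows = []
    for t in sorted({ti for ti, e in zip(times, events) if e}):
        at_risk = [(tr, w) for ti, tr, w in zip(times, treat, weights)
                   if ti >= t]
        ev = [(tr, w) for ti, e, tr, w in zip(times, events, treat, weights)
              if ti == t and e]
        rows.append((t,
                     sum(w for _, w in ev),               # weighted events
                     sum(w for tr, w in ev if tr),        # in treated arm
                     sum(w for _, w in at_risk),          # weighted at risk
                     sum(w for tr, w in at_risk if tr)))  # treated at risk
    return rows
```

A table of this form contains exactly the quantities the weighted Cox partial likelihood needs at each event time, which is why a single transfer of risk-set tables can reproduce the pooled analysis.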
Subjects
Data Analysis, Research Design, Probability, Proportional Hazards Models
ABSTRACT
The biomedical scientific community is in the midst of a significant expansion in how data are used to accomplish the important goals of reducing disability and improving health care. Data science is the academic discipline emerging from this expansion. Data science reflects a new approach to the acquisition, storage, analysis, and interpretation of scientific knowledge. The potential benefits of data science are transforming biomedical research and will lead physical medicine and rehabilitation in exciting new directions. Understanding this transformation will require modifying and expanding the education, training, and research infrastructure that support rehabilitation science and practice.
Subjects
Data Science, Physical Therapy Modalities, Rehabilitation, Data Science/methods, Humans, Information Dissemination, Physical and Rehabilitation Medicine/methods, Rehabilitation/methods
ABSTRACT
INTRODUCTION: Health information generated by health care encounters, research enterprises, and public health is increasingly interoperable and shareable across uses and users. This paper examines the US public's willingness to be a part of multi-user health information networks and identifies factors associated with that willingness. METHODS: Using a probability-based sample (n = 890), we examined the univariable and multivariable relationships between willingness to participate in health information networks and demographic factors, trust, altruism, beliefs about the public's ethical obligation to participate in research, privacy, medical deception, and policy and governance using linear regression modeling. RESULTS: Willingness to be a part of a multi-user network that includes health care providers, mental health, social services, research, or quality improvement is low (7.4 percent to 26 percent, depending on the user). Using stepwise regression, we identified a model that explained 42.6 percent of the variability in willingness to participate and included nine statistically significant factors associated with the outcome: trust in the health system, confidence in policy, the belief that people have an obligation to participate in research, the belief that health researchers are accountable for conducting ethical research, the desire to give permission, education, concerns about insurance, privacy, and preference for notification. DISCUSSION: Our results suggest willingness to be a part of multi-user data networks is low, but that attention to governance may increase willingness. Building trust to enable acceptance of multi-use data networks will require a commitment to aligning data access practices with the expectations of the people whose data are being used.
ABSTRACT
CONTEXT: Sustaining electronic health data networks and maximizing return on federal investment in their development is essential for achieving national data insight goals for transforming health care. However, crossing the business model chasm from grant funding to self-sustaining viability is challenging. CASE DESCRIPTION: This paper presents lessons learned in seeking the sustainability of the Scalable Architecture for Federated Translational Inquiries Network (SAFTINet), an electronic health data network involving over 50 primary care practices in three states. SAFTINet was developed with funding from the Agency for Healthcare Research and Quality to create a multi-state network for comparative effectiveness research (CER) involving safety-net patients. METHODS: Three analyses were performed: (1) a product gap analysis of alternative data sources; (2) a Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis of SAFTINet in the context of competing alternatives; and (3) a customer discovery process involving approximately 150 SAFTINet stakeholders to identify SAFTINet's sustaining value proposition for health services researchers, clinical data partners, and policy makers. FINDINGS: The results of this business model analysis informed SAFTINet's sustainability strategy. The fundamental high-level product needs were similar across the three primary customer segments: credible data, efficiency and ease of use, and relevance to their daily work or 'jobs to be done'. However, how these benefits needed to be minimally demonstrated varied by customer, such that different supporting evidence was required. MAJOR THEMES: The SAFTINet experience illustrates that commercialization-readiness and business model methods can be used to identify multi-sided value propositions for sustaining electronic health data networks and their data capabilities as drivers of health care transformation.
ABSTRACT
INTRODUCTION: Existing large-scale distributed health data networks are disconnected even as they address related questions of healthcare research and public policy. This paper describes the design and implementation of a fully functional prototype open-source tool, the Cross-Network Directory Service (CNDS), which addresses much of what keeps distributed networks disconnected from each other. METHODS: The set of services needed to implement the Cross-Network Directory Service was identified through engagement with stakeholders and workgroup members. CNDS was implemented using PCORnet and Sentinel network instances and tested by participating data partners. RESULTS: Web services that enable the four major functional features of the service (registration, discovery, communication, and governance) were developed and placed into an open-source repository. The services include a robust metadata model that is extensible to accommodate a virtually unlimited inventory of metadata fields, without requiring any further software development. The user interfaces are programmatically generated based on the contents of the metadata model. CONCLUSION: The CNDS pilot project gathered functional requirements from stakeholders and collaborating partners to build a software application that enables cross-network data and resource sharing. The two partners, one from Sentinel and one from PCORnet, tested the software. They successfully entered metadata about their organizations and data sources and then used the Discovery and Communication functionality to find data sources of interest and send a cross-network query. The CNDS software can help integrate disparate health data networks by providing a mechanism for data partners to participate in multiple networks, share resources, and seamlessly send queries across those networks.
ABSTRACT
PURPOSE: Sharing of detailed individual-level data continues to pose challenges in multi-center studies. This issue can be addressed in part by using analytic methods that require only summary-level information to perform the desired multivariable-adjusted analysis. We examined the feasibility and empirical validity of 1) conducting multivariable-adjusted distributed linear regression and 2) combining distributed linear regression with propensity scores, in a large distributed data network. PATIENTS AND METHODS: We compared percent total weight loss 1 year post-surgery between the Roux-en-Y gastric bypass and sleeve gastrectomy procedures among 43,110 patients from 36 health systems in the National Patient-Centered Clinical Research Network. We adjusted for baseline demographic and clinical variables as individual covariates, deciles of propensity scores, or both, in three separate outcome regression models. We used distributed linear regression, a method that requires only summary-level information (specifically, the sums of squares and cross-products matrix) from sites, to fit the three ordinary least squares linear regression models. A comparison set of analyses that used pooled deidentified individual-level data from sites served as the reference. RESULTS: Distributed linear regression produced results identical to those from the corresponding pooled individual-level data analysis for all variables in all three models. The maximum numerical difference in the parameter estimates or standard errors across the three models was 3×10⁻¹¹. CONCLUSION: Distributed linear regression analysis is a feasible and valid analytic method in multicenter studies for one-time continuous outcomes. Combining distributed regression with propensity scores via modeling offers more privacy protection and analytic flexibility.
ABSTRACT
INTRODUCTION: Patient privacy and data security concerns often limit the feasibility of pooling patient-level data from multiple sources for analysis. Distributed data networks (DDNs) that employ privacy-protecting analytical methods, such as distributed regression analysis (DRA), can mitigate these concerns. However, DRA is not routinely implemented in large DDNs. OBJECTIVE: We describe the design and implementation of a process framework and query workflow that allow automatable DRA in real-world DDNs that use PopMedNet™, an open-source distributed networking software platform. METHODS: We surveyed and catalogued existing hardware and software configurations at all data partners in the Sentinel System, a PopMedNet-driven DDN. Key guiding principles for the design included minimal disruptions to the current PopMedNet query workflow and minimal modifications to data partners' hardware configurations and software requirements. RESULTS: We developed and implemented a three-step process framework and PopMedNet query workflow that enables automatable DRA: 1) assembling a de-identified patient-level dataset at each data partner, 2) distributing a DRA package to data partners for local iterative analysis, and 3) iteratively transferring intermediate files between data partners and analysis center. The DRA query workflow is agnostic to statistical software, accommodates different regression models, and allows different levels of user-specified automation. DISCUSSION: The process framework can be generalized to and the query workflow can be adopted by other PopMedNet-based DDNs. CONCLUSION: DRA has great potential to change the paradigm of data analysis in DDNs. Successful implementation of DRA in Sentinel will facilitate adoption of the analytic approach in other DDNs.
ABSTRACT
BACKGROUND: The Flinders Telehealth in the Home trial (FTH trial), conducted in South Australia, was an action research initiative to test and evaluate the inclusion of telehealth services and broadband access technologies for palliative care patients living in the community and home-based rehabilitation services for the elderly at home. Telehealth services at home were supported by video conferencing between a therapist, nurse, or doctor and a patient using an iPad tablet. OBJECTIVE: The aims of this study are to identify which technical factors influence the quality of video conferencing in the home setting and to assess the impact of these factors on clinical perceptions and acceptance of video conferencing for health care delivery into the home. Finally, we aim to identify any relationships between technical factors and clinical acceptance of this technology. METHODS: An action research process developed several quantitative and qualitative procedures during the FTH trial to investigate technology performance and users' perceptions of the technology, including measurements of signal power, data transmission throughput, objective assessment of user perceptions of videoconference quality, and questionnaires administered to clinical users. RESULTS: The effectiveness of telehealth was judged by clinicians as equivalent to or better than a home visit on 192 (71.6%, 192/268) occasions, and clinicians rated the experience of conducting a telehealth session compared with a home visit as equivalent or better in 90.3% (489/540) of the sessions. The quality of video conferencing when using a third-generation mobile data service (3G) rather than broadband fiber-based services was concerning, as 23.5% (220/936) of the calls failed during the telehealth sessions. The experimental field tests indicated that video conferencing audio and video quality was worse when using mobile data services compared with fiber-to-the-home services. In addition, statistically significant associations were found between audio/video quality and patient comfort with the technology, as well as the clinician ratings for effectiveness of telehealth. CONCLUSIONS: These results showed that the quality of video conferencing when using 3G-based mobile data services instead of broadband fiber-based services was lower, owing to failed calls, audio/video jitter, and video pixelation during the telehealth sessions. Nevertheless, clinicians felt able to deliver effective services to patients at home using 3G-based mobile data services.
ABSTRACT
Interpersonal relationships are vital for our daily functioning and wellbeing. Social networks may form the primary means by which environmental influences determine individual traits. Several studies have shown the influence of social networks on decision-making, behaviors, and wellbeing. Smartphones have great potential for measuring social networks in a real-world setting. Here we tested the feasibility of using people's own smartphones as a data collection platform for face-to-face interactions. We developed an application for iOS and Android to collect Bluetooth data and acquired one week of data from 14 participants in our organization. The Bluetooth scanning statistics were used to quantify the time-resolved connection strength between participants and define the weights of a dynamic social network. We used network metrics to quantify changes in network topology over time and non-negative matrix factorization to identify cliques or subgroups that recurred during the week. The scanning rate varied considerably between smartphones running Android and iOS, and egocentric network metrics were correlated with the scanning rate. The time courses of two identified subgroups matched two meetings that took place that week. These findings demonstrate the feasibility of using participants' own smartphones to map social networks, whilst identifying current limitations of using generic smartphones. The bias introduced by variations in scanning rate and missing data is an important limitation that needs to be addressed in future studies.
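The subgroup-detection step can be sketched as follows: arrange the time-resolved edge weights into a non-negative matrix W (time windows × participant pairs) and factor it as W ≈ GH, where rows of H are candidate subgroups (edge patterns) and columns of G give each subgroup's activation over time. The sketch below uses classic multiplicative updates (Lee-Seung) on synthetic data with two planted "meetings"; the window counts, rank, and planted structure are illustrative assumptions, not the study's Bluetooth data or its exact NMF implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for Bluetooth-derived weights: W[t, e] is the connection
# strength of edge e (a participant pair) in time window t, generated
# from two planted subgroups that each "meet" at different times.
n_windows, n_edges, rank = 40, 30, 2
H_true = rng.uniform(0.5, 1.0, (rank, n_edges)) * (rng.uniform(size=(rank, n_edges)) > 0.6)
G_true = np.zeros((n_windows, rank))
G_true[5:12, 0] = 1.0      # subgroup 1 active (e.g., a Monday meeting)
G_true[25:33, 1] = 1.0     # subgroup 2 active (e.g., a Thursday meeting)
W = G_true @ H_true + 0.01 * rng.uniform(size=(n_windows, n_edges))

# Non-negative matrix factorization by multiplicative updates:
# W ≈ G @ H with G, H >= 0.
eps = 1e-9                 # guards against division by zero
G = rng.uniform(0.1, 1.0, (n_windows, rank))
H = rng.uniform(0.1, 1.0, (rank, n_edges))
for _ in range(500):
    H *= (G.T @ W) / (G.T @ G @ H + eps)
    G *= (W @ H.T) / (G @ H @ H.T + eps)

rel_err = np.linalg.norm(W - G @ H) / np.linalg.norm(W)
print(rel_err)  # small: the rank-2 structure is recovered
```

The columns of G then serve the role described in the abstract: their time courses can be compared against known events (such as scheduled meetings) to interpret each recovered subgroup.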