ABSTRACT
BACKGROUND: Many factors contribute to developing and conducting a successful multi-data source, non-interventional, post-authorization safety study (NI-PASS) for submission to multiple health authorities. Such studies are often large undertakings; evaluating and sharing lessons learned can provide useful insights to others considering similar studies. OBJECTIVES: We discuss challenges and key methodological and organizational factors that led to the delivery of a successful post-marketing requirement (PMR)/PASS program investigating the risk of cardiovascular and cancer events among users of mirabegron, an oral medication for the treatment of overactive bladder. RESULTS: We provide context and share learnings, including sections on research program collaboration, scientific transparency, organizational approach, mitigation of uncertainty around potential delays, validity of study outcomes, selection of data sources and optimizing patient numbers, choice of comparator groups and enhancing precision of estimates of associations, potential confounding and generalizability of study findings, and interpretation of results. CONCLUSIONS: This large PMR/PASS program was a long-term commitment from all parties and benefited from an effective coordinating center and extensive scientific interactions across research partners, scientific advisory board, study sponsor, and health authorities, and delivered useful learnings related to the design and organization of multi-data source NI-PASS.
Subjects
Acetanilides , Product Surveillance, Postmarketing , Thiazoles , Urinary Bladder, Overactive , Humans , Thiazoles/adverse effects , Thiazoles/administration & dosage , Product Surveillance, Postmarketing/methods , Urinary Bladder, Overactive/drug therapy , Acetanilides/adverse effects , Acetanilides/administration & dosage , Acetanilides/therapeutic use , Pharmacoepidemiology , Cardiovascular Diseases/prevention & control , Cardiovascular Diseases/epidemiology , Research Design , Urological Agents/adverse effects , Urological Agents/administration & dosage , Information Sources
ABSTRACT
BACKGROUND: The rapid growth of research in artificial intelligence (AI) and machine learning (ML) continues. However, it is unclear whether this growth reflects an increase in desirable study attributes or merely perpetuates the same issues previously raised in the literature. OBJECTIVE: This study aims to evaluate temporal trends in AI/ML studies over time and identify variations that are not apparent from aggregated totals at a single point in time. METHODS: We identified AI/ML studies registered on ClinicalTrials.gov with start dates between January 1, 2010, and December 31, 2023. Studies were included if AI/ML-specific terms appeared in the official title, detailed description, brief summary, intervention, primary outcome, or sponsors' keywords. Studies registered as systematic reviews and meta-analyses were excluded. We reported trends in AI/ML studies over time, along with study characteristics that were fast-growing and those that remained unchanged during 2010-2023. RESULTS: Of 3106 AI/ML studies, only 7.6% (n=235) were regulated by the US Food and Drug Administration. The most common study characteristics were randomized (56.2%; 670/1193; interventional) and prospective (58.9%; 1126/1913; observational) designs; a focus on diagnosis (28.2%; 335/1190) and treatment (24.4%; 290/1190); hospital/clinic (44.2%; 1373/3106) or academic (28%; 869/3106) sponsorship; and neoplasm (12.9%; 420/3245), nervous system (12.2%; 395/3245), cardiovascular (11.1%; 356/3245) or pathological conditions (10%; 325/3245; multiple counts per study possible). Enrollment data were skewed to the right: maximum 13,977,257; mean 16,962 (SD 288,155); median 255 (IQR 80-1000). The most common size category was 101-1000 (44.8%; 1372/3061; excluding withdrawn or missing), but large studies (n>1000) represented 24.1% (738/3061) of all studies: 29% (551/1898) of observational studies and 16.1% (187/1163) of trials. Study locations were predominantly in high-income countries (75.3%; 2340/3106), followed by upper-middle-income (21.7%; 675/3106), lower-middle-income (2.8%; 88/3106), and low-income countries (0.1%; 3/3106). The fastest-growing characteristics over time were high-income countries (location); Europe, Asia, and North America (location); diagnosis and treatment (primary purpose); hospital/clinic and academia (lead sponsor); randomized and prospective designs; and the 1-100 and 101-1000 size categories. Only 5.6% (47/842) of completed studies had results available on ClinicalTrials.gov, and this pattern persisted. Over time, there was an increase in not only the number of newly initiated studies, but also the number of completed studies without posted results. CONCLUSIONS: Much of the rapid growth in AI/ML studies comes from high-income countries in high-resource settings, albeit with a modest increase in upper-middle-income countries (mostly China). Lower-middle-income or low-income countries remain poorly represented. The increase in randomized or prospective designs, along with 738 large studies (n>1000), mostly ongoing, may indicate that enough studies are shifting from an in silico evaluation stage toward a prospective comparative evaluation stage. However, the ongoing limited availability of basic results on ClinicalTrials.gov contrasts with this field's rapid advancements and the public registry's role in reducing publication and outcome reporting biases.
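As a rough illustration of the screening step described above, the sketch below filters a locally exported table of registry records for AI/ML-specific terms across the fields named in the abstract; the column names and term list are assumptions for illustration, not the study's actual search strategy or the registry's export schema.

```python
import re
import pandas as pd

# Hypothetical AI/ML term list; the study's actual search strategy is not
# reproduced here, so these patterns are illustrative only.
AI_ML_PATTERNS = [
    r"artificial intelligence", r"machine learning", r"deep learning",
    r"neural network", r"natural language processing",
]
PATTERN = re.compile("|".join(AI_ML_PATTERNS), flags=re.IGNORECASE)

# Fields screened in the abstract: title, descriptions, intervention,
# primary outcome, and sponsor keywords (column names are assumed).
FIELDS = ["official_title", "detailed_description", "brief_summary",
          "intervention", "primary_outcome", "keywords"]

def is_ai_ml_study(record: pd.Series) -> bool:
    """Flag a registry record if any screened field mentions an AI/ML term."""
    text = " ".join(str(record.get(f, "")) for f in FIELDS)
    return bool(PATTERN.search(text))

def screen(registry: pd.DataFrame) -> pd.DataFrame:
    """Return matching records, excluding reviews/meta-analyses (column assumed)."""
    matches = registry[registry.apply(is_ai_ml_study, axis=1)]
    not_review = ~matches["study_type"].str.contains(
        "systematic review|meta-analysis", case=False, na=False)
    return matches[not_review]
```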
Subjects
Artificial Intelligence , Machine Learning , Artificial Intelligence/trends , Machine Learning/trends , Cross-Sectional Studies , Humans , United States , Registries
ABSTRACT
BACKGROUND: Big data from large, government-sponsored surveys and data sets offers researchers opportunities to conduct population-based studies of important health issues in the United States, as well as develop preliminary data to support proposed future work. Yet, navigating these national data sources is challenging. Despite the widespread availability of national data, there is little guidance for researchers on how to access and evaluate the use of these resources. OBJECTIVE: Our aim was to identify and summarize a comprehensive list of federally sponsored, health- and health care-related data sources that are accessible in the public domain in order to facilitate their use by researchers. METHODS: We conducted a systematic mapping review of government sources of health-related data on US populations and with active or recent (previous 10 years) data collection. The key measures were government sponsor, overview and purpose of data, population of interest, sampling design, sample size, data collection methodology, type and description of data, and cost to obtain data. Convergent synthesis was used to aggregate findings. RESULTS: Among 106 unique data sources, 57 met the inclusion criteria. Data sources were classified as survey or assessment data (n=30, 53%), trends data (n=27, 47%), summative processed data (n=27, 47%), primary registry data (n=17, 30%), and evaluative data (n=11, 19%). Most (n=39, 68%) served more than 1 purpose. The population of interest included individuals/patients (n=40, 70%), providers (n=15, 26%), and health care sites and systems (n=14, 25%). The sources collected data on demographic (n=44, 77%) and clinical information (n=35, 61%), health behaviors (n=24, 42%), provider or practice characteristics (n=22, 39%), health care costs (n=17, 30%), and laboratory tests (n=8, 14%). Most (n=43, 75%) offered free data sets. CONCLUSIONS: A broad scope of national health data is accessible to researchers. These data provide insights into important health issues and the nation's health care system while eliminating the burden of primary data collection. Data standardization and uniformity were uncommon across government entities, highlighting a need to improve data consistency. Secondary analyses of national data are a feasible, cost-efficient means to address national health concerns.
Subjects
Delivery of Health Care , Information Sources , Humans , United States , Health Care Costs , Government , Surveys and Questionnaires
ABSTRACT
BACKGROUND: Effective animal health surveillance systems require reliable, high-quality, and timely data for decision making. In Tanzania, the animal health surveillance system has been relying on a few data sources, which suffer from delays in reporting, underreporting, and high cost of data collection and transmission. The integration of data from multiple sources can enhance early detection and response to animal diseases and facilitate the early control of outbreaks. This study aimed to identify and assess existing and potential data sources for the animal health surveillance system in Tanzania and how they can be better used for early warning surveillance. The study used a mixed-method design to identify and assess data sources. Data were collected through document reviews, internet search, cross-sectional survey, key informant interviews, site visits, and non-participant observation. The assessment was done using pre-defined criteria. RESULTS: A total of 13 data sources were identified and assessed. Most surveillance data came from livestock farmers, slaughter facilities, and livestock markets; while animal dip sites were the least used sources. Commercial farms and veterinary shops, electronic surveillance tools like AfyaData and Event Mobile Application (EMA-i) and information systems such as the Tanzania National Livestock Identification and Traceability System (TANLITS) and Agricultural Routine Data System (ARDS) show potential to generate relevant data for the national animal health surveillance system. The common variables found across most sources were: the name of the place (12/13), animal type/species (12/13), syndromes (10/13) and number of affected animals (8/13). The majority of the sources had good surveillance data contents and were accessible with medium to maximum spatial coverage. However, there was significant variation in terms of data frequency, accuracy and cost. There were limited integration and coordination of data flow from the identified sources with minimum to non-existing automated data entry and transmission. CONCLUSION: The study demonstrated how the available data sources have great potential for early warning surveillance in Tanzania. Both existing and potential data sources had complementary strengths and weaknesses; a multi-source surveillance system would be best placed to harness these different strengths.
Subjects
Animal Diseases/epidemiology , Disease Outbreaks/veterinary , Epidemiological Monitoring/veterinary , Animals , Information Storage and Retrieval , Livestock , Tanzania/epidemiology
ABSTRACT
Autonomous driving systems rely heavily on the quality of the data from sensors for tasks such as localization and navigation. In this work, we present an integrity monitoring framework that can assess the quality of multimodal data from exteroceptive sensors. The proposed multisource coherence-based integrity assessment framework is capable of handling highway as well as complex semi-urban and urban scenarios. To achieve such generalization and scalability, we employ a semantic-grid data representation, which can efficiently represent the surroundings of the vehicle. The proposed method is used to evaluate the integrity of sources in several scenarios, and the integrity markers generated are used for identifying and quantifying unreliable data. A particular focus is given to real-world complex scenarios obtained from publicly available datasets where localization integrity requirements are of high importance. Those scenarios are examined to evaluate the performance of the framework and to provide proof of concept. We also establish the importance of the proposed integrity assessment framework in context-based localization applications for autonomous vehicles. The proposed method applies integrity assessment concepts from the field of aviation to ground vehicles and provides Protection Level markers (Horizontal, Lateral, Longitudinal) for perception systems used for vehicle localization.
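The abstract does not give the exact coherence computation; a minimal sketch of the idea, assuming each exteroceptive sensor has already been projected into a common semantic grid, is to score each source by its agreement with the others and treat low-agreement sources as unreliable:

```python
import numpy as np

def coherence(grid_a: np.ndarray, grid_b: np.ndarray) -> float:
    """Fraction of cells on which two semantic grids agree.

    Grids are integer label maps of identical shape (e.g., free/occupied/
    road/obstacle classes) built from two exteroceptive sensors.
    """
    return float(np.mean(grid_a == grid_b))

def integrity_markers(grids: dict[str, np.ndarray]) -> dict[str, float]:
    """Assign each source the mean agreement with every other source.

    A source that disagrees with the consensus of the others receives a low
    marker and can be down-weighted or excluded from localization.
    """
    names = list(grids)
    markers = {}
    for name in names:
        others = [coherence(grids[name], grids[o]) for o in names if o != name]
        markers[name] = float(np.mean(others)) if others else 1.0
    return markers

# Example with three sources over a 100x100 semantic grid (labels 0..3);
# the "radar" grid is simulated as a faulty source.
rng = np.random.default_rng(0)
base = rng.integers(0, 4, size=(100, 100))
lidar, camera = base.copy(), base.copy()
radar = rng.integers(0, 4, size=(100, 100))
print(integrity_markers({"lidar": lidar, "camera": camera, "radar": radar}))
```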
ABSTRACT
BACKGROUND: The choice of cost data sources is crucial, because it influences the results of cost studies, decisions of hospital managers and ultimately national directives of policy makers. The main objective of this study was to compare a hospital cost accounting system in a French hospital group and the national cost study (ENC) considering the cost of organ recovery procedures. The secondary objective was to compare these approaches to the weighting method used in the ENC to assess organ recovery costs. METHODS: The resources consumed during the hospital stay and organ recovery procedure were identified and quantified retrospectively from hospital discharge abstracts and the national discharge abstract database. Identified items were valued using hospital cost accounting, followed by 2010-2011 ENC data, and then weighted using 2010-2011 ENC data. A Kruskal-Wallis test was used to determine whether at least two of the cost databases provided different results. Then, a Mann-Whitney test was used to compare the three cost databases. RESULTS: The costs assessed using hospital cost accounting differed significantly from those obtained using the ENC data (Mann-Whitney; P-value < 0.001). In the ENC, the mean costs for hospital stays and organ recovery procedures were determined to be 4961 (SD 7295) and 862 (SD 887), respectively, versus 12,074 (SD 6956) and 4311 (SD 1738) for the hospital cost accounting assessment. The use of a weighted methodology reduced the differences observed between these two data sources. CONCLUSIONS: Readers, hospital managers and decision makers must know the strengths and weaknesses of each database to interpret the results in an informed context.
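As a minimal sketch of the statistical comparison described above (with placeholder cost values, not the study's data), the global Kruskal-Wallis test and pairwise Mann-Whitney tests can be run as follows:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder per-stay cost vectors for the three valuation approaches
# (hospital cost accounting, ENC, weighted ENC); real study data not shown.
costs = {
    "hospital_accounting": [11800.0, 13200.0, 9800.0, 12550.0, 14100.0],
    "enc":                 [4200.0, 5600.0, 3900.0, 5100.0, 6020.0],
    "enc_weighted":        [8100.0, 9400.0, 7600.0, 8900.0, 9800.0],
}

# Global test: do at least two of the cost databases give different results?
h_stat, p_global = kruskal(*costs.values())
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_global:.4f}")

# Pairwise comparisons between databases (the abstract's Mann-Whitney step).
for a, b in combinations(costs, 2):
    u_stat, p = mannwhitneyu(costs[a], costs[b], alternative="two-sided")
    print(f"{a} vs {b}: U={u_stat:.1f}, p={p:.4f}")
```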
ABSTRACT
BACKGROUND: Drug data have been used to estimate the prevalence of chronic diseases. Disease registries and annual surveys are lacking, especially in less-developed regions. At the same time, insurance drug data and self-reports of medications are easily accessible and inexpensive. We aim to investigate the similarity of prevalence estimates between self-report data and drug data for some chronic diseases in a less-developed setting in southwestern Iran. METHODS: Baseline data from the Pars Cohort Study (PCS) were re-analyzed. The use of disease-related drugs was compared against the self-report of each disease (hypertension [HTN], diabetes mellitus [DM], heart disease, stroke, chronic obstructive pulmonary disease [COPD], sleep disorder, anxiety, depression, gastroesophageal reflux disease [GERD], irritable bowel syndrome [IBS], and functional constipation [FC]). We used sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and the Jaccard similarity index. RESULTS: The top five similarities were observed in DM (54%), HTN (53%), heart disease (32%), COPD (30%), and GERD (15%). The similarity between drug use and self-report was found to be low in IBS (2%), stroke (5%), depression (9%), sleep disorders (10%), and anxiety disorders (11%). CONCLUSION: Self-reports of diseases and the drug data show a different picture of the prevalence of most diseases in our setting. It seems that drug data alone cannot estimate the prevalence of diseases in settings similar to ours. We recommend using drug data in combination with self-report data for epidemiological investigation in less-developed settings.
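A minimal sketch of the agreement metrics named in the methods, treating self-report as the reference and computing sensitivity, specificity, PPV, NPV, and the Jaccard similarity index from paired binary indicators (toy data, not the PCS cohort):

```python
import numpy as np

def agreement_metrics(drug_use: np.ndarray, self_report: np.ndarray) -> dict:
    """Agreement between drug-based and self-reported disease indicators.

    Both inputs are 0/1 arrays over the same participants; self-report is
    treated as the reference, mirroring the abstract's comparison.
    """
    tp = int(np.sum((drug_use == 1) & (self_report == 1)))
    fp = int(np.sum((drug_use == 1) & (self_report == 0)))
    fn = int(np.sum((drug_use == 0) & (self_report == 1)))
    tn = int(np.sum((drug_use == 0) & (self_report == 0)))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
        # Jaccard similarity: overlap of the two "disease present" sets.
        "jaccard": tp / (tp + fp + fn) if tp + fp + fn else float("nan"),
    }

# Toy example: 10 participants, hypertension drug use vs. self-reported HTN.
drug = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
report = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1])
print(agreement_metrics(drug, report))
```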
Subjects
Self Report , Humans , Chronic Disease , Prevalence , Male , Female , Iran/epidemiology , Middle Aged , Adult , Aged , Cohort Studies , Sensitivity and Specificity
ABSTRACT
When investigating the relationship between the acoustic environment and human wellbeing, there is a potential problem resulting from data source self-correlation. To address this data source self-correlation problem, we proposed a third-party assessment combined with an artificial intelligence (TPA-AI) model. The TPA-AI utilized acoustic spectrograms to assess the soundscape's affective quality. First, we collected data on public perceptions of urban sounds (i.e., inviting 100 volunteers to label the affective quality of 7051 10-s audios on a polar scale from annoying to pleasant). Second, we converted the labeled audios to acoustic spectrograms and used deep learning methods to train the TPA-AI model, achieving a 92.88 % predictive accuracy for binary classification. Third, geographic ecological momentary assessment (GEMA) was used to log momentary audios from 180 participants in their daily life context, and we employed the well-trained TPA-AI model to predict the affective quality of these momentary audios. Lastly, we compared the explanatory power of the three methods (i.e., sound level meters, sound questionnaires, and the TPA-AI model) when estimating the relationship between momentary stress level and the acoustic environment. Our results indicate that the TPA-AI's explanatory power outperformed the sound level meter, while using a sound questionnaire might overestimate the effect of the acoustic environment on momentary stress and underestimate other confounders.
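The abstract does not specify the TPA-AI architecture or training details; the following is a minimal sketch, under assumed choices (log-mel spectrograms, a tiny convolutional network, binary annoying-vs-pleasant output), of how a 10-s clip could be converted to a spectrogram and scored:

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def audio_to_spectrogram(path: str, sr: int = 22050, n_mels: int = 64) -> torch.Tensor:
    """Convert a 10-s audio clip into a log-mel spectrogram tensor (1 x mels x frames)."""
    y, sr = librosa.load(path, sr=sr, duration=10.0)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(log_mel).float().unsqueeze(0)

class AffectiveQualityCNN(nn.Module):
    """Tiny CNN producing a binary annoying-vs-pleasant score for a spectrogram."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # logits: [annoying, pleasant]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: score one clip (placeholder path; training loop with labeled audios omitted).
model = AffectiveQualityCNN()
spec = audio_to_spectrogram("example_clip.wav").unsqueeze(0)  # batch of 1
pleasant_prob = torch.softmax(model(spec), dim=1)[0, 1].item()
```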
Subjects
Artificial Intelligence , Humans , Noise , Acoustics , Ecological Momentary Assessment , Adult
ABSTRACT
Telehealth presents both the potential to improve access to care and the risk of widening the digital divide that contributes to health care disparities, obliging health care systems to standardize approaches for measuring and displaying telehealth disparities. Based on a literature review and the operational experience of clinicians, informaticists, and researchers in the Supporting Pediatric Research on Outcomes and Utilization of Telehealth (SPROUT)-Clinical and Translational Science Awards (CTSA) Network, we outline a strategic framework for health systems to develop and optimally use a telehealth equity dashboard through a 3-phased approach of (1) defining data sources and key equity-related metrics of interest; (2) designing a dynamic and user-friendly dashboard; and (3) deploying the dashboard to maximize engagement among clinical staff, investigators, and administrators.
ABSTRACT
The aim of this scoping review was to assist researchers who want to use survey data, either in academic or community settings, to identify and comprehend health disparities affecting Native Hawaiian (NH), Pacific Islander (PI), and/or Filipino populations, as these are groups with known and numerous health disparities. The scoping review methodology was used to identify survey datasets that disaggregate data for NH, PI, or Filipinos. Healthdata.gov was searched, as there is no official index of databases. The website was established by the United States (US) Department of Health and Human Services to increase accessibility of health data for entrepreneurs, researchers, and policy makers, with the ultimate goal of improving health outcomes. Using the search term 'survey,' 332 datasets were retrieved, many of which were duplicates from different years. Datasets were included that met the following criteria: (1) related to health; (2) disaggregated NH, PI, and/or Filipino subgroups; (3) administered in the US; (4) publicly available; (5) individual-level data; (6) self-reported information; and (7) contained data from 2010 or later. Fifteen survey datasets met the inclusion criteria. Two additional survey datasets were identified by colleagues. For each dataset, the dataset name, data source, years of data availability, availability of disaggregated NH, PI, and/or Filipino data, data on health outcomes and social determinants of health, and website information were documented. This inventory of datasets should be of use to researchers who want to advance understanding of health disparities experienced by NH, PI, and Filipino populations in the US.
Subjects
Biomedical Research , Health Inequities , Native Hawaiian or Other Pacific Islander , Pacific Island People , Humans , Asian People , Hawaii , Surveys and Questionnaires , United States , Health Disparate, Minority and Vulnerable Populations , United States Dept. of Health and Human Services , Databases, Factual , Southeast Asian People
ABSTRACT
BACKGROUND: On February 25, 2022, Russian forces took control of the Chernobyl power plant after continuous fighting within the Chernobyl exclusion zone. Events continued throughout March, raising the risk of contamination of previously uncontaminated areas and the potential for impacts on human and environmental health. The disruption of war has caused interruptions to normal preventive activities, and radiation monitoring sensors have been nonfunctional. Open-source intelligence can be informative when formal reporting and data are unavailable. OBJECTIVE: This paper aimed to demonstrate the value of open-source intelligence in Ukraine to identify signals of potential radiological events of health significance during the Ukrainian conflict. METHODS: Data were collected between February 1 and March 20, 2022, using search terms for radiobiological events and acute radiation syndrome detection in 2 open-source intelligence (OSINT) systems, EPIWATCH and Epitweetr. RESULTS: Both EPIWATCH and Epitweetr identified signals of potential radiobiological events throughout Ukraine, particularly on March 4 in Kyiv, Bucha, and Chernobyl. CONCLUSIONS: Open-source data can provide valuable intelligence and early warning about potential radiation hazards in conditions of war, where formal reporting and mitigation may be lacking, to enable timely emergency and public health responses.
ABSTRACT
OBJECTIVE: The purpose of this study was to develop a framework to assess the quality of healthcare data sources. MATERIALS AND METHODS: First, a systematic review was performed and a thematic analysis of the included literature was conducted to identify items relating to the quality of healthcare data sources. Second, expert advisory group meetings were held to explore experts' perceptions of the results of the review and identify gaps in the findings. Third, a framework was developed based on the findings. RESULTS: Synthesis of the review results and expert advisory group meetings resulted in 8 parent themes and 22 subthemes. The parent themes were: Governance, leadership, and management; Data; Trust; Context; Monitoring; Use of information; Standardization; Learning and training. The 22 subthemes were: governance, finance, organization, characteristics, time, data management, data quality, ethics, access, security, quality improvement, monitoring and feedback, dissemination, analysis, research, standards, linkage, infrastructure, documentation, definitions and classification, learning, and training. DISCUSSION: The framework presented here was developed using a robust methodology that included reviewing the literature and extracting data source quality items, filtering and matching items, developing a list of themes, and revising them based on expert opinion. To the best of our knowledge, this study is the first to apply a systematic approach to identify aspects related to the quality of healthcare data sources. CONCLUSIONS: The framework can assist those using healthcare data sources to identify and assess the quality of a data source and to determine whether the data sources used are fit for their intended use.
Subjects
Delivery of Health Care , Health Facilities , Information Storage and Retrieval , Leadership
ABSTRACT
BACKGROUND: Skin care for maintaining skin integrity includes cleansing, skin product use, and photoprotection. Inappropriate skin care can lead to skin problems. AIMS: To evaluate the knowledge, attitudes, and practices in skin care among Thai adolescents. PATIENTS/METHODS: Questionnaire-based, descriptive, cross-sectional study. RESULTS: A total of 588 Thai adolescent students (mean age: 15.6 ± 1.8 years, 50.5% female) were included. Of those who responded, 99.5% knew the benefits of cleansing, and 95.9% knew the benefits of skin care products. Skin products, moisturizer, and sunscreen were used by 87.8%, 80.8%, and 71.5% of students, respectively. Female teenagers used moisturizers, cosmetics, and sunscreen significantly more than males (p = 0.001, p = 0.001, and p < 0.001, respectively). High school teenagers applied cosmetics more than junior high school teenagers (p = 0.004). Ninety-three percent of adolescents knew the effects of sunlight, but only 27.4% regularly applied sunscreen. The sources of knowledge were other people (in person), online social media, print media, and television/radio for 88.5%, 77.5%, 30.7%, and 26.1% of respondents, respectively. Information from physicians and parents was trusted by 65.3% and 64.2%, respectively. Most (74.1%) adolescents searched for information from more than 1 source. Adolescent females and high school adolescents demonstrated significantly more accurate knowledge and practice in cleansing and photoprotection (p < 0.001) compared with adolescent males and junior high school adolescents. Knowledge and practices did not significantly correlate with underlying skin diseases or monthly allowance. CONCLUSION: Gender and education level were found to significantly influence knowledge and practice in skin care among adolescents in Thailand.
Subjects
Health Knowledge, Attitudes, Practice , Sunscreening Agents , Adolescent , Cross-Sectional Studies , Female , Humans , Male , Skin Care , Surveys and Questionnaires , Thailand
ABSTRACT
BACKGROUND: Irrational antimicrobial consumption (AMC) became one of the main global health problems in recent decades. OBJECTIVE: In order to understand AMC in the Latin American Region, we performed the present research in 6 countries. METHODS: Antimicrobial consumption (J01, A07A, P01AB groups) was registered in Argentina, Chile, Colombia, Costa Rica, Paraguay, and Peru. The variables explored were source of information, AMC type, DDD (Defined Daily Doses), DID (DDD/1000 inhabitants/day), and population. Data were analyzed using the Global Antimicrobial Resistance and Use Surveillance System (GLASS) tool. RESULTS: Sources of information included data from global, public, and private sectors. Total AMC was highly variable (range 1.91-36.26 DID). Penicillins were the most consumed group in all countries except Paraguay, while macrolides and lincosamides ranked second. In terms of type of AMC according to the WHO-AWaRe classification, it was found that for certain groups, such as "Reserve," there are similarities among all countries. CONCLUSION AND RELEVANCE: This paper shows the progress that 6 Latin American countries have made toward AMC surveillance. The study provides a standardized approach for building a national surveillance system for AMC data analysis. These steps will contribute to the inclusion of Latin America among the regions of the world that have periodic, regular, and quality data on AMC.
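The DID metric reported above is defined as DDD per 1000 inhabitants per day; a small worked sketch with illustrative figures (not taken from the study) is:

```python
def did(total_grams: float, who_ddd_grams: float,
        population: int, days: int) -> float:
    """DDD per 1000 inhabitants per day (DID).

    total_grams   : total amount of the antimicrobial consumed in the period
    who_ddd_grams : WHO-assigned Defined Daily Dose for that substance
    population    : covered population
    days          : number of days in the period
    """
    total_ddd = total_grams / who_ddd_grams
    return total_ddd * 1000 / (population * days)

# Illustrative figures only: 1.5 tonnes of amoxicillin (WHO DDD = 1.5 g)
# consumed over one year by a population of 10 million inhabitants.
print(round(did(1_500_000, 1.5, 10_000_000, 365), 2))  # ~0.27 DID
```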
Subjects
Anti-Bacterial Agents , Anti-Bacterial Agents/therapeutic use , Argentina , Chile , Colombia , Humans , Latin America/epidemiology
ABSTRACT
A domestic electrical storage water heater (i.e., DESWH) is one of the 14 products listed in China's Waste Electrical and Electronic Products Disposal Catalogue (Batch 2). Due to the lack of systematic quantitative analysis of the waste quantity and recovery value of DESWHs, a hybrid multi-data-source methodology drawing on quarterly sales data, survey data and internet data is proposed. In the methodology, the seasonal Mann-Kendall trend test is used to identify the seasonal trait of the quarterly sales data for DESWHs, and an accurate prediction of the sales volume of DESWHs is obtained via a generalised seasonal grey model with dynamic seasonal adjustment factors. Then the lifespan distribution of DESWHs is fitted based on the survey data, and the quantity of wasted DESWHs is estimated from 2012Q1 to 2038Q4. Finally, on the basis of the data crawled from the internet, the weight distribution of DESWHs is constructed, and the recycling value of wasted DESWHs is then calculated. The empirical results show that the waste quantity of DESWHs will increase greatly from 2012Q1 to 2022Q4 and that the recycling value of wasted DESWHs may reach 18.48 billion yuan in China. The results show that wasted DESWHs have a great recycling value, and that the proposed hybrid multi-data-source methodology can be used as an effective estimation method for the recycling value of electronic waste.
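The core waste-quantity step, distributing each quarter's sales over later quarters according to a fitted lifespan distribution, can be sketched as below; the Weibull shape and scale values are assumptions for illustration, not the parameters fitted in the study:

```python
import numpy as np
from scipy.stats import weibull_min

def waste_by_quarter(sales: np.ndarray, shape: float, scale_quarters: float) -> np.ndarray:
    """Distribute each quarter's sales over future quarters by a Weibull lifespan.

    sales[i] is the number of units sold in quarter i; the result has the same
    indexing and gives expected units discarded in each quarter (the tail beyond
    the observed horizon is ignored in this sketch).
    """
    horizon = len(sales)
    quarters = np.arange(horizon + 1)
    cdf = weibull_min.cdf(quarters, c=shape, scale=scale_quarters)
    discard_prob = np.diff(cdf)            # P(discard in quarter k after sale)
    waste = np.zeros(horizon)
    for i, sold in enumerate(sales):
        span = horizon - i
        waste[i:i + span] += sold * discard_prob[:span]
    return waste

# Illustrative inputs: 8 quarters of sales and a Weibull lifespan with shape 2.0
# and a scale of ~45 quarters (roughly a 10-year mean life).
sales = np.array([100, 110, 105, 120, 130, 125, 140, 150], dtype=float)
print(waste_by_quarter(sales, shape=2.0, scale_quarters=45.0).round(2))
```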
Subjects
Electronic Waste , Waste Management , China , Electronic Waste/analysis , Information Storage and Retrieval , Recycling , Waste Water , Water
ABSTRACT
Recent advances in technology have led to the rise of new-age data sources (e.g., the Internet of Things (IoT), wearables, social media, and mobile health). IoT is becoming ubiquitous, and data generation is accelerating globally. Other health research domains have used IoT as a data source, but its potential has not been thoroughly explored or utilized systematically in public health surveillance. This article summarizes the existing literature on the use of IoT as a data source for surveillance. It presents the shortcomings of current data sources and how NextGen data sources, including large-scale applications of IoT, can meet the needs of surveillance. The opportunities and challenges of using these modern data sources in public health surveillance are also explored. These IoT data ecosystems are generated with minimal effort by device users and benefit from high granularity, objectivity, and validity. Advances in computing are now bringing IoT-based surveillance into the realm of possibility. The potential advantages of IoT data include high-frequency, high-volume, zero-effort data collection, with potential applications in syndromic surveillance. In contrast, the critical challenges to mainstreaming this data source within surveillance systems are the huge volume and variety of the data, fusing data from multiple devices to produce a unified result, and the lack of multidisciplinary professionals able to understand the domain and analyze the domain data accordingly.
Subjects
Internet of Things , Social Media , Telemedicine , Ecosystem , Humans , Public Health Surveillance
ABSTRACT
OBJECTIVES: The Danish Multiple Sclerosis Registry is the oldest operative and nationwide MS registry. We present the Danish Multiple Sclerosis Registry with its history, data collection, scientific contribution, and national and international research collaboration. MATERIALS AND METHODS: A detailed description of data collection, completeness, quality-optimizing procedures, funding, and legal, ethical and data protection issues is provided. RESULTS: By the start of May 2020, the total number of registered cases with clinically isolated syndrome or multiple sclerosis since 1956 was 30,023, of whom 16,515 were alive and residing in Denmark, giving a prevalence rate of about 284 per 100,000 population. The mean annual number of new cases receiving an MS diagnosis was 649 per year in the period 2010 to 2019. In total, 7,945 patients (48.1%) were receiving disease-modifying therapy at the start of May 2020. CONCLUSIONS: Multiple sclerosis registries are becoming increasingly important, not only for epidemiological research but also for quantifying the burden of the disease for patients and society and for helping health care providers and regulators in their decisions. The Danish Multiple Sclerosis Registry has served as a data source for a number of scientific publications, including epidemiological studies on changes in incidence and mortality, cohort studies investigating risk factors for developing MS, comorbidities and socioeconomic outcomes in the MS population, and observational studies on the effectiveness of disease-modifying treatments outside the narrow realms of randomized clinical trials.
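As a quick check of the quoted prevalence, dividing the living registered cases by the Danish population (the population figure below is an assumption, not stated in the abstract) reproduces roughly 284 per 100,000:

```python
# Back-of-the-envelope check of the quoted prevalence rate.
alive_cases = 16_515
denmark_population = 5_820_000           # approximate mid-2020 figure, assumed
prevalence_per_100k = alive_cases / denmark_population * 100_000
print(round(prevalence_per_100k))        # ~284
```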
Subjects
Multiple Sclerosis , Denmark/epidemiology , Humans , Incidence , Multiple Sclerosis/epidemiology , Prevalence , Registries
ABSTRACT
Background: Digital data sources have become ubiquitous in modern culture in the era of digital technology but often tend to be under-researched because of restricted access to data sources due to fragmentation, privacy issues, or industry ownership, and because of the methodological complexity of demonstrating their measurable impact on human health. Even though new big data sources have shown unprecedented potential for disease diagnosis and outbreak detection, we need to investigate results in the existing literature to gain a comprehensive understanding of their impact on and benefits to human health. Objective: To conduct a systematic review of systematic reviews identifying digital data sources and their areas of impact on people's health, including challenges, opportunities, and good practices. Methods: A multidatabase search was performed. Peer-reviewed papers published between January 2010 and November 2020 relevant to digital data sources on health were extracted, assessed, and reviewed. Results: The 64 reviews fall within three domains, that is, universal health coverage (UHC), public health emergencies, and healthier populations, as defined in WHO's General Programme of Work, 2019-2023, and the European Programme of Work, 2020-2025. In all three categories, social media platforms are the most popular digital data source, accounting for 47% (N = 8), 84% (N = 11), and 76% (N = 26) of studies, respectively. The second most utilized data source is electronic health records (EHRs) (N = 13), followed by websites (N = 7) and mass media (N = 5). In all three categories, the most studied impact of digital data sources is on the prevention, management, and intervention of diseases (N = 40); there are also many studies (N = 10) on their use in early warning systems for infectious diseases. However, digital data sources can also pose health hazards (N = 13), for instance, by exacerbating mental health issues and promoting smoking and drinking behavior among young people. Conclusions: The digital data sources presented are essential for collecting and mining information about human health. The key impact of social media, electronic health records, and websites is in the area of infectious diseases and early warning systems, and in the area of personal health, that is, mental health and smoking and drinking prevention. However, further research is required to address privacy, trust, transparency, and interoperability in order to leverage the potential of data held in multiple datastores and systems. This study also identified an apparent gap in systematic reviews investigating novel big data streams, Internet of Things (IoT) data streams, and sensor, mobile, and GPS data researched using artificial intelligence, complex networks, and other computer science methods, as systematic reviews are not yet common in this domain.
Subjects
Artificial Intelligence , Mental Health , Adolescent , Disease Outbreaks , Humans , Information Storage and Retrieval , Systematic Reviews as Topic
ABSTRACT
INTRODUCTION: This systematic review aimed to analyse the performance of the Integrated Disease Surveillance and Response (IDSR) strategy in Sub-Saharan Africa (SSA) and how its implementation has embraced advancement in information technology, big data analytics techniques and wealth of data sources. METHODS: HINARI, PubMed, and advanced Google Scholar databases were searched for eligible articles. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols. RESULTS: A total of 1,809 articles were identified and screened at two stages. Forty-five studies met the inclusion criteria, of which 35 were country-specific, seven covered the SSA region, and three covered 3-4 countries. Twenty-six studies assessed the IDSR core functions, 43 the support functions, while 24 addressed both functions. Most of the studies involved Tanzania (9), Ghana (6) and Uganda (5). The routine Health Management Information System (HMIS), which collects data from health care facilities, has remained the primary source of IDSR data. However, the system is characterised by inadequate data completeness, timeliness, quality, analysis and utilisation, and lack of integration of data from other sources. Under-use of advanced and big data analytical technologies in performing disease surveillance and relating multiple indicators minimises the optimisation of clinical and practice evidence-based decision-making. CONCLUSIONS: This review indicates that most countries in SSA rely mainly on traditional indicator-based disease surveillance utilising data from healthcare facilities with limited use of data from other sources. It is high time that SSA countries consider and adopt multi-sectoral, multi-disease and multi-indicator platforms that integrate other sources of health information to provide support to effective detection and prompt response to public health threats.
ABSTRACT
Common visual features used in target tracking, including colour and grayscale, are prone to failure against a confusingly similar-looking background. As the technology of three-dimensional visual information acquisition has gradually gained ground in recent years, the conditions for the wide use of depth information in target tracking have become available. This study focuses on discussing the possible ways to introduce depth information into generative target tracking methods based on kernel density estimation, as well as the performance of the different methods of introduction, thereby providing a reference for the use of depth information in actual target tracking systems. First, an analysis of the mean-shift technical framework, a typical algorithm used for generative target tracking, is presented, and four methods of introducing the depth information are proposed, i.e., the thresholding of the data source, thresholding of the density distribution of the dataset applied, weighting of the data source, and weighting of the density distribution of the dataset. Details of an experimental study conducted to evaluate the validity, characteristics, and advantages of each method are then described. The experimental results showed that the four methods can improve the validity of the basic method to a certain extent and meet the requirements of real-time target tracking against a confusingly similar background. The method of weighting the density distribution of the dataset, into which depth information is introduced, is the prime choice in engineering practice because it delivers an excellent comprehensive performance and the highest level of accuracy, whereas methods such as the thresholding of both the data sources and the density distribution of the dataset are less time-consuming. The performance comparison with a state-of-the-art tracker further verifies the practicality of the proposed approach. Finally, the research results also provide a reference for improvements in other target tracking methods into which depth information can be introduced.
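The paper's exact formulation is not reproduced in the abstract; the sketch below illustrates one of the four variants, weighting of the data source, by down-weighting pixels whose depth departs from the target's reference depth when building the colour histogram that the mean-shift similarity score uses (all parameter values are illustrative):

```python
import numpy as np

def depth_weighted_histogram(patch_hue: np.ndarray, patch_depth: np.ndarray,
                             ref_depth: float, sigma_depth: float = 0.3,
                             n_bins: int = 16) -> np.ndarray:
    """Colour histogram of a candidate patch with depth-based sample weights.

    Pixels whose depth is close to the target's reference depth contribute
    more, which suppresses similarly coloured background at other depths.
    (This corresponds to "weighting the data source"; the other variants in
    the abstract threshold or weight the density distribution instead.)
    """
    weights = np.exp(-0.5 * ((patch_depth - ref_depth) / sigma_depth) ** 2)
    hist, _ = np.histogram(patch_hue, bins=n_bins, range=(0.0, 1.0),
                           weights=weights)
    total = hist.sum()
    return hist / total if total > 0 else hist

def bhattacharyya(p: np.ndarray, q: np.ndarray) -> float:
    """Similarity between target model and candidate histograms (mean-shift score)."""
    return float(np.sum(np.sqrt(p * q)))

# Toy usage: hue in [0, 1] and depth in metres for a 20x20 reference patch and
# a candidate patch contaminated by similarly coloured background at 4 m depth.
rng = np.random.default_rng(1)
hue_ref = rng.random(400)
depth_ref = rng.normal(2.0, 0.1, 400)
hue_cand = hue_ref.copy()
depth_cand = depth_ref.copy()
depth_cand[200:] = 4.0                      # far background sharing the colours
target_model = depth_weighted_histogram(hue_ref, depth_ref, ref_depth=2.0)
candidate = depth_weighted_histogram(hue_cand, depth_cand, ref_depth=2.0)
print(round(bhattacharyya(target_model, candidate), 3))
```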