1.
PLoS One ; 19(4): e0300393, 2024.
Article in English | MEDLINE | ID: mdl-38630710

ABSTRACT

Knowledge of the key macroeconomic variables that influence stock volatility across capital sizes, styles, and sectors can provide clues for investment strategies and policy decisions. We use the GARCH-MIDAS model with feature selection to analyze Korean Benchmark Indices from 2009 to 2022. This study maximizes memory retention through an optimally fractionally differentiated price series and uses an adaptive lasso penalty for feature selection. The housing price-sales index and realized volatility were consistently influential across most indices and sectors. The GARCH-MIDAS model, paired with these variables, significantly improves long-term stock volatility forecasts. This study underscores the need to monitor housing prices in South Korea because of their substantial effects on long-term stock volatility.
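A minimal numpy sketch of the fractional-differencing step described in this abstract, under illustrative assumptions: the differencing order d=0.4, the 20-step window, and the simulated log-price series are placeholders, not the values the authors select optimally.

```python
import numpy as np

def frac_diff_weights(d, size):
    """Binomial-series weights for fractional differencing of order d."""
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d, window=20):
    """Fixed-window fractionally differenced series (NaN during warm-up)."""
    w = frac_diff_weights(d, window)
    out = np.full(len(series), np.nan)
    for t in range(window - 1, len(series)):
        # dot product of weights with the most recent `window` observations
        out[t] = np.dot(w, series[t - window + 1:t + 1][::-1])
    return out

# Illustrative data: d between 0 (full memory) and 1 (plain first differences)
prices = np.cumsum(np.random.normal(size=500)) + 100.0
x = frac_diff(np.log(prices), d=0.4)
```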

2.
PLoS One ; 18(10): e0286989, 2023.
Article in English | MEDLINE | ID: mdl-37851618

ABSTRACT

Every student has a different level of mathematical proficiency. Therefore, it is important to provide them with questions accordingly. Owing to advances in technology and artificial intelligence, the Learning Management System (LMS) has become a popular application for conducting online learning. The LMS can store multiple pieces of information on students in an online database, enabling it to recommend appropriate questions for each student based on an analysis of their previous responses. In particular, the LMS manages learners and provides an online platform that can evaluate their skills. Questions need to be classified according to their difficulty level so that the LMS can recommend them to learners appropriately and thereby increase their learning efficiency. In this study, we classified large-scale mathematical test items provided by ABLE Tech, which supports LMS-based online mathematical education platforms, according to the difficulty levels of the questions using various machine learning techniques. First, through t-test analysis, we identified the variables significantly associated with the difficulty level. The t-test results showed that the answer rate, type of question, and solution time were positively correlated with the difficulty of the question. Second, items were classified according to their difficulty level using various machine learning models, such as logistic regression (LR), random forest (RF), and extreme gradient boosting (xgboost). Accuracy, precision, recall, F1 score, the area under the receiver operating characteristic curve (AUC-ROC), Cohen's kappa, and the Matthews correlation coefficient (MCC) were used as the evaluation metrics. The correct answer rate, question type, and time for solving a question correlated significantly with the difficulty level. The xgboost model outperformed the other machine learning models, with an 85.7% accuracy and an 85.8% F1 score. These results can serve as an auxiliary tool for recommending mathematical questions suited to each learner's difficulty level.
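A hedged sketch of the classification-and-evaluation setup described above, using scikit-learn with synthetic item features named after the significant variables (correct answer rate, solution time, question type). GradientBoostingClassifier stands in for xgboost, and the labels are synthetic, so the scores are not those reported in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, f1_score, cohen_kappa_score,
                             matthews_corrcoef)

# Hypothetical item-level features named after the significant variables in the abstract.
rng = np.random.default_rng(0)
n = 1000
items = pd.DataFrame({
    "correct_answer_rate": rng.uniform(0.1, 0.95, n),
    "solution_time_sec": rng.uniform(20, 600, n),
    "question_type": rng.integers(0, 3, n),          # encoded categorical
})
# Hypothetical binary difficulty label (easy = 0 / hard = 1), derived for illustration only.
y = (items["correct_answer_rate"] < 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(items, y, test_size=0.2, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),  # stand-in for xgboost
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"f1={f1_score(y_te, pred):.3f}",
          f"kappa={cohen_kappa_score(y_te, pred):.3f}",
          f"mcc={matthews_corrcoef(y_te, pred):.3f}")
```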


Subjects
Artificial Intelligence; Education, Distance; Humans; Students; Machine Learning; Area Under Curve
3.
PLoS One ; 18(4): e0284298, 2023.
Article in English | MEDLINE | ID: mdl-37099535

ABSTRACT

As of 2022, COVID-19, first reported in Wuhan, China, in November 2019, has become a worldwide pandemic, causing numerous infections and deaths and enormous social and economic damage. To mitigate its impact, various COVID-19 prediction studies have emerged, most of them using mathematical models and artificial intelligence. However, the prediction accuracy of these models drops considerably when the observed duration of the COVID-19 outbreak is short. In this paper, we propose a new prediction method combining Word2Vec with the existing long short-term memory (LSTM) and Seq2Seq + Attention models. We compare the prediction error of the existing and proposed models using COVID-19 data reported from five US states: California, Texas, Florida, New York, and Illinois. The experimental results show that the proposed combined model achieves better predictions and lower errors than the existing LSTM and Seq2Seq + Attention models. In the experiments, the Pearson correlation coefficient increased by 0.05 to 0.21 and the RMSE decreased by 0.03 to 0.08 compared to the existing method.


Subjects
COVID-19; Epidemics; Humans; Time Factors; Artificial Intelligence; COVID-19/epidemiology; Disease Outbreaks
4.
IEEE Trans Neural Netw Learn Syst ; 34(5): 2400-2412, 2023 05.
Article in English | MEDLINE | ID: mdl-34469319

ABSTRACT

Influenza leads to many deaths every year and is a threat to human health. For effective prevention, traditional national-scale statistical surveillance systems have been developed, and numerous studies have been conducted to predict influenza outbreaks using web data. Most studies have captured the short-term signs of influenza outbreaks, such as one-week prediction using the characteristics of web data uploaded in real time; however, longer-term predictions of 2-10 weeks ahead are required to cope effectively with influenza outbreaks. In this study, we determined that web data uploaded in real time have a time-precedence relationship with influenza outbreaks. For example, a few weeks before an influenza pandemic, the word "colds" appears frequently in web data. The web data after the appearance of the word "colds" can be used as information for forecasting future influenza outbreaks, which can improve long-term influenza prediction accuracy. In this study, we propose a novel long-term influenza outbreak forecast model utilizing the time precedence between the emergence of web data and an influenza outbreak. Based on the proposed model, we conducted experiments on: 1) selecting suitable web data for long-term influenza prediction; 2) determining whether the proposed model is regionally dependent; and 3) evaluating the accuracy according to the prediction timeframe. The proposed model showed a correlation of 0.87 for a long-term prediction horizon of ten weeks while significantly outperforming other state-of-the-art methods.
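A small sketch of how the time-precedence idea can be checked: correlate a web-data series shifted forward by each candidate lag against the influenza series and look for the lag with the highest Pearson correlation. The series and the 3-week lead are synthetic placeholders, not the paper's data or model.

```python
import numpy as np

def lagged_correlations(web_counts, ili_rate, max_lag=10):
    """Pearson correlation between a web-data series and the influenza series
    shifted `lag` weeks later; a peak at lag > 0 indicates the web signal leads."""
    corrs = {}
    for lag in range(0, max_lag + 1):
        if lag == 0:
            a, b = web_counts, ili_rate
        else:
            a, b = web_counts[:-lag], ili_rate[lag:]
        corrs[lag] = np.corrcoef(a, b)[0, 1]
    return corrs

# Synthetic example: the web signal leads the outbreak curve by 3 weeks.
rng = np.random.default_rng(1)
weeks = 200
ili = np.abs(np.sin(np.linspace(0, 12, weeks))) + rng.normal(0, 0.05, weeks)
web = np.roll(ili, -3) + rng.normal(0, 0.05, weeks)   # "colds" mentions, 3 weeks early
corrs = lagged_correlations(web, ili)
best_lag = max(corrs, key=corrs.get)                  # expected to be close to 3
```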


Subjects
Influenza, Human; Humans; Influenza, Human/epidemiology; Neural Networks, Computer; Disease Outbreaks; Forecasting; Seasons
5.
PLoS One ; 17(12): e0278744, 2022.
Article in English | MEDLINE | ID: mdl-36490250

ABSTRACT

Recent advances in positioning techniques, along with the widespread use of mobile devices, make it easier to monitor and collect user trajectory information during their daily activities. An ever-growing abundance of data about trajectories of individual users paves the way for various applications that utilize user mobility information. One of the most common analysis tasks in these new applications is to extract the sequential transition patterns between two consecutive timestamps from a collection of trajectories. Such patterns have been widely exploited in diverse applications to predict and recommend next user locations based on the current position. Thus, in this paper, we explore the computation of the transition patterns, especially with a trajectory dataset collected using differential privacy, which is a de facto standard for privacy-preserving data collection and processing. Specifically, the proposed scheme relies on geo-indistinguishability, a variant of the well-known differential privacy, to collect trajectory data from users in a privacy-preserving manner, and exploits the expectation-maximization algorithm to precisely estimate hidden transition patterns from perturbed trajectory datasets collected under geo-indistinguishability. Experimental results using real trajectory datasets confirm that a good estimation of transition patterns can be achieved with the proposed method.
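A sketch of the geo-indistinguishable collection step, perturbing each trajectory point with planar Laplace noise (the standard mechanism for geo-indistinguishability); the EM-based estimation of transition patterns from the perturbed data is not shown, and epsilon and the coordinates are illustrative.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(epsilon, rng):
    """Draw one noise vector from the planar Laplace distribution used by
    geo-indistinguishability (Andres et al., 2013)."""
    theta = rng.uniform(0, 2 * np.pi)
    p = rng.uniform(0, 1)
    # Inverse CDF of the radial component via the Lambert W function (branch -1).
    r = -(1.0 / epsilon) * (lambertw((p - 1) / np.e, k=-1).real + 1)
    return r * np.cos(theta), r * np.sin(theta)

def perturb_trajectory(points, epsilon, seed=0):
    """Perturb each (x, y) location of a trajectory independently."""
    rng = np.random.default_rng(seed)
    return [(x + dx, y + dy)
            for (x, y) in points
            for (dx, dy) in [planar_laplace_noise(epsilon, rng)]]

# Illustrative trajectory and privacy parameter.
noisy = perturb_trajectory([(0.0, 0.0), (1.0, 0.5), (2.0, 1.5)], epsilon=1.0)
```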


Subjects
Computer Security; Privacy; Algorithms; Data Collection
6.
PLoS One ; 17(11): e0278071, 2022.
Article in English | MEDLINE | ID: mdl-36417448

ABSTRACT

The stress placed on global power supply systems by the growing demand for electricity has been steadily increasing in recent years. Thus, accurate forecasting of energy demand and consumption is essential to sustainably maintain the lifestyle and economic standards of nations. However, multiple factors, including climate change, affect the energy demands of local, national, and global power grids. Therefore, effective analysis of multivariable data is required for accurate estimation of energy demand and consumption. In this context, some studies have suggested that LSTM and CNN models can be used to model electricity demand accurately. However, existing works have trained on either electricity loads and weather observations or national metrics (e.g., gross domestic product, imports, and exports). This binary segregation has degraded forecasting performance. To resolve this shortcoming, we propose a CNN-LSTM model based on a multivariable augmentation approach. Following previous studies, we adopt 1D convolution and pooling to extract undiscovered features from temporal sequences. The LSTM mitigates the vanishing gradient problem of plain RNNs while retaining their benefits for time-series variables. The proposed model exhibits near-perfect forecasting of electricity consumption, outperforming existing models. Further, state-level analysis and training are performed, demonstrating the utility of the proposed methodology in forecasting regional energy consumption. The proposed model outperforms other models in most areas.
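An illustrative PyTorch sketch of a CNN-LSTM of the kind described: 1D convolution and pooling over a multivariate window followed by an LSTM and a linear forecasting head. The layer sizes, the eight input variables, and the window length are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D convolution + pooling to extract local features from a multivariate
    window, followed by an LSTM and a linear head for next-step demand."""
    def __init__(self, n_features, conv_channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))       # (batch, channels, seq_len // 2)
        out, _ = self.lstm(z.transpose(1, 2))  # (batch, seq_len // 2, hidden)
        return self.head(out[:, -1, :])        # forecast from the last hidden state

model = CNNLSTM(n_features=8)                  # e.g., load, weather, and economic inputs
demand = model(torch.randn(16, 24, 8))         # 16 windows of 24 time steps each
```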


Subjects
Electric Power Supplies; Electricity; Gross Domestic Product; Forecasting
7.
J Biomed Inform ; 133: 104148, 2022 09.
Article in English | MEDLINE | ID: mdl-35878824

ABSTRACT

Perhaps no other generation in the span of recorded human history has endured the risks of infectious diseases as the current generation has. The prevalence of infectious diseases is driven mainly by unlimited contact between people in a highly globalized world. Centers for Disease Control and Prevention (CDCs) promptly collect and publish disease outbreak statistics, but they rely on a curated, centralized collection system that requires up to two weeks of lead time. Consequently, the rapid release of disease outbreak information has become a great challenge. Infectious disease outbreak information is recorded and spread on the Internet much faster than CDC announcements, and Internet-sourced data have shown non-substitutable potential for monitoring and predicting infectious disease outbreaks in advance. In this study, we performed a thorough analysis of the similarity, in terms of time-series volume, between the Korea Centers for Disease Control and Prevention (KCDC) infectious disease datasets and three kinds of Internet-sourced data for nine major infectious diseases. The results show that many infectious disease outbreaks are strongly related to Internet-sourced data. We analyzed several factors that affect this similarity. Our analysis shows that increases in the volume of Internet-sourced data correlate with increases in the number of infected people, yielding positive similarity. We also found that larger outbreaks tend to spread over wider regions, which likewise leads to higher similarity. We present prediction results for infectious disease outbreaks using various Internet-sourced data and an effective deep learning algorithm, showing positive correlations between the number of infected people, or the amount of related web data, and the prediction accuracy. We developed and currently operate a web-based system that shows the similarity between KCDC data and related Internet-sourced data for infectious diseases. This paper helps readers identify which Internet-sourced data to use to predict and track a specific infectious disease outbreak. Because we considered nine major diseases and three kinds of Internet-sourced data together, our findings do not depend on a specific infectious disease or a specific Internet data source.


Subjects
Communicable Diseases; Disease Outbreaks; Communicable Diseases/epidemiology; Forecasting; Humans; Internet
8.
Sensors (Basel) ; 21(14)2021 Jul 07.
Article in English | MEDLINE | ID: mdl-34300403

ABSTRACT

Due to the prevalence of globalization and the surge in human mobility, diseases are spreading more rapidly than ever and the risks of sporadic contamination are higher than before. Disease warnings continue to rely on curated official data, but these warning systems have failed to cope with the speed of disease proliferation. Given the risks associated with this problem, there have been many studies on disease outbreak surveillance systems, but existing systems have limitations in monitoring disease-related topics and in internationalization. With the advent of online news, social media, and search engines, social and web data contain rich unexplored information that can be leveraged to provide accurate and timely estimates of disease activity and risk. In this study, we develop an infectious disease surveillance system for extracting information related to emerging diseases from a variety of Internet-sourced data. We also propose an effective deep learning-based data filtering and ranking algorithm. The system provides nation-specific disease outbreak information, disease-related topic rankings, and the number of reports per district and disease through various visualization techniques such as maps, graphs, charts, correlation coefficients, and word clouds. Our system provides an automated web-based service; it is free for all users and live in operation.


Subjects
Communicable Diseases; Deep Learning; Social Media; Algorithms; Disease Outbreaks; Humans; Internet
9.
PLoS One ; 16(5): e0250992, 2021.
Article in English | MEDLINE | ID: mdl-33974672

ABSTRACT

With the rapid advancement of information and communication technologies, there is a growing transformation of healthcare systems. A patient's health data can now be centrally stored in the cloud and be shared with multiple healthcare stakeholders, enabling the patient to be collaboratively treated by more than one healthcare institution. However, several issues, including data security and privacy concerns still remain unresolved. Ciphertext-policy attribute-based encryption (CP-ABE) has shown promising potential in providing data security and privacy in cloud-based systems. Nevertheless, the conventional CP-ABE scheme is inadequate for direct adoption in a collaborative ehealth system. For one, its expressiveness is limited as it is based on a monotonic access structure. Second, it lacks an attribute/user revocation mechanism. Third, the computational burden on both the data owner and data users is linear with the number of attributes in the ciphertext. To address these inadequacies, we propose CESCR, a CP-ABE for efficient and secure sharing of health data in collaborative ehealth systems with immediate and efficient attribute/user revocation. The CESCR scheme is unbounded, i.e., it does not bind the size of the attribute universe to the security parameter, it is based on the expressive and non-restrictive ordered binary decision diagram (OBDD) access structure, and it securely outsources the computationally demanding attribute operations of both encryption and decryption processes without requiring a dummy attribute. Security analysis shows that the CESCR scheme is secure in the selective model. Simulation and performance comparisons with related schemes also demonstrate that the CESCR scheme is expressive and efficient.


Subjects
Computer Security; Delivery of Health Care, Integrated/trends; Electronic Health Records; Information Dissemination; Telemedicine; Computer Simulation; Database Management Systems; Delivery of Health Care, Integrated/methods; Humans; Telemedicine/methods
10.
JMIR Med Inform ; 9(5): e23305, 2021 May 25.
Article in English | MEDLINE | ID: mdl-34032577

ABSTRACT

BACKGROUND: Each year, influenza affects 3 to 5 million people and causes 290,000 to 650,000 fatalities worldwide. To reduce the fatalities caused by influenza, several countries have established influenza surveillance systems to collect early warning data. However, proper and timely warnings are hindered by a 1- to 2-week delay between the actual disease outbreaks and the publication of surveillance data. To address the issue, novel methods for influenza surveillance and prediction using real-time internet data (such as search queries, microblogging, and news) have been proposed. Some of the currently popular approaches extract online data and use machine learning to predict influenza occurrences in a classification mode. However, many of these methods extract training data subjectively, and it is difficult to capture the latent characteristics of the data correctly. There is a critical need to devise new approaches that focus on extracting training data by reflecting the latent characteristics of the data. OBJECTIVE: In this paper, we propose an effective method to extract training data in a manner that reflects the hidden features and improves the performance by filtering and selecting only the keywords related to influenza before the prediction. METHODS: Although word embedding provides a distributed representation of words by encoding the hidden relationships between various tokens, we enhanced the word embeddings by selecting keywords related to the influenza outbreak and sorting the extracted keywords using the Pearson correlation coefficient in order to solely keep the tokens with high correlation with the actual influenza outbreak. The keyword extraction process was followed by a predictive model based on long short-term memory that predicts the influenza outbreak. To assess the performance of the proposed predictive model, we used and compared a variety of word embedding techniques. RESULTS: Word embedding without our proposed sorting process showed 0.8705 prediction accuracy when 50.2 keywords were selected on average. Conversely, word embedding using our proposed sorting process showed 0.8868 prediction accuracy and an improvement in prediction accuracy of 12.6%, although smaller amounts of training data were selected, with only 20.6 keywords on average. CONCLUSIONS: The sorting stage empowers the embedding process, which improves the feature extraction process because it acts as a knowledge base for the prediction component. The model outperformed other current approaches that use flat extraction before prediction.
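A simple sketch of the keyword-sorting idea from the METHODS section: score each candidate keyword's weekly frequency series by its Pearson correlation with the influenza series and keep only the top-ranked keywords. The keyword names and series here are synthetic placeholders, not the study's data.

```python
import numpy as np

def select_keywords(keyword_series, outbreak_series, top_k=20):
    """Rank candidate keywords (each a weekly frequency series) by their
    Pearson correlation with the influenza series and keep the top_k."""
    scored = []
    for word, series in keyword_series.items():
        r = np.corrcoef(series, outbreak_series)[0, 1]
        scored.append((word, r))
    scored.sort(key=lambda wr: wr[1], reverse=True)
    return scored[:top_k]

# Synthetic example with hypothetical keyword names.
rng = np.random.default_rng(2)
weeks = 150
ili = np.abs(np.sin(np.linspace(0, 9, weeks))) + rng.normal(0, 0.05, weeks)
candidates = {
    "flu": ili + rng.normal(0, 0.05, weeks),
    "fever": ili + rng.normal(0, 0.2, weeks),
    "holiday": rng.normal(0, 1, weeks),       # unrelated keyword, filtered out
}
selected = select_keywords(candidates, ili, top_k=2)   # keeps "flu" and "fever"
```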

11.
J Biomed Inform ; 118: 103778, 2021 06.
Article in English | MEDLINE | ID: mdl-33872817

ABSTRACT

Leveraging Electronic Health Record (EHR) longitudinal data to produce actionable clinical insights has been a critical goal of recent studies. Unforeseen extended hospitalizations account for a disproportionate amount of resource use, mediocre quality of inpatient care, and avoidable fatalities. The ability to predict the Length of Stay (LoS) and mortality in the early stages of an admission provides opportunities to improve care and prevent many avoidable losses. Forecasting in-hospital mortality is important for giving clinicians enough insight to make decisions and for hospitals to allocate resources; hence, predicting the LoS and mortality within the first day of admission is a difficult but paramount endeavor. The biggest challenge is that few data are available by this time, so the prediction has to draw on the previous admission history and the free-text diagnosis recorded immediately on admission. We propose a model that uses multi-modal EHR structured medical codes and key demographic information to classify the LoS into three classes, Short LoS (LoS ≤ 10 days), Medium LoS (10 < LoS ≤ 30 days), and Long LoS (LoS > 30 days), as well as mortality as a binary classification of a patient's death during the current admission. The prediction uses only data available within 24 h of admission. The key predictors include previous ICD-9 diagnosis codes, ICD-9 procedures, key demographic data, and the free-text diagnosis of the current admission recorded right on admission. We propose a Hierarchical Attention Network model (HAN-LoS and HAN-Mor) and train it on a dataset of over 45,321 admissions from the de-identified MIMIC-III dataset. For improved prediction, our attention mechanisms can focus on the most influential past admissions and the most influential codes within those admissions. For fair performance evaluation, we implemented and compared the HAN model with previous approaches. With dataset balancing techniques, HAN-LoS achieved an AUROC of over 0.82 and a micro-F1 score of 0.24, and HAN-Mor achieved an AUROC of 0.87, thus outperforming existing baselines that use structured medical codes as well as clinical time series for LoS and mortality forecasting. By predicting mortality and LoS with the same model, we show that, with little tuning, the proposed model can be used for other clinical predictive tasks such as phenotyping, decompensation, re-admission prediction, and survival analysis.
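A minimal PyTorch sketch of one attention level of a hierarchical attention network, of the kind the HAN-LoS/HAN-Mor model stacks over codes and admissions; the layer sizes and the learned-context formulation are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Single attention level of a hierarchical attention network: scores each
    encoded element (code or admission) and returns their weighted average."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)   # learned context vector

    def forward(self, h):                    # h: (batch, n_items, dim)
        u = torch.tanh(self.proj(h))
        alpha = torch.softmax(self.context(u), dim=1)  # attention weights per item
        return (alpha * h).sum(dim=1)        # (batch, dim)

# Pool embedded diagnosis codes of one admission; admissions can then be pooled the same way.
code_pool = AttentionPool(dim=128)
admission_repr = code_pool(torch.randn(4, 30, 128))    # 4 patients, 30 codes each
```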


Subjects
Hospitalization; International Classification of Diseases; Electronic Health Records; Hospital Mortality; Humans; Length of Stay
12.
IEEE J Biomed Health Inform ; 24(10): 2960-2972, 2020 10.
Article in English | MEDLINE | ID: mdl-32071017

ABSTRACT

The digitization of health records due to technological developments has paved the way for patients to be collaboratively treated by different healthcare institutions. In collaborative ehealth systems, a patient's health data is stored remotely in the cloud for sharing with different healthcare service providers. However, the use of third parties for storage exposes the data to several privacy and security violation threats. Ciphertext policy attribute-based encryption (CP-ABE) which provides a fine-grained access control is a promising solution to privacy and security issues in the cloud environment and as a result, it has been widely studied for secure sharing of health data in cloud-based ehealth systems. Addressing the aspects of expressiveness, efficiency, user collusion resistance and attribute/user revocation in CP-ABE have been at the forefront of these studies. Thus, in this article, we proposed a novel expressive, efficient and collusion-resistant access control scheme with immediate attribute/user revocation for secure sharing of health data in collaborative ehealth systems. The proposed scheme additionally achieves forward and backward security. To realize these features, our access control is based on the ordered binary decision diagram (OBDD) access structure and it binds the user keys to the user identities. Security and performance analysis show that our proposed scheme is secure, expressive and efficient.


Subjects
Computer Security; Confidentiality; Electronic Health Records; Telemedicine; Humans
13.
PLoS One ; 14(8): e0220976, 2019.
Article in English | MEDLINE | ID: mdl-31437181

ABSTRACT

Big web data from sources such as online news and Twitter are good resources for deep learning research. However, collected news articles and tweets almost certainly contain data unnecessary for learning, which hinders accurate learning. This paper explores the performance of word2vec Convolutional Neural Networks (CNNs) in classifying news articles and tweets into related and unrelated ones. Using the two word-embedding algorithms of word2vec, Continuous Bag-of-Words (CBOW) and Skip-gram, we constructed a CNN with the CBOW model and a CNN with the Skip-gram model. We measured the classification accuracy of the CNN with CBOW, the CNN with Skip-gram, and a CNN without word2vec on real news articles and tweets. The experimental results indicated that word2vec significantly improved the accuracy of the classification model. The accuracy of the CBOW model was higher and more stable than that of the Skip-gram model. The CBOW model exhibited better performance on news articles, and the Skip-gram model exhibited better performance on tweets. Overall, the CNNs with word2vec were more effective on news articles than on tweets because news articles are typically more uniform than tweets.
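A short gensim sketch showing the only switch that separates the two embeddings compared above: sg=0 trains CBOW and sg=1 trains Skip-gram (gensim 4.x parameter names). The toy corpus is a placeholder for the news and tweet collections used in the study.

```python
from gensim.models import Word2Vec

# Tiny tokenized corpus; real news sentences and tweets would go here.
corpus = [
    ["influenza", "outbreak", "reported", "in", "seoul"],
    ["new", "flu", "cases", "rise", "this", "winter"],
    ["stock", "market", "closes", "higher", "today"],
]

# sg=0 -> CBOW, sg=1 -> Skip-gram.
cbow = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=0)
skipgram = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

# The resulting vectors would feed the CNN's embedding layer, e.g.:
flu_vec = cbow.wv["flu"]            # 100-dimensional embedding for one token
```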


Subjects
Deep Learning/statistics & numerical data; Social Media/statistics & numerical data; Humans; Information Dissemination
14.
Sensors (Basel) ; 19(13)2019 Jun 29.
Article in English | MEDLINE | ID: mdl-31261936

ABSTRACT

Indoor positioning technology has attracted the attention of researchers due to the increasing pervasiveness of smartphones, the development of sensor technology, and the growing amount of time people spend indoors. Sensor technology, one of the most commonly used data sources for indoor positioning, has the advantage that a smartphone can provide sensor data without any additional device being installed. However, the readings of built-in sensors are easily affected by the surrounding environment and occasionally even differ from each other, which adversely influences the accuracy of indoor positioning. Moreover, once an error occurs, it can accumulate because there is no reference point in sensor-only indoor positioning. In this paper, we present an accurate indoor positioning technique that uses smartphone built-in sensors and Bluetooth beacon-based landmarks. Our proposed algorithm alternately chooses the more appropriate value among the sensors based on their characteristics. It exploits landmarks as reference points for indoor positioning. It also allows individuals to add locations where they repeatedly detect the same distinctive beacon received signal strength indicator (RSSI) values as crowdsourced landmarks. Extensive experimental results show that our proposed algorithm facilitates the acquisition of accurate heading directions and user coordinates.

15.
IEEE Access ; 7: 82956-82969, 2019.
Article in English | MEDLINE | ID: mdl-32391237

ABSTRACT

A map-based infectious disease outbreak information system, called PEACOCK, that provides three types of necessary infectious disease outbreak information is presented. The system first collects infectious disease outbreak statistics from government agencies and displays the number of infected people and infection indices on the map. Then, it crawls online news articles for each infectious disease and displays the number of mentions of each disease on the map. Users can also search for news articles regarding the disease. Finally, it retrieves portal search query data and plots graphs of the trends. It divides the risk into three levels (i.e., normal, caution, and danger) and visualizes them using different colors on the map. Users can access infectious disease outbreak information accurately and quickly using the system. As the system visualizes the information using both a map and various types of graphs, users can check the information at a glance. The system is live at http://www.epidemic.co.kr/map.

16.
PLoS One ; 13(11): e0207639, 2018.
Article in English | MEDLINE | ID: mdl-30496200

ABSTRACT

Recently, as the paradigm of medical services has shifted from treatment to prevention, there is growing interest in smart healthcare that can provide users with healthcare services anywhere, at any time, using information and communications technologies. With the development of the smart healthcare industry, there is a growing need to collect large-scale personal health data and to exploit the knowledge obtained by analyzing them for improving smart healthcare services. Although such a considerable amount of health data can be a valuable asset to the smart healthcare field, they may cause serious privacy problems if an individual's sensitive information is leaked to outside users. Therefore, most individuals are reluctant to provide their health data to smart healthcare service providers for data analysis and utilization purposes, which is the biggest challenge in the smart healthcare field. Thus, in this paper, we develop a novel mechanism for privacy-preserving collection of personal health data streams, which are characterized as temporal data collected at fixed intervals, by leveraging local differential privacy (LDP). In particular, with the proposed approach, a data contributor uses a given privacy budget of LDP to report a small amount of salient data, extracted from the entire health data stream, to a data collector. The data collector can then effectively reconstruct the health data stream based on the noisy salient data received from the data contributor. Experimental results demonstrate that the proposed approach provides significant accuracy gains over straightforward solutions to this problem.
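A deliberately simplified sketch of the salient-point idea: the contributor reports only a few points of the stream, each perturbed with a share of the LDP budget via Laplace noise, and the collector reconstructs the stream by interpolation. The salient-point selection, budget split, sensitivity, and reconstruction used here are illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def report_salient_points(stream, k, epsilon, sensitivity=1.0, seed=0):
    """Contributor side (simplified): pick the k points with the largest local
    change and perturb each with Laplace noise using an epsilon/k budget share."""
    rng = np.random.default_rng(seed)
    change = np.abs(np.diff(stream, prepend=stream[0]))
    idx = np.sort(np.argsort(change)[-k:])                # indices of salient points
    scale = sensitivity * k / epsilon                     # budget split across k reports
    noisy = stream[idx] + rng.laplace(0.0, scale, size=k)
    return idx, noisy

def reconstruct(idx, noisy, length):
    """Collector side: rebuild the full stream by interpolating the noisy reports."""
    return np.interp(np.arange(length), idx, noisy)

# Illustrative stream: one day of heart-rate readings at 5-minute intervals.
heart_rate = 70 + 10 * np.sin(np.linspace(0, 6, 288))
idx, noisy = report_salient_points(heart_rate, k=12, epsilon=2.0)
estimate = reconstruct(idx, noisy, len(heart_rate))
```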


Subjects
Confidentiality; Privacy; Algorithms; Data Collection; Health Services; Humans
17.
PLoS One ; 13(8): e0201933, 2018.
Article in English | MEDLINE | ID: mdl-30075009

ABSTRACT

This paper describes a web-based automated disease-related topic extraction system, called DiTeX, which monitors important disease-related topics and provides associated information. National disease surveillance systems require a considerable amount of time to inform people of recent disease outbreaks. To solve this problem, many studies have used Internet-based sources such as news and social network services (SNS). However, these sources contain many elements that hinder the extraction of important topics. To address this challenge, we employ natural language processing and an effective ranking algorithm, and develop DiTeX to surface important disease-related topics. This report describes the web front-end and back-end architecture, the implementation, the performance of the ranking algorithm, and the topics captured by DiTeX. We describe the processes for collecting Internet-based data and extracting disease-related topics based on search keywords. Our system then applies a ranking algorithm to evaluate the importance of the disease-related topics extracted from these data. Finally, we conduct analyses based on real-world incidents to evaluate the performance and effectiveness of DiTeX. To evaluate DiTeX, we analyze the ranking of well-known disease-related incidents under various ranking algorithms; the topic extraction rate of our ranking algorithm is superior to those of the others. We demonstrate the validity of DiTeX by summarizing the disease-related topics extracted by our system for each day. To our knowledge, DiTeX is the world's first automated web-based real-time service system that extracts and presents disease-related topics, trends, and related data from web-based sources. DiTeX is now available on the web at http://epidemic.co.kr/media/topics.


Subjects
Databases, Factual; Public Health Surveillance/methods; Software; Web Browser; Algorithms; Disease Outbreaks; Internet
18.
IEEE Access ; 6: 47206-47216, 2018.
Article in English | MEDLINE | ID: mdl-32391235

ABSTRACT

Humans are susceptible to various infectious diseases. However, humanity still has limited responses to emergent and recurrent infectious diseases. Recent developments in medical technology have led to various vaccines being developed, but these vaccines typically require a considerable amount of time to counter infectious diseases. Therefore, one of the best methods to prevent infectious diseases is to continuously update our knowledge with useful information from infectious disease information systems and to take active steps to safeguard ourselves against infectious diseases. Some existing infectious disease information systems simply present infectious disease information in the form of text or transmit it via e-mail. Other systems provide data in the form of files or maps. Most existing systems display text-centric information regarding infectious disease outbreaks. Therefore, understanding infectious disease outbreak information at a glance is difficult for users. In this paper, we propose an infectious disease outbreak statistics visualization system, called DOVE, which collects infectious disease outbreak statistics from the Korea Centers for Disease Control & Prevention and provides statistical charts with district, time, infectious disease, gender, and age data. Users can easily identify infectious disease outbreak statistics at a glance by simply entering the district, time, and name of an infectious disease into our system. Additionally, each statistical chart allows users to recognize the characteristics of an infectious disease and predict outbreaks by investigating the outbreak trends of that disease. We believe that our system provides effective information to help prevent infectious disease outbreaks. Our system is currently available on the web at http://www.epidemic.co.kr/statistics.
