Results 1 - 20 of 1,875
2.
Lancet Oncol ; 22(11): e488-e500, 2021 11.
Article in English | MEDLINE | ID: mdl-34735818

ABSTRACT

Challenges of health systems in Latin America and the Caribbean include accessibility, inequity, segmentation, and poverty. These challenges are similar across countries of the region and transcend national borders. The increasing digital transformation of health care holds the promise of more precise interventions, improved health outcomes, increased efficiency, and, ultimately, reduced health-care costs. In Latin America and the Caribbean, the adoption of digital health tools is at an early stage, and the quality of cancer registries, electronic health records, and structured databases is problematic. Cancer research and innovation in the region are limited by inadequate academic resources, and translational research is almost fully dependent on public funding. Regulatory complexity and extended timelines jeopardise potential improvements in participation in international studies. Emerging technologies, artificial intelligence, big data, and cancer research represent an opportunity to address the health-care challenges of Latin America and the Caribbean collectively, by optimising national capacities, sharing and comparing best practices, and transferring scientific and technical capabilities.


Subjects
Biomedical Research/trends , Neoplasms/prevention & control , Precision Medicine/trends , Artificial Intelligence , Big Data , Biomedical Research/statistics & numerical data , Caribbean Region/epidemiology , Digital Technology , Electronic Health Records , Humans , Latin America/epidemiology , Neoplasms/epidemiology , Precision Medicine/statistics & numerical data
3.
J Med Internet Res ; 23(11): e26450, 2021 11 11.
Article in English | MEDLINE | ID: mdl-34762055

ABSTRACT

BACKGROUND: This study identifies a novel potential use for web portals in health care and health research: their adoption for rapidly sharing health research findings with clinicians, scientists, and patients. In the era of precision medicine and learning health systems, the translation of research findings into targeted therapies depends on the availability of big data and emerging research results. Web portals may promote the availability of novel research, working in tandem with traditional scientific publications and conference proceedings. OBJECTIVE: This study aims to assess the potential use of web portals that facilitate the sharing of health research findings among researchers, clinicians, patients, and the public. It also summarizes the potential legal, ethical, and policy implications associated with such tools for public use and in the management of patient care for complex diseases. METHODS: This study broadly adopts the methods for scoping literature reviews outlined by Arksey and O'Malley in 2005. To capture the issues raised by the integration of web portals into patient care for complex diseases, we systematically searched 3 databases, PubMed, Scopus, and WestLaw Next, for sources describing web portals for sharing health research findings among clinicians, researchers, and patients and their associated legal, ethical, and policy challenges. Of the 719 candidate source citations, 22 were retained for the review. RESULTS: We found varied and inconsistent treatment of web portals for sharing health research findings among clinicians, researchers, and patients. Although the literature supports the view that portals of this kind are potentially highly promising, they remain novel and are not yet widely adopted. We also found a wide range of discussions on the legal, ethical, and policy issues related to the use of web portals to share research data.
CONCLUSIONS: We identified 5 important legal and ethical challenges: privacy and confidentiality, patient health literacy, equity, training, and decision-making. We contend that each of these has meaningful implications for the increased integration of web portals into clinical care.


Subjects
Health Literacy , Patient Portals , Bibliometrics , Big Data , Humans
4.
Comput Intell Neurosci ; 2021: 7085412, 2021.
Article in English | MEDLINE | ID: mdl-34782834

ABSTRACT

The ongoing explosion of Internet data is driving ever-higher demand for text emotion analysis, which greatly facilitates public opinion analysis and trend prediction, among other applications. This paper therefore proposes a dual-channel convolutional neural network (DCNN) algorithm to analyze the semantic features of English-text big data. First, after analyzing the performance of the CNN, artificial neural network (ANN), and recurrent neural network (RNN) on English text data, the more effective long short-term memory (LSTM) and gated recurrent unit (GRU) networks are introduced; each is combined with the dual-channel CNN and compared experimentally. Second, the semantic features of English-text big data are analyzed through an improved SO-pointwise mutual information (SO-PMI) algorithm. Finally, an ensemble dual-channel CNN model is established. In the comparative experiments, the GRU network detects features better than the LSTM network, although the gain from the dual-channel CNN alone to GRU + dual-channel CNN is modest. Comparing the GRU + dual-channel CNN model with the LSTM + dual-channel CNN model, the former maintains high accuracy of semantic feature analysis while improving analysis speed. Further, adding an attention mechanism to the GRU + dual-channel CNN model improves the accuracy of semantic feature analysis by nearly 1.3%. The ensemble GRU + dual-channel CNN + attention model is therefore the most suitable for semantic feature analysis of English-text big data. The results will help e-commerce platforms analyze the evaluative language and semantic features of short English texts on the web.
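The SO-PMI step mentioned above can be sketched in a few lines. This is a minimal illustration of the standard semantic-orientation formula (SO-PMI(w) = Σ PMI(w, positive seeds) − Σ PMI(w, negative seeds)), not the paper's improved variant, whose details are not given here; the toy counts and seed words are invented for demonstration.

```python
import math

def so_pmi(word, cooc, counts, total, pos_seeds, neg_seeds):
    """Semantic orientation of `word` via pointwise mutual information.

    cooc[(a, b)] -- co-occurrence count of words a and b
    counts[a]    -- occurrence count of word a
    total        -- total number of observation windows
    """
    def pmi(a, b):
        joint = cooc.get((a, b), 0) or cooc.get((b, a), 0)
        if joint == 0:
            return 0.0  # unseen pair contributes nothing
        return math.log2(joint * total / (counts[a] * counts[b]))

    return (sum(pmi(word, s) for s in pos_seeds)
            - sum(pmi(word, s) for s in neg_seeds))

# Toy counts: "great" co-occurs with the positive seed far more often,
# so its orientation score comes out positive.
counts = {"great": 50, "good": 40, "bad": 40}
cooc = {("great", "good"): 20, ("great", "bad"): 2}
score = so_pmi("great", cooc, counts, total=1000,
               pos_seeds=["good"], neg_seeds=["bad"])
```

In a real system the counts would come from a large corpus and the seed lexicons would contain many words per polarity.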


Subjects
Language , Semantics , Algorithms , Big Data , Neural Networks, Computer
5.
Comput Intell Neurosci ; 2021: 8336887, 2021.
Article in English | MEDLINE | ID: mdl-34782835

ABSTRACT

With the rapid development of information technology, hospital informatization has become a general trend. In this context, disease monitoring based on medical big data has been proposed and has attracted widespread attention. To overcome the shortcomings of the BP (backpropagation) neural network, such as slow convergence and a tendency to fall into local extrema, a simulated annealing algorithm is used to optimize the BP neural network, and a high-order simulated annealing neural network algorithm is constructed. After screening potential target indicators with a random forest algorithm, the experiment uses the high-order simulated annealing neural network algorithm, applied to medical big data, to establish an obesity monitoring model for obesity monitoring and prevention. The results show that the SA-BP neural network needs 1480 fewer training iterations than the BP neural network, its mean square error is 3.43 times lower, and its MAE is 1.81 times lower; the average output error of the obesity monitoring model is about 2.35 at each temperature. After training, the average accuracy of the obesity monitoring model was 98.7%. These results show that an obesity monitoring model based on medical big data can effectively monitor obesity and contributes to the diagnosis, treatment, and early warning of the condition.
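Simulated annealing escapes local extrema by occasionally accepting worse solutions with a temperature-dependent probability, which is what lets it repair the BP network's tendency to get stuck. The sketch below applies the core loop to a one-dimensional toy loss with several local minima; the cooling schedule, step size, and loss function are illustrative choices, not the paper's (which anneals the network weights).

```python
import math
import random

def simulated_annealing(loss, x0, t0=5.0, cooling=0.95, steps=2000, seed=0):
    """Minimize `loss` from x0, accepting uphill moves with
    probability exp(-delta / temperature) (Metropolis criterion)."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)        # random neighbour
        delta = loss(cand) - loss(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                              # accept the move
        if loss(x) < loss(best):
            best = x                              # track the best-so-far
        t = max(t * cooling, 1e-6)                # cool down
    return best

# Toy loss: global minimum at x = 0, local minima elsewhere.
loss = lambda x: x * x + 3 * math.sin(3 * x) ** 2
best = simulated_annealing(loss, x0=4.0)
```

At high temperature nearly any move is accepted (broad exploration); as the temperature falls, the search behaves increasingly like greedy descent.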


Subjects
Big Data , Neural Networks, Computer , Algorithms , Humans , Obesity/diagnosis , Obesity/epidemiology
6.
Comput Intell Neurosci ; 2021: 6882467, 2021.
Article in English | MEDLINE | ID: mdl-34745251

ABSTRACT

With the advent of the information age, human demand for information grows by the day. The emergence of big data has triggered a new round of technological revolution, and visual information plays an important role within it. To obtain a better 3D model, this paper studies the reconstruction of 3D images of training motions based on a graph neural network algorithm. The problem of Sanda (Chinese kickboxing) is studied from two aspects. First, two deep learning algorithms, the graph neural network and the recurrent neural network, are applied to the boxing-movement recognition task, and their effects are compared with quadratic discriminant analysis and the support vector machine. By comparing the influence of different network structures on each deep learning algorithm, it is concluded that, with respect to network-structure parameter tuning, the recurrent neural network has more practical advantages than the graph neural network.


Subjects
Algorithms , Neural Networks, Computer , Big Data , Humans , Imaging, Three-Dimensional , Support Vector Machine
7.
BMC Public Health ; 21(1): 2001, 2021 11 04.
Article in English | MEDLINE | ID: mdl-34736445

ABSTRACT

BACKGROUND: As COVID-19 continues to spread globally, traditional emergency management measures face many practical limitations. Big data analysis technology gives local governments an opportunity to conduct COVID-19 epidemic emergency management more scientifically. Based on emergency management lifecycle theory, the present study comprehensively analyzes the application framework of China's 2003 SARS epidemic emergency management, which lacked the support of big data technology. In contrast, this study first proposes a more agile and efficient application framework, supported by big data technology, for COVID-19 epidemic emergency management and then analyses the differences between the two frameworks. METHODS: This study takes Hainan Province, China as its case, using file content analysis and semistructured interviews to systematically understand the strategy and mechanism of Hainan's application of big data technology in its COVID-19 epidemic emergency management. RESULTS: Hainan Province adopted big data technology during the four stages, i.e., mitigation, preparedness, response, and recovery, of its COVID-19 epidemic emergency management. It developed advanced big data management mechanisms and technologies for practical epidemic emergency management, thereby verifying the feasibility and value of the big data technology application framework we propose. CONCLUSIONS: This study provides empirical evidence on the theory, mechanisms, and technology that local governments in different countries and regions can apply, in a precise, agile, and evidence-based manner, when formulating comprehensive big-data-supported COVID-19 epidemic emergency management strategies.


Subjects
COVID-19 , Epidemics , Big Data , China/epidemiology , Humans , Local Government , SARS-CoV-2 , Technology
8.
Comput Intell Neurosci ; 2021: 4334024, 2021.
Article in English | MEDLINE | ID: mdl-34751226

ABSTRACT

The use of computer vision for target detection and recognition has been an interesting and challenging area of research for the past three decades. Professional athletes and sports enthusiasts alike can be trained with appropriate systems for corrective and assistive training, a need that has motivated researchers to combine artificial intelligence with sports. In this paper, we propose a Mask Region-Convolutional Neural Network (MR-CNN)-based method for the image task of yoga movement recognition. The improved MR-CNN model builds on the framework and structure of the region-convolutional network: it proposes a number of candidate regions for the image by feature extraction, classifies them, outputs these regions as detected bounding boxes, and performs mask prediction for the candidate regions using segmentation branches. The improved MR-CNN uses an improved deep residual network as the backbone for feature extraction, applies bilinear interpolation to the extracted candidate regions via Region of Interest (RoI) Align, then performs target classification and detection, and segments the image with the segmentation branch. The model improves the convolution in the segmentation branch by replacing the original standard convolution with a depthwise separable convolution to improve network efficiency. Experimentally constructed polygon-labeled datasets are used to evaluate the algorithm. The deeper network and the depthwise separable convolutions improve detection accuracy while maintaining network reliability, validating the effectiveness of the improved MR-CNN.
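RoI Align, mentioned above, avoids the coordinate quantization of RoI Pooling by sampling each candidate region at fractional positions with bilinear interpolation. The function below shows only that interpolation step on a tiny feature map; the map values and sample points are invented for illustration, not taken from the paper.

```python
def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate feature map `fmap` (a list of rows)
    at fractional coordinates (y, x), as RoI Align does per sample point."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(fmap) - 1)
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    dy, dx = y - y0, x - x0
    # Interpolate along x on the top and bottom rows, then along y.
    top = fmap[y0][x0] * (1 - dx) + fmap[y0][x1] * dx
    bottom = fmap[y1][x0] * (1 - dx) + fmap[y1][x1] * dx
    return top * (1 - dy) + bottom * dy

# 2x2 feature map; the centre point averages all four values.
fmap = [[0.0, 1.0],
        [2.0, 3.0]]
value = bilinear_sample(fmap, 0.5, 0.5)
```

A full RoI Align would average several such samples per output bin; the key point is that no coordinate is ever rounded to the nearest cell.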


Subjects
Artificial Intelligence , Yoga , Big Data , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Reproducibility of Results
9.
BMC Health Serv Res ; 21(1): 1084, 2021 Oct 12.
Article in English | MEDLINE | ID: mdl-34641850

ABSTRACT

BACKGROUND: The spatial allocation of medical resources is closely related to people's health. It is therefore important to evaluate the regional abundance of medical resources and explore the spatial heterogeneity of their allocation. METHODS: Using medical geographic big data, this study analyzed 369 Chinese cities and constructed a medical resource evaluation model based on the grading of medical institutions, using the Delphi method. It evaluated China's medical resources at three levels (economic sectors, economic zones, and provinces) and discussed their spatial clustering patterns. Geographically weighted regression was used to explore correlations between the evaluation results and population and gross domestic product (GDP). RESULTS: The spatial heterogeneity of medical resource allocation in China was significant, with the following general regularities: 1) The abundance and balance of medical resources were typically better in the east than in the west, and in coastal areas than inland. 2) The average primacy ratio of medical resources in Chinese cities, by province, was 2.30. The spatial distribution of medical resources within provinces was unbalanced, with high concentrations in the primate cities. 3) The allocation of medical resources at the provincial level followed a single-growth-pole pattern, supplemented by bipolar circular allocation and balanced allocation patterns. The agglomeration patterns of medical resources in typical cities fell into single-center and balanced development patterns. GDP was highly correlated with the medical evaluation results, while demographic factors showed low correlations. Large cities and their surrounding areas exhibited obvious response characteristics.
CONCLUSIONS: These findings provide policy-relevant guidance for improving the spatial imbalance of medical resources, strengthening regional public health systems, and promoting government coordination efforts for medical resource allocation at different levels to improve the overall functioning of the medical and health service system and bolster its balanced and synergistic development.
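The urban primacy ratio reported above (averaging 2.30 across provinces) is conventionally the largest city's value divided by the second-largest city's. A minimal computation, with invented medical-resource scores for one hypothetical province:

```python
def primacy_ratio(scores):
    """Two-city primacy ratio: largest value over second-largest value."""
    top_two = sorted(scores, reverse=True)[:2]
    return top_two[0] / top_two[1]

# Hypothetical medical-resource scores for the cities of one province.
province = [95.0, 41.0, 30.0, 12.0]
ratio = primacy_ratio(province)
```

A ratio well above 1 indicates that the primate city concentrates resources, matching the single-growth-pole pattern the study describes; a ratio near 1 would indicate balanced allocation.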


Subjects
Big Data , Resource Allocation , Animals , China/epidemiology , Gross Domestic Product , Spatial Analysis
10.
Comput Intell Neurosci ; 2021: 8996673, 2021.
Article in English | MEDLINE | ID: mdl-34712319

ABSTRACT

With the development of medical informatization, data related to the medical field are growing at an amazing speed, giving rise to medical big data. Mining and analyzing these data plays an important role in the prediction, monitoring, diagnosis, and treatment of tumor diseases. This paper therefore proposes a clustering algorithm based on a high-order simulated annealing neural network and uses it to mine tumor-disease-related big data. A training set is constructed from the mined information, and a dimension-reduction model is designed. To address the problem of excessive and erroneous diagnosis and treatment in the diagnosis-and-treatment module of the tumor-disease monitoring mode, a corresponding control mechanism is established to optimize the monitoring mode. The results show that the clustering accuracy of the high-order simulated annealing neural network algorithm on three data sets (Iris, Wine, and Pima Indians Diabetes) is 97.33%, 82.11%, and 70.56%, with execution times of 0.75 s, 0.562 s, and 1.092 s, outperforming both the fast k-medoids algorithm and an improved k-medoids clustering algorithm. In sum, the high-order simulated annealing neural network algorithm achieves a good clustering effect in medical big data mining. The established model M1 can reduce the probability of excessive and erroneous medical treatment and improve the effectiveness of the diagnosis-and-treatment module in the tumor-disease monitoring mode.
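For reference, the k-medoids baselines the paper compares against alternate between assigning each point to its nearest medoid and replacing each medoid with the cluster member that minimizes total dissimilarity. A bare-bones version on one-dimensional toy data (deterministic initialization and the toy values are illustrative choices, not the paper's):

```python
def k_medoids(points, k, iters=20):
    """Naive k-medoids (PAM-style): alternate assignment and medoid update."""
    medoids = points[:k]                     # deterministic init: first k points
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for p in points:                     # assignment step
            nearest = min(medoids, key=lambda m: abs(p - m))
            clusters[nearest].append(p)
        new_medoids = []
        for members in clusters.values():    # update step: pick the member
            best = min(members,              # minimizing total distance
                       key=lambda c: sum(abs(c - q) for q in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break                            # converged
        medoids = new_medoids
    return sorted(medoids)

# Two well-separated groups of readings.
data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
medoids = k_medoids(data, k=2)
```

Unlike k-means, the cluster centers are always actual data points, which makes the method robust to outliers and usable with any dissimilarity measure.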


Subjects
Big Data , Neoplasms , Algorithms , Cluster Analysis , Humans , Neoplasms/diagnosis , Neural Networks, Computer
11.
Comput Intell Neurosci ; 2021: 3250062, 2021.
Article in English | MEDLINE | ID: mdl-34707649

ABSTRACT

People usually use job analysis to understand the personnel characteristics each job requires, use psychological measurement to understand the psychological characteristics of each person, and then place personnel in appropriate positions by matching the two. With the development of the information age, massive and complex data are produced, and accurately extracting the effective data an industry needs from big data is an arduous task. In reality, personnel data are influenced by many factors, and the time series they form are largely accidental and random, often with multilevel and multiscale characteristics. How to use an algorithm or data-processing technology to effectively uncover the rules contained in personnel information data and derive a personnel placement scheme has become an important issue. In this paper, a multilayer variable neural network model for complex big data feature learning is established to optimize the staffing scheme. The learning model is also extended from vector space to tensor space, with the network parameters inverted by a high-order backpropagation algorithm for tensor space. Compared with the traditional multilayer neural network model based on tensor space, the multimodal neural network model learns the characteristics of complex data quickly and accurately and has obvious advantages.


Subjects
Big Data , Data Analysis , Algorithms , Humans , Learning , Neural Networks, Computer
12.
Sensors (Basel) ; 21(20)2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34695927

ABSTRACT

Fault detection and diagnosis (FDD) has received considerable attention with the advent of big data. Many data-driven FDD procedures have been proposed, but most of them may not be accurate when data are missing. This paper therefore proposes an improved random forest (RF) based on decision paths, named DPRF, which uses correction coefficients to compensate for the influence of incomplete data. In the DPRF model, intact training samples are first used to grow all the decision trees in the RF. Then, for each test sample that may contain missing values, the decision paths and the corresponding node importance scores are obtained, so that a reliability score for the sample can be inferred for each tree in the RF. Each decision tree's prediction for the sample is thus assigned a reliability score. The final prediction is obtained by majority voting, combining the predictions with their corresponding reliability scores. To demonstrate the feasibility and effectiveness of the proposed method, the Tennessee Eastman (TE) process is tested. Compared with other FDD methods, the proposed DPRF model performs better on incomplete data.
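The final voting step described above can be sketched as a reliability-weighted majority vote. The votes and scores below are invented for illustration; in DPRF the reliability of each tree's vote is derived from the node importance scores along its decision path under missing values, a computation not reproduced here.

```python
from collections import defaultdict

def weighted_vote(predictions, reliabilities):
    """Combine per-tree predictions, weighting each vote by its
    reliability score, and return the label with the largest total."""
    tally = defaultdict(float)
    for label, weight in zip(predictions, reliabilities):
        tally[label] += weight
    return max(tally, key=tally.get)

# Three trees say "fault" and two say "normal", but the "normal" trees
# saw more of the (incomplete) sample and carry higher reliability.
preds = ["fault", "fault", "fault", "normal", "normal"]
scores = [0.2, 0.3, 0.2, 0.9, 0.8]
decision = weighted_vote(preds, scores)
```

With unit weights this reduces to the plain majority vote of a standard random forest; the reliability scores are what let trees that saw little of an incomplete sample count for less.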


Subjects
Algorithms , Big Data , Chemical Phenomena , Reproducibility of Results
13.
Sensors (Basel) ; 21(20)2021 Oct 14.
Article in English | MEDLINE | ID: mdl-34696058

ABSTRACT

Sensor monitoring networks and advances in big data analytics have guided the reliability engineering landscape into a new era of big machinery data. Low-cost sensors, along with the evolution of the Internet of Things and Industry 4.0, have resulted in rich databases that can be analyzed through prognostics and health management (PHM) frameworks. Several data-driven models (DDMs) have been proposed and applied for diagnostic and prognostic purposes in complex systems. However, many of these models are developed using simulated or experimental data sets, and there is still a knowledge gap for applications in real operating systems. Furthermore, little attention has been given to the required data preprocessing steps compared with the training of these DDMs; to date, research has not followed a formal and consistent data-preprocessing guideline for PHM applications. This paper presents a comprehensive step-by-step pipeline for preprocessing monitoring data from complex systems, aimed at DDMs. The importance of expert knowledge is discussed in the context of data selection and label generation. Two case studies are presented for validation, with the end goal of creating clean data sets with healthy and unhealthy labels that are then used to train machinery health state classifiers.
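A pipeline of the kind described typically chains steps such as outlier removal and scaling before label generation. A highly simplified sketch on a single sensor channel; the IQR filter and min-max scaling are common generic choices, not necessarily the steps the paper prescribes, and the readings are invented:

```python
def iqr_filter(values, k=1.5):
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR] (crude quartiles)."""
    s = sorted(values)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

def min_max(values):
    """Scale values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Raw vibration readings with one obvious sensor glitch (999.0).
raw = [0.9, 1.1, 1.0, 1.2, 999.0, 0.8, 1.05, 0.95]
clean = iqr_filter(raw)
scaled = min_max(clean)
```

In the paper's framing, expert knowledge would then assign healthy/unhealthy labels to the cleaned segments before classifier training.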


Subjects
Big Data , Data Management , Databases, Factual , Prognosis , Reproducibility of Results
14.
Yi Chuan ; 43(10): 924-929, 2021 Oct 20.
Article in English | MEDLINE | ID: mdl-34702704

ABSTRACT

In recent years, with the development of various high-throughput, omics-based biological technologies (BT), biomedical research has entered the era of big data. In the face of high-dimensional, multi-domain, and multi-modal biomedical big data, science requires a new paradigm of data-intensive research. The vigorous development of cutting-edge information technologies (IT) such as cloud computing, blockchain, and artificial intelligence provides the technical means for practising this new paradigm. Here, we describe the application of these cutting-edge information technologies to biomedical big data and offer a forward-looking perspective on building an environment that supports the new paradigm of data-intensive scientific research. We expect to establish a new research scheme and scientific paradigm integrating BT and IT, which can ultimately drive a leap forward in biomedical research.


Subjects
Biomedical Research , Information Technology , Artificial Intelligence , Big Data , Cloud Computing
15.
Yi Chuan ; 43(10): 930-937, 2021 Oct 20.
Article in English | MEDLINE | ID: mdl-34702705

ABSTRACT

With the rapid development of high-throughput sequencing technology and computer science, the volume of omics data has increased exponentially, the advantages of multi-omics analysis have gradually emerged, and the application of artificial intelligence has become ever more extensive. In this review, we introduce recent progress in applying multi-omics data analysis and artificial intelligence in the medical field, and present cases and advantages of their combined application. Finally, we briefly discuss the current challenges of multi-omics analysis and artificial intelligence, in order to provide new research ideas for the medical industry and to promote the development and application of precision medicine.


Subjects
Artificial Intelligence , Big Data , High-Throughput Nucleotide Sequencing , Precision Medicine
16.
Yi Chuan ; 43(10): 949-961, 2021 Oct 20.
Article in English | MEDLINE | ID: mdl-34702707

ABSTRACT

Short tandem repeat (STR) markers have been widely used in forensic paternity testing and individual identification, but STR mutation can affect the interpretation of forensic results. Importantly, STR mutation rates have been underestimated because most similar studies ignore the "hidden" mutation phenomenon. We therefore use Slooten and Ricciardi's restricted mutation model, applied to big data, to obtain more accurate mutation rates for each marker. In this paper, mutations at 20 autosomal STR loci (D3S1358, D1S1656, D13S317, Penta E, D16S539, D18S51, D2S1338, CSF1PO, Penta D, TH01, vWA, D21S11, D6S1043, D7S820, D5S818, TPOX, D8S1179, D12S391, D19S433, and FGA; because the restricted model has no correction factor for D6S1043, mutation rates are calculated for the remaining 19 loci) were investigated in 28,313 confirmed parentage-testing cases (78,739 individuals in total) in the Chinese Han population. In all, 1665 mutations were found across the loci: 1614 one-step, 34 two-step, 8 three-step, and 9 nonintegral mutations. Locus-specific average mutation rates ranged from 0.00007700 (TPOX) to 0.00459050 (FGA) in trios and from 0.00000000 (TPOX) to 0.00344850 (FGA) in duos. We analyzed the relationships between apparent and actual mutation rates, between trio and duo rates, and between paternal and maternal rates. The results demonstrate that actual mutation rates mostly exceed apparent ones, and that the apparent ratio µ1″/µ2″ is commonly greater than the actual ratio µ1/µ2 (where µ1″ and µ1 are the apparent and actual one-step mutation rates, and µ2″ and µ2 the corresponding two-step rates); the "hidden" mutations are thereby identified. In addition, trio and duo rates, and paternal and maternal rates, differ significantly. A comparison of these mutation data with studies of other Han populations in China reveals temporal and regional disparities.
Owing to the large sample size, some rare mutation events, such as monozygotic (MZ) mutation and "fake four-step mutation", are also reported in this study. In conclusion, the estimates of actual mutation rates obtained from big data not only provide basic data for Chinese forensic DNA and population genetics databases, but also have important significance for forensic individual identification, paternity testing, and genetics research.
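At base, a per-locus apparent mutation rate is the number of observed mutations divided by the number of meioses examined; the restricted model then corrects this upward for "hidden" mutations that mimic an inherited allele. The toy computation below shows only the apparent rate; the counts are invented, and the model's correction factors are not reproduced here.

```python
def apparent_mutation_rate(mutations, meioses):
    """Apparent per-meiosis mutation rate for one STR locus."""
    return mutations / meioses

# Hypothetical locus: 130 observed mutations across 28,313 trio cases,
# each trio contributing two scored meioses (paternal and maternal).
rate = apparent_mutation_rate(130, 28313 * 2)
```

The "hidden" correction matters because a mutant allele that happens to match one of the parent's alleles is scored as a normal transmission, so raw counts systematically undercount mutations.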


Subjects
Big Data , Microsatellite Repeats , Gene Frequency , Genetics, Population , Humans , Microsatellite Repeats/genetics , Mutation , Mutation Rate
17.
BMC Med Inform Decis Mak ; 21(1): 289, 2021 10 20.
Article in English | MEDLINE | ID: mdl-34670548

ABSTRACT

BACKGROUND: To describe an automated method for assessing the plausibility of continuous variables collected in electronic health record (EHR) data for use in real-world evidence research. METHODS: The most widely used approach to quality assessment (QA) for continuous variables is to detect implausible numbers using prespecified thresholds. To augment the thresholding method, we developed a score-based method that leverages the longitudinal character of EHR data to detect observations inconsistent with a patient's history. The method was applied to the height and weight data in the EHR from the Million Veteran Program at the Veterans Health Administration (VHA). A validation study was also conducted. RESULTS: On receiver operating characteristic (ROC) metrics, the developed method outperforms the widely used thresholding method. It is also demonstrated that different quality assessment methods have a non-ignorable impact on the body mass index (BMI) classification calculated from height and weight data in the VHA's database. CONCLUSIONS: The score-based method enables automated, scalable detection of problematic data points in health care big data while allowing investigators to select high-quality data according to their needs. Leveraging the longitudinal characteristics of the EHR significantly improves QA performance.
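A score-based check of this kind flags a new measurement that is inconsistent with the patient's own history rather than with a fixed population threshold. A minimal sketch using a z-score against the patient's prior values; the cut-off of 3 standard deviations and the sample heights are illustrative choices, not the paper's scoring rule.

```python
import statistics

def plausibility_flag(history, new_value, cutoff=3.0):
    """Flag `new_value` as implausible if it lies more than `cutoff`
    standard deviations from the patient's own historical mean."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    z = abs(new_value - mean) / sd if sd > 0 else 0.0
    return z > cutoff

# Heights in cm for one adult patient; 108.0 is plausibly a data error.
heights = [178.0, 177.5, 178.2, 177.8]
flagged = plausibility_flag(heights, 108.0)
ok = plausibility_flag(heights, 178.1)
```

A pure threshold method (say, 50-250 cm) would pass 108.0 as a valid height; the longitudinal score catches it because it contradicts this patient's record.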


Subjects
Electronic Health Records , Veterans , Big Data , Data Accuracy , Data Management , Humans
18.
Eur Rev Med Pharmacol Sci ; 25(18): 5865-5870, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34604979

ABSTRACT

OBJECTIVE: Dupilumab (Dupixent®) is a monoclonal antibody that inhibits IL-4 and IL-13 signaling, used for the treatment of allergic diseases. Whilst biologic therapy is traditionally regarded as immunosuppressive and capable of increasing infectious risk, Dupilumab does not display these characteristics and may even be protective in certain cases. We investigated the link between Dupilumab therapy and SARS-CoV-2 infection. MATERIALS AND METHODS: We carried out comprehensive data mining and a disproportionality analysis of the WHO global pharmacovigilance database. One asymptomatic COVID-19 case, 106 cases of symptomatic COVID-19, and 2 cases of severe COVID-19 pneumonia were found. RESULTS: Dupilumab-treated patients were at higher risk of COVID-19 (IC0.25 of 3.05), even though infections were less severe (IC0.25 of -1.71). The risk of developing COVID-19 was significant both among males and females (IC0.25 of 0.24 and 0.58, respectively) and in the 45-64-year age group (IC0.25 of 0.17). CONCLUSIONS: Dupilumab use seems to reduce COVID-19-related severity. Further studies are needed to better understand the immunological mechanisms and clinical implications of these findings. Notably, the heterogeneous nature of the reports and the database structure allowed us to establish not a cause-effect link, but only an epidemiologically decreased risk in the subset of patients treated with Dupilumab.
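The information component (IC) figures quoted are disproportionality measures: roughly, the log2 ratio of the observed to the expected number of reports for the drug-event pair, with IC0.25 denoting the lower bound of its credibility interval. The sketch below shows the commonly used shrunken point estimate, IC = log2((O + 0.5) / (E + 0.5)); the report counts are invented, and the credibility-interval calculation used in the paper is omitted.

```python
import math

def information_component(n_drug_event, n_drug, n_event, n_total):
    """Shrunken IC point estimate for a drug-event pair: log2 of
    observed over expected report counts, each offset by 0.5."""
    expected = n_drug * n_event / n_total   # count expected under independence
    return math.log2((n_drug_event + 0.5) / (expected + 0.5))

# Invented counts: 109 drug-event reports where independence predicts 20.
ic = information_component(n_drug_event=109, n_drug=40_000,
                           n_event=5_000, n_total=10_000_000)
```

A positive IC (with a positive interval lower bound) signals that the pair is reported more often than the drug's and event's overall frequencies would predict; the 0.5 offsets shrink estimates for rare pairs toward zero.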


Subjects
Antibodies, Monoclonal, Humanized/adverse effects , Antibodies, Monoclonal, Humanized/therapeutic use , Big Data , COVID-19/epidemiology , COVID-19/immunology , Adolescent , Adult , Aged , COVID-19/drug therapy , Databases, Factual , Female , Humans , Immunosuppressive Agents/therapeutic use , Male , Middle Aged , Risk Factors , SARS-CoV-2/drug effects , SARS-CoV-2/immunology , Severity of Illness Index , World Health Organization , Young Adult
19.
Nat Commun ; 12(1): 5757, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34599181

ABSTRACT

The large amount of biomedical data derived from wearable sensors, electronic health records, and molecular profiling (e.g., genomics data) is rapidly transforming our healthcare systems. The increasing scale and scope of biomedical data not only generate enormous opportunities for improving health outcomes but also raise new challenges, from data acquisition and storage to data analysis and utilization. To meet these challenges, we developed the Personal Health Dashboard (PHD), which utilizes state-of-the-art security and scalability technologies to provide an end-to-end solution for big biomedical data analytics. The PHD platform is an open-source software framework that can be easily configured and deployed for any big data health project to store, organize, and process complex biomedical data sets, support real-time data analysis at both the individual and cohort levels, and ensure participant privacy at every step. In addition to presenting the system, we illustrate the use of the PHD framework in large-scale applications for emerging multi-omics disease studies, such as the collection and visualization of diverse data types (wearable, clinical, omics) at a personal level, the investigation of insulin resistance, and an infrastructure for the detection of presymptomatic COVID-19.


Subjects
Data Science/methods , Medical Records Systems, Computerized , Big Data , Computer Security , Data Analysis , Health Information Interoperability , Humans , Information Storage and Retrieval , Software
20.
Article in English | MEDLINE | ID: mdl-34639450

ABSTRACT

Coronavirus disease (COVID-19), caused by a recently discovered coronavirus, spreads rapidly from person to person and has proven challenging to detect and cure at an early stage all over the world. Patients showing symptoms of COVID-19 are overcrowding hospitals, which has become a significant challenge. Deep learning's contribution to big data medical research has been enormously beneficial, offering new avenues and possibilities for illness diagnosis techniques. To counteract the COVID-19 outbreak, researchers must create classifiers distinguishing between positive and negative corona X-ray pictures. In this paper, the Apache Spark system is utilized as a big data framework, and a Deep Transfer Learning (DTL) method is applied using three Convolutional Neural Network (CNN) architectures, InceptionV3, ResNet50, and VGG19, on COVID-19 chest X-ray images. In the two-class setting (COVID-19 versus normal X-ray images), all three models reached 100 percent accuracy; in the three-class COVID/normal/pneumonia setting, detection accuracy was 97 percent for InceptionV3 and 98.55 percent for both ResNet50 and VGG19.


Subjects
COVID-19 , Deep Learning , Big Data , Humans , SARS-CoV-2 , X-Rays