2.
Crit Care Clin ; 39(1): 235-242, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36333034

ABSTRACT

In recent years, the volume of digitalized web-based information utilizing modern computer-based technology for data storage, processing, and analysis has grown rapidly. Humans can process a limited number of variables at any given time. Thus, the deluge of clinically useful information in the intensive care unit environment remains untapped. Innovations in machine learning technology with the development of deep neural networks and efficient, cost-effective data archival systems have provided the infrastructure to apply artificial intelligence on big data for determination of clinical events and outcomes. Here, we introduce a few computer-based technologies that have been tested across these domains.


Subjects
Artificial Intelligence , Big Data , Humans , Data Science , Neural Networks, Computer , Machine Learning
3.
Yearb Med Inform ; 31(1): 106-115, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36463867

ABSTRACT

OBJECTIVES: Over the past few years, challenges from the pandemic have led to an explosion of data sharing and algorithmic development efforts in the areas of molecular measurements, clinical data, and digital health. We aim to characterize and describe recent advanced computational approaches in translational bioinformatics across these domains in the context of issues or progress related to equity and inclusion. METHODS: We conducted a literature assessment of the trends and approaches in translational bioinformatics in the past few years. RESULTS: We present a review of recent computational approaches across molecular, clinical, and digital realms. We discuss applications of phenotyping, disease subtype characterization, predictive modeling, biomarker discovery, and treatment selection. We consider these methods and applications through the lens of equity and inclusion in biomedicine. CONCLUSION: Equity and inclusion should be incorporated at every step of translational bioinformatics projects, including project design, data collection, model creation, and clinical implementation. These considerations, coupled with the exciting breakthroughs in big data and machine learning, are pivotal to reach the goals of precision medicine for all.


Subjects
Biomedical Research , Precision Medicine , Computational Biology , Big Data , Machine Learning
4.
Yearb Med Inform ; 31(1): 152-160, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36463873

ABSTRACT

BACKGROUND: Artificial Intelligence (AI) is becoming increasingly important, especially in data-centric fields such as biomedical research and biobanking. However, AI offers not only advantages and promising benefits but also ethical risks and perils. In recent years, there has been growing interest in AI ethics, as reflected by the large body of (scientific) literature dealing with the topic. The main objectives of this review are: (1) to provide an overview of important (upcoming) AI ethics regulations and international recommendations as well as available AI ethics tools and frameworks relevant to biomedical research; (2) to identify what AI ethics can learn from findings in the ethics of traditional biomedical research, in particular ethics in the domain of biobanking; and (3) to provide an overview of the main research questions in the field of AI ethics in biomedical research. METHODS: We adopted a modified thematic review approach focused on understanding the AI ethics aspects relevant to biomedical research. Four scientific literature databases at the intersection of medical, technical, and ethics literature were queried: PubMed, BMC Medical Ethics, IEEE Xplore, and Google Scholar. In addition, a grey literature search was conducted to identify current trends in legislation and standardization. RESULTS: More than 2,500 potentially relevant publications were retrieved through the initial search, and 57 documents were included in the final review. The review found many documents describing high-level principles of AI ethics and some publications describing approaches for making AI ethics more actionable and bridging the principles-to-practice gap. Some ongoing regulatory and standardization initiatives related to AI ethics were also identified. Ethical aspects of AI implementation in biobanks are often similar to those in biomedical research, for example with regard to handling big data or tackling informed consent. The review revealed current 'hot' topics in AI ethics related to biomedical research. Furthermore, the results describe several published tools and methods aiming to support the practical implementation of AI ethics, as well as tools and frameworks specifically addressing complete and transparent reporting of biomedical studies involving AI. CONCLUSIONS: The review results provide a practically useful overview of research strands as well as regulations, guidelines, and tools regarding AI ethics in biomedical research. They also show the need for an ethically mindful and balanced approach to AI in biomedical research, and specifically reveal the need for AI ethics research focused on understanding and resolving the practical problems arising from the use of AI in science and society.


Subjects
Artificial Intelligence , Biomedical Research , Biological Specimen Banks , Big Data , Informed Consent
5.
Yearb Med Inform ; 31(1): 161-164, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36463874

ABSTRACT

OBJECTIVES: To summarize key contributions to current research in the field of Clinical Research Informatics (CRI) and to select the best papers published in 2021. METHOD: Using PubMed, we performed a bibliographic search combining MeSH descriptors and free-text terms on CRI, followed by a double-blind review to select a list of candidate best papers for peer review by external reviewers. After peer-review ranking, the three section editors held a consensus meeting with the editorial team to finalize the selection of the three best papers. RESULTS: Among the 1,096 papers (published in 2021) returned by the search and in scope for the various areas of CRI, the full review process selected three best papers. The first describes an operational and scalable framework for generating EHR datasets based on a detailed clinical model, with an application in the domain of the COVID-19 pandemic. The authors of the second present a secure and scalable platform for preprocessing biomedical data for deep data-driven health management, applied to the detection of pre-symptomatic COVID-19 cases and to the biological characterization of insulin-resistance heterogeneity. The third contributes to the integration of care and research activities through the REDCap Clinical Data and Interoperability Services (CDIS) module, improving the accuracy and efficiency of data collection. CONCLUSIONS: The COVID-19 pandemic is still significantly stimulating research in the CRI field aimed at deeply and broadly improving the conduct of real-world studies and at optimizing clinical trials, whose duration and cost are constantly increasing. The current health crisis highlights the need for healthcare institutions to continue developing and deploying Big Data spaces, to strengthen their expertise in data science, and to implement efficient data quality evaluation and improvement programs.


Subjects
COVID-19 , Medical Informatics , Humans , Pandemics , Big Data , Data Collection
6.
Comput Intell Neurosci ; 2022: 5231262, 2022.
Article in English | MEDLINE | ID: mdl-36458231

ABSTRACT

With the increasing complexity of users' needs and the growing uncertainty of individual web services in big data environments, service composition becomes more and more difficult. To improve the solution accuracy and computing speed of the constrained optimization model, we propose several improvements to ant colony optimization (ACO) and its calculation strategy. We introduce the beetle antennae search (BAS) strategy to avoid falling into local optima, and we propose a service composition method based on a fused beetle-ant colony optimization algorithm (Be-ACO). The model first generates a search subspace for the ant colony through the beetle antennae search strategy and optimizes the service set by traversing the subspace with the ant colony algorithm. The beetle antennae search strategy then repeatedly generates the next search subspace over the global scope for the ant colony to traverse, until the method finally converges to the global optimum. The experimental results show that, compared with traditional optimization methods, the proposed method greatly improves combination optimization convergence and solution accuracy.
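The abstract above does not give Be-ACO's update rules, but the beetle antennae search step it builds on is easy to sketch. The minimal implementation below is an illustrative assumption, not the paper's method (the function name `bas_minimize` and all parameter values are invented): it probes the objective at two "antennae" along a random unit direction, steps toward the better side, and keeps the best point seen.

```python
import math
import random

def bas_minimize(f, x0, steps=200, d=1.0, delta=1.0, decay=0.95):
    """Minimal beetle antennae search (BAS): probe the objective at two
    antennae along a random unit direction and step toward the better one."""
    x = list(x0)
    best_x, best_val = x, f(x)
    for _ in range(steps):
        b = [random.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(v * v for v in b)) or 1.0
        b = [v / norm for v in b]                     # random unit direction
        left = [xi + d * bi for xi, bi in zip(x, b)]  # the two antenna probes
        right = [xi - d * bi for xi, bi in zip(x, b)]
        sign = 1.0 if f(left) < f(right) else -1.0
        x = [xi + delta * sign * bi for xi, bi in zip(x, b)]
        if f(x) < best_val:                           # keep the best point seen
            best_x, best_val = x, f(x)
        d *= decay       # shrink antennae and step length over time
        delta *= decay
    return best_x, best_val

random.seed(0)
# Minimize the sphere function from a poor start; the optimum is the origin.
best_x, best_val = bas_minimize(lambda p: sum(v * v for v in p), [3.0, -2.0])
```

In Be-ACO, a step like this would only propose subspaces for the ant colony to traverse; as a standalone optimizer, BAS is a cheap local search.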


Subjects
Algorithms , Big Data , Uncertainty
8.
J Popul Ther Clin Pharmacol ; 29(4): e107-e115, 2022.
Article in English | MEDLINE | ID: mdl-36441048

ABSTRACT

In the era of technological trends, big data has been broadly applied across diverse industries, especially healthcare. The enormous amount of data now available has opened new opportunities in health care: big data has the potential to raise healthcare to a higher level and can effectively reduce healthcare problems such as selecting the appropriate treatment and improving the healthcare system. Big data has six defining attributes: volume, variety, velocity, veracity, variability and complexity, and value. Together, these represent a range of possibilities for improving the performance of healthcare. Big data analytics is used to extract valuable knowledge from all types of healthcare sources, which can then be exploited to make better decisions in healthcare, and it can enhance healthcare by discovering associations, patterns, and trends in clinical data. Cardiovascular disease datasets are an example of big data in healthcare; they help document the clinical facts that must be analyzed to offer effective solutions to problems in health care. This paper applies big data analytics to clinical records of cardiovascular disease to provide convincing answers to problems in healthcare and to show how essential big data is to healthcare.


Subjects
Big Data , Cardiovascular Diseases , Humans , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/therapy , Commerce , Delivery of Health Care
9.
PLoS One ; 17(11): e0277660, 2022.
Article in English | MEDLINE | ID: mdl-36441767

ABSTRACT

OBJECTIVES: This study aimed to characterize verbal fluency performance in monolinguals and bilinguals using data from the Canadian Longitudinal Study on Aging (CLSA). METHODS: A large sample of adults aged 45-85 (n = 12,875) completed a one-minute animal fluency task in English. Participants were English-speaking monolinguals (n = 9,759), bilinguals who spoke English as their first language (L1 bilinguals, n = 1,836), and bilinguals who spoke English as their second language (L2 bilinguals, n = 1,280). Using a distributional modeling approach to quantify the semantic similarity of words, we examined the impact of word frequency and pairwise semantic similarity on performance on this task. RESULTS: Overall, L1 bilinguals outperformed monolinguals on the verbal fluency task: they produced more items, and these items were of lower average frequency and semantic similarity. Monolinguals in turn outperformed L2 bilinguals on these measures. The results held across age groups, education levels, and income levels. DISCUSSION: These results demonstrate an advantage for bilinguals over monolinguals on a category fluency task performed in the first language, indicating that, at least in the CLSA sample, bilinguals have superior semantic search capabilities in their first language compared with monolingual speakers of that language.
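The CLSA analysis above relies on pairwise semantic similarity from a distributional model. As a toy illustration only (the context profiles below are invented, not CLSA data or the study's actual model), words can be represented as vectors of context-word counts, with similarity measured by the cosine between those vectors:

```python
import math
from collections import Counter

def vectorize(contexts):
    """Toy distributional vector: counts of the words a term co-occurs with."""
    return Counter(contexts)

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical context profiles for three animal names from a fluency list.
cat = vectorize(["pet", "fur", "meow", "pet"])
dog = vectorize(["pet", "fur", "bark", "pet"])
trout = vectorize(["fish", "river", "water"])

# "cat" shares contexts with "dog" but none with "trout",
# so cosine(cat, dog) is high while cosine(cat, trout) is zero.
cat_dog = cosine(cat, dog)
cat_trout = cosine(cat, trout)
```

Averaging such pairwise similarities over consecutive items in a fluency list gives the kind of semantic-clustering measure the study compares across language groups.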


Subjects
Big Data , Semantics , Animals , Longitudinal Studies , Canada , Aging
10.
Comput Biol Med ; 151(Pt A): 106245, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36335809

ABSTRACT

Predicting a patient's future health status from historical temporal Electronic Health Records (EHRs) is an important research task in the field of medical big data. Most existing deep learning-based medical prediction methods focus only on the patient's individual information. However, owing to the sparseness and low quality of EHR data, the individual clinical records of a single patient often cannot provide complete health information, which severely limits the accuracy of prediction models. In this paper, we propose a Multi-graph attEntive Representation learning framework integrating Group information from similar patiEnts (MERGE) for medical prediction. In this framework, while the individual representation learning module captures the individual patient's temporal characteristics, the group representation learning module learns group representations of similar patients from different aspects as a supplement, thereby effectively improving the accuracy of patient representations. We evaluate our method on the MIMIC-III dataset for the task of in-hospital mortality prediction and on the Xiangya dataset for cardiovascular disease (CVD) prediction. The experimental results show that MERGE outperforms the state-of-the-art methods.
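MERGE's actual group module is a multi-graph attention network; purely as intuition for why group information helps, the sketch below (all names, vectors, and the nearest-neighbor averaging are hypothetical simplifications, not the paper's architecture) augments a sparse patient vector with the mean vector of its most similar cohort members:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def group_augmented(patient, cohort, k=2):
    """Concatenate a patient's own features with the mean features of the
    k most similar cohort patients, as a crude stand-in for a learned
    group representation."""
    ranked = sorted(cohort, key=lambda p: cosine(patient, p), reverse=True)[:k]
    group = [sum(col) / len(ranked) for col in zip(*ranked)]
    return patient + group  # individual features followed by group features

# Hypothetical 2-feature vectors; the first two cohort patients resemble ours.
rep = group_augmented([1.0, 0.0], [[0.9, 0.1], [1.0, 0.2], [0.0, 1.0]])
```

A downstream classifier sees both the (possibly incomplete) individual signal and the smoother group signal, which is the gap MERGE's group module is designed to fill.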


Subjects
Big Data , Electronic Health Records , Humans , Forecasting
11.
J Med Internet Res ; 24(11): e33166, 2022 11 08.
Article in English | MEDLINE | ID: mdl-36346659

ABSTRACT

BACKGROUND: Topic modeling approaches allow researchers to analyze and represent written texts. One of the approaches commonly used in psychology is latent Dirichlet allocation (LDA), which can rapidly synthesize patterns of text within "big data"; however, its outputs are sensitive to decisions made during the analytic pipeline, and it may not be suitable for certain scenarios such as short texts, for which we highlight resources on alternative approaches. This review focuses on the complex analytical practices specific to LDA, which existing practical guides for training LDA models have not addressed. OBJECTIVE: This scoping review used the key analytical steps (data selection, data preprocessing, and data analysis) as a framework to understand the methodological approaches used in psychology research involving LDA. METHODS: A total of 4 psychology and health databases were searched. Studies were included if they used LDA to analyze written words and focused on a psychological construct or issue. The data charting process was constructed and employed based on the common data selection, preprocessing, and data analysis steps. RESULTS: A total of 68 studies were included. These studies explored a range of research areas and mostly sourced their data from social media platforms. Although some studies reported the preprocessing and data analysis steps taken, most did not provide sufficient detail for reproducibility. The charting also revealed ongoing debate about the necessity of certain preprocessing and data analysis steps. CONCLUSIONS: Our findings highlight the growing use of LDA in psychological science. However, there is a need to improve analytical reporting standards and to identify comprehensive, evidence-based best practice recommendations. To work toward this, we developed an LDA Preferred Reporting Checklist that will allow consistent documentation of LDA analytic decisions and reproducible research outcomes.
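The preprocessing decisions this review asks authors to report can be made concrete with a short sketch. The pipeline below is a generic example (the stopword list, the regex tokenizer, and the `min_df=2` threshold are arbitrary assumptions, not recommendations from the review): each choice visibly changes what the LDA model will see, which is exactly why the review argues they must be documented.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def preprocess(docs, min_df=2):
    """Common pre-LDA steps whose settings should be reported:
    lowercasing, tokenization, stopword removal, and a minimum
    document-frequency cutoff for the vocabulary."""
    tokenized = [
        [t for t in re.findall(r"[a-z]+", d.lower()) if t not in STOPWORDS]
        for d in docs
    ]
    # Document frequency: in how many documents does each term appear?
    df = Counter(t for doc in tokenized for t in set(doc))
    vocab = {t for t, n in df.items() if n >= min_df}
    return [[t for t in doc if t in vocab] for doc in tokenized]

docs = [
    "Sleep quality and anxiety in students",
    "Anxiety disorders and sleep loss",
    "The stock market is volatile",
]
corpus = preprocess(docs)
# Only "sleep" and "anxiety" survive the min_df cutoff, and the third
# (off-topic) document becomes empty -- a short-text failure mode the
# review warns about.
```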


Subjects
Big Data , Documentation , Humans , Reproducibility of Results , Databases, Factual
12.
Transl Vis Sci Technol ; 11(11): 20, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36441131

ABSTRACT

Purpose: To describe the processing methods and characteristics of an open dataset of clinical notes from the electronic health record (EHR) annotated for glaucoma medications. Methods: In this study, 480 clinical notes from office visits, along with medical record numbers (MRNs), visit identification numbers, provider names, and billing codes, were extracted for 480 patients seen for glaucoma by a comprehensive or glaucoma ophthalmologist from January 1, 2019, to August 31, 2020. MRNs and all visit data were de-identified using a salted hash function from the deidentifyr package. All progress notes were annotated for glaucoma medication name, route, frequency, dosage, and drug use using an open-source annotation tool, Doccano. Annotations were saved separately. All protected health information (PHI) in the progress notes and annotated files was de-identified using the published de-identification algorithm Philter. All progress notes and annotations were manually validated by two ophthalmologists to ensure complete de-identification. Results: The final dataset contained 5520 annotated sentences, with and without medications, across the 480 clinical notes. Manual validation revealed 10 instances of remaining PHI, which were manually corrected. Conclusions: Annotated free-text clinical notes can be de-identified for upload as an open dataset. As data availability increases with the adoption of EHRs, free-text open datasets will become increasingly valuable for "big data" research and artificial intelligence development. This dataset is published online and publicly available at https://github.com/jche253/Glaucoma_Med_Dataset. Translational Relevance: This open-access medication dataset may serve as a source of raw data for future big data and artificial intelligence research using free text.
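The salted-hash step above (deidentifyr is an R package) can be illustrated in a few lines. This Python sketch shows the general idea only, not that package's exact algorithm; the salt value and the 16-character truncation are illustrative assumptions. Hashing with a secret salt yields a stable pseudonymous ID, so records for the same patient remain linkable without exposing the MRN.

```python
import hashlib

SALT = "example-salt"  # in practice: a secret random value stored separately

def deidentify(mrn: str, salt: str = SALT) -> str:
    """Map an MRN to a stable pseudonymous ID via a salted SHA-256 hash."""
    return hashlib.sha256((salt + mrn).encode("utf-8")).hexdigest()[:16]

pid = deidentify("12345678")  # same MRN + same salt -> same pseudonym
```

Without the salt, an attacker could recompute hashes of guessed MRNs and re-identify patients; changing the salt changes every pseudonym, which is why the salt itself must be protected.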


Subjects
Electronic Health Records , Glaucoma , Humans , Artificial Intelligence , Glaucoma/drug therapy , Glaucoma/epidemiology , Big Data , Records
13.
Comput Intell Neurosci ; 2022: 8216522, 2022.
Article in English | MEDLINE | ID: mdl-36444310

ABSTRACT

With the expansion of scientific and technological capabilities, global integration has strengthened further and relations between countries have become ever closer, so foreign affairs translation systems play a very important role, and many scientific and technological projects have researched and analyzed them. Today, the wide variety of data and the complexity of the world's languages force the processing architecture of foreign affairs translation systems to change in order to adapt to the development of big data. In this context, this article studies a foreign affairs translation system based on big data mining technology and designs the application of a new foreign affairs translation system model. The results of the work are as follows: (1) The state of development of big data mining technology and the problems of current foreign affairs translation systems are analyzed, and the research direction of the experiment is determined; the foreign affairs translation system is analyzed in terms of big data mining technology, which provides the technical foundation for this research. (2) Building on traditional foreign affairs translation systems, this article uses big data mining algorithms, the fuzzy c-means clustering algorithm, and the BP neural network algorithm to identify and analyze problems in the foreign affairs translation model's data analysis capability, quickly and accurately diagnosing the system's problems and optimizing and improving it according to the specific problems found.


Subjects
Big Data , Data Mining , Translations , Data Analysis , Cluster Analysis
14.
Zhonghua Liu Xing Bing Xue Za Zhi ; 43(11): 1835-1841, 2022 Nov 10.
Article in Chinese | MEDLINE | ID: mdl-36444470

ABSTRACT

With the promotion and application of big medical data, non-interventional real-world evidence (RWE) has been used by regulators to assess the effectiveness of medical products. This paper briefly introduces the latest progress and research results of the RCT DUPLICATE Initiative launched by the research team of Harvard University in 2018 and summarizes relevant research experience based on the characteristics of China's medical service to provide inspiration and reference for domestic scholars to conduct related RWE research in the future.


Subjects
Big Data , Cognition , Humans , Randomized Controlled Trials as Topic , Universities
15.
Article in English | MEDLINE | ID: mdl-36429980

ABSTRACT

Dengue fever is an acute mosquito-borne disease that mostly spreads within urban or semi-urban areas in warm climate zones. The dengue-related risk map is one of the most practical tools for executing effective control policies, breaking the transmission chain, and preventing disease outbreaks. Mapping risk at a small scale, such as at an urban level, can demonstrate the spatial heterogeneities in complicated built environments. This review aims to summarize state-of-the-art modeling methods and influential factors in mapping dengue fever risk in urban settings. Data were manually extracted from five major academic search databases following a set of querying and selection criteria, and a total of 28 studies were analyzed. Twenty of the selected papers investigated the spatial pattern of dengue risk by epidemic data, whereas the remaining eight papers developed an entomological risk map as a proxy for potential dengue burden in cities or agglomerated urban regions. The key findings included: (1) Big data sources and emerging data-mining techniques are innovatively employed for detecting hot spots of dengue-related burden in the urban context; (2) Bayesian approaches and machine learning algorithms have become more popular as spatial modeling tools for predicting the distribution of dengue incidence and mosquito presence; (3) Climatic and built environmental variables are the most common factors in making predictions, though the effects of these factors vary with the mosquito species; (4) Socio-economic data may be a better representation of the huge heterogeneity of risk or vulnerability spatial distribution on an urban scale. In conclusion, for spatially assessing dengue-related risk in an urban context, data availability and the purpose for mapping determine the analytical approaches and modeling methods used. 
To enhance the reliability of predictive models, sufficient data on dengue serotypes, socio-economic status, and spatial connectivity may be especially important for mapping dengue-related risk in urban settings in future studies.
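Among the Bayesian approaches the review finds popular for small-area risk, one of the simplest is beta-binomial shrinkage of district-level incidence. The sketch below is a minimal illustration only (the district names, case counts, and the Beta(1, 99) prior are invented, and real dengue mapping models are far richer, with spatial and climatic covariates): a district with few residents gets its noisy raw rate pulled toward the prior mean.

```python
def posterior_mean(cases, population, alpha=1.0, beta=99.0):
    """Posterior mean risk under a Beta(alpha, beta) prior with binomial
    case counts: (alpha + cases) / (alpha + beta + population)."""
    return (alpha + cases) / (alpha + beta + population)

# District A: 3 cases among 100 people (noisy); district B: 30 among 10,000.
districts = {"A": (3, 100), "B": (30, 10_000)}
risk = {name: posterior_mean(c, n) for name, (c, n) in districts.items()}
# A's raw rate 0.03 shrinks toward the prior mean 0.01, giving 0.02;
# B's large sample barely moves from its raw rate 0.003.
```

This kind of shrinkage is one reason Bayesian maps are less dominated by unstable hot spots in sparsely populated districts.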


Subjects
Advance Directives , Dengue , Animals , Bayes Theorem , Algorithms , Big Data , Dengue/epidemiology
16.
Article in English | MEDLINE | ID: mdl-36430048

ABSTRACT

The quality of medical care is closely related to a city's development and popularity, and it also drives the development of tourism. A smart urban medical system based on big data analysis technology can greatly facilitate people's lives and increase the flow of people into the city, which is of great significance to the city's tourism image dissemination and branding. The medical system is designed on big data analysis technology with an eight-layer architecture, including access, medical cloud service governance, medical cloud service resources, the platform's public services, the platform's runtime services, infrastructure, and the platform's overall security and monitoring system. Chengdu is taken as an example for positioning the dissemination of an urban tourism image on the basis of big data analysis technology. Quantitative analysis and questionnaires are used to study the measurement of the urban smart medical system and the positioning of tourism image communication based on big data analysis technology. The results show that the smart medical cloud service platform of the urban smart medical system, as a public information service system, supports users in obtaining medical services through various terminal devices without geographical restrictions. Compared with traditional isolated medical service systems, the smart medical cloud realizes service aggregation and data sharing, and using cloud computing as the technical basis gives the system unprecedented scalability and reliability. This paper discusses how to effectively absorb, understand, and use tools in the big data environment, extract information from data, find effective information, make image communication activities accurate, reduce costs, and improve the efficiency of city image communication.
The research shows that big data analysis technology improves patients' medical experience, improves medical efficiency, and to a certain extent alleviates the pressure on urban medical resource allocation. The technology improves people's satisfaction with the dissemination of urban tourism images, makes dissemination activities accurate, reduces their cost, and improves their efficiency. The combination of the two can provide a reference for developing urban smart medical care and disseminating a tourism image.


Subjects
Medical Tourism , Humans , Tourism , Data Analysis , Big Data , Reproducibility of Results , Technology
17.
OMICS ; 26(11): 589-593, 2022 11.
Article in English | MEDLINE | ID: mdl-36374252

ABSTRACT

Big data and data deluge are topics that are well known in the field of systems science. Digital transformation of big data and omics fields is also underway at present. These changes are impacting life sciences broadly, and high-throughput omics inquiries specifically. On the other hand, digital transformation also calls for rethinking citizenship and moving toward critically informed digital citizenship. Past approaches to digital citizenship have tended to frame the digital health issues narrowly, around technocracy, digital literacy, and technical competence in deployment and use of digital technologies. However, digital citizenship also calls for questioning the means and ends of digital transformation, the frames in which knowledge is produced in the current era. In this context, Industry 4.0 has been one of the innovation frameworks for automation through big data, and embedded sensors connected by wireless communication. Industry 4.0 and the attendant "smart" technologies relate to various automation approaches deployed as part of the public health responses to the COVID-19 pandemic as well. This article argues that there is a growing need to steer digital transformation toward critically informed digital citizenship, so that the provenance of digital data and knowledge is held to account from scientific design to implementation science, whether they concern academic or Industry 4.0 paradigms of innovation. There are enormous potentials and expectations from digital transformation in an era of COVID-19 and digital health. For this potential to materialize in ways that are efficient, democratic, and socially just, critical digital citizenship offers new ways forward. Systems science scholarship stands to benefit from a broadening of the focus on high-throughput omics technologies to a realm of critical digital citizenship, so the digital health innovations are well situated in their societal and political contexts.


Subjects
Big Data , COVID-19 , Humans , Pandemics , Citizenship , Industries
19.
Nefrología (Madrid) ; 42(6): 680-687, Nov-Dec 2022. illus, tab
Article in Spanish | IBECS | ID: ibc-LC-253

ABSTRACT

Background: A huge amount of clinical data is generated daily and is usually filed in clinical reports as natural language. Data extraction and further analysis require reading and manually reviewing each report, which is a time-consuming process. We set up this pilot study to test folksonomy as a way to quickly obtain and analyze the information contained in medical reports. Methods and objectives: We used folksonomy to quickly obtain and analyse data from 1,631 discharge clinical reports from the Nephrology Department of Hospital del Mar, without the need to create a structured database. Results: After posing questions related to daily clinical practice (hypoglycaemic drugs used in diabetic patients, antihypertensive drugs and the use of renin-angiotensin blockers during hospitalisation in the nephrology department, and data related to the emotional environment of patients with chronic kidney disease), this tool allowed the conversion of unstructured information in natural language into a structured pool of data for further analysis. Conclusions: Folksonomy allows the conversion of the information contained in clinical reports as natural language into a pool of structured data that can easily be analysed without the classical manual review of the reports. (AU)
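The core of a folksonomy-style pipeline can be sketched briefly. The example below is a generic illustration, not the tool used in this study (the keyword dictionary and all tag names are invented): free-text discharge reports are scanned for community-curated keywords, and each match emits a structured (category, tag) pair that can later be aggregated and queried without manual review.

```python
# Hypothetical community-curated tag dictionary: keyword -> (category, tag).
TAGS = {
    "metformin": ("diabetes", "hypoglycaemic drug"),
    "insulin": ("diabetes", "hypoglycaemic drug"),
    "enalapril": ("hypertension", "RAS blocker"),
    "anxiety": ("emotional", "symptom"),
}

def tag_report(text):
    """Folksonomy-style tagging: scan free text for known keywords and
    emit the corresponding (category, tag) pairs, deduplicated."""
    found = set()
    low = text.lower()
    for word, tag in TAGS.items():
        if word in low:
            found.add(tag)
    return sorted(found)

tags = tag_report("Patient on metformin and enalapril; reports anxiety.")
```

Aggregating such tags across all 1,631 reports is what lets questions like "which hypoglycaemic drugs were used?" be answered without rereading each document; a production system would also need negation handling and synonym lists.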


Subjects
Humans , Big Data , Nephrology , Natural Language Processing , Classification , Algorithms
20.
J Am Soc Echocardiogr ; 35(11): A7, 2022 11.
Article in English | MEDLINE | ID: mdl-36336396

Subjects
Big Data , Commerce , Humans