Results 1 - 20 of 3,172
1.
Biol Pharm Bull ; 47(10): 1594-1599, 2024.
Article in English | MEDLINE | ID: mdl-39358238

ABSTRACT

To conduct clinical pharmacy research, we often face the limitations of conventional statistical methods and single-center observational studies. To overcome these issues, we have conducted data-driven research using machine learning methods and medical big data. Decision tree analysis, one of the typical machine learning methods, has a flowchart-like structure that allows users to easily and quantitatively evaluate the occurrence percentage of events arising from combinations of multiple factors by answering related questions with Yes or No. Using this feature, we first developed a risk prediction model for acute kidney injury caused by vancomycin, a condition we frequently encounter in clinical practice. Additionally, by replacing the prediction target from a binary variable (i.e., presence or absence of adverse drug reactions) with a continuous variable (i.e., drug dosage), we built a model to estimate the initial dose of vancomycin required to reach the optimal blood level recommended by guidelines. We found its accuracy to be better than that of conventional dose-setting algorithms. Moreover, employing Japanese medical big data such as claims databases helped us overcome major limitations of conventional clinical pharmacy research, such as the institutional bias of single-center studies. We demonstrated that the combined use of machine learning and medical big data can generate high-quality evidence leveraging the strengths of each approach. Data-driven clinical pharmacy research using machine learning and medical big data has enabled researchers to surpass the limitations of conventional research and produce clinically valuable findings.


Subjects
Big Data, Machine Learning, Humans, Pharmacy Research/methods, Vancomycin/adverse effects, Decision Trees
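The flowchart-like structure described above lends itself to a compact illustration. The sketch below mimics a yes/no decision tree for vancomycin-associated acute kidney injury risk; every threshold and percentage is a made-up placeholder for illustration, not a value from the study.

```python
# Toy decision tree in the flowchart style described in the abstract:
# each node asks a yes/no question about a patient factor and each leaf
# returns an event-occurrence percentage. All numbers are hypothetical.

def aki_risk_percent(age: int, crcl: float, concomitant_nephrotoxin: bool) -> float:
    """Walk a toy decision tree and return an illustrative AKI risk (%)."""
    if crcl < 60:                      # Q1: impaired creatinine clearance?
        if concomitant_nephrotoxin:    # Q2: another nephrotoxic drug on board?
            return 35.0
        return 18.0
    if age >= 75:                      # Q3: advanced age?
        return 12.0
    return 5.0
```

Swapping the leaf values from percentages to dosages turns the same structure into the dose-estimation variant the abstract describes.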
2.
N C Med J ; 85(1): 20-24, 2024.
Article in English | MEDLINE | ID: mdl-39359617

ABSTRACT

Cancer is the second leading cause of death in North Carolina and approximately half of cancers are diagnosed in older adults (≥65 years). Cancer clinical trials in older adults are limited and there is a lack of evidence on optimal care strategies in this population. We highlight how big data can fill in gaps in geriatric oncology research.


Subjects
Big Data, Geriatrics, Medical Oncology, Neoplasms, Humans, Aged, North Carolina/epidemiology, Neoplasms/therapy, Biomedical Research
3.
Nature ; 634(8032): 7, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39358532
5.
BMC Med Inform Decis Mak ; 24(1): 252, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267022

ABSTRACT

This paper explores the potential of artificial intelligence, machine learning, and big data analytics in revolutionizing infection control. It addresses the challenges and innovative approaches in combating infectious diseases and antimicrobial resistance, emphasizing the critical role of interdisciplinary collaboration, ethical data practices, and integration of advanced computational tools in modern healthcare.


Subjects
Artificial Intelligence, Infection Control, Machine Learning, Humans, Infection Control/methods, Big Data
6.
Zhonghua Liu Xing Bing Xue Za Zhi ; 45(9): 1321-1326, 2024 Sep 10.
Article in Chinese | MEDLINE | ID: mdl-39307708

ABSTRACT

Population-based health data collection and analysis are important in epidemiological research. In recent years, with the rapid development of big data, the Internet and cloud computing, artificial intelligence has gradually attracted the attention of epidemiological researchers. More and more researchers are trying to use artificial intelligence algorithms for genome sequencing and medical image data mining, as well as for disease diagnosis, risk prediction and other tasks. Machine learning, a branch of artificial intelligence, has been widely used in epidemiological research in recent years. This paper summarizes the key fields and progress in the application of machine learning in epidemiology, reviews the development history of machine learning, and analyzes classic cases and current challenges in its epidemiological application. It also introduces current application scenarios and future development trends of machine learning and artificial intelligence algorithms, with the aim of better exploring the epidemiological research value of massive medical and health data in China.


Subjects
Machine Learning, Humans, China/epidemiology, Artificial Intelligence, Data Mining/methods, Algorithms, Big Data, Epidemiology
7.
Age Ageing ; 53(9), 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39311424

ABSTRACT

Machine learning (ML) and prediction modelling have become increasingly influential in healthcare, providing critical insights and supporting clinical decisions, particularly in the age of big data. This paper serves as an introductory guide for health researchers and readers interested in prediction modelling, covering all aspects of the development, assessment and reporting of a model using ML. The paper starts with the importance of prediction modelling for precision medicine. It outlines different types of prediction and machine learning approaches, including supervised, unsupervised and semi-supervised learning, and provides an overview of popular algorithms for various outcomes and settings. It also introduces key theoretical ML concepts. The importance of data quality, preprocessing and unbiased model performance evaluation is highlighted. Concepts of apparent, internal and external validation are introduced, along with metrics for discrimination and calibration for different types of outcomes. Additionally, the paper addresses model interpretation, fairness and implementation in clinical practice. Finally, the paper provides recommendations for reporting and identifies common pitfalls in prediction modelling and machine learning. The aim of the paper is to help readers understand and critically evaluate research papers that present ML models, and to serve as a first guide for developing, assessing and implementing their own.


Subjects
Health Services Research, Machine Learning, Humans, Aged, Precision Medicine/methods, Big Data
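The discrimination and calibration metrics mentioned in the entry above can be illustrated with a minimal sketch: a pairwise c-statistic (AUC) and a calibration-in-the-large check, both written in plain Python for clarity rather than efficiency.

```python
from itertools import product

def auc(y_true, y_score):
    """C-statistic: probability a random positive case outranks a random negative one
    (ties count as half), computed by brute-force pairwise comparison."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    pairs = list(product(pos, neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

def calibration_in_the_large(y_true, y_score):
    """Mean predicted risk minus observed event rate; 0 means calibrated on average."""
    return sum(y_score) / len(y_score) - sum(y_true) / len(y_true)
```

A model can discriminate well yet be badly calibrated (or vice versa), which is why the paper treats the two families of metrics separately.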
8.
J Exp Biol ; 227(18), 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39287119

ABSTRACT

JEB has broadened its scope to include non-hypothesis-led research. In this Perspective, based on our lab's lived experience, I argue that this is excellent news, because truly novel insights can occur from 'blue skies' idea-led experiments. Hypothesis-led and hypothesis-free experimentation are not philosophically antagonistic; rather, the latter can provide a short-cut to an unbiased view of organism function, and is intrinsically hypothesis generating. Insights derived from hypothesis-free research are commonly obtained by the generation and analysis of big datasets - for example, by genetic screens - or from omics-led approaches (notably transcriptomics). Furthermore, meta-analyses of existing datasets can also provide a lower-cost means to formulating new hypotheses, specifically if researchers take advantage of the FAIR principles (findability, accessibility, interoperability and reusability) to access relevant, publicly available datasets. The broadened scope will thus bring new, original work and novel insights to our journal, by expanding the range of fundamental questions that can be asked.


Subjects
Big Data
10.
Proc Natl Acad Sci U S A ; 121(39): e2402387121, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39288180

ABSTRACT

New data sources and AI methods for extracting information are increasingly abundant and relevant to decision-making across societal applications. A notable example is street view imagery, available in over 100 countries, and purported to inform built environment interventions (e.g., adding sidewalks) for community health outcomes. However, biases can arise when decision-making does not account for data robustness or relies on spurious correlations. To investigate this risk, we analyzed 2.02 million Google Street View (GSV) images alongside health, demographic, and socioeconomic data from New York City. Findings demonstrate robustness challenges; built environment characteristics inferred from GSV labels at the intracity level often do not align with ground truth. Moreover, as average individual-level behavior of physical inactivity significantly mediates the impact of built environment features by census tract, intervention on features measured by GSV would be misestimated without proper model specification and consideration of this mediation mechanism. Using a causal framework accounting for these mediators, we determined that intervening by improving 10% of samples in the two lowest tertiles of physical inactivity would lead to a 4.17 (95% CI 3.84-4.55) or 17.2 (95% CI 14.4-21.3) times greater decrease in the prevalence of obesity or diabetes, respectively, compared to the same proportional intervention on the number of crosswalks by census tract. This study highlights critical issues of robustness and model specification in using emergent data sources, showing the data may not measure what is intended, and ignoring mediators can result in biased intervention effect estimates.


Subjects
Big Data, Decision Making, Public Health, Humans, New York City, Built Environment, Male, Female
11.
Curr Opin Ophthalmol ; 35(6): 431-437, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-39259650

ABSTRACT

PURPOSE OF REVIEW: Patient privacy protection is a critical focus in medical practice. Advances over the past decade in big data have led to the digitization of medical records, making medical data increasingly accessible through frequent data sharing and online communication. Periocular features, iris, and fundus images all contain biometric characteristics of patients, making privacy protection in ophthalmology particularly important. Consequently, privacy-preserving technologies have emerged, and are reviewed in this study. RECENT FINDINGS: Recent findings indicate that general medical privacy-preserving technologies, such as federated learning and blockchain, have been gradually applied in ophthalmology. However, the exploration of privacy protection techniques of specific ophthalmic examinations, like digital mask, is still limited. Moreover, we have observed advancements in addressing ophthalmic ethical issues related to privacy protection in the era of big data, such as algorithm fairness and explainability. SUMMARY: Future privacy protection for ophthalmic patients still faces challenges and requires improved strategies. Progress in privacy protection technology for ophthalmology will continue to promote a better healthcare environment and patient experience, as well as more effective data sharing and scientific research.


Subjects
Confidentiality, Ophthalmology, Humans, Computer Security, Information Dissemination/methods, Electronic Health Records, Privacy, Big Data, Blockchain
14.
Gigascience ; 13, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-39250076

ABSTRACT

Research on animal venoms and their components spans multiple disciplines, including biology, biochemistry, bioinformatics, pharmacology, medicine, and more. Manipulating and analyzing the diverse array of data required for venom research can be challenging, and relevant tools and resources are often dispersed across different online platforms, making them less accessible to nonexperts. In this article, we address the multifaceted needs of the scientific community involved in venom and toxin-related research by identifying and discussing web resources, databases, and tools commonly used in this field. We have compiled these resources into a comprehensive table available on the VenomZone website (https://venomzone.expasy.org/10897). Furthermore, we highlight the challenges currently faced by researchers in accessing and using these resources and emphasize the importance of community-driven interdisciplinary approaches. We conclude by underscoring the significance of enhancing standards, promoting interoperability, and encouraging data and method sharing within the venom research community.


Subjects
Big Data, Computational Biology, Internet, Venoms, Animals, Computational Biology/methods, Databases, Factual
15.
Nat Methods ; 21(9): 1597-1602, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39174710

ABSTRACT

Over the last decade, biology has begun utilizing 'big data' approaches, resulting in large, comprehensive atlases in modalities ranging from transcriptomics to neural connectomics. However, these approaches must be complemented and integrated with 'small data' approaches to efficiently utilize data from individual labs. Integration of smaller datasets with major reference atlases is critical to provide context to individual experiments, and approaches toward integration of large and small data have been a major focus in many fields in recent years. Here we discuss progress in integration of small data with consortium-sized atlases across multiple modalities, and its potential applications. We then examine promising future directions for utilizing the power of small data to maximize the information garnered from small-scale experiments. We envision that, in the near future, international consortia comprising many laboratories will work together to collaboratively build reference atlases and foundation models using small data methods.


Subjects
Genomics, Humans, Genomics/methods, Big Data, Animals, Connectome/methods, Computational Biology/methods
16.
Int J Equity Health ; 23(1): 161, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39148041

ABSTRACT

In this study, we evaluated and forecasted the cumulative opportunities for residents to access radiotherapy services in Cali, Colombia, while accounting for traffic congestion, using a new people-centred methodology with an equity focus. Furthermore, we identified 1-2 optimal locations where new services would maximise accessibility. We utilised open data and publicly available big data. Cali is one of the South American cities most impacted by traffic congestion. METHODOLOGY: Using a people-centred approach, we tested a web-based digital platform developed through an iterative participatory design. The platform integrates open data, including the location of radiotherapy services, disaggregated sociodemographic microdata for the population and places of residence, and big data for travel times from the Google Distance Matrix API. We used genetic algorithms to identify optimal locations for new services. We predicted accessibility cumulative opportunities (ACO) for traffic ranging from peak congestion to free-flow conditions, with hourly assessments for 6-12 July 2020 and 23-29 November 2020. The interactive digital platform is openly available. PRIMARY AND SECONDARY OUTCOMES: We present descriptive statistics and population distribution heatmaps based on 20-min accessibility cumulative opportunities (ACO) isochrones for car journeys. There is no set national or international standard for these travel time thresholds; most key informants found the 20-min threshold reasonable. These isochrones connect the population-weighted centroid of the traffic analysis zone at the place of residence to the corresponding zone of the radiotherapy service with the shortest travel time, under traffic conditions ranging from free-flow to peak congestion. Additionally, we conducted a time-series bivariate analysis to assess geographical accessibility by economic stratum, and we identified 1-2 optimal locations where new services would maximise the 20-min ACO during peak-traffic congestion. RESULTS: Traffic congestion significantly diminished accessibility to radiotherapy services, particularly affecting vulnerable populations; percentages represent the population within a 20-min journey by car from their residence to a radiotherapy service. For instance, urban 20-min ACO by car dropped from 91% of Cali's urban population during free-flow traffic to 31% during peak traffic for the week of 6-12 July 2020. Specific ethnic groups, individuals with lower educational attainment, and residents on the outskirts of Cali experienced disproportionate effects; for low-income households, accessibility decreased to 11% during peak traffic compared with 81% during free-flow traffic. We predict that strategically adding sufficient services in 1-2 locations in eastern Cali would notably enhance accessibility and reduce inequities, and the recommended locations for new services remained consistent across both measurement periods. These findings underscore the significance of prioritising equity and comprehensive care in healthcare accessibility, and they offer a practical approach to optimising service locations to mitigate disparities. Expanding this approach to encompass other transportation modes, services, and cities, or updating the measurements, is feasible and affordable. The new approach and data are particularly relevant for planning authorities and urban development actors.


Subjects
Health Services Accessibility, Radiotherapy, Travel, Humans, Colombia, Health Services Accessibility/statistics & numerical data, Cross-Sectional Studies, Travel/statistics & numerical data, Radiotherapy/statistics & numerical data, Radiotherapy/standards, Big Data
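A 20-min accessibility cumulative opportunity (ACO) share of the kind reported above reduces to a simple computation once zone populations and door-to-service travel times are known. The sketch below assumes travel times have already been obtained (e.g., from a distance-matrix service); the zone names and numbers are illustrative.

```python
def aco_share(pop_by_zone, travel_min_by_zone, threshold_min=20.0):
    """Share of the population whose shortest-travel-time service is reachable
    within the threshold. `travel_min_by_zone` maps each zone to the travel
    time (minutes) from its centroid to the nearest service."""
    total = sum(pop_by_zone.values())
    covered = sum(pop for zone, pop in pop_by_zone.items()
                  if travel_min_by_zone[zone] <= threshold_min)
    return covered / total
```

Evaluating the same function with free-flow versus peak-hour travel-time inputs reproduces the kind of 91% vs 31% contrast the abstract reports.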
17.
PLoS One ; 19(8): e0307043, 2024.
Article in English | MEDLINE | ID: mdl-39141627

ABSTRACT

Realty management relies on data from previous successful and failed purchase and utilization outcomes. The cumulative data from different stages are used to improve utilization efficacy. A key problem is selecting the data used to analyze the value-increment sequence and profitable utilization. This article proposes a knowledge-dependent data processing scheme (KDPS) to support precise data analysis. The scheme operates on two levels. At the first level, data selection is performed based on previous stagnant outcomes. At the second level, further data processing is performed to remedy the flaws of the first level. Data processing uses knowledge acquired from the sales process, amenities, and market value. Based on the knowledge derived from successful realty sales and incremental features, further processing for new improvements and for mitigating existing stagnancy is recommended. The stagnancy and realty values serve as knowledge for training the data processing system. This ensures definite profitable features that meet the amenity requirements with reduced stagnancy time. The proposed scheme improves the processing rate, stagnancy detection, success rate, and training ratio by 8.2%, 10.25%, 10.28%, and 7%, respectively, and reduces processing time by 8.56% compared with existing methods.


Subjects
Artificial Intelligence, Big Data, Decision Making, Humans, Algorithms
18.
J Environ Manage ; 368: 122125, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39121621

ABSTRACT

Digital industrialization represented by big data provides substantial support for the high-quality development of the digital economy, but its impact on urban energy conservation development requires further research. To this end, based on the panel data of Chinese cities from 2010 to 2019 and taking the establishment of the national big data comprehensive pilot zone (NBDCPZ) as a quasi-natural experiment, this paper explores the impact, mechanism, and spatial spillover effect of digital industrialization represented by big data on urban energy conservation development using the Difference-in-Differences (DID) method. The results show that digital industrialization can help achieve urban energy conservation development, which still holds after a series of robustness tests. Mechanism analysis reveals that digital industrialization impacts urban energy conservation development by driving industrial sector output growth, promoting industrial upgrading, stimulating green technology innovation, and alleviating resource misallocation. Heterogeneity analysis indicates that the energy conservation effect of digital industrialization is more significant in the central region, intra-regional demonstration comprehensive pilot zones, large cities, non-resource-based cities, and high-level digital infrastructure cities. Additionally, digital industrialization can promote energy conservation development in neighboring areas through spatial spillover effect. This paper enriches the theoretical framework concerning the relationship between digital industrialization and energy conservation development. The findings have significant implications for achieving the coordinated development of digitalization and conservation.


Subjects
Big Data, Industrial Development, China, Conservation of Energy Resources, Cities, Conservation of Natural Resources, Industry
19.
J Environ Manage ; 368: 122178, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39128356

ABSTRACT

As a strategic resource, big data has become a key force affecting carbon emission reduction in agriculture. However, its impacts remain controversial, and relevant empirical evidence remains to be explored. Based on a quasi-natural experiment, this study explored the impact and mechanism of the construction of the National Big Data Comprehensive Pilot Zone (NBDCPZ) on agricultural carbon emissions (ACE) in China, adopting a difference-in-differences (DID) model with China's provincial panel data from 2003 to 2020. The results showed that ACE in NBDCPZ establishment areas was significantly reduced by 11.91%, a finding that remained robust under the parallel trend test and the placebo test, among others. Mechanism analysis showed that ACE was reduced through industrial upgrading and technological innovation. Heterogeneity analysis showed that more pronounced policy gains were achieved in China's central-eastern regions and in non-major grain-producing areas compared with western and major grain-producing areas. This research provides supporting evidence for the prospect of big data applications in reducing ACE and offers useful guidance for promoting green and sustainable agricultural development.


Subjects
Agriculture, Big Data, Carbon, China
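The difference-in-differences logic used in this and the preceding entry can be illustrated in its simplest 2x2 form (two groups, two periods). The full studies use panel regressions with fixed effects and controls, which this sketch does not attempt; the numbers in the usage test are invented.

```python
def did_2x2(treated_pre, treated_post, control_pre, control_post):
    """Classic 2x2 difference-in-differences estimate:
    (change in treated group) minus (change in control group).
    Each argument is a list of outcome observations."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - \
           (mean(control_post) - mean(control_pre))
```

A negative estimate here would correspond to the emission-reduction effect the study attributes to NBDCPZ establishment, under the parallel-trends assumption the authors test.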
20.
PLoS One ; 19(8): e0307381, 2024.
Article in English | MEDLINE | ID: mdl-39178296

ABSTRACT

Big data refers to extensive and intricate collections of information that require efficient and cost-effective evaluation and analysis tools to derive insights and support decision making. Fermatean fuzzy set theory has a remarkable capability to capture imprecision, owing to its capacity to accommodate complex and ambiguous problem descriptions. This paper studies dynamic ordered weighted aggregation operators in a Fermatean fuzzy environment. In many practical decision making scenarios, the term "dynamic" denotes the capability of obtaining decision-relevant data at various time intervals. In this study, we introduce two novel aggregation operators: the Fermatean fuzzy dynamic ordered weighted averaging and geometric operators. We investigate the attributes of these operators in detail, offering a comprehensive description of their salient features. We present a step-by-step mathematical algorithm for decision making scenarios using the proposed methodologies. In addition, we highlight the significance of these approaches by solving a decision making problem: determining the most effective big data analytics platform for YouTube data analysis. Finally, we perform a thorough comparative analysis to assess the effectiveness of the suggested approaches against a variety of existing techniques.


Subjects
Algorithms, Big Data, Fuzzy Logic, Social Media, Humans, Decision Making
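As a rough illustration of the ordered weighted averaging idea above, the sketch below implements a Fermatean fuzzy ordered weighted averaging operator using the standard aggregation form from the Fermatean fuzzy literature (membership/non-membership pairs (mu, nu) with mu^3 + nu^3 <= 1). The dynamic, time-indexed operators proposed in the paper may differ in detail.

```python
from math import prod

def ffowa(ffns, weights):
    """Fermatean fuzzy ordered weighted averaging.

    `ffns` is a list of (mu, nu) pairs with mu**3 + nu**3 <= 1; `weights` are
    position weights summing to 1. Pairs are reordered by the score
    mu**3 - nu**3 before the weights apply -- that reordering is what makes
    the operator 'ordered' rather than a plain weighted average.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(ffns, key=lambda p: p[0] ** 3 - p[1] ** 3, reverse=True)
    mu = (1.0 - prod((1.0 - m ** 3) ** w
                     for (m, _), w in zip(ordered, weights))) ** (1.0 / 3.0)
    nu = prod(n ** w for (_, n), w in zip(ordered, weights))
    return mu, nu
```

Aggregating identical inputs returns the same pair (idempotency), one of the salient properties such operators are usually shown to satisfy.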