Results 1 - 20 of 2,318

BVS Ecuador Collection
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38836701

ABSTRACT

Biomedical data are generated and collected from various sources, including medical imaging, laboratory tests and genome sequencing. Sharing these data for research can help address unmet health needs, contribute to scientific breakthroughs, accelerate the development of more effective treatments and inform public health policy. Due to the potential sensitivity of such data, however, privacy concerns have led to policies that restrict data sharing. In addition, sharing sensitive data requires a secure and robust infrastructure with appropriate storage solutions. Here, we examine and compare the centralized and federated data sharing models through the prism of five large-scale and real-world use cases of strategic significance within the European data sharing landscape: the French Health Data Hub, the BBMRI-ERIC Colorectal Cancer Cohort, the federated European Genome-phenome Archive, the Observational Medical Outcomes Partnership/OHDSI network and the EBRAINS Medical Informatics Platform. Our analysis indicates that centralized models facilitate data linkage, harmonization and interoperability, while federated models facilitate scaling up and legal compliance, as the data typically reside on the data generator's premises, allowing for better control of how data are shared. This comparative study thus offers guidance on the selection of the most appropriate sharing strategy for sensitive datasets and provides key insights for informed decision-making in data sharing efforts.


Subjects
Biological Science Disciplines, Information Dissemination, Humans, Medical Informatics/methods
2.
Nat Immunol ; 15(2): 118-27, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24448569

ABSTRACT

The immune system is a highly complex and dynamic system. Historically, the most common scientific and clinical practice has been to evaluate its individual components. This kind of approach cannot always expose the interconnecting pathways that control immune-system responses and does not reveal how the immune system works across multiple biological systems and scales. High-throughput technologies can be used to measure thousands of parameters of the immune system at a genome-wide scale. These system-wide surveys yield massive amounts of quantitative data that provide a means to monitor and probe immune-system function. New integrative analyses can help synthesize and transform these data into valuable biological insight. Here we review some of the computational analysis tools for high-dimensional data and how they can be applied to immunology.


Subjects
Allergy and Immunology, Immune System, Medical Informatics/methods, Systems Biology/methods, Animals, Genome-Wide Association Study, High-Throughput Screening Assays, Humans, Principal Component Analysis, Research Design
3.
BMC Med Res Methodol ; 24(1): 136, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909216

ABSTRACT

BACKGROUND: Generating synthetic patient data is crucial for medical research, but common approaches build on black-box models which do not allow for expert verification or intervention. We propose a highly available method which enables synthetic data generation from real patient records in a privacy-preserving and compliant fashion, is interpretable and allows for expert intervention. METHODS: Our approach ties together two established tools in medical informatics, namely OMOP as a data standard for electronic health records and Synthea as a data synthetization method. For this study, data pipelines were built which extract data from OMOP, convert them into time series format, learn temporal rules with two statistical algorithms (Markov chain, TARM) and three causal discovery algorithms (DYNOTEARS, J-PCMCI+, LiNGAM) and map the outputs into Synthea graphs. The graphs are evaluated quantitatively by their individual and relative complexity and qualitatively by medical experts. RESULTS: The algorithms were found to learn qualitatively and quantitatively different graph representations. Whereas the Markov chain results in extremely large graphs, TARM, DYNOTEARS, and J-PCMCI+ were found to reduce the data dimension during learning. The MultiGroupDirect LiNGAM algorithm was found not to be applicable to the problem statement at hand. CONCLUSION: Only TARM and DYNOTEARS are practical algorithms for real-world data in this use case. As causal discovery is a method to debias purely statistical relationships, the gradient-based causal discovery algorithm DYNOTEARS was found to be most suitable.
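The Markov-chain step described in this abstract can be pictured as estimating first-order transition probabilities from patient event sequences. The sketch below uses invented event labels, not actual OMOP concepts or the paper's pipeline:

```python
from collections import defaultdict

def learn_transition_probabilities(sequences):
    """Estimate first-order Markov transition probabilities from
    patient event sequences (lists of clinical event codes)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):  # count adjacent event pairs
            counts[a][b] += 1
    probs = {}
    for a, successors in counts.items():
        total = sum(successors.values())
        probs[a] = {b: n / total for b, n in successors.items()}
    return probs

# Toy event sequences (hypothetical labels, for illustration only)
sequences = [
    ["diagnosis", "lab_test", "drug_A"],
    ["diagnosis", "drug_A"],
    ["diagnosis", "lab_test", "drug_B"],
]
probs = learn_transition_probabilities(sequences)
```

Each edge of the resulting probability table corresponds to one transition in the learned graph; the paper notes that this direct counting approach yields extremely large graphs on real data.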


Subjects
Algorithms, Electronic Health Records, Humans, Electronic Health Records/statistics & numerical data, Electronic Health Records/standards, Markov Chains, Medical Informatics/methods, Medical Informatics/statistics & numerical data
4.
J Biomed Inform ; 157: 104716, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39197732

ABSTRACT

OBJECTIVE: This study aims to review the recent advances in community challenges for biomedical text mining in China. METHODS: We collected information on evaluation tasks released in community challenges of biomedical text mining, including the task description, dataset description, data source, task type and related links. A systematic summary and comparative analysis were conducted on various biomedical natural language processing tasks, such as named entity recognition, entity normalization, attribute extraction, relation extraction, event extraction, text classification, text similarity, knowledge graph construction, question answering, text generation, and large language model evaluation. RESULTS: We identified 39 evaluation tasks from 6 community challenges that spanned from 2017 to 2023. Our analysis revealed the diverse range of evaluation task types and data sources in biomedical text mining. We explored the potential clinical applications of these community challenge tasks from a translational biomedical informatics perspective. We compared them with their English counterparts, and discussed the contributions, limitations, lessons and guidelines of these community challenges, while highlighting future directions in the era of large language models. CONCLUSION: Community challenge evaluation competitions have played a crucial role in promoting technology innovation and fostering interdisciplinary collaboration in the field of biomedical text mining. These challenges provide valuable platforms for researchers to develop state-of-the-art solutions.


Subjects
Data Mining, Natural Language Processing, China, Data Mining/methods, Medical Informatics/methods
5.
J Biomed Inform ; 154: 104653, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38734158

ABSTRACT

Many approaches in biomedical informatics (BMI) rely on the ability to define, gather, and manipulate biomedical data to support health through a cyclical research-practice lifecycle. Researchers within this field are often fortunate to work closely with healthcare and public health systems to influence data generation and capture and have access to a vast amount of biomedical data. Many informaticists also have the expertise to engage with stakeholders, develop new methods and applications, and influence policy. However, research and policy that explicitly seeks to address the systemic drivers of health would more effectively support health. Intersectionality is a theoretical framework that can facilitate such research. It holds that individual human experiences reflect larger socio-structural level systems of privilege and oppression, and cannot be truly understood if these systems are examined in isolation. Intersectionality explicitly accounts for the interrelated nature of systems of privilege and oppression, providing a lens through which to examine and challenge inequities. In this paper, we propose intersectionality as an intervention into how we conduct BMI research. We begin by discussing intersectionality's history and core principles as they apply to BMI. We then elaborate on the potential for intersectionality to stimulate BMI research. Specifically, we posit that our efforts in BMI to improve health should address intersectionality's five key considerations: (1) systems of privilege and oppression that shape health; (2) the interrelated nature of upstream health drivers; (3) the nuances of health outcomes within groups; (4) the problematic and power-laden nature of categories that we assign to people in research and in society; and (5) research to inform and support social change.


Subjects
Medical Informatics, Humans, Medical Informatics/methods, Biomedical Research
6.
J Biomed Inform ; 156: 104674, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38871012

ABSTRACT

OBJECTIVE: Biomedical Named Entity Recognition (bio NER) is the task of recognizing named entities in biomedical texts. This paper introduces a new model that addresses bio NER by considering additional external contexts. Different from prior methods that mainly use original input sequences for sequence labeling, the model takes into account additional contexts to enhance the representation of entities in the original sequences, since additional contexts can provide enhanced information for the concept explanation of biomedical entities. METHODS: To exploit an additional context, given an original input sequence, the model first retrieves the relevant sentences from PubMed and then ranks the retrieved sentences to form the contexts. It next combines the context with the original input sequence to form a new enhanced sequence. The original and new enhanced sequences are fed into PubMedBERT for learning feature representation. To obtain more fine-grained features, the model stacks a BiLSTM layer on top of PubMedBERT. The final named entity label prediction is done by using a CRF layer. The model is jointly trained in an end-to-end manner to take advantage of the additional context for NER of the original sequence. RESULTS: Experimental results on six biomedical datasets show that the proposed model achieves promising performance compared to strong baselines and confirms the contribution of additional contexts for bio NER. CONCLUSION: The promising results confirm three important points. First, the additional context from PubMed helps to improve the quality of the recognition of biomedical entities. Second, PubMed is more appropriate than the Google search engine for providing relevant information of bio NER. Finally, more relevant sentences from the context are more beneficial than irrelevant ones to provide enhanced information for the original input sequences. The model is flexible to integrate any additional context types for the NER task.


Subjects
Natural Language Processing, PubMed, Humans, Algorithms, Data Mining/methods, Semantics, Medical Informatics/methods
7.
J Biomed Inform ; 157: 104700, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39079607

ABSTRACT

BACKGROUND: The future European Health Research and Innovation Cloud (HRIC), as a fundamental part of the European Health Data Space (EHDS), will promote the secondary use of data and the capability to push the boundaries of health research within an ethical and legally compliant framework that reinforces the trust of patients and citizens. OBJECTIVE: This study aimed to analyse health data management mechanisms in Europe to determine their alignment with FAIR principles and data discovery, generating best practices for new data hubs joining the HRIC ecosystem. To this end, the compliance of health data hubs with FAIR principles and data discovery was assessed, and a set of best practices for health data hubs was concluded. METHODS: A survey was conducted in January 2022, involving 99 representative health data hubs from multiple countries, and 42 responses were obtained by June 2022. Stratification methods were employed to cover different levels of granularity. The survey data were analysed to assess compliance with FAIR and data discovery principles. The study started with a general analysis of survey responses, followed by the creation of specific profiles based on three categories: organization type, function, and level of data aggregation. RESULTS: The study produced specific best practices for data hubs regarding the adoption of FAIR principles and data discoverability. It also provided an overview of the survey study and specific profiles derived from the category analysis, considering different types of data hubs. CONCLUSIONS: The study concluded that a significant number of health data hubs in Europe did not fully comply with FAIR and data discovery principles. However, it identified specific best practices that can guide new data hubs in adhering to these principles. The study highlighted the importance of aligning health data management mechanisms with FAIR principles to enhance interoperability and reusability in the future HRIC.


Subjects
Cloud Computing, Humans, Europe, Surveys and Questionnaires, Data Management/methods, Electronic Health Records, Medical Informatics/methods
8.
J Biomed Inform ; 157: 104693, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39019301

ABSTRACT

OBJECTIVE: Understanding and quantifying biases when designing and implementing actionable approaches to increase fairness and inclusion is critical for artificial intelligence (AI) in biomedical applications. METHODS: In this Special Communication, we discuss how bias is introduced at different stages of the development and use of AI applications in biomedical sciences and health care. We describe various AI applications and their implications for fairness and inclusion in sections on 1) Bias in Data Source Landscapes, 2) Algorithmic Fairness, 3) Uncertainty in AI Predictions, 4) Explainable AI for Fairness and Equity, and 5) Sociological/Ethnographic Issues in Data and Results Representation. RESULTS: We provide recommendations to address biases when developing and using AI in clinical applications. CONCLUSION: These recommendations can be applied to informatics research and practice to foster more equitable and inclusive health care systems and research discoveries.


Subjects
Artificial Intelligence, Biomedical Research, Humans, Algorithms, Bias, Medical Informatics/methods, Delivery of Health Care
9.
J Biomed Inform ; 156: 104682, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38944260

ABSTRACT

OBJECTIVES: This study aims to enhance the analysis of healthcare processes by introducing Object-Centric Process Mining (OCPM). By offering a holistic perspective that accounts for the interactions among various objects, OCPM transcends the constraints of conventional patient-centric process mining approaches, ensuring a more detailed and inclusive understanding of healthcare dynamics. METHODS: We develop a novel method to transform the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) into Object-Centric Event Logs (OCELs). First, an OMOP CDM4PM is created from the standard OMOP CDM, focusing on data relevant to generating OCEL and addressing healthcare data's heterogeneity and standardization challenges. Second, this subset is transformed into OCEL based on specified healthcare criteria, including identifying various object types, clinical activities, and their relationships. The methodology is tested on the MIMIC-IV database to evaluate its effectiveness and utility. RESULTS: Our proposed method effectively produces OCELs when applied to the MIMIC-IV dataset, allowing for the implementation of OCPM in the healthcare industry. We rigorously evaluate the comprehensiveness and level of abstraction to validate our approach's effectiveness. Additionally, we create diverse object-centric process models intricately designed to navigate the complexities inherent in healthcare processes. CONCLUSION: Our approach introduces a novel perspective by integrating multiple viewpoints simultaneously. To the best of our knowledge, this is the inaugural application of OCPM within the healthcare sector, marking a significant advancement in the field.
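The object-centric event log at the heart of this approach can be illustrated with a toy structure in which each event references several typed objects (patient, visit, medication) rather than a single case identifier. This sketch only mimics the spirit of OCEL; it does not follow the official OCEL schema, and all identifiers are invented:

```python
def build_ocel(events):
    """Group flat event records into an OCEL-style structure where each
    event references one or more objects of different types."""
    ocel = {"events": [], "objects": {}}
    for i, ev in enumerate(events):
        ocel["events"].append({
            "id": f"e{i}",
            "activity": ev["activity"],
            "timestamp": ev["timestamp"],
            "omap": list(ev["objects"].values()),  # related object ids
        })
        for otype, oid in ev["objects"].items():
            ocel["objects"][oid] = {"type": otype}
    return ocel

# Toy clinical events (hypothetical ids, not MIMIC-IV data)
events = [
    {"activity": "Admission", "timestamp": "2024-01-01T08:00",
     "objects": {"patient": "p1", "visit": "v1"}},
    {"activity": "Prescribe", "timestamp": "2024-01-01T09:30",
     "objects": {"patient": "p1", "visit": "v1", "medication": "m1"}},
]
log = build_ocel(events)
```

Because each event carries an object map rather than one case id, the same log can be unfolded per patient, per visit, or per medication, which is the multi-viewpoint property the abstract emphasizes.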


Subjects
Data Mining, Data Mining/methods, Humans, Delivery of Health Care, Process Assessment, Health Care/methods, Databases, Factual, Medical Informatics/methods, Electronic Health Records
10.
J Biomed Inform ; 155: 104659, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38777085

ABSTRACT

OBJECTIVE: This study aims to promote interoperability in precision medicine and translational research by aligning the Observational Medical Outcomes Partnership (OMOP) and Phenopackets data models. Phenopackets is an expert knowledge-driven schema designed to facilitate the storage and exchange of multimodal patient data, and support downstream analysis. The first goal of this paper is to explore model alignment by characterizing the common data models using a newly developed data transformation process and evaluation method. Second, using OMOP normalized clinical data, we evaluate the mapping of real-world patient data to Phenopackets. We evaluate the suitability of Phenopackets as a patient data representation for real-world clinical cases. METHODS: We identified mappings between OMOP and Phenopackets and applied them to a real patient dataset to assess the transformation's success. We analyzed gaps between the models and identified key considerations for transforming data between them. Further, to improve ambiguous alignment, we incorporated Unified Medical Language System (UMLS) semantic type-based filtering to direct individual concepts to their most appropriate domain and conducted a domain-expert evaluation of the mapping's clinical utility. RESULTS: The OMOP to Phenopacket transformation pipeline was executed for 1,000 Alzheimer's disease patients and successfully mapped all required entities. However, due to missing values in OMOP for required Phenopacket attributes, 10.2 % of records were lost. The use of UMLS-semantic type filtering for ambiguous alignment of individual concepts resulted in 96 % agreement with clinical thinking, increased from 68 % when mapping exclusively by domain correspondence. CONCLUSION: This study presents a pipeline to transform data from OMOP to Phenopackets. We identified considerations for the transformation to ensure data quality, handling restrictions for successful Phenopacket validation and discrepant data formats. 
We identified unmappable Phenopacket attributes that focus on specialty use cases, such as genomics or oncology, which OMOP does not currently support. We introduce UMLS semantic type filtering to resolve ambiguous alignment to Phenopacket entities to be most appropriate for real-world interpretation. We provide a systematic approach to align OMOP and Phenopackets schemas. Our work facilitates future use of Phenopackets in clinical applications by addressing key barriers to interoperability when deriving a Phenopacket from real-world patient data.
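The semantic-type filtering described here can be pictured as a routing rule that prefers the UMLS semantic type of a concept over its coarse OMOP domain when deciding which Phenopacket block it belongs to. The mapping tables below are hypothetical illustrations, not the study's actual alignment:

```python
# Hypothetical routing table: UMLS semantic types to Phenopacket blocks.
SEMANTIC_TYPE_TO_BLOCK = {
    "Disease or Syndrome": "diseases",
    "Sign or Symptom": "phenotypicFeatures",
    "Pharmacologic Substance": "medicalActions",
    "Laboratory Procedure": "measurements",
}

# Fallback: plain OMOP domain correspondence (also illustrative).
DOMAIN_TO_BLOCK = {
    "Condition": "diseases",
    "Drug": "medicalActions",
    "Measurement": "measurements",
}

def route_concept(omop_domain, semantic_type=None):
    """Prefer the UMLS semantic type when it resolves the target block;
    otherwise fall back to the OMOP domain correspondence."""
    if semantic_type in SEMANTIC_TYPE_TO_BLOCK:
        return SEMANTIC_TYPE_TO_BLOCK[semantic_type]
    return DOMAIN_TO_BLOCK.get(omop_domain, "unmapped")
```

For example, a concept stored in the OMOP Condition domain but typed "Sign or Symptom" would be routed to phenotypic features rather than diseases, which mirrors the kind of ambiguity resolution the abstract reports improving clinician agreement.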


Subjects
Unified Medical Language System, Humans, Semantics, Electronic Health Records, Precision Medicine/methods, Translational Research, Biomedical, Medical Informatics/methods, Natural Language Processing, Alzheimer Disease
11.
J Biomed Inform ; 156: 104673, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38862083

ABSTRACT

OBJECTIVE: Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed for automating the diagnostic process of pneumothorax. To address the opaqueness often associated with DL models, explainable artificial intelligence (XAI) methods have been introduced to outline regions related to pneumothorax. However, these explanations sometimes diverge from actual lesion areas, highlighting the need for further improvement. METHOD: We propose a template-guided approach to incorporate the clinical knowledge of pneumothorax into model explanations generated by XAI methods, thereby enhancing the quality of the explanations. Utilizing one lesion delineation created by radiologists, our approach first generates a template that represents potential areas of pneumothorax occurrence. This template is then superimposed on model explanations to filter out extraneous explanations that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods (Saliency Map, Grad-CAM, and Integrated Gradients) with and without our template guidance when explaining two DL models (VGG-19 and ResNet-50) in two real-world datasets (SIIM-ACR and ChestX-Det). RESULTS: The proposed approach consistently improved baseline XAI methods across twelve benchmark scenarios built on three XAI methods, two DL models, and two datasets. The average incremental percentages, calculated by the performance improvements over the baseline performance, were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations and ground-truth lesion areas. We further visualized baseline and template-guided model explanations on radiographs to showcase the performance of our approach. 
CONCLUSIONS: In the context of pneumothorax diagnoses, we proposed a template-guided approach for improving model explanations. Our approach not only aligns model explanations more closely with clinical insights but also exhibits extensibility to other thoracic diseases. We anticipate that our template guidance will forge a novel approach to elucidating AI models by integrating clinical domain expertise.
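The template-guided filtering and the two overlap metrics reported above (IoU and DSC) can be sketched on toy binary masks represented as sets of pixel coordinates. The masks below are invented for illustration; the actual study operates on full radiograph arrays:

```python
def template_filter(explanation, template):
    """Keep only explanation pixels that fall inside the template of
    plausible pneumothorax regions."""
    return explanation & template

def iou(pred, truth):
    """Intersection over Union between two pixel sets."""
    union = pred | truth
    return len(pred & truth) / len(union) if union else 1.0

def dice(pred, truth):
    """Dice Similarity Coefficient between two pixel sets."""
    denom = len(pred) + len(truth)
    return 2 * len(pred & truth) / denom if denom else 1.0

# Toy masks as sets of (row, col) pixel coordinates (illustrative only)
explanation = {(0, 0), (0, 1), (5, 5)}   # raw saliency pixels
template    = {(0, 0), (0, 1), (1, 0)}   # plausible occurrence region
lesion      = {(0, 0), (0, 1)}           # ground-truth annotation

filtered = template_filter(explanation, template)
```

Here the stray pixel at (5, 5) lies outside the template, so filtering it out raises both IoU and DSC against the lesion mask, which is the mechanism behind the improvements the abstract reports.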


Subjects
Artificial Intelligence, Deep Learning, Pneumothorax, Humans, Pneumothorax/diagnostic imaging, Algorithms, Tomography, X-Ray Computed/methods, Medical Informatics/methods
12.
BMC Geriatr ; 24(1): 618, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030512

ABSTRACT

INTRODUCTION: In emergency departments (EDs), elderly patients usually experience the longest waiting times for treatment and discharge. Moreover, the number of ED admissions among the elderly increases every year. The use of health information technology in geriatric emergency departments may help reduce the burden of healthcare services for this group of patients. This research aimed to develop a conceptual model for using health information technology in the geriatric emergency department. METHODS: This study was conducted in 2021. The initial conceptual model was designed based on the findings derived from the previous research phases (a literature review and interviews with experts). Then, the model was examined by an expert panel (n = 7). Finally, using the Delphi technique (two rounds), the components of the conceptual model were reviewed and finalized. Data were collected with a questionnaire and analyzed using descriptive statistics. RESULTS: The common information technologies appropriate for elderly care in emergency departments included the emergency department information system, clinical decision support system, electronic health records, telemedicine, personal health records, electronic questionnaires for screening, and other technologies such as picture archiving and communication systems (PACS) and electronic vital sign monitoring systems. The participants approved all of the proposed systems and their applications in geriatric emergency departments. CONCLUSION: The proposed model can help to design and implement the most useful information systems in geriatric emergency departments. As the application of technology accelerates care processes, investing in this field would help support care plans for the elderly and improve the quality of care services. Further research is recommended to investigate the efficiency and effectiveness of using these technologies in EDs.


Subjects
Emergency Service, Hospital, Humans, Aged, Medical Informatics/methods, Delphi Technique, Electronic Health Records, Health Services for the Aged, Decision Support Systems, Clinical
13.
J Med Internet Res ; 26: e60501, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39255030

ABSTRACT

BACKGROUND: Prompt engineering, focusing on crafting effective prompts to large language models (LLMs), has garnered attention for its ability to harness the potential of LLMs. This is even more crucial in the medical domain due to its specialized terminology and technical language. Clinical natural language processing applications must navigate complex language and ensure privacy compliance. Prompt engineering offers a novel approach by designing tailored prompts to guide models in exploiting clinically relevant information from complex medical texts. Despite its promise, the efficacy of prompt engineering in the medical domain remains to be fully explored. OBJECTIVE: The aim of the study is to review research efforts and technical approaches in prompt engineering for medical applications, as well as provide an overview of opportunities and challenges for clinical practice. METHODS: Databases indexing the fields of medicine, computer science, and medical informatics were queried to identify relevant published papers. Since prompt engineering is an emerging field, preprint databases were also considered. Multiple data items were extracted, such as the prompt paradigm, the LLMs involved, the languages of the study, the domain of the topic, the baselines, and several learning, design, and architecture strategies specific to prompt engineering. We include studies that apply prompt engineering-based methods to the medical domain, published between 2022 and 2024, and covering multiple prompt paradigms such as prompt learning (PL), prompt tuning (PT), and prompt design (PD). RESULTS: We included 114 recent prompt engineering studies. Among the 3 prompt paradigms, we observed that PD is the most prevalent (78 papers). In 12 papers, the terms PD, PL, and PT were used interchangeably. While ChatGPT is the most commonly used LLM, we identified 7 studies using this LLM on a sensitive clinical data set. Chain-of-thought, present in 17 studies, emerges as the most frequent PD technique. While PL and PT papers typically provide a baseline for evaluating prompt-based approaches, 61% (48/78) of the PD studies do not report any nonprompt-related baseline. Finally, we individually examine each key piece of prompt engineering-specific information reported across papers and find that many studies neglect to mention it explicitly, posing a challenge for advancing prompt engineering research. CONCLUSIONS: In addition to reporting on trends and the scientific landscape of prompt engineering, we provide reporting guidelines for future studies to help advance research in the medical field. We also provide tables and figures summarizing the available medical prompt engineering papers and hope that future contributions will leverage these existing works to better advance the field.


Subjects
Natural Language Processing, Humans, Medical Informatics/methods
14.
J Med Internet Res ; 26: e46407, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39110494

ABSTRACT

Given the requirement to minimize the risks and maximize the benefits of technology applications in health care provision, there is an urgent need to incorporate theory-informed health IT (HIT) evaluation frameworks into existing and emerging guidelines for the evaluation of artificial intelligence (AI). Such frameworks can help developers, implementers, and strategic decision makers to build on experience and the existing empirical evidence base. We provide a pragmatic conceptual overview of selected concrete examples of how existing theory-informed HIT evaluation frameworks may be used to inform the safe development and implementation of AI in health care settings. The list is not exhaustive and is intended to illustrate applications in line with various stakeholder requirements. Existing HIT evaluation frameworks can help to inform AI-based development and implementation by supporting developers and strategic decision makers in considering relevant technology, user, and organizational dimensions. This can facilitate the design of technologies, their implementation in user and organizational settings, and the sustainability and scalability of technologies.


Subjects
Artificial Intelligence, Humans, Medical Informatics/methods
15.
J Med Internet Res ; 26: e58764, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39083765

ABSTRACT

Evidence-based medicine (EBM) emerged from McMaster University in the 1980-1990s, and emphasizes the integration of the best research evidence with clinical expertise and patient values. The Health Information Research Unit (HiRU) was created at McMaster University in 1985 to support EBM. Early on, digital health informatics took the form of teaching clinicians how to search MEDLINE with modems and phone lines. Searching and retrieval of published articles were transformed as electronic platforms provided greater access to clinically relevant studies, systematic reviews, and clinical practice guidelines, with PubMed playing a pivotal role. In the early 2000s, the HiRU introduced Clinical Queries, validated search filters derived from the curated, gold-standard, human-appraised Hedges dataset, to enhance the precision of searches, allowing clinicians to hone their queries based on study design, population, and outcomes. Currently, almost 1 million articles are added to PubMed annually. To filter through this volume of heterogeneous publications for clinically important articles, the HiRU team and other researchers have been applying classical machine learning, deep learning, and, increasingly, large language models (LLMs). These approaches are built upon the foundation of gold-standard annotated datasets and humans in the loop for active machine learning. In this viewpoint, we explore the evolution of health informatics in supporting evidence search and retrieval processes over the past 25+ years within the HiRU, including the evolving roles of LLMs and responsible artificial intelligence, as we continue to facilitate the dissemination of knowledge, enabling clinicians to integrate the best available evidence into their clinical practice.


Subjects
Evidence-Based Medicine, Medical Informatics, Medical Informatics/methods, Medical Informatics/trends, Humans, History, 20th Century, History, 21st Century, Machine Learning
16.
J Med Internet Res ; 26: e52399, 2024 05 13.
Article in English | MEDLINE | ID: mdl-38739445

ABSTRACT

BACKGROUND: A large language model (LLM) is a machine learning model inferred from text data that captures subtle patterns of language use in context. Modern LLMs are based on neural network architectures that incorporate transformer methods. They allow the model to relate words together through attention to multiple words in a text sequence. LLMs have been shown to be highly effective for a range of tasks in natural language processing (NLP), including classification and information extraction tasks and generative applications. OBJECTIVE: The aim of this adapted Delphi study was to collect researchers' opinions on how LLMs might influence health care and on the strengths, weaknesses, opportunities, and threats of LLM use in health care. METHODS: We invited researchers in the fields of health informatics, nursing informatics, and medical NLP to share their opinions on LLM use in health care. We started the first round with open questions based on our strengths, weaknesses, opportunities, and threats framework. In the second and third round, the participants scored these items. RESULTS: The first, second, and third rounds had 28, 23, and 21 participants, respectively. Almost all participants (26/28, 93% in round 1 and 20/21, 95% in round 3) were affiliated with academic institutions. Agreement was reached on 103 items related to use cases, benefits, risks, reliability, adoption aspects, and the future of LLMs in health care. Participants offered several use cases, including supporting clinical tasks, documentation tasks, and medical research and education, and agreed that LLM-based systems will act as health assistants for patient education. 
The agreed-upon benefits included increased efficiency in data handling and extraction, improved automation of processes, improved quality of health care services and overall health outcomes, provision of personalized care, accelerated diagnosis and treatment processes, and improved interaction between patients and health care professionals. In total, 5 risks to health care in general were identified: cybersecurity breaches, the potential for patient misinformation, ethical concerns, the likelihood of biased decision-making, and the risk associated with inaccurate communication. Overconfidence in LLM-based systems was recognized as a risk to the medical profession. The 6 agreed-upon privacy risks included the use of unregulated cloud services that compromise data security, exposure of sensitive patient data, breaches of confidentiality, fraudulent use of information, vulnerabilities in data storage and communication, and inappropriate access or use of patient data. CONCLUSIONS: Future research related to LLMs should not only focus on testing their possibilities for NLP-related tasks but also consider the workflows the models could contribute to and the requirements regarding quality, integration, and regulations needed for successful implementation in practice.
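The Delphi scoring step described above can be sketched as a simple consensus check over panelist ratings. The 1-5 scale, the 70% agreement threshold, and the example items are assumptions for illustration; the study's exact consensus criterion may differ:

```python
# Minimal sketch of Delphi-round consensus scoring under assumed parameters:
# panelists rate each item 1-5; an item reaches consensus if at least
# `threshold` of them rate it `agree_min` or higher.
def consensus(scores, agree_min=4, threshold=0.7):
    """Return True if the share of agreeing panelists meets the threshold."""
    agreeing = sum(1 for s in scores if s >= agree_min)
    return agreeing / len(scores) >= threshold

# Hypothetical round-3 ratings from 7 panelists.
round3 = {
    "LLMs as patient-education assistants": [5, 4, 4, 5, 3, 4, 4],
    "LLMs replacing clinicians entirely":   [1, 2, 2, 1, 3, 2, 1],
}

agreed = [item for item, scores in round3.items() if consensus(scores)]
print(agreed)  # only the first item reaches consensus
```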


Subjects
Delphi Technique , Natural Language Processing , Humans , Machine Learning , Delivery of Health Care/methods , Medical Informatics/methods
17.
Brief Bioinform ; 22(6)2021 11 05.
Article in English | MEDLINE | ID: mdl-34213525

ABSTRACT

Identifying the frequencies of drug side effects is a very important issue in pharmacological studies and in drug risk-benefit assessment. However, designing clinical trials to determine these frequencies is usually time-consuming and expensive, and most existing methods can only predict the existence of drug-side effect associations, not their frequencies. Inspired by the recent progress of graph neural networks in recommender systems, we develop a novel prediction model for drug-side effect frequencies that uses a graph attention network to integrate three different types of features: similarity information, known drug-side effect frequency information, and word embeddings. In comparison, the few available studies focusing on frequency prediction use only the known drug-side effect frequency scores. One novel approach used in this work first decomposes the feature types in the drug-side effect graph to extract different view representation vectors based on the three feature types, and then recombines these latent view vectors automatically to obtain unified embeddings for prediction. The proposed method demonstrates high effectiveness in 10-fold cross-validation. The computational results show that the proposed method achieves the best performance on the benchmark dataset, outperforming the state-of-the-art matrix decomposition model. In addition, ablation experiments and visual analyses are supplied to illustrate the usefulness of our method for predicting drug-side effect frequencies. The code for MGPred is available at https://github.com/zhc940702/MGPred and https://zenodo.org/record/4449613.
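The view-recombination step described above can be sketched as attention-weighted pooling of per-view vectors into one unified embedding. The dimensions, the dot-product scoring function, and the fixed query vector are illustrative assumptions, not MGPred's actual architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def recombine(views, query):
    """Attention-weighted sum of view vectors; score = dot(view, query)."""
    scores = [sum(v * q for v, q in zip(view, query)) for view in views]
    weights = softmax(scores)
    dim = len(views[0])
    return [sum(w * view[i] for w, view in zip(weights, views)) for i in range(dim)]

# Three toy per-node views standing in for the paper's feature types.
views = [
    [1.0, 0.0, 0.0],  # similarity-based view
    [0.0, 1.0, 0.0],  # known-frequency view
    [0.0, 0.0, 1.0],  # word-embedding view
]
query = [1.0, 0.5, 0.0]  # learned in practice; fixed here for illustration
unified = recombine(views, query)
print([round(x, 3) for x in unified])
```

Because the toy views are one-hot, the unified vector equals the attention weights themselves, making it easy to see how the query shifts weight toward the more relevant views.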


Subjects
Drug-Related Side Effects and Adverse Reactions/diagnosis , Medical Informatics/methods , Software , Algorithms , Benchmarking , Databases, Factual , Deep Learning , Drug Interactions , Drug-Related Side Effects and Adverse Reactions/etiology , Humans , Reproducibility of Results
18.
Brief Bioinform ; 22(6)2021 11 05.
Article in English | MEDLINE | ID: mdl-34453158

ABSTRACT

Continuous evaluation of drug safety is needed following approval to determine adverse events (AEs) in patient populations with diverse backgrounds. Spontaneous reporting systems are an important source of information for the detection of AEs not identified in clinical trials and for safety assessments that reflect the real-world use of drugs in specific populations and clinical settings. The use of spontaneous reporting systems is expected to detect drug-related AEs early after the launch of a new drug. Spontaneous reporting systems do not contain data on the total number of patients that use a drug; therefore, signal detection by disproportionality analysis, focusing on differences in the ratio of AE reports, is frequently used. In recent years, new analyses have been devised, including signal detection methods focused on the difference in the time to onset of an AE, methods that consider the patient background and those that identify drug-drug interactions. However, unlike commonly used statistics, the results of these analyses are open to misinterpretation if the method and the characteristics of the spontaneous reporting system cannot be evaluated properly. Therefore, this review describes signal detection using data mining, considering traditional methods and the latest knowledge, and their limitations.
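Disproportionality analysis as described above is commonly computed with the reporting odds ratio (ROR) on a 2x2 contingency table of spontaneous reports. The counts and the "lower 95% CI bound > 1" signal criterion below are illustrative; criteria vary by convention and agency:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with a 95% CI for the 2x2 table:
           target AE   other AEs
    drug        a          b
    others      c          d
    """
    ror = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Made-up report counts for illustration.
ror, lo, hi = reporting_odds_ratio(a=20, b=180, c=100, d=9700)
print(round(ror, 2), round(lo, 2), round(hi, 2), lo > 1)
```

Note the limitation the review emphasizes: because spontaneous reporting systems lack denominators (total exposed patients), the ROR measures reporting disproportionality, not incidence.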


Subjects
Adverse Drug Reaction Reporting Systems , Algorithms , Drug-Related Side Effects and Adverse Reactions/diagnosis , Medical Informatics/methods , Bayes Theorem , Data Mining , Databases, Factual , Drug-Related Side Effects and Adverse Reactions/epidemiology , Humans , Models, Statistical , Odds Ratio , ROC Curve , Reproducibility of Results
19.
Proc Natl Acad Sci U S A ; 117(9): 4571-4577, 2020 03 03.
Article in English | MEDLINE | ID: mdl-32071251

ABSTRACT

Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust afforded by given models. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here, we present expert-augmented machine learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We used a large dataset of intensive-care patient data to derive 126 decision rules that predict hospital mortality. Using an online platform, we asked 15 clinicians to assess the relative risk of the subpopulation defined by each rule compared to the total sample. We compared the clinician-assessed risk to the empirical risk and found that, while clinicians agreed with the data in most cases, there were notable exceptions where they overestimated or underestimated the true risk. Studying the rules with greatest disagreement, we identified problems with the training data, including one miscoded variable and one hidden confounder. Filtering the rules based on the extent of disagreement between clinician-assessed risk and empirical risk, we improved performance on out-of-sample data and were able to train with less data. EAML provides a platform for automated creation of problem-specific priors, which help build robust and dependable machine-learning models in critical applications.
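The rule-filtering step described above can be sketched as comparing clinician-assessed risk against empirical risk and keeping only rules where the two roughly agree. The rules, risk values, and tolerance below are hypothetical illustrations, not the paper's data or exact filtering criterion:

```python
# Hedged sketch of expert-augmented rule filtering: rules with large
# expert-vs-data disagreement are set aside for inspection (as with the
# miscoded variable found in the study) rather than used as priors.
rules = [
    # (rule description, clinician-assessed relative risk, empirical relative risk)
    ("age > 80 and lactate > 4",    3.0, 3.2),
    ("miscoded_variable == 1",      0.8, 5.0),  # large disagreement -> inspect data
    ("on vasopressors and GCS < 8", 4.0, 3.6),
]

def keep_rule(expert_rr, empirical_rr, tol=1.0):
    """Retain a rule when expert and empirical risk agree within tolerance."""
    return abs(expert_rr - empirical_rr) <= tol

kept = [desc for desc, exp_rr, emp_rr in rules if keep_rule(exp_rr, emp_rr)]
print(kept)  # the miscoded-variable rule is filtered out
```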


Subjects
Expert Systems , Machine Learning/standards , Medical Informatics/methods , Data Management/methods , Database Management Systems , Medical Informatics/standards
20.
J Assoc Physicians India ; 71(10): 83-88, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38716529

ABSTRACT

Digital technology has encompassed all aspects of healthcare. Many international and national organizations, guidelines, and formats exist for health information systems (HIS), but many are still not used in India. The aim is to provide a flawless, secure, and user-friendly health information technology (IT) system for Indian healthcare. We discuss the timeline of digital technology in hospital administration, administrative applications, and the importance of clinical quality in health, along with clinical perspectives on clinical information systems (CIS) in both acute and chronic clinical care models. Cross-integration of health IT (HIT) across electronic health records (EHRs) or electronic medical records (EMRs), chronic disease management (CDM) systems, and clinical decision support systems (CDSS) is elaborated, and practical strategic application methods are discussed. Current HIS software in India is mostly used for transaction reporting, prescription, and administrative tools; it lacks CIS and strategic business applications compared with mature multinational company (MNC) HIS software. In addition, the features and levels of HIS software, the challenges of HIT adoption, Indian health IT standards, and the future framework of health IT in India are systematically analyzed. We address all physicians in India at all levels of practice, from individuals and group practices to health institutes and corporate hospitals, and encourage them to make strategic use of CIS and strategic IT applications in their practice and hospital management. This will improve clinical outcomes, patient safety, practitioner performance, and adherence to treatment guidelines, and will reduce medical errors while improving efficiency and lowering costs. How to cite this article: Taneja D, Kulkarni SV, Sinha S, et al. Digital Technology in Hospital Administration: A Strategic Choice.
J Assoc Physicians India 2023;71(10):83-88.


Subjects
Hospital Administration , Humans , India , Hospital Administration/methods , Digital Technology , Decision Support Systems, Clinical , Electronic Health Records , Medical Informatics/methods , Health Information Systems