Results 1 - 20 of 93

1.
BMC Med Inform Decis Mak ; 24(1): 65, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443881

ABSTRACT

BACKGROUND: Multimodal histology image registration is the process of transforming two or more images obtained from different microscopy modalities into a common coordinate system. Combining information from multiple modalities can contribute to a comprehensive understanding of tissue specimens, supporting more accurate diagnoses and improved research insights. Multimodal image registration of histology samples is a significant challenge because of the inherent differences in image characteristics and the need for optimization algorithms tailored to each modality. RESULTS: We developed MMIR, a cloud-based system for multimodal histological image registration, which consists of three main modules: a project manager, an algorithm manager, and an image visualization system. CONCLUSION: Our software solution aims to simplify image registration tasks with a user-friendly approach. It facilitates effective algorithm management, provides responsive web interfaces, supports multi-resolution images, and enables batch image registration. Moreover, its adaptable architecture allows the integration of custom algorithms, ensuring that it aligns with the specific requirements of each modality combination. Beyond image registration, our software enables the conversion of segmented annotations from one modality to another.


Subject(s)
Algorithms, Software, Humans
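
To make the registration step concrete, the following is a minimal feature-based sketch in Python using OpenCV; the file names, the ORB detector, and the affine model are illustrative assumptions, not details of the MMIR system.

```python
import cv2
import numpy as np

# Load the two modalities as grayscale (file names are hypothetical).
fixed = cv2.imread("he_stain.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("ihc_stain.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(fixed, None)
kp2, des2 = orb.detectAndCompute(moving, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.queryIdx].pt for m in matches])
dst = np.float32([kp1[m.trainIdx].pt for m in matches])

# Estimate a robust affine transform and warp the moving image
# into the fixed image's coordinate system.
matrix, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
registered = cv2.warpAffine(moving, matrix, fixed.shape[::-1])
cv2.imwrite("registered.png", registered)
```
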
2.
J Biomed Inform ; 137: 104272, 2023 01.
Article in English | MEDLINE | ID: mdl-36563828

ABSTRACT

BACKGROUND: Secondary use of health data is a valuable source of knowledge that boosts observational studies, leading to important discoveries in the medical and biomedical sciences. The fundamental guiding principle of a successful observational study is to define the research question and the approach before executing the study. However, in multi-centre studies, finding suitable datasets to support a study is challenging, time-consuming, and sometimes impossible without a deep understanding of each dataset. METHODS: We propose a strategy for retrieving semantically annotated biomedical datasets of interest, using an interface built by applying a methodology that transforms natural language questions into formal language queries. The advantages of creating biomedical semantic data are enhanced by using natural language interfaces to issue complex queries without manipulating a logical query language. RESULTS: Our methodology was validated using Alzheimer's disease datasets published in a European platform for sharing and reusing biomedical data. We converted the data into a semantic information format using biomedical ontologies in everyday use in the biomedical community and published it as a FAIR endpoint. We considered natural language questions of three types: single-concept questions, questions with exclusion criteria, and multi-concept questions. Finally, we analysed the performance and limitations of the question-answering module we used. The source code is publicly available at https://bioinformatics-ua.github.io/BioKBQA/. CONCLUSION: We propose a strategy that uses information extracted from biomedical data and transformed into a semantic format using open biomedical ontologies. Our method uses natural language to formulate questions to be answered from this semantic data without the direct use of formal query languages.


Subject(s)
Natural Language Processing, Semantics, Software, Language, Databases, Factual
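
As a concrete illustration of the query side of this pipeline, the sketch below issues the kind of formal query a single-concept question might compile to; the endpoint URL and predicate names are invented for the example, and SPARQLWrapper is just one convenient client.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical FAIR endpoint and vocabulary; a real deployment would use
# the ontology terms the datasets were annotated with.
endpoint = SPARQLWrapper("https://example.org/fair/sparql")
endpoint.setQuery("""
    PREFIX ex: <https://example.org/schema#>
    SELECT ?dataset WHERE {
        ?dataset ex:containsVariable ?var .
        ?var ex:concept "MMSE" .
    }
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["dataset"]["value"])
```
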
3.
Bioinformatics ; 37(Suppl_1): i84-i92, 2021 07 12.
Article in English | MEDLINE | ID: mdl-34252946

ABSTRACT

MOTIVATION: The process of bringing new drugs to market is time-consuming, expensive and complex. The application of computational methods for designing molecules with bespoke properties can help save resources throughout this process. However, the fundamental properties to be optimized are often not considered, or conflict with each other. In this work, we propose a novel approach that considers both the biological property and the bioavailability of compounds, through a deep reinforcement learning framework for the targeted generation of compounds. We aim to obtain a promising set of compounds that are selective for the adenosine A2A receptor and, simultaneously, have the necessary solubility and blood-brain barrier permeability to reach the site of action. The cornerstone of the framework is a recurrent neural network architecture, the Generator, which seeks to learn the building rules of valid molecules in order to sample new compounds. In addition, two Predictors are trained to estimate the properties of interest of the new molecules. Finally, the Generator was fine-tuned with reinforcement learning, integrated with multi-objective optimization and exploratory techniques, to ensure that it is adequately biased. RESULTS: The biased Generator can generate an interesting set of molecules, approximately 85% of which exhibit the two desired properties. Thus, this approach has transformed a general molecule generator into a model focused on optimizing specific objectives. Furthermore, the synthesizability and drug-likeness of the generated molecules demonstrate the potential applicability of this de novo drug design approach in medicinal chemistry. AVAILABILITY AND IMPLEMENTATION: All code is publicly available at https://github.com/larngroup/De-Novo-Drug-Design. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Blood-Brain Barrier, Drug Design, Biological Transport, Neural Networks, Computer
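
One way to picture the multi-objective step is as a scalarization of the Predictors' outputs into a single reward signal; the sketch below uses a weighted sum with invented weights and thresholds, not the paper's actual reward design.

```python
def reward(affinity_score: float, logp: float,
           w_affinity: float = 0.5, w_perm: float = 0.5) -> float:
    """Scalarized reward combining predicted A2A affinity with a
    crude blood-brain-barrier permeability proxy based on logP.

    Both weights and the desirable logP window (1-3) are assumptions
    used only to illustrate weighted-sum multi-objective scalarization.
    """
    # Map predicted affinity (e.g. pKi in [4, 10]) onto [0, 1].
    affinity_term = min(max((affinity_score - 4.0) / 6.0, 0.0), 1.0)
    # Reward logP values inside a CNS-friendly window, penalize outside.
    perm_term = 1.0 if 1.0 <= logp <= 3.0 else 0.2
    return w_affinity * affinity_term + w_perm * perm_term

# Example: a molecule predicted at pKi 7.5 with logP 2.1.
print(reward(7.5, 2.1))  # 0.5 * 0.583... + 0.5 * 1.0
```
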
4.
J Biomed Inform ; 120: 103849, 2021 08.
Article in English | MEDLINE | ID: mdl-34214696

ABSTRACT

BACKGROUND: The clinical notes continuously collected along patients' health histories have the potential to provide relevant information about treatments and diseases, and to increase the value of the structured data available in Electronic Health Record (EHR) databases. EHR databases are currently used in observational studies that have led to important findings in the medical and biomedical sciences. However, the information present in clinical notes is not used in those studies, since the computational analysis of this unstructured data is much more complex than that of structured data. METHODS: We propose a two-stage workflow that addresses an existing gap in Extraction, Transformation and Loading (ETL) procedures for observational databases. The first stage of the workflow extracts prescriptions present in patients' clinical notes, while the second stage harmonises the extracted information into its standard definition and stores the result in a common database schema used in observational studies. RESULTS: We validated this methodology on two distinct data sets, with the goal of extracting and storing drug-related information in a new Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) database. We analysed the performance of the annotator used, as well as its limitations. Finally, we described practical examples of how users can explore these datasets once migrated to OMOP CDM databases. CONCLUSION: With this methodology, we demonstrated a strategy for using the information extracted from clinical notes in business intelligence tools, or for other applications such as data exploration through SQL queries. Moreover, the extracted information complements the data present in OMOP CDM databases with information that was not directly available in the structured EHR data.


Subject(s)
Electronic Health Records, Pharmaceutical Preparations, Databases, Factual, Delivery of Health Care, Humans, Workflow
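
A toy version of the extraction stage could look like the following; the regular expression and the concept lookup table are deliberately simplistic assumptions, far cruder than the annotator evaluated in the paper.

```python
import re

# Minimal lookup from drug mention to an OMOP-style concept id.
# These ids are placeholders, not real OMOP vocabulary entries.
CONCEPT_MAP = {"aspirin": 1112807, "metformin": 1503297}

PRESCRIPTION_RE = re.compile(
    r"(?P<drug>aspirin|metformin)\s+(?P<dose>\d+)\s*mg", re.IGNORECASE)

def extract_drug_exposures(note: str, person_id: int) -> list[dict]:
    """Stage 1: extract prescriptions from free text; stage 2 would
    shape them as rows for the OMOP CDM DRUG_EXPOSURE table
    (columns abbreviated here)."""
    rows = []
    for m in PRESCRIPTION_RE.finditer(note):
        rows.append({
            "person_id": person_id,
            "drug_concept_id": CONCEPT_MAP[m.group("drug").lower()],
            "dose_mg": int(m.group("dose")),
        })
    return rows

note = "Patient started on Metformin 500 mg twice daily."
print(extract_drug_exposures(note, person_id=42))
```
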
5.
J Biomed Inform ; 93: 103154, 2019 05.
Article in English | MEDLINE | ID: mdl-30922867

ABSTRACT

BACKGROUND: The global shift from paper health records to electronic ones has led to an impressive growth of biomedical digital data over the past two decades. Exploring and extracting knowledge from these data has the potential to enhance translational research and lead to positive outcomes for population health and healthcare. OBJECTIVE: The aim of this study was to conduct a systematic review identifying software platforms that enable discovery, secondary use and interoperability of biomedical data, and to evaluate the identified solutions in terms of clinical interest and main healthcare-related outcomes. METHODS: A systematic search of the scientific literature published and indexed in PubMed between January 2014 and September 2018 was performed. Inclusion criteria were: relevance to the topic of biomedical data discovery, English language, and free full text. To increase recall, we developed a semi-automatic and incremental methodology to retrieve articles that cite one or more of the previously retrieved set. RESULTS: A total of 500 candidate papers were retrieved through this methodology. Of these, 85 were eligible for abstract assessment. Finally, 37 studies qualified for full-text review, and 20 provided enough information for the study objectives. CONCLUSIONS: This study revealed that biomedical discovery platforms are both a current necessity and a significantly innovative agent in healthcare. The outcomes identified, in terms of scientific publications, clinical studies and research collaborations, stand as evidence.


Subject(s)
Electronic Health Records, Translational Research, Biomedical, Humans, Software
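
The semi-automatic, incremental retrieval described in the METHODS is essentially citation snowballing; a sketch of that loop follows, where fetch_citing_ids is a hypothetical stand-in for whatever citation service is used.

```python
def snowball(seed_ids: set[str], fetch_citing_ids, max_rounds: int = 3) -> set[str]:
    """Incrementally expand a seed set with articles that cite it.

    `fetch_citing_ids(article_id) -> set[str]` is a stand-in for a real
    citation service; the stopping rule (a fixed number of rounds) is
    one of several reasonable choices.
    """
    collected = set(seed_ids)
    frontier = set(seed_ids)
    for _ in range(max_rounds):
        new_ids = set()
        for article_id in frontier:
            new_ids |= fetch_citing_ids(article_id) - collected
        if not new_ids:
            break
        collected |= new_ids
        frontier = new_ids
    return collected

# Toy citation graph standing in for a real API.
graph = {"A": {"B", "C"}, "B": {"D"}, "C": set(), "D": set()}
print(snowball({"A"}, lambda i: graph.get(i, set())))  # {'A','B','C','D'}
```
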
6.
BMC Med Inform Decis Mak ; 19(1): 121, 2019 07 02.
Article in English | MEDLINE | ID: mdl-31266480

ABSTRACT

BACKGROUND: Many healthcare databases have been routinely collected over the past decades to support clinical practice and administrative services. However, their secondary use for research is often hindered by restrictive governance rules. Furthermore, health research studies typically involve many participants with complementary roles and responsibilities, which require proper process management. RESULTS: From a wide set of requirements collected from European clinical studies, we developed TASKA, a task/workflow management system that helps cope with the socio-technical issues that arise in multidisciplinary, multi-setting clinical studies. The system is based on a two-layered architecture: (1) the backend engine, which follows a micro-kernel pattern for extensibility and exposes RESTful web services for decoupling from the web clients; and (2) the client, entirely developed in ReactJS, allowing the construction and management of studies through a graphical interface. TASKA is a GNU GPL open source project, accessible at https://github.com/bioinformatics-ua/taska . A demo version is also available at https://bioinformatics.ua.pt/taska . CONCLUSIONS: The system is currently used to support feasibility studies across several institutions and countries, in the context of the European Medical Information Framework (EMIF) project. The tool was shown to simplify the set-up of health studies, the management of participants and their roles, and the overall governance process.


Subject(s)
Health Services Research/organization & administration, Task Performance and Analysis, Databases, Factual, Humans, Software, User-Computer Interface, Workflow
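
At its core, a study workflow engine of this kind enforces task states and transitions; the minimal sketch below illustrates that idea with invented state names, which are not TASKA's actual model.

```python
# Allowed task state transitions; names and edges are assumptions
# chosen only to illustrate workflow enforcement.
ALLOWED = {
    "created": {"assigned"},
    "assigned": {"in_progress", "created"},
    "in_progress": {"done", "assigned"},
    "done": set(),
}

class Task:
    def __init__(self, name: str, assignee: str | None = None):
        self.name, self.assignee, self.state = name, assignee, "created"

    def move(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

task = Task("collect site A feasibility counts")
task.assignee = "site-coordinator"
task.move("assigned")
task.move("in_progress")
task.move("done")
print(task.name, task.state)
```
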
7.
PLoS Comput Biol ; 12(11): e1005219, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27893735

ABSTRACT

De novo experimental drug discovery is an expensive and time-consuming task. It requires the identification of drug-target interactions (DTIs) towards targets of biological interest, either to inhibit or to enhance a specific molecular function. Dedicated computational models for protein simulation and DTI prediction are crucial for speed and for reducing the costs associated with DTI identification. In this paper we present a computational pipeline for discovering putative leads for drug repositioning that can be applied to any microbial proteome, as long as the interactome of interest is at least partially known. Network metrics calculated for the interactome of the bacterial organism of interest were used to identify putative drug targets. Then, a random forest classification model for DTI prediction was constructed using known DTI data from publicly available databases, resulting in an area under the ROC curve of 0.91 for classification of out-of-sample data. A drug-target network was created by combining 3,081 unique ligands and the ten most promising drug targets. This network was used to predict new DTIs and to calculate the probability of the positive class, allowing the predicted instances to be scored. Molecular docking experiments were performed on the best-scoring DTI pairs, and the results were compared with those of the same ligands with their original targets. The results obtained suggest that the proposed pipeline can be used to identify new leads for drug repositioning. The proposed classification model is available at http://bioinformatics.ua.pt/software/dtipred/.


Subject(s)
Anti-Bacterial Agents/chemistry, Bacterial Proteins/chemistry, Drug Discovery/methods, Drug Repositioning/methods, Models, Chemical, Protein Interaction Mapping/methods, Computer Simulation, Drug Evaluation, Preclinical/methods
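
The classification-and-scoring step maps naturally onto a standard random forest workflow; the sketch below uses scikit-learn with synthetic features standing in for the network metrics and chemical descriptors used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for drug-target pair features; labels mark
# known interacting pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

# The probability of the positive class ranks candidate DTIs,
# mirroring the scoring of predicted instances described above.
scores = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))
ranked = np.argsort(scores)[::-1][:10]  # ten best-scoring candidate pairs
```
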
8.
J Med Syst ; 41(5): 89, 2017 May.
Article in English | MEDLINE | ID: mdl-28405948

ABSTRACT

Clinical data sharing between healthcare institutions and practitioners is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental to establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes, somewhat more complex than the simple de-identification of textual information. Usually, before sharing, specific areas of the images containing sensitive information must be removed manually. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and a Software-as-a-Service that streamlines the automatic de-identification of medical images, freely available to end-users. The proposed approach applies image-processing functions and machine-learning models to deliver an automatic system for anonymizing medical images. For character recognition, we evaluated several machine-learning models, with Convolutional Neural Networks (CNNs) selected as the best approach. To assess the system's quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and works with the most recent versions of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available to the community.


Subject(s)
Data Anonymization, Confidentiality, Image Processing, Computer-Assisted, Information Dissemination, Privacy, Software, Ultrasonography
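
The image-processing half of such a pipeline can be approximated in a few OpenCV operations; the sketch below simply blacks out bright, text-like regions, a much cruder heuristic than the CNN-based character recognition the service uses.

```python
import cv2
import numpy as np

def mask_burned_in_text(image: np.ndarray) -> np.ndarray:
    """Black out bright, text-like blobs in an ultrasound frame.

    A crude heuristic stand-in: the system described above locates
    characters with a trained CNN before masking.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # Dilate so neighbouring characters merge into single regions.
    binary = cv2.dilate(binary, np.ones((3, 9), np.uint8))
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = image.copy()
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 2 * h:  # wide, short blobs are likely text lines
            cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 0), -1)
    return out

frame = cv2.imread("ultrasound.png")  # hypothetical input file
cv2.imwrite("anonymized.png", mask_burned_in_text(frame))
```
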
9.
J Med Syst ; 41(4): 54, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28214993

ABSTRACT

In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword-search applications. This type of approach has achieved great results in the last few years, but it proved unfeasible when information needs to be combined or shared among different and scattered sources. Many of these data distribution challenges have since been solved with the adoption of semantic web technologies. Despite the evident benefits of these technologies, their adoption introduced new challenges related to the migration from existing systems to the semantic level. To facilitate this transition, we developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems to bring knowledge, inference rules, and query federation to the existing data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers create new semantically enhanced information systems. Scaleus is available as open source at http://bioinformatics-ua.github.io/scaleus/ .


Subject(s)
Databases, Factual, Medical Informatics/organization & administration, Semantics, Systems Integration, Humans, Information Storage and Retrieval/methods
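
To illustrate what a migration to the semantic level enables, here is a minimal rdflib example that loads a handful of triples and answers a declarative query; the vocabulary is invented for the example and unrelated to Scaleus internals.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("https://example.org/bio#")
g = Graph()

# Triples that a relational-to-RDF migration might produce.
g.add((EX.gene42, EX.symbol, Literal("TP53")))
g.add((EX.gene42, EX.associatedWith, EX.disease7))
g.add((EX.disease7, EX.label, Literal("Li-Fraumeni syndrome")))

# Once data is semantic, questions become declarative queries.
query = """
    PREFIX ex: <https://example.org/bio#>
    SELECT ?symbol ?disease WHERE {
        ?gene ex:symbol ?symbol ;
              ex:associatedWith ?d .
        ?d ex:label ?disease .
    }
"""
for symbol, disease in g.query(query):
    print(symbol, "->", disease)
```
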
10.
BMC Bioinformatics ; 16: 328, 2015 Oct 13.
Article in English | MEDLINE | ID: mdl-26464306

ABSTRACT

BACKGROUND: In recent years, data integration has become an everyday undertaking for life sciences researchers. Aggregating and processing data from disparate sources, whether through purpose-built software or manual processes, is a common task for scientists. However, the scope and usability of most current integration tools fail to keep up with the fast-growing and highly dynamic nature of biomedical data. RESULTS: In this work we introduce a reactive, event-driven framework that simplifies real-time data integration and interoperability. This platform facilitates otherwise difficult tasks, such as connecting heterogeneous services; indexing, linking and transferring data from distinct resources; and subscribing to notifications regarding the timeliness of dynamic data. For developers, the framework automates the deployment of integrative and interoperable bioinformatics applications, using atomic data storage for content change detection and enabling agent-based intelligent extract, transform and load tasks. CONCLUSIONS: This work bridges the gap between the growing number of services, which access specific data sources or algorithms, and the growing number of users, who perform simple integration tasks on a recurring basis, through a streamlined workspace available to researchers and developers alike.


Subject(s)
Software, Automation, Cloud Computing, Computational Biology, Humans
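
The "atomic data storage for content change detection" idea can be reduced to fingerprinting each resource and firing an event only when the fingerprint changes; the sketch below shows that nucleus, with an in-memory dictionary standing in for the framework's persistence layer.

```python
import hashlib
from typing import Callable

class ChangeDetector:
    """Fire a callback when a resource's content fingerprint changes.

    An in-memory dict stands in for the persistent, per-resource
    storage that a real integration framework would use.
    """
    def __init__(self) -> None:
        self._digests: dict[str, str] = {}

    def check(self, resource_id: str, content: bytes,
              on_change: Callable[[str], None]) -> None:
        digest = hashlib.sha256(content).hexdigest()
        if self._digests.get(resource_id) != digest:
            self._digests[resource_id] = digest
            on_change(resource_id)  # e.g. trigger an ETL agent

detector = ChangeDetector()
detector.check("uniprot:P04637", b"v1", print)  # fires: new resource
detector.check("uniprot:P04637", b"v1", print)  # silent: unchanged
detector.check("uniprot:P04637", b"v2", print)  # fires: content changed
```
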
11.
BMC Mol Biol ; 16: 22, 2015 Dec 22.
Article in English | MEDLINE | ID: mdl-26694924

ABSTRACT

BACKGROUND: Small non-coding RNAs (sncRNAs) are a class of transcripts implicated in several eukaryotic regulatory mechanisms, namely gene silencing and chromatin regulation. Despite significant progress in their identification by next-generation sequencing (NGS), we are still far from understanding their full diversity and functional repertoire. RESULTS: Here we report the identification of tRNA-derived fragments (tRFs) by NGS of the sncRNA fraction of zebrafish. The tRFs identified are 18-30 nt long, are derived from specific 5' and 3' processing of mature tRNAs, and are differentially expressed during development and in differentiated tissues, suggesting that they are likely produced by specific processing rather than random degradation of tRNAs. We further show that a highly expressed tRF (5'tRF-Pro(CGG)) is cleaved in vitro by Dicer and has silencing ability, indicating that it can enter the RNAi pathway. A computational analysis of zebrafish tRFs shows that they are conserved among vertebrates, and mining of publicly available datasets reveals that some 5'tRFs are differentially expressed in disease conditions, namely during infection and colorectal cancer. CONCLUSIONS: tRFs constitute a class of conserved regulatory RNAs in vertebrates and may be involved in mechanisms of genome regulation and in some diseases.


Subject(s)
Base Sequence/genetics, Conserved Sequence/genetics, RNA, Small Untranslated/genetics, RNA, Transfer/genetics, Regulatory Sequences, Ribonucleic Acid/genetics, Animals, Cell Line, Colorectal Neoplasms/genetics, Gene Expression Regulation/genetics, High-Throughput Nucleotide Sequencing, Humans, Mice, NIH 3T3 Cells, RNA Interference, Ribonuclease III/metabolism, Sequence Analysis, RNA, Zebrafish
12.
Nucleic Acids Res ; 41(6): e73, 2013 Apr 01.
Article in English | MEDLINE | ID: mdl-23325845

ABSTRACT

Secondary structure of messenger RNA plays an important role in protein biosynthesis. Its negative impact on translation can reduce protein yield by slowing or blocking the initiation and movement of ribosomes along the mRNA, making it a major factor in the regulation of gene expression. Several algorithms can predict the formation of secondary structures by calculating the minimum free energy of RNA sequences, or perform the inverse process of obtaining an RNA sequence for a given structure. However, there is still no approach for redesigning an mRNA to achieve minimal secondary structure without affecting the amino acid sequence. Here we present the first strategy to optimize mRNA secondary structures, increasing (or decreasing) the minimum free energy of a nucleotide sequence without changing its resulting polypeptide, in a time-efficient manner, through a simple approximation of hairpin formation. Our data show that this approach can efficiently increase the minimum free energy by >40%, strongly reducing the strength of secondary structures. Applications of this technique range from multi-objective optimization of genes, by controlling minimum free energy together with CAI and other gene expression variables, to optimization of secondary structures at the genomic level.


Subject(s)
Algorithms, RNA, Messenger/chemistry, Animals, Drosophila melanogaster/genetics, Nucleic Acid Conformation
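
The core move, swapping synonymous codons to weaken secondary structure without altering the protein, can be sketched as follows; the GC-content proxy is a deliberate simplification of the hairpin-based free-energy approximation used in the paper.

```python
# Synonymous-codon swap that preserves the protein while reducing a
# crude structure-strength proxy (GC content); the real optimizer
# scores hairpin formation / minimum free energy instead.
SYNONYMS = {  # tiny excerpt of the standard codon table
    "GCC": ["GCA", "GCT", "GCG"],  # Ala
    "CGG": ["CGA", "CGT", "AGA"],  # Arg
    "CTG": ["TTA", "CTA", "CTT"],  # Leu
}

def gc_fraction(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def weaken_structure(cds: str) -> str:
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    for i, codon in enumerate(codons):
        # Greedily pick the synonymous codon with the lowest GC content.
        candidates = [codon] + SYNONYMS.get(codon, [])
        codons[i] = min(candidates, key=gc_fraction)
    return "".join(codons)

cds = "GCCCGGCTG"  # Ala-Arg-Leu
optimized = weaken_structure(cds)
print(optimized, gc_fraction(cds), "->", gc_fraction(optimized))
```
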
13.
J Digit Imaging ; 28(6): 671-83, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26013637

ABSTRACT

The production of medical imaging is a continuing trend in healthcare institutions. Quality assurance for planned radiation exposure situations (e.g. X-ray, computed tomography) requires examination-specific set-ups according to several parameters, such as the patient's age and weight, body region and clinical indication. These data are normally stored in several formats and under different nomenclatures, which hinders the continuous, automatic monitoring of these indicators and the comparison between institutions and equipment. This article proposes a framework that aggregates, normalizes and provides different views over the collected indicators. The developed tool can be used to improve the quality of radiologic procedures and for benchmarking and auditing purposes. Finally, a case study and several experimental results related to radiation exposure and productivity are presented and discussed.


Subject(s)
Quality Assurance, Health Care, Radiation Dosage, Radiology Information Systems, Humans, Tomography, X-Ray Computed
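
The aggregate-normalize-view part of such a framework boils down to loading heterogeneous exports into one table and grouping on the relevant parameters; a pandas sketch with invented column names and values follows.

```python
import pandas as pd

# Normalized dose records from two sites; column names are invented
# for the example, mirroring the parameters listed above.
records = pd.DataFrame({
    "site":        ["A", "A", "B", "B"],
    "modality":    ["CT", "CT", "CT", "XR"],
    "body_region": ["head", "chest", "head", "chest"],
    "dose_mgy":    [55.0, 12.0, 61.0, 0.2],
})

# A benchmarking "view": median dose per modality and body region,
# comparable across institutions and equipment.
view = (records
        .groupby(["modality", "body_region"])["dose_mgy"]
        .agg(["median", "count"]))
print(view)
```
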
14.
BMC Bioinformatics ; 15: 31, 2014 Jan 30.
Article in English | MEDLINE | ID: mdl-24475928

ABSTRACT

BACKGROUND: The diagnosis and prognosis of several diseases can be shortened through the use of different large-scale genome experiments. In this context, microarrays can generate expression data for a huge set of genes. However, to obtain solid statistical evidence from the resulting data, it is necessary to train and validate many classification techniques in order to find the best discriminative method. This is a time-consuming process that normally depends on intricate statistical tools. RESULTS: geneCommittee is a web-based interactive tool for routinely evaluating the discriminative classification power of custom hypotheses in the form of biologically relevant gene sets. While the user can work with different gene set collections and several microarray data files to configure specific classification experiments, the tool is able to run several tests in parallel. Provided with a straightforward and intuitive interface, geneCommittee can render valuable information for diagnostic analyses and clinical management decisions by systematically evaluating custom hypotheses over different data sets using complementary classifiers, a key aspect in clinical research. CONCLUSIONS: geneCommittee allows the enrichment of raw microarray data with gene functional annotations, producing integrated datasets that simplify the construction of better discriminative hypotheses, and allows the creation of sets of complementary classifiers. The trained committees can then be used for clinical research and diagnosis. Full documentation, including common use cases and guided analysis workflows, is freely available at http://sing.ei.uvigo.es/GC/.


Subject(s)
Computational Biology/methods, Databases, Genetic, Gene Expression Profiling/methods, Internet, Software, Disease/genetics, Humans, User-Computer Interface
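
The committee-of-complementary-classifiers idea corresponds closely to a soft-voting ensemble; the scikit-learn sketch below shows one on synthetic expression-like data, with the member models chosen arbitrarily.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a microarray matrix restricted to one
# biologically relevant gene set (samples x genes).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 40))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # toy phenotype labels

committee = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=1)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",  # average predicted probabilities across members
)
print(cross_val_score(committee, X, y, cv=5).mean())
```
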
15.
Hum Mutat ; 35(2): 202-7, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24186831

ABSTRACT

Triggered by the sequencing of the human genome, personalized medicine has been one of the fastest-growing research areas of the last decade. Multiple software and hardware technologies have been developed by several projects, culminating in the exponential growth of genetic data. Given the technological developments in this field, it is now fairly easy and inexpensive to obtain genetic profiles for unique individuals, such as those produced by several genetic analysis companies. The availability of computational tools that simplify genetic data analysis and the disclosure of biomedical evidence is therefore of utmost importance. We present Variobox, a desktop tool to annotate, analyze, and compare human genes. Variobox obtains variant annotation data from WAVe and protein metadata annotations from the Protein Data Bank; sequences are obtained from the Locus Reference Genomic (LRG) or RefSeq databases. To explore the data, Variobox provides an advanced sequence visualization that enables agile navigation through genetic regions. DNA sequencing data can be compared with reference sequences retrieved from LRG or RefSeq records, identifying and automatically annotating new potential variants. These features and data, ranging from patient sequences to HGVS-compliant variant descriptions, are combined in an intuitive interface for analyzing genes and variants. Variobox is a Java application, available at http://bioinformatics.ua.pt/variobox.


Subject(s)
Computational Biology/methods, Databases, Genetic, Genetic Variation, Genome, Human, Molecular Sequence Annotation, Amino Acid Sequence, Base Sequence, Humans, Precision Medicine, Reproducibility of Results, Software
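
The compare-against-reference step can be pictured as scanning an aligned patient sequence for mismatches and emitting HGVS-like descriptions; the sketch below assumes pre-aligned, equal-length sequences, which sidesteps the alignment and numbering machinery a real tool needs.

```python
def call_substitutions(reference: str, patient: str) -> list[str]:
    """Report substitutions in a crude HGVS-like notation.

    Assumes the sequences are already aligned and of equal length;
    insertions, deletions, and real HGVS numbering are out of scope.
    """
    if len(reference) != len(patient):
        raise ValueError("sequences must be pre-aligned")
    variants = []
    for pos, (ref, alt) in enumerate(zip(reference, patient), start=1):
        if ref != alt:
            variants.append(f"c.{pos}{ref}>{alt}")
    return variants

print(call_substitutions("ATGGCGT", "ATGACGT"))  # ['c.4G>A']
```
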
16.
Bioinformatics ; 29(15): 1915-6, 2013 Aug 01.
Article in English | MEDLINE | ID: mdl-23736528

ABSTRACT

SUMMARY: The continuous growth of the biomedical scientific literature has been motivating the development of text-mining tools able to process all this information efficiently. Although numerous domain-specific solutions are available, there is no web-based concept-recognition system that combines the ability to select multiple concept types to annotate, to reference external databases and to automatically annotate nested and intersecting concepts. BeCAS, the Biomedical Concept Annotation System, is an API for biomedical concept identification and a web-based tool that addresses these limitations. MEDLINE abstracts or free text can be annotated directly in the web interface, where identified concepts are enriched with links to reference databases. Using its customizable widget, it can also be used to augment external web pages with concept-highlighting features. Furthermore, all text-processing and annotation features are made available through an HTTP REST API, allowing integration into any text-processing pipeline. AVAILABILITY: BeCAS is freely available for non-commercial use at http://bioinformatics.ua.pt/becas. CONTACT: tiago.nunes@ua.pt or jlo@ua.pt.


Subject(s)
Data Mining/methods, Software, Databases, Factual, Internet, MEDLINE
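
Because annotation is exposed over HTTP, client code reduces to a single request; note that the endpoint path and payload shape below are assumptions for illustration, so the service's own documentation defines the actual contract.

```python
import requests

# Hypothetical endpoint path and payload shape; the real API contract
# is documented with the service itself.
resp = requests.post(
    "http://bioinformatics.ua.pt/becas/api/text/annotate",
    json={"text": "BRCA1 mutations increase breast cancer risk.",
          "groups": {"GENE": True, "DISO": True}},
    timeout=30,
)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    print(entity)
```
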
17.
J Digit Imaging ; 27(2): 165-73, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24146358

ABSTRACT

The Picture Archiving and Communication System (PACS) is a globally adopted concept and plays a fundamental role in the patient care flow within healthcare institutions. However, deploying medical imaging repositories over multiple sites still brings several practical challenges, namely related to operation and management (O&M). This paper describes a Web-based centralized console that provides remote monitoring, testing, and management over multiple geo-distributed PACS. The system allows the PACS administrator to define any kind of service or operation, reducing the need for local technicians and providing a 24/7 monitoring solution.


Subject(s)
Computer Communication Networks, Radiology Information Systems/organization & administration, Humans, Information Storage and Retrieval, Internet, Systems Integration
18.
J Med Syst ; 38(8): 63, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24957389

ABSTRACT

The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data play a critical role in assisting physicians in clinical practice, the information that can be extracted goes far beyond this use. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory through a network of intelligent sensors. The proposed integration framework follows a hybrid SOA architecture based on an information sensor network capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository meta-data, network workflows and examination reports. Each sensor is responsible for converting unstructured information from its data source into a common format that is then semantically indexed by the framework engine. The platform was deployed in the cardiology department of a central hospital, allowing the identification of process characteristics and user behaviours that were unknown before this solution was put in place.


Subject(s)
Hospital Information Systems/organization & administration, Systems Integration, Workflow, Data Mining, Humans, Radiology Information Systems/organization & administration, User-Computer Interface
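
The sensor abstraction, in which each adapter emits records in one common format for the engine to index, can be sketched as follows; the record fields are illustrative assumptions.

```python
# Each sensor adapts one unstructured source into a common record
# format that the framework engine can index; field names here are
# illustrative assumptions.
class Sensor:
    source = "generic"

    def collect(self) -> list[dict]:
        raise NotImplementedError

class DicomMetadataSensor(Sensor):
    source = "dicom"

    def collect(self) -> list[dict]:
        # A real implementation would read repository meta-data.
        return [{"source": self.source, "timestamp": "2014-01-01T08:30:00",
                 "event": "study_stored", "detail": "CT chest"}]

class ReportSensor(Sensor):
    source = "reports"

    def collect(self) -> list[dict]:
        return [{"source": self.source, "timestamp": "2014-01-01T09:10:00",
                 "event": "report_signed", "detail": "echocardiogram"}]

engine_index: list[dict] = []
for sensor in (DicomMetadataSensor(), ReportSensor()):
    engine_index.extend(sensor.collect())
print(engine_index)
```
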
19.
Heliyon ; 10(7): e28560, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38590890

ABSTRACT

Single Sign-On (SSO) methods are the primary solution for authenticating users across multiple web systems. These mechanisms streamline the authentication procedure by avoiding duplicate development of authentication modules for each application, and they provide convenience to the end-user by keeping the user authenticated when switching between different contexts. To ensure this cross-application authentication, SSO relies on an Identity Provider (IdP), which is commonly set up and managed by each institution that needs to enforce SSO internally. However, the solution is not so straightforward when several institutions need to cooperate in a single ecosystem. This could be tackled by centralizing the authentication mechanisms in one of the involved entities, a solution that raises responsibilities which may be difficult for peers to accept. Moreover, such a solution is not appropriate for dynamic groups, where peers may join or leave frequently. In this paper, we propose an architecture that uses a trusted third-party service to authenticate multiple entities, ensuring the isolation of the user's attributes between this service and the institutional SSO systems. This architecture was validated in the EHDEN Portal, which includes the web tools and services of this European health project, to establish a federated authentication scheme.
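
On the relying-party side, trusting a third-party authenticator typically reduces to validating a signed token it issued; the PyJWT sketch below shows that check, with key handling and claim names simplified as assumptions.

```python
import jwt  # PyJWT

def validate_token(token: str, public_key: str) -> dict:
    """Verify a token issued by the trusted third-party authenticator.

    Only the signature, expiry, and audience are checked here; a real
    relying party would also pin the issuer and fetch keys via JWKS.
    """
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],             # reject unsigned or alg-swapped tokens
        audience="institutional-portal",  # illustrative audience claim
    )

# Usage (token and key obtained out of band from the auth service):
# claims = validate_token(raw_token, auth_service_public_key)
# print(claims["sub"])  # stable user id; other attributes stay isolated
```
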

20.
J Imaging Inform Med ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38485898

ABSTRACT

Deep learning techniques have recently yielded remarkable results across various fields. However, the quality of these results depends heavily on the quality and quantity of the data used during the training phase. One common issue in multi-class and multi-label classification is class imbalance, where one or several classes make up a substantial portion of the total instances. This imbalance causes the neural network to prioritize features of the majority classes during training, as their detection leads to higher scores. In the context of object detection, two types of imbalance can be identified: (1) an imbalance between the space occupied by the foreground and the background, and (2) an imbalance in the number of instances per class. This paper aims to address the second type of imbalance without exacerbating the first. To achieve this, we propose a modification of the copy-paste data augmentation technique, combined with weight-balancing methods in the loss function. This strategy was specifically tailored to improve performance on datasets with a high instance density, where instance overlap can be detrimental. To validate our methodology, we applied it to a highly imbalanced dataset focused on nuclei detection. The results show that this hybrid approach improves the classification of minority classes without significantly compromising the performance of the majority classes.
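
The two halves of the proposed hybrid, pasting minority-class instances into training images and re-weighting the classification loss by inverse class frequency, can be sketched as follows; shapes and the weighting formula are common choices, not necessarily the paper's exact ones.

```python
import numpy as np
import torch
import torch.nn as nn

def paste_instance(image: np.ndarray, crop: np.ndarray,
                   top: int, left: int) -> np.ndarray:
    """Copy-paste augmentation: overlay a minority-class crop onto a
    training image at a given location (no overlap handling here,
    which matters in the high-instance-density setting above)."""
    out = image.copy()
    h, w = crop.shape[:2]
    out[top:top + h, left:left + w] = crop
    return out

# Inverse-frequency class weights for the classification loss.
instance_counts = torch.tensor([5000.0, 300.0, 120.0])  # toy per-class counts
weights = instance_counts.sum() / (len(instance_counts) * instance_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)           # model outputs for 8 instances
targets = torch.randint(0, 3, (8,))  # ground-truth class indices
loss = criterion(logits, targets)    # minority-class errors now cost more
print(weights, loss.item())
```
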
