Results 1 - 6 of 6
1.
Bioinformatics; 40(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38917415

ABSTRACT

SUMMARY: Protein Interaction Explorer (PIE) is a new web-based tool integrated into our iPPI-DB database, specifically crafted to support structure-based drug discovery initiatives focused on protein-protein interactions (PPIs). Drawing upon extensive structural data encompassing thousands of heterodimer complexes, including those with successful ligands, PIE provides a comprehensive suite of tools dedicated to aiding decision-making in PPI drug discovery. PIE enables researchers and bioinformaticians to identify and characterize crucial features such as binding pockets and functional binding sites at the interface, to predict hot spots, and to find similar protein-embedded pockets for potential repurposing efforts. AVAILABILITY AND IMPLEMENTATION: PIE is user-friendly and readily accessible at https://ippidb.pasteur.fr/targetcentric/. It relies on the NGL visualizer.
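The interface analysis the abstract mentions ultimately rests on a simple geometric criterion: a residue belongs to the interface if any of its atoms lies within a distance cutoff of the partner chain. The sketch below illustrates that criterion only; it is not PIE's code, and the atom records and 5 Å cutoff are invented for illustration.

```python
import math

# Toy atom records: (chain_id, residue_number, x, y, z).
# Real tools operate on full heterodimer structures; this sketch
# only shows the distance criterion commonly used to call a
# residue part of the protein-protein interface.
ATOMS = [
    ("A", 1, 0.0, 0.0, 0.0),
    ("A", 2, 10.0, 0.0, 0.0),
    ("B", 1, 3.0, 0.0, 0.0),
    ("B", 2, 30.0, 0.0, 0.0),
]

def interface_residues(atoms, chain_a, chain_b, cutoff=5.0):
    """Residues of chain_a with any atom within `cutoff` A of chain_b."""
    a_atoms = [a for a in atoms if a[0] == chain_a]
    b_atoms = [b for b in atoms if b[0] == chain_b]
    found = set()
    for (_, res, x1, y1, z1) in a_atoms:
        for (_, _, x2, y2, z2) in b_atoms:
            if math.dist((x1, y1, z1), (x2, y2, z2)) <= cutoff:
                found.add(res)
                break  # one contact is enough for this residue
    return sorted(found)

print(interface_residues(ATOMS, "A", "B"))  # residue A1 is 3 A from B1
```

In practice the cutoff and the choice of atoms (all atoms vs. heavy atoms only) are tunable parameters of any such analysis.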


Subject(s)
Protein Interaction Mapping , Proteins , Software , Ligands , Binding Sites , Proteins/metabolism , Proteins/chemistry , Protein Interaction Mapping/methods , Databases, Protein , Drug Discovery/methods , Protein Binding , Computational Biology/methods
2.
PLoS Comput Biol; 18(4): e1009879, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35421081

ABSTRACT

Segmenting three-dimensional (3D) microscopy images is essential for understanding phenomena like morphogenesis, cell division, cellular growth, and genetic expression patterns. Recently, deep learning (DL) pipelines have been developed that claim to provide highly accurate segmentation of cellular images and are increasingly considered the state of the art for image segmentation problems. However, the diversity of these pipelines and the lack of uniform evaluation strategies make it difficult to assess how their results compare. In this paper, we first made an inventory of the available DL methods for 3D cell segmentation. We then implemented and quantitatively compared a number of representative DL pipelines, alongside a highly efficient non-DL method named MARS. The DL methods were trained on a common dataset of 3D cellular confocal microscopy images, and their segmentation accuracies were also tested in the presence of different image artifacts. A specific method for segmentation quality evaluation was adopted, which isolates segmentation errors due to under- or oversegmentation. This is complemented with a 3D visualization strategy for interactive exploration of segmentation quality. Our analysis shows that the DL pipelines have different levels of accuracy. Two of them, which are end-to-end 3D pipelines originally designed for cell boundary detection, show high performance and offer clear advantages in terms of adaptability to new data.
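The evaluation strategy described above distinguishes oversegmentation (one true cell split across several predicted regions) from undersegmentation (several true cells merged into one predicted region). A minimal sketch of that distinction on flattened label images follows; the data and function name are invented for illustration and this is not the paper's actual evaluation code.

```python
from collections import defaultdict

# Count over- and under-segmentation errors by cross-tabulating
# ground-truth and predicted labels. Label 0 is background.
def segmentation_error_counts(gt, pred):
    gt_to_pred = defaultdict(set)
    pred_to_gt = defaultdict(set)
    for g, p in zip(gt, pred):
        if g and p:
            gt_to_pred[g].add(p)
            pred_to_gt[p].add(g)
    # A ground-truth cell overlapping >1 predicted region is oversegmented;
    # a predicted region overlapping >1 ground-truth cell is undersegmented.
    over = sum(1 for preds in gt_to_pred.values() if len(preds) > 1)
    under = sum(1 for gts in pred_to_gt.values() if len(gts) > 1)
    return over, under

gt   = [1, 1, 1, 2, 2, 0]
pred = [1, 1, 3, 4, 4, 0]   # cell 1 was split in two; cell 2 is intact
print(segmentation_error_counts(gt, pred))  # (1, 0)
```

Real evaluations would additionally require a minimum overlap fraction before counting a match, to ignore spurious one-voxel contacts.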


Subject(s)
Deep Learning , Algorithms , Benchmarking , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional
3.
Sensors (Basel); 18(9), 2018 Sep 18.
Article in English | MEDLINE | ID: mdl-30231547

ABSTRACT

An eye tracker's accuracy and system behavior play critical roles in determining the reliability and usability of the eye gaze data obtained from it. However, in contemporary eye gaze research, there is considerable ambiguity in the definitions of gaze estimation accuracy parameters and a lack of well-defined methods for evaluating the performance of eye tracking systems. In this paper, a set of fully defined evaluation metrics is therefore developed and presented for complete performance characterization of generic commercial eye trackers operating under varying conditions on desktop or mobile platforms. In addition, several useful visualization methods are implemented to help study the performance and data quality of eye trackers irrespective of their design principles and application areas. The concept of a graphical user interface software named GazeVisual v1.1 is also proposed, which would integrate all these methods and enable general users to effortlessly access the described metrics, generate visualizations, and extract valuable information from their own gaze datasets. We intend to release these tools as open resources for the eye gaze research community to use and build upon, as a contribution towards the standardization of gaze research outputs and analysis.
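The most basic of the accuracy parameters discussed here is angular accuracy: the mean angular distance between gaze estimates and a known target, given the user's viewing distance. The sketch below illustrates one common way to compute it; the function name and sample values are illustrative and are not taken from GazeVisual.

```python
import math

# Mean angular gaze error in degrees. Gaze samples and the target
# are on-screen positions in cm; distance_cm is the user-to-screen
# distance. At ~57 cm, 1 cm on screen subtends about 1 degree.
def mean_angular_error(samples, target, distance_cm):
    tx, ty = target
    errors = []
    for gx, gy in samples:
        offset = math.hypot(gx - tx, gy - ty)  # on-screen error in cm
        errors.append(math.degrees(math.atan2(offset, distance_cm)))
    return sum(errors) / len(errors)

samples = [(0.5, 0.0), (0.0, 0.5)]  # gaze estimates around target (0, 0)
print(round(mean_angular_error(samples, (0.0, 0.0), 57.0), 2))  # ~0.5 deg
```

Accuracy (systematic offset) is usually reported alongside precision (sample scatter); the two capture different failure modes of a tracker.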


Subject(s)
Computers , Data Accuracy , Eye Movements , Datasets as Topic/standards , Fixation, Ocular , Humans , Reproducibility of Results , Software
4.
Article in English | MEDLINE | ID: mdl-38812098

ABSTRACT

Neuropathological diagnosis of Alzheimer disease (AD) relies on semiquantitative analysis of phosphorylated tau-positive neurofibrillary tangles (NFTs) and neuritic plaques (NPs), without consideration of lesion heterogeneity in individual cases. We developed a deep learning workflow for automated annotation and segmentation of NPs and NFTs from AT8-immunostained whole slide images (WSIs) of AD brain sections. Fifteen WSIs of frontal cortex from 4 biobanks with varying tissue quality, staining intensity, and scanning formats were analyzed. We established an artificial intelligence (AI)-driven iterative procedure to improve the generation of expert-validated annotation datasets for NPs and NFTs, thereby increasing annotation quality by >50%. This strategy yielded an expert-validated annotation database with 5013 NPs and 5143 NFTs. We next trained two U-Net convolutional neural networks for detection and segmentation of NPs or NFTs, achieving high accuracy and consistency (mean Dice similarity coefficient: NPs, 0.77; NFTs, 0.81). The workflow showed high generalization performance across different cases. This study serves as a proof-of-concept for the use of proprietary image analysis software (Visiopharm) in the automated deep learning segmentation of NPs and NFTs, demonstrating that AI can significantly improve the annotation quality of complex neuropathological features and enable the creation of highly precise models for identifying these markers in AD brain sections.
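The Dice similarity coefficient reported above measures overlap between a predicted and a reference segmentation as 2|A∩B| / (|A| + |B|). A generic sketch on flattened binary masks follows; this is an illustration of the metric, not the Visiopharm pipeline's own implementation, and the masks are invented.

```python
# Dice similarity coefficient for binary masks given as 0/1 lists.
# Returns 1.0 for two empty masks by convention.
def dice(mask_a, mask_b):
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

a = [1, 1, 1, 0, 0]  # e.g. expert annotation of one lesion
b = [1, 1, 0, 0, 1]  # e.g. model prediction
print(round(dice(a, b), 3))  # 2*2 / (3+3) = 0.667
```

Dice in the 0.77-0.81 range, as reported here, indicates substantial but imperfect overlap, which is typical for small, irregular histological objects.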

5.
Vision (Basel); 4(2), 2020 May 07.
Article in English | MEDLINE | ID: mdl-32392760

ABSTRACT

Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as its gaze data are frequently affected by non-ideal operating conditions in various consumer eye tracking applications. Previous research on pattern analysis of gaze data has focused on modeling human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources and to quantifying and modeling their impact on the data quality of eye trackers. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classifier and regression models. While the impact of the different error sources on gaze data characteristics was nearly impossible to distinguish by visual inspection or from data statistics, the machine learning models successfully identified the impact of the different error sources and predicted the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods for the detection and prediction of gaze error patterns, enabling an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study are included in an open repository named MLGaze to allow researchers to replicate the principles presented here using data from their own eye trackers.
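The core idea above, classifying which operating condition produced a given gaze error pattern, can be sketched with a toy nearest-centroid classifier over two hypothetical features (mean angular error and error spread). The labels, feature choice, and numbers below are invented to show the approach; the study itself used fuller classifier and regression models.

```python
from statistics import mean

# Training data: condition label -> list of (mean_error_deg, error_spread)
# feature vectors. Values are invented for illustration.
TRAIN = {
    "near_user": [(0.4, 0.1), (0.5, 0.2)],   # small, stable errors
    "head_roll": [(1.8, 0.6), (2.1, 0.5)],   # larger, noisier errors
}

def centroids(train):
    """Mean feature vector per condition label."""
    return {label: tuple(map(mean, zip(*feats)))
            for label, feats in train.items()}

def classify(sample, cents):
    """Assign `sample` to the label with the nearest centroid."""
    def sqdist(label):
        return sum((s - v) ** 2 for s, v in zip(sample, cents[label]))
    return min(cents, key=sqdist)

cents = centroids(TRAIN)
print(classify((2.0, 0.55), cents))  # head_roll
```

With enough labeled sessions per condition, the same framing extends naturally to multi-class classifiers and to regressors that predict the expected error magnitude.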

6.
Vision (Basel); 3(4), 2019 Oct 22.
Article in English | MEDLINE | ID: mdl-31735856

ABSTRACT

In this paper, a range of open-source tools, datasets, and software developed for quantitative and in-depth evaluation of eye gaze data quality is presented. Eye tracking systems in contemporary vision research and applications face major challenges due to variable operating conditions such as user distance, head pose, and movements of the eye tracker platform. However, there is a lack of open-source tools and datasets that could be used for quantitatively evaluating an eye tracker's data quality, comparing the performance of multiple trackers, or studying the impact of various operating conditions on a tracker's accuracy. To address these issues, an open-source code repository named GazeVisual-Lib has been developed that contains a number of algorithms, visualizations, and software tools for detailed and quantitative analysis of an eye tracker's performance and data quality. In addition, a new labelled eye gaze dataset, collected from multiple user platforms and operating conditions, is presented in an open data repository for benchmark comparison of gaze data from different eye tracking systems. The paper presents the concept, development, and organization of these two repositories, which are envisioned to improve the performance analysis and reliability of eye tracking systems.
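One data-quality measure a toolset like this typically includes is precision, often computed as the root-mean-square of successive sample-to-sample angular distances during a fixation. The sketch below is a generic illustration of that measure, not code from GazeVisual-Lib, and the fixation samples are invented.

```python
import math

# RMS sample-to-sample precision of a fixation, with gaze samples
# given as (x, y) gaze angles in degrees. Lower is more precise.
def rms_s2s_precision(samples):
    diffs = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(samples, samples[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

fixation = [(10.0, 5.0), (10.1, 5.0), (10.1, 5.2), (10.0, 5.2)]
print(round(rms_s2s_precision(fixation), 3))  # 0.141
```

Because precision reflects tracker noise rather than calibration offset, it complements the angular accuracy measures used for benchmark comparisons across devices.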
