Results 1 - 2 of 2
1.
IEEE Trans Med Imaging ; 42(12): 3665-3677, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37494157

ABSTRACT

Automated nanoparticle phenotyping is a critical aspect of high-throughput drug research, which requires analyzing nanoparticle size, shape, and surface topography from microscopy images. To automate this process, we present an instance segmentation pipeline that partitions individual nanoparticles on microscopy images. Our pipeline makes two key contributions. Firstly, we synthesize diverse and approximately realistic nanoparticle images to improve robust learning. Secondly, we improve the BlendMask model to segment tiny, overlapping, or sparse particle images. Specifically, we propose a parameterized approach for generating novel pairs of single particles and their masks, encouraging greater diversity in the training data. To synthesize more realistic particle images, we explore three particle placement rules and an image selection criterion. The improved one-stage instance segmentation network extracts distinctive features of nanoparticles and their context at both local and global levels, which addresses the data challenges associated with tiny, overlapping, or sparse nanoparticles. Extensive experiments demonstrate the effectiveness of our pipeline for automating nanoparticle partitioning and phenotyping in drug research using microscopy images.
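The data-synthesis idea described above can be illustrated with a minimal NumPy sketch: composite single-particle crops onto a blank canvas under a simple random-placement rule, recording one binary mask per instance. The helper names (`make_particle`, `compose`) and the disc-shaped particle model are hypothetical stand-ins; the paper's parameterized generator and its three placement rules are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_particle(radius):
    """Generate one synthetic particle crop and its binary mask.

    Illustrative only: a filled disc with mild intensity noise stands
    in for the paper's parameterized single-particle generator.
    """
    size = 2 * radius + 1
    yy, xx = np.mgrid[:size, :size]
    mask = (yy - radius) ** 2 + (xx - radius) ** 2 <= radius ** 2
    crop = mask * (0.6 + 0.1 * rng.random((size, size)))
    return crop, mask

def compose(canvas_size=64, n_particles=5, radius=4):
    """Paste particles at uniformly random positions (one simple
    placement rule) and return the composite plus per-instance masks."""
    img = np.zeros((canvas_size, canvas_size))
    inst_masks = []
    for _ in range(n_particles):
        crop, mask = make_particle(radius)
        y = rng.integers(0, canvas_size - crop.shape[0])
        x = rng.integers(0, canvas_size - crop.shape[1])
        full = np.zeros_like(img, dtype=bool)
        full[y:y + crop.shape[0], x:x + crop.shape[1]] = mask
        # Overlapping particles are blended by taking the brighter pixel.
        img[y:y + crop.shape[0], x:x + crop.shape[1]] = np.maximum(
            img[y:y + crop.shape[0], x:x + crop.shape[1]], crop)
        inst_masks.append(full)
    return img, inst_masks

image, masks = compose()
print(image.shape, len(masks))
```

Because each pasted crop carries its own mask, the composite comes with pixel-exact instance annotations for free, which is what makes this kind of synthesis attractive for training instance-segmentation models.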


Subject(s)
Microscopy , Nanoparticles , Image Processing, Computer-Assisted/methods
2.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366079

ABSTRACT

In image captioning models, the main challenge in describing an image is identifying all the objects, precisely accounting for the relationships between them, and producing varied captions. Over the past few years, many methods have been proposed, from attribute-to-attribute comparison approaches to techniques that address semantics and their relationships. Despite these improvements, existing techniques handle positional and geometrical attributes inadequately. The reason is that most of the abovementioned approaches depend on Convolutional Neural Networks (CNNs) for object detection, and CNNs are notorious for failing to capture equivariance and rotational invariance in objects. Moreover, the pooling layers in CNNs discard valuable information. Inspired by recent successful approaches, this paper introduces a novel framework for extracting meaningful descriptions based on a parallelized capsule network that describes the content of images through a high-level understanding of their semantic content. The main contribution of this paper is a new method that not only overcomes the limitations of CNNs but also generates descriptions with a wide variety of words by using Wikipedia. In our framework, capsules focus on generating meaningful descriptions with more detailed spatial and geometrical attributes for a given set of images by considering the positions of the entities as well as their relationships. Qualitative experiments on the benchmark MS-COCO dataset show that our framework outperforms state-of-the-art image captioning models in describing the semantic content of images.
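The capsule mechanism the abstract relies on can be sketched in a few lines of NumPy. This is not the paper's parallelized architecture; it is an illustrative implementation of the two generic building blocks capsule networks use: the "squash" nonlinearity and routing-by-agreement between a layer of input capsules and a layer of output capsules (the shapes and iteration count here are arbitrary assumptions).

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Capsule 'squash' nonlinearity: shrinks short vectors toward 0
    and long vectors toward unit length, preserving direction."""
    norm2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def route(u_hat, n_iters=3):
    """Dynamic routing-by-agreement.

    u_hat: (n_in, n_out, dim) prediction vectors from each input
    capsule for each output capsule. Returns the (n_out, dim)
    output capsule vectors.
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                                # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum
        v = squash(s)                                          # (n_out, dim)
        b += (u_hat * v[None]).sum(axis=-1)                    # agreement update
    return v

rng = np.random.default_rng(1)
v = route(rng.normal(size=(6, 3, 4)))  # 6 input capsules, 3 output, dim 4
print(v.shape)
```

Because a capsule's output is a vector whose orientation encodes pose, this scheme retains the positional and geometrical information that CNN pooling layers discard, which is the property the framework above exploits.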


Subject(s)
Neural Networks, Computer ; Semantics