1.
Front Neurorobot ; 16: 928707, 2022.
Article in English | MEDLINE | ID: mdl-35990884

ABSTRACT

As bio-inspired vision devices, dynamic vision sensors (DVS) are being applied in more and more applications. Unlike normal cameras, pixels in a DVS respond independently to luminance changes with asynchronous output spikes. Removing raindrops and rain streaks from DVS event videos is therefore a new but challenging task, as conventional deraining methods are no longer applicable. In this article, we propose to perform the deraining process in the width-time (W-T) space. This is motivated by the observation that rain streaks exhibit discontinuity in the width and time directions, while background moving objects are usually piecewise smooth along both directions. The W-T space fuses the discontinuity in both directions and thus transforms raindrops and streaks into approximately uniform noise that is easy to remove. The non-local means filter is adopted because background object motion has periodic patterns in the W-T space. A repairing method is also designed to restore edge details erased during the deraining process. Experimental results demonstrate that our approach removes rain noise better than four existing methods designed for traditional camera videos. We also study how the event buffer depth and event frame time affect performance, and investigate the potential application of our approach to classic RGB images. A new real-world database for DVS deraining is also created and shared for public use.
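The non-local means filter adopted in this abstract can be sketched as follows. This is a generic, unoptimized NumPy implementation operating on a plain 2-D array (standing in for a W-T event frame), not the authors' code; the `patch`, `search`, and `h` parameters are illustrative defaults.

```python
import numpy as np

def non_local_means(img, patch=3, search=7, h=0.4):
    """Basic non-local means denoising of a 2-D array.

    Each pixel is replaced by a weighted average of pixels in a search
    window; a neighbor's weight decays with the squared distance between
    the patch around the pixel and the patch around that neighbor, so
    repeating (periodic) structure reinforces itself while isolated
    noise is averaged away.
    """
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1,
                         cj - half_p:cj + half_p + 1]
            weighted_sum, weight_total = 0.0, 0.0
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - half_p:ni + half_p + 1,
                                  nj - half_p:nj + half_p + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2))
                    weighted_sum += w * padded[ni, nj]
                    weight_total += w
            out[i, j] = weighted_sum / weight_total
    return out
```

Applied to an event frame, an isolated rain-noise spike has no similar patches nearby, so its weight mass is spread over zero-valued neighbors and the spike is suppressed, while periodic background motion finds many similar patches and is preserved.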

2.
Ann Transl Med ; 8(11): 697, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32617317

ABSTRACT

BACKGROUND: About 30% of cell lines are cross-contaminated or misidentified, which can invalidate experimental results and render therapeutic products unusable. Cell morphology is routinely observed under the microscope, and DNA sequencing analysis is performed periodically to verify cell line identity, but sequencing analysis is costly, time-consuming, and labor-intensive. The purpose of this study was to construct a novel artificial intelligence (AI) technology for "cell face" recognition, which can predict DNA-level identification labels from cell images alone. METHODS: Seven commonly used cell lines were cultured and co-cultured in pairs (eight categories in total) to simulate pure and cross-contaminated cells. Microscopy images were obtained and labeled with cell types according to the results of short tandem repeat profiling. About 2 million patch images were used for model training and testing. AlexNet was used to demonstrate the effectiveness of convolutional neural networks (CNNs) in cell classification. To further improve the detection of cross-contamination, a bilinear network for fine-grained identification was constructed. The specificity, sensitivity, and accuracy of the model were tested separately by external validation. Finally, cell semantic segmentation was conducted with DilatedNet. RESULTS: Cell texture and density were the influencing factors that the bilinear convolutional neural network (BCNN) recognized better than AlexNet. The BCNN achieved 99.5% accuracy in identifying the seven pure cell lines and 86.3% accuracy in detecting cross-contamination (mixtures of two of the seven cell lines). DilatedNet was applied to semantic segmentation for analysis at the single-cell level and achieved an accuracy of 98.2%. CONCLUSIONS: The deep CNN model proposed in this study can recognize small differences in cell morphology and achieved high classification accuracy.
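The bilinear pooling operation at the core of fine-grained BCNN models can be sketched as follows. This is a generic NumPy sketch of the standard bilinear-pooling formulation, not the paper's implementation; the feature maps are assumed to be `(H, W, C)` arrays such as CNN activations.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling for fine-grained classification.

    Takes the outer product of two feature maps at every spatial
    location, sum-pools the products over all locations, then applies
    signed square-root and L2 normalization. The resulting vector
    captures pairwise feature interactions (e.g. texture statistics)
    that plain average pooling discards.

    feat_a: (H, W, Ca), feat_b: (H, W, Cb) -> vector of length Ca*Cb.
    """
    h, w, ca = feat_a.shape
    cb = feat_b.shape[2]
    a = feat_a.reshape(h * w, ca)
    b = feat_b.reshape(h * w, cb)
    outer = a.T @ b                            # (Ca, Cb): sum of per-location outer products
    vec = outer.reshape(-1)
    vec = np.sign(vec) * np.sqrt(np.abs(vec))  # signed square root
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

In a full BCNN the two feature maps come from two CNN streams (or one shared stream) and the pooled vector feeds a linear classifier; the second-order statistics are what make subtle texture and density differences between cell lines separable.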

3.
Gigascience ; 9(2)2020 02 01.
Article in English | MEDLINE | ID: mdl-32101298

ABSTRACT

BACKGROUND: Color vision is the ability to detect, distinguish, and analyze the wavelength distributions of light independent of the total intensity. It mediates the interaction between an organism and its environment in multiple important ways. However, the physicochemical basis of color coding has not been fully explored, and how color perception is integrated with other sensory input, typically odor, is unclear. RESULTS: Here, we developed an artificial intelligence platform to train algorithms to distinguish color and odor based on the large-scale physicochemical features of 1,267 and 598 structurally diverse molecules, respectively. The predictive accuracies achieved using the random forest and deep belief network for the prediction of color were 100% and 95.23% ± 0.40% (mean ± SD), respectively; for the prediction of odor, they were 93.40% ± 0.31% and 94.75% ± 0.44% (mean ± SD), respectively. Twenty-four physicochemical features were sufficient for the accurate prediction of color, while 39 were sufficient for the accurate prediction of odor. A positive correlation between the color-coding and odor-coding properties of the molecules was predicted, and a group of descriptors was found to feature prominently in both color and odor perception. CONCLUSIONS: Our random forest model and deep belief network accurately predicted the colors and odors of structurally diverse molecules. These findings extend our understanding of the molecular and structural basis of color vision and reveal the interrelationship between color and odor perception in nature.
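The descriptor-based random forest setup described in this abstract can be sketched with scikit-learn. The data below are synthetic stand-ins: the 24-dimensional vectors and two well-separated clusters are hypothetical placeholders for the paper's physicochemical descriptors and color classes, not its actual dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins for physicochemical descriptors: 24 features per
# molecule (the abstract's sufficient feature count for color), drawn from
# two well-separated clusters representing two color classes.
n, dim = 100, 24
features = np.vstack([
    rng.normal(0.0, 0.5, (n, dim)),   # class 0 molecules
    rng.normal(3.0, 0.5, (n, dim)),   # class 1 molecules
])
labels = np.array([0] * n + [1] * n)

# Fit a random forest on the descriptor matrix and measure accuracy.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, labels)
accuracy = clf.score(features, labels)
```

On real molecular data one would of course evaluate with held-out splits or cross-validation rather than training accuracy; `clf.feature_importances_` is the kind of output that supports the paper's analysis of which descriptors matter for both color and odor.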


Subject(s)
Artificial Intelligence , Cheminformatics/methods , Color , Odorants , Software