Results 1 - 3 of 3
1.
J Forensic Sci; 66(3): 890-909, 2021 May.
Article in English | MEDLINE | ID: mdl-33682930

ABSTRACT

Forensic activities related to footwear evidence may be broadly classified into the following two categories: (1) intelligence gathering and (2) evidential value assessment. Intelligence gathering provides additional leads for investigators. Assessment of evidential value, as practiced in the United States, involves a trained footwear examiner evaluating the degree of similarity between a known shoe of interest (together with its test impressions) and footwear impressions obtained from a crime scene, by performing side-by-side visual comparisons. However, the need for developing quantitative approaches for expressing similarities during such comparisons is being increasingly recognized by the forensic science community. In this paper, we explore the ability of similarity metrics to discriminate between impressions made by a shoe of interest and impressions made by close non-matching shoes. Close non-matching shoes largely share the same design and size. Therefore, the ability to effectively discriminate between them requires considering, either explicitly or implicitly, not only design and size, but also wear patterns and, to some extent, individual characteristics. This type of discrimination is necessary for assessment of evidential value. The similarity metrics examined in this paper are correlation-based metrics, including normalized cross-correlation, phase-only correlation, AvNCC, and AvPOC. The latter two metrics are based on features obtained from a convolutional neural network. Experiments are performed using Everspry impressions, FBI boot impressions, and the West Virginia University footwear impression collection. The results show that phase-only correlation performs as well as or better than the other metrics in all cases for the datasets we considered.
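The correlation-based metrics discussed above can be illustrated with a minimal phase-only correlation (POC) sketch in NumPy. The image size and test shift below are arbitrary illustrations, not the paper's data, and the AvNCC/AvPOC metrics additionally involve CNN feature extraction not shown here.

```python
import numpy as np

def phase_only_correlation(a, b):
    """Phase-only correlation surface between two same-sized grayscale images.

    The cross-power spectrum is whitened so that only phase information
    remains, which yields a sharp peak at the relative shift between the
    images and is insensitive to uniform brightness changes.
    """
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12          # discard magnitude, keep phase
    return np.fft.fftshift(np.real(np.fft.ifft2(cross)))

# Sanity check on synthetic data: a copy shifted by (3, 5) pixels should
# produce a near-unit peak offset by exactly (3, 5) from the center.
img = np.random.default_rng(0).random((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))
surface = phase_only_correlation(shifted, img)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(surface), surface.shape))
```

For identical images the peak height approaches 1, while for differing impressions it drops; a scalar similarity score can be taken as the maximum of this surface.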

2.
Article in English | MEDLINE | ID: mdl-32864421

ABSTRACT

Predicting Retinal Pigment Epithelium (RPE) cell functions in stem cell implants using non-invasive bright-field microscopy imaging is a critical task for clinical deployment of stem cell therapies. Such cell function predictions can be carried out using Artificial Intelligence (AI) based models. In this paper, we used Traditional Machine Learning (TML) and Deep Learning (DL) based AI models for cell function prediction tasks. TML models depend on feature engineering, while DL models perform feature engineering automatically but have higher modeling complexity. This work aims at exploring the tradeoffs between three approaches using TML and DL based models for RPE cell function prediction from microscopy images, and at understanding the accuracy relationship between pixel-, cell feature-, and implant label-level accuracies of models. Among the three compared approaches to cell function prediction, the direct approach to cell function prediction from images is slightly more accurate in comparison to indirect approaches using intermediate segmentation and/or feature engineering steps. We also evaluated accuracy variations with respect to model selections (five TML models and two DL models) and model configurations (with and without transfer learning). Finally, we quantified the relationships between segmentation accuracy and the number of samples used for training a model, between segmentation accuracy and cell feature error, and between cell feature error and accuracy of implant labels. We concluded that for the RPE cell data set, there is a monotonic relationship between the number of training samples and image segmentation accuracy, and between segmentation accuracy and cell feature error, but there is no such relationship between segmentation accuracy and accuracy of RPE implant labels.
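Two of the accuracy levels discussed above can be made concrete with a small NumPy sketch: pixel-level segmentation accuracy measured as a Dice coefficient, and cell-feature error measured as the relative error of a feature (cell area) derived from the segmentation. The masks and metrics are illustrative assumptions, not the paper's exact data or evaluation protocol.

```python
import numpy as np

# Hypothetical ground-truth and predicted binary segmentation masks for a
# single square "cell"; the prediction misses a two-pixel strip at the top.
truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True
pred = truth.copy()
pred[8:10, 8:24] = False

# Pixel-level accuracy: Dice coefficient of the two masks.
dice = 2 * np.sum(truth & pred) / (truth.sum() + pred.sum())

# Cell-feature-level error: relative error of a derived feature (cell area).
area_error = abs(int(pred.sum()) - int(truth.sum())) / truth.sum()
```

Here an under-segmentation that still scores a high Dice value (about 0.93) already produces a 12.5% error in the derived area feature, illustrating how error can grow as it propagates from pixels to features.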

3.
J Clin Invest ; 130(2): 1010-1023, 2020 02 03.
Article in English | MEDLINE | ID: mdl-31714897

ABSTRACT

Increases in the number of cell therapies in the preclinical and clinical phases have prompted the need for reliable and noninvasive assays to validate transplant function in clinical biomanufacturing. We developed a robust characterization methodology composed of quantitative bright-field absorbance microscopy (QBAM) and deep neural networks (DNNs) to noninvasively predict tissue function and cellular donor identity. The methodology was validated using clinical-grade induced pluripotent stem cell-derived retinal pigment epithelial cells (iPSC-RPE). QBAM images of iPSC-RPE were used to train DNNs that predicted iPSC-RPE monolayer transepithelial resistance, predicted polarized vascular endothelial growth factor (VEGF) secretion, and matched iPSC-RPE monolayers to the stem cell donors. DNN predictions were supplemented with traditional machine-learning algorithms that identified shape and texture features of single cells, which were then used to predict tissue function and iPSC donor identity. These results demonstrate that noninvasive cell therapy characterization can be achieved with QBAM and machine learning.
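The absorbance computation behind bright-field absorbance imaging can be sketched with the Beer-Lambert relation, converting transmitted intensity into per-pixel absorbance. This is an illustrative assumption: the paper's QBAM method additionally involves microscope calibration and benchmarking steps not shown here.

```python
import numpy as np

def absorbance(sample, blank):
    """Per-pixel absorbance A = -log10(I_sample / I_blank) (Beer-Lambert)."""
    return -np.log10(np.clip(sample / blank, 1e-6, None))

# A field illuminated at 1000 counts with no cells, and a sample that
# transmits only 100 counts, gives one absorbance unit everywhere.
blank = np.full((4, 4), 1000.0)
sample = np.full((4, 4), 100.0)
A = absorbance(sample, blank)
```

Because absorbance is a ratio against a blank reference, it is largely independent of lamp intensity and exposure settings, which is what makes such images comparable enough across instruments to serve as DNN inputs.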


Subject(s)
Cell Differentiation , Deep Learning , Image Processing, Computer-Assisted , Induced Pluripotent Stem Cells , Microscopy , Retinal Pigment Epithelium , Humans , Induced Pluripotent Stem Cells/cytology , Induced Pluripotent Stem Cells/metabolism , Retinal Pigment Epithelium/cytology , Retinal Pigment Epithelium/metabolism