Results 1 - 6 of 6
1.
J Imaging; 9(3), 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36976110

ABSTRACT

This paper investigates the impact of the amount of training data and the shape variability on the segmentation provided by the deep learning architecture U-Net. Further, the correctness of the ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope, with dimensions 8192×8192×517. From there, a smaller region of interest (ROI) of 2000×2000×300 was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the 8192×8192 slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of the GT, that is, the inclusion of one or more nuclei within the region of interest, was also evaluated. The impact of the extent of training data was evaluated by comparing results from 36,000 pairs of data and label patches extracted from the odd slices in the central region with 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells from the 8192×8192 slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the 8192×8192 slices. When the 8192×8192 slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truth. This suggests that the pairs extracted automatically from many cells provided a better representation of the four classes of the various cells in the 8192×8192 slices than the pairs manually segmented from a single cell. Finally, the two sets of 135,000 pairs were combined, and the U-Net trained with these provided the best results.
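
As a rough illustration of the patch-generation step described above, the sketch below extracts paired data/label patches from every other slice of a labelled volume. The patch size, stride, slice step and the small synthetic volume are illustrative assumptions, not the authors' code (the real ROI was 2000×2000×300 voxels with four classes).

```python
# Minimal sketch: generate paired data/label patches from a labelled 3D stack.
import numpy as np

def extract_patch_pairs(volume, labels, patch=128, stride=64, slice_step=2):
    """Return (image_patch, label_patch) pairs from every `slice_step`-th slice."""
    pairs = []
    for z in range(0, volume.shape[0], slice_step):
        img, lab = volume[z], labels[z]
        for r in range(0, img.shape[0] - patch + 1, stride):
            for c in range(0, img.shape[1] - patch + 1, stride):
                pairs.append((img[r:r + patch, c:c + patch],
                              lab[r:r + patch, c:c + patch]))
    return pairs

# Small synthetic stand-ins for the EM volume and its 4-class ground truth.
rng = np.random.default_rng(0)
volume = rng.random((10, 512, 512), dtype=np.float32)
labels = rng.integers(0, 4, size=(10, 512, 512), dtype=np.uint8)

pairs = extract_patch_pairs(volume, labels)
print(f"{len(pairs)} data/label patch pairs ready for U-Net training")
```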

2.
Sensors (Basel); 21(16), 2021 Aug 09.
Article in English | MEDLINE | ID: mdl-34450821

ABSTRACT

This paper investigates the classification of radiographic images with eleven convolutional neural network (CNN) architectures (GoogleNet, VGG-19, AlexNet, SqueezeNet, ResNet-18, Inception-v3, ResNet-50, VGG-16, ResNet-101, DenseNet-201 and Inception-ResNet-v2). The CNNs were used to classify a series of wrist radiographs from the Stanford Musculoskeletal Radiographs (MURA) dataset into two classes, normal and abnormal. The architectures were compared across different hyper-parameter settings in terms of accuracy and Cohen's kappa coefficient. The two best results were then explored with data augmentation. Without augmentation, the best results were provided by Inception-ResNet-v2 (Mean accuracy = 0.723, Mean kappa = 0.506). These were significantly improved with augmentation to Inception-ResNet-v2 (Mean accuracy = 0.857, Mean kappa = 0.703). Finally, Class Activation Mapping was applied to interpret the activation of the network against the location of an anomaly in the radiographs.
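
A minimal sketch of this kind of transfer-learning setup is given below, assuming PyTorch/torchvision. ResNet-18 stands in for Inception-ResNet-v2 (which torchvision does not bundle), and the augmentation choices and dummy batch are illustrative assumptions, not the paper's configuration.

```python
# Sketch: fine-tune an ImageNet-pretrained CNN for normal/abnormal radiographs.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentation that would be applied when loading the MURA wrist radiographs.
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageNet-pretrained backbone with a new two-class (normal/abnormal) head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; real code would iterate
# over a DataLoader built from the radiographs with `augment` applied.
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```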


Subject(s)
Neural Networks, Computer; Radiography
3.
J Imaging; 7(6), 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-39080881

ABSTRACT

This work describes an unsupervised volumetric semantic instance segmentation of the plasma membrane of HeLa cells as observed with serial block face scanning electron microscopy. The resin background of the images was segmented at different slices of a 3D stack of 518 slices with 8192 × 8192 pixels each. The background was used to create a distance map, which helped identify and rank the cells by their size at each slice. The centroids of the cells detected at different slices were linked to identify them as a single cell that spanned a number of slices. A subset of these cells, i.e., the largest ones and those not close to the edges, was selected for further processing. The selected cells were then automatically cropped to smaller regions of interest of 2000 × 2000 × 300 voxels that were treated as cell instances. Then, for each of these volumes, the nucleus was segmented, and the cell was separated from any neighbouring cells through a series of traditional image processing steps that followed the plasma membrane. The segmentation process was repeated for all the regions of interest previously selected. For one cell for which the ground truth was available, the algorithm provided excellent results in accuracy (AC) and the Jaccard similarity index (JI): nucleus, JI = 0.9665, AC = 0.9975; cell including nucleus, JI = 0.8711, AC = 0.9655; cell excluding nucleus, JI = 0.8094, AC = 0.9629. A limitation of the algorithm is that the plasma membrane segmentation relies on the presence of background, which may not be available in samples with tightly packed cells. When tested for these conditions, the segmentation of the nuclear envelope was still possible. All the code and data were released openly through GitHub, Zenodo and EMPIAR.
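
The background/distance-map step at the core of this pipeline can be sketched as follows, assuming scikit-image and SciPy; the synthetic slice, the Otsu threshold and the peak-based ranking are illustrative assumptions rather than the published implementation.

```python
# Sketch: segment the bright resin background of one slice, compute a distance
# map, and rank candidate cell centroids by their distance to the background.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max

# Synthetic slice: dark "cells" on a bright "resin" background.
rng = np.random.default_rng(0)
slice_img = np.full((512, 512), 200.0)
yy, xx = np.mgrid[:512, :512]
for cy, cx, r in [(150, 150, 80), (350, 330, 100)]:
    slice_img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 60.0
slice_img += rng.normal(0, 5, slice_img.shape)

background = slice_img > threshold_otsu(slice_img)   # resin is brighter
distance = ndi.distance_transform_edt(~background)   # distance to the resin
centroids = peak_local_max(distance, min_distance=50)

# Deeper distance peaks correspond to larger cells; rank candidates by size.
order = np.argsort(distance[tuple(centroids.T)])[::-1]
print("candidate cell centroids (row, col), largest first:")
print(centroids[order])
```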

4.
PLoS One; 15(10): e0230605, 2020.
Article in English | MEDLINE | ID: mdl-33006963

ABSTRACT

The quantitative study of cell morphology is of great importance, as the structure and condition of cells and their components can be related to conditions of health or disease. The first step towards this is the accurate segmentation of cell structures. In this work, we compare five approaches, one traditional and four deep-learning, for the semantic segmentation of the nuclear envelope of cervical cancer cells commonly known as HeLa cells. Images of a HeLa cancer cell were semantically segmented with one traditional image-processing algorithm and four deep learning architectures: VGG16, ResNet18, Inception-ResNet-v2, and U-Net. Three hundred slices, each 2000 × 2000 pixels, of a HeLa cell were acquired with Serial Block Face Scanning Electron Microscopy. The first three deep learning architectures were pre-trained with ImageNet and then fine-tuned with transfer learning. The U-Net architecture was trained from scratch with 36,000 training images and labels of size 128 × 128. The image-processing algorithm followed a pipeline of several traditional steps, such as edge detection, dilation and morphological operators. The algorithms were compared by measuring pixel-based segmentation accuracy and Jaccard index against a labelled ground truth. The results indicated a superior performance of the traditional algorithm (Accuracy = 99%, Jaccard = 93%) over the deep learning architectures: VGG16 (93%, 90%), ResNet18 (94%, 88%), Inception-ResNet-v2 (94%, 89%), and U-Net (92%, 56%).
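
To make the traditional pipeline concrete, here is a minimal sketch in the same spirit (edge detection, dilation, hole filling, small-object removal), evaluated with pixel accuracy and the Jaccard index. The synthetic slice and every parameter value are assumptions, not the algorithm published in the paper.

```python
# Sketch: edge-based segmentation of a nucleus-like region plus evaluation.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, morphology

# Synthetic EM-like slice: a dark elliptical "nucleus" on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.7, 0.02, (512, 512))
yy, xx = np.mgrid[:512, :512]
nucleus = ((yy - 256) / 150) ** 2 + ((xx - 256) / 120) ** 2 < 1
img[nucleus] -= 0.4

edges = feature.canny(img, sigma=2, low_threshold=0.02, high_threshold=0.05)
thick = morphology.binary_dilation(edges, morphology.disk(3))  # thicken edges
filled = ndi.binary_fill_holes(thick)                          # close the envelope
mask = morphology.remove_small_objects(filled, min_size=5000)  # drop noise blobs

# Pixel accuracy and Jaccard index against the known synthetic ground truth.
inter = np.logical_and(mask, nucleus).sum()
union = np.logical_or(mask, nucleus).sum()
print(f"accuracy = {(mask == nucleus).mean():.3f}, Jaccard = {inter / union:.3f}")
```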


Subject(s)
HeLa Cells/cytology; Image Processing, Computer-Assisted/methods; Algorithms; Deep Learning; Humans; Microscopy, Atomic Force
5.
J Imaging; 6(10), 2020 Sep 27.
Article in English | MEDLINE | ID: mdl-34460542

ABSTRACT

This paper describes a methodology that extracts key morphological features from histological breast cancer images in order to automatically assess Tumour Cellularity (TC) in Neo-Adjuvant treatment (NAT) patients. The response to NAT gives information on therapy efficacy, and it is measured by the residual cancer burden index, which is composed of two metrics: TC and the assessment of lymph nodes. The data consist of whole slide images (WSIs) of breast tissue stained with Hematoxylin and Eosin (H&E) released in the 2019 SPIE Breast Challenge. The methodology proposed is based on traditional computer vision methods (K-means, watershed segmentation, Otsu's binarisation, and morphological operations), implementing colour separation, segmentation, and feature extraction. The correlation between morphological features and the residual TC after NAT was examined. Linear regression and statistical methods were used, and twenty-two key morphological parameters were extracted from the nuclei, the epithelial region, and the full image. Subsequently, an automated TC assessment based on Machine Learning (ML) algorithms was implemented and trained with only the selected key parameters. The methodology was validated against the scores assigned by two pathologists through the intra-class correlation coefficient (ICC). The selection of key morphological parameters improved on the results reported for other ML methodologies and came very close to those of deep learning methodologies. These results are encouraging, as a traditionally-trained ML algorithm can be useful when only limited training data are available, preventing the use of deep learning approaches.
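
A toy sketch of the feature-then-regression idea is shown below, assuming scikit-image and scikit-learn. The two morphological features, the Otsu segmentation of "nuclei", and the random placeholder patches and cellularity scores are illustrative assumptions, not the challenge data or the published feature set.

```python
# Sketch: extract simple nuclear morphology features and regress cellularity.
import numpy as np
from skimage import filters, measure
from sklearn.linear_model import LinearRegression

def nuclei_features(patch):
    """Mean nucleus area and eccentricity from an Otsu-thresholded patch."""
    mask = patch < filters.threshold_otsu(patch)   # hematoxylin nuclei are dark
    props = measure.regionprops(measure.label(mask))
    if not props:
        return [0.0, 0.0]
    return [np.mean([p.area for p in props]),
            np.mean([p.eccentricity for p in props])]

# Placeholder tiles and cellularity scores standing in for the H&E WSIs.
rng = np.random.default_rng(2)
patches = rng.random((20, 128, 128))
cellularity = rng.random(20)

X = np.array([nuclei_features(p) for p in patches])
model = LinearRegression().fit(X, cellularity)
print("fitted coefficients:", model.coef_)
```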

6.
J Imaging; 5(9), 2019 Sep 12.
Article in English | MEDLINE | ID: mdl-34460669

ABSTRACT

This paper describes an unsupervised algorithm that segments the nuclear envelope of HeLa cells imaged by Serial Block Face Scanning Electron Microscopy. The algorithm exploits the variations of pixel intensity in different cellular regions by calculating edges, which are then used to generate superpixels. The superpixels are morphologically processed, and those that correspond to the nuclear region are selected through the analysis of size, position, and correspondence with regions detected in neighbouring slices. The nuclear envelope is segmented from the nuclear region. The three-dimensional segmented nuclear envelope is then modelled against a spheroid to create a two-dimensional (2D) surface. The 2D surface summarises the complex 3D shape of the nuclear envelope and allows the extraction of metrics that may be relevant to characterise the nature of cells. The algorithm was developed and validated on a single cell and tested on six separate cells, each with 300 slices of 2000 × 2000 pixels. Ground truth was available for two of these cells, i.e., 600 hand-segmented slices. The accuracy of the algorithm was evaluated with two similarity metrics: the Jaccard Similarity Index and the Mean Hausdorff distance. Jaccard values for the first/second segmentation were 93%/90% for the whole cell, and 98%/94% between slices 75 and 225, as the central slices of the nucleus are more regular than those at the extremes. Mean Hausdorff distances were 9/17 pixels for the whole cells and 4/13 pixels for the central slices. One slice was processed in approximately 8 s and a whole cell in 40 min. The algorithm outperformed active contours in both accuracy and time.
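
The final step, in which the 3D envelope is summarised as a 2D surface, can be sketched as follows. The synthetic ellipsoidal nucleus, the one-voxel shell and the (elevation, azimuth) binning are illustrative assumptions chosen for this sketch, not the published spheroid model.

```python
# Sketch: flatten a 3D nuclear envelope into a 2D radius map over angles.
import numpy as np
from scipy import ndimage as ndi

# Synthetic nucleus: an ellipsoid inside a small volume.
zz, yy, xx = np.mgrid[:64, :64, :64]
nucleus = ((zz - 32) / 20.0) ** 2 + ((yy - 32) / 25.0) ** 2 + ((xx - 32) / 15.0) ** 2 < 1

# Nuclear envelope approximated as a one-voxel-thick shell around the nucleus.
envelope = nucleus & ~ndi.binary_erosion(nucleus)
z, y, x = np.nonzero(envelope)
dz, dy, dx = z - z.mean(), y - y.mean(), x - x.mean()

# Spherical coordinates of every envelope voxel relative to the centroid.
radius = np.sqrt(dz ** 2 + dy ** 2 + dx ** 2)
azimuth = np.arctan2(dy, dx)                              # -pi .. pi
elevation = np.arccos(np.clip(dz / radius, -1.0, 1.0))    # 0 .. pi

# Mean radius accumulated on an (elevation, azimuth) grid: the flattened
# surface that summarises the 3D shape of the envelope.
n_el, n_az = 36, 72
el_bin = np.clip((elevation / np.pi * n_el).astype(int), 0, n_el - 1)
az_bin = np.clip(((azimuth + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
total = np.zeros((n_el, n_az))
count = np.zeros((n_el, n_az))
np.add.at(total, (el_bin, az_bin), radius)
np.add.at(count, (el_bin, az_bin), 1)
surface = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
print("2D surface map shape:", surface.shape)
```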
