Results 1 - 7 of 7
2.
Med Image Anal; 89: 102886, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37494811

ABSTRACT

Microsatellite instability (MSI) refers to alterations in the length of simple repetitive genomic sequences. MSI status serves as a prognostic and predictive factor in colorectal cancer. MSI-high status is a good prognostic factor in stage II/III cancer; it predicts a lack of benefit from adjuvant fluorouracil chemotherapy in stage II cancer but a good response to immunotherapy in stage IV cancer. Determining MSI status in patients with colorectal cancer is therefore important for selecting the appropriate treatment protocol. In the Pathology Artificial Intelligence Platform (PAIP) 2020 challenge, artificial intelligence researchers were invited to predict MSI status from colorectal cancer slide images. Participants were required to perform two tasks. The primary task was to classify a given slide image as belonging to either the MSI-high or the microsatellite-stable group. The secondary task, tumor area segmentation, served as a tie-breaker for the primary task. Of the 495 participants enrolled in the challenge, 210 downloaded the images, and 23 teams submitted final results. Seven of the top 10 teams agreed to disclose their algorithms, most of which were convolutional neural network-based deep learning models such as EfficientNet and UNet. The top-ranked system achieved the highest F1 score (0.9231). This paper summarizes the methods used in the PAIP 2020 challenge and supports the effectiveness of digital pathology for identifying the relationship between colorectal cancer and MSI characteristics.
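The abstract does not spell out any team's pipeline, but most entries of this kind follow a common patch-based pattern: classify tumor patches with a CNN, then aggregate patch scores into a slide-level call. The sketch below illustrates that pattern with an EfficientNet-B0 backbone and mean-probability aggregation; both choices are illustrative assumptions, not any participant's actual method.

```python
# Minimal sketch of the patch-based MSI classification pattern; the backbone
# and the mean-probability aggregation rule are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class MSIPatchClassifier(nn.Module):
    """Binary patch classifier: MSI-high vs. microsatellite stable (MSS)."""
    def __init__(self):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        # Swap the 1000-class ImageNet head for a single MSI-high logit.
        in_features = self.backbone.classifier[1].in_features
        self.backbone.classifier[1] = nn.Linear(in_features, 1)

    def forward(self, x):            # x: (N, 3, 224, 224) tumor patches
        return self.backbone(x)      # (N, 1) logits

@torch.no_grad()
def slide_msi_probability(model, patches):
    """Aggregate patch predictions into one slide-level MSI-high score."""
    model.eval()
    probs = torch.sigmoid(model(patches)).squeeze(1)
    return probs.mean().item()       # mean over the slide's tumor patches
```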


Subject(s)
Colorectal Neoplasms , Microsatellite Instability , Humans , Artificial Intelligence , Prognosis , Fluorouracil/therapeutic use , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology
3.
J Pathol Inform; 14: 100160, 2023.
Article in English | MEDLINE | ID: mdl-36536772

ABSTRACT

Deep learning has been widely used to analyze digitized hematoxylin and eosin (H&E)-stained histopathology whole slide images. Automated cancer segmentation using deep learning can be used to diagnose malignancy and to find novel morphological patterns that predict molecular subtypes. To train pixel-wise cancer segmentation models, manual annotation by pathologists is generally the bottleneck due to its time-consuming nature. In this paper, we propose Deep Interactive Learning with a segmentation model pretrained on a different cancer type to reduce manual annotation time. Instead of annotating all pixels from cancer and non-cancer regions in giga-pixel whole slide images, an iterative process of annotating only the regions the current model mislabels and then fine-tuning the model on the additional annotations reduces the annotation time; starting from a pretrained segmentation model reduces it further compared with annotating from scratch. Using a pretrained breast cancer segmentation model and only 3.5 hours of manual annotation, we trained an accurate ovarian cancer segmentation model that achieved an intersection-over-union of 0.74, a recall of 0.86, and a precision of 0.84. With automatically extracted high-grade serous ovarian cancer patches, we attempted to train an additional deep learning classification model to predict BRCA mutation. The segmentation model and code have been released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
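The linked repository contains the released implementation; as a minimal sketch of just the fine-tuning half of each interactive round, the function below assumes the pathologist's corrections have already been packaged as a (patch, mask) DataLoader. Function name and hyperparameters are illustrative, not the repository's API.

```python
# Illustrative fine-tuning step for one round of Deep Interactive Learning.
# The interactive part (marking only mislabeled regions) happens upstream;
# here the corrections are assumed to arrive as a (patch, mask) DataLoader.
import torch
from torch.utils.data import DataLoader

def finetune_on_corrections(model: torch.nn.Module,
                            corrected: DataLoader,
                            epochs: int = 5,
                            lr: float = 1e-4) -> torch.nn.Module:
    """Fine-tune a pretrained segmentation model on newly annotated regions."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for patches, masks in corrected:  # patches: (N,3,H,W), masks: (N,H,W)
            optimizer.zero_grad()
            loss = criterion(model(patches), masks)
            loss.backward()
            optimizer.step()
    return model
```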

4.
Mod Pathol; 34(8): 1487-1494, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33903728

ABSTRACT

The surgical margin status of breast lumpectomy specimens for invasive carcinoma and ductal carcinoma in situ (DCIS) guides clinical decisions, as positive margins are associated with higher rates of local recurrence. The "cavity shave" method of margin assessment has the benefits of allowing the surgeon to orient shaved margins intraoperatively and the pathologist to assess one inked margin per specimen. We studied whether a deep convolutional neural network, a deep multi-magnification network (DMMN), could accurately segment carcinoma from benign tissue in whole slide images (WSIs) of shave margin slides and therefore serve as a potential screening tool to improve the efficiency of microscopic evaluation of these specimens. Applying the pretrained DMMN (the initial model) to a validation set of 408 WSIs (348 benign, 60 with carcinoma) achieved an area under the curve (AUC) of 0.941. After additional manual annotation and fine-tuning, the updated model achieved an AUC of 0.968 with sensitivity set at 100% and a corresponding specificity of 78%. Applied to a testing set of 427 WSIs (374 benign, 53 with carcinoma), the initial and updated models achieved AUC values of 0.900 and 0.927, respectively. Using the pixel classification threshold selected on the validation set, the updated model achieved a sensitivity of 92% and a specificity of 78%. The four false-negative classifications resulted from two small foci of DCIS (1 mm, 0.5 mm) and two foci of well-differentiated invasive carcinoma (3 mm, 1.5 mm). This proof-of-principle study demonstrates that a DMMN machine learning model can segment invasive carcinoma and DCIS in surgical margin specimens with high accuracy and has the potential to be used as a screening tool for pathologic assessment of these specimens.
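The operating point described above, a threshold chosen on the validation set so that sensitivity is 100% and then carried over to the test set, can be computed directly from slide-level scores. A minimal sketch, with assumed score/label array inputs:

```python
# Sketch of the operating-point selection described above: the largest
# threshold that still classifies every carcinoma slide in the validation
# set as positive. Inputs are assumed slide-level scores and labels.
import numpy as np

def threshold_at_full_sensitivity(scores, labels):
    """Return (threshold, specificity): the largest score threshold with
    100% sensitivity on this set, and the specificity it yields."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)   # 1 = carcinoma, 0 = benign
    threshold = scores[labels == 1].min()    # lowest-scoring carcinoma slide
    specificity = float((scores[labels == 0] < threshold).mean())
    return threshold, specificity
```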


Subject(s)
Breast Neoplasms/pathology , Carcinoma, Ductal, Breast/pathology , Deep Learning , Image Interpretation, Computer-Assisted/methods , Margins of Excision , Carcinoma, Intraductal, Noninfiltrating/pathology , Female , Humans , Mastectomy, Segmental , Neoplasm, Residual/diagnosis
5.
Comput Med Imaging Graph; 88: 101866, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33485058

ABSTRACT

Pathologic analysis of surgical excision specimens for breast carcinoma is important for evaluating the completeness of surgical excision and has implications for future treatment. This analysis is performed manually by pathologists reviewing histologic slides prepared from formalin-fixed tissue. In this paper, we present a Deep Multi-Magnification Network, trained with partial annotations, for automated multi-class tissue segmentation using sets of patches from multiple magnifications of digitized whole slide images. Our proposed architecture, with multiple encoders, multiple decoders, and multiple concatenations, outperforms other single- and multi-magnification-based architectures by achieving the highest mean intersection-over-union, and can be used to facilitate pathologists' assessments of breast cancer.
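The multi-magnification input itself is straightforward to construct: concentric fields of view around the same target region, each downsampled to a common patch size before being fed to its own encoder. The sketch below shows one way to build such an input; the patch size and magnification factors are illustrative assumptions, not the published model's geometry.

```python
# Illustrative construction of a multi-magnification patch set: concentric
# crops around one location, each resized to a common patch size.
import torch
import torch.nn.functional as F

def multi_magnification_patches(wsi, cy, cx, size=256, factors=(1, 2, 4)):
    """Crop concentric fields of view around (cy, cx) and resize each to
    size x size. wsi: (3, H, W) float tensor at the highest magnification;
    assumes every crop stays inside the image bounds."""
    patches = []
    for f in factors:
        half = size * f // 2                 # wider field at lower "power"
        crop = wsi[:, cy - half:cy + half, cx - half:cx + half]
        patches.append(F.interpolate(crop.unsqueeze(0), size=(size, size),
                                     mode="bilinear", align_corners=False))
    return patches    # one (1, 3, size, size) tensor per magnification
```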


Subject(s)
Breast Neoplasms , Neural Networks, Computer , Breast , Breast Neoplasms/diagnostic imaging , Female , Humans
6.
J Med Imaging (Bellingham); 7(4): 044003, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32904135

ABSTRACT

Purpose: Fluorescence microscopy visualizes three-dimensional subcellular structures in tissue, with two-photon microscopy achieving deeper tissue penetration. Nuclei detection, which is essential for analyzing tissue for clinical and research purposes, remains a challenging problem due to the spatial variability of nuclei. Recent advances in deep learning have enabled the analysis of fluorescence microscopy data to localize and segment nuclei. However, these localization and segmentation techniques require additional steps to extract the characteristics of nuclei. We develop a 3D convolutional neural network, called Sphere Estimation Network (SphEsNet), to extract characteristics of nuclei without any postprocessing steps. Approach: To simultaneously estimate the center locations of nuclei and their sizes, SphEsNet is composed of two branches: one to localize nuclei center coordinates and one to estimate their radii. Synthetic microscopy volumes, automatically generated using a spatially constrained cycle-consistent adversarial network, are used for training because manually generating real 3D ground-truth volumes would be extremely tedious. Results: Three SphEsNet models, based on the size of nuclei, were trained and tested on five real fluorescence microscopy data sets from rat kidney and mouse intestine. Our method can successfully detect nuclei of various sizes in multiple locations. In addition, our method was compared with other techniques and outperformed them in object-level precision, recall, and F1 score, achieving an F1 score of 89.90%. Conclusions: SphEsNet can simultaneously localize nuclei and estimate their size without additional steps and can potentially be used to extract more information from nuclei in fluorescence microscopy images.
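The two-branch design, a shared trunk with one head for center localization and one for radius estimation, can be expressed compactly. The sketch below is an illustrative miniature of that idea, not the published SphEsNet architecture; layer count and channel widths are assumptions.

```python
# Minimal two-branch 3D CNN: one head predicts a nucleus-center heatmap,
# the other regresses a per-voxel radius. Sizes are illustrative only.
import torch
import torch.nn as nn

class TwoBranchSphereNet(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU())
        self.center_head = nn.Conv3d(channels, 1, 1)  # center heatmap logits
        self.radius_head = nn.Conv3d(channels, 1, 1)  # per-voxel radius

    def forward(self, vol):                  # vol: (N, 1, D, H, W)
        feats = self.trunk(vol)
        return self.center_head(feats), self.radius_head(feats)
```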

7.
Sci Rep; 9(1): 18295, 2019 Dec 4.
Article in English | MEDLINE | ID: mdl-31797882

ABSTRACT

The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting the collection of high-resolution fluorescence image volumes spanning hundreds of microns, if not millimeters. The size and complexity of these volumes are such that quantitative analysis requires automated image processing to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, due to their ubiquity, ease of labeling, and relatively simple structure, are appealing targets for automated detection of individual cells. However, in large three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective or robust. Techniques based on deep learning have shown great promise, but enthusiasm for applying them is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present DeepSynth, a new technique for nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image processing packages demonstrate that DeepSynth provides the superior results associated with deep learning techniques without the need for manual annotation.
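In DeepSynth the synthetic training volumes come from a generative adversarial network; as a far simpler illustration of the underlying idea, training on paired synthetic volumes and masks with no manual annotation, the sketch below draws noisy bright spheres and returns both the volume and its exact ground truth. All parameters are illustrative and this is not the paper's generation pipeline.

```python
# Toy generator of paired (volume, mask) training data: bright spheres plus
# sensor noise, with the binary ground truth known by construction.
import numpy as np

def synthetic_nuclei_volume(shape=(64, 64, 64), n_nuclei=20, seed=0):
    """Return (volume, mask) usable to train a 3D segmentation network."""
    rng = np.random.default_rng(seed)
    zz, yy, xx = np.indices(shape)
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_nuclei):
        c = rng.integers(8, np.array(shape) - 8)   # center, away from edges
        r = rng.uniform(3, 6)                      # nucleus radius in voxels
        mask |= ((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2) <= r**2
    volume = mask * 200.0 + rng.normal(0, 20, shape)   # signal + noise
    return volume.astype(np.float32), mask
```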


Subject(s)
Cell Nucleus/ultrastructure , Deep Learning , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Animals , Kidney/ultrastructure , Optical Imaging , Rats