Results 1 - 2 of 2
1.
Med Image Anal ; 94: 103143, 2024 May.
Article in English | MEDLINE | ID: mdl-38507894

ABSTRACT

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. The task is challenging, however, due to variance in nuclear staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been used extensively for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. We therefore introduce CellViT, a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on the Vision Transformer. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1-detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.


Subject(s)
Cell Nucleus , Neural Networks, Computer , Humans , Eosine Yellowish-(YS) , Hematoxylin , Staining and Labeling , Image Processing, Computer-Assisted
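The F1-detection score reported above can be illustrated with a minimal sketch: greedily matching predicted nucleus centroids to ground-truth centroids within a fixed pixel radius, then computing precision, recall, and their harmonic mean. The greedy matching rule and the radius below are illustrative assumptions, not the PanNuke evaluation protocol.

```python
import math

def f1_detection(pred, gt, radius=6.0):
    """F1 over greedy one-to-one matches of predicted vs. ground-truth
    centroids; a prediction matches the nearest unmatched ground-truth
    centroid within `radius` pixels."""
    unmatched_gt = list(gt)
    tp = 0
    for p in pred:
        best_i, best_d = None, radius
        for i, g in enumerate(unmatched_gt):
            d = math.dist(p, g)
            if d <= best_d:
                best_i, best_d = i, d
        if best_i is not None:
            unmatched_gt.pop(best_i)  # each ground-truth nucleus matched once
            tp += 1
    fp = len(pred) - tp          # predictions with no ground-truth match
    fn = len(unmatched_gt)       # ground-truth nuclei never detected
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

For example, two detections near their ground-truth nuclei plus one spurious detection give precision 2/3, recall 1, and F1 = 0.8.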
2.
Comput Methods Programs Biomed ; 243: 107912, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37981454

ABSTRACT

BACKGROUND AND OBJECTIVE: We present a novel deep learning-based skull-stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich, complex-valued k-space. METHODS: Using four datasets from different institutions with a total of around 200,000 MRI slices, we show that our network can perform skull stripping on raw MRI data while preserving the phase information, which no other skull-stripping algorithm is able to work with. For two of the datasets, skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain is used as the ground truth, whereas the third and fourth datasets come with hand-annotated brain segmentations. RESULTS: Results on all four datasets were very similar to the ground truth (Dice scores of 92%-99% and Hausdorff distances of under 5.5 pixels). Results on slices above the eye region reach Dice scores of up to 99%, whereas accuracy drops in the regions around and below the eyes, with partially blurred output. The output of k-Strip often has smoothed edges at the demarcation to the skull. Binary masks are created with an appropriate threshold. CONCLUSION: With this proof-of-concept study, we show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Besides preserving valuable information for further diagnostics, this approach enables immediate anonymization of patient data, even before the data are transformed into the image domain. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.


Subject(s)
Algorithms , Skull , Humans , Skull/diagnostic imaging , Brain/diagnostic imaging , Brain/pathology , Image Processing, Computer-Assisted/methods , Head , Magnetic Resonance Imaging/methods
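The core k-space idea can be sketched in a few lines: a complex-valued MRI slice moves between the image domain and k-space via the 2D FFT and its inverse without losing phase (the information that magnitude-only, image-domain skull strippers discard), and Dice overlap scores the resulting binary masks against a reference. The synthetic slice and phase ramp below are illustrative assumptions, not the k-Strip pipeline.

```python
import numpy as np

def to_kspace(img):
    """2D FFT with the zero-frequency component shifted to the center."""
    return np.fft.fftshift(np.fft.fft2(img))

def from_kspace(ksp):
    """Inverse of to_kspace; recovers magnitude AND phase."""
    return np.fft.ifft2(np.fft.ifftshift(ksp))

def dice(a, b):
    """Dice overlap between two binary masks, as reported in the study."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Complex-valued toy slice: random magnitude with a synthetic phase ramp.
rng = np.random.default_rng(0)
mag = rng.random((64, 64))
phase = np.linspace(-3.0, 3.0, 64)[None, :] * np.ones((64, 1))
img = mag * np.exp(1j * phase)

ksp = to_kspace(img)       # the domain the network operates in
recon = from_kspace(ksp)   # round trip preserves the complex signal
assert np.allclose(recon, img)
```

Working on the complex k-space data is what lets the approach keep phase; a network that consumed only the magnitude image would lose it irrecoverably.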