Results 1 - 4 of 4

1.
Adv Neural Inf Process Syst ; 36(DB1): 37995-38017, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38742142

ABSTRACT

Recent accelerations in multi-modal applications have been made possible with the plethora of image and text data available online. However, the scarcity of analogous data in the medical field, specifically in histopathology, has halted comparable progress. To enable similar representation learning for histopathology, we turn to YouTube, an untapped resource of videos, offering 1,087 hours of valuable educational histopathology videos from expert clinicians. From YouTube, we curate Quilt: a large-scale vision-language dataset consisting of 768,826 image and text pairs. Quilt was automatically curated using a mixture of models, including large language models, handcrafted algorithms, human knowledge databases, and automatic speech recognition. In comparison, the most comprehensive datasets curated for histopathology amass only around 200K samples. We combine Quilt with datasets from other sources, including Twitter, research papers, and the internet in general, to create an even larger dataset: Quilt-1M, with 1M paired image-text samples, marking it as the largest vision-language histopathology dataset to date. We demonstrate the value of Quilt-1M by fine-tuning a pre-trained CLIP model. Our model outperforms state-of-the-art models on zero-shot and linear probing tasks for classifying new histopathology images across 13 diverse patch-level datasets spanning 8 different sub-pathologies, as well as on cross-modal retrieval tasks.
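
As a minimal sketch of the kind of zero-shot evaluation described above, the snippet below classifies a histopathology patch with a CLIP-style model via the Hugging Face transformers API. The checkpoint name, image path, and class prompts are illustrative placeholders, not the released Quilt-1M weights or benchmark classes.

```python
# Zero-shot classification sketch with a CLIP-style model.
# Checkpoint, image path, and labels are placeholders (assumptions), not the
# Quilt-1M fine-tuned model or its evaluation datasets.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["adenocarcinoma", "squamous cell carcinoma", "benign tissue"]  # hypothetical
prompts = [f"a histopathology image of {label}" for label in labels]

image = Image.open("patch.png")  # one patch-level image (placeholder path)
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarity -> class probabilities
print(dict(zip(labels, probs[0].tolist())))
```

Linear probing would instead freeze the image encoder and fit a linear classifier on its embeddings; the zero-shot path above needs no labeled training data at all.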

2.
Front Artif Intell ; 5: 1005086, 2022.
Article in English | MEDLINE | ID: mdl-36204597

ABSTRACT

A rapidly increasing rate of melanoma diagnosis has been noted over the past three decades, and nearly 1 in 4 skin biopsies are diagnosed as melanocytic lesions. The gold standard for diagnosis of melanoma is histopathological examination by a pathologist, who analyzes biopsy material at both the cellular and structural levels. A pathologist's diagnosis is often subjective and prone to variability, while deep learning image analysis methods may improve and complement current diagnostic and prognostic capabilities. Mitoses are important entities when reviewing skin biopsy cases, as their presence carries prognostic information; thus, their precise detection is an important factor for clinical care. In addition, semantic segmentation of clinically important structures in skin biopsies might help the diagnostic pipeline achieve accurate classification. We aim to provide prognostic and diagnostic information on skin biopsy images, including the detection of cellular-level entities, segmentation of clinically important tissue structures, and other factors important for the accurate diagnosis of skin biopsy images. This paper is an overview of our work on the analysis of digital whole slide skin biopsy images, including mitotic figure (mitosis) detection, semantic segmentation, diagnosis, and analysis of pathologists' viewing patterns, along with new work on melanocyte detection. Deep learning has been applied in all of our detection, segmentation, and diagnosis work. In our studies, deep learning has proven superior to prior approaches to skin biopsy analysis. Our work on the analysis of pathologists' viewing patterns is the only such work in the skin biopsy literature. Our work covers the whole spectrum, from low-level entities through diagnosis to understanding what pathologists do when performing their diagnoses.
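
Purely as an illustration of how whole-slide analysis is typically run at the patch level (not the authors' actual models), the sketch below tiles a large biopsy region and stitches per-tile segmentation outputs; the tile size, stride, and the stand-in model are assumptions.

```python
# Illustrative patch-wise semantic segmentation over a large skin-biopsy region.
# The model argument is a generic stand-in; tile size and stride are arbitrary.
import numpy as np
import torch

def segment_region(region: np.ndarray, model: torch.nn.Module,
                   tile: int = 512, stride: int = 512) -> np.ndarray:
    """Run a segmentation model tile by tile and stitch the class maps.

    region: H x W x 3 RGB array; model: maps (1, 3, tile, tile) float input
    to (1, num_classes, tile, tile) logits.
    """
    h, w, _ = region.shape
    out = np.zeros((h, w), dtype=np.int64)
    model.eval()
    with torch.no_grad():
        for y in range(0, h - tile + 1, stride):
            for x in range(0, w - tile + 1, stride):
                patch = region[y:y + tile, x:x + tile]
                t = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                logits = model(t)
                out[y:y + tile, x:x + tile] = logits.argmax(dim=1)[0].numpy()
    return out
```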

3.
J Pathol Inform ; 13: 100104, 2022.
Article in English | MEDLINE | ID: mdl-36268085

ABSTRACT

Although pathologists have their own viewing habits while diagnosing, the viewing behaviors that lead to the most accurate diagnoses are under-investigated. Digital whole slide imaging has enabled investigators to analyze pathologists' visual interpretation of histopathological features using mouse and viewport tracking techniques. In this study, we provide definitions for basic viewing behavior variables, investigate the association between pathologists' characteristics and viewing behaviors, and examine how these relate to diagnostic accuracy when interpreting whole slide images. We use recordings of 32 pathologists' actions while interpreting a set of 36 digital whole slide skin biopsy images (5 sets of 36 cases; 180 cases total). These viewport tracking data include the coordinates of the viewport scene on the pathologist's screen, the magnification level at which that viewport was viewed, and a timestamp. We define a set of variables to quantify pathologists' viewing behaviors such as zooming, panning, and interacting with a consensus reference panel's selected region of interest (ROI). We examine the association of these viewing behaviors with pathologists' demographics, clinical characteristics, and diagnostic accuracy using cross-classified multilevel models. Viewing behaviors differ based on the clinical experience of the pathologists. Pathologists with a higher caseload of melanocytic skin biopsy cases and pathologists with board certification and/or fellowship training in dermatopathology have lower average zoom and lower variance of zoom levels. Viewing behaviors associated with higher diagnostic accuracy include higher average and variance of zoom levels, a lower magnification percentage (a measure of consecutive zooming behavior), higher total interpretation time, and more time spent viewing ROIs. Scanning behavior, which refers to panning at a fixed zoom level, has a marginally significant positive association with accuracy. Pathologists' training, clinical experience, and exposure to a range of cases are associated with their viewing behaviors, which may contribute to their diagnostic accuracy. Research in computational pathology integrating digital imaging and clinical informatics opens up new avenues for leveraging viewing behaviors in medical education and training, potentially improving patient care and the effectiveness of clinical workflows.
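
A hedged sketch of how viewing-behavior summaries of the kind defined above might be computed from viewport logs; the record fields and the particular formulas are illustrative assumptions, not the study's exact variable definitions.

```python
# Illustrative computation of viewing-behavior summaries from viewport logs.
# Field names and formulas are assumptions for exposition; the study defines
# its own variables (e.g., magnification percentage) more precisely.
from statistics import mean, pvariance

def viewing_summary(events, roi_boxes):
    """events: time-ordered list of dicts with keys x, y, w, h, zoom, t (seconds).
    roi_boxes: list of (x0, y0, x1, y1) consensus ROI rectangles in slide coordinates."""
    zooms = [e["zoom"] for e in events]
    total_time = events[-1]["t"] - events[0]["t"]

    def in_roi(e):
        cx, cy = e["x"] + e["w"] / 2, e["y"] + e["h"] / 2
        return any(x0 <= cx <= x1 and y0 <= cy <= y1 for x0, y0, x1, y1 in roi_boxes)

    # Time attributed to an ROI: duration of each viewport whose center lies inside one.
    roi_time = sum(b["t"] - a["t"] for a, b in zip(events, events[1:]) if in_roi(a))
    return {
        "avg_zoom": mean(zooms),
        "zoom_variance": pvariance(zooms),
        "total_interpretation_time": total_time,
        "time_in_roi": roi_time,
    }
```

Summaries like these would then enter the cross-classified multilevel models as predictors alongside pathologist characteristics.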

4.
PLoS Comput Biol ; 16(4): e1007698, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32271746

ABSTRACT

Humans are able to track multiple objects at any given time in their daily activities; for example, we can drive a car while monitoring obstacles, pedestrians, and other vehicles. Several past studies have examined how humans track targets simultaneously and what underlying behavioral and neural mechanisms they use. At the same time, computer-vision researchers have proposed different algorithms to track multiple targets automatically. These algorithms are useful for video surveillance, team-sport analysis, video analysis, video summarization, and human-computer interaction. Although there are several efficient biologically inspired algorithms in artificial intelligence, the human multiple-target tracking (MTT) ability is rarely imitated in computer-vision algorithms. In this paper, we review MTT studies in neuroscience and biologically inspired MTT methods in computer vision and discuss the ways in which they can be seen as complementary.
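
As one common computer-vision building block of the kind the review surveys (not any specific method from the paper), the sketch below associates existing tracks with new detections frame to frame, using IoU overlap as the matching cost and the Hungarian algorithm from SciPy; the IoU threshold is an assumption.

```python
# Illustrative frame-to-frame association step for multiple-target tracking:
# match track boxes to detection boxes by IoU using the Hungarian algorithm.
# This is a generic building block, not a method proposed in the review.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, min_iou=0.3):
    """Return (track_index, detection_index) pairs for matches above min_iou.

    tracks and detections are non-empty lists of boxes from consecutive frames.
    """
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # minimum-cost bipartite matching
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
```

Unmatched detections would typically spawn new tracks and unmatched tracks would be aged out; biologically inspired variants replace the cost model with mechanisms motivated by human attention and working memory.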


Subjects
Artificial Intelligence; Memory/physiology; Vision, Ocular/physiology; Algorithms; Animals; Brain/physiology; Cognition; Humans; Image Processing, Computer-Assisted/methods; Motion (Physics); Neurosciences; Video Recording/methods