Results 1 - 20 of 51
1.
J Digit Imaging; 35(5): 1238-1249, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35501416

ABSTRACT

The number of melanoma diagnoses has increased dramatically over the past three decades, outpacing almost all other cancers. Nearly 1 in 4 skin biopsies is of melanocytic lesions, highlighting the clinical and public health importance of correct diagnosis. Deep learning image analysis methods may improve and complement current diagnostic and prognostic capabilities. The histologic evaluation of melanocytic lesions, including melanoma and its precursors, involves determining whether the melanocytic population involves the epidermis, dermis, or both. Semantic segmentation of clinically important structures in skin biopsies is a crucial step towards an accurate diagnosis. While training a segmentation model requires ground-truth labels, annotation of large images is a labor-intensive task. This issue becomes especially pronounced in a medical image dataset in which expert annotation is the gold standard. In this paper, we propose a two-stage segmentation pipeline using coarse and sparse annotations on a small region of the whole slide image as the training set. Segmentation results on whole slide images show promising performance for the proposed pipeline.


Subjects
Melanoma, Humans, Melanoma/diagnostic imaging, Melanoma/pathology, Image Processing, Computer-Assisted/methods, Skin/diagnostic imaging, Skin/pathology, Epidermis/pathology, Biopsy
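The abstract above describes training a segmenter from coarse, sparse annotations on a small region of the slide. As a minimal, hedged sketch of that idea (not the authors' pipeline), the snippet below trains a pixel classifier on scribble-labeled pixels only and then applies it densely; the feature choice, label encoding, and classifier are illustrative assumptions.

```python
# Minimal sketch: learn a dense segmentation from sparse (scribble-style) labels.
# Feature choice, label encoding, and classifier are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(tile_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel color features plus a 15x15 local mean for context."""
    tile = tile_rgb.astype(np.float32) / 255.0
    local_mean = np.stack([uniform_filter(tile[..., c], size=15) for c in range(3)], axis=-1)
    feats = np.concatenate([tile, local_mean], axis=-1)          # H x W x 6
    return feats.reshape(-1, feats.shape[-1])

def train_sparse_segmenter(tile_rgb, sparse_labels):
    """sparse_labels: H x W int array, 0 = unlabeled, 1..K = coarse class scribbles."""
    X = pixel_features(tile_rgb)
    y = sparse_labels.reshape(-1)
    labeled = y > 0
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X[labeled], y[labeled])                              # train on labeled pixels only
    return clf

def segment_tile(clf, tile_rgb):
    """Apply the coarse model densely to produce a full tile mask."""
    X = pixel_features(tile_rgb)
    return clf.predict(X).reshape(tile_rgb.shape[:2])
```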
2.
Pattern Recognit; 84: 345-356, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30679879

ABSTRACT

Generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained from consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both saliency detector and classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.
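The final fusion step described above (combining saliency and classification maps for pixel-wise labeling and a slide-level call) can be illustrated with a toy sketch; the simple product weighting, array shapes, and random inputs below are assumptions, not the published method.

```python
# Toy fusion of a saliency map with class probability maps (shapes and the
# simple product weighting are illustrative assumptions).
import numpy as np

def fuse_maps(saliency: np.ndarray, class_probs: np.ndarray):
    """saliency: H x W in [0, 1]; class_probs: C x H x W softmax outputs."""
    weighted = class_probs * saliency[None, ...]       # suppress non-salient regions
    pixel_labels = weighted.argmax(axis=0)             # pixel-wise labeling
    # Slide-level call: class with the largest total weighted evidence.
    slide_label = int(weighted.reshape(class_probs.shape[0], -1).sum(axis=1).argmax())
    return pixel_labels, slide_label

rng = np.random.default_rng(0)
sal = rng.random((64, 64))
probs = rng.dirichlet(np.ones(5), size=(64, 64)).transpose(2, 0, 1)  # 5 x 64 x 64
labels, slide = fuse_maps(sal, probs)
```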

3.
J Digit Imaging; 31(1): 32-41, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28681097

ABSTRACT

Following a baseline demographic survey, 87 pathologists interpreted 240 digital whole slide images of breast biopsy specimens representing a range of diagnostic categories from benign to atypia, ductal carcinoma in situ, and invasive cancer. A web-based viewer recorded pathologists' behaviors while interpreting a subset of 60 randomly selected and randomly ordered slides. To characterize diagnostic search patterns, we used the viewport location, timestamp, and zoom level data to calculate four variables: average zoom level, maximum zoom level, zoom level variance, and scanning percentage. Two distinct search strategies were confirmed: scanning is characterized by panning at a constant zoom level, while drilling involves zooming in and out at various locations. Statistical analysis was applied to examine the associations of different visual interpretive strategies with pathologist characteristics, diagnostic accuracy, and efficiency. We found that females scanned more than males, and age was positively correlated with scanning percentage, while facility size was negatively correlated. Over the course of the 60 cases, the scanning percentage and total interpretation time per slide decreased, and these two variables were positively correlated. The scanning percentage was not predictive of diagnostic accuracy. Higher average zoom level, maximum zoom level, and zoom variance were correlated with over-interpretation.


Subjects
Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Adult, Biopsy, Breast/diagnostic imaging, Breast/pathology, Female, Humans, Male, Middle Aged, Reproducibility of Results
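A small sketch of how the four viewing variables named in the abstract above might be computed from a viewport tracking log; the column names and the criterion for counting a move as "scanning" (a pan at an unchanged zoom level) are assumptions, not the study's exact definitions.

```python
# Sketch of the four viewing variables from a viewport tracking log.
# Column names and the scanning criterion are illustrative assumptions.
import pandas as pd

def viewing_variables(log: pd.DataFrame) -> dict:
    """log columns assumed: 'timestamp', 'zoom', 'x', 'y' (viewport center)."""
    log = log.sort_values("timestamp")
    zoom = log["zoom"]
    moved = (log["x"].diff().abs() > 0) | (log["y"].diff().abs() > 0)
    same_zoom = zoom.diff() == 0
    scanning_moves = int((moved & same_zoom).sum())
    total_moves = max(int(moved.sum()), 1)
    return {
        "average_zoom": zoom.mean(),
        "max_zoom": zoom.max(),
        "zoom_variance": zoom.var(),
        "scanning_percentage": 100.0 * scanning_moves / total_moves,
    }
```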
4.
Mod Pathol; 29(9): 1004-11, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27198567

ABSTRACT

A pathologist's accurate interpretation relies on identifying relevant histopathological features. Little is known about the precise relationship between feature identification and diagnostic decision making. We hypothesized that greater overlap between a pathologist's selected diagnostic region of interest (ROI) and a consensus-derived ROI is associated with higher diagnostic accuracy. We developed breast biopsy test cases that included atypical ductal hyperplasia (n=80); ductal carcinoma in situ (n=78); and invasive breast cancer (n=22). Benign cases were excluded due to the absence of specific abnormalities. Three experienced breast pathologists conducted an independent review of the 180 digital whole slide images, established a reference consensus diagnosis and marked one or more diagnostic ROIs for each case. Forty-four participating pathologists independently diagnosed and marked ROIs on the images. Participant diagnoses and ROIs were compared with consensus reference diagnoses and ROIs. Regression models tested whether percent overlap between participant ROI and consensus reference ROI predicted diagnostic accuracy. Each of the 44 participants interpreted 39-50 cases for a total of 1972 individual diagnoses. Percent ROI overlap with the expert reference ROI was higher in pathologists who self-reported academic affiliation (69% vs 65%, P=0.002). Percent overlap between participants' ROI and consensus reference ROI was then classified into ordinal categories: 0, 1-33, 34-65, 66-99 and 100% overlap. For each incremental change in the ordinal percent ROI overlap, the odds of diagnostic agreement increased by 60% (OR 1.6, 95% CI 1.5-1.7, P<0.001), and the association remained significant even after adjustment for other covariates. The magnitude of the association between ROI overlap and diagnostic agreement increased with increasing diagnostic severity. The findings indicate that pathologists are more likely to converge with an expert reference diagnosis when they identify an overlapping diagnostic image region, suggesting that future computer-aided detection systems that highlight potential diagnostic regions could be a helpful tool to improve accuracy and education.


Subjects
Breast Neoplasms/pathology, Carcinoma, Intraductal, Noninfiltrating/pathology, Carcinoma/pathology, Pathologists, Adult, Biopsy, Consensus, Female, Humans, Hyperplasia, Male, Middle Aged, Neoplasm Invasiveness, Observer Variation, Pilot Projects, Predictive Value of Tests, Prognosis, United States
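A brief sketch of the ROI-overlap measure and ordinal binning described in the abstract above; whether overlap is computed relative to the participant ROI, the consensus ROI, or their union is not specified there, so the convention below is an assumption.

```python
# Sketch: percent overlap between a participant ROI and the consensus ROI
# (as binary masks), binned into the ordinal categories used in the analysis.
# The denominator (participant ROI area) is an illustrative assumption.
import numpy as np

def percent_overlap(participant_mask: np.ndarray, consensus_mask: np.ndarray) -> float:
    """Share of the participant ROI that falls inside the consensus ROI."""
    participant_area = participant_mask.sum()
    if participant_area == 0:
        return 0.0
    return 100.0 * np.logical_and(participant_mask, consensus_mask).sum() / participant_area

def overlap_category(pct: float) -> str:
    if pct == 0:
        return "0%"
    if pct < 34:
        return "1-33%"
    if pct < 66:
        return "34-65%"
    if pct < 100:
        return "66-99%"
    return "100%"
```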
5.
J Digit Imaging; 29(4): 496-506, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26961982

ABSTRACT

Whole slide digital imaging technology enables researchers to study pathologists' interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists' actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors.


Subjects
Breast/diagnostic imaging, Breast/pathology, Pathologists, Biopsy, Decision Making, Female, Humans, Logistic Models, Mammography, Medical Errors
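A minimal visual bag-of-words sketch in the spirit of the abstract above: cluster patch descriptors into a visual dictionary, encode each region as a word histogram, and fit a logistic regression to score diagnostic relevance. The descriptor inputs, dictionary size, and variable names are illustrative assumptions.

```python
# Visual bag-of-words sketch; descriptor choice and dictionary size are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def build_dictionary(patch_descriptors: np.ndarray, n_words: int = 64) -> KMeans:
    """patch_descriptors: (n_patches, n_features) color/texture descriptors."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(patch_descriptors)

def bow_histogram(dictionary: KMeans, region_descriptors: np.ndarray) -> np.ndarray:
    """Encode one candidate region as a normalized visual-word histogram."""
    words = dictionary.predict(region_descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Training (hypothetical arrays): X = stacked histograms per candidate region,
# y = 1 if the region was viewed by pathologists (relevant), else 0.
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# relevance_probabilities = clf.predict_proba(X_new)[:, 1]
```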
6.
Article in English | MEDLINE | ID: mdl-33719359
7.
Adv Neural Inf Process Syst; 36(DB1): 37995-38017, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38742142

ABSTRACT

Recent acceleration in multi-modal applications has been made possible by the plethora of image and text data available online. However, the scarcity of analogous data in the medical field, specifically in histopathology, has hindered comparable progress. To enable similar representation learning for histopathology, we turn to YouTube, an untapped resource of videos, offering 1,087 hours of valuable educational histopathology videos from expert clinicians. From YouTube, we curate Quilt: a large-scale vision-language dataset consisting of 768,826 image and text pairs. Quilt was automatically curated using a mixture of models, including large language models, handcrafted algorithms, human knowledge databases, and automatic speech recognition. In comparison, the most comprehensive datasets curated for histopathology amass only around 200K samples. We combine Quilt with datasets from other sources, including Twitter, research papers, and the internet in general, to create an even larger dataset: Quilt-1M, with 1M paired image-text samples, marking it as the largest vision-language histopathology dataset to date. We demonstrate the value of Quilt-1M by fine-tuning a pre-trained CLIP model. Our model outperforms state-of-the-art models on both zero-shot and linear probing tasks for classifying new histopathology images across 13 diverse patch-level datasets of 8 different sub-pathologies, as well as on cross-modal retrieval tasks.
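The zero-shot evaluation mentioned above can be sketched generically with a CLIP-style model via the open_clip library; the model name, pretrained tag, class prompts, and image path below are placeholders, not the released Quilt-1M weights or evaluation protocol.

```python
# Generic zero-shot patch classification with a CLIP-style model.
# Model identifiers, prompts, and the image path are placeholders.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

classes = ["benign tissue", "atypia", "ductal carcinoma in situ", "invasive carcinoma"]
text = tokenizer([f"a histopathology image of {c}" for c in classes])
image = preprocess(Image.open("patch.png")).unsqueeze(0)   # placeholder file name

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(classes, probs.squeeze(0).tolist())))
```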

8.
Med Image Anal; 79: 102466, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35525135

ABSTRACT

Diagnostic disagreements among pathologists occur throughout the spectrum of benign to malignant lesions. A computer-aided diagnostic system capable of reducing uncertainties would have important clinical impact. To develop a computer-aided diagnosis method for classifying breast biopsy images into a range of diagnostic categories (benign, atypia, ductal carcinoma in situ, and invasive breast cancer), we introduce a transformer-based holistic attention network called HATNet. Unlike state-of-the-art histopathological image classification systems that use a two-pronged approach, i.e., they first learn local representations using a multi-instance learning framework and then combine these local representations to produce image-level decisions, HATNet streamlines the histopathological image classification pipeline and shows how to learn representations from gigapixel-sized images end-to-end. HATNet extends the bag-of-words approach and uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision. It outperforms the previous best network, Y-Net, which uses supervision in the form of tissue-level segmentation masks, by 8%. Importantly, our analysis reveals that HATNet learns representations from clinically relevant structures, and it matches the classification accuracy of 87 U.S. pathologists for this challenging test set.


Subjects
Breast Neoplasms, Breast, Biopsy, Breast/diagnostic imaging, Breast/pathology, Breast Neoplasms/diagnostic imaging, Female, Humans
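Not HATNet itself, but a minimal sketch of the underlying idea from the abstract above: apply self-attention over a bag of patch embeddings, with a learned classification token, to produce one slide-level prediction end-to-end. Dimensions and layer counts are illustrative assumptions.

```python
# Minimal self-attention aggregation over patch embeddings (a "bag of words")
# with a CLS token for slide-level classification. Sizes are assumptions.
import torch
import torch.nn as nn

class AttentionBagClassifier(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 4, n_heads: int = 4):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        """patch_embeddings: (batch, n_patches, dim) from any patch encoder."""
        cls = self.cls_token.expand(patch_embeddings.size(0), -1, -1)
        tokens = torch.cat([cls, patch_embeddings], dim=1)
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])          # logits from the CLS position

logits = AttentionBagClassifier()(torch.randn(2, 128, 256))   # 2 slides, 128 patches each
```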
9.
Front Artif Intell; 5: 1005086, 2022.
Article in English | MEDLINE | ID: mdl-36204597

ABSTRACT

A rapidly increasing rate of melanoma diagnosis has been noted over the past three decades, and nearly 1 in 4 skin biopsies are diagnosed as melanocytic lesions. The gold standard for diagnosis of melanoma is the histopathological examination by a pathologist to analyze biopsy material at both the cellular and structural levels. A pathologist's diagnosis is often subjective and prone to variability, while deep learning image analysis methods may improve and complement current diagnostic and prognostic capabilities. Mitoses are important entities when reviewing skin biopsy cases as their presence carries prognostic information; thus, their precise detection is an important factor for clinical care. In addition, semantic segmentation of clinically important structures in skin biopsies might help the diagnostic pipeline reach an accurate classification. We aim to provide prognostic and diagnostic information on skin biopsy images, including the detection of cellular-level entities, segmentation of clinically important tissue structures, and other important factors toward the accurate diagnosis of skin biopsy images. This paper is an overview of our work on analysis of digital whole slide skin biopsy images, including mitotic figure (mitosis) detection, semantic segmentation, diagnosis, and analysis of pathologists' viewing patterns, along with new work on melanocyte detection. Deep learning has been applied in all of our detection, segmentation, and diagnosis work. In our studies, deep learning has proven superior to prior approaches to skin biopsy analysis. Our work on analysis of pathologists' viewing patterns is the only such work in the skin biopsy literature. Our work covers the whole spectrum from low-level entities through diagnosis and understanding what pathologists do in performing their diagnoses.

10.
Diagnostics (Basel); 12(7), 2022 Jul 14.
Article in English | MEDLINE | ID: mdl-35885617

ABSTRACT

Invasive melanoma, a common type of skin cancer, is considered one of the deadliest. Pathologists routinely evaluate melanocytic lesions to determine the amount of atypia and, if the lesion represents an invasive melanoma, its stage. However, due to the complicated nature of these assessments, inter- and intra-observer variability among pathologists in their interpretation is very common. Machine-learning techniques have shown impressive and robust performance on a variety of tasks, including those in healthcare. In this work, we study the potential of including semantic segmentation of clinically important tissue structure in improving the diagnosis of skin biopsy images. Our experimental results show a 6% improvement in F-score when using whole slide images along with epidermal nest and cancerous dermal nest segmentation masks, compared to using whole slide images alone, in training and testing the diagnosis pipeline.
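The experiment above feeds segmentation masks alongside the image; one simple way to realize that (a sketch under the assumption that masks enter as extra input channels, which the abstract does not specify) is shown below. The tiny backbone is purely illustrative.

```python
# Sketch: stack RGB with two binary mask channels (epidermal nests, cancerous
# dermal nests) before a CNN. Channel order and the backbone are assumptions.
import torch
import torch.nn as nn

def with_mask_channels(image: torch.Tensor, epidermal: torch.Tensor, dermal: torch.Tensor):
    """image: (B, 3, H, W); masks: (B, 1, H, W) binary tensors."""
    return torch.cat([image, epidermal.float(), dermal.float()], dim=1)   # (B, 5, H, W)

# A classifier then only needs its first convolution widened to 5 input channels.
backbone = nn.Sequential(nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4))

x = with_mask_channels(torch.rand(2, 3, 224, 224),
                       torch.zeros(2, 1, 224, 224), torch.zeros(2, 1, 224, 224))
logits = backbone(x)
```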

11.
J Pathol Inform; 13: 100104, 2022.
Article in English | MEDLINE | ID: mdl-36268085

ABSTRACT

Although pathologists have their own viewing habits while diagnosing, viewing behaviors leading to the most accurate diagnoses are under-investigated. Digital whole slide imaging has enabled investigators to analyze pathologists' visual interpretation of histopathological features using mouse and viewport tracking techniques. In this study, we provide definitions for basic viewing behavior variables and investigate the association of pathologists' characteristics and viewing behaviors, and how they relate to diagnostic accuracy when interpreting whole slide images. We use recordings of 32 pathologists' actions while interpreting a set of 36 digital whole slide skin biopsy images (5 sets of 36 cases; 180 cases total). These viewport tracking data include the coordinates of a viewport scene on pathologists' screens, the magnification level at which that viewport was viewed, as well as a timestamp. We define a set of variables to quantify pathologists' viewing behaviors such as zooming, panning, and interacting with a consensus reference panel's selected region of interest (ROI). We examine the association of these viewing behaviors with pathologists' demographics, clinical characteristics, and diagnostic accuracy using cross-classified multilevel models. Viewing behaviors differ based on clinical experience of the pathologists. Pathologists with a higher caseload of melanocytic skin biopsy cases and pathologists with board certification and/or fellowship training in dermatopathology have lower average zoom and lower variance of zoom levels. Viewing behaviors associated with higher diagnostic accuracy include higher average and variance of zoom levels, a lower magnification percentage (a measure of consecutive zooming behavior), higher total interpretation time, and higher amount of time spent viewing ROIs. Scanning behavior, which refers to panning with a fixed zoom level, has marginally significant positive association with accuracy. Pathologists' training, clinical experience, and their exposure to a range of cases are associated with their viewing behaviors, which may contribute to their diagnostic accuracy. Research in computational pathology integrating digital imaging and clinical informatics opens up new avenues for leveraging viewing behaviors in medical education and training, potentially improving patient care and the effectiveness of clinical workflow.
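One of the behavior variables above, time spent viewing the reference ROI, can be sketched from viewport records as below; the rectangle convention, column names, and the intersection-based definition of "viewing the ROI" are assumptions for illustration.

```python
# Sketch: time the viewport spends intersecting the reference panel's ROI.
# Column names and the intersection criterion are illustrative assumptions.
import pandas as pd

def rects_intersect(a, b) -> bool:
    """Rectangles as (x0, y0, x1, y1) in slide coordinates."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def roi_viewing_time(log: pd.DataFrame, roi_rect) -> float:
    """log columns assumed: 'timestamp', 'x0', 'y0', 'x1', 'y1' per viewport scene."""
    log = log.sort_values("timestamp").reset_index(drop=True)
    dwell = log["timestamp"].diff().shift(-1).fillna(0)       # time until the next event
    on_roi = log.apply(lambda r: rects_intersect((r.x0, r.y0, r.x1, r.y1), roi_rect), axis=1)
    return float(dwell[on_roi].sum())
```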

12.
Acta Neuropathol Commun; 9(1): 191, 2021 Dec 4.
Article in English | MEDLINE | ID: mdl-34863298

ABSTRACT

Knowledge of 1p/19q-codeletion and IDH1/2 mutational status is necessary to interpret any investigational study of diffuse gliomas in the modern era. While DNA sequencing is the gold standard for determining IDH mutational status, genome-wide methylation arrays and gene expression profiling have been used for surrogate mutational determination. Previous studies by our group suggest that 1p/19q-codeletion and IDH mutational status can be predicted by genome-wide somatic copy number alteration (SCNA) data alone; however, a rigorous model to accomplish this task has yet to be established. In this study, we used SCNA data from 786 adult diffuse gliomas in The Cancer Genome Atlas (TCGA) to develop a two-stage classification system that identifies 1p/19q-codeleted oligodendrogliomas and predicts the IDH mutational status of astrocytic tumors using a machine-learning model. Cross-validated results on TCGA SCNA data showed near perfect classification results. Furthermore, our astrocytic IDH mutation model validated well on four additional datasets (AUC = 0.97, AUC = 0.99, AUC = 0.95, AUC = 0.96), as did our 1p/19q-codeleted oligodendroglioma screen on the two datasets that contained oligodendrogliomas (MCC = 0.97, MCC = 0.97). We then retrained our system using data from these validation sets and applied our system to a cohort of REMBRANDT study subjects for whom SCNA data, but not IDH mutational status, is available. Overall, using genome-wide SCNAs, we successfully developed a system to robustly predict 1p/19q-codeletion and IDH mutational status in diffuse gliomas. This system can assign molecular subtype labels to tumor samples of retrospective diffuse glioma cohorts that lack 1p/19q-codeletion and IDH mutational status, such as the REMBRANDT study, recasting these datasets as validation cohorts for diffuse glioma research.


Subjects
Biomarkers, Tumor/genetics, Brain Neoplasms/diagnosis, Glioma/diagnosis, Isocitrate Dehydrogenase/genetics, Machine Learning, DNA Copy Number Variations, Humans, Whole Genome Sequencing
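The two-stage structure described above (a 1p/19q-codeletion screen followed by an IDH-status model for the remaining astrocytic tumors) can be sketched as follows; the codeletion threshold rule, feature layout, and classifier choice are simplified assumptions, not the published system.

```python
# Sketch of a two-stage classifier over genome-wide SCNA features.
# The codeletion rule, thresholds, and model choice are simplified assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def codeletion_screen(seg_means_1p: np.ndarray, seg_means_19q: np.ndarray,
                      loss_threshold: float = -0.3) -> np.ndarray:
    """Flag samples whose mean log2 copy-ratio on 1p and 19q both indicate loss."""
    return (seg_means_1p < loss_threshold) & (seg_means_19q < loss_threshold)

def fit_idh_model(scna_features: np.ndarray, idh_mutant: np.ndarray,
                  codeleted: np.ndarray) -> LogisticRegression:
    """Stage 2: train only on tumors that did not pass the oligodendroglioma screen.
    idh_mutant is assumed to be a 0/1 label array."""
    astro = ~codeleted
    return LogisticRegression(max_iter=2000).fit(scna_features[astro], idh_mutant[astro])

def predict(scna_features, seg_means_1p, seg_means_19q, idh_model):
    codel = codeletion_screen(seg_means_1p, seg_means_19q)
    idh = idh_model.predict(scna_features)
    return np.where(codel, "oligodendroglioma, 1p/19q-codeleted",
                    np.where(idh == 1, "astrocytoma, IDH-mutant", "astrocytoma, IDH-wildtype"))
```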
13.
IEEE Access; 9: 163526-163541, 2021.
Article in English | MEDLINE | ID: mdl-35211363

ABSTRACT

Diagnosing melanocytic lesions is one of the most challenging areas of pathology with extensive intra- and inter-observer variability. The gold standard for a diagnosis of invasive melanoma is the examination of histopathological whole slide skin biopsy images by an experienced dermatopathologist. Digitized whole slide images offer novel opportunities for computer programs to improve the diagnostic performance of pathologists. In order to automatically classify such images, representations that reflect the content and context of the input images are needed. In this paper, we introduce a novel self-attention-based network to learn representations from digital whole slide images of melanocytic skin lesions at multiple scales. Our model softly weighs representations from multiple scales, allowing it to discriminate between diagnosis-relevant and -irrelevant information automatically. Our experiments show that our method outperforms five other state-of-the-art whole slide image classification methods by a significant margin. Our method also achieves comparable performance to 187 practicing U.S. pathologists who interpreted the same cases in an independent study. To facilitate relevant research, full training and inference code is made publicly available at https://github.com/meredith-wenjunwu/ScATNet.
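The soft weighting of multiple scales described above can be sketched as a learned softmax over per-scale embeddings; this is not the released ScATNet code, and the dimensions and class count are assumptions.

```python
# Sketch of soft multi-scale weighting: learn a softmax weight per scale and
# blend scale-level embeddings before classification. Sizes are assumptions.
import torch
import torch.nn as nn

class SoftScaleFusion(nn.Module):
    def __init__(self, n_scales: int = 3, dim: int = 256, n_classes: int = 5):
        super().__init__()
        self.scale_logits = nn.Parameter(torch.zeros(n_scales))   # learned per-scale scores
        self.head = nn.Linear(dim, n_classes)

    def forward(self, scale_embeddings: torch.Tensor) -> torch.Tensor:
        """scale_embeddings: (batch, n_scales, dim), one embedding per magnification."""
        w = torch.softmax(self.scale_logits, dim=0)                # soft weights over scales
        fused = (scale_embeddings * w.view(1, -1, 1)).sum(dim=1)
        return self.head(fused)

logits = SoftScaleFusion()(torch.randn(2, 3, 256))
```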

14.
Article in English | MEDLINE | ID: mdl-36589620

ABSTRACT

This paper studies why pathologists can misdiagnose diagnostically challenging breast biopsy cases, using a data set of 240 whole slide images (WSIs). Three experienced pathologists agreed on a consensus reference ground-truth diagnosis for each slide and also a consensus region of interest (ROI) from which the diagnosis could best be made. A study group of 87 other pathologists then diagnosed test sets (60 slides each) and marked their own regions of interest. Diagnoses and ROIs were categorized such that if, on a given slide, a participant's ROI differed from the consensus ROI and their diagnosis was incorrect, that ROI was called a distractor. We used the HATNet transformer-based deep learning classifier to evaluate the visual similarities and differences between the true (consensus) ROIs and the distractors. Results showed high accuracy for both the similarity and difference networks, showcasing the challenging nature of feature classification with breast biopsy images. This study is important for the potential use of its results in teaching pathologists how to diagnose breast biopsy slides.
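The distractor rule above reduces to a small predicate; the sketch below assumes overlap is reported as a percentage and that "differed from the consensus ROI" means zero (or near-zero) overlap, which is an interpretation rather than the paper's exact criterion.

```python
# Sketch of the distractor rule: an ROI is a distractor when it differs from
# the consensus ROI and the accompanying diagnosis is incorrect.
def is_distractor(roi_overlap_pct: float, diagnosis: str, consensus_diagnosis: str,
                  overlap_threshold: float = 0.0) -> bool:
    differs_from_consensus = roi_overlap_pct <= overlap_threshold  # assumed criterion
    incorrect = diagnosis != consensus_diagnosis
    return differs_from_consensus and incorrect
```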

15.
Proc IAPR Int Conf Pattern Recogn; 2020: 8727-8734, 2021 Jan.
Article in English | MEDLINE | ID: mdl-36745147

ABSTRACT

In this study, we propose the Ductal Instance-Oriented Pipeline (DIOP), which contains a duct-level instance segmentation model, a tissue-level semantic segmentation model, and three levels of features for diagnostic classification. Based on recent advancements in instance segmentation and the Mask RCNN model, our duct-level segmenter tries to identify each individual duct inside a microscopic image; then, it extracts tissue-level information from the identified ductal instances. Leveraging three levels of information obtained from these ductal instances and also the histopathology image, the proposed DIOP outperforms previous approaches (both feature-based and CNN-based) in all diagnostic tasks; for the four-way classification task, the DIOP achieves comparable performance to general pathologists on this unique dataset. The proposed DIOP takes only a few seconds to run at inference time, so it could be used interactively on most modern computers. More clinical explorations are needed to study the robustness and generalizability of this system in the future.
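The duct-level instance segmentation step above builds on Mask R-CNN; a generic inference sketch with torchvision's off-the-shelf model is shown below. The COCO-pretrained weights, the random input tile, and the score threshold are placeholders, since the actual model would be fine-tuned on annotated ducts.

```python
# Sketch of duct-level instance segmentation with torchvision's Mask R-CNN.
# Weights, input, and thresholds are placeholders for illustration only.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)                  # placeholder H&E tile, values in [0, 1]
with torch.no_grad():
    output = model([image])[0]                   # dict with 'boxes', 'masks', 'scores'

keep = output["scores"] > 0.5
instance_masks = output["masks"][keep, 0] > 0.5  # one binary mask per detected instance
areas = instance_masks.flatten(1).sum(dim=1)     # a simple per-instance feature (pixel area)
```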

16.
IEEE J Biomed Health Inform; 25(6): 2041-2049, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33166257

ABSTRACT

OBJECTIVE: Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of variable numbers of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. METHODS: First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. RESULTS: Experiments using a well-characterized data set of 240 slides containing 437 ROIs marked by experienced pathologists with variable sizes and shapes result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. CONCLUSION: The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists. SIGNIFICANCE: The proposed generic representation that can be extracted from any type of deep convolutional architecture combines the patch appearance information captured by the network activations and the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.


Subjects
Breast, Neural Networks, Computer, Breast/diagnostic imaging, Humans
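The aggregation idea above (weight patch features by the network's class predictions, then pool into one ROI-level vector) can be reduced to a few lines; the confidence-based weighting below is a simplification, and the paper's concatenation scheme is richer than this.

```python
# Sketch: weight each patch's features by its class-prediction confidence,
# then average-pool into a single ROI-level representation. Simplified.
import numpy as np

def roi_representation(patch_features: np.ndarray, patch_class_probs: np.ndarray) -> np.ndarray:
    """patch_features: (n_patches, d); patch_class_probs: (n_patches, n_classes)."""
    weights = patch_class_probs.max(axis=1, keepdims=True)     # confidence per patch
    weighted = patch_features * weights                        # weighted patch vectors
    return weighted.mean(axis=0)                               # average pooling -> (d,)
```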
17.
Biomed Eng Online; 9: 30, 2010 Jun 22.
Article in English | MEDLINE | ID: mdl-20569461

ABSTRACT

BACKGROUND: The success of radiation therapy depends critically on accurately delineating the target volume, which is the region of known or suspected disease in a patient. Methods that can compute a contour set defining a target volume on a set of patient images will contribute greatly to the success of radiation therapy and dramatically reduce the workload of radiation oncologists, who currently draw the target by hand on the images using simple computer drawing tools. The most challenging part of this process is to estimate where there is microscopic spread of disease. METHODS: Given a set of reference CT images with "gold standard" lymph node regions drawn by experts, we propose an image registration-based method that can automatically contour the cervical lymph node levels for patients receiving radiation therapy. We also propose a method to help identify the reference models that are likely to produce the best results. RESULTS: The computer-generated lymph node regions are evaluated quantitatively and qualitatively. CONCLUSIONS: Although the results do not yet conform to clinical criteria, they suggest the technique has promise.


Subjects
Head and Neck Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted/methods, Lymph Nodes/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Quality Control, Reference Standards, Tomography, X-Ray Computed/standards
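A generic sketch of the registration-then-warp idea above using SimpleITK: register a reference CT (with expert-drawn node levels) to the new patient's CT, then warp the reference label map with the recovered transform. File names, the metric, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
# Registration-based atlas contouring sketch with SimpleITK.
# File names, metric, and optimizer settings are illustrative assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)         # placeholder paths
moving = sitk.ReadImage("reference_ct.nii.gz", sitk.sitkFloat32)
labels = sitk.ReadImage("reference_node_levels.nii.gz", sitk.sitkUInt8)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)
# Warp the expert label map into the patient's space (nearest neighbour keeps labels crisp).
warped_levels = sitk.Resample(labels, fixed, transform, sitk.sitkNearestNeighbor, 0)
sitk.WriteImage(warped_levels, "patient_node_levels.nii.gz")
```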
18.
JCO Clin Cancer Inform; 4: 290-298, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32216637

ABSTRACT

PURPOSE: Machine Learning Package for Cancer Diagnosis (MLCD) is the result of a National Institutes of Health/National Cancer Institute (NIH/NCI)-sponsored project for developing a unified software package from state-of-the-art breast cancer biopsy diagnosis and machine learning algorithms that can improve the quality of both clinical practice and ongoing research. METHODS: Whole-slide images of 240 well-characterized breast biopsy cases, initially assembled under R01 CA140560, were used for developing the algorithms and training the machine learning models. This software package is based on the methodology developed and published under our recent NIH/NCI-sponsored research grant (R01 CA172343) for finding regions of interest (ROIs) in whole-slide breast biopsy images, for segmenting ROIs into histopathologic tissue types, and for using this segmentation in classifiers that can suggest final diagnoses. RESULTS: The package provides an ROI detector for whole-slide images and modules for semantic segmentation of the ROIs into tissue classes and diagnostic classification into 4 classes (benign, atypia, ductal carcinoma in situ, invasive cancer). It is available through the GitHub repository under the Massachusetts Institute of Technology license and will later be distributed with the Pathology Image Informatics Platform system. A Web page provides instructions for use. CONCLUSION: Our tools have the potential to help other cancer researchers and, ultimately, practicing physicians, and will motivate future research in this field. This article describes the methodology behind the software development and gives sample outputs to guide those interested in using this package.


Subjects
Algorithms, Breast Neoplasms/diagnosis, Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Machine Learning, Software/standards, Breast Neoplasms/classification, Female, Humans
19.
J Digit Imaging; 22(6): 681-8, 2009 Dec.
Article in English | MEDLINE | ID: mdl-18488268

ABSTRACT

Doppler ultrasound is an important noninvasive diagnostic tool for cardiovascular diseases. Modern ultrasound imaging systems utilize spectral Doppler techniques for quantitative evaluation of blood flow velocities, and these measurements play a crucial role in the diagnosis and grading of arterial stenosis. One drawback of Doppler-based blood flow quantification is that the operator has to manually specify the angle between the Doppler ultrasound beam and the vessel orientation, called the Doppler angle, in order to calculate flow velocities. In this paper, we describe a computer vision approach to automate the Doppler angle estimation. Our approach starts with the segmentation of blood vessels in ultrasound color Doppler images. The segmentation step is followed by an estimation technique for the Doppler angle based on a skeleton representation of the segmented vessel. We conducted preliminary clinical experiments to evaluate the agreement between the expert operator's angle specification and the new automated method. Statistical regression analysis showed strong agreement between the manual and automated methods. We hypothesize that automating the Doppler angle estimation will enhance the workflow of the Doppler ultrasound exam and yield more standardized clinical outcomes.


Subjects
Cardiovascular Diseases/diagnostic imaging, Image Interpretation, Computer-Assisted, Signal Processing, Computer-Assisted, Ultrasonography, Doppler, Pulsed/instrumentation, Automation, Blood Flow Velocity, Humans, Models, Cardiovascular, Sensitivity and Specificity, Ultrasonography, Doppler, Color/instrumentation, United States
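The angle-estimation step above (skeletonize the segmented vessel, estimate its orientation, then measure the angle to the beam) can be sketched as follows; the vessel segmentation itself is assumed given, and the beam direction and orientation fit (an SVD-based principal axis) are illustrative assumptions.

```python
# Sketch: estimate the Doppler angle from a binary vessel mask via its skeleton.
# The segmentation input, beam direction, and PCA-style fit are assumptions.
import numpy as np
from skimage.morphology import skeletonize

def doppler_angle(vessel_mask: np.ndarray, beam_direction=(1.0, 0.0)) -> float:
    """vessel_mask: 2-D binary array; beam_direction in (row, col) image coordinates."""
    skeleton = skeletonize(vessel_mask.astype(bool))
    pts = np.column_stack(np.nonzero(skeleton)).astype(float)    # (n, 2) row/col coords
    if pts.shape[0] < 2:
        raise ValueError("skeleton too small to estimate an orientation")
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)            # principal axis = vessel direction
    vessel_dir = vt[0]
    beam = np.asarray(beam_direction) / np.linalg.norm(beam_direction)
    cosang = abs(float(np.dot(vessel_dir, beam)))                 # direction sign is irrelevant
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```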
20.
JAMA Netw Open; 2(8): e198777, 2019 Aug 2.
Article in English | MEDLINE | ID: mdl-31397859

ABSTRACT

Importance: Following recent US Food and Drug Administration approval, adoption of whole slide imaging in clinical settings may be imminent, and diagnostic accuracy, particularly among challenging breast biopsy specimens, may benefit from computerized diagnostic support tools. Objective: To develop and evaluate computer vision methods to assist pathologists in diagnosing the full spectrum of breast biopsy samples, from benign to invasive cancer. Design, Setting, and Participants: In this diagnostic study, 240 breast biopsies from Breast Cancer Surveillance Consortium registries that varied by breast density, diagnosis, patient age, and biopsy type were selected, reviewed, and categorized by 3 expert pathologists as benign, atypia, ductal carcinoma in situ (DCIS), and invasive cancer. The atypia and DCIS cases were oversampled to increase statistical power. High-resolution digital slide images were obtained, and 2 automated image features (tissue distribution feature and structure feature) were developed and evaluated according to the consensus diagnosis of the expert panel. The performance of the automated image analysis methods was compared with independent interpretations from 87 practicing US pathologists. Data analysis was performed between February 2017 and February 2019. Main Outcomes and Measures: Diagnostic accuracy defined by consensus reference standard of 3 experienced breast pathologists. Results: The accuracy of machine learning tissue distribution features, structure features, and pathologists for classification of invasive cancer vs noninvasive cancer was 0.94, 0.91, and 0.98, respectively; the accuracy of classification of atypia and DCIS vs benign tissue was 0.70, 0.70, and 0.81, respectively; and the accuracy of classification of DCIS vs atypia was 0.83, 0.85, and 0.80, respectively. The sensitivity of both machine learning features was lower than that of the pathologists for the invasive vs noninvasive classification (tissue distribution feature, 0.70; structure feature, 0.49; pathologists, 0.84) but higher for the classification of atypia and DCIS vs benign cases (tissue distribution feature, 0.79; structure feature, 0.85; pathologists, 0.72) and the classification of DCIS vs atypia (tissue distribution feature, 0.88; structure feature, 0.89; pathologists, 0.70). For the DCIS vs atypia classification, the specificity of the machine learning feature classification was similar to that of the pathologists (tissue distribution feature, 0.78; structure feature, 0.80; pathologists, 0.82). Conclusion and Relevance: The computer-based automated approach to interpreting breast pathology showed promise, especially as a diagnostic aid in differentiating DCIS from atypical hyperplasia.


Subjects
Breast Neoplasms/pathology, Carcinoma, Ductal/pathology, Carcinoma, Intraductal, Noninfiltrating/pathology, Machine Learning, Neural Networks, Computer, Biopsy, Breast Neoplasms/diagnosis, Carcinoma, Ductal/diagnosis, Carcinoma, Intraductal, Noninfiltrating/diagnosis, Female, Humans, Reference Standards, Registries, Sensitivity and Specificity
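The accuracy, sensitivity, and specificity figures reported above come from binary comparisons against the consensus reference standard; a small sketch of how such metrics are computed for one two-class task (the 0/1 label encoding and toy arrays are illustrative) is shown below.

```python
# Sketch: accuracy, sensitivity, and specificity for one binary task
# (e.g., DCIS vs atypia) against consensus reference labels.
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }

metrics = binary_metrics(np.array([0, 1, 1, 0, 1]), np.array([0, 1, 0, 0, 1]))  # toy example
```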