Results 1 - 5 of 5
1.
IEEE J Biomed Health Inform; 21(2): 441-450, 2017 Mar.
Article in English | MEDLINE | ID: mdl-26800556

ABSTRACT

In this paper, we introduce and evaluate the systems submitted to the first Overlapping Cervical Cytology Image Segmentation Challenge, held in conjunction with the IEEE International Symposium on Biomedical Imaging 2014. The challenge was organized to encourage the development and benchmarking of techniques capable of segmenting individual cells from overlapping cellular clumps in cervical cytology images, a prerequisite for the next generation of computer-aided diagnosis systems for cervical cancer. In particular, these automated systems must detect and accurately segment both the nucleus and cytoplasm of each cell, even when cells are clumped together and hence partially occluded. This remains an unsolved problem due to the poor contrast of cytoplasm boundaries, the large variation in the size and shape of cells, the presence of debris, and the large degree of cellular overlap. The challenge initially utilized a database of 16 high-resolution (×40 magnification) images of complex cellular fields of view, from which the isolated real cells were used to construct a database of 945 synthesized cervical cytology images with varying numbers of cells and degrees of overlap, providing full access to the segmentation ground truth. These synthetic images provided a reliable and comprehensive framework for quantitative evaluation of this segmentation problem. Results from the submitted methods demonstrate that all the methods are effective only in segmenting clumps containing at most three cells, with overlap coefficients up to 0.3. This highlights the intrinsic difficulty of the challenge and provides motivation for significant future improvement.
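The overlap regime quoted in the results can be made concrete with a small sketch. This is our own illustration of the standard overlap coefficient on hypothetical pixel masks, not the challenge's evaluation code:

```python
# Illustrative computation (our sketch, not the challenge's evaluation code) of
# the overlap coefficient between two binary cell masks, each represented as a
# set of (row, col) pixel coordinates.
def overlap_coefficient(mask_a, mask_b):
    """Szymkiewicz-Simpson overlap coefficient of two pixel sets."""
    return len(mask_a & mask_b) / min(len(mask_a), len(mask_b))

# Two hypothetical 10x10-pixel "cells" whose footprints share three rows.
cell_a = {(r, c) for r in range(0, 10) for c in range(10)}
cell_b = {(r, c) for r in range(7, 17) for c in range(10)}

print(overlap_coefficient(cell_a, cell_b))  # 0.3: the upper end of the regime handled well
```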


Subjects
Algorithms; Cervix Uteri/cytology; Image Processing, Computer-Assisted/methods; Microscopy/methods; Cervix Uteri/diagnostic imaging; Female; Humans; Papanicolaou Test/methods; Uterine Cervical Neoplasms
2.
J Pathol Inform; 7: 28, 2016.
Article in English | MEDLINE | ID: mdl-27563487

ABSTRACT

CONTEXT: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses, and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification. MATERIALS AND METHODS: Our method is inspired by pathologists' workflow. We analyse imaged tissues at two magnification levels and extract clinically inspired color, texture, and segmentation-based shape descriptors using image-processing methods. We propose a carefully designed machine-learning technique composed of four modules: a dissimilarity matrix, dimensionality reduction, feature selection, and a support vector machine classifier, which together separate the five ovarian carcinoma subtypes using the extracted features. RESULTS: This paper presents the details of our implementation and its validation on a clinically derived dataset of eighty high-resolution histopathology images. The proposed system achieved a multiclass classification accuracy of 95.0% when classifying unseen tissues. The classifier's confusion matrix across the five ovarian carcinoma subtypes agrees with clinicians' confusion and reflects the difficulty of diagnosing endometrioid and serous carcinomas. CONCLUSIONS: Our results from this first study highlight the difficulty of ovarian carcinoma diagnosis, which originates in part from the intrinsic class imbalance observed among subtypes, and suggest that automatic analysis of ovarian carcinoma subtypes could add value to clinicians' diagnostic procedures by providing a second opinion.
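The four-module pipeline described under MATERIALS AND METHODS maps naturally onto standard tooling. The sketch below is our own assumption of how such a pipeline could be wired together, using random stand-in descriptors whose sizes merely mirror the dataset (80 images, 5 subtypes); it is not the authors' implementation.

```python
# A sketch of the described four-module pipeline (dissimilarity matrix ->
# dimensionality reduction -> feature selection -> SVM) on synthetic stand-in
# descriptors. Sizes mirror the paper's dataset; descriptors are random.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import pairwise_distances
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = np.repeat(np.arange(5), 16)             # 5 subtypes x 16 images
X = rng.normal(size=(80, 32)) + y[:, None]  # class-dependent shift -> separable

# Module 1: represent each image by its dissimilarity to every other image.
# (For brevity the matrix is built once over all samples; a faithful pipeline
# would compute dissimilarities to training samples only.)
D = pairwise_distances(X)

clf = Pipeline([
    ("reduce", PCA(n_components=10)),         # module 2: dimensionality reduction
    ("select", SelectKBest(f_classif, k=5)),  # module 3: feature selection
    ("svm", SVC(kernel="rbf")),               # module 4: SVM classifier
])
score = cross_val_score(clf, D, y, cv=5).mean()
print(round(score, 3))
```

Note the simplification flagged in the comment: building the dissimilarity matrix over all samples before cross-validation leaks mild information; a faithful replication would restrict it to training samples per fold.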

3.
Int J Comput Assist Radiol Surg; 11(8): 1409-18, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26872810

ABSTRACT

PURPOSE: Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects must be considered when segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue). METHODS: In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting visible as well as occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align them to, and segment, the corresponding structures in 2D intraoperative endoscopic views. Our preoperative-to-intraoperative alignment is driven, first, by spatio-temporal, signal-processing-based vessel pulsation cues and, second, by machine-learning-based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues to guide preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous) physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention. RESULTS: We validated the utility of our technique on fifteen challenging clinical cases, with a 45% improvement in accuracy compared to the state-of-the-art method. CONCLUSIONS: A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. The method leverages preoperative data, as a source of patient-specific prior knowledge, together with vasculature pulsation and endoscopic visual cues, to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented-reality applications in minimally invasive surgeries.
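The vessel-pulsation cue lends itself to a compact signal-processing sketch. The snippet below is a simplified stand-in (our assumption, not the paper's algorithm) for the spatio-temporal analysis: it asks whether a pixel's intensity trace has a dominant frequency in the cardiac band, using synthetic traces and assumed frame and pulse rates.

```python
# Simplified stand-in for the vessel pulsation cue: a pixel overlying a vessel
# shows a dominant intensity frequency in the cardiac band, while ordinary
# tissue does not. Frame rate and pulse rate below are assumptions.
import numpy as np

fps = 30.0                              # assumed endoscope frame rate
t = np.arange(300) / fps                # 10 s of frames
pulse_hz = 1.2                          # assumed pulse rate (~72 bpm)
rng = np.random.default_rng(1)
vessel = 0.05 * np.sin(2 * np.pi * pulse_hz * t) + rng.normal(0, 0.01, t.size)
tissue = rng.normal(0, 0.01, t.size)

def dominant_freq(signal, fps):
    """Frequency (Hz) of the strongest non-DC component of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
    return freqs[np.argmax(spectrum)]

print(dominant_freq(vessel, fps))       # ~1.2 Hz: pulsation cue fires
```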


Subjects
Endoscopy/methods; Imaging, Three-Dimensional/methods; Color; Humans; Nephrectomy/methods
4.
IEEE Trans Med Imaging; 35(1): 1-12, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26151933

ABSTRACT

In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications, including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries in the projected camera views. In this paper, we propose a multi-modal approach to segmentation in which preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and account for the models' non-rigid deformations so as to match corresponding visual cues in the multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization, hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery, with results demonstrating high accuracy and robustness.
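The point about camera parameters can be illustrated with a plain pinhole model (our simplification; the paper embeds these parameters in its joint segmentation optimization rather than projecting points in isolation):

```python
# Why calibration matters: a modest focal-length error noticeably displaces a
# projected model point. Intrinsics and the 3-D point below are hypothetical.
import numpy as np

def project(points_3d, f, cx, cy):
    """Pinhole projection of Nx3 camera-frame points (metres) to Nx2 pixels."""
    X, Y, Z = points_3d.T
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

P = np.array([[0.02, 0.01, 0.10]])                 # a point 10 cm from the camera
true_px = project(P, f=800.0, cx=320.0, cy=240.0)  # assumed true intrinsics
off_px = project(P, f=760.0, cx=320.0, cy=240.0)   # 5% focal-length error
drift = np.linalg.norm(true_px - off_px)
print(drift)                                       # ~8.9 px of misalignment
```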


Subjects
Imaging, Three-Dimensional/methods; Robotic Surgical Procedures/methods; Algorithms; Animals; Humans; Kidney/pathology; Kidney/surgery; Kidney Neoplasms/pathology; Kidney Neoplasms/surgery; Nephrectomy; Sheep
5.
Med Image Comput Comput Assist Interv; 17(Pt 2): 324-31, 2014.
Article in English | MEDLINE | ID: mdl-25485395

ABSTRACT

Synergistic fusion of preoperative (pre-op) and intraoperative (intra-op) imaging data provides surgeons with invaluable information that can improve their decision-making during minimally invasive robotic surgery. In this paper, we propose an efficient technique to segment multiple objects in intra-op multi-view endoscopic videos based on priors captured from pre-op data. Our approach carries information from the 3D pre-op data into the analysis of visual cues in the 2D intra-op data by formulating the problem as one of finding the 3D pose and non-rigid deformations of tissue models driven by features from the 2D images. We present a closed-form solution for our formulation and demonstrate how it allows for the inclusion of a laparoscopic camera motion model. Our efficient method runs in real time on a single CPU core, making it practical even for robotic surgery systems with limited computational resources. We validate the utility of our technique on ex vivo data as well as on in vivo clinical data from laparoscopic partial nephrectomy surgery, and demonstrate its robustness in segmenting stereo endoscopic videos.
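For readers unfamiliar with closed-form pose recovery, the classic Kabsch/Procrustes alignment gives the flavour; the sketch below illustrates the general idea of recovering a pose analytically via an SVD, and is not the paper's actual solution.

```python
# Closed-form rigid alignment (Kabsch/Procrustes): recover rotation R and
# translation t analytically, no iterative optimization. Illustration only.
import numpy as np

def kabsch(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))                 # stand-in pre-op model points
theta = 0.4                                    # ground-truth pose for the demo
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])

R, t = kabsch(src, dst)
print(np.allclose(src @ R.T + t, dst))         # True: pose recovered exactly
```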


Subjects
Capsule Endoscopy/methods; Imaging, Three-Dimensional/methods; Kidney Neoplasms/pathology; Kidney Neoplasms/surgery; Nephrectomy/methods; Pattern Recognition, Automated/methods; Surgery, Computer-Assisted/methods; Animals; Image Interpretation, Computer-Assisted/methods; Preoperative Care/methods; Reproducibility of Results; Sensitivity and Specificity; Sheep; Subtraction Technique; Viscera/pathology; Viscera/surgery