Results 1 - 3 of 3
1.
BMC Bioinformatics ; 14: 333, 2013 Nov 20.
Article in English | MEDLINE | ID: mdl-24255945

ABSTRACT

BACKGROUND: Unsupervised segmentation of multi-spectral images plays an important role in annotating infrared microscopic images and is an essential step in label-free spectral histopathology. In this context, diverse clustering approaches have been utilized and evaluated in order to achieve segmentations of Fourier Transform Infrared (FT-IR) microscopic images that agree with histopathological characterization. RESULTS: We introduce so-called interactive similarity maps as an alternative strategy for annotating infrared microscopic images. We demonstrate that segmentations obtained from interactive similarity maps are similarly accurate to segmentations obtained from conventionally used hierarchical clustering approaches. To perform this comparison on quantitative grounds, we provide a scheme for identifying non-horizontal cuts in dendrograms, which yields a validation scheme for the hierarchical clustering approaches commonly used in infrared microscopy. CONCLUSIONS: We demonstrate that interactive similarity maps may identify more accurate segmentations than hierarchical clustering based approaches, and are thus a viable and, owing to their interactive nature, attractive alternative to hierarchical clustering. Our validation scheme furthermore shows that the performance of hierarchical two-means is comparable to that of the traditionally used Ward's clustering. As the former is much more efficient in time and memory, our results suggest a less resource-demanding alternative for annotating large spectral images.
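The "hierarchical two-means" strategy mentioned in the conclusions can be illustrated as repeated bisection of the data with 2-means. The sketch below is a minimal, generic illustration, not the authors' implementation: the deterministic seeding and the choice to always split the largest cluster are assumptions made here for reproducibility.

```python
import numpy as np

def two_means(X, iters=20):
    """Plain 2-means (Lloyd's algorithm) on the rows of X.
    Deterministic seeding from the two coordinate-sum extremes
    (an assumption for this sketch); returns a 0/1 assignment per row."""
    centers = X[[np.argmin(X.sum(axis=1)), np.argmax(X.sum(axis=1))]].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):  # keep old center if a cluster empties
                centers[k] = X[assign == k].mean(axis=0)
    return assign

def hierarchical_two_means(X, n_clusters):
    """Hierarchical two-means sketch: repeatedly bisect one cluster with
    2-means until n_clusters labels exist. Splitting the currently largest
    cluster is one possible heuristic, assumed here."""
    labels = np.zeros(len(X), dtype=int)
    for new_label in range(1, n_clusters):
        sizes = np.bincount(labels, minlength=new_label)
        idx = np.where(labels == int(np.argmax(sizes)))[0]
        sub = two_means(X[idx])
        labels[idx[sub == 1]] = new_label  # one half keeps its old label
    return labels
```

Unlike Ward's clustering, which builds a full dendrogram bottom-up in roughly quadratic time and memory, this top-down scheme only ever runs 2-means on subsets, which is what makes it attractive for large spectral images.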


Subject(s)
Spectroscopy, Fourier Transform Infrared/methods , Adenocarcinoma/pathology , Algorithms , Cluster Analysis , Colorectal Neoplasms/pathology , Database Management Systems , Databases, Factual , Humans , Microscopy, Fluorescence/methods , Monte Carlo Method , Reproducibility of Results , Spectrum Analysis, Raman/methods , Tissue Engineering/methods
2.
IEEE J Biomed Health Inform ; 26(8): 4325-4334, 2022 08.
Article in English | MEDLINE | ID: mdl-35653451

ABSTRACT

The Cervical Vertebral Maturation (CVM) method aims to determine the craniofacial skeletal maturational stage, which is crucial for orthodontic and orthopedic treatment. In this paper, we explore the potential of deep learning for automatic CVM assessment. In particular, we propose a convolutional neural network named iCVM. Based on the residual network, it is specialized for the challenges unique to the task of CVM assessment. 1) To combat overfitting due to the limited data size, multiple dropout layers are utilized. 2) To address the inevitable label ambiguity between adjacent maturational stages, we introduce the concept of label distribution learning in the loss function. In addition, we analyze the regions important to the model's predictions using the Grad-CAM technique. The learned strategy shows surprisingly high consistency with the clinical criteria, indicating that the decisions made by our model are readily interpretable, which is critical in the evaluation of growth and development in orthodontics. Moreover, to drive future research in the field, we release a new dataset named CVM-900 along with the paper. It contains the cervical part of 900 lateral cephalograms collected from orthodontic patients of different ages and genders. Experimental results show that the proposed approach achieves superior performance on CVM-900 in terms of various evaluation metrics.
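The label distribution learning idea for ambiguous adjacent stages can be sketched as follows: the hard stage label is softened into a Gaussian distribution over neighboring stages, and the loss compares this soft target with the network's softmax output. This is a generic illustration, not the iCVM loss itself; the Gaussian shape, `sigma`, and the use of KL divergence are assumptions.

```python
import numpy as np

def label_distribution(stage, n_stages=6, sigma=1.0):
    """Soften a hard stage label into a discretized Gaussian over all
    stages, so adjacent stages receive non-zero target probability.
    sigma (spread over neighbors) is an assumed hyperparameter."""
    k = np.arange(n_stages)
    d = np.exp(-0.5 * ((k - stage) / sigma) ** 2)
    return d / d.sum()

def ldl_loss(logits, stage, n_stages=6, sigma=1.0):
    """KL divergence between the softened target distribution and the
    model's softmax prediction (one common choice of LDL loss)."""
    target = label_distribution(stage, n_stages, sigma)
    logp = logits - logits.max()                  # numerically stable
    logp = logp - np.log(np.exp(logp).sum())      # log-softmax
    return float(np.sum(target * (np.log(target + 1e-12) - logp)))
```

Compared with one-hot cross-entropy, this loss penalizes a prediction of stage 3 for a true stage 4 far less than a prediction of stage 1, which matches the ordinal, gradual nature of skeletal maturation.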


Subject(s)
Deep Learning , Age Determination by Skeleton/methods , Cervical Vertebrae/diagnostic imaging , Female , Humans , Male , Radiography , Uncertainty
3.
Article in English | MEDLINE | ID: mdl-32224457

ABSTRACT

Existing enhancement methods are empirically expected to help high-level computer vision tasks; in practice, however, this is not always the case. We focus on object and face detection under poor visibility caused by bad weather (haze, rain) and low-light conditions. To provide a more thorough examination and fair comparison, we introduce three benchmark sets collected in real-world hazy, rainy, and low-light conditions, respectively, with annotated objects/faces. We launched the UG2+ challenge Track 2 competition at IEEE CVPR 2019, aiming to evoke a comprehensive discussion and exploration of whether and how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios. To the best of our knowledge, this is the first and currently largest effort of its kind. Baseline results obtained by cascading existing enhancement and detection models are reported, indicating the highly challenging nature of our new data as well as the large room for further technical innovation. Thanks to broad participation from the research community, we are able to analyze representative team solutions, striving to better identify the strengths and limitations of existing mindsets as well as future directions.
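The "cascading" baseline described above simply runs an off-the-shelf enhancer before an unchanged detector. The sketch below shows only that pipeline pattern with a toy gamma-correction enhancer; the real baselines used learned enhancement and detection models, and the function names here are illustrative.

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Toy low-light enhancer: gamma correction on an image with values
    in [0, 1]. A stand-in for a learned enhancement model."""
    img = np.clip(img, 0.0, 1.0)
    return img ** gamma  # gamma < 1 brightens dark regions

def cascade(img, enhancer, detector):
    """Baseline cascade: enhance first, then run the detector unchanged.
    `detector` is any callable mapping an image to detections."""
    return detector(enhancer(img))
```

The point the benchmark makes is precisely that this naive composition often underperforms: the enhancer optimizes for human-perceived quality, not for the features the downstream detector relies on.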
