Results 1 - 13 of 13
1.
J Imaging ; 10(1)2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38249005

ABSTRACT

Mass segmentation is one of the fundamental tasks used when identifying breast cancer due to the comprehensive information it provides, including the location, size, and border of the masses. Despite significant improvements in performance on this task, certain properties of the data, such as pixel class imbalance and the diverse appearance and sizes of masses, remain challenging. Recently, there has been a surge of articles proposing to address pixel class imbalance through the formulation of the loss function. While demonstrating enhanced performance, they mostly fail to address the problem comprehensively. In this paper, we propose a new perspective on the calculation of the loss that enables the binary segmentation loss to incorporate sample-level information and region-level losses in a hybrid loss setting. We propose two variations of the loss that include mass size and density in the loss calculation. We also introduce a single-loss variant that uses mass size and density to enhance focal loss. We tested the proposed method on two benchmark datasets, CBIS-DDSM and INbreast, where our approach outperformed the baseline and state-of-the-art methods.
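
The core idea of folding mass size into the focal loss can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; using the total foreground area in place of per-mass (connected-component) area, and all parameter values, are simplifying assumptions.

```python
import numpy as np

def size_weighted_focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    """Binary focal loss in which each positive pixel is re-weighted by
    the inverse of the foreground area, so small masses contribute as
    much as large ones (a rough stand-in for per-mass size weighting)."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(targets == 1, probs, 1.0 - probs)   # prob. of the true class
    focal = -((1.0 - p_t) ** gamma) * np.log(p_t)      # standard focal term
    area = max(targets.sum(), 1.0)                     # foreground pixel count
    weights = np.where(targets == 1, 1.0 / area, 1.0)  # boost small masses
    return float((weights * focal).mean())
```

A hybrid setting would then add a region-level term (e.g., a Dice loss) to this pixel-level term.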

3.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146321

ABSTRACT

A fingerprint sensor interoperability problem, or cross-sensor matching problem, occurs when one type of sensor is used for enrollment and a different type for matching. Fingerprints captured for the same person using different sensor technologies exhibit different types of noise and artifacts. This problem motivated us to develop an algorithm that can enhance fingerprints captured using different types of sensors and touch technologies. Inspired by the success of deep learning in various computer vision tasks, we formulate this problem as an image-to-image transformation designed using a deep encoder-decoder model. It is trained using two learning frameworks, i.e., conventional learning and adversarial learning based on a conditional Generative Adversarial Network (cGAN) framework. Since different types of edges form the ridge patterns in fingerprints, we employed an edge loss to train the model for effective fingerprint enhancement. The designed method was evaluated on fingerprints from two benchmark cross-sensor fingerprint datasets, i.e., MOLF and FingerPass. To assess the quality of the enhanced fingerprints, we employed two commonly used metrics: NBIS Fingerprint Image Quality (NFIQ) and the Structural Similarity Index Metric (SSIM). In addition, we proposed a metric named the Fingerprint Quality Enhancement Index (FQEI) for comprehensive evaluation of fingerprint enhancement algorithms. Effective fingerprint quality enhancement was achieved regardless of the sensor type used, an issue that had not been investigated in the related literature before. The results indicate that the proposed method outperforms the state-of-the-art methods.
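
An edge loss of this kind can be sketched as an L1 distance between gradient-magnitude maps of the enhanced and target fingerprints. The NumPy sketch below uses Sobel kernels and a plain-loop convolution purely for illustration; the kernel choice is an assumption, not the paper's network code.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_loss(enhanced, target):
    """L1 distance between edge maps, to be added to the pixel-level loss."""
    return float(np.abs(sobel_edges(enhanced) - sobel_edges(target)).mean())
```

The loss is zero when the ridge edges of the two images agree exactly and grows as they diverge.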


Subjects
Algorithms , Artifacts , Humans , Image Processing, Computer-Assisted/methods
5.
Sensors (Basel) ; 21(11)2021 May 24.
Article in English | MEDLINE | ID: mdl-34073988

ABSTRACT

The fingerprint is one of the leading biometric modalities used worldwide for authenticating the identity of persons. Over time, a great deal of research has been conducted to develop automatic fingerprint verification techniques. However, due to different authentication needs and the use of different sensors, fingerprint verification systems encounter cross-sensor matching, or sensor interoperability, challenges, where different sensors are used for the enrollment and query phases. The challenge is to develop an efficient, robust, and automatic system for cross-sensor matching. This paper proposes a new cross-matching system (SiameseFinger) using a Siamese network that takes as input features extracted with the Gabor-HoG descriptor. The proposed Siamese network is trained using adversarial learning. SiameseFinger was evaluated on two public benchmark datasets, FingerPass and MOLF. The experimental results presented in this paper indicate that SiameseFinger achieves performance comparable to that of the state-of-the-art methods.
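
A Siamese network is generally trained so that embeddings of same-finger pairs are pulled together and different-finger pairs pushed apart. One common objective for this, shown here as an illustration (the paper itself trains adversarially), is the contrastive loss:

```python
import numpy as np

def contrastive_loss(d, same, margin=1.0):
    """Contrastive loss for a single pair.
    d: Euclidean distance between the two branch embeddings;
    same: 1 if both fingerprints come from the same finger, else 0."""
    return float(same * d ** 2 + (1 - same) * max(margin - d, 0.0) ** 2)
```

Matching pairs are penalized by their squared distance, while non-matching pairs are penalized only while they sit inside the margin.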

6.
Front Oncol ; 10: 1052, 2020.
Article in English | MEDLINE | ID: mdl-32714868

ABSTRACT

Cancer is an umbrella term that includes a range of disorders, from those that are fast-growing and lethal to indolent lesions with low or delayed potential for progression to death. One critical unmet challenge is that molecular disease subtypes characterized by relevant clinical differences, such as survival, are difficult to differentiate. With the advancement of multi-omics technologies, subtyping methods have shifted toward data integration in order to differentiate among subtypes from a holistic perspective that takes into consideration phenomena at multiple levels. However, these integrative methods are still limited by their statistical assumptions and their sensitivity to noise. In addition, they are unable to predict patient risk scores from multi-omics data. Here, we present a novel approach named Subtyping via Consensus Factor Analysis (SCFA) that can efficiently remove noisy signals from consistent molecular patterns in order to reliably identify cancer subtypes and accurately predict patient risk scores. In an extensive analysis of 7,973 samples related to 30 cancers available at The Cancer Genome Atlas (TCGA), we demonstrate that SCFA outperforms state-of-the-art approaches in discovering novel subtypes with significantly different survival profiles. We also demonstrate that SCFA is able to predict risk scores that are highly correlated with true patient survival and vital status. More importantly, the accuracy of subtype discovery and risk prediction improves when more data types are integrated into the analysis. The SCFA software and TCGA data packages will be available on Bioconductor.
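
The "extract factors per data type, then integrate" step can be sketched as follows. This uses a plain SVD in place of SCFA's consensus factor analysis, so treat it purely as an illustration of the data-integration idea; matrix sizes are made up.

```python
import numpy as np

def latent_factors(X, k):
    """Top-k latent factors of a (samples x features) matrix via SVD."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * S[:k]                      # factor scores per sample

def integrate_omics(omics, k=2):
    """Stack per-data-type factors into one representation per sample,
    which could then be clustered into candidate subtypes."""
    return np.hstack([latent_factors(X, k) for X in omics])

rng = np.random.default_rng(0)
mrna = rng.normal(size=(10, 50))                 # toy expression matrix
methyl = rng.normal(size=(10, 30))               # toy methylation matrix
Z = integrate_omics([mrna, methyl], k=2)         # 10 samples x 4 factors
```

Adding a third data type simply appends more factor columns to `Z`, mirroring the observation that accuracy improves as more data types are integrated.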

8.
ACM BCB ; 2012: 170-177, 2012 Oct.
Article in English | MEDLINE | ID: mdl-28090597

ABSTRACT

Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides.
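
Step (i), local enhancement of filaments by Hessian filtering, can be sketched in 2-D: the eigenvalues of the Hessian separate line-like from blob-like structure, and a bright ridge yields one strongly negative eigenvalue. The single-scale NumPy sketch below is an assumption-laden simplification of the real pipeline, which is 3-D and multi-scale.

```python
import numpy as np

def hessian_ridge_response(img):
    """2-D ridge measure from Hessian eigenvalues: a large-magnitude
    negative second eigenvalue marks bright, line-like structure."""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # closed-form eigenvalues of the symmetric 2x2 Hessian [[gxx, gxy], [gxy, gyy]]
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    l2 = tr / 2 - disc                       # smaller eigenvalue
    return np.where(l2 < 0, -l2, 0.0)        # respond only to bright ridges
```

The gap filling of step (ii) would then be performed on this enhanced response via tensor voting.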

9.
IEEE Trans Med Imaging ; 30(8): 1503-13, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21421432

ABSTRACT

In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and iterative voting frameworks. Its efficacy lies in iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shift is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by concentrating the votes over promising, salient structures. The proposed technique is validated by delineating adherens junctions imaged through fluorescence microscopy. However, the method is also applicable to screening other organisms based on characteristics of their cell wall structures. Adherens junctions maintain tissue structural integrity and cell-cell interactions. Visually, they exhibit fibrous patterns that may be diffuse, heterogeneous in fluorescence intensity, or punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates.
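
The exploratory-to-exploitative shift can be illustrated with a single stick vote whose angular aperture shrinks between iterations. The decay function below is a simplified stand-in for the full tensor voting field (it ignores curvature), so the parameterization is an assumption.

```python
import numpy as np

def stick_vote(voter_xy, voter_theta, receiver_xy, sigma=5.0, aperture=np.pi / 4):
    """Strength of the stick vote cast from `voter_xy` (tangent angle
    `voter_theta`) at `receiver_xy`; zero outside the angular aperture."""
    d = np.asarray(receiver_xy, float) - np.asarray(voter_xy, float)
    r = np.hypot(d[0], d[1])
    if r == 0:
        return 1.0
    # angle between the voter's tangent and the direction to the receiver
    phi = np.arctan2(d[1], d[0]) - voter_theta
    phi = np.arctan2(np.sin(phi), np.cos(phi))   # wrap to [-pi, pi]
    if abs(phi) > aperture:
        return 0.0                               # outside the voting cone
    return float(np.exp(-(r ** 2) / sigma ** 2))
```

Shrinking `aperture` at each refinement pass concentrates the votes along the current tangent estimate, which is the mode shift described above.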


Subjects
Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Adherens Junctions/ultrastructure , Cell Wall/chemistry , Humans , Microscopy, Fluorescence/methods , Reproducibility of Results
10.
IEEE Trans Image Process ; 17(11): 2236-55, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18854254

ABSTRACT

We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). 
The likelihood of each hypothesis is then computed using the probabilistic models of shape appearance. Only hypotheses ranked high enough are considered for further verification, with the most likely hypotheses verified first. The proposed approach has been evaluated using both artificial and real data, illustrating promising performance. We also present preliminary results illustrating extensions of the AFoVs framework to predict the intensity appearance of an object. In this context, we have built a hybrid recognition framework that exploits geometric knowledge to hypothesize the location of an object in the scene and both geometric and intensity information to verify the hypotheses.
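
The AFoVs prediction step itself is linear: under a 3-D linear (affine) transformation, each coordinate of a point in a novel view is a linear combination of its coordinates in two reference views. The NumPy sketch below demonstrates this with synthetic affine projections; the camera setup and point counts are illustrative assumptions.

```python
import numpy as np

def afov_predict(ref1, ref2, novel_known, n_fit=5):
    """Fit the AFoV coefficients from the first n_fit correspondences,
    then predict novel-view coordinates for every point.
    ref1, ref2, novel_known: (N, 2) arrays of corresponding points."""
    A = np.hstack([ref1, ref2, np.ones((len(ref1), 1))])   # [x1 y1 x2 y2 1]
    cx = np.linalg.lstsq(A[:n_fit], novel_known[:n_fit, 0], rcond=None)[0]
    cy = np.linalg.lstsq(A[:n_fit], novel_known[:n_fit, 1], rcond=None)[0]
    return np.column_stack([A @ cx, A @ cy])

# Synthetic check: three affine views of the same 3-D points.
rng = np.random.default_rng(1)
pts3d = rng.normal(size=(8, 3))
def affine_view(P):
    M, t = rng.normal(size=(2, 3)), rng.normal(size=2)
    return P @ M.T + t
v1, v2, v3 = affine_view(pts3d), affine_view(pts3d), affine_view(pts3d)
pred = afov_predict(v1, v2, v3)   # with exact affine data this reproduces v3
```

In the actual recognition setting, the fitted views would additionally be screened with the rigidity constraints described above to discard unrealistic predictions.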


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
11.
IEEE Trans Image Process ; 15(7): 2019-34, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16830921

ABSTRACT

Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.
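
The Gabor feature extraction stage of the hypothesis verification step can be sketched as a small filter bank whose response statistics form the feature vector fed to the SVM. Kernel parameters below are illustrative assumptions, not the paper's tuned values, and the FFT-based filtering is circular for brevity.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5):
    """Real (cosine) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam))

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and std of filter-bank responses over a set of orientations."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        # circular convolution via FFT, zero-padding the kernel to image size
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)).real)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

Each hypothesized vehicle window would be described by such a vector and passed to the trained SVM for verification.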


Subjects
Accidents, Traffic/prevention & control , Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Motor Vehicles/classification , Pattern Recognition, Automated/methods , Cluster Analysis , Information Storage and Retrieval/methods , Vision, Monocular
12.
IEEE Trans Pattern Anal Mach Intell ; 28(5): 694-711, 2006 May.
Article in English | MEDLINE | ID: mdl-16640257

ABSTRACT

Developing on-board automotive driver assistance systems that aim to alert drivers about the driving environment and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than fixed, as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the locations of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.


Subjects
Accidents, Traffic/prevention & control , Artificial Intelligence , Automobile Driving , Automobiles , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Safety , Algorithms
13.
Neural Netw ; 9(9): 1541-1559, 1996 Dec.
Article in English | MEDLINE | ID: mdl-12662552

ABSTRACT

This paper focuses on two ART architectures, Fuzzy ART and Fuzzy ARTMAP. Fuzzy ART is a pattern clustering machine, while Fuzzy ARTMAP is a pattern classification machine. Our study concentrates on the order in which categories in Fuzzy ART, or in the ART(a) module of Fuzzy ARTMAP, are chosen. Our work provides a geometrical, and therefore clearer, understanding of why, and in what order, these categories are chosen for various ranges of the choice parameter of the Fuzzy ART module. This understanding serves as a powerful tool in developing learning properties of these neural network architectures; to strengthen this argument, it is worth mentioning that the order in which categories are chosen in ART 1 and ARTMAP provided a valuable tool in proving important properties of those architectures. Copyright 1996 Elsevier Science Ltd.
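
The dependence of category order on the choice parameter can be reproduced numerically. With the Fuzzy ART choice function T_j = |x ∧ w_j| / (α + |w_j|), a small α favors subset templates while a large α favors superset templates; the template values below are made up for illustration.

```python
import numpy as np

def choice(x, w, alpha):
    """Fuzzy ART choice function: fuzzy AND (component-wise minimum),
    L1 norm, normalized by the template size plus alpha."""
    return np.minimum(x, w).sum() / (alpha + w.sum())

x = np.array([0.2, 0.6, 0.8, 0.4])   # complement-coded input, |x| = 2
w_sub = x / 2                         # subset template,   |w_sub| = 1
w_sup = np.ones(4)                    # superset template, |w_sup| = 4
# small alpha: the subset template is chosen first; large alpha: the superset
```

This order flip as α grows is exactly the kind of behavior the geometric analysis characterizes.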
