1.
Front Neurosci ; 17: 1181703, 2023.
Article in English | MEDLINE | ID: mdl-37287799

ABSTRACT

Background: Deep learning (DL) has shown promising results in molecular-based classification of glioma subtypes from MR images. DL requires a large amount of training data to achieve good generalization performance. Since brain tumor datasets are usually small, datasets from different hospitals need to be combined, but data privacy concerns often constrain such sharing. Federated learning (FL) has recently gained much attention because it trains a central DL model without requiring hospitals to share their data. Method: We propose a novel 3D FL scheme for classifying glioma and its molecular subtypes. The scheme exploits a slice-based DL classifier, EtFedDyn, an extension of FedDyn whose key differences are a focal loss cost function to tackle severe class imbalance in the datasets and a multi-stream network to exploit MRIs in different modalities. By combining EtFedDyn with domain mapping as pre-processing and 3D scan-based post-processing, the proposed scheme performs 3D brain scan-based classification on datasets from different data owners. To examine whether the FL scheme could replace central learning (CL), we compare the classification performance of the proposed FL scheme with that of the corresponding CL scheme. Furthermore, detailed empirical analyses were conducted to examine the effects of domain mapping, 3D scan-based post-processing, different cost functions and different FL schemes. Results: Experiments were conducted on two case studies: classification of glioma subtypes (IDH mutation and wild-type on the TCGA and US datasets in case A) and glioma grades (high-grade vs. low-grade glioma, HGG and LGG, on the MICCAI dataset in case B). The proposed FL scheme achieved good test-set performance, (85.46%, 75.56%) for IDH subtypes and (89.28%, 90.72%) for LGG/HGG, averaged over five runs. Compared with the corresponding CL scheme, the drop in test accuracy from the proposed FL scheme is small (-1.17%, -0.83%), indicating its good potential to replace the CL scheme. Furthermore, the empirical tests showed increases in classification test accuracy from applying domain mapping (0.4%, 1.85%) in case A; the focal loss function (1.66%, 3.25%) in case A and (1.19%, 1.85%) in case B; 3D post-processing (2.11%, 2.23%) in case A and (1.81%, 2.39%) in case B; and EtFedDyn over the FedAvg classifier (1.05%, 1.55%) in case A and (1.23%, 1.81%) in case B with fast convergence, all of which contributed to the overall performance of the proposed FL scheme. Conclusion: The proposed FL scheme is shown to be effective in predicting glioma and its subtypes from test-set MR images, with great potential to replace conventional CL approaches for training deep networks. This could help hospitals maintain data privacy while using a federated classifier with nearly the same performance as a centrally trained one. Further detailed experiments showed that different parts of the proposed 3D FL scheme, such as domain mapping (making datasets more uniform) and post-processing (scan-based classification), are essential.
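
As a rough illustration of two ingredients described above (the focal loss used against class imbalance and server-side aggregation of client models), a minimal Python/PyTorch sketch is given below. It is not the authors' EtFedDyn implementation: FedDyn's dynamic regularization, the multi-stream network, domain mapping and 3D post-processing are all omitted, and every name and default value is illustrative.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # Focal loss: scales cross-entropy by (1 - p_t)^gamma so that easy,
    # well-classified examples are down-weighted and rare classes matter more.
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                      # probability of the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

def aggregate_clients(client_states, client_sizes):
    # FedAvg-style server aggregation: weighted average of client parameters.
    # EtFedDyn additionally keeps per-client correction terms (not shown here).
    total = float(sum(client_sizes))
    agg = {}
    for key in client_states[0]:
        agg[key] = sum(s[key].float() * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return agg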

2.
Sensors (Basel) ; 22(14)2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35890972

ABSTRACT

In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation places high demands on medical personnel. The aim of this study is to train a deep network for segmentation using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained on a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, together with a small number of patients (<20) with annotated tumors. Training consists of initial training on the two ellipse boxes from unannotated MRIs, followed by refined training on the small number of annotated MRIs. We conduct our experiments with a multi-stream U-Net, an extension of the conventional U-Net that enables the use of complementary information from multi-modality (e.g., T1, T1ce, T2, and FLAIR) MRIs. To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets is then compared with that of the same network trained entirely on annotated MRIs. Our experiments show that the proposed method obtains good tumor segmentation on the test sets, with a Dice score on tumor areas of (0.8407, 0.9104) and a segmentation accuracy on tumor areas of (83.88%, 88.47%) for the MICCAI BraTS'17 and US datasets, respectively. Compared with the results of the network trained on fully annotated tumors, the drop in segmentation performance from the proposed approach is (0.0594, 0.0159) in Dice score and (8.78%, 2.61%) in segmented tumor accuracy for the MICCAI and US test sets, which is relatively small. Our case studies demonstrate that training the segmentation network on ellipse box areas in place of fully annotated tumors is feasible and can be considered an alternative that trades a small drop in segmentation performance for a large saving in medical experts' annotation time.
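
To make the idea of foreground/background ellipse boxes concrete, below is a minimal numpy sketch of how an elliptical foreground region inscribed in a tumor bounding box, and a background region outside an enlarged ellipse, could be rasterized as weak training labels. The function names, the margin factor and the box convention are assumptions for illustration, not the paper's code.

import numpy as np

def ellipse_mask(shape, cy, cx, ry, rx):
    # Boolean mask of the ellipse centered at (cy, cx) with radii (ry, rx).
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

def weak_labels_from_box(shape, y0, x0, y1, x1, bg_margin=1.5):
    # Foreground (FG): ellipse inscribed in the tumor bounding box.
    # Background (BG): everything outside the same ellipse enlarged by bg_margin.
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    ry, rx = (y1 - y0) / 2.0, (x1 - x0) / 2.0
    fg = ellipse_mask(shape, cy, cx, ry, rx)
    bg = ~ellipse_mask(shape, cy, cx, ry * bg_margin, rx * bg_margin)
    return fg, bg

# Example: a 240x240 slice with a tumor box spanning rows 80-140, columns 100-170.
fg, bg = weak_labels_from_box((240, 240), 80, 100, 140, 170)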


Subjects
Brain Neoplasms, Deep Learning, Brain Neoplasms/diagnostic imaging, Feasibility Studies, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods
3.
BMC Biomed Eng ; 4(1): 4, 2022 May 19.
Article in English | MEDLINE | ID: mdl-35590389

ABSTRACT

BACKGROUND: For brain tumors, identifying the molecular subtypes from magnetic resonance imaging (MRI) is desirable but remains a challenging task. Recent machine learning and deep learning (DL) approaches may help classify/predict tumor subtypes from MRIs. However, most of these methods require annotated data with ground truth (GT) tumor areas manually drawn by medical experts. Manual annotation is a time-consuming process that places high demands on medical personnel. As an alternative, automatic segmentation is often used; however, it does not guarantee quality and can produce improper or failed segmentation boundaries due to differences in MRI acquisition parameters across imaging centers, since segmentation is an ill-defined problem. Analogous to visual object tracking and classification, this paper shifts the paradigm by training a classifier using tumor bounding box areas in MR images. The aim of our study is to see whether GT tumor areas can be replaced by tumor bounding box areas (e.g., ellipse-shaped boxes) for classification without a significant drop in performance. METHOD: In patients with diffuse gliomas, we trained a deep learning classifier for subtype prediction using tumor regions of interest (ROIs) defined by ellipse bounding boxes and compared it with training on manually annotated data. Experiments were conducted on two datasets (US and TCGA) consisting of multi-modality MRI scans, where the US dataset contained exclusively patients with diffuse low-grade gliomas (dLGG). RESULTS: Prediction rates on the two test datasets were 69.86% for 1p/19q codeletion status on the US dataset and 79.50% for IDH mutation/wild-type on the TCGA dataset. Comparison with training on annotated GT tumor data showed an average degradation of about 3% (2.92% for 1p/19q codeletion status and 3.23% for IDH genotype). CONCLUSION: Using tumor ROIs, i.e., ellipse bounding box tumor areas, to replace annotated GT tumor areas for training a deep learning scheme causes only a modest decline in subtype prediction performance. As more data become available, this may be a reasonable trade-off, where the decline in performance can be counteracted by additional data.

4.
Acta Neurochir Suppl ; 134: 79-89, 2022.
Article in English | MEDLINE | ID: mdl-34862531

ABSTRACT

The use of deep learning (DL) is rapidly increasing in clinical neuroscience. The term denotes models with multiple sequential layers of learning algorithms, architecturally similar to neural networks of the brain. We provide examples of DL in analyzing MRI data and discuss potential applications and methodological caveats.Important aspects are data pre-processing, volumetric segmentation, and specific task-performing DL methods, such as CNNs and AEs. Additionally, GAN-expansion and domain mapping are useful DL techniques for generating artificial data and combining several smaller datasets.We present results of DL-based segmentation and accuracy in predicting glioma subtypes based on MRI features. Dice scores range from 0.77 to 0.89. In mixed glioma cohorts, IDH mutation can be predicted with a sensitivity of 0.98 and specificity of 0.97. Results in test cohorts have shown improvements of 5-7% in accuracy, following GAN-expansion of data and domain mapping of smaller datasets.The provided DL examples are promising, although not yet in clinical practice. DL has demonstrated usefulness in data augmentation and for overcoming data variability. DL methods should be further studied, developed, and validated for broader clinical use. Ultimately, DL models can serve as effective decision support systems, and are especially well-suited for time-consuming, detail-focused, and data-ample tasks.
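
The Dice scores quoted above measure the overlap between a predicted and a reference segmentation, Dice = 2|P∩R|/(|P|+|R|). A minimal numpy version, with an illustrative epsilon added to avoid division by zero on empty masks:

import numpy as np

def dice_score(pred, ref, eps=1e-7):
    # Dice coefficient between two binary masks (1.0 = perfect overlap).
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(pred, ref).sum()
    return (2.0 * inter + eps) / (pred.sum() + ref.sum() + eps)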


Subjects
Deep Learning, Glioma, Adult, Algorithms, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Neural Networks (Computer)
5.
Brain Sci ; 10(7)2020 Jul 18.
Article in English | MEDLINE | ID: mdl-32708419

ABSTRACT

Brain tumors, such as low-grade gliomas (LGG), are molecularly classified, which requires the surgical collection of tissue samples. Pre-surgical or non-operative identification of LGG molecular type could improve patient counseling and treatment decisions. However, radiographic approaches to LGG molecular classification are currently lacking, as clinicians are unable to reliably predict LGG molecular type using magnetic resonance imaging (MRI) studies. Machine learning approaches may improve the prediction of LGG molecular classification through MRI; however, the development of these techniques requires large annotated datasets. Merging clinical data from different hospitals is needed to increase case numbers, but the use of different scanners and settings can affect the results, and simply combining them into a large dataset often has a significant negative impact on performance. This calls for efficient domain adaptation methods. Despite some previous studies on domain adaptation, mapping MR images from different datasets to a common domain without affecting subtle molecular-biomarker information has not yet been reported. In this paper, we propose an effective domain adaptation method based on the Cycle Generative Adversarial Network (CycleGAN). The dataset is further enlarged by augmenting more MRIs using another GAN approach. Further, to tackle the issue that brain tumor segmentation requires time and anatomical expertise to draw an exact boundary around the tumor, we use a tight bounding box as a strategy. Finally, an efficient deep feature learning method, a multi-stream convolutional autoencoder (CAE) with feature fusion, is proposed for the prediction of molecular subtypes (1p/19q codeletion and IDH mutation). The experiments were conducted on a total of 161 patients with FLAIR and contrast-enhanced T1-weighted (T1ce) MRIs from two different institutions in the USA and France. The proposed scheme is shown to achieve a test accuracy of 74.81% on 1p/19q codeletion and 81.19% on IDH mutation, a marked improvement over the results obtained without domain mapping. This approach is also shown to have performance comparable to several state-of-the-art methods.
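
The core of CycleGAN-based domain mapping is the combination of an adversarial loss with a cycle-consistency loss that forces a mapped-and-mapped-back image to reconstruct the input, which is what helps preserve subtle biomarker-related content. A minimal PyTorch sketch of one direction of the generator objective is shown below; G_AB, G_BA and D_B are placeholder generator/discriminator modules, and the weighting lam=10.0 is a commonly used default rather than necessarily the paper's setting.

import torch
import torch.nn.functional as F

def generator_loss(G_AB, G_BA, D_B, real_A, lam=10.0):
    # Adversarial term: images mapped from domain A to B should fool D_B.
    fake_B = G_AB(real_A)
    pred = D_B(fake_B)
    adv = F.mse_loss(pred, torch.ones_like(pred))   # least-squares GAN loss
    # Cycle-consistency term: A -> B -> A should reconstruct the input image.
    rec_A = G_BA(fake_B)
    cyc = F.l1_loss(rec_A, real_A)
    return adv + lam * cyc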

6.
BMC Med Imaging ; 20(1): 87, 2020 07 29.
Article in English | MEDLINE | ID: mdl-32727476

ABSTRACT

BACKGROUND: This paper addresses the classification of the brain tumor glioma from four modalities of magnetic resonance imaging (MRI) scans (i.e., T1-weighted MRI, contrast-enhanced T1-weighted MRI, T2-weighted MRI, and FLAIR). Currently, many available glioma datasets contain some unlabeled brain scans, and many datasets are moderate in size. METHODS: We propose to exploit deep semi-supervised learning to make full use of the unlabeled data. Deep CNN features were incorporated into a new graph-based semi-supervised learning framework for learning the labels of the unlabeled data, where a new 3D-2D consistency constraint is added to enforce consistent classifications for 2D slices from the same 3D brain scan. A deep learning classifier is then trained to classify different glioma types using both the labeled data and the unlabeled data with estimated labels. To alleviate the overfitting caused by moderate-size datasets, synthetic MRIs generated by Generative Adversarial Networks (GANs) are added to the training of the CNNs. RESULTS: The proposed scheme was tested on two glioma datasets: the TCGA dataset for IDH-mutation prediction (molecular-based glioma subtype classification) and the MICCAI dataset for glioma grading. Our results show good performance (test accuracies of 86.53% on the TCGA dataset and 90.70% on the MICCAI dataset). CONCLUSIONS: The proposed scheme is effective for glioma IDH-mutation prediction and glioma grading, and its performance is comparable to the state-of-the-art.
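
A minimal numpy sketch of graph-based label spreading over CNN slice features is given below, with a crude per-scan averaging step standing in for the paper's 3D-2D consistency constraint. The graph construction, the averaging step and all parameter values are simplifications for illustration, not the authors' exact formulation.

import numpy as np

def propagate_labels(features, labels, scan_ids, n_classes,
                     sigma=1.0, alpha=0.99, iters=50):
    # features: (n_slices, d) CNN features; labels: integer array with -1 for
    # unlabeled slices; scan_ids: which 3D scan each 2D slice comes from.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    deg = W.sum(1)
    S = W / np.sqrt(np.outer(deg, deg) + 1e-12)        # normalized affinity
    Y = np.zeros((len(labels), n_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0          # clamp labeled slices
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y            # label spreading step
        for sid in np.unique(scan_ids):                # crude 3D-2D consistency:
            F[scan_ids == sid] = F[scan_ids == sid].mean(0)  # one score per scan
    return F.argmax(1)                                 # estimated slice labels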


Subjects
Brain Neoplasms/diagnostic imaging, Glioma/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, Brain Neoplasms/classification, Brain Neoplasms/genetics, Brain Neoplasms/pathology, Factual Databases, Deep Learning, Glioma/classification, Glioma/genetics, Glioma/pathology, Humans, Isocitrate Dehydrogenase/genetics, Mutation, Neoplasm Grading, Neural Networks (Computer), Supervised Machine Learning
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 1572-1575, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440693

ABSTRACT

This paper addresses fall detection from videos for e-healthcare and assisted living. Instead of using conventional hand-crafted video features, we propose a fall detection scheme based on a co-saliency-enhanced recurrent convolutional network (RCN) architecture. In the proposed scheme, the RCN is realized by a set of Convolutional Neural Networks (CNNs) at the segment level followed by a Recurrent Neural Network (RNN), specifically a Long Short-Term Memory (LSTM), to handle the time-dependent video frames. The co-saliency-based method enhances salient human activity regions and hence further improves the deep learning performance. The main contributions of the paper are: (a) a recurrent convolutional network (RCN) architecture dedicated to human fall detection in videos; (b) a co-saliency enhancement integrated into the deep learning scheme for further improving performance; (c) extensive empirical tests for performance analysis and evaluation under different network settings and data partitionings. Experiments with the proposed scheme were conducted on an open dataset containing multi-camera videos from different view angles, and the results show very good performance (test accuracy 98.96%). Comparisons with two existing methods provide further support for the proposed scheme.
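
A minimal PyTorch skeleton of the segment-level CNN followed by an LSTM, i.e., the recurrent convolutional structure described above, is sketched below. Layer sizes, depths and the use of the last time step for classification are illustrative assumptions; the co-saliency enhancement is not included.

import torch
import torch.nn as nn

class RecurrentConvNet(nn.Module):
    # Per-frame CNN features followed by an LSTM over time, ending in a
    # binary fall / no-fall classifier. All sizes are illustrative.
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                  # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.reshape(b * t, *clips.shape[2:])).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # classify from the last time step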


Subjects
Accidental Falls, Deep Learning, Neural Networks (Computer), Humans, Video Recording
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 5894-5897, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441677

ABSTRACT

This paper addresses the grading of the brain tumor glioma from multi-sensor images. Different types of scanners (or sensors), such as contrast-enhanced T1-MRI, T2-MRI and FLAIR, show different contrast and are sensitive to different brain tissues and fluid regions. Most existing works use 3D brain images from a single sensor. In this paper, we propose a novel multi-stream deep Convolutional Neural Network (CNN) architecture that extracts and fuses features from multiple sensors for glioma grading/subcategory grading. The main contributions of the paper are: (a) a novel multi-stream deep CNN architecture for glioma grading; (b) sensor fusion of T1-MRI, T2-MRI and/or FLAIR for enhancing performance through feature aggregation; (c) mitigation of overfitting by using 2D brain image slices in combination with 2D image augmentation. Two datasets were used for our experiments: one for classifying low/high-grade gliomas, and another for classifying gliomas with/without 1p19q codeletion. Experiments using the proposed scheme show good results (test accuracy of 90.87% for the former case and 89.39% for the latter). Comparisons with several existing methods provide further support for the proposed scheme. Keywords: brain tumor classification, glioma, 1p19q codeletion, glioma grading, deep learning, multi-stream convolutional neural networks, sensor fusion, T1-MR image, T2-MR image, FLAIR.
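
A minimal PyTorch sketch of the multi-stream idea, one small CNN per MRI modality with feature concatenation (sensor fusion) before the classification head, is given below. The layer sizes and the number of streams are illustrative; the architecture in the paper is deeper and is trained with 2D augmentation.

import torch
import torch.nn as nn

class MultiStreamCNN(nn.Module):
    # One small CNN per MRI modality (e.g. T1ce, T2, FLAIR); the per-stream
    # features are concatenated (sensor fusion) before the grading head.
    def __init__(self, n_streams=3, n_classes=2):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.streams = nn.ModuleList(stream() for _ in range(n_streams))
        self.head = nn.Linear(32 * n_streams, n_classes)

    def forward(self, slices):                 # slices: (batch, n_streams, H, W)
        feats = [s(slices[:, i:i + 1]) for i, s in enumerate(self.streams)]
        return self.head(torch.cat(feats, dim=1))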


Subjects
Brain Neoplasms/classification, Deep Learning, Glioma/classification, Neural Networks (Computer), Brain Neoplasms/diagnostic imaging, Glioma/diagnostic imaging, Humans, Magnetic Resonance Imaging
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 2395-2398, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268807

ABSTRACT

Central nervous system dysfunction in infants may be manifested through inconsistent, rigid and abnormal limb movements. Detecting limb movement anomalies associated with such neurological dysfunctions in infants is the first step towards early treatment for improving infant development. This paper addresses the detection and quantification of limb movement anomalies in infants through non-invasive 3D image analysis using videos from multiple camera views. We propose a novel scheme for tracking the 3D time trajectories of markers on an infant's limbs using video analysis techniques. The proposed scheme employs videos captured from three camera views. This enables us to detect a set of enhanced 3D markers through cross-view matching and to effectively handle marker self-occlusions by other body parts. We track a set of 3D limb-movement trajectories with a set of particle filters running in parallel, enabling more robust 3D tracking of markers, and use the 3D model errors to quantify abrupt limb movements. The proposed work makes a significant advancement over the previous work in [1] by employing tracking in 3D space, and hence overcomes several main barriers that hinder real applications of single-camera-based techniques. To the best of our knowledge, applying such a multi-view video analysis approach for assessing neurological dysfunction in infants through 3D time trajectories of markers on limbs is novel, and could lead to computer-aided tools for diagnosing dysfunctions where early treatment may improve infant development. Experiments were conducted on multi-view neonate videos recorded in a clinical setting, and the results provide further support for the proposed method.
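
A minimal numpy sketch of one predict/weight/resample cycle of a particle filter for a single 3D marker position is given below; the random-walk motion model, the Gaussian observation likelihood and the standard deviations are illustrative assumptions and much simpler than the tracking model used in the paper.

import numpy as np

def particle_filter_step(particles, weights, observation,
                         motion_std=2.0, obs_std=3.0, rng=None):
    # particles: (N, 3) candidate 3D marker positions; observation: measured
    # 3D position (e.g. from cross-view matching); returns resampled particles.
    rng = np.random.default_rng() if rng is None else rng
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # predict
    d2 = ((particles - observation) ** 2).sum(axis=1)
    weights = weights * np.exp(-d2 / (2 * obs_std ** 2)) + 1e-300         # weight
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)      # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))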


Subjects
Central Nervous System Diseases/diagnostic imaging, Three-Dimensional Imaging, Movement/physiology, Algorithms, Calibration, Computer-Assisted Diagnosis, Extremities/physiology, Humans, Computer-Assisted Image Processing, Infant, Newborn Infant, Likelihood Functions, Theoretical Models, Video Recording
10.
Biomed Res Int ; 2015: 760230, 2015.
Article in English | MEDLINE | ID: mdl-26451376

ABSTRACT

The design of an optimal gradient encoding scheme (GES) is a fundamental problem in diffusion MRI. It is well studied for the case of second-order tensor imaging (Gaussian diffusion). However, it has not been investigated for the wide range of non-Gaussian diffusion models. The optimal GES is the one that minimizes the variance of the estimated parameters. Such a GES can be realized by minimizing the condition number of the design matrix (K-optimal design). In this paper, we propose a new approach to solve the K-optimal GES design problem for fourth-order tensor-based diffusion profile imaging. The problem is a nonconvex experiment design problem. Using convex relaxation, we reformulate it as a tractable semidefinite programming problem. Solving this problem leads to several theoretical properties of K-optimal design: (i) the odd moments of the K-optimal design must be zero; (ii) the even moments of the K-optimal design are proportional to the total number of measurements; (iii) the K-optimal design is not unique, in general; and (iv) the proposed method can be used to compute the K-optimal design for an arbitrary number of measurements. Our Monte Carlo simulations support the theoretical results and show that, in comparison with existing designs, the K-optimal design leads to the minimum signal deviation.
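
The K-optimal criterion above minimizes the condition number of the design matrix. As a simplified illustration (for the second-order tensor case rather than the fourth-order model treated in the paper, and by direct evaluation rather than the proposed semidefinite relaxation), the sketch below computes the condition number of the DTI design matrix for a candidate gradient encoding scheme.

import numpy as np

def design_matrix_2nd_order(gradients):
    # Each unit gradient g contributes the row
    # [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz] of the DTI design matrix.
    g = np.asarray(gradients, dtype=float)
    gx, gy, gz = g[:, 0], g[:, 1], g[:, 2]
    return np.column_stack([gx**2, gy**2, gz**2,
                            2 * gx * gy, 2 * gx * gz, 2 * gy * gz])

def condition_number(gradients):
    # Smaller condition number -> estimation less sensitive to noise (K-criterion).
    return np.linalg.cond(design_matrix_2nd_order(gradients))

# Example: evaluate a random 30-direction scheme.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(condition_number(dirs))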


Subjects
Algorithms, Diffusion Magnetic Resonance Imaging/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Computer Simulation, Statistical Models, Reproducibility of Results, Sensitivity and Specificity
11.
IEEE Trans Image Process ; 24(12): 5671-83, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26441448

ABSTRACT

Existing salient object detection models favor over-segmented regions upon which saliency is computed. Such local regions are less effective at representing objects holistically and weaken the emphasis on entire salient objects. As a result, existing methods often fail to highlight an entire object against a complex background. Toward better grouping of objects and background, this paper considers graph cuts, more specifically the normalized graph cut (Ncut), for saliency detection. Since the Ncut partitions a graph in a normalized energy-minimization fashion, the resulting eigenvectors of the Ncut contain good cluster information that can group visual contents. Motivated by this, we directly induce saliency maps from the eigenvectors of the Ncut, contributing to accurate saliency estimation of visual clusters. We implement the Ncut on a graph derived from a moderate number of superpixels. This graph captures both the intrinsic color and edge information of the image data. Starting from the superpixels, an adaptive multi-level region merging scheme is employed to extract cluster information from the Ncut eigenvectors. With saliency measures developed for each merged region, encouraging performance is obtained after across-level integration. Experiments comparing with 13 existing methods on four benchmark datasets (MSRA-1000, SOD, SED, and CSSD) show that the proposed method, Ncut saliency, yields uniform object enhancement and achieves performance comparable to or better than state-of-the-art methods.
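
The key step of the method, obtaining cluster information from Ncut eigenvectors of a superpixel graph, can be sketched in a few lines of numpy. The affinity combining color and spatial distance, the parameter values, and the use of a dense eigendecomposition of the symmetric normalized Laplacian (whose eigenvectors correspond to the Ncut solution up to a diagonal rescaling) are illustrative simplifications; the paper's graph also encodes edge information and is followed by multi-level region merging.

import numpy as np

def ncut_eigenvectors(colors, positions, k=4, sigma_c=0.2, sigma_p=20.0):
    # colors: (n, 3) mean superpixel colors; positions: (n, 2) centroids.
    dc = ((colors[:, None] - colors[None]) ** 2).sum(-1)
    dp = ((positions[:, None] - positions[None]) ** 2).sum(-1)
    W = np.exp(-dc / (2 * sigma_c ** 2)) * np.exp(-dp / (2 * sigma_p ** 2))
    deg = W.sum(1)
    d_inv_sqrt = 1.0 / np.sqrt(deg + 1e-12)
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)        # eigenvectors, ascending eigenvalues
    return vecs[:, 1:k + 1]                # skip the trivial constant eigenvector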

12.
Biomed Res Int ; 2015: 138060, 2015.
Article in English | MEDLINE | ID: mdl-26839880

ABSTRACT

The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its steady performance for the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters.
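
As a small worked illustration of D-optimality for the monoexponential model S(b) = S0·exp(-b·ADC), the sketch below evaluates the determinant of the Fisher information (the inverse of the parameter covariance, up to noise scaling) for a candidate set of b-values under Gaussian noise. The nominal parameter values and the crude comparison are assumptions for illustration only, not the paper's derivation.

import numpy as np

def det_fisher_monoexp(b_values, S0=1.0, adc=1.0e-3, sigma=1.0):
    # Sensitivities of S(b) = S0*exp(-b*ADC) w.r.t. (S0, ADC); the D-criterion
    # maximizes det(J), i.e. minimizes the determinant of the covariance matrix.
    b = np.asarray(b_values, dtype=float)
    e = np.exp(-b * adc)
    grads = np.column_stack([e, -S0 * b * e])
    J = grads.T @ grads / sigma ** 2
    return np.linalg.det(J)

# Example: compare two 16-measurement designs (b in s/mm^2).
print(det_fisher_monoexp([0.0, 1000.0] * 8))              # two-point design
print(det_fisher_monoexp(np.linspace(0.0, 1000.0, 16)))   # evenly spread b-values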


Subjects
Diffusion Magnetic Resonance Imaging/methods, Theoretical Models, Humans
13.
IEEE Trans Cybern ; 43(6): 2005-19, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23757588

ABSTRACT

This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold-based visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with the following main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, realized by employing two particle filters: one on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed alternately to mitigate tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera show robust tracking performance, particularly in scenarios where target objects undergo significant nonplanar pose changes and long-term partial occlusions. Comparisons and evaluations against eight existing state-of-the-art or closely related manifold/non-manifold trackers provide further support for the proposed scheme.
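
Appearance subspaces on a Grassmann manifold are compared through their principal angles; a minimal numpy sketch of such a subspace distance, used here only to illustrate the geometry rather than the paper's full Bayesian tracking model, is given below.

import numpy as np

def grassmann_distance(U1, U2):
    # U1, U2: matrices with orthonormal columns spanning two subspaces of the
    # same dimension. The geodesic distance is the 2-norm of the principal
    # angles, obtained from the singular values of U1^T U2.
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(angles)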


Subjects
Algorithms, Artificial Intelligence, Computer-Assisted Image Interpretation/methods, Theoretical Models, Automated Pattern Recognition/methods, Video Recording/methods, Computer Simulation, Nonlinear Dynamics
14.
Med Image Comput Comput Assist Interv ; 16(Pt 1): 687-94, 2013.
Article in English | MEDLINE | ID: mdl-24505727

ABSTRACT

Several data acquisition schemes for diffusion MRI have been proposed and explored to date for the reconstruction of the 2nd order tensor. Our main contributions in this paper are: (i) the definition of a new class of sampling schemes based on repeated measurements in every sampling point; (ii) two novel schemes belonging to this class; and (iii) a new reconstruction framework for the second scheme. We also present an evaluation, based on Monte Carlo computer simulations, of the performances of these schemes relative to known optimal sampling schemes for both 2nd and 4th order tensors. The results demonstrate that tensor estimation by the proposed sampling schemes and estimation framework is more accurate and robust.


Subjects
Algorithms, Brain/anatomy & histology, Diffusion Magnetic Resonance Imaging/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Computer Simulation, Humans, Neurological Models, Statistical Models, Reproducibility of Results, Sample Size, Sensitivity and Specificity
15.
IEEE Trans Syst Man Cybern B Cybern ; 38(5): 1254-69, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18784010

ABSTRACT

Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data show that DCHs are robust to illumination changes. Using the DCH, efficient implementations of the sequential solutions for the assignment and location steps are proposed. The assignment step includes estimating the depth order of the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes depth-order estimation for the objects in a new group, two-phase mean-shift location, and exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation on public datasets are presented. Experiments on image sequences captured in crowded public environments show good tracking results, with about 90% of the objects successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation indicate that the method is efficient and robust for tracking multiple objects (≥3) under complex occlusion in real-world surveillance scenarios.
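
A simplified numpy sketch of building a compact dominant color histogram from an object's pixels is given below; here the dominant components are simply the most frequent quantized colors, whereas the paper selects them with a distance measure, so the binning and parameters are illustrative assumptions.

import numpy as np

def dominant_color_histogram(pixels, bins=8, n_dominant=31):
    # pixels: (n, 3) RGB values in [0, 255]. Quantize into a coarse 3D grid,
    # keep only the n_dominant most frequent color bins, and renormalize.
    q = np.clip((pixels.astype(float) / 256.0 * bins).astype(int), 0, bins - 1)
    flat = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    keep = np.argsort(hist)[::-1][:n_dominant]      # dominant color bins
    dch = np.zeros_like(hist)
    dch[keep] = hist[keep]
    return dch / (dch.sum() + 1e-12)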


Subjects
Algorithms, Artificial Intelligence, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Security Measures, Video Recording/methods, Computer Systems, Motion (Physics), Reproducibility of Results, Sensitivity and Specificity
16.
IEEE Trans Image Process ; 13(11): 1459-72, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15540455

ABSTRACT

This paper addresses the problem of background modeling for foreground object detection in complex environments. A Bayesian framework that incorporates spectral, spatial, and temporal features to characterize the background appearance is proposed. Under this framework, the background is represented by the most significant and frequent features, i.e., the principal features, at each pixel. A Bayes decision rule is derived for background and foreground classification based on the statistics of principal features. Principal feature representation for both the static and dynamic background pixels is investigated. A novel learning method is proposed to adapt to both gradual and sudden "once-off" background changes. The convergence of the learning process is analyzed and a formula to select a proper learning rate is derived. Under the proposed framework, a novel algorithm for detecting foreground objects from complex environments is then established. It consists of change detection, change classification, foreground segmentation, and background maintenance. Experiments were conducted on image sequences containing targets of interest in a variety of environments, e.g., offices, public buildings, subway stations, campuses, parking lots, airports, and sidewalks. Good results of foreground detection were obtained. Quantitative evaluation and comparison with the existing method show that the proposed method provides much improved results.
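
A drastically simplified per-pixel sketch of the classify-then-update loop described above is given below: each pixel keeps a histogram of quantized features, a feature frequent enough at that pixel is treated as background, and gradual learning decays old statistics. The full method uses spectral, spatial and temporal features, a proper Bayes decision rule over principal features, and separate handling of sudden "once-off" changes, none of which is reproduced here; thresholds and rates are illustrative.

import numpy as np

class PixelBackgroundModel:
    # One histogram of quantized features per pixel; frequent features play the
    # role of the "principal features" that characterize the background.
    def __init__(self, n_bins=32, learning_rate=0.02, bg_threshold=0.2):
        self.hist = np.full(n_bins, 1.0 / n_bins)
        self.learning_rate = learning_rate
        self.bg_threshold = bg_threshold

    def classify_and_update(self, bin_idx):
        # Background if the observed feature has accumulated enough probability.
        is_background = self.hist[bin_idx] >= self.bg_threshold
        # Gradual adaptation: decay all bins, reinforce the observed feature.
        self.hist *= (1.0 - self.learning_rate)
        self.hist[bin_idx] += self.learning_rate
        return is_background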


Subjects
Algorithms, Artificial Intelligence, Computer-Assisted Image Interpretation/methods, Statistical Models, Automated Pattern Recognition/methods, Computer-Assisted Signal Processing, Subtraction Technique, Cluster Analysis, Computer Graphics, Computer Simulation, Humans, Image Enhancement/methods, Information Storage and Retrieval/methods, Biological Models, Computer-Assisted Numerical Analysis, Reproducibility of Results, Sensitivity and Specificity