Results 1 - 20 of 25
1.
Stud Health Technol Inform ; 184: 161-7, 2013.
Article in English | MEDLINE | ID: mdl-23400150

ABSTRACT

Surgical training plays an important role in assisting residents to develop critical skills. Providing effective surgical training, however, remains a challenging task. Existing videotaped training instructions can only show imagery from a fixed viewpoint that lacks both depth perception and interactivity. We present a new portable immersive surgical training system that is capable of acquiring and displaying high-fidelity 3D reconstructions of actual surgical procedures. Our solution utilizes a set of Microsoft Kinect sensors to simultaneously recover the participants, the surgical environment, and the surgical scene itself. We then develop a space-time navigator to allow trainees to witness and explore a prior procedure as if they were there. Preliminary feedback from residents shows that our system is much more effective than conventional videotaped systems.


Subjects
Actigraphy/instrumentation, Psychological Biofeedback/instrumentation, Computer-Assisted Instruction/instrumentation, Three-Dimensional Imaging/instrumentation, Computer-Assisted Surgery/instrumentation, Transducers, User-Computer Interface, Colorimetry/instrumentation, Educational Measurement/methods, Equipment Design, Equipment Failure Analysis, Humans
2.
Stud Health Technol Inform ; 173: 186-92, 2012.
Article in English | MEDLINE | ID: mdl-22356984

ABSTRACT

Surgery simulation is playing an increasing role in medical education. A long-standing problem in this area is how to integrate fast yet realistic haptic feedback into the system. In this paper, we propose an algorithm to accelerate the recently proposed volume-based haptic feedback approach. Unlike existing techniques that require separately scanning along all three axes, we scan the volume only once along one axis and recover the penetration information along the other two based on geometric constraints and heuristics. This significantly reduces the computational cost and doubles the haptic refresh rate, which markedly improves the stability of the haptic feedback.


Subjects
Algorithms, Elastic Modulus/physiology, Feedback, Touch Perception, Computer Simulation, Humans, Operative Surgical Procedures
3.
Stud Health Technol Inform ; 173: 193-9, 2012.
Article in English | MEDLINE | ID: mdl-22356985

ABSTRACT

Surgery simulation plays an important role in surgery planning, surgeon training, and telemedicine. A long-standing problem in this area is how to integrate coherent visual illustration with deformation. In this paper, we present a new non-photorealistic surgery simulation system that combines force visualization and dynamic pencil-stroke illustration. We estimate the elastic force field in real time and integrate it with the contact force to form a combined force map. Our rendering module then dynamically computes the principal directions on deforming organ models and applies color-coded, pencil-style strokes onto the model to illustrate deformations. We implement these modules on the GPU using NVIDIA's CUDA. Our system can faithfully and coherently reveal the geometric deformation of organs under the force field.


Subjects
Computer Simulation, Elasticity Imaging Techniques/methods, Operative Surgical Procedures, Humans, Anatomic Models
4.
Stud Health Technol Inform ; 163: 224-30, 2011.
Article in English | MEDLINE | ID: mdl-21335793

ABSTRACT

In surgical procedures, haptic interaction provides surgeons with indispensable information to accurately locate the surgical target. This is especially critical when visual feedback cannot provide sufficient information and tactile interrogation, such as palpating a region of tissue, is required to locate a specific underlying tumor. However, in most current surgery simulators, the haptic interaction model is usually simplified into a contact sphere or rod model, leaving haptic feedback for arbitrarily shaped intersections between the target tissue and the surgical instrument unreliable. In this paper, a novel haptic feedback algorithm is introduced for generating feedback forces in surgery simulations. The proposed algorithm first employs three Layered Depth Images (LDI) to sample the 3D objects in the X, Y, and Z directions. A secondary analysis scans through the two sampled meshes and detects their penetration volume. Based on the principle that the interaction force should minimize the penetration volume, the haptic feedback force is derived directly. Additionally, a post-processing technique is developed to render distinct physical tissue properties across different interaction areas. The proposed approach does not require any pre-processing and is applicable to both rigid and deformable objects.
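A minimal numerical sketch of the penetration-volume idea, not the paper's LDI implementation: the two objects are voxelized into boolean occupancy grids, the overlap volume is measured, and the feedback force is taken along the direction that most rapidly reduces that overlap. The grid resolution, spacing, and stiffness scale below are illustrative assumptions.

```python
import numpy as np

def penetration_force(tool_occ, tissue_occ, spacing=1.0, stiffness=1.0):
    """Estimate a feedback force from the overlap of two voxelized objects.
    The force points along the direction that shrinks the penetration
    volume fastest (a crude stand-in for the LDI-based approach)."""
    overlap = tool_occ & tissue_occ                 # penetrating voxels
    volume = overlap.sum() * spacing ** 3           # penetration volume
    if volume == 0:
        return np.zeros(3), 0.0

    # Finite-difference the overlap volume w.r.t. small tool translations
    # along each axis; the negative gradient gives the force direction.
    grad = np.zeros(3)
    for axis in range(3):
        v_plus = (np.roll(tool_occ, 1, axis=axis) & tissue_occ).sum() * spacing ** 3
        v_minus = (np.roll(tool_occ, -1, axis=axis) & tissue_occ).sum() * spacing ** 3
        grad[axis] = (v_plus - v_minus) / (2.0 * spacing)

    norm = np.linalg.norm(grad)
    direction = -grad / norm if norm > 0 else np.zeros(3)
    return stiffness * volume * direction, volume

# Toy example: a small spherical tool pressed into a flat tissue block.
x, y, z = np.mgrid[0:32, 0:32, 0:32]
tissue = z < 16
tool = (x - 16) ** 2 + (y - 16) ** 2 + (z - 14) ** 2 < 25
force, vol = penetration_force(tool, tissue)
print("penetration volume:", vol, "force:", force)
```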


Subjects
Psychological Biofeedback/physiology, Connective Tissue/physiology, Connective Tissue/surgery, Biological Models, Computer-Assisted Surgery/methods, Touch/physiology, User-Computer Interface, Computer Simulation, Elastic Modulus/physiology, Hardness/physiology, Humans
5.
Stud Health Technol Inform ; 163: 691-5, 2011.
Article in English | MEDLINE | ID: mdl-21335882

ABSTRACT

We are developing agents for positron emission tomography (PET) imaging of cancer gene mRNA expression and software to fuse mRNA PET images with anatomical computerized tomography (CT) images to enable volumetric (3D) haptic (touch-and-feel) simulation of pancreatic cancer and surrounding organs prior to surgery in a particular patient. We have identified a novel ligand specific for epidermal growth factor receptor (EGFR) to direct PET agent uptake specifically into cancer cells, and created a volumetric haptic surgical simulation of human pancreatic cancer reconstructed from patient CT data. Young's modulus and the Poisson ratio for each tissue will be adjusted to fit the experience of participating surgeons.


Subjects
Three-Dimensional Imaging/methods, Biological Models, Molecular Imaging/methods, Neoplasms/diagnostic imaging, Neoplasms/surgery, Computer-Assisted Surgery/methods, User-Computer Interface, Computer Simulation, Drug Design, Humans, Positron-Emission Tomography/methods, Radiopharmaceuticals/chemical synthesis
6.
Comput Biol Med ; 38(1): 1-13, 2008 Jan.
Article in English | MEDLINE | ID: mdl-17669389

ABSTRACT

The electrocardiogram (ECG) is widely used for the diagnosis of heart diseases. Good-quality ECGs are utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts. Two dominant artifacts present in ECG recordings are: (1) high-frequency noise caused by electromyogram-induced noise, power line interference, or mechanical forces acting on the electrodes; (2) baseline wander (BW) that may be due to respiration or the motion of the patient or the instruments. These artifacts severely limit the utility of recorded ECGs and thus need to be removed for better clinical evaluation. Several methods have been developed for ECG enhancement. In this paper, we propose a new ECG enhancement method based on the recently developed empirical mode decomposition (EMD). The proposed EMD-based method is able to remove both high-frequency noise and BW with minimal signal distortion. The method is validated through experiments on the MIT-BIH databases. Both quantitative and qualitative results are given. The simulations show that the proposed EMD-based method provides very good results for denoising and BW removal.
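A sketch of the reconstruction step only, under stated assumptions: given the intrinsic mode functions (IMFs) from any EMD implementation, high-frequency noise lives mostly in the first few IMFs and baseline wander in the last few plus the residue, so a cleaner ECG is rebuilt from the middle IMFs. The index counts are illustrative defaults; the paper selects and windows the noisy IMFs (to preserve the QRS complex) rather than discarding them outright.

```python
import numpy as np

def emd_denoise(imfs, n_noise_imfs=2, n_bw_imfs=2):
    """Reconstruct a cleaner ECG from an empirical mode decomposition.

    imfs : array of shape (n_imfs, n_samples), fastest oscillation first,
           e.g. as returned by PyEMD's EMD().emd(signal) (an assumption
           about tooling; any EMD implementation works).

    The first `n_noise_imfs` IMFs (high-frequency noise) and the last
    `n_bw_imfs` IMFs (baseline wander, with the residue already excluded)
    are left out of the reconstruction.
    """
    n = imfs.shape[0]
    last = max(n_noise_imfs, n - n_bw_imfs)
    return imfs[n_noise_imfs:last].sum(axis=0)
```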


Subjects
Artifacts, Electrocardiography/methods, Computer-Assisted Signal Processing, Algorithms, Cardiac Arrhythmias/diagnosis, Cardiac Arrhythmias/physiopathology, Factual Databases, Computer-Assisted Diagnosis/methods, Humans, Reproducibility of Results
7.
Comput Biol Med ; 99: 53-62, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29886261

ABSTRACT

Detecting and classifying cardiac arrhythmias is critical to the diagnosis of patients with cardiac abnormalities. In this paper, a novel approach based on deep learning methodology is proposed for the classification of single-lead electrocardiogram (ECG) signals. We demonstrate the application of the Restricted Boltzmann Machine (RBM) and deep belief networks (DBN) for ECG classification following detection of ventricular and supraventricular heartbeats using single-lead ECG. The effectiveness of the proposed algorithm is illustrated using real ECG signals from the widely used MIT-BIH database. Simulation results demonstrate that, with a suitable choice of parameters, RBM and DBN can achieve high average recognition accuracies for ventricular ectopic beats (93.63%) and supraventricular ectopic beats (95.57%) at a low sampling rate of 114 Hz. Experimental results indicate that classifiers built on this deep learning framework achieve state-of-the-art performance at lower sampling rates and with simpler features than traditional methods. In particular, features extracted at a sampling rate of 114 Hz, combined with deep learning, provided enough discriminatory power for the classification task, with performance comparable to that of traditional methods despite the much lower sampling rate and simpler features. Thus, the proposed deep neural network algorithm demonstrates that deep learning-based methods offer accurate ECG classification and could potentially be extended to other physiological signal classifications, such as those in arterial blood pressure (ABP), nerve conduction (EMG), and heart rate variability (HRV) studies.
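A hedged, shallow sketch of the RBM-plus-classifier idea using scikit-learn's BernoulliRBM as an unsupervised feature extractor feeding a linear classifier. The data, segment length, and hyperparameters are made-up placeholders, and the paper's actual model stacks several RBMs into a deep belief network with fine-tuning, which this sketch omits.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Hypothetical data: fixed-length heartbeat segments sampled at 114 Hz
# (X: n_beats x n_samples, y: 0 = normal, 1 = ventricular ectopic beat).
rng = np.random.default_rng(0)
X = rng.random((500, 128))
y = rng.integers(0, 2, 500)

# One RBM feeding a linear classifier; a DBN would stack several RBMs.
model = Pipeline([
    ("scale", MinMaxScaler()),          # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```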


Subjects
Cardiac Arrhythmias/physiopathology, Factual Databases, Deep Learning, Electrocardiography, Computer-Assisted Signal Processing, Humans
8.
IEEE Trans Neural Syst Rehabil Eng ; 15(2): 310-21, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17601201

ABSTRACT

Methods to automatically convert graphics into raised-line images have been recently investigated. In this paper, concepts from previous research are extended to the vector graphics case, producing tactile pictures in which important features are emphasized. The proposed algorithm extracts object boundaries and employs a classification process, based on a graphic's hierarchical structure, to determine critical outlines. A single parameter is introduced into the classification process, enabling users to tailor graphics to their own preferences. The resulting outlines are printed using a Braille printer to produce tactile output. Critical outlines are embossed with raised dots of highest height while other lines and details are embossed with a lower height. Psychophysical experiments including discrimination, identification, and comprehension are utilized to evaluate and compare the proposed algorithm. Results indicate that the proposed method outperforms other methods in all three considered tasks. The results also show that emphasizing important features significantly increases comprehension of tactile graphics, validating the proposed method's effectiveness in conveying visual information.


Subjects
Algorithms, Computer Graphics, Computer-Assisted Image Interpretation/methods, Sensory Aids, Computer-Assisted Signal Processing, Touch, User-Computer Interface, Vision Disorders/rehabilitation, Computer Peripherals
9.
IEEE Trans Med Imaging ; 26(5): 712-27, 2007 May.
Article in English | MEDLINE | ID: mdl-17518065

ABSTRACT

Speckle is a multiplicative noise that degrades ultrasound images. Recent advancements in ultrasound instrumentation and portable ultrasound devices call for more robust despeckling techniques, for both routine clinical practice and teleconsultation. Methods previously proposed for speckle reduction suffer from two major limitations: 1) noise attenuation is not sufficient, especially in smooth and background areas; 2) existing methods do not sufficiently preserve or enhance edges; they only inhibit smoothing near edges. In this paper, we propose a novel technique that reduces speckle more effectively than previous methods while jointly enhancing the edge information, rather than just inhibiting smoothing. The proposed method utilizes the Rayleigh distribution to model the speckle and adopts a robust maximum-likelihood estimation approach. The resulting estimator is statistically analyzed through first- and second-moment derivations. A tuning parameter that naturally evolves in the estimation equation is analyzed, and an adaptive method utilizing the instantaneous coefficient of variation is proposed to adjust this parameter. To further tailor performance, a weighted version of the proposed estimator is introduced to exploit the varying statistics of the input samples. Finally, the proposed method is evaluated and compared to well-accepted methods through simulations utilizing synthetic and real ultrasound data.
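A schematic stand-in, not the paper's Rayleigh maximum-likelihood estimator: it only illustrates how the instantaneous coefficient of variation (ICOV) can steer the amount of local smoothing, so homogeneous speckle is smoothed strongly while high-variation (edge) regions stay close to the input. The window size and the ICOV-to-weight mapping are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_despeckle(img, window=7, eps=1e-8):
    """Toy adaptive smoother driven by the instantaneous coefficient of
    variation. The paper instead derives a weighted maximum-likelihood
    estimator under a Rayleigh speckle model; only the ICOV-driven
    adaptivity is sketched here."""
    img = img.astype(float)
    local_mean = uniform_filter(img, window)
    local_sqmean = uniform_filter(img ** 2, window)
    local_var = np.maximum(local_sqmean - local_mean ** 2, 0.0)

    # ICOV: ratio of local standard deviation to local mean.
    icov = np.sqrt(local_var) / (local_mean + eps)

    # Low ICOV (homogeneous speckle) -> heavy smoothing;
    # high ICOV (edges, detail) -> keep the input value.
    alpha = np.clip(1.0 - icov / (icov.max() + eps), 0.0, 1.0)
    return alpha * local_mean + (1.0 - alpha) * img
```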


Subjects
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Prenatal Ultrasonography/methods, Computer Simulation, Humans, Likelihood Functions, Biological Models, Statistical Models, Reproducibility of Results, Sensitivity and Specificity
10.
IEEE Trans Biomed Eng ; 54(4): 766-9, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405386

ABSTRACT

Most recent electrocardiogram (ECG) compression approaches based on the wavelet transform are implemented using the discrete wavelet transform. In contrast, wavelet packets (WP) are not extensively used, although they provide an adaptive decomposition for representing signals. In this paper, we present a thresholding-based method to encode ECG signals using WP. The design of the compressor has been carried out according to two main goals: (1) the scheme should be simple enough to allow real-time implementation; (2) quality, i.e., the reconstructed signal should be as similar as possible to the original signal. The proposed scheme is versatile in that neither QRS detection nor a priori signal information is required; it can thus be applied to any ECG. Results show that WP perform efficiently and can now be considered as an alternative in ECG compression applications.
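A minimal sketch assuming the PyWavelets package: one ECG segment is decomposed into a wavelet packet tree, small coefficients are zeroed (the surviving coefficients are what an encoder would store), and the segment is reconstructed. The wavelet, depth, and threshold rule are illustrative choices; the paper's encoder adds quantization and coding steps not shown here.

```python
import numpy as np
import pywt

def wp_compress(signal, wavelet="db4", level=4, keep_fraction=0.1):
    """Threshold-based wavelet-packet compression of one ECG segment.
    Only the largest-magnitude coefficients are kept; the rest are zeroed
    before reconstruction. Parameter choices are illustrative."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    coeffs = np.concatenate([node.data for node in nodes])

    # Keep roughly the top `keep_fraction` of coefficients by magnitude.
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    for node in nodes:
        node.data = np.where(np.abs(node.data) >= threshold, node.data, 0.0)

    return wp.reconstruct(update=False)[: len(signal)]
```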


Subjects
Algorithms, Artifacts, Data Compression/methods, Electrocardiography/methods, Computer-Assisted Signal Processing, Feasibility Studies, Humans, Reproducibility of Results, Sensitivity and Specificity
11.
IEEE Trans Image Process ; 15(12): 3636-54, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17153940

ABSTRACT

The rank information of samples is widely utilized in nonlinear signal processing algorithms. The recently developed fuzzy transformation theory introduces the concept of fuzzy ranks, which incorporates sample spread (or sample diversity) information into the sample ranking framework. Thus, the fuzzy rank reflects a sample's rank as well as its similarity to the other samples (namely, joint rank order and spread), and can be utilized to improve the performance of conventional rank-order-based filters. In this paper, the well-known lower-upper-middle (LUM) filters are generalized utilizing fuzzy ranks, yielding the class of fuzzy rank LUM (F-LUM) filters. Statistical and deterministic properties of the F-LUM filters are derived, showing that the F-LUM smoothers have impulsive noise removal capability similar to the LUM smoothers, while preserving image details better. The F-LUM sharpeners are capable of enhancing strong edges while simultaneously preserving small variations. The performance of the F-LUM filters is evaluated for the problems of image impulsive noise removal, sharpening, and edge-detection preprocessing. The experimental results show that the F-LUM smoothers can achieve a better tradeoff between noise removal and detail preservation than the LUM smoothers. The F-LUM sharpeners are capable of sharpening the image edges without amplifying the noise or distorting the fine details. The joint smoothing and sharpening operation of the general F-LUM filters also showed superiority in the edge-detection preprocessing application. In conclusion, the simplicity and versatility of the F-LUM filters and their advantages over the conventional LUM filters are desirable in many practical applications. This also shows that utilizing fuzzy ranks in filter generalization is a promising methodology.


Subjects
Algorithms, Artifacts, Fuzzy Logic, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Computer-Assisted Signal Processing, Computer-Assisted Numerical Analysis, Reproducibility of Results, Sensitivity and Specificity
12.
IEEE Trans Image Process ; 15(11): 3294-310, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17076391

ABSTRACT

Quadratic Volterra filters are effective in image sharpening applications. The linear combination of polynomial terms, however, yields poor performance in noisy environments. Weighted median (WM) filters, in contrast, are well known for their outlier suppression and detail preservation properties. The WM sample selection methodology is naturally extended to the quadratic sample case, yielding a filter structure referred to as quadratic weighted median (QWM) that exploits the higher order statistics of the observed samples while simultaneously being robust to outliers arising in the higher order statistics of environment noise. Through statistical analysis of higher order samples, it is shown that, although the parent Gaussian distribution is light tailed, the higher order terms exhibit heavy-tailed distributions. The optimal combination of terms contributing to a quadratic system, i.e., cross and square, is approached from a maximum likelihood perspective which yields the WM processing of these terms. The proposed QWM filter structure is analyzed through determination of the output variance and breakdown probability. The studies show that the QWM exhibits lower variance and breakdown probability indicating the robustness of the proposed structure. The performance of the QWM filter is tested on constant regions, edges and real images, and compared to its weighted-sum dual, the quadratic Volterra filter. The simulation results show that the proposed method simultaneously suppresses the noise and enhances image details. Compared with the quadratic Volterra sharpener, the QWM filter exhibits superior qualitative and quantitative performance in noisy image sharpening.
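A hedged sketch of the two building blocks the abstract combines: the quadratic (square and cross) terms formed from a local window, and a weighted median operator used to combine them robustly instead of the weighted sum a Volterra filter would use. The window, the uniform weights, and the direct combination below are illustrative; the paper's full QWM sharpener structure is not reproduced.

```python
import numpy as np
from itertools import combinations

def weighted_median(samples, weights):
    """Weighted median: the sample minimizing the weighted absolute error."""
    order = np.argsort(samples)
    s, w = np.asarray(samples)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return s[np.searchsorted(cum, 0.5 * cum[-1])]

def quadratic_terms(window):
    """Square and cross products of the window samples: the higher-order
    terms a quadratic (Volterra-type) filter operates on."""
    idx = range(len(window))
    cross = [window[i] * window[j] for i, j in combinations(idx, 2)]
    square = [x * x for x in window]
    return np.array(cross + square)

# Toy usage: combine the quadratic terms of one 3x3 window (with an
# impulsive outlier) via a weighted median rather than a weighted sum.
window = np.array([10., 12., 11., 9., 250., 10., 13., 11., 12.])
terms = quadratic_terms(window)
print(weighted_median(terms, np.ones_like(terms)))
```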


Subjects
Algorithms, Filtration/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Statistical Models, Computer Simulation, Reproducibility of Results, Sensitivity and Specificity, Stochastic Processes
13.
IEEE Trans Image Process ; 15(4): 910-27, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16579378

ABSTRACT

The spatial and rank (SR) orderings of samples play a critical role in most signal processing algorithms. The recently introduced fuzzy ordering theory generalizes traditional, or crisp, SR ordering concepts and defines the fuzzy (spatial) samples, fuzzy order statistics, fuzzy spatial indexes, and fuzzy ranks. Here, we introduce a more general concept, the fuzzy transformation (FZT), which refers to the mapping of the crisp samples, order statistics, and SR ordering indexes to their fuzzy counterparts. We establish the element invariant and order invariant properties of the FZT. These properties indicate that fuzzy spatial samples and fuzzy order statistics constitute the same set and, under commonly satisfied membership function conditions, the sample rank order is preserved by the FZT. The FZT also possesses clustering and symmetry properties, which are established through analysis of the distributions and expectations of fuzzy samples and fuzzy order statistics. These properties indicate that the FZT incorporates sample diversity into the ordering operation, which can be utilized in the generalization of conventional filters. Here, we establish the fuzzy weighted median (FWM), fuzzy lower-upper-middle (FLUM), and fuzzy identity filters as generalizations of their crisp counterparts. The advantage of the fuzzy generalizations is illustrated in the applications of DCT coded image deblocking, impulse removal, and noisy image sharpening.
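A small illustrative sketch of the fuzzy-rank idea, under an assumed Gaussian membership function (the paper treats a general family of membership functions and a fuller set of fuzzy SR quantities): each sample's fuzzy rank is a soft count of how many samples are smaller, so similarly valued samples receive similar ranks, unlike crisp ranks.

```python
import numpy as np

def fuzzy_ranks(samples, spread=1.0):
    """Illustrative fuzzy ranks under a Gaussian membership function.

    Crisp ranking counts, for each sample, how many samples are smaller;
    the fuzzy version softens that count with a membership of the pairwise
    difference, so sample spread (diversity) enters the ordering. The
    Gaussian membership and its `spread` are illustrative choices only.
    """
    x = np.asarray(samples, dtype=float)
    diff = x[:, None] - x[None, :]
    soft_leq = np.where(diff >= 0,
                        1.0 - 0.5 * np.exp(-(diff ** 2) / (2 * spread ** 2)),
                        0.5 * np.exp(-(diff ** 2) / (2 * spread ** 2)))
    return soft_leq.sum(axis=1)

x = [3.0, 3.1, 10.0, 2.9]
print(np.argsort(np.argsort(x)) + 1)   # crisp ranks: far apart for 3.0 vs 3.1
print(fuzzy_ranks(x))                  # nearby samples get nearby fuzzy ranks
```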


Subjects
Algorithms, Fuzzy Logic, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Computer Graphics, Computer Simulation, Information Storage and Retrieval/methods, Statistical Models, Computer-Assisted Numerical Analysis, Computer-Assisted Signal Processing
14.
IEEE Trans Image Process ; 15(7): 1900-15, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16830911

ABSTRACT

Partition-based Weighted Sum (P-WS) filtering is an effective method for processing nonstationary signals, especially those with regularly occurring structures, such as images. P-WS filters were originally formulated as Hard-partition Weighted Sum (HP-WS) filters and were successfully applied to image denoising. This formulation relied on intuitive arguments to generate the filter class. Here we present a statistical analysis that justifies the use of weighted sum filters after observation space partitioning. Unfortunately, the HP-WS filters are nondifferentiable, and an analytical solution for their global optimization is therefore difficult to obtain. A two-stage suboptimal training procedure has been reported in the literature, but prior to this research no evaluation of the optimality of this approach had been reported. Here, a Genetic Algorithm (GA) HP-WS optimization procedure is developed which shows, in simulations, that the simpler two-stage training procedure yields near-optimal results. Also developed in this paper are Soft-partition Weighted Sum (SP-WS) filters. The SP-WS filters utilize soft, or fuzzy, partitions that yield a differentiable filtering operation, enabling the development of gradient-based optimization procedures. Image denoising simulation results are presented comparing HP-WS and SP-WS filters, their optimization procedures, and wavelet-based image denoising. These results show that P-WS filters in general outperform traditional and wavelet-based image filters, and that SP-WS filters utilizing soft partitioning not only allow for simple optimization but also yield improved performance.
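A small sketch of the hard-partition weighted-sum idea under stated assumptions: observation windows are assigned to partitions with a k-means codebook (a common partitioning choice, not necessarily the paper's), and each partition gets its own least-squares weight vector. The soft-partition variant and the genetic-algorithm optimization are not shown, and the training data below are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_hpws(windows, targets, n_partitions=8):
    """Fit a hard-partition weighted-sum filter: cluster the observation
    windows, then fit one least-squares weight vector per cluster."""
    km = KMeans(n_clusters=n_partitions, n_init=10, random_state=0).fit(windows)
    weights = np.zeros((n_partitions, windows.shape[1]))
    for p in range(n_partitions):
        idx = km.labels_ == p
        if idx.any():
            weights[p], *_ = np.linalg.lstsq(windows[idx], targets[idx], rcond=None)
    return km, weights

def apply_hpws(windows, km, weights):
    """Filter output: each window uses the weights of its own partition."""
    labels = km.predict(windows)
    return np.einsum("ij,ij->i", windows, weights[labels])

# Toy usage with random 3x3 windows flattened to length-9 vectors.
rng = np.random.default_rng(0)
W = rng.random((1000, 9))
t = W.mean(axis=1) + 0.01 * rng.standard_normal(1000)   # toy clean target
km, w = train_hpws(W, t)
print(apply_hpws(W[:5], km, w))
```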


Subjects
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Statistical Models, Computer-Assisted Signal Processing, Computer Graphics, Computer Simulation, Filtration/methods, Computer-Assisted Numerical Analysis, Stochastic Processes
15.
Proc IEEE Int Symp Biomed Imaging ; 2015: 1284-1287, 2015 Apr.
Article in English | MEDLINE | ID: mdl-28101301

ABSTRACT

Automated profiling of nuclear architecture in histology sections can potentially help predict clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods for nuclear segmentation are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction and the latter produces, for each pixel, a likelihood of belonging to the nuclear region or the background. During classification, the pixel label is simply determined by a thresholding operation applied to the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and achieves competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge.
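A schematic sketch of the pipeline's stages under stated assumptions: a small bank of fixed convolutional filters (random here, whereas the paper learns them jointly with the regressor) extracts per-pixel features, a sparse linear regressor (Lasso) maps features to a nuclear-likelihood map, and a threshold yields the mask. The image, mask, filter bank, and hyperparameters are toy placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.linear_model import Lasso

def conv_features(image, filters):
    """Per-pixel features from a bank of 2-D convolutional filters."""
    return np.stack([fftconvolve(image, f, mode="same") for f in filters],
                    axis=-1)

rng = np.random.default_rng(0)
filters = [rng.standard_normal((7, 7)) for _ in range(8)]  # stand-in bank;
                                                           # the paper learns these jointly
image = rng.random((64, 64))                               # toy image patch
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1                                     # toy binary annotation

feats = conv_features(image, filters).reshape(-1, len(filters))
reg = Lasso(alpha=1e-3, max_iter=5000).fit(feats, mask.ravel())  # sparse regressor

likelihood = reg.predict(feats).reshape(image.shape)  # per-pixel likelihood map
segmentation = likelihood > 0.5                       # final thresholding step
```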

16.
IEEE J Biomed Health Inform ; 19(2): 508-19, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24846672

ABSTRACT

Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods.
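A bare-bones compressed-sensing baseline shown only to fix the setting: random Gaussian measurements of a synthetic ECG-like segment and an l1-regularized recovery of its wavelet coefficients. The paper's actual contribution, exploiting wavelet dependencies across scales and the shared support of consecutive segments, is not implemented here; the wavelet choice, sizes, and PyWavelets usage are assumptions of this sketch.

```python
import numpy as np
import pywt
from sklearn.linear_model import Lasso

n, m = 256, 96                      # segment length and number of measurements
rng = np.random.default_rng(0)

# Synthetic "ECG-like" segment: a slow oscillation plus a few sharp spikes.
t = np.arange(n)
x = 0.2 * np.sin(2 * np.pi * t / n)
x[[40, 120, 200]] += 1.0

def synthesis_matrix(n, wavelet="db4", level=4):
    """Columns are inverse wavelet transforms of unit coefficient vectors,
    so x ~= Psi @ c with c sparse for ECG-like signals."""
    base = pywt.wavedec(np.zeros(n), wavelet, level=level, mode="periodization")
    _, slices = pywt.coeffs_to_array(base)
    cols = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        coeffs = pywt.array_to_coeffs(e, slices, output_format="wavedec")
        cols.append(pywt.waverec(coeffs, wavelet, mode="periodization"))
    return np.array(cols).T

Psi = synthesis_matrix(n)                         # sparsifying (synthesis) basis
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ x                                       # compressed measurements

# l1-regularized recovery of the wavelet coefficients, then synthesis back.
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(Phi @ Psi, y)
x_hat = Psi @ lasso.coef_
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```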


Subjects
Data Compression/methods, Electrocardiography/methods, Algorithms, Factual Databases, Humans, Remote Sensing Technology, Wavelet Analysis, Wireless Technology
17.
IEEE Trans Neural Syst Rehabil Eng ; 12(2): 216-27, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15218936

ABSTRACT

Reading text and understanding images by touch is an important alternative and additional source of information when sight is absent or lost. Tactile graphics and models, such as edge maps and binary outputs, provide simple access to images for blind persons. This paper introduces an approach to modeling the human tactile system based on the responses produced by stimuli on microcapsule paper. The model is then used to generate optimal halftone patterns on microcapsule paper for the effective production of tactile graphics.


Subjects
Blindness/physiopathology, Blindness/rehabilitation, Computer-Assisted Image Interpretation/methods, Biological Models, Reading, Sensory Aids, Touch, Adult, Computer Simulation, Female, Fingers/innervation, Fingers/physiopathology, Humans, Male, Computer-Assisted Signal Processing
18.
IEEE Trans Biomed Eng ; 51(3): 471-83, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15000378

ABSTRACT

The electroencephalogram is a noninvasive method of demonstrating cerebral function. The fetal electroencephalogram (FEEG) contains important information regarding the status of a fetus. It is believed that disorganization of normal FEEG development may help detect the onset of cerebral palsy and mental retardation syndromes. Unfortunately, noninvasive methods of monitoring the FEEG are not currently available. Noninvasively obtained abdominal surface electrical recordings include FEEG components, but are dominated by large interfering components and thus have a very low signal-to-noise ratio. In this paper, we propose a multistep extraction procedure to separate the four main components of transabdominal recordings: 1) the maternal ECG; 2) the fetal ECG (FECG); 3) the FEEG; and 4) interfering baseline wander. The algorithm is tested on simulated and real transabdominal recordings. This study shows that the proposed method successfully extracts the desired FEEG signal.


Subjects
Algorithms, Artifacts, Brain/embryology, Brain/physiology, Computer-Assisted Diagnosis/methods, Electroencephalography/methods, Fetal Monitoring/methods, Computer-Assisted Signal Processing, Abdomen, Computer Simulation, Female, Humans, Pregnancy, Stochastic Processes
19.
IEEE Trans Image Process ; 12(2): 140-52, 2003.
Article in English | MEDLINE | ID: mdl-18237895

ABSTRACT

Permutation filters are a broad class of nonlinear selection filters that utilize the complete spatial and rank order information of observation samples. This use of joint spatial-rank information has proven useful in numerous applications. The application of permutation filters, however, is limited by the factorial growth in the number of spatial-rank orderings. Although M-permutation filters have been developed to address the growth in orderings, their a priori uniform selection of samples is not appropriate in most cases. Permutation filter implementations based on acyclic connected graphs provide a more general approach that allows the level of ordering information utilized to adjust automatically to the problem at hand. In addition to developing and analyzing graph implementations of permutation filters, this paper presents an LNE-based optimization of the graph structure and filter operation. Simulation results illustrating the performance of the optimization technique and the advantages of the graph implementation are presented.

20.
IEEE Trans Vis Comput Graph ; 10(3): 252-65, 2004.
Article in English | MEDLINE | ID: mdl-18579957

ABSTRACT

This paper proposes a novel approach for smoothing surfaces represented by triangular meshes. The proposed method is a two-step procedure: surface normal smoothing through fuzzy vector median (FVM) filtering, followed by integration of the surface normals for vertex position update based on the least-square-error (LSE) criterion. Median and order-statistic-based filters are extensively used in signal processing, especially image processing, due to their ability to reject outliers and preserve features such as edges and monotonic regions. More recently, fuzzy ordering theory has been introduced to allow averaging among similarly valued samples. Fuzzy ordering theory leads naturally to the fuzzy median, which yields improved noise smoothing over traditional crisp median filters. This paper extends the fuzzy ordering concept to vector-based data and introduces the fuzzy vector median filter. The application of FVM filters to surface normal smoothing yields improved results over previously introduced normal smoothing algorithms. The improved filtering results, coupled with the LSE vertex position update, produce surface smoothing that minimizes the effects of noise while simultaneously preserving detail features. The proposed method is simple to implement and relatively fast. Simulation results are presented showing the performance of the proposed method and its advantages over commonly used surface smoothing algorithms. Additionally, optimization procedures for FVM filters are derived and evaluated.
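A minimal sketch of the crisp vector median step that the paper generalizes: for each face, the filtered normal is the neighborhood normal with the smallest total distance to the other neighborhood normals. The fuzzy extension (averaging over similarly valued vectors) and the LSE vertex update are not shown, and the neighborhood structure below is an assumed toy input.

```python
import numpy as np

def vector_median(vectors):
    """Crisp vector median: the input vector with the smallest total
    Euclidean distance to all other vectors in the set. The paper's fuzzy
    vector median additionally averages over vectors similar to this one."""
    v = np.asarray(vectors, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1).sum(axis=1)
    return v[np.argmin(dists)]

def smooth_normals(face_normals, face_neighbors):
    """One pass of vector-median smoothing over per-face normals.
    `face_neighbors[i]` lists the faces adjacent to face i (assumed to be
    supplied by the mesh data structure)."""
    smoothed = np.empty_like(face_normals)
    for i, nbrs in enumerate(face_neighbors):
        window = face_normals[[i, *nbrs]]
        n = vector_median(window)
        smoothed[i] = n / np.linalg.norm(n)       # keep unit length
    return smoothed

# Toy usage with made-up normals and adjacency.
normals = np.array([[0, 0, 1.], [0.1, 0, 0.99], [0, 0.1, 0.99], [1., 0, 0]])
neighbors = [[1, 2], [0, 2], [0, 1], [1, 2]]
print(smooth_normals(normals, neighbors))
```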


Subjects
Algorithms, Computer Graphics, Fuzzy Logic, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Automated Pattern Recognition/methods, Reproducibility of Results, Sensitivity and Specificity