Results 1 - 11 of 11
1.
Bioinformatics ; 23(17): 2342-4, 2007 Sep 01.
Article in English | MEDLINE | ID: mdl-17586826

ABSTRACT

BiVisu is an open-source software tool for detecting and visualizing biclusters embedded in a gene expression matrix. Through the use of appropriate coherence relations, BiVisu can detect constant, constant-row, constant-column, additive-related as well as multiplicative-related biclusters. The biclustering results are then visualized in a 2D setting for easy inspection. In particular, parallel coordinate (PC) plots for each bicluster are displayed, from which objective and subjective cluster quality evaluation can be performed. AVAILABILITY: BiVisu has been developed in Matlab and is available at http://www.eie.polyu.edu.hk/~nflaw/Biclustering/.
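
The coherence relations listed above have a compact algebraic test: an additive-related bicluster decomposes into an overall mean plus row and column effects (constant, constant-row and constant-column biclusters are the special cases where some effects vanish), and a multiplicative-related bicluster reduces to the additive case after taking logarithms. A minimal sketch in Python (BiVisu itself is Matlab; the function names and tolerance below are illustrative, not BiVisu's API):

```python
import numpy as np

def is_additive_bicluster(sub, tol=1e-6):
    """Test the additive coherence relation a_ij = mu + alpha_i + beta_j."""
    mu = sub.mean()
    alpha = sub.mean(axis=1, keepdims=True) - mu    # row effects
    beta = sub.mean(axis=0, keepdims=True) - mu     # column effects
    return np.abs(sub - (mu + alpha + beta)).max() < tol

def is_multiplicative_bicluster(sub, tol=1e-6):
    """a_ij = r_i * c_j becomes additive in log space (positive entries)."""
    return is_additive_bicluster(np.log(sub), tol)

# An additive-related bicluster: every row is a shifted copy of the others
rows = np.array([1.0, 2.0, 4.0])
cols = np.array([0.0, 3.0, 5.0, 6.0])
print(is_additive_bicluster(rows[:, None] + cols[None, :]))   # True
```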


Subjects
Cluster Analysis, Computer Graphics, Gene Expression Profiling/methods, Oligonucleotide Array Sequence Analysis/methods, Software, User-Computer Interface, Algorithms, Artificial Intelligence, Automated Pattern Recognition/methods
2.
IEEE Trans Med Imaging ; 17(6): 986-94, 1998 Dec.
Article in English | MEDLINE | ID: mdl-10048855

ABSTRACT

When performing dynamic studies using emission tomography, the tracer distribution changes during the acquisition of a single set of projections. This is particularly true for some positron emission tomography (PET) systems which, like single photon emission computed tomography (SPECT), acquire data over a limited angle at any time, with full projections obtained by rotation of the detectors. In this paper, an approach is proposed for processing data from these systems, applicable to either PET or SPECT. A method of interpolation, based on overlapped parabolas, is used to estimate the total counts in each pixel of the projections for each required frame-interval, i.e. the total time needed to acquire a single complete set of projections for reconstruction. The resultant projections are reconstructed using traditional filtered backprojection (FBP), and tracer kinetic parameters are estimated using a method that relies on counts integrated over the frame-interval rather than on instantaneous values. Simulated data were used to illustrate the technique's capabilities at noise levels typical of those encountered in either PET or SPECT. Dynamic datasets were constructed based on kinetic parameters for fluoro-deoxy-glucose (FDG), using either a full-ring detector or a rotating-detector acquisition. For the rotating detector, the interpolation scheme provided reconstructed dynamic images with fewer artefacts than unprocessed data or linear interpolation. Estimates of the metabolic rate of glucose showed bias similar to that obtained with a full-ring detector.
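
The abstract does not spell out the interpolation formula; a common construction that matches the name is to fit, for each interval, the two parabolas through the three points on either side and blend them linearly across the interval. The sketch below implements that reading in Python (the exact weighting used in the paper may differ):

```python
import numpy as np

def overlapped_parabola_interp(t, y, t_new):
    """Interpolate y(t) at t_new by linearly blending, over each interval
    [t[i], t[i+1]], the two parabolas through (t[i-1], t[i], t[i+1]) and
    (t[i], t[i+1], t[i+2])."""
    def parabola(ts, ys, x):
        return np.polyval(np.polyfit(ts, ys, 2), x)  # exact quadratic fit

    out = np.empty(len(t_new))
    for k, x in enumerate(t_new):
        i = int(np.clip(np.searchsorted(t, x) - 1, 1, len(t) - 3))
        left = parabola(t[i-1:i+2], y[i-1:i+2], x)
        right = parabola(t[i:i+3], y[i:i+3], x)
        w = (x - t[i]) / (t[i+1] - t[i])             # blend weight in [0, 1]
        out[k] = (1 - w) * left + w * right
    return out

t = np.arange(6.0)
print(overlapped_parabola_interp(t, t ** 2, np.array([1.5, 2.5])))  # ~[2.25, 6.25]
```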


Subjects
Biological Models, Radiopharmaceuticals/pharmacokinetics, Single-Photon Emission Computed Tomography/methods, Emission Computed Tomography/methods, Algorithms, Artifacts, Filtration/methods, Humans, Least-Squares Analysis, Imaging Phantoms/statistics & numerical data, Terminology as Topic, Time Factors, Tissue Distribution, Emission Computed Tomography/instrumentation, Emission Computed Tomography/statistics & numerical data, Single-Photon Emission Computed Tomography/instrumentation, Single-Photon Emission Computed Tomography/statistics & numerical data
3.
IEEE Trans Med Imaging ; 17(3): 334-43, 1998 Jun.
Article in English | MEDLINE | ID: mdl-9735897

ABSTRACT

With recent developments in scatter and attenuation correction algorithms, dynamic single photon emission computed tomography (SPECT) can potentially yield physiological parameters with tracers exhibiting suitable kinetics, such as thallium-201 (Tl-201). A systematic way is proposed to investigate the minimum data acquisition times and sampling requirements for estimating physiological parameters with quantitative dynamic SPECT. Two sampling schemes were investigated with Monte Carlo simulations: 1) continuous data collection for total study durations ranging from 30 to 240 min; and 2) continuous data collection for the first 10-45 min, followed by a delayed study at approximately 3 h. Tissue time-activity curves with realistic noise were generated from a mean plasma time-activity curve and rate constants (K1-k4) derived from Tl-201 kinetic studies in 16 dogs. Full dynamic sampling schedules (DynSS) were compared with optimum sampling schedules (OSS). We found that OSS can estimate the blood-flow-related K1 and the distribution volume Vd as reliably as DynSS. A 30-min continuous collection was sufficient if only K1 was of interest. A split-session schedule of a 30-min dynamic study followed by a static study at 3 h allowed reliable estimation of both K1 and Vd, avoiding the need for a prolonged (>60-min) continuous dynamic acquisition. The methodology developed should also be applicable to optimizing sampling schedules for other SPECT tracers.
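
As a concrete illustration of how such sampling schedules can be compared, the sketch below runs a small Monte Carlo study: noisy tissue time-activity curves are generated from an assumed plasma input and refitted, and the spread of the estimated K1 and Vd = K1/k2 is reported. A one-tissue model stands in for the paper's K1-k4 Tl-201 model; the input shape, noise level and rate constants are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 30, 61)              # a 30-min schedule, samples in min
cp = 10 * t * np.exp(-t / 3.0)          # assumed plasma time-activity curve

def tissue(t, K1, k2):
    """One-tissue model: C_T = K1 * exp(-k2 t) convolved with C_p."""
    dt = t[1] - t[0]
    return dt * np.convolve(K1 * np.exp(-k2 * t), cp)[:len(t)]

rng = np.random.default_rng(0)
K1_true, k2_true = 0.8, 0.25            # assumed rate constants
est = []
for _ in range(200):                    # Monte Carlo repetitions
    ct = tissue(t, K1_true, k2_true)
    noisy = ct + rng.normal(0, 0.05 * ct.max(), ct.shape)
    (K1, k2), _ = curve_fit(tissue, t, noisy, p0=[0.5, 0.1])
    est.append((K1, K1 / k2))           # K1 and Vd = K1 / k2
print(np.mean(est, axis=0), np.std(est, axis=0))   # bias and precision
```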


Subjects
Thallium Radioisotopes, Single-Photon Emission Computed Tomography/methods, Animals, Dogs, Monte Carlo Method, Thallium Radioisotopes/pharmacokinetics
4.
IEEE Trans Image Process ; 6(5): 758-63, 1997.
Article in English | MEDLINE | ID: mdl-18282969

ABSTRACT

Three-dimensional discrete cosine transform (3-D DCT) coding has the advantage of reducing the interframe redundancy among a number of consecutive frames, whereas motion compensation can only reduce the redundancy between at most two frames. However, the performance of 3-D DCT coding degrades for complex scenes with a greater amount of motion. This paper presents 3-D DCT coding with a variable temporal length that is determined by a scene change detector. The idea is to keep the motion activity within each block very low, so that the efficiency of the 3-D DCT coding is increased. Experimental results show that this technique is indeed very efficient. The present approach yields a substantial improvement over conventional fixed-length 3-D DCT coding and also outperforms Moving Picture Experts Group (MPEG) coding.
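
A minimal sketch of the variable-length idea: detect scene changes with a simple frame-difference test, cut the sequence into low-motion groups, and apply a 3-D DCT to each group. The mean-absolute-difference detector and its threshold are assumptions; the paper's scene change detector is not described in the abstract.

```python
import numpy as np
from scipy.fft import dctn

def variable_length_3d_dct(frames, thresh=12.0, max_len=16):
    """Cut the sequence at detected scene changes, then 3-D DCT each group."""
    groups, start = [], 0
    for i in range(1, len(frames)):
        mad = np.abs(frames[i].astype(float) - frames[i - 1]).mean()
        if mad > thresh or i - start >= max_len:     # scene change or cap
            groups.append(frames[start:i])
            start = i
    groups.append(frames[start:])
    # 3-D DCT over (time, height, width) of each variable-length group
    return [dctn(np.asarray(g, dtype=float), norm='ortho') for g in groups]

# toy sequence: ten near-identical frames, then an abrupt scene change
rng = np.random.default_rng(0)
scene1 = rng.integers(0, 50, (16, 16))
scene2 = rng.integers(150, 250, (16, 16))
frames = [scene1 + rng.integers(0, 3, (16, 16)) for _ in range(10)] + \
         [scene2 + rng.integers(0, 3, (16, 16)) for _ in range(10)]
print([len(g) for g in variable_length_3d_dct(frames)])   # -> [10, 10]
```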

5.
IEEE Trans Image Process ; 10(8): 1223-38, 2001.
Article in English | MEDLINE | ID: mdl-18255539

ABSTRACT

Block motion estimation using the exhaustive full search is computationally intensive. Fast search algorithms offered in the past reduce the amount of computation by limiting the number of locations to be searched. Nearly all of these algorithms rely on the assumption that the mean absolute difference (MAD) distortion function increases monotonically as the search location moves away from the global minimum. Essentially, this assumption requires that the MAD error surface be unimodal over the search window. Unfortunately, this is usually not true for real-world video signals. However, we can reasonably assume that the surface is monotonic in a small neighborhood around the global minimum. Consequently, one simple strategy, and perhaps the most efficient and reliable one, is to place the initial checking point as close as possible to the global minimum. In this paper, image features are suggested for locating the initial search points; the guided scheme is based on the location of certain feature points. After applying a feature-detection process to each frame to extract a set of feature points as matching primitives, we studied the statistical behavior of these matching primitives extensively and found that they are highly correlated with the MAD error surface of real-world motion vectors. These correlation characteristics are extremely useful for fast search algorithms. The results are robust and the implementation can be very efficient. An attractive feature of our approach is that the proposed search algorithm can work together with other block motion estimation algorithms. Experiments applying the present approach to the block-based gradient descent search algorithm (BBGDS), the diamond search algorithm (DS) and our previously proposed edge-oriented block motion estimation show that the proposed search strategy strengthens these algorithms. Compared with the conventional approach, the new algorithm, through the extraction of image features, is more robust, produces smaller motion compensation errors, and has low computational complexity.
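
The strategy itself is simple once an initial point is available: evaluate the MAD in a small neighbourhood around the feature-predicted location, relying on local monotonicity of the error surface. The sketch below takes the predicted point (x0, y0) as given, since the feature detection and matching step is not reproduced here.

```python
import numpy as np

def mad(block, ref, x, y):
    """Mean absolute difference at candidate location (x, y)."""
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).mean()

def guided_search(block, ref, x0, y0, radius=2):
    """Evaluate MAD in a small window around the feature-predicted point
    (x0, y0); local monotonicity makes this tiny search sufficient when
    the initial point is close to the global minimum."""
    h, w = block.shape
    best_err, best = np.inf, (x0, y0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = x0 + dx, y0 + dy
            if 0 <= x <= ref.shape[1] - w and 0 <= y <= ref.shape[0] - h:
                err = mad(block, ref, x, y)
                if err < best_err:
                    best_err, best = err, (x, y)
    return best, best_err

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
block = ref[21:37, 34:50]                       # true match at (x, y) = (34, 21)
print(guided_search(block, ref, x0=33, y0=22))  # -> ((34, 21), 0.0)
```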

6.
IEEE Trans Image Process ; 7(10): 1488-96, 1998.
Article in English | MEDLINE | ID: mdl-18276215

ABSTRACT

In this work, we introduce a deblocking algorithm for Joint Photographic Experts Group (JPEG) decoded images using the wavelet transform modulus maxima (WTMM) representation. Under the WTMM representation, we can characterize the blocking effect of a JPEG decoded image as: 1) small modulus maxima at block boundaries in oversmooth regions; 2) noise or irregular structures near strong edges; and 3) corrupted edges across block boundaries. The WTMM representation not only provides a characterization of the blocking effect, but also enables simple and local operations to reduce its adverse effects. The proposed algorithm first performs a segmentation on a JPEG decoded image to identify the texture regions, noting that their WTMM have small variation in regularity. The modulus maxima of these regions are not processed, to avoid the image texture being "oversmoothed" by the algorithm. Then, the singularities in the remaining regions of the blocky image and the small modulus maxima at block boundaries are removed. We link up the corrupted edges, and regularize the phase of the modulus maxima as well as the magnitude of strong edges. Finally, the image is reconstructed by applying the projection onto convex sets (POCS) technique to the processed WTMM of the JPEG decoded image. This simple algorithm improves the quality of a JPEG decoded image in terms of both signal-to-noise ratio (SNR) and visual quality. We also compare the performance of our algorithm with previous approaches, such as the CLS and POCS methods. The most remarkable advantage of the WTMM deblocking algorithm is that we can directly process the edges and texture of an image through its WTMM representation.
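
For intuition about the representation being manipulated, the sketch below extracts WTMM in one dimension using a derivative-of-Gaussian wavelet: compute the wavelet transform at several scales and keep the local maxima of its modulus. The paper works on images and reconstructs via POCS; none of that is reproduced here, and the wavelet choice and scales are assumptions.

```python
import numpy as np

def wtmm_1d(signal, scales=(1, 2, 4, 8)):
    """Positions and values of wavelet transform modulus maxima per scale."""
    maxima = {}
    n = np.arange(-32, 33)
    for s in scales:
        psi = -n / s**2 * np.exp(-n**2 / (2.0 * s**2))  # d/dx Gaussian wavelet
        w = np.convolve(signal, psi, mode='same')       # wavelet transform
        m = np.abs(w)
        # local maxima of the modulus along the position axis
        idx = np.where((m[1:-1] > m[:-2]) & (m[1:-1] >= m[2:]))[0] + 1
        maxima[s] = [(i, w[i]) for i in idx]
    return maxima

x = np.zeros(256)
x[100:160] = 1.0                        # two step edges at 100 and 160
print({s: [i for i, _ in mm] for s, mm in wtmm_1d(x).items()})
```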

7.
IEEE Trans Neural Netw ; 9(6): 1258-69, 1998.
Article in English | MEDLINE | ID: mdl-18255807

ABSTRACT

In this paper, we study a qualitative property of a class of competitive learning (CL) models called multiplicatively biased competitive learning (MBCL), namely that it avoids neuron underutilization with probability one as time goes to infinity. In MBCL, the competition among neurons is biased by a multiplicative term, while only one weight vector is updated per learning step. This is of practical interest since its instances have computational complexities among the lowest of existing CL models. In addition, in applications such as classification, vector quantizer design and probability density function estimation, avoiding neuron underutilization is a necessary condition for optimal performance. Hence, it is possible to define instances of MBCL that achieve optimal performance in these applications.
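
Frequency-sensitive competitive learning is a well-known scheme with exactly this multiplicatively biased form: each neuron's distance is multiplied by its win count, so chronically losing neurons eventually win, which is how underutilization is avoided. A minimal sketch (the abstract does not list the paper's specific instances, so treat this as one plausible example):

```python
import numpy as np

def fscl(data, n_units=4, lr=0.05, epochs=10, seed=0):
    """Frequency-sensitive competitive learning: the squared distance of
    unit i is multiplied by its win count n_i (the multiplicative bias),
    and only the winning unit's weight vector is updated per step."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    wins = np.ones(n_units)                          # bias terms n_i
    for _ in range(epochs):
        for x in rng.permutation(data):
            j = np.argmin(wins * np.sum((w - x) ** 2, axis=1))
            w[j] += lr * (x - w[j])                  # update the winner only
            wins[j] += 1
    return w

# four well-separated clusters; every unit should end up serving one
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.1, (50, 2)) for m in (0.0, 1.0, 2.0, 3.0)])
print(np.round(fscl(data), 2))
```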

8.
IEEE Trans Neural Netw ; 10(2): 239-52, 1999.
Article in English | MEDLINE | ID: mdl-18252524

ABSTRACT

This paper proposes a hybrid optimization algorithm that combines the efforts of local search (individual learning) and cellular genetic algorithms (GA's) for training recurrent neural networks (RNN's). Each weight of an RNN is encoded as a floating point number, and a concatenation of the numbers forms a chromosome. Reproduction takes place locally in a square grid, with each grid point representing a chromosome. Two approaches to combining cellular GA's and learning, the Lamarckian and the Baldwinian mechanism, have been compared. Different hill-climbing algorithms are incorporated into the cellular GA's as learning methods. These include real-time recurrent learning (RTRL) and its simplified versions, and the delta rule. The RTRL algorithm has been successively simplified by freezing some of the weights to form the simplified versions. The delta rule, which is the simplest form of learning, has been implemented by treating the RNN's as feedforward networks during learning. The hybrid algorithms are used to train the RNN's to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism show an improvement in reducing the number of generations required to find an optimum network; however, only a few can reduce the actual time taken. Embedding the delta rule in the cellular GA's has been found to be the fastest method. It is also concluded that learning should not be too extensive if the hybrid algorithm is to benefit from learning.
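
The Lamarckian/Baldwinian distinction is easy to state in code: both run the same local search, but only the Lamarckian variant writes the learned weights back into the chromosome, while the Baldwinian variant merely lets learning influence the fitness used for selection. The sketch below shows just that distinction; the surrounding cellular GA (grid-local selection, crossover, mutation) and the RTRL/delta-rule learners are omitted.

```python
import numpy as np

def local_search(genome, fitness, rng, steps=5, sigma=0.01):
    """Simple hill climbing standing in for the paper's learning methods."""
    best, best_f = genome.copy(), fitness(genome)
    for _ in range(steps):
        trial = best + rng.normal(0, sigma, best.shape)
        if (f := fitness(trial)) > best_f:
            best, best_f = trial, f
    return best, best_f

def evaluate(genome, fitness, mode, rng):
    """Lamarckian: learned weights replace the chromosome.
    Baldwinian: only the learned fitness is kept; the chromosome survives."""
    learned, f = local_search(genome, fitness, rng)
    return (learned, f) if mode == 'lamarckian' else (genome.copy(), f)

rng = np.random.default_rng(42)
fit = lambda g: -np.sum(g ** 2)          # toy fitness to maximize
g = rng.normal(size=5)
for mode in ('lamarckian', 'baldwinian'):
    new_g, f = evaluate(g, fit, mode, rng)
    print(mode, round(f, 4), np.allclose(new_g, g))  # genome moves only if Lamarckian
```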

9.
IEEE Trans Inf Technol Biomed ; 1(4): 243-54, 1997 Dec.
Article in English | MEDLINE | ID: mdl-11020827

ABSTRACT

Positron emission tomography (PET) is an important tool for quantifying human brain function. However, quantitative studies using tracer kinetic modeling require measurement of the tracer time-activity curve in plasma (PTAC) as the model input function. It is widely believed that the insertion of arterial lines, and the subsequent collection and processing of the biomedical signal sampled from arterial blood, are not compatible with the practice of clinical PET, as the procedure is invasive and exposes personnel to the risks associated with handling patient blood and to radiation dose. It is therefore of interest to develop practical noninvasive measurement techniques for tracer kinetic modeling with PET. In this paper, a technique is proposed to extract the input function together with the physiological parameters from the brain dynamic images alone. The identifiability of this method is tested rigorously using Monte Carlo simulation. The results show that the proposed method is able to quantify all the required parameters using the information obtained from two or more regions of interest (ROI's) with very different dynamics in the dynamic PET images. There is no significant improvement in parameter estimation for the local cerebral metabolic rate of glucose (LCMRGlc) if more than three ROI's are used. The proposed method can provide very reliable estimation of LCMRGlc, which is our primary interest in this study.
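
The essence of the method, estimating the input function and the kinetic parameters simultaneously from ROI curves with different dynamics, can be sketched as one joint least-squares fit. The sketch uses a one-tissue model and a two-parameter input shape, both assumptions; the input amplitude is held fixed because it trades off exactly against K1 and, in practice, needs an extra constraint (the paper's model and constraints differ in detail).

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.5, 60, 120)                 # min
A = 8.0   # input amplitude held fixed: its scale trades off against K1

def ptac(b):
    return A * t * np.exp(-t / b)             # assumed input-function shape

def roi_tac(cp, K1, k2):                      # one-tissue ROI model
    dt = t[1] - t[0]
    return dt * np.convolve(K1 * np.exp(-k2 * t), cp)[:len(t)]

def residuals(p, tacs):
    cp = ptac(p[0])                           # shared input parameter
    ks = p[1:].reshape(-1, 2)                 # per-ROI (K1, k2)
    return np.concatenate([roi_tac(cp, K1, k2) - y
                           for (K1, k2), y in zip(ks, tacs)])

# two ROIs with very different dynamics, as the method requires
cp_true = ptac(2.0)
tacs = [roi_tac(cp_true, 0.9, 0.30), roi_tac(cp_true, 0.2, 0.02)]
fit = least_squares(residuals, x0=[1.0, 0.5, 0.1, 0.5, 0.1], args=(tacs,))
print(fit.x)        # -> roughly [2.0, 0.9, 0.3, 0.2, 0.02]
```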


Subjects
Computer-Assisted Image Processing/methods, Emission Computed Tomography/methods, Brain/diagnostic imaging, Brain/metabolism, Computer Simulation, Glucose/metabolism, Humans, Computer-Assisted Image Processing/statistics & numerical data, Monte Carlo Method, Emission Computed Tomography/statistics & numerical data
10.
Comput Methods Programs Biomed ; 57(3): 167-77, 1998 Nov.
Article in English | MEDLINE | ID: mdl-9822854

ABSTRACT

The recently developed generalized linear least squares (GLLS) algorithm has been found to be very useful for non-uniformly sampled biomedical signal processing and parameter estimation. However, the current version of the algorithm cannot deal with signals and systems containing repeated eigenvalues. In this paper, we extend the algorithm so that it can be used for non-uniformly sampled signals and systems with or without repeated eigenvalues. The related theory and a detailed derivation of the algorithm are given. A case study is presented, which demonstrates that the extended algorithm provides more choices for system identification and is able to select the most suitable model for the system from the non-uniformly sampled noisy signal.
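
The core idea behind GLLS can be shown on a first-order system: integrating dc/dt = K1*cp(t) - k2*c(t) once gives c(t) = K1*int(cp) - k2*int(c), which is linear in the parameters and, since the integrals are computed by quadrature, works directly on non-uniformly sampled data. The sketch below implements only this initial linear least squares step; the full GLLS iteration (which re-filters the equation to remove the bias caused by noise entering the int(c) term) and the paper's repeated-eigenvalue extension are not reproduced.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def lls_first_order(t, cp, ct):
    """LLS fit of dc/dt = K1*cp - k2*c via its integral form (t[0] = 0)."""
    X = np.column_stack([
        cumulative_trapezoid(cp, t, initial=0.0),    # int_0^t cp
        -cumulative_trapezoid(ct, t, initial=0.0),   # -int_0^t c
    ])
    (K1, k2), *_ = np.linalg.lstsq(X, ct, rcond=None)
    return K1, k2

# simulate on a fine grid, then take non-uniform samples of cp and ct
tf = np.linspace(0, 60, 6001)
cpf = 10 * tf * np.exp(-tf / 3)                      # assumed input
ctf = (tf[1] - tf[0]) * np.convolve(0.6 * np.exp(-0.15 * tf), cpf)[:len(tf)]
rng = np.random.default_rng(1)
t = np.concatenate(([0.0], np.sort(rng.uniform(0, 60, 40))))
print(lls_first_order(t, np.interp(t, tf, cpf), np.interp(t, tf, ctf)))
# -> roughly (0.6, 0.15)
```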


Subjects
Algorithms, Computer-Assisted Signal Processing, Mathematics
11.
Comput Methods Programs Biomed ; 59(1): 31-43, 1999 Apr.
Article in English | MEDLINE | ID: mdl-10215175

ABSTRACT

The original generalized linear least squares (GLLS) algorithm was developed for non-uniformly sampled biomedical system parameter estimation using finely sampled instantaneous measurements (D. Feng, S.C. Huang, Z. Wang, D. Ho, An unbiased parametric imaging algorithm for non-uniformly sampled biomedical system parameter estimation, IEEE Trans. Med. Imag. 15 (1996) 512-518). This algorithm is particularly useful for image-wide generation of parametric images with positron emission tomography (PET), as it is computationally efficient and statistically reliable (D. Feng, D. Ho, K. Chen, L.C. Wu, J.K. Wang, R.S. Liu, S.H. Yeh, An evaluation of the algorithms for determining local cerebral metabolic rates of glucose using positron emission tomography dynamic data, IEEE Trans. Med. Imag. 14 (1995) 697-710). However, when dynamic PET image data are sampled according to the optimal image sampling schedule (OISS) to reduce memory and storage space (X. Li, D. Feng, K. Chen, Optimal image sampling schedule: a new effective way to reduce dynamic image storage space and functional image processing time, IEEE Trans. Med. Imag. 15 (1996) 710-718), only a few temporal image frames are recorded (e.g. only four images are recorded for the four-parameter fluoro-deoxy-glucose (FDG) model). These image frames are recorded in terms of accumulated radioactivity counts, and as a result the direct application of GLLS is not reliable, since instantaneous measurement samples can no longer be approximated by averaging the accumulated measurements over the sampling intervals. In this paper, we extend GLLS to OISS-GLLS, which deals with the fewer accumulated measurement samples obtained from OISS dynamic systems. The theory and algorithm of this new technique are formulated and studied extensively. To investigate the statistical reliability and computational efficiency of OISS-GLLS, a simulation study using dynamic PET data was performed. OISS-GLLS using 4 measurement samples was compared to the non-linear least squares (NLS) method using 22 measurement samples, GLLS using 22 measurement samples, and OISS-NLS using 4 measurement samples. Results demonstrated that OISS-GLLS achieved parameter estimates of accuracy and reliability equivalent to NLS or GLLS using finely sampled measurements (22 samples), or to OISS-NLS using optimally sampled measurements (4 samples). Furthermore, as fewer measurement samples are used in OISS-GLLS, this algorithm is computationally faster than NLS or GLLS. Therefore, OISS-GLLS is well suited for image-wide parameter estimation when PET image data are recorded according to the optimal image sampling schedule.
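
Why plain GLLS breaks down under OISS is easy to see numerically: with only a handful of long frames, the frame-averaged activity is no longer a good stand-in for the instantaneous value. The sketch below compares the two on an assumed FDG-like tissue curve with four illustrative frame intervals (neither the curve nor the boundaries come from the cited OISS paper); OISS-GLLS avoids the problem by reformulating the estimation equations directly in terms of the accumulated counts.

```python
import numpy as np

tf = np.linspace(0, 120, 12001)                    # min, fine "truth" grid
dt = tf[1] - tf[0]
ct = 5 * (1 - np.exp(-0.1 * tf)) + 0.02 * tf       # assumed tissue curve

edges = [0.0, 2.0, 10.0, 40.0, 120.0]              # four illustrative frames
for a, b in zip(edges[:-1], edges[1:]):
    sel = (tf >= a) & (tf < b)
    accumulated = ct[sel].sum() * dt               # what OISS records
    frame_mean = accumulated / (b - a)             # naive instantaneous proxy
    midpoint = np.interp((a + b) / 2, tf, ct)      # true value at mid-frame
    print(f"[{a:5.1f},{b:6.1f})  mean={frame_mean:6.3f}  midpoint={midpoint:6.3f}")
```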


Subjects
Algorithms, Computer Simulation, Linear Models, Humans, Computer-Assisted Image Processing, Emission Computed Tomography