Results 1 - 18 of 18
1.
Appl Numer Math ; 187: 138-157, 2023 May.
Article in English | MEDLINE | ID: mdl-37006783

ABSTRACT

The aim of this expository paper is to explain to graduate students and beginning researchers in the fields of mathematics, statistics, and engineering the fundamental concept of sparse machine learning in Banach spaces. In particular, we use binary classification as an example to explain the essence of learning in a reproducing kernel Hilbert space and sparse learning in a reproducing kernel Banach space (RKBS). We then utilize the Banach space ℓ¹(ℕ) to illustrate the basic concepts of the RKBS in an elementary yet rigorous fashion. This paper reviews existing results from the author's perspective to reflect the state of the art of the field of sparse learning, and includes new theoretical observations on the RKBS. Several open problems critical to the theory of the RKBS are also discussed at the end of this paper.
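
To make the contrast concrete, the two learning problems can be written schematically as regularized empirical risk minimization; the notation below is illustrative and not taken verbatim from the paper:

```latex
% Learning in an RKHS \mathcal{H}: squared Hilbert-space norm regularization
\min_{f \in \mathcal{H}} \sum_{i=1}^{n} L\bigl(f(x_i), y_i\bigr) + \lambda \|f\|_{\mathcal{H}}^{2}

% Sparse learning in an RKBS \mathcal{B} modeled on \ell^1(\mathbb{N}):
% the \ell^1-type norm promotes solutions supported on few kernel atoms
\min_{f \in \mathcal{B}} \sum_{i=1}^{n} L\bigl(f(x_i), y_i\bigr) + \lambda \|f\|_{\mathcal{B}}
```

The key point is that replacing the Hilbert-space norm by an ℓ¹-type Banach-space norm turns the regularizer into a sparsity-promoting one, at the price of losing the inner-product structure that makes RKHS theory convenient.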

2.
Med Phys ; 50(2): 837-853, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36196045

ABSTRACT

PURPOSE: A synthetic digital mammogram (SDM) is a 2D image generated from digital breast tomosynthesis (DBT) and used as a substitute for a full-field digital mammogram (FFDM) to reduce the radiation dose in breast cancer screening. Previous deep learning-based methods used FFDM images as the ground truth and trained a single neural network to directly generate SDM images with appearances (e.g., intensity distribution, textures) similar to FFDM images. However, FFDM images have a different texture pattern from DBT. This difference can make the training of the neural network unstable and cause severe intensity distortion, which makes it hard to decrease intensity distortion and increase perceptual similarity (e.g., generate similar textures) at the same time. Clinically, radiologists want a 2D synthesized image that looks like an FFDM image, because they have long been trained to read FFDM images, while preserving local structures in DBT such as masses and microcalcifications (MCs), which are important for diagnosis. In this study, we proposed to use a deep convolutional neural network to learn the transformation that generates SDM from DBT. METHOD: To decrease intensity distortion and increase perceptual similarity, a multi-scale cascaded network (MSCN) is proposed to generate low-frequency structures (e.g., intensity distribution) and high-frequency structures (e.g., textures) separately. The MSCN consists of two cascaded sub-networks: the first predicts the low-frequency part of the FFDM image; the second generates a full SDM image, with textures similar to the FFDM image, based on the prediction of the first. A mean-squared error (MSE) objective function is used to train the first sub-network, termed the low-frequency network, to generate a low-frequency SDM image. A gradient-guided generative adversarial network objective function is used to train the second sub-network, termed the high-frequency network, to generate the full SDM image. RESULTS: 1646 cases with FFDM and DBT were retrospectively collected from the Hologic Selenia system for the training and validation datasets, and 145 cases with masses or MC clusters were independently collected from the same system for the testing dataset. For comparison, a baseline network with the same architecture as the high-frequency network directly generates a full SDM image. Compared to the baseline method, the proposed MSCN improves the peak signal-to-noise ratio from 25.3 to 27.9 dB, improves the structural similarity from 0.703 to 0.724, and significantly increases the perceptual similarity. CONCLUSIONS: The proposed method stabilizes training and generates SDM images with lower intensity distortion and higher perceptual similarity.
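
A minimal PyTorch sketch of the two-stage cascade described above; the layer counts, channel widths, and DBT-slab input format are assumptions for illustration, not the authors' exact architecture, and the MSE and adversarial training losses are only indicated in comments:

```python
import torch
import torch.nn as nn

class ConvStack(nn.Module):
    """A small convolutional sub-network used for both cascade stages."""
    def __init__(self, in_ch: int, width: int = 32, depth: int = 4):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class MSCN(nn.Module):
    """Cascade: stage 1 predicts the low-frequency SDM, stage 2 refines textures."""
    def __init__(self, dbt_slices: int = 16):
        super().__init__()
        self.low_freq_net = ConvStack(in_ch=dbt_slices)
        # Stage 2 sees the DBT slab plus the stage-1 prediction.
        self.high_freq_net = ConvStack(in_ch=dbt_slices + 1)

    def forward(self, dbt):
        low = self.low_freq_net(dbt)   # trained with MSE against the FFDM image
        full = self.high_freq_net(torch.cat([dbt, low], dim=1))  # adversarially trained
        return low, full

dbt = torch.randn(1, 16, 256, 256)   # toy DBT slab with 16 slices
low, full = MSCN()(dbt)
print(low.shape, full.shape)         # torch.Size([1, 1, 256, 256]) twice
```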


Subjects
Breast Neoplasms; Mammography; Humans; Female; Retrospective Studies; Mammography/methods; Breast Neoplasms/diagnostic imaging; Radiographic Image Enhancement/methods; Neural Networks, Computer
3.
Neural Netw ; 153: 553-563, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35839599

ABSTRACT

Convergence of deep neural networks as the depth of the networks tends to infinity is fundamental to building the mathematical foundation of deep learning. In a previous study, we investigated this question for deep networks with the Rectified Linear Unit (ReLU) activation function and a fixed width. That setting does not cover the important convolutional neural networks, whose widths increase from layer to layer. For this reason, we first study convergence of general ReLU networks with increasing widths and then apply the results to deep convolutional neural networks. It turns out that convergence reduces to convergence of infinite products of matrices of increasing sizes, which has not been considered in the literature. We establish sufficient conditions for convergence of such infinite products of matrices. Based on these conditions, we present sufficient conditions for pointwise convergence of general deep ReLU networks with increasing widths, as well as pointwise convergence of deep ReLU convolutional neural networks.
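
To see where matrix products enter, recall the layer recursion of a ReLU network; the display below is a standard schematic, not notation taken from the paper:

```latex
x_n = \sigma(W_n x_{n-1} + b_n), \qquad \sigma(t) = \max(t, 0) \ \text{applied entrywise}.
```

On inputs where the activation pattern is fixed, σ acts as a diagonal 0/1 matrix D_n, so the depth-N output is governed by the product D_N W_N ⋯ D_1 W_1 plus bias contributions. When widths grow from layer to layer, the factors W_n are non-square matrices of increasing sizes, which is why convergence of such infinite matrix products has to be analyzed.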


Subjects
Neural Networks, Computer
4.
IEEE Trans Med Imaging ; 41(11): 3289-3300, 2022 11.
Article in English | MEDLINE | ID: mdl-35679379

ABSTRACT

We investigated the imaging performance of a fast convergent ordered-subsets algorithm with subiteration-dependent preconditioners (SDPs) for positron emission tomography (PET) image reconstruction. In particular, we considered the use of SDPs with the block sequential regularized expectation maximization (BSREM) approach with the relative difference prior (RDP) regularizer, owing to its prior clinical adoption by vendors. Because RDP regularization promotes smoothness in the reconstructed image, the gradients in smooth areas point toward the objective function's minimizer more accurately than those in variable areas. Motivated by this observation, two SDPs were designed to increase iteration step sizes in the smooth areas and reduce them in the variable areas, relative to a conventional expectation maximization preconditioner. The momentum technique used for convergence acceleration can be viewed as a special case of SDP. We proved the global convergence of SDP-BSREM algorithms under certain assumptions on the preconditioner. Through numerical experiments with both simulated and clinical PET data, we showed that the SDP-BSREM algorithms substantially improve the convergence rate compared to conventional BSREM and a vendor's implementation, Q.Clear. Specifically, SDP-BSREM algorithms reach the same objective function value 35%-50% faster than conventional BSREM and commercial Q.Clear algorithms. Moreover, in phantoms with hot, cold, and background regions, the SDP-BSREM algorithms approached the values of a highly converged reference image faster than conventional BSREM and commercial Q.Clear algorithms.
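
Schematically, one subiteration of such a preconditioned ordered-subsets update has the form below, where Φ_{S_k} is the penalized objective restricted to subset S_k and the diagonal EM preconditioner is the conventional choice that an SDP replaces with a subiteration-dependent one; this is a generic sketch, not the paper's exact update:

```latex
x^{k+1} = \Pi_{\ge 0}\!\left[\, x^{k} + \alpha_k \, P_k(x^{k}) \, \nabla \Phi_{S_k}(x^{k}) \,\right],
\qquad
P_{\mathrm{EM}}(x) = \operatorname{diag}\!\left( \frac{x}{A^{\top} \mathbf{1}} \right),
```

with A the system matrix and Π_{≥0} the projection enforcing nonnegativity. An SDP enlarges the diagonal entries of P_k in smooth regions and shrinks them in variable regions as the subiteration index k evolves.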


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography; Algorithms; Phantoms, Imaging
5.
Entropy (Basel) ; 24(4)2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35455187

ABSTRACT

Sample entropy, an approximation of the Kolmogorov entropy, was proposed to characterize the complexity of a time series. It is essentially defined as -log(B/A), where A denotes the number of matched template pairs of length m and B denotes the number of matched template pairs of length m+1, for a predetermined positive integer m. It has been widely used to analyze physiological signals. As computing sample entropy is time consuming, the box-assisted, bucket-assisted, x-sort, assisted sliding box, and kd-tree-based algorithms were proposed to accelerate its computation. These algorithms require O(N²) or O(N^(2-1/(m+1))) computational complexity, where N is the length of the time series analyzed; when N is large, their computational costs are substantial. We propose a superfast Monte Carlo algorithm to estimate sample entropy, with computational cost independent of N and with the estimate converging to the exact sample entropy as the number of repeated experiments becomes large. The convergence rate of the algorithm is also established. Numerical experiments are performed on electrocardiogram time series, electroencephalogram time series, cardiac inter-beat time series, mechanical vibration signals (MVS), meteorological data (MD), and 1/f noise. The results show that the proposed algorithm achieves a 100-1000 times speedup over the kd-tree and assisted sliding box algorithms while providing satisfactory approximation accuracy.
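
A minimal numpy sketch of the Monte Carlo idea described above: rather than counting all O(N²) template pairs, sample K random pairs and estimate the match probabilities at lengths m and m+1. Function and parameter names are illustrative, and the paper's estimator and convergence analysis are more refined than this sketch:

```python
import numpy as np

def sampen_mc(u, m=2, r=0.2, K=100_000, seed=0):
    """Monte Carlo estimate of sample entropy -log(B/A)."""
    rng = np.random.default_rng(seed)
    tol = r * np.std(u)
    n = len(u) - m                       # templates of length m+1 start at 0..n-1
    i = rng.integers(0, n, size=K)
    j = rng.integers(0, n, size=K)
    i, j = i[i != j], j[i != j]          # drop self-matches
    idx = np.arange(m)
    # Chebyshev distance over the first m coordinates decides a length-m match
    d_m = np.max(np.abs(u[i[:, None] + idx] - u[j[:, None] + idx]), axis=1)
    match_m = d_m <= tol                                       # estimates A-fraction
    match_m1 = match_m & (np.abs(u[i + m] - u[j + m]) <= tol)  # estimates B-fraction
    return -np.log(match_m1.mean() / match_m.mean())

u = np.sin(0.1 * np.arange(20_000)) + 0.1 * np.random.default_rng(1).standard_normal(20_000)
print(sampen_mc(u))   # approaches the exact sample entropy as K grows
```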

6.
IEEE Trans Med Imaging ; 40(8): 2080-2091, 2021 08.
Article in English | MEDLINE | ID: mdl-33826513

ABSTRACT

Synthetic digital mammography (SDM), a 2D image generated from digital breast tomosynthesis (DBT), is used as a potential substitute for full-field digital mammography (FFDM) in the clinic to reduce the radiation dose of breast cancer screening. Previous studies exploited projection geometry and fused projection data with the DBT volume, applying different post-processing techniques to the re-projection data, which may produce an image appearance different from FFDM. To alleviate this issue, one possible way to generate an SDM image is to model the transformation from the DBT volume to the FFDM image with a learning-based method trained on current DBT/FFDM combo images. In this study, we proposed to use a deep convolutional neural network (DCNN) to learn this transformation. A gradient-guided conditional generative adversarial network (GGGAN) objective function was designed to preserve subtle microcalcifications (MCs), and a perceptual loss was exploited to improve the perceptual quality of the proposed DCNN's outputs. We used various image quality criteria for evaluation, including preservation of masses and MCs, which are important in mammograms. Experimental results demonstrated progressive performance improvement of the network under different objective functions in terms of those image quality criteria. The methodology we exploited in the SDM generation task, analyzing and progressively improving image quality by designing objective functions, may be helpful for other image generation tasks.
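
A sketch of how such a composite objective can be assembled in PyTorch; the weighting factors, the VGG-16 feature layer, and the exact form of the gradient-guidance term are assumptions for illustration rather than the paper's GGGAN formulation:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def image_gradients(x):
    """Finite-difference gradients; emphasizes edges such as subtle MCs."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

# Frozen VGG-16 feature extractor for the perceptual loss
# (ImageNet normalization omitted for brevity in this sketch).
vgg_features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def generator_loss(sdm, ffdm, disc_score, w_grad=1.0, w_perc=0.1, w_adv=1e-3):
    dx_s, dy_s = image_gradients(sdm)
    dx_f, dy_f = image_gradients(ffdm)
    grad_loss = F.l1_loss(dx_s, dx_f) + F.l1_loss(dy_s, dy_f)   # gradient guidance
    perc_loss = F.l1_loss(vgg_features(sdm.repeat(1, 3, 1, 1)),
                          vgg_features(ffdm.repeat(1, 3, 1, 1)))  # perceptual loss
    adv_loss = F.binary_cross_entropy_with_logits(
        disc_score, torch.ones_like(disc_score))                 # fool the discriminator
    return w_grad * grad_loss + w_perc * perc_loss + w_adv * adv_loss
```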


Subjects
Mammography; Radiographic Image Enhancement; Early Detection of Cancer; Neural Networks, Computer
7.
IEEE Trans Med Imaging ; 38(9): 2114-2126, 2019 09.
Article in English | MEDLINE | ID: mdl-30794510

ABSTRACT

This paper presents a preconditioned Krasnoselskii-Mann (KM) algorithm with an improved EM preconditioner (IEM-PKMA) for higher-order total variation (HOTV) regularized positron emission tomography (PET) image reconstruction. The PET reconstruction problem can be formulated as a three-term convex optimization model consisting of the Kullback-Leibler (KL) fidelity term, a nonsmooth penalty term, and a nonnegativity constraint term, which is also nonsmooth. We develop an efficient KM algorithm for solving this optimization problem based on a fixed-point characterization of its solution, with a preconditioner and a momentum technique for accelerating convergence. By combining the EM preconditioner, a thresholding step, and a good inexpensive estimate of the solution, we propose an improved EM preconditioner that not only accelerates convergence but also prevents the reconstructed image from getting "stuck at zero." Numerical results show that the proposed IEM-PKMA outperforms existing state-of-the-art algorithms, including the optimization transfer descent algorithm and the preconditioned L-BFGS-B algorithm for the differentiable smoothed anisotropic total variation regularized model, as well as the preconditioned alternating projection algorithm and the alternating direction method of multipliers for the nondifferentiable HOTV regularized model. Encouraging initial experiments using clinical data are presented.
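
In symbols, the three-term model has roughly the following shape for Poisson data y, system matrix A, and background γ; the HOTV penalty is written loosely as an ℓ¹ norm of second-order differences, so the display is a schematic rather than the paper's exact formulation:

```latex
\min_{x} \;
\underbrace{\sum_{i} \Bigl[ (Ax + \gamma)_i - y_i \ln (Ax + \gamma)_i \Bigr]}_{\text{KL fidelity}}
\;+\;
\lambda \underbrace{\bigl\| \nabla^{2} x \bigr\|_{1}}_{\text{nonsmooth HOTV penalty}}
\;+\;
\underbrace{\iota_{\ge 0}(x)}_{\text{nonnegativity (indicator)}},
```

where ι_{≥0} takes the value 0 on nonnegative images and +∞ otherwise; both nonsmooth terms are handled through their proximity operators inside the KM iteration.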


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Brain/diagnostic imaging; Humans; Male; Middle Aged; Phantoms, Imaging
8.
Inverse Probl ; 35(11)2019 Nov.
Article in English | MEDLINE | ID: mdl-33603259

ABSTRACT

The purpose of this research is to develop an advanced reconstruction method for low-count, hence high-noise, single-photon emission computed tomography (SPECT) image reconstruction. It consists of a novel reconstruction model to suppress noise during reconstruction and an efficient algorithm to solve the model. A novel regularizer is introduced as the nonconvex denoising term, based on the approximate sparsity of the image in a geometric tight frame transform domain. The deblurring term is based on the negative log-likelihood of the SPECT data model. To solve the resulting nonconvex optimization problem, a preconditioned fixed-point proximity algorithm (PFPA) is introduced. We prove that, under appropriate assumptions, PFPA converges to a local solution of the optimization problem at a global O(1/k) convergence rate. Substantial numerical results for simulated data are presented to demonstrate the superiority of the proposed method in denoising, artifact suppression, and reconstruction accuracy. We simulate noisy 2D SPECT data from two phantoms: hot Gaussian spheres on a random lumpy warm background, and an anthropomorphic brain phantom, at high and low noise levels (64k and 90k counts, respectively), and reconstruct them using PFPA. We also perform limited comparative studies with selected competing state-of-the-art total variation (TV) and higher-order TV (HOTV) transform-based methods, and with the widely used post-filtered maximum-likelihood expectation-maximization. We investigate the imaging performance of these methods using contrast-to-noise ratio (CNR), ensemble variance images (EVI), background ensemble noise (BEN), normalized mean-square error (NMSE), and channelized Hotelling observer (CHO) detectability. Each competing method is independently optimized for each metric. We establish that the proposed method outperforms the other approaches in all image quality metrics except NMSE, where it is matched by HOTV. Its superiority is especially evident in the CHO detectability test results. We also perform qualitative image evaluation for the presence and severity of image artifacts, where the proposed method also performs better in suppressing 'staircase' artifacts than the TV methods; however, edge artifacts on high-contrast regions persist. We conclude that the proposed method may offer a powerful tool for detection tasks in high-noise SPECT imaging.
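
The optimization model has roughly the shape below, where B is the analysis operator of the geometric tight frame and φ is a nonconvex sparsity-promoting function (an ℓ⁰-like surrogate); this is a schematic reading of the abstract, not the paper's exact notation:

```latex
\min_{x \ge 0} \;
\underbrace{\sum_{i} \Bigl[ (Ax + \gamma)_i - y_i \ln (Ax + \gamma)_i \Bigr]}_{\text{negative log-likelihood (deblurring)}}
\;+\;
\lambda \underbrace{\varphi(Bx)}_{\text{nonconvex sparsity under the tight-frame transform}} .
```

The nonconvexity of φ is why the O(1/k) guarantee holds only toward a local solution.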

9.
IEEE Trans Med Imaging ; 38(5): 1271-1283, 2019 05.
Article in English | MEDLINE | ID: mdl-30489263

ABSTRACT

Existing single-photon emission computed tomography (SPECT) reconstruction methods are mostly based on discrete models that may be viewed as piecewise constant approximations of a continuous data acquisition process. Due to the low accuracy order of piecewise constant approximation, a traditional discrete model introduces irreducible model errors, which are a bottleneck for quality improvement of reconstructed images in clinical applications. To overcome this drawback, we develop a higher-order polynomial method for SPECT reconstruction. Specifically, we represent the data acquisition of SPECT imaging by an integral equation model, approximate the solution of the underlying integral equation by higher-order piecewise polynomials, leading to a new discrete system, and introduce two novel regularizers for the system, suitable for this approximation, by exploiting a priori knowledge of the radiotracer distribution. The proposed higher-order polynomial method significantly outperforms the cutting-edge reconstruction method based on a traditional discrete model in terms of model error reduction, noise suppression, and artifact reduction. In particular, the coefficient of variation of images reconstructed by the piecewise linear polynomial method is reduced by a factor of 10 in comparison with that of a traditional discrete model-based method.
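
The underlying continuous model and its discretization can be sketched as follows; the notation is illustrative:

```latex
g(t) = \int_{\Omega} k(t, s)\, f(s)\, ds,
\qquad
f(s) \approx \sum_{j} c_j\, p_j(s),
```

where f is the radiotracer distribution, k the SPECT system response (including attenuation and collimator blur), and g the noise-free projection data. A traditional discrete model takes the basis functions p_j to be piecewise constant on voxels; the higher-order method instead uses piecewise polynomials of degree at least one, raising the approximation order and thereby shrinking the model error.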


Subjects
Image Processing, Computer-Assisted/methods; Tomography, Emission-Computed, Single-Photon/methods; Algorithms; Artifacts; Brain/diagnostic imaging; Humans; Phantoms, Imaging
10.
Med Phys ; 45(12): 5397-5410, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30291718

ABSTRACT

PURPOSE: Total variation (TV) regularization is efficient in suppressing noise, but is known to suffer from staircase artifacts. The goal of this work was to develop a regularization method using the infimal convolution of the first- and second-order derivatives to reduce or even prevent staircase artifacts in the reconstructed images, and to investigate whether the advantage of this TV-type regularization in noise suppression can be translated into dose reduction. METHODS: We introduce the infimal convolution of the first- and second-order total variation (ICTV) as the regularization term in penalized maximum-likelihood reconstruction. The preconditioned alternating projection algorithm (PAPA), previously developed by the authors, was employed to produce the reconstructions. Using Monte Carlo-simulated data, we evaluate noise properties and lesion detectability in the reconstructed images and compare the results with conventional TV and clinical EM-based methods with a Gaussian post-filter (GPF-EM). We also evaluate the quality of ICTV-regularized images obtained from lower-count data, compared with clinically used count levels, to verify the feasibility of reducing the radiation dose to patients by use of the ICTV reconstruction method. RESULTS: Compared with GPF-EM reconstructed images, the ICTV-PAPA method achieves a lower background variability level while maintaining the same level of contrast. Images reconstructed by the ICTV-PAPA method with 80,000 counts per view exhibit an even higher channelized Hotelling observer (CHO) signal-to-noise ratio (SNR) than images reconstructed by the GPF-EM method with 120,000 counts per view. CONCLUSIONS: In contrast to the TV-PAPA method, the ICTV-PAPA reconstruction method avoids substantial staircase artifacts while producing reconstructed images with higher CHO SNR and comparable local spatial resolution. Simulation studies indicate that a 33% dose reduction is feasible by switching from the GPF-EM clinical standard to the ICTV-PAPA method.
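
The ICTV regularizer is the standard infimal convolution of first- and second-order TV; schematically (weight β and discretization details omitted),

```latex
\mathrm{ICTV}(x) \;=\; \min_{v}\; \bigl\| \nabla (x - v) \bigr\|_{1} \;+\; \beta\, \bigl\| \nabla^{2} v \bigr\|_{1},
```

so each image is implicitly split into a part x - v penalized like ordinary TV and a smooth part v penalized through its second derivatives. Letting the smooth part absorb gentle intensity ramps is what suppresses the staircase artifacts that pure first-order TV produces.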


Subjects
Image Processing, Computer-Assisted/methods; Tomography, Emission-Computed, Single-Photon; Artifacts; Humans; Phantoms, Imaging; Signal-To-Noise Ratio
11.
Phys Med ; 38: 23-35, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28610694

ABSTRACT

PURPOSE: The authors recently developed a preconditioned alternating projection algorithm (PAPA) for solving the penalized-likelihood SPECT reconstruction problem. The algorithm can solve a wide variety of non-differentiable optimization models. This work is dedicated to comparing the performance of PAPA with total variation (TV) regularization (TV-PAPA) against a novel forward-backward algorithm with a nested expectation-maximization (EM)-TV iteration scheme (FB-EM-TV). METHODS: A Monte Carlo technique was used to simulate multiple noise realizations of fan-beam collimated SPECT data for a piecewise constant phantom with a warm background and hot and cold spheres of uniform activity, at two noise levels. The data were reconstructed using the aforementioned algorithms with attenuation, scatter, distance-dependent collimator blurring, and sensitivity corrections. Noise-suppression performance, lesion detectability, lesion contrast, contrast recovery coefficient, convergence speed, and selection of optimal parameters were evaluated. Conventional EM algorithms with a TV post-filter (TVPF-EM) and a Gaussian post-filter (GPF-EM) were used as benchmarks. RESULTS: TV-PAPA and FB-EM-TV demonstrated similar performance in all investigated categories. Both algorithms outperformed TVPF-EM in terms of image noise suppression, lesion detectability, lesion contrast, and convergence speed. We established that the optimal parameters approximately followed power laws in the information density, which offers guidance for parameter selection in these reconstruction methods. CONCLUSIONS: For the simulated SPECT data, TV-PAPA and FB-EM-TV produced qualitatively and quantitatively similar images. They performed better than the benchmark TVPF-EM and GPF-EM methods, with only a limited loss of lesion contrast.


Subjects
Algorithms; Tomography, Emission-Computed, Single-Photon; Humans; Image Processing, Computer-Assisted; Monte Carlo Method; Phantoms, Imaging; Probability
12.
Med Phys ; 44(8): 4083-4097, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28437565

ABSTRACT

PURPOSE: The performance of the preconditioned alternating projection algorithm (PAPA) using relaxed ordered subsets (ROS) with a non-smooth penalty function was investigated in positron emission tomography (PET). A higher-order total variation (HOTV) regularizer was applied, and a method for unsupervised selection of penalty weights based on the measured data is introduced. METHODS: A ROS version of PAPA with the HOTV penalty (ROS-HOTV-PAPA) for PET image reconstruction was developed and implemented. Two-dimensional PET data were simulated using two synthetic phantoms (geometric and brain) in a geometry similar to the GE D690/710 PET/CT, with uniform attenuation and realistic scatter (25%) and randoms (25%). Three count levels (high/medium/low), corresponding to mean information densities (ID) of 125, 25, and 5 noise-equivalent counts (NEC) per support voxel, were reconstructed using ROS-HOTV-PAPA. Patients' brain and whole-body PET data were acquired at similar IDs on a GE D690 PET/CT with time-of-flight and were reconstructed using ROS-HOTV-PAPA and the available clinical ordered-subsets expectation-maximization (OSEM) algorithms. A power-law model of the penalty weights' dependence on ID was semi-empirically derived. Its parameters were estimated from the data and used for unsupervised selection of the penalty weights within a reduced search space. The resulting image quality was evaluated qualitatively, including reduction of staircase artifacts, image noise, spatial resolution, and contrast, and quantitatively using the root mean squared error (RMSE) as a global metric. The convergence rates were also investigated. RESULTS: ROS-HOTV-PAPA converged rapidly in comparison with non-ROS HOTV-PAPA, with no evidence of limit-cycle behavior. The reconstructed image quality was superior to optimally post-filtered OSEM reconstruction in terms of noise, spatial resolution, and contrast, and staircase artifacts were not observed. Images of the measured phantom reconstructed using ROS-HOTV-PAPA showed reductions in RMSE of 5%-44% compared with optimized OSEM, with the greatest improvement occurring in the lowest-count images. Further, ROS-HOTV-PAPA produced images with RMSE similar to those reconstructed using optimally post-filtered OSEM but at one-quarter the NEC. CONCLUSION: Acceleration of HOTV-PAPA was achieved using ROS, accompanied by RMSE and perceptual image quality superior to those obtained with either clinical or optimized OSEM. This may allow up to a four-fold reduction of the radiation dose to patients in a PET study, compared with current clinical practice. The proposed unsupervised parameter-selection method provided useful estimates of the penalty weights for the selected phantom and patient PET studies. In sum, the outcomes of this research indicate that ROS-HOTV-PAPA is an appropriate candidate for clinical applications and warrants further research.
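
A small numpy illustration of the unsupervised weight-selection idea described above: fit a power law λ = a·ID^b to a few (information density, tuned weight) pairs and use it to predict weights for new scans. All numbers here are made up for illustration; the paper derives and calibrates its model from the measured data:

```python
import numpy as np

# (ID, penalty weight) pairs; hypothetical values for illustration only
id_vals = np.array([5.0, 25.0, 125.0])     # NEC per support voxel
lam_vals = np.array([0.8, 0.3, 0.1])       # hand-tuned penalty weights

# Fit log(lambda) = log(a) + b * log(ID), i.e. lambda = a * ID**b
b, log_a = np.polyfit(np.log(id_vals), np.log(lam_vals), 1)
a = np.exp(log_a)

def suggest_weight(info_density: float) -> float:
    """Unsupervised penalty-weight suggestion from the fitted power law."""
    return a * info_density ** b

print(suggest_weight(60.0))   # weight proposed for a new scan's ID
```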


Subjects
Algorithms; Positron Emission Tomography Computed Tomography; Artifacts; Humans; Image Processing, Computer-Assisted; Phantoms, Imaging; Positron-Emission Tomography
13.
Bioinformatics ; 33(8): 1130-1138, 2017 04 15.
Article in English | MEDLINE | ID: mdl-28087515

ABSTRACT

Motivation: Sequence alignment is a fundamental problem in bioinformatics. BLAST is a routinely used tool for this purpose, with over 118,000 citations in the past two decades. As the size of bio-sequence databases grows exponentially, the computational speed of alignment software must be improved. Results: We develop heterogeneous BLAST (H-BLAST), a fast parallel search tool for heterogeneous computers that couple CPUs and GPUs, to accelerate BLASTX and BLASTP, basic tools of NCBI-BLAST. H-BLAST employs a locally decoupled seed-extension algorithm for better performance on GPUs, and offers a performance-tuning mechanism for better efficiency across various CPU and GPU combinations. H-BLAST produces alignment results identical to NCBI-BLAST's, and its computational speed is much faster. Speedups achieved by H-BLAST over sequential NCBI-BLASTP (resp. NCBI-BLASTX) range mostly from 4 to 10 (resp. 5 to 7.2). With 2 CPU threads and 2 GPUs, H-BLAST can be faster than 16-threaded NCBI-BLASTX. Furthermore, H-BLAST is 1.5-4 times faster than GPU-BLAST. Availability and Implementation: https://github.com/Yeyke/H-BLAST.git. Contact: yux06@syr.edu. Supplementary information: Supplementary data are available at Bioinformatics online.


Subjects
Computer Graphics; Proteins/chemistry; Sequence Alignment/methods; Algorithms; Amino Acid Sequence; Computers; Databases, Nucleic Acid; Software; Time Factors
14.
Nucleic Acids Res ; 43(16): 7762-8, 2015 Sep 18.
Article in English | MEDLINE | ID: mdl-26250111

ABSTRACT

Sequence alignment is a long-standing problem in bioinformatics. The Basic Local Alignment Search Tool (BLAST) is one of the most popular and fundamental alignment tools. The explosive growth of biological sequences calls for a speedup of sequence alignment tools such as BLAST. To this end, we develop high-speed BLASTN (HS-BLASTN), a parallel and fast nucleotide database search tool that accelerates MegaBLAST, the default module of NCBI-BLASTN. HS-BLASTN builds a new lookup table using the FMD-index of the database and employs an accurate and effective seeding method to find short stretches of identities (called seeds) between the query and the database. HS-BLASTN produces the same alignment results as MegaBLAST, and its computational speed is much faster. Specifically, our experiments conducted on a 12-core server show that HS-BLASTN can be 22 times faster than MegaBLAST and exhibits better parallel performance. HS-BLASTN is written in C++ and the source code is available at https://github.com/chenying2016/queries under the GPLv3 license.


Subjects
Sequence Alignment/methods; Software; Algorithms; Base Sequence; Databases, Nucleic Acid; Genome, Human; Humans
15.
Med Phys ; 42(8): 4872-87, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26233214

ABSTRACT

PURPOSE: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with a total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs on realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted HOTV-PAPA, which is explored and studied extensively in the present work. METHODS: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders: one contains a lumpy "warm" background and "hot" lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin-Zeng-Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation-maximization algorithm with a Gaussian postfilter (GPF-EM). The authors select penalty weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise, separately for TV-PAPA and TV-OSL; however, they arrived at the same penalty-weight value for both. The first penalty weight in HOTV-PAPA is set equal to the optimal penalty weight found for TV-PAPA. The second penalty weight needed for HOTV-PAPA is tuned by balancing resolution against the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread functions of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean square errors (MSEs), and report the convergence speed and computation time. RESULTS: HOTV-PAPA yields the best signal-to-noise ratio, followed by TV-PAPA and TV-OSL/GPF-EM. The local spatial resolution of HOTV-PAPA is somewhat worse than that of TV-PAPA and TV-OSL. Images reconstructed using HOTV-PAPA have the lowest local noise power spectrum (LNPS) amplitudes, followed by TV-PAPA, TV-OSL, and GPF-EM. The LNPS peak of GPF-EM is shifted toward higher spatial frequencies than those of the three other methods. The PAPA-type methods exhibit much lower ensemble noise, ensemble voxel variance, and image roughness, with HOTV-PAPA performing best in these categories. Whereas images reconstructed using both TV-PAPA and TV-OSL are degraded by severe staircase artifacts, HOTV-PAPA substantially reduces such artifacts. It also converges faster than the other three methods and exhibits the lowest overall reconstruction error level, as measured by MSE. CONCLUSIONS: For high-noise simulated SPECT data, HOTV-PAPA outperforms TV-PAPA, GPF-EM, and TV-OSL in terms of hot lesion detectability, noise suppression, MSE, and computational efficiency. Unlike TV-PAPA and TV-OSL, HOTV-PAPA does not create sizable staircase artifacts. Moreover, HOTV-PAPA effectively suppresses noise, with only limited loss of local spatial resolution. Of the four methods, HOTV-PAPA shows the best lesion detectability, thanks to its superior noise suppression. HOTV-PAPA shows promise for clinically useful reconstructions of low-dose SPECT data.


Subjects
Algorithms; Artifacts; Tomography, Emission-Computed, Single-Photon/methods; Computer Simulation; Likelihood Functions; Monte Carlo Method; Phantoms, Imaging; Tomography, Emission-Computed, Single-Photon/instrumentation
16.
Article in English | MEDLINE | ID: mdl-24032873

ABSTRACT

Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
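
A small numpy sketch of the filtering viewpoint described above: MSE's coarse-graining is convolution with a boxcar filter followed by downsampling, and FME swaps the boxcar for another filter (here a triangular filter as a simple stand-in for a piecewise-linear design; the paper's filters are constructed more carefully):

```python
import numpy as np

def coarse_grain(u, scale, kind="boxcar"):
    """Filter a time series and keep one sample per block of length `scale`."""
    if kind == "boxcar":          # classic MSE averaging
        h = np.ones(scale) / scale
    elif kind == "triangular":    # an example piecewise-linear filter
        h = np.bartlett(scale + 2)[1:-1]
        h = h / h.sum()
    filtered = np.convolve(u, h, mode="valid")
    return filtered[::scale]

u = np.random.default_rng(0).standard_normal(10_000)
for scale in (1, 2, 5, 10):
    y = coarse_grain(u, scale, kind="triangular")
    # blockwise entropy (e.g., sample entropy) would be computed on y here
    print(scale, len(y))
```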


Subjects
Entropy; Heart/physiology; Models, Biological; Acceleration; Humans; Normal Distribution; Time Factors
17.
Inverse Probl ; 28(11): 115005, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23271835

ABSTRACT

We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
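
The fixed-point system behind the algorithm is roughly of the following form, with F the data-fidelity term, B a discrete gradient operator so that ‖Bx‖₁ is the TV norm, S the (EM-)preconditioner, and P_{≥0} the projection onto nonnegative images; the exact operator splitting in the paper differs in details, so this display is a schematic:

```latex
x \;=\; P_{\ge 0}\!\left( x - S\,\nabla F(x) - \lambda\, S\, B^{\top} w \right),
\qquad
w \;=\; \bigl( I - \operatorname{prox}_{\|\cdot\|_{1}} \bigr)\!\left( w + B x \right).
```

Alternating between the two equations, each a projection-like proximity step, is what gives the alternating projection algorithm its name; the preconditioner S rescales the gradient step using the EM update's natural diagonal scaling.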

18.
IEEE Trans Neural Netw ; 16(3): 533-40, 2005 May.
Article in English | MEDLINE | ID: mdl-15940984

ABSTRACT

Online gradient methods are widely used for training feedforward neural networks. We prove in this paper a convergence theorem for an online gradient method with variable step size for backpropagation (BP) neural networks with one hidden layer. Unlike most convergence results, which are probabilistic and nonmonotone in nature, the convergence result established here is deterministic and monotone.
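
A minimal numpy sketch of the setting: a one-hidden-layer sigmoid network trained online, one sample per update, with a decaying (variable) step size. The step-size schedule, network size, and data are illustrative; the paper's theorem imposes its own precise conditions on the step sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((8, 2))   # hidden-layer weights
v = 0.5 * rng.standard_normal(8)        # output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for k in range(1, 50_001):
    x = rng.uniform(-1.0, 1.0, size=2)            # one sample at a time (online)
    y = np.sin(x[0]) * np.cos(x[1])               # toy target function
    h = sigmoid(W @ x)
    err = v @ h - y                               # network output minus target
    eta = 0.5 / np.sqrt(k)                        # variable (decaying) step size
    grad_v = err * h                              # gradient of 0.5 * err**2 w.r.t. v
    grad_W = np.outer(err * v * h * (1.0 - h), x) # ... and w.r.t. W (chain rule)
    v -= eta * grad_v
    W -= eta * grad_W
    if k % 10_000 == 0:
        print(k, 0.5 * err**2)                    # per-sample loss snapshot
```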


Subjects
Algorithms; Neural Networks, Computer; Numerical Analysis, Computer-Assisted; Signal Processing, Computer-Assisted; Computer Simulation; Computer Systems; Online Systems