Results 1 - 20 of 75
1.
Article in English | MEDLINE | ID: mdl-38557620

ABSTRACT

The deep unfolding approach has attracted significant attention in computer vision, as it connects conventional image-processing modeling with more recent deep learning techniques. Specifically, by establishing a direct correspondence between the algorithm operators at each implementation step and the network modules within each layer, one can rationally construct an almost "white box" network architecture with high interpretability. In this architecture, only the predefined component of the proximal operator, known as a proximal network, needs manual configuration, enabling the network to automatically extract intrinsic image priors in a data-driven manner. In current deep unfolding methods, such a proximal network is generally designed as a CNN architecture, whose necessity is supported by recent theory: the CNN structure inherently delivers the translational-symmetry image prior, the structural prior most universally shared across image types. However, standard CNN-based proximal networks have essential limitations in capturing the rotation-symmetry prior, another universal structural prior underlying general images. This leaves considerable room for further performance improvement in deep unfolding approaches. To address this issue, this study proposes a high-accuracy rotation-equivariant proximal network that effectively embeds rotation-symmetry priors into the deep unfolding framework. In particular, we derive, for the first time, the theoretical equivariance error for such a proximal network with arbitrary layers under arbitrary rotation degrees. This analysis is, to date, the most refined theoretical result for such error evaluation and is indispensable for supporting the rationale behind networks with intrinsic interpretability requirements. Through experimental validation on different vision tasks, including blind image super-resolution, medical image reconstruction, and image de-raining, the proposed method is shown to directly replace the proximal network in current deep unfolding architectures and readily enhance their state-of-the-art performance, indicating its potential usability in general vision tasks. The code of our method is available at https://github.com/jiahong-fu/Equivariant-Proximal-Operator.
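
For context, the deep unfolding pattern described above can be sketched as follows: each network stage pairs an analytic data-fidelity gradient step with a learned proximal network. This is a minimal sketch assuming a generic linear degradation operator; the class names (ProxNet, UnfoldedNet) are illustrative, not taken from the paper's code.

```python
# Minimal sketch of a deep-unfolded proximal gradient network: each stage
# pairs an analytic data-fidelity gradient step with a learned proximal
# network. Names and sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Small CNN standing in for the learned proximal operator."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual proximal update

class UnfoldedNet(nn.Module):
    """K unfolded iterations of x <- prox(x - step * A^T(Ax - y))."""
    def __init__(self, stages=5):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))
        self.prox = nn.ModuleList(ProxNet() for _ in range(stages))

    def forward(self, y, A, At, x0):
        x = x0
        for prox_k in self.prox:
            grad = At(A(x) - y)            # data-fidelity gradient
            x = prox_k(x - self.step * grad)
        return x
```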

2.
IEEE Trans Med Imaging ; 43(5): 1677-1689, 2024 May.
Article in English | MEDLINE | ID: mdl-38145543

ABSTRACT

Low-dose computed tomography (LDCT) helps to reduce radiation risk in CT scanning while maintaining image quality, and involves a constant pursuit of lower incident dose and higher reconstruction performance. Although deep learning approaches have achieved encouraging success in LDCT reconstruction, most of them treat the task as a general inverse problem in either the image domain or the dual (sinogram and image) domains. Such frameworks do not consider how noise originates in the projection data and therefore offer limited performance improvement for the LDCT task. In this paper, we propose a novel reconstruction model based on the full-domain noise-generation and imaging mechanism, which fully accounts for the statistical properties of the intrinsic noise in LDCT and for prior information in the sinogram and image domains. To solve the model, we propose an optimization algorithm based on the proximal gradient technique. Specifically, we theoretically derive approximate solutions of the integer programming problem on the projection data. Instead of hand-crafting the sinogram and image regularizers, we unroll the optimization algorithm into a deep network. The network implicitly learns the proximal operators of the sinogram and image regularizers with two deep neural networks, providing a more interpretable and effective reconstruction procedure. Numerical results demonstrate that our proposed method achieves improvements of >2.9 dB in peak signal-to-noise ratio, >1.4% in the structural similarity metric, and a >9 HU reduction in root mean square error over current state-of-the-art LDCT methods.
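
The noise model motivating this full-domain formulation is commonly written as Poisson photon statistics plus Gaussian electronic noise at the detector. That standard model is assumed below (the paper's exact formulation may differ); a minimal simulation sketch:

```python
# A common statistical model for low-dose CT projection noise (assumed
# here): detected photon counts follow a Poisson distribution around
# I0 * exp(-line_integral), plus Gaussian electronic noise.
import numpy as np

def simulate_low_dose_sinogram(sinogram, i0=1e4, electronic_sigma=10.0,
                               rng=np.random.default_rng(0)):
    """Corrupt a clean sinogram (line integrals) with LDCT-style noise."""
    counts = rng.poisson(i0 * np.exp(-sinogram)).astype(float)  # quantum noise
    counts += rng.normal(0.0, electronic_sigma, size=sinogram.shape)
    counts = np.clip(counts, 1.0, None)                         # avoid log(0)
    return -np.log(counts / i0)
```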


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Phantoms, Imaging; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Humans; Image Processing, Computer-Assisted/methods; Deep Learning; Radiation Dosage
3.
IEEE Trans Med Imaging ; 42(12): 3678-3689, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37540616

ABSTRACT

Accurate segmentation of brain tumors is of critical importance in clinical assessment and treatment planning, which requires multiple MR modalities providing complementary information. However, due to practical limitations, one or more modalities may be missing in real scenarios. To tackle this problem, existing methods need to train multiple networks, or a unified but fixed network, for the various possible missing-modality cases, which leads to a high computational burden or sub-optimal performance. In this paper, we propose a unified and adaptive multi-modal MR image synthesis method and further apply it to tumor segmentation with missing modalities. Based on the decomposition of multi-modal MR images into common and modality-specific features, we design a shared hyper-encoder for embedding each available modality into the feature space, a graph-attention-based fusion block to aggregate the features of available modalities into fused features, and a shared hyper-decoder for image reconstruction. We also propose an adversarial common-feature constraint to enforce that the fused features lie in a common space. For missing-modality segmentation, we first conduct feature-level and image-level completion using our synthesis method and then segment the tumors based on the completed MR images together with the extracted common features. Moreover, we design a hypernet-based modulation module to adaptively utilize the real and synthetic modalities. Experimental results suggest that our method can not only synthesize reasonable multi-modal MR images, but also achieve state-of-the-art performance on brain tumor segmentation with missing modalities.
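
The fusion step can be pictured with a simplified sketch: each available modality passes through a shared encoder, and attention scores weight the modality features before aggregation. This is a stand-in assuming plain scalar attention rather than the paper's graph-attention block; names and sizes are illustrative.

```python
# Sketch: embed each available modality with a shared encoder, then
# aggregate with learned per-modality attention weights.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, ch=1, feat=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class AttentionFusion(nn.Module):
    """Weight each modality's feature map by a scalar attention score."""
    def __init__(self, feat=16):
        super().__init__()
        self.score = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Flatten(), nn.Linear(feat, 1))

    def forward(self, feats):              # feats: list of (B, C, H, W)
        scores = torch.cat([self.score(f) for f in feats], dim=1)
        w = torch.softmax(scores, dim=1)   # (B, num_available_modalities)
        stacked = torch.stack(feats, dim=1)
        return (w[..., None, None, None] * stacked).sum(dim=1)
```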


Subject(s)
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted
5.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 12618-12634, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37126627

ABSTRACT

Deep neural networks suffer from catastrophic forgetting when trained on sequential tasks in continual learning. Various methods mitigate catastrophic forgetting by storing data from previous tasks, which is often prohibited in real-world applications due to privacy and security concerns. In this paper, we consider a realistic setting of continual learning, where training data of previous tasks are unavailable and memory resources are limited. We contribute a novel knowledge-distillation-based method in an information-theoretic framework that maximizes the mutual information between the outputs of the previously learned and current networks. Since computing the mutual information is intractable, we instead maximize its variational lower bound, where the covariance of the variational distribution is modeled by a graph convolutional network. The inaccessibility of previous task data is tackled by Taylor expansion, yielding a novel regularizer in the network training loss for continual learning. The regularizer relies on compressed gradients of the network parameters and avoids storing previous task data or previously learned networks. Additionally, we employ a self-supervised learning technique to learn effective features, which improves continual learning performance. We conduct extensive experiments, including image classification and semantic segmentation, and the results show that our method achieves state-of-the-art performance on continual learning benchmarks.
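
The variational bound at the heart of the method can be sketched as a Gaussian distillation loss: model q(z_old | z_new) with a learned mean map and maximize its log-likelihood. The diagonal covariance below is a deliberate simplification (the paper models covariance with a graph convolutional network); all names are illustrative.

```python
# Sketch of a variational lower bound on mutual information used as a
# distillation loss between old and current network outputs.
import torch
import torch.nn as nn

class VariationalMILoss(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mean = nn.Linear(dim, dim)
        self.log_var = nn.Parameter(torch.zeros(dim))  # diagonal covariance

    def forward(self, z_new, z_old):
        mu = self.mean(z_new)
        # Negative Gaussian log-likelihood (up to constants); minimizing
        # this maximizes the variational lower bound on I(z_old; z_new).
        return 0.5 * (((z_old - mu) ** 2) / self.log_var.exp()
                      + self.log_var).sum(dim=-1).mean()
```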

6.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 11521-11539, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37126626

ABSTRACT

Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance. Sample re-weighting methods are popularly used to alleviate this data bias issue. Most current methods, however, require manually pre-specifying a weighting scheme based on the characteristics of the investigated problem and training data, which makes them hard to apply generally in practical scenarios given the significant complexity and inter-class variation of data bias. To address this issue, we propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data. Specifically, by treating each training class as a separate learning task, our method extracts an explicit weighting function, with the sample loss and a task/class feature as input and the sample weight as output, so as to impose adaptively varying weighting schemes on different sample classes according to their own intrinsic bias characteristics. Extensive experiments substantiate the capability of our method to achieve proper weighting schemes in various data bias cases, such as class imbalance, feature-independent and feature-dependent label noise, and more complicated bias scenarios beyond these conventional cases. The task transferability of the learned weighting scheme is also substantiated by directly deploying the weighting function learned on the relatively small CIFAR-10 dataset on the much larger full WebVision dataset. The general applicability of our method to multiple robust deep learning problems, including partial-label learning, semi-supervised learning, and selective classification, has also been validated. Code for reproducing our experiments is available at https://github.com/xjtushujun/CMW-Net.
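
The explicit weighting function described above can be sketched as a tiny MLP from (sample loss, class feature) to a weight in [0, 1]; in the paper this meta-model is trained in a meta-learning fashion, which is omitted here. Layer sizes and the feature encoding are assumptions.

```python
# Sketch of an explicit meta-learned weighting function: an MLP maps a
# sample's loss value plus a task/class feature to a sample weight.
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    def __init__(self, class_feat_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + class_feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # weight in (0, 1)
        )

    def forward(self, sample_loss, class_feat):
        # sample_loss: (B, 1); class_feat: (B, class_feat_dim)
        return self.net(torch.cat([sample_loss, class_feat], dim=1))
```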

7.
IEEE Trans Cybern ; 53(9): 5469-5482, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35286274

ABSTRACT

Detecting overlapping communities in an attribute network is a ubiquitous yet very difficult task, which can be modeled as a discrete optimization problem. Besides the topological structure of the network, node attributes and node overlap significantly aggravate the difficulty of community detection. In this article, we propose a novel continuous encoding method that converts the discrete-natured detection problem into a continuous one by associating each edge and each node attribute in the network with a continuous variable. Based on this encoding, we solve the converted continuous problem with a decomposition-based multiobjective evolutionary algorithm (MOEA). To find the overlapping nodes, a heuristic based on double decoding is proposed, which has only linear complexity. Furthermore, a post-processing community-merging method that accounts for node attributes is developed to enhance the homogeneity of nodes within the detected communities. Various synthetic and real-world networks are used to verify the effectiveness of the proposed approach. The experimental results show that the proposed approach performs significantly better than a variety of evolutionary and non-evolutionary methods on most of the benchmark networks.
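
To make the continuous encoding concrete, here is a simplified decoding sketch: each edge carries a variable in [0, 1], low-valued edges are cut, and connected components of the remaining graph become communities. This is a stand-in for, not a reproduction of, the paper's double-decoding heuristic.

```python
# Illustrative decoding of a continuous encoding for community detection.
import networkx as nx

def decode_communities(graph, edge_vars, threshold=0.5):
    """edge_vars: dict mapping (u, v) edges to values in [0, 1]."""
    kept = [(u, v) for (u, v), x in edge_vars.items() if x >= threshold]
    sub = nx.Graph()
    sub.add_nodes_from(graph.nodes)
    sub.add_edges_from(kept)
    return [set(c) for c in nx.connected_components(sub)]
```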

8.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3505-3521, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35724299

ABSTRACT

The learning rate (LR) is one of the most important hyperparameters of the stochastic gradient descent (SGD) algorithm for training deep neural networks (DNNs). However, current hand-designed LR schedules must pre-specify a fixed form, which limits their ability to adapt to practical non-convex optimization problems with highly diverse training dynamics. Moreover, a proper LR schedule usually has to be searched from scratch for each new task, since suitable schedules often differ greatly across tasks, e.g., with data modality, network architecture, or training-data size. To address this LR-schedule setting issue, we propose to parameterize LR schedules with an explicit mapping formulation, called MLR-SNet. The learnable parameterized structure gives MLR-SNet the flexibility to learn a proper LR schedule that complies with the training dynamics of the DNN. Image and text classification benchmark experiments substantiate the capability of our method to achieve proper LR schedules. Moreover, the explicit parameterized structure makes the meta-learned LR schedules transferable and plug-and-play, so they can easily be generalized to new heterogeneous tasks. We transfer our meta-learned MLR-SNet to query tasks that differ from the training ones in training epochs, network architecture, data modality, or dataset size, and achieve comparable or even better performance than hand-designed LR schedules specifically tuned for the query tasks. The robustness of MLR-SNet is also substantiated when the training data are biased with corrupted noise. We further prove the convergence of the SGD algorithm equipped with the LR schedule produced by MLR-SNet, with a convergence rate comparable to the best known for the algorithm. The source code of our method is released at https://github.com/xjtushujun/MLR-SNet.
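
An explicit, learnable LR schedule of this kind can be sketched as a small recurrent network that maps the current training loss to a bounded LR, so the schedule reacts to training dynamics. The LSTM cell, sizes, and output scaling here are assumptions, not the paper's exact architecture.

```python
# Sketch of a learnable LR-schedule network.
import torch
import torch.nn as nn

class LRScheduleNet(nn.Module):
    def __init__(self, hidden=16, max_lr=0.1):
        super().__init__()
        self.rnn = nn.LSTMCell(1, hidden)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())
        self.max_lr = max_lr
        self.state = None

    def forward(self, loss_value):
        # loss_value: scalar tensor holding the current training loss.
        x = loss_value.detach().view(1, 1)
        h, c = self.rnn(x, self.state)
        self.state = (h, c)
        return self.max_lr * self.head(h)   # LR in (0, max_lr)
```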

9.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4537-4551, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35930514

ABSTRACT

It has been shown that equivariant convolution is very helpful for many types of computer vision tasks. Recently, the 2D filter parametrization technique has played an important role in designing equivariant convolutions, and has succeeded in exploiting the rotation symmetry of images. However, current filter parametrization strategies still have evident drawbacks, the most critical being the accuracy of the filter representation. To address this issue, in this paper we explore an ameliorated Fourier series expansion for 2D filters and propose a new filter parametrization method based on it. The proposed method not only represents 2D filters with zero error when the filter is not rotated (as in the classical Fourier series expansion), but also substantially alleviates the aliasing-induced quality degradation when the filter is rotated (which usually arises with the classical Fourier series expansion). Accordingly, we construct a new equivariant convolution method based on the proposed filter parametrization, named F-Conv. We prove that the equivariance of F-Conv is exact in the continuous domain and becomes approximate only after discretization. Moreover, we provide a theoretical error analysis for the case where the equivariance is approximate, showing that the approximation error is related to the mesh size and the filter size. Extensive experiments show the superiority of the proposed method. In particular, we apply rotation-equivariant convolution methods to a typical low-level image processing task, image super-resolution, and substantiate that the proposed F-Conv-based method evidently outperforms classical convolution-based methods. Compared with previous filter-parametrization-based methods, F-Conv performs more accurately on this low-level task, reflecting its intrinsic capability to faithfully preserve rotation symmetries in local image features.
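
The underlying idea of Fourier-series filter parametrization can be sketched as follows: a filter is a weighted sum of Fourier basis functions sampled on a grid, and a rotated filter is obtained by evaluating the same series on a rotated grid. The basis truncation and sampling grid below are assumptions, and the sketch deliberately omits the paper's anti-aliasing amelioration.

```python
# Sketch: a 2D filter as a truncated Fourier series; rotation is applied
# by rotating the sampling coordinates.
import numpy as np

def fourier_filter(coeffs, size=7, angle=0.0):
    """coeffs: complex array (K, K) of Fourier coefficients."""
    k = coeffs.shape[0]
    xs = np.linspace(-1, 1, size)
    x, y = np.meshgrid(xs, xs)
    c, s = np.cos(angle), np.sin(angle)
    xr, yr = c * x - s * y, s * x + c * y      # rotate sampling grid
    filt = np.zeros((size, size), dtype=complex)
    for p in range(k):
        for q in range(k):
            filt += coeffs[p, q] * np.exp(1j * np.pi * (p * xr + q * yr))
    return filt.real
```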

10.
IEEE Trans Cybern ; PP: 2022 Aug 22.
Article in English | MEDLINE | ID: mdl-35994533

ABSTRACT

Matrix factorization (MF) methods decompose a data matrix into a product of two factor matrices (denoted U and V) of low rank. In this article, we propose a generative latent variable model for the data matrix, in which each entry is assumed to be Gaussian with mean equal to the inner product of the corresponding columns of U and V. The prior of each column of U and V is assumed to be a finite mixture of Gaussians. Further, building upon this model for the data matrix, we propose to model the attribute matrix jointly with the data matrix by treating them as conditionally independent given the factor matrix U. Due to the intractability of the proposed models, we employ variational Bayes to infer the posteriors of the factor matrices and the clustering relationships, and to optimize the model parameters. In our development, the posteriors and model parameters can be computed in closed form, which is much more computationally efficient than existing sampling-based probabilistic MF models. Comprehensive experimental studies of the proposed methods on collaborative filtering and community detection tasks demonstrate that they achieve state-of-the-art performance against a wide range of MF-based and non-MF-based algorithms.
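
The generative model reads naturally as a sampler: draw mixture components, draw factor columns around them, then draw each data entry as a Gaussian around the corresponding inner product. A sketch under assumed mixture settings (factor orientation and noise levels are illustrative):

```python
# Generative sketch of the probabilistic MF model with Gaussian-mixture
# priors on the factors.
import numpy as np

def sample_mf(n, m, rank=5, k=3, noise=0.1, rng=np.random.default_rng(0)):
    means = rng.normal(0, 1, size=(k, rank))   # mixture component means
    zu = rng.integers(k, size=n)               # cluster index per row factor
    zv = rng.integers(k, size=m)               # cluster index per column factor
    U = means[zu] + 0.1 * rng.normal(size=(n, rank))
    V = means[zv] + 0.1 * rng.normal(size=(m, rank))
    X = U @ V.T + noise * rng.normal(size=(n, m))  # Gaussian around <u_i, v_j>
    return X, zu, zv
```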

11.
Article in English | MEDLINE | ID: mdl-35275811

ABSTRACT

Adversarial domain adaptation is an effective approach for learning domain-invariant features via adversarial training. In this paper, we propose a novel adversarial domain adaptation approach defined in a spherical feature space, in which we define a spherical classifier for label prediction and a spherical domain discriminator for discriminating domain labels. In the spherical feature space, we develop a robust pseudo-label loss that weights the importance of the estimated labels of target data by the posterior probability of correct labeling, modeled by a Gaussian-uniform mixture model in the spherical space. Our approach applies to both unsupervised and semi-supervised domain adaptation settings. In particular, to tackle the semi-supervised setting, where a few labeled target data are available for training, we propose a novel reweighted adversarial training strategy that effectively reduces the intra-domain discrepancy within the target domain. We also present a theoretical analysis of the proposed method based on domain adaptation theory. Extensive experiments are conducted on benchmarks for multiple applications, including object recognition, digit recognition, and face recognition. The results show that our method either surpasses or is competitive with recent methods for both unsupervised and semi-supervised domain adaptation.
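
The robust pseudo-label weighting can be sketched as follows: normalize features onto the unit sphere, measure each target sample's angular error to its pseudo-class center, and take the posterior of the "correct" (Gaussian) component in a Gaussian-uniform mixture as the sample weight. The fixed mixture parameters below are assumptions; the paper estimates them from data.

```python
# Sketch of posterior-probability weighting for pseudo-labels on the
# unit sphere, under an assumed Gaussian + uniform mixture.
import numpy as np

def pseudo_label_weights(feats, class_centers, labels,
                         pi_correct=0.8, sigma=0.3):
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    c = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)
    err = np.arccos(np.clip((f * c[labels]).sum(axis=1), -1, 1))
    gauss = np.exp(-0.5 * (err / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    uniform = 1.0 / np.pi                    # angular errors lie in [0, pi]
    return pi_correct * gauss / (pi_correct * gauss
                                 + (1 - pi_correct) * uniform)
```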

12.
Natl Sci Rev ; 9(2): nwab183, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35242339

ABSTRACT

Clustering, the discovery of latent group structure in data, is a fundamental problem in artificial intelligence and a vital procedure in data-driven scientific research across all disciplines. Yet existing methods have various limitations, especially weak cognitive interpretability and poor computational scalability, when it comes to clustering the massive datasets that are increasingly available in all domains. Here, by simulating the multi-scale cognitive observation process of humans, we design a scalable algorithm to detect clusters hierarchically hidden in massive datasets. The observation scale changes following the Weber-Fechner law, capturing the gradually emerging meaningful grouping structure. We validated our approach on real datasets with up to a billion records and 2000 dimensions, including taxi trajectories, single-cell gene expressions, face images, computer logs, and audio recordings. Our approach outperformed popular methods in usability, efficiency, effectiveness, and robustness across different domains.
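
The Weber-Fechner law states that perceived intensity grows with the logarithm of the stimulus, so equal perceptual steps correspond to geometric steps in observation scale. A minimal sketch of such a scale schedule (the endpoints and step count are illustrative, not the paper's settings):

```python
# Equal steps in log-scale space give a geometric progression of
# observation scales, mirroring the Weber-Fechner law.
import numpy as np

def observation_scales(s_min, s_max, steps=10):
    return np.exp(np.linspace(np.log(s_min), np.log(s_max), steps))

print(observation_scales(0.01, 10.0, 5))  # e.g. a sequence of clustering radii
```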

13.
IEEE Trans Pattern Anal Mach Intell ; 44(3): 1457-1473, 2022 Mar.
Article in English | MEDLINE | ID: mdl-32780695

ABSTRACT

Multispectral and hyperspectral image fusion (MS/HS fusion) aims to fuse a high-resolution multispectral (HrMS) image and a low-resolution hyperspectral (LrHS) image to generate a high-resolution hyperspectral (HrHS) image, and has become one of the most commonly addressed problems in hyperspectral image processing. In this paper, we design a network architecture specifically for the MS/HS fusion task, called MHF-net, which is clearly interpretable and reasonably embeds the well-studied linear mapping that links the HrHS image to the HrMS and LrHS images. In particular, we first construct an MS/HS fusion model that merges the generalization models of the low-resolution images and the low-rankness prior of the HrHS image into a concise formulation, and then build the proposed network by unfolding the proximal gradient algorithm for solving this model. As a result of the careful design of model and algorithm, all the fundamental modules in MHF-net have clear physical meanings and are thus easily interpretable. This not only facilitates intuitive observation and analysis of what happens inside the network, but also leads to good generalization capability. Based on the MHF-net architecture, we further design two deep learning regimes for two general cases in practice: consistent MHF-net and blind MHF-net. The former suits the case where the spectral and spatial responses of training and testing data are consistent, as assumed in most previous supervised MS/HS fusion research. The latter ensures good generalization when the spectral and spatial responses of training and testing data are mismatched, and even across different sensors, which is generally considered a challenging issue for supervised MS/HS fusion methods. Experimental results on simulated and real data substantiate the superiority of our method, both visually and quantitatively, over state-of-the-art methods along this line of research.
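
The linear mapping that MHF-net embeds is the standard MS/HS observation model: the HrMS image is a spectral projection of the HrHS cube, and the LrHS image is a spatially blurred and downsampled version of it. A sketch with an assumed box-blur downsampling operator and an assumed spectral response matrix:

```python
# Standard MS/HS observation model (operator details are assumptions).
import numpy as np

def observe(hrhs, srf, scale=4):
    """hrhs: (H, W, S) high-res hyperspectral cube.
    srf: (S, s) spectral response mapping S bands to s MS bands."""
    hrms = hrhs @ srf                                # spectral degradation
    h, w, _ = hrhs.shape
    lrhs = hrhs.reshape(h // scale, scale,
                        w // scale, scale, -1).mean(axis=(1, 3))
    return hrms, lrhs                                # box-blur downsampling
```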

14.
IEEE Trans Cybern ; 52(8): 7791-7804, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33566785

ABSTRACT

In this article, we first propose a graph neural network encoding method for the multiobjective evolutionary algorithm (MOEA) to handle the community detection problem in complex attribute networks. In the graph neural network encoding method, each edge in an attribute network is associated with a continuous variable. Through a nonlinear transformation, a continuous-valued vector (i.e., the concatenation of the continuous variables associated with the edges) is transformed into a discrete-valued community grouping solution. Further, two objective functions, for single-attribute and multi-attribute networks respectively, are proposed to evaluate the attribute homogeneity of the nodes in communities. Based on the new encoding method and the two objectives, an MOEA based on NSGA-II, called continuous encoding MOEA, is developed for the transformed community detection problem with continuous decision variables. Experimental results on single-attribute and multi-attribute networks of different types show that the developed algorithm performs significantly better than several well-known evolutionary and non-evolutionary algorithms. A fitness landscape analysis verifies that the transformed community detection problems have smoother landscapes than the original ones, which justifies the effectiveness of the proposed graph neural network encoding method.
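
The continuous-to-discrete step can be sketched as squashing each edge variable through a nonlinearity and thresholding it into a keep/cut decision; communities then follow as connected components, as in the decoding sketch under item 7. The sigmoid transform below is an assumption, not the paper's exact mapping.

```python
# Sketch of the nonlinear continuous-to-discrete edge decision.
import numpy as np

def edge_keep_mask(edge_vars, sharpness=8.0, threshold=0.5):
    x = np.asarray(edge_vars, dtype=float)
    probs = 1.0 / (1.0 + np.exp(-sharpness * (x - 0.5)))  # soft threshold
    return probs >= threshold
```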


Subject(s)
Algorithms; Neural Networks, Computer
15.
IEEE Trans Pattern Anal Mach Intell ; 44(8): 4469-4484, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33621172

ABSTRACT

Stochastic optimization algorithms are popular for training deep neural networks. Recently, learning-based optimizers have emerged and achieved promising performance for training neural networks. However, these black-box learned optimizers do not take full advantage of the experience embodied in human-designed optimizers and rely heavily on learning from meta-training tasks, and therefore have limited generalization ability. In this paper, we propose a novel optimizer, dubbed Variational HyperAdam, which places a parametric generalized Adam algorithm, i.e., HyperAdam, in a variational framework. With Variational HyperAdam as the optimizer, the network's parameter update vector at each training step is treated as a random variable whose approximate posterior distribution, given the training data and the current parameter vector, is predicted by Variational HyperAdam; the update applied during training is sampled from this approximate posterior. Specifically, Variational HyperAdam pairs a learnable generalized Adam algorithm for estimating the expectation with a VarBlock for estimating the variance of the approximate posterior of the update vector. Variational HyperAdam is learned via meta-learning, with the meta-training loss derived by variational inference. Experiments verify that the learned Variational HyperAdam achieves state-of-the-art network training performance for various types of networks on different datasets, such as multilayer perceptrons, CNNs, LSTMs, and ResNets.
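
The sampled-update idea can be sketched as an Adam-style step whose mean comes from moment estimates and whose variance is a learned quantity; a single learnable log-variance scalar stands in for the paper's VarBlock network, and the hyperparameters are illustrative.

```python
# Sketch: parameter updates sampled from a Gaussian whose mean is an
# Adam-style step and whose variance is learned.
import torch
import torch.nn as nn

class VariationalAdamStep(nn.Module):
    def __init__(self, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        super().__init__()
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.log_var = nn.Parameter(torch.tensor(-6.0))  # stands in for VarBlock
        self.m = None
        self.v = None

    def forward(self, grad):
        self.m = grad if self.m is None else self.b1 * self.m + (1 - self.b1) * grad
        self.v = grad ** 2 if self.v is None else self.b2 * self.v + (1 - self.b2) * grad ** 2
        mean = -self.lr * self.m / (self.v.sqrt() + self.eps)  # Adam-style mean
        std = (0.5 * self.log_var).exp()
        return mean + std * torch.randn_like(mean)             # sampled update
```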

16.
Neural Netw ; 135: 91-104, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33373885

ABSTRACT

Recently, the focus of functional connectivity analysis of the human brain has shifted from merely revealing inter-regional functional correlations over the entire scan duration to capturing time-varying information of brain networks and characterizing time-resolved reoccurring patterns of connectivity. Much effort has been invested in developing approaches that can track changes in reoccurring patterns of functional connectivity over time. In this paper, we propose a sparse deep dictionary learning method to characterize the essential differences in reoccurring patterns of time-varying functional connectivity between different age groups. The proposed method combines the interpretability of sparse dictionary learning with the capability of a sparse deep autoencoder to extract sparse, nonlinear, higher-level features in its latent space. In other words, it learns a sparse dictionary of the original data over the nonlinear representation produced by the encoder layer of a sparse deep autoencoder. In this way, the nonlinear structure and higher-level features of the data can be captured by deep dictionary learning. The proposed method is applied to the analysis of the Philadelphia Neurodevelopmental Cohort. It shows that essential differences exist in the reoccurring patterns of functional connectivity between the child and young-adult groups. Specifically, children have more diffuse functional connectivity patterns, while young adults possess more focused ones, and brain function transitions from undifferentiated systems to specialized neural networks with growth.
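
The building block combining sparsity with a deep latent representation can be sketched as a sparse autoencoder whose linear decoder plays the role of the dictionary and whose codes carry an L1 penalty. Sizes and the penalty weight are assumptions.

```python
# Sketch of a sparse deep autoencoder with a linear decoder acting as
# the dictionary; codes are L1-penalized to stay sparse.
import torch
import torch.nn as nn

class SparseDeepAE(nn.Module):
    def __init__(self, dim, code=32, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, code), nn.ReLU())
        self.dec = nn.Linear(code, dim, bias=False)  # dictionary atoms

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def loss_fn(model, x, l1=1e-3):
    recon, z = model(x)
    return ((recon - x) ** 2).mean() + l1 * z.abs().mean()
```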


Subject(s)
Algorithms; Brain/diagnostic imaging; Brain/growth & development; Deep Learning; Neural Networks, Computer; Adolescent; Child; Child, Preschool; Female; Humans; Infant; Magnetic Resonance Imaging/methods; Male; Young Adult
17.
IEEE Trans Cybern ; 51(3): 1556-1570, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31880577

ABSTRACT

It is known that boosting can be interpreted as an optimization technique that minimizes an underlying loss function. Specifically, the loss minimized by traditional AdaBoost is the exponential loss, which proves to be very sensitive to random noise and outliers. Therefore, several boosting algorithms, e.g., LogitBoost and SavageBoost, have been proposed to improve the robustness of AdaBoost by replacing the exponential loss with specially designed robust loss functions. In this article, we present a new way to robustify AdaBoost: incorporating the robust learning idea of self-paced learning (SPL) into the boosting framework. Specifically, we design a new robust boosting algorithm based on the SPL regime, called SPLBoost, which can easily be implemented by slightly modifying off-the-shelf boosting packages. Extensive experiments and a theoretical characterization illustrate the merits of the proposed SPLBoost.
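
The self-paced ingredient can be sketched with the classic hard-threshold SPL regularizer: samples whose current exponential loss exceeds an age parameter lambda receive zero weight, so likely outliers are excluded, and lambda grows across rounds to gradually admit harder samples. The hard threshold is one standard SPL choice assumed here, not necessarily the paper's exact regularizer.

```python
# Sketch of hard-threshold self-paced weights over AdaBoost's loss.
import numpy as np

def spl_weights(margins, lam):
    """margins: y_i * F(x_i) from the current ensemble; lam: SPL age."""
    losses = np.exp(-margins)          # AdaBoost's exponential loss
    return (losses < lam).astype(float)
```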

18.
IEEE Trans Med Imaging ; 39(12): 4249-4261, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32780700

ABSTRACT

Synthesizing a CT image from an available MR image has recently emerged as a key goal in radiotherapy treatment planning for cancer patients. CycleGANs have achieved promising results on unsupervised MR-to-CT image synthesis; however, because they impose no direct constraints between the input and synthetic images, they do not guarantee structural consistency between the two. This means that anatomical geometry can shift in the synthetic CT images, clearly a highly undesirable outcome in this application. In this paper, we propose a structure-constrained cycleGAN for unsupervised MR-to-CT synthesis by defining an extra structure-consistency loss based on the modality-independent neighborhood descriptor. We also utilize a spectral normalization technique to stabilize the training process and a self-attention module to model long-range spatial dependencies in the synthetic images. Results on unpaired brain and abdomen MR-to-CT image synthesis show that our method produces better synthetic CT images in terms of both accuracy and visual quality than other unsupervised synthesis methods. We also show that an approximate affine pre-registration of the unpaired training data can improve synthesis results.
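
The extra loss term can be sketched as an L1 penalty between descriptor maps of the input MR and the synthetic CT, added to the usual cycleGAN objectives. The descriptor function `mind` is taken as given here; computing the modality-independent neighborhood descriptor itself is not shown.

```python
# Sketch of a structure-consistency term for cycleGAN training; `mind`
# is an assumed descriptor function mapping images to descriptor maps.
import torch

def structure_consistency_loss(real_mr, fake_ct, mind, weight=1.0):
    return weight * torch.mean(torch.abs(mind(real_mr) - mind(fake_ct)))
```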


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Abdomen; Humans; Magnetic Resonance Imaging; Radiotherapy Planning, Computer-Assisted
19.
IEEE Trans Med Imaging ; 39(9): 2831-2843, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32112677

ABSTRACT

Energy-resolved computed tomography (ErCT) with a photon-counting detector concurrently produces multiple CT images corresponding to different photon energy ranges. It has the potential to generate energy-dependent images with improved contrast-to-noise ratio and sufficient material-specific information. Since the number of photons detected in one energy bin in ErCT is smaller than that in conventional energy-integrating CT (EiCT), ErCT images are inherently noisier than EiCT images, which leads to increased noise and bias in the subsequent material estimation. In this work, we first analyze in depth the intrinsic tensor properties of two-dimensional (2D) ErCT images acquired in different energy bins, and then present a Full-Spectrum-knowledge-aware Tensor analysis and processing (FSTensor) method for ErCT reconstruction that suppresses noise-induced artifacts to obtain high-quality ErCT images and high-accuracy material images. The presented method is based on three considerations: (1) 2D ErCT images obtained in different energy bins can be treated as a third-order tensor with three modes, i.e., width, height, and energy bin, and a rich global correlation exists among the three modes, which can be characterized by tensor decomposition. (2) The third-order ErCT images have a locally piecewise-smooth property, which can be captured by a tensor total variation regularization. (3) The full-spectrum images are much better than the ErCT images with respect to noise variance and structural detail, and serve as external information to improve reconstruction performance. We then develop an alternating direction method of multipliers algorithm to numerically solve the presented FSTensor method, and further utilize a genetic algorithm to tackle parameter selection in ErCT reconstruction instead of determining parameters manually. Simulated, preclinical, and synthesized clinical ErCT results demonstrate that the presented FSTensor method yields significant improvements over filtered back-projection, robust principal component analysis, tensor-based dictionary learning, and low-rank tensor decomposition with spatial-temporal total variation methods.
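
Consideration (1) can be illustrated in isolation: stack the per-bin ErCT images into a third-order tensor (height, width, energy bin) and truncate the SVD of its mode-3 unfolding to exploit the strong cross-bin correlation. The rank is illustrative, and this sketch omits the paper's tensor total variation and full-spectrum guidance terms.

```python
# Sketch of the low-rank tensor view across energy bins.
import numpy as np

def lowrank_across_bins(tensor, rank=3):
    """tensor: (H, W, B) stack of per-bin ErCT images."""
    h, w, b = tensor.shape
    unfolded = tensor.reshape(h * w, b)      # pixels x energy bins
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return approx.reshape(h, w, b)
```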


Subject(s)
Photons; Tomography, X-Ray Computed; Algorithms; Image Processing, Computer-Assisted; Phantoms, Imaging
20.
Appl Spectrosc ; 74(5): 583-596, 2020 May.
Article in English | MEDLINE | ID: mdl-31880169

ABSTRACT

Hadamard coding spectral imaging is a computational spectral imaging technology that modulates the target's spectral information and recovers the original spectrum by an inverse transformation. Because it offers the advantage of multichannel detection, it is being studied by a growing number of researchers. In the engineering realization of a push-broom coding spectral imaging instrument, the system is inevitably subject to push-broom error, template error, and detection noise, as well as the redundant-sampling problem caused by the detector. Therefore, three restoration methods are presented in this paper: (1) a least-squares solution; (2) a zero-filling inverse solution, obtained by extending the coding matrix in the redundant coding state to a complete higher-order Hadamard matrix; and (3) a sparse-reconstruction method. Numerical and theoretical analysis shows that the zero-filling inverse solution is more robust and better suited to engineering applications: its condition number, error expectation, and covariance are better and more stable because it directly uses the Hadamard matrix, which has good generalized orthogonality. A real-time spectral reconstruction method based on the zero-filling inverse solution is then presented. Finally, simulation analysis shows that the spectral data can be reconstructed with acceptable accuracy under these error conditions; however, template noise and push-broom error affect reconstruction far more than detection noise does. Therefore, in addition to reducing detection noise as much as possible, lower template noise and more accurate push-broom control should be ensured in the engineering realization.
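
The zero-filling inverse can be sketched directly: pad the measurement vector taken with a truncated Hadamard coding matrix up to the full matrix order with zeros, then apply the exact inverse H^T / n that follows from Hadamard orthogonality (H H^T = nI). Matrix sizes and noise levels below are illustrative, not the instrument's.

```python
# Sketch of the zero-filling inverse for redundant Hadamard coding.
import numpy as np
from scipy.linalg import hadamard

n, m = 32, 24                       # full order, measured coding rows
H = hadamard(n)
rng = np.random.default_rng(0)
x = rng.random(n)                   # true spectrum
y = H[:m] @ x + 0.01 * rng.normal(size=m)   # redundant-state measurements

y_full = np.zeros(n)
y_full[:m] = y                      # zero-fill the missing codes
x_hat = H.T @ y_full / n            # generalized-orthogonality inverse
```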
