Results 1 - 8 of 8
1.
Opt Express; 28(3): 3879-3894, 2020 Feb 03.
Article in English | MEDLINE | ID: mdl-32122049

ABSTRACT

We present a computational method for full-range interferometric synthetic aperture microscopy (ISAM) under dispersion encoding. This effectively doubles the depth range of optical coherence tomography (OCT) while dramatically enhancing the spatial resolution away from the focal plane. To this end, we propose a model-based iterative reconstruction (MBIR) method in which ISAM is considered directly in an optimization approach, and we find that sparsity-promoting regularization effectively recovers the full-range signal. We adopt an optimal nonuniform fast Fourier transform (NUFFT) implementation of ISAM, which is both fast and numerically stable throughout the iterations. We validate our method on several complex samples, scanned with a commercial SD-OCT system with no hardware modification, demonstrating full-range ISAM imaging and significantly outperforming combinations of existing methods.
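As an illustration of the sparsity-promoting iterative reconstruction idea in this abstract, here is a minimal sketch using the generic ISTA algorithm on a toy linear inverse problem. The forward matrix, signal sizes, and regularization weight are illustrative assumptions, not the paper's ISAM/NUFFT operator.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm: the sparsity-promoting step
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    # iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), lam * step)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # toy underdetermined forward model
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]             # sparse scene
y = A @ x_true                                     # noiseless measurements
x_hat = ista(A, y, lam=0.01)
```

Despite having fewer measurements than unknowns, the l1 penalty recovers the sparse support, which is the mechanism the abstract credits for recovering the full-range signal.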

2.
Magn Reson Med; 70(2): 392-403, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23172794

ABSTRACT

A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on sampling patterns composed of several different lattices and exploits an image model in which only some parts of the image are dynamic while the rest is assumed static. Given the parameters of such an image model, the methodology for designing a multilattice sampling pattern adapted to the model is described. The multilattice approach is compared with single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) and k-t BLAST, and with the random sampling used by modern compressed-sensing-based methods. On the considered image model, it allows more flexibility and higher acceleration than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood-velocity mapping MR experiment, where combining the multilattice approach with the KEYHOLE technique allows acceleration factors of up to 12×. Simulation and in vivo undersampling results validate the method: compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors.
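A minimal sketch of the multilattice idea: a Cartesian k-t sampling mask built from two lattices, a sparse time-shifted lattice over all of k-space plus a denser lattice on the central lines (KEYHOLE-style). All sizes, strides, and offsets here are illustrative assumptions, not the paper's design parameters.

```python
import numpy as np

def lattice_mask(n_k, n_t, stride, offset_per_frame):
    # each time frame samples phase-encode lines on a lattice that
    # shifts by offset_per_frame lines from frame to frame
    mask = np.zeros((n_t, n_k), dtype=bool)
    for t in range(n_t):
        mask[t, (t * offset_per_frame) % stride::stride] = True
    return mask

n_k, n_t = 128, 16
# lattice 1: sparse coverage of the full k-space (captures dynamic content)
outer = lattice_mask(n_k, n_t, stride=8, offset_per_frame=3)
# lattice 2: denser coverage of the 16 central lines (KEYHOLE-style contrast)
center = np.zeros_like(outer)
center[:, n_k // 2 - 8 : n_k // 2 + 8] = lattice_mask(16, n_t, stride=2, offset_per_frame=1)
# the multilattice pattern is the union of the component lattices
mask = outer | center
acceleration = mask.size / mask.sum()
```

Composing lattices this way gives finer control over where aliasing lands than a single lattice, which is the flexibility the abstract refers to.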


Subjects
Algorithms; Carotid Artery, Common/anatomy & histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Angiography/methods; Signal Processing, Computer-Assisted; Humans; Reproducibility of Results; Sample Size; Sensitivity and Specificity
3.
IEEE Trans Med Imaging; 41(1): 3-13, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34351855

ABSTRACT

Deep convolutional neural networks (CNNs) have emerged as a new paradigm for mammogram diagnosis. Contemporary CNN-based computer-aided diagnosis (CAD) systems for breast cancer directly extract latent features from the input mammogram and ignore the importance of morphological features. In this paper, we introduce a novel end-to-end deep learning framework for mammogram image processing that computes mass segmentation and simultaneously predicts diagnosis results. Specifically, our method uses a dual-path architecture that solves the mapping in a dual-problem manner, with additional consideration of important shape and boundary knowledge. One path, the Locality Preserving Learner (LPL), is devoted to hierarchically extracting and exploiting intrinsic features of the input, whereas the other path, the Conditional Graph Learner (CGL), focuses on generating geometrical features by modeling pixel-wise image-to-mask correlations. By integrating the two learners, both cancer semantics and cancer representations are well learned, and the component learning paths complement each other, improving mass segmentation and cancer classification at the same time. In addition, by integrating an automatic detection set-up, DualCoreNet achieves fully automatic breast cancer diagnosis in practice. Experimental results show that on the benchmark DDSM dataset, DualCoreNet outperforms related works in both segmentation and classification, achieving a 92.27% Dice coefficient and a 0.85 AUC score. On the INbreast benchmark, DualCoreNet achieves the best mammography segmentation (93.69% Dice coefficient) and competitive classification performance (0.93 AUC score).
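The dual-path design can be caricatured without any deep-learning machinery: one path summarizes latent intensity content and the other derives morphological (shape and boundary) features from a mask, and the two are fused for classification. This toy NumPy sketch is an analogy for the fusion idea only; the feature functions are assumptions, not the LPL/CGL networks.

```python
import numpy as np

def lpl_path(img):
    # stand-in for the Locality Preserving Learner: latent intensity features
    return np.array([img.mean(), img.std()])

def cgl_path(mask):
    # stand-in for the Conditional Graph Learner: morphological mask features
    area = mask.mean()
    edges = (np.abs(np.diff(mask, axis=0)).sum()
             + np.abs(np.diff(mask, axis=1)).sum())   # crude boundary length
    return np.array([area, edges / mask.size])

def dual_path_features(img, mask):
    # fuse latent and shape/boundary features, mirroring the dual-path fusion
    return np.concatenate([lpl_path(img), cgl_path(mask)])

img = np.zeros((32, 32))
img[10:20, 12:22] = 1.0          # toy "mass"
mask = (img > 0.5).astype(float)  # its segmentation mask
feats = dual_path_features(img, mask)
```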


Subjects
Breast Neoplasms; Mammography; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Female; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer
4.
J Imaging; 7(10), 2021 Oct 14.
Article in English | MEDLINE | ID: mdl-34677298

ABSTRACT

In this paper, we address the problem of activity estimation in passive gamma emission tomography (PGET) of spent nuclear fuel. Two noise models are considered and compared: isotropic Gaussian and Poisson. The problem is formulated within a Bayesian framework as a linear inverse problem, and prior distributions are assigned to the unknown model parameters. In particular, a Bernoulli-truncated Gaussian prior is used to promote sparse pin configurations. A Markov chain Monte Carlo (MCMC) method based on a split-and-augmented Gibbs sampler is then used to sample the posterior distribution of the unknown parameters. The proposed algorithm is first validated on synthetic data generated from the nominal models. We then consider more realistic data simulated with a bespoke simulator whose forward model is non-linear and not available analytically; in that case the linear models are mis-specified, and we analyse their robustness for activity estimation. The results demonstrate superior performance of the proposed approach in estimating pin activities across different assembly patterns, together with the ability to quantify their uncertainty, in comparison with existing methods.
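To make the Bernoulli-truncated Gaussian prior concrete: each pin is active with some probability, and an active pin draws a non-negative activity from a Gaussian truncated at zero. This sketch only samples from such a prior (the activity scale and activation probability are illustrative assumptions); it is not the paper's Gibbs sampler for the posterior.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_prior(rng, n_pins, p_active, mu, sigma):
    # Bernoulli component: which pins hold active fuel
    active = rng.random(n_pins) < p_active
    # truncated-Gaussian component: non-negative activity, truncated at zero
    a = (0.0 - mu) / sigma  # lower truncation point in standard units
    acts = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                         size=n_pins, random_state=rng)
    # inactive pins have exactly zero activity, giving sparse configurations
    return np.where(active, acts, 0.0)

rng = np.random.default_rng(1)
x = sample_prior(rng, 1000, p_active=0.7, mu=5.0, sigma=1.0)
```

The point mass at zero is what encodes sparsity: a posterior built on this prior can place probability directly on "pin absent" versus "pin present with this activity", which is how the uncertainty quantification in the abstract arises.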

5.
Sci Rep; 10(1): 6811, 2020 Apr 22.
Article in English | MEDLINE | ID: mdl-32321941

ABSTRACT

We propose a sparsity-promoting Bayesian algorithm capable of identifying radionuclide signatures from weak sources in the presence of a high radiation background. The proposed method is relevant to radiation identification for security applications. In such scenarios, the background typically consists of terrestrial, cosmic, and cosmogenic radiation that may cause false positive responses. We evaluate the new Bayesian approach using gamma-ray data and are able to identify weapons-grade plutonium, masked by naturally-occurring radioactive material (NORM), in a measurement time of a few seconds. We demonstrate this identification capability using organic scintillators (stilbene crystals and EJ-309 liquid scintillators), which do not provide direct, high-resolution, source spectroscopic information. Compared to the EJ-309 detector, the stilbene-based detector exhibits a lower identification error, on average, owing to its better energy resolution. Organic scintillators are used within radiation portal monitors to detect gamma rays emitted from conveyances crossing ports of entry. The described method is therefore applicable to radiation portal monitors deployed in the field and could improve their threat discrimination capability by minimizing "nuisance" alarms produced either by NORM-bearing materials found in shipped cargoes, such as ceramics and fertilizers, or radionuclides in recently treated nuclear medicine patients.
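The identification task can be caricatured as spectral unmixing: express a measured count spectrum as a non-negative combination of known source and background templates, and see which templates get non-zero weight. This sketch uses plain non-negative least squares rather than the paper's Bayesian algorithm, and the templates and count levels are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
# hypothetical template spectra (columns): e.g. NORM background and candidate sources
templates = np.abs(rng.standard_normal((64, 4)))
# true mixture: strong background (index 0) masking one weak source (index 2)
w_true = np.array([5.0, 0.0, 0.3, 0.0])
# Poisson counting noise on the measured spectrum, then rescaled
counts = rng.poisson(templates @ w_true * 100) / 100.0
# non-negative unmixing: which templates are present, and how strongly?
w_hat, _ = nnls(templates, counts)
```

A Bayesian sparsity-promoting version, as in the abstract, would additionally return posterior probabilities for each candidate radionuclide, which is what enables principled false-alarm control in the portal-monitor setting.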

6.
Phys Med Biol; 63(22): 225001, 2018 Nov 07.
Article in English | MEDLINE | ID: mdl-30403191

ABSTRACT

Scatter can account for large errors in cone-beam CT (CBCT) owing to its wide field of view, and its complicated nature makes compensation difficult. Iterative polyenergetic reconstruction algorithms offer the potential to provide quantitative imaging in CT, but they are usually incompatible with scatter-contaminated measurements. In this work, we introduce a polyenergetic convolutional scatter model that is fused directly into the reconstruction process and exploits information readily available at each iteration for a fraction of additional computational cost. We evaluate this method with numerical and real CBCT measurements, and show significantly enhanced electron density estimation and artifact mitigation over pre-calculated fast adaptive scatter kernel superposition (fASKS). Our approach has two levels of benefit: it reduces the bias introduced by estimating scatter prior to reconstruction, and it adapts to the spectral and spatial properties of the specimen.
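The convolutional scatter model family works by treating scatter as a blurred, scaled copy of the primary fluence, estimated and subtracted at each iteration. This 1-D sketch shows only that core mechanism; the Gaussian kernel, its width, and the scatter fraction are illustrative assumptions, not the paper's polyenergetic kernels.

```python
import numpy as np

def scatter_estimate(primary, kernel_width, scatter_fraction):
    # convolutional scatter model: scatter approximated as a blurred,
    # scaled copy of the primary fluence (a crude kernel-superposition stand-in)
    x = np.arange(-31, 32)
    kernel = np.exp(-0.5 * (x / kernel_width) ** 2)
    kernel /= kernel.sum()
    return scatter_fraction * np.convolve(primary, kernel, mode="same")

primary = np.ones(128)
primary[40:90] = 0.2   # attenuated region behind a dense object
# forward-simulate a scatter-contaminated measurement
measured = primary + scatter_estimate(primary, kernel_width=10.0, scatter_fraction=0.3)
# in-loop correction: subtract the scatter predicted from the current estimate
corrected = measured - scatter_estimate(primary, kernel_width=10.0, scatter_fraction=0.3)
```

Fusing this into the iterations, as the abstract describes, lets the kernel act on the current reconstruction rather than on a fixed pre-reconstruction estimate, which is where the bias reduction comes from.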


Subjects
Algorithms; Cone-Beam Computed Tomography/methods; Artifacts; Cone-Beam Computed Tomography/standards; Humans; Phantoms, Imaging; Scattering, Radiation
7.
Phys Med Biol; 62(22): 8739-8762, 2017 Nov 02.
Article in English | MEDLINE | ID: mdl-28980976

ABSTRACT

Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single-source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach that fits the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show that this allows higher accuracy in attenuation modelling, demonstrate superior quantitative imaging on numerical chest and metal-implant data, and validate the approach with real cone-beam CT measurements.
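The piecewise-linear fit is simple to state: calibrate a set of (attenuation, electron density) knots and interpolate linearly between them, with no material segmentation. The knot values below are purely illustrative assumptions, not the paper's fitted calibration.

```python
import numpy as np

# hypothetical calibration knots: attenuation coefficient -> relative electron density
# (values are assumptions for illustration, not fitted from the paper's data)
mu_knots  = np.array([0.0, 0.19, 0.21, 0.48])   # attenuation, 1/cm
rho_knots = np.array([0.0, 1.00, 1.05, 1.70])   # relative electron density

def attenuation_to_density(mu):
    # piecewise-linear map: linear within each segment between adjacent knots
    return np.interp(mu, mu_knots, rho_knots)
```

Because the map goes directly from attenuation to the quantity of interest, no water-bone or photoelectric-Compton decomposition (and hence no segmentation) is needed, which is the key point of the abstract.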


Subjects
Algorithms; Bone and Bones/diagnostic imaging; Electrons; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/instrumentation; Tomography, X-Ray Computed/methods; Humans
8.
Article in English | MEDLINE | ID: mdl-19163422

ABSTRACT

In this paper we contrast three implementations of Independent Component Analysis (ICA) applied to epileptic scalp electroencephalographic (EEG) recordings: Spatial (Ensemble) ICA, Temporal (single-channel) ICA, and Spatio-Temporal ICA. These techniques draw on information from both multi-channel and single-channel biomedical signal recordings. We assess the suitability of the three techniques for isolating and extracting epileptic seizure sources. Although our results are preliminary, we show that standard implementations of ICA (ensemble ICA) fall short when attempting to extract complex underlying activity such as ictal activity in the EEG. Temporal ICA performs well in separating underlying sources, although it clearly lacks spatial information. Spatio-Temporal ICA has the advantage of using temporal information to inform the ICA process, aided by the spatial information inherent in multi-channel recordings. This work is being expanded to seizure-onset analysis through scalp EEG.
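The separation idea behind temporal ICA can be sketched from scratch on synthetic two-channel data: whiten the mixtures, then rotate to extremize a non-Gaussianity measure (kurtosis). The "sources" and mixing matrix below are toy assumptions, not EEG data or any of the paper's implementations.

```python
import numpy as np

t = np.linspace(0, 8 * np.pi, 4000)
s1 = np.sign(np.sin(3 * t))       # sharp, seizure-like square wave
s2 = np.sin(11 * t)               # ongoing background rhythm
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])        # hypothetical mixing into two "channels"
X = S @ A.T                       # observed channel recordings

# whiten: decorrelate channels and equalize their variance
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
vals, vecs = np.linalg.eigh(cov)
Z = (Xc @ vecs) / np.sqrt(vals)

def excess_kurtosis(u):
    # non-Gaussianity contrast used to pick the demixing rotation
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

# search rotations of the whitened data for the most non-Gaussian direction
best = max((np.deg2rad(a) for a in range(180)),
           key=lambda th: abs(excess_kurtosis(Z[:, 0] * np.cos(th)
                                              + Z[:, 1] * np.sin(th))))
u = Z[:, 0] * np.cos(best) + Z[:, 1] * np.sin(best)   # recovered component
```

Here "temporal" means the time points are the statistical samples and the channels are the mixtures; spatial ICA transposes these roles, and the spatio-temporal variant the abstract favours combines both.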


Subjects
Diagnosis, Computer-Assisted/methods; Electroencephalography/methods; Epilepsy/diagnosis; Pattern Recognition, Automated/methods; Algorithms; Artificial Intelligence; Humans; Principal Component Analysis; Reproducibility of Results; Sensitivity and Specificity