Results 1 - 20 of 48
1.
Proc Natl Acad Sci U S A; 113(42): E6352-E6361, 2016 Oct 18.
Article in English | MEDLINE | ID: mdl-27679846

ABSTRACT

Regulation of order, such as orientation and conformation, drives the function of most molecular assemblies in living cells but remains difficult to measure accurately through space and time. We built an instantaneous fluorescence polarization microscope, which simultaneously images position and orientation of fluorophores in living cells with single-molecule sensitivity and a time resolution of 100 ms. We developed image acquisition and analysis methods to track single particles that interact with higher-order assemblies of molecules. We tracked the fluctuations in position and orientation of molecules from the level of an ensemble of fluorophores down to single fluorophores. We tested our system in vitro using fluorescently labeled DNA and F-actin, in which the ensemble orientation of polarized fluorescence is known. We then tracked the orientation of a sparsely labeled F-actin network at the leading edge of migrating human keratinocytes, revealing the anisotropic distribution of actin filaments relative to the local retrograde flow of the F-actin network. Additionally, we analyzed the position and orientation of septin-GFP molecules incorporated in septin bundles in growing hyphae of a filamentous fungus. Our data indicate that septin-GFP molecules undergo positional fluctuations within ∼350 nm of the binding site and angular fluctuations within ∼30° of the central orientation of the bundle. By reporting position and orientation of molecules while they form dynamic higher-order structures, our approach can provide insights into how micrometer-scale ordered assemblies emerge from nanoscale molecules in living cells.
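The orientation readout described in this abstract can be illustrated with the standard four-frame estimator used in instantaneous polarized fluorescence analysis. The sketch below assumes four intensity channels recorded behind polarizers at 0°, 45°, 90°, and 135°; the channel layout and normalization are illustrative assumptions, not the paper's software:

```python
import numpy as np

def dipole_orientation(i0, i45, i90, i135):
    """Estimate in-plane fluorophore orientation and polarization factor
    from four polarization-resolved intensities (the standard four-frame
    estimator; channel names and normalization are assumptions)."""
    s1 = i0 - i90                            # Stokes-like 0/90 component
    s2 = i45 - i135                          # Stokes-like 45/135 component
    total = 0.5 * (i0 + i45 + i90 + i135)    # total fluorescence intensity
    phi = 0.5 * np.arctan2(s2, s1)           # in-plane orientation angle
    p = np.sqrt(s1**2 + s2**2) / np.maximum(total, 1e-12)  # polarization factor
    return np.degrees(phi) % 180.0, p

# check: a dipole at 30 deg gives I(alpha) ~ cos^2(alpha - 30 deg)
alphas = np.radians([0.0, 45.0, 90.0, 135.0])
intensities = np.cos(alphas - np.radians(30.0))**2
phi, p = dipole_orientation(*intensities)
print(phi, p)   # ~30.0 degrees, ~1.0 (fully polarized)
```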


Subjects
Molecular Dynamics Simulation; Single Molecule Imaging; Actins/metabolism; Biomarkers; Data Interpretation, Statistical; Fluorescence Polarization; Green Fluorescent Proteins/genetics; Green Fluorescent Proteins/metabolism; Humans; Microscopy, Fluorescence; Sensitivity and Specificity; Septins/metabolism; Single Molecule Imaging/methods
2.
Proc Biol Sci; 285(1870), 2018 Jan 10.
Article in English | MEDLINE | ID: mdl-29298937

ABSTRACT

Although relationships among the major groups of living gnathostomes are well established, the relatedness of early jawed vertebrates to modern clades is intensely debated. Here, we provide a new description of Gladbachus, a Middle Devonian (Givetian, approximately 385-million-year-old) stem chondrichthyan from Germany, and one of the very few early chondrichthyans in which substantial portions of the endoskeleton are preserved. Tomographic and histological techniques reveal new details of the gill skeleton, hyoid arch and jaws, neurocranium, cartilage, scales and teeth. Despite many features resembling placoderm or osteichthyan conditions, phylogenetic analysis confirms Gladbachus as a stem chondrichthyan and corroborates hypotheses that all acanthodians are stem chondrichthyans. The unfamiliar character combination displayed by Gladbachus, alongside conditions observed in acanthodians, implies that pre-Devonian stem chondrichthyans are severely under-sampled and strongly supports indications from isolated scales that the gnathostome crown group originated at the latest by the early Silurian (approx. 440 Ma). Moreover, the phylogenetic results highlight the likely convergent evolution of conventional chondrichthyan conditions among the earliest members of this primary gnathostome division, while skeletal morphology points towards the likely suspension-feeding habits of Gladbachus, suggesting a functional origin of the gill-slit condition characteristic of the vast majority of living and fossil chondrichthyans.


Subjects
Biological Evolution; Sharks/anatomy & histology; Animals; Cartilage/anatomy & histology; Germany; Gills/anatomy & histology; Hyoid Bone/anatomy & histology; Jaw/anatomy & histology; Phylogeny; Sharks/classification; Tomography, X-Ray Computed; Tooth/anatomy & histology
3.
Opt Express; 25(25): 31309-31325, 2017 Dec 11.
Article in English | MEDLINE | ID: mdl-29245807

ABSTRACT

We investigate the use of polarized illumination in multiview microscopes for determining the orientation of single-molecule fluorescence transition dipoles. First, we relate the orientation of single dipoles to measurable intensities in multiview microscopes and develop an information-theoretic metric, the solid-angle uncertainty, to compare the ability of multiview microscopes to estimate the orientation of single dipoles. Next, we use this metric to compare a broad class of microscopes: single- and dual-view microscopes with varying illumination polarization, illumination numerical aperture (NA), detection NA, obliquity, asymmetry, and exposure. We find that multiview microscopes can measure all dipole orientations, while the set of orientations measurable with single-view microscopes is halved because of symmetries in the detection process. We also find that a small illumination NA and a large detection NA are good design choices, that multiview microscopes can benefit from oblique illumination and detection, and that asymmetric-NA microscopes can benefit from exposure asymmetry.
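As a rough illustration of how such an information-theoretic comparison can work, the sketch below computes a Cramér-Rao-style orientation uncertainty from the Poisson Fisher information of a toy single-view, four-polarizer model. The forward model, background level, and the exact definition of the solid-angle metric are assumptions made for illustration, not the paper's definitions:

```python
import numpy as np

def expected_counts(theta, phi, pol_angles, n_photons=1000.0):
    # Toy single-view model: the in-plane projection of a dipole at
    # (theta, phi) viewed through polarizers at pol_angles, plus a small
    # background. Model and constants are illustrative assumptions.
    return n_photons * (np.sin(theta)**2 * np.cos(phi - pol_angles)**2 + 0.01)

def solid_angle_uncertainty(theta, phi, pol_angles, eps=1e-5):
    # Poisson-data Fisher information: F_ij = sum_k (dI_k/di)(dI_k/dj) / I_k
    I = expected_counts(theta, phi, pol_angles)
    dI_dt = (expected_counts(theta + eps, phi, pol_angles) - I) / eps
    dI_dp = (expected_counts(theta, phi + eps, pol_angles) - I) / eps
    F = np.array([[np.sum(dI_dt**2 / I), np.sum(dI_dt * dI_dp / I)],
                  [np.sum(dI_dt * dI_dp / I), np.sum(dI_dp**2 / I)]])
    # Cramer-Rao style proxy: area on the unit sphere covered by the
    # 1-sigma uncertainty ellipse in (theta, phi)
    return np.sin(theta) * np.sqrt(np.linalg.det(np.linalg.inv(F)))

pols = np.radians([0.0, 45.0, 90.0, 135.0])
print(solid_angle_uncertainty(np.radians(60.0), np.radians(20.0), pols))
```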

4.
J Med Imaging (Bellingham); 11(2): 023501, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38445223

ABSTRACT

Purpose: Single-energy computed tomography (CT) often suffers from poor contrast yet remains critical for effective radiotherapy treatment. Modern therapy systems are often equipped with both megavoltage (MV) and kilovoltage (kV) X-ray sources and thus already possess hardware for dual-energy (DE) CT. There is unexplored potential for enhanced image contrast using MV-kV DE-CT in radiotherapy contexts. Approach: A single-line-integral toy model was designed for computing basis material signal-to-noise ratio (SNR) using estimation theory. Five dose-matched spectra (3 kV, 2 MV) and three variables were considered: spectral combination, spectral dose allocation, and object material composition. The single-line model was extended to a simulated CT acquisition of an anthropomorphic phantom with and without a metal implant. Basis material sinograms were computed and synthesized into virtual monoenergetic images (VMIs). MV-kV and kV-kV VMIs were compared with single-energy images. Results: The 80 kV-140 kV pair typically yielded the best SNRs, but for bone thicknesses >8 cm, the detuned MV-80 kV pair surpassed it. Peak MV-kV SNR was achieved with ∼90% of the dose allocated to the MV spectrum. In CT simulations of the pelvis with a steel implant, MV-kV VMIs yielded a higher contrast-to-noise ratio (CNR) than single-energy CT and kV-kV DE-CT. Without steel, the MV-kV VMIs produced higher contrast but lower CNR than single-energy CT. Conclusions: This work analyzes MV-kV DE-CT imaging and assesses its potential advantages. The technique may be used for metal artifact correction and generation of VMIs with higher native contrast than single-energy CT. Improved denoising is generally necessary for greater CNR without metal.
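In highly idealized form, the basis-material and VMI pipeline described above reduces to a per-ray two-material decomposition at two effective energies followed by linear VMI synthesis. The sketch below uses hypothetical attenuation coefficients (real values come from the spectra and tabulated data), and the paper's actual estimator is statistical rather than a direct matrix inverse:

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) of the two basis
# materials at the two spectra's effective energies; placeholders only.
MU = np.array([[0.25, 0.60],    # [mu_water, mu_bone] at kV effective energy
               [0.20, 0.35]])   # [mu_water, mu_bone] at MV effective energy
MU_70KEV = np.array([0.19, 0.31])  # assumed basis coefficients at 70 keV

def decompose_and_vmi(line_integrals_low, line_integrals_high):
    """Per-ray two-material decomposition, then VMI synthesis.
    Inputs are log-domain line integrals for the two spectra."""
    L = np.stack([line_integrals_low, line_integrals_high])  # (2, n_rays)
    A = np.linalg.solve(MU, L)       # basis thicknesses (water_cm, bone_cm)
    vmi = MU_70KEV @ A               # monoenergetic line integrals at 70 keV
    return A, vmi

# toy ray: 20 cm of water plus 2 cm of bone
truth = np.array([[20.0], [2.0]])
L = MU @ truth
A, vmi = decompose_and_vmi(L[0], L[1])
print(A.ravel(), vmi)   # recovers (20, 2); vmi = 20*0.19 + 2*0.31
```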

5.
bioRxiv; 2023 May 24.
Article in English | MEDLINE | ID: mdl-37292910

ABSTRACT

Tissue phenotyping is foundational to understanding and assessing the cellular aspects of disease in organismal context and an important adjunct to molecular studies in the dissection of gene function, chemical effects, and disease. As a first step toward computational tissue phenotyping, we explore the potential of cellular phenotyping from 3-dimensional (3D), 0.74 µm isotropic voxel resolution, whole zebrafish larval images derived from X-ray histotomography, a form of micro-CT customized for histopathology. As proof of principle toward computational tissue phenotyping of cells, we created a semi-automated mechanism for the segmentation of blood cells in the vascular spaces of zebrafish larvae, followed by modeling and extraction of quantitative geometric parameters. Manually segmented cells were used to train a random forest classifier for blood cells, enabling the use of a generalized cellular segmentation algorithm for the accurate segmentation of blood cells. These models were used to create an automated data segmentation and analysis pipeline to guide the steps in a 3D workflow including blood cell region prediction, cell boundary extraction, and statistical characterization of 3D geometric and cytological features. We were able to distinguish blood cells at two stages in development (4 and 5 days post-fertilization) and in wild-type vs. polA2 huli hutu (hht) mutants. The application of geometric modeling across cell types, organisms, and sample types may provide a valuable foundation for computational phenotyping that is more open, informative, rapid, objective, and reproducible.
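A minimal sketch of the voxel-classification step: hand-crafted per-voxel features feed a scikit-learn random forest, which predicts a blood-cell mask. The feature set and toy volume are assumptions for illustration; the paper's pipeline adds boundary extraction and 3D geometric modeling on top of this step:

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def voxel_features(vol):
    # Per-voxel features: raw intensity, local mean, edge strength.
    # The feature set is an illustrative assumption, not the paper's.
    return np.stack([vol,
                     ndimage.gaussian_filter(vol, 2.0),
                     ndimage.gaussian_gradient_magnitude(vol, 2.0)],
                    axis=-1).reshape(-1, 3)

rng = np.random.default_rng(0)
vol = rng.normal(size=(32, 32, 32))                 # stand-in for a CT volume
labels = (ndimage.gaussian_filter(vol, 3.0) > 0).astype(int)  # toy "cell" mask

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(voxel_features(vol), labels.ravel())        # "manually segmented" training
mask = clf.predict(voxel_features(vol)).reshape(vol.shape)
print((mask == labels).mean())  # toy accuracy (train == test here, so optimistic)
```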

6.
Biol Bull; 242(1): 62-73, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35245159

ABSTRACT

We tested the impact of temperature and symbiont state on calcification in corals, using the facultatively symbiotic coral Astrangia poculata as a model system. Symbiotic and aposymbiotic colonies of A. poculata were reared in 15, 20, and 27 °C conditions. We used scanning electron microscopy to quantify how these physiological and environmental conditions impact skeletal structure. Buoyant weight data over time revealed that temperature significantly affects calcification rates. Scanning electron microscopy of A. poculata skeletons showed that aposymbiotic colonies appear to have a lower density of calcium carbonate in actively growing septal spines. We describe a novel approach to analyze the roughness and texture of scanning electron microscopy images. Quantitative analysis of the roughness of septal spines revealed that aposymbiotic colonies have a rougher surface than symbiotic colonies in tropical conditions (27 °C). This trend reversed at 15 °C, a temperature at which the symbionts of A. poculata may exhibit parasitic properties. Analysis of surface texture patterns showed that temperature impacts the spatial variance of crystals on the spine surface. Few published studies have examined the skeleton of A. poculata by using scanning electron microscopy. Our approach provides a way to study detailed changes in skeletal microstructure in response to environmental parameters and can serve as a proxy for more expensive and time-consuming analyses. Utilizing a facultatively symbiotic coral that is native to both temperate and tropical regions provides new insights into the impact of both symbiosis and temperature on calcification in corals.
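The abstract does not specify the roughness and texture measures, so the sketch below shows one generic way such metrics are often built: RMS roughness as the residual after Gaussian background removal, and texture unevenness as the variance of local standard deviations. Both definitions are assumptions, not the authors' published formulas:

```python
import numpy as np
from scipy import ndimage

def surface_roughness(img, sigma=4.0):
    """RMS roughness of an SEM tile: residual after removing the
    low-frequency shape with a Gaussian background estimate. A generic
    proxy, assumed to be similar in spirit to the paper's metric."""
    background = ndimage.gaussian_filter(img.astype(float), sigma)
    return np.sqrt(np.mean((img - background)**2))

def texture_spatial_variance(img, window=8):
    """Variance of windowed local standard deviations: a simple measure
    of how unevenly texture is distributed across the spine surface."""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, window)
    sq_mean = ndimage.uniform_filter(img**2, window)
    local_sd = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return np.var(local_sd)

rng = np.random.default_rng(1)
tile = rng.normal(size=(128, 128))   # stand-in for an SEM image tile
print(surface_roughness(tile), texture_spatial_variance(tile))
```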


Subjects
Anthozoa; Dinoflagellida; Animals; Anthozoa/physiology; Calcification, Physiologic; Coral Reefs; Dinoflagellida/physiology; Symbiosis/physiology; Temperature
7.
Med Phys; 38(8): 4811-23, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21928654

ABSTRACT

PURPOSE: In recent years, the authors and others have been exploring the use of penalized-likelihood sinogram-domain smoothing and restoration approaches for emission and transmission tomography. The motivation for this strategy was initially pragmatic: to provide a more computationally feasible alternative to fully iterative penalized-likelihood image reconstruction involving expensive backprojections and reprojections, while still obtaining some of the benefits of the statistical modeling employed in penalized-likelihood approaches. In this work, the authors seek to compare the two approaches in greater detail. METHODS: The sinogram-domain strategy entails estimating the "ideal" line integrals needed for reconstruction of an activity or attenuation distribution from the set of noisy, potentially degraded tomographic measurements by maximizing a penalized-likelihood objective function. The objective function models the data statistics as well as any degradation that can be represented in the sinogram domain. The estimated line integrals can then be input to analytic reconstruction algorithms such as filtered backprojection (FBP). The authors compare this to fully iterative approaches maximizing similar objective functions. RESULTS: The authors present mathematical analyses based on so-called equivalent optimization problems that establish that the approaches can be made precisely equivalent under certain restrictive conditions. More significantly, by use of resolution-variance tradeoff studies, the authors show that they can yield very similar performance under more relaxed, realistic conditions. CONCLUSIONS: The sinogram- and image-domain approaches are equivalent under certain restrictive conditions and can perform very similarly under more relaxed conditions. The match is particularly good for fully sampled, high-resolution CT geometries. One limitation of the sinogram-domain approach relative to the image-domain approach is the difficulty of imposing additional constraints, such as image non-negativity.
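A minimal sketch of the sinogram-domain strategy: penalized Poisson-likelihood restoration of the line integrals (here with a quadratic smoothness penalty along the detector axis and a simple preconditioned ascent), followed by standard FBP. The optimizer, penalty strength, phantom setup, and circular boundary handling are illustrative choices, not the authors':

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

def pl_sinogram_restore(counts, b0, beta=50.0, n_iter=100):
    """Penalized Poisson-likelihood restoration of line integrals from
    transmission counts y ~ Poisson(b0 * exp(-l)). A preconditioned
    gradient-ascent sketch, not the authors' optimizer."""
    l = -np.log(np.maximum(counts, 0.5) / b0)          # raw log estimate
    for _ in range(n_iter):
        expm = b0 * np.exp(-l)                          # expected counts
        lap = np.roll(l, 1, 0) + np.roll(l, -1, 0) - 2.0 * l  # circular bc
        grad = (expm - counts) + 2.0 * beta * lap       # ascent direction
        l += grad / (expm + 4.0 * beta)                 # curvature-scaled step
    return l

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
l_true = radon(shepp_logan_phantom(), theta)
l_true *= 4.0 / l_true.max()                            # max line integral ~4
counts = rng.poisson(1e4 * np.exp(-l_true)).astype(float)
l_hat = pl_sinogram_restore(counts, 1e4)
recon = iradon(l_hat, theta=theta, filter_name='ramp')  # then standard FBP
```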


Subjects
Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Analysis of Variance; Humans; Likelihood Functions; Phantoms, Imaging; Radiography, Abdominal/statistics & numerical data; Shoulder/diagnostic imaging; Tomography, X-Ray Computed/statistics & numerical data
8.
J Med Imaging (Bellingham); 8(5): 052111, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34660842

ABSTRACT

Computed tomography was one of the first imaging modalities to require a computerized solution of an inverse problem to produce a useful image from the data acquired by the sensor hardware. The computerized solutions, which are known as image reconstruction algorithms, have thus been a critical component of every CT scanner ever sold. We review the history of commercially deployed CT reconstruction algorithms and consider the forces that led, at various points, both to innovation and to convergence around certain broadly useful algorithms. The forces include the emergence of new hardware capabilities, competitive pressures, the availability of computational power, and regulatory considerations. We consider four major historical periods and turning points. The original EMI scanner was developed with an iterative reconstruction algorithm, but an explosion of innovation coupled with rediscovery of an older literature led to the development of alternative algorithms throughout the early 1970s. Most CT vendors quickly converged on the use of the filtered back-projection (FBP) algorithm, albeit layered with a variety of proprietary corrections in both projection data and image domains to improve image quality. Innovations such as helical scanning and multi-row detectors were both enabled by and drove the development of additional applications of FBP in the 1990s and 2000s. Finally, the last two decades have seen a return of iterative reconstruction and the introduction of artificial intelligence approaches that benefit from increased computational power to reduce radiation dose and improve image quality.

9.
J Med Imaging (Bellingham); 8(5): 052101, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34738026

ABSTRACT

Guest editors Patrick La Riviere, Rebecca Fahrig, and Norbert Pelc introduce the JMI Special Section Celebrating X-Ray Computed Tomography at 50.

10.
Med Phys; 35(8): 3728-39, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18777932

ABSTRACT

The AAPM, through its members, meetings, and its flagship journal Medical Physics, has played an important role in the development and growth of x-ray tomography in the last 50 years. From a spate of early articles in the 1970s characterizing the first commercial computed tomography (CT) scanners through the "slice wars" of the 1990s and 2000s, the history of CT and related techniques such as tomosynthesis can readily be traced through the pages of Medical Physics and the annals of the AAPM and RSNA/AAPM Annual Meetings. In this article, the authors intend to give a brief review of the role of Medical Physics and the AAPM in CT and tomosynthesis imaging over the last few decades.


Subjects
Nuclear Physics; Societies, Medical; Tomography, X-Ray Computed/methods; History, 20th Century; History, 21st Century; Humans; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed/history; Tomography, X-Ray Computed/trends
11.
J Med Imaging (Bellingham); 4(2): 026002, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28523283

ABSTRACT

Quantification of myocardial blood flow (MBF) can aid in the diagnosis and treatment of coronary artery disease. However, there are no widely accepted clinical methods for estimating MBF. Dynamic cardiac perfusion computed tomography (CT) holds the promise of providing a quick and easy method to measure MBF quantitatively. However, the need for repeated scans can potentially result in a high patient radiation dose, limiting the clinical acceptance of this approach. In our previous work, we explored techniques to reduce the patient dose by either uniformly reducing the tube current or by uniformly reducing the number of temporal frames in the dynamic CT sequence. These dose reduction techniques result in noisy time-attenuation curves (TACs), which can give rise to significant errors in MBF estimation. We seek to investigate whether nonuniformly varying the tube current and/or sampling intervals can yield more accurate MBF estimates for a given dose. Specifically, we try to minimize the dose and obtain the most accurate MBF estimate by addressing the following questions: when in the TAC should the CT data be collected and at what tube current(s)? We hypothesize that increasing the sampling rate and/or tube current during the time frames when the myocardial CT number is most sensitive to the flow rate, while reducing them elsewhere, can achieve better estimation accuracy for the same dose. We perform simulations of contrast agent kinetics and CT acquisitions to evaluate the relative MBF estimation performance of several clinically viable variable acquisition methods. We find that variable temporal and tube current sequences can be performed that impart an effective dose of 5.5 mSv and allow for reductions in MBF estimation root-mean-square error on the order of 20% compared to uniform acquisition sequences with comparable or higher radiation doses.
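The question of where to spend dose along the TAC can be explored with a toy simulation like the one below: a hypothetical kinetic model whose amplitude and washout depend on flow, two dose-matched protocols (uniform versus upslope-weighted sampling and tube current), and a weighted fit for MBF. The kinetic model and all numbers are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def tac(t, flow):
    """Toy myocardial time-attenuation curve (HU vs. s); a stand-in for
    the paper's contrast-kinetics model."""
    return 30.0 * flow * t * np.exp(-flow * t / 4.0)

def estimate_mbf(times, currents, true_flow=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma = 20.0 / np.sqrt(currents)          # noise falls as sqrt(mA)
    y = tac(times, true_flow) + rng.normal(0.0, sigma)
    popt, _ = curve_fit(tac, times, y, p0=[0.5], sigma=sigma)
    return popt[0]

t_uni = np.linspace(1.0, 30.0, 15)
mA_uni = np.full(15, 100.0)
# variable protocol: denser sampling and higher mA near the upslope/peak,
# chosen so the total mAs matches the uniform protocol (1500 mAs)
t_var = np.concatenate([np.linspace(1.0, 12.0, 10), np.linspace(16.0, 30.0, 5)])
mA_var = np.concatenate([np.full(10, 130.0), np.full(5, 40.0)])

est_u = [estimate_mbf(t_uni, mA_uni, rng=np.random.default_rng(i)) for i in range(200)]
est_v = [estimate_mbf(t_var, mA_var, rng=np.random.default_rng(i)) for i in range(200)]
print(np.std(est_u), np.std(est_v))  # spread of MBF estimates per protocol
```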

12.
Phys Med Biol; 62(8): 3284-3298, 2017 Apr 21.
Article in English | MEDLINE | ID: mdl-28350547

ABSTRACT

Vectorial extensions of total variation have recently been developed for regularizing the reconstruction and denoising of multi-channel images, such as those arising in spectral computed tomography. Early studies have focused mainly on simulated, piecewise-constant images whose structure may favor total-variation penalties. In the current manuscript, we apply vectorial total variation to real dual-energy CT data of a whole turkey in order to determine if the same benefits can be observed in more complex images with anatomically realistic textures. We consider the total nuclear variation (TVN) as well as another vectorial total variation based on the Frobenius norm (TVF) and standard channel-by-channel total variation (TVC). We performed a series of 3D TV denoising experiments comparing the three TV variants across a wide range of smoothness parameter settings, optimizing each regularizer according to a very-high-dose 'ground truth' image. Consistent with the simulation studies, we find that both vectorial TV variants achieve a lower error than the channel-by-channel TV and are better able to suppress noise while preserving actual image features. In this real data study, the advantages are subtler than in the previous simulation study, although the TVN penalty is found to have clear advantages over either TVF or TVC when comparing material images formed from linear combinations of the denoised energy images.
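A compact sketch of total-nuclear-variation denoising, assuming the TVN penalty is the sum over pixels of the nuclear norm of the channels-by-gradient Jacobian: plain subgradient descent on the denoising objective. The paper's solver is more sophisticated; this only illustrates the structure of the penalty:

```python
import numpy as np

def grad(u):                    # forward differences, shape (H, W, C, 2)
    gx = np.roll(u, -1, 0) - u
    gy = np.roll(u, -1, 1) - u
    return np.stack([gx, gy], axis=-1)

def div(g):                     # negative adjoint of grad (periodic bc)
    gx, gy = g[..., 0], g[..., 1]
    return (gx - np.roll(gx, 1, 0)) + (gy - np.roll(gy, 1, 1))

def tvn_denoise(v, lam=0.1, n_iter=100, step=0.1):
    """Denoise a multichannel image v (H, W, C) by subgradient descent on
    0.5*||u - v||^2 + lam * sum_pixels ||Jacobian(u)||_nuclear. A plain
    subgradient sketch of the TVN penalty's structure."""
    u = v.copy()
    for _ in range(n_iter):
        J = grad(u)                               # per-pixel C x 2 Jacobians
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        G = U @ Vt                                # nuclear-norm subgradient
        u -= step * ((u - v) - lam * div(G))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64, 2)); clean[16:48, 16:48, :] = [1.0, 0.5]
noisy = clean + 0.1 * rng.normal(size=clean.shape)
print(np.abs(tvn_denoise(noisy) - clean).mean(), np.abs(noisy - clean).mean())
```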


Subjects
Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Phantoms, Imaging
13.
F1000Res; 6: 787, 2017.
Article in English | MEDLINE | ID: mdl-28868135

ABSTRACT

Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
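Because Huygens is proprietary, the sketch below illustrates the prefiltering idea with scikit-image's Richardson-Lucy as a stand-in deconvolver: Gaussian-blur the noisy stack first, then deconvolve. Kernel sizes and noise levels are arbitrary toy choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve
from skimage.restoration import richardson_lucy

def prefiltered_deconvolution(stack, psf, sigma=1.0, num_iter=20):
    """Gaussian-prefilter a low-SNR 3D stack, then deconvolve. Uses
    scikit-image's Richardson-Lucy as a stand-in for the commercial
    Huygens software discussed in the paper."""
    smoothed = gaussian_filter(stack.astype(float), sigma)
    smoothed = np.clip(smoothed, 0.0, None)
    smoothed /= smoothed.max()            # RL here works on [0, 1] data
    return richardson_lucy(smoothed, psf, num_iter=num_iter)

# toy 3D stack: two dim points, Gaussian PSF, additive noise
rng = np.random.default_rng(0)
truth = np.zeros((32, 32, 32)); truth[10, 16, 16] = truth[22, 16, 16] = 1.0
zz, yy, xx = np.mgrid[-7:8, -7:8, -7:8]
psf = np.exp(-(zz**2 + yy**2 + xx**2) / 8.0); psf /= psf.sum()
blurred = convolve(truth, psf) + 0.01 * rng.normal(size=truth.shape)
restored = prefiltered_deconvolution(blurred, psf, sigma=1.0)
```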

14.
Article in English | MEDLINE | ID: mdl-32733117

ABSTRACT

Biomedical research and clinical diagnosis would benefit greatly from full-volume determinations of anatomical phenotype. Comprehensive tools for morphological phenotyping are central to the emerging field of phenomics, which requires high-throughput, systematic, accurate, and reproducible data collection from organisms affected by genetic, disease, or environmental variables. Theoretically, complete anatomical phenotyping requires the assessment of every cell type in the whole organism, but this ideal is presently untenable due to the lack of an unbiased 3D imaging method that allows histopathological assessment of any cell type despite optical opacity. Histopathology, the current clinical standard for diagnostic phenotyping, involves the microscopic study of tissue sections to assess qualitative aspects of tissue architecture, disease mechanisms, and physiological state. However, quantitative features of tissue architecture such as cellular composition and cell counting in tissue volumes can only be approximated due to characteristics of tissue sectioning, including incomplete sampling and the constraints of 2D imaging of 5 micron thick tissue slabs. We have used a small vertebrate organism, the zebrafish, to test the potential of microCT for systematic macroscopic and microscopic morphological phenotyping. While cell resolution is routinely achieved using methods such as light sheet fluorescence microscopy and optical tomography, these methods do not provide the pancellular perspective characteristic of histology, and are constrained by the limited penetration of visible light through pigmented and opaque specimens, as is characteristic of zebrafish juveniles. Here, we provide an example of neuroanatomy that can be studied by microCT of stained soft tissue at 1.43 micron isotropic voxel resolution. We conclude that synchrotron microCT is a form of 3D imaging that may potentially be adopted towards more reproducible, large-scale, morphological phenotyping of optically opaque tissues. Further development of soft tissue microCT, visualization, and quantitative tools will enhance its utility.

15.
Nat Commun; 8(1): 1452, 2017 Nov 13.
Article in English | MEDLINE | ID: mdl-29129912

ABSTRACT

Light-sheet fluorescence microscopy (LSFM) enables high-speed, high-resolution, and gentle imaging of live specimens over extended periods. Here we describe a technique that improves the spatiotemporal resolution and collection efficiency of LSFM without modifying the underlying microscope. By imaging samples on reflective coverslips, we enable simultaneous collection of four complementary views in 250 ms, doubling speed and improving information content relative to symmetric dual-view LSFM. We also report a modified deconvolution algorithm that removes associated epifluorescence contamination and fuses all views for resolution recovery. Furthermore, we enhance spatial resolution (to <300 nm in all three dimensions) by applying our method to single-view LSFM, permitting simultaneous acquisition of two high-resolution views otherwise difficult to obtain due to steric constraints at high numerical aperture. We demonstrate the broad applicability of our method in a variety of samples, studying mitochondrial, membrane, Golgi, and microtubule dynamics in cells and calcium activity in nematode embryos.
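The fusion-and-deconvolution step can be illustrated with the textbook joint multiview Richardson-Lucy update, in which each iteration averages the RL correction over all views. This is a generic sketch, not the authors' modified algorithm (which additionally removes epifluorescence contamination before fusion):

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_rl(views, psfs, n_iter=20):
    """Joint multiview Richardson-Lucy: the classic multiplicative update,
    averaged over views, for volumes y_v = H_v u."""
    u = np.maximum(np.mean(views, axis=0), 1e-6)   # initial fused estimate
    for _ in range(n_iter):
        correction = np.zeros_like(u)
        for y, h in zip(views, psfs):
            blur = np.maximum(fftconvolve(u, h, mode='same'), 1e-12)
            correction += fftconvolve(y / blur, h[::-1, ::-1, ::-1], mode='same')
        u = np.maximum(u * correction / len(views), 0.0)
    return u

# toy demo: one bead seen through two orthogonally elongated Gaussian PSFs
zz, yy, xx = np.mgrid[-5:6, -5:6, -5:6].astype(float)
psf_a = np.exp(-(zz**2 / 18.0 + yy**2 / 2.0 + xx**2 / 2.0)); psf_a /= psf_a.sum()
psf_b = np.exp(-(zz**2 / 2.0 + yy**2 / 18.0 + xx**2 / 2.0)); psf_b /= psf_b.sum()
truth = np.zeros((48, 48, 48)); truth[24, 24, 24] = 1.0
views = [fftconvolve(truth, p, mode='same') for p in (psf_a, psf_b)]
fused = multiview_rl(views, [psf_a, psf_b])
```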


Subjects
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Microscopy, Fluorescence/methods; Algorithms; Animals; Caenorhabditis elegans/cytology; Cell Line, Tumor; Escherichia coli/cytology; Humans; Jurkat Cells
16.
Med Phys; 44(10): 5367-5377, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28703922

ABSTRACT

PURPOSE: X-ray-induced luminescence (XIL) is a hybrid x-ray/optical imaging modality that employs nanophosphors that luminesce in response to x-ray irradiation. X-ray-activated phosphorescent nanoparticles have potential applications in radiation therapy as theranostics, nanodosimeters, or radiosensitizers. Extracting clinically relevant information from the luminescent signal requires the development of a robust imaging model that can determine nanophosphor distributions at depth in an optically scattering environment from surface radiance measurements. The applications of XIL in radiotherapy will be limited by the dose-dependent sensitivity at depth in tissue. We propose a novel geometry called selective plane XIL (SPXIL), and apply it to experimental measurements in optical gel phantoms and sensitivity simulations. METHODS: An imaging model is presented based on the selective plane geometry which can determine the detected diffuse optical signal for a given x-ray dose and nanophosphor distribution at depth in a semi-infinite, optically homogeneous material. The surface radiance in the model is calculated using an analytical solution to the extrapolated boundary condition. Y2O3:Eu3+ nanoparticles are synthesized and inserted into various optical phantoms in order to measure the luminescent output per unit dose for a given concentration of nanophosphors and calibrate an imaging model for XIL sensitivity simulations. SPXIL imaging with a dual-source optical gel phantom is performed, and an iterative Richardson-Lucy deconvolution using a shifted Poisson noise model is applied to the measurements in order to reconstruct the nanophosphor distribution. RESULTS: Nanophosphor characterizations showed a peak emission at 611 nm, a linear luminescent response to tube current and nanoparticle concentration, and a quadratic luminescent response to tube voltage. The luminescent efficiency calculation, accomplished with calibrated bioluminescence mouse phantoms, determined that 1.06 photons were emitted per keV of x-ray radiation absorbed per g/mL of nanophosphor concentration. Sensitivity simulations determined that XIL could detect a concentration of 1 mg/mL of nanophosphors with a dose of 1 cGy at a depth ranging from 2 to 4 cm, depending on the optical parameters of the homogeneous diffuse optical environment. The deconvolution applied to the SPXIL measurements could resolve two sources 1 cm apart up to a depth of 1.75 cm in the diffuse phantom. CONCLUSIONS: We present a novel imaging geometry for XIL in a homogeneous, diffuse optical environment. Basic characterization of Y2O3:Eu3+ nanophosphors is presented along with XIL/SPXIL measurements in optical gel phantoms. The diffuse optical imaging model is validated using these measurements and then calibrated in order to execute initial sensitivity simulations for the dose-depth limitations of XIL imaging. The SPXIL imaging model is used to perform a deconvolution on a dual-source phantom, which successfully reconstructs the nanophosphor distributions.
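The reconstruction step names a Richardson-Lucy deconvolution with a shifted Poisson noise model; a minimal version of that update is sketched below, assuming (y + shift) ~ Poisson(Hu + shift) and a normalized, known blur kernel. The kernel shape, shift value, and source geometry are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def shifted_poisson_rl(measured, kernel, shift, n_iter=50):
    """Richardson-Lucy deconvolution under a shifted Poisson model,
    (y + shift) ~ Poisson(H u + shift), giving the update
    u <- u * H^T[(y + shift) / (H u + shift)]. A sketch of the named
    technique, not the paper's exact implementation."""
    u = np.maximum(measured.astype(float), 1e-6)
    k_flip = kernel[::-1, ::-1]                    # adjoint = flipped kernel
    for _ in range(n_iter):
        denom = np.maximum(fftconvolve(u, kernel, mode='same'), 1e-12) + shift
        ratio = fftconvolve((measured + shift) / denom, k_flip, mode='same')
        u = np.maximum(u * ratio, 0.0)
    return u

# two point sources under a broad diffusion-like Gaussian blur
rng = np.random.default_rng(0)
yy, xx = np.mgrid[-15:16, -15:16]
kernel = np.exp(-(yy**2 + xx**2) / 50.0); kernel /= kernel.sum()
x_true = np.zeros((64, 64)); x_true[32, 28] = x_true[32, 36] = 200.0
y = rng.poisson(np.maximum(fftconvolve(x_true, kernel, mode='same'), 0.0))
x_hat = shifted_poisson_rl(y.astype(float), kernel, shift=3.0)
```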


Subjects
Luminescence; Optical Imaging/methods; Calibration; Nanoparticles; Phantoms, Imaging; Signal-To-Noise Ratio; X-Rays
17.
IEEE Trans Med Imaging; 25(8): 1022-36, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16894995

ABSTRACT

We formulate computed tomography (CT) sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. CT measurement data are degraded by a number of factors, including beam hardening and off-focal radiation, that produce artifacts in reconstructed images unless properly corrected. Currently, such effects are addressed by a sequence of sinogram-preprocessing steps, including deconvolution corrections for off-focal radiation, that have the potential to amplify noise. Noise itself is generally mitigated through apodization of the reconstruction kernel, which effectively ignores the measurement statistics, although in high-noise situations adaptive filtering methods that loosely model data statistics are sometimes applied. As an alternative, we present a general imaging model relating the degraded measurements to the sinogram of ideal line integrals and propose to estimate these line integrals by iteratively optimizing a statistically based objective function. We consider three different strategies for estimating the set of ideal line integrals, one based on direct estimation of ideal "monochromatic" line integrals that have been corrected for single-material beam hardening, one based on estimation of ideal "polychromatic" line integrals that can be readily mapped to monochromatic line integrals, and one based on estimation of ideal transmitted intensities, from which ideal, monochromatic line integrals can be readily estimated. The first two approaches involve maximization of a penalized Poisson-likelihood objective function while the third involves minimization of a quadratic penalized weighted least squares (PWLS) objective applied in the transmitted intensity domain. We find that at low exposure levels typical of those being considered for screening CT, the Poisson-likelihood based approaches outperform the PWLS objective as well as a standard approach based on adaptive filtering followed by deconvolution. At higher exposure levels, the approaches all perform similarly.
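Of the three strategies, the PWLS variant is the easiest to sketch: for one detector row, weight the log-domain data by its approximate inverse variance (the counts) and solve the penalized normal equations directly. The one-dimensional penalty and toy data below are simplifications of the paper's setup, which also models degradations such as off-focal radiation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def pwls_line_integrals(counts, b0, beta=20.0):
    """PWLS estimate of line integrals along one detector row: minimize
    sum_i w_i (l_i - lhat_i)^2 + beta * sum_i (l_i - l_{i+1})^2, with
    weights w_i = y_i, the approximate inverse variance of the log data.
    A one-row sketch of the PWLS strategy named in the abstract."""
    y = np.maximum(counts.astype(float), 1.0)
    lhat = -np.log(y / b0)                 # raw log-domain line integrals
    n = y.size
    W = diags(y)                           # statistical weights
    D = diags([np.ones(n - 1), -np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    A = W + beta * (D.T @ D)               # penalized normal equations
    return spsolve(A.tocsc(), W @ lhat)

rng = np.random.default_rng(0)
l_true = np.concatenate([np.zeros(50), np.full(100, 3.0), np.zeros(50)])
counts = rng.poisson(2e4 * np.exp(-l_true))
l_pwls = pwls_line_integrals(counts, 2e4)
print(np.abs(l_pwls - l_true).mean())
```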


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Likelihood Functions; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity
18.
IEEE Trans Med Imaging; 25(9): 1117-29, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16967798

ABSTRACT

In this paper, we derive a monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence computed tomography (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown. In XFCT, a sample is irradiated with pencil beams of monochromatic synchrotron radiation that stimulate the emission of fluorescence X-rays from atoms of elements whose K- or L-edges lie below the energy of the stimulating beam. Scanning and rotating the object through the beam allows for acquisition of a tomographic dataset that can be used to reconstruct images of the distribution of the elements in question. XFCT is a stimulated emission tomography modality, and it is thus necessary to correct for attenuation of the incident and fluorescence photons. The attenuation map is, however, generally known only at the stimulating beam energy and not at the energies of the various fluorescence X-rays of interest. We have developed a penalized-likelihood image reconstruction strategy for this problem. The approach alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays. The approach is guaranteed to increase the penalized likelihood at each iteration. Because the joint objective function is not necessarily concave, the approach may drive the solution to a local maximum. To encourage the algorithm to seek out a reasonable local maximum, we include in the objective function a prior that encourages a relationship, based on physical considerations, between the fluorescence attenuation map and the distribution of the element being reconstructed.
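The alternating structure can be illustrated with a drastically simplified 1-D toy, sketched below. Fluorescence from pixel j reaches the detector attenuated by the unknown map mu_f, which the prior ties to the element map x; in this toy the data alone cannot identify mu_f (which is exactly why such a physics-based prior is needed), so the mu_f update is purely prior-driven. The forward model, coupling constants, and update scheme are all assumptions for illustration, far simpler than the paper's penalized-likelihood algorithm:

```python
import numpy as np

def xfct_alternating(y, alpha, gamma, n_iter=50):
    """Toy 1-D alternating reconstruction for XFCT with unknown fluorescence
    attenuation: fluorescence from pixel j is detected with weight
    exp(-sum_{i<j} mu_f[i]), and the prior ties mu_f to the element map x
    via mu_f ~ alpha*x + gamma. Alternate a prior-driven mu_f update with
    an exact ML update of x given mu_f."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        mu_f = alpha * x + gamma                           # prior-driven update
        atten = np.exp(-np.concatenate([[0.0], np.cumsum(mu_f)[:-1]]))
        x = y / atten                                      # ML update of x
    return x, alpha * x + gamma

rng = np.random.default_rng(0)
x_true = np.zeros(64); x_true[20:28] = 50.0; x_true[40:44] = 30.0
mu_true = 0.002 * x_true + 0.02
atten = np.exp(-np.concatenate([[0.0], np.cumsum(mu_true)[:-1]]))
y = rng.poisson(x_true * atten)
x_hat, mu_hat = xfct_alternating(y, alpha=0.002, gamma=0.02)
```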


Subjects
Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Spectrometry, X-Ray Emission/methods; Tomography, X-Ray Computed/methods; Information Storage and Retrieval/methods; Likelihood Functions; Phantoms, Imaging; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/instrumentation; Reproducibility of Results; Sensitivity and Specificity; Spectrometry, X-Ray Emission/instrumentation
19.
J Med Imaging (Bellingham); 3(2): 024001, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27175377

ABSTRACT

Cardiac computed tomography (CT) acquisitions for perfusion assessment can be performed in a dynamic or static mode. Either method may be used for a variety of clinical tasks, including (1) stratifying patients into categories of ischemia and (2) using a quantitative myocardial blood flow (MBF) estimate to evaluate disease severity. In this simulation study, we compare method performance on these classification and quantification tasks at matched radiation dose levels and for different flow states, patient sizes, and injected contrast levels. Under the conditions simulated, the dynamic method has low bias in MBF estimates (0 to [Formula: see text]) compared to linearly interpreted static assessment (0.45 to [Formula: see text]), making it more suitable for quantitative estimation. At matched radiation dose levels, receiver operating characteristic analysis demonstrated that the static method, with its high bias but generally lower variance, had superior performance ([Formula: see text]) in stratifying patients, especially for larger patients and lower contrast doses (area under the curve [Formula: see text] to 0.96 versus 0.86). We also demonstrate that static assessment with a correctly tuned exponential relationship between the apparent CT number and MBF has superior quantification performance to static assessment with a linear relationship and to dynamic assessment. However, tuning the exponential relationship to the patient and scan characteristics will likely prove challenging. This study demonstrates that the selection and optimization of static or dynamic acquisition modes should depend on the specific clinical task.
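The classification-versus-quantification tradeoff described above (a biased but low-variance estimator can stratify patients better than an unbiased, noisier one) can be reproduced with a few lines of simulation; all class means, biases, and noise levels below are invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
ischemic = rng.integers(0, 2, n)                # 1 = ischemic, 0 = normal
mbf = np.where(ischemic == 1, 0.8, 2.0)         # toy class means, ml/min/g

# "dynamic": unbiased but noisier; "static": biased but lower variance
dynamic_est = mbf + rng.normal(0.0, 0.55, n)
static_est = 0.6 * mbf + rng.normal(0.0, 0.20, n)   # multiplicative bias

# for stratification only separability matters, so the bias is harmless;
# low MBF indicates ischemia, hence the negated scores
print(roc_auc_score(ischemic, -dynamic_est))    # ~0.94
print(roc_auc_score(ischemic, -static_est))     # ~0.99: better stratifier
```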

20.
Optica; 3(8): 897-910, 2016 Aug 20.
Article in English | MEDLINE | ID: mdl-27761486

ABSTRACT

Most fluorescence microscopes are inefficient, collecting only a small fraction of the emitted light at any instant. Besides wasting valuable signal, this inefficiency also reduces spatial resolution and causes imaging volumes to exhibit significant resolution anisotropy. We describe microscopic and computational techniques that address these problems by simultaneously capturing and subsequently fusing and deconvolving multiple specimen views. Unlike previous methods that serially capture multiple views, our approach improves spatial resolution without introducing any additional illumination dose or compromising temporal resolution relative to conventional imaging. When applying our methods to single-view wide-field or dual-view light-sheet microscopy, we achieve a twofold improvement in volumetric resolution (~235 nm × 235 nm × 340 nm) as demonstrated on a variety of samples including microtubules in Toxoplasma gondii, SpoVM in sporulating Bacillus subtilis, and multiple protein distributions and organelles in eukaryotic cells. In every case, spatial resolution is improved with no drawback by harnessing previously unused fluorescence.
