Results 1 - 20 of 40
1.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732996

ABSTRACT

X-ray nanotomography is a powerful tool for the characterization of nanoscale materials and structures, but it is difficult to implement due to the competing requirements of X-ray flux and spot size. Due to this constraint, state-of-the-art nanotomography is predominantly performed at large synchrotron facilities. We present a laboratory-scale nanotomography instrument that achieves nanoscale spatial resolution while addressing the limitations of conventional tomography tools. The instrument combines the electron beam of a scanning electron microscope (SEM) with the precise, broadband X-ray detection of a superconducting transition-edge sensor (TES) microcalorimeter. The electron beam generates a highly focused X-ray spot on a metal target held micrometers away from the sample of interest, while the TES spectrometer isolates target photons with a high signal-to-noise ratio. This combination of a focused X-ray spot, energy-resolved X-ray detection, and unique system geometry enables nanoscale, element-specific X-ray imaging in a compact footprint. The proof of concept for this approach to X-ray nanotomography is demonstrated by imaging 160 nm features in three dimensions in six layers of a Cu-SiO2 integrated circuit, and a path toward finer resolution and enhanced imaging capabilities is discussed.

2.
Opt Express ; 31(10): 15355-15371, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37157639

ABSTRACT

X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume the noise statistics of test data are acquired a priori from training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm and apply it to integrated circuit tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in test data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
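A minimal training sketch of the idea described above, assuming PyTorch; the small network, the synthetic phantoms, and the smoothing used as a stand-in for the "regularized reconstructions" are all illustrative choices, not the authors' code.

    import torch
    import torch.nn as nn

    # Toy data: noisy images as network inputs, smoothed ("regularized") images as
    # training targets.  Real training would use outputs of a conventional
    # regularized iterative reconstruction instead of a blur.
    def make_batch(n=8, size=64, noise=0.3):
        x = torch.zeros(n, 1, size, size)
        for i in range(n):
            cx, cy, r = torch.randint(16, 48, (3,))
            yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
            x[i, 0] = ((xx - cx) ** 2 + (yy - cy) ** 2 < (r // 2) ** 2).float()
        target = nn.functional.avg_pool2d(x, 3, stride=1, padding=1)  # stand-in "regularized" target
        return x + noise * torch.randn_like(x), target

    # Small convolutional network standing in for the deep reconstruction model.
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(200):                      # toy training loop
        noisy, target = make_batch()
        loss = nn.functional.mse_loss(net(noisy), target)
        opt.zero_grad(); loss.backward(); opt.step()

    # At test time the same network is applied to data whose noise statistics were
    # never seen during training; the paper's claim is that a prior learned from
    # regularized targets transfers without retraining on noisy examples.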

3.
Microsyst Nanoeng ; 9: 47, 2023.
Article in English | MEDLINE | ID: mdl-37064166

ABSTRACT

We show three-dimensional reconstructions of a region of an integrated circuit from a 130 nm copper process. The reconstructions employ x-ray computed tomography, measured with a newly developed high-magnification x-ray microscope. The instrument uses a focused electron beam to generate x-rays in a 100 nm spot and energy-resolving x-ray detectors that minimize backgrounds and hold promise for the identification of materials within the sample. The x-ray generation target, a layer of platinum, is fabricated on the circuit wafer itself. A region of interest is imaged from a limited range of angles and without physically removing the region from the larger circuit. The reconstruction is consistent with the circuit's design file.

4.
Opt Express ; 31(3): 4899-4919, 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36785446

ABSTRACT

Photon echoes in rare-earth-doped crystals are studied to understand the challenges of making broadband quantum memories using the atomic frequency comb (AFC) protocol in systems with hyperfine structure. The hyperfine structure of Pr3+ poses an obstacle to this goal because frequencies associated with the hyperfine transitions change the simple picture of modulation at an externally imposed frequency. The current work focuses on the intermediate case where the hyperfine spacing is comparable to the comb spacing, a challenging regime that has recently been considered. Operating in this regime may facilitate storing quantum information over a larger spectral range in such systems. In this work, we prepare broadband AFCs using optical combs with tooth spacings ranging from 1 MHz to 16 MHz in fine steps, and measure transmission spectra and photon echoes for each. We predict the spectra and echoes theoretically using the optical combs as input to either a rate equation code or a density matrix code, which calculates the redistribution of populations. We then use the redistributed populations as input to a semiclassical theory using the frequency-dependent dielectric function. The two sets of predictions each give a good, but different account of the photon echoes.

5.
Metrologia ; 60(2)2023.
Article in English | MEDLINE | ID: mdl-38379870

ABSTRACT

A technique for characterizing and correcting the linearity of radiometric instruments is known both as the "flux-addition method" and as the "combinatorial technique". In this paper, we develop a rigorous uncertainty quantification method for use with this technique and illustrate its use with both synthetic data and experimental data from a "beam conjoiner" instrument. We present a probabilistic model that relates the instrument readout to a set of unknown fluxes via a set of polynomial coefficients. Maximum likelihood estimates (MLEs) of the unknown fluxes and polynomial coefficients are recommended, while a non-parametric bootstrap algorithm enables uncertainty quantification, including standard errors and confidence intervals. The synthetic data represent plausible outputs of a radiometric instrument and enable testing and validation of the method. The MLEs for these data are found to be approximately unbiased, and confidence intervals derived from the bootstrap replicates are found to be consistent with their target coverage of 95%. For the polynomial coefficients, the observed coverages range from 91% to 99%. The experimental data set illustrates how a complete calibration with uncertainties can be achieved using the method plus one well-known flux level. The uncertainty contribution attributable to estimation of the instrument's nonlinear response is less than 0.025% over most of its range.
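A compact sketch of the bootstrap step, assuming NumPy; for brevity, a least-squares polynomial fit on a known flux grid stands in for the paper's full maximum-likelihood estimation of both fluxes and coefficients, and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy readout model: a mildly nonlinear polynomial of the flux plus noise.
    true_coeff = np.array([0.0, 1.0, -0.02])            # offset, gain, quadratic term
    flux = np.linspace(0.1, 5.0, 40)
    readout = np.polyval(true_coeff[::-1], flux) + rng.normal(0, 0.01, flux.size)

    def fit(fl, rd):
        # Degree-2 least-squares fit, returned as [c0, c1, c2].
        return np.polyfit(fl, rd, 2)[::-1]

    est = fit(flux, readout)

    # Non-parametric bootstrap: resample (flux, readout) pairs with replacement,
    # refit, and use the spread of the refitted coefficients for standard errors
    # and percentile confidence intervals.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, flux.size, flux.size)
        boot.append(fit(flux[idx], readout[idx]))
    boot = np.array(boot)

    se = boot.std(axis=0)
    ci = np.percentile(boot, [2.5, 97.5], axis=0)
    print("estimate:", est, "\nstd. err.:", se, "\n95% CI:\n", ci)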

6.
Opt Express ; 30(13): 23238-23259, 2022 Jun 20.
Article in English | MEDLINE | ID: mdl-36225009

ABSTRACT

X-ray tomography is capable of imaging the interior of objects in three dimensions non-invasively, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms, which require a prior distribution that is known in advance, deep reconstruction networks can learn a prior by sampling the training distribution. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both known physics and the learned prior. Compared with methods that incorporate less physics in training, PGAN reduces the number of photons required to reach a given error rate with limited projection angles. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
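As a schematic illustration only (our notation; the paper's actual two-step formulation differs in detail), a known forward model and a learned prior can be combined in a single reconstruction objective of the form

    \hat{x} = \arg\min_x \; -\log p\!\left(y \mid A x\right) \;+\; \lambda \left\lVert x - G_\theta\!\left(\hat{x}_\mathrm{ML}\right) \right\rVert^2 ,

where A is the known physics (forward) model, the first term is the measurement likelihood, \hat{x}_\mathrm{ML} is a maximum-likelihood estimate computed from the measurements, and G_\theta is the trained generator supplying the learned prior.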

7.
Opt Express ; 29(2): 1788-1804, 2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33726385

ABSTRACT

A reconstruction algorithm for partially coherent x-ray computed tomography (XCT) including Fresnel diffraction is developed and applied to an optical fiber. The algorithm is applicable to a high-resolution tube-based laboratory-scale x-ray tomography instrument. The computing time is only a few times longer than that of the projective counterpart. The algorithm is used to reconstruct, with both projection and diffraction modeled, a tilt series of a graded-index optical fiber acquired at the micrometer scale, using maximum likelihood and a Bayesian method based on the work of Bouman and Sauer. The inclusion of Fresnel diffraction removes some reconstruction artifacts, and use of a Bayesian prior probability distribution removes others, resulting in a substantially more accurate reconstruction.
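A minimal illustration, assuming NumPy, of the Fresnel-diffraction step that augments a projective forward model; the transfer-function propagator below is the textbook form, and the opaque-edge example is ours, not the fiber data.

    import numpy as np

    def fresnel_propagate(field, dx, wavelength, z):
        """Propagate a 1-D complex field a distance z using the Fresnel transfer
        function H(f) = exp(i k z) * exp(-i * pi * wavelength * z * f**2)."""
        f = np.fft.fftfreq(field.size, d=dx)             # spatial frequencies
        H = np.exp(1j * 2 * np.pi / wavelength * z) \
            * np.exp(-1j * np.pi * wavelength * z * f**2)
        return np.fft.ifft(np.fft.fft(field) * H)

    # Example: a ~10 keV beam (wavelength ~0.124 nm) past an opaque edge,
    # propagated 10 mm, showing the familiar edge-fringe pattern.
    wavelength = 0.124e-9
    dx = 50e-9
    x = (np.arange(4096) - 2048) * dx
    field = (x > 0).astype(complex)                      # transmission of a half-plane
    intensity = np.abs(fresnel_propagate(field, dx, wavelength, 10e-3)) ** 2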

8.
Article in English | MEDLINE | ID: mdl-35529769

ABSTRACT

Feature sizes in integrated circuits have decreased substantially over time, and it has become increasingly difficult to three-dimensionally image these complex circuits after fabrication. This can be important for process development, defect analysis, and detection of unexpected structures in externally sourced chips, among other applications. Here, we report on a non-destructive, tabletop approach that addresses this imaging problem through x-ray tomography, which we uniquely realize with an instrument that combines a scanning electron microscope (SEM) with a transition-edge sensor (TES) x-ray spectrometer. Our approach uses the highly focused SEM electron beam to generate a small x-ray generation region in a carefully designed target layer that is placed over the sample being tested. With the high collection efficiency and resolving power of a TES spectrometer, we can isolate x-rays generated in the target from background and trace their paths through regions of interest in the sample layers, providing information about the various materials along the x-ray paths through their attenuation functions. We have recently demonstrated our approach using a 240 Mo/Cu bilayer TES prototype instrument on a simplified test sample containing features with sizes of ∼ 1 µm. Currently, we are designing and building a 3000 Mo/Au bilayer TES spectrometer upgrade, which is expected to improve the imaging speed by a factor of up to 60 through a combination of increased detector number and detector speed.
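In schematic form (our notation, not taken from the article), the attenuation information carried by each detected target x-ray follows the Beer-Lambert law along its path through the sample layers,

    I(E) = I_0(E)\,\exp\!\Big(-\sum_i \mu_i(E)\,\ell_i\Big),

where \mu_i(E) and \ell_i are the attenuation coefficient and path length in the i-th material; tomographic reconstruction then inverts a collection of such line integrals taken over many beam positions.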

9.
Microsc Microanal ; 25(1): 70-76, 2019 02.
Article in English | MEDLINE | ID: mdl-30869576

ABSTRACT

Using a commercial X-ray tomography instrument, we have obtained reconstructions of a graded-index optical fiber with voxels of edge length 1.05 µm at 12 tube voltages. The fiber manufacturer created a graded index in the central region by varying the germanium concentration from a peak value in the center of the core to a very small value at the core-cladding boundary. Applying a singular value decomposition across the 12 tube voltages, we show that there are only two singular vectors with significant weight. Physically, this means scans beyond two tube voltages contain largely redundant information. We concentrate on an analysis of the images associated with these two singular vectors. The first singular vector is dominant, and images of the coefficients of the first singular vector at each voxel look similar to any of the single-energy reconstructions. On their own, images of the coefficients of the second singular vector appear to be noise. However, by averaging the reconstructed voxels in each of several narrow bands of radii, we can obtain values of the second singular vector at each radius. In the core region, where we expect the germanium doping to go from a peak value at the fiber center to zero at the core-cladding boundary, we find that a plot of the two coefficients of the singular vectors forms a line in the two-dimensional space, consistent with the dopant decreasing linearly with radial distance from the core center. The coating, made of a polymer rather than silica, is not on this line, indicating that the two-dimensional results are sensitive not only to the density but also to the elemental composition.
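A sketch of the singular-value analysis, assuming NumPy; the voxel data below are synthetic rank-2 placeholders rather than the fiber reconstructions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in for 12 single-energy reconstructions of one volume: each voxel is a
    # mix of two "materials" whose response varies with tube voltage, plus noise,
    # so the stacked matrix should have effective rank 2.
    n_voxels, n_voltages = 50_000, 12
    spectra = rng.random((2, n_voltages))                # per-material energy dependence
    abundance = rng.random((n_voxels, 2))                # per-voxel material content
    recon = abundance @ spectra + 0.01 * rng.normal(size=(n_voxels, n_voltages))

    # Singular value decomposition over the voltage dimension.
    u, s, vt = np.linalg.svd(recon, full_matrices=False)
    print("singular values:", np.round(s, 1))            # two dominate, the rest are noise

    # Per-voxel coefficients of the first two singular vectors, the quantities
    # that are imaged and then averaged in radial bands in the paper.
    coeff = recon @ vt[:2].T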

10.
Article in English | MEDLINE | ID: mdl-34877164

ABSTRACT

Fundamental limits for the calculation of scattering corrections within X-ray computed tomography (CT) are found within the independent atom approximation from an analysis of the cross sections, CT geometry, and the Nyquist sampling theorem, suggesting large reductions in computational time compared to existing methods. By modifying the scatter by less than 1 %, it is possible to treat some of the elastic scattering in the forward direction as inelastic to achieve a smoother elastic scattering distribution. We present an analysis showing that the number of samples required for the smoother distribution can be greatly reduced. We show that fixed forced detection can be used with many fewer points for inelastic scattering, but that for pure elastic scattering, a standard Monte Carlo calculation is preferred. We use smoothing for both elastic and inelastic scattering because the intrinsic angular resolution is much poorer than can be achieved for projective tomography. Representative numerical examples are given.

11.
Article in English | MEDLINE | ID: mdl-32856003

ABSTRACT

Patient-specific computational modeling is increasingly used to assist with visualization, planning, and execution of medical treatments. This trend is placing more reliance on medical imaging to provide accurate representations of anatomical structures. Digital image analysis is used to extract anatomical data for use in clinical assessment/planning. However, the presence of image artifacts, whether due to interactions between the physical object and the scanning modality or the scanning process, can degrade image accuracy. The process of extracting anatomical structures from the medical images introduces additional sources of variability, e.g., when thresholding or when eroding along apparent edges of biological structures. An estimate of the uncertainty associated with extracting anatomical data from medical images would therefore assist with assessing the reliability of patient-specific treatment plans. To this end, two image datasets were developed and analyzed using standard image analysis procedures. The first dataset was developed by performing a "virtual voxelization" of a CAD model of a sphere, representing the idealized scenario of no error in the image acquisition and reconstruction algorithms (i.e., a perfect scan). The second dataset was acquired by scanning three spherical balls using a laboratory-grade CT scanner. For the idealized sphere, the error in sphere diameter was less than or equal to 2% if 5 or more voxels were present across the diameter. The measurement error degraded to approximately 4% for a similar degree of voxelization of the physical phantom. The adaptation of established thresholding procedures to improve segmentation accuracy was also investigated.
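A minimal "virtual voxelization" sketch in NumPy; the inside-if-center-inside segmentation rule and the volume-equivalent diameter used here are illustrative choices, so the numbers will not reproduce the paper's exact error figures.

    import numpy as np

    def voxelized_diameter_error(true_diam, voxel_size):
        """Rasterize an ideal sphere onto a voxel grid (a voxel counts as inside if
        its center lies inside the sphere), estimate the diameter from the
        segmented volume, and return the relative error."""
        r = true_diam / 2.0
        n = int(np.ceil(true_diam / voxel_size)) + 4
        coords = (np.arange(n) - (n - 1) / 2.0) * voxel_size
        x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
        inside = x**2 + y**2 + z**2 <= r**2
        volume = inside.sum() * voxel_size**3
        est_diam = (6.0 * volume / np.pi) ** (1.0 / 3.0)   # volume-equivalent diameter
        return abs(est_diam - true_diam) / true_diam

    # Error versus number of voxels across the diameter.
    for n_across in (3, 5, 10, 20):
        err = voxelized_diameter_error(1.0, 1.0 / n_across)
        print(f"{n_across:3d} voxels across: {100 * err:.1f}% diameter error")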

12.
PLoS One ; 13(12): e0208820, 2018.
Article in English | MEDLINE | ID: mdl-30571779

ABSTRACT

PURPOSE: This paper lays the groundwork for linking Hounsfield unit measurements to the International System of Units (SI), ultimately enabling traceable measurements across X-ray CT (XCT) machines. We do this by characterizing a material basis that may be used in XCT reconstruction, giving the linear combinations of concentrations of chemical elements (in the SI unit mol/m³) that may be observed at each voxel. By implication, linear combinations not in the set are not observable. METHODS AND MATERIALS: We formulated a model for our material basis with a set of measurements of elemental powders at four tube voltages, 80 kV, 100 kV, 120 kV, and 140 kV, on a medical XCT. The samples included 30 small plastic bottles of powders containing various compounds spanning the atomic numbers up to 20, plus a bottle of water and one of air. Using the chemical formulas and measured masses, we formed a matrix giving the number of Hounsfield units per (mole per cubic meter) at each tube voltage for each of 13 chemical elements. We defined a corresponding matrix in units we call molar Hounsfield unit (HU) potency, the change in HU value that an added mole per cubic meter in a given voxel would produce. We built a matrix of molar potencies for each chemical element and tube voltage and performed a singular value decomposition (SVD) on it to formulate our material basis; we determined that the dimension of this basis is two. We then compared measurements in this material space with theoretical predictions, combining XCOM cross-section data with the tungsten anode spectral model using interpolating cubic splines (TASMICS), a one-parameter filter, and a simple detector model, creating a matrix analogous to our experimental matrix for the first 20 chemical elements. Finally, we compared the model predictions to Hounsfield unit measurements on three XCT calibration phantoms taken from the literature. RESULTS: The theoretical model built from XCOM data predicts the experimental HU potency values derived from our scans of chemical elements. The singular values and singular vectors of the model and powder measurements are in substantial agreement. Application of the Bayesian Information Criterion (BIC) shows that exactly two singular values and singular vectors describe the results over the four tube voltages. Without introducing additional parameters, the theoretical model gives a good account of the HU values from the literature, measured for calibration phantoms at several tube voltages on several commercial instruments. CONCLUSIONS: We have developed a two-dimensional material basis that specifies the degree to which individual elements in compounds affect the HU values in XCT images of samples with elements up to atomic number Z = 20. We show that two dimensions are sufficient given the contrast and noise in our experiment. The linear combinations of concentrations of elements that can be observed using a medical XCT have been characterized, providing a material basis for use in dual-energy reconstruction. This approach provides groundwork for improved reconstruction and for the link of Hounsfield units to the SI.
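A sketch of the rank analysis, assuming NumPy; the potency matrix below is a synthetic rank-2 placeholder (13 elements by 4 tube voltages), not the measured data, and simple inspection of the singular values stands in for the paper's BIC-based selection.

    import numpy as np

    rng = np.random.default_rng(2)

    # Placeholder potency matrix: rows = 13 chemical elements (Z <= 20 subset),
    # columns = 4 tube voltages (80, 100, 120, 140 kV).  Real entries would be the
    # measured HU per (mol/m^3); here we synthesize a rank-2 matrix plus noise.
    basis = rng.random((2, 4))
    loadings = rng.random((13, 2)) * np.array([50.0, 5.0])
    potency = loadings @ basis + 0.1 * rng.normal(size=(13, 4))

    u, s, vt = np.linalg.svd(potency, full_matrices=False)
    print("singular values:", np.round(s, 2))   # expect two well above the noise floor

    # The paper selects the number of retained components with the Bayesian
    # Information Criterion; here we simply keep the two dominant vectors,
    # which span the observable "material basis".
    material_basis = vt[:2]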


Subject(s)
Models, Theoretical; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Tomography, X-Ray Computed/standards; Calibration; Humans; Tomography, X-Ray Computed/instrumentation
13.
Phys Teach ; 56(2)2018 Feb.
Article in English | MEDLINE | ID: mdl-29542737

ABSTRACT

A measurement of a thermophysical property of water is made using items found in the author's home. Specifically, the ratio of the energy required to boil away the water completely to the energy required to heat it from the melting point to the boiling point is found to be 5.7, which may be compared to the standard value of 5.5. The close agreement is not representative of the actual uncertainties in this simple experiment. Heating water in a microwave oven lets a student apply the techniques of quantitative science to questions generated by his or her own scientific curiosity.
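For reference, the standard value follows from textbook constants (a rough check; the exact figure depends on the constants and rounding chosen):

    \frac{L_\mathrm{vap}}{c\,\Delta T} \approx \frac{2257\ \mathrm{kJ\,kg^{-1}}}{(4.19\ \mathrm{kJ\,kg^{-1}\,K^{-1}})(100\ \mathrm{K})} \approx 5.4 .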

14.
Appl Opt ; 57(4): 788-793, 2018 Feb 01.
Article in English | MEDLINE | ID: mdl-29400755

ABSTRACT

We have significantly accelerated diffraction calculations using three independent acceleration devices. These innovations are restricted to cylindrically symmetrical systems. In the first case, we consider Wolf's formula for integrated flux in a circular region following diffraction of light from a point source by a circular aperture or a circular lens. Although the formula involves a double sum, we evaluate it with the effort of a single sum by use of fast Fourier transforms (FFTs) to perform convolutions. In the second case, we exploit properties of the Fresnel-Kirchhoff propagator in the Gaussian, paraxial optics approximation to achieve the propagation of a partial wave from one optical element to the next. Ordinarily, this would involve a double loop over the radial variables on each element, but we have reduced the computational cost by a factor approximately equal to the smaller number of radius values. In the third case, we reduce the number of partial waves, for which the propagation needs to be calculated, to determine the throughput of an optical system of interest in radiometry when at least one element is very small, such as a pinhole aperture. As a demonstration of the benefits of the second case, we analyze intricate diffraction effects that occur in a satellite-based solar radiometry instrument.
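The first acceleration rests on a standard identity: a convolution-type double sum can be evaluated with FFTs in O(N log N) rather than O(N^2). A generic NumPy demonstration of that identity (not Wolf's formula itself):

    import numpy as np

    rng = np.random.default_rng(3)
    a, b = rng.random(4096), rng.random(4096)

    # Direct evaluation of the convolution sum c[n] = sum_k a[k] b[n-k] costs
    # O(N^2); the FFT route costs O(N log N).
    direct = np.convolve(a, b)

    n_fft = 2 * a.size - 1                       # zero-pad to the full output length
    via_fft = np.fft.irfft(np.fft.rfft(a, n_fft) * np.fft.rfft(b, n_fft), n_fft)

    print(np.allclose(direct, via_fft))          # True: same result, far less work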

15.
Opt Express ; 26(25): 32788-32801, 2018 Dec 10.
Article in English | MEDLINE | ID: mdl-30645441

ABSTRACT

The low-latency requirements of a practical loophole-free Bell test preclude time-consuming post-processing steps that are often used to improve the statistical quality of a physical random number generator (RNG). Here we demonstrate a post-processing-free RNG that produces a random bit within 2.4(2) ns of an input trigger. We use weak feedback to eliminate long-term drift, resulting in 24-hour operation with output that is statistically indistinguishable from a Bernoulli process. We quantify the impact of the feedback on the predictability of the output as less than 6.4×10⁻⁷ and demonstrate the utility of the Allan variance as a tool for characterizing non-idealities in RNGs.
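A sketch of the Allan-variance diagnostic, assuming NumPy; the ideal Bernoulli(1/2) stream below stands in for the recorded RNG output.

    import numpy as np

    def allan_variance(bits, m):
        """Non-overlapping Allan variance at averaging length m: half the mean
        squared difference of successive block means."""
        n_blocks = bits.size // m
        means = bits[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(4)
    bits = rng.integers(0, 2, 1_000_000).astype(float)   # ideal Bernoulli(1/2) stand-in

    # For white (memoryless) bits the Allan variance falls as var/m = 1/(4m);
    # drift or residual feedback shows up as a departure from that slope.
    for m in (1, 10, 100, 1000):
        print(m, allan_variance(bits, m), 0.25 / m)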

16.
Opt Express ; 25(22): 26728-26746, 2017 Oct 30.
Article in English | MEDLINE | ID: mdl-29092156

ABSTRACT

Preliminary experiments at the NIST Spectral Tri-function Automated Reference Reflectometer (STARR) facility have been conducted with the goal of providing the diffuse optical properties of a solid reference standard with optical properties similar to human skin. Here, we describe an algorithm for determining the best-fit parameters and the statistical uncertainty associated with the measurement. The objective function is determined from the profile log likelihood, including both experimental and Monte Carlo uncertainties. Initially, the log likelihood is determined over a large parameter search box using a relatively small number of Monte Carlo samples, such as 2·10⁴. The search area is iteratively reduced to include the 99.9999% confidence region, while doubling the number of samples at each iteration until the experimental uncertainty dominates over the Monte Carlo uncertainty; typically this occurs by 1.28·10⁶ samples. The log likelihood is then fit to determine a 95% confidence ellipse. The inverse problem requires the values of the log likelihood at many points; our implementation uses importance sampling to calculate these points on a grid efficiently. Ultimately, the time-to-solution is approximately six times the cost of a Monte Carlo simulation of the radiation transport problem for a single set of parameters with the largest number of photons required. The approach is found to be 64 times faster than our implementation of Particle Swarm Optimization.

17.
Med Phys ; 44(9): e202-e206, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28901619

ABSTRACT

PURPOSE: The goal is to determine whether dual-energy computed tomography (CT) leads to a unique reconstruction using two basis materials. METHODS: The beam-hardening equation is simplified to the single-voxel case. The simplified equation is rewritten to show that the solution can be considered to be linear operations in a vector space followed by a measurement model that is a sum of exponentials of the coordinates. The case of finding the concentrations of two materials from measurements of two spectra with three photon energies is the simplest non-trivial case and is considered in detail. RESULTS: Using a material basis of water and bone, with photon energies of 30 keV, 60 keV, and 100 keV, a case with two solutions is demonstrated. CONCLUSIONS: Dual-energy reconstruction using two materials is not unique, as shown by an example. Algorithms for dual-energy, dual-material reconstructions need to be aware of this potential ambiguity in the solution.
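In schematic form (our notation), the simplified single-voxel measurement for spectrum s with spectral weights S_s(E) and material concentrations c_1, c_2 is

    m_s = \sum_E S_s(E)\,\exp\!\big[-\mu_1(E)\,c_1 - \mu_2(E)\,c_2\big],

a linear operation on the concentrations followed by a sum of exponentials; the paper exhibits two distinct pairs (c_1, c_2) that yield the same pair of measurements (m_1, m_2) when three photon energies are present.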


Subject(s)
Algorithms; Tomography, X-Ray Computed; Humans; Photons
18.
Article in English | MEDLINE | ID: mdl-34877089

ABSTRACT

The goal of this study was to compare volumetric analysis in computed tomography (CT) with the length measurement prescribed by the Response Evaluation Criteria in Solid Tumors (RECIST) for a system with known mass and unknown shape. We injected 2 mL to 4 mL of water into vials of sodium polyacrylate and into disposable diapers. Volume measurements of the sodium polyacrylate powder were able to predict both mass and proportional changes in mass within a 95 % prediction interval of width 12 % and 16 %, respectively. The corresponding figures for RECIST were 102 % and 82 %.

20.
Math J ; 19, 2017.
Article in English | MEDLINE | ID: mdl-29706749

ABSTRACT

The derivation of the scattering force and the gradient force on a spherical particle due to an electromagnetic wave often invokes the Clausius-Mossotti factor, based on an ad hoc physical model. In this article, we derive the expressions including the Clausius-Mossotti factor directly from the fundamental equations of classical electromagnetism. Starting from an analytic expression for the force on a spherical particle in a vacuum using the Maxwell stress tensor, as well as the Mie solution for the response of dielectric particles to an electromagnetic plane wave, we derive the scattering and gradient forces. In both cases, the Clausius-Mossotti factor arises rigorously from the derivation without any physical argumentation. The limits agree with expressions in the literature.
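For reference, the standard Rayleigh-regime expressions that such derivations reproduce, written with the Clausius-Mossotti factor (m² − 1)/(m² + 2) for relative refractive index m, particle radius a, medium index n_m, wavenumber k in the medium, and intensity I (notation ours and may differ from the article):

    F_\mathrm{grad} = \frac{2\pi n_m a^3}{c}\,\mathrm{Re}\!\left(\frac{m^2-1}{m^2+2}\right)\nabla I ,
    \qquad
    F_\mathrm{scat} = \frac{n_m}{c}\,\frac{8}{3}\pi k^4 a^6 \left|\frac{m^2-1}{m^2+2}\right|^2 I\,\hat{z} .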
