Results 1 - 20 of 36
1.
J Xray Sci Technol ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38701129

ABSTRACT

BACKGROUND: X-ray imaging is widely used for the non-destructive detection of defects in industrial products on a conveyor belt. In-line detection requires highly accurate, robust, and fast algorithms. Deep Convolutional Neural Networks (DCNNs) satisfy these requirements when a large amount of labeled data is available. To overcome the challenge of collecting these data, different methods of X-ray image generation are considered. OBJECTIVE: Depending on the desired degree of similarity to real data, different physical effects should either be simulated or can be ignored. X-ray scattering is known to be computationally expensive to simulate, and this effect can greatly affect the accuracy of a generated X-ray image. We aim to quantitatively evaluate the effect of scattering on defect detection. METHODS: Monte-Carlo simulation is used to generate the X-ray scattering distribution. DCNNs are trained on data with and without scattering and applied to the same test datasets. Probability of Detection (POD) curves are computed to compare their performance, characterized by the size of the smallest detectable defect. RESULTS: We apply the methodology to a model problem of defect detection in cylinders. When trained on data without scattering, DCNNs reliably detect defects larger than 1.3 mm, and using data with scattering improves performance by less than 5%. If the analysis is restricted to cases with a large scattering-to-primary ratio (1 < SPR < 5), the difference in performance can reach 15% (approx. 0.4 mm). CONCLUSION: Excluding the scattering signal from the training data has the largest effect on the smallest detectable defects, and the difference decreases for larger defects. The scattering-to-primary ratio has a significant effect on detection performance and on the required accuracy of data generation.
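
A POD curve of the kind used above can be estimated by fitting a logistic model to per-defect hit/miss detection outcomes and reading off the defect size at a chosen detection probability (commonly 90%). A minimal sketch under that assumption; the sizes, detection outcomes, and function names below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def pod_logistic(size, a, b):
    # Logistic probability-of-detection model: POD(size) = 1 / (1 + exp(-(a + b*size)))
    return 1.0 / (1.0 + np.exp(-(a + b * size)))

# Hypothetical per-defect results: physical size (mm) and whether the DCNN detected the defect
defect_size_mm = np.array([0.5, 0.8, 1.0, 1.2, 1.4, 1.6, 2.0, 2.5, 3.0])
detected       = np.array([0,   0,   0,   1,   1,   1,   1,   1,   1  ])

params, _ = curve_fit(pod_logistic, defect_size_mm, detected, p0=[-5.0, 4.0])

# Smallest defect size detected with 90% probability (s_90), a common POD summary value
sizes = np.linspace(defect_size_mm.min(), defect_size_mm.max(), 1000)
s90 = sizes[np.argmax(pod_logistic(sizes, *params) >= 0.9)]
print(f"s_90 is approximately {s90:.2f} mm")
```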

2.
J Microsc ; 289(3): 157-163, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36567626

ABSTRACT

Many advanced nanomaterials rely on carefully designed morphology and elemental distribution to achieve their functionalities. Among the few experimental techniques that can directly visualise the 3D elemental distribution on the nanoscale are approaches based on electron tomography in combination with energy-dispersive X-ray spectroscopy (EDXS) and electron energy loss spectroscopy (EELS). Unfortunately, these highly informative methods are severely limited by the fundamentally low signal-to-noise ratio, which makes long experimental times and high electron irradiation doses necessary to obtain reliable 3D reconstructions. Addressing these limitations has been the major research question for the development of these techniques in recent years. This short review outlines the latest progress on the methods to reduce experimental time and electron irradiation dose requirements for 3D elemental distribution analysis and gives an outlook on the development of this field in the near future.

3.
J Synchrotron Radiat ; 29(Pt 1): 254-265, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-34985443

ABSTRACT

Tomographic algorithms are often compared by evaluating them on certain benchmark datasets. For fair comparison, these datasets should ideally (i) be challenging to reconstruct, (ii) be representative of typical tomographic experiments, (iii) be flexible to allow for different acquisition modes, and (iv) include enough samples to allow for comparison of data-driven algorithms. Current approaches often satisfy only some of these requirements, but not all. For example, real-world datasets are typically challenging and representative of a category of experimental examples, but are restricted to the acquisition mode that was used in the experiment and are often limited in the number of samples. Mathematical phantoms are often flexible and can sometimes produce enough samples for data-driven approaches, but can be relatively easy to reconstruct and are often not representative of typical scanned objects. In this paper, we present a family of foam-like mathematical phantoms that aims to satisfy all four requirements simultaneously. The phantoms consist of foam-like structures with more than 100000 features, making them challenging to reconstruct and representative of common tomography samples. Because the phantoms are computer-generated, varying acquisition modes and experimental conditions can be simulated. An effectively unlimited number of random variations of the phantoms can be generated, making them suitable for data-driven approaches. We give a formal mathematical definition of the foam-like phantoms, and explain how they can be generated and used in virtual tomographic experiments in a computationally efficient way. In addition, several 4D extensions of the 3D phantoms are given, enabling comparisons of algorithms for dynamic tomography. Finally, example phantoms and tomographic datasets are given, showing that the phantoms can be effectively used to make fair and informative comparisons between tomography algorithms.
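
The construction principle of such foam-like phantoms can be illustrated with a simple voxelized sketch: a solid cylinder from which randomly placed, non-overlapping spherical voids are subtracted. This is only a conceptual illustration with assumed parameters, not the authors' generator (which is defined analytically and is far more efficient):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 256                                            # volume size in voxels (illustrative)
z, y, x = np.mgrid[:n, :n, :n]
cx = (n - 1) / 2.0

# Solid cylinder along the z-axis
phantom = ((x - cx) ** 2 + (y - cx) ** 2 <= (0.45 * n) ** 2).astype(np.float32)

voids = []                                         # accepted voids: (cz, cy, cx, radius)
while len(voids) < 300:                            # toy count; the real phantoms use >100000 features
    c = rng.uniform(0.1 * n, 0.9 * n, size=3)
    r = rng.uniform(2.0, 0.05 * n)
    # Reject candidate voids that overlap an already accepted void
    if all((c[0]-vz)**2 + (c[1]-vy)**2 + (c[2]-vx)**2 >= (r + vr)**2 for vz, vy, vx, vr in voids):
        voids.append((*c, r))
        phantom[(z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2 <= r**2] = 0.0
```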


Subjects
Algorithms; Tomography, X-Ray Computed; Phantoms, Imaging
4.
Opt Express ; 27(6): 7834-7856, 2019 Mar 18.
Article in English | MEDLINE | ID: mdl-31052612

ABSTRACT

Recently we have shown that light-field photography images can be interpreted as limited-angle cone-beam tomography acquisitions. Here, we use this property to develop a direct-space tomographic refocusing formulation that allows one to refocus both unfocused and focused light-field images. We express the reconstruction as a convex optimization problem, thus enabling the use of various regularization terms to help suppress artifacts, and a wide class of existing advanced tomographic algorithms. This formulation also supports super-resolved reconstructions and the correction of the optical system's limited frequency response (point spread function). We validate this method with numerical and real-world examples.
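
For context, the conventional alternative to this tomographic formulation is shift-and-sum refocusing, in which each sub-aperture view is shifted in proportion to its aperture coordinates and the views are averaged. A minimal sketch of that baseline (not the paper's convex-optimization method), assuming a 4D light field indexed as L[u, v, s, t]:

```python
import numpy as np
from scipy.ndimage import shift

def shift_and_sum_refocus(lightfield, alpha):
    """Refocus a 4D light field L[u, v, s, t] at a depth parameterized by alpha.

    Each sub-aperture image is translated by alpha times its (u, v) offset from
    the aperture centre, then all views are averaged.
    """
    n_u, n_v = lightfield.shape[:2]
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    refocused = np.zeros(lightfield.shape[2:], dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            refocused += shift(lightfield[u, v], (alpha * (u - cu), alpha * (v - cv)), order=1)
    return refocused / (n_u * n_v)

lf = np.random.rand(9, 9, 128, 128)      # placeholder light field: 9x9 views of 128x128 pixels
image = shift_and_sum_refocus(lf, alpha=0.5)
```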

5.
Acc Chem Res ; 50(6): 1293-1302, 2017 Jun 20.
Article in English | MEDLINE | ID: mdl-28525260

ABSTRACT

Self-assembling structures and their dynamical processes in polymeric systems have been investigated using three-dimensional transmission electron microscopy (3D-TEM). Block copolymers (BCPs) self-assemble into nanoscale periodic structures called microphase-separated structures, a deep understanding of which is important for creating nanomaterials with superior physical properties, such as high-performance membranes with well-defined pore size and high-density data storage media. Because microphase-separated structures have become increasingly complicated with advances in precision polymerization, characterizing these complex morphologies is becoming increasingly difficult. Thus, microscopes capable of obtaining 3D images are required. In this article, we demonstrate that 3D-TEM is an essential tool for studying BCP nanostructures, especially those self-assembled during dynamical processes and under confined conditions. The first example is a dynamical process called order-order transitions (OOTs). Upon changing temperature or pressure or applying an external field, such as a shear flow or electric field, BCP nanostructures transform from one type of structure to another. The OOTs are examined by freezing the specimens in the middle of the OOT and then observing the boundary structures between the preexisting and newly formed nanostructures in three-dimensions. In an OOT between the bicontinuous double gyroid and hexagonally packed cylindrical structures, two different types of epitaxial phase transition paths are found. Interestingly, the paths depend on the direction of the OOT. The second example is BCP self-assemblies under confinement that have been examined by 3D-TEM. A variety of intriguing and very complicated 3D morphologies can be formed even from the BCPs that self-assemble into simple nanostructures, such as lamellar and cylindrical structures in the bulk (in free space). Although 3D-TEM is becoming more frequently used for detailed morphological investigations, it is generally used to study static nanostructures. Although OOTs are dynamical processes, the actual experiment is done in the static state, through a detailed morphological study of a snapshot taken during the OOT. Developing time-dependent nanoscale 3D imaging has become a hot topic. Here, the two main problems preventing the development of in situ electron tomography for polymer materials are addressed. First, the staining protocol often used to enhance contrast for electrons is replaced by a new contrast enhancement based on chemical differences between polymers. In this case, no staining is necessary. Second, a new 3D reconstruction algorithm allows us to obtain a high-contrast, quantitative 3D image from fewer projections than is required for the conventional algorithm to achieve similar contrast, reducing the number of projections and thus the electron beam dose. Combining these two new developments is expected to open new doors to 3D in situ real-time structural observation of polymer materials.

6.
Opt Express ; 26(22): 28982-28995, 2018 Oct 29.
Article in English | MEDLINE | ID: mdl-30470067

ABSTRACT

We propose a combination of an experimental approach and a reconstruction technique that leads to a reduction of artefacts in X-ray computed tomography of strongly attenuating objects. Through fully automatic data alignment, data generated in multiple experiments with varying object orientations are combined. Simulations and experiments show that the solutions computed using algebraic methods based on multiple acquisitions can achieve a dramatic improvement in reconstruction quality, even when each acquisition generates a reduced number of projections. The approach does not require any advanced setup components, making it ideal for laboratory-based X-ray tomography.
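
In 2D, acquiring the same object at an additional orientation is geometrically equivalent to extending the set of projection angles, so the combined data can be reconstructed jointly with an algebraic method. A hedged sketch using the ASTRA toolbox for a fan-beam geometry; all sizes, the 37° orientation offset, and the placeholder sinograms `sino_a`/`sino_b` are illustrative, and the paper's automatic alignment step is not shown:

```python
import numpy as np
import astra

n = 512                                                          # reconstruction grid (pixels)
angles_a = np.linspace(0, np.pi, 180, endpoint=False)            # acquisition 1
angles_b = angles_a + np.deg2rad(37.0)                           # acquisition 2: object rotated by 37 degrees
angles = np.concatenate([angles_a, angles_b])

sino_a = np.zeros((180, 768), dtype=np.float32)                  # placeholder: aligned sinogram, acquisition 1
sino_b = np.zeros((180, 768), dtype=np.float32)                  # placeholder: aligned sinogram, acquisition 2
sino = np.concatenate([sino_a, sino_b], axis=0)                  # stacked data, shape (n_angles, n_detectors)

vol_geom = astra.create_vol_geom(n, n)
proj_geom = astra.create_proj_geom('fanflat', 1.0, 768, angles, 1500.0, 500.0)

sino_id = astra.data2d.create('-sino', proj_geom, sino)
rec_id = astra.data2d.create('-vol', vol_geom)

cfg = astra.astra_dict('SIRT_CUDA')                              # algebraic reconstruction on the GPU
cfg['ProjectionDataId'] = sino_id
cfg['ReconstructionDataId'] = rec_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 200)                                 # 200 SIRT iterations
reconstruction = astra.data2d.get(rec_id)
```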

7.
Opt Express ; 26(18): 22574-22602, 2018 Sep 03.
Article in English | MEDLINE | ID: mdl-30184917

ABSTRACT

Current computational methods for light field photography model the ray-tracing geometry inside the plenoptic camera. This representation of the problem, and some common approximations, can lead to errors in the estimation of object sizes and positions. We propose a representation that leads to the correct reconstruction of object sizes and distances to the camera, by showing that light field images can be interpreted as limited angle cone-beam tomography acquisitions. We then quantitatively analyze its impact on image refocusing, depth estimation and volumetric reconstructions, comparing it against other possible representations. Finally, we validate these results with numerical and real-world examples.
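
The "limited angle" in this interpretation is set by the main-lens aperture: the angular range of the equivalent cone-beam acquisition is roughly twice the half-angle subtended by the aperture at the in-focus plane. A small illustrative calculation (the numbers are assumptions, not from the paper):

```python
import numpy as np

aperture_diameter_mm = 25.0     # assumed main-lens aperture
focus_distance_mm = 500.0       # assumed distance to the in-focus plane

half_angle = np.degrees(np.arctan(0.5 * aperture_diameter_mm / focus_distance_mm))
print(f"equivalent angular range: about +/-{half_angle:.1f} degrees "
      f"({2 * half_angle:.1f} degrees total), i.e. a strongly limited-angle acquisition")
```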

8.
J Synchrotron Radiat ; 23(Pt 3): 842-849, 2016 May.
Article in English | MEDLINE | ID: mdl-27140167

ABSTRACT

The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
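
In practice the integration is exposed through TomoPy's `recon` call, with the ASTRA toolbox selected as the reconstruction backend. A minimal sketch, assuming a pre-processed parallel-beam dataset; the array here is a placeholder and the option values are illustrative:

```python
import numpy as np
import tomopy

# proj: normalized projections with shape (n_angles, n_slices, n_columns); theta: angles in radians
proj = np.zeros((180, 16, 512), dtype=np.float32)      # placeholder data
theta = tomopy.angles(180)

# Standard TomoPy reconstruction (CPU-based gridrec)
rec_gridrec = tomopy.recon(proj, theta, algorithm='gridrec')

# The same call dispatched to the ASTRA toolbox's GPU-based SIRT, requiring only an options change
options = {'method': 'SIRT_CUDA', 'proj_type': 'cuda', 'num_iter': 200}
rec_sirt = tomopy.recon(proj, theta, algorithm=tomopy.astra, options=options)
```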

9.
J Xray Sci Technol ; 22(1): 77-89, 2014.
Article in English | MEDLINE | ID: mdl-24463387

ABSTRACT

BACKGROUND: In computed tomography (CT), the source-detector system commonly rotates around the object in a circular trajectory. Such a trajectory does not allow the detector to be fully exploited when scanning elongated objects. OBJECTIVE: To increase the spatial resolution of the reconstructed image by optimal zooming during scanning. METHODS: A new approach is proposed in which the full width of the detector is exploited for every projection angle. This approach is based on the use of prior information about the object's convex hull to move the source as close as possible to the object while avoiding truncation of the projections. RESULTS: Experiments show that the proposed approach can significantly improve reconstruction quality, producing reconstructions with smaller errors and revealing more details in the object. CONCLUSIONS: The proposed approach can lead to more accurate reconstructions and increased spatial resolution compared to the conventional circular trajectory.
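
The core geometric step can be pictured as follows: for each projection angle, the source is moved as close to the object as the detector's fan angle allows, with the object's convex hull guaranteeing that no point projects outside the detector. A simplified 2D fan-beam version under assumed dimensions (not the authors' implementation; in practice a clearance margin would also be enforced):

```python
import numpy as np

def min_source_distances(hull_xy, angles, detector_width, source_detector_dist):
    """For each angle, the smallest source-to-rotation-centre distance such that
    every convex-hull vertex stays inside the detector fan (2D fan beam)."""
    half_fan = np.arctan(0.5 * detector_width / source_detector_dist)
    distances = []
    for a in angles:
        # Rotate the hull into the frame where the source lies on the negative x-axis
        c, s = np.cos(-a), np.sin(-a)
        x = c * hull_xy[:, 0] - s * hull_xy[:, 1]
        y = s * hull_xy[:, 0] + c * hull_xy[:, 1]
        # A point (x, y) seen from a source at (-d, 0) satisfies |y| / (d + x) <= tan(half_fan)
        distances.append(np.max(np.abs(y) / np.tan(half_fan) - x))
    return np.array(distances)

# Example: an elongated rectangular object, 40 mm x 200 mm
hull = np.array([[-20, -100], [20, -100], [20, 100], [-20, 100]], dtype=float)
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
d_min = min_source_distances(hull, angles, detector_width=400.0, source_detector_dist=600.0)
```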


Subjects
Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Computer Simulation; Phantoms, Imaging
10.
Sci Adv ; 9(38): eadg6073, 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37729396

ABSTRACT

In the fine arts, impressions found on terracotta sculptures in museum collections are rarely reported, and not in a systematic manner. Here, we present a procedure for scanning fingermarks and toolmarks found on the visible surface and inner walls of a terracotta sculpture using 3D micro-computed tomography, as well as methods for quantitatively characterizing these impressions. We apply our pipeline to the terracotta sculpture Study for a Hovering Putto, attributed to Laurent Delvaux and housed in the Rijksmuseum collection. On the basis of combined archaeology and forensics research that assigns age groups to makers of European ancestry from ridge breadth values, we estimate that the fingermarks belong to an adult male. Given that each fingerprint is unique and the carving tools were exclusively made in the artist's workshop, this work provides an incentive to pursue artist profiling using innovative computational approaches on impressions preserved in terracotta sculptures.

11.
Nanoscale ; 15(11): 5391-5402, 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36825781

ABSTRACT

Electron tomography is a widely used technique for 3D structural analysis of nanomaterials, but it can cause damage to samples due to high electron doses and long exposure times. To minimize such damage, researchers often reduce beam exposure by acquiring fewer projections through tilt undersampling. However, this approach can also introduce reconstruction artifacts due to insufficient sampling. Therefore, it is important to determine the optimal number of projections that minimizes both beam exposure and undersampling artifacts for accurate reconstructions of beam-sensitive samples. Current methods for determining this optimal number of projections involve acquiring and post-processing multiple reconstructions with different numbers of projections, which can be time-consuming and requires multiple samples due to sample damage. To improve this process, we propose a protocol that combines golden ratio scanning and quasi-3D reconstruction to estimate the optimal number of projections in real-time during a single acquisition. This protocol was validated using simulated and realistic nanoparticles, and was successfully applied to reconstruct two beam-sensitive metal-organic framework complexes.
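
Golden ratio scanning replaces a fixed, evenly spaced tilt scheme with an angular increment equal to the golden angle of the tilt range, so that any prefix of the acquired series already covers the range nearly uniformly and the acquisition can be stopped as soon as the quasi-3D reconstruction looks sufficient. A short sketch of how such an angle series can be generated (the range and projection count are assumptions):

```python
import numpy as np

def golden_ratio_angles(n_projections, angular_range=180.0):
    """Tilt angles where each step advances by the golden angle of the chosen range."""
    golden_ratio = (1 + np.sqrt(5)) / 2
    golden_step = angular_range / golden_ratio          # about 111.25 degrees for a 180-degree range
    return np.mod(np.arange(n_projections) * golden_step, angular_range)

angles = golden_ratio_angles(75)
# Any leading subset angles[:k] is already close to uniformly distributed over the range,
# which is what makes stopping the acquisition early (once quality is sufficient) possible.
```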

12.
Dentomaxillofac Radiol ; 51(7): 20210437, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35532946

ABSTRACT

Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it a difficult task to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied for CT image reconstruction, bone segmentation, and surgical planning. After full text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 focusing on bone segmentation and 11 focusing on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.


Subjects
Deep Learning; Surgery, Oral; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods
13.
Phys Med Biol ; 66(13), 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34107467

ABSTRACT

High cone-angle artifacts (HCAAs) appear frequently in circular cone-beam computed tomography (CBCT) images and can heavily affect diagnosis and treatment planning. To reduce HCAAs in CBCT scans, we propose a novel deep learning approach that reduces the three-dimensional (3D) nature of HCAAs to two-dimensional (2D) problems in an efficient way. Specifically, we exploit the relationship between HCAAs and the rotational scanning geometry by training a convolutional neural network (CNN) using image slices that were radially sampled from CBCT scans. We evaluated this novel approach using a dataset of input CBCT scans affected by HCAAs and high-quality artifact-free target CBCT scans. Two different CNN architectures were employed, namely U-Net and a mixed-scale dense CNN (MS-D Net). The artifact reduction performance of the proposed approach was compared to that of a Cartesian slice-based artifact reduction deep learning approach in which a CNN was trained to remove the HCAAs from Cartesian slices. In addition, all processed CBCT scans were segmented to investigate the impact of HCAAs reduction on the quality of CBCT image segmentation. We demonstrate that the proposed deep learning approach with geometry-aware dimension reduction greatly reduces HCAAs in CBCT scans and outperforms the Cartesian slice-based deep learning approach. Moreover, the proposed artifact reduction approach markedly improves the accuracy of the subsequent segmentation task compared to the Cartesian slice-based workflow.
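
The geometry-aware sampling can be pictured as extracting 2D slices that all contain the rotation axis, at a set of azimuthal angles, so that the cone-beam artifact pattern looks similar in every training slice. A simplified sketch of radial slice extraction from a reconstructed volume using interpolation (an assumed implementation for illustration, not the authors' code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_slice(volume, angle_rad):
    """Extract the vertical slice through the centre of `volume` (z, y, x) at the given
    azimuthal angle, i.e. a plane containing the rotation (z) axis."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    n_r = max(ny, nx)
    r = np.linspace(-n_r / 2.0, n_r / 2.0, n_r)
    zz, rr = np.meshgrid(np.arange(nz), r, indexing='ij')
    yy = cy + rr * np.sin(angle_rad)
    xx = cx + rr * np.cos(angle_rad)
    return map_coordinates(volume, [zz, yy, xx], order=1, mode='nearest')

volume = np.zeros((128, 256, 256), dtype=np.float32)   # placeholder CBCT volume
slices = [radial_slice(volume, a) for a in np.linspace(0, np.pi, 32, endpoint=False)]
```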


Subjects
Artifacts; Deep Learning; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Neural Networks, Computer
14.
Comput Methods Programs Biomed ; 207: 106192, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34062493

ABSTRACT

BACKGROUND AND OBJECTIVE: Over the past decade, convolutional neural networks (CNNs) have revolutionized the field of medical image segmentation. Prompted by the developments in computational resources and the availability of large datasets, a wide variety of different two-dimensional (2D) and three-dimensional (3D) CNN training strategies have been proposed. However, a systematic comparison of the impact of these strategies on the image segmentation performance is still lacking. Therefore, this study aimed to compare eight different CNN training strategies, namely 2D (axial, sagittal and coronal slices), 2.5D (3 and 5 adjacent slices), majority voting, randomly oriented 2D cross-sections and 3D patches. METHODS: These eight strategies were used to train a U-Net and an MS-D network for the segmentation of simulated cone-beam computed tomography (CBCT) images comprising randomly-placed non-overlapping cylinders and experimental CBCT images of anthropomorphic phantom heads. The resulting segmentation performances were quantitatively compared by calculating Dice similarity coefficients. In addition, all segmented and gold standard experimental CBCT images were converted into virtual 3D models and compared using orientation-based surface comparisons. RESULTS: The CNN training strategy that generally resulted in the best performances on both simulated and experimental CBCT images was majority voting. When employing 2D training strategies, the segmentation performance can be optimized by training on image slices that are perpendicular to the predominant orientation of the anatomical structure of interest. Such spatial features should be taken into account when choosing or developing novel CNN training strategies for medical image segmentation. CONCLUSIONS: The results of this study will help clinicians and engineers to choose the most-suited CNN training strategy for CBCT image segmentation.
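
Majority voting in this context combines the binary predictions obtained from networks trained on axial, sagittal and coronal slices by keeping a voxel only where at least two of the three orientations agree; the result is then scored with the Dice similarity coefficient. A small self-contained sketch under these assumptions, using random placeholder segmentations:

```python
import numpy as np

def majority_vote(pred_axial, pred_sagittal, pred_coronal):
    # Keep voxels where at least two of the three orientation-specific predictions agree
    votes = pred_axial.astype(np.int8) + pred_sagittal.astype(np.int8) + pred_coronal.astype(np.int8)
    return votes >= 2

def dice_coefficient(prediction, reference):
    prediction, reference = prediction.astype(bool), reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    return 2.0 * intersection / (prediction.sum() + reference.sum())

shape = (128, 128, 128)
rng = np.random.default_rng(0)
axial, sagittal, coronal, gold = (rng.random(shape) > 0.5 for _ in range(4))
print("Dice:", dice_coefficient(majority_vote(axial, sagittal, coronal), gold))
```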


Subjects
Image Processing, Computer-Assisted; Tooth; Cone-Beam Computed Tomography; Neural Networks, Computer
15.
J Imaging ; 7(3), 2021 Mar 02.
Article in English | MEDLINE | ID: mdl-34460700

ABSTRACT

The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely, low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model, and the reconstruction speed.
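
The two headline metrics can be computed with scikit-image; a minimal sketch in which the reconstruction and ground-truth arrays are placeholders:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ground_truth = np.random.rand(512, 512).astype(np.float32)                      # placeholder ground-truth slice
reconstruction = ground_truth + 0.05 * np.random.randn(512, 512).astype(np.float32)  # placeholder reconstruction

data_range = float(ground_truth.max() - ground_truth.min())
psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=data_range)
ssim = structural_similarity(ground_truth, reconstruction, data_range=data_range)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```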

16.
J Imaging ; 6(12), 2020 Dec 08.
Article in English | MEDLINE | ID: mdl-34460532

ABSTRACT

Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part of industrial quality control, materials science and medical imaging. The need to acquire and process each scan in a short time naturally leads to trade-offs between speed and reconstruction quality, creating a need for fast reconstruction algorithms capable of creating accurate reconstructions from limited data. In this paper, we introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm. This algorithm adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency. Moreover, the NN-FDK algorithm is designed such that it has low training data requirements and is fast to train. This ensures that the proposed algorithm can be used to improve image quality in high-throughput CT scanning settings, where FDK is currently used to keep pace with the acquisition speed using readily available computational resources. We compare the NN-FDK algorithm to two standard CT reconstruction algorithms and to two popular deep neural networks trained to remove reconstruction artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK reconstruction algorithm is substantially faster in computing a reconstruction than all the tested alternative methods except for the standard FDK algorithm and we show it can compute accurate CCB CT reconstructions in cases of high noise, a low number of projection angles or large cone angles. Moreover, we show that the training time of an NN-FDK network is orders of magnitude lower than the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
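
The FDK baseline that NN-FDK builds on can be run with the ASTRA toolbox roughly as follows; the geometry values and the projection array are placeholders, and this sketch shows only the standard FDK step, not the learned component of NN-FDK:

```python
import numpy as np
import astra

n = 256                                           # reconstruction volume of n^3 voxels
n_angles, det_rows, det_cols = 360, 384, 384
angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)

# Placeholder cone-beam projection data; ASTRA expects the shape (det_rows, n_angles, det_cols)
projections = np.zeros((det_rows, n_angles, det_cols), dtype=np.float32)

vol_geom = astra.create_vol_geom(n, n, n)
proj_geom = astra.create_proj_geom('cone', 1.0, 1.0, det_rows, det_cols,
                                   angles, 1000.0, 500.0)   # source-origin and origin-detector distances

proj_id = astra.data3d.create('-proj3d', proj_geom, projections)
rec_id = astra.data3d.create('-vol', vol_geom)

cfg = astra.astra_dict('FDK_CUDA')
cfg['ProjectionDataId'] = proj_id
cfg['ReconstructionDataId'] = rec_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id)
fdk_reconstruction = astra.data3d.get(rec_id)
```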

17.
J Imaging ; 6(4), 2020 Apr 02.
Article in English | MEDLINE | ID: mdl-34460720

ABSTRACT

In tomographic imaging, the traditional workflow consists of an operator collecting data and an expert working on the reconstructed slices and drawing conclusions. The quality of the reconstructions depends heavily on the quality of the collected data, yet in this traditional process the expert has very little influence over the acquisition parameters, the experimental plan, or the collected data. It is often the case that the expert has to draw limited conclusions from the reconstructions, or adapt a research question to the available data. This way of imaging is static and sequential, and limits the potential of tomography as a research tool. In this paper, we propose a more dynamic process of imaging, in which experiments are tailored around a sample or the research question, intermediate reconstructions and analyses are available almost instantaneously, and the expert has input at any stage of the process (including during acquisition) to improve acquisition or image reconstruction. Through various applications of 2D, 3D and dynamic 3D imaging at the FleX-ray Laboratory, we illustrate the unexpected journey of exploration that a research question undergoes, and the surprising benefits it yields.

18.
J Imaging ; 6(12), 2020 Dec 02.
Article in English | MEDLINE | ID: mdl-34460529

ABSTRACT

An important challenge in hyperspectral imaging tasks is coping with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the full spectral input data is limited by computational cost. We propose a novel supervised deep learning approach for combining data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that the image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods and can be used in a wide range of problem settings. The integration of knowledge about the task allows for stronger data reduction and higher accuracies compared to standard data reduction methods.
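
The idea of a learned, task-driven reduction can be sketched as a 1x1 convolution that maps the many spectral bins to a few channels, trained jointly with the downstream analysis network so that the reduction preserves exactly the features the task needs. A PyTorch sketch under these assumptions; the layer sizes and class name are illustrative, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class DataReductionCNN(nn.Module):
    def __init__(self, n_spectral_bins=200, n_reduced=2, n_classes=2):
        super().__init__()
        # Learned spectral reduction: a 1x1 convolution mixing all bins into a few channels
        self.reduce = nn.Conv2d(n_spectral_bins, n_reduced, kernel_size=1)
        # Downstream analysis network operating on the reduced image
        self.analyse = nn.Sequential(
            nn.Conv2d(n_reduced, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),
        )

    def forward(self, x):                     # x: (batch, n_spectral_bins, height, width)
        return self.analyse(self.reduce(x))

model = DataReductionCNN()
hyperspectral_batch = torch.randn(4, 200, 64, 64)   # placeholder hyperspectral images
logits = model(hyperspectral_batch)                  # (4, n_classes, 64, 64): per-pixel logits
# Training end-to-end lets the reduction layer keep the spectral features relevant to the task.
```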

19.
J Imaging ; 6(12), 2020 Dec 11.
Article in English | MEDLINE | ID: mdl-34460535

ABSTRACT

X-ray plenoptic cameras acquire multi-view X-ray transmission images in a single exposure (light-field). Their development is challenging: designs have appeared only recently, and they are still affected by important limitations. Concurrently, the lack of available real X-ray light-field data hinders dedicated algorithmic development. Here, we present a physical emulation setup for rapidly exploring the parameter space of both existing and conceptual camera designs. This will assist and accelerate the design of X-ray plenoptic imaging solutions, and provide a tool for generating unlimited real X-ray plenoptic data. We also demonstrate that X-ray light-fields allow for reconstructing sharp spatial structures in three-dimensions (3D) from single-shot data.

20.
Sci Data ; 6(1): 215, 2019 Oct 22.
Article in English | MEDLINE | ID: mdl-31641152

ABSTRACT

Unlike previous works, this open data collection consists of X-ray cone-beam (CB) computed tomography (CT) datasets specifically designed for machine learning applications and high cone-angle artefact reduction. Forty-two walnuts were scanned with a laboratory X-ray set-up to provide data not just from a single object but from a class of objects with natural variability. For each walnut, CB projections on three different source orbits were acquired to provide CB data with different cone angles, and to enable the computation of artefact-free, high-quality ground truth images from the combined data that can be used for supervised learning. We provide the complete image reconstruction pipeline: raw projection data, a description of the scanning geometry, pre-processing and reconstruction scripts using open software, and the reconstructed volumes. As a result, the dataset can be used not only for high cone-angle artefact reduction but also for algorithm development and evaluation for other tasks, such as image reconstruction from limited or sparse-angle (low-dose) scanning, super-resolution, or segmentation.
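
Loading such a dataset typically amounts to reading the per-walnut projection images into a stack and parsing the accompanying geometry description; the reconstruction scripts released with the data do this with open tools. A generic sketch (the file names and directory layout here are placeholders, not the dataset's actual naming scheme):

```python
import glob
import numpy as np
import imageio.v2 as imageio

# Placeholder paths: one directory of projection TIFFs and a text file describing the scan geometry
projection_files = sorted(glob.glob('Walnut01/Projections/*.tif'))
projections = np.stack([imageio.imread(f).astype(np.float32) for f in projection_files])

geometry = np.loadtxt('Walnut01/scan_geometry.txt')   # placeholder geometry file, one row per projection
print(projections.shape, geometry.shape)
```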
