Results 1 - 20 of 23
1.
Opt Express ; 32(6): 9019-9041, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38571146

ABSTRACT

Many of the recent successes of deep learning-based approaches have been enabled by a framework of flexible, composable computational blocks with their parameters adjusted through an automatic differentiation mechanism to implement various data processing tasks. In this work, we explore how the same philosophy can be applied to existing "classical" (i.e., non-learning) algorithms, focusing on computed tomography (CT) as the application field. We apply four key design principles of this approach for CT workflow design: end-to-end optimization, explicit quality criteria, declarative algorithm construction by building the forward model, and use of existing classical algorithms as computational blocks. Through four case studies, we demonstrate that auto-differentiation is remarkably effective beyond the boundaries of neural-network training, extending to CT workflows containing varied combinations of classical and machine learning algorithms.
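The principle behind this abstract can be illustrated with a minimal sketch (not the authors' code, and using a toy one-parameter "classical" estimator rather than a CT workflow): forward-mode automatic differentiation, implemented here with dual numbers, tunes the parameter of a non-learning algorithm end-to-end against an explicit quality criterion.

```python
# Sketch only: a toy dual-number autodiff tuning a damping parameter
# of a simple "classical" estimator against a squared-error criterion.
class Dual:
    """Dual number a + b*eps for forward-mode autodiff."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__

def quality(lam, data, truth):
    """Explicit quality criterion: squared error of a damped estimate."""
    loss = Dual(0.0)
    for d, t in zip(data, truth):
        est = lam * d            # one-parameter "classical" estimator
        r = est - t
        loss = loss + r * r
    return loss

data = [1.2, 0.8, 1.1, 0.9]      # noisy measurements of 1.0
truth = [1.0, 1.0, 1.0, 1.0]

lam = 0.0
for _ in range(200):             # end-to-end gradient descent on lam
    g = quality(Dual(lam, 1.0), data, truth).der
    lam -= 0.05 * g
```

With these numbers, `lam` converges to the least-squares optimum `sum(d*t)/sum(d*d)`; the same mechanism scales to workflows whose blocks are full reconstruction algorithms.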

2.
Sci Rep ; 13(1): 19057, 2023 Nov 04.
Article in English | MEDLINE | ID: mdl-37925540

ABSTRACT

Automated analysis of the inner ear anatomy in radiological data instead of time-consuming manual assessment is a worthwhile goal that could facilitate preoperative planning and clinical research. We propose a framework encompassing joint semantic segmentation of the inner ear and anatomical landmark detection of the helicotrema, oval and round window. A fully automated pipeline with a single, dual-headed volumetric 3D U-Net was implemented, trained and evaluated using manually labeled in-house datasets from cadaveric specimens ([Formula: see text]) and clinical practice ([Formula: see text]). The model robustness was further evaluated on three independent open-source datasets ([Formula: see text] scans) consisting of cadaveric specimen scans. For the in-house datasets, Dice scores of [Formula: see text], intersection-over-union scores of [Formula: see text] and average Hausdorff distances of [Formula: see text] and [Formula: see text] voxel units were achieved. The landmark localization task was performed automatically with an average localization error of [Formula: see text] voxel units. A robust, albeit reduced, performance was attained for the catalogue of three open-source datasets. Results of the ablation studies with 43 mono-parametric variations of the basal architecture and training protocol provided task-optimal parameters for both categories. Ablation studies against single-task variants of the basal architecture showed a clear performance benefit of coupling landmark localization with segmentation and a dataset-dependent performance impact on segmentation ability.


Subjects
Deep Learning, Inner Ear, Humans, Inner Ear/diagnostic imaging, Cadaver, Computer-Assisted Image Processing/methods
3.
PNAS Nexus ; 1(4): pgac183, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36329726

ABSTRACT

Host cell invasion by intracellular, eukaryotic parasites within the phylum Apicomplexa is a remarkable and active process involving the coordinated action of apical organelles and other structures. To date, capturing how these structures interact during invasion has been difficult to observe in detail. Here, we used cryogenic electron tomography to image the apical complex of Toxoplasma gondii tachyzoites under conditions that mimic resting parasites and those primed to invade through stimulation with calcium ionophore. Through the application of mixed-scale dense networks for image processing, we developed a highly efficient pipeline for annotation of tomograms, enabling us to identify and extract densities of relevant subcellular organelles and accurately analyze features in 3-D. The results reveal a dramatic change in the shape of the anteriorly located apical vesicle upon its apparent fusion with a rhoptry that occurs only in the stimulated parasites. We also present information indicating that this vesicle originates from the vesicles that parallel the intraconoidal microtubules and that the latter two structures are linked by a novel tether. We show that a rosette structure previously proposed to be involved in rhoptry secretion is associated with apical vesicles beyond just the most anterior one. This result, suggesting multiple vesicles are primed to enable rhoptry secretion, may shed light on the mechanisms Toxoplasma employs to enable repeated invasion attempts. Using the same approach, we examine Plasmodium falciparum merozoites and show that they too possess an apical vesicle just beneath a rosette, demonstrating evolutionary conservation of this overall subcellular organization.

4.
Nat Commun ; 13(1): 7241, 2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36433970

ABSTRACT

The Klebsiella jumbo myophage ϕKp24 displays an unusually complex arrangement of tail fibers interacting with a host cell. In this study, we combine cryo-electron microscopy methods, protein structure prediction methods, molecular simulations, microbiological and machine learning approaches to explore the capsid, tail, and tail fibers of ϕKp24. We determine the structure of the capsid and tail at 4.1 Å and 3.0 Å resolution. We observe the tail fibers are branched and rearranged dramatically upon cell surface attachment. This complex configuration involves fourteen putative tail fibers with depolymerase activity that provide ϕKp24 with the ability to infect a broad panel of capsular polysaccharide (CPS) types of Klebsiella pneumoniae. Our study provides structural and functional insight into how ϕKp24 adapts to the variable surfaces of capsulated bacterial pathogens, which is useful for the development of phage therapy approaches against pan-drug resistant K. pneumoniae strains.


Subjects
Bacteriophages, Cryoelectron Microscopy, Klebsiella pneumoniae, Klebsiella, Capsid, Capsid Proteins
5.
EBioMedicine ; 82: 104157, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35863292

ABSTRACT

BACKGROUND: Primary HPV screening, due to its low specificity, requires an additional liquid-based cytology (LBC) triage test. However, even with LBC triage there has been a near doubling in the number of patients referred for colposcopy in recent years, the majority having low-grade disease. METHODS: To counter this, a triage test that generates a spatial map of the cervical surface at a molecular level has been developed which removes the subjectivity associated with LBC by facilitating identification of lesions in their entirety. 50 patients attending colposcopy were recruited to participate in a pilot study to evaluate the test. For each patient, cells were lifted from the cervix onto a membrane (cervical cell lift, CCL) and immunostained with a biomarker of precancerous cells, generating molecular maps of the cervical surface. These maps were analysed to detect high-grade lesions, and the results compared to the final histological diagnosis. FINDINGS: We demonstrated that spatial molecular mapping of the cervix has a sensitivity of 90% (95% CI 69-98) (positive predictive value 81% (95% CI 60-92)) for the detection of high-grade disease, and that AI-based analysis could aid disease detection through automated flagging of biomarker-positive cells. INTERPRETATION: Spatial molecular mapping of the CCL improved the rate of detection of high-grade disease in comparison to LBC, suggesting that this method has the potential to decisively identify patients with clinically relevant disease that requires excisional treatment. FUNDING: CRUK Early Detection Project award, Jordan-Singer BSCCP award, Addenbrooke's Charitable Trust, UK-MRC, Janssen Pharmaceuticals/Advanced Sterilisation Products, and NWO.


Subjects
Papillomavirus Infections, Precancerous Lesions, Uterine Cervical Dysplasia, Uterine Cervical Neoplasms, Cervix Uteri, Early Detection of Cancer/methods, Female, Humans, Papillomaviridae, Pilot Projects, Precancerous Lesions/diagnosis, Triage, Uterine Cervical Neoplasms/diagnosis, Uterine Cervical Neoplasms/pathology, Vaginal Smears/methods, Uterine Cervical Dysplasia/diagnosis
6.
J Synchrotron Radiat ; 29(Pt 1): 254-265, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-34985443

ABSTRACT

Tomographic algorithms are often compared by evaluating them on certain benchmark datasets. For fair comparison, these datasets should ideally (i) be challenging to reconstruct, (ii) be representative of typical tomographic experiments, (iii) be flexible to allow for different acquisition modes, and (iv) include enough samples to allow for comparison of data-driven algorithms. Current approaches often satisfy only some of these requirements, but not all. For example, real-world datasets are typically challenging and representative of a category of experimental examples, but are restricted to the acquisition mode that was used in the experiment and are often limited in the number of samples. Mathematical phantoms are often flexible and can sometimes produce enough samples for data-driven approaches, but can be relatively easy to reconstruct and are often not representative of typical scanned objects. In this paper, we present a family of foam-like mathematical phantoms that aims to satisfy all four requirements simultaneously. The phantoms consist of foam-like structures with more than 100000 features, making them challenging to reconstruct and representative of common tomography samples. Because the phantoms are computer-generated, varying acquisition modes and experimental conditions can be simulated. An effectively unlimited number of random variations of the phantoms can be generated, making them suitable for data-driven approaches. We give a formal mathematical definition of the foam-like phantoms, and explain how they can be generated and used in virtual tomographic experiments in a computationally efficient way. In addition, several 4D extensions of the 3D phantoms are given, enabling comparisons of algorithms for dynamic tomography. Finally, example phantoms and tomographic datasets are given, showing that the phantoms can be effectively used to make fair and informative comparisons between tomography algorithms.
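The generation procedure described above can be sketched in a few lines of NumPy (an illustrative toy, not the paper's generator): a foam-like 2D phantom is built by carving random, non-overlapping circular voids out of a solid disk, and changing the seed yields an effectively unlimited number of random variations.

```python
import numpy as np

def foam_phantom(size=256, n_voids=200, seed=0):
    """Toy 2D foam-like phantom: a solid disk with random voids."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size]
    cx = cy = (size - 1) / 2
    img = ((xx - cx) ** 2 + (yy - cy) ** 2 <= (0.45 * size) ** 2).astype(float)
    voids, attempts = [], 0
    while len(voids) < n_voids and attempts < 50 * n_voids:
        attempts += 1
        x, y = rng.uniform(0.1 * size, 0.9 * size, 2)
        r = rng.uniform(0.005 * size, 0.04 * size)
        # keep each void inside the disk ...
        if (x - cx) ** 2 + (y - cy) ** 2 > (0.45 * size - r) ** 2:
            continue
        # ... and non-overlapping with all previously placed voids
        if any((x - u) ** 2 + (y - v) ** 2 < (r + s) ** 2 for u, v, s in voids):
            continue
        voids.append((x, y, r))
        img[(xx - x) ** 2 + (yy - y) ** 2 <= r ** 2] = 0.0
    return img

phantom = foam_phantom()
```

The actual phantoms are 3D with over 100000 features; this sketch only conveys the rejection-sampling construction and the seed-driven variability.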


Subjects
Algorithms, X-Ray Computed Tomography, Imaging Phantoms
7.
Sci Rep ; 12(1): 893, 2022 Jan 18.
Article in English | MEDLINE | ID: mdl-35042961

ABSTRACT

In x-ray computed tomography (CT), the achievable image resolution is typically limited by several pre-fixed characteristics of the x-ray source and detector. Structuring the x-ray beam using a mask with alternating opaque and transmitting septa can overcome this limit. However, the use of a mask imposes an undersampling problem: to obtain complete datasets, significant lateral sample stepping is needed in addition to the sample rotation, resulting in high x-ray doses and long acquisition times. Cycloidal CT, an alternative scanning scheme by which the sample is rotated and translated simultaneously, can provide high aperture-driven resolution without sample stepping, resulting in a lower radiation dose and faster scans. However, cycloidal sinograms are incomplete and must be restored before tomographic images can be computed. In this work, we demonstrate that high-quality images can be reconstructed by applying the recently proposed Mixed Scale Dense (MS-D) convolutional neural network (CNN) to this task. We also propose a novel training approach by which training data are acquired as part of each scan, thus removing the need for large sets of pre-existing reference data, the acquisition of which is often not practicable or possible. We present results for both simulated datasets and real-world data, showing that the combination of cycloidal CT and machine learning-based data recovery can lead to accurate high-resolution images at a limited dose.
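The undersampling pattern at the heart of the abstract can be made concrete with a small sketch (an assumed, simplified geometry, not the authors' exact scheme): because the sample rotates and translates simultaneously, only one lateral mask position is visited per rotation angle, so the full (angle x lateral-step) measurement grid is sampled sparsely and the resulting sinogram is incomplete.

```python
import numpy as np

# Toy cycloidal sampling mask over the full measurement grid.
n_angles, n_steps = 180, 8          # rotation positions x lateral mask steps
mask = np.zeros((n_angles, n_steps), dtype=bool)
for a in range(n_angles):
    mask[a, a % n_steps] = True     # one lateral position per angle

coverage = mask.mean()              # fraction of the full grid measured
```

Here only 1/8 of the grid is measured; the missing entries are what the MS-D network is trained to restore.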

8.
Opt Express ; 29(24): 40494-40513, 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34809388

ABSTRACT

Tomography is a powerful tool for reconstructing the interior of an object from a series of projection images. Typically, the source and detector traverse a standard path (e.g., circular, helical). Recently, various techniques have emerged that use more complex acquisition geometries. Current software packages either require significant manual work or lack the flexibility to handle such geometries. Therefore, software is needed that can concisely represent, visualize, and compute reconstructions of complex acquisition geometries. We present tomosipo, a Python package that provides these capabilities in a concise and intuitive way. Case studies demonstrate the power and flexibility of tomosipo.
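To make "representing an acquisition geometry" concrete, here is a generic sketch (explicitly not tomosipo's actual API): a trajectory is just an array of per-projection source positions, and a circular or helical path differs only in how those positions are generated.

```python
import numpy as np

def circular_sources(n, radius=2.0):
    """Source positions for a circular trajectory in the z=0 plane."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([radius * np.cos(t), radius * np.sin(t),
                     np.zeros(n)], axis=1)

def helical_sources(n, radius=2.0, pitch=0.5, turns=3):
    """Source positions for a helical trajectory advancing along z."""
    t = np.linspace(0, 2 * np.pi * turns, n, endpoint=False)
    z = pitch * t / (2 * np.pi)
    return np.stack([radius * np.cos(t), radius * np.sin(t), z], axis=1)

circ = circular_sources(360)
helix = helical_sources(360)
```

Packages like tomosipo wrap this kind of explicit per-projection description (plus detector position and orientation) so that arbitrary geometries can be composed, visualized, and passed to reconstruction backends.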

9.
J Synchrotron Radiat ; 28(Pt 5): 1583-1597, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34475305

ABSTRACT

For fast reconstruction of large tomographic datasets, filtered backprojection-type or Fourier-based algorithms are still the method of choice, as they have been for decades. These robust and computationally efficient algorithms have been integrated in a broad range of software packages. The continuous mathematical formulas used for image reconstruction in such algorithms are unambiguous. However, variations in discretization and interpolation result in quantitative differences between reconstructed images, and corresponding segmentations, obtained from different software. This hinders reproducibility of experimental results, making it difficult to ensure that results and conclusions from experiments can be reproduced at different facilities or using different software. In this paper, a way to reduce such differences by optimizing the filter used in analytical algorithms is proposed. These filters can be computed using a wrapper routine around a black-box implementation of a reconstruction algorithm, and lead to quantitatively similar reconstructions. Use cases for this approach are demonstrated by computing implementation-adapted filters for several open-source implementations and applying them to simulated phantoms and real-world data acquired at the synchrotron. Our contribution to a reproducible reconstruction step forms a building block towards a fully reproducible synchrotron tomography data processing pipeline.
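The filter-optimization idea can be reduced to a toy example (1D periodic deconvolution rather than tomography, and not the paper's wrapper routine): treat "reconstruction" as convolution of the data with a filter, and solve a least-squares problem for the filter that best recovers a known training object through the forward model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = rng.random(n)                      # training "phantom"
psf = np.array([0.5, 0.3, 0.2])        # known blurring forward model

def blur(v):
    """Periodic (circular) convolution forward operator."""
    return np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(psf, n)))

b = blur(x)                            # measured data
# Reconstruction = periodic convolution of b with a filter h.
# Build the circulant system C(b) h = x and solve for h by least squares.
C = np.array([np.roll(b, k) for k in range(n)]).T
h, *_ = np.linalg.lstsq(C, x, rcond=None)
recon = np.real(np.fft.ifft(np.fft.fft(b) * np.fft.fft(h)))
```

The filter is computed once from training data and a black-box forward model, then applied at the cost of a direct method; the paper's filters play the analogous role inside real FBP-type implementations.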

10.
Nanoscale ; 13(28): 12242-12249, 2021 Jul 28.
Article in English | MEDLINE | ID: mdl-34241619

ABSTRACT

The combination of energy-dispersive X-ray spectroscopy (EDX) and electron tomography is a powerful approach to retrieve the 3D elemental distribution in nanomaterials, providing an unprecedented level of information for complex, multi-component systems, such as semiconductor devices, as well as catalytic and plasmonic nanoparticles. Unfortunately, the applicability of EDX tomography is severely limited because of extremely long acquisition times and high electron irradiation doses required to obtain 3D EDX reconstructions with an adequate signal-to-noise ratio. One possibility to address this limitation is intelligent denoising of experimental data using prior expectations about the objects of interest. Herein, this approach is followed using the deep learning methodology, which currently demonstrates state-of-the-art performance for an increasing number of data processing problems. Design choices for the denoising approach and training data are discussed with a focus on nanoparticle-like objects and extremely noisy signals typical for EDX experiments. Quantitative analysis of the proposed method demonstrates its significantly enhanced performance in comparison to classical denoising approaches. This allows for improving the tradeoff between the reconstruction quality, acquisition time and radiation dose for EDX tomography. The proposed method is therefore especially beneficial for the 3D EDX investigation of electron beam-sensitive materials and studies of nanoparticle transformations.

11.
Sci Rep ; 11(1): 11895, 2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34088936

ABSTRACT

Synchrotron X-ray tomography enables the examination of the internal structure of materials at submicron spatial resolution and subsecond temporal resolution. Unavoidable experimental constraints can impose dose and time limits on the measurements, introducing noise in the reconstructed images. Convolutional neural networks (CNNs) have emerged as a powerful tool to remove noise from reconstructed images. However, their training typically requires collecting a dataset of paired noisy and high-quality measurements, which is a major obstacle to their use in practice. To circumvent this problem, methods for CNN-based denoising have recently been proposed that require no separate training data beyond the already available noisy reconstructions. Among these, the Noise2Inverse method is specifically designed for tomography and related inverse problems. To date, applications of Noise2Inverse have only taken into account 2D spatial information. In this paper, we expand the application of Noise2Inverse in space, time, and spectrum-like domains. This development enhances applications to static and dynamic micro-tomography as well as X-ray diffraction tomography. Results on real-world datasets establish that Noise2Inverse is capable of accurate denoising and enables a substantial reduction in acquisition time while maintaining image quality.
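The self-supervised principle behind Noise2Inverse can be illustrated with a deliberately simplified stand-in (a 1D signal and a learned linear filter instead of reconstructions from split projection data and a CNN): two independent noisy realizations of the same signal serve as input and target, so no clean reference is ever needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
t = np.linspace(0, 4 * np.pi, n)
clean = np.sin(t)
x1 = clean + 0.3 * rng.standard_normal(n)   # e.g. even-angle subset
x2 = clean + 0.3 * rng.standard_normal(n)   # e.g. odd-angle subset

w = 9                                        # learn a length-9 filter
X = np.array([np.roll(x1, k - w // 2) for k in range(w)]).T
h, *_ = np.linalg.lstsq(X, x2, rcond=None)   # regression on a NOISY target
denoised = X @ h

mse_before = np.mean((x1 - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

Because the noise in the target is independent of the input, the fitted filter approximates the one that would have been learned against the clean signal, which is the same argument that justifies training on split reconstructions.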

12.
Elife ; 9, 2020 Dec 02.
Article in English | MEDLINE | ID: mdl-33264089

ABSTRACT

Using multiple human annotators and ensembles of trained networks can improve the performance of deep-learning methods in research.


Subjects
Computational Biology, Deep Learning, Humans, Reproducibility of Results
13.
J Synchrotron Radiat ; 27(Pt 5): 1339-1346, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32876609

ABSTRACT

Hard X-ray nanotomography enables 3D investigations of a wide range of samples with high resolution (<100 nm) with both synchrotron-based and laboratory-based setups. However, the advantage of synchrotron-based setups is the high flux, enabling time resolution, which cannot be achieved at laboratory sources. Here, the nanotomography setup at the imaging beamline P05 at PETRA III is presented, which offers high time resolution not only in absorption but for the first time also in Zernike phase contrast. Two test samples are used to evaluate the image quality in both contrast modalities based on the quantitative analysis of contrast-to-noise ratio (CNR) and spatial resolution. High-quality scans can be recorded in 15 min and fast scans down to 3 min are also possible without significant loss of image quality. At scan times well below 3 min, the CNR values decrease significantly and classical image-filtering techniques reach their limitation. A machine-learning approach shows promising results, enabling acquisition of a full tomography in only 6 s. Overall, the transmission X-ray microscopy instrument offers high temporal resolution in absorption and Zernike phase contrast, enabling in situ experiments at the beamline.
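A short example of the quantitative image-quality metric used above (one common definition of CNR, computed here on hypothetical pixel values rather than the beamline data): the contrast between a feature and its background, normalized by the background noise.

```python
import numpy as np

# CNR = |mean(feature) - mean(background)| / std(background)
rng = np.random.default_rng(42)
background = rng.normal(0.20, 0.05, 10_000)   # hypothetical pixel values
feature = rng.normal(0.60, 0.05, 10_000)

cnr = abs(feature.mean() - background.mean()) / background.std()
```

With these assumed values the CNR is about 8; shortening the scan time raises the noise (the denominator), which is why CNR drops significantly below a 3-minute scan time.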

14.
J Imaging ; 6(12), 2020 Dec 02.
Article in English | MEDLINE | ID: mdl-34460529

ABSTRACT

An important challenge in hyperspectral imaging tasks is to cope with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the spectral input data is constrained by the computational efficiency. We propose a novel supervised deep learning approach for combining data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods, and can be used in a wide range of problem settings. The integration of knowledge about the task allows for more image compression and higher accuracies compared to standard data reduction methods.

15.
J Imaging ; 6(12), 2020 Dec 08.
Article in English | MEDLINE | ID: mdl-34460532

ABSTRACT

Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part of industrial quality control, materials science and medical imaging. The need to acquire and process each scan in a short time naturally leads to trade-offs between speed and reconstruction quality, creating a need for fast reconstruction algorithms capable of creating accurate reconstructions from limited data. In this paper, we introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm. This algorithm adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency. Moreover, the NN-FDK algorithm is designed such that it has low training data requirements and is fast to train. This ensures that the proposed algorithm can be used to improve image quality in high-throughput CT scanning settings, where FDK is currently used to keep pace with the acquisition speed using readily available computational resources. We compare the NN-FDK algorithm to two standard CT reconstruction algorithms and to two popular deep neural networks trained to remove reconstruction artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK reconstruction algorithm is substantially faster in computing a reconstruction than all the tested alternative methods except for the standard FDK algorithm and we show it can compute accurate CCB CT reconstructions in cases of high noise, a low number of projection angles or large cone angles. Moreover, we show that the training time of an NN-FDK network is orders of magnitude lower than the considered deep neural networks, with only a slight reduction in reconstruction accuracy.

16.
Sci Rep ; 9(1): 18379, 2019 Dec 05.
Article in English | MEDLINE | ID: mdl-31804524

ABSTRACT

Tomographic X-ray microscopy beamlines at synchrotron light sources worldwide have pushed the achievable time-resolution for dynamic 3-dimensional structural investigations down to a fraction of a second, allowing the study of quickly evolving systems. The large data rates involved impose heavy demands on computational resources, making it difficult to readily process and interrogate the resulting volumes. The data acquisition is thus performed essentially blindly. Such a sequential process makes it hard to notice problems with the measurement protocol or sample conditions, potentially rendering the acquired data unusable, and it keeps the user from optimizing the experimental parameters of the imaging task at hand. We present an efficient approach to address this issue based on the real-time reconstruction, visualisation and on-the-fly analysis of a small number of arbitrarily oriented slices. This solution, requiring only a single additional computing workstation, has been implemented at the TOMCAT beamline of the Swiss Light Source. The system is able to process multiple sets of slices per second, thus pushing the reconstruction throughput to the same level as the data acquisition. This enables the monitoring of dynamic processes as they occur and represents the next crucial step towards adaptive feedback control of time-resolved in situ tomographic experiments.

17.
Med Phys ; 46(11): 5027-5035, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31463937

ABSTRACT

PURPOSE: In order to attain anatomical models, surgical guides and implants for computer-assisted surgery, accurate segmentation of bony structures in cone-beam computed tomography (CBCT) scans is required. However, this image segmentation step is often impeded by metal artifacts. Therefore, this study aimed to develop a mixed-scale dense convolutional neural network (MS-D network) for bone segmentation in CBCT scans affected by metal artifacts. METHOD: Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS-D network comprising 100 convolutional layers using far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS-D network was evaluated using a leave-2-out scheme and compared with a clinical snake evolution algorithm and two state-of-the-art CNN architectures (U-Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard. RESULTS: CBCT scans segmented using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 mm ± 0.13 mm, 0.43 mm ± 0.16 mm, 0.40 mm ± 0.12 mm and 0.57 mm ± 0.22 mm, respectively. In contrast to the MS-D network, the ResNet introduced wave-like artifacts in the STL models, whereas the U-Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae. 
CONCLUSION: The MS-D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.


Subjects
Artifacts, Cone-Beam Computed Tomography, Computer-Assisted Image Processing/methods, Metals, Neural Networks (Computer), Tooth/diagnostic imaging, Humans, Prostheses and Implants
18.
Proc Natl Acad Sci U S A ; 115(2): 254-259, 2018 Jan 09.
Article in English | MEDLINE | ID: mdl-29279403

ABSTRACT

Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data.
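The two ingredients named in the abstract can be sketched in plain NumPy (random weights and no training, so this only shows the wiring, not the published architecture or its performance): dilated convolutions that enlarge the receptive field, and dense connections that feed every previously computed feature map into each new one.

```python
import numpy as np

def dilated_conv2d(img, w, dilation):
    """3x3 dilated convolution with zero padding (single channel)."""
    d = dilation
    out = np.zeros_like(img)
    pad = np.pad(img, d)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * pad[i * d:i * d + img.shape[0],
                                 j * d:j * d + img.shape[1]]
    return out

def msd_forward(img, depth=4, seed=0):
    """Forward pass of a tiny mixed-scale dense network with random weights."""
    rng = np.random.default_rng(seed)
    maps = [img]
    for layer in range(depth):
        dilation = layer % 10 + 1            # cycle through dilations
        z = np.zeros_like(img)
        for m in maps:                        # densely connect ALL maps
            z += dilated_conv2d(m, 0.1 * rng.standard_normal((3, 3)),
                                dilation)
        maps.append(np.maximum(z, 0.0))       # ReLU
    return maps[-1]

out = msd_forward(np.ones((16, 16)))
```

Because each layer reuses all earlier feature maps, far fewer maps (and thus parameters) are needed per layer than in architectures that only see the previous layer's output.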


Subjects
Diagnostic Imaging/methods, Machine Learning, Theoretical Models, Neural Networks (Computer), Algorithms, Computer Simulation
19.
J Synchrotron Radiat ; 24(Pt 5): 1065-1077, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-28862630

ABSTRACT

Three-dimensional (3D) micro-tomography (µ-CT) has proven to be an important imaging modality in industry and scientific domains. Understanding the properties of material structure and behavior has produced many scientific advances. An important component of the 3D µ-CT pipeline is image partitioning (or image segmentation), a step that is used to separate various phases or components in an image. Image partitioning schemes require specific rules for different scientific fields, but a common strategy consists of devising metrics to quantify performance and accuracy. The present article proposes a set of protocols to systematically analyze and compare the results of unsupervised classification methods used for segmentation of synchrotron-based data. The proposed dataflow for Materials Segmentation and Metrics (MSM) provides 3D micro-tomography image segmentation algorithms, such as statistical region merging (SRM), k-means algorithm and parallel Markov random field (PMRF), while offering different metrics to evaluate segmentation quality, confidence and conformity with standards. Both experimental and synthetic data are assessed, illustrating quantitative results through the MSM dashboard, which can return sample information such as media porosity and permeability. The main contributions of this work are: (i) to deliver tools to improve material design and quality control; (ii) to provide datasets for benchmarking and reproducibility; (iii) to yield good practices in the absence of standards or ground-truth for ceramic composite analysis.

20.
Adv Struct Chem Imaging ; 2(1): 17, 2017.
Article in English | MEDLINE | ID: mdl-28003954

ABSTRACT

In advanced tomographic experiments, large detector sizes and large numbers of acquired datasets can make it difficult to process the data in a reasonable time. At the same time, the acquired projections are often limited in some way, for example having a low number of projections or a low signal-to-noise ratio. Direct analytical reconstruction methods are able to produce reconstructions in very little time, even for large-scale data, but the quality of these reconstructions can be insufficient for further analysis in cases with limited data. Iterative reconstruction methods typically produce more accurate reconstructions, but take significantly more time to compute, which limits their usefulness in practice. In this paper, we present the application of the SIRT-FBP method to large-scale real-world tomographic data. The SIRT-FBP method is able to accurately approximate the simultaneous iterative reconstruction technique (SIRT) method by the computationally efficient filtered backprojection (FBP) method, using precomputed experiment-specific filters. We specifically focus on the many implementation details that are important for application on large-scale real-world data, and give solutions to common problems that occur with experimental data. We show that SIRT-FBP filters can be computed in reasonable time, even for large problem sizes, and that precomputed filters can be reused for future experiments. Reconstruction results are given for three different experiments, and are compared with results of popular existing methods. The results show that the SIRT-FBP method is able to accurately approximate iterative reconstructions of experimental data. Furthermore, they show that, in practice, the SIRT-FBP method can produce more accurate reconstructions than standard direct analytical reconstructions with popular filters, without increasing the required computation time.
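The core observation that makes this approach possible can be verified on a toy problem (a small random matrix standing in for the CT projector, and Landweber iterations standing in for SIRT; this is not the paper's filter construction): a fixed number of such iterations starting from zero is a *linear* map of the data, so the whole iterative scheme can be precomputed once and then applied to new data at direct-method cost.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((20, 10))               # small stand-in for the projector
omega = 1.0 / np.linalg.norm(A, 2) ** 2

def iterate(b, n_iter=50):
    """SIRT-type (Landweber) iterations starting from zero."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (b - A @ x)
    return x

# Precompute the equivalent linear operator column by column,
# by running the iterations on each unit data vector.
M = np.array([iterate(e) for e in np.eye(A.shape[0])]).T

b_new = rng.random(20)                 # data from a "future experiment"
fast = M @ b_new                       # precomputed operator
slow = iterate(b_new)                  # full iterative run
```

In the paper this precomputed linear map is not stored as a dense matrix but expressed as an experiment-specific FBP filter, which is what keeps the application cost at FBP level for large-scale data.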
