1.
Histochem Cell Biol ; 160(3): 253-276, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37284846

ABSTRACT

Public participation in research, also known as citizen science, is being increasingly adopted for the analysis of biological volumetric data. Researchers working in this domain are applying online citizen science as a scalable distributed data analysis approach, with recent research demonstrating that non-experts can productively contribute to tasks such as the segmentation of organelles in volume electron microscopy data. This, alongside the growing challenge to rapidly process the large amounts of biological volumetric data now routinely produced, means there is increasing interest within the research community to apply online citizen science for the analysis of data in this context. Here, we synthesise core methodological principles and practices for applying citizen science for analysis of biological volumetric data. We collate and share the knowledge and experience of multiple research teams who have applied online citizen science for the analysis of volumetric biological data using the Zooniverse platform (www.zooniverse.org). We hope this provides inspiration and practical guidance regarding how contributor effort via online citizen science may be usefully applied in this domain.


Subject(s)
Citizen Science; Humans; Community Participation
2.
J Synchrotron Radiat ; 28(Pt 3): 889-901, 2021 May 01.
Article in English | MEDLINE | ID: mdl-33949996

ABSTRACT

In this paper a practical solution is provided for the reconstruction and segmentation of low-contrast X-ray tomographic data of protein crystals from the long-wavelength macromolecular crystallography beamline I23 at Diamond Light Source. The resulting segmented data provide the path lengths through both diffracting and non-diffracting materials as the basis for analytical absorption corrections for X-ray diffraction data taken in the same sample environment ahead of the tomography experiment. X-ray tomography data from protein crystals can be difficult to analyse due to very low or absent contrast between the different materials: the crystal, the sample holder and the surrounding mother liquor. The proposed data processing pipeline consists of two major sequential operations: model-based iterative reconstruction to improve contrast and minimize the influence of noise and artefacts, followed by segmentation. The segmentation aims to partition the reconstructed data into four phases: the crystal, mother liquor, loop and vacuum. In this study three semi-automated segmentation methods are evaluated: Gaussian mixture models, geodesic distance thresholding, and a novel morphological method, RegionGrow, implemented specifically for the task. The complete reconstruction-segmentation pipeline is integrated into the MPI-based data analysis and reconstruction framework Savu, which reduces computation time through parallelization across a computing cluster and makes the developed methods easily accessible.
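The Gaussian-mixture route mentioned above can be illustrated with a toy one-dimensional version: grey values drawn from two phases are modelled as a two-component mixture fitted by expectation-maximisation. This is a minimal sketch under simplifying assumptions (1D data, exactly two components, naive median-split initialisation), not the pipeline's actual implementation; the function names are illustrative.

```python
import math

def _gauss(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def _std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def gmm_em_1d(values, iters=50):
    """Fit a two-component 1D Gaussian mixture to grey values with EM.

    Returns (means, stds, weights). Initialisation splits the sorted
    data at its median so the two components start on either side.
    """
    values = sorted(values)
    mid = len(values) // 2
    mu = [sum(values[:mid]) / mid, sum(values[mid:]) / (len(values) - mid)]
    sigma = [max(1e-6, _std(values[:mid])), max(1e-6, _std(values[mid:]))]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each value
        resp = []
        for x in values:
            p = [w[k] * _gauss(x, mu[k], sigma[k]) for k in (0, 1)]
            s = sum(p) or 1e-12
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities
        for k in (0, 1):
            nk = max(sum(r[k] for r in resp), 1e-12)
            mu[k] = sum(r[k] * x for r, x in zip(resp, values)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, values)) / nk
            sigma[k] = max(1e-6, math.sqrt(var))
            w[k] = nk / len(values)
    return mu, sigma, w
```

Voxels would then be assigned to the phase whose component gives them the highest responsibility; a real segmentation of four phases would use four components over 3D intensity data.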

3.
J Synchrotron Radiat ; 28(Pt 6): 1985-1995, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34738954

ABSTRACT

The Dual Imaging and Diffraction (DIAD) beamline at Diamond Light Source is a new dual-beam instrument for full-field imaging/tomography and powder diffraction. This instrument provides the user community with the capability to dynamically image 2D and 3D complex structures and perform phase identification and/or strain mapping using micro-diffraction. The aim is to enable in situ and in operando experiments that require spatially correlated results from both techniques, by providing measurements from the same specimen location quasi-simultaneously. Using an unusual optical layout, DIAD has two independent beams originating from one source that operate in the medium energy range (7-38 keV) and are combined at one sample position. Here, either radiography or tomography can be performed using monochromatic or pink beam, with a 1.4 mm × 1.2 mm field of view and a feature resolution of 1.2 µm. Micro-diffraction is possible with a variable beam size between 13 µm × 4 µm and 50 µm × 50 µm. One key functionality of the beamline is image-guided diffraction, a setup in which the micro-diffraction beam can be scanned over the complete area of the imaging field of view. This moving-beam setup enables the collection of location-specific information about the phase composition and/or strains at any given position within the image/tomography field of view. The dual-beam design allows fast switching between imaging and diffraction modes without the need for complicated and time-consuming reconfiguration. Real-time selection of areas of interest for diffraction measurements is possible, as is the simultaneous collection of imaging and diffraction data from (irreversible) in situ and in operando experiments.

4.
J Synchrotron Radiat ; 26(Pt 3): 839-853, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31074449

ABSTRACT

X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture events of interest in a time-series at sufficient time resolution, compelling researchers to collect a low number of projections for each tomogram in order to achieve the desired 'frame rate'. It is common practice to collect, before or after the time-critical portion of the experiment, a representative tomogram with many projections; this aids the analysis process without detrimentally affecting the time-series. In this paper these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using learned behaviour from a dataset containing a high number of projections, taken of the same sample at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as downstream it enables, for example, easier segmentation of the tomograms in areas of interest.
The method itself comprises a convolutional neural network which, through training, learns an end-to-end mapping between sinograms with low and high numbers of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corrosion from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
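As a point of reference for what the learned upscaling improves on, θ-axis upsampling by plain linear interpolation between neighbouring projections can be sketched as follows. This is the naive baseline the abstract contrasts against, not the UDNN itself; the helper name is hypothetical.

```python
def upscale_sinogram(sinogram, factor=2):
    """Insert linearly interpolated projections between measured ones.

    `sinogram` is a list of rows, one row of detector values per
    projection angle. Returns a new sinogram with (factor - 1)
    interpolated rows between each adjacent pair of measured rows.
    """
    out = []
    for a, b in zip(sinogram, sinogram[1:]):
        out.append(a)
        for step in range(1, factor):
            t = step / factor  # fractional position between angles
            out.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    out.append(sinogram[-1])
    return out
```

A learned approach replaces the per-pair linear blend with a mapping trained on the highly sampled tomogram of the same sample.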

5.
J Synchrotron Radiat ; 25(Pt 4): 998-1009, 2018 Jul 01.
Article in English | MEDLINE | ID: mdl-29979161

ABSTRACT

This manuscript presents the current status and technical details of the Spectroscopy Village at Diamond Light Source. The Village is formed of four beamlines: I18, B18, I20-Scanning and I20-EDE. The village provides the UK community with local access to a hard X-ray microprobe, a quick-scanning multi-purpose XAS beamline, a high-intensity beamline for X-ray absorption spectroscopy of dilute samples and X-ray emission spectroscopy, and an energy-dispersive extended X-ray absorption fine-structure beamline. The optics of B18, I20-scanning and I20-EDE are detailed; moreover, recent developments on the four beamlines, including new detector hardware and changes in acquisition software, are described.

6.
J Struct Biol ; 198(1): 43-53, 2017 04.
Article in English | MEDLINE | ID: mdl-28246039

ABSTRACT

Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.


Subject(s)
Datasets as Topic; Software; Algorithms; Data Curation/methods; Machine Learning
7.
J Synchrotron Radiat ; 24(Pt 1): 248-256, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28009564

ABSTRACT

With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big-data processing framework able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

8.
J Synchrotron Radiat ; 22(3): 828-38, 2015 May.
Article in English | MEDLINE | ID: mdl-25931103

ABSTRACT

I12 is the Joint Engineering, Environmental and Processing (JEEP) beamline, constructed during Phase II of the Diamond Light Source. I12 is located on a short (5 m) straight section of the Diamond storage ring and uses a 4.2 T superconducting wiggler to provide polychromatic and monochromatic X-rays in the energy range 50-150 keV. The beam energy enables good penetration through large or dense samples, combined with a large beam size (1 mrad horizontally × 0.3 mrad vertically). The beam characteristics permit the study of materials and processes inside environmental chambers without unacceptable attenuation of the beam and without the need to use sample sizes which are atypically small for the process under study. X-ray techniques available to users are radiography, tomography, energy-dispersive diffraction, monochromatic and white-beam two-dimensional diffraction/scattering and small-angle X-ray scattering. Since commencing operations in November 2009, I12 has established a broad user community in materials science and processing, chemical processing, biomedical engineering, civil engineering, environmental science, palaeontology and physics.


Subject(s)
Crystallography, X-Ray/instrumentation; Lasers; Particle Accelerators/instrumentation; Spectrometry, X-Ray/instrumentation; X-Rays; Energy Transfer; Equipment Design; Equipment Failure Analysis; Lighting/instrumentation; United Kingdom
9.
J Synchrotron Radiat ; 22(3): 853-8, 2015 May.
Article in English | MEDLINE | ID: mdl-25931106

ABSTRACT

Synchrotron light source facilities worldwide generate terabytes of data in numerous incompatible data formats from a wide range of experiment types. The Data Analysis WorkbeNch (DAWN) was developed to address the challenge of providing a single visualization and analysis platform for data from any synchrotron experiment (including single-crystal and powder diffraction, tomography and spectroscopy), whilst also being sufficiently extensible for new specific use case analysis environments to be incorporated (e.g. ARPES, PEEM). In this work, the history and current state of DAWN are presented, with two case studies to demonstrate specific functionality. The first is an example of a data processing and reduction problem using the generic tools, whilst the second shows how these tools can be targeted to a specific scientific area.

10.
Philos Trans A Math Phys Eng Sci ; 373(2043)2015 Jun 13.
Article in English | MEDLINE | ID: mdl-25939626

ABSTRACT

Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an 'orthogonal' fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and 'facility-independent': it can run on standard cluster infrastructure at any institution.
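The modular, plugin-chain design described above can be sketched in a few lines: a pipeline holds an ordered list of plugins, each a callable that takes and returns the data. The plugin names below are placeholders for illustration, not Savu's actual modules.

```python
class Pipeline:
    """Toy plugin chain: plugins are callables applied in order."""

    def __init__(self):
        self.plugins = []

    def add(self, plugin):
        self.plugins.append(plugin)
        return self  # allow chaining of .add() calls

    def run(self, data):
        for plugin in self.plugins:
            data = plugin(data)
        return data

# Hypothetical processing steps standing in for real modules
# (e.g. correction, artefact removal, reconstruction).
def flat_field_correct(frame):
    return [x - 1 for x in frame]

def reconstruct(frame):
    return [x * 2 for x in frame]

pipe = Pipeline().add(flat_field_correct).add(reconstruct)
```

In a real framework each plugin would also declare whether it operates in projection or sinogram space, allowing the pipeline to convert the data 'orthogonally' between steps as Savu does.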

11.
Immunoinformatics (Amst) ; 13: None, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38525047

ABSTRACT

The vast potential sequence diversity of TCRs and their ligands has presented an historic barrier to computational prediction of TCR epitope specificity, a holy grail of quantitative immunology. One common approach is to cluster sequences together, on the assumption that similar receptors bind similar epitopes. Here, we provide the first independent evaluation of widely used clustering algorithms for TCR specificity inference, observing some variability in predictive performance between models, and marked differences in scalability. Despite these differences, we find that different algorithms produce clusters with high degrees of similarity for receptors recognising the same epitope. Our analysis strengthens the case for use of clustering models to identify signals of common specificity from large repertoires, whilst highlighting scope for improvement of complex models over simple comparators.
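The core assumption behind such clustering models, that similar receptors bind similar epitopes, can be illustrated with a deliberately simple greedy clustering of CDR3-like strings by Hamming distance. The evaluated algorithms are considerably more sophisticated; this sketch only shows the shape of the problem, and all names are illustrative.

```python
def hamming(a, b):
    """Substitution distance; unequal-length sequences never cluster here."""
    if len(a) != len(b):
        return max(len(a), len(b))
    return sum(x != y for x, y in zip(a, b))

def greedy_cluster(seqs, max_dist=1):
    """Assign each sequence to the first cluster whose representative
    (its first member) is within max_dist substitutions, else start
    a new cluster."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if hamming(c[0], s) <= max_dist:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters
```

Under the similarity assumption, sequences landing in the same cluster are predicted to share epitope specificity.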

12.
Acta Crystallogr D Struct Biol ; 80(Pt 6): 421-438, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38829361

ABSTRACT

For cryo-electron tomography (cryo-ET) of beam-sensitive biological specimens, a planar sample geometry is typically used. As the sample is tilted, the effective thickness of the sample along the direction of the electron beam increases and the signal-to-noise ratio concomitantly decreases, limiting the transfer of information at high tilt angles. In addition, the tilt range where data can be collected is limited by a combination of various sample-environment constraints, including the limited space in the objective lens pole piece and the possible use of fixed conductive braids to cool the specimen. Consequently, most tilt series are limited to a maximum of ±70°, leading to the presence of a missing wedge in Fourier space. The acquisition of cryo-ET data without a missing wedge, for example using a cylindrical sample geometry, is hence attractive for volumetric analysis of low-symmetry structures such as organelles or vesicles, lysis events, pore formation or filaments for which the missing information cannot be compensated by averaging techniques. Irrespective of the geometry, electron-beam damage to the specimen is an issue and the first images acquired will transfer more high-resolution information than those acquired last. There is also an inherent trade-off between higher sampling in Fourier space and avoiding beam damage to the sample. Finally, the necessity of using a sufficient electron fluence to align the tilt images means that this fluence needs to be fractionated across a small number of images; therefore, the order of data acquisition is also a factor to consider. Here, an n-helix tilt scheme is described and simulated which uses overlapping and interleaved tilt series to maximize the use of a pillar geometry, allowing the entire pillar volume to be reconstructed as a single unit. 
Three related tilt schemes are also evaluated that extend the continuous and classic dose-symmetric tilt schemes for cryo-ET to pillar samples to enable the collection of isotropic information across all spatial frequencies. A fourfold dose-symmetric scheme is proposed which provides a practical compromise between uniform information transfer and complexity of data acquisition.
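A simplified version of a classic dose-symmetric acquisition order (alternating sides with increasing tilt, so the earliest, least-damaged exposures cover the low tilts) can be generated as follows. The grouped and fourfold variants discussed above interleave these angles differently; this sketch shows only the basic symmetric ordering.

```python
def dose_symmetric_order(max_tilt=60, step=3):
    """Return tilt angles in acquisition order: 0, +step, -step,
    +2*step, -2*step, ... out to +/- max_tilt."""
    order = [0]
    tilt = step
    while tilt <= max_tilt:
        order += [tilt, -tilt]
        tilt += step
    return order
```

For a pillar sample the same ordering can, in principle, be extended past ±90° since there is no planar-geometry thickness penalty at high tilt.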


Subject(s)
Cryoelectron Microscopy; Electron Microscope Tomography; Electron Microscope Tomography/methods; Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/methods; Fourier Analysis; Signal-To-Noise Ratio
13.
Ultramicroscopy ; 256: 113882, 2024 02.
Article in English | MEDLINE | ID: mdl-37979542

ABSTRACT

Simulations of cryo-electron microscopy (cryo-EM) images of biological samples can be used to produce test datasets to support the development of instrumentation, methods, and software, as well as to assess data acquisition and analysis strategies. To be useful, these simulations need to be based on physically realistic models which include large volumes of amorphous ice. The gold-standard model for EM image simulation is a physical atom-based ice model produced using molecular dynamics simulations. Although practical for small sample volumes, this approach can be too computationally expensive for simulating cryo-EM data from large sample volumes. We have evaluated a Gaussian Random Field (GRF) ice model which is shown to be more computationally efficient for large sample volumes. The simulated EM images are compared with those from the gold-standard atom-based ice model and shown to be directly comparable. Comparison with experimentally acquired data shows that the Gaussian random field ice model produces realistic simulations. The software required has been implemented in the Parakeet software package and the underlying atomic models are available online for use by the wider community.
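The Gaussian-random-field idea, correlated density fluctuations obtained by smoothing white noise, can be illustrated in one dimension. The Parakeet implementation works in 3D with a physically motivated correlation structure; this toy shows only the core construction, and all parameters are illustrative.

```python
import math
import random

def gaussian_random_field_1d(n, sigma=3.0, seed=0):
    """1D toy Gaussian random field: white noise convolved with a
    Gaussian kernel, giving spatially correlated 'density' values.
    Periodic boundary conditions keep the output length equal to n."""
    rng = random.Random(seed)
    noise = [rng.gauss(0, 1) for _ in range(n)]
    half = int(3 * sigma)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    field = []
    for i in range(n):
        acc = 0.0
        for j, kv in enumerate(kernel):
            acc += kv * noise[(i + j - half) % n]
        field.append(acc)
    return field
```

Smoothing trades point-to-point variance for spatial correlation, which is what makes the field a plausible stand-in for amorphous ice density at a fraction of the cost of a molecular dynamics model.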


Subject(s)
Ice; Software; Cryoelectron Microscopy/methods; Molecular Dynamics Simulation
14.
J Appl Crystallogr ; 57(Pt 3): 649-658, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38846772

ABSTRACT

Processing of single-crystal X-ray diffraction data from area detectors can be separated into two steps. First, raw intensities are obtained by integration of the diffraction images, and then data correction and reduction are performed to determine structure-factor amplitudes and their uncertainties. The second step considers the diffraction geometry, sample illumination, decay, absorption and other effects. While absorption is only a minor effect in standard macromolecular crystallography (MX), it can become the largest source of uncertainty for experiments performed at long wavelengths. Current software packages for MX typically employ empirical models to correct for the effects of absorption, with the corrections determined through the procedure of minimizing the differences in intensities between symmetry-equivalent reflections; these models are well suited to capturing smoothly varying experimental effects. However, for very long wavelengths, empirical methods become an unreliable approach to model strong absorption effects with high fidelity. This problem is particularly acute when data multiplicity is low. This paper presents an analytical absorption correction strategy (implemented in new software AnACor) based on a volumetric model of the sample derived from X-ray tomography. Individual path lengths through the different sample materials for all reflections are determined by a ray-tracing method. Several approaches for absorption corrections (spherical harmonics correction, analytical absorption correction and a combination of the two) are compared for two samples, the membrane protein OmpK36 GD, measured at a wavelength of λ = 3.54 Å, and chlorite dismutase, measured at λ = 4.13 Å. Data set statistics, the peak heights in the anomalous difference Fourier maps and the success of experimental phasing are used to compare the results from the different absorption correction approaches. 
The strategies using the new analytical absorption correction are shown to be superior to the standard spherical harmonics corrections. While the improvements are modest in the 3.54 Å data, the analytical absorption correction outperforms spherical harmonics in the longer-wavelength data (λ = 4.13 Å), which is also reflected in the reduced amount of data required for successful experimental phasing.
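The analytical correction rests on the Beer-Lambert law applied to the per-material path lengths that the ray tracing produces for each reflection. A minimal sketch under that assumption follows; the material names, coefficients, and function names are illustrative, not AnACor's API.

```python
import math

def transmission(path_lengths, mu):
    """Beer-Lambert transmission along a multi-material path.

    path_lengths: {material: path length in mm} from ray tracing
    mu: {material: linear attenuation coefficient in mm^-1}
    """
    return math.exp(-sum(mu[m] * l for m, l in path_lengths.items()))

def correct_intensity(measured, path_lengths, mu):
    """Scale a measured reflection intensity back up by the
    absorption factor along its diffracted path."""
    return measured / transmission(path_lengths, mu)
```

In the real workflow the path lengths come from tracing each reflection's incident and diffracted rays through the tomography-derived volumetric model of crystal, loop, and mother liquor.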

15.
Acta Crystallogr D Biol Crystallogr ; 69(Pt 7): 1252-9, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23793151

ABSTRACT

The focus in macromolecular crystallography is moving towards even more challenging target proteins that often crystallize on much smaller scales and are frequently mounted in opaque or highly refractive materials. It is therefore essential that X-ray beamline technology develops in parallel to accommodate such difficult samples. In this paper, the use of X-ray microradiography and microtomography is reported as a tool for crystal visualization, location and characterization on the macromolecular crystallography beamlines at the Diamond Light Source. The technique is particularly useful for microcrystals and for crystals mounted in opaque materials such as lipid cubic phase. X-ray diffraction raster scanning can be used in combination with radiography to allow informed decision-making at the beamline prior to diffraction data collection. It is demonstrated that the X-ray dose required for a full tomography measurement is similar to that for a diffraction grid-scan, but for sample location and shape estimation alone just a few radiographic projections may be required.


Subject(s)
Bacteriorhodopsins/chemistry; Image Processing, Computer-Assisted; Lipids/chemistry; Microradiography; Nitrite Reductases/chemistry; Receptor, Adenosine A2A/chemistry; Tomography, X-Ray Computed; Algorithms; Crystallography, X-Ray; Data Interpretation, Statistical; Automatic Data Processing; Humans; Software
16.
Opt Express ; 21(5): 5463-74, 2013 Mar 11.
Article in English | MEDLINE | ID: mdl-23482117

ABSTRACT

This paper reports on the first user/application-driven multi-technology optical sub-wavelength network for intra/inter Data-Centre (DC) communications. Two DCs each with distinct sub-wavelength switching technologies, frame based synchronous TSON and packet based asynchronous OPST are interconnected by a WSON inter-DC communication. The intra/inter DC testbed demonstrates ultra-low latency (packet-delay <270 µs and packet-delay-variation (PDV)<10 µs) flexible data-rate traffic transfer by point-to-point, point-to-multipoint, and multipoint-to-(multi)point connectivity, highly suitable for cloud based applications and high performance computing (HPC). The extended GMPLS-PCE-SLAE based control-plane enables innovative application-driven end-to-end sub-wavelength path setup and resource reservation across the multi technology data-plane, which has been assessed for as many as 25 concurrent requests.

17.
Nat Rev Immunol ; 23(8): 511-521, 2023 08.
Article in English | MEDLINE | ID: mdl-36755161

ABSTRACT

Recent advances in machine learning and experimental biology have offered breakthrough solutions to problems such as protein structure prediction that were long thought to be intractable. However, despite the pivotal role of the T cell receptor (TCR) in orchestrating cellular immunity in health and disease, computational reconstruction of a reliable map from a TCR to its cognate antigens remains a holy grail of systems immunology. Current data sets are limited to a negligible fraction of the universe of possible TCR-ligand pairs, and performance of state-of-the-art predictive models wanes when applied beyond these known binders. In this Perspective article, we make the case for renewed and coordinated interdisciplinary effort to tackle the problem of predicting TCR-antigen specificity. We set out the general requirements of predictive models of antigen binding, highlight critical challenges and discuss how recent advances in digital biology such as single-cell technology and machine learning may provide possible solutions. Finally, we describe how predicting TCR specificity might contribute to our understanding of the broader puzzle of antigen immunogenicity.


Subject(s)
Antigens; Receptors, Antigen, T-Cell; Humans; T-Cell Antigen Receptor Specificity; Machine Learning; Biology
18.
Biol Imaging ; 3: e10, 2023.
Article in English | MEDLINE | ID: mdl-38487693

ABSTRACT

Electron cryo-tomography is an imaging technique for probing 3D structures at the nanometer scale. This technique has been used extensively in the biomedical field to study the complex structures of proteins and other macromolecules. With the advancement in technology, microscopes are currently capable of producing images amounting to terabytes of data per day, posing great challenges for scientists, as the speed of image processing cannot keep up with the ever-higher throughput of the microscopes. Therefore, automation is an essential and natural pathway along which image processing (from individual micrographs to full tomograms) is developing. In this paper, we present Ot2Rec, an open-source pipelining tool which aims to enable scientists to build their own processing workflows in a flexible and automatic manner. The basic building blocks of Ot2Rec are plugins which follow a unified application programming interface structure, making it simple for scientists to contribute to Ot2Rec by adding features which are not already available. We also present three case studies of image processing using Ot2Rec, through which we demonstrate the speedup of a semi-automatic workflow over a manual one, the possibility of writing and using custom (prototype) plugins, and the flexibility of Ot2Rec, which enables the mix-and-match of plugins. We also demonstrate, in the Supplementary Material, a built-in reporting feature in Ot2Rec which aggregates the metadata from all processes being run and outputs them in Jupyter Notebook and/or HTML formats for quick review of image processing quality. Ot2Rec can be found at https://github.com/rosalindfranklininstitute/ot2rec.

19.
Biol Imaging ; 3: e9, 2023.
Article in English | MEDLINE | ID: mdl-38487692

ABSTRACT

An emergent volume electron microscopy technique called cryogenic serial plasma focused ion beam milling scanning electron microscopy (pFIB/SEM) can decipher complex biological structures by building a three-dimensional picture of biological samples at mesoscale resolution. This is achieved by collecting consecutive SEM images after successive rounds of FIB milling that expose a new surface after each milling step. Due to instrumental limitations, some image processing is necessary before 3D visualization and analysis of the data is possible. SEM images are affected by noise, drift, and charging effects that can make precise 3D reconstruction of biological features difficult. This article presents Okapi-EM, an open-source napari plugin developed to process and analyze cryogenic serial pFIB/SEM images. Okapi-EM enables automated image registration of slices, evaluation of image quality metrics specific to pFIB/SEM imaging, and mitigation of charging artifacts. Implementation of Okapi-EM within the napari framework ensures that the tools are both user- and developer-friendly, through provision of a graphical user interface and access to Python programming.
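Slice-to-slice registration of the kind described above reduces, at its core, to finding the shift that maximises similarity between consecutive images. A one-dimensional integer-shift toy makes the idea concrete; it is not Okapi-EM's actual algorithm, and the function name is illustrative.

```python
def best_shift(ref, mov, max_shift=5):
    """Estimate the integer shift aligning `mov` to `ref` by
    maximising overlap correlation, a crude stand-in for the
    cross-correlation a real slice aligner computes in 2D."""
    def score(shift):
        pairs = [(ref[i], mov[i - shift]) for i in range(len(ref))
                 if 0 <= i - shift < len(mov)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)
```

Applying the estimated shift to each slice in turn removes the cumulative drift between milling steps before the 3D volume is assembled.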

20.
Elife ; 122023 02 21.
Article in English | MEDLINE | ID: mdl-36805107

ABSTRACT

Serial focussed ion beam scanning electron microscopy (FIB/SEM) enables imaging and assessment of subcellular structures on the mesoscale (10 nm to 10 µm). When applied to vitrified samples, serial FIB/SEM is also a means to target specific structures in cells and tissues while maintaining constituents' hydration shells for in situ structural biology downstream. However, the application of serial FIB/SEM imaging of non-stained cryogenic biological samples is limited due to low contrast, curtaining, and charging artefacts. We address these challenges using a cryogenic plasma FIB/SEM. We evaluated the choice of plasma ion source and imaging regimes to produce high-quality SEM images of a range of different biological samples. Using an automated workflow we produced three-dimensional volumes of bacteria, human cells, and tissue, and calculated estimates for their resolution, typically achieving 20-50 nm. Additionally, a tag-free localisation tool for regions of interest is needed to drive the application of in situ structural biology towards tissue. The combination of serial FIB/SEM with plasma-based ion sources promises a framework for targeting specific features in bulk-frozen samples (>100 µm) to produce lamellae for cryogenic electron tomography.


Subject(s)
Electron Microscope Tomography; Imaging, Three-Dimensional; Humans; Microscopy, Electron, Scanning; Electron Microscope Tomography/methods; Ions; Imaging, Three-Dimensional/methods