Results 1 - 3 of 3
1.
J Microsc; 287(2): 81-92, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35638174

ABSTRACT

High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, which are holes in the bone a few micrometres in size. Imaging living animals at that resolution, however, is extremely challenging and requires sophisticated data processing to convert the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and, additionally, makes the entire pipeline deep learning compatible by ensuring differentiability. This allows fitting trainable modules both before and after the actual reconstruction step in a purely data-driven way, using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping correction module operating on the raw projection images, using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality: it avoids cupping artefacts and decreases the difference in grey values between outer and inner bone by 68-94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to explore modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel computed tomography systems. This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and, hence, is an important step towards the goal of reducing the resolution limit of in vivo bone imaging to the single-micrometre domain.


Subjects
Image Processing, Computer-Assisted; Microscopy; Animals; Calibration; Image Processing, Computer-Assisted/methods; Mice; Microscopy/methods; Retrospective Studies; X-Rays
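The self-supervised calibration idea above — tuning a correction module on the raw data purely by descending the gradient of a quality metric computed on the reconstruction — can be illustrated with a toy sketch. The quadratic bias model, the subtractive correction, and the variance-flatness metric below are assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

# Toy sketch: a flat object profile corrupted by a quadratic "cupping"
# bias. One hypothetical correction parameter (alpha) is calibrated by
# gradient descent on a self-supervised flatness metric, with no
# calibration measurement of the true bias.
x = np.linspace(-1.0, 1.0, 501)
true_cupping = 0.3
measured = 1.0 + true_cupping * x**2      # flat object + cupping artefact

def metric(alpha):
    corrected = measured - alpha * x**2   # correction module on "raw" data
    return np.var(corrected)              # self-supervised quality metric

alpha, lr, eps = 0.0, 2.0, 1e-5
for _ in range(200):
    # finite-difference gradient stands in for framework autodiff
    grad = (metric(alpha + eps) - metric(alpha - eps)) / (2 * eps)
    alpha -= lr * grad

print(round(alpha, 3))  # converges toward the true cupping strength 0.3
```

In the paper's setting the finite-difference step is unnecessary: because the whole pipeline is differentiable, a deep learning framework's autodiff supplies the same gradient end-to-end.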
2.
Adv Sci (Weinh); e2404728, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38924310

ABSTRACT

Gas marbles are a new family of particle-stabilized soft dispersed systems with a soap-bubble-like air-in-water-in-air structure. Herein, stimulus-responsive character is introduced to a gas marble system for the first time, using polymer particles carrying a poly(tertiary amine methacrylate) (pKa ≈ 7) steric stabilizer on their surfaces as the particulate stabilizer. The gas marbles exhibited long-term stability when transferred onto the planar surface of liquid water, provided that the solution pH of the subphase was basic or neutral. In contrast, acidic solutions led to immediate disintegration of the gas marbles and release of the inner gas. The critical minimum solution pH required for long-term gas marble stability correlates closely with the known pKa value of the poly(tertiary amine methacrylate) stabilizer. Amphibious motion of the gas marbles is also demonstrated.
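The reported pH switch around the stabilizer's pKa follows the standard acid-base equilibrium: below the pKa the tertiary amine groups are predominantly protonated (cationic, hydrophilic), above it predominantly neutral. A back-of-envelope Henderson-Hasselbalch calculation, assuming an ideal monoprotic equilibrium with pKa = 7, illustrates the sharpness of the transition:

```python
# Illustrative sketch (ideal monoprotic equilibrium assumed): fraction of
# protonated tertiary amine groups as a function of subphase pH.
def protonated_fraction(pH, pKa=7.0):
    # For B + H+ <-> BH+ : fraction(BH+) = 1 / (1 + 10**(pH - pKa))
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (4.0, 7.0, 10.0):
    print(pH, round(protonated_fraction(pH), 3))
# pH 4: ~99.9% protonated; pH 7: 50%; pH 10: ~0.1% protonated
```

This is consistent with the observed behaviour: in acidic subphases the stabilizer is almost fully charged and hydrophilic, destabilizing the particle shell, while at neutral-to-basic pH the neutral form keeps the marbles intact.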

3.
Med Phys; 49(8): 5107-5120, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35583171

ABSTRACT

BACKGROUND: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with strong bone-soft tissue contrast. However, image quality can be severely degraded in low-dose acquisitions, highlighting the importance of effective denoising algorithms.
PURPOSE: Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them hard to interpret and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity.
METHODS: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by computing gradients with respect to its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains, such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design.
RESULTS: Although using only three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on X-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets.
CONCLUSIONS: Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are ensured at all times in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.


Subjects
Algorithms; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods
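The constrained, few-parameter design described above can be made concrete with a minimal 1-D bilateral filter. The paper's layer uses three spatial sigmas (one per volume axis) plus one intensity-range sigma; this sketch uses a single spatial sigma to show the same mechanism, with hypothetical default values:

```python
import numpy as np

# Minimal 1-D bilateral filter: weights are the product of a fixed spatial
# Gaussian and a per-pixel intensity-range Gaussian, so smoothing stops at
# large intensity jumps (edges). Parameter values here are illustrative.
def bilateral_1d(signal, sigma_spatial=2.0, sigma_range=0.2, radius=6):
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-offsets**2 / (2 * sigma_spatial**2))
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        range_w = np.exp(-(window - signal[i])**2 / (2 * sigma_range**2))
        w = spatial_w * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Edge-preserving behaviour: noise on the flat regions is smoothed while
# the step edge survives.
step = np.concatenate([np.zeros(50), np.ones(50)])
noisy = step + np.random.default_rng(1).normal(0.0, 0.05, 100)
denoised = bilateral_1d(noisy)
```

Because both Gaussians are smooth in their sigmas, gradients with respect to the (here two, in the paper four) parameters are well defined, which is what allows the layer to be trained inside a deep learning pipeline while remaining a bilateral filter by construction.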