ABSTRACT
Quantitative susceptibility mapping (QSM) utilizes the relationship between the measured local field and the unknown susceptibility map to perform dipole deconvolution. The aim of this work is to introduce and systematically evaluate a model resolution-based deconvolution for improved estimation of the susceptibility map obtained using thresholded k-space division (TKD). A two-step approach is proposed, wherein the first step computes the TKD susceptibility map and the second step corrects this map using the model-resolution matrix. The TKD-estimated susceptibility map can be expressed as a weighted average of the true susceptibility map, with the weights determined by the rows of the model-resolution matrix; hence, deconvolving the TKD susceptibility map with the model-resolution matrix yields a better approximation to the true susceptibility map. The model resolution-based deconvolution is realized using closed-form, iterative, and sparsity-regularized implementations. The proposed approach was compared with L2 regularization, TKD, rescaled TKD in superfast dipole inversion, the modulated closed-form method, iterative dipole inversion, and sparsity-regularized dipole inversion. The proposed approach showed a substantial reduction in streaking artifacts across the 94 test volumes considered in this study, along with better error reduction and edge preservation than the other approaches. The proposed model resolution-based deconvolution compensates for the thresholding of near-zero dipole-kernel coefficients around the magic angle and hence provides a closer approximation to the true susceptibility map than other direct methods.
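For illustration, a minimal numpy sketch of the first (TKD) step and of the k-space form of the model-resolution weights is given below. The threshold value, the function names, and the handling of the second (deconvolution) step are illustrative assumptions, not the paper's exact implementation, which realizes the deconvolution in closed-form, iterative, and sparsity-regularized variants.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 in k-space (B0 along z)."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0
    return D

def tkd_and_resolution_weights(local_field, threshold=0.19, voxel_size=(1.0, 1.0, 1.0)):
    """Step 1 of the two-step scheme: TKD susceptibility map.  Also returns the
    diagonal (k-space) model-resolution weights R(k) = D(k)/D_thr(k); step 2 of
    the proposed method deconvolves chi_tkd with these weights (closed-form,
    iterative, or sparsity-regularized in the paper; not reproduced here)."""
    D = dipole_kernel(local_field.shape, voxel_size)
    D_thr = np.where(np.abs(D) < threshold, threshold * np.sign(D), D)
    D_thr[D_thr == 0] = threshold  # keep the k-space division well defined
    chi_tkd = np.real(np.fft.ifftn(np.fft.fftn(local_field) / D_thr))
    R = D / D_thr  # the TKD model-resolution matrix is diagonal in k-space
    return chi_tkd, R
```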
Subject(s)
Algorithms; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Brain; Brain Mapping/methods; Image Processing, Computer-Assisted/methods
ABSTRACT
Quantitative Susceptibility Mapping (QSM) is an advanced magnetic resonance imaging (MRI) technique to quantify the magnetic susceptibility of the tissue under investigation. Deep learning methods have shown promising results in deconvolving the susceptibility distribution from the measured local field obtained from the MR phase. Although existing deep learning-based QSM methods can produce high-quality reconstructions, they are highly biased toward the training data distribution, limiting their generalizability. This work proposes a hybrid two-step reconstruction approach to improve deep learning-based QSM reconstruction. The susceptibility map predicted by the deep learning method is refined within the developed framework to ensure consistency with the measured local field. The developed method was validated on existing deep learning and model-based deep learning methods for susceptibility mapping of the brain. It yielded improved reconstructions for MRI volumes obtained with different acquisition settings, including when the deep learning models were trained in constrained (limited) data settings.
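One simple way to impose the described field consistency on a network prediction is a proximal-style refinement that trades off the dipole-model data fit against fidelity to the deep learning output. The k-space closed form below is a hedged sketch of such a step; the variable names, the weight `lam`, and the exact cost function are assumptions rather than the paper's formulation.

```python
import numpy as np

def refine_with_field_consistency(chi_dl, local_field, D, lam=0.1):
    """Refine a deep-learning susceptibility estimate chi_dl so that it remains
    consistent with the measured local field, by solving
        argmin_chi ||D*F(chi) - F(field)||^2 + lam*||F(chi) - F(chi_dl)||^2
    point-wise in k-space; D is the (real-valued) k-space dipole kernel."""
    F_field = np.fft.fftn(local_field)
    F_dl = np.fft.fftn(chi_dl)
    F_chi = (D * F_field + lam * F_dl) / (D**2 + lam)
    return np.real(np.fft.ifftn(F_chi))
```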
Subject(s)
Brain; Deep Learning; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Humans; Brain/diagnostic imaging; Brain Mapping/methods; Image Processing, Computer-Assisted/methods; Male; Female; Algorithms; Adult
ABSTRACT
OBJECTIVE: Quantitative susceptibility mapping (QSM) provides an estimate of the magnetic susceptibility of tissue using magnetic resonance (MR) phase measurements. The tissue magnetic susceptibility (source) is estimated from the measured magnetic field distribution/local tissue field (effect) inherent in the MR phase images by numerically solving the inverse source-effect problem. This study aims to develop an effective model-based deep-learning framework to solve the inverse problem of QSM. MATERIALS AND METHODS: This work proposes a Schatten p-norm-driven model-based deep learning framework for QSM with a learnable norm parameter p to adapt to the data. In contrast to other model-based architectures that enforce the ℓ2-norm or ℓ1-norm for the denoiser, the proposed approach can enforce any p-norm (0 < p ≤ 2) on a trainable regularizer. RESULTS: The proposed method was compared with deep learning-based approaches, such as QSMnet, and model-based deep learning approaches, such as the learned proximal convolutional neural network (LPCNN). Reconstructions performed using 77 imaging volumes with different acquisition protocols and clinical conditions, such as hemorrhage and multiple sclerosis, showed that the proposed approach outperformed existing state-of-the-art methods by a significant margin in terms of quantitative merits. CONCLUSION: The proposed SpiNet-QSM showed a consistent improvement of at least 5% in terms of the high-frequency error norm (HFEN) and normalized root mean squared error (NRMSE) over other QSM reconstruction methods with limited training data.
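As a loose illustration of how an ℓp (p ≤ 1) penalty can be enforced inside a proximal or unrolled reconstruction, the p-shrinkage operator below generalizes soft-thresholding. This is only a stand-in for intuition: SpiNet-QSM's trainable regularizer and its learnable p are not reproduced here, and the exact shrinkage variant shown is an assumption.

```python
import numpy as np

def p_shrinkage(x, lam, p):
    """One common generalization of soft-thresholding that promotes
    lp-sparsity for 0 < p <= 1; p = 1 recovers ordinary soft-thresholding."""
    mag = np.abs(x)
    scale = np.maximum(mag - lam * np.power(mag + 1e-12, p - 1.0), 0.0)
    return np.sign(x) * scale
```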
Subject(s)
Algorithms; Brain; Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Multiple Sclerosis/diagnostic imaging; Magnetic Fields; Brain Mapping/methods
ABSTRACT
Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values is proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with the widely popular Tikhonov regularization, time reversal, and the state-of-the-art least-squares QR-based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of the data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less sensitive to the choice of regularization parameter and carried no additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that standard Tikhonov filtering is an approximation to the proposed exponential filtering.
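The filter-factor view below sketches the relationship between the two schemes: both are diagonal filters applied to the SVD coefficients, with the Tikhonov factor acting as an approximation of the exponential one. The exact exponential form shown is an assumption about the paper's filter, and the function is a minimal dense-matrix illustration.

```python
import numpy as np

def filtered_svd_solve(A, b, lam, filt="exponential"):
    """Spectral-filter solution x = V diag(f_i / s_i) U^T b.
    Tikhonov filter:    f_i = s_i^2 / (s_i^2 + lam)
    Exponential filter: f_i = 1 - exp(-s_i^2 / lam)   (assumed form)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s, 1e-12)  # guard against rank-deficient systems
    f = s**2 / (s**2 + lam) if filt == "tikhonov" else 1.0 - np.exp(-s**2 / lam)
    return Vt.T @ (f * (U.T @ b) / s)
```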
ABSTRACT
Sparse estimation methods that utilize the ℓp-norm, with p between 0 and 1, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These ℓp-norm-based regularizations make the optimization function nonconvex, and algorithms that implement ℓp-norm minimization utilize approximations to the original ℓp-norm function. In this work, three such typical methods for implementing the ℓp-norm were considered, namely iteratively reweighted ℓ1-minimization (IRL1), iteratively reweighted least squares (IRLS), and the iterative thresholding method (ITM). These methods were deployed for performing diffuse optical tomographic image reconstruction, and a systematic comparison was carried out using three numerical and gelatin phantom cases. The results indicate that these three implementations of ℓp-minimization yield similar results, with IRL1 faring marginally better in the cases considered here in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.
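A compact numpy sketch of one of these schemes, IRLS, is shown below for a generic linear model; the damping constant `eps`, the iteration count, and the regularization weight are placeholders rather than the settings used in the paper.

```python
import numpy as np

def irls_lp(A, b, p=0.5, lam=1e-2, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for min ||Ax - b||^2 + lam*||x||_p^p:
    each iteration solves a weighted Tikhonov problem with weights ~ |x_i|^(p-2)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = (np.abs(x)**2 + eps)**(p / 2.0 - 1.0)   # lp reweighting
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```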
Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Tomography, Optical/methods; Absorption; Infrared Rays; Least-Squares Analysis; Phantoms, Imaging
ABSTRACT
PURPOSE: To propose an automated approach for detecting and classifying Intracranial Hemorrhages (ICH) directly from sinograms using a deep learning framework. This method is proposed to overcome the limitations of conventional diagnosis by eliminating the time-consuming reconstruction step and minimizing the potential noise and artifacts that can occur during the Computed Tomography (CT) reconstruction process. METHODS: This study proposes a two-stage automated approach for detecting and classifying ICH from sinograms using a deep learning framework. The first stage is an Intensity Transformed Sinogram Synthesizer, which synthesizes sinograms equivalent to the intensity-transformed CT images. The second stage comprises a cascaded Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) model that detects and classifies hemorrhages from the synthesized sinograms. The CNN module extracts high-level features from each input sinogram, while the RNN module provides spatial correlation of the neighborhood regions in the sinograms. The proposed method was evaluated on a publicly available RSNA dataset consisting of a large sample of 8652 patients. RESULTS: The results showed that the proposed method achieved a notable improvement of as much as 27% in patient-wise accuracy compared to state-of-the-art methods such as ResNeXt-101, Inception-v3, and Vision Transformer. Furthermore, the sinogram-based approach was found to be more robust to noise and offset errors than CT image-based approaches. The proposed model was also subjected to a multi-label classification analysis to determine the hemorrhage type from a given sinogram, and its learning patterns were examined for explainability using activation maps. CONCLUSION: The proposed sinogram-based approach can provide an accurate and efficient diagnosis of ICH without the need for the time-consuming reconstruction step and can potentially overcome the limitations of CT image-based approaches. The results show promising outcomes for the use of sinogram-based approaches in detecting hemorrhages, and further research can explore the potential of this approach in clinical settings.
Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Intracranial Hemorrhages/diagnostic imaging; Algorithms
ABSTRACT
Segmenting the median nerve is essential for identifying nerve entrapment syndromes, guiding surgical planning and interventions, and furthering understanding of nerve anatomy. This study aims to develop an automated tool that can assist clinicians in localizing and segmenting the median nerve at the wrist, mid-forearm, and elbow in ultrasound videos. This is the first fully automated single deep learning model for accurate segmentation of the median nerve from the wrist to the elbow in ultrasound videos, along with computation of the cross-sectional area (CSA) of the nerve. The visual transformer architecture, originally proposed to detect and classify 41 classes in YouTube videos, was modified to predict the median nerve in every frame of the ultrasound videos. This is achieved by modifying the bounding-box sequence-matching block of the visual transformer: because median nerve segmentation is a binary prediction, the entire bipartite matching sequence is eliminated, enabling a direct frame-by-frame comparison of the prediction with the expert annotation. Model training, validation, and testing were performed on a dataset comprising ultrasound videos collected from 100 subjects, partitioned into 80, 10, and 10 subjects, respectively. The proposed model was compared with U-Net, U-Net++, Siam U-Net, Attention U-Net, LSTM U-Net, and Trans U-Net. The proposed transformer-based model effectively leveraged the temporal and spatial information in the ultrasound video frames and efficiently segmented the median nerve with an average Dice similarity coefficient (DSC) of approximately 94% at the wrist and 84% over the entire forearm region.
Subject(s)
Elbow; Wrist; Humans; Wrist/diagnostic imaging; Median Nerve/diagnostic imaging; Ultrasonography; Electric Power Supplies; Image Processing, Computer-Assisted
ABSTRACT
Image-guided diffuse optical tomography has the advantage of reducing the total number of optical parameters being reconstructed to the number of distinct tissue types identified by the traditional imaging modality, converting the optical image-reconstruction problem from underdetermined to overdetermined. In such cases, the minimum number of required measurements can be far smaller than in traditional diffuse optical imaging. An approach to choose these measurements optimally based on a data-resolution matrix is proposed, and it is shown that such a choice does not compromise the reconstruction performance.
Subject(s)
Image Processing, Computer-Assisted/methods; Tomography, Optical/instrumentation; Tomography, Optical/methods; Absorption; Algorithms; Brain/pathology; Breast/pathology; Calibration; Female; Finite Element Analysis; Humans; Normal Distribution; Optics and Photonics/methods; Phantoms, Imaging; Reproducibility of Results
ABSTRACT
The mathematical model for diffuse fluorescence spectroscopy/imaging is represented by coupled partial differential equations (PDEs), which describe the excitation and emission light propagation in soft biological tissues. The generic closed-form solutions for these coupled PDEs are derived in this work for regular geometries via the Green's function approach, using both zero and extrapolated boundary conditions. The specific solutions, along with the typical data types such as integrated intensity and mean time of flight, were also derived for various regular geometries in both the time and frequency domains.
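For reference, one standard frequency-domain form of these coupled diffusion equations is given below; the notation follows the common convention and is assumed rather than copied from the paper.

```latex
\begin{aligned}
-\nabla\cdot\bigl(D_x(\mathbf{r})\,\nabla\Phi_x(\mathbf{r},\omega)\bigr)
  + \Bigl(\mu_{ax}(\mathbf{r}) + \tfrac{i\omega}{c}\Bigr)\Phi_x(\mathbf{r},\omega)
  &= S(\mathbf{r},\omega),\\
-\nabla\cdot\bigl(D_m(\mathbf{r})\,\nabla\Phi_m(\mathbf{r},\omega)\bigr)
  + \Bigl(\mu_{am}(\mathbf{r}) + \tfrac{i\omega}{c}\Bigr)\Phi_m(\mathbf{r},\omega)
  &= \frac{\eta\,\mu_{af}(\mathbf{r})}{1 - i\omega\tau(\mathbf{r})}\,\Phi_x(\mathbf{r},\omega),
\end{aligned}
```

where Φx and Φm are the excitation and emission fluences, Dx and Dm the diffusion coefficients, μax and μam the absorption coefficients at the two wavelengths, ημaf the fluorescence yield, τ the fluorescence lifetime, and S the excitation source; setting ω = 0 gives the continuous-wave case.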
Subject(s)
Models, Theoretical; Molecular Imaging/methods; Spectrometry, Fluorescence
ABSTRACT
The analytical solutions for the coupled diffusion equations encountered in diffuse fluorescence spectroscopy/imaging for regular geometries were compared with well-established numerical models based on the finite element method. A comparison between the analytical solutions obtained using zero boundary conditions and extrapolated boundary conditions (EBCs) was also performed. The results reveal that the analytical solutions are in close agreement with the numerical solutions, and that the solutions obtained using EBCs are more accurate in recovering the mean time-of-flight data than their zero-boundary-condition counterparts. The analytical solutions were also shown to be capable of providing bulk optical properties, demonstrated through a numerical experiment using a realistic breast model.
Subject(s)
Molecular Imaging/methods; Breast/cytology; Humans; Spectrometry, Fluorescence; Time Factors
ABSTRACT
A new approach that can easily incorporate any generic penalty function into the diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions considered include quadratic (ℓ2), absolute (ℓ1), Cauchy, and Geman-McClure. The regularization parameter in each of these cases was obtained automatically using the generalized cross-validation method. The reconstruction results were systematically compared with each other using quantitative metrics such as relative error and Pearson correlation. The results indicate that, while the quadratic penalty may provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparseness-promoting penalties, such as ℓ1, Cauchy, and Geman-McClure, have better utility in reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty being the most effective one.
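For concreteness, the four penalty functions compared can be written element-wise as in the sketch below; the scale parameter sigma and the exact normalizations follow one common convention and may differ from those used in the paper.

```python
import numpy as np

def penalty(x, kind="geman-mcclure", sigma=1.0):
    """Common robust/sparsity-promoting penalty functions (one convention)."""
    if kind == "l2":
        return x**2
    if kind == "l1":
        return np.abs(x)
    if kind == "cauchy":
        return (sigma**2 / 2.0) * np.log1p((x / sigma)**2)
    if kind == "geman-mcclure":
        return x**2 / (sigma**2 + x**2)
    raise ValueError(f"unknown penalty: {kind}")
```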
Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Numerical Analysis, Computer-Assisted; Spectroscopy, Near-Infrared/methods; Tomography, Optical/methods
ABSTRACT
A novel approach that can more effectively use the structural information provided by the traditional imaging modalities in multimodal diffuse optical tomographic imaging is introduced. This approach is based on a prior-image-constrained ℓ1-minimization scheme and has been motivated by recent progress in sparse image reconstruction techniques. It is shown that the proposed framework is more effective in localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, compared to traditional methods that use structural information.
Subject(s)
Image Processing, Computer-Assisted/methods; Tomography, Optical/methods; Infrared Rays; Phantoms, Imaging; Signal-To-Noise Ratio
ABSTRACT
PURPOSE: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using model-based data-resolution matrix characteristics. METHODS: The data-resolution matrix, which maps the measured data to the predicted data, is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure. The diagonal values of the data-resolution matrix indicate the importance of a particular measurement, while the magnitude of the off-diagonal entries indicates the dependence among measurements. Independent measurements are chosen based on how closely the diagonal value of each row compares with its off-diagonal entries. The reconstruction results obtained using all measurements were compared with those obtained using only the independent measurements in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the proposed method. RESULTS: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics does not significantly compromise the reconstructed image quality, while reducing the data-collection time associated with the procedure. When the same number of measurements was chosen at random, the reconstructions were of poor quality with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using the singular value analysis. CONCLUSIONS: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy.
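A minimal sketch of the data-resolution matrix and of one possible diagonal-dominance selection rule is given below; the selection threshold and the helper names are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def data_resolution_matrix(J, lam):
    """Data-resolution matrix N = J (J^T J + lam*I)^-1 J^T of a Tikhonov-type
    inversion; the predicted data are d_pred = N d, so diag(N) scores how much
    each measurement contributes to its own prediction."""
    n = J.shape[1]
    return J @ np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)

def select_independent_measurements(J, lam, ratio=0.5):
    """Keep measurements whose diagonal entry dominates the off-diagonal
    entries of its row (illustrative rule, not the paper's exact one)."""
    N = data_resolution_matrix(J, lam)
    off = np.sum(np.abs(N), axis=1) - np.abs(np.diag(N))
    score = np.abs(np.diag(N)) / (off + 1e-12)
    return np.where(score > ratio)[0]
```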
Subject(s)
Spectroscopy, Near-Infrared/methods; Tomography, Optical/methods; Algorithms; Animals; Anisotropy; Artifacts; Data Collection; Finite Element Analysis; Gelatin/chemistry; Humans; Image Processing, Computer-Assisted/methods; Models, Statistical; Phantoms, Imaging; Scattering, Radiation
ABSTRACT
Diffuse optical tomographic imaging is known to be an ill-posed problem, and a penalty/regularization term is used in the image reconstruction (inverse problem) to overcome this limitation. Two prevalent schemes are the spatially varying (exponential) and constant (standard) regularizations/penalties. A third scheme, also spatially varying but based on the model-resolution matrix, is introduced here. This scheme, along with the exponential and standard regularization schemes, is evaluated objectively using model-resolution and data-resolution matrices. This objective analysis showed that the resolution characteristics are better for spatially varying penalties than for standard regularization, and among the spatially varying schemes, the model-resolution-based regularization fares well in providing improved data-resolution and model-resolution characteristics. This is verified through numerical experiments reconstructing data with 1% noise in simple two- and three-dimensional imaging domains.
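The model-resolution matrix underlying this spatially varying scheme can be sketched as below for a Tikhonov-type inversion with a diagonal penalty; how the spatially varying penalty itself is derived from this matrix follows the paper and is not reproduced here.

```python
import numpy as np

def model_resolution_matrix(J, lam_diag):
    """Model-resolution matrix M = (J^T J + L)^-1 J^T J for a (possibly
    spatially varying) penalty L = diag(lam_diag); the regularized estimate
    relates to the true parameters as x_est = M x_true."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + np.diag(lam_diag), JtJ)
```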
Subject(s)
Infrared Rays; Tomography, Optical/methods; Imaging, Three-Dimensional; Models, Theoretical
ABSTRACT
Digital Rock Physics leverages advances in digital image acquisition and analysis to create 3D digital images of rock samples, which are used for computational modeling and simulations to predict petrophysical properties of interest. However, the accuracy of the predictions depends crucially on the quality of the digital images, which is currently limited by the resolution of micro-CT scanning technology. We have proposed a novel Deep Learning-based Super-Resolution model, called Siamese-SR, to digitally boost the resolution of Digital Rock images while retaining texture and providing optimal denoising. The Siamese-SR model consists of a generator that is adversarially trained with a relativistic and a siamese discriminator utilizing a Materials In Context (MINC) loss estimator. This model has been demonstrated to improve the resolution of sandstone rock images acquired using micro-CT scanning by a factor of 2. Another key highlight of our work is that, for evaluating super-resolution performance, we propose to move away from image-based metrics such as Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR), because they do not correlate well with expert geological and petrophysical evaluations. Instead, we propose to subject the super-resolved images to the next step in the Digital Rock workflow to calculate a crucial petrophysical property of interest, namely porosity, and to use it as a metric for evaluating the proposed Siamese-SR model against several existing super-resolution methods such as SRGAN, ESRGAN, EDSR, and SPSR. Furthermore, we use Local Attribution Maps to show how the proposed Siamese-SR model focuses optimally on edge semantics, which leads to improvement in the image-based porosity prediction, the permeability prediction from Multiple Relaxation Time Lattice Boltzmann Method (MRTLBM) flow simulations, and the prediction of other petrophysical properties of interest derived from Mercury Injection Capillary Pressure (MICP) simulations.
ABSTRACT
Digital rock is an emerging area of rock physics, which involves scanning reservoir rocks using X-ray micro-computed tomography (XCT) scanners and using the scans for various petrophysical computations and evaluations. The acquired micro-CT projections are used to reconstruct the X-ray attenuation maps of the rock. The image reconstruction problem can be solved using analytical methods (such as the Feldkamp-Davis-Kress (FDK) algorithm) or iterative methods. Analytical schemes are typically more computationally efficient and hence preferred for large datasets such as digital rocks. Iterative schemes such as maximum likelihood expectation maximization (MLEM) are known to generate more accurate image representations than analytical schemes in limited-data (and/or noisy) situations; however, they are computationally expensive. In this work, we have parallelized the forward and inverse operators used in the MLEM algorithm on multiple graphics processing unit (multi-GPU) platforms. The multi-GPU implementation involves dividing the rock volumes and detector geometry into smaller modules (along with overlap regions). Each module was passed to a different GPU to enable computation of the forward and inverse operations. We observed an acceleration of [Formula: see text] times using our multi-GPU approach compared to the multi-core CPU implementation. Furthermore, the multi-GPU-based MLEM obtained superior reconstructions compared to the traditional FDK algorithm.
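A generic dense-matrix form of the MLEM update that was parallelized is sketched below; the paper's multi-GPU version instead partitions the volume and detector geometry into overlapping modules and distributes the forward/backprojection across devices, which is not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classic MLEM iteration x <- x / (A^T 1) * A^T(y / (A x)), with A the
    forward projector (rows: detector bins, columns: voxels) and y the data."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0]) + eps      # sensitivity image A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + eps))) / sens
    return x
```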
ABSTRACT
Lung ultrasound (US) imaging has the potential to be an effective point-of-care test for detection of COVID-19, due to its ease of operation with minimal personal protection equipment along with easy disinfection. The current state-of-the-art deep learning models for detection of COVID-19 are heavy models that may not be easy to deploy on the mobile platforms commonly used in point-of-care testing. In this work, we develop a lightweight, mobile-friendly, efficient deep learning model for detection of COVID-19 using lung US images. Three classes (COVID-19, pneumonia, and healthy) were included in this task. The developed network, named Mini-COVIDNet, was benchmarked against other lightweight neural network models as well as a state-of-the-art heavy model. It was shown that the proposed network can achieve the highest accuracy of 83.2% and requires a training time of only 24 min. The proposed Mini-COVIDNet has 4.39 times fewer parameters than its next best-performing counterpart and requires only 51.29 MB of memory, making point-of-care detection of COVID-19 using lung US imaging plausible on a mobile platform. Deployment of these lightweight networks on embedded platforms shows that the proposed Mini-COVIDNet is highly versatile and provides optimal performance, being accurate while having latency of the same order as other lightweight networks. The developed lightweight models are available at https://github.com/navchetan-awasthi/Mini-COVIDNet.
Subject(s)
COVID-19/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted/methods; Point-of-Care Systems; Ultrasonography/methods; Humans; SARS-CoV-2
ABSTRACT
SIGNIFICANCE: The proposed binary tomography approach was able to recover the vasculature structures accurately, which could potentially enable the utilization of the binary tomography algorithm in scenarios such as therapy monitoring and hemorrhage detection in different organs. AIM: Photoacoustic tomography (PAT) involves reconstruction of vascular networks, having direct implications in cancer research, cardiovascular studies, and neuroimaging. Various methods have been proposed for recovering vascular networks in photoacoustic imaging; however, most methods are two-step (image reconstruction and image segmentation) in nature. We propose a binary PAT approach wherein direct reconstruction of the vascular network from the acquired photoacoustic sinogram data is possible. APPROACH: The binary tomography approach relies on solving a dual-optimization problem to reconstruct images in which every pixel takes a binary value (i.e., either background or absorber). Further, the binary tomography approach was compared against backprojection, Tikhonov regularization, and sparse recovery-based schemes. RESULTS: Numerical simulations, a physical phantom experiment, and in-vivo rat brain vasculature data were used to compare the performance of the different algorithms. The results indicate that the binary tomography approach improved vasculature recovery by 10% in terms of the Dice similarity coefficient on in-silico data compared with the other reconstruction methods. CONCLUSION: The proposed algorithm demonstrates superior vasculature recovery with limited data, both visually and based on quantitative image metrics.
Subject(s)
Image Processing, Computer-Assisted; Photoacoustic Techniques; Algorithms; Animals; Phantoms, Imaging; Rats; Tomography
ABSTRACT
The reconstruction methods for solving the ill-posed inverse problem of photoacoustic tomography with limited noisy data are iterative in nature in order to provide accurate solutions. The performance of these methods is highly affected by the noise level in the photoacoustic data. A singular value decomposition (SVD)-based plug-and-play priors method for solving the photoacoustic inverse problem is proposed in this work to provide robustness to noise in the data. The method was shown to be superior to total variation regularization, basis pursuit deconvolution, and Lanczos-Tikhonov regularization, and provided improved performance in the case of noisy data. The numerical and experimental cases show that the improvement can be as high as 8.1 dB in the signal-to-noise ratio of the reconstructed image and 67.98% in the root mean square error compared to state-of-the-art methods.
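A hedged sketch of such a scheme is given below: a half-quadratic-splitting plug-and-play loop whose data sub-problem is solved exactly through a precomputed thin SVD of the system matrix, with a simple Gaussian filter standing in for the denoiser (the paper plugs in stronger denoisers, and the penalty weight and iteration count here are placeholders).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_svd(A, b, shape, rho=1.0, sigma=1.0, n_iter=20):
    """Plug-and-play reconstruction via half-quadratic splitting; the x-step
    argmin_x ||Ax - b||^2 + rho*||x - z||^2 is solved with the thin SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = b - A @ z
        x = z + Vt.T @ ((s / (s**2 + rho)) * (U.T @ r))              # exact x-step
        z = gaussian_filter(x.reshape(shape), sigma=sigma).ravel()   # denoiser step
    return z.reshape(shape)
```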
ABSTRACT
Photoacoustic/optoacoustic tomography aims to reconstruct maps of the initial pressure rise induced by the absorption of light pulses in tissue. This reconstruction is an ill-conditioned and underdetermined problem when the data acquisition protocol involves limited detection positions. The aim of this work is to develop an inversion method that integrates a denoising procedure within the iterative model-based reconstruction to improve the quantitative performance of optoacoustic imaging. Among model-based schemes, total-variation (TV)-constrained reconstruction is a popular approach. In this work, a two-step approach was proposed for improving the TV-constrained optoacoustic inversion by adding a non-local-means-based filtering step within each TV iteration. Compared to TV-based reconstruction, inclusion of this non-local-means step resulted in a signal-to-noise ratio improvement of 2.5 dB in the reconstructed optoacoustic images.
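A hedged sketch of one such two-step iteration is shown below, using off-the-shelf TV and non-local-means denoisers from scikit-image around a gradient step on the data term; the step size and denoising weights are placeholders rather than the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means

def tv_nlm_inversion(A, y, shape, step=1e-3, tv_weight=0.05, nlm_h=0.02, n_iter=30):
    """Model-based optoacoustic inversion: a gradient step on ||Ax - y||^2,
    a TV denoising step, then a non-local-means filtering step per iteration."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = (A.T @ (A @ x.ravel() - y)).reshape(shape)
        x = x - step * grad                               # data-fidelity step
        x = denoise_tv_chambolle(x, weight=tv_weight)     # TV constraint
        x = denoise_nl_means(x, h=nlm_h, fast_mode=True)  # non-local means
    return x
```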