ABSTRACT
Diffuse optical tomography (DOT) uses near-infrared light to image spatially varying optical parameters in biological tissues. In functional brain imaging, DOT uses a perturbation model to estimate the changes in optical parameters corresponding to changes in measured data due to brain activity. The perturbation model typically uses approximate baseline optical parameters for the different brain compartments, since the actual baseline optical parameters are unknown. We simulated the effects of these approximate baseline optical parameters using parameter variations reported earlier in the literature and brain atlases from four adult subjects. We report the errors in estimated activation contrast, localization, and area when incorrect baseline values were used. Further, we developed a post-processing technique based on deep learning that can reduce the effects of inaccurate baseline optical parameters. The method improved imaging of brain activation changes in the presence of such errors.
ABSTRACT
Real-time applications in three-dimensional photoacoustic tomography from planar sensors rely on fast reconstruction algorithms that assume the speed of sound (SoS) in the tissue is homogeneous. Moreover, the reconstruction quality depends on the correct choice of the constant SoS. In this study, we discuss the possibility of ameliorating the problem of unknown or heterogeneous SoS distributions by using learned reconstruction methods. This can be done by modelling the uncertainties in the training data; in addition, a correction term can be included in the learned reconstruction method. We investigate the influence of both and show that, while a learned correction component can further improve reconstruction quality, a careful choice of the uncertainties in the training data is the primary factor in overcoming an unknown SoS. We support our findings with simulated and in vivo measurements in 3D.
ABSTRACT
Objective. To extend the highly successful U-Net convolutional neural network architecture, which is limited to rectangular pixel/voxel domains, to a graph-based equivalent that works flexibly on irregular meshes, and to demonstrate its effectiveness on electrical impedance tomography (EIT). Approach. By interpreting the irregular mesh as a graph, we develop a graph U-Net with new cluster pooling and unpooling layers that mimic the classic neighborhood-based max-pooling important for imaging applications. Main results. The proposed graph U-Net is shown to be flexible and effective for improving early-iterate total variation (TV) reconstructions from EIT measurements, using as little as the first iteration. The performance is evaluated for simulated data, and on experimental data from three measurement devices with different measurement geometries and instrumentation. We successfully show that such networks can be trained with a simple two-dimensional simulated training set, and generalize to very different domains, including measurements from a three-dimensional device and subsequent 3D reconstructions. Significance. As many inverse problems are solved on irregular (e.g. finite element) meshes, the proposed graph U-Net and pooling layers provide the added flexibility to process directly on the computational mesh. Post-processing an early-iterate reconstruction greatly reduces the computational cost, which can become prohibitive in higher dimensions with dense meshes. As the graph structure is independent of 'dimension', the flexibility to extend networks trained on 2D domains to 3D domains offers a possibility to further reduce the computational cost of training.
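The cluster pooling and unpooling idea above lends itself to a small sketch. The code below is an illustrative stand-in, not the authors' implementation: it assumes that clusters of mesh nodes have already been computed, max-pools node features within each cluster, and copies pooled features back on unpooling.

```python
import numpy as np

def cluster_max_pool(features, clusters):
    """Max-pool node features within precomputed node clusters.

    features: (n_nodes, n_channels) array of per-node features.
    clusters: length-n_nodes array assigning each node a cluster id.
    Returns pooled (n_clusters, n_channels) features.
    """
    ids = np.unique(clusters)
    return np.stack([features[clusters == c].max(axis=0) for c in ids])

def cluster_unpool(pooled, clusters):
    """Copy each cluster's pooled feature back to its member nodes."""
    return pooled[clusters]

# toy mesh with 5 nodes grouped into 2 clusters
feats = np.array([[1.0], [3.0], [2.0], [5.0], [4.0]])
clusters = np.array([0, 0, 1, 1, 1])
pooled = cluster_max_pool(feats, clusters)
restored = cluster_unpool(pooled, clusters)
```

In a full graph U-Net these operations would sit between graph convolution layers, with the cluster assignments playing the role of the fixed pooling windows of a pixel-based U-Net.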
Subjects
Tomography, X-Ray Computed ; Tomography ; Electric Impedance ; Neural Networks, Computer ; Phantoms, Imaging ; Image Processing, Computer-Assisted/methods
ABSTRACT
PURPOSE: Instrumented ultrasonic tracking provides needle localisation during ultrasound-guided minimally invasive percutaneous procedures. Here, a post-processing framework based on a convolutional neural network (CNN) is proposed to improve the spatial resolution of ultrasonic tracking images. METHODS: The custom ultrasonic tracking system comprised a needle with an integrated fibre-optic ultrasound (US) transmitter and a clinical US probe for receiving those transmissions and for acquiring B-mode US images. For post-processing of tracking images reconstructed from the received fibre-optic US transmissions, a recently developed framework based on the ResNet architecture, trained with a purely synthetic dataset, was employed. A preliminary evaluation of this framework was performed with data acquired from needle insertions in the heart of a fetal sheep in vivo. The axial and lateral spatial resolutions of the tracking images were used as performance metrics of the trained network. RESULTS: Application of the CNN yielded improvements in the spatial resolution of the tracking images. In three needle insertions, in which the tip depth ranged from 23.9 to 38.4 mm, the lateral resolution improved from 2.11 to 1.58 mm, and the axial resolution improved from 1.29 to 0.46 mm. CONCLUSION: The results provide strong indications of the potential of CNNs to improve the spatial resolution of ultrasonic tracking images and thereby to increase the accuracy of needle tip localisation. These improvements could have broad applicability and impact across multiple clinical fields, which could lead to improvements in procedural efficiency and reductions in risk of complications.
Subjects
Deep Learning ; Sheep ; Animals ; Ultrasound ; Ultrasonography/methods ; Needles ; Neural Networks, Computer
ABSTRACT
Prediction of complex traits based on genome-wide marker information is of central importance for both animal and plant breeding. Numerous models have been proposed for the prediction of complex traits, and considerable effort is still devoted to improving their prediction accuracy, because various genetic factors such as additive, dominance and epistasis effects can influence it. Recently, machine learning (ML) methods have been widely applied for prediction in both animal and plant breeding programs. In this study, we propose a new algorithm for genomic prediction which is based on neural networks but incorporates classical elements of the LASSO. Our new method is able to account for local epistasis (higher-order interaction between neighboring markers) in the prediction. We compare the prediction accuracy of our new method with the most commonly used prediction methods, such as BayesA, BayesB, Bayesian Lasso (BL), genomic BLUP and Elastic Net (EN), using the heterogeneous stock mouse and rice field data sets.
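As a hedged illustration of the LASSO element referred to above, the sketch below implements plain proximal-gradient LASSO with soft thresholding on a toy marker matrix; the network component and the local-epistasis interaction terms of the proposed method are not reproduced here, and all data and hyperparameters are assumptions for the example.

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty used in the LASSO."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def lasso_gd(X, y, lam=0.1, lr=0.05, n_iter=500):
    """Proximal gradient descent for min 0.5/n * ||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n        # gradient of the data-fit term
        w = soft_threshold(w - lr * grad, lr * lam)  # shrinkage step
    return w

# toy genotype-like design with two truly associated "markers"
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true_w = np.zeros(10)
true_w[0], true_w[3] = 2.0, -1.5
y = X @ true_w
w_hat = lasso_gd(X, y)
```

The soft-thresholding step is what drives irrelevant marker effects exactly to zero, which is the sparsity property the abstract's method borrows from the classical LASSO.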
ABSTRACT
Many interventional surgical procedures rely on medical imaging to visualize and track instruments. Such imaging methods not only need to be real time capable but also provide accurate and robust positional information. In ultrasound (US) applications, typically, only 2-D data from a linear array are available, and as such, obtaining accurate positional estimation in three dimensions is nontrivial. In this work, we first train a neural network, using realistic synthetic training data, to estimate the out-of-plane offset of an object with the associated axial aberration in the reconstructed US image. The obtained estimate is then combined with a Kalman filtering approach that utilizes positioning estimates obtained in previous time frames to improve localization robustness and reduce the impact of measurement noise. The accuracy of the proposed method is evaluated using simulations, and its practical applicability is demonstrated on experimental data obtained using a novel optical US imaging setup. Accurate and robust positional information is provided in real time. Axial and lateral coordinates for out-of-plane objects are estimated with a mean error of 0.1 mm for simulated data and a mean error of 0.2 mm for experimental data. The 3-D localization is most accurate for elevational distances larger than 1 mm, with a maximum distance of 6 mm considered for a 25-mm aperture.
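The Kalman filtering step described above can be illustrated with a minimal one-dimensional constant-velocity filter applied to noisy per-frame position estimates (such as a network's out-of-plane offsets). This is a generic sketch; the state model and all noise parameters below are assumed values, not those of the paper.

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-4, r=0.04):
    """Smooth noisy 1-D position estimates with a constant-velocity Kalman filter.

    measurements: per-frame position estimates (e.g. network outputs).
    q: assumed process noise variance; r: assumed measurement noise variance.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)
    x = np.array([measurements[0], 0.0])    # initial state
    P = np.eye(2)                           # initial state covariance
    out = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                 # innovation covariance
        K = P @ H.T / S                     # Kalman gain
        x = x + (K * (z - H @ x)).ravel()   # update with the new measurement
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# demo: noisy measurements of a constant-velocity trajectory
rng = np.random.default_rng(1)
truth = 0.05 * np.arange(100)
meas = truth + 0.2 * rng.standard_normal(100)
filt = kalman_track(meas)
raw_rmse = np.sqrt(np.mean((meas[20:] - truth[20:]) ** 2))
filt_rmse = np.sqrt(np.mean((filt[20:] - truth[20:]) ** 2))
```

Because the filter fuses each new estimate with the motion predicted from previous frames, the filtered trajectory is substantially less noisy than the raw per-frame estimates.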
Subjects
Neural Networks, Computer ; Optical Imaging ; Ultrasonography/methods
ABSTRACT
Diffuse optical tomography (DOT) utilises near-infrared light for imaging spatially distributed optical parameters, typically the absorption and scattering coefficients. The image reconstruction problem of DOT is an ill-posed inverse problem, due to the non-linear light propagation in tissues and limited boundary measurements. The ill-posedness means that the image reconstruction is sensitive to measurement and modelling errors. The Bayesian approach for the inverse problem of DOT offers the possibility of incorporating prior information about the unknowns, rendering the problem less ill-posed. It also allows marginalisation of modelling errors utilising the so-called Bayesian approximation error method. A more recent trend in image reconstruction techniques is the use of deep learning, which has shown promising results in various applications from image processing to tomographic reconstructions. In this work, we study the non-linear DOT inverse problem of estimating the (absolute) absorption and scattering coefficients utilising a 'model-based' learning approach, essentially intertwining learned components with the model equations of DOT. The proposed approach was validated with 2D simulations and 3D experimental data. We demonstrated improved absorption and scattering estimates for targets with a mix of smooth and sharp image features, implying that the proposed approach could learn image features that are difficult to model using standard Gaussian priors. Furthermore, it was shown that the approach can be utilised in compensating for modelling errors due to coarse discretisation enabling computationally efficient solutions. Overall, the approach provided improved computation times compared to a standard Gauss-Newton iteration.
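The Bayesian approximation error idea mentioned above — estimating the statistics of the discrepancy between an accurate and an approximate forward model, and folding them into the noise model — can be sketched as follows. The toy models, dimensions, and sample sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def approximation_error_stats(accurate, approximate):
    """Sample mean and covariance of the modelling error
    eps = f_accurate(x) - f_approx(x), from paired model evaluations
    over draws from the prior."""
    eps = accurate - approximate
    return eps.mean(axis=0), np.cov(eps, rowvar=False)

def enhanced_error_cov(meas_cov, eps_cov):
    """Total error covariance for the enhanced likelihood: measurement
    noise plus (assumed independent) approximation error."""
    return meas_cov + eps_cov

# toy demo: the approximate model is biased by -1 with extra jitter
rng = np.random.default_rng(0)
acc = rng.standard_normal((5000, 3))                  # accurate model outputs
apx = acc - 1.0 + 0.1 * rng.standard_normal((5000, 3))  # approximate outputs
mean_eps, cov_eps = approximation_error_stats(acc, apx)
total_cov = enhanced_error_cov(0.01 * np.eye(3), cov_eps)
```

In a DOT reconstruction, `mean_eps` would shift the data and `total_cov` would replace the plain measurement noise covariance in the likelihood, marginalising over the modelling error.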
Subjects
Algorithms ; Tomography, Optical ; Bayes Theorem ; Image Processing, Computer-Assisted/methods ; Normal Distribution ; Tomography, Optical/methods
ABSTRACT
Deep learning-based image reconstruction approaches have demonstrated impressive empirical performance in many imaging modalities. These approaches usually require a large amount of high-quality paired training data, which is often not available in medical imaging. To circumvent this issue we develop a novel unsupervised knowledge-transfer paradigm for learned reconstruction within a Bayesian framework. The proposed approach learns a reconstruction network in two phases. The first phase trains a reconstruction network with a set of ordered pairs comprising ground truth images of ellipses and the corresponding simulated measurement data. The second phase fine-tunes the pretrained network to more realistic measurement data without supervision. By construction, the framework is capable of delivering predictive uncertainty information over the reconstructed image. We present extensive experimental results on low-dose and sparse-view computed tomography showing that the approach is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques. Moreover, for test data distributed differently from the training data, the proposed framework can significantly improve reconstruction quality not only visually, but also quantitatively in terms of PSNR and SSIM, when compared with learned methods trained on the synthetic dataset only.
ABSTRACT
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which are detected by a fiber-optic hydrophone integrated into a needle. The detected transmissions are then reconstructed to form the tracking image. Two challenges are considered with the current implementation of ultrasonic tracking. First, tracking transmissions are interleaved with the acquisition of B-mode images, and thus, the effective B-mode frame rate is reduced. Second, it is challenging to achieve an accurate localization of the needle tip when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) to maintain spatial resolution with fewer tracking transmissions and enhance signal quality. A major component of the framework included the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and experimental in vivo tracking data. The performance of needle localization was investigated when reconstruction was performed with fewer (up to eightfold) tracking transmissions. CNN-based processing of conventional reconstructions showed that the axial and lateral spatial resolutions could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition rates and increased localization accuracy.
Subjects
Deep Learning ; Image Processing, Computer-Assisted/methods ; Needles ; Neural Networks, Computer ; Ultrasound ; Ultrasonography/methods
ABSTRACT
Electrical and elasticity imaging are promising modalities for a suite of different applications, including medical tomography, non-destructive testing and structural health monitoring. These emerging modalities are capable of providing remote, non-invasive and low-cost imaging. Unfortunately, both modalities are severely ill-posed nonlinear inverse problems, susceptible to noise and modelling errors. Nevertheless, the ability to incorporate complementary datasets obtained simultaneously offers mutually beneficial information. By fusing electrical and elastic modalities as a joint problem, we are afforded the possibility to stabilize the inversion process via the utilization of auxiliary information from both modalities as well as joint structural operators. In this study, we discuss a possible approach to combine electrical and elasticity imaging in a joint reconstruction problem, giving rise to novel multi-modality applications for use in both medical and structural engineering. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Subjects
Elasticity Imaging Techniques/methods ; Electric Impedance ; Image Processing, Computer-Assisted/methods ; Tomography/methods ; Computer Simulation ; Elasticity ; Elasticity Imaging Techniques/statistics & numerical data ; Humans ; Image Processing, Computer-Assisted/statistics & numerical data ; Mathematical Concepts ; Multimodal Imaging/methods ; Multimodal Imaging/statistics & numerical data ; Nonlinear Dynamics ; Tomography/statistics & numerical data
ABSTRACT
Magnetic Resonance Imaging (MRI) plays a vital role in the diagnosis, management and monitoring of many diseases. However, it is an inherently slow imaging technique. Over the last 20 years, parallel imaging, temporal encoding and compressed sensing have enabled substantial speed-ups in the acquisition of MRI data by accurately recovering missing lines of k-space data. However, clinical uptake of vastly accelerated acquisitions has been limited, in particular in compressed sensing, due to the time-consuming nature of the reconstructions and unnatural-looking images. Following the success of machine learning in a wide range of imaging tasks, there has been a recent explosion in the use of machine learning in the field of MRI image reconstruction. A wide range of approaches have been proposed, which can be applied in k-space and/or image-space. Promising results have been demonstrated from a range of methods, enabling natural-looking images and rapid computation. In this review article, we summarize the current machine learning approaches used in MRI reconstruction and discuss their drawbacks, clinical applications, and current trends.
Subjects
Image Processing, Computer-Assisted ; Machine Learning ; Algorithms ; Magnetic Resonance Imaging ; Magnetic Resonance Spectroscopy
ABSTRACT
The majority of model-based learned image reconstruction methods in medical imaging have been limited to uniform domains, such as pixelated images. If the underlying model is solved on nonuniform meshes, arising from a finite element method typical for nonlinear inverse problems, interpolation and embeddings are needed. To overcome this, we present a flexible framework to extend model-based learning directly to nonuniform meshes, by interpreting the mesh as a graph and formulating our network architectures using graph convolutional neural networks. This gives rise to the proposed iterative Graph Convolutional Newton-type Method (GCNM), which includes the forward model in the solution of the inverse problem, while all updates are directly computed by the network on the problem-specific mesh. We present results for Electrical Impedance Tomography, a severely ill-posed nonlinear inverse problem that is frequently solved via optimization-based methods, where the forward problem is solved by finite element methods. Results for absolute EIT imaging are compared to standard iterative methods as well as a graph residual network. We show that the GCNM has good generalizability to different domain shapes and meshes, to out-of-distribution data, and to experimental data, from purely simulated training data and without transfer training.
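A hedged sketch of a Newton-type iteration with a learned update, in the spirit of the GCNM: a model-based Gauss-Newton direction is computed from the forward model, and a network maps the current iterate and that direction to the next iterate. Here `plain_net` is an identity stand-in for the trained graph convolutional network, and the elementwise toy forward model is not EIT.

```python
import numpy as np

def gauss_newton_update(x, forward, jacobian, data, reg=1e-3):
    """One regularised Gauss-Newton direction for min ||forward(x) - data||^2."""
    r = forward(x) - data
    J = jacobian(x)
    return np.linalg.solve(J.T @ J + reg * np.eye(len(x)), -J.T @ r)

def learned_newton_iterate(x, forward, jacobian, data, network, n_iter=5):
    """Newton-type iteration where a (placeholder) network maps the current
    iterate and the model-based update to the next iterate."""
    for _ in range(n_iter):
        dx = gauss_newton_update(x, forward, jacobian, data)
        x = network(x, dx)
    return x

# toy nonlinear forward model y = x**2 (elementwise); the identity "network"
# below stands in for the trained graph convolutional network
forward = lambda x: x ** 2
jacobian = lambda x: np.diag(2 * x)
plain_net = lambda x, dx: x + dx
x_hat = learned_newton_iterate(np.array([1.0, 1.0]), forward, jacobian,
                               np.array([4.0, 9.0]), plain_net)
```

With the identity stand-in this reduces to plain regularised Gauss-Newton, converging toward the solution [2, 3]; in the GCNM the network instead learns a problem-adapted update directly on the mesh graph.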
ABSTRACT
SIGNIFICANCE: Two-dimensional (2-D) fully convolutional neural networks have been shown capable of producing maps of sO2 from 2-D simulated images of simple tissue models. However, their potential to produce accurate estimates in vivo is uncertain as they are limited by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D), and they have not been tested with realistic images. AIM: To demonstrate the capability of deep neural networks to process whole 3-D images and output 3-D maps of vascular sO2 from realistic tissue models/images. APPROACH: Two separate fully convolutional neural networks were trained to produce 3-D maps of vascular blood oxygen saturation and vessel positions from multiwavelength simulated images of tissue models. RESULTS: The mean of the absolute difference between the true mean vessel sO2 and the network output for 40 examples was 4.4% and the standard deviation was 4.5%. CONCLUSIONS: 3-D fully convolutional networks were shown capable of producing accurate sO2 maps using the full extent of spatial information contained within 3-D images generated under conditions mimicking real imaging scenarios. We demonstrate that networks can cope with some of the confounding effects present in real images such as limited-view artifacts and have the potential to produce accurate estimates in vivo.
Subjects
Artifacts ; Neural Networks, Computer ; Oxygen ; Imaging, Three-Dimensional ; Oximetry ; Oxygen/analysis
ABSTRACT
BACKGROUND: Three-dimensional, whole-heart, balanced steady state free precession (WH-bSSFP) sequences provide delineation of intra-cardiac and vascular anatomy. However, they have long acquisition times. Here, we propose significant speed-ups using a deep-learning single-volume super-resolution reconstruction to recover high-resolution features from rapidly acquired low-resolution WH-bSSFP images. METHODS: A 3D residual U-Net was trained using synthetic data, created from a library of 500 high-resolution WH-bSSFP images by simulating 50% slice resolution and 50% phase resolution. The trained network was validated with 25 synthetic test data sets. Additionally, prospective low-resolution data and high-resolution data were acquired in 40 patients. In the prospective data, vessel diameters, quantitative and qualitative image quality, and diagnostic scores were compared between the low-resolution, super-resolution and reference high-resolution WH-bSSFP data. RESULTS: The synthetic test data showed a significant increase in image quality of the low-resolution images after super-resolution reconstruction. Prospectively acquired low-resolution data were acquired ~3× faster than the prospective high-resolution data (173 s vs 488 s). Super-resolution reconstruction of the low-resolution data took <1 s per volume. Qualitative image scores showed that super-resolved images had better edge sharpness, fewer residual artefacts and less image distortion than low-resolution images, with scores similar to high-resolution data. Quantitative image scores showed that super-resolved images had significantly better edge sharpness than low-resolution or high-resolution images, with significantly better signal-to-noise ratio than high-resolution data. Vessel diameter measurements showed over-estimation in the low-resolution measurements compared to the high-resolution data. No significant differences and no bias were found in the super-resolution measurements for any of the great vessels. However, a small but significant underestimation was found in the proximal left coronary artery diameter measurement from super-resolution data. Diagnostic scoring showed that although super-resolution did not improve accuracy of diagnosis, it did improve diagnostic confidence compared to low-resolution imaging. CONCLUSION: This paper demonstrates the potential of using a residual U-Net for super-resolution reconstruction of rapidly acquired low-resolution whole-heart bSSFP data within a clinical setting. We were able to train the network using synthetic training data from retrospective high-resolution whole-heart data. The resulting network can be applied very quickly, making these techniques particularly appealing within a busy clinical workflow. Thus, we believe that this technique may help speed up whole-heart CMR in clinical practice.
Subjects
Deep Learning ; Heart/diagnostic imaging ; Image Interpretation, Computer-Assisted ; Magnetic Resonance Imaging ; Adolescent ; Adult ; Aged ; Aged, 80 and over ; Child ; Child, Preschool ; Female ; Heart/physiopathology ; Heart Defects, Congenital/diagnostic imaging ; Heart Defects, Congenital/physiopathology ; Humans ; Male ; Middle Aged ; Predictive Value of Tests ; Prospective Studies ; Reproducibility of Results ; Time Factors ; Workflow ; Young Adult
ABSTRACT
MOTIVATION: Improved DNA technology has made it practical to estimate single-nucleotide polymorphism (SNP)-heritability among distantly related individuals with unknown relationships. For growth- and development-related traits, it is meaningful to base SNP-heritability estimation on longitudinal data due to the time-dependency of the process. However, only a few statistical methods have been developed so far for estimating dynamic SNP-heritability and quantifying its full uncertainty. RESULTS: We introduce a completely tuning-free Bayesian Gaussian process (GP)-based approach for estimating dynamic variance components and heritability as their function. For parameter estimation, we use a modern Markov chain Monte Carlo method which allows full uncertainty quantification. Several datasets are analysed and our results clearly illustrate that the 95% credible intervals of the proposed joint estimation method (which 'borrows strength' from adjacent time points) are significantly narrower than those of a two-stage baseline method that first estimates the variance components at each time point independently and then performs smoothing. We compare the method with a random regression model using MTG2 and BLUPF90 software, and quantitative measures indicate superior performance of our method. Results are presented for simulated and real data with up to 1000 time points. Finally, we demonstrate scalability of the proposed method for simulated data with tens of thousands of individuals. AVAILABILITY AND IMPLEMENTATION: The C++ implementation dynBGP and simulated data are available in GitHub: https://github.com/aarjas/dynBGP. The programmes can be run in R. Real datasets are available in the QTL archive: https://phenome.jax.org/centers/QTLA. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Subjects
Polymorphism, Single Nucleotide ; Software ; Bayes Theorem ; Humans ; Monte Carlo Method ; Normal Distribution
ABSTRACT
Model-based learned iterative reconstruction methods have recently been shown to outperform classical reconstruction algorithms. Applicability of these methods to large-scale inverse problems is, however, limited by the available memory for training and by extensive training times, the latter due to computationally expensive forward models. As a possible solution to these restrictions, we propose a multi-scale learned iterative reconstruction scheme that computes iterates on discretisations of increasing resolution. This procedure not only reduces memory requirements and considerably speeds up reconstruction and training times but, most importantly, is scalable to large-scale inverse problems with non-trivial forward operators, such as those that arise in many 3D tomographic applications. In particular, we propose a hybrid network that combines the multi-scale iterative approach with a particularly expressive network architecture, which in combination exhibits excellent scalability in 3D. Applicability of the algorithm is demonstrated for 3D cone-beam computed tomography from real measurement data of an organic phantom. Additionally, we examine scalability and reconstruction quality, in comparison to established learned reconstruction methods, for two-dimensional low-dose computed tomography on human phantoms.
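The coarse-to-fine idea above can be sketched for a toy 1-D problem with an identity forward operator: early iterates live on coarse grids and are upsampled as the resolution increases. The learned network updates of the hybrid scheme are omitted; the grid sizes, step size, and iteration counts are assumptions for the illustration.

```python
import numpy as np

def downsample(y):
    """Average-pool a 1-D signal to half resolution."""
    return y.reshape(-1, 2).mean(axis=1)

def upsample(x):
    """Nearest-neighbour upsampling to twice the resolution."""
    return np.repeat(x, 2)

def multiscale_reconstruct(data, n_levels=3, n_iter=20, lr=0.5):
    """Coarse-to-fine gradient descent for a toy identity forward operator:
    iterate on coarse grids first, then refine at full resolution."""
    coarse = [data]
    for _ in range(n_levels - 1):
        coarse.append(downsample(coarse[-1]))   # data at each resolution
    x = np.zeros_like(coarse[-1])               # start on the coarsest grid
    for d in reversed(coarse):
        if x.shape != d.shape:
            x = upsample(x)                     # move iterate to the finer grid
        for _ in range(n_iter):
            x = x - lr * (x - d)                # gradient step on 0.5*||x - d||^2
        # in the hybrid scheme a learned network update would follow here
    return x

data = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
recon = multiscale_reconstruct(data)
```

Because most iterations run on grids with far fewer unknowns, the memory footprint and per-iteration cost of the early steps are a fraction of those at full resolution, which is the source of the scalability claimed above.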
ABSTRACT
PURPOSE: Real-time assessment of ventricular volumes requires high acceleration factors. Residual convolutional neural networks (CNN) have shown potential for removing artifacts caused by data undersampling. In this study, we investigated the ability of CNNs to reconstruct highly accelerated radial real-time data in patients with congenital heart disease (CHD). METHODS: A 3D (2D plus time) CNN architecture was developed and trained using synthetic training data created from previously acquired breath-hold cine images from 250 CHD patients. The trained CNN was then used to reconstruct actual real-time, tiny golden angle (tGA) radial SSFP data (13× undersampled) acquired in 10 new patients with CHD. The same real-time data was also reconstructed with compressed sensing (CS) to compare image quality and reconstruction time. Ventricular volume measurements made using both the CNN and CS reconstructed images were compared to reference standard breath-hold data. RESULTS: It was feasible to train a CNN to remove artifact from highly undersampled radial real-time data. The overall reconstruction time with the CNN (including creation of aliased images) was shown to be >5× faster than the CS reconstruction. In addition, the image quality and accuracy of biventricular volumes measured from the CNN reconstructed images were superior to the CS reconstructions. CONCLUSION: This article has demonstrated the potential for the use of a CNN for reconstruction of real-time radial data within the clinical setting. Clinical measures of ventricular volumes using real-time data with CNN reconstruction are not statistically significantly different from gold-standard, cardiac-gated, breath-hold techniques.
Subjects
Deep Learning ; Heart Defects, Congenital/diagnostic imaging ; Heart/diagnostic imaging ; Magnetic Resonance Imaging, Cine ; Adolescent ; Adult ; Algorithms ; Artifacts ; Breath Holding ; Cardiac-Gated Imaging Techniques ; Fourier Analysis ; Humans ; Image Interpretation, Computer-Assisted/methods ; Image Processing, Computer-Assisted ; Male ; Middle Aged ; Respiration ; Retrospective Studies ; Young Adult
ABSTRACT
Recent advances in deep learning for tomographic reconstructions have shown great potential to create accurate and high-quality images with a considerable speed-up. In this paper, we present a deep neural network that is specifically designed to provide high-resolution 3-D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artifacts. Due to the high complexity of the photoacoustic forward operator, we separate training and computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung computed tomography scans and then applied to in vivo photoacoustic measurement data.