ABSTRACT
With the growing use of point-of-care ultrasound (POCUS) in various clinical settings, it is essential for users of ultrasound to have a thorough understanding of the basics of ultrasound physics, including sound wave properties, their interaction with various tissues, common artifacts, and knobology. The authors introduce and discuss these concepts in this article, with a focus on clinical implications.
Subject(s)
Point-of-Care Systems, Ultrasonography, Humans, Ultrasonography/methods, Artifacts, Physics
ABSTRACT
Proper collimator selection is critical to obtaining high-quality, interpretable nuclear medicine images. Collimators help eliminate scatter, which otherwise leads to poor spatial resolution and blurry images. We present the case of a posttherapy 177Lu-DOTATATE (Lutathera) patient who was initially imaged with a low-energy, high-resolution collimator routinely used in 99mTc imaging. On image review, the patient was reimaged with the appropriate medium-energy, high-resolution collimator, which resulted in improved image quality. When reviewing image quality, it is important to understand acquisition modifications that can significantly improve image quality and interpretation.
Subject(s)
Octreotide, Organometallic Compounds, Humans, Octreotide/analogs & derivatives, Octreotide/therapeutic use
ABSTRACT
OBJECTIVE: To compare compressed sensing (CS) and the Cascades of Independently Recurrent Inference Machines (CIRIM) with respect to image quality and reconstruction times when 12-fold accelerated scans of patients with neurological deficits are reconstructed. MATERIALS AND METHODS: Twelve-fold accelerated 3D T2-FLAIR images were obtained from a cohort of 62 patients with neurological deficits on 3 T MRI. Images were reconstructed offline via CS and the CIRIM. Image quality was assessed in a blinded and randomized manner by two experienced interventional neuroradiologists and one experienced pediatric neuroradiologist on imaging artifacts, perceived spatial resolution (sharpness), anatomic conspicuity, diagnostic confidence, and contrast. The methods were also compared in terms of self-referenced quality metrics, image resolution, patient groups, and reconstruction time. In ten scans, the contrast ratio (CR) was determined between lesions and white matter. The effect of the acceleration factor was assessed in a publicly available fully sampled dataset, since ground truth data are not available in prospectively accelerated clinical scans. Specifically, 451 FLAIR scans, including scans with white matter lesions, were adopted from the FastMRI database to evaluate structural similarity (SSIM) and the CR of lesions and white matter across acceleration factors ranging from four-fold to 12-fold. RESULTS: Interventional neuroradiologists significantly preferred the CIRIM for imaging artifacts, anatomic conspicuity, and contrast. One rater significantly preferred the CIRIM in terms of sharpness and diagnostic confidence. The pediatric neuroradiologist preferred CS for imaging artifacts and sharpness. Compared to CS, the CIRIM reconstructions significantly improved in terms of imaging artifacts and anatomic conspicuity (p < 0.01) for higher resolution scans while yielding a 28% higher SNR (p = 0.001) and a 5.8% lower CR (p = 0.04). There were no differences between patient groups.
Additionally, the CIRIM was five times faster than CS. An increasing acceleration factor did not lead to changes in CR (p = 0.92), but did lead to lower SSIM (p = 0.002). DISCUSSION: Patients with neurological deficits can undergo MRI at a range of moderate to high acceleration factors. DL reconstruction outperforms CS in terms of image resolution and efficient denoising, with a modest reduction in contrast, and offers reduced reconstruction times.
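The lesion-to-white-matter contrast ratio used above can be defined in several ways; the sketch below assumes a Michelson-style definition applied to mean ROI intensities (the study's exact definition may differ, and all names here are illustrative):

```python
import numpy as np

def contrast_ratio(image, lesion_mask, wm_mask):
    """Michelson-style contrast ratio between two regions of interest.

    image       : 2D array of signal intensities
    lesion_mask : boolean mask of the lesion ROI
    wm_mask     : boolean mask of the white-matter ROI
    """
    s_lesion = image[lesion_mask].mean()
    s_wm = image[wm_mask].mean()
    return (s_lesion - s_wm) / (s_lesion + s_wm)

# Toy example: a bright lesion (intensity 3) on white matter (intensity 1)
img = np.ones((8, 8))
lesion = np.zeros((8, 8), dtype=bool)
lesion[2:4, 2:4] = True
img[lesion] = 3.0
wm = ~lesion
print(contrast_ratio(img, lesion, wm))  # → 0.5
```

A definition like this makes the reported percentage changes in CR directly interpretable as shifts in relative signal difference between the two tissues.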
ABSTRACT
Quantitative and objective evaluation tools are essential for assessing the performance of machine learning (ML)-based magnetic resonance imaging (MRI) reconstruction methods. However, the commonly used fidelity metrics, such as mean squared error (MSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), often fail to capture fundamental and clinically relevant MR image quality aspects. To address this, we propose evaluation of ML-based MRI reconstruction using digital image quality phantoms and automated evaluation methods. Our phantoms are based upon the American College of Radiology (ACR) large physical phantom but created in k-space to simulate their MR images, and they can vary in object size, signal-to-noise ratio, resolution, and image contrast. Our evaluation pipeline incorporates evaluation metrics of geometric accuracy, intensity uniformity, percentage ghosting, sharpness, signal-to-noise ratio, resolution, and low-contrast detectability. We demonstrate the utility of our proposed pipeline by assessing an example ML-based reconstruction model across various training and testing scenarios. The performance results indicate that training data acquired with a lower undersampling factor and coils of larger anatomical coverage yield a better performing model. The comprehensive and standardized pipeline introduced in this study can help to facilitate a better understanding of the performance and guide future development and advancement of ML-based reconstruction algorithms.
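The core idea of the pipeline above — a digital phantom synthesized in k-space, retrospectively undersampled, reconstructed, and scored with a fidelity metric — can be sketched minimally in numpy (the actual ACR-based phantoms and evaluation metrics are far richer; all names and parameters here are illustrative):

```python
import numpy as np

def disk_phantom(n=128, radius=40):
    """Simple circular digital phantom (loosely ACR-like)."""
    y, x = np.mgrid[:n, :n] - n / 2
    return (x**2 + y**2 <= radius**2).astype(float)

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

img = disk_phantom()
kspace = np.fft.fftshift(np.fft.fft2(img))   # simulated k-space of the phantom
mask = np.zeros_like(img)
mask[:, ::2] = 1                             # 2x uniform line undersampling
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))  # zero-filled recon
print(psnr(img, recon))                      # aliasing lowers the PSNR
```

A learned reconstruction model would replace the zero-filled inverse FFT here, and the automated pipeline would additionally score geometric accuracy, uniformity, ghosting, sharpness, and low-contrast detectability on such phantoms.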
ABSTRACT
Objective. Digital breast tomosynthesis (DBT) has significantly improved the diagnosis of breast cancer due to its high sensitivity and specificity in detecting breast lesions compared to two-dimensional mammography. However, one of the primary challenges in DBT is the image blur resulting from x-ray source motion, particularly in DBT systems with a source in continuous-motion mode. This motion-induced blur can degrade the spatial resolution of DBT images, potentially affecting the visibility of subtle lesions such as microcalcifications. Approach. We addressed this issue by deriving an analytical in-plane source blur kernel for DBT images based on imaging geometry and proposing a post-processing image deblurring method with a generative diffusion model as an image prior. Main results. We showed that the source blur could be approximated by a shift-invariant kernel over the DBT slice at a given height above the detector, and we validated the accuracy of our blur kernel modeling through simulation. We also demonstrated the ability of the diffusion model to generate realistic DBT images. The proposed deblurring method successfully enhanced spatial resolution when applied to DBT images reconstructed with detector blur and correlated noise modeling. Significance. Our study demonstrated the advantages of modeling the imaging system components such as source motion blur for improving DBT image quality.
Subject(s)
Mammography, Mammography/methods, Humans, Diffusion, Image Processing, Computer-Assisted/methods, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/physiopathology, X-Rays, Motion, Female, Motion (Physics)
ABSTRACT
The primary objective of this research was to enhance the quality of semantic segmentation in cytology images by incorporating super-resolution (SR) architectures. An additional contribution was the development of a novel dataset aimed at improving imaging quality in the presence of inaccurate focus. Our experimental results demonstrate that the integration of SR techniques into the segmentation pipeline can lead to a significant improvement of up to 25% in the mean average precision (mAP) metric. These findings suggest that leveraging SR architectures holds great promise for advancing the state-of-the-art in cytology image analysis.
ABSTRACT
Data-driven Artificial Intelligence (AI)/Machine Learning (ML) image analysis approaches have gained a lot of momentum in analyzing microscopy images in bioengineering, biotechnology, and medicine. The success of these approaches crucially relies on the availability of high-quality microscopy images, which is often a challenge due to the diverse experimental conditions and modes under which these images are obtained. In this study, we propose the use of recent ML-based image super-resolution (SR) techniques to improve the image quality of microscopy images, incorporate them into multiple ML-based image analysis tasks, and describe a comprehensive study investigating the impact of SR techniques on the segmentation of microscopy images. The impacts of four Generative Adversarial Network (GAN)- and transformer-based SR techniques on microscopy image quality are measured using three well-established quality metrics. These SR techniques are incorporated into multiple deep network pipelines using supervised, contrastive, and non-contrastive self-supervised methods to semantically segment microscopy images from multiple datasets. Our results show that the image quality of microscopy images has a direct influence on ML model performance and that both supervised and self-supervised network pipelines using SR images perform better by 2%-6% in comparison to baselines not using SR. Based on our experiments, we also establish that the image quality improvement threshold range [20-64] for the complemented Perception-based Image Quality Evaluator (PIQE) metric can be used as a pre-condition by domain experts to incorporate SR techniques to significantly improve segmentation performance. A plug-and-play software platform developed to integrate SR techniques with various deep networks using supervised and self-supervised learning methods is also presented.
ABSTRACT
In this article, we present a new method called the point spread function (PSF)-Radon transform algorithm. This algorithm consists of recovering the instrument PSF from the Radon transform (along the line direction axis) of the line spread function (i.e., the image of a line). We present the method and test it with synthetic images and with real images from a macro-lens camera and from microscopy. A stand-alone program along with a tutorial is available for any interested user in Martinez (PSF-Radon transform algorithm, standalone program). RESEARCH HIGHLIGHTS: Determining the instrument PSF is a key issue. Precise PSF determination is mandatory if image improvement is performed numerically by deconvolution. The method requires much less exposure time to achieve the same performance as a measurement of the PSF from a very small bead. It does not require fitting the PSF with an analytical function to overcome noise uncertainties.
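The key step — projecting the image of a line along the line direction to obtain the LSF, from which the PSF width can be characterized — can be sketched for a synthetic vertical line with a known Gaussian PSF (this illustrates the principle only, not the authors' program; all parameters are made up):

```python
import numpy as np

n, sigma_true = 256, 2.5
x = np.arange(n) - n / 2

# Synthetic image of a vertical line blurred by a Gaussian PSF:
# every row carries the same Gaussian profile, so the line image is
# the 1D line spread function replicated along the line direction.
lsf_profile = np.exp(-x**2 / (2 * sigma_true**2))
image = np.tile(lsf_profile, (n, 1))

# Radon projection along the line direction (sum over rows) -> LSF
lsf = image.sum(axis=0)
lsf /= lsf.max()

# Estimate the Gaussian sigma from the second moment of the LSF
w = lsf / lsf.sum()
mu = np.sum(w * x)
sigma_est = np.sqrt(np.sum(w * (x - mu) ** 2))
print(sigma_est)  # close to sigma_true = 2.5
```

Summing many rows is what buys the exposure-time advantage mentioned in the highlights: noise averages down along the line direction before the width is estimated.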
ABSTRACT
Purpose: The objective of this research was to investigate the efficacy of various parameter combinations of convolutional neural network (CNN) models, namely MobileNet and DenseNet121, and different input image resolutions (REZs), ranging from 64×64 to 512×512 pixels, for diagnosing breast cancer. Materials and methods: During the period of June 2015 to November 2020, two hospitals were involved in the collection of two-dimensional ultrasound breast images for this retrospective multicenter study. The diagnostic performance of the MobileNet and DenseNet121 models was compared at different resolutions. Results: The results showed that MobileNet had the best breast cancer diagnosis performance at a 320×320 pixel REZ and DenseNet121 had the best breast cancer diagnosis performance at a 448×448 pixel REZ. Conclusion: Our study reveals a significant correlation between image resolution and breast cancer diagnosis accuracy. The comparison of MobileNet and DenseNet121 highlights that lightweight neural networks (LW-CNNs) can achieve performance similar to, or even slightly better than, heavyweight neural network models (HW-CNNs) on ultrasound images, while LW-CNNs' prediction time per image is lower.
ABSTRACT
Purpose: Deep learning (DL) models have received much attention lately for their ability to achieve expert-level performance on the accurate automated analysis of chest X-rays (CXRs). Recently available public CXR datasets include high resolution images, but state-of-the-art models are trained on reduced size images due to limitations on graphics processing unit memory and training time. As computing hardware continues to advance, it has become feasible to train deep convolutional neural networks on high-resolution images without sacrificing detail by downscaling. This study examines the effect of increased resolution on CXR classification performance. Approach: We used the publicly available MIMIC-CXR-JPG dataset, comprising 377,110 high resolution CXR images for this study. We applied image downscaling from native resolution to 2048×2048 pixels, 1024×1024 pixels, 512×512 pixels, and 256×256 pixels and then we used the DenseNet121 and EfficientNet-B4 DL models to evaluate clinical task performance using these four downscaled image resolutions. Results: We find that while some clinical findings are more reliably labeled using high resolutions, many other findings are actually labeled better using downscaled inputs. We qualitatively verify that tasks requiring a large receptive field are better suited to downscaled low resolution input images, by inspecting effective receptive fields and class activation maps of trained models. Finally, we show that stacking an ensemble across resolutions outperforms each individual learner at all input resolutions while providing interpretable scale weights, indicating that diverse information is extracted across resolutions. Conclusions: This study suggests that instead of focusing solely on the finest image resolutions, multi-scale features should be emphasized for information extraction from high-resolution CXRs.
ABSTRACT
BACKGROUND: Alzheimer's disease-related pattern (ADRP) is a metabolic brain biomarker of Alzheimer's disease (AD). While ADRP is being introduced into research, the effect of the size of the identification cohort and the effect of the resolution of identification and validation images on ADRP's performance need to be clarified. METHODS: 240 2-[18F]fluoro-2-deoxy-D-glucose positron emission tomography images [120 AD/120 cognitively normal (CN) subjects] were selected from the Alzheimer's Disease Neuroimaging Initiative database. A total of 200 images (100 AD/100 CN) were used to identify different versions of ADRP using a scaled subprofile model/principal component analysis. For this purpose, five identification groups were randomly selected 25 times. The identification groups differed in the number of images (20 AD/20 CN, 30 AD/30 CN, 40 AD/40 CN, 60 AD/60 CN, and 80 AD/80 CN) and image resolutions (6, 8, 10, 12, 15 and 20 mm). A total of 750 ADRPs were identified and validated through the area under the curve (AUC) values on the remaining 20 AD/20 CN with six different image resolutions. RESULTS: ADRP's performance for the differentiation between AD patients and CN demonstrated only a marginal average AUC increase when the number of subjects in the identification group increased (an AUC increase of about 0.03 from 20 AD/20 CN to 80 AD/80 CN). However, the average of the lowest five AUC values increased with an increasing number of participants (an AUC increase of about 0.07 from 20 AD/20 CN to 30 AD/30 CN and an additional 0.02 from 30 AD/30 CN to 40 AD/40 CN). The resolution of the identification images affects ADRP's diagnostic performance only marginally in the range from 8 to 15 mm. ADRP's performance stayed optimal even when applied to validation images of a resolution differing from that of the identification images.
CONCLUSIONS: While small identification cohorts (20 AD/20 CN images) may be adequate for a favorable selection of cases, larger cohorts (at least 30 AD/30 CN images) should be preferred to overcome possible random biological differences and improve ADRP's diagnostic performance. ADRP's performance stays stable even when applied to validation images with a resolution different from that of the identification images.
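The AUC values above quantify how well ADRP expression separates AD patients from controls; as a reminder of what this computes, here is a minimal numpy sketch of the AUC via the Mann-Whitney statistic (the scores are made-up toy numbers, not study data):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# Toy example: pattern expression scores for 4 AD patients vs 4 controls
ad = [2.1, 1.8, 1.5, 0.9]
cn = [1.0, 0.7, 0.4, 0.2]
print(auc(ad, cn))  # → 0.9375 (15 of 16 pairs correctly ordered)
```

An AUC of 0.5 corresponds to chance-level separation and 1.0 to perfect separation, which is why shifts of 0.02-0.07 in the lowest AUC values are meaningful.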
ABSTRACT
Animal and human tissues are used extensively in physiological and pathophysiological research. Due to both ethical considerations and low availability, it is essential to maximize the use of these tissues. Therefore, the aim was to develop a new method allowing for multiplex immunofluorescence (IF) staining of kidney sections in order to reuse the same tissue section multiple times. The paraffin-embedded kidney sections were placed onto coated coverslips, and multiplex IF staining was performed. Five rounds of staining were performed, where each round consisted of indirect antibody labeling, imaging on a widefield epifluorescence microscope, removal of the antibodies using a stripping buffer, and re-staining. In the final round, the tissue was stained with hematoxylin/eosin. Using this method, tubular segments of the nephron, blood vessels, and interstitial cells were labeled. Furthermore, by placing the tissue on coverslips, confocal-like resolution was obtained using a conventional widefield epifluorescence microscope and a 60x oil objective. Thus, using standard reagents and equipment, paraffin-embedded tissue was used for multiplex IF staining with increased Z-resolution. In summary, this method offers time-saving multiplex IF staining and allows for the retrieval of both quantitative and spatial expression information for multiple proteins and, subsequently, for an assessment of tissue morphology. Due to the simplicity and integrated effectiveness of this multiplex IF protocol, it holds the potential to supplement standard IF staining protocols and maximize the use of tissue.
Subject(s)
Kidney, Animals, Humans, Paraffin Embedding/methods, Staining and Labeling, Fluorescent Antibody Technique
ABSTRACT
Deep learning (DL) models are state-of-the-art in segmenting anatomical and disease regions of interest (ROIs) in medical images. Particularly, a large number of DL-based techniques have been reported using chest X-rays (CXRs). However, these models are reportedly trained on reduced image resolutions for reasons related to the lack of computational resources. Literature is sparse in discussing the optimal image resolution to train these models for segmenting the tuberculosis (TB)-consistent lesions in CXRs. In this study, we investigated the performance variations with an Inception-V3 UNet model using various image resolutions with/without lung ROI cropping and aspect ratio adjustments and identified the optimal image resolution through extensive empirical evaluations to improve TB-consistent lesion segmentation performance. We used the Shenzhen CXR dataset for the study, which includes 326 normal patients and 336 TB patients. We proposed a combinatorial approach consisting of storing model snapshots, optimizing segmentation threshold and test-time augmentation (TTA), and averaging the snapshot predictions, to further improve performance with the optimal resolution. Our experimental results demonstrate that higher image resolutions are not always necessary; however, identifying the optimal image resolution is critical to achieving superior performance.
ABSTRACT
Estimating gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging, partly because a video camera is limited in spatial and temporal resolution and because the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (≪ 1 deg). Results highlight how inaccuracies in center localization are related to 1) how many pixels the pupil and CR span in the eye camera image, 2) the method used to compute the centers of the pupil and CRs, and 3) the level of image noise. Our results provide a possible explanation for why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that eye movements with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, given that the level of image noise is low and the pupil and CR span enough pixels in the eye camera image, or if localization of the CR is based on the intensity values in the eye image instead of a binary representation.
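The conclusion above — intensity-based localization beats a binary representation — can be illustrated with a toy subpixel example (a hedged sketch, not the authors' simulation; the blob stands in for a corneal reflection and all parameters are invented):

```python
import numpy as np

def make_blob(n=32, cx=15.3, cy=16.7, sigma=2.0):
    """Gaussian spot at a subpixel position (cx, cy), e.g. a corneal reflection."""
    y, x = np.mgrid[:n, :n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma**2))

img = make_blob()
y, x = np.mgrid[: img.shape[0], : img.shape[1]]

# Intensity-weighted centroid: uses the grey levels, keeps subpixel detail
cx_i = (img * x).sum() / img.sum()
cy_i = (img * y).sum() / img.sum()

# Binary centroid: thresholding discards the subpixel information
mask = img > 0.5
cx_b = x[mask].mean()
cy_b = y[mask].mean()

print(cx_i, cy_i)  # ≈ 15.3, 16.7 (true subpixel position)
print(cx_b, cy_b)  # quantized toward the pixel grid
```

In a noise-free image the intensity-weighted centroid recovers the true subpixel position almost exactly, while the binary centroid is off by a few hundredths of a pixel even here; with noise and few pixels spanned, that gap grows, which is exactly the regime where small saccade amplitudes get distorted.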
Subject(s)
Eye Movements, Saccades, Humans, Computer Simulation, Pupil
ABSTRACT
BACKGROUND: Improvements in angiographic imaging systems technology provide options to decrease radiation exposure. The effect of these variations on image resolution is unknown. METHODS: Using an American National Standards Institute phantom together with high-contrast (line-pair) and low-contrast (Gammex 151) phantoms, 5-second images were acquired on a Philips Allura angiographic suite using fluoroscopic capture (FC) as well as cineangiography (CA), in posterior-anterior (PA) and left anterior oblique (LAO) projections and at high and low table positions. Image resolution was ranked by three independent trained observers blinded to the purpose of the assessments. Comparative analyses were performed. Interobserver agreement was evaluated. RESULTS: High-contrast image resolution was significantly lower with FC compared to CA (median [interquartile range], 1.69 [1.52-1.69] mm vs 2.09 [1.88-2.09] mm, P < 0.001). Low-contrast resolution was also lower with FC compared to CA (5 [6.5-5] vs 3 [5-3] mm, P < 0.001). No significant differences in high-contrast or low-contrast resolution were noted between PA and LAO projections, or between high and low table positions. Both low- and high-contrast image resolution improved with higher radiation exposure. Good interobserver agreement was noted (Fleiss kappa ranging from 0.69-0.74). CONCLUSION: Image resolution was perceived to be better with CA compared to FC, although it was not significantly affected by beam angulation or table height. Aligning resolution needs with imaging modality and maximizing table height may improve procedural efficacy and safety.
Subject(s)
Phantoms, Imaging, Coronary Angiography, Fluoroscopy, Humans, Radiation Dosage
ABSTRACT
BACKGROUND: Despite advances in techniques for the indirect estimation of leaf area, destructive measurement approaches remain the reference and most accurate methods. However, even modern sensors and applications usually require the laborious and time-consuming practice of unfolding and analyzing single leaves separately. In the present study, a volumetric approach was tested to determine the leaf area of a pile of leaves based on the ratio of leaf volume to leaf thickness. For this purpose, the suspension technique was used for volumetry, which relies on the simple practice and calculations of Archimedes' principle. RESULTS: Wheat volumetric leaf area (VLA) showed high agreement and an approximately 1:1 correlation with the conventionally measured optical leaf area (OLA). Excluding the midrib volume from the calculations did not affect the estimation error (NRMSE < 2.61%), but it improved the slope of the linear model by about 6% and reduced the bias between the methods. The sampling error for determining the mean leaf thickness of the pile was also less than 2% throughout the season. In addition, a more practical and convenient version of pile volumetry was tested using a Specific Gravity Bench (SGB), which is currently available as laboratory equipment. As an important and expected observation, given that leaves expand essentially in a 2D plane, the variations in OLA exactly followed the pattern of changes in leaf volume. Accordingly, it is suggested that the relative leaf areas of various experimental treatments may be compared directly on the basis of volume, independently of leaf thickness. Furthermore, no considerable difference was observed among the OLAs measured using various image resolutions (NRMSE < 0.212%), which indicates that even fast scanners with resolutions as low as 200 dpi may be used for precise optical measurement of leaf area.
CONCLUSIONS: It is expected that the reliable and simple concept of volumetric leaf area, with which measurement time can be independent of sample size, will facilitate the laborious practice of leaf area measurement and, consequently, improve the precision of field experiments.
ABSTRACT
Objective. Modern preclinical small animal radiation platforms utilize cone beam computerized tomography (CBCT) for image guidance and experiment planning purposes. The resolution of CBCT images is of particular importance for visualizing fine animal anatomical structures. One major cause of spatial resolution reduction is the finite size of the x-ray focal spot. In this work, we proposed a simple method to measure the x-ray focal spot intensity map and a CBCT image domain deblurring model to mitigate the effect of focal spot-induced image blurring. Approach. We measured a projection image of a tungsten ball bearing using the flat panel detector of the CBCT platform. We built a forward blurring model of the projection image and derived the spot intensity map by deconvolving the measured projection image. Based on the measured spot intensity map, we derived a CBCT image domain blurring model for images reconstructed by the filtered backprojection algorithm. Based on this model, we computed the image domain blurring kernel and improved the CBCT image resolution by deconvolving the CBCT image. Main results. We successfully measured the x-ray focal spot intensity map. The spot size characterized by full width at half maximum was ~0.75 × 0.55 mm² at 40 kVp. We computed the image domain convolution kernels caused by the x-ray focal spot. A simulation study on noiseless projections was performed to evaluate the spatial resolution improvement exclusively by the focal spot kernel, and the modulation transfer function (MTF) at 50% was increased from 1.40 to 1.65 mm⁻¹ for in-plane images and from 1.05 to 1.32 mm⁻¹ for cross-plane images. Experimental studies on a CT insert phantom and a plastinated mouse phantom demonstrated improved spatial resolution after image domain deconvolution, as indicated by visually improved resolution of fine structures. MTF at 50% was improved from 1.00 to 1.12 mm⁻¹ for the in-plane direction and from 0.72 to 0.84 mm⁻¹ for the cross-plane direction. Significance. The proposed method to mitigate blurring caused by the finite x-ray spot size and improve CBCT image resolution is simple and effective.
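Image-domain deconvolution with a known shift-invariant kernel, as used above, can be sketched with a simple frequency-domain Wiener filter in numpy (the paper's model-derived kernel and deconvolution are more elaborate; this stand-in uses a Gaussian kernel and made-up parameters):

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Centered 2D Gaussian blur kernel on an n x n grid, normalized to sum 1."""
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with a known blur kernel.

    nsr is the assumed noise-to-signal ratio that regularizes the inverse.
    """
    K = np.fft.fft2(np.fft.ifftshift(kernel))
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)   # regularized inverse filter
    return np.real(np.fft.ifft2(B * W))

# Toy test pattern blurred (circularly) by a known kernel, then restored
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) > 0.5).astype(float)
k = gaussian_kernel(64, sigma=1.5)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(k))))
restored = wiener_deconvolve(blurred, k)

print(np.mean((blurred - img) ** 2), np.mean((restored - img) ** 2))
```

The restored image is closer to the original than the blurred one; in practice the nsr term controls how aggressively suppressed high frequencies (and their noise) are amplified back.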
Subject(s)
Algorithms, Cone-Beam Computed Tomography, Animals, Computer Simulation, Cone-Beam Computed Tomography/methods, Image Processing, Computer-Assisted/methods, Mice, Phantoms, Imaging, X-Rays
ABSTRACT
Determining spatial resolution from images is crucial when optimizing focus, determining the smallest resolvable object, and assessing size measurement uncertainties. However, no standard algorithm exists to measure resolution from electron microscopy (EM) images, though several have been proposed, most of which require user decisions. We present the Spatial Image Resolution Assessment by Fourier analysis (SIRAF) algorithm, which uses fast Fourier transform analysis to estimate resolution directly from a single image without user inputs. The method is derived from the underlying assumption that objects display intensity transitions resembling a step function blurred by a Gaussian point spread function. This hypothesis is tested and verified on simulated EM images with known resolution. To identify potential pitfalls, the algorithm is also tested on simulated images with a variety of settings, and on real SEM images acquired at different magnification and defocus settings. Finally, the versatility of the method is investigated by assessing resolution in images from several microscopy techniques. It is concluded that the algorithm can assess resolution from a large selection of image types, thereby providing a measure of this fundamental image parameter. It may also improve autofocus methods and guide the optimization of magnification settings when balancing spatial resolution and field of view.
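The underlying assumption — a Gaussian PSF imprinting a predictable falloff on the Fourier spectrum — can be illustrated with a toy numpy sketch that recovers a known blur width from the log-linear decay of the power spectrum (this mimics the principle only; SIRAF itself is more involved, and all parameters here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_true = 256, 2.0

# White-noise "image" blurred by a Gaussian PSF, applied in Fourier space
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
f2 = fx[None, :] ** 2 + fy[:, None] ** 2
mtf = np.exp(-2 * np.pi**2 * sigma_true**2 * f2)   # Gaussian MTF
img = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * mtf))

# For a flat input spectrum, log power is linear in f^2 with
# slope -4*pi^2*sigma^2, so sigma follows from a straight-line fit.
power = np.abs(np.fft.fft2(img)) ** 2
keep = (f2 > 0) & (f2 < 0.02)                      # low/mid frequencies, skip DC
slope = np.polyfit(f2[keep], np.log(power[keep]), 1)[0]
sigma_est = np.sqrt(-slope / (4 * np.pi**2))
print(sigma_est)  # close to sigma_true = 2.0
```

Real images do not have flat underlying spectra, which is why a practical algorithm must model the object content (here, the step-edge assumption) rather than fit the raw spectrum directly.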
ABSTRACT
The generative adversarial network (GAN) has demonstrated superb performance in generating synthetic images in recent studies. However, in the conventional GAN framework, the maximum resolution of generated images is limited to the resolution of the real images used as the training set. In this paper, to address this limitation, we propose a novel GAN framework using a pre-trained network called an evaluator. The proposed model, higher resolution GAN (HRGAN), employs additional up-sampling convolutional layers to generate higher resolutions. Then, using the evaluator, an additional target for the training of the generator is introduced to calibrate the generated images to have realistic features. In experiments with the CIFAR-10 and CIFAR-100 datasets, HRGAN successfully generates images of 64 × 64 and 128 × 128 resolution, while the training sets consist of images of 32 × 32 resolution. In addition, HRGAN outperforms other existing models in terms of the Inception score, one of the conventional methods for evaluating GANs. For instance, in the experiment with CIFAR-10, an HRGAN generating 128 × 128 resolution demonstrates an Inception score of 12.32, outperforming an existing model by 28.6%. Thus, the proposed HRGAN demonstrates the possibility of generating images of higher resolution than the training images.
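The Inception score used for evaluation combines per-image confidence with class diversity, IS = exp(E_x[KL(p(y|x) ‖ p(y))]); a minimal numpy sketch with made-up probability vectors (not actual Inception-v3 outputs):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception score from classifier softmax outputs.

    probs : (n_images, n_classes) array of per-image class probabilities
    """
    p_y = probs.mean(axis=0)   # marginal class distribution over all images
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident and diverse predictions -> score approaches the class count
confident = np.eye(4).repeat(25, axis=0)   # 100 one-hot "images", 4 classes
# Uninformative predictions -> score of 1
uniform = np.full((100, 4), 0.25)

print(inception_score(confident))  # → 4.0
print(inception_score(uniform))    # → 1.0
```

The score is bounded above by the number of classes, which is why values like 12.32 on CIFAR-10 (10 classes, Inception-v3's 1000-class output) are read relative to competing models rather than in absolute terms.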