Results 1 - 20 of 155
1.
ArXiv ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38745699

ABSTRACT

Background: The findings of the 2023 AAPM Grand Challenge on Deep Generative Modeling for Learning Medical Image Statistics are reported in this Special Report. Purpose: The goal of this challenge was to promote the development of deep generative models (DGMs) for medical imaging and to emphasize the need for their domain-relevant assessments via the analysis of relevant image statistics. Methods: As part of this Grand Challenge, a common training dataset and an evaluation procedure were developed for benchmarking deep generative models for medical image synthesis. To create the training dataset, an established 3D virtual breast phantom was adapted. The resulting dataset comprised about 108,000 images of size 512×512. For the evaluation of submissions to the Challenge, an ensemble of 10,000 DGM-generated images from each submission was employed. The evaluation procedure consisted of two stages. In the first stage, a preliminary check for memorization and image quality (via the Fréchet Inception Distance (FID)) was performed. Submissions that passed the first stage were then evaluated for the reproducibility of image statistics corresponding to several feature families, including texture, morphology, image moments, fractal statistics, and skeleton statistics. A summary measure in this feature space was employed to rank the submissions. Additional analyses of submissions were performed to assess DGM performance specific to individual feature families, the four classes in the training data, and also to identify various artifacts. Results: Fifty-eight submissions from 12 unique users were received for this Challenge. Of these, 9 submissions passed the first stage of evaluation and were eligible for ranking. The top-ranked submission employed a conditional latent diffusion model, whereas the joint runners-up employed a generative adversarial network followed by another network for image super-resolution. In general, we observed that the overall ranking of the top 9 submissions according to our evaluation method (i) did not match the FID-based ranking, and (ii) differed with respect to individual feature families. Another important finding from our additional analyses was that different DGMs demonstrated similar kinds of artifacts. Conclusions: This Grand Challenge highlighted the need for domain-specific evaluation to further DGM design as well as deployment. It also demonstrated that the specification of a DGM may differ depending on its intended use.
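
The first-stage screen above relies on the Fréchet Inception Distance (FID). As a hedged illustration only (the Challenge's exact feature extractor and preprocessing are not described here, so the feature arrays below are assumed inputs), a minimal sketch of the FID computation between two feature ensembles might look like:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Frechet distance between Gaussians fit to two feature ensembles.

    feats_real, feats_gen: arrays of shape (n_samples, n_features),
    e.g. pooled deep-network features of real and DGM-generated images.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the covariance product; drop tiny imaginary parts.
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

A lower value indicates that the Gaussian fits to the two feature ensembles are closer, which is why FID serves as a coarse image-quality screen before the feature-family analyses.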

2.
J Biomed Opt ; 29(4): 046001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38585417

ABSTRACT

Significance: Endoscopic screening for esophageal cancer (EC) may enable early cancer diagnosis and treatment. While optical microendoscopic technology has shown promise in improving specificity, the limited field of view (<1 mm) significantly reduces the ability to survey large areas efficiently in EC screening. Aim: To improve the efficiency of endoscopic screening, we propose a novel concept of an end-expandable endoscopic optical fiber probe for a larger field of visualization and, for the first time, evaluate a deep-learning-based image super-resolution (DL-SR) method to overcome the issue of limited sampling capability. Approach: To demonstrate the feasibility of the end-expandable optical fiber probe, DL-SR was applied to simulated low-resolution microendoscopic images to generate super-resolved (SR) ones. By varying the degradation model of image data acquisition, we identified the optimal parameters for optical fiber probe prototyping. The proposed screening method was validated with a human pathology reading study. Results: For the various degradation parameters considered, the DL-SR method demonstrated different levels of improvement in traditional measures of image quality. The endoscopists' interpretations of the SR images were comparable to those performed on the high-resolution ones. Conclusions: This work suggests avenues for the development of DL-SR-enabled sparse image reconstruction to improve high-yield EC screening and similar clinical applications.
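
The abstract mentions varying the degradation model used to simulate low-resolution acquisitions, but does not spell that model out. As an illustrative assumption only, a simple blur-plus-downsample degradation (a Gaussian point-spread function followed by sparse sampling) could be sketched as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, blur_sigma=1.5, factor=4):
    """Simulate a low-resolution acquisition: optical blur, then sparse sampling.

    image: 2-D float array (high-resolution ground truth).
    blur_sigma: standard deviation of the assumed Gaussian PSF, in pixels.
    factor: integer downsampling factor (keeps every `factor`-th pixel).
    """
    blurred = gaussian_filter(image.astype(np.float64), sigma=blur_sigma)
    return blurred[::factor, ::factor]
```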


Subject(s)
Barrett Esophagus , Deep Learning , Esophageal Neoplasms , Humans , Optical Fibers , Esophageal Neoplasms/diagnostic imaging , Barrett Esophagus/pathology , Image Processing, Computer-Assisted
3.
Nat Commun ; 15(1): 2932, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38575577

ABSTRACT

Ultrasound localization microscopy (ULM) enables deep tissue microvascular imaging by localizing and tracking intravenously injected microbubbles circulating in the bloodstream. However, conventional localization techniques require spatially isolated microbubbles, resulting in prolonged imaging time to obtain detailed microvascular maps. Here, we introduce LOcalization with Context Awareness (LOCA)-ULM, a deep learning-based microbubble simulation and localization pipeline designed to enhance localization performance at high microbubble concentrations. In silico, LOCA-ULM enhanced microbubble detection accuracy to 97.8% and reduced the missing rate to 23.8%, outperforming conventional and deep learning-based localization methods by up to 17.4% in accuracy and 37.6% in missing-rate reduction. In in vivo rat brain imaging, LOCA-ULM revealed dense cerebrovascular networks and spatially adjacent microvessels undetected by conventional ULM. We further demonstrate the superior localization performance of LOCA-ULM in functional ULM (fULM), where LOCA-ULM significantly increased the functional imaging sensitivity of fULM to hemodynamic responses evoked by whisker stimulations in the rat brain.


Subject(s)
Deep Learning , Microscopy , Rats , Animals , Microscopy/methods , Microbubbles , Ultrasonography/methods , Intravital Microscopy , Microvessels/diagnostic imaging
4.
Commun Biol ; 7(1): 268, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443460

ABSTRACT

The combination of a good-quality embryo and proper maternal health factors promises higher chances of a successful in vitro fertilization (IVF) procedure leading to clinical pregnancy and live birth. Of these two factors, selection of a good embryo is a controllable aspect. The current gold standard in clinical practice is visual assessment of an embryo based on its morphological appearance by trained embryologists. More recently, machine learning has been incorporated into embryo selection "packages". Here, we report EVATOM: a machine-learning-assisted embryo health assessment tool utilizing an optical quantitative phase imaging technique called artificial confocal microscopy (ACM). We present a label-free nucleus detection method with, to the best of our knowledge, novel quantitative embryo health biomarkers. Two viability assessment models are presented for grading embryos into two classes: the healthy/intermediate (H/I) class or the sick (S) class. The models achieve weighted F1 scores of 1.0 and 0.99, respectively, on the in-distribution test set of 72 fixed embryos, and weighted F1 scores of 0.9 and 0.95, respectively, on the out-of-distribution test dataset of 19 time-instances from 8 live embryos.
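
The reported figure of merit is the weighted F1 score. As a minimal, hedged illustration (the labels below are hypothetical and not the study's data), scikit-learn's f1_score with average="weighted" computes it as the class-frequency-weighted mean of per-class F1 scores:

```python
from sklearn.metrics import f1_score

# Hypothetical labels: 0 = healthy/intermediate (H/I), 1 = sick (S).
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 0, 1, 0, 0, 1, 0, 1]

# "weighted" averages the per-class F1 scores, weighting each class by its
# number of true instances, so class imbalance is accounted for.
print(f1_score(y_true, y_pred, average="weighted"))
```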


Subject(s)
Embryo, Mammalian , Fertilization in Vitro , Female , Pregnancy , Humans , Health Status , Machine Learning , Microscopy, Confocal
5.
ArXiv ; 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38313204

ABSTRACT

BACKGROUND: Wide-field calcium imaging (WFCI) with genetically encoded calcium indicators allows for spatiotemporal recordings of neuronal activity in mice. When applied to the study of sleep, WFCI data are manually scored into the sleep states of wakefulness, non-REM (NREM) and REM by use of adjunct EEG and EMG recordings. However, this process is time-consuming, invasive and often suffers from low inter- and intra-rater reliability. Therefore, an automated sleep state classification method that operates on spatiotemporal WFCI data is desired. NEW METHOD: A hybrid network architecture consisting of a convolutional neural network (CNN) to extract spatial features of image frames and a bidirectional long short-term memory network (BiLSTM) with an attention mechanism to identify temporal dependencies among different time points was proposed to classify WFCI data into states of wakefulness, NREM and REM sleep. RESULTS: Sleep states were classified with an accuracy of 84% and a Cohen's kappa of 0.64. Gradient-weighted class activation maps revealed that the frontal region of the cortex carries more importance when classifying WFCI data into NREM sleep, while the posterior area contributes most to the identification of wakefulness. The attention scores indicated that the proposed network focuses on short- and long-range temporal dependencies in a state-specific manner. COMPARISON WITH EXISTING METHOD: On a 3-hour WFCI recording, the CNN-BiLSTM achieved a kappa of 0.67, comparable to the kappa of 0.65 obtained by human EEG/EMG-based scoring. CONCLUSIONS: The CNN-BiLSTM effectively classifies sleep states from spatiotemporal WFCI data and will enable broader application of WFCI in sleep research.
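
As a hedged sketch of the kind of hybrid architecture described above (layer sizes, frame dimensions, and the attention formulation here are illustrative assumptions, not the authors' exact network), a per-frame CNN feeding a bidirectional LSTM with additive attention over time could be written as:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """CNN frame encoder + bidirectional LSTM + additive attention + classifier."""

    def __init__(self, n_classes=3, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame spatial features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # -> (batch*frames, 32)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # scalar score per time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                              # x: (batch, frames, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)                      # (b, t, 2*hidden)
        weights = torch.softmax(self.attn(seq), dim=1) # attention over time
        context = (weights * seq).sum(dim=1)
        return self.head(context)

logits = CNNBiLSTM()(torch.randn(2, 10, 1, 64, 64))    # 2 clips, 10 frames each
```

The softmax-normalized scores play the role of the attention weights whose state-specific distribution the abstract refers to.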

6.
Article in English | MEDLINE | ID: mdl-38415197

ABSTRACT

Over the past two decades, Biomedical Engineering has emerged as a major discipline that bridges societal needs of human health care with the development of novel technologies. Every medical institution is now equipped, at varying degrees of sophistication, with the ability to monitor human health in both non-invasive and invasive modes. The multiple scales at which human physiology can be interrogated provide a profound perspective on health and disease. We are at the nexus of creating "avatars" (herein defined as an extension of "digital twins") of human patho/physiology to serve as paradigms for interrogation and potential intervention. Motivated by the emergence of these new capabilities, the IEEE Engineering in Medicine and Biology Society and the Departments of Biomedical Engineering at Johns Hopkins University and of Bioengineering at the University of California, San Diego sponsored an interdisciplinary workshop to define the grand challenges that face biomedical engineering and the mechanisms to address these challenges. The Workshop identified five grand challenges with cross-cutting themes and provided a roadmap for new technologies, identified new training needs, and defined the types of interdisciplinary teams needed for addressing these challenges. The themes presented in this paper include: 1) accumedicine through creation of avatars of cells, tissues, organs and the whole human; 2) development of smart and responsive devices for human function augmentation; 3) exocortical technologies to understand brain function and treat neuropathologies; 4) the development of approaches to harness the human immune system for health and wellness; and 5) new strategies to engineer genomes and cells.

7.
J Biomed Opt ; 29(Suppl 1): S11516, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38249994

ABSTRACT

Significance: Dynamic photoacoustic computed tomography (PACT) is a valuable imaging technique for monitoring physiological processes. However, current dynamic PACT imaging techniques are often limited to two-dimensional spatial imaging. Although volumetric PACT imagers are commercially available, these systems typically employ a rotating measurement gantry in which the tomographic data are acquired sequentially rather than simultaneously at all views. Because the dynamic object varies during the data-acquisition process, the sequential data-acquisition process poses substantial challenges to image reconstruction associated with data incompleteness. The proposed image reconstruction method is highly significant in that it will address these challenges and enable volumetric dynamic PACT imaging with existing preclinical imagers. Aim: The aim of this study is to develop a spatiotemporal image reconstruction (STIR) method for dynamic PACT that can be applied to commercially available volumetric PACT imagers that employ a sequential scanning strategy. The proposed reconstruction method aims to overcome the challenges caused by the limited number of tomographic measurements acquired per frame. Approach: A low-rank matrix estimation-based STIR (LRME-STIR) method is proposed to enable dynamic volumetric PACT. The LRME-STIR method leverages the spatiotemporal redundancies in the dynamic object to accurately reconstruct a four-dimensional (4D) spatiotemporal image. Results: The conducted numerical studies substantiate the LRME-STIR method's efficacy in reconstructing 4D dynamic images from tomographic measurements acquired with a rotating measurement gantry. The experimental study demonstrates the method's ability to faithfully recover the flow of a contrast agent at a frame rate of 10 frames per second, even when only a single tomographic measurement per frame is available. Conclusions: The proposed LRME-STIR method offers a promising solution to the challenges of enabling 4D dynamic imaging with commercially available volumetric PACT imagers. By enabling accurate spatiotemporal image reconstructions, this method has the potential to significantly advance preclinical research and facilitate the monitoring of critical physiological biomarkers.
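
The LRME-STIR algorithm itself is not detailed in the abstract; as a hedged illustration of the underlying low-rank idea only (not the authors' method), spatiotemporal redundancy can be exploited by arranging the dynamic sequence as a Casorati matrix (voxels × frames) and forming a truncated-SVD, low-rank approximation:

```python
import numpy as np

def low_rank_approximation(casorati, rank):
    """Best rank-`rank` approximation (in the least-squares sense) of a
    Casorati matrix whose columns are vectorized image frames (voxels x frames)."""
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Hypothetical dynamic sequence: 1000 voxels imaged over 200 frames.
frames = np.random.rand(1000, 200)
approx = low_rank_approximation(frames, rank=5)
print(np.linalg.norm(frames - approx) / np.linalg.norm(frames))
```

In an actual reconstruction the low-rank constraint would be imposed jointly with data consistency against the per-frame tomographic measurements, rather than applied to a fully known image sequence as in this sketch.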


Subject(s)
Cone-Beam Computed Tomography , Tomography, X-Ray Computed , Contrast Media , Image Processing, Computer-Assisted
8.
IEEE Trans Med Imaging ; 43(5): 1753-1765, 2024 May.
Article in English | MEDLINE | ID: mdl-38163307

ABSTRACT

Interpretability is highly desired for deep neural network-based classifiers, especially when addressing high-stakes decisions in medical imaging. Commonly used post-hoc interpretability methods have the limitation that they can produce plausible but different interpretations of a given model, leading to ambiguity about which one to choose. To address this problem, a novel decision-theory-inspired approach is investigated to establish a self-interpretable model, given a pre-trained deep binary black-box medical image classifier. This approach involves utilizing a self-interpretable encoder-decoder model in conjunction with a single-layer fully connected network with unity weights. The model is trained to estimate the test statistic of the given trained black-box deep binary classifier while maintaining a similar accuracy. The decoder output image, referred to as an equivalency map, represents a transformed version of the to-be-classified image that, when processed by the fixed fully connected layer, produces the same test statistic value as the original classifier. The equivalency map provides a visualization of the transformed image features that directly contribute to the test statistic value and, moreover, permits quantification of their relative contributions. Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative. Detailed quantitative and qualitative analyses have been performed with three different medical image binary classification tasks.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Image Interpretation, Computer-Assisted/methods , Deep Learning
9.
IEEE Trans Biomed Eng ; 71(6): 1969-1979, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38265912

ABSTRACT

OBJECTIVE: To develop a new method that integrates subspace and generative image models for high-dimensional MR image reconstruction. METHODS: We proposed a formulation that synergizes a low-dimensional subspace model of high-dimensional images, an adaptive generative image prior serving as spatial constraints on the sequence of "contrast-weighted" images or spatial coefficients of the subspace model, and a conventional sparsity regularization. A special pretraining plus subject-specific network adaptation strategy was proposed to construct an accurate generative-network-based representation for images with varying contrasts. An iterative algorithm was introduced to jointly update the subspace coefficients and the multi-resolution latent space of the generative image model, leveraging a recently proposed intermediate layer optimization technique for network inversion. RESULTS: We evaluated the utility of the proposed method for two high-dimensional imaging applications: accelerated MR parameter mapping and high-resolution MR spectroscopic imaging. Improved performance over state-of-the-art subspace-based methods was demonstrated in both cases. CONCLUSION: The proposed method provided a new way to address high-dimensional MR image reconstruction problems by incorporating an adaptive generative model as a data-driven spatial prior for constraining subspace reconstruction. SIGNIFICANCE: Our work demonstrated the potential of integrating data-driven and adaptive generative priors with canonical low-dimensional modeling for high-dimensional imaging problems.


Subject(s)
Algorithms , Brain , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Humans , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging
10.
bioRxiv ; 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38076976

ABSTRACT

Modern neuroimaging modalities, particularly functional MRI (fMRI), can decode detailed human experiences. Thousands of viewed images can be identified or classified, and sentences can be reconstructed. Decoding paradigms often leverage encoding models that reduce the stimulus space into a smaller yet generalizable feature set. However, the neuroimaging devices used for detailed decoding are non-portable, like fMRI, or invasive, like electrocorticography, precluding application in naturalistic settings. Wearable, non-invasive, but lower-resolution devices such as electroencephalography and functional near-infrared spectroscopy (fNIRS) have been limited to decoding between stimuli used during training. Herein we develop and evaluate model-based decoding with high-density diffuse optical tomography (HD-DOT), a higher-resolution expansion of fNIRS with demonstrated promise as a surrogate for fMRI. Using a motion energy model of visual content, we decoded the identities of novel movie clips outside the training set with accuracy far above chance for single-trial decoding. Decoding was robust to modulations of the testing time window, different training and test imaging sessions, hemodynamic contrast, and optode array density. Our results suggest that HD-DOT can translate detailed decoding into naturalistic use.

11.
J Med Imaging (Bellingham) ; 10(5): 055501, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37767114

ABSTRACT

Purpose: The objective assessment of image quality (IQ) has been advocated for the analysis and optimization of medical imaging systems. One method of computing such IQ metrics is through a numerical observer. The Hotelling observer (HO) is the optimal linear observer, but conventional methods for obtaining the HO can become intractable due to large image sizes or insufficient data. Channelized methods are sometimes employed in such circumstances to approximate the HO. The performance of such channelized methods varies, with some methods outperforming others depending on the imaging conditions and detection task. A channelized HO method using an autoencoder (AE) is presented and implemented across several tasks to characterize its performance. Approach: The process of training an AE is demonstrated to be equivalent to developing a set of channels for approximating the HO. The efficiency of the learned AE-channels is increased by modifying the conventional AE loss function to incorporate task-relevant information. Multiple binary detection tasks involving lumpy and breast phantom backgrounds across varying dataset sizes are considered to evaluate the performance of the proposed method and compare it to current state-of-the-art channelized methods. Additionally, the ability of the channelized methods to generalize to images outside of the training dataset is investigated. Results: AE-learned channels are demonstrated to have performance comparable to other state-of-the-art channel methods in the detection studies and superior performance in the generalization studies. Incorporating a cleaner estimate of the signal for the detection task is also demonstrated to significantly improve the performance of the proposed method, particularly for datasets with fewer images. Conclusions: AEs are demonstrated to be capable of learning efficient channels for the HO. The resulting significant increase in detection performance for small dataset sizes when incorporating a signal prior holds promising implications for future assessments of imaging technologies.
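
As a hedged sketch of the channelized Hotelling observer formalism referenced above (the channel matrix here is an assumed input; in the paper it would come from the trained autoencoder's learned representation), the observer's detectability in channel space can be computed as:

```python
import numpy as np

def channelized_hotelling_snr(imgs_absent, imgs_present, channels):
    """SNR of a channelized Hotelling observer.

    imgs_*: (n_images, n_pixels) arrays for signal-absent / signal-present classes.
    channels: (n_pixels, n_channels) matrix, e.g. encoder weights of a trained
    autoencoder (any channel matrix works for this sketch).
    """
    v0 = imgs_absent @ channels          # channelized data, signal-absent class
    v1 = imgs_present @ channels         # channelized data, signal-present class
    delta = v1.mean(axis=0) - v0.mean(axis=0)
    # Intra-class scatter: average of the two class covariance matrices.
    s = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    w = np.linalg.solve(s, delta)        # Hotelling template in channel space
    return float(np.sqrt(delta @ w))
```

The Hotelling template is the intra-class scatter inverse applied to the mean difference of the channelized data, and the returned SNR quantifies detectability for the binary task.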

12.
ArXiv ; 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37693178

ABSTRACT

Ultrasound computed tomography (USCT) is an emerging imaging modality that holds great promise for breast imaging. Full-waveform inversion (FWI)-based image reconstruction methods incorporate accurate wave physics to produce high spatial resolution quantitative images of speed of sound or other acoustic properties of the breast tissues from USCT measurement data. However, the high computational cost of FWI reconstruction represents a significant burden for its widespread application in a clinical setting. The research reported here investigates the use of a convolutional neural network (CNN) to learn a mapping from USCT waveform data to speed-of-sound estimates. The CNN was trained using a supervised approach with a task-informed loss function aimed at preserving features of the image that are relevant to the detection of lesions. A large set of anatomically and physiologically realistic numerical breast phantoms (NBPs) and corresponding simulated USCT measurements was employed during training. Once trained, the CNN can perform real-time FWI image reconstruction from USCT waveform data. The performance of the proposed method was assessed and compared against FWI using a hold-out sample of 41 NBPs and corresponding USCT data. Accuracy was measured using the relative mean square error (RMSE), the structural similarity index measure (SSIM), and lesion detection performance (DICE score). This numerical experiment demonstrates that a supervised learning model can achieve accuracy comparable to FWI in terms of RMSE and SSIM, and better performance on the lesion detection task, while significantly reducing computational time.
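
Lesion detection performance above is summarized with a DICE score; as a minimal, hedged sketch (binary lesion masks are assumed as inputs), the coefficient is twice the overlap divided by the total mask size:

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice coefficient between two binary masks (e.g. lesion segmentations)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```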

13.
IEEE Trans Ultrason Ferroelectr Freq Control ; 70(10): 1339-1354, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37682648

ABSTRACT

Ultrasound computed tomography (USCT) is an emerging medical imaging modality that holds great promise for improving human health. Full-waveform inversion (FWI)-based image reconstruction methods account for the relevant wave physics to produce high spatial resolution images of the acoustic properties of the breast tissues. A practical USCT design employs a circular ring-array comprised of elevation-focused ultrasonic transducers, and volumetric imaging is achieved by translating the ring-array orthogonally to the imaging plane. In commonly deployed slice-by-slice (SBS) reconstruction approaches, the 3-D volume is reconstructed by stacking together 2-D images reconstructed for each position of the ring-array. A limitation of the SBS reconstruction approach is that it does not account for 3-D wave propagation physics and the focusing properties of the transducers, which can result in significant image artifacts and inaccuracies. To perform 3-D image reconstruction when elevation-focused transducers are employed, a numerical description of the focusing properties of the transducers should be included in the forward model. To address this, a 3-D computational model of an elevation-focused transducer is developed to enable 3-D FWI-based reconstruction methods to be deployed in ring-array-based USCT. The focusing is achieved by applying a spatially varying temporal delay to the ultrasound pulse (emitter mode) and recorded signal (receiver mode). The proposed numerical transducer model is quantitatively validated and employed in computer simulation studies that demonstrate its use in image reconstruction for ring-array USCT.

14.
Article in English | MEDLINE | ID: mdl-37566494

ABSTRACT

Super-resolution ultrasound microvessel imaging based on ultrasound localization microscopy (ULM) is an emerging imaging modality that is capable of resolving micrometer-scaled vessels deep into tissue. In practice, ULM is limited by the need for contrast injection, long data acquisition, and computationally expensive postprocessing times. In this study, we present a contrast-free super-resolution power Doppler (CS-PD) technique that uses deep networks to achieve super-resolution with short data acquisition. The training dataset is comprised of spatiotemporal ultrafast ultrasound signals acquired from in vivo mouse brains, while the testing dataset includes in vivo mouse brain, chicken embryo chorioallantoic membrane (CAM), and healthy human subjects. The in vivo mouse imaging studies demonstrate that CS-PD could achieve an approximate twofold improvement in spatial resolution when compared with conventional power Doppler. In addition, the microvascular images generated by CS-PD showed good agreement with the corresponding ULM images as indicated by a structural similarity index of 0.7837 and a peak signal-to-noise ratio (PSNR) of 25.52. Moreover, CS-PD was able to preserve the temporal profile of the blood flow (e.g., pulsatility) that is similar to conventional power Doppler. Finally, the generalizability of CS-PD was demonstrated on testing data of different tissues using different imaging settings. The fast inference time of the proposed deep neural network also allows CS-PD to be implemented for real-time imaging. These features of CS-PD offer a practical, fast, and robust microvascular imaging solution for many preclinical and clinical applications of Doppler ultrasound.
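
The agreement with ULM above is quantified with SSIM and PSNR; as a hedged illustration of the latter only (the image pair is an assumed input, and the study's exact data-range convention is not stated), PSNR can be computed as:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio, in dB, between a reference and a test image."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return np.inf if mse == 0 else 20.0 * np.log10(data_range / np.sqrt(mse))
```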


Subject(s)
Microvessels , Ultrasonography, Doppler , Chick Embryo , Humans , Mice , Animals , Microvessels/diagnostic imaging , Ultrasonography, Doppler/methods , Ultrasonography/methods , Neural Networks, Computer , Chickens
15.
bioRxiv ; 2023 Jul 30.
Article in English | MEDLINE | ID: mdl-37547014

ABSTRACT

The combination of a good-quality embryo and proper maternal health factors promises higher chances of a successful in vitro fertilization (IVF) procedure leading to clinical pregnancy and live birth. Of these two factors, selection of a good embryo is a controllable aspect. The current gold standard in clinical practice is visual assessment of an embryo based on its morphological appearance by trained embryologists. More recently, machine learning has been incorporated into embryo selection "packages". Here, we report a machine-learning-assisted embryo health assessment tool utilizing a quantitative phase imaging technique called artificial confocal microscopy (ACM). We present a label-free nucleus detection method with novel quantitative embryo health biomarkers. Two viability assessment models are presented for grading embryos into two classes: the healthy/intermediate (H/I) class or the sick (S) class. The models achieve weighted F1 scores of 1.0 and 0.99, respectively, on the in-distribution test set of 72 fixed embryos, and weighted F1 scores of 0.9 and 0.95, respectively, on the out-of-distribution test dataset of 19 time-instances from 8 live embryos.

16.
IEEE Trans Med Imaging ; 42(12): 3715-3724, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37578916

ABSTRACT

Medical imaging systems are often evaluated and optimized via objective, or task-specific, measures of image quality (IQ) that quantify the performance of an observer on a specific clinically-relevant task. The performance of the Bayesian Ideal Observer (IO) sets an upper limit among all observers, numerical or human, and has been advocated for use as a figure-of-merit (FOM) for evaluating and optimizing medical imaging systems. However, the IO test statistic corresponds to the likelihood ratio, which is intractable to compute in the majority of cases. A sampling-based method that employs Markov-chain Monte Carlo (MCMC) techniques was previously proposed to estimate the IO performance. However, current applications of MCMC methods for IO approximation have been limited to a small number of situations where the considered distribution of to-be-imaged objects can be described by a relatively simple stochastic object model (SOM). As such, there remains an important need to extend the domain of applicability of MCMC methods to address a large variety of scenarios where IO-based assessments are needed but the associated SOMs have not been available. In this study, a novel MCMC method that employs a generative adversarial network (GAN)-based SOM, referred to as MCMC-GAN, is described and evaluated. The MCMC-GAN method was quantitatively validated by use of test-cases for which reference solutions were available. The results demonstrate that the MCMC-GAN method can extend the domain of applicability of MCMC methods for conducting IO analyses of medical imaging systems.
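
The abstract does not describe the MCMC-GAN sampler itself; as a generic, hedged sketch of the Markov-chain Monte Carlo component only (random-walk Metropolis over a GAN latent vector, with a user-supplied log-posterior standing in for the problem-specific likelihood and GAN prior), one sampling loop might look like:

```python
import numpy as np

def random_walk_metropolis(log_prob, z0, n_steps=10_000, step=0.1, rng=None):
    """Generic random-walk Metropolis sampler over a latent vector z.

    log_prob: callable returning the (unnormalized) log posterior of z,
              e.g. log p(measurements | generator(z)) + log p(z).
    """
    rng = np.random.default_rng() if rng is None else rng
    z, lp = np.array(z0, dtype=float), log_prob(z0)
    samples = []
    for _ in range(n_steps):
        proposal = z + step * rng.standard_normal(z.shape)
        lp_new = log_prob(proposal)
        if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
            z, lp = proposal, lp_new
        samples.append(z.copy())
    return np.array(samples)
```

In the authors' method, the GAN-based SOM supplies realistic object samples through its generator; the sketch above only illustrates the accept/reject mechanics of the sampling step.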


Subject(s)
Bayes Theorem , Humans , Markov Chains , Monte Carlo Method
17.
J Biomed Opt ; 28(6): 066002, 2023 06.
Article in English | MEDLINE | ID: mdl-37347003

ABSTRACT

Significance: When developing a new quantitative optoacoustic computed tomography (OAT) system for diagnostic imaging of breast cancer, objective assessments of various system designs through human trials are infeasible due to cost and ethical concerns. In prototype stages, however, different system designs can be cost-efficiently assessed via virtual imaging trials (VITs) employing ensembles of digital breast phantoms, i.e., numerical breast phantoms (NBPs), that convey clinically relevant variability in anatomy and optoacoustic tissue properties. Aim: The aim is to develop a framework for generating ensembles of realistic three-dimensional (3D) anatomical, functional, optical, and acoustic NBPs and numerical lesion phantoms (NLPs) for use in VITs of OAT applications in the diagnostic imaging of breast cancer. Approach: The generation of the anatomical NBPs was accomplished by extending existing NBPs developed by the U.S. Food and Drug Administration. As these were designed for use in mammography applications, substantial modifications were made to improve blood vasculature modeling for use in OAT. The NLPs were modeled to include viable tumor cells only or a combination of viable tumor cells, necrotic core, and peripheral angiogenesis region. Realistic optoacoustic tissue properties were stochastically assigned in the NBPs and NLPs. Results: To advance optoacoustic and optical imaging research, 84 datasets have been released; these consist of anatomical, functional, optical, and acoustic NBPs and the corresponding simulated multi-wavelength optical fluence, initial pressure, and OAT measurements. The generated NBPs were compared with clinical data with respect to the volume of breast blood vessels and spatially averaged effective optical attenuation. The usefulness of the proposed framework was demonstrated through a case study to investigate the impact of acoustic heterogeneity on OAT images of the breast. Conclusions: The proposed framework will enhance the authenticity of virtual OAT studies and can be widely employed for the investigation and development of advanced image reconstruction and machine learning-based methods, as well as the objective evaluation and optimization of the OAT system designs.


Subject(s)
Breast Neoplasms , Humans , Female , Breast Neoplasms/diagnostic imaging , Algorithms , Tomography, X-Ray Computed , Breast , Tomography/methods , Phantoms, Imaging
18.
IEEE Trans Med Imaging ; 42(6): 1799-1808, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37022374

ABSTRACT

In recent years, generative adversarial networks (GANs) have gained tremendous popularity for potential applications in medical imaging, such as medical image synthesis, restoration, reconstruction, translation, as well as objective image quality assessment. Despite the impressive progress in generating high-resolution, perceptually realistic images, it is not clear whether modern GANs reliably learn the statistics that are meaningful to a downstream medical imaging application. In this work, the ability of a state-of-the-art GAN to learn the statistics of canonical stochastic image models (SIMs) that are relevant to objective assessment of image quality is investigated. It is shown that although the employed GAN successfully learned several basic first- and second-order statistics of the specific medical SIMs under consideration and generated images with high perceptual quality, it failed to correctly learn several per-image statistics pertinent to these SIMs, highlighting the urgent need to assess medical image GANs in terms of objective measures of image quality.
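
As a hedged illustration of what checking "first- and second-order statistics" of an image ensemble can look like in practice (the statistics below, an ensemble mean image and a one-pixel-lag spatial autocovariance, are generic choices and not necessarily the ones studied in the paper):

```python
import numpy as np

def first_and_second_order_stats(images):
    """Ensemble mean image and a simple second-order statistic
    (the spatial autocovariance at a one-pixel horizontal lag)."""
    images = np.asarray(images, dtype=np.float64)   # shape (n, H, W)
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    lag1_autocov = (centered[:, :, :-1] * centered[:, :, 1:]).mean()
    return mean_img, lag1_autocov

# Hypothetical usage: compare statistics of real vs. GAN-generated ensembles.
# m_real, c_real = first_and_second_order_stats(real_images)
# m_gan,  c_gan  = first_and_second_order_stats(gan_images)
```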

19.
Phys Med Biol ; 68(8)2023 04 03.
Article in English | MEDLINE | ID: mdl-36889005

ABSTRACT

Objective. Quantitative phase retrieval (QPR) in propagation-based x-ray phase contrast imaging of heterogeneous and structurally complicated objects is challenging under laboratory conditions due to partial spatial coherence and polychromaticity. A deep learning-based method (DLBM) provides a nonlinear approach to this problem while not being constrained by restrictive assumptions about object properties and beam coherence. The objective of this work is to assess a DLBM for its applicability under practical scenarios by evaluating its robustness and generalizability under typical experimental variations. Approach. Towards this end, an end-to-end DLBM was employed for QPR under laboratory conditions and its robustness was investigated across various system and object conditions. The robustness of the method was tested across varying propagation distances, and its generalizability with respect to object structure and experimental data was also tested. Main results. Although the end-to-end DLBM was stable under the studied variations, its successful deployment was found to be affected by choices pertaining to data pre-processing, network training considerations and system modeling. Significance. To our knowledge, we demonstrated for the first time the potential applicability of an end-to-end learning-based QPR method, trained on simulated data, to experimental propagation-based x-ray phase contrast measurements acquired under laboratory conditions with a commercial x-ray source and a conventional detector. We considered conditions of polychromaticity, partial spatial coherence, and high noise levels, typical of laboratory conditions. This work further explored the robustness of this method to practical variations in propagation distances and object structure with the goal of assessing its potential for experimental use. Such an exploration of any DLBM (irrespective of its network architecture) before practical deployment provides an understanding of its potential behavior under experimental settings.


Subject(s)
Deep Learning , X-Rays , Radiography , Microscopy, Phase-Contrast
20.
ArXiv ; 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-36713246

ABSTRACT

Ultrasound computed tomography (USCT) is an emerging medical imaging modality that holds great promise for improving human health. Full-waveform inversion (FWI)-based image reconstruction methods account for the relevant wave physics to produce high spatial resolution images of the acoustic properties of the breast tissues. A practical USCT design employs a circular ring-array comprised of elevation-focused ultrasonic transducers, and volumetric imaging is achieved by translating the ring-array orthogonally to the imaging plane. In commonly deployed slice-by-slice (SBS) reconstruction approaches, the three-dimensional (3D) volume is reconstructed by stacking together two-dimensional (2D) images reconstructed for each position of the ring-array. A limitation of the SBS reconstruction approach is that it does not account for 3D wave propagation physics and the focusing properties of the transducers, which can result in significant image artifacts and inaccuracies. To perform 3D image reconstruction when elevation-focused transducers are employed, a numerical description of the focusing properties of the transducers should be included in the forward model. To address this, a 3D computational model of an elevation-focused transducer is developed to enable 3D FWI-based reconstruction methods to be deployed in ring-array-based USCT. The focusing is achieved by applying a spatially varying temporal delay to the ultrasound pulse (emitter mode) and recorded signal (receiver mode). The proposed numerical transducer model is quantitatively validated and employed in computer-simulation studies that demonstrate its use in image reconstruction for ring-array USCT.
