Results 1 - 14 of 14
1.
Med Image Anal ; 76: 102326, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34936967

ABSTRACT

We study the use of raw ultrasound waveforms, often referred to as "Radio Frequency" (RF) data, for the semantic segmentation of ultrasound scans to carry out dense and diagnostic labeling. We present W-Net, a novel Convolutional Neural Network (CNN) framework that employs the raw ultrasound waveforms in addition to the grey ultrasound image to semantically segment and label tissues for anatomical, pathological, or other diagnostic purposes. To the best of our knowledge, this is the first deep-learning or CNN approach to segmentation that analyzes raw ultrasound RF data along with the grey image. We chose subcutaneous tissue (SubQ) segmentation as our initial clinical goal for dense segmentation because the region contains diverse intermixed tissues, is challenging to segment, and is an underrepresented research area. Potential SubQ applications include plastic surgery, adipose stem-cell harvesting, lymphatic monitoring, and possibly the detection/treatment of certain types of tumors. Unlike prior work, we seek to label every pixel in the image, without the use of a background class. A custom dataset of images hand-labeled by an expert clinician and trainees is used for the experiments, currently labeled into the following categories: skin, fat, fat fascia/stroma, muscle, and muscle fascia. We compared W-Net and an attention variant of W-Net (AW-Net) with U-Net and Attention U-Net (AU-Net). Our W-Net's RF-waveform encoding architecture outperformed the regular U-Net and AU-Net, achieving the best mIoU (averaged across all tissue classes). We study the impact of RF data on dense labeling of the SubQ region, followed by analyses of the networks' generalization to new patients and of the individual SubQ tissue classes, determining that fascia tissues, muscle fascia in particular, are the most difficult anatomic class for both humans and AI algorithms to recognize. We also present diagnostic semantic segmentation, i.e., semantic segmentation carried out for direct diagnostic pixel labeling, and apply it to a breast-tumor detection task on a publicly available dataset, segmenting pixels into malignant-tumor, benign-tumor, and background-tissue classes. Using the segmented image, we diagnose the patient by classifying the breast lesion as either benign or malignant. We demonstrate the diagnostic capability of RF data with W-Net, which achieves the best segmentation scores across all classes.
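
As a concrete illustration of the core idea (two parallel encoders, one for the grey B-mode image and one for the RF data, fused ahead of a shared decoder), the following PyTorch sketch may help. The layer widths, depth, and concatenation-based fusion are illustrative assumptions, not the published W-Net configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TwoEncoderSegNet(nn.Module):
    """Illustrative two-encoder network: one encoder for the grey B-mode
    image, one for RF data, fused before a shared decoder. Layer widths
    and depth are assumptions, not the published W-Net design."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.enc_img = conv_block(1, 16)   # grey-image branch
        self.enc_rf = conv_block(1, 16)    # RF-waveform branch
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # one logit per tissue class

    def forward(self, img, rf):
        fi, fr = self.enc_img(img), self.enc_rf(rf)
        fused = torch.cat([fi, fr], dim=1)        # fuse the two modalities
        mid = self.bottleneck(self.down(fused))
        out = self.dec(torch.cat([self.up(mid), fused], dim=1))
        return self.head(out)                     # dense per-pixel logits

logits = TwoEncoderSegNet()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 5, 64, 64])
```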


Subject(s)
Semantics , Subcutaneous Tissue , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography
2.
BME Front ; 2022: 9837076, 2022.
Article in English | MEDLINE | ID: mdl-37850165

ABSTRACT

Objective and Impact Statement. We propose a weakly- and semi-supervised, probabilistic needle-and-reverberation-artifact segmentation algorithm to separate the desired tissue-based pixel values from superimposed artifacts. Our method models the decay of artifact intensities and is designed to minimize human labeling error. Introduction. Ultrasound image quality has been continually improving. However, when needles or other metallic objects are present inside the tissue, the resulting reverberation artifacts can severely corrupt the surrounding image quality. Such effects are challenging for existing computer-vision algorithms for medical image analysis. Needle reverberation artifacts can be hard to identify and affect various pixel values to different degrees. The boundaries of such artifacts are ambiguous, leading to disagreement among the human experts labeling them. Methods. Our learning-based framework consists of three parts. The first part is a probabilistic segmentation network that generates soft labels based on the human labels. These soft labels are input to the second part, a transform function that generates the training labels for the third part. The third part outputs the final masks, which quantify the reverberation artifacts. Results. We demonstrate the applicability of the approach and compare it against other segmentation algorithms. Our method is capable of both distinguishing reverberation artifacts from artifact-free patches and modeling the intensity fall-off within the artifacts. Conclusion. Our method matches state-of-the-art artifact segmentation performance and sets a new standard in estimating the per-pixel contributions of artifact vs. underlying anatomy, especially in the regions immediately adjacent to the reverberation lines. Our algorithm also improves the performance of downstream image analysis algorithms.
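
To make the intensity-decay idea concrete, here is a minimal NumPy sketch of a per-pixel artifact weight map in which reverberation copies of the needle repeat at a fixed row spacing and dim exponentially with each repetition. The exponential form, the fixed-spacing model, and the parameter values are assumptions made for illustration, not the paper's fitted transform function.

```python
import numpy as np

def reverberation_weight_map(shape, needle_row, spacing, decay=0.5):
    """Toy per-pixel artifact weight map: reverberation lines appear at
    fixed multiples of an assumed row spacing below the needle, with
    exponentially decaying intensity. Illustrative only."""
    rows, cols = shape
    w = np.zeros(shape, dtype=float)
    k = 1
    r = needle_row + spacing
    while r < rows:
        w[r, :] = np.exp(-decay * k)  # k-th reverberation copy, dimmer each time
        r += spacing
        k += 1
    return w

w = reverberation_weight_map((128, 64), needle_row=20, spacing=15)
print(w[35, 0], w[50, 0])  # first and second reverberation weights
```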

3.
Int J Comput Assist Radiol Surg ; 16(11): 1957-1968, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34357525

ABSTRACT

PURPOSE: Ultrasound compounding combines sonographic information captured from different angles into a single image. It is important for multi-view reconstruction, but there is as yet no consensus on best practices for compounding. Popular current methods inevitably suppress or altogether omit useful bright or dark regions and can introduce new artifacts. In this work, we establish a new algorithm to compound the overlapping pixels from different ultrasound viewpoints. METHODS: Inspired by image-fusion algorithms and ultrasound confidence maps, we leverage Laplacian and Gaussian pyramids to preserve the maximum boundary contrast without overemphasizing noise, speckle, and other artifacts in the compounded image, while taking the direction of the ultrasound probe into account. In addition, we design an algorithm that detects useful boundaries in ultrasound images to further improve boundary contrast. RESULTS: We evaluate our algorithm by comparing it with previous algorithms both qualitatively and quantitatively, and we show that our approach not only preserves both light and dark details, but also somewhat suppresses noise and artifacts rather than amplifying them. We also show that our algorithm can improve the performance of downstream tasks such as segmentation. CONCLUSION: Our proposed method, based on confidence, contrast, and both Gaussian and Laplacian pyramids, appears better at preserving contrast at anatomic boundaries while suppressing artifacts than any of the other approaches we tested. This algorithm may have future utility in downstream tasks such as 3D ultrasound volume reconstruction and segmentation.
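
The generic Laplacian-pyramid fusion recipe underlying such methods can be sketched in a few lines of Python with OpenCV: at each pyramid level, keep the coefficient with the larger magnitude (strongest local contrast), then collapse the pyramid. The paper additionally weights coefficients by ultrasound confidence and probe direction, which this sketch omits.

```python
import numpy as np
import cv2

def fuse_laplacian(a, b, levels=4):
    """Minimal two-view Laplacian-pyramid fusion: per level, keep the
    larger-magnitude coefficient; max-compound the coarsest level.
    Generic recipe only; confidence/direction weighting omitted."""
    ga, gb = [a.astype(np.float32)], [b.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
    fused = None
    for i in range(levels, 0, -1):
        la = ga[i - 1] - cv2.pyrUp(ga[i])   # Laplacian band of view A
        lb = gb[i - 1] - cv2.pyrUp(gb[i])   # Laplacian band of view B
        band = np.where(np.abs(la) >= np.abs(lb), la, lb)
        base = np.maximum(ga[i], gb[i]) if fused is None else fused
        fused = cv2.pyrUp(base) + band      # collapse one level
    return fused

a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)
print(fuse_laplacian(a, b).shape)  # (256, 256)
```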


Subject(s)
Algorithms , Artifacts , Humans , Ultrasonography
4.
Biomed Opt Express ; 10(10): 5291-5324, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31646047

ABSTRACT

Optical Coherence Tomography (OCT) is an imaging modality that has been widely adopted for visualizing corneal, retinal, and limbal tissue structure with micron resolution. It can be used to diagnose pathological conditions of the eye and to develop pre-operative surgical plans. In contrast to the posterior retina, imaging anterior tissue structures, such as the limbus and cornea, yields B-scans that exhibit increased speckle noise patterns and imaging artifacts. These artifacts, such as shadowing and specularity, pose a challenge during the analysis of the acquired volumes, as they substantially obfuscate the location of tissue interfaces. To deal with the artifacts and speckle noise and accurately segment the shallowest tissue interface, we propose a cascaded neural-network framework comprising a conditional Generative Adversarial Network (cGAN) and a Tissue Interface Segmentation Network (TISN). The cGAN pre-segments OCT B-scans by removing undesired specular artifacts and speckle noise patterns just above the shallowest tissue interface, and the TISN combines the original OCT image with the pre-segmentation to segment the shallowest interface. We show the applicability of the cascaded framework to corneal datasets, demonstrate that it precisely segments the shallowest corneal interface, and show its generalization capacity to limbal datasets. We also propose a hybrid framework, wherein the cGAN pre-segmentation is passed to a traditional image-analysis-based segmentation algorithm, and describe the improved segmentation performance. To the best of our knowledge, this is the first approach to remove the severe specular artifacts and speckle noise patterns (prior to the shallowest interface) that hinder the interpretation of anterior-segment OCT datasets, thereby enabling accurate segmentation of the shallowest tissue interface. It is also, to our knowledge, the first work to show the potential of incorporating a cGAN into larger deep-learning frameworks for improved corneal and limbal OCT image segmentation. Our cGAN design directly improves the visualization of corneal and limbal OCT images from OCT scanners and improves the performance of current OCT segmentation algorithms.
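
A minimal sketch of the cascade's wiring is shown below, with single convolutions standing in for both sub-networks; it is meant only to show how the cGAN output and the original B-scan are combined before the TISN, not the published architectures.

```python
import torch
import torch.nn as nn

class CascadeSketch(nn.Module):
    """Wiring of the cascaded idea only: a generator first 'cleans' the
    B-scan (the paper's cGAN removes specular artifacts and speckle above
    the shallowest interface), then a second network segments from the
    original image stacked with that pre-segmentation. Both sub-networks
    are stand-in single convolutions, not the published designs."""
    def __init__(self):
        super().__init__()
        self.generator = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the cGAN
        self.segmenter = nn.Conv2d(2, 1, 3, padding=1)  # stand-in for the TISN

    def forward(self, bscan):
        cleaned = torch.sigmoid(self.generator(bscan))  # artifact-suppressed B-scan
        both = torch.cat([bscan, cleaned], dim=1)       # original + pre-segmentation
        return torch.sigmoid(self.segmenter(both))      # interface probability map

mask = CascadeSketch()(torch.rand(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```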

5.
Mach Vis Appl ; 29(8): 1227-1236, 2018 Nov.
Article in English | MEDLINE | ID: mdl-31511756

ABSTRACT

Cellular processes are governed by macromolecular complexes inside the cell. Studying the native structures of macromolecular complexes has been extremely difficult due to lack of data. With recent breakthroughs in Cellular Electron Cryo-Tomography (CECT) 3D imaging technology, it is now possible for researchers to fully study and understand the macromolecular structures inside single cells. However, systematic recovery of macromolecular structures from CECT is very difficult due to the high degree of structural complexity and practical imaging limitations. We previously proposed a deep learning-based image classification approach for large-scale, systematic macromolecular structure separation from CECT data, but that work was only an initial step toward exploring the full potential of deep learning for macromolecule separation. In this paper, we focus on improving classification performance with three newly designed CNN models: an extended version of the Deep Small Receptive Field network (DSRF3D), denoted DSRF3D-v2; a 3D residual-block-based neural network, RB3D; and a convolutional 3D (C3D)-based model, CB3D. We compare them with our previously developed model (DSRF3D) on 12 datasets with different SNRs and tilt-angle ranges. The experiments show that our new models achieve significantly higher classification accuracies, which not only exceed 0.9 on normal datasets but also demonstrate the potential to operate on datasets with high levels of noise and missing-wedge effects.
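
For readers unfamiliar with residual 3D CNNs, a generic residual block of the kind RB3D builds on looks like the following PyTorch sketch; the channel count and the omission of normalization layers are simplifications, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Generic 3D residual block: two 3x3x3 convolutions plus an
    identity shortcut. A simplified illustration, not RB3D itself."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The residual path learns a correction to the identity mapping.
        return self.act(x + self.conv2(self.act(self.conv1(x))))

subtomogram = torch.rand(1, 8, 32, 32, 32)  # batch, channels, D, H, W
print(ResBlock3D(8)(subtomogram).shape)     # torch.Size([1, 8, 32, 32, 32])
```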

6.
Cogn Res Princ Implic ; 2(1): 34, 2017.
Article in English | MEDLINE | ID: mdl-28890919

ABSTRACT

This paper describes a novel method for displaying data obtained by three-dimensional medical imaging, by which the position and orientation of a freely movable screen are optically tracked and used in real time to select the current slice from the data set for presentation. With this method, which we call a "freely moving in-situ medical image", the screen and imaged data are registered to a common coordinate system in space external to the user, at adjustable scale, and are available for free exploration. The three-dimensional image data occupy empty space, as if an invisible patient is being sliced by the moving screen. A behavioral study using real computed tomography lung vessel data established the superiority of the in situ display over a control condition with the same free exploration, but displaying data on a fixed screen (ex situ), with respect to accuracy in the task of tracing along a vessel and reporting spatial relations between vessel structures. A "freely moving in-situ medical image" display appears from these measures to promote spatial navigation and understanding of medical data.
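
The geometric core of the display, mapping a tracked screen pose to an oblique slice of the volume, can be sketched as follows. The pose representation (an origin plus two in-plane axes expressed in voxel coordinates) and all names are hypothetical stand-ins for the optical-tracking pipeline described above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def slice_from_pose(volume, origin, u, v, size=(64, 64)):
    """Resample the oblique slice selected by a tracked screen: 'origin'
    is the screen's corner and u, v its in-plane axes, all in voxel
    coordinates after registering screen and volume to a common frame.
    A hypothetical stand-in for the paper's tracking pipeline."""
    rows = np.arange(size[0])
    cols = np.arange(size[1])
    r, c = np.meshgrid(rows, cols, indexing="ij")
    # Each screen pixel (r, c) maps to origin + r*u + c*v in the volume.
    pts = origin[:, None, None] + r * u[:, None, None] + c * v[:, None, None]
    return map_coordinates(volume, pts, order=1, mode="nearest")

vol = np.random.rand(100, 100, 100)
origin = np.array([50.0, 10.0, 10.0])
u = np.array([0.0, 1.0, 0.0])   # screen rows run along the y axis
v = np.array([0.0, 0.0, 1.0])   # screen columns run along the z axis
print(slice_from_pose(vol, origin, u, v).shape)  # (64, 64)
```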

7.
Hum Factors ; 59(7): 1128-1138, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28771376

ABSTRACT

Objective: These studies used threshold and slant-matching tasks to assess and quantitatively measure human perception of 3-D planar images viewed through a stereomicroscope. The results are intended for use in developing augmented-reality surgical aids. Background: Substantial research demonstrates that slant perception is performed with high accuracy from monocular and binocular cues, but less research concerns the effects of magnification. Viewing through a microscope affects the utility of monocular and stereo slant cues, but its impact is as yet unknown. Method: Participants performed a threshold slant-detection task and matched the slant of a tool to a surface. Different stimuli and monocular versus binocular viewing conditions were implemented to isolate stereo cues alone, stereo with perspective cues, the accommodation cue only, and cues intrinsic to optical-coherence-tomography images. Results: At a magnification of 5x, slant thresholds with stimuli providing stereo cues approximated those reported for direct viewing, about 12°. Most participants (75%) who passed a stereoacuity pretest could match a tool to the slant of a surface viewed with stereo at 5x magnification, with a mean compressive error of about 20% for optimized surfaces. Slant matching to optical coherence tomography images of the cornea viewed under the microscope was also demonstrated. Conclusion: Despite the distortions and cue loss introduced by viewing under the stereomicroscope, most participants were able to detect and interact with slanted surfaces. Application: The experiments demonstrate sensitivity to surface slant that supports the development of augmented-reality aids for surgery performed under the microscope.


Subject(s)
Depth Perception/physiology , Microscopy , Visual Perception/physiology , Adult , Humans
8.
Methods ; 115: 128-143, 2017 Feb 15.
Article in English | MEDLINE | ID: mdl-27965119

ABSTRACT

This article reviews registration algorithms for use between ultrasound images (monomodal image-based ultrasound registration). Ultrasound is safe, inexpensive, and real-time, providing many advantages for clinical and scientific use on both humans and animals, but ultrasound images are also notoriously noisy and subject to several unique artifacts/distortions. This paper introduces the topic and the unique aspects of ultrasound-to-ultrasound image registration, providing a broad introduction to and summary of the literature and the field. Both theoretical and practical aspects are covered. The first half of the paper is theoretical, organized according to the basic components of a registration framework, namely preprocessing, image-similarity metrics, optimizers, etc. It further subdivides these methods into those suitable for elastic (non-rigid) vs. inelastic (matrix) transforms. The second half of the paper is organized by anatomy and is practical in nature, presenting and discussing the complete published systems that have been validated for registration in specific anatomic regions.
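
A bare-bones instance of the framework the review describes (a transform model, an image-similarity metric, and an optimizer) might look like the sketch below, which pairs a rigid 2D transform with normalized cross-correlation and SciPy's Powell optimizer. Real ultrasound pipelines add speckle-aware preprocessing and multiresolution search.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate, shift
from scipy.optimize import minimize

def ncc(a, b):
    # Normalized cross-correlation: a standard intensity similarity metric.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return (a * b).mean()

def register_rigid(fixed, moving):
    """Toy registration loop: rigid transform (rotation + translation),
    NCC similarity, general-purpose optimizer. Illustrative only."""
    def cost(p):
        tx, ty, theta = p
        warped = shift(rotate(moving, theta, reshape=False, order=1),
                       (tx, ty), order=1)
        return -ncc(fixed, warped)  # minimize negative similarity
    res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Powell")
    return res.x  # estimated (tx, ty, theta)

img = gaussian_filter(np.random.rand(64, 64), 3)  # smooth synthetic "anatomy"
moved = shift(rotate(img, 3.0, reshape=False, order=1), (2.0, -1.0), order=1)
print(register_rigid(img, moved))  # should roughly recover (-2, 1, -3)
```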


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Organs at Risk/diagnostic imaging , Pattern Recognition, Automated/statistics & numerical data , Ultrasonography/statistics & numerical data , Animals , Artifacts , Humans , Image Processing, Computer-Assisted , Organs at Risk/anatomy & histology , Pattern Recognition, Automated/standards , Reproducibility of Results , Ultrasonography/instrumentation
9.
Hum Factors ; 57(3): 523-37, 2015 May.
Article in English | MEDLINE | ID: mdl-25875439

ABSTRACT

OBJECTIVE: This study investigated the effectiveness of force augmentation in haptic perception tasks. BACKGROUND: Considerable engineering effort has been devoted to developing force-augmented reality (AR) systems to assist users in delicate procedures like microsurgery. In contrast, far less has been done to characterize the behavioral outcomes of these systems, and no research has systematically examined the impact of sensory and perceptual processes on force-augmentation effectiveness. METHOD: Using a handheld force magnifier as an exemplar haptic AR device, we conducted three experiments to characterize its utility in the perception of force and stiffness. Experiments 1 and 2 measured, respectively, the user's ability to detect and differentiate weak forces (<0.5 N) with or without the assistance of the device and compared it to direct perception. Experiment 3 examined the perception of stiffness through force augmentation. RESULTS: The user's ability to detect and differentiate small forces was significantly improved by augmentation at both threshold and suprathreshold levels. The augmentation also enhanced stiffness perception. However, although perception of augmented forces matches that of the physical equivalent for weak forces, it falls off with increasing intensity. CONCLUSION: The loss in effectiveness reflects the nature of sensory and perceptual processing. Such perceptual limitations should be taken into consideration in the design and development of haptic AR systems to maximize utility. APPLICATION: The findings provide useful information for building effective haptic AR systems, particularly for use in microsurgery.
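
As a toy model of why magnification helps detection: if the device adds a scaled copy of the applied force, a weak force becomes detectable once the felt force crosses the unaided absolute threshold. The sketch below encodes only that logic; the threshold value and gains are invented for illustration and are not the paper's measurements.

```python
def effective_threshold(base_threshold_n, gain):
    """Toy detection model for a force magnifier: the device presents
    felt = (1 + gain) * f, so a force is detectable once (1 + gain) * f
    reaches the unaided absolute threshold. Illustrative numbers only."""
    return base_threshold_n / (1.0 + gain)

unaided = 0.06  # hypothetical absolute detection threshold, newtons
for gain in (0, 1, 3):
    print(gain, round(effective_threshold(unaided, gain), 4))
```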


Subject(s)
Psychophysics/instrumentation , Psychophysics/methods , Touch Perception/physiology , Touch/physiology , Adult , Equipment Design , Female , Humans , Male , User-Computer Interface , Young Adult
10.
Appl Opt ; 53(24): 5421-4, 2014 Aug 20.
Article in English | MEDLINE | ID: mdl-25321114

ABSTRACT

This paper describes a projection system for augmenting a scanned laser projector to create very small, very bright images for use in a microsurgical augmented reality system. Normal optical design approaches are insufficient because the laser beam profile differs optically from the aggregate image. We propose a novel arrangement of two lens groups working together to simultaneously adjust both the laser beam of the projector (individual pixels) and the spatial envelope containing them (the entire image) to the desired sizes. The present work models such a system using paraxial beam equations and ideal lenses to demonstrate that there is an "in-focus" range, or depth of field, defined by the intersection of the resulting beam-waist radius curve and the ideal pixel radius for a given image size. Images within this depth of field are in focus and can be adjusted to the desired size by manipulating the lenses.
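
The in-focus range described above can be reproduced with the standard paraxial Gaussian-beam formulas: the Rayleigh range z_R = pi * w0^2 / lambda and the beam radius w(z) = w0 * sqrt(1 + (z/z_R)^2). The sketch below computes the axial span over which w(z) stays below an assumed ideal pixel radius; all numeric values are illustrative, not the paper's design parameters.

```python
import numpy as np

# Paraxial Gaussian-beam model of the "in-focus" range: the beam radius
# w(z) grows away from the waist, and the image is usable wherever w(z)
# stays below the pixel radius the image size demands. All numbers are
# illustrative assumptions, not the paper's design values.
wavelength = 532e-9   # green laser, metres
w0 = 5e-6             # assumed beam-waist radius, metres
pixel_radius = 8e-6   # assumed ideal pixel radius for the image size

z_r = np.pi * w0**2 / wavelength        # Rayleigh range
z = np.linspace(-5e-4, 5e-4, 10001)     # axial positions around the waist
w = w0 * np.sqrt(1 + (z / z_r)**2)      # beam radius vs. depth

in_focus = z[w <= pixel_radius]
print(f"depth of field ~ {np.ptp(in_focus) * 1e6:.1f} um")
```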


Subject(s)
Imaging, Three-Dimensional/instrumentation , Lasers , Lighting/instrumentation , Microsurgery/instrumentation , Ophthalmologic Surgical Procedures/instrumentation , Surgery, Computer-Assisted/instrumentation , Tomography, Optical Coherence/instrumentation , Equipment Design , Equipment Failure Analysis
11.
IEEE J Transl Eng Health Med ; 2: 2700109, 2014.
Article in English | MEDLINE | ID: mdl-27170882

ABSTRACT

We present a novel fingertip-mounted device for acquiring and transmitting visual information through haptic channels. In contrast to previous systems, in which the user interrogates an intermediate representation of visual information, such as a tactile display representing a camera-generated image, our device uses a fingertip-mounted camera and haptic stimulator to let the user feel visual features directly from the environment. Visual features ranging from simple intensity or oriented edges to more complex, automatically identified information about objects in the environment may be translated in this manner into haptic stimulation of the finger. Experiments using an initial prototype to trace a continuous straight edge have quantified the user's ability to discriminate the angle of the edge, a potentially useful feature for higher-level analysis of the visual scene.
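
Edge-angle estimation of the kind measured here could be driven by a standard gradient-based orientation estimate on each camera patch. The sketch below uses the structure-tensor double-angle formula; it is a generic stand-in, not the prototype's actual processing.

```python
import numpy as np

def gradient_orientation(patch):
    """Dominant gradient orientation of a patch via the structure-tensor
    double-angle formula (the edge itself runs perpendicular to this).
    A generic sketch, not the device's pipeline."""
    gy, gx = np.gradient(patch.astype(float))
    # Doubling the angle removes the 180-degree ambiguity before averaging.
    angle = 0.5 * np.arctan2(2 * (gx * gy).sum(),
                             (gx**2 - gy**2).sum())
    return np.degrees(angle)

# Synthetic oblique edge: an intensity step across a 30-degree line.
yy, xx = np.mgrid[0:32, 0:32]
patch = (yy > np.tan(np.radians(30)) * xx).astype(float)
# Prints roughly -60 (gradient direction); the edge lies at about 30 deg.
print(round(gradient_orientation(patch), 1))
```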

12.
J Pathol Inform ; 2: S5, 2011.
Article in English | MEDLINE | ID: mdl-22811961

ABSTRACT

With modern automated microscopes and digital cameras, pathologists no longer have to examine samples by looking through microscope binoculars. Instead, the slide is digitized to an image, which can then be examined on a screen. This creates the possibility for computers to analyze the image. In this work, a fully automated approach to region-of-interest (ROI) segmentation in prostate biopsy images is proposed, allowing pathologists to focus on the most important areas of the image. The proposed method is based on level-set and mean-filtering techniques for lumen-centered expansion and cell-density localization, respectively. The novelty of the technique lies in its ability to detect complete ROIs, where an ROI is composed of the conjunction of three different structures (lumen, cytoplasm, and cells), as well as regions with a high density of cells. The method is capable of dealing with full biopsies digitized at different magnifications. In this paper, results are shown for a set of 100 H&E slides digitized at 5× and ranging from 12 MB to 500 MB. The tests carried out show an average specificity above 99% across the board and average sensitivities of 95% and 80%, respectively, for lumen-centered expansion and cell-density localization. The algorithms were also tested with images at 10× magnification (up to 1228 MB), obtaining similar results.
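
The cell-density localization step can be illustrated with a mean filter: the fraction of nucleus pixels in a sliding window approximates local cell density, and thresholding that density flags dense regions. The window size, threshold, and synthetic mask below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cell_density_mask(nuclei_mask, window=51, threshold=0.2):
    """Mean-filter density localization: the local fraction of nucleus
    pixels approximates cell density; thresholding flags dense regions.
    Window and threshold are illustrative, not the paper's values."""
    density = uniform_filter(nuclei_mask.astype(float), size=window)
    return density > threshold

# Toy binary mask of detected nuclei (1 = nucleus pixel), with one dense block.
rng = np.random.default_rng(0)
nuclei = (rng.random((256, 256)) > 0.98).astype(np.uint8)
nuclei[100:160, 100:160] |= (rng.random((60, 60)) > 0.6).astype(np.uint8)
print(cell_density_mask(nuclei).sum(), "pixels flagged as high density")
```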

13.
Opt Lett ; 35(14): 2352-4, 2010 Jul 15.
Article in English | MEDLINE | ID: mdl-20634827

ABSTRACT

The concept and instantiation of real-time tomographic holography (RTTH) for augmented reality are presented. RTTH enables natural hand-eye coordination to guide invasive medical procedures without requiring tracking or a head-mounted device. It places a real-time virtual image of an object's cross section into its actual location, without noticeable viewpoint dependence (e.g., parallax error). The virtual image is viewed through a flat narrowband holographic optical element (HOE) with optical power that generates an in-situ virtual image (within 1 m of the HOE) from a small spatial-light-modulator display without obscuring a direct view of the physical world. Rigidly fixed upon a medical ultrasound probe, an RTTH device could show the scan in its actual location inside the patient, even as the probe was moved relative to the patient.


Subject(s)
Diagnostic Imaging/methods , Fetal Development/physiology , Holography/methods , Ultrasonography, Prenatal , Computer Simulation , Female , Head , Humans , Pregnancy , Reproducibility of Results , Tomography , Tomography, X-Ray Computed
14.
Int J Biomed Imaging ; 2010: 980872, 2010.
Article in English | MEDLINE | ID: mdl-20634912

ABSTRACT

We have developed a method for extracting anatomical shape models from n-dimensional images using an image analysis framework we call Shells and Spheres. This framework uses a set of spherical operators centered at each image pixel, grown to reach, but not cross, the nearest object boundary by incorporating "shells" of pixel intensity values while analyzing intensity mean, variance, and first-order moment. Pairs of spheres on opposite sides of putative boundaries are then analyzed to determine boundary reflectance, which is used to further constrain sphere size, establishing a consensus as to boundary location. The centers of a subset of spheres identified as medial (touching at least two boundaries) are connected to identify the interior of a particular anatomical structure. For the automated 3D algorithm, the only manual interaction consists of tracing a single contour on a 2D slice to optimize parameters and identifying an initial point within the target structure.
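
A single "sphere" of this procedure, reduced to 2D and to a variance test alone, can be sketched as follows; the variance criterion stands in for the richer shell statistics and boundary-reflectance analysis described above.

```python
import numpy as np

def grow_sphere(image, center, max_radius=20, var_limit=1e-3):
    """One 'sphere' of the Shells-and-Spheres idea in 2D: starting at a
    pixel, add concentric shells of intensity values and stop growing
    when the variance jump signals a crossed boundary. The variance test
    is a simplified stand-in for the paper's statistics."""
    cy, cx = center
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    dist = np.sqrt((yy - cy)**2 + (xx - cx)**2)
    radius = 0
    for r in range(1, max_radius + 1):
        values = image[dist <= r]
        if values.var() > var_limit:   # this shell reached across a boundary
            break
        radius = r
    return radius

# Synthetic image: two flat regions with a vertical boundary at column 40.
img = np.zeros((80, 80)) + 0.2
img[:, 40:] = 0.8
print(grow_sphere(img, (40, 25)))  # 15: reaches, but does not cross, the boundary
```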
