1.
J Biomed Opt; 28(6): 066003, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37334207

ABSTRACT

Significance: Cholesteatoma is an expansile, destructive lesion of the middle ear and mastoid that can cause significant complications by eroding adjacent bony structures. Currently, cholesteatoma tissue margins cannot be reliably distinguished from middle ear mucosa, contributing to a high recidivism rate. Accurately differentiating cholesteatoma from mucosa would enable more complete removal of the tissue. Aim: Develop an imaging system that enhances the visibility of cholesteatoma tissue and its margins during surgery. Approach: Cholesteatoma and mucosa tissue samples excised from the middle ear of patients were illuminated with 405, 450, and 520 nm narrowband lights. Measurements were made with a spectroradiometer equipped with a series of longpass filters. Images were obtained using a red-green-blue (RGB) digital camera equipped with a longpass filter to block reflected light. Results: Cholesteatoma tissue fluoresced under 405 and 450 nm illumination; middle ear mucosa did not fluoresce under the same illumination and measurement conditions, and all measurements were negligible under 520 nm illumination. The spectroradiometric measurements of cholesteatoma fluorescence are well predicted by a linear combination of emissions from keratin and flavin adenine dinucleotide (FAD). We built a prototype fluorescence imaging system using a 495 nm longpass filter in combination with an RGB camera and used it to capture calibrated digital camera images of cholesteatoma and mucosa tissue samples. The images confirm that cholesteatoma emits light when illuminated at 405 and 450 nm, whereas mucosa does not. Conclusions: We prototyped an imaging system capable of measuring cholesteatoma tissue autofluorescence.
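The following minimal sketch illustrates the kind of spectral unmixing the abstract describes: fitting a measured emission spectrum as a non-negative linear combination of keratin and FAD basis spectra. The wavelength grid and Gaussian basis shapes are illustrative assumptions, not the paper's data.

import numpy as np
from scipy.optimize import nnls

# Assumed wavelength sampling grid (nm) and placeholder emission basis shapes.
wavelengths = np.arange(450, 701, 5)

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

keratin_emission = gaussian(480, 40)     # placeholder keratin emission shape
fad_emission = gaussian(530, 45)         # placeholder FAD emission shape
basis = np.column_stack([keratin_emission, fad_emission])

# Synthetic "measured" cholesteatoma spectrum: mostly keratin, some FAD, plus noise.
measured = 0.7 * keratin_emission + 0.3 * fad_emission
measured += np.random.default_rng(0).normal(0, 0.01, measured.shape)

# Non-negative least squares recovers the contribution of each fluorophore.
weights, residual = nnls(basis, measured)
print(f"keratin weight = {weights[0]:.2f}, FAD weight = {weights[1]:.2f}, residual = {residual:.3f}")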


Subject(s)
Cholesteatoma, Middle Ear; Humans; Cholesteatoma, Middle Ear/diagnostic imaging; Cholesteatoma, Middle Ear/surgery; Cholesteatoma, Middle Ear/pathology; Ear, Middle/pathology; Mucous Membrane/pathology; Mastoid/pathology; Mastoid/surgery; Optical Imaging
2.
J Biomed Opt; 28(1): 016004, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36726664

ABSTRACT

Significance: Accurate identification of tissues is critical for performing safe surgery. Combining multispectral imaging (MSI) with deep learning is a promising approach to increasing tissue discrimination and classification. Evaluating the contribution of each spectral channel to tissue discrimination is important for improving MSI systems. Aim: Develop a metric to quantify the contributions of individual spectral channels to tissue classification in MSI. Approach: MSI was integrated into a digital operating microscope with three sensors and seven illuminants. Two convolutional neural network (CNN) models were trained to classify 11 head and neck tissue types using white-light (RGB) or MSI images. The signal-to-noise ratio (SNR) of each spectral channel was compared with that channel's impact on tissue classification performance, as determined using CNN visualization methods. Results: Overall tissue classification accuracy was higher with MSI images than with RGB images, both for classification of all 11 tissue types and for binary classification of nerve versus parotid (p < 0.001). Removing spectral channels with SNR > 20 reduced tissue classification accuracy. Conclusions: The spectral channel SNR is a useful metric both for understanding CNN tissue classification and for quantifying the contributions of different spectral channels in an MSI system.
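As an illustration of the SNR metric discussed above, the sketch below computes a per-channel signal-to-noise ratio from repeated captures of a uniform patch and flags channels above the SNR > 20 level mentioned in the results. The data, channel count, and thresholding procedure are assumptions for illustration only.

import numpy as np

def channel_snr(stack):
    """SNR (mean / standard deviation across repeated captures) for each channel."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0, ddof=1)
    return mean / np.maximum(std, 1e-12)

rng = np.random.default_rng(1)
n_captures, n_channels = 50, 21          # e.g., 3 sensors x 7 illuminants (assumed)
true_signal = rng.uniform(50, 200, n_channels)
noise_sigma = rng.uniform(1, 20, n_channels)
stack = true_signal + rng.normal(0, noise_sigma, (n_captures, n_channels))

snr = channel_snr(stack)
high_snr_channels = np.flatnonzero(snr > 20)   # channels above the SNR > 20 level
print("high-SNR channels:", high_snr_channels)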


Subject(s)
Deep Learning; Humans; Signal-To-Noise Ratio; Neural Networks, Computer; Diagnostic Imaging
3.
Opt Express; 30(13): 24031-24047, 2022 Jun 20.
Article in English | MEDLINE | ID: mdl-36225073

ABSTRACT

Combining image sensor simulation tools with physically based ray tracing enables the design and evaluation (soft prototyping) of novel imaging systems. These methods can also synthesize physically accurate, labeled images for machine learning applications. One practical limitation of soft prototyping has been simulating the optics precisely: lens manufacturers generally prefer to keep lens designs confidential. We present a pragmatic solution to this problem using a black-box lens model in Zemax; such models provide the necessary optical information while preserving the lens designer's intellectual property. We describe and provide software to construct a polynomial ray-transfer function that characterizes how rays entering the lens at any position and angle subsequently exit the lens. We implement the ray-transfer calculation as a camera model in PBRT and confirm that the PBRT ray-transfer calculations match the Zemax lens calculations for edge spread functions and relative illumination.
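A rough sketch of the polynomial ray-transfer idea is shown below: sample rays at the lens entrance plane, trace them through a stand-in optical model (here a paraxial thin lens, in place of the confidential Zemax black-box data), and fit polynomials mapping entrance height and angle to exit height and angle. The polynomial degree, sampling ranges, and lens parameters are assumptions.

import numpy as np

def poly_features(y, u, degree=3):
    """Monomials y**i * u**j with i + j <= degree, including the constant term."""
    cols = [np.ones_like(y)]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            cols.append(y ** (d - i) * u ** i)
    return np.column_stack(cols)

rng = np.random.default_rng(2)
y_in = rng.uniform(-5.0, 5.0, 2000)      # entrance ray height (mm)
u_in = rng.uniform(-0.2, 0.2, 2000)      # entrance ray angle (rad)

# Stand-in "black box": paraxial thin lens (f = 50 mm) plus 10 mm of propagation.
f, d = 50.0, 10.0
u_out = u_in - y_in / f
y_out = y_in + d * u_out

# Fit polynomial maps from (y_in, u_in) to the exit-plane ray parameters.
X = poly_features(y_in, u_in)
coeff_y, *_ = np.linalg.lstsq(X, y_out, rcond=None)
coeff_u, *_ = np.linalg.lstsq(X, u_out, rcond=None)

# Evaluate the fitted ray-transfer polynomial for one test ray.
test = poly_features(np.array([1.0]), np.array([0.05]))
print(f"exit height: {(test @ coeff_y)[0]:.4f} mm, exit angle: {(test @ coeff_u)[0]:.4f} rad")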

4.
Biomed Opt Express; 12(7): 4276-4292, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34457414

ABSTRACT

We describe an end-to-end image systems simulation that models a device capable of measuring fluorescence in the oral cavity. Our software includes a 3D model of the oral cavity and excitation-emission matrices of endogenous fluorophores that predict the spectral radiance of oral mucosal tissue. The predicted radiance is transformed by a model of the optics and image sensor to generate expected sensor image values. We compare simulated and real camera data from tongues in healthy individuals and show that the camera sensor chromaticity values can be used to quantify the fluorescence from porphyrins relative to the bulk fluorescence from multiple fluorophores (elastin, NADH, FAD, and collagen). Validation of the simulations supports the use of soft-prototyping in guiding system design for fluorescence imaging.
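The sketch below illustrates the excitation-emission matrix (EEM) formulation behind such a simulation: the emitted radiance is the weighted sum, over fluorophores, of each EEM applied to the excitation spectrum. The EEM shapes, fluorophore weights, and wavelength grid are placeholders, not the tissue model used in the paper.

import numpy as np

wave = np.arange(380, 701, 4)            # assumed wavelength grid (nm)

def gaussian(center, width):
    return np.exp(-0.5 * ((wave - center) / width) ** 2)

def eem(ex_center, em_center, ex_width=20, em_width=30):
    """Outer-product EEM: rows are emission wavelengths, columns are excitation wavelengths."""
    return np.outer(gaussian(em_center, em_width), gaussian(ex_center, ex_width))

# Placeholder EEMs and weights for a few oral-cavity fluorophores.
fluorophores = {
    "collagen":  (0.5, eem(340, 400)),
    "NADH":      (0.3, eem(350, 460)),
    "FAD":       (0.4, eem(450, 530)),
    "porphyrin": (0.1, eem(405, 635, em_width=15)),
}

excitation = gaussian(405, 10)           # narrowband 405 nm excitation light

# Emitted radiance = sum over fluorophores of weight * (EEM @ excitation spectrum).
radiance = sum(w * (M @ excitation) for w, M in fluorophores.values())
print(f"predicted emission peak near {wave[np.argmax(radiance)]} nm")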

5.
Otolaryngol Head Neck Surg; 164(2): 328-335, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32838646

ABSTRACT

OBJECTIVE: Safe surgery requires the accurate discrimination of tissue intraoperatively. We assess the feasibility of using multispectral imaging and deep learning to enhance surgical vision by automated identification of normal human head and neck tissues. STUDY DESIGN: Construction and feasibility testing of novel multispectral imaging system for surgery. SETTING: Academic university hospital. SUBJECTS AND METHODS: Multispectral images of fresh-preserved human cadaveric tissues were captured with our adapted digital operating microscope. Eleven tissue types were sampled, each sequentially exposed to 6 lighting conditions. Two convolutional neural network machine learning models were developed to classify tissues based on multispectral and white-light color images (ARRInet-M and ARRInet-W, respectively). Blinded otolaryngology residents were asked to identify tissue specimens from white-light color images, and their performance was compared with that of the ARRInet models. RESULTS: A novel multispectral imaging system was developed with minimal adaptation to an existing digital operating microscope. With 81.8% accuracy in tissue identification of full-size images, the multispectral ARRInet-M classifier outperformed the white-light-only ARRInet-W model (45.5%) and surgical residents (69.7%). Challenges with discrimination occurred with parotid vs fat and blood vessels vs nerve. CONCLUSIONS: A deep learning model using multispectral imaging outperformed a similar model and surgical residents using traditional white-light imaging at the task of classifying normal human head and neck tissue ex vivo. These results suggest that multispectral imaging can enhance surgical vision and augment surgeons' ability to identify tissues during a procedure.
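For orientation, the sketch below shows a minimal convolutional classifier of the kind described: it takes a stacked multispectral image and produces scores for 11 tissue classes. It is not the published ARRInet architecture; the channel count (6 lighting conditions x 3 RGB channels), image size, and layer sizes are assumptions.

import torch
import torch.nn as nn

class TissueCNN(nn.Module):
    """Small CNN mapping a stacked multispectral image to 11 tissue-class scores."""

    def __init__(self, in_channels=18, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TissueCNN()
batch = torch.randn(4, 18, 128, 128)     # 4 specimens, 6 illuminants x 3 RGB channels (assumed)
print(model(batch).shape)                # torch.Size([4, 11])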


Subject(s)
Machine Learning; Multimodal Imaging/instrumentation; Neural Networks, Computer; Surgical Procedures, Operative; Cadaver; Equipment Design; Humans
6.
Article in English | MEDLINE | ID: mdl-32112682

ABSTRACT

There is widespread interest in estimating the fluorescence properties of natural materials in an image. However, separating the reflected and fluoresced components is difficult because reflected and fluoresced photons cannot be distinguished without controlling the illuminant spectrum. We show how to jointly estimate reflectance and fluorescence from a single set of images acquired under multiple illuminants. We present a framework based on a linear approximation to the physical equations describing image formation in terms of surface spectral reflectance and fluorescence due to multiple fluorophores. We relax the non-convex inverse estimation problem so that the reflectance and fluorescence properties can be jointly estimated in a single optimization step. We provide a software implementation of the solver for our method and for prior methods. We evaluate the accuracy and reliability of the method using both simulations and experimental data. To evaluate the methods experimentally, we built a custom imaging system using a monochrome camera, a filter wheel with bandpass transmissive filters, and a small number of light-emitting diodes. We compared the methods based on our framework with the ground truth as well as with prior methods.
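The sketch below illustrates the linear image-formation model underlying this kind of joint estimation: under illuminant l_k the measurement is m_k = diag(l_k) r + D l_k, with reflectance r and a Donaldson (excitation-emission) matrix D. Constraining D to be strictly lower triangular (emission only at wavelengths longer than excitation) makes the system linear in the unknowns, so it can be solved by least squares across several illuminants. All spectra and dimensions are placeholders; the published solver adds further constraints and a convex relaxation.

import numpy as np

rng = np.random.default_rng(3)
n_wave, n_illum = 20, 30

# Ground-truth surface: smooth reflectance and a Stokes-shifted Donaldson matrix.
r_true = 0.3 + 0.4 * np.sin(np.linspace(0, np.pi, n_wave))
emission = np.exp(-0.5 * ((np.arange(n_wave) - 14) / 2.0) ** 2)
absorption = np.exp(-0.5 * ((np.arange(n_wave) - 5) / 2.0) ** 2)
D_true = np.tril(0.05 * np.outer(emission, absorption), k=-1)

illuminants = rng.uniform(0.0, 1.0, (n_illum, n_wave))
measurements = np.array([L * r_true + D_true @ L for L in illuminants])

# Unknowns: reflectance r plus the strictly lower-triangular entries of D.
tril = [(i, j) for i in range(n_wave) for j in range(i)]

def design_row(L, i):
    row = np.zeros(n_wave + len(tril))
    row[i] = L[i]                                   # reflected component
    for k, (ii, jj) in enumerate(tril):
        if ii == i:
            row[n_wave + k] = L[jj]                 # fluoresced component
    return row

A = np.vstack([design_row(L, i) for L in illuminants for i in range(n_wave)])
x, *_ = np.linalg.lstsq(A, measurements.ravel(), rcond=None)
r_est = x[:n_wave]
print("max reflectance error:", np.abs(r_est - r_true).max().round(8))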

7.
IEEE Trans Image Process; 26(10): 5032-5042, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28613172

ABSTRACT

Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
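The sketch below illustrates the local-linear-filter idea in miniature: patches are grouped into classes by a simple local statistic, and a separate linear filter is learned for each class by least squares from (sensor, target) training pairs. The patch size, class definition, and synthetic training data are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(4)
patch = 5                                   # assumed local neighborhood size
n_train, n_classes = 5000, 4

# Synthetic training set: noisy sensor patches and their "ideal" center values.
clean = rng.uniform(0, 1, (n_train, patch * patch))
sensor = clean + rng.normal(0, 0.05, clean.shape)
target = clean[:, (patch * patch) // 2]     # ideal center pixel

# Assign each patch to a class by its mean level and fit one linear filter per class.
levels = sensor.mean(axis=1)
edges = np.quantile(levels, np.linspace(0, 1, n_classes + 1))
labels = np.clip(np.searchsorted(edges, levels, side="right") - 1, 0, n_classes - 1)

filters = np.zeros((n_classes, patch * patch))
for c in range(n_classes):
    rows = labels == c
    filters[c], *_ = np.linalg.lstsq(sensor[rows], target[rows], rcond=None)

# At run time: classify a new patch, then apply that class's linear filter.
new_patch = sensor[0]
c = np.clip(np.searchsorted(edges, new_patch.mean(), side="right") - 1, 0, n_classes - 1)
print("estimated center pixel:", float(new_patch @ filters[c]))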

8.
J Vis; 13(12): 16, 2013 Oct 23.
Article in English | MEDLINE | ID: mdl-24155345

ABSTRACT

Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to characterize newly available displays based on organic light-emitting diode (OLED) panels and to determine how well they can serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. With correct adjustment of the settings, both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays offer adjustable pixel independence and can be set to have little to no spatial interaction between pixels. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.
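As a small illustration of the gamma-correction result, the sketch below fits measured luminance as a power function of digital input level. The measurements here are synthetic placeholders standing in for spectroradiometer readings.

import numpy as np
from scipy.optimize import curve_fit

def power_law(level, L_max, gamma):
    """Display luminance model: L = L_max * (level / 255) ** gamma."""
    return L_max * (level / 255.0) ** gamma

levels = np.arange(17, 256, 17)                    # sampled digital input levels
rng = np.random.default_rng(5)
true_Lmax, true_gamma = 180.0, 2.2                 # illustrative peak luminance (cd/m^2) and exponent
luminance = power_law(levels, true_Lmax, true_gamma) * (1 + rng.normal(0, 0.01, levels.size))

popt, _ = curve_fit(power_law, levels, luminance, p0=[100.0, 2.0])
print(f"fitted L_max = {popt[0]:.1f} cd/m^2, gamma = {popt[1]:.2f}")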


Subject(s)
Biomedical Research; Computer Terminals; Data Display; Lighting/instrumentation; Organic Chemicals/chemistry; Signal Processing, Computer-Assisted; Humans; Spatio-Temporal Analysis; Tin Compounds/chemistry
9.
Appl Opt; 51(4): A80-90, 2012 Feb 01.
Article in English | MEDLINE | ID: mdl-22307132

ABSTRACT

We describe a simulation of the complete image processing pipeline of a digital camera, beginning with a radiometric description of the scene captured by the camera and ending with a radiometric description of the image rendered on a display. We show that there is a good correspondence between measured and simulated sensor performance. Through the use of simulation, we can quantify the effects of individual digital camera components on system performance and image quality. This computational approach can be helpful for both camera design and image quality assessment.
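The sketch below illustrates one stage of such a simulation: converting expected photons per pixel into digital values with photon shot noise, full-well clipping, read noise, and quantization. The pixel parameters are illustrative defaults, not calibrated values for any particular camera.

import numpy as np

def simulate_sensor(photons, qe=0.5, well_capacity=9000, read_noise=2.0,
                    bits=10, gain=None, rng=None):
    """Map expected photons per pixel to quantized digital numbers."""
    rng = np.random.default_rng() if rng is None else rng
    electrons = rng.poisson(qe * photons)                   # photon shot noise
    electrons = np.minimum(electrons, well_capacity)        # full-well clipping
    electrons = electrons + rng.normal(0, read_noise, electrons.shape)
    gain = (2 ** bits - 1) / well_capacity if gain is None else gain
    dn = np.clip(np.round(electrons * gain), 0, 2 ** bits - 1)
    return dn.astype(np.int32)

# Example: a smooth irradiance ramp imaged by the model sensor.
photons = np.linspace(100, 8000, 256).reshape(16, 16)
image = simulate_sensor(photons, rng=np.random.default_rng(6))
print(image.min(), image.max())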


Subject(s)
Computer-Aided Design; Image Interpretation, Computer-Assisted/instrumentation; Image Interpretation, Computer-Assisted/methods; Models, Theoretical; Photography/instrumentation; Signal Processing, Computer-Assisted/instrumentation; Transducers; Computer Simulation; Equipment Design; Equipment Failure Analysis; Reproducibility of Results; Sensitivity and Specificity
10.
Appl Opt; 51(4): ISA1, 2012 Feb 01.
Article in English | MEDLINE | ID: mdl-22307134

ABSTRACT

Imaging systems are used in consumer, medical, and military applications. Designing, developing, and building imaging systems requires a multidisciplinary approach. This issue features current research in imaging systems, ranging from fundamental theory to novel applications. Although the collected papers are diverse, together they provide a systems perspective on imaging.
