Results 1 - 20 of 44
1.
Sci Rep ; 14(1): 5896, 2024 03 11.
Article in English | MEDLINE | ID: mdl-38467700

ABSTRACT

How the human eye focuses for near (i.e., accommodates) is still being evaluated after more than 165 years. The mechanism of accommodation is essential for understanding the etiology and potential treatments of myopia, glaucoma and presbyopia. Presbyopia affects 100% of the population in the fifth decade of life. The lens is encased in a semi-elastic capsule with attached ligaments, called zonules, that mediate ciliary muscle forces to alter lens shape. The zonules are attached at the lens capsule equator. The fundamental issue is whether, during accommodation, all the zonules relax, causing the central and peripheral lens surfaces to steepen, or the equatorial zonules come under increased tension while the anterior and posterior zonules relax, causing the lens surface to flatten peripherally and steepen centrally while maintaining lens stability. Here we show, with a balloon capsule zonular force model, that increased equatorial zonular tension with relaxation of the anterior and posterior zonules replicates the topographical changes observed in the lens capsule (without lens stroma) during in vivo rhesus and human accommodation. The zonular forces required to simulate the lens capsule configuration during in vivo accommodation are inconsistent with the general belief that all the zonules relax during accommodation.


Subjects
Lens Capsule, Lens, Presbyopia, Animals, Humans, Ocular Accommodation, Lens/physiology, Macaca mulatta
2.
Article in English | MEDLINE | ID: mdl-38329870

ABSTRACT

Effective use of gaze and head orientation can strengthen the sense of inclusion in multi-party interactions, including job interviews. Not making significant eye contact with the interlocutors, or not turning towards them, may be interpreted as disinterest, which could worsen job interview outcomes. This study aims to support the situational solo practice of gaze behavior and head orientation using a triadic (three-way) virtual reality (VR) job interview simulation. The system lets users encounter common interview questions and see how they share attention among the interviewers based on their conversational role (speaking or listening). Given the yaw and position readings of the VR headset, we use a machine learning-based approach to analyze head orientations relative to the interviewers in the virtual environment, and achieve low angular error at low computational cost. We examine the degree to which interviewer backchannels trigger attention shifts or behavioral mirroring and investigate the social modulation of gaze and head orientation for autistic and non-autistic individuals. In both speaking and listening roles, the autistic participants gazed at and oriented towards the two virtual interviewers less often, and they displayed less behavioral mirroring (mirroring the head turn of one avatar towards another) compared with the non-autistic participants.
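The abstract does not give the angular-analysis details; as a rough, hedged sketch of the underlying geometry (not the paper's machine-learning approach), head orientation relative to each virtual interviewer can be derived from the headset's yaw and position readings. The interviewer positions, yaw convention, and threshold below are illustrative assumptions.

```python
import numpy as np

def yaw_offset_to_target(head_pos, head_yaw_deg, target_pos):
    """Angular difference (degrees) between the headset's facing direction and
    the direction from the head to a target, in the horizontal plane.
    Positions are (x, z) coordinates; yaw of 0 deg is assumed to face +z."""
    dx, dz = target_pos[0] - head_pos[0], target_pos[1] - head_pos[1]
    bearing = np.degrees(np.arctan2(dx, dz))               # direction to target
    diff = (head_yaw_deg - bearing + 180.0) % 360.0 - 180.0
    return abs(diff)

# Hypothetical seated positions (metres) of the two virtual interviewers.
interviewers = {"left": (-0.6, 1.5), "right": (0.6, 1.5)}

def attended_interviewer(head_pos, head_yaw_deg, threshold_deg=15.0):
    """Label a sample with whichever interviewer lies within the yaw threshold,
    or None when the participant faces neither."""
    offsets = {k: yaw_offset_to_target(head_pos, head_yaw_deg, p)
               for k, p in interviewers.items()}
    best = min(offsets, key=offsets.get)
    return best if offsets[best] <= threshold_deg else None
```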


Subjects
Autistic Disorder, Virtual Reality, Humans, Attention, Communication, Avatar
3.
Exp Eye Res ; 237: 109709, 2023 12.
Article in English | MEDLINE | ID: mdl-37923162

ABSTRACT

To determine the effect of zonular forces on lens capsule topography, finite element (FE) analyses of lens capsules without lens stroma, of constant and of variable thickness, and with anterior capsulotomies of 1.5 mm to 6.5 mm were evaluated when subjected to equatorial (Ez), anterior (Az) and posterior (Pz) zonular forces. The lens capsule was considered to be in the unaccommodated state when the total initial zonular force was 0.00075 N or 0.3 N. From the total 0.00075 N zonular force, the Ez force was increased in 0.000125 N steps to a maximum force of 0.03 N while the combined Az plus Pz force was simultaneously reduced in 0.000125 N steps to zero. In addition, the force of all the zonules was reduced in 0.000125 N steps to zero, starting from 0.00075 N and, separately, from 0.3 N. Only when the Ez force was increased as the Az and Pz forces were reduced did the capsule topography simulate in vivo observations, with the posterior capsule pole bowing posteriorly. The posterior bowing was directly related to Ez force and capsulotomy size. When the total force of all the zonules in the unaccommodated state, whether 0.00075 N or 0.3 N, was reduced in steps to zero, the lens capsule topography did not emulate the in vivo observations. The FE analysis demonstrated that Ez tension increases while Az and Pz tension decreases, and that not all the zonules relax during ciliary muscle contraction.
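For readers who want to reproduce the force schedule described above, a minimal sketch follows using only the numbers stated in the abstract; how the combined anterior-plus-posterior force is split between Az and Pz, and the initial partition of the 0.00075 N total, are assumptions.

```python
# Stepping protocol from the abstract: starting from a total zonular force of
# 0.00075 N, increase the equatorial (Ez) force in 0.000125 N steps up to
# 0.03 N while reducing the combined anterior + posterior (Az + Pz) force in
# 0.000125 N steps to zero.  The initial Ez share and even Az/Pz split are
# assumptions, not values given in the abstract.
STEP = 0.000125
TOTAL0 = 0.00075
ez, az_pz = TOTAL0 / 3, 2 * TOTAL0 / 3   # assumed initial partition

schedule = []
while ez <= 0.03 + 1e-12:
    schedule.append({"Ez": ez, "Az": az_pz / 2, "Pz": az_pz / 2})
    ez += STEP
    az_pz = max(az_pz - STEP, 0.0)
```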


Subjects
Lens Capsule, Lens, Finite Element Analysis, Lens/physiology, Lens Capsule/physiology, Ciliary Body, Smooth Muscle
4.
Article in English | MEDLINE | ID: mdl-35969548

ABSTRACT

Gaze behavior in dyadic conversations can indicate active listening and attention. However, gaze behavior that is different from the engagement expected during neurotypical social interaction cues may be interpreted as uninterested or inattentive, which can be problematic in both personal and professional situations. Neurodivergent individuals, such as those with autism spectrum conditions, often exhibit social communication differences broadly including via gaze behavior. This project aims to support situational social gaze practice through a virtual reality (VR) mock job interview practice using the HTC Vive Pro Eye VR headset. We show how gaze behavior varies in the mock job interview between neurodivergent and neurotypical participants. We also investigate the social modulation of gaze behavior based on conversational role (speaking and listening). Our three main contributions are: (i) a system for fully-automatic analysis of social modulation of gaze behavior using a portable VR headset with a novel realistic mock job interview, (ii) a signal processing pipeline, which employs Kalman filtering and spatial-temporal density-based clustering techniques, that can improve the accuracy of the headset's built-in eye-tracker, and (iii) being the first to investigate social modulation of gaze behavior among neurotypical/divergent individuals in the realm of immersive VR.
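The abstract names the two processing stages (Kalman filtering and density-based clustering) without implementation details; the sketch below is a generic stand-in, assuming a constant-velocity Kalman filter per gaze coordinate and plain DBSCAN from scikit-learn rather than the paper's spatial-temporal variant. Sampling rate, noise levels, and clustering parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def kalman_smooth_1d(z, dt=1/90, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over one gaze coordinate.
    z: array of noisy measurements; returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    x, P = np.array([[z[0]], [0.0]]), np.eye(2)
    out = []
    for zi in z:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zi]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

def cluster_fixations(gaze_xy, eps=0.05, min_samples=5):
    """gaze_xy: N x 2 array of (x, y) gaze samples; the paper's clustering is
    spatial-temporal, whereas this stand-in ignores time."""
    xy = np.column_stack([kalman_smooth_1d(gaze_xy[:, 0]),
                          kalman_smooth_1d(gaze_xy[:, 1])])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    return labels   # -1 marks noise; other labels group samples into fixations
```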


Subjects
Autism Spectrum Disorder, Autistic Disorder, Virtual Reality, Attention, Ocular Fixation, Humans
5.
Science ; 377(6601): 35-37, 2022 07.
Article in English | MEDLINE | ID: mdl-35771928

ABSTRACT

Some bias persisted, but rubric use should be encouraged.

6.
IEEE Trans Image Process ; 31: 2175-2189, 2022.
Article in English | MEDLINE | ID: mdl-35196238

ABSTRACT

Due to limited transmission resources and storage capacity, efficient rate control is important in Video-based Point Cloud Compression (V-PCC). In this paper, we propose a learning-based rate control method to improve the rate-distortion (RD) performance of V-PCC. A low-latency synchronous rate control structure is designed to reduce the overhead of pre-coding. The basic unit (BU) parameters are predicted accurately based on our proposed CNN-LSTM neural network, instead of the online updating approach, which can be inaccurate due to low consistency between adjacent 2D frames in V-PCC. When determining the quantization parameters for the BU, a patch-based clipping method is proposed to avoid unnecessary clipping. This approach is able to improve the RD performance and subjective dynamic point cloud quality. Experiments show that our proposed rate control method outperforms present approaches.
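As a hedged illustration of the CNN-LSTM idea named in the abstract (not the paper's network), the PyTorch sketch below extracts per-frame features with a small CNN, carries them across frames with an LSTM, and regresses the basic-unit parameters; all layer sizes, inputs, and the number of predicted parameters are assumptions.

```python
import torch
import torch.nn as nn

class BUParamPredictor(nn.Module):
    """CNN-LSTM sketch: per-frame CNN features -> LSTM across frames ->
    predicted basic-unit rate-control parameters (sizes are assumptions)."""
    def __init__(self, n_params=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_params)

    def forward(self, frames):
        # frames: (batch, time, 1, H, W) basic-unit luma patches
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)   # (b*t, 32)
        out, _ = self.lstm(feats.view(b, t, -1))             # (b, t, hidden)
        return self.head(out)                                 # (b, t, n_params)

# Example: predict parameters for a batch of 4 sequences of 8 64x64 patches.
pred = BUParamPredictor()(torch.randn(4, 8, 1, 64, 64))
```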

7.
IEEE Trans Image Process ; 30: 1245-1260, 2021.
Article in English | MEDLINE | ID: mdl-33315563

ABSTRACT

Intra/inter switching-based error resilient video coding effectively enhances the robustness of video streaming when transmitting over error-prone networks, but it has high computational complexity due to the detailed end-to-end distortion prediction and the brute-force search for rate-distortion optimization. In this article, a Low Complexity Mode Switching based Error Resilient Encoding (LC-MSERE) method is proposed to reduce the complexity of the encoder through a deep learning approach. By designing and training multi-scale information fusion-based convolutional neural networks (CNNs), intra and inter mode coding unit (CU) partitions can be predicted by the networks rapidly and accurately, instead of using brute-force search and a large number of end-to-end distortion estimations. For intra CU partition prediction, we propose a spatial multi-scale information fusion-based CNN (SMIF-Intra). In this network, a shortcut convolution architecture is designed to learn the multi-scale and multi-grained image information that is correlated with the CU partition. For inter CU partition prediction, we propose a spatial-temporal multi-scale information fusion-based CNN (STMIF-Inter), in which a two-stream convolution architecture is designed to learn the spatial-temporal image texture and the distortion propagation among frames. With information from the image and from the coding and transmission parameters, the networks are able to accurately predict CU partitions for both intra and inter coding tree units (CTUs). Experiments show that our approach significantly reduces the computation time of error resilient video encoding with an acceptable quality decrease.

8.
IEEE Trans Image Process ; 27(10): 4901-4915, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29969400

ABSTRACT

When images and videos are displayed on a mobile device in bright ambient illumination, fewer details can be perceived than in the dark. The detail loss in dark areas of the images/videos is usually more severe. The reflected ambient light and the reduced sensitivity of the viewer's eyes are the major factors. We propose two tone mapping operators to enhance the contrast and details of images/videos. One is content independent and thus can be applied to any image/video for a given device and a given ambient illumination. The other tone mapping operator uses simple statistics of the content. Display contrast and human visual adaptation are considered in constructing the tone mapping operators. Both operators can be computed efficiently. Subjective tests and objective measurements show the improved quality achieved by the proposed methods.
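The operators themselves are not specified in the abstract; the sketch below only illustrates the generic compensation idea, assuming a Lambertian screen-reflection model and a simple gamma display model, and is not one of the proposed operators.

```python
import numpy as np

def ambient_aware_tonemap(img, ambient_lux, peak_nits=500.0,
                          reflectance=0.05, gamma=2.2):
    """Content-independent sketch: estimate the luminance reflected off the
    screen and lift dark pixels above that floor so shadow detail stays
    visible.  img is in [0, 1]; all constants are illustrative assumptions."""
    l_refl = ambient_lux * reflectance / np.pi       # reflected light, in nits
    floor = l_refl / (peak_nits + l_refl)            # relative black-level rise
    lum = np.power(img, gamma)                       # relative display luminance
    lifted = floor + (1.0 - floor) * lum             # keep output within range
    return np.power(lifted, 1.0 / gamma)
```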

9.
IEEE Trans Image Process ; 27(6): 2856-2868, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29570087

ABSTRACT

Images degraded by light scattering and absorption, such as hazy, sandstorm, and underwater images, often suffer color distortion and low contrast because of light traveling through turbid media. In order to enhance and restore such images, we first estimate ambient light using the depth-dependent color change. Then, via calculating the difference between the observed intensity and the ambient light, which we call the scene ambient light differential, scene transmission can be estimated. Additionally, adaptive color correction is incorporated into the image formation model (IFM) for removing color casts while restoring contrast. Experimental results on various degraded images demonstrate the new method outperforms other IFM-based methods subjectively and objectively. Our approach can be interpreted as a generalization of the common dark channel prior (DCP) approach to image restoration, and our method reduces to several DCP variants for different special cases of ambient lighting and turbid medium conditions.
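The restoration step builds on the common image formation model; a minimal sketch of inverting that model, given an ambient-light estimate and a transmission map, is shown below. The paper's estimation of these quantities and its adaptive color correction are not reproduced, and the lower transmission bound is an assumption.

```python
import numpy as np

def restore_ifm(img, ambient, transmission, t_min=0.1):
    """Invert the common image formation model I = J * t + A * (1 - t)
    per colour channel, given an ambient-light estimate A and a transmission
    map t.  img: HxWx3 in [0, 1]; ambient: length-3; transmission: HxW."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
    A = np.asarray(ambient).reshape(1, 1, 3)
    J = (img - A) / t + A                              # recovered scene radiance
    return np.clip(J, 0.0, 1.0)
```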

10.
IEEE Trans Image Process ; 26(4): 1579-1594, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28182556

ABSTRACT

Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.
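As a rough, hedged illustration only (not the paper's estimator), a crude blurriness map can be obtained by comparing an image with re-blurred copies of itself; combining it with light-absorption cues, as the paper does, is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def rough_blurriness_map(gray, sigmas=(1, 2, 4), patch=7):
    """Crude blurriness proxy: regions that barely change when re-blurred are
    already blurry (often farther away underwater).  gray: HxW in [0, 1]."""
    diffs = [np.abs(gray - gaussian_filter(gray, s)) for s in sigmas]
    sharpness = np.mean(diffs, axis=0)                 # high where detail survives
    sharpness = maximum_filter(sharpness, size=patch)  # spread to flat neighbours
    return 1.0 - sharpness / (sharpness.max() + 1e-8)  # high value = blurrier
```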

11.
J Xray Sci Technol ; 23(4): 435-51, 2015.
Article in English | MEDLINE | ID: mdl-26410655

ABSTRACT

BACKGROUND: In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Although metal artifact reduction (MAR) methods exist, mainly in the medical imaging literature, they either require knowledge of the materials in the scan or are outlier-rejection methods. OBJECTIVE: To improve and evaluate a MAR method we previously introduced that does not require knowledge of the materials in the scan and gives good results on data with large quantities and different kinds of metal. METHODS: We describe in detail an optimization that de-emphasizes metal projections and includes a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm on luggage data containing multiple and large metal objects. We define measures of artifact reduction and compare this method against others in the MAR literature. RESULTS: Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. CONCLUSION: Our MAR method outperforms the methods with which we compared it. Our approach does not make assumptions about image content, nor does it discard metal projections.


Subjects
Algorithms, Artifacts, Aviation, Computer-Assisted Image Processing/methods, Security Measures, X-Ray Computed Tomography/methods, Metals, Imaging Phantoms, Travel
12.
Article in English | MEDLINE | ID: mdl-26736672

ABSTRACT

Registration is difficult when images to be registered contain sparse but large-valued differences. We present a method for robust registration that ignores some fraction of large differences, while constraining the sparseness of these errors. We apply the method to stabilize microscopy videos of C. elegans tissues, in which bright moving filaments and tissue wounding appear as sparse large-valued differences. We demonstrate the advantage of the method on both synthetic and real data compared to state-of-the-art methods.
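A toy version of the core idea, ignoring a fixed fraction of the largest per-pixel differences when scoring an alignment, is sketched below for integer translations only; the paper's sparsity-constrained formulation and transformation model are not reproduced.

```python
import numpy as np

def trimmed_ssd(a, b, trim_frac=0.1):
    """Sum of squared differences after discarding the largest trim_frac of
    per-pixel differences (the sparse, large-valued outliers)."""
    d = (a - b).ravel() ** 2
    keep = int(np.ceil((1.0 - trim_frac) * d.size))
    return np.sort(d)[:keep].sum()

def register_translation(fixed, moving, max_shift=10, trim_frac=0.1):
    """Exhaustive search over integer shifts, scoring with the trimmed SSD.
    np.roll wraps around the border, which is acceptable for this toy."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = trimmed_ssd(fixed, shifted, trim_frac)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```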


Subjects
Computer-Assisted Image Processing, Algorithms, Animals, Caenorhabditis elegans, Fluorescence Microscopy
13.
IEEE Trans Biomed Eng ; 62(4): 1020-33, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24771564

ABSTRACT

Quantitative analysis of cell shape in live samples is an important goal in developmental biology. Automated or semi-automated segmentation and tracking of cell nuclei has been successfully implemented in several biological systems. Segmentation and tracking of cell surfaces has been more challenging. Here, we present a new approach to tracking cell junctions in the developing epidermis of C. elegans embryos. Epithelial junctions as visualized with DLG-1::GFP form lines at the subapical circumference of differentiated epidermal cells and delineate changes in epidermal cell shape and position. We develop and compare two approaches for junction segmentation. For the first method (projection approach), 3-D cell boundaries are projected into 2D for segmentation using active contours with a nonintersecting force, and subsequently tracked using scale-invariant feature transform (SIFT) flow. The resulting 2-D tracked boundaries are then back-projected into 3-D space. The second method (volumetric approach) uses a 3-D extended version of active contours guided by SIFT flow in 3-D space. In both methods, cell junctions are manually located at the first time point and tracked in a fully automated way for the remainder of the video. Using these methods, we have generated the first quantitative description of ventral epidermal cell movements and shape changes during epidermal enclosure.


Subjects
Caenorhabditis elegans/embryology, Nonmammalian Embryo/physiology, Computer-Assisted Image Processing/methods, Tight Junctions/physiology, Time-Lapse Imaging/methods, Algorithms, Animals, Caenorhabditis elegans/chemistry, Factual Databases, Nonmammalian Embryo/chemistry, Confocal Microscopy, Tight Junctions/chemistry
14.
Development ; 141(22): 4354-65, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25344071

ABSTRACT

Wnt signals orient mitotic spindles in development, but it remains unclear how Wnt signaling is spatially controlled to achieve precise spindle orientation. Here, we show that C. elegans syndecan (SDN-1) is required for precise orientation of a mitotic spindle in response to a Wnt cue. We find that SDN-1 is the predominant heparan sulfate (HS) proteoglycan in the early C. elegans embryo, and that loss of HS biosynthesis or of the SDN-1 core protein results in misorientation of the spindle of the ABar blastomere. The ABar and EMS spindles both reorient in response to Wnt signals, but only ABar spindle reorientation is dependent on a new cell contact and on HS and SDN-1. SDN-1 transiently accumulates on the ABar surface as it contacts C, and is required for local concentration of Dishevelled (MIG-5) in the ABar cortex adjacent to C. These findings establish a new role for syndecan in Wnt-dependent spindle orientation.


Subjects
Caenorhabditis elegans/embryology, Spindle Apparatus/physiology, Syndecan-1/metabolism, Wnt Signaling Pathway/physiology, Animals, Caenorhabditis elegans Proteins/metabolism, Immunofluorescence, Confocal Microscopy, RNA Interference
15.
IEEE Trans Image Process ; 23(4): 1791-804, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24808347

ABSTRACT

We consider a wireless relay network with a single source, a single destination, and multiple relays. The relays are half-duplex and use the decode-and-forward protocol. The transmitted source is a layered video bitstream that can be partitioned into two layers, a base layer (BL) and an enhancement layer (EL), where the BL is more important than the EL in terms of the source distortion. The source broadcasts both layers to the relays and the destination using hierarchical 16-QAM. Each relay detects and transmits successfully decoded layers to the destination using either hierarchical 16-QAM or QPSK. The destination can thus receive multiple signals, each of which can include either only the BL or both the BL and the EL. We derive the optimal linear combining method at the destination, which minimizes the uncoded bit error rate. We also present a suboptimal combining method with a closed-form solution, which performs very close to the optimal one. We use the proposed double-layer transmission scheme with our combining methods for transmitting layered video bitstreams. Numerical results show that the double-layer scheme can gain 2-2.5 dB in channel signal-to-noise ratio, or 5-7 dB in video peak signal-to-noise ratio, compared with the classical single-layer scheme using conventional modulation.

16.
J Xray Sci Technol ; 22(2): 175-95, 2014.
Article in English | MEDLINE | ID: mdl-24699346

ABSTRACT

BACKGROUND: Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE: To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS: We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS: Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS: Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
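The paper's specific statistical and information-theoretic measures are not described in the abstract; as a generic stand-in, standard label-agreement scores between a ground-truth segmentation and an algorithm's output can be computed with scikit-learn, as sketched below.

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def agreement_scores(gt, seg):
    """Generic label-agreement measures between a ground-truth label volume
    and a segmentation result of the same shape; the paper's own measures
    (systematic errors, feature recovery, prioritization) differ."""
    g, s = gt.ravel(), seg.ravel()
    return {"adjusted_rand": adjusted_rand_score(g, s),
            "normalized_mutual_info": normalized_mutual_info_score(g, s)}
```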


Subjects
Airports, Algorithms, Computer-Assisted Image Processing/methods, Security Measures, X-Ray Computed Tomography/methods, Reproducibility of Results, Travel, United States
17.
Med Phys ; 39(10): 5857-68, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039624

ABSTRACT

PURPOSE: Metal objects present in x-ray computed tomography (CT) scans are accompanied by physical phenomena that render CT projections inconsistent with the linear assumption made for analytical reconstruction. The inconsistencies create artifacts in reconstructed images. Metal artifact reduction algorithms replace the inconsistent projection data passing through metals with estimates of the true underlying projection data, but when the data estimates are inaccurate, secondary artifacts are generated. The secondary artifacts may be as unacceptable as the original metal artifacts; therefore, better projection data estimation is critical. This research uses computer vision techniques to create better estimates of the underlying projection data using observations about the appearance and nature of metal artifacts. METHODS: The authors developed a method of estimating underlying projection data through the use of an intermediate image, called the prior image. This method generates the prior image by segmenting regions of the originally reconstructed image, and discriminating between regions that are likely to be metal artifacts and those that are likely to represent anatomical structures. Regions identified as metal artifact are replaced with a constant soft-tissue value, while structures such as bone or air pockets are preserved. This prior image is reprojected (forward projected), and the reprojections guide the estimation of the underlying projection data using previously published interpolation techniques. The algorithm is tested on head CT test cases containing metal implants and compared against existing methods. RESULTS: Using the new method of prior image generation on test images, metal artifacts were eliminated or reduced and fewer secondary artifacts were present than with previous methods. The results apply even in the case of multiple metal objects, which is a challenging problem. The authors did not observe secondary artifacts that were comparable to or worse than the original metal artifacts, as sometimes occurred with the other methods. The accuracy of the prior was found to be more critical than the particular interpolation method. CONCLUSIONS: Metals produce predictable artifacts in CT images of the head. Using the new method, metal artifacts can be discriminated from anatomy, and the discrimination can be used to reduce metal artifacts.
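A minimal sketch of the prior-image step is given below: the paper discriminates artifact regions from anatomy via segmentation, whereas this stand-in uses simple HU thresholds (all values are illustrative assumptions), and the reprojection and sinogram-interpolation stages are omitted.

```python
import numpy as np

def make_prior_image(recon_hu, soft_tissue_hu=40.0,
                     air_hu=-300.0, bone_hu=300.0):
    """Sketch of prior-image generation: keep air, bone and metal, and flatten
    everything in the soft-tissue range (where streaks live) to a constant.
    Threshold values are illustrative assumptions, not the paper's."""
    prior = np.full_like(recon_hu, soft_tissue_hu, dtype=float)
    keep = (recon_hu <= air_hu) | (recon_hu >= bone_hu)   # air pockets, bone, metal
    prior[keep] = recon_hu[keep]
    return prior
```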


Subjects
Artifacts, Computer-Assisted Image Processing/methods, Metals, X-Ray Computed Tomography/methods, Humans, Theoretical Models
18.
Development ; 139(22): 4271-9, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23052905

ABSTRACT

A quantitative understanding of tissue morphogenesis requires description of the movements of individual cells in space and over time. In transparent embryos, such as C. elegans, fluorescently labeled nuclei can be imaged in three-dimensional time-lapse (4D) movies and automatically tracked through early cleavage divisions up to ~350 nuclei. A similar analysis of later stages of C. elegans development has been challenging owing to the increased error rates of automated tracking of large numbers of densely packed nuclei. We present Nucleitracker4D, a freely available software solution for tracking nuclei in complex embryos that integrates automated tracking of nuclei in local searches with manual curation. Using these methods, we have been able to track >99% of all nuclei generated in the C. elegans embryo. Our analysis reveals that ventral enclosure of the epidermis is accompanied by complex coordinated migration of the neuronal substrate. We can efficiently track large numbers of migrating nuclei in 4D movies of zebrafish cardiac morphogenesis, suggesting that this approach is generally useful in situations in which the number, packing or dynamics of nuclei present challenges for automated tracking.


Subjects
Caenorhabditis elegans/embryology, Computer-Assisted Image Processing/methods, Morphogenesis, Software, Zebrafish/embryology, Animals, Cell Differentiation, Cell Division, Cell Movement, Cell Nucleus/metabolism, Computers, Nonmammalian Embryo, Epidermis/metabolism, Single-Cell Analysis, Statistics as Topic
19.
IEEE Trans Image Process ; 21(8): 3586-97, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22547456

ABSTRACT

The original design of standard digital fountain codes assumes that the coded information symbols are equally important. In many applications, some source symbols are more important than others and must be recovered before the rest. Unequal Error Protection (UEP) designs are attractive solutions for such source transmissions. In this study, we introduce a more generalized design of LT codes, the first universal fountain codes, that makes them particularly suited to progressive bit stream transmission. We apply the generalized LT codes to a progressive source and show that they have better UEP properties than other published results in the literature. For example, using the proposed generalization, we obtain up to 1.7 dB PSNR gain in a progressive image transmission scenario over the two major UEP fountain code designs.
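The paper's generalization is not detailed in the abstract; the sketch below shows one simple way LT encoding can be biased toward more important symbols to obtain unequal error protection. The degree distribution and importance weights are placeholders, not the proposed design.

```python
import numpy as np

rng = np.random.default_rng(0)

def lt_encode(symbols, n_encoded, importance_weights, degree_probs):
    """Toy LT encoder with unequal error protection: each encoded symbol XORs
    d source symbols, with d drawn from degree_probs and the symbols drawn
    with probability proportional to importance_weights, so important
    (e.g. base-layer) symbols appear in more encoded symbols."""
    k = len(symbols)
    p = np.asarray(importance_weights, dtype=float)
    p /= p.sum()
    encoded = []
    for _ in range(n_encoded):
        d = rng.choice(np.arange(1, len(degree_probs) + 1), p=degree_probs)
        idx = rng.choice(k, size=min(int(d), k), replace=False, p=p)
        val = np.bitwise_xor.reduce([symbols[i] for i in idx])
        encoded.append((tuple(idx), val))     # neighbour set + payload
    return encoded

# Example: 6 base-layer bytes weighted 3x higher than 6 enhancement-layer bytes,
# with an illustrative degree distribution over degrees 1..4.
src = np.arange(12, dtype=np.uint8)
weights = [3.0] * 6 + [1.0] * 6
packets = lt_encode(src, n_encoded=30, importance_weights=weights,
                    degree_probs=[0.2, 0.4, 0.3, 0.1])
```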


Subjects
Algorithms, Artifacts, Data Compression/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Computer-Assisted Signal Processing, Reproducibility of Results, Sensitivity and Specificity
20.
IEEE Trans Image Process ; 21(8): 3353-63, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22453638

ABSTRACT

We examine the visual effect of whole frame loss by different decoders. Whole frame losses are introduced in H.264/AVC compressed videos which are then decoded by two different decoders with different common concealment effects: frame copy and frame interpolation. The videos are seen by human observers who respond to each glitch they spot. We found that about 39% of whole frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features which can be calculated inside a network node with no access to the original video and no pixel level reconstruction of the frame, we developed models which can predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames.
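The visibility model itself (the predictive features) is not reproduced here; the sketch below only illustrates the final dropping policy, given per-frame predicted visibility scores and a bit-savings target, both of which are hypothetical inputs.

```python
def choose_frames_to_drop(b_frames, bits_to_save):
    """b_frames: list of dicts such as {"id": 7, "visibility": 0.12, "bits": 9500},
    where visibility is the predicted fraction of viewers who would notice the
    loss.  Drop the least-visible frames first until enough bits are saved."""
    dropped, saved = [], 0
    for f in sorted(b_frames, key=lambda f: f["visibility"]):
        if saved >= bits_to_save:
            break
        dropped.append(f["id"])
        saved += f["bits"]
    return dropped
```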


Subjects
Computer Communication Networks, Data Compression/methods, Image Enhancement/methods, Three-Dimensional Imaging/methods, Automated Pattern Recognition/methods, Computer-Assisted Signal Processing, Video Recording/methods, Algorithms, Reproducibility of Results, Sample Size, Sensitivity and Specificity