Results 1 - 20 of 21
1.
Sensors (Basel) ; 23(14)2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37514540

ABSTRACT

We propose a high-quality three-dimensional display system based on a simplified light-field image acquisition method and a custom-trained fully connected deep neural network. The goal of the system is to acquire light-field images of real-world objects in a general environment and reconstruct them at the highest possible quality. The simplified acquisition method captures the three-dimensional information of natural objects in a straightforward way, at a resolution and quality comparable to multicamera-based methods. We trained a fully connected deep neural network model to output the desired viewpoints of the object at the same quality. The custom-trained instant neural graphics primitives model with hash encoding generates all desired viewpoints within the acquired viewing angle from the input perspectives, matched to the pixel density of the display device and the lens-array specifications, in a significantly short processing time. Finally, the elemental image array is rendered through pixel rearrangement from the generated viewpoints to cover the entire field of view and is reconstructed as a high-quality three-dimensional visualization on the integral imaging display. The system was implemented successfully, and the displayed visualizations and corresponding evaluation results confirm that it offers a simple and effective way to acquire high-resolution light-field images of real objects and present high-quality three-dimensional visualizations on an integral imaging display.
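As a concrete illustration of the pixel-rearrangement step, the sketch below interleaves a grid of viewpoint images into an elemental image array. The grid size, image dimensions, and index convention are illustrative assumptions, and the orientation flips a real lens array introduces are ignored.

```python
import numpy as np

# Assume a V x V grid of viewpoint images, each H x W (grayscale).
# Each lenslet's elemental image collects one pixel from every viewpoint.
V, H, W = 3, 4, 5
rng = np.random.default_rng(0)
views = rng.random((V, V, H, W))  # views[v, u] is one viewpoint image

# Rearrange so that EIA[y*V + v, x*V + u] = views[v, u, y, x]
eia = views.transpose(2, 0, 3, 1).reshape(H * V, W * V)

assert eia[1 * V + 2, 3 * V + 1] == views[2, 1, 1, 3]
```

The transpose orders the axes as (y, v, x, u), so the reshape yields one V x V elemental image behind each lenslet position.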

2.
Sensors (Basel) ; 22(14)2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35890968

ABSTRACT

This study proposes a robust depth map framework based on a convolutional neural network (CNN) to calculate disparities using multi-direction epipolar plane images (EPIs). A combination of three-dimensional (3D) and two-dimensional (2D) CNN-based deep learning networks is used to extract the features from each input stream separately. The 3D convolutional blocks are adapted to the disparity of the different epipolar image directions, and 2D CNNs are employed to minimize data loss. Finally, the multi-stream networks are merged to restore the depth information. The fully convolutional approach is scalable: it can handle inputs of any size and is less prone to overfitting. However, some noise remains along edges. To overcome this issue, weighted median filtering (WMF) is used to recover boundary information and improve the accuracy of the results. Experimental results indicate that the suggested deep learning network architecture outperforms other architectures in terms of depth estimation accuracy.
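The weighted median step can be sketched in a few lines. This is a generic weighted median, not the paper's specific filter or weights; the neighborhood values and guidance weights below are made up to show how high-weight (foreground) disparities dominate an edge pixel.

```python
# Weighted median: sort values, return the first whose cumulative weight
# reaches half the total. With uniform weights this is a plain median.
def weighted_median(values, weights):
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

# A 3x3 neighborhood straddling a disparity edge: guidance weights favor
# the foreground side, pulling the filtered disparity toward it.
vals = [10, 10, 50, 10, 12, 50, 10, 10, 50]
wts  = [1, 1, 0.1, 1, 1, 0.1, 1, 1, 0.1]
print(weighted_median(vals, wts))  # -> 10
```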


Subjects
Microscopy; Neural Networks, Computer
3.
Appl Opt ; 60(14): 4235-4244, 2021 May 10.
Article in English | MEDLINE | ID: mdl-33983180

ABSTRACT

Holographic stereogram (HS) printing requires extensive memory capacity and long computation times during perspective acquisition and execution of the pixel re-arrangement algorithm, and hogels carry only very weak depth information about the object. We propose an HS printing system that uses simplified digital content generation based on the inverse-directed propagation (IDP) algorithm for hogel generation. Specifically, the IDP algorithm generates an array of hogels using a simple process that acquires the full three-dimensional (3D) information of the object, including parallax, depth, color, and shading, via a computer-generated integral imaging technique. This technique requires a short computation time and can account for occlusion and accommodation effects of the object points via the IDP algorithm. Parallel computing is utilized to produce a high-resolution hologram based on the properties of independent hogels. To demonstrate the proposed approach, optical experiments are conducted in which natural 3D visualizations of real and virtual objects are printed on holographic material. Experimental results demonstrate the simplified computation involved in content generation using the proposed IDP-based HS printing system and the improved image quality of the holograms.

4.
Opt Express ; 27(21): 29746-29758, 2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31684232

ABSTRACT

A multiple-camera holographic system using non-uniformly sampled 2D images and compressed point cloud gridding (C-PCG) is proposed. High-quality digital single-lens reflex cameras are used to acquire the depth and color information from real scenes, which are then virtually reconstructed as a uniform point cloud using a non-uniform sampling method. The C-PCG method generates efficient depth grids by classifying groups of object points with the same depth values in the red, green, and blue channels. Holograms are obtained by applying fast Fourier transform diffraction calculations to the grids. Compared to wave-front recording plane methods, the quality of the reconstructed images is substantially better, and the computational complexity is dramatically reduced. The feasibility of our method is confirmed both numerically and optically.

5.
Appl Opt ; 58(5): A242-A250, 2019 Feb 10.
Article in English | MEDLINE | ID: mdl-30873983

ABSTRACT

Recently, computer-generated holograms (CGHs) of real three-dimensional (3D) objects have become widely used in holographic displays. Here, a multiple-camera holographic system featuring an efficient depth grid is developed to provide correct depth cues. Multiple depth cameras are used to acquire depth and color information from real scenes and then to virtually reconstruct point cloud models. Arranging the depth cameras in an inward-facing configuration allows simultaneous capture of objects from different directions, facilitating rendering of the entire surface. A multiple relocated point cloud gridding method is proposed to generate efficient depth grids by classifying groups of object points with the same depth values in the red, green, and blue channels. CGHs are obtained by applying a fast Fourier transform diffraction calculation to the grids. Full-color reconstructed images were obtained flexibly and efficiently, and the utility of our method was confirmed both numerically and optically.

6.
Appl Opt ; 57(15): 4253-4262, 2018 May 20.
Article in English | MEDLINE | ID: mdl-29791403

ABSTRACT

The calculation of realistic full-color holographic displays is hindered by its high computational cost. Previously, we suggested a point cloud gridding (PCG) method to calculate monochrome holograms of real objects. In this research, a relocated point cloud gridding (R-PCG) method is proposed to enhance reconstruction quality and accelerate calculation on the GPU for a full-color holographic system. We use a depth camera to acquire depth and color information from the real scene and then virtually reconstruct the point cloud model. The R-PCG method allows us to classify groups of object points with the same depth values into grids in the red, green, and blue (RGB) channels. Computer-generated holograms (CGHs) are obtained by applying a fast Fourier transform (FFT) diffraction calculation to the grids. The feasibility of the R-PCG method is confirmed by numerical and optical reconstruction.
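The gridding idea can be illustrated per channel: points sharing a depth value are binned into one 2-D grid per layer, so each layer needs only one FFT-based propagation instead of a per-point computation. This is a hedged sketch using an angular-spectrum kernel; the wavelength, pixel pitch, grid size, and depths are illustrative values, not the paper's parameters.

```python
import numpy as np

N, wavelength, pitch = 64, 532e-9, 8e-6  # illustrative values

def propagate(field, z):
    # Angular-spectrum propagation of one depth layer to the hologram plane.
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# (ix, iy, depth, amplitude) points in one color channel;
# two distinct depths -> only two FFT propagations in total.
points = [(10, 12, 0.01, 1.0), (40, 30, 0.01, 0.5), (20, 50, 0.02, 0.8)]
hologram = np.zeros((N, N), dtype=complex)
for z in sorted({p[2] for p in points}):
    layer = np.zeros((N, N), dtype=complex)
    for ix, iy, pz, amp in points:
        if pz == z:
            layer[iy, ix] = amp
    hologram += propagate(layer, z)
```

For full color, the same loop would run once per RGB channel with the corresponding wavelength.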

7.
Sensors (Basel) ; 18(1)2018 Jan 15.
Article in English | MEDLINE | ID: mdl-29342964

ABSTRACT

This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The design incorporates a six-dimensional input describing the initial position of the contact as well as the state of the tool deformation. The input-output relationship of the model is represented by a radial basis function network, optimized on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of the tool, the model is independent of recording and rendering devices and can easily be deployed to an existing simulator. The model also supports complex interactions, such as self-collisions and multi-contact collisions. To assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the accuracy of the proposed model, while the user study showed that the force feedback of the proposed simulator is perceived as realistic.
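The core of such a model is interpolation over scattered training samples. Below is a minimal radial-basis-function interpolation sketch of the 6-D input to force mapping; the training pairs, Gaussian kernel, and kernel width are made-up illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 6))   # 6-D inputs (contact position + deformation state)
y = X.sum(axis=1)         # stand-in "force" responses for the sketch

def kernel(A, B, eps=1.0):
    # Gaussian RBF between every row of A and every row of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

# Solve for interpolation weights (tiny ridge term for conditioning).
w = np.linalg.solve(kernel(X, X) + 1e-9 * np.eye(len(X)), y)

def predict(q):
    return kernel(np.atleast_2d(q), X) @ w

# The interpolant reproduces the training samples nearly exactly.
print(float(predict(X[0])[0]), float(y[0]))
```

At rendering time, `predict` would be called each haptic frame with the current contact state.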

8.
Opt Lett ; 42(13): 2599-2602, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28957294

ABSTRACT

We propose a full-color polygon-based holographic system for real three-dimensional (3D) objects using a depth-layer weighted prediction method. The proposed system is composed of four main stages: acquisition, preprocessing, hologram generation, and reconstruction. In the preprocessing stage, the point cloud model is separated into red, green, and blue channels with depth-layer weighted prediction: the color component values are characterized based on the depth information of the real object, and the color prediction is then derived from the measurement data. The computer-generated holograms reconstruct 3D full-color images with a strong sensation of depth resulting from the polygon approach. The feasibility of the proposed method was confirmed by numerical and optical reconstruction.

9.
Methods ; 67(3): 373-9, 2014 Jun 01.
Article in English | MEDLINE | ID: mdl-24530970

ABSTRACT

With the exponential growth of biological sequence data (DNA or protein sequences), DNA sequence analysis has become an essential task for biologists seeking to understand the features, functions, structures, and evolution of species. Encoding DNA sequences is an effective way to extract features from them and is commonly used for visualizing DNA sequences and analyzing similarities and dissimilarities between different species or cells. Although many encoding approaches have been proposed for DNA sequence analysis, more refined approaches are required for higher accuracy. In this paper, we propose a novel encoding approach for measuring the degree of similarity or dissimilarity between different species. Our approach preserves the physicochemical properties, positional information, and codon usage bias of nucleotides. An extensive performance study shows that our approach provides higher accuracy than existing approaches in terms of the degree of similarity.
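To make the idea of position-preserving encoding concrete, here is a classic toy scheme (not the paper's method): each base moves a 2-D walk one step, so a sequence becomes a numeric trajectory, and trajectories of different sequences can be compared directly. The base-to-step mapping and distance are illustrative choices.

```python
# Each base advances a cumulative 2-D coordinate, so positional
# information is retained in the encoded trajectory.
BASE = {"A": (1, 0), "C": (0, 1), "G": (-1, 0), "T": (0, -1)}

def encode(seq):
    x = y = 0
    path = []
    for base in seq.upper():
        dx, dy = BASE[base]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def distance(s1, s2):
    # Mean L1 distance between the two trajectories (crude similarity).
    p1, p2 = encode(s1), encode(s2)
    n = min(len(p1), len(p2))
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(p1, p2)) / n

print(distance("ACGT", "ACGT"))  # identical sequences -> 0.0
```

A single substitution near the end perturbs the trajectory less than a wholesale rearrangement, which is the property such encodings exploit.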


Subjects
Codon; Sequence Analysis, DNA/methods; DNA Mutational Analysis; Phylogeny
10.
Sensors (Basel) ; 14(6): 9628-68, 2014 May 30.
Article in English | MEDLINE | ID: mdl-24887042

ABSTRACT

The acceptance and usability of context-aware systems have led to their wide use in various domains and have attracted the attention of researchers in context-aware computing. Making user context information available to such systems is the center of attention, yet little emphasis is given to context representation and context fusion, which are integral parts of context-aware systems. Context representation and fusion help in recognizing the dependencies and relationships among data sources so as to extract a better understanding of user context. The problem is more critical when data emerge from heterogeneous sources of diverse nature, such as sensors, user profiles, and social interactions, and at different timestamps. Both processes are followed in one way or another in practice, but they are rarely discussed explicitly in the realization of context-aware systems; in other words, most context-aware systems underestimate the importance of context representation and fusion. This research focuses explicitly on both processes and streamlines their place in the overall architecture of context-aware system design and development. Various applications of context representation and fusion in context-aware systems are highlighted, a detailed review of both processes and their applications is provided, and future research directions and challenges are identified that need proper attention to achieve the goal of realizing context-aware systems.


Subjects
Computing Methodologies; Delivery of Health Care, Integrated; Environmental Monitoring; Models, Theoretical; Signal Processing, Computer-Assisted; Automobile Driving; Biometric Identification; Humans; Internet; Semantics
11.
Int J Biomed Imaging ; 2024: 8972980, 2024.
Article in English | MEDLINE | ID: mdl-38725808

ABSTRACT

We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolution of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground-truth images of the same subjects. The results were compared to six state-of-the-art MRI motion-correction methods using two types of motion. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions were extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images closely matched those of the motion-free brain data.
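The NRMSE and PSNR figures quoted above follow standard definitions, sketched below; conventions vary between papers (here RMSE is normalized by the reference's intensity range, and PSNR uses an assumed peak of 1.0), so this is one common reading rather than the paper's exact code.

```python
import numpy as np

def nrmse(ref, img):
    # Root-mean-square error normalized by the reference's dynamic range.
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, img, peak=1.0):
    # Peak signal-to-noise ratio in dB for a given peak intensity.
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.linspace(0, 1, 256).reshape(16, 16)
noisy = ref + 0.01               # a uniform 1% intensity offset
print(round(psnr(ref, noisy), 1))  # -> 40.0
```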

12.
Sci Rep ; 13(1): 11684, 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37468571

ABSTRACT

The current study provides a haptic attribute space in which texture surfaces are located according to their haptic attributes. The main aim of the haptic attribute space is to establish a standardized model for representing and identifying haptic textures, analogous to the RGB model for colors. To this end, a four-dimensional haptic attribute space is established by conducting a psychophysical experiment in which human participants rate 100 real-life texture surfaces according to their haptic attributes. The four dimensions of the haptic attribute space are rough-smooth, flat-bumpy, sticky-slippery, and hard-soft. The generalization and scalability of the haptic attribute space are achieved by training a 1D-CNN model to predict the attributes of haptic textures. The 1D-CNN is trained using the attribute data from the psychophysical experiments and image features extracted from images of the real textures; the prediction power granted by the 1D-CNN renders the haptic attribute space scalable. The prediction accuracy of the proposed 1D-CNN model is compared against other machine learning and deep learning algorithms, and the results show that the proposed method outperforms the other models on the MAE and RMSE metrics.

13.
IEEE Trans Haptics ; 15(1): 62-67, 2022.
Article in English | MEDLINE | ID: mdl-34941523

ABSTRACT

Data-driven approaches are commonly used to model and render haptic textures for rigid stylus-based interaction. Current state-of-the-art data-driven methodologies synthesize acceleration signals through the interpolation of samples with different input parameters, based on neural networks or parametric spectral estimation methods. In this paper, we explore the potential of emerging deep learning methods in this area. To this end, we designed a complete end-to-end data-driven framework to synthesize acceleration profiles based on the proposed deep spatio-temporal network. The network is trained using contact acceleration data collected with our manual scanning stylus together with the interaction parameters, i.e., scanning velocities, directions, and forces. The proposed network is composed of attention-aware 1D CNNs and attention-aware encoder-decoder networks that capture both the local spatial features and the temporal dynamics of the acceleration signals; the attention mechanisms assign weights to the features according to their contributions. For rendering, the trained network generates synthesized signals in real time in accordance with the user's input parameters. The whole framework was numerically compared with existing state-of-the-art approaches, showing the effectiveness of the approach, and a pilot user study was conducted to demonstrate subjective similarity.


Subjects
Neural Networks, Computer; Humans
14.
Bioengineering (Basel) ; 10(1)2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36671594

ABSTRACT

When sparsely sampled data are used to accelerate magnetic resonance imaging (MRI), conventional reconstruction approaches produce significant artifacts that obscure the content of the image. To remove aliasing artifacts, we propose an advanced convolutional neural network (CNN) called the fully dense attention CNN (FDA-CNN). We updated the Unet model with full dense connectivity and an attention mechanism for MRI reconstruction. The main benefit of FDA-CNN is that an attention gate in each decoder layer improves the learning process by focusing on the relevant image features and provides better generalization of the network by suppressing irrelevant activations. Moreover, densely interconnected convolutional layers reuse the feature maps and prevent the vanishing gradient problem. Additionally, we implement a new, efficient under-sampling pattern in the phase direction that takes low and high frequencies from k-space both randomly and non-randomly. The performance of FDA-CNN was evaluated quantitatively and qualitatively with three different sub-sampling masks and datasets. Compared with five current deep learning-based and two compressed-sensing MRI reconstruction techniques, the proposed method performed better, reconstructing smoother and brighter images. Furthermore, FDA-CNN improved the mean PSNR by 2 dB, SSIM by 0.35, and VIFP by 0.37 compared with Unet for an acceleration factor of 5.
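A phase-direction under-sampling mask of the kind described (deterministic low-frequency center plus random high-frequency picks) can be sketched as follows. The center fraction and acceleration factor are illustrative values, not the paper's settings.

```python
import numpy as np

def make_mask(n_lines, center_frac=0.08, accel=5, seed=0):
    # True = phase-encoding line is sampled.
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    c = int(n_lines * center_frac)
    lo = n_lines // 2 - c // 2
    mask[lo:lo + c] = True              # fully sampled k-space center
    n_rand = n_lines // accel - c       # remaining sampling budget
    rest = np.flatnonzero(~mask)
    mask[rng.choice(rest, size=max(n_rand, 0), replace=False)] = True
    return mask

m = make_mask(256)
print(m.sum())  # roughly 256 / 5 lines kept in total
```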

15.
Article in English | MEDLINE | ID: mdl-34682315

ABSTRACT

Extracting clinical concepts, such as problems, diagnoses, and treatments, from unstructured clinical narrative documents enables data-driven approaches such as machine and deep learning to support advanced applications such as clinical decision-support systems, the assessment of disease progression, and the intelligent analysis of treatment efficacy. Various tools such as cTAKES, Sophia, MetaMap, and other rule-based approaches and algorithms have been used for automatic concept extraction. Recently, machine- and deep-learning approaches have been used to extract, classify, and accurately annotate terms and phrases. However, the requirement for an annotated dataset, which is labor-intensive to produce, impedes the success of data-driven approaches. A rule-based mechanism could support the annotation process, but existing rule-based approaches fail to adequately capture contextual, syntactic, and semantic patterns. This study introduces a comprehensive rule-based system that automatically extracts clinical concepts from unstructured narratives with higher accuracy and transparency. The proposed system is a pipelined approach capable of recognizing clinical concepts of three types (problem, treatment, and test) in a dataset collected from a published repository as part of the i2b2 2010 challenge. The system's performance is compared with that of three existing systems: QuickUMLS, BIO-CRF, and the Rules (i2b2) model. Compared to these baselines, the average F1-score of 72.94% was 13% better than QuickUMLS, 3% better than BIO-CRF, and 30.1% better than the Rules (i2b2) model. Individually, the system performed noticeably better on problem-related concepts, with an F1-score of 80.45%, followed by treatment-related and test-related concepts, with F1-scores of 76.06% and 55.3%, respectively.
The proposed methodology significantly improves the performance of concept extraction from unstructured clinical narratives by exploiting linguistic and lexical semantic features. The approach can ease the automatic annotation of clinical data, which ultimately improves the performance of supervised data-driven applications trained with these data.
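The per-class F1-scores quoted above combine precision and recall in the standard way; the sketch below shows the computation from raw counts (the counts are hypothetical, not taken from the paper).

```python
def f1(tp, fp, fn):
    # Harmonic mean of precision and recall from true/false positive
    # and false negative counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., hypothetical counts for one concept class
print(round(100 * f1(tp=70, fp=17, fn=17), 2))
```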


Subjects
Decision Support Systems, Clinical; Semantics; Algorithms; Linguistics
16.
IEEE Trans Haptics ; 13(3): 611-627, 2020.
Article in English | MEDLINE | ID: mdl-31940552

ABSTRACT

Data-driven modeling of human hand contact dynamics starts with a tedious process of data collection. Contact dynamics data consist of an input describing an applied action and the response stimuli from the environment. The quality and stability of the model depend mainly on how well the data points cover the model space; thus, to build a reliable data-driven model, a user usually collects data dozens of times. In this article, we aim to build an interactive system that assists a user in data collection. We develop an online segmentation framework that partitions a multivariate streaming signal. Real-time segmentation allows tracking of how the model space is being populated. We applied the proposed framework to a haptic texture modeling use case. To guide a user in data collection, we designed a user interface mapping the applied input to alternative visual modalities based on the theory of direct perception. The combination of the segmentation framework and the user interface implements a human-in-the-loop system, in which the interface assigns a target combination of input variables and the user tries to achieve it. Experimental results show that the proposed data collection scheme considerably increases the approximation quality of the model, while the proposed user interface considerably reduces the mental workload experienced during data collection.


Subjects
Models, Theoretical; Physical Phenomena; Touch Perception; Touch; User-Computer Interface; Visual Perception; Data Collection; Humans
17.
Comput Biol Med ; 96: 166-177, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29597142

ABSTRACT

The currently available prostate palpation simulators are based on either a physical mock-up or a purely virtual simulation, and both have inherent limitations. The former lacks flexibility in presenting abnormalities and scenarios because of the static nature of the mock-up, and it has usability issues because the prostate model must be replaced for each scenario. The latter has realism issues, particularly in haptic feedback, because of the very limited performance of haptic hardware and inaccurate haptic simulation. This paper presents a highly flexible and programmable simulator with high haptic fidelity. Our new approach is based on a pneumatically driven, property-changing silicone prostate mock-up that can be embedded in a human torso mannequin. The mock-up has seven pneumatically controlled, multi-layered bladder cells that mimic the stiffness, size, and location changes of nodules in the prostate. Size is controlled by inflating a bladder with positive chamber pressure, and a hard nodule can be generated using the particle jamming technique: the fine sand in the bladder becomes stiff when it is vacuumed. Programmable valves and a system identification process enable precise control of size and stiffness, so the simulator can realistically present many different diseases without replacing any parts. The three most common prostate abnormalities were selected for demonstration, and multiple progressive stages of each abnormality were carefully designed based on medical data. A human perception experiment performed by medical professionals confirms that our simulator exhibits higher realism and usability than conventional simulators.


Subjects
Models, Biological; Palpation/instrumentation; Palpation/methods; Prostate/physiology; Urinary Bladder/physiology; Adult; Biomedical Engineering/instrumentation; Computer-Assisted Instruction/instrumentation; Equipment Design; Female; Humans; Male
18.
IEEE Trans Haptics ; 11(2): 291-303, 2018.
Article in English | MEDLINE | ID: mdl-29911984

ABSTRACT

In this paper, we focus on building a universal library of haptic texture models and on the automatic assignment of models from the library to any given surface based on its image features. We show that a relationship exists between perceived haptic texture and image features, and that this relationship can be used for automatic haptic texture model assignment. An image feature space and a perceptual haptic texture space are defined, and the correlation between the two spaces is established. A haptic texture library was built from 84 real-life textured surfaces by training a multi-class support vector machine with a radial basis function kernel, and the perceptual space was partitioned into perceptually similar clusters using K-means. Haptic texture models are assigned to new surfaces in a two-step process: classification into a perceptually similar group using the trained multi-class support vector machine, followed by finding a unique match within the group using binarized statistical image features. The system was evaluated on 21 new real-life texture surfaces, and an accuracy of 71.4 percent was achieved in assigning haptic models to these surfaces.
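The two-step assignment can be sketched without the learned components: here the first step uses a nearest cluster centroid in place of the paper's trained SVM, and the second step matches binarized image features by Hamming distance. All cluster names, centroids, and feature strings below are made up for illustration.

```python
def hamming(a, b):
    # Number of differing bits between two equal-length bit strings.
    return sum(x != y for x, y in zip(a, b))

clusters = {
    "rough":  {"centroid": (0.9, 0.2),
               "models": {"stone": "1101", "bark": "1011"}},
    "smooth": {"centroid": (0.1, 0.8),
               "models": {"glass": "0010", "silk": "0110"}},
}

def assign(feature_xy, binary_feature):
    # Step 1: pick the perceptually similar cluster (nearest centroid).
    name = min(clusters, key=lambda c: sum(
        (f - g) ** 2 for f, g in zip(feature_xy, clusters[c]["centroid"])))
    # Step 2: unique match within the cluster by Hamming distance.
    models = clusters[name]["models"]
    match = min(models, key=lambda m: hamming(models[m], binary_feature))
    return name, match

print(assign((0.85, 0.3), "1001"))  # -> ('rough', ...)
```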


Subjects
Databases as Topic; Models, Theoretical; Psychophysics/methods; Support Vector Machine; Touch Perception; Visual Perception/physiology; Humans
19.
IEEE Trans Haptics ; 9(4): 548-559, 2016.
Article in English | MEDLINE | ID: mdl-27244750

ABSTRACT

In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.


Subjects
Models, Theoretical; Psychomotor Performance/physiology; Touch Perception/physiology; Adult; Computer Graphics; Elasticity; Female; Friction; Humans; Male; Young Adult
20.
IEEE Trans Haptics ; 8(1): 90-101, 2015.
Article in English | MEDLINE | ID: mdl-25794366

ABSTRACT

This work was motivated by the need for perceptualizing nano-scale scientific data, e.g., those acquired by a scanning probe microscope, where collocated topography and stiffness distributions of a surface can be measured. Previous research showed that when the topography of a surface with spatially varying stiffness is rendered using the conventional penalty-based haptic rendering method, the topography perceived by the user can be significantly distorted from its original model. In the worst case, a higher region with a smaller stiffness value can be perceived as lower than a lower region with a larger stiffness value. This problem is explained by the theory of force constancy: the user tends to maintain an invariant contact force when stroking the surface to perceive its topography. In this paper, we present a haptization algorithm that can render the shape of a mesh surface and its stiffness distribution with high perceptual accuracy. Our algorithm adaptively changes the surface topography on the basis of the force constancy theory to deliver adequate shape information to the user while preserving the stiffness perception. We also evaluated the performance of the proposed haptization algorithm against the constraint-based algorithm by examining relevant proximal stimuli and carrying out a user experiment. Results demonstrated that our algorithm improves the perceptual accuracy of shape and reduces the exploration time, thereby leading to more accurate and efficient haptization.


Subjects
Algorithms; Models, Structural; Touch/physiology; Adult; Female; Humans; Male; Microscopy, Atomic Force/methods; User-Computer Interface; Young Adult