Results 1 - 9 of 9
1.
IEEE J Biomed Health Inform; 27(11): 5622-5633, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37556336

ABSTRACT

Deep neural networks (DNNs) have been successfully applied to classification in EEG-based brain-computer interface (BCI) systems. However, recent studies have found that carefully crafted input samples, known as adversarial examples, can easily fool well-performing DNN models with minor perturbations undetectable by a human. This paper proposes an efficient generative model named the generative perturbation network (GPN), which can generate universal adversarial examples with the same architecture for non-targeted and targeted attacks. Furthermore, the proposed model can be efficiently extended to generate perturbations conditionally or simultaneously for various targets and victim models. Our experimental evaluation demonstrates that perturbations generated by the proposed model outperform previous approaches for crafting signal-agnostic perturbations. We also demonstrate that the extended network for signal-specific methods significantly reduces generation time while performing similarly. The transferability of the proposed method across classification networks is superior to that of the other methods, which shows the high level of generality of our perturbations.
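The paper's GPN produces such perturbations with a trained generator; the snippet below is only a minimal, hypothetical illustration of the two constraints the abstract mentions: the perturbation is universal (the same vector is added to every input signal) and minor (projected onto a small L-infinity ball). All names and values are illustrative, not the authors' method.

```python
def clip_linf(delta, eps):
    """Project a perturbation onto the L-infinity ball of radius eps."""
    return [max(-eps, min(eps, d)) for d in delta]

def apply_universal_perturbation(signal, delta, eps=0.05):
    """Add one fixed perturbation to any input signal (signal-agnostic)."""
    delta = clip_linf(delta, eps)
    return [s + d for s, d in zip(signal, delta)]

eeg = [0.10, -0.20, 0.05, 0.30]        # one EEG epoch (illustrative)
delta = [0.10, -0.10, 0.02, -0.03]     # candidate universal perturbation
adv = apply_universal_perturbation(eeg, delta, eps=0.05)
print(adv)  # every sample moved by at most eps = 0.05
```

In the paper the perturbation is optimized to flip the victim classifier's output; here it is just a fixed vector, which is what makes the attack input-agnostic.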


Subject(s)
Brain-Computer Interfaces; Humans; Neural Networks, Computer
2.
Sensors (Basel); 23(12), 2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37420693

ABSTRACT

Solubility measurements are essential in various research and industrial fields. With the automation of processes, the importance of automatic, real-time solubility measurement has increased. Although end-to-end learning methods are commonly used for classification tasks, handcrafted features remain important for specific tasks with the limited labeled solution images available in industrial settings. In this study, we propose a method that uses computer vision algorithms to extract nine handcrafted features from images and trains a DNN-based classifier to automatically classify solutions by their dissolution state. To validate the proposed method, a dataset was constructed from various solution images, ranging from undissolved solutes in the form of fine particles to those completely covering the solution. Using the proposed method, the solubility status can be screened automatically in real time with the display and camera of a tablet or mobile phone. Therefore, by combining an automatic solubility-changing system with the proposed method, a fully automated process could be achieved without human intervention.
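The abstract does not enumerate the nine features, so the sketch below only illustrates the general recipe (handcrafted features computed from a solution image, then fed to a classifier) with three assumed features over a grayscale image given as nested lists. The feature set and threshold are illustrative, not the paper's.

```python
def extract_features(img, edge_thresh=0.2):
    """Compute illustrative handcrafted features from a grayscale image
    (nested lists of floats in [0, 1])."""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    # Edge density: fraction of horizontally adjacent pixel pairs whose
    # intensity difference exceeds the threshold (a proxy for particles).
    edges = sum(
        1
        for row in img
        for a, b in zip(row, row[1:])
        if abs(a - b) > edge_thresh
    )
    pairs = sum(len(row) - 1 for row in img)
    return {"mean": mean, "variance": var, "edge_density": edges / pairs}

img = [
    [0.9, 0.9, 0.2, 0.9],   # a dark speck: an undissolved particle
    [0.9, 0.9, 0.9, 0.9],
]
print(extract_features(img))
```

A feature vector like this would be the input to the DNN-based classifier; the classifier itself is omitted here.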


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Solubility; Automation
3.
Sensors (Basel); 22(10), 2022 May 10.
Article in English | MEDLINE | ID: mdl-35632050

ABSTRACT

The detection and segmentation of thrombi are essential for monitoring the progression of abdominal aortic aneurysms (AAAs) and for patient care and management. Owing to their inherent capability to learn complex features, deep convolutional neural networks (CNNs) have recently been introduced to improve thrombus detection and segmentation. However, investigations into the use of CNN methods are still at an early stage, and most existing methods focus heavily on segmenting thrombi, which only works after they have been detected. In this work, we propose a fully automated method for the whole process of thrombus detection and segmentation, based on the well-established mask region-based convolutional neural network (Mask R-CNN) framework, which we improve with optimized loss functions. The combined use of the complete intersection over union (CIoU) and smooth L1 losses was designed for accurate thrombus detection, and thrombus segmentation was then improved with a modified focal loss. We evaluated our method on 60 clinically approved patient studies (i.e., computed tomography angiography (CTA) image volumes) using 4-fold cross-validation. Comparisons with multiple other state-of-the-art methods suggested the superior performance of our method, which achieved the highest F1 score for thrombus detection (0.9197) and outperformed the alternatives on most thrombus segmentation metrics.
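The CIoU loss combined here with smooth L1 has a standard published form: one minus the IoU, plus a normalized center-distance penalty, plus an aspect-ratio consistency term. The sketch below computes it for two axis-aligned boxes given as (x1, y1, x2, y2); it is a stand-alone illustration, not the authors' training code.

```python
import math

def ciou_loss(box_a, box_b):
    """Complete-IoU loss for two non-degenerate axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Normalized squared distance between box centers (rho^2 / c^2).
    cx_a, cy_a = (ax1 + ax2) / 2, (ay1 + ay2) / 2
    cx_b, cy_b = (bx1 + bx2) / 2, (by1 + by2) / 2
    rho2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)   # enclosing box width
    ch = max(ay2, by2) - min(ay1, by1)   # enclosing box height
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1))
        - math.atan((ax2 - ax1) / (ay2 - ay1))
    ) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (1, 1, 3, 3)))  # nonzero: boxes overlap partially
print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes: loss is 0.0
```

Unlike plain IoU, CIoU still provides a useful gradient when boxes barely overlap, because the center-distance term keeps shrinking as the prediction approaches the target.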


Subject(s)
Aortic Aneurysm, Abdominal; Thrombosis; Aortic Aneurysm, Abdominal/diagnostic imaging; Humans; Neural Networks, Computer; Thrombosis/diagnostic imaging; Tomography, X-Ray Computed/methods
4.
Sensors (Basel); 23(1), 2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36616773

ABSTRACT

Abdominal aortic aneurysm (AAA) is a fatal clinical condition with high mortality. Computed tomography angiography (CTA) is the preferred minimally invasive modality for the long-term postoperative observation of AAA. Accurate segmentation of the thrombus region of interest (ROI) in a postoperative CTA image volume is essential for quantitative assessment and rapid clinical decision making. A few investigators have proposed adopting convolutional neural networks (CNNs). Although these methods demonstrated the potential of CNN architectures by automating thrombus ROI segmentation, the segmentation performance can be further improved. The existing methods segment each 2D image independently and cannot exploit adjacent images, which could support more robust segmentation of thrombus ROIs. In this work, we propose a thrombus ROI segmentation method that utilizes not only the spatial features of a target image but also the volumetric coherence available from adjacent images. We adopt a recurrent neural network, the bi-directional convolutional long short-term memory (Bi-CLSTM) architecture, which can learn the coherence within a sequence of data. This coherence learning is useful in challenging situations: for example, when the target image exhibits postoperative artifacts and noise, including adjacent images facilitates learning more robust features for thrombus ROI segmentation. We demonstrate the segmentation capability of our Bi-CLSTM-based method by comparing it with its existing 2D-based thrombus ROI segmentation counterpart as well as other established 2D- and 3D-based alternatives, on a large-scale clinical dataset of 60 patient studies (i.e., 60 CTA image volumes).
The results suggest the superior segmentation performance of our Bi-CLSTM-based method, which achieved the highest scores on the evaluation metrics; for example, relative to 2D U-Net++, the second-best method, our Bi-CLSTM results were 0.0331 higher in total overlap and 0.0331 lower in false negatives.
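The Bi-CLSTM itself is a trained network; as a purely conceptual stand-in, the sketch below carries a decaying state forward and backward along a slice sequence and averages the two passes, illustrating why coherence with adjacent slices can stabilize a noisy middle slice. The decay constant and per-slice scores are illustrative.

```python
def directional_smooth(seq, decay=0.5):
    """Carry a decaying state along the sequence (one direction)."""
    out, state = [], 0.0
    for x in seq:
        state = decay * state + (1 - decay) * x
        out.append(state)
    return out

def bidirectional_fuse(seq, decay=0.5):
    """Fuse a forward pass and a backward pass over the same sequence."""
    fwd = directional_smooth(seq, decay)
    bwd = directional_smooth(seq[::-1], decay)[::-1]
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

# Per-slice scores with one noisy slice in the middle; the bidirectional
# fusion pulls the outlier back toward its neighbors.
scores = [1.0, 1.0, 0.0, 1.0, 1.0]
print(bidirectional_fuse(scores))
```

In the actual architecture, the "state" is a learned convolutional hidden state over 2D feature maps rather than a scalar, but the direction of information flow is the same.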


Asunto(s)
Angiografía por Tomografía Computarizada , Trombosis , Humanos , Angiografía por Tomografía Computarizada/métodos , Memoria a Corto Plazo , Tomografía Computarizada por Rayos X , Redes Neurales de la Computación , Trombosis/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos
5.
Sensors (Basel); 20(12), 2020 Jun 20.
Article in English | MEDLINE | ID: mdl-32575708

ABSTRACT

Emotion recognition plays an important role in human-computer interaction (HCI). The electroencephalogram (EEG) is widely used to estimate human emotion owing to its convenience and mobility. Deep neural network (DNN) approaches using EEG for emotion recognition have recently shown remarkable improvements in recognition accuracy. However, most studies in this field still require a separate process for extracting handcrafted features, despite the ability of a DNN to extract meaningful features by itself. In this paper, we propose a novel method for emotion recognition based on three-dimensional convolutional neural networks (3D CNNs) with an efficient spatio-temporal representation of EEG signals. First, we spatially reconstruct the raw EEG signals, represented as stacks of one-dimensional (1D) time series, into two-dimensional (2D) EEG frames according to the original electrode positions. We then build a 3D EEG stream by concatenating the 2D EEG frames along the time axis. These 3D reconstructions of the raw EEG signals can be efficiently combined with 3D CNNs, which have shown remarkable feature representation on spatio-temporal data. We demonstrate the emotion classification accuracy of the proposed method through extensive experiments on the DEAP (Dataset for Emotion Analysis using EEG, Physiological, and video signals) dataset. Experimental results show that the proposed method achieves classification accuracies of 99.11% and 99.74% in the binary classification of valence and arousal, respectively, and 99.73% in four-class classification. We investigate the spatio-temporal effectiveness of the proposed method by comparing it with several types of input methods using 2D/3D CNNs, and experimentally determine the best-performing shapes of both the kernel and the input data.
We verify that an efficient EEG representation, together with a network that fully exploits the data characteristics, can outperform methods that rely on handcrafted features.
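A minimal sketch of the reconstruction described above: each time sample's channel values are placed on a 2D grid by electrode position, and the frames are stacked along time. The 3x3 layout and channel coordinates below are hypothetical, not the DEAP montage.

```python
# Assumed electrode layout: channel name -> (row, col) on a 3x3 grid.
LAYOUT = {"Fp1": (0, 0), "Fp2": (0, 2), "Cz": (1, 1), "O1": (2, 0), "O2": (2, 2)}

def to_2d_frame(sample, rows=3, cols=3):
    """Place one time sample (dict of channel -> value) on the 2D grid;
    grid positions without an electrode stay 0."""
    frame = [[0.0] * cols for _ in range(rows)]
    for ch, v in sample.items():
        r, c = LAYOUT[ch]
        frame[r][c] = v
    return frame

def to_3d_stream(samples):
    """Concatenate 2D frames along the time axis: time x rows x cols."""
    return [to_2d_frame(s) for s in samples]

samples = [
    {"Fp1": 0.1, "Fp2": 0.2, "Cz": 0.3, "O1": 0.4, "O2": 0.5},
    {"Fp1": 0.2, "Fp2": 0.1, "Cz": 0.0, "O1": 0.3, "O2": 0.4},
]
stream = to_3d_stream(samples)
print(len(stream), stream[0][1][1])  # 2 frames; Cz value of frame 0
```

The resulting time x rows x cols tensor is exactly the kind of input a 3D convolution kernel can slide over in both space and time.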


Subject(s)
Electroencephalography; Emotions; Neural Networks, Computer; Arousal; Humans; Spatio-Temporal Analysis
6.
Sensors (Basel); 18(8), 2018 Aug 09.
Article in English | MEDLINE | ID: mdl-30096938

ABSTRACT

In this paper, we propose an automated calibration system for an eye-tracked autostereoscopic display (ETAD). Instead of calibrating each device sequentially and individually, our method calibrates all device parameters at the same time in a fixed environment. To achieve this, we first identify and classify all parameters by establishing a physical model of the ETAD and describe a rendering method based on the viewer's eye position. We then propose a calibration method that estimates all parameters simultaneously from two images. To automate the proposed method, we use a calibration module of our own design. The calibration process is thus performed by analyzing two images, one captured by the onboard camera of the ETAD and one by the external camera of the calibration module. For validation, we conducted two types of experiments: one in simulation for quantitative evaluation, and the other with a real prototype ETAD device for qualitative assessment. Experimental results demonstrate that the crosstalk of the ETAD improved to 8.32%, and the visual quality improved by 30.44% in peak signal-to-noise ratio (PSNR) and 40.14% in the structural similarity (SSIM) index when the proposed calibration method was applied. The whole calibration process was carried out within 1.5 s without any external manipulation.

7.
Opt Express; 26(16): 20233, 2018 Aug 06.
Article in English | MEDLINE | ID: mdl-30119336

ABSTRACT

In this paper we present an autostereoscopic 3D display that uses a directional subpixel rendering algorithm in which clear left and right images are rendered in real time based on the viewer's 3D eye positions. To maintain 3D image quality over a wide viewing range, we designed an optical layer that generates a uniformly distributed light field. The proposed 3D rendering method is simple, and each pixel can be processed independently in parallel computing environments. To prove the effectiveness of our display system, we implemented a 31.5" 3D monitor and a 10.1" 3D tablet prototype, in which the 3D rendering is processed on a GPU and an FPGA board, respectively.
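A hypothetical sketch of the per-subpixel decision this kind of rendering makes: each subpixel emits light toward a known angle, so it is filled from whichever eye's image is angularly closer. Because every decision is independent, the loop parallelizes trivially, which is what makes GPU and FPGA implementations natural. The angles and eye positions below are illustrative.

```python
def render_row(emit_angles, left_row, right_row, left_eye, right_eye):
    """Fill each subpixel from the eye image whose direction is closer
    to that subpixel's emission angle (all angles in degrees)."""
    out = []
    for i, ang in enumerate(emit_angles):
        if abs(ang - left_eye) <= abs(ang - right_eye):
            out.append(left_row[i])
        else:
            out.append(right_row[i])
    return out

angles = [-3.0, -1.0, 1.0, 3.0]      # per-subpixel emission directions
left = ["L0", "L1", "L2", "L3"]      # left-eye image row
right = ["R0", "R1", "R2", "R3"]     # right-eye image row
print(render_row(angles, left, right, left_eye=-2.0, right_eye=2.0))
# → ['L0', 'L1', 'R2', 'R3']
```

In a real display the emission angle per subpixel comes from the calibrated optical-layer model, and the eye angles come from the eye tracker each frame.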

8.
Opt Express; 25(10): 10801-10814, 2017 May 15.
Article in English | MEDLINE | ID: mdl-28788769

ABSTRACT

Calibration is vital to autostereoscopic 3D displays. This paper proposes a local calibration method that copes with any type of deformation in the optical layer. The proposed method is based on visual pattern analysis: given the observations, we localize the optical slits by matching the observations to the input pattern. Within a principled optimization framework, we derive an efficient calibration algorithm. Experimental validation follows: the local calibration shows a significant improvement in 3D visual quality over the global calibration method. This paper also offers a new, intuitive insight into calibration in terms of light field theory.

9.
IEEE Trans Image Process; 26(5): 2090-2102, 2017 May.
Article in English | MEDLINE | ID: mdl-28186891

ABSTRACT

Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from their designed parameter settings. As a result, the 3D effect does not perform as intended, and the observed images tend to be distorted. In this paper, we propose a novel display calibration method to fix this situation. In our method, a pattern image is displayed on the panel and a camera photographs it twice from different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slant angle, gap or thickness, and offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is accurate, about half an order of magnitude better than prior work; efficient, requiring less than 2 s of computation; and robust to noise, working well at SNRs as low as 6 dB.
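As a small illustration of the frequency-domain idea (not the paper's full parameter-extraction pipeline), the sketch below recovers the period of a striped calibration pattern from the peak of its DFT magnitude: a periodic structure such as a lenticular pitch shows up as a dominant frequency bin.

```python
import cmath
import math

def dominant_period(samples):
    """Return the period (in samples) of the strongest periodic component,
    found as the peak of the DFT magnitude over positive frequencies."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]            # drop the DC term
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2 + 1):                    # positive frequencies
        coeff = sum(
            centered[t] * cmath.exp(-2j * math.pi * k * t / n)
            for t in range(n)
        )
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return n / best_k

# A stripe pattern with a period of 8 samples.
pattern = [math.cos(2 * math.pi * t / 8) for t in range(64)]
print(dominant_period(pattern))  # → 8.0
```

Working in the frequency domain makes the estimate robust: noise spreads across all bins, while the pattern's energy concentrates in one peak.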
