Results 1 - 7 of 7
1.
Sensors (Basel); 22(21), 2022 Oct 29.
Article in English | MEDLINE | ID: mdl-36366001

ABSTRACT

Amyotrophic lateral sclerosis (ALS) makes it difficult for people to communicate with others or with devices. In this paper, multi-task learning with denoising and classification tasks is used to develop a robust steady-state visual evoked potential-based brain-computer interface (SSVEP-based BCI) that can help such people communicate. To simplify operation of the input interface, a single-channel SSVEP-based BCI is adopted. To increase its practicality, multi-task learning is used to build the neural-network-based system so that it suppresses noise components while achieving high classification accuracy; denoising and classification are therefore chosen as the two tasks. The experimental results show that the proposed multi-task learning effectively combines the advantages of denoising and discriminative feature learning and outperforms other approaches, so it is well suited to developing an SSVEP-based BCI for practical applications. In the future, an augmentative and alternative communication interface can be implemented and evaluated to help people with ALS communicate with others in their daily lives.
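As an illustration of the multi-task idea described above, the sketch below pairs a shared encoder with a denoising head and a classification head and trains both with a weighted joint loss. It assumes PyTorch, single-channel segments of 512 samples, four stimulus targets, and illustrative layer sizes; it is not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class MultiTaskSSVEPNet(nn.Module):
    """Shared encoder with a denoising head and a classification head."""
    def __init__(self, n_samples=512, n_targets=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.denoise_head = nn.Linear(128, n_samples)  # reconstruct the clean segment
        self.class_head = nn.Linear(128, n_targets)    # predict the attended frequency

    def forward(self, x):
        z = self.encoder(x)
        return self.denoise_head(z), self.class_head(z)

def multitask_loss(recon, clean, logits, labels, alpha=0.5):
    # Joint objective: denoising (MSE) plus classification (cross-entropy),
    # weighted by a hypothetical mixing coefficient alpha.
    return alpha * nn.functional.mse_loss(recon, clean) + \
           (1 - alpha) * nn.functional.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for noisy/clean EEG segments.
net = MultiTaskSSVEPNet()
noisy, clean = torch.randn(8, 512), torch.randn(8, 512)
labels = torch.randint(0, 4, (8,))
recon, logits = net(noisy)
multitask_loss(recon, clean, logits, labels).backward()
```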


Subject(s)
Amyotrophic Lateral Sclerosis; Brain-Computer Interfaces; Humans; Evoked Potentials, Visual; Neural Networks, Computer; Electroencephalography/methods; Photic Stimulation; Algorithms
2.
Sensors (Basel); 21(15), 2021 Jul 23.
Article in English | MEDLINE | ID: mdl-34372256

ABSTRACT

For subjects with amyotrophic lateral sclerosis (ALS), verbal and nonverbal communication is greatly impaired. Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) are a successful form of augmentative and alternative communication that helps subjects with ALS communicate with others or with devices. In practical applications, however, the performance of SSVEP-based BCIs is severely degraded by noise, so developing robust SSVEP-based BCIs is very important. In this study, noise-suppression-based feature extraction and a deep neural network are proposed to develop a robust SSVEP-based BCI. To suppress the effects of noise, a denoising autoencoder is used to extract denoised features; to obtain recognition results acceptable for practical applications, a deep neural network produces the decisions of the SSVEP-based BCI. The experimental results showed that the proposed approach effectively suppresses the effects of noise and greatly improves the performance of the SSVEP-based BCI. In addition, the deep neural network outperforms other approaches. The proposed robust SSVEP-based BCI is therefore very useful for practical applications.
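A minimal sketch of the two-stage idea, assuming PyTorch and illustrative dimensions: a denoising autoencoder is trained to reconstruct clean segments from noisy ones, and its bottleneck code is then passed to a separate decision network. The layer sizes, segment length, and number of classes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Maps noisy SSVEP segments to clean ones; the bottleneck code is the feature."""
    def __init__(self, n_samples=512, n_latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_samples, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_samples))

    def forward(self, noisy):
        code = self.encoder(noisy)
        return self.decoder(code), code

dae = DenoisingAutoencoder()
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))

noisy, clean = torch.randn(8, 512), torch.randn(8, 512)
labels = torch.randint(0, 4, (8,))

# Stage 1: reconstruction loss teaches the autoencoder to suppress noise.
recon, code = dae(noisy)
recon_loss = nn.functional.mse_loss(recon, clean)

# Stage 2: the denoised bottleneck code feeds the decision network.
logits = classifier(code.detach())
cls_loss = nn.functional.cross_entropy(logits, labels)
```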


Subject(s)
Brain-Computer Interfaces; Electroencephalography; Evoked Potentials; Evoked Potentials, Visual; Humans; Photic Stimulation
3.
Sensors (Basel); 21(13), 2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201774

ABSTRACT

Solar cells may acquire defects during manufacturing in the photovoltaic (PV) industry, and these manufacturing defects must be identified to precisely evaluate the effectiveness of solar PV modules. Conventional inspection relies mainly on manual examination by highly skilled inspectors, which can still produce inconsistent, subjective results. To automate the visual defect inspection process, this paper presents an automatic cell segmentation technique and a convolutional neural network (CNN)-based defect detection system with pseudo-colorization of defects. High-resolution electroluminescence (EL) images of single-crystalline silicon (sc-Si) solar PV modules are used for defect detection and quality inspection. First, an automatic cell segmentation method extracts individual cells from an EL image. Second, defects are detected by a CNN-based detector and visualized with pseudo-colors. Contour tracing is used to accurately localize the panel region, and a probabilistic Hough transform identifies the gridlines and busbars on the extracted panel region for cell segmentation. A cell-based defect identification system was developed using state-of-the-art deep CNNs, and the detected defect regions are assigned pseudo-colors via K-means clustering to enhance visualization. The automatic cell segmentation method segments the cells of an EL image in about 2.71 s, with average segmentation errors of only 1.6 pixels in the x-direction and 1.4 pixels in the y-direction, and the defect detection approach on segmented cells achieves 99.8% accuracy.
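The sketch below illustrates two of the steps named in the abstract, assuming OpenCV and scikit-learn: a probabilistic Hough transform applied to panel edges (as used for gridline/busbar detection) and K-means clustering of a per-pixel defect response map for pseudo-colorization. The synthetic image and the defect map are placeholders for a real EL image and the CNN detector's output.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

# Synthetic grayscale frame standing in for a high-resolution EL panel image.
panel = (np.random.rand(300, 600) * 255).astype(np.uint8)

# Probabilistic Hough transform: long, roughly straight lines on the edge map
# are candidate gridlines/busbars used to cut the panel into individual cells.
edges = cv2.Canny(panel, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=120,
                        minLineLength=panel.shape[1] // 2, maxLineGap=10)
print("candidate gridlines/busbars:", 0 if lines is None else len(lines))

# Pseudo-colorization: cluster per-pixel defect responses (here random values
# standing in for the CNN detector's output) and map each cluster to a color.
defect_map = np.random.rand(*panel.shape).astype(np.float32)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(defect_map.reshape(-1, 1))
palette = np.array([[0, 255, 0], [0, 255, 255], [0, 0, 255]], dtype=np.uint8)
pseudo = palette[labels].reshape(panel.shape[0], panel.shape[1], 3)
```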


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Neural Networks, Computer; Silicon
4.
Int J Audiol; 57(2): 135-142, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28906160

ABSTRACT

OBJECTIVE: This study explored tone production, tone perception, and the intelligibility of produced speech in Mandarin-speaking prelingually deaf children with at least 5 years of cochlear implant (CI) experience. A further focus was the predictive value of tone perception and tone production for speech intelligibility. DESIGN: Cross-sectional study. STUDY SAMPLE: Thirty-three prelingually deafened children aged over eight years with more than five years of CI experience underwent tests of tone perception, tone production, and the Speech Intelligibility Rating (SIR). Pearson correlations and a stepwise regression analysis were used to estimate the relationships among tone perception, tone production, and SIR scores. RESULTS: The mean scores for tone perception, tone production, and SIR were 76.88%, 90.08%, and 4.08, respectively. Moderately positive Pearson correlations were found between tone perception and production, tone production and SIR, and tone perception and SIR (all p < 0.01). In the stepwise regression analysis, tone production, as the major predictor, accounted for 29% of the variance in SIR (p < 0.01). CONCLUSIONS: Mandarin-speaking cochlear-implanted children with a sufficient duration of CI use produce intelligible speech, and speech intelligibility can be predicted by tone production performance.
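A small sketch of the reported statistical analysis, assuming SciPy and statsmodels and using random placeholder scores rather than the study's data: Pearson correlations among the three measures and an ordinary least-squares fit of SIR on tone production, whose R-squared plays the role of the variance explained by the major predictor.

```python
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(0)
perception = rng.uniform(50, 100, 33)   # tone perception scores (%), n = 33
production = rng.uniform(60, 100, 33)   # tone production scores (%)
sir = rng.uniform(1, 5, 33)             # Speech Intelligibility Rating (1-5)

# Pairwise Pearson correlations among the three measures.
r_pp, p_pp = pearsonr(perception, production)
r_ps, p_ps = pearsonr(production, sir)
r_pe, p_pe = pearsonr(perception, sir)

# Regressing SIR on tone production; R-squared corresponds to the variance
# explained by the major predictor reported in the abstract.
model = sm.OLS(sir, sm.add_constant(production)).fit()
print(f"r(production, SIR) = {r_ps:.2f}, R^2 = {model.rsquared:.2f}")
```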


Subject(s)
Cochlear Implants; Deafness/physiopathology; Phonetics; Speech Intelligibility; Speech Perception; Adolescent; Asian People; Child; Cross-Sectional Studies; Deafness/psychology; Deafness/surgery; Female; Humans; Language; Male; Time Factors
5.
Bioengineering (Basel); 11(6), 2024 May 29.
Article in English | MEDLINE | ID: mdl-38927785

ABSTRACT

Cardiovascular disease (CVD) is one of the leading causes of death globally. Clinical diagnosis of CVD currently relies primarily on electrocardiograms (ECGs), from which abnormalities are relatively easier to identify than with other diagnostic methods; ensuring the accuracy of ECG readings, however, requires specialized training for healthcare professionals. Developing an ECG-based CVD diagnostic system can therefore provide preliminary diagnostic results, effectively reducing the workload of healthcare staff and enhancing the accuracy of CVD diagnosis. In this study, a deep neural network with a cross-stage partial network and a cross-attention-based transformer is used to develop an ECG-based CVD decision system. To accurately represent the characteristics of the ECG, the cross-stage partial network is employed to extract embedding features; it can effectively capture and leverage partial information from different stages, enhancing the feature extraction process. To distill the embedding features effectively, a cross-attention-based transformer, known for its robust scalability across data sequences of different lengths and complexities, is used to extract meaningful features, resulting in more accurate outcomes. The experimental results showed that the challenge scoring metric of the proposed approach is 0.6112, which outperforms the other methods compared. The proposed ECG-based CVD decision system is therefore useful for clinical diagnosis.
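As a rough illustration of the two named components, assuming PyTorch and hypothetical shapes, the sketch below shows a cross-stage partial style 1-D block (half of the feature channels are transformed, the rest are carried across and re-merged) and a cross-attention head in which a learned query attends over the ECG embedding sequence; it is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CSPBlock1d(nn.Module):
    """Cross-stage partial style block: transform half the channels, carry the rest."""
    def __init__(self, channels=32):
        super().__init__()
        half = channels // 2
        self.conv = nn.Sequential(nn.Conv1d(half, half, kernel_size=3, padding=1),
                                  nn.BatchNorm1d(half), nn.ReLU())
        self.merge = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)            # split the feature channels
        return self.merge(torch.cat([self.conv(a), b], dim=1))

class CrossAttentionHead(nn.Module):
    """A learned query cross-attends over the embedding sequence to distill it."""
    def __init__(self, dim=32, n_classes=5):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, seq):                         # seq: (batch, time, dim)
        q = self.query.expand(seq.size(0), -1, -1)
        pooled, _ = self.attn(q, seq, seq)
        return self.fc(pooled.squeeze(1))

ecg = torch.randn(4, 32, 256)                       # (batch, channels, samples)
feats = CSPBlock1d()(ecg)                           # CSP-style embedding features
logits = CrossAttentionHead()(feats.transpose(1, 2))
```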

6.
Bioengineering (Basel); 10(11), 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-38002405

ABSTRACT

(1) Background: Patients with severe physical impairments (spinal cord injury, cerebral palsy, amyotrophic lateral sclerosis) often have limited mobility and may even be bedridden all day, losing the ability to take care of themselves. In more severe cases the ability to speak may also be lost, making even basic communication very difficult. (2) Methods: This research designs image-based assistive communication equipment built on artificial intelligence to address daily communication needs. Artificial intelligence is used for facial localization, facial-motion recognition generates Morse code, and the code is translated into readable characters or commands, allowing users to operate computer software by themselves and to control peripheral devices in their environment over wireless networks or Bluetooth. (3) Results: In this study, 23 human-typed data sets were recognized using fuzzy algorithms. The average recognition rates for expert-generated data and for data entered by individuals with disabilities were 99.83% and 98.6%, respectively. (4) Conclusions: Through this system, users can express their thoughts and needs through facial movements, improving their quality of life and giving them an independent living space. Moreover, the system can be used without touching external switches, greatly improving convenience and safety.
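A minimal sketch of the Morse-code translation step described in the methods: facial-motion event durations are thresholded into dots and dashes and looked up as characters. The 0.4 s dash threshold and the example durations are assumptions, and the fuzzy recognition stage is omitted.

```python
MORSE_TABLE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y", "--..": "Z",
}

def durations_to_symbols(durations, dash_threshold=0.4):
    # Short facial-motion activations become dots, long ones become dashes.
    return "".join("." if d < dash_threshold else "-" for d in durations)

def decode_letters(groups):
    # Each group of durations (separated by a pause) encodes one character.
    return "".join(MORSE_TABLE.get(durations_to_symbols(g), "?") for g in groups)

# Two detected facial-motion groups: 0.2 s + 0.6 s -> ".-" -> "A"; 0.2 s -> "E".
print(decode_letters([[0.2, 0.6], [0.2]]))  # prints "AE"
```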

7.
IEEE Trans Biomed Eng; 58(11): 3061-8, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22009868

ABSTRACT

Articulation errors seriously reduce speech intelligibility and the ease of spoken communication. Speech-language pathologists identify articulation error patterns manually on the basis of their clinical experience, which is a time-consuming and expensive process. This study proposes an automatic pronunciation error identification system based on a novel dependence network (DN) approach. To obtain a subject's articulatory information, a photo-naming task is performed to collect the subject's speech patterns. Based on clinical knowledge of speech evaluation, a DN is used to model the relationships among a test word, a subject, a speech pattern, and an articulation error pattern. To integrate the DN into automatic speech recognition (ASR), a pronunciation confusion network is proposed to model the DN probabilities and guide the search space of the ASR. To further increase the accuracy of the ASR, an appropriate threshold based on a histogram of pronunciation errors is selected so that rare pronunciation errors are disregarded. Finally, articulation error patterns are identified by integrating the likelihoods of the DNs of the individual phonemes. The results indicate that this dependence network approach can feasibly be implemented clinically and achieves satisfactory performance in articulation evaluation.
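The sketch below illustrates, under assumptions, two supporting steps from the abstract: pruning rare pronunciation errors with a count threshold derived from an error histogram, and combining per-phoneme likelihoods (such as a DN would supply) into a log-space score for a candidate error pattern. The data, the 20% threshold rule, and the likelihood values are illustrative only.

```python
import math
from collections import Counter

# Hypothetical articulation-error observations collected from the naming task.
observed_errors = ["t->k", "t->k", "s->t", "t->k", "s->t", "z->s"]
counts = Counter(observed_errors)

# Histogram-based pruning: drop error patterns seen in fewer than 20% of cases,
# so that rare pronunciation errors do not enlarge the ASR search space.
threshold = 0.2 * sum(counts.values())
frequent_errors = {e: c for e, c in counts.items() if c >= threshold}

# Combine per-phoneme likelihoods (as a dependence network would supply) in log
# space to score one candidate articulation-error pattern over a whole word.
phoneme_likelihoods = {"t->k": 0.7, "s->t": 0.2}
pattern_score = sum(math.log(phoneme_likelihoods.get(e, 1e-6))
                    for e in ["t->k", "s->t"])
print(frequent_errors, round(pattern_score, 3))
```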


Subject(s)
Articulation Disorders/physiopathology; Pattern Recognition, Automated/methods; Algorithms; Child; Databases, Factual; Female; Humans; Male