Results 1 - 4 of 4
1.
Digit Health ; 10: 20552076241249874, 2024.
Article in English | MEDLINE | ID: mdl-38726217

ABSTRACT

Automated epileptic seizure detection from electroencephalogram (EEG) signals has attracted significant attention in the health informatics field in recent years. Epilepsy, a serious brain condition characterized by recurrent seizures, typically manifests as a sudden change in behavior caused by momentary excessive electrical discharges in a group of brain cells, and the EEG signal is the primary means of identifying seizures for closed-loop brain intervention. The development of deep learning (DL) algorithms for epileptic seizure diagnosis has been driven by the EEG's non-invasiveness and its capacity to capture repetitive patterns of seizure-related electrophysiological information. Existing DL models struggle, especially in clinical contexts, because the irregular and unordered structure of physiological recordings makes them difficult to treat as a matrix; combined with the EEG's low amplitude and nonstationary nature, this has been a key obstacle to producing consistent and reliable diagnostic outcomes. Graph neural networks have achieved significant improvements by exploiting the implicit information present in brain anatomy, where interacting nodes are connected by edges whose weights can be determined by either temporal associations or anatomical connections. Considering all these aspects, a novel hybrid framework is proposed for epileptic seizure detection that combines a sequential graph convolutional network (SGCN) with a deep recurrent neural network (DeepRNN). Here, the DeepRNN is developed by fusing a gated recurrent unit (GRU) with a traditional RNN; its key benefit is that it mitigates the vanishing-gradient problem and gives the hybrid framework greater sophistication. The line-length feature, auto-covariance, auto-correlation, and periodogram are extracted as features from the raw EEG signal, and the resulting matrix is grouped into the time-frequency domain as input for the SGCN to use for seizure classification.
This model extracts both spatial and temporal information, resulting in improved accuracy, precision, and recall for seizure detection. Extensive experiments conducted on the CHB-MIT and TUH datasets showed that the SGCN-DeepRNN model outperforms other deep learning models for seizure detection, achieving an accuracy of 99.007%, with high sensitivity and specificity.
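As a concrete illustration of the four hand-crafted features the abstract names (line length, auto-covariance, auto-correlation, periodogram), the sketch below computes them from a single EEG window with NumPy. The window length, sampling rate, and lag-1 convention are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def eeg_features(x, fs=256.0):
    """Line length, lag-1 auto-covariance, lag-1 auto-correlation,
    and periodogram of one EEG window x (1-D array) sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xm = x - x.mean()

    # Line length: total absolute variation between successive samples.
    line_length = float(np.sum(np.abs(np.diff(x))))

    # Auto-covariance at lag 1 (biased estimator).
    autocov = float(np.dot(xm[:-1], xm[1:]) / n)

    # Auto-correlation at lag 1: auto-covariance normalised by variance.
    autocorr = autocov / (float(np.dot(xm, xm)) / n)

    # Periodogram: squared DFT magnitude scaled by 1/(fs * n).
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    pxx = (np.abs(np.fft.rfft(xm)) ** 2) / (fs * n)
    return line_length, autocov, autocorr, freqs, pxx

# Illustrative window: a 1-second, 10 Hz "rhythm" at fs = 256 Hz.
fs = 256.0
t = np.arange(int(fs)) / fs
ll, acov, acorr, freqs, pxx = eeg_features(np.sin(2 * np.pi * 10 * t), fs)
```

The resulting features would then be arranged in the time-frequency domain before being fed to the SGCN, as the abstract describes.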

2.
PLoS One ; 19(5): e0299009, 2024.
Article in English | MEDLINE | ID: mdl-38805494

ABSTRACT

Maintaining stable voltage levels is essential for the efficiency and reliability of power systems. Voltage fluctuations during load changes can lead to equipment damage and costly disruptions. Automatic voltage regulators (AVRs) are traditionally used to address this issue by regulating the generator terminal voltage. Despite progress in control methodologies, challenges persist, including limitations in robustness and response time. This study therefore introduces a novel approach to AVR control, aiming to enhance robustness and efficiency. A custom optimizer, the quadratic wavelet-enhanced gradient-based optimization (QWGBO) algorithm, is developed. QWGBO refines gradient-based optimization (GBO) with improvements to both exploration and exploitation: it integrates a quadratic interpolation mutation and a wavelet mutation strategy to enhance search efficiency. Extensive tests on benchmark functions demonstrate QWGBO's effectiveness in optimization, and comparative assessments against existing optimization algorithms and recent techniques confirm its superior performance. For AVR control, QWGBO is coupled with a cascaded real proportional-integral-derivative with second-order derivative (RPIDD2) controller and a fractional-order proportional-integral (FOPI) controller, aiming for precision, stability, and quick response. The algorithm's performance is verified through rigorous simulations, emphasizing its effectiveness in optimizing complex engineering problems. Comparative analyses highlight QWGBO's superiority over existing algorithms, positioning it as a promising solution for optimizing power system control and contributing to the advancement of robust and efficient power systems.
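The wavelet-mutation idea mentioned above can be illustrated with a minimal sketch: a (1+1)-style hill-climber on the sphere benchmark whose mutation step is scaled by a Morlet wavelet with a dilation that grows over the generations, so early steps explore broadly and late steps fine-tune. The bounds, acceptance rule, and dilation schedule here are assumptions for illustration; this is not the QWGBO algorithm itself.

```python
import numpy as np

def morlet(u):
    # Morlet mother wavelet, the usual choice in wavelet-mutation schemes.
    return np.exp(-u ** 2 / 2.0) * np.cos(5.0 * u)

def wavelet_mutate(x, lo, hi, gen, max_gen, rng):
    """Perturb candidate x within [lo, hi]. The dilation parameter `a`
    grows with the generation, shrinking the step scale from coarse
    exploration to fine exploitation."""
    a = 10.0 ** (3.0 * gen / max_gen)       # dilation grows over time
    u = rng.uniform(-2.5, 2.5, size=x.shape)
    sigma = morlet(u) / np.sqrt(a)          # per-dimension step scale
    # Positive sigma pushes toward the upper bound, negative toward the lower.
    step = np.where(sigma > 0, sigma * (hi - x), sigma * (x - lo))
    return np.clip(x + step, lo, hi)

def sphere(x):
    return float(np.sum(x ** 2))

def optimize(dim=5, iters=500, seed=0):
    """(1+1)-style hill-climb with wavelet mutation on the sphere function."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    best = rng.uniform(lo, hi, dim)
    init_f = best_f = sphere(best)
    for g in range(iters):
        cand = wavelet_mutate(best, lo, hi, g, iters, rng)
        f = sphere(cand)
        if f < best_f:                      # greedy acceptance
            best, best_f = cand, f
    return init_f, best_f
```

In the full QWGBO this operator is embedded in a GBO population update alongside the quadratic interpolation mutation, rather than used as a standalone search.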


Subject(s)
Algorithms , Electric Power Supplies
3.
Sci Rep ; 13(1): 16975, 2023 10 09.
Article in English | MEDLINE | ID: mdl-37813932

ABSTRACT

Sign language recognition is a breakthrough for communication in the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, they required many costly instruments, including sensors, devices, and high-end processing power. Such drawbacks can be overcome by employing artificial-intelligence-based techniques: in this modern era of advanced mobile technology, using a camera to capture video or images is much easier, so this study demonstrates a cost-effective technique to detect American Sign Language (ASL) using an image dataset. Here, the "Finger Spelling, A" dataset has been used, with 24 letters (excluding j and z, as they involve motion). The main reason for using this dataset is that its images have complex backgrounds with different environments and scene colors. Two layers of image processing have been used: in the first layer, images are processed as a whole for training, and in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model has been proposed to train these two layers and tested with 30% of the dataset. To avoid overfitting, data augmentation and dynamic learning-rate reduction have been used. With the proposed model, 98.981% test accuracy has been achieved. This study may help to develop an efficient human-machine communication system for the deaf-mute community.
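The dynamic learning-rate reduction mentioned above is typically a reduce-on-plateau rule: cut the rate when validation loss stops improving. A minimal framework-free sketch follows; the default values are illustrative, as the abstract does not specify the paper's actual schedule.

```python
class PlateauLRReducer:
    """Halve the learning rate when the monitored validation loss has
    not improved for `patience` consecutive epochs."""

    def __init__(self, lr=1e-3, factor=0.5, patience=3, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:          # progress: reset the counter
            self.best = val_loss
            self.wait = 0
        else:                             # stagnation: count toward a cut
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

Called once per epoch with the validation loss, the returned rate feeds the optimizer; deep-learning frameworks ship equivalent built-in callbacks.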


Subject(s)
Artificial Intelligence , Sign Language , Humans , Neural Networks, Computer , Hand , Image Processing, Computer-Assisted/methods
4.
Materials (Basel) ; 16(3)2023 Jan 23.
Article in English | MEDLINE | ID: mdl-36770037

ABSTRACT

This work focused on a novel and compact 1-bit symmetrical coding-based metamaterial for radar cross section (RCS) reduction at terahertz frequencies. A pair of coding particles was constructed to represent the elements '0' and '1', which have a phase difference of 180°. All analytical simulations were performed with Computer Simulation Technology Microwave Studio 2019 software. The transmission coefficient of the element '1' was also examined with the same software and validated with a high-frequency structure simulator. The frequency range from 0 to 3 THz was considered in this work. The phase response of each element was examined before constructing various coding metamaterial designs on smaller and larger lattices. The proposed unit cells exhibit phase responses at 0.84 THz and 1.54 THz, respectively. The analysis of various coding sequences was carried out, and they show interesting monostatic and bistatic RCS reduction performance. Coding Sequence 2 shows the best bistatic RCS reduction on the smaller lattices, reduced from -69.8 dBm² to -65.5 dBm² at 1.54 THz. On the other hand, the monostatic RCS values for all lattices rise along an inclined line from more than -60 dBm² until they reach a frequency of 1.0 THz; from 1.0 THz to 3.0 THz, the RCS values show moderate deviations around a horizontal line for each lattice. Furthermore, two parametric studies were performed to examine the RCS reduction behaviour: multi-layer structures and tilt positioning of the proposed coding metamaterial. Overall, the results indicate that the integration of coding-based metamaterial successfully reduced the RCS values.
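The principle behind 1-bit coding RCS reduction can be sketched with a simple array-factor model: elements '0' and '1' scatter with a 180° phase difference, so a suitable coding sequence cancels the specular (broadside) return. The lattice size and checkerboard sequence below are illustrative assumptions, not the paper's geometry.

```python
import numpy as np

def backscatter_magnitude(code):
    """Broadside scattered-field magnitude of a 1-bit coding metasurface:
    element '0' radiates with phase 0, element '1' with phase 180 deg.
    At broadside the inter-element path delays vanish, so only the
    coding phases interfere."""
    elem_phase = np.pi * np.asarray(code, dtype=float)   # 0 or pi per element
    return float(np.abs(np.sum(np.exp(1j * elem_phase))))

n = 8                                          # illustrative 8 x 8 lattice
uniform = np.zeros((n, n))                     # all-'0' reference surface
checker = np.indices((n, n)).sum(axis=0) % 2   # '0101...' checkerboard

back_uniform = backscatter_magnitude(uniform)
back_checker = backscatter_magnitude(checker)

# Broadside RCS reduction relative to the uniform surface, in dB
# (small offset avoids log10(0) when cancellation is exact).
reduction_db = 20.0 * np.log10(back_checker / back_uniform + 1e-12)
```

A full bistatic pattern would add the angle-dependent path-phase term per element; this broadside-only sketch just shows why alternating '0'/'1' phases cancel the specular return while a uniform surface reflects it coherently.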
