Results 1 - 20 of 24
1.
Phys Med Biol ; 69(11)2024 May 21.
Article in English | MEDLINE | ID: mdl-38657624

ABSTRACT

Objective. Automatic and accurate airway segmentation is necessary for lung disease diagnosis. The complex tree-like structure leads to gaps between the different generations of the airway tree, so airway segmentation is also considered a multi-scale problem. In recent years, convolutional neural networks have facilitated the development of medical image segmentation. In particular, 2D CNNs and 3D CNNs can extract features at different scales. Hence, we propose a two-stage, 2D + 3D framework for multi-scale airway tree segmentation. Approach. In stage 1, we use a 2D full airway SegNet (2D FA-SegNet) to segment the complete airway tree. Multi-scale atrous spatial pyramid and atrous residual skip connection modules are inserted to extract features at different scales. We designed a hard sample selection strategy to increase the proportion of intrapulmonary airway samples in stage 2. A 3D airway RefineNet (3D ARNet), as stage 2, takes the results of stage 1 as a priori information. Spatial information extracted by the 3D convolutional kernels compensates for the spatial information lost in 2D FA-SegNet. Furthermore, we added false positive and false negative losses to improve the segmentation performance of airway branches within the lungs. Main results. We performed data enhancement on the publicly available dataset of ISICDM 2020 Challenge 3 and evaluated our method on it. Comprehensive experiments show that the proposed method achieves the highest Dice similarity coefficient (DSC) of 0.931 and IoU of 0.871 for the whole airway tree, and a DSC of 0.699 and IoU of 0.543 for the intrapulmonary bronchial tree. In addition, the 3D ARNet proposed in this paper, cascaded with other state-of-the-art methods, increased the detected tree length rate by up to 46.33% and the detected tree branch rate by up to 42.97%. Significance. The quantitative and qualitative evaluation results show that our proposed method performs well in segmenting the airway at different scales.
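The DSC and IoU figures reported above are standard overlap metrics for binary segmentation masks; a minimal sketch (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_iou(pred, target):
    """Dice similarity coefficient and IoU between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

# Toy 2x3 masks: 2 overlapping pixels, 3 pixels in each mask, union of 4
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_iou(pred, target)   # dice = 2*2/(3+3), iou = 2/4
```

For the whole-airway numbers above (DSC 0.931, IoU 0.871), `pred` and `target` would be the predicted and reference airway masks.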


Subject(s)
Image Processing, Computer-Assisted; Lung; Tomography, X-Ray Computed; Lung/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer
2.
Biosensors (Basel) ; 14(1)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38275310

ABSTRACT

Carcinoembryonic antigen (CEACAM5), as a broad-spectrum tumor biomarker, plays a crucial role in analyzing the therapeutic efficacy and progression of cancer. Herein, we propose a novel biosensor based on specklegrams of a tapered multimode fiber (MMF) and two-dimensional convolutional neural networks (2D-CNNs) for the detection of CEACAM5. The microfiber is modified with CEA antibodies to specifically recognize antigens. The biosensor utilizes the interference effect of the tapered MMF to generate highly sensitive specklegrams in response to different CEACAM5 concentrations. A zero-mean normalized cross-correlation (ZNCC) function is explored to calculate the image matching degree of the specklegrams. Benefiting from the speckle sensor's extremely sensitive detection limit, variations in the specklegrams for concentrations from 1 to 1000 ng/mL are measured in the experiment. The surface sensitivity of the biosensor is 0.0012 (ng/mL)⁻¹ within a range of 1 to 50 ng/mL. Moreover, a 2D-CNN was introduced to address the nonlinear variation of detection surface sensitivity over a large dynamic range and to search for image features that improve evaluation accuracy, achieving more accurate CEACAM5 monitoring with a maximum detection error of 0.358%. The proposed fiber specklegram biosensing scheme is easy to implement and has great potential in analyzing the postoperative condition of patients.
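The ZNCC image-matching degree used above has a standard closed form: subtract each image's mean, then normalize the cross-correlation by both energies. A minimal numpy sketch (function and array names are illustrative):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size images, in [-1, 1]."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

ref = np.array([[1.0, 2.0], [3.0, 4.0]])
bright = ref + 10.0   # uniform brightness offset
inverted = -ref       # contrast inversion
match = zncc(ref, bright)       # ~1.0: identical pattern, shifted brightness
anti = zncc(ref, inverted)      # ~-1.0: perfectly anti-correlated
```

Because both images are mean-centered and energy-normalized, ZNCC is invariant to uniform brightness shifts, which suits comparing speckle patterns across acquisitions.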


Subject(s)
Biosensing Techniques; Neoplasms; Humans; Carcinoembryonic Antigen; GPI-Linked Proteins
3.
Anal Chim Acta ; 1282: 341908, 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-37923405

ABSTRACT

BACKGROUND: Raman spectroscopy has been extensively utilized as a marker-free detection method in the complementary diagnosis of cancer. Multivariate statistical classification analysis is frequently employed for Raman spectral data classification. Nevertheless, traditional multivariate statistical classification analysis performs poorly when analyzing large samples and multicategory spectral data. Meanwhile, with the advancement of computer vision, convolutional neural networks (CNNs) have demonstrated extraordinarily precise analysis in two-dimensional image processing. RESULTS: Combining 2D Raman spectrograms with an automatic weighted feature fusion network (AWFFN) for bladder cancer detection is presented in this paper. Initially, the S-transform (ST) is implemented for the first time to convert 1D Raman data into 2D spectrograms, achieving 99.2% detection accuracy. Second, four upscaling techniques, including the short-time Fourier transform (STFT), recurrence plot (RP), Markov transition field (MTF), and Gramian angular field (GAF), were used to transform the 1D Raman spectral data into a variety of 2D Raman spectrograms. In addition, a particle swarm optimization (PSO) algorithm is combined with VGG19, ResNet50, and ResNet101 to construct a weighted feature fusion network, and this parallel network is employed for evaluating the multiple spectrograms. Class activation mapping (CAM) is additionally employed to illustrate and evaluate the feature extraction process of the three parallel network branches. The results demonstrate that the combination of a 2D Raman spectrogram with a CNN for the diagnosis of bladder cancer obtains a 99.2% accuracy rate, which indicates that it is an extremely promising auxiliary technology for cancer diagnosis.
SIGNIFICANCE: The proposed two-dimensional Raman spectroscopy method offers improved precision over one-dimensional spectral data, presenting a potential methodology for assisted cancer detection and providing crucial technical support for assisted diagnosis.
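Of the 1D-to-2D encodings listed above, the Gramian angular (summation) field has a compact closed form under its usual definition: rescale the series to [-1, 1], take phi = arccos(x), and set G[i, j] = cos(phi_i + phi_j). A minimal sketch (names and the toy series are illustrative):

```python
import numpy as np

def gasf(x):
    """Gramian angular summation field of a 1D series."""
    x = np.asarray(x, float)
    # Min-max rescale to [-1, 1], then encode each value as an angle
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # Pairwise angular sums give a symmetric n x n image
    return np.cos(phi[:, None] + phi[None, :])

img = gasf([0.0, 1.0, 2.0, 3.0])  # 4-sample toy series -> 4x4 image
```

The resulting image is symmetric, and each pixel preserves the pairwise temporal relation of the two samples it combines, which is what makes these encodings suitable CNN inputs.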


Subject(s)
Urinary Bladder Neoplasms; Humans; Urinary Bladder Neoplasms/diagnosis; Algorithms; Image Processing, Computer-Assisted; Multivariate Analysis; Spectrum Analysis, Raman; Technology
4.
Front Neurosci ; 17: 1177424, 2023.
Article in English | MEDLINE | ID: mdl-37614342

ABSTRACT

Background: The convolutional neural network (CNN) is a mainstream deep learning (DL) algorithm, and it has gained great fame in solving problems in clinical examination and diagnosis, such as Alzheimer's disease (AD). AD is a degenerative disease that is difficult to diagnose clinically due to its unclear underlying pathological mechanism. Previous studies have primarily focused on investigating structural abnormalities in the brain's functional networks related to AD or on proposing different deep learning approaches for AD classification. Objective: The aim of this study is to leverage the advantages of combining brain topological features extracted from functional network exploration and deep features extracted by the CNN. We establish a novel fMRI-based classification framework that utilizes resting-state functional magnetic resonance imaging (rs-fMRI) with the phase synchronization index (PSI) and a 2D-CNN to detect abnormal brain functional connectivity in AD. Methods: First, the PSI was applied to construct the brain network from region of interest (ROI) signals obtained in the data preprocessing stage, and eight topological features were extracted. Subsequently, the 2D-CNN was applied to the PSI matrix to explore the local and global patterns of the network connectivity, extracting eight deep features from the 2D-CNN convolutional layer. Results: Finally, classification analysis was carried out on the combined PSI and 2D-CNN features to recognize AD by using a support vector machine (SVM) with a 5-fold cross-validation strategy. It was found that the classification accuracy of the combined method reached 98.869%. Conclusion: These findings show that our framework can adaptively combine the best brain network features to explore network synchronization and functional connections and characterize brain functional abnormalities, effectively detecting AD anomalies; the extracted features may provide new insights into the underlying pathogenesis of AD.
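The phase synchronization index used to build the PSI matrix is commonly defined as PSI = |⟨exp(i Δφ)⟩| over instantaneous phases of the two signals. A minimal numpy-only sketch (the `analytic` helper stands in for `scipy.signal.hilbert`; the test signals are illustrative, not ROI data):

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only equivalent of scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # keep the Nyquist bin
    return np.fft.ifft(X * h)

def psi(x, y):
    """Phase synchronization index |<exp(i*(phi_x - phi_y))>|, in [0, 1]."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

t = np.linspace(0.0, 1.0, 500, endpoint=False)
locked = psi(np.sin(2 * np.pi * 10 * t),
             np.sin(2 * np.pi * 10 * t + 0.5))   # constant phase lag -> PSI near 1
unlocked = psi(np.sin(2 * np.pi * 10 * t),
               np.sin(2 * np.pi * 17 * t))       # drifting phase -> PSI near 0
```

Computing `psi` over all ROI pairs yields the symmetric matrix the 2D-CNN operates on.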

5.
Entropy (Basel) ; 25(6)2023 May 25.
Article in English | MEDLINE | ID: mdl-37372188

ABSTRACT

Human action recognition (HAR) is an essential process in surveillance video analysis, used to understand the behavior of people and ensure safety. Most of the existing methods for HAR use computationally heavy networks such as 3D CNNs and two-stream networks. To alleviate the challenges in implementing and training 3D deep learning networks, which have more parameters, a customized, lightweight, directed-acyclic-graph-based residual 2D CNN with fewer parameters was designed from scratch and named HARNet. A novel pipeline for the construction of spatial motion data from raw video input is presented for the latent representation learning of human actions. The constructed input is fed to the network for simultaneous operation over spatial and motion information in a single stream, and the latent representation learned at the fully connected layer is extracted and fed to conventional machine learning classifiers for action recognition. The proposed work was empirically verified, and the experimental results were compared with those for existing methods. The results show that the proposed method outperforms state-of-the-art (SOTA) methods with a percentage improvement of 2.75% on UCF101, 10.94% on HMDB51, and 0.18% on the KTH dataset.

6.
Front Neuroinform ; 17: 1081160, 2023.
Article in English | MEDLINE | ID: mdl-37035716

ABSTRACT

This paper presents a time-efficient preprocessing framework that converts any given 1D physiological signal recording into a 2D image representation for training image-based deep learning models. The non-stationary signal is rasterized into the 2D image using Bresenham's line algorithm with time complexity O(n). The robustness of the proposed approach is evaluated on two publicly available datasets. This study classified three different neural spikes (multi-class) and EEG epileptic seizure vs. non-seizure (binary class) based on shape, using a modified 2D convolutional neural network (2D CNN). The multi-class dataset consists of artificially simulated neural recordings with different signal-to-noise ratios (SNR). The 2D CNN architecture showed strong performance at every SNR (SNR/accuracy %): 0.5/99.69, 0.75/99.69, 1.0/99.49, 1.25/98.85, 1.5/97.43, 1.75/95.20 and 2.0/91.98. Additionally, on the binary-class dataset it achieved 97.52% accuracy, outperforming several other proposed algorithms. Likewise, this approach could be employed on other biomedical signals such as electrocardiography (EKG) and electromyography (EMG).
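The Bresenham-based rasterization described above can be sketched as follows: each sample becomes a column, and consecutive samples are joined with integer line segments so the waveform appears as a connected trace in a binary image. The function names, image height, and test signal are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer points of the line from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

def rasterize(signal, height=32):
    """Draw a 1D signal into a 2D binary image, one column per sample."""
    s = np.asarray(signal, float)
    rows = np.round((height - 1) * (s - s.min()) / (s.max() - s.min())).astype(int)
    img = np.zeros((height, len(s)), dtype=np.uint8)
    for i in range(len(s) - 1):
        for x, y in bresenham(i, rows[i], i + 1, rows[i + 1]):
            img[y, x] = 1
    return img

img = rasterize(np.sin(np.linspace(0, 2 * np.pi, 64)))
```

Each segment costs O(length), so rasterizing the whole recording stays linear in the number of samples, matching the O(n) claim above.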

7.
Biomed Tech (Berl) ; 68(5): 457-468, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37099486

ABSTRACT

Internet Gaming Disorder (IGD), a worldwide mental health issue, has negative effects on physical and mental health and has attracted public attention. Most studies on IGD are based on screening scales and the subjective judgments of doctors, without objective quantitative assessment, and public understanding of internet gaming disorder lacks objectivity. Therefore, research on internet gaming disorder still has many limitations. In this paper, a stop-signal task (SST) was designed to assess inhibitory control in patients with IGD based on prefrontal functional near-infrared spectroscopy (fNIRS). According to the scale, the subjects were divided into healthy and gaming disorder groups. Signals from a total of 40 subjects (24 with internet gaming disorder; 16 healthy controls) were used for deep learning-based classification. The seven algorithms used for classification and comparison were deep learning (DL) and machine learning (ML) algorithms, with four and three algorithms in each category, respectively. After applying the hold-out method, the performance of each model was verified by accuracy. DL models outperformed traditional ML algorithms. Furthermore, the two-dimensional convolutional neural network (2D-CNN) achieved the highest classification accuracy among all tested models, 87.5%. The 2D-CNN was able to outperform the other models due to its ability to learn complex patterns in data, which makes it well suited for image classification tasks. The findings suggest that a 2D-CNN model is an effective approach for predicting internet gaming disorder. The results show that this is a reliable, high-accuracy method for identifying patients with IGD and demonstrate that using fNIRS to facilitate the development of IGD diagnosis has great potential.


Subject(s)
Behavior, Addictive; Video Games; Humans; Internet Addiction Disorder; Spectroscopy, Near-Infrared; Video Games/psychology; Behavior, Addictive/diagnosis; Behavior, Addictive/psychology; Neural Networks, Computer; Internet
8.
Int J Neurosci ; 133(5): 512-522, 2023 May.
Article in English | MEDLINE | ID: mdl-34042552

ABSTRACT

BACKGROUND: Moyamoya disease (MMD) is a serious intracranial cerebrovascular disease. Cerebral hemorrhage caused by MMD puts patients' lives at risk; therefore, MMD detection is of great significance in the prevention of cerebral hemorrhage. In order to improve the accuracy of digital subtraction angiography (DSA) in the diagnosis of ischemic MMD, in this paper, a deep network architecture combining a 3D convolutional neural network (3D CNN) and a bidirectional convolutional gated recurrent unit (BiConvGRU) is proposed to learn spatiotemporal features for ischemic MMD detection. METHODS: Firstly, a 2D convolutional neural network (2D CNN) is utilized to extract spatial features from each frame of DSA. Secondly, the long-term spatiotemporal features of the DSA sequence are extracted by the BiConvGRU. Thirdly, the short-term spatiotemporal features of DSA are further extracted by the 3D CNN. In addition, different features are extracted when grayscale images and optical flow images pass through the network, and multiple features are combined by feature fusion. Finally, the fused features are utilized for classification. RESULTS: The proposed method was quantitatively evaluated on a data set of 630 cases. The experimental results showed a detection accuracy of 0.9788; sensitivity and specificity were 0.9780 and 0.9796, respectively, and the area under the curve (AUC) was 0.9856. Compared with other methods, ours achieved the highest accuracy and AUC. CONCLUSIONS: The experimental results show that the proposed method is stable and reliable for ischemic MMD detection, which provides an option for doctors to accurately diagnose ischemic MMD.


Subject(s)
Moyamoya Disease; Humans; Moyamoya Disease/diagnostic imaging; Angiography, Digital Subtraction/methods; Neural Networks, Computer; Cerebral Hemorrhage
9.
Diagnostics (Basel) ; 12(12)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36553066

ABSTRACT

Human falls, especially for elderly people, can cause serious injuries that might lead to permanent disability. Approximately 20-30% of the aged people in the United States who experienced fall accidents suffer from head trauma, injuries, or bruises. Fall detection is becoming an important public healthcare problem. Timely and accurate fall incident detection could enable the instant delivery of medical services to the injured. New advances in vision-based technologies, including deep learning, have shown significant results in action recognition, with some focusing on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach applied to every 16 frames of an input video to highlight movement differences within this range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN) that classifies whether there is an incident of a human fall. The evaluation included the use of more than 6392 generated sequences from the Le2i fall detection dataset, a publicly available fall video dataset. The proposed method, using three-fold cross-validation to validate generalization and susceptibility to overfitting, achieved 99.03%, 99.00%, 99.68%, and 99.00% accuracy, sensitivity, specificity, and precision, respectively. The experimental results prove that the proposed model outperforms state-of-the-art models, including GoogleNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.

10.
Innov Syst Softw Eng ; : 1-14, 2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36060497

ABSTRACT

Hand gestures are useful tools for many applications in the human-computer interaction community. Here, the objective is to track the movement of the hand irrespective of the shape, size, and color of the hand. For this, a motion template guided by optical flow (OFMT) is proposed. An OFMT is a compact representation of the motion information of a gesture encoded into a single image. Recently, deep networks have shown impressive improvements over conventional hand-crafted feature-based techniques. Moreover, it has been observed that the use of different streams with informative input data helps to increase recognition performance. This work proposes a two-stream fusion model for hand gesture recognition. The two-stream network consists of two branches: a 3D convolutional neural network (C3D) that takes gesture videos as input, and a 2D-CNN that takes OFMT images as input. C3D has shown its efficiency in capturing the spatiotemporal information of a video, whereas the OFMT helps to eliminate irrelevant gestures by providing additional motion information. Though each stream can work independently, they are combined with a fusion scheme to boost the recognition results. We have shown the efficiency of the proposed two-stream network on two databases.

11.
Entropy (Basel) ; 24(8)2022 Aug 01.
Article in English | MEDLINE | ID: mdl-36010726

ABSTRACT

The segmentation of cerebral aneurysms is a challenging task because of their similar imaging features to blood vessels and the great imbalance between foreground and background. Moreover, existing 2D segmentation methods do not make full use of 3D information and ignore the influence of global features. In this study, we propose an automatic solution for the segmentation of cerebral aneurysms. The proposed method relies on the 2D U-Net as the backbone and adds a Transformer block to capture remote information. Additionally, through a new entropy selection strategy, the network pays more attention to the indistinguishable blood vessels and aneurysms, so as to reduce the influence of class imbalance. In order to introduce global features, three continuous patches are taken as inputs, and a segmentation map corresponding to the central patch is generated. In the inference phase, the full segmentation map is generated using the proposed recombination strategy. We verified the proposed method on the CADA dataset, achieving a Dice coefficient (DSC) of 0.944, an IoU score of 0.941, a recall of 0.946, an F2 score of 0.942, a mAP of 0.896, and a Hausdorff distance of 3.12 mm.

12.
Front Genet ; 13: 851688, 2022.
Article in English | MEDLINE | ID: mdl-35937990

ABSTRACT

The major mechanism of proteolysis in the cytosol and nucleus is the ubiquitin-proteasome pathway (UPP). The highly controlled UPP affects a wide range of cellular processes and substrates, and flaws in the system can lead to the pathogenesis of a number of serious human diseases. Knowledge about UPPs provides useful hints for understanding cellular processes and for drug discovery. The exponential growth in next-generation sequencing wet lab approaches has accelerated the accumulation of unannotated data in online databases, making the UPP characterization/analysis task more challenging. Thus, computational methods are used as an alternative for fast and accurate identification of UPPs. To this end, we develop a novel deep learning-based predictor named "2DCNN-UPP" for identifying UPPs with a low error rate. In the proposed method, we use a two-dimensional convolutional neural network with dipeptide deviation features. To avoid the overfitting problem, a genetic algorithm is employed to select the optimal features. Finally, the optimized attribute set is fed as input to the 2D-CNN learning engine for building the model. Empirical results demonstrate that the proposed predictor achieves superior overall accuracy and AUC (ROC) compared with other state-of-the-art UPP classification methods; under 10-fold cross-validation, 2DCNN-UPP obtained an AUC (ROC) of 0.862. Where experimentally validated ubiquitination sites are available, a proteomics-based predictor of ubiquitination sites should also be devised. Meanwhile, we evaluated the generalization power of our trained model via an independent test and obtained remarkable performance: 0.862 accuracy, 0.921 sensitivity, 0.803 specificity, and 0.730 Matthews correlation coefficient (MCC).
Four encoding approaches were applied to the sequences, and the calculated physical properties were combined. We also analyzed the relationship between the predicted scores of UPP and non-UPP proteins. Finally, this research can effectively analyze the large-scale relationship between UPP and non-UPP proteins in particular, and other protein problems in general, and might improve computational biological research. Therefore, the model framework and Dipeptide Deviation from Expected Mean (DDE)-based protein structure features could be utilized for the prediction of protein structure, functions, and different molecules, such as DNA and RNA.
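The Dipeptide Deviation from Expected Mean (DDE) features mentioned above are commonly computed from dipeptide composition against a codon-based expectation: for each of the 400 dipeptides, the observed frequency is standardized by a theoretical mean and variance derived from the amino acids' sense-codon counts. A sketch under that standard definition (the toy sequence is illustrative):

```python
import itertools

# Number of sense codons per amino acid (sums to 61)
CODONS = {'A': 4, 'R': 6, 'N': 2, 'D': 2, 'C': 2, 'Q': 2, 'E': 2, 'G': 4,
          'H': 2, 'I': 3, 'L': 6, 'K': 2, 'M': 1, 'F': 2, 'P': 4, 'S': 6,
          'T': 4, 'W': 1, 'Y': 2, 'V': 4}
AA = sorted(CODONS)
DIPEPTIDES = [a + b for a, b in itertools.product(AA, repeat=2)]

def dde(seq):
    """400-dimensional DDE feature vector of a protein sequence."""
    n = len(seq) - 1                      # number of dipeptides in seq
    dc = {dp: 0 for dp in DIPEPTIDES}     # dipeptide counts
    for i in range(n):
        dc[seq[i:i + 2]] += 1
    feats = []
    for dp in DIPEPTIDES:
        tm = (CODONS[dp[0]] / 61) * (CODONS[dp[1]] / 61)  # theoretical mean
        tv = tm * (1 - tm) / n                            # theoretical variance
        feats.append((dc[dp] / n - tm) / tv ** 0.5)       # standardized deviation
    return feats

v = dde("MKVLAAGK")   # toy 8-residue sequence
```

Positive entries mark dipeptides that occur more often than the codon-usage expectation; absent dipeptides get negative values.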

13.
Med Phys ; 49(9): 5855-5869, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35894542

ABSTRACT

BACKGROUND: In recent years, the two-dimensional convolutional neural network (2D CNN) has been widely used in the diagnosis of Alzheimer's disease (AD) based on structural magnetic resonance imaging (sMRI). However, due to the lack of targeted processing of the key slices of sMRI images, the classification performance of the CNN model needs to be improved. PURPOSE: Therefore, in this paper, we propose a key slice processing technique called the structural highlighting key slice stacking (SHKSS) technique, and we apply it to a 2D transfer learning model for AD classification. METHODS: Specifically, first, 3D MR images were preprocessed. Second, the 2D axial middle-layer image was extracted from the MR image as a key slice. Then, the image was intensity-normalized and mapped to the red, green, and blue (RGB) space, and histogram specification was performed on the obtained RGB image to generate the final three-channel image. The final three-channel image was input into a pretrained CNN model for AD classification. Finally, classification and generalization experiments were conducted to verify the validity of the proposed method. RESULTS: The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) data set show that our SHKSS method can effectively highlight the structural information in MRI slices. Compared with existing key slice processing techniques, our SHKSS method has an average accuracy improvement of at least 26% on the same test data set, and it has better performance and generalization ability. CONCLUSIONS: Our SHKSS method not only converts single-channel images into three-channel images to match the input requirements of the 2D transfer learning model but also highlights the structural information of MRI slices to improve the accuracy of AD diagnosis.
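The histogram specification step above (reshaping a slice's intensity distribution onto a reference distribution) can be sketched via CDF matching; `match_histogram` and the toy arrays are illustrative names, not the paper's implementation:

```python
import numpy as np

def match_histogram(source, template):
    """Histogram specification: map source intensities so their
    distribution follows the template's distribution (CDF matching)."""
    src, tmpl = source.ravel(), template.ravel()
    s_vals, s_idx, s_cnt = np.unique(src, return_inverse=True, return_counts=True)
    t_vals, t_cnt = np.unique(tmpl, return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    t_cdf = np.cumsum(t_cnt) / tmpl.size
    # Each source quantile is mapped to the template value at the same quantile
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return mapped[s_idx].reshape(source.shape)

# Toy example: a uniform 0..3 image remapped onto a uniform 10..40 template
out = match_histogram(np.array([[0, 1], [2, 3]]),
                      np.array([[10, 20], [30, 40]]))
```

Applied per RGB channel against a chosen reference slice, this is one plausible way to realize the specification step before feeding the pretrained CNN.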


Subject(s)
Alzheimer Disease; Alzheimer Disease/diagnostic imaging; Humans; Machine Learning; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neuroimaging/methods
14.
Multimed Syst ; : 1-19, 2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35125671

ABSTRACT

The pandemic caused by the COVID-19 virus has affected the world widely and heavily. When examining CT, X-ray, and ultrasound images, radiologists must first determine whether there are signs of COVID-19 in the images; that is, COVID-19/Healthy detection is made. The second determination is the separation of pneumonia caused by the COVID-19 virus from pneumonia caused by a bacterium or a virus other than COVID-19. This distinction is key in determining the treatment and isolation procedure to be applied to the patient. In this study, which aims to diagnose COVID-19 early using X-ray images, automatic two-class classification was carried out under four different titles: COVID-19/Healthy, COVID-19 Pneumonia/Bacterial Pneumonia, COVID-19 Pneumonia/Viral Pneumonia, and COVID-19 Pneumonia/Other Pneumonia. For this study, 3405 COVID-19, 2780 Bacterial Pneumonia, 1493 Viral Pneumonia, and 1989 Healthy images obtained by combining eight different open-access data sets were used. Besides using the original X-ray images alone, classification results were also obtained using images derived with Local Binary Pattern (LBP) and Local Entropy (LE). The classification procedures were then repeated for various combinations of the original, LBP, and LE images. 2-D CNN (two-dimensional convolutional neural network) and 3-D CNN (three-dimensional convolutional neural network) architectures were used as classifiers within the scope of the study. MobileNetV2, ResNet101, and GoogLeNet architectures were used as 2-D CNNs. A 24-layer 3-D CNN architecture was also designed and used. Our study is the first to analyze the effect of diversifying the input data type on the classification results of 2-D/3-D CNN architectures.
The results obtained within the scope of the study indicate that diversifying X-ray images with tissue analysis methods in the diagnosis of COVID-19 and including them as CNN input provides significant improvements in the results. It is also understood that the 3-D CNN architecture can be an important alternative for achieving high classification results.
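Local Binary Pattern, one of the tissue-analysis encodings above, assigns each pixel an 8-bit code from threshold comparisons with its 3x3 neighbourhood; a minimal sketch of the basic operator (names and the toy patch are illustrative):

```python
import numpy as np

def lbp(img):
    """Basic 3x3 Local Binary Pattern map (borders left at 0)."""
    img = np.asarray(img, float)
    out = np.zeros(img.shape, dtype=np.uint8)
    # 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr, c + dc] >= img[r, c]:
                    code |= 1 << bit     # neighbour >= centre sets this bit
            out[r, c] = code
    return out

patch = np.array([[5, 4, 3],
                  [4, 4, 2],
                  [3, 2, 1]])
codes = lbp(patch)   # centre code: bits 0, 1, 7 set -> 1 + 2 + 128 = 131
```

The LBP image (or its histogram) captures local texture, which is what makes it a useful complementary CNN input alongside the raw X-ray.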

15.
J Neurosci Methods ; 366: 109421, 2022 Jan 15.
Article in English | MEDLINE | ID: mdl-34822945

ABSTRACT

BACKGROUND: Wide-field calcium imaging (WFCI) allows for monitoring of cortex-wide neural dynamics in mice. When applied to the study of sleep, WFCI data are manually scored into the sleep states of wakefulness, non-REM (NREM), and REM with the use of adjunct EEG and EMG recordings. However, this process is time-consuming, often suffers from low inter- and intra-rater reliability, and requires invasive adjunct recordings. Therefore, an automated sleep state classification method that operates on WFCI data alone is needed. NEW METHOD: A hybrid, two-step method is proposed. In the first step, spatial-temporal WFCI data are mapped to multiplex visibility graphs (MVGs). Subsequently, a two-dimensional convolutional neural network (2D CNN) is employed on the MVGs to classify them as wakefulness, NREM, or REM. RESULTS: Sleep states were classified with an accuracy of 84% and a Cohen's κ of 0.67. The method was also effectively applied to a binary classification of wakefulness/sleep (accuracy = 0.82, κ = 0.62) and a four-class wakefulness/sleep/anesthesia/movement classification (accuracy = 0.74, κ = 0.66). Gradient-weighted class activation maps revealed that the CNN focused on short- and long-term temporal connections of the MVGs in a sleep state-specific manner. Sleep state classification performance using individual brain regions was highest for the posterior area of the cortex and when cortex-wide activity was considered. COMPARISON WITH EXISTING METHOD: On a 3-hour WFCI recording, the MVG-CNN achieved a κ of 0.65, comparable to the κ of 0.60 corresponding to human EEG/EMG-based scoring. CONCLUSIONS: The hybrid MVG-CNN method accurately classifies sleep states from WFCI data and will enable future sleep-focused studies with WFCI.
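A (natural) visibility graph, the building block of the MVGs above, links two samples of a time series when the straight line between them clears every intermediate sample. A minimal sketch for a single series (a multiplex graph would stack one such graph per channel; names and the toy series are illustrative):

```python
import numpy as np

def visibility_graph(y):
    """Adjacency matrix of the natural visibility graph of a time series:
    samples a and b are linked if no intermediate sample rises above the
    straight line connecting (a, y[a]) and (b, y[b])."""
    y = np.asarray(y, float)
    n = len(y)
    adj = np.zeros((n, n), dtype=np.uint8)
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                adj[a, b] = adj[b, a] = 1
    return adj

adj = visibility_graph([3.0, 1.0, 2.0, 0.5, 4.0])
```

Consecutive samples are always mutually visible, so the graph is connected; the adjacency matrix itself is the 2D image a CNN can classify.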


Subject(s)
Deep Learning; Sleep Stages; Animals; Calcium; Electroencephalography; Mice; Reproducibility of Results; Sleep/physiology; Sleep Stages/physiology; Wakefulness
16.
Sensors (Basel) ; 21(24)2021 Dec 18.
Article in English | MEDLINE | ID: mdl-34960562

ABSTRACT

Recently, several computer applications have provided operating modes based on pointing fingers, waving hands, and body movement instead of mouse, keyboard, audio, or touch input, including sign language recognition, robot control, games, appliance control, and smart surveillance. With the increase in hand-pose-based applications, new challenges in this domain have also emerged. Support vector machines and neural networks have been extensively used in this domain with conventional RGB data, which are not very effective for adequate performance. Recently, depth data have become popular due to the better understanding of posture attributes they provide. In this study, a multiple parallel stream 2D CNN (two-dimensional convolutional neural network) model is proposed to recognize hand postures. The proposed model comprises multiple steps and layers to detect hand poses from image maps obtained from depth data. The hyperparameters of the proposed model are tuned through experimental analysis. Three publicly available benchmark datasets, Kaggle, First Person, and Dexter, are used independently to train and test the proposed approach. The accuracy of the proposed method is 99.99%, 99.48%, and 98% on the Kaggle hand posture dataset, First Person hand posture dataset, and Dexter dataset, respectively. Further, the results obtained for the F1 and AUC scores are also near-optimal. Comparative analysis shows that the proposed model outperforms previous state-of-the-art methods.


Subject(s)
Hand; Neural Networks, Computer; Movement; Posture
17.
Clocks Sleep ; 3(4): 581-597, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34842647

ABSTRACT

Sleep-stage classification is essential for sleep research. Various automatic judgment programs, including deep learning algorithms using artificial intelligence (AI), have been developed, but have limitations with regard to data format compatibility, human interpretability, cost, and technical requirements. We developed a novel program called GI-SleepNet, generative adversarial network (GAN)-assisted image-based sleep staging for mice that is accurate, versatile, compact, and easy to use. In this program, electroencephalogram and electromyography data are first visualized as images, and then classified into three stages (wake, NREM, and REM) by a supervised image learning algorithm. To increase its accuracy, we adopted GAN and artificially generated fake REM sleep data to equalize the number of stages. This resulted in improved accuracy, and as little as one mouse's data yielded significant accuracy. Due to its image-based nature, the program is easy to apply to data of different formats, different species of animals, and even outside sleep research. Image data can be easily understood; thus, confirmation by experts is easily obtained, even when there are prediction anomalies. As deep learning in image processing is one of the leading fields in AI, numerous algorithms are also available.

18.
Comput Biol Med ; 133: 104323, 2021 06.
Article in English | MEDLINE | ID: mdl-33934067

ABSTRACT

Mutations in proto-oncogenes (ONGO) and the loss of regulatory function in tumor suppressor genes (TSG) are common underlying mechanisms of uncontrolled tumor growth. While cancer is a heterogeneous complex of distinct diseases, computationally assessing whether a gene's function relates to ONGO or TSG can help develop drugs that target the disease. This paper proposes a classification method that starts with a preprocessing stage to extract feature map sets from the input 3D protein structural information. The next stage is a deep convolutional neural network (DCNN) that outputs the probability of functional classification of genes. We explored and tested two approaches: in Approach 1, all filtered and cleaned 3D protein structures (PDB) are pooled together, whereas in Approach 2, the primary structures and their corresponding PDBs are separated according to the genes' primary structural information. Following the DCNN stage, a dynamic-programming-based method determines the final prediction of the primary structures' functionality. We validated the proposed method using the COSMIC online database. For the ONGO vs. TSG classification problem, the AUROC of the DCNN stage is 0.978 for Approach 1 and 0.765 for Approach 2. The AUROCs of the final primary-structure functionality classification are 0.989 and 0.879 for Approach 1 and Approach 2, respectively. For comparison, the current state-of-the-art reports an AUROC of 0.924. Our results warrant further study applying the deep learning models to human (GRCh38) genes to predict their probabilities of functioning as cancer drivers.
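One common way to turn 3D protein structural information into a CNN-ready 2D feature map is a pairwise distance or contact map; the abstract does not specify the paper's exact feature maps, so the sketch below is an illustrative assumption, with `distance_map`, `contact_map`, and the 8 Å cutoff all chosen for the example.

```python
import numpy as np

def distance_map(coords):
    """Pairwise Euclidean distance map from an (N, 3) array of atom
    coordinates: one image-like 2D representation of a 3D structure."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def contact_map(coords, threshold=8.0):
    """Binarize the distance map at a cutoff (in angstroms); contact
    maps are a standard CNN input for structure-based prediction."""
    return (distance_map(coords) < threshold).astype(np.uint8)

# Three toy residue coordinates: the first two are close, the third far.
coords = np.array([[0.0, 0.0, 0.0],
                   [3.0, 0.0, 0.0],
                   [20.0, 0.0, 0.0]])
cmap = contact_map(coords)
```

A preprocessing stage like the paper's would compute such maps per structure (Approach 1 pooling them all, Approach 2 grouping them by gene) before feeding the DCNN.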


Subject(s)
Deep Learning , Databases, Factual , Genes, Tumor Suppressor , Humans , Neural Networks, Computer , Oncogenes/genetics
19.
Neural Netw ; 141: 52-60, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33866302

ABSTRACT

A challenging issue in the field of the automatic recognition of emotion from speech is the efficient modelling of long temporal contexts. Moreover, when incorporating long-term temporal dependencies between features, recurrent neural network (RNN) architectures are typically employed by default. In this work, we aim to present an efficient deep neural network architecture incorporating Connectionist Temporal Classification (CTC) loss for discrete speech emotion recognition (SER). Moreover, we also demonstrate the existence of further opportunities to improve SER performance by exploiting the properties of convolutional neural networks (CNNs) when modelling contextual information. Our proposed model uses parallel convolutional layers (PCN) integrated with Squeeze-and-Excitation Network (SEnet), a system herein denoted as PCNSE, to extract relationships from 3D spectrograms across timesteps and frequencies; here, we use the log-Mel spectrogram with deltas and delta-deltas as input. In addition, a self-attention Residual Dilated Network (SADRN) with CTC is employed as a classification block for SER. To the best of the authors' knowledge, this is the first time that such a hybrid architecture has been employed for discrete SER. We further demonstrate the effectiveness of our proposed approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and FAU-Aibo Emotion corpus (FAU-AEC). Our experimental results reveal that the proposed method is well-suited to the task of discrete SER, achieving a weighted accuracy (WA) of 73.1% and an unweighted accuracy (UA) of 66.3% on IEMOCAP, as well as a UA of 41.1% on the FAU-AEC dataset.
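The "log-Mel spectrogram with deltas and delta-deltas" input is a standard construction that can be sketched directly: stack the static features with their first- and second-order time derivatives into a 3-channel array. The delta here uses the common two-frame regression formula with window N = 1; the function names and the random stand-in spectrogram are illustrative, not the paper's code.

```python
import numpy as np

def deltas(feat):
    """First-order deltas along the time axis via the standard
    regression formula with N = 1: d_t = (c_{t+1} - c_{t-1}) / 2,
    with edge padding at the boundaries."""
    padded = np.pad(feat, ((1, 1), (0, 0)), mode='edge')
    return (padded[2:] - padded[:-2]) / 2.0

def stack_channels(log_mel):
    """Stack static, delta, and delta-delta features into a 3-channel
    (time, mel, 3) 'image' of the kind fed to a 2D CNN."""
    d1 = deltas(log_mel)        # delta
    d2 = deltas(d1)             # delta-delta
    return np.stack([log_mel, d1, d2], axis=-1)

# Stand-in for a real log-Mel spectrogram: 100 frames x 40 Mel bands.
log_mel = np.log(np.random.default_rng(1).random((100, 40)) + 1e-6)
x = stack_channels(log_mel)     # shape (100, 40, 3)
```

In practice the static channel would come from an actual STFT + Mel filterbank (e.g. via an audio library), and libraries typically offer wider regression windows than N = 1, but the channel-stacking structure is the same.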


Subject(s)
Emotions , Neural Networks, Computer , Speech , Child , Female , Humans , Male
20.
Sensors (Basel) ; 21(3)2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33535397

ABSTRACT

Electrocardiogram (ECG) signals play a vital role in diagnosing and monitoring patients suffering from various cardiovascular diseases (CVDs). This research aims to develop a robust algorithm that can accurately classify the ECG signal even in the presence of environmental noise. A one-dimensional convolutional neural network (CNN) with two convolutional layers, two down-sampling layers, and a fully connected layer is proposed in this work. The same 1D data were also transformed into two-dimensional (2D) images to improve classification accuracy; a 2D CNN model consisting of input and output layers, three 2D convolutional layers, three down-sampling layers, and a fully connected layer was then applied. Classification accuracies of 97.38% and 99.02% are achieved with the proposed 1D and 2D models, respectively, when tested on the publicly available Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Both proposed CNN models outperform the corresponding state-of-the-art classification algorithms on the same data, which validates their effectiveness.
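The 1D-to-2D transformation step can be sketched simply: resample a beat-centered 1D segment to a square number of points and reshape it into an image. The abstract does not say which transformation the authors used, so this reshape-based version, along with the `segment_to_image` name and the 16 x 16 size, is an illustrative assumption.

```python
import numpy as np

def segment_to_image(segment, side=16):
    """Turn a 1D ECG segment into a side x side 2D image: linearly
    resample to side*side points, reshape, and scale to [0, 1]."""
    target = side * side
    xs = np.linspace(0, len(segment) - 1, target)
    resampled = np.interp(xs, np.arange(len(segment)), segment)
    img = resampled.reshape(side, side)
    # min-max scale to pixel-like intensities
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

# Toy stand-in for a beat-centered ECG segment of 300 samples.
beat = np.sin(np.linspace(0, 4 * np.pi, 300))
img = segment_to_image(beat)
```

Such images would then feed the 2D CNN, while the raw segments feed the 1D CNN; other papers instead render plots or spectrograms of each beat, which is why this particular mapping is only one plausible choice.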


Subject(s)
Electrocardiography , Neural Networks, Computer , Algorithms , Arrhythmias, Cardiac/diagnosis , Heart Rate , Humans