Results 1 - 20 of 6,299
1.
Sensors (Basel) ; 21(11)2021 Jun 02.
Article in English | MEDLINE | ID: mdl-34199559

ABSTRACT

Traditional pattern recognition approaches have gained considerable popularity. However, they depend largely on manual feature extraction, which makes the resulting models hard to generalize. Human activity recognition (HAR) classifies sequences of accelerometer data recorded by smartphones into well-known movements. Given the high success and wide adoption of deep learning approaches for recognizing human activities, these techniques are widely used in wearable devices and smartphones. In this paper, convolutional layers are combined with long short-term memory (LSTM) in a deep neural network for HAR. The proposed model extracts features automatically and categorizes them with a small number of model parameters. LSTM is a variant of the recurrent neural network (RNN) well known for processing temporal sequences. In the proposed architecture, the UCI-HAR dataset, recorded with a Samsung Galaxy S2, is used for various human activities. The CNN and LSTM models are connected in series: the CNN is applied to each input, and the output for each input image is passed to the LSTM classifier as one time step. The number of filter maps, which cover the various portions of the image, is the most important hyperparameter. The observations are transformed using Gaussian standardization. The proposed CNN-LSTM is an efficient and lightweight model that shows high robustness and better activity detection capability than traditional algorithms, achieving an accuracy of 97.89%.
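The preprocessing described above can be made concrete. A minimal sketch of Gaussian standardization (z-scoring) and the fixed-width framing that turns an accelerometer stream into per-window CNN inputs, each becoming one LSTM time step; the function names and window sizes are ours, not the paper's:

```python
def gaussian_standardize(samples):
    """Z-score each value: subtract the mean and divide by the std dev."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = var ** 0.5
    if std == 0.0:
        std = 1.0
    return [(x - mean) / std for x in samples]

def sliding_windows(samples, width, step):
    """Cut a 1-D accelerometer stream into fixed-width windows;
    each window is one CNN input / one LSTM time step."""
    return [samples[i:i + width]
            for i in range(0, len(samples) - width + 1, step)]

signal = gaussian_standardize([float(i % 7) for i in range(128)])
windows = sliding_windows(signal, width=32, step=16)
```

With 128 samples, a width of 32, and a step of 16, this yields 7 half-overlapping windows.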


Subjects
Deep Learning, Algorithms, Human Activities, Humans, Neural Networks, Computer, Smartphone
2.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200400

ABSTRACT

This study discusses convolutional neural networks (CNNs) for vibration signal analysis, including applications in machining surface roughness estimation, bearing fault diagnosis, and tool wear detection. One-dimensional CNNs (1DCNN) and two-dimensional CNNs (2DCNN) are applied to regression and classification tasks using different types of inputs, e.g., raw signals and time-frequency spectral images obtained by the short-time Fourier transform. For regression, in the estimation of machining surface roughness, the 1DCNN is used, and the corresponding CNN structure (hyperparameters) is optimized using uniform experimental design (UED), a neural network, multiple regression, and particle swarm optimization, demonstrating the effectiveness of the proposed approach in obtaining a structure with better performance. For classification, bearing faults and tool wear are classified through vibration signal analysis with a CNN. Finally, experimental results are presented to demonstrate the effectiveness and performance of our approach.
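The time-frequency images mentioned above can be sketched with a naive short-time Fourier transform; a real pipeline would use an optimized FFT library, so this is only an illustration of how a raw signal becomes a 2DCNN input:

```python
import cmath
import math

def stft_magnitude(signal, frame, hop):
    """Naive short-time Fourier transform: split the signal into
    overlapping frames and take the DFT magnitude of each frame.
    The resulting 2-D array is the time-frequency image a 2DCNN
    would consume; the raw frames themselves could feed a 1DCNN."""
    spectra = []
    for start in range(0, len(signal) - frame + 1, hop):
        chunk = signal[start:start + frame]
        row = []
        for k in range(frame // 2 + 1):  # non-negative frequencies only
            coeff = sum(chunk[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                        for n in range(frame))
            row.append(abs(coeff))
        spectra.append(row)
    return spectra

# A pure tone at 4 cycles per frame should peak in DFT bin 4 of every frame.
tone = [math.sin(2 * math.pi * 4 * n / 16) for n in range(64)]
spec = stft_magnitude(tone, frame=16, hop=8)
```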


Subjects
Deep Learning, Neural Networks, Computer, Vibration
3.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200461

ABSTRACT

Recently, Doppler radar-based foot gesture recognition has attracted attention as a hands-free interface. Recognizing various foot gestures with Doppler radar remains very challenging, and no studies have yet dealt deeply with recognition of various foot gestures based on Doppler radar and a deep learning model. In this paper, we propose a method of foot gesture recognition using a new high-compression radar signature image and deep learning. With a deep learning AlexNet model, a new high-compression radar signature is created by extracting dominant features via Singular Value Decomposition (SVD) processing, and four different foot gestures, kicking, swinging, sliding, and tapping, are recognized. Instead of using an original radar signature, the proposed method improves the memory efficiency required for deep learning training by using a high-compression radar signature. Original and reconstructed radar images with high compression values of 90%, 95%, and 99% were applied to the deep learning AlexNet model. In the experiments, movements of all four foot gestures and of a rolling baseball were recognized with an accuracy of approximately 98.64%. Given radar's inherent robustness to the surrounding environment, this foot gesture recognition sensor using Doppler radar and deep learning should prove widely useful in future automotive and smart home applications.
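Truncated SVD is the standard way to build such a low-rank, high-compression signature. A sketch of the idea on a synthetic low-rank "radar image" (the paper's exact compression procedure may differ):

```python
import numpy as np

def compress_svd(image, keep):
    """Keep only the `keep` largest singular values: the SVD analogue
    of a high-compression radar signature. Storing the truncated
    factors needs keep*(rows+cols+1) numbers instead of rows*cols."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return (u[:, :keep] * s[:keep]) @ vt[:keep, :]

rng = np.random.default_rng(0)
# A rank-2 synthetic "radar image": the sum of two outer products.
image = np.outer(rng.standard_normal(32), rng.standard_normal(24)) \
      + np.outer(rng.standard_normal(32), rng.standard_normal(24))
rank1 = compress_svd(image, keep=1)
rank2 = compress_svd(image, keep=2)
```

Because the synthetic image has rank 2, keeping two singular values reconstructs it essentially exactly, while keeping one loses information.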


Subjects
Data Compression, Deep Learning, Algorithms, Gestures, Pattern Recognition, Automated, Radar
4.
Sensors (Basel) ; 21(12)2021 Jun 14.
Article in English | MEDLINE | ID: mdl-34198497

ABSTRACT

Breast-conserving surgery requires supportive radiotherapy to prevent cancer recurrence. However, localizing the tumor bed to be irradiated is not trivial. Automatic image registration could significantly aid tumor bed localization and lower the radiation dose delivered to the surrounding healthy tissues. This study proposes a novel image registration method dedicated to breast tumor bed localization that addresses the problem of missing data due to tumor resection and may be applied to real-time radiotherapy planning. We propose a deep learning-based nonrigid image registration method based on a modified U-Net architecture. The algorithm works simultaneously on several image resolutions to handle large deformations. Moreover, we propose a dedicated volume penalty that introduces medical knowledge about tumor resection into the registration process. The proposed method may be useful for improving real-time radiation therapy planning after tumor resection and thus reducing irradiation of the surrounding healthy tissues. The data used in this study consist of 30 computed tomography scans acquired in patients with diagnosed breast cancer, before and after tumor surgery. The method is evaluated using the target registration error between manually annotated landmarks, the ratio of tumor volume, and subjective visual assessment. We compare the proposed method to several other approaches and show that both the multilevel approach and the volume regularization improve the registration results. The mean target registration error is below 6.5 mm, and the relative volume ratio is close to zero. The registration time below 1 s enables real-time processing. These results show improvements over classical, iterative methods and other learning-based approaches that do not introduce knowledge about tumor resection into the registration process.
In future research, we plan to propose a method dedicated to automatic localization of missing regions that may be used to automatically segment tumors in the source image and scars in the target image.
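The dedicated volume penalty can be illustrated with a toy objective: an image-similarity term plus a penalty that drives the warped tumor volume toward zero, encoding the prior that the tumor was resected. The weighting and function names below are our assumptions, not the paper's formulation:

```python
def registration_loss(intensity_diffs, tumor_voxels_after_warp,
                      total_voxels, alpha=1.0):
    """Mean squared intensity difference (similarity term) plus a
    volume penalty: the warped tumor region should vanish, since the
    tumor is absent from the post-operative scan."""
    similarity = sum(d * d for d in intensity_diffs) / len(intensity_diffs)
    volume_ratio = tumor_voxels_after_warp / total_voxels
    return similarity + alpha * volume_ratio ** 2

# Shrinking the warped tumor region lowers the loss even when the
# similarity term is unchanged.
loss_big = registration_loss([0.1] * 4, tumor_voxels_after_warp=500,
                             total_voxels=1000)
loss_small = registration_loss([0.1] * 4, tumor_voxels_after_warp=10,
                               total_voxels=1000)
```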


Subjects
Breast Neoplasms, Deep Learning, Algorithms, Female, Humans, Image Processing, Computer-Assisted, Supervised Machine Learning, Tomography, X-Ray Computed
5.
J Acoust Soc Am ; 149(5): 3626, 2021 05.
Article in English | MEDLINE | ID: mdl-34241100

ABSTRACT

In the current pandemic, lung ultrasound (LUS) has played a useful role in evaluating patients affected by COVID-19. However, LUS remains limited to the visual inspection of ultrasound data, which negatively affects the reliability and reproducibility of the findings. Moreover, many different imaging protocols have been proposed, most of which lack proper clinical validation. To address these problems, we were the first to propose a standardized imaging protocol and scoring system. Next, we developed the first deep learning (DL) algorithms capable of evaluating LUS videos, providing, for each video frame, the score as well as semantic segmentation. Moreover, we have analyzed the impact of different imaging protocols and demonstrated the prognostic value of our approach. In this work, we report on the level of agreement between DL and LUS experts when evaluating LUS data. The results show 85.96% agreement between DL and LUS experts in stratifying patients at high risk of clinical worsening from patients at low risk. These encouraging results demonstrate the potential of DL models for the automatic scoring of LUS data when applied to high-quality data acquired according to a standardized imaging protocol.


Subjects
COVID-19, Deep Learning, Humans, Lung/diagnostic imaging, Reproducibility of Results, SARS-CoV-2, Ultrasonography
6.
N Engl J Med ; 385(3): 217-227, 2021 07 15.
Article in English | MEDLINE | ID: mdl-34260835

ABSTRACT

BACKGROUND: Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication. METHODS: We implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech in a person with anarthria (the loss of the ability to articulate speech) and spastic quadriparesis caused by a brain-stem stroke. Over the course of 48 sessions, we recorded 22 hours of cortical activity while the participant attempted to say individual words from a vocabulary set of 50 words. We used deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity. We applied these computational models, as well as a natural-language model that yielded next-word probabilities given the preceding words in a sequence, to decode full sentences as the participant attempted to say them. RESULTS: We decoded sentences from the participant's cortical activity in real time at a median rate of 15.2 words per minute, with a median word error rate of 25.6%. In post hoc analyses, we detected 98% of the attempts by the participant to produce individual words, and we classified words with 47.1% accuracy using cortical signals that were stable throughout the 81-week study period. CONCLUSIONS: In a person with anarthria and spastic quadriparesis caused by a brain-stem stroke, words and sentences were decoded directly from cortical activity during attempted speech with the use of deep-learning models and a natural-language model. (Funded by Facebook and others; ClinicalTrials.gov number, NCT03698149.).


Subjects
Brain Stem Infarctions/complications, Brain-Computer Interfaces, Deep Learning, Dysarthria/rehabilitation, Neural Prostheses, Speech, Adult, Dysarthria/etiology, Electrocorticography, Electrodes, Implanted, Humans, Male, Natural Language Processing, Quadriplegia/etiology, Sensorimotor Cortex/physiology
7.
Sensors (Basel) ; 21(13)2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34283083

ABSTRACT

Currently, greenhouses are widely used for plant growth, and environmental parameters can be controlled in the modern greenhouse to maximize crop yield. To optimally control a greenhouse's environmental parameters, one indispensable requirement is to accurately predict crop yields under given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, allowing cultivators and farmers to use the yield predictions to make informed management and financial decisions. Accurate greenhouse crop yield prediction is therefore important given the benefits it can bring. In this work, we have developed a new greenhouse crop yield prediction technique that combines two state-of-the-art networks for temporal sequence processing: the temporal convolutional network (TCN) and the recurrent neural network (RNN). The proposed algorithm is comprehensively evaluated on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, the proposed approach achieves more accurate yield prediction than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study shows that historical yield information is the most important factor for accurately predicting future crop yields.
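The RMSE criterion used for the evaluation is straightforward to state in code; a minimal sketch with made-up yield values:

```python
def rmse(predicted, actual):
    """Root mean square error between predicted and actual crop
    yields, the comparison metric used in the abstract."""
    assert len(predicted) == len(actual)
    mse = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    return mse ** 0.5

# Hypothetical weekly yields (arbitrary units), not data from the paper.
error = rmse([10.0, 12.0, 11.0], [9.0, 12.0, 13.0])
```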


Subjects
Deep Learning, Agriculture, Algorithms, Machine Learning, Neural Networks, Computer
8.
Sensors (Basel) ; 21(13)2021 Jul 03.
Article in English | MEDLINE | ID: mdl-34283105

ABSTRACT

Ultrasound breast imaging is a promising alternative to conventional mammography because it does not expose women to harmful ionising radiation and it can successfully image dense breast tissue. However, conventional ultrasound imaging only provides morphological information with limited diagnostic value. Ultrasound computed tomography (USCT) uses energy in both transmission and reflection when imaging the breast to provide more diagnostically relevant quantitative tissue properties, but it is often based on time-of-flight tomography or similar ray approximations of the wave equation, resulting in reconstructed images with low resolution. Full-waveform inversion (FWI) is based on a more accurate approximation of wave-propagation phenomena and can consequently produce very high resolution images using frequencies below 1 megahertz. These low frequencies, however, are not available in most USCT acquisition systems, as they use transducers with central frequencies well above those required for FWI. To circumvent this problem, we designed, trained, and implemented a two-dimensional convolutional neural network to artificially generate the missing low frequencies in USCT data. Our results show that FWI reconstructions from experimental data successfully converged after the application of the proposed method, showing good agreement with X-ray CT and reflection ultrasound-tomography images.


Subjects
Breast Neoplasms, Deep Learning, Breast Density, Female, Humans, Image Processing, Computer-Assisted, Mammography, Phantoms, Imaging, Ultrasonography, Mammary
9.
Sensors (Basel) ; 21(13)2021 Jul 02.
Article in English | MEDLINE | ID: mdl-34283110

ABSTRACT

With the increase in digitization efforts of herbarium collections worldwide, dataset repositories such as iDigBio and GBIF now hold hundreds of thousands of herbarium sheet images ready for exploration. Although this serves as a new source of plant leaf data, herbarium datasets present an inherent challenge: the sheets contain non-plant objects such as color charts, barcodes, and labels. Even within the plant material itself, overlapping, damaged, and intact individual leaves appear together with other plant organs such as stems and fruits, which increases the complexity of leaf trait extraction and analysis. Focusing on segmentation and trait extraction of individual intact herbarium leaves, this study proposes a pipeline consisting of a deep learning semantic segmentation model (DeepLabv3+), connected component analysis, and a single-leaf classifier trained on binary images to automate the extraction of intact individual leaves with phenotypic traits. The proposed method achieved a higher F1-score on both an in-house dataset (96%) and a publicly available herbarium dataset (93%) than object detection-based approaches, including Faster R-CNN and YOLOv5. Furthermore, the phenotypic measurements extracted from the segmented individual leaves were closer to the ground-truth measurements, which suggests the importance of the segmentation step in handling background noise. Compared to the object detection-based approaches, the proposed method points toward an autonomous tool for extracting individual leaves together with their trait data directly from herbarium specimen images.
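The connected component analysis step between semantic segmentation and single-leaf classification can be sketched with a simple flood-fill labelling of the binary mask, a pure-Python stand-in for the library routines a real pipeline would use:

```python
def connected_components(mask):
    """4-connected component labelling of a binary mask, the kind of
    post-processing used to isolate candidate individual leaves after
    semantic segmentation. Returns one set of (row, col) per component."""
    rows, cols = len(mask), len(mask[0])
    seen = set()
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], set()
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                components.append(comp)
    return components

# Toy mask with two separate foreground blobs ("leaves").
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
leaves = connected_components(mask)
```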


Subjects
Deep Learning, Image Processing, Computer-Assisted, Plant Leaves, Plants, Semantics
10.
Sensors (Basel) ; 21(13)2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34283149

ABSTRACT

Implicit authentication mechanisms are expected to prevent security and privacy threats on mobile devices by using behavior modeling. However, researchers have recently demonstrated that the performance of behavioral biometrics is insufficiently accurate. Furthermore, the characteristics of mobile devices, such as limited storage and energy, constrain their capacity for data collection and processing. In this paper, we propose an implicit authentication architecture based on edge computing, coined Edge computing-based mobile Device Implicit Authentication (EDIA), which exploits edge-based gait biometric identification using a deep learning model to authenticate users. Gait data captured by a device's accelerometer and gyroscope sensors are used as the input of our optimized model, which consists of a CNN and an LSTM in tandem. In particular, we extract features of the gait signal in a two-dimensional domain by converting the original signal into an image that is then fed into our network. In addition, to reduce the computational overhead on mobile devices, the model for implicit authentication is generated on the cloud server, and the user authentication process takes place on the edge devices. We evaluate the performance of EDIA under different scenarios; the results show that (i) EDIA achieves a true positive rate of 97.77% and a false positive rate of 2%, and (ii) EDIA still reaches high accuracy with a limited dataset size.


Subjects
Biometric Identification, Deep Learning, Computers, Handheld, Gait, Privacy
11.
Sensors (Basel) ; 21(13)2021 Jul 04.
Article in English | MEDLINE | ID: mdl-34283157

ABSTRACT

Fluorescent probes can be used to detect various types of asbestos (serpentine and amphibole groups); however, fiber counting using our previously developed software was not accurate for samples with low fiber concentrations. Machine learning-based techniques for image analysis, particularly deep learning with Convolutional Neural Networks (CNN), have been widely applied in many areas. The objectives of this study were to (1) create a laboratory database of fluorescence microscopy (FM) images covering a wide range of asbestos concentrations (0-50 fibers/liter) and (2) determine the applicability of the state-of-the-art object detection CNN model YOLOv4 to accurately detect asbestos. We captured fluorescence microscopy images containing asbestos and labeled the individual fibers in the images. We trained the YOLOv4 model with the labeled images using one GTX 1660 Ti Graphics Processing Unit (GPU). Our results demonstrated the exceptional capacity of the YOLOv4 model to learn fluorescent asbestos morphologies. The mean average precision at a threshold of 0.5 (mAP@0.5) was 96.1% ± 0.4%, using the National Institute for Occupational Safety and Health (NIOSH) fiber counting Method 7400 as a reference. Compared to our previous counting software (Intec/HU), YOLOv4 achieved higher accuracy (0.997 vs. 0.979) and, in particular, much higher precision (0.898 vs. 0.418), recall (0.898 vs. 0.780), and F-1 score (0.898 vs. 0.544). In addition, YOLOv4 performed much better on low fiber concentration samples (<15 fibers/liter) than Intec/HU. Therefore, the FM method coupled with YOLOv4 is remarkably effective at detecting asbestos fibers and differentiating them from other non-asbestos particles.
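The precision, recall, and F-1 scores quoted above follow directly from raw detection counts; a minimal sketch (the counts below are hypothetical, chosen only to exercise the formulas):

```python
def precision_recall_f1(tp, fp, fn):
    """Detection metrics from counts of true positives, false
    positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts: 898 fibers detected correctly, 102 false
# alarms, 102 fibers missed.
p, r, f1 = precision_recall_f1(tp=898, fp=102, fn=102)
```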


Subjects
Asbestos, Deep Learning, Asbestos/toxicity, Asbestos, Serpentine/analysis, Image Processing, Computer-Assisted, Microscopy, Fluorescence, United States
12.
Nat Commun ; 12(1): 4221, 2021 07 09.
Article in English | MEDLINE | ID: mdl-34244504

ABSTRACT

Deep learning algorithms trained on instances that violate the assumption of being independent and identically distributed (i.i.d.) are known to experience destructive interference, a phenomenon characterized by a degradation in performance. Such violations, however, are ubiquitous in clinical settings, where data are streamed temporally from different clinical sites and from a multitude of physiological sensors. To mitigate this interference, we propose a continual learning strategy, entitled CLOPS, that employs a replay buffer. To guide the storage of instances in the buffer, we propose end-to-end trainable parameters, termed task-instance parameters, that quantify the difficulty with which data points are classified by a deep learning system. We validate the interpretation of these parameters via clinical domain knowledge. To replay instances from the buffer, we exploit uncertainty-based acquisition functions. In three of the four continual learning scenarios, reflecting transitions across diseases, time, data modalities, and healthcare institutions, we show that CLOPS outperforms the state-of-the-art methods GEM and MIR. We also conduct extensive ablation studies to demonstrate the necessity of the various components of our proposed strategy. Our framework has the potential to pave the way for diagnostic systems that remain robust over time.
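A heavily simplified sketch of the replay-buffer idea: keep the hardest instances (a scalar difficulty score standing in for the learned task-instance parameters) and replay the most uncertain ones first. This is our toy reading of the strategy, not the CLOPS implementation:

```python
import heapq

class ReplayBuffer:
    """Keep the `capacity` hardest instances seen so far and replay
    those a supplied uncertainty function ranks highest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # min-heap of (difficulty, counter, instance)
        self._counter = 0

    def store(self, instance, difficulty):
        item = (difficulty, self._counter, instance)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif difficulty > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict the easiest

    def replay(self, k, uncertainty):
        """Return the k buffered instances with the highest uncertainty."""
        ranked = sorted(self._heap, key=lambda it: uncertainty(it[2]),
                        reverse=True)
        return [inst for _, _, inst in ranked[:k]]

buf = ReplayBuffer(capacity=3)
for idx, diff in enumerate([0.1, 0.9, 0.5, 0.8, 0.2]):
    buf.store(f"x{idx}", diff)
# Replay, ranking by a toy uncertainty (here: the instance index).
picked = buf.replay(2, uncertainty=lambda inst: int(inst[1:]))
```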


Subjects
Arrhythmias, Cardiac/diagnosis, Clinical Decision-Making/methods, Decision Support Systems, Clinical, Deep Learning, Datasets as Topic, Electrocardiography, Humans, Models, Cardiovascular, ROC Curve, Seasons, Time Factors
13.
Int J Mol Sci ; 22(11)2021 Jun 02.
Article in English | MEDLINE | ID: mdl-34199677

ABSTRACT

New advances in deep learning methods have influenced many aspects of scientific research, including the study of the protein system. The prediction of proteins' 3D structural components now depends heavily on machine learning techniques that interpret how protein sequences and their homology govern inter-residue contacts and structural organization. In particular, methods employing deep neural networks had a significant impact on the recent CASP13 and CASP14 competitions. Here, we explore recent applications of deep learning methods in the protein structure prediction area. We also look at the potential for deep learning methods to identify unknown protein structures and functions and to help guide drug-target interactions. Although significant problems remain to be addressed, we expect these techniques to play crucial roles in protein structural bioinformatics as well as in drug discovery in the near future.


Subjects
Deep Learning, Machine Learning, Protein Conformation, Software, Amino Acid Sequence, Computational Biology, Databases, Protein, Evolution, Molecular, Humans, Neural Networks, Computer, Sequence Alignment/methods
14.
Sensors (Basel) ; 21(13)2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201774

ABSTRACT

Solar cells may acquire defects during the manufacturing process in photovoltaic (PV) industries. To precisely evaluate the effectiveness of solar PV modules, manufacturing defects must be identified. Conventional defect inspection in industry mainly depends on manual inspection by highly skilled inspectors, which may still give inconsistent, subjective results. To automate the visual defect inspection process, an automatic cell segmentation technique and a convolutional neural network (CNN)-based defect detection system with pseudo-colorization of defects are designed in this paper. High-resolution electroluminescence (EL) images of single-crystalline silicon (sc-Si) solar PV modules are used in our study for defect detection and quality inspection. First, an automatic cell segmentation methodology is developed to extract cells from an EL image. Second, defects are detected by a CNN-based defect detector and visualized with pseudo-colors. We used contour tracing to accurately localize the panel region and a probabilistic Hough transform to identify gridlines and busbars on the extracted panel region for cell segmentation. A cell-based defect identification system was developed using state-of-the-art deep CNNs. The detected defect regions are rendered in pseudo-colors, assigned via K-means clustering, to enhance defect visualization. Our automatic cell segmentation methodology segments cells from an EL image in about 2.71 s. The average segmentation errors along the x-direction and y-direction are only 1.6 and 1.4 pixels, respectively. The defect detection approach on segmented cells achieves 99.8% accuracy.


Subjects
Deep Learning, Image Processing, Computer-Assisted, Neural Networks, Computer, Silicon
15.
Sensors (Basel) ; 21(13)2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34202090

ABSTRACT

Wi-Fi-based indoor positioning systems have a simple layout and low cost, and they have gradually become popular in both academia and industry. However, due to the poor stability of Wi-Fi signals, it is difficult to accurately determine a position from a received signal strength indicator (RSSI) using a traditional dataset and a deep learning classifier. To overcome this difficulty, we present a clustering-based noise elimination scheme (CNES) for RSSI-based datasets. The scheme performs region-based clustering of RSSIs through density-based spatial clustering of applications with noise (DBSCAN). In this scheme, the RSSI-based dataset is preprocessed and noise samples are removed by CNES. The experiment was carried out in a dynamic environment, and we evaluated the lab simulation results of CNES using deep learning classifiers. The results showed that applying CNES to the test database to eliminate noise increases the success probability of fingerprint localization. The lab simulation results show that after using CNES, the average positioning accuracy at margin-zero (zero-meter error), margin-one (two-meter error), and margin-two (four-meter error) increased by 17.78%, 7.24%, and 4.75%, respectively. We validated the simulation results with a real-time testing experiment, in which CNES improved the average positioning accuracy by 22.43%, 9.15%, and 5.21% for margin-zero, margin-one, and margin-two errors, respectively.
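The margin-zero/one/two accuracies reported above are simply the fractions of position estimates whose error does not exceed each margin; a minimal sketch with made-up errors:

```python
def margin_accuracy(errors_m, margins=(0.0, 2.0, 4.0)):
    """Fraction of position estimates whose error (in meters) is
    within each margin: margin-zero, margin-one (2 m), and
    margin-two (4 m), mirroring the abstract's reporting."""
    n = len(errors_m)
    return {m: sum(1 for e in errors_m if e <= m) / n for m in margins}

# Hypothetical localization errors in meters for four test points.
accs = margin_accuracy([0.0, 1.5, 3.9, 6.0])
```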


Subjects
Deep Learning, Wireless Technology, Algorithms, Cluster Analysis
16.
Sensors (Basel) ; 21(13)2021 Jun 28.
Article in English | MEDLINE | ID: mdl-34203508

ABSTRACT

The impact of earthquake disasters on human life is positively related to the magnitude and intensity of the earthquake, and effectively avoiding casualties and property losses depends on accurate earthquake prediction. In this study, an electromagnetic sensor is investigated to assess earthquakes in advance by collecting earthquake signals. At present, there are two mainstream approaches to earthquake magnitude prediction. On the one hand, most geophysicists or data analysts extract a series of basic features from earthquake precursor signals for seismic classification. On the other hand, data on earth activity obtained by seismographs or space satellites are used directly in classification networks. This article proposes a CNN and designs a 3D feature map that addresses earthquake magnitude classification by combining the advantages of shallow features and high-dimensional information. In addition, noise simulation and SMOTE oversampling are applied to overcome the imbalance of seismic data. Signals collected by electromagnetic sensors are used to evaluate the proposed method. The results show that the method proposed in this paper classifies earthquake magnitudes well.
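The SMOTE oversampling step can be sketched in a few lines: synthesize new minority-class samples by interpolating between a sample and its nearest minority neighbour. This is a simplification of the full algorithm, which samples among k nearest neighbours:

```python
import random

def smote_like(minority, n_new, seed=0):
    """SMOTE-style oversampling sketch: each synthetic sample lies on
    the segment between a random minority sample and its nearest
    minority neighbour. `minority` is a list of feature tuples
    (needs at least two samples)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbour = min((s for s in minority if s is not base),
                        key=lambda s: dist2(base, s))
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + t * (v - b)
                               for b, v in zip(base, neighbour)))
    return synthetic

# Toy 2-D minority class (e.g. rare high-magnitude events).
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_samples = smote_like(minority, n_new=5)
```

Interpolated samples always stay within the coordinate range spanned by the originals, which is why SMOTE adds plausible rather than arbitrary minority points.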


Subjects
Deep Learning, Disasters, Earthquakes, Computer Simulation, Electromagnetic Phenomena, Humans
17.
Sensors (Basel) ; 21(13)2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34206540

ABSTRACT

The emergence of an aging society is inevitable due to continued increases in life expectancy and decreases in birth rate. These social changes call for new smart healthcare services for daily life, and COVID-19 has further driven a contactless trend that necessitates more non-face-to-face health services. With the improvements achieved in healthcare technologies, an increasing number of studies have attempted to predict and analyze diseases in advance, and research on stroke is actively underway, particularly in the aging population. Stroke, which is fatal to the elderly, requires continuous medical observation and monitoring, as its recurrence and mortality rates are very high. Most studies examining stroke to date have used MRI or CT images for simple classification. This clinical (imaging) approach is expensive and time-consuming and requires bulky equipment. Recently, there has been increasing interest in using non-invasively measurable EEG to compensate for these shortcomings. However, the prediction algorithms and processing procedures are time-consuming because the raw data need to be separated before specific attributes can be obtained. Therefore, in this paper, we propose a new methodology that allows deep learning models to be applied immediately to raw EEG data without using the frequency properties of the EEG. The proposed deep learning-based stroke prediction model was developed and trained with data collected from real-time EEG sensors. We implemented and compared different deep learning models (LSTM, Bidirectional LSTM, CNN-LSTM, and CNN-Bidirectional LSTM) specialized for time series classification and prediction. The experimental results confirmed that raw EEG data, when processed with the CNN-Bidirectional LSTM model, can predict stroke with 94.0% accuracy, with a low FPR (6.0%) and FNR (5.7%), demonstrating the high reliability of our system.
These experimental results demonstrate the feasibility of non-invasive methods that can easily measure brain waves alone to predict and monitor stroke diseases in real time during daily life. These findings are expected to lead to significant improvements for early stroke detection with reduced cost and discomfort compared to other measuring techniques.


Subjects
COVID-19, Deep Learning, Stroke, Aged, Humans, Neural Networks, Computer, SARS-CoV-2
18.
Sensors (Basel) ; 21(13)2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34206620

ABSTRACT

We present a deep learning solution to the problem of localization of magnetoencephalography (MEG) brain signals. The proposed deep model architectures are tuned to single and multiple time point MEG data, and can estimate varying numbers of dipole sources. Results from simulated MEG data on the cortical surface of a real human subject demonstrated improvements against the popular RAP-MUSIC localization algorithm in specific scenarios with varying SNR levels, inter-source correlation values, and number of sources. Importantly, the deep learning models had robust performance to forward model errors resulting from head translation and rotation and a significant reduction in computation time, to a fraction of 1 ms, paving the way to real-time MEG source localization.


Subjects
Deep Learning, Magnetoencephalography, Algorithms, Brain, Brain Mapping, Computer Simulation, Electroencephalography, Humans, Models, Neurological
19.
Sensors (Basel) ; 21(13)2021 Jun 29.
Article in English | MEDLINE | ID: mdl-34209571

ABSTRACT

Accurate information about kiwifruit vines is important for monitoring their physiological states and undertaking precise orchard operations. However, because vines are small, cling to trellises, and have branches lying on the ground, numerous challenges exist in acquiring accurate data on kiwifruit vines. In this paper, a kiwifruit canopy distribution prediction model is proposed on the basis of low-altitude unmanned aerial vehicle (UAV) images and deep learning techniques. First, the locations of the kiwifruit plants and the vine distribution are extracted from high-precision images collected by UAV. Canopy gradient distribution maps with different noise reduction and distribution effects are generated by modifying the threshold and sampling size using the resampling normalization method. The results showed that the accuracies of vine segmentation using PSPnet, support vector machine, and random forest classification were 71.2%, 85.8%, and 75.26%, respectively. However, the segmentation obtained using deep semantic segmentation had a higher signal-to-noise ratio and was closer to the real situation. The average intersection over union of the deep semantic segmentation was at least 80% in the distribution maps, whereas, with traditional machine learning, it was between 20% and 60%. This indicates that the proposed model can quickly extract the vine distribution and plant positions, enabling dynamic monitoring of orchards to provide real-time operation guidance.
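The intersection over union used to compare the segmentation methods can be computed directly from binary masks; a minimal sketch:

```python
def intersection_over_union(mask_a, mask_b):
    """IoU between two binary masks (lists of 0/1 rows), the metric
    the abstract reports for vine segmentation quality."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += 1 if a and b else 0
            union += 1 if a or b else 0
    return inter / union if union else 1.0

# Toy predicted vs ground-truth vine masks.
pred = [[1, 1, 0],
        [0, 1, 0]]
truth = [[1, 0, 0],
         [0, 1, 1]]
iou = intersection_over_union(pred, truth)
```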


Subjects
Deep Learning, Remote Sensing Technology, Altitude, Fruit, Machine Learning
20.
Int J Mol Sci ; 22(12)2021 Jun 15.
Article in English | MEDLINE | ID: mdl-34203866

ABSTRACT

Peroxisomes are ubiquitous membrane-bound organelles, and aberrant localisation of peroxisomal proteins contributes to the pathogenesis of several disorders. Many computational methods focus on assigning protein sequences to subcellular compartments, but there are no tools tailored specifically to the sub-localisation (matrix vs. membrane) of peroxisomal proteins. We present here In-Pero, a new method for predicting sub-peroxisomal protein localisation. In-Pero combines standard machine learning approaches with recently proposed multi-dimensional deep-learning representations of the protein amino-acid sequence. It showed a classification accuracy above 0.9 in predicting peroxisomal matrix and membrane proteins. The method is trained and tested using a double cross-validation approach on a curated dataset comprising 160 peroxisomal proteins with experimental evidence for sub-peroxisomal localisation. We further show that the proposed approach can easily be adapted (In-Mito) to the prediction of mitochondrial protein localisation, obtaining performance for certain classes of proteins (matrix and inner-membrane) superior to existing tools.
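A double (nested) cross-validation like the one used to train and test In-Pero can be sketched as index bookkeeping: outer folds held out for testing, with the remaining samples re-split into inner folds for model selection. The fold scheme below is a generic sketch, not the authors' exact protocol:

```python
def double_cv_splits(n_samples, outer_folds, inner_folds):
    """Index splits for a double (nested) cross-validation.
    Returns one (outer_test, inner_splits) pair per outer fold, where
    inner_splits is a list of (inner_train, inner_val) pairs built
    only from the samples outside the outer test fold."""
    idx = list(range(n_samples))
    outer_tests = [idx[k::outer_folds] for k in range(outer_folds)]
    result = []
    for test in outer_tests:
        held = set(test)
        train = [i for i in idx if i not in held]
        inner = []
        for j in range(inner_folds):
            val = train[j::inner_folds]
            val_set = set(val)
            inner.append(([i for i in train if i not in val_set], val))
        result.append((test, inner))
    return result

# 12 proteins, 3 outer folds, 2 inner folds (toy sizes).
splits = double_cv_splits(n_samples=12, outer_folds=3, inner_folds=2)
```

Because each inner split is built only from samples outside the outer test fold, the test data never leaks into model selection.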


Subjects
Deep Learning, Membrane Proteins/chemistry, Membrane Proteins/metabolism, Peroxisomes/metabolism, Software, Algorithms, Amino Acid Sequence, Mitochondrial Proteins/metabolism, Protein Transport, Reproducibility of Results