Results 1 - 20 of 3,888
1.
BMC Bioinformatics ; 22(1): 231, 2021 May 05.
Article in English | MEDLINE | ID: mdl-33952199

ABSTRACT

BACKGROUND: Epitope prediction is a useful approach in cancer immunology and immunotherapy. Many computational methods, including machine learning and network analysis, have been rapidly developed for such purposes. However, regarding clinical applications, the existing tools are insufficient because few of the predicted binding molecules are immunogenic. Hence, to develop more potent and effective vaccines, it is important to understand both binding and immunogenic potential. Here, we observed that the interactive associations constituted by human leukocyte antigen (HLA)-peptide pairs can be regarded as a network in which each HLA and peptide is taken as a node. We investigated whether this network could capture the essential interactive propensities embedded in HLA-peptide pairs. Thus, we developed a network-based deep learning method called DeepNetBim that harnesses binding and immunogenic information to predict HLA-peptide interactions. RESULTS: Quantitative class I HLA-peptide binding data and qualitative immunogenic data (including data generated from T cell activation assays, major histocompatibility complex (MHC) binding assays and MHC ligand elution assays) were retrieved from the Immune Epitope Database (IEDB). The weighted HLA-peptide binding network and immunogenic network were integrated into a network-based deep learning algorithm constituted by a convolutional neural network and an attention mechanism. The results showed that the integration of network centrality metrics increased the power of both binding and immunogenicity predictions, while the new model significantly outperformed those that did not include network features and those with shuffled networks. Applied to benchmark and independent datasets, DeepNetBim achieved an AUC score of 93.74% in HLA-peptide binding prediction, outperforming 11 state-of-the-art relevant models.
Furthermore, the performance enhancement of the combined model, which filtered out negative immunogenic predictions, was confirmed on neoantigen identification by an increase in both positive predictive value (PPV) and the proportion of neoantigens recognized. CONCLUSIONS: We developed a network-based deep learning method called DeepNetBim as a pan-specific epitope prediction tool. It extracts the attributes of the network as new features from HLA-peptide binding and immunogenic models. Not only did the DeepNetBim binding model outperform other updated methods, but the combination of our two models also showed better performance, indicating further applications in clinical practice.


Subject(s)
Deep Learning, Algorithms, Epitopes, HLA Antigens/genetics, HLA Antigens/metabolism, Histocompatibility Antigens Class I/metabolism, Histocompatibility Antigens Class II, Humans, Protein Binding
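The abstract above regards HLA-peptide pairs as nodes in a binding network and feeds network centrality metrics to the model as extra features. A minimal sketch of that idea, using plain degree centrality on a toy bipartite network (the allele and peptide names here are illustrative, and the paper may use different centrality metrics):

```python
from collections import defaultdict

def degree_centrality(pairs):
    """Normalized degree centrality for every node of a bipartite
    HLA-peptide binding network built from observed pairs."""
    neighbors = defaultdict(set)
    for hla, peptide in pairs:
        neighbors[hla].add(peptide)
        neighbors[peptide].add(hla)
    n = len(neighbors)
    # Normalize each node's degree by the maximum possible degree (n - 1).
    return {node: len(nbrs) / (n - 1) for node, nbrs in neighbors.items()}

pairs = [("HLA-A*02:01", "SLYNTVATL"),
         ("HLA-A*02:01", "GILGFVFTL"),
         ("HLA-B*07:02", "SLYNTVATL")]
centrality = degree_centrality(pairs)
```

A promiscuous allele or widely presented peptide gets a higher centrality value, which is the kind of signal the paper reports as boosting both binding and immunogenicity prediction.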
2.
Int J Mol Sci ; 22(6)2021 Mar 22.
Article in English | MEDLINE | ID: mdl-33810175

ABSTRACT

G protein-coupled receptor (GPCR) oligomerization, while contentious, continues to attract the attention of researchers. Numerous experimental investigations have validated the presence of GPCR dimers, and the relevance of dimerization in the effectuation of physiological functions intensifies the attractiveness of this concept as a potential therapeutic target. GPCRs, as a single entity, have been the main source of scrutiny for drug design objectives for multiple diseases such as cancer, inflammation, cardiac, and respiratory diseases. The existence of dimers broadens the research scope of GPCR functions, revealing new signaling pathways that can be targeted for disease pathogenesis that have not previously been reported when GPCRs were only viewed in their monomeric form. This review will highlight several aspects of GPCR dimerization, which include a summary of the structural elucidation of the allosteric modulation of class C GPCR activation offered through recent solutions to the three-dimensional, full-length structures of metabotropic glutamate receptor and γ-aminobutyric acid B receptor as well as the role of dimerization in the modification of GPCR function and allostery. With the growing influence of computational methods in the study of GPCRs, we will also be reviewing recent computational tools that have been utilized to map protein-protein interactions (PPI).


Subject(s)
Models, Molecular, Protein Conformation, Protein Multimerization, Receptors, G-Protein-Coupled/chemistry, Receptors, G-Protein-Coupled/metabolism, Allosteric Regulation, Animals, Deep Learning, Humans, Ligands, Machine Learning, Peptides/chemistry, Peptides/metabolism, Protein Binding, Protein Interaction Domains and Motifs, Structure-Activity Relationship
4.
Sensors (Basel) ; 21(7)2021 Mar 28.
Article in English | MEDLINE | ID: mdl-33800704

ABSTRACT

Human activity recognition (HAR) has become a vital human-computer interaction service in smart homes. It remains a challenging task due to the diversity and similarity of human actions. In this paper, a novel hierarchical deep learning-based methodology equipped with low-cost sensors is proposed for high-accuracy, device-free human activity recognition. ESP8266, as the sensing hardware, was utilized to deploy the WiFi sensor network and collect multi-dimensional received signal strength indicator (RSSI) records. The proposed learning model presents a coarse-to-fine hierarchical classification framework with two-level perception modules. In the coarse-level stage, twelve statistical features of time-frequency domains were extracted from the RSSI measurements filtered by a Butterworth low-pass filter, and a support vector machine (SVM) model was employed to quickly recognize the basic human activities by classifying the signal statistical features. In the fine-level stage, the gated recurrent unit (GRU), a representative type of recurrent neural network (RNN), was applied to address the confusion between similar activities. The GRU model can realize automatic multi-level feature extraction from the RSSI measurements and accurately discriminate similar activities. The experimental results show that the proposed approach achieved recognition accuracies of 96.45% and 94.59% for six types of activities in two different environments and performed better than traditional pattern-based methods. The proposed hierarchical learning method provides a low-cost sensor-based HAR framework that enhances recognition accuracy and modeling efficiency.


Subject(s)
Deep Learning, Human Activities, Humans, Neural Networks, Computer, Smartphone, Support Vector Machine
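The coarse-level stage described above classifies windows of RSSI samples by their statistical features. A sketch of a few such time-domain features on a toy RSSI window (the paper's full set is twelve time-frequency features, and the Butterworth pre-filtering is omitted here):

```python
import statistics

def rssi_features(window):
    """A handful of the time-domain statistics a coarse-level SVM
    could classify (illustrative subset, not the paper's exact set)."""
    return {
        "mean": statistics.fmean(window),
        "std": statistics.pstdev(window),
        "min": min(window),
        "max": max(window),
        "range": max(window) - min(window),
        "median": statistics.median(window),
    }

# Toy RSSI window in dBm from one WiFi link
window = [-52, -51, -53, -50, -52, -54]
feats = rssi_features(window)
```

Each sliding window of raw RSSI is reduced to one small feature vector like this before it reaches the SVM, which is why the coarse stage is fast.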
5.
Sensors (Basel) ; 21(6)2021 Mar 18.
Article in English | MEDLINE | ID: mdl-33803891

ABSTRACT

Human activity recognition (HAR) remains a challenging yet crucial problem to address in computer vision. HAR is primarily intended to be used with other technologies, such as the Internet of Things, to assist in healthcare and eldercare. With the development of deep learning, automatic high-level feature extraction has become a possibility and has been used to optimize HAR performance. Furthermore, deep-learning techniques have been applied in various fields for sensor-based HAR. This study introduces a new methodology using convolutional neural networks (CNNs) with varying kernel dimensions along with bi-directional long short-term memory (BiLSTM) to capture features at various resolutions. The novelty of this research lies in the effective selection of the optimal video representation and in the effective extraction of spatial and temporal features from sensor data using traditional CNN and BiLSTM. The wireless sensor data mining (WISDM) and UCI datasets are used for the proposed methodology, in which data were collected through diverse methods, including accelerometers and gyroscopes. The results indicate that the proposed scheme is efficient in improving HAR. Unlike other available methods, the proposed method improved accuracy, attaining a higher score on the WISDM dataset than on the UCI dataset (98.53% vs. 97.05%).


Subject(s)
Deep Learning, Data Mining, Human Activities, Humans, Memory, Long-Term, Neural Networks, Computer
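The "varying kernel dimensions" idea above means running parallel convolution branches with different kernel sizes over the same sensor sequence, so each branch sees a different temporal resolution. A minimal sketch of one multi-scale step, with toy filter weights (the real model learns its kernels and feeds the features to a BiLSTM):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy accelerometer axis
# Two branches with different kernel sizes; their outputs are
# concatenated into one multi-scale feature vector.
small = conv1d(signal, [0.5, 0.5])        # kernel size 2: fine detail
large = conv1d(signal, [1/3, 1/3, 1/3])   # kernel size 3: smoother trend
features = small + large
```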
6.
Article in English | MEDLINE | ID: mdl-33807714

ABSTRACT

While the clinical approval process is able to filter out medications whose utility does not offset their adverse drug reaction profile in humans, it is not well suited to characterizing lower frequency issues and idiosyncratic multi-drug interactions that can happen in real world diverse patient populations. With a growing abundance of real-world evidence databases containing hundreds of thousands of patient records, it is now feasible to build machine learning models that incorporate individual patient information to provide personalized adverse event predictions. In this study, we build models that integrate patient specific demographic, clinical, and genetic features (when available) with drug structure to predict adverse drug reactions. We develop an extensible graph convolutional approach to be able to integrate molecular effects from the variable number of medications a typical patient may be taking. Our model outperforms standard machine learning methods at the tasks of predicting hospitalization and death in the UK Biobank dataset yielding an R2 of 0.37 and an AUC of 0.90, respectively. We believe our model has potential for evaluating new therapeutic compounds for individualized toxicities in real world diverse populations. It can also be used to prioritize medications when there are multiple options being considered for treatment.


Subject(s)
Deep Learning, Drug-Related Side Effects and Adverse Reactions, Pharmaceutical Preparations, Databases, Factual, Drug-Related Side Effects and Adverse Reactions/epidemiology, Humans, Machine Learning
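The extensibility the abstract describes, handling "the variable number of medications a typical patient may be taking", comes down to pooling a set of per-drug representations into one fixed-size patient vector. A toy sketch of that pooling step (mean-pooling over made-up 2-d embeddings; the paper encodes each molecule with a graph convolutional network before any such aggregation):

```python
def aggregate_medications(drug_embeddings):
    """Mean-pool per-drug embedding vectors so that a patient on any
    number of medications maps to one fixed-size vector."""
    if not drug_embeddings:
        return None
    dim = len(drug_embeddings[0])
    return [sum(e[d] for e in drug_embeddings) / len(drug_embeddings)
            for d in range(dim)]

# Hypothetical 2-d molecular embeddings for a patient's three drugs
patient_drugs = [[0.2, 0.8], [0.4, 0.0], [0.0, 0.4]]
pooled = aggregate_medications(patient_drugs)
```

The pooled vector can then be concatenated with demographic, clinical, and genetic features before the final prediction head, regardless of how many drugs contributed to it.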
7.
Medicine (Baltimore) ; 100(16): e25663, 2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33879750

ABSTRACT

ABSTRACT: Along with recent developments in deep learning techniques, computer-aided diagnosis (CAD) has been growing rapidly in the medical imaging field. In this work, we evaluate a deep learning-based CAD algorithm (DCAD) for detecting and localizing 3 major thoracic abnormalities visible on chest radiographs (CR) and compare the performance of physicians with and without the assistance of the algorithm. A subset of 244 subjects (60% abnormal CRs) was evaluated. Abnormal findings included mass/nodules (55%), consolidation (21%), and pneumothorax (24%). Observer performance tests were conducted to assess whether the performance of physicians could be enhanced with the algorithm. The area under the receiver operating characteristic (ROC) curve (AUC) and the area under the jackknife alternative free-response ROC (JAFROC) were measured to evaluate the performance of the algorithm and physicians in image classification and lesion detection, respectively. The AUCs for nodule/mass, consolidation, and pneumothorax were 0.9883, 1.000, and 0.9997, respectively. For image classification, the overall AUC of the pooled physicians was 0.8679 without DCAD and 0.9112 with DCAD. Regarding lesion detection, the pooled observers exhibited a weighted JAFROC figure of merit (FOM) of 0.8426 without DCAD and 0.9112 with DCAD. DCAD for CRs could enhance physicians' performance in the detection of 3 major thoracic abnormalities.


Subject(s)
Deep Learning/statistics & numerical data, Lung Diseases/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data, Radiography, Thoracic/statistics & numerical data, Thoracic Neoplasms/diagnostic imaging, Aged, Area Under Curve, Case-Control Studies, Female, Humans, Lung Neoplasms/diagnostic imaging, Male, Middle Aged, Observer Variation, Pneumothorax/diagnostic imaging, ROC Curve, Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Thoracic/methods, Reproducibility of Results
8.
Analyst ; 146(8): 2490-2498, 2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33899058

ABSTRACT

As with other proteins, the conformation of the silk protein is critical in determining the mechanical, optical and biological performance of silk-based materials. However, an accurate and time-efficient method for evaluating protein conformation from Fourier transform infrared (FTIR) spectra is still desired. A set of convolutional neural network (CNN)-based deep learning models was developed in this study to identify silk proteins and evaluate the relative content of each conformation from FTIR spectra. Compared with the conventional deconvolution algorithm, our CNN models are highly accurate and time-efficient, showing promise for processing massive FTIR datasets, such as data from FTIR imaging, and for quick analysis feedback, such as on-line and time-resolved FTIR measurements. We compiled an open-source and user-friendly graphical Python program that allows users to analyze their own FTIR datasets, whether from the silk protein or other proteins, for the encouragement and convenience of interested researchers.


Subject(s)
Deep Learning, Protein Conformation, Silk, Algorithms, Neural Networks, Computer
9.
Lancet Digit Health ; 3(5): e295-e305, 2021 05.
Article in English | MEDLINE | ID: mdl-33858815

ABSTRACT

BACKGROUND: Survival of liver transplant recipients beyond 1 year since transplantation is compromised by an increased risk of cancer, cardiovascular events, infection, and graft failure. Few clinical tools are available to identify patients at risk of these complications, which would flag them for screening tests and potentially life-saving interventions. In this retrospective analysis, we aimed to assess the ability of deep learning algorithms using longitudinal data from two prospective cohorts to predict complications resulting in death after liver transplantation over multiple timeframes, compared with logistic regression models. METHODS: In this machine learning analysis, model development was done on a set of 42 146 liver transplant recipients (mean age 48·6 years [SD 17·3]; 17 196 [40·8%] women) from the Scientific Registry of Transplant Recipients (SRTR) in the USA. Transferability of the model was further evaluated by fine-tuning on a dataset from the University Health Network (UHN) in Canada (n=3269; mean age 52·5 years [11·1]; 1079 [33·0%] women). The primary outcome was cause of death, as recorded in the databases, due to cardiovascular causes, infection, graft failure, or cancer, within 1 year and 5 years of each follow-up examination after transplantation. We compared the performance of four deep learning models against logistic regression, assessing performance using the area under the receiver operating characteristic curve (AUROC). FINDINGS: In both datasets, deep learning models outperformed logistic regression, with the Transformer model achieving the highest AUROCs in both datasets (p<0·0001). The AUROC for the Transformer model across all outcomes in the SRTR dataset was 0·804 (99% CI 0·795-0·854) for 1-year predictions and 0·733 (0·729-0·769) for 5-year predictions. In the UHN dataset, the AUROC for the top-performing deep learning model was 0·807 (0·795-0·842) for 1-year predictions and 0·722 (0·705-0·764) for 5-year predictions.
AUROCs ranged from 0·695 (0·680-0·713) for prediction of death from infection within 5 years to 0·859 (0·847-0·871) for prediction of death by graft failure within 1 year. INTERPRETATION: Deep learning algorithms can incorporate longitudinal information to continuously predict long-term outcomes after liver transplantation, outperforming logistic regression models. Physicians could use these algorithms at routine follow-up visits to identify liver transplant recipients at risk for adverse outcomes and prevent these complications by modifying management based on ranked features. FUNDING: Canadian Donation and Transplant Research Program, CIFAR AI Chairs Program.


Subject(s)
Algorithms, Deep Learning, Liver Transplantation/adverse effects, Liver Transplantation/mortality, Risk Assessment/methods, Adult, Aged, Area Under Curve, Canada/epidemiology, Databases, Factual, Female, Humans, Logistic Models, Male, Middle Aged, Predictive Value of Tests, ROC Curve, Retrospective Studies, United States/epidemiology
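Every result in the abstract above is reported as an AUROC. For readers unfamiliar with the metric, it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, which gives a direct way to compute it on small samples (toy labels and scores below):

```python
def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation:
    fraction of positive/negative pairs ranked correctly, ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
auc = auroc(labels, scores)
```

On real datasets a sorted sweep over thresholds is used instead of the O(P x N) pair loop, but the value is the same.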
10.
Lancet Digit Health ; 3(5): e306-e316, 2021 05.
Article in English | MEDLINE | ID: mdl-33890578

ABSTRACT

BACKGROUND: Coronary artery calcium (CAC) score is a clinically validated marker of cardiovascular disease risk. We developed and validated a novel cardiovascular risk stratification system based on deep-learning-predicted CAC from retinal photographs. METHODS: We used 216 152 retinal photographs from five datasets from South Korea, Singapore, and the UK to train and validate the algorithms. First, using one dataset from a South Korean health-screening centre, we trained a deep-learning algorithm to predict the probability of the presence of CAC (ie, deep-learning retinal CAC score, RetiCAC). We stratified RetiCAC scores into tertiles and used Cox proportional hazards models to evaluate the ability of RetiCAC to predict cardiovascular events based on external test sets from South Korea, Singapore, and the UK Biobank. We evaluated the incremental values of RetiCAC when added to the Pooled Cohort Equation (PCE) for participants in the UK Biobank. FINDINGS: RetiCAC outperformed all single clinical parameter models in predicting the presence of CAC (area under the receiver operating characteristic curve of 0·742, 95% CI 0·732-0·753). Among the 527 participants in the South Korean clinical cohort, 33 (6·3%) had cardiovascular events during the 5-year follow-up. When compared with the current CAC risk stratification (0, >0-100, and >100), the three-strata RetiCAC showed comparable prognostic performance with a concordance index of 0·71. In the Singapore population-based cohort (n=8551), 310 (3·6%) participants had fatal cardiovascular events over 10 years, and the three-strata RetiCAC was significantly associated with increased risk of fatal cardiovascular events (hazard ratio [HR] trend 1·33, 95% CI 1·04-1·71). In the UK Biobank (n=47 679), 337 (0·7%) participants had fatal cardiovascular events over 10 years. 
When added to the PCE, the three-strata RetiCAC improved cardiovascular risk stratification in the intermediate-risk group (HR trend 1·28, 95% CI 1·07-1·54) and borderline-risk group (1·62, 1·04-2·54), and the continuous net reclassification index was 0·261 (95% CI 0·124-0·364). INTERPRETATION: A deep learning and retinal photograph-derived CAC score is comparable to CT scan-measured CAC in predicting cardiovascular events, and improves on current risk stratification approaches for cardiovascular disease events. These data suggest retinal photograph-based deep learning has the potential to be used as an alternative measure of CAC, especially in low-resource settings. FUNDING: Yonsei University College of Medicine; Ministry of Health and Welfare, Korea Institute for Advancement of Technology, South Korea; Agency for Science, Technology, and Research; and National Medical Research Council, Singapore.


Subject(s)
Algorithms, Cardiovascular Diseases/diagnosis, Coronary Artery Disease/complications, Deep Learning, Retina/diagnostic imaging, Risk Assessment/methods, Vascular Calcification/complications, Adult, Aged, Area Under Curve, Female, Humans, Kaplan-Meier Estimate, Male, Middle Aged, Predictive Value of Tests, Proportional Hazards Models, ROC Curve, Republic of Korea, Singapore, United Kingdom
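The study above stratifies the continuous RetiCAC score into tertiles before feeding it to the Cox models. A minimal sketch of a tertile split on toy scores (empirical cut points from the sample itself; the paper's exact cut-point procedure is not specified here):

```python
def tertile_strata(scores):
    """Assign each score to stratum 0, 1 or 2 by its position in the
    sorted sample (empirical tertile cut points)."""
    order = sorted(scores)
    n = len(order)
    cut1, cut2 = order[n // 3], order[2 * n // 3]
    return [0 if s < cut1 else 1 if s < cut2 else 2 for s in scores]

scores = [0.05, 0.90, 0.40, 0.10, 0.70, 0.30]  # toy RetiCAC-style scores
strata = tertile_strata(scores)
```

Each stratum can then be entered as a categorical covariate in a proportional hazards model, which is what lets the paper report a hazard ratio per stratum trend.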
11.
Lancet Digit Health ; 3(5): e317-e329, 2021 05.
Article in English | MEDLINE | ID: mdl-33890579

ABSTRACT

BACKGROUND: By 2050, almost 5 billion people globally are projected to have myopia, of whom 20% are likely to have high myopia with clinically significant risk of sight-threatening complications such as myopic macular degeneration. These are diagnoses that typically require specialist assessment or measurement with multiple unconnected pieces of equipment. Artificial intelligence (AI) approaches might be effective for risk stratification and to identify individuals at highest risk of visual loss. However, unresolved challenges for AI medical studies remain, including paucity of transparency, auditability, and traceability. METHODS: In this retrospective multicohort study, we developed and tested retinal photograph-based deep learning algorithms for detection of myopic macular degeneration and high myopia, using a total of 226 686 retinal images. First we trained and internally validated the algorithms on datasets from Singapore, and then externally tested them on datasets from China, Taiwan, India, Russia, and the UK. We also compared the performance of the deep learning algorithms against six human experts in the grading of a randomly selected dataset of 400 images from the external datasets. As proof of concept, we used a blockchain-based AI platform to demonstrate the real-world application of secure data transfer, model transfer, and model testing across three sites in Singapore and China. FINDINGS: The deep learning algorithms showed robust diagnostic performance with areas under the receiver operating characteristic curves [AUC] of 0·969 (95% CI 0·959-0·977) or higher for myopic macular degeneration and 0·913 (0·906-0·920) or higher for high myopia across the external testing datasets with available data. In the randomly selected dataset, the deep learning algorithms outperformed all six expert graders in detection of each condition (AUC of 0·978 [0·957-0·994] for myopic macular degeneration and 0·973 [0·941-0·995] for high myopia). 
We also successfully used blockchain technology for data transfer, model transfer, and model testing between sites and across two countries. INTERPRETATION: Deep learning algorithms can be effective tools for risk stratification and screening of myopic macular degeneration and high myopia among the large global population with myopia. The blockchain platform developed here could potentially serve as a trusted platform for performance testing of future AI models in medicine. FUNDING: None.


Subject(s)
Algorithms, Artificial Intelligence, Blockchain, Deep Learning, Macular Degeneration/diagnosis, Myopia/diagnosis, Retina/diagnostic imaging, Area Under Curve, Biomedical Research/instrumentation, Biomedical Research/methods, Cohort Studies, Datasets as Topic, Humans, Proof of Concept Study, ROC Curve, Reproducibility of Results, Retrospective Studies
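The auditability argument for the blockchain platform above rests on one mechanism: each recorded event commits to the hash of the previous record, so past entries cannot be altered without detection. A toy hash chain illustrating the principle (this is an assumption-laden sketch, not the paper's actual platform or data format):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record to a minimal hash chain; each block stores the
    previous block's hash, making the history tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash},
                         sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

# Hypothetical audit events for a model moving between two sites
chain = []
add_block(chain, {"event": "model_transfer", "site": "A"})
add_block(chain, {"event": "model_test", "site": "B"})
```

Changing the first record would change its hash, so the `prev` field of every later block would no longer match and the tampering would be detectable.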
12.
BMC Bioinformatics ; 22(1): 198, 2021 Apr 19.
Article in English | MEDLINE | ID: mdl-33874881

ABSTRACT

BACKGROUND: Genotype-phenotype predictions are of great importance in genetics. These predictions can help to find genetic mutations causing variations in human beings. The many approaches for finding such associations can be broadly categorized into two classes: statistical techniques and machine learning. Statistical techniques are well suited to finding the actual SNPs causing variation, whereas machine learning techniques are better suited to classifying people into different categories. In this article, we examined the eye-color and type-2 diabetes phenotypes. The proposed technique is a hybrid approach combining elements of statistical techniques and machine learning. RESULTS: The main dataset for the eye-color phenotype consists of 806 people: 404 with blue-green eyes and 402 with brown eyes. After preprocessing, we generated 8 different datasets, containing different numbers of SNPs, using the mutation difference and thresholding at each SNP. We distinguished three mutation levels at each SNP: no mutation, partial mutation, and full mutation. The data were then transformed for machine learning algorithms. We used nine classifiers (random forest, extreme gradient boosting, ANN, LSTM, GRU, BiLSTM, 1D-CNN, ensembles of ANN, and ensembles of LSTM), which gave best accuracies of 0.91, 0.9286, 0.945, 0.94, 0.94, 0.92, 0.95, and 0.96, respectively. Stacked ensembles of LSTM outperformed the other algorithms for 1560 SNPs, with an overall accuracy of 0.96, AUC = 0.98 for brown eyes, and AUC = 0.97 for blue-green eyes. The main dataset for type-2 diabetes consists of 107 people, of whom 30 are classified as cases and 74 as controls. We used different linear thresholds to find the optimal number of SNPs for classification. The final model gave an accuracy of 0.97. CONCLUSION: Genotype-phenotype predictions are very useful, especially in forensics.
These predictions can help to identify SNP variant associations with traits and diseases. Given more datasets, machine learning model predictions can be improved. Moreover, the non-linearity of the machine learning model and the combination of SNP mutations during training increase prediction performance. We considered binary classification problems, but the proposed approach can be extended to multi-class classification.


Subject(s)
Deep Learning, Diabetes Mellitus, Type 2, Diabetes Mellitus, Type 2/genetics, Eye Color, Genotype, Humans, Phenotype
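The three mutation levels the abstract distinguishes at each SNP (no, partial, full) correspond to the standard additive encoding of a biallelic genotype: count the alleles that differ from the reference. A minimal sketch with made-up genotypes:

```python
def encode_genotype(genotype, ref_allele):
    """Encode a biallelic SNP genotype against the reference allele:
    0 = no mutation (homozygous reference),
    1 = partial mutation (heterozygous),
    2 = full mutation (homozygous alternate)."""
    return sum(1 for allele in genotype if allele != ref_allele)

# Toy genotypes of three samples at one SNP whose reference allele is 'A'
samples = ["AA", "AG", "GG"]
codes = [encode_genotype(g, "A") for g in samples]
```

Applying this per SNP turns each person into an integer vector of mutation levels, which is the form the classifiers in the study can consume.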
13.
Sensors (Basel) ; 21(6)2021 Mar 23.
Article in English | MEDLINE | ID: mdl-33806741

ABSTRACT

Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision (without depth sensing). This feature makes it suitable for inexpensive human-robot interaction in social or industrial settings. We employ a pose-driven spatial attention strategy, which guides our proposed Static and Dynamic gestures Network (StaDNet). From the image of the human upper body, we estimate the person's depth, along with the region of interest around the hands. The convolutional neural network (CNN) in StaDNet is fine-tuned on a background-substituted hand gestures dataset. It is utilized to detect 10 static gestures for each hand as well as to obtain the hand image-embeddings. These are subsequently fused with the augmented pose vector and then passed to the stacked long short-term memory blocks. Thus, human-centred frame-wise information from the augmented pose vector and from the left/right hand image-embeddings is aggregated in time to predict the dynamic gestures of the performing person. In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on the large-scale ChaLearn 2016 dataset. Moreover, we transfer the knowledge learned through the proposed methodology to the Praxis gestures dataset, where the obtained results also outscore the state of the art.


Subject(s)
Deep Learning, Gestures, Female, Hand, Humans, Male, Neural Networks, Computer, Pattern Recognition, Automated
14.
Sensors (Basel) ; 21(6)2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33808922

ABSTRACT

In intelligent vehicles, it is essential to monitor the driver's condition; however, recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver's emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose a deep learning-based driver's real emotion recognizer (DRER), an algorithm that recognizes drivers' real emotions that cannot be completely identified from their facial expressions. The proposed algorithm comprises two models: (i) a facial expression recognition model based on a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, our proposed method achieves 86.8% accuracy in recognizing the driver's induced emotion while driving.


Subject(s)
Automobile Driving, Deep Learning, Emotions, Facial Expression, Humans, Neural Networks, Computer
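The sensor fusion model above combines two per-class probability estimates, one from the facial expression model and one from the electrodermal activity signal. The simplest version of such fusion is a weighted average of the two distributions; the sketch below uses a fixed illustrative weight, whereas the paper's model learns the fusion end to end:

```python
def fuse(face_probs, eda_probs, w_face=0.6):
    """Late fusion: weighted average of per-class probabilities from a
    facial-expression model and an EDA model, renormalized to sum to 1.
    The weight 0.6 is an illustrative assumption, not a learned value."""
    fused = [w_face * f + (1 - w_face) * e
             for f, e in zip(face_probs, eda_probs)]
    total = sum(fused)
    return [p / total for p in fused]

# Toy class probabilities over (neutral, angry, happy)
face = [0.7, 0.2, 0.1]
eda = [0.3, 0.6, 0.1]
probs = fuse(face, eda)
```

Even this crude combination shows the motivating effect: a signal the face hides (here, the EDA channel's weight on "angry") shifts the fused distribution away from what expression alone would predict.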
15.
Sensors (Basel) ; 21(6)2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33808925

ABSTRACT

Arterial blood pressure (ABP) is an important vital sign from which valuable information about the subject's health can be extracted. By studying its morphology, it is possible to diagnose cardiovascular diseases such as hypertension, so routine ABP monitoring is recommended. The most common method of monitoring ABP is the cuff-based method, which yields only the systolic and diastolic blood pressure (SBP and DBP, respectively). This paper proposes a cuff-free method to estimate the morphology of the average ABP pulse (ABPM¯) through a deep learning model based on a seq2seq architecture with an attention mechanism. It needs only raw photoplethysmogram (PPG) signals from the finger and can integrate both categorical and continuous demographic information (DI). The experiments were performed on more than 1100 subjects from the MIMIC database, for whom age and gender were available. Without allowing data from the same subjects to be used for both training and testing, the mean absolute errors (MAE) were 6.57 ± 0.20 and 14.39 ± 0.42 mmHg for DBP and SBP, respectively. For ABPM¯, the R correlation coefficient and the MAE were 0.98 ± 0.001 and 8.89 ± 0.10 mmHg. In summary, this methodology is capable of transforming PPG into an ABP pulse, obtaining better results when the subjects' DI is used, and is potentially useful as wireless devices become more popular.


Subject(s)
Deep Learning, Photoplethysmography, Blood Pressure, Blood Pressure Determination, Demography, Humans
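The attention mechanism in the seq2seq model above lets each output sample of the reconstructed ABP pulse weight the input PPG timesteps differently. The core of any such mechanism is a softmax over query-key similarities; a generic dot-product sketch on toy 2-d vectors (not the paper's exact architecture):

```python
import math

def attention_weights(query, keys):
    """Dot-product attention: softmax over the similarity between one
    decoder query and every encoder key (timestep representation)."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

query = [1.0, 0.0]                        # toy decoder state
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # toy encoder timesteps
weights = attention_weights(query, keys)
```

The weights sum to 1 and concentrate on the timestep most similar to the query; the decoder's context vector is then the weights-weighted sum of the encoder values.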
16.
Sensors (Basel) ; 21(6)2021 Mar 12.
Article in English | MEDLINE | ID: mdl-33809080

ABSTRACT

Given the high prevalence and detrimental effects of unintentional falls in the elderly, fall detection has become a pertinent public concern. A fall detection system (FDS) gathers information from sensors to distinguish falls from routine activities in order to provide immediate medical assistance. Hence, the integrity of collected data becomes imperative. The presence of missing values in data, caused by unreliable data delivery, lossy sensors, local interference, synchronization disturbances, and so forth, greatly hampers the credibility and usefulness of data, making it unfit for reliable fall detection. This paper presents a noise-tolerant FDS that performs in the presence of missing values in data. The work focuses on deep learning, particularly recurrent neural networks (RNNs), with an underlying bidirectional long short-term memory (BiLSTM) stack to implement an FDS based on wearable sensors. The proposed technique is evaluated on two publicly available datasets: SisFall and UP-Fall Detection. Our system produces an accuracy of 97.21% and 97.41%, sensitivity of 96.97% and 99.77%, and specificity of 93.18% and 91.45% on SisFall and UP-Fall Detection, respectively, thus outperforming the existing state of the art on these benchmark datasets. These outcomes suggest that the ability of BiLSTM to retain long-term dependencies from past and future makes it an appropriate model choice for handling missing values in wearable fall detection systems.


Subject(s)
Deep Learning , Wearable Electronic Devices , Activities of Daily Living , Aged , Algorithms , Humans , Neural Networks, Computer
17.
Sensors (Basel) ; 21(6)2021 Mar 12.
Article in English | MEDLINE | ID: mdl-33809165

ABSTRACT

Resolution plays an essential role in oral imaging for periodontal disease assessment. Nevertheless, due to limitations in acquisition tools, a considerable number of oral examinations have low resolution, making the evaluation of this kind of lesion difficult. Recently, the use of deep-learning methods for image resolution improvement has seen an increase in the literature. In this work, we performed two studies to evaluate the effects of using different resolution improvement methods (nearest, bilinear, bicubic, Lanczos, SRCNN, and SRGAN). In the first one, specialized dentists visually analyzed the quality of images treated with these techniques. In the second study, we used those methods as different pre-processing steps for inputs of convolutional neural network (CNN) classifiers (Inception and ResNet) and evaluated whether this process leads to better results. The deep-learning methods lead to a substantial improvement in the visual quality of images but do not necessarily promote better classifier performance.
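The simplest baselines in the comparison above (nearest, bilinear, bicubic, Lanczos) are fixed interpolation kernels, while SRCNN and SRGAN learn the mapping. As an illustration only, here is the nearest-neighbour baseline on a tiny 2-D image represented as a list of lists:

```python
def nearest_upscale(img, factor):
    """Upscale a 2-D image by an integer factor using nearest-neighbour
    replication: each pixel becomes a factor x factor block."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]  # repeat columns
        out.extend([wide] * factor)                       # repeat rows (read-only)
    return out

small = [[0, 1],
         [2, 3]]
big = nearest_upscale(small, 2)
print(big)  # [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```

Bilinear and bicubic replace the block replication with weighted averages of neighbouring pixels; the learned methods go further and can hallucinate plausible high-frequency detail, which explains why they improve visual quality without necessarily improving classifier performance.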


Subject(s)
Alveolar Bone Loss , Deep Learning , Diagnostic Imaging , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer
18.
Sensors (Basel) ; 21(6)2021 Mar 16.
Article in English | MEDLINE | ID: mdl-33809537

ABSTRACT

The life cycle of leaves, from sprouting to senescence, is a phenomenon of regular changes such as budding, branching, leaf spreading, flowering, fruiting, leaf fall, and dormancy driven by seasonal climate changes. Because temperature and moisture drive these physiological changes over the life cycle, the detection of newly grown leaves (NGL) is helpful for estimating tree growth and even climate change. This study focused on the detection of NGL based on deep learning convolutional neural network (CNN) models with sparse enhancement (SE). As the NGL areas found in forest images have similar sparse characteristics, we used a sparse image to enhance the signal of the NGL, further improving the contrast between the NGL and the background. We then proposed hybrid CNN models that combine U-Net and SegNet features to perform image segmentation. Because the NGL in the images are relatively small, tiny targets, they also pose an imbalanced-data problem. Therefore, this paper further proposed 3-Layer SegNet, 3-Layer U-SegNet, 2-Layer U-SegNet, and 2-Layer Conv-U-SegNet architectures to reduce the degree of pooling of traditional semantic segmentation models, and used a loss function that increases the weight of the NGL. According to the experimental results, our proposed algorithms were indeed helpful for the image segmentation of NGL and achieved a better kappa value of 0.743.
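The imbalance-handling idea above, increasing the weight of the rare NGL class in the loss, can be sketched with a class-weighted binary cross-entropy over pixels. This is a generic sketch of the technique, not the paper's exact loss; the weighting scheme used in the paper may differ.

```python
import math

def weighted_bce(preds, labels, pos_weight):
    """Binary cross-entropy where positive (e.g. NGL) pixels are scaled by
    pos_weight, so misclassifying a rare positive costs more."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(preds)

# A confidently-wrong prediction on a positive pixel:
loss_unweighted = weighted_bce([0.1], [1], pos_weight=1.0)
loss_weighted = weighted_bce([0.1], [1], pos_weight=5.0)
print(loss_weighted > loss_unweighted)  # True
```

With pos_weight > 1, gradient descent is pushed harder to recover the tiny NGL regions instead of collapsing to the dominant background class.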


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Plant Leaves , Trees
19.
Sensors (Basel) ; 21(6)2021 Mar 16.
Article in English | MEDLINE | ID: mdl-33809710

ABSTRACT

Manual segmentation of muscle and adipose compartments from computed tomography (CT) axial images is a potential bottleneck in the early rapid detection and quantification of sarcopenia. A prototype deep learning neural network was trained on a multi-center collection of 3413 abdominal cancer surgery subjects to automatically segment truncal muscle, subcutaneous adipose tissue, and visceral adipose tissue at the L3 lumbar vertebral level. Segmentations were externally tested on 233 polytrauma subjects. Although abdominal CT scans after severe trauma are delivered quickly and robustly, they often contain motion or scatter artefacts, incomplete vertebral bodies, or arms in the field of view that degrade image quality; even so, the concordance was generally very good for the body composition indices of Skeletal Muscle Radiation Attenuation (SMRA) (Concordance Correlation Coefficient (CCC) = 0.92), Visceral Adipose Tissue Index (VATI) (CCC = 0.99), and Subcutaneous Adipose Tissue Index (SATI) (CCC = 0.99). In conclusion, this article presented an automated and accurate system to segment the cross-sectional muscle and adipose areas at the L3 lumbar vertebral level on abdominal CT. Future work will include fine-tuning the algorithm and minimizing the outliers.
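The Concordance Correlation Coefficient used above to validate the automated indices against the manual reference is a standard agreement statistic; a minimal implementation of Lin's CCC follows (illustrative, not the study's code):

```python
def ccc(x, y):
    """Lin's Concordance Correlation Coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Penalizes both poor correlation and systematic bias between raters."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0 for perfect agreement
```

Unlike Pearson's r, CCC drops below 1 when one method is systematically offset or scaled relative to the other, which is why it is preferred for method-agreement studies such as this one.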


Subject(s)
Deep Learning , Multiple Trauma , Adipose Tissue/diagnostic imaging , Cross-Sectional Studies , Humans , Multiple Trauma/diagnostic imaging , Muscle, Skeletal/diagnostic imaging , Tomography, X-Ray Computed
20.
Sensors (Basel) ; 21(6)2021 Mar 22.
Article in English | MEDLINE | ID: mdl-33809972

ABSTRACT

A rotator cuff tear (RCT) is an injury in adults that causes difficulty in moving, weakness, and pain. Only limited diagnostic tools such as magnetic resonance imaging (MRI) and ultrasound imaging (UI) systems can be utilized for an RCT diagnosis. Although UI offers performance comparable to other diagnostic instruments such as MRI at a lower cost, speckle noise can degrade the image resolution. Conventional vision-based algorithms exhibit inferior performance in segmenting diseased regions in UI. To achieve a better segmentation of diseased regions in UI, deep-learning-based diagnostic algorithms have been developed; however, they have not yet reached a level of performance acceptable for application in orthopedic surgeries. In this study, we developed a novel end-to-end fully convolutional neural network, denoted Segmentation Model Adopting a pRe-trained Classification Architecture (SMART-CA), with a novel integrated positive loss function (IPLF), to accurately locate RCTs during an orthopedic examination using UI. Using the pre-trained network, SMART-CA can extract remarkably distinct features that cannot be extracted with a normal encoder, and can therefore improve segmentation accuracy. In addition, unlike conventional loss functions, which are ill-suited to optimizing deep learning models on imbalanced datasets such as the RCT dataset, the IPLF can efficiently optimize SMART-CA. Experimental results showed that SMART-CA achieved an improved precision, recall, and Dice coefficient of 0.604 (+38.4%), 0.942 (+14.0%), and 0.736 (+38.6%), respectively, for RCT segmentation from normal ultrasound images, and an improved precision, recall, and Dice coefficient of 0.337 (+22.5%), 0.860 (+15.8%), and 0.484 (+28.5%), respectively, for RCT segmentation from ultrasound images with severe speckle noise.
The experimental results demonstrated that the IPLF outperforms other conventional loss functions, and that the proposed SMART-CA optimized with the IPLF shows better performance than other state-of-the-art networks for RCT segmentation, with high robustness to speckle noise.
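The Dice coefficient reported above is the standard overlap metric for segmentation masks; a minimal computation on flat binary masks follows (this is the generic metric, not the paper's code, and says nothing about the IPLF internals, which are not detailed here):

```python
def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary pixel masks,
    given as flat lists of 0/1 values."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * inter / size if size else 1.0  # both masks empty => perfect

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) = 0.666...
```

Because Dice normalizes by the combined mask sizes rather than the full image area, it stays informative on imbalanced data such as small tear regions in a large ultrasound frame, which is why segmentation papers report it alongside precision and recall.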


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer , Rotator Cuff/diagnostic imaging , Ultrasonography