ABSTRACT
Ensuring the safety and efficacy of chemical compounds is crucial in small-molecule drug development. In the later stages of drug development, toxic compounds pose a significant challenge, costing valuable resources and time. Early and accurate prediction of compound toxicity using deep learning models offers a promising solution to mitigate these risks during drug discovery. In this study, we present the development of several deep-learning models aimed at evaluating different types of compound toxicity, including acute toxicity, carcinogenicity, hERG cardiotoxicity (cardiotoxicity mediated by the human ether-a-go-go-related gene channel), hepatotoxicity, and mutagenicity. To address the inherent variations in data size, label type, and distribution across different types of toxicity, we employed diverse training strategies. Our first approach involved utilizing a graph convolutional network (GCN) regression model to predict acute toxicity, which achieved notable performance with Pearson R values of 0.76, 0.74, and 0.65 for the intraperitoneal, intravenous, and oral administration routes, respectively. Furthermore, we trained multiple GCN binary classification models, each tailored to a specific type of toxicity. These models achieved area under the curve (AUC) scores of 0.69, 0.77, 0.88, and 0.79 for predicting carcinogenicity, hERG cardiotoxicity, mutagenicity, and hepatotoxicity, respectively. Additionally, we used an approved-drug dataset to determine an appropriate threshold value for the prediction score in model usage. We integrated these models into a virtual screening pipeline to assess their effectiveness in identifying potential low-toxicity drug candidates. Our findings indicate that this deep learning approach has the potential to significantly reduce the cost and risk associated with drug development by expediting the selection of compounds with low toxicity profiles.
Therefore, the models developed in this study hold promise as critical tools for early drug candidate screening and selection.
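The core of both the regression and classification models above is the graph-convolution step over a molecular graph. As an illustration only (the abstract gives no architecture details), a single Kipf-and-Welling-style GCN layer can be sketched in NumPy; the toy adjacency matrix, atom features, and weight shapes are all hypothetical:

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbor features, then apply
    the weight matrix and a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])          # self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)           # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ feats @ weights, 0.0)

# Toy "molecule": 3 atoms in a chain, 2 input features per atom.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
rng = np.random.default_rng(0)
weights = rng.standard_normal((2, 4))
hidden = gcn_layer(adj, feats, weights)
# A mean readout over atoms would feed a regression (e.g. LD50) or
# classification head in a full model.
graph_embedding = hidden.mean(axis=0)
```

Stacking a few such layers followed by a readout and a dense head is the usual pattern for both the regression and the per-toxicity binary classifiers.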
Subjects
Deep Learning , Humans , Drug Discovery/methods , Animals , Drug-Related Side Effects and Adverse Reactions , Cardiotoxicity/etiology
ABSTRACT
In sub-Saharan Africa, acute-onset severe malarial anaemia (SMA) is a critical challenge, particularly affecting children under five. The acute drop in haematocrit in SMA is thought to be driven by an increased phagocytotic pathological process in the spleen, leading to the presence of distinct red blood cells (RBCs) with altered morphological characteristics. We hypothesized that these RBCs could be detected systematically and at scale in peripheral blood films (PBFs) by harnessing the capabilities of deep learning models. Assessment of PBFs by a microscopist does not scale for this task and is subject to variability. Here we introduce a deep learning model, leveraging a weakly supervised Multiple Instance Learning framework, to Identify SMA (MILISMA) through the presence of morphologically changed RBCs. MILISMA achieved a classification accuracy of 83% (receiver operating characteristic area under the curve [AUC] of 87%; precision-recall AUC of 76%). More importantly, MILISMA's capabilities extend to identifying statistically significant morphological distinctions (p < 0.01) in RBC descriptors. Our findings are enriched by visual analyses, which underscore the unique morphological features of SMA-affected RBCs when compared to non-SMA cells. This model-aided detection and characterization of RBC alterations could enhance the understanding of SMA's pathology and refine SMA diagnostic and prognostic evaluation processes at scale.
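The weakly supervised setup treats each blood film as a bag of RBC instances with only a film-level label. A minimal sketch of attention-based MIL pooling (in the spirit of Ilse et al.; the embedding dimensions and random values are hypothetical, not MILISMA's actual architecture):

```python
import numpy as np

def attention_mil_pool(instances, v, w):
    """Score every instance (here, an RBC embedding), softmax the
    scores into attention weights, and return the weighted bag
    embedding plus the weights themselves."""
    scores = np.tanh(instances @ v) @ w          # (n_instances,)
    scores = scores - scores.max()               # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()
    bag = attn @ instances                       # weighted sum of instances
    return bag, attn

rng = np.random.default_rng(1)
cells = rng.standard_normal((50, 8))   # 50 RBC embeddings, 8-dim each
v = rng.standard_normal((8, 4))
w = rng.standard_normal(4)
bag_embedding, attn = attention_mil_pool(cells, v, w)
```

A bag-level classifier on `bag_embedding` then predicts SMA vs non-SMA, while the attention weights indicate which cells drove the decision, which is what enables the morphological analysis described above.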
Subjects
Anemia , Deep Learning , Erythrocytes , Humans , Erythrocytes/pathology , Anemia/blood , Anemia/pathology , Anemia/diagnosis , Female , Male , Child, Preschool , Malaria/blood , Malaria/diagnosis , Malaria/pathology , Infant , Child
ABSTRACT
This research delves into the potential of a tocopherol-based nanoemulsion as a therapeutic agent for cardiovascular diseases (CVD) through an in-depth molecular docking analysis. The study focuses on elucidating the molecular interactions between tocopherol and seven key proteins (1O8a, 4YAY, 4DLI, 1HW9, 2YCW, 1BO9 and 1CX2) that play pivotal roles in CVD development. Through rigorous in silico docking investigations, the binding affinities, inhibitory potentials and interaction patterns of tocopherol with these target proteins were assessed. The findings revealed significant interactions, particularly with 4YAY, displaying a robust binding energy of -6.39 kcal/mol and a promising Ki value of 20.84 µM. Notable interactions were also observed with 1HW9, 4DLI, 2YCW and 1CX2, further indicating tocopherol's potential therapeutic relevance. In contrast, no interaction was observed with 1BO9. Furthermore, an examination of the common residues of 4YAY bound to tocopherol was carried out, highlighting key intermolecular hydrophobic bonds that contribute to the interaction's stability. Tocopherol complies with the pharmacokinetic (Lipinski's and Veber's) rules for oral bioavailability and proves safe, non-toxic and non-carcinogenic. The deep learning-based protein language models ESM-1b and ProtT5 were then leveraged for input encodings to predict interaction sites between the 4YAY protein and tocopherol, achieving highly accurate predictions of these critical protein-ligand interactions. This study not only advances the understanding of these interactions but also highlights deep learning's immense potential in molecular biology and drug discovery. It underscores tocopherol's promise as a cardiovascular disease management candidate, shedding light on its molecular interactions and drug-like characteristics.
Subjects
Cardiovascular Diseases , Deep Learning , Molecular Docking Simulation , Cardiovascular Diseases/drug therapy , Cardiovascular Diseases/metabolism , Humans , Tocopherols/chemistry , Tocopherols/metabolism , Protein Binding , Proteins/chemistry , Proteins/metabolism
ABSTRACT
OBJECTIVE: Brain metastases (BM) are associated with poor prognosis and increased mortality rates, making them a significant clinical challenge. Studying BMs can aid in improving early detection and monitoring. Systematic comparisons of the anatomical distributions of BMs from different primary cancers, however, remain largely unavailable. METHODS: To test the hypothesis that anatomical BM distributions differ based on primary cancer type, we analyzed the spatial coordinates of BMs for five different primary cancer types along principal component (PC) axes. The dataset includes 3949 intracranial metastases, labeled by primary cancer type and with six features. We employed PC coordinates to highlight the distinctions between the various cancer types, and we utilized different machine learning (ML) and deep learning (DL) models (random forest [RF], support vector machine [SVM], and TabNet) to establish the relationship between primary cancer diagnosis, spatial coordinates of BMs, age, and target volume. RESULTS: Our findings revealed that PC1 aligns most with the Y axis, followed by the Z axis, and has minimal correlation with the X axis. Based on PC1-versus-PC2 plots, we identified notable differences in anatomical spreading patterns between breast and lung cancer, as well as between breast and renal cancer. In contrast, renal and lung cancer, as well as lung cancer and melanoma, showed similar patterns. Our ML and DL results demonstrated high accuracy in distinguishing BM distributions for different primary cancers, with the SVM algorithm achieving 97% accuracy using a polynomial kernel and TabNet achieving 96%. The RF algorithm ranked PC1 as the most important discriminating feature. CONCLUSIONS: In summary, our results support accurate multiclass ML classification of brain metastases distribution.
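The principal-component step described above can be sketched with plain NumPy. The coordinates below are synthetic stand-ins (with most variance deliberately placed along Y to mimic the reported PC1 alignment), not the actual BM data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for BM coordinates: variance largest along Y, then Z.
coords = rng.standard_normal((500, 3)) * np.array([0.5, 3.0, 1.5])

centered = coords - coords.mean(axis=0)
cov = centered.T @ centered / (len(coords) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort PCs by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
pc1 = eigvecs[:, 0]
dominant_axis = int(np.argmax(np.abs(pc1)))  # 0 = X, 1 = Y, 2 = Z
projected = centered @ eigvecs               # PC coordinates per metastasis
```

The `projected` columns are the PC1/PC2 coordinates used for the per-cancer scatter plots, and they (plus age and target volume) would form the feature matrix fed to the RF/SVM/TabNet classifiers.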
Subjects
Brain Neoplasms , Deep Learning , Machine Learning , Humans , Brain Neoplasms/secondary , Female , Male , Neoplasms/pathology , Algorithms , Middle Aged
ABSTRACT
BACKGROUND: Chronic obstructive pulmonary disease (COPD) is underdiagnosed with the current gold-standard measure, the pulmonary function test (PFT). A more sensitive and simple option for early detection and severity evaluation of COPD could benefit practitioners and patients. METHODS: In this multicenter retrospective study, frontal chest X-ray (CXR) images and related clinical information of 1055 participants were collected and processed. Different deep learning (DL) algorithms and transfer learning models were trained to classify COPD based on clinical data and CXR images from 666 subjects, and validated on an internal test set of 284 participants. External testing with 105 participants was also performed to verify the generalization ability of the learning algorithms in diagnosing COPD. Meanwhile, the model was further used to evaluate disease severity of COPD by predicting different grades. RESULTS: The Ensemble model showed an AUC of 0.969 in distinguishing COPD by simultaneously extracting fused features of clinical parameters and CXR images in the internal test, better than models that used clinical parameters (AUC = 0.963) or images (AUC = 0.946) only. For the external test set, the AUC slightly declined to 0.934 in predicting COPD based on clinical parameters and CXR images. When applying the Ensemble model to determine disease severity of COPD, the AUC reached 0.894 for three-class and 0.852 for five-class grading, respectively. CONCLUSION: The present study used DL algorithms to screen for COPD and predict disease severity based on CXR imaging and clinical parameters. The models showed good performance, and the approach might be an effective low-radiation-dose case-finding tool for COPD diagnosis and staging.
Subjects
Deep Learning , Pulmonary Disease, Chronic Obstructive , Humans , Retrospective Studies , X-Rays , Thorax
ABSTRACT
Artificial intelligence (AI) models can play a more effective role in managing patients given the explosion of digital health records available in the healthcare industry. Machine-learning (ML) and deep-learning (DL) techniques are two methods used to develop predictive models that serve to improve the clinical processes in the healthcare industry. These models are also implemented in medical imaging machines to empower them with intelligent decision systems that aid physicians in their decisions and increase the efficiency of their routine clinical practices. The physicians who are going to work with these machines need insight into what happens in the background of the implemented models and how they work. More importantly, they need to be able to interpret the models' predictions, assess their performance, and compare them to find the one with the best performance and fewest errors. This review aims to provide an accessible overview of key evaluation metrics for physicians without AI expertise. In this review, we developed four real-world diagnostic AI models (two ML and two DL models) for breast cancer diagnosis using ultrasound images. Then, 23 of the most commonly used evaluation metrics were reviewed in plain terms for physicians. Finally, all metrics were calculated and used practically to interpret and evaluate the outputs of the models. Accessible explanations and practical applications empower physicians to effectively interpret, evaluate, and optimize AI models to ensure safety and efficacy when integrated into clinical practice.
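Several of the commonly reviewed evaluation metrics follow directly from the four confusion-matrix counts. A small sketch (the counts below are hypothetical, purely for illustration):

```python
import math

def metrics_from_counts(tp, fp, tn, fn):
    """A few standard binary-classification metrics computed directly
    from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                     # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "mcc": mcc}

# Hypothetical test-set counts for a binary breast-ultrasound classifier.
m = metrics_from_counts(tp=80, fp=10, tn=95, fn=15)
```

Accuracy alone can mislead on imbalanced data, which is why balanced summaries such as F1 and MCC are usually reported alongside it.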
ABSTRACT
PURPOSE: Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge. In transformers with a self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two unique techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS: Under this framework, multiple resolution images were used as input with multi-depth backbones to retain the benefits of both high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn the long-range dependencies in the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer attends to a small set of key positions, which are identified by a self-attention mechanism. We evaluated the performance of the proposed framework on an NSCLC dataset containing 563 training images and 113 test images. Our novel deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS: The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC during the clinical workflow.
This type of framework may potentially facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS: Our deep learning framework, based on CNN and transformer, performs auto-segmentation efficiently and could potentially assist clinical radiotherapy workflow.
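The Dice score used to benchmark these segmentation models is the overlap between the predicted and reference binary masks. A sketch with toy 2D masks standing in for the 3D GTV volumes:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks (1 = GTV voxel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2D "CT slice" masks; real use would be full 3D volumes.
truth = np.zeros((10, 10), dtype=int)
truth[2:8, 2:8] = 1            # 36 reference voxels
pred = np.zeros((10, 10), dtype=int)
pred[3:8, 2:8] = 1             # 30 predicted voxels, all inside the truth
score = dice_score(pred, truth)
```

Hausdorff distance, the second reported metric, instead measures the worst-case boundary disagreement between the two masks, so the two metrics capture complementary failure modes.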
Subjects
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Tomography, X-Ray Computed , Neural Networks, Computer , Algorithms , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/radiotherapy , Image Processing, Computer-Assisted/methods
ABSTRACT
Sign language is designed as a natural communication method to convey messages among the deaf community. In the study of sign language recognition through wearable sensors, the data sources are limited, and the data acquisition process is complex. This research aims to collect an American sign language dataset with a wearable inertial motion capture system and realize the recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences is gathered from 3 volunteers. In the design of the recognition network, the model mainly consists of three layers: convolutional neural network, bi-directional long short-term memory, and connectionist temporal classification. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. In the design of the translation network, the encoder-decoder structured model is mainly based on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences with reliable inertial data from the device.
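The connectionist temporal classification (CTC) layer named above lets the network emit per-frame labels that are then collapsed into a word sequence. Greedy CTC decoding is short enough to sketch fully (the frame labels and word ids below are hypothetical):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Greedy CTC decoding: collapse consecutive repeated labels, then
    drop blanks, turning per-frame predictions into a label sequence."""
    decoded, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            decoded.append(lab)
        prev = lab
    return decoded

# Per-frame argmax labels from a hypothetical CNN-BiLSTM output
# (0 is the CTC blank; 5, 3, 9 stand for sign-word ids).
frames = [0, 5, 5, 0, 3, 3, 3, 0, 0, 9, 9]
words = ctc_greedy_decode(frames)   # -> [5, 3, 9]
```

The blank symbol is what allows a genuinely repeated word to be expressed, since `[5, 0, 5]` decodes to two occurrences while `[5, 5]` decodes to one.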
Subjects
Sign Language , Wearable Electronic Devices , Humans , United States , Motion Capture , Neurons , Perception
ABSTRACT
Deep learning models provide a powerful method for accurate and stable prediction of water quality in rivers, which is crucial for the intelligent management and control of the water environment. To increase the accuracy of predicting water quality parameters and to learn more about the impact of complex spatial information in deep learning models, this study proposes two ensemble models, TNX (with temporal attention) and STNX (with spatio-temporal attention), based on the seasonal-trend decomposition (STL) method, to predict water quality using geo-sensory time series data. Dissolved oxygen, total phosphorus, and ammonia nitrogen were predicted at short steps (1 h and 2 h) and long steps (12 h and 24 h) for seven water quality monitoring sites along a river. The ensemble model TNX improved performance by 2.1%-6.1% and 4.3%-22.0% relative to the best baseline deep learning model for short-step and long-step water quality prediction, and it can capture the variation pattern of water quality parameters by predicting only the trend component of the raw data after STL decomposition. The STNX model, with spatio-temporal attention, obtained 0.5%-2.4% and 2.3%-5.7% higher performance than the TNX model for short-step and long-step prediction, and this improvement was more effective in mitigating the prediction-shift patterns of long-step prediction. Moreover, the model interpretation results consistently demonstrated positive relationship patterns across all monitoring sites, although the influence of specific monitoring sites diminished as the distance between the predicted and input monitoring sites increased. This study provides an ensemble modeling approach based on STL decomposition for improving short-step and long-step prediction of river water quality parameters and for understanding the impact of complex spatial information on deep learning models.
Subjects
Deep Learning , Rivers , Water Quality , Rivers/chemistry , Environmental Monitoring/methods , Phosphorus/analysis , Models, Theoretical
ABSTRACT
Drought is an extended shortage of rainfall resulting in water scarcity that affects a region's social and economic conditions through environmental deterioration. Its adverse environmental effects can be minimised by timely prediction. Drought detection traditionally uses only ground observation stations, but satellite-based supervision scans huge stretches of land mass and offers highly effective monitoring. This paper puts forward a novel drought monitoring system using satellite imagery, considering the droughts that devastated agriculture in Thanjavur district, Tamil Nadu, between 2000 and 2022. The proposed method uses Holt Winter Conventional 2D-Long Short-Term Memory (HW-Conv2DLSTM) to forecast meteorological and agricultural droughts. It employs precipitation index datasets from Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), the MODIS 11A1 temperature index, and the MODIS 13Q1 vegetation index. It extracts the time series data from satellite images using trend and seasonal patterns and smooths them using the Holt-Winters alpha, beta, and gamma parameters. Finally, an effective drought prediction procedure is developed using Conv2D-LSTM to calculate the spatiotemporal correlation amongst drought indices. HW-Conv2DLSTM offers a better R2 value of 0.97. It holds promise as an effective computer-assisted strategy to predict droughts and maintain agricultural productivity, which is vital to feed the ever-increasing human population.
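The Holt-Winters smoothing stage, with its alpha (level), beta (trend), and gamma (seasonal) parameters, can be sketched in plain Python. The series, period, and parameter values below are illustrative, not the study's actual configuration:

```python
def holt_winters_additive(series, period, alpha, beta, gamma):
    """Additive Holt-Winters smoothing: level, trend, and seasonal
    components updated by the alpha, beta, and gamma parameters.
    Returns the one-step-ahead fitted values."""
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    season = [series[i] - level for i in range(period)]
    fitted = []
    for i, y in enumerate(series):
        s = season[i % period]
        fitted.append(level + trend + s)      # forecast before updating
        last_level = level
        level = alpha * (y - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % period] = gamma * (y - level) + (1 - gamma) * s
    return fitted

# Hypothetical rainfall-index series with a seasonal period of 4.
data = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]
smooth = holt_winters_additive(data, period=4, alpha=0.5, beta=0.3, gamma=0.2)
```

In the pipeline described above, the smoothed index series would then feed the Conv2D-LSTM stage that models spatiotemporal correlation across the drought indices.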
Subjects
Agriculture , Droughts , Environmental Monitoring , Satellite Imagery , Seasons , Agriculture/methods , Environmental Monitoring/methods , India , Forecasting
ABSTRACT
Pathogenic bacteria present a major threat to human health, causing various infections and illnesses, and in some cases, even death. The accurate identification of these bacteria is crucial, but it can be challenging due to the similarities between different species and genera. This is where automated classification using convolutional neural network (CNN) models can help, as it can provide more accurate, authentic, and standardized results. In this study, we aimed to create a larger and balanced dataset by image patching, and we applied different variations of CNN models, including training from scratch, fine-tuning, and weight adjustment, together with data augmentation through random rotation, reflection, and translation. The best results were achieved through augmentation and fine-tuning of deep models. We also modified existing architectures, such as InceptionV3 and MobileNetV2, to better capture complex features. The robustness of the proposed ensemble model was evaluated using two data splits (7:2:1 and 6:2:2) to see how performance changed as the test data was increased from 10% to 20%. In both cases, the model exhibited exceptional performance. For the 7:2:1 split, the model achieved an accuracy of 99.91%, F-score of 98.95%, precision of 98.98%, recall of 98.96%, and MCC of 98.92%. For the 6:2:2 split, the model yielded an accuracy of 99.94%, F-score of 99.28%, precision of 99.31%, recall of 98.96%, and MCC of 99.26%. This demonstrates that automatic classification using the ensemble model can be a valuable tool for diagnostic staff and microbiologists in accurately identifying pathogenic bacteria, which in turn can help control epidemics and minimize their social and economic impact.
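The dataset-enlarging image-patching step can be sketched as a sliding window over a 2D image; the patch size and stride here are arbitrary choices for illustration:

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a window over a 2D grayscale image and collect patches,
    the patching idea used to enlarge and balance small datasets."""
    h, w = image.shape
    patches = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append(image[r:r + patch, c:c + patch])
    return np.stack(patches)

img = np.arange(64, dtype=float).reshape(8, 8)   # toy micrograph
patches = extract_patches(img, patch=4, stride=2)
```

Because overlapping patches multiply the sample count per source image, the stride can also be varied per class to balance under-represented genera before the rotation/reflection/translation augmentation is applied.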
Subjects
Epidemics , Humans , Neural Networks, Computer
ABSTRACT
BACKGROUND: Laryngopharyngeal cancer (LPC) includes laryngeal and hypopharyngeal cancer, whose early diagnosis can significantly improve the prognosis and quality of life of patients. Pathological biopsy of suspicious cancerous tissue under the guidance of laryngoscopy is the gold standard for diagnosing LPC. However, this subjective examination largely depends on the skills and experience of laryngologists, which increases the possibility of missed diagnoses and repeated unnecessary biopsies. We aimed to develop and validate a deep convolutional neural network-based Laryngopharyngeal Artificial Intelligence Diagnostic System (LPAIDS) for automatically identifying LPC in real time in both laryngoscopy white-light imaging (WLI) and narrow-band imaging (NBI) images, to improve the diagnostic accuracy of LPC by reducing diagnostic variation among non-expert laryngologists. METHODS: All 31,543 laryngoscopic images from 2382 patients were categorised into training, verification, and test sets to develop, validate, and internally test LPAIDS. Another 25,063 images from five other hospitals were used as external tests. Overall, 551 videos were used to evaluate the real-time performance of the system, and 200 randomly selected videos were used to compare the diagnostic performance of LPAIDS with that of laryngologists. Two deep-learning models using either WLI (model W) or NBI (model N) images were constructed for comparison with LPAIDS. RESULTS: LPAIDS had a higher diagnostic performance than models W and N, with accuracies of 0.956 and 0.949 in the internal image and video tests, respectively. The robustness and stability of LPAIDS were validated in external sets, with area under the receiver operating characteristic curve values of 0.965-0.987. In the laryngologist-machine competition, LPAIDS achieved an accuracy of 0.940, which was comparable to expert laryngologists and outperformed other laryngologists with varying qualifications.
CONCLUSIONS: LPAIDS provided high accuracy and stability in detecting LPC in real time, showing great potential for improving the diagnostic accuracy of LPC by reducing diagnostic variation among non-expert laryngologists.
Subjects
Artificial Intelligence , Neoplasms , Humans , Quality of Life , Laryngoscopy/methods , Neural Networks, Computer , ROC Curve
ABSTRACT
The significant surge in Internet of Things (IoT) devices presents substantial challenges to network security. Hackers are afforded a larger attack surface to exploit as more devices become interconnected. Furthermore, the sheer volume of data these devices generate can overwhelm conventional security systems, compromising their detection capabilities. To address these challenges posed by the increasing number of interconnected IoT devices and the data overload they generate, this paper presents an approach based on meta-learning principles to identify attacks within IoT networks. The proposed approach constructs a meta-learner model by stacking the predictions of three Deep-Learning (DL) models: RNN, LSTM, and CNN. Subsequently, the identification by the meta-learner relies on various methods, namely Logistic Regression (LR), Multilayer Perceptron (MLP), Support Vector Machine (SVM), and Extreme Gradient Boosting (XGBoost). To assess the effectiveness of this approach, extensive evaluations are conducted using the IoT dataset from 2020. The XGBoost model showcased outstanding performance, achieving the highest accuracy (98.75%), precision (98.30%), F1-measure (98.53%), and AUC-ROC (98.75%). On the other hand, the SVM model exhibited the highest recall (98.90%), representing a slight improvement of 0.14% over the performance achieved by XGBoost.
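The stacking construction described above can be sketched as follows: each base model's probability outputs become one column of the meta-learner's feature matrix. For brevity, the meta-learner here is a simple average-and-threshold stand-in for the LR/MLP/SVM/XGBoost options named in the abstract, and all probabilities are invented:

```python
import numpy as np

def stack_predictions(base_preds):
    """Stack per-model probability predictions column-wise to build the
    meta-learner's feature matrix (one column per base model)."""
    return np.column_stack(base_preds)

# Hypothetical attack probabilities from three base models (RNN/LSTM/CNN
# stand-ins) on five network flows.
rnn_p = np.array([0.9, 0.2, 0.8, 0.1, 0.7])
lstm_p = np.array([0.8, 0.3, 0.9, 0.2, 0.6])
cnn_p = np.array([0.7, 0.1, 0.7, 0.3, 0.8])
meta_X = stack_predictions([rnn_p, lstm_p, cnn_p])

# Simplest possible meta-learner: average the base probabilities and
# threshold. A trained LR/MLP/SVM/XGBoost model would replace this and
# learn to weight the base models unequally.
meta_pred = (meta_X.mean(axis=1) >= 0.5).astype(int)
```

In practice the base-model columns are generated out-of-fold on held-out data so the meta-learner does not merely memorize the base models' training errors.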
ABSTRACT
In the realm of hyperspectral image classification, the pursuit of heightened accuracy and comprehensive feature extraction has led to the formulation of an advanced architectural paradigm. This study proposes a unified model that synergistically leverages the capabilities of three distinct branches: a swin transformer, a convolutional neural network, and an encoder-decoder. The main objective is to facilitate multiscale feature learning, a pivotal facet of hyperspectral image classification, with each branch specializing in a unique facet of multiscale feature extraction. The swin transformer, recognized for its competence in distilling long-range dependencies, captures structural features across different scales; simultaneously, the convolutional neural network undertakes localized feature extraction, engendering nuanced spatial information preservation. The encoder-decoder branch undertakes comprehensive analysis and reconstruction, fostering the assimilation of both multiscale spectral and spatial intricacies. To evaluate our approach, we conducted experiments on publicly available datasets and compared the results with state-of-the-art methods. Our proposed model obtains the best classification results among the compared methods, with overall accuracies of 96.87%, 98.48%, and 98.62% on the Xuzhou, Salinas, and LK datasets, respectively.
ABSTRACT
Color face images are often transmitted over public channels, where they are vulnerable to tampering attacks. To address this problem, the present paper introduces a novel scheme called Authentication and Color Face Self-Recovery (AuCFSR) for ensuring the authenticity of color face images and recovering the tampered areas in these images. AuCFSR uses a new two-dimensional hyperchaotic system called the two-dimensional modular sine-cosine map (2D MSCM) to embed authentication and recovery data into the least significant bits of color image pixels. This produces high-quality output images with a high security level. When a tampered color face image is detected, AuCFSR executes two deep learning models: the CodeFormer model to enhance the visual quality of the recovered color face image and the DeOldify model to improve the colorization of this image. Experimental results demonstrate that AuCFSR outperforms recent similar schemes in tamper detection accuracy, security level, and visual quality of the recovered images.
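The least-significant-bit embedding at the heart of the scheme can be sketched in a few lines. In AuCFSR the embedded bit stream is derived from the 2D MSCM hyperchaotic map; here it is a fixed hypothetical watermark:

```python
def embed_lsb(pixels, bits):
    """Write one authentication/recovery bit into the least significant
    bit of each 8-bit pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Read the embedded bits back out of the pixel values."""
    return [p & 1 for p in pixels]

cover = [200, 131, 54, 97, 255, 0]       # hypothetical pixel values
watermark = [1, 0, 1, 1, 0, 1]           # stand-in for the MSCM bit stream
stego = embed_lsb(cover, watermark)
recovered = extract_lsb(stego)           # should equal the watermark
distortion = max(abs(a - b) for a, b in zip(cover, stego))  # at most 1
```

Because only the lowest bit of each channel changes, the embedding is visually imperceptible, which is why such schemes can carry authentication data without degrading the cover face image.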
ABSTRACT
The Internet of Things (IoT) has transformed our interaction with technology and introduced security challenges. The growing number of IoT attacks poses a significant threat to organizations and individuals. This paper proposes an approach for detecting attacks on IoT networks using ensemble feature selection and deep learning models. Ensemble feature selection combines filter techniques such as variance threshold, mutual information, Chi-square, ANOVA, and L1-based methods. By leveraging the strengths of each technique, the ensemble is formed as the union of the selected features. However, this union operation may overlook redundancy and irrelevance, potentially leading to a larger feature set. To address this, a wrapper algorithm called Recursive Feature Elimination (RFE) is applied to refine the feature selection. The impact of the selected feature set on the performance of Deep Learning (DL) models (CNN, RNN, GRU, and LSTM) is evaluated using the IoT-Botnet 2020 dataset, considering detection accuracy, precision, recall, F1-measure, and False Positive Rate (FPR). All DL models achieved high detection accuracy, precision, recall, and F1-measure values, ranging from 97.05% to 97.87%, 96.99% to 97.95%, 99.80% to 99.95%, and 98.45% to 98.87%, respectively.
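The union-of-filters idea can be sketched with two simple filters: a variance threshold, plus an absolute-correlation ranking standing in for the mutual-information/Chi-square/ANOVA filters named above. The data are synthetic and the thresholds arbitrary; in the paper, RFE would then prune this union:

```python
import numpy as np

def variance_select(X, threshold):
    """Keep features whose variance exceeds the threshold."""
    return set(np.where(X.var(axis=0) > threshold)[0])

def correlation_select(X, y, k):
    """Rank features by absolute Pearson correlation with the label; a
    simple filter standing in for mutual information or Chi-square."""
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return set(np.argsort(corrs)[::-1][:k])

rng = np.random.default_rng(7)
y = rng.integers(0, 2, 200)               # binary attack labels
X = rng.standard_normal((200, 6))
X[:, 0] += 3 * y                          # strongly informative feature
X[:, 1] = 0.01 * X[:, 1]                  # near-constant feature

# Ensemble = union of the per-filter selections.
union = variance_select(X, 0.05) | correlation_select(X, y, 2)
```

As the abstract notes, taking a union keeps everything any filter liked, so redundant features survive; that is exactly the gap the subsequent RFE wrapper step is meant to close.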
ABSTRACT
Deep learning networks powered by AI are essential predictive tools relying on image data availability and processing hardware advancements. However, little attention has been paid to explainable AI (XAI) in application fields, including environmental management. This study develops an explainability framework with a triadic structure focused on input, AI model, and output. The framework provides three main contributions: (1) a context-based augmentation of input data to maximize generalizability and minimize overfitting; (2) direct monitoring of AI model layers and parameters to use leaner (lighter) networks suitable for edge-device deployment; and (3) an output explanation procedure focusing on the interpretability and robustness of predictive decisions by AI networks. These contributions significantly advance the state of the art in XAI for environmental management research, offering implications for improved understanding and utilization of AI networks in this field.
Subjects
Conservation of Natural Resources , Deep Learning
ABSTRACT
In this article, the maximum and minimum daily temperature data for Indian cities were examined, together with the predicted diurnal temperature range (DTR) over monthly time horizons. RClimDex, a user interface for computing extreme indices, was used to support the estimation because it allows statistical analysis and comparison of climatological elements such as time series, means, extremes, and trends. Over the 69-year study period, an increasingly erratic DTR trend was observed in the study area. This study investigates the suitability of three deep neural networks for one-step-ahead DTR time series (DTRTS) forecasting, namely the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU), alongside the auto-regressive integrated moving average with exogenous inputs (ARIMAX) model. To evaluate the effectiveness of the models on the testing set, six statistical error indicators were chosen: root mean square error (RMSE), mean absolute error (MAE), coefficient of correlation (R), percent bias (PBIAS), modified index of agreement (md), and relative index of agreement (rd). The Wilson score approach was used for a quantitative uncertainty analysis of the DTR prediction error. The findings show that the LSTM outperforms the other models in terms of its capacity to forget, remember, and update information. It is more accurate on datasets with longer sequences and displays noticeably more volatility during its gradient descent. A sensitivity analysis of the LSTM model, which used RMSE values as an output and considered different look-back periods, showed that the amount of history used to fit a time-series forecast model has a direct impact on the model's performance. As a result, this model can be applied as a fresh, trustworthy deep learning method for DTRTS forecasting.
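Four of the six error indicators named above can be computed directly from the observed and forecast series; the DTR values below are invented purely for illustration:

```python
import numpy as np

def forecast_errors(obs, sim):
    """RMSE, MAE, correlation coefficient R, and PBIAS between an
    observed series and a simulated (forecast) series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]              # coefficient of correlation
    pbias = 100.0 * (sim - obs).sum() / obs.sum()
    return {"rmse": rmse, "mae": mae, "r": r, "pbias": pbias}

observed = [10.2, 11.5, 9.8, 12.1, 10.9]   # hypothetical DTR values (°C)
forecast = [10.0, 11.9, 9.5, 12.4, 11.0]
e = forecast_errors(observed, forecast)
```

RMSE penalizes large misses more heavily than MAE (it is always at least as large), while PBIAS isolates systematic over- or under-forecasting; reporting them together is what makes the model comparison in the study meaningful.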
Subjects
Deep Learning , Temperature , Cities , Environmental Monitoring , Forecasting , Uncertainty
ABSTRACT
Nowadays, in modern societies, a sedentary lifestyle is almost inevitable for the majority of the population. Long hours of sitting, especially in wrong postures, may result in health complications. A smart chair with the capability to identify sitting postures can help reduce health risks induced by a modern lifestyle. This paper presents the design, realization, and evaluation of a new smart chair sensor system capable of sitting posture identification. The system consists of eight pressure sensors placed on the chair's sitting cushion and backrest. A signal acquisition board was designed from scratch to acquire the data generated by the pressure sensors and transmit them via a Wi-Fi network to a purposely developed graphical user interface, which monitors and stores the acquired sensor data on a computer. The designed system was tested by means of an extensive sitting experiment involving 40 subjects, and from the acquired data, the classification of the respective sitting postures out of eight possible postures was performed. The performance of seven deep-learning algorithms was then assessed, and the best accuracy of 91.68% was achieved by an echo memory network model. The designed smart chair sensor system is simple, versatile, low cost, and accurate, and it can easily be deployed in several smart chair environments, in both public and private contexts.
Subjects
Deep Learning , Algorithms , Humans , Posture , Sedentary Behavior , Sitting Position
ABSTRACT
Hashtags have been an integral element of social media platforms over the years and are widely used to promote and organize content and to connect users. Despite the intensive use of hashtags, there is no enforced convention for using congruous tags, which leads to much unrelated content appearing in hashtag searches. The presence of mismatched content under a hashtag creates many problems for individuals and brands. Although several methods have been presented to address the problem by recommending hashtags based on users' interests, the detection and analysis of the characteristics of this repetitive content with irrelevant hashtags have rarely been addressed. To this end, we propose a novel hybrid deep learning model for hashtag incongruity detection that fuses visual and textual modalities. We fine-tune BERT and ResNet50 pre-trained models to encode textual and visual information simultaneously. We further attempt to show the capability of logo detection and face recognition in discriminating images. To extract faces, we introduce a pipeline that ranks faces based on the number of times they appear on Instagram accounts, using face clustering. Moreover, we conduct our analysis and experiments on a dataset of Instagram posts that we collected from hashtags related to brands and celebrities. Unlike existing works, we analyze this content from both content and user perspectives and show a significant difference between the data. In light of our results, we show that our multimodal model outperforms other models and demonstrate the effectiveness of object detection in detecting mismatched information.