ABSTRACT
Systemic activation of toll-like receptor 3 (TLR3) signaling using poly(I:C), a TLR3 agonist, drives ethanol consumption in several rodent models, while global knockout of Tlr3 reduces drinking in C57BL/6J male mice. To determine if brain TLR3 pathways are involved in drinking behavior, we used CRISPR/Cas9 genome editing to generate a Tlr3 floxed (Tlr3F/F) mouse line. After sequence confirmation and functional validation of Tlr3 brain transcripts, we injected Tlr3F/F male mice with an adeno-associated virus expressing Cre recombinase (AAV5-CMV-Cre-GFP) to knock down Tlr3 in the medial prefrontal cortex, nucleus accumbens, or dorsal striatum (DS). Only Tlr3 knockdown in the DS decreased two-bottle choice, every-other-day (2BC-EOD) ethanol consumption. DS-specific deletion of Tlr3 also increased intoxication and prevented acute functional tolerance to ethanol. In contrast, poly(I:C)-induced activation of TLR3 signaling decreased intoxication in male C57BL/6J mice, consistent with its ability to increase 2BC-EOD ethanol consumption in these mice. We also found that TLR3 was highly colocalized with DS neurons. AAV5-Cre transfection occurred predominantly in neurons, with only minimal transfection in astrocytes and microglia. Collectively, our previous and current studies show that activating or inhibiting TLR3 signaling produces opposite effects on acute responses to ethanol and on ethanol consumption. Whereas previous studies used global knockout or systemic TLR3 activation (which alter both peripheral and brain innate immune responses), the current results provide new evidence that brain TLR3 signaling regulates ethanol drinking. We propose that activation of TLR3 signaling in DS neurons increases ethanol consumption and that a striatal TLR3 pathway is a potential target to reduce excessive drinking.
Subjects
Ethanol, Toll-Like Receptor 3, Mice, Male, Animals, Toll-Like Receptor 3/metabolism, Inbred C57BL Mice, Ethanol/pharmacology, Signal Transduction, Alcohol Drinking/metabolism, Poly I-C/pharmacology
ABSTRACT
An efficient microbial conversion for the simultaneous synthesis of multiple high-value compounds, such as biosurfactants and enzymes, is one of the most promising routes to an economical bioprocess, offering a marked reduction in production cost. Although biosurfactant production and enzyme production have each been explored extensively, there are limited reports on prediction and optimization studies for the simultaneous production of biosurfactants and other industrially important enzymes, including lipase, protease, and amylase. Enzymes are well suited to an integrated production process with biosurfactants, as many common industrial processes and applications are catalysed by both classes of molecules. However, the complexity of microbial metabolism complicates the production process. This study details the work done on biosurfactant and enzyme co-production and explores the application and scope of various statistical tools and methodologies in this area of research. The use of advanced computational tools is yet to be explored for the optimization of downstream strategies in the co-production process. Given the complexity of the co-production process and the various new methodologies based on artificial intelligence (AI) being invented, the scope of AI in shaping the biosurfactant-enzyme co-production process is immense and would lead not only to efficient and rapid optimization, but to economical extraction of multiple biomolecules as well.
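To make the kind of statistical optimization surveyed here concrete, the sketch below fits a second-order response surface (the standard RSM model form) to hypothetical co-production data and searches it for a joint optimum. Every factor, range, and yield value is an invented placeholder, not data from the studies reviewed.

```python
# Minimal sketch of response-surface optimization for a co-production process.
# All factor ranges and yield data are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical design matrix: carbon source (g/L), pH, temperature (deg C)
X = np.array([
    [20, 6.5, 30], [40, 6.5, 30], [20, 7.5, 30],
    [40, 7.5, 30], [20, 6.5, 37], [40, 6.5, 37],
    [20, 7.5, 37], [40, 7.5, 37], [30, 7.0, 33.5],
])
# Hypothetical responses: biosurfactant yield (g/L) and lipase activity (U/mL)
y_biosurf = np.array([1.2, 1.8, 1.5, 2.4, 1.1, 1.9, 1.4, 2.2, 2.6])
y_lipase = np.array([210, 340, 280, 450, 190, 330, 260, 420, 480])

# Quadratic (second-order) response surface, the usual RSM model form
poly = PolynomialFeatures(degree=2, include_bias=False)
Xq = poly.fit_transform(X)
model_b = LinearRegression().fit(Xq, y_biosurf)
model_l = LinearRegression().fit(Xq, y_lipase)

# Grid-search the fitted surfaces for a composite optimum (equal weighting)
grid = np.array([[c, p, t] for c in np.linspace(20, 40, 21)
                 for p in np.linspace(6.5, 7.5, 11)
                 for t in np.linspace(30, 37, 15)])
gq = poly.transform(grid)
score = model_b.predict(gq) / y_biosurf.max() + model_l.predict(gq) / y_lipase.max()
best = grid[np.argmax(score)]
print(f"Predicted optimum: carbon={best[0]:.1f} g/L, pH={best[1]:.2f}, T={best[2]:.1f} C")
```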
Subjects
Artificial Intelligence, Surface-Active Agents, Surface-Active Agents/metabolism, Fermentation, Lipase/metabolism, Endopeptidases
ABSTRACT
BACKGROUND: The World Health Organization (WHO) Labour Care Guide (LCG) is a paper-based labour monitoring tool designed to facilitate the implementation of WHO's latest guidelines for effective, respectful care during labour and childbirth. Implementing the LCG into routine intrapartum care requires a strategy that improves healthcare provider practices during labour and childbirth. Such a strategy might optimize the use of Caesarean section (CS), along with potential benefits for the use of other obstetric interventions, maternal and perinatal health outcomes, and women's experience of care. However, the effects of a strategy to implement the LCG have not been evaluated in a randomised trial. This study aims to: (1) develop and optimise a strategy for implementing the LCG (formative phase); and (2) evaluate the implementation of the LCG strategy compared with usual care (trial phase). METHODS: In the formative phase, we will co-design the LCG strategy with key stakeholders, informed by facility assessments and provider surveys, and field-test it in one hospital. The LCG strategy includes an LCG training program, ongoing supportive supervision from senior clinical staff, and audit and feedback using the Robson Classification. We will then conduct a stepped-wedge, cluster-randomized pilot trial in four public hospitals in India to evaluate the effect of the LCG strategy compared to usual care (simplified WHO partograph). The primary outcome is the CS rate in nulliparous women with singleton, term, cephalic pregnancies in spontaneous labour (Robson Group 1). Secondary outcomes include clinical and process-of-care outcomes, as well as women's experience of care. We will also conduct a process evaluation during the trial, using standardized facility assessments, in-depth interviews and surveys with providers, audits of completed LCGs, labour ward observations, and document reviews. An economic evaluation will consider implementation costs and cost-effectiveness. DISCUSSION: Findings of this trial will guide clinicians, administrators, and policymakers on how to effectively implement the LCG, and what (if any) effects the LCG strategy has on process of care, health, and experience outcomes. The trial findings will inform the rollout of the LCG internationally. TRIAL REGISTRATION: CTRI/2021/01/030695 (Protocol version 1.4, 25 April 2022).
The new WHO Labour Care Guide (LCG) is an innovative partograph that emphasises women-centred, evidence-based care during labour and childbirth. Together with clinicians working at four hospitals in India, we will develop and test a strategy to implement the LCG into routine care in the labour wards of these hospitals. We will use a randomised trial design in which the LCG strategy is introduced sequentially in each of the four hospitals, in a random order. We will collect data on all women giving birth and their newborns during this period and analyse whether the LCG strategy has any effect on the use of Caesarean section, women's and newborns' health outcomes, and women's experiences during labour and childbirth. While the trial is being conducted, we will also collect qualitative and quantitative data from doctors, nurses, and midwives working in these hospitals, to understand their perspectives and experiences of using the LCG in their day-to-day work. In addition, we will collect economic data to understand how much the LCG strategy costs, and how much money it might save if it is effective. Through this study, our international collaboration will generate critical evidence and innovative tools to support implementation of the LCG in other countries.
Subjects
Cesarean Section, Parturition, Female, Humans, Pregnancy, Hospitals, Pilot Projects, Randomized Controlled Trials as Topic, World Health Organization, Pragmatic Clinical Trials as Topic
ABSTRACT
Fused deposition modelling (FDM)-based 3D printing is a trending technology in the era of Industry 4.0 that manufactures products layer by layer. It offers remarkable benefits such as rapid prototyping, cost-effectiveness, flexibility, and a sustainable manufacturing approach. Alongside these advantages, a few defects can occur in FDM products during the printing stage, and diagnosing defects that occur during 3D printing is a challenging task. Proper data acquisition and monitoring systems need to be developed for effective fault diagnosis. In this paper, the authors propose a low-cost multi-sensor data acquisition (DAQ) system for detecting various faults in 3D-printed products. The DAQ system was developed using an Arduino microcontroller that collects real-time multi-sensor signals from vibration, current, and sound sensors. Different fault conditions were introduced into the 3D-printed products to analyze their effect on the captured sensor data. Time- and frequency-domain analyses were performed on the captured data to create feature vectors, and the most significant features were selected using the chi-square method to train the CNN model. The K-means clustering algorithm was used for data clustering, and the bell (normal distribution) curve was used to define individual sensor threshold values under normal conditions. The CNN model was used to classify normal and fault-condition data, giving an accuracy of around 94%; model performance was evaluated using recall, precision, and F1 score.
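A minimal sketch of the chi-square feature selection and CNN classification stage described above might look as follows; the feature counts, class count, and synthetic data are illustrative assumptions, not the paper's DAQ measurements.

```python
# Illustrative sketch of a feature-selection + CNN fault-classification pipeline.
# Shapes, class count, and data are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = rng.random((600, 64))          # 600 windows x 64 time/frequency features
y = rng.integers(0, 4, 600)        # 4 classes: normal + 3 assumed fault conditions

# Chi-square selection requires non-negative inputs, hence min-max scaling
X_scaled = MinMaxScaler().fit_transform(X)
X_sel = SelectKBest(chi2, k=32).fit_transform(X_scaled, y)

# Small 1D CNN over the selected feature vector
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 1)),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_sel[..., np.newaxis], y, epochs=5, validation_split=0.2)
```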
ABSTRACT
Human ideas and sentiments are mirrored in facial expressions. Expressions give the observer a plethora of social cues, such as the subject's focus of attention, intention, motivation, and mood, which can help in developing better interactive solutions for online platforms. This could be helpful in teaching children and in cultivating a better interactive connection between teachers and students, given the increasing shift toward online education platforms brought on by the COVID-19 pandemic. To this end, the authors propose children's emotion recognition based on visual cues, with a justified reasoning model built on explainable AI. The authors used two datasets: the LIRIS Children Spontaneous Facial Expression Video Database, and a novel author-created dataset of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset achieved only 75% accuracy, and no study had taken this dataset further; the authors achieved the highest accuracy of 89.31% on LIRIS and 90.98% on their own dataset. The authors also observed that the facial structure of children differs from that of adults, and that children often do not express a given emotion with the same facial configuration that adults do. Hence, the authors used 468 3D landmark points to create two additional versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset variants were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models. Beyond comparing all dataset types across CNN models, the authors explain, for every CNN-dataset combination, how test images are perceived by the deep-learning models using explainable artificial intelligence (XAI), which helps localize the features contributing to particular emotions. The authors used three XAI methods, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users establish the reason for an emotion prediction by showing the contribution of individual features.
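As an illustration of the Grad-CAM step named above, here is a hedged sketch using a generic, untrained MobileNetV2 backbone with an assumed 7-class emotion head; it is not the authors' trained model, only the standard gradient-weighted class-activation pattern.

```python
# Hedged Grad-CAM sketch. Backbone, layer name, class count, and input image
# are placeholders, not the study's models or data.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights=None, classes=7)  # 7 emotions (assumed)
grad_model = tf.keras.Model(base.input,
                            [base.get_layer("Conv_1").output, base.output])

def grad_cam(img, class_idx):
    """Return a heatmap of the regions driving the prediction for class_idx."""
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img[np.newaxis])
        loss = preds[:, class_idx]
    grads = tape.gradient(loss, conv_out)                 # d(class score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                              # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(np.random.rand(224, 224, 3).astype("float32"), class_idx=3)
```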
Subjects
COVID-19, Deep Learning, Adult, Child, Animals, Humans, Artificial Intelligence, Pandemics, Emotions
ABSTRACT
Glaucoma is a multifactorial disease leading to irreversible blindness. Primary open-angle glaucoma (POAG) is the most common form and is associated with elevation of intraocular pressure (IOP). Reduced aqueous humor (AH) outflow due to trabecular meshwork (TM) dysfunction is responsible for IOP elevation in POAG. Extracellular matrix (ECM) accumulation, actin cytoskeletal reorganization, and stiffening of the TM are associated with increased outflow resistance. Transforming growth factor (TGF) β2, a profibrotic cytokine, is known to play an important role in the development of ocular hypertension (OHT) in POAG. An appropriate mouse model is critical to understanding the underlying molecular mechanism of TGFβ2-induced OHT. To achieve this, the TM can be targeted with recombinant viral vectors to express a gene of interest. Lentiviruses (LV) are known for their tropism towards the TM, with stable transgene expression and low immunogenicity. We therefore developed a novel mouse model of IOP elevation using LV gene transfer of active human TGFβ2 in the TM. We developed an LV vector encoding active hTGFβ2C226,228S under the control of a cytomegalovirus (CMV) promoter. Adult C57BL/6J mice were injected intravitreally with LV expressing null or hTGFβ2C226,228S. We observed a significant increase in IOP 3 weeks post-injection compared to control eyes, with an average delta change of 3.3 mmHg. IOP stayed elevated up to 7 weeks post-injection, which correlated with a significant drop in the AH outflow facility (40.36%). Increased expression of active TGFβ2 was observed in both AH and anterior segment samples of injected mice. Morphological assessment of the mouse TM region via hematoxylin and eosin (H&E) staining and direct ophthalmoscopy examination revealed no visible signs of inflammation or other ocular abnormalities in the injected eyes. Furthermore, transduction of primary human TM cells with LV_hTGFβ2C226,228S exhibited alterations in actin cytoskeleton structures, including the formation of F-actin stress fibers and cross-linked actin networks (CLANs), which are signature arrangements of the actin cytoskeleton observed in the stiffer, fibrotic-like TM. Our study demonstrates a mouse model of sustained IOP elevation via lentiviral gene delivery of active hTGFβ2C226,228S that induces TM dysfunction and outflow resistance.
Assuntos
Glaucoma de Ângulo Aberto , Hipertensão Ocular , Actinas/metabolismo , Animais , Humor Aquoso/metabolismo , Células Cultivadas , Modelos Animais de Doenças , Glaucoma de Ângulo Aberto/genética , Glaucoma de Ângulo Aberto/metabolismo , Pressão Intraocular , Camundongos , Camundongos Endogâmicos C57BL , Hipertensão Ocular/genética , Hipertensão Ocular/metabolismo , Malha Trabecular/metabolismo , Fator de Crescimento Transformador beta2/metabolismoRESUMO
BACKGROUND AND AIM: The present study was carried out to compare the efficacy of continuous epidural infusion of two amide local anesthetics, ropivacaine and bupivacaine, each with fentanyl, for postoperative analgesia in major abdominal surgeries. MATERIAL AND METHODS: A total of 60 patients scheduled for major abdominal surgery were randomized into two study groups, B and R, with 30 patients in each. All patients were administered general anesthesia after placement of an epidural catheter. Patients received a continuous epidural infusion of either 0.25% bupivacaine with 1 µg/ml fentanyl (Group B) or 0.25% ropivacaine with 1 µg/ml fentanyl (Group R) at a rate of 6 ml/h intraoperatively. Postoperatively, they received 0.125% bupivacaine with 1 µg/ml fentanyl (Group B) or 0.125% ropivacaine with 1 µg/ml fentanyl (Group R) at a rate of 6 ml/h. Hemodynamic parameters, visual analog scale (VAS) scores, level of sensory block, and degree of motor block (based on the Bromage scale) were monitored for 24 h postoperatively. RESULTS: Hemodynamic parameters and VAS scores were comparable in the two groups. The level of sensory block was higher in the bupivacaine group. More patients had a higher Bromage score in the bupivacaine group (23.3%) than in the ropivacaine group (6.7%), though the difference was not statistically significant. CONCLUSION: Both ropivacaine and bupivacaine at a concentration of 0.125% with fentanyl 1 µg/ml are equally safe and effective in providing postoperative analgesia, with minimal motor block.
ABSTRACT
Handwritten text recognition (HTR) stands as a prominent and challenging research domain within computer vision and image processing, holding significant implications for diverse applications. Among these, it finds use in reading bank checks and prescriptions and in deciphering characters on various forms. Optical character recognition (OCR) technology, specifically tailored for handwritten documents, plays a pivotal role in translating characters from a range of file formats, encompassing both word and image documents. Challenges in HTR encompass intricate layout designs, varied handwriting styles, limited datasets, and low achieved accuracy. Recent advancements in deep learning and machine learning algorithms, coupled with vast repositories of unprocessed data, have propelled researchers to remarkable progress in HTR. This paper aims to address the challenges in handwritten text recognition by proposing a hybrid approach. The primary objective is to enhance the accuracy of recognizing handwritten text from images. Through the integration of Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) with a Connectionist Temporal Classification (CTC) decoder, the results indicate substantial improvement. The proposed hybrid model achieved impressive accuracies of 98.50% and 98.80% on the IAM and RIMES datasets, respectively. This underscores the potential and efficacy of combining these advanced neural network architectures to enhance handwritten text recognition accuracy.
• The proposed method introduces a hybrid approach for handwritten text recognition, employing CNN and BiLSTM with a CTC decoder.
• Results showcase accuracies of 98.50% and 98.80% on the IAM and RIMES datasets, emphasizing the potential of this model for more accurate recognition of handwritten text from images.
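A compact sketch of the CNN + BiLSTM + CTC pipeline could be structured as below; the image size, charset size, and layer widths are assumptions, not the paper's reported configuration.

```python
# Minimal CNN + BiLSTM + CTC sketch for line-level HTR, assuming 128x32
# grayscale images and a hypothetical 80-symbol character set.
import tensorflow as tf

NUM_CHARS = 80  # assumed charset size; the CTC blank is added separately

inputs = tf.keras.Input(shape=(128, 32, 1))               # width x height x channels
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
# Collapse the height axis so each remaining column becomes one timestep
x = tf.keras.layers.Reshape((32, 8 * 64))(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(x)
logits = tf.keras.layers.Dense(NUM_CHARS + 1)(x)          # +1 for the CTC blank

model = tf.keras.Model(inputs, logits)

def ctc_loss(labels, logits, label_len, logit_len):
    """CTC loss: aligns unsegmented label sequences to per-timestep logits."""
    return tf.reduce_mean(tf.nn.ctc_loss(
        labels=labels, logits=logits, label_length=label_len,
        logit_length=logit_len, logits_time_major=False, blank_index=-1))
```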
ABSTRACT
Object detection methods based on deep learning have been used in a variety of sectors, including banking, healthcare, e-governance, and academia. In recent years, much attention has been paid to research on text detection and recognition from different scenes or images in unstructured document processing. The article's novelty lies in the detailed discussion and implementation of various transfer learning-based backbone architectures for printed text recognition. In this research article, the authors compared the ResNet50, ResNet50V2, ResNet152V2, Inception, Xception, and VGG19 backbone architectures, with preprocessing techniques such as data resizing, normalization, and noise removal, on a standard OCR Kaggle dataset. The top three backbone architectures were then selected based on the accuracy achieved, and hyperparameter tuning was performed to obtain more accurate results. Xception performed well compared with the ResNet, Inception, VGG19, and MobileNet architectures, achieving high evaluation scores with an accuracy of 98.90% and a minimum loss of 0.19. As per existing research in this domain, transfer learning-based backbone architectures applied to printed or handwritten text recognition have so far been under-represented in the literature. We split the dataset into 80 percent for training and 20 percent for testing, trained each backbone architecture model for the same number of epochs, and found that the Xception architecture achieved higher accuracy than the others. In addition, the ResNet50V2 model gave higher accuracy (96.92%) than the ResNet152V2 model (96.34%).
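A minimal version of this transfer-learning setup, assuming an Xception backbone with a hypothetical 36-class character head (digits plus uppercase letters), could look like this; hyperparameters are illustrative only.

```python
# Transfer-learning sketch with a frozen Xception backbone.
# Image size and the 36-class head are assumptions about the OCR dataset.
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(96, 96, 3))
base.trainable = False                      # freeze pretrained features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(36, activation="softmax"),   # assumed 36 character classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Typical recipe: train the head first, then unfreeze the top of the backbone
# at a lower learning rate for fine-tuning (the hyperparameter-tuning stage).
```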
ABSTRACT
Digitization has created demand for highly efficient handwritten document recognition systems. A handwritten document consists of digits, text, symbols, diagrams, etc., and digits are an essential element of handwritten documents. Accurate recognition of handwritten digits is vital for effective communication and data analysis. Various researchers have attempted to address this issue with modern convolutional neural network (CNN) techniques. However, once trained, CNN filter weights remain fixed, so despite high identification accuracy the process cannot flexibly adapt to input changes. Hence, computer vision researchers have recently become interested in Vision Transformers (ViTs) and Multilayer Perceptrons (MLPs), and the shortcomings of CNNs gave rise to a hybrid-model revolution that combines the best elements of the two fields. This paper analyzes how a hybrid convolutional ViT model affects the ability to recognize handwritten digits. Since real-time data contains noise, distortions, and varying writing styles, both cleaned and uncleaned handwritten digit images are used for evaluation in this paper. The accuracy of the proposed method is compared with state-of-the-art techniques, and the results show that the proposed model achieves the highest recognition accuracy. Probable solutions for recognizing other aspects of handwritten documents are also discussed.
• Analyzed the effect of a convolutional vision transformer on cleaned and real-time handwritten digit images.
• The model's performance improved with the application of cross-validation and hyperparameter tuning.
• The results show that the proposed model is robust, feasible, and effective on cleaned and uncleaned handwritten digits.
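One way such a hybrid convolutional ViT can be assembled is sketched below, with a convolutional stem producing patch tokens for a single transformer encoder block; all layer sizes are assumptions, not the paper's architecture.

```python
# Hybrid convolutional-ViT sketch for 28x28 digit images. A conv stem embeds
# 7x7 patches into 64-d tokens; one transformer block processes the tokens.
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))
# Conv stem: 7x7 patches embedded to 64-d tokens (4x4 = 16 tokens)
x = tf.keras.layers.Conv2D(64, kernel_size=7, strides=7)(inputs)
tokens = tf.keras.layers.Reshape((16, 64))(x)

# One transformer encoder block (pre-norm) over the token sequence
h = tf.keras.layers.LayerNormalization()(tokens)
h = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(h, h)
tokens = tokens + h                               # residual connection
h = tf.keras.layers.LayerNormalization()(tokens)
h = tf.keras.layers.Dense(128, activation="gelu")(h)
h = tf.keras.layers.Dense(64)(h)
tokens = tokens + h                               # residual connection

x = tf.keras.layers.GlobalAveragePooling1D()(tokens)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
```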
ABSTRACT
In the digital age, the proliferation of health-related information online has heightened the risk of misinformation, posing substantial threats to public well-being. This research conducts a meticulous comparative analysis of classification models for detecting health misinformation. The study evaluates the performance of traditional machine learning models and advanced graph convolutional networks (GCN) across critical algorithmic metrics. The results provide a comprehensive understanding of each algorithm's effectiveness in identifying health misinformation and valuable insights for combating the pervasive spread of false health information in the digital landscape. GCN with TF-IDF gives the best result, as shown in the results section.
• The research method involves a comparative analysis of classification algorithms to detect health misinformation, exploring traditional machine learning models and graph convolutional networks.
• The algorithms employed were the Passive Aggressive Classifier, Random Forest, Decision Tree, Logistic Regression, Light GBM, GCN, GCN with BERT, GCN with TF-IDF, and GCN with Word2Vec. Accuracies achieved: Passive Aggressive Classifier 85.75%, Random Forest 86%, Decision Tree 81.30%, Light GBM 83.29%, standard GCN 84.53%, GCN with BERT 85.00%, GCN with TF-IDF 93.86%, and GCN with Word2Vec 81.00%.
• Algorithmic performance metrics, including accuracy, precision, recall, and F1-score, were systematically evaluated to assess the efficacy of each model, focusing on understanding the strengths and limitations of the different approaches. Graph convolutional networks (GCNs) with TF-IDF embedding performed best, achieving an accuracy of 93.86%.
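For concreteness, here is a sketch of one of the named baselines, TF-IDF features with a Passive Aggressive Classifier; the best-performing GCN-with-TF-IDF model is not reproduced here, and the two example texts are invented placeholders.

```python
# Baseline sketch: TF-IDF + Passive Aggressive Classifier.
# Texts and labels are hypothetical, not the study's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

texts = ["Vaccines cause X disease, doctors hide it",          # placeholder misinformation
         "Clinical trials show the vaccine reduces severe illness"]
labels = [1, 0]                                                 # 1 = misinformation

clf = make_pipeline(TfidfVectorizer(stop_words="english", max_features=5000),
                    PassiveAggressiveClassifier(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["New study says drinking bleach cures flu"]))
```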
ABSTRACT
Cancer is a heterogeneous disease that results from genetic alteration of cell cycle and proliferation controls. Identifying mutations that drive cancer, understanding cancer-type specificities, and delineating how driver mutations interact with each other to establish disease are vital for identifying therapeutic vulnerabilities. Such cancer-specific patterns and gene co-occurrences can be identified by studying tumor genome sequences, and networks have proven effective in uncovering relationships between sequences. We present two network-based approaches to identify driver gene patterns among tumor samples. The first approach relies on analysis using the Directed Weighted All Nearest Neighbors (DiWANN) model, a variant of the sequence similarity network, and the second approach uses bipartite network analysis. A data reduction framework was implemented to extract the minimal relevant information for the sequence similarity network analysis, in which a transformed reference sequence is generated for constructing the driver gene network. This data reduction process, combined with the efficiency of the DiWANN network model, greatly lowered the computational cost (in terms of execution time and memory usage) of generating the networks, enabling us to work at a much larger scale than previously possible. The DiWANN network helped us identify cancer types in which samples were more closely connected to each other, suggesting they are less heterogeneous and potentially susceptible to a common drug. The bipartite network analysis provided insight into gene associations and co-occurrences. We identified genes that were broadly mutated in multiple cancer types and mutations exclusive to only a few. Additionally, weighted one-mode gene projections of the bipartite networks revealed a pattern of occurrence of driver genes in different cancers. Our study demonstrates that network-based approaches can be an effective tool in cancer genomics. The analysis identifies co-occurring and exclusive driver genes and mutations for specific cancer types, providing a better understanding of the driver genes that lead to tumor initiation and evolution.
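The weighted one-mode projection step can be illustrated with a toy bipartite sample-gene graph; the sample IDs and edges below are invented, not data from the study.

```python
# Toy bipartite sample-gene network and its weighted one-mode gene projection.
# All samples and mutation edges are hypothetical placeholders.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
samples = ["S1", "S2", "S3"]
genes = ["TP53", "KRAS", "EGFR"]
B.add_nodes_from(samples, bipartite=0)
B.add_nodes_from(genes, bipartite=1)
# Edge = driver mutation observed in that tumor sample (hypothetical)
B.add_edges_from([("S1", "TP53"), ("S1", "KRAS"),
                  ("S2", "TP53"), ("S2", "EGFR"), ("S3", "KRAS")])

# Weighted one-mode projection onto genes: edge weights count shared samples,
# revealing co-occurring driver genes
G = bipartite.weighted_projected_graph(B, genes)
for u, v, d in G.edges(data=True):
    print(u, v, d["weight"])   # e.g. TP53 and KRAS co-occur in one sample
```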
ABSTRACT
A rolling bearing is a crucial element within rotating machinery, and its smooth operation profoundly influences the overall well-being of the equipment. Consequently, analyzing its operational condition is crucial to prevent production losses or, in extreme cases, potential fatalities due to catastrophic failures. Accurate estimates of the Remaining Useful Life (RUL) of rolling bearings ensure manufacturing safety while also leading to cost savings.
• This paper proposes an intelligent deep learning-based framework for remaining useful life estimation of bearings on the basis of informed detection of anomalies.
• The paper demonstrates the setup of an experimental bearing test rig and the collection of bearing condition monitoring data such as vibration data.
• Advanced hybrid models of Encoder-Decoder LSTM demonstrate high forecasting accuracy in RUL estimation.
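An anomaly-informed encoder-decoder LSTM of the kind referenced above might be sketched as follows, with reconstruction error on healthy data used to flag degradation onset; the window size, layer widths, threshold, and placeholder data are assumptions, not the test-rig measurements.

```python
# LSTM encoder-decoder sketch for anomaly detection on vibration windows:
# high reconstruction error flags degradation onset, after which RUL
# estimation can begin. Shapes and the threshold are assumed.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 100, 2          # e.g. 100 samples of two vibration channels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),                          # encoder -> latent vector
    tf.keras.layers.RepeatVector(WINDOW),              # repeat latent per timestep
    tf.keras.layers.LSTM(32, return_sequences=True),   # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(FEATURES)),
])
model.compile(optimizer="adam", loss="mse")

healthy = np.random.randn(500, WINDOW, FEATURES).astype("float32")  # placeholder data
model.fit(healthy, healthy, epochs=5, verbose=0)       # learn "normal" behaviour

def is_anomalous(window, threshold=0.5):
    """Flag a window whose reconstruction error exceeds the tuned threshold."""
    err = np.mean((model.predict(window[np.newaxis], verbose=0) - window) ** 2)
    return err > threshold
```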
ABSTRACT
Attention mechanisms have recently gained immense importance in the natural language processing (NLP) world. The technique highlights the parts of the input text that the NLP task (such as translation) must pay "attention" to. Inspired by this, some researchers have recently applied deep-learning-based attention mechanism techniques from the NLP domain to predictive maintenance. In contrast to deep-learning-based solutions, Industry 4.0 predictive maintenance solutions, which often rely on edge computing, demand lighter predictive models. With this objective, we investigated the adaptation of a simpler, extremely fast, and compute-resource-friendly "Nadaraya-Watson estimator based" attention method. We develop a method to predict tool wear of a milling machine using this attention mechanism and demonstrate, with the help of heat maps, how the attention mechanism highlights regions that assist in predicting the onset of tool wear. We validate the effectiveness of this adaptation on the benchmark IEEE DataPort PHM Society dataset by comparing against other comparatively "lighter" machine learning techniques - Bayesian Ridge, Gradient Boosting Regressor, SGD Regressor, and Support Vector Regressor. Our experiments indicate that the proposed Nadaraya-Watson attention mechanism performed best, with an MAE of 0.069, RMSE of 0.099, and R2 of 83.40%, compared to the next best technique, Gradient Boosting Regressor, with figures of 0.100, 0.138, and 66.51%, respectively. Additionally, it produced a lighter and faster model as well.
• We propose a Nadaraya-Watson estimator based "attention mechanism", applied to a predictive maintenance problem.
• Unlike the deep-learning based attention mechanisms from the NLP domain, our method creates fast, light, and high-performance models, suitable for edge computing devices, and therefore supports the Industry 4.0 initiative.
• The method is validated on real tool-wear data of a milling machine.
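Because the Nadaraya-Watson estimator is a standard kernel method, its use as an attention mechanism can be sketched in a few lines: the prediction is a kernel-weighted average of training targets, and the normalized weights double as interpretable attention scores (the basis of the heat maps). The data and bandwidth below are synthetic placeholders, not the PHM dataset.

```python
# Nadaraya-Watson estimator as a lightweight attention mechanism.
# Synthetic data; the bandwidth is an assumed hyperparameter.
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    """Return the prediction and attention weights for one query point."""
    # Gaussian kernel scores between the query and every training point
    logits = -0.5 * ((x_query - x_train) / bandwidth) ** 2
    attention = np.exp(logits) / np.exp(logits).sum()   # softmax over kernel scores
    return attention @ y_train, attention

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 50))                      # e.g. cutting time (synthetic)
y = 0.02 * x**3 + rng.normal(0, 0.05, 50)               # e.g. tool wear (synthetic)

pred, attn = nadaraya_watson(2.5, x, y)
print(f"predicted wear at t=2.5: {pred:.3f}")
print("most-attended training points:", np.argsort(attn)[-3:])
```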
ABSTRACT
In today's world of managing multimedia content, dealing with the amount of CCTV footage poses challenges related to storage, accessibility, and efficient navigation. To tackle these issues, we propose an encompassing technique for summarizing videos that merges machine-learning techniques with user engagement. Our methodology consists of two phases, each bringing improvements to video summarization. In Phase I, we introduce a method for summarizing videos based on keyframe detection and behavioral analysis, utilizing technologies such as YOLOv5 for object recognition, Deep SORT for object tracking, and a Single Shot Detector (SSD) for creating video summaries. In Phase II, we present a user-interest-based video summarization system driven by machine learning. By incorporating user preferences into the summarization process, we enhance these techniques with personalized content curation. Leveraging tools such as NLTK, OpenCV, TensorFlow, and the EfficientDET model enables our system to generate customized video summaries tailored to user preferences. This approach not only enhances user interaction but also efficiently handles the overwhelming amount of video data on digital platforms. By combining these two methodologies, we advance the application of machine-learning techniques while offering a solution to the complex challenges of managing multimedia data.
ABSTRACT
In recent decades, abstractive text summarization using multimodal input has attracted many researchers due to its capability of gathering information from various sources to create a concise summary. However, existing multimodal summarization methodologies provide summaries only for short videos and give poor results on lengthy videos. To address these issues, this research presents Multimodal Abstractive Summarization using Bidirectional Encoder Representations from Transformers (MAS-BERT) with an attention mechanism. The purpose of video summarization is to speed up searching through large collections of videos, so that users can quickly decide whether a video is relevant by reading its summary. Initially, the data is obtained from the publicly available How2 dataset and encoded using a Bidirectional Gated Recurrent Unit (Bi-GRU) encoder and a Long Short Term Memory (LSTM) encoder. The textual data embedded in the embedding layer is encoded using the bidirectional GRU encoder, and the audio and video features are encoded with the LSTM encoder. A BERT-based attention mechanism is then used to combine the modalities, and finally a Bi-GRU-based decoder summarizes the multimodal input. Experimental results show that the proposed MAS-BERT achieved a better Rouge-1 score of 60.2, whereas the existing Decoder-only Multimodal Transformer (D-MmT) and the Factorized Multimodal Transformer based Decoder Only Language model (FLORAL) achieved 49.58 and 56.89, respectively. Our work facilitates users by providing better contextual information and user experience, and would help video-sharing platforms with customer retention by allowing users to search for relevant videos by looking at their summaries.
ABSTRACT
PURPOSE: The aim of this study was to assess and compare the likelihood of relapse one year after LeFort I advancement surgery in patients with and without cleft lip and palate. METHODS: A retrospective observational study was performed that included two groups of participants who underwent LeFort I maxillary advancement. Group 1 included 10 non-cleft subjects and Group 2 included 21 subjects with cleft palate. These maxillary-deficient patients were selected and operated on using a technique in which only sagittal displacement was intended. Patients who underwent additional mandibular surgery, significant vertical or transverse alterations, or both were excluded. Pre-operative (T1), immediately post-operative (T2), and minimum one-year follow-up (T3) lateral cephalograms were studied for each group. Skeletal and dental stability after LeFort I surgery at a minimum of one-year follow-up was evaluated in cleft palate and non-cleft patients. RESULTS: For the given sample size, relapse tendencies showed statistically significant differences between cleft palate and non-cleft patients after maxillary advancement. The sella-nasion angle and the horizontal overlap of the maxillary and mandibular incisors (overjet) decreased by 2 degrees and 0.9 mm, respectively, in the cleft palate group, versus 1.10 degrees and 0.40 mm in the non-cleft group. CONCLUSIONS: After maxillary advancement with LeFort I osteotomy and miniplate fixation, some degree of relapse was detected in both the cleft palate and non-cleft groups at one year post-operatively for the given sample size. The cleft palate group displayed greater relapse tendencies than the non-cleft group.
ABSTRACT
Parkinson's disease (PD), the second most prevalent neurodegenerative disorder, is projected to see a significant rise in incidence over the next three decades. Precise treatment of PD remains a formidable challenge, prompting ongoing research into early diagnostic methodologies. Network pharmacology, a burgeoning field grounded in systems biology, examines the intricate networks of biological systems to identify critical signal nodes, facilitating the development of multi-target therapeutic molecules. This approach systematically maps the components of Parkinson's disease, thereby reducing its complexity. In this review, we explore the application of network pharmacology workflows in PD, discuss the techniques employed in the field, and evaluate the current advancements and status of network pharmacology in the context of Parkinson's disease. These comprehensive insights pave new paths for exploring early disease biomarkers and for developing diagnostics through holistic in silico, in vitro, in vivo, and clinical studies.
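A toy illustration of one workflow step, identifying critical signal nodes by centrality in a compound-target network, is sketched below; all nodes and edges are invented placeholders, not curated PD data.

```python
# Toy network-pharmacology sketch: build a compound-target-disease graph
# and rank hub ("signal") nodes by betweenness centrality.
import networkx as nx

G = nx.Graph()
# Hypothetical compound-target and target-disease associations
G.add_edges_from([
    ("compound_A", "DRD2"), ("compound_A", "MAOB"),
    ("compound_B", "MAOB"), ("compound_B", "SNCA"),
    ("DRD2", "PD_motor_symptoms"), ("MAOB", "PD_motor_symptoms"),
    ("SNCA", "PD_neurodegeneration"),
])

# Critical "signal nodes" = high-centrality hubs in the interaction network
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {score:.3f}")
```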
Subjects
Network Pharmacology, Parkinson Disease, Parkinson Disease/drug therapy, Humans, Animals, Systems Biology, Antiparkinson Agents/therapeutic use, Antiparkinson Agents/pharmacology, Biomarkers
ABSTRACT
The ability to derive retinal ganglion cells (RGCs) from human pluripotent stem cells (hPSCs) has led to numerous advances in the field of retinal research, with great potential for the use of hPSC-derived RGCs for studies of human retinal development, in vitro disease modeling, drug discovery, and cell replacement therapeutics. Of all these possibilities, the use of hPSC-derived RGCs as a human-relevant platform for in vitro disease modeling has received the greatest attention, due to its translational relevance as well as the immediacy with which results may be obtained compared to more complex applications like cell replacement. While several studies to date have focused on hPSC-derived RGCs carrying genetic variants associated with glaucoma or other optic neuropathies, many of these have largely described cellular phenotypes, with only limited advancement into exploring the dysfunctional cellular pathways that result from the disease-associated gene variants. Thus, to further advance this field of research, in the current study we leveraged an isogenic hPSC model with a glaucoma-associated mutation in the Optineurin (OPTN) protein, which plays a prominent role in autophagy. We identified an impairment of autophagic-lysosomal degradation and decreased mTORC1 signaling via activation of the stress sensor AMPK, along with subsequent neurodegeneration, in OPTN(E50K) RGCs differentiated from hPSCs, and we further validated some of these findings in a mouse model of ocular hypertension. Pharmacological inhibition of mTORC1 in hPSC-derived RGCs recapitulated disease-related neurodegenerative phenotypes in otherwise healthy RGCs, while mTOR-independent induction of autophagy reduced protein accumulation and restored neurite outgrowth in diseased OPTN(E50K) RGCs. Taken together, these results highlight that autophagy disruption resulted in increased autophagic demand, which was associated with downregulated signaling through mTORC1, contributing to the degeneration of RGCs.