ABSTRACT
Protein engineering is an emerging field in biotechnology with the potential to revolutionize areas such as antibody design, drug discovery, food security, and ecology. However, the mutational space involved is far too vast to be explored through experimental means alone. Leveraging rapidly accumulating protein databases, machine learning (ML) models, particularly those based on natural language processing (NLP), have considerably expedited protein engineering. Moreover, advances in topological data analysis (TDA) and artificial-intelligence-based protein structure prediction, such as AlphaFold2, have made more powerful structure-based ML-assisted protein engineering strategies possible. This review aims to offer a comprehensive, systematic, and indispensable set of methodological components, including TDA and NLP, for protein engineering and to facilitate their future development.
Subjects
Artificial Intelligence , Protein Engineering , Natural Language Processing , Antibodies , Data Analysis
ABSTRACT
Multifunctional devices integrated with electrochromic and supercapacitance properties are fascinating because of their extensive usage in modern electronic applications. In this work, vanadium-doped cobalt chloride carbonate hydroxide hydrate nanostructures (V-C3H NSs) are successfully synthesized and show unique electrochromic and supercapacitor properties. The V-C3H NSs material exhibits a high specific capacitance of 1219.9 F g-1 at 1 mV s-1 with a capacitance retention of 100% over 30 000 CV cycles. The electrochromic performance of the V-C3H NSs material is confirmed through in situ spectroelectrochemical measurements, where the switching time, coloration efficiency (CE), and optical modulation (∆T) are found to be 15.7 and 18.8 s, 65.85 cm2 C-1 and 69%, respectively. A coupled multilayer artificial neural network (ANN) model is framed to predict potential and current from red (R), green (G), and blue (B) color values. The optimized V-C3H NSs are used as the active materials in the fabrication of flexible/wearable electrochromic micro-supercapacitor devices (FEMSDs) through a cost-effective mask-assisted vacuum filtration method. The fabricated FEMSD exhibits an areal capacitance of 47.15 mF cm-2 at 1 mV s-1 and offers a maximum areal energy and power density of 104.78 Wh cm-2 and 0.04 mW cm-2, respectively. This material's interesting energy storage and electrochromic properties are promising in multifunctional electrochromic energy storage applications.
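The coupled ANN described above maps red, green, and blue colour values to potential and current. A minimal sketch of such a multi-output regressor is below; the network architecture and the RGB-to-electrochemistry mapping here are hypothetical placeholders (the abstract does not specify either), standing in for the authors' measured data.

```python
# Sketch only (not the authors' model): a multilayer ANN mapping RGB colour
# values to electrochemical potential and current, as in the V-C3H study.
# The data-generating relationship below is a synthetic assumption.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
rgb = rng.uniform(0, 255, size=(500, 3))            # R, G, B inputs
# Hypothetical smooth mapping standing in for the measured relationship
potential = 0.004 * rgb[:, 0] - 0.002 * rgb[:, 2]   # V (made up)
current = 0.01 * rgb[:, 1] + 0.005 * rgb[:, 0]      # mA (made up)
targets = np.column_stack([potential, current])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(rgb, targets)
pred = model.predict(rgb[:5])                        # one (potential, current) pair per row
```

In practice the network would be trained on in situ spectroelectrochemical measurements rather than the synthetic mapping used here.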
ABSTRACT
Multiple prognostic scores have been developed to predict morbidity and mortality in patients with spontaneous intracerebral hemorrhage (sICH). Since the advent of machine learning (ML), different ML models have also been developed for sICH prognostication. There is, however, a need to verify the validity of these ML models in diverse patient populations. We aim to create machine learning models for prognostication in the Qatari population and, by incorporating inpatient variables into model development, to leverage more information. 1501 consecutive patients with acute sICH admitted to Hamad General Hospital (HGH) between 2013 and 2023 were included. We trained, evaluated, and compared several ML models to predict 90-day mortality and functional outcomes. We randomly selected 80% of patients for model training and 20% for validation, and used k-fold cross-validation to train our models. The ML workflow included imbalanced-class correction and dimensionality reduction, in order to evaluate the effect of each. Evaluation metrics such as sensitivity, specificity, and F1 score were calculated for each prognostic model. Mean age was 50.8 (SD 13.1) years, and 1257 (83.7%) patients were male. Median ICH volume was 7.5 ml (IQR 12.6). 222 (14.8%) patients died, while 897 (59.7%) achieved a good functional outcome at 90 days. For 90-day mortality, random forest (RF) achieved the highest AUC (0.906), whereas for 90-day functional outcomes, logistic regression (LR) achieved the highest AUC (0.888). Ensembling provided similar results to the best performing models, RF and LR, obtaining an AUC of 0.904 for mortality and 0.883 for functional outcomes. Comparing the ML models, there is minimal difference between their performance.
By creating an ensemble of our best performing individual models, we maintained maximum accuracy and decreased the variance of functional outcome and mortality prediction compared with the individual models.
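The ensembling step described above can be sketched with standard tooling. This is an illustrative sketch only: the cohort data are not public, so a synthetic imbalanced dataset stands in, and the features, hyperparameters, and fold count are assumptions.

```python
# Sketch of ensembling random forest and logistic regression with k-fold
# cross-validation, as in the sICH prognostic models. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# ~15% positive class, mimicking the 14.8% mortality rate in the cohort
X, y = make_classification(n_samples=1500, n_features=20,
                           weights=[0.85, 0.15], random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
lr = LogisticRegression(max_iter=1000)
# Soft voting averages the two models' predicted probabilities
ensemble = VotingClassifier([("rf", rf), ("lr", lr)], voting="soft")

# k-fold cross-validated AUC for each model and the ensemble
for name, model in [("RF", rf), ("LR", lr), ("Ensemble", ensemble)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```

On real data one would also apply the imbalanced-class correction and dimensionality reduction mentioned in the abstract before this comparison.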
Subjects
Cerebral Hemorrhage , Machine Learning , Humans , Male , Female , Qatar , Middle Aged , Cerebral Hemorrhage/mortality , Cerebral Hemorrhage/diagnosis , Prognosis , Aged , Adult , Retrospective Studies , Stroke/mortality , Stroke/diagnosis , Databases, Factual
ABSTRACT
The class of doubly robust (DR) functionals studied by Rotnitzky et al. (2021) is of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square continuous functionals that can be written as an expectation of an affine functional of a conditional expectation, studied by Chernozhukov et al. (2022b), and (ii) the class of functionals studied by Robins et al. (2008). The present state-of-the-art estimators for a DR functional ψ are double machine learning (DML) estimators (Chernozhukov et al., 2018). A DML estimator ψ̂1 of ψ depends on estimates p̂(x) and b̂(x) of a pair of nuisance functions p(x) and b(x), and is said to satisfy "rate double-robustness" if the Cauchy-Schwarz upper bound of its bias is o(n^(-1/2)). Were it achievable, our scientific goal would have been to construct valid, assumption-lean (i.e. with no complexity-reducing assumptions on b or p) tests of the validity of a nominal (1 - α) Wald confidence interval (CI) centered at ψ̂1. But this would require a test that the bias is o(n^(-1/2)), which can be shown not to exist. We therefore adopt the less ambitious goal of falsifying, when possible, an analyst's justification for her claim that the reported (1 - α) Wald CI is valid. In many instances, an analyst justifies her claim by imposing complexity-reducing assumptions on b and p to ensure rate double-robustness. Here we exhibit valid, assumption-lean tests of H0: "rate double-robustness holds", with non-trivial power against certain alternatives. If H0 is rejected, we will have falsified her justification. However, no assumption-lean test of H0, including ours, can be a consistent test. Thus, the failure of our test to reject is not meaningful evidence in favor of H0.
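In the abstract's notation, the rate double-robustness condition can be written out as follows. This is a sketch of the standard Cauchy-Schwarz bias bound for DML estimators; the constant C and the L2 norms are our notation, not quoted from the paper.

```latex
% Cauchy-Schwarz bound on the bias of the DML estimator \hat\psi_1:
% the bias is controlled by the product of the nuisance estimation errors,
\[
  \bigl|\operatorname{Bias}(\hat\psi_1)\bigr|
    \;\le\; C \,\lVert \hat p - p \rVert_2 \,\lVert \hat b - b \rVert_2 ,
\]
% and "rate double-robustness" holds when this product is of smaller
% order than the parametric rate:
\[
  \lVert \hat p - p \rVert_2 \,\lVert \hat b - b \rVert_2
    \;=\; o\!\left(n^{-1/2}\right).
\]
```

The key point of the abstract is that no assumption-lean test can verify the second display, only falsify an analyst's justification for assuming it.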
ABSTRACT
In the medical field, diagnostic tools that make use of deep neural networks have reached a level of performance never seen before. A proper diagnosis of a patient's condition is crucial in modern medicine, since it determines whether or not the patient will receive the care they need. Before endoscopic sinus surgery, data from a sinus CT scan are uploaded to a computer and displayed on a high-definition monitor to give the surgeon a clear anatomical orientation. In this study, a unique machine learning method is presented for detecting and diagnosing paranasal sinus disorders. One of the primary goals of our study is to create an algorithm that can accurately evaluate the paranasal sinuses in CT scans in order to speed up diagnosis. The proposed technology automatically reduces the number of CT scan images that investigators must search through manually. In addition, the approach offers an automatic segmentation that can be used to locate the paranasal sinus region and crop it accordingly. The suggested method therefore dramatically reduces the amount of data necessary during the training phase, increasing computational efficiency while retaining a high degree of accuracy. The suggested method not only successfully identifies sinus irregularities but also automatically executes the necessary segmentation without requiring any manual cropping, eliminating time-consuming and error-prone human labor. When tested on actual CT scans, the method achieved an accuracy of 95.16 percent while retaining a sensitivity of 99.14 percent.
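The automatic cropping step described above can be sketched as a bounding-box crop around a segmentation mask. The segmentation model itself is not reproduced here; this hedged sketch only shows how a mask, once obtained, shrinks the data fed to training.

```python
# Sketch of the auto-cropping step: given a binary segmentation mask of the
# paranasal sinus region, crop the CT slice to the mask's bounding box.
import numpy as np

def crop_to_mask(image: np.ndarray, mask: np.ndarray, margin: int = 2) -> np.ndarray:
    """Crop `image` to the bounding box of nonzero pixels in `mask`."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    r0, c0 = max(r0 - margin, 0), max(c0 - margin, 0)
    r1 = min(r1 + margin, image.shape[0] - 1)
    c1 = min(c1 + margin, image.shape[1] - 1)
    return image[r0:r1 + 1, c0:c1 + 1]

# Toy example: a 100x100 slice with a 20x30 "sinus" region
slice_ = np.zeros((100, 100))
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 30:60] = True
cropped = crop_to_mask(slice_, mask)
print(cropped.shape)  # (24, 34) with margin=2
```

Cropping every slice this way is what lets the pipeline discard most of each CT volume before the training phase.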
Subjects
Artifacts , Machine Learning , Paranasal Sinuses , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Paranasal Sinuses/diagnostic imaging , Algorithms , Paranasal Sinus Diseases/diagnostic imaging , Image Processing, Computer-Assisted/methods
ABSTRACT
This paper reviews the literature on model-driven engineering (MDE) tools and languages for the internet of things (IoT). Due to the abundance of big data in the IoT, data analytics and machine learning (DAML) techniques play a key role in providing smart IoT applications. In particular, since a significant portion of the IoT data is sequential time series data, such as sensor data, time series analysis techniques are required. Therefore, IoT modeling languages and tools are expected to support DAML methods, including time series analysis techniques, out of the box. In this paper, we study and classify prior work in the literature through the mentioned lens and following the scoping review approach. Hence, the key underlying research questions are what MDE approaches, tools, and languages have been proposed and which ones have supported DAML techniques at the modeling level and in the scope of smart IoT services.
ABSTRACT
In-car activity monitoring is a key enabler of various automotive safety functions. Existing approaches are largely based on vision systems; radar, however, can provide a low-cost, privacy-preserving alternative. To date, such radar-based systems have not been widely researched. In our work, we introduce a novel approach that uses the Doppler signal of an ultra-wideband (UWB) radar as input to deep neural networks for the classification of driving activities. In contrast to previous work in the domain, we focus on generalization to unseen persons and make a new radar driving activity dataset (RaDA) available to the scientific community to encourage comparison and the benchmarking of future methods.
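A common way to derive a Doppler representation from radar frames, before feeding a neural network, is an FFT along the slow-time axis of a range-time matrix. The sketch below uses simulated data and deliberately simplified processing; it is not the RaDA pipeline, whose exact preprocessing the abstract does not specify.

```python
# Minimal sketch: Doppler spectrum from a range/slow-time radar data matrix
# via an FFT over slow time, per range bin. Simulated single-target data.
import numpy as np

frames, range_bins = 128, 64
t = np.arange(frames)
data = np.zeros((frames, range_bins), dtype=complex)
# Simulated moving target: sinusoidal phase modulation in one range bin
data[:, 20] = np.exp(1j * 2 * np.pi * 0.1 * t)  # 0.1 cycles/frame Doppler shift

# Doppler spectrum: FFT over the slow-time axis, zero frequency centred
doppler = np.fft.fftshift(np.fft.fft(data, axis=0), axes=0)
power = np.abs(doppler) ** 2                     # shape: (Doppler bins, range bins)
peak_bin = np.argmax(power[:, 20])               # positive-Doppler peak for the target
```

Such Doppler maps (or sequences of them) are a typical input format for activity-classification networks.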
Subjects
Automobile Driving , Signal Processing, Computer-Assisted , Radar , Monitoring, Physiologic/methods , Neural Networks, Computer
ABSTRACT
In the pursuit of effective wastewater treatment and biomass generation, the symbiotic relationship between microalgae and bacteria emerges as a promising avenue. This analysis delves into recent advancements concerning the utilization of microalgae-bacteria consortia for wastewater treatment and biomass production. It examines multiple facets of this symbiosis, encompassing the judicious selection of suitable strains, optimal culture conditions, appropriate media, and operational parameters. Moreover, the exploration extends to contrasting closed and open bioreactor systems for fostering microalgae-bacteria consortia, elucidating the inherent merits and constraints of each methodology. Notably, it highlights the untapped potential of co-cultivation with diverse microorganisms, including yeast, fungi, and various microalgae species, to augment biomass output. In this context, artificial intelligence (AI) and machine learning (ML) stand out as transformative catalysts. By addressing intricate challenges in wastewater treatment and microalgae-bacteria symbiosis, AI and ML foster innovative technological solutions. These cutting-edge technologies play a pivotal role in optimizing wastewater treatment processes, enhancing biomass yield, and facilitating real-time monitoring. The synergistic integration of AI and ML instills a novel dimension, propelling the fields towards sustainable solutions. As AI and ML become integral tools in wastewater treatment and symbiotic microorganism cultivation, novel strategies emerge that harness their potential to overcome intricate challenges and revolutionize the domain.
ABSTRACT
Infertility massively disrupts social and marital life, taking a heavy toll on emotional well-being. Early diagnosis is essential for adapting quickly to these changes, and AI tools make it possible. Our main objective is to comprehend the role of AI in fertility detection, since we have primarily worked to find biomarkers and related risk factors associated with infertility. This paper analyses the role of AI as an effective method for screening and predicting infertility and its related risk factors. Three scientific repositories (PubMed, Web of Science, and Scopus) were searched for relevant articles using the terms: (human infertility OR human fertility) AND risk factors AND (machine learning OR artificial intelligence OR intelligent system). In this way, we systematically reviewed 42 articles and performed a meta-analysis. The significant findings and recommendations are discussed, including the rising importance of data augmentation, feature extraction, and explainability, and the need to revisit what an effective system for fertility analysis means. Additionally, the paper outlines various mitigation actions that can be employed to tackle infertility and its related risk factors. These insights contribute to a better understanding of the role of AI in fertility analysis and to the potential for improving reproductive health outcomes.
Subjects
Artificial Intelligence , Infertility , Humans , Fertility , Emotions , Machine Learning
ABSTRACT
Minimizing in vitro and in vivo testing in early drug discovery with the use of physiologically based pharmacokinetic (PBPK) modeling and machine learning (ML) approaches has the potential to reduce discovery cycle times and animal experimentation. However, the prediction success of such an approach has not been shown for a larger and diverse set of compounds representative of a lead optimization pipeline. In this study, the prediction success of the oral (PO) and intravenous (IV) pharmacokinetic (PK) parameters in rats was assessed using a "bottom-up" approach, combining in vitro and ML inputs with a PBPK model. More than 240 compounds for which all of the necessary inputs and PK data were available were used for this assessment. Different clearance scaling approaches were assessed, using hepatocyte intrinsic clearance and protein binding as inputs. In addition, a novel high-throughput PBPK (HT-PBPK) approach was evaluated to assess the scalability of PBPK predictions for a larger number of compounds in drug discovery. The results showed that bottom-up PBPK modeling was able to predict the rat IV and PO PK parameters for the majority of compounds within a 2- to 3-fold error range, using both direct scaling and dilution methods for clearance predictions. Using only ML-predicted inputs derived from chemical structure did not perform as well as using in vitro inputs, likely due to mispredictions of clearance. The HT-PBPK approach produced results comparable to the full PBPK modeling approach but reduced the simulation time from hours to seconds. In conclusion, a bottom-up PBPK and HT-PBPK approach can successfully predict PK parameters and guide early discovery by informing compound prioritization, provided that good in vitro assays are in place for key parameters such as clearance.
Subjects
Drug Discovery , Models, Biological , Animals , Computer Simulation , Drug Discovery/methods , Hepatocytes , Metabolic Clearance Rate/physiology , Pharmacokinetics , Rats
ABSTRACT
The study proposes a novel machine learning (ML) paradigm for cardiovascular disease (CVD) detection in individuals at medium to high cardiovascular risk, using data from a Greek cohort of 542 individuals with rheumatoid arthritis, diabetes mellitus, and/or arterial hypertension, together with conventional (office-based) risk factors, laboratory-based blood biomarkers, and carotid/femoral ultrasound image-based phenotypes. Two kinds of data were collected at two time points, (i) at visit 1 and (ii) at visit 2 after 3 years: CVD risk factors, and the presence of CVD (defined as stroke, myocardial infarction, coronary artery syndrome, peripheral artery disease, or coronary heart disease) as ground truth. The CVD risk factors were divided into three clusters (conventional or office-based, laboratory-based blood biomarkers, and carotid ultrasound image-based phenotypes) to study their effect on the ML classifiers. Three ML classifiers (Random Forest, Support Vector Machine, and Linear Discriminant Analysis) were applied in a two-fold cross-validation framework using data augmented by the synthetic minority over-sampling technique (SMOTE), and their performance was recorded. In this cohort, with 46 CVD risk factors (covariates) overall implemented in an online cardiovascular framework requiring less than 1 s of calculation time per patient, the paradigm achieved a mean accuracy and area-under-the-curve (AUC) of 98.40% and 0.98 (p < 0.0001) for CVD presence detection at visit 1, and 98.39% and 0.98 (p < 0.0001) at visit 2. The performance of the cardiovascular framework was significantly better than the classical CVD risk score. The ML paradigm proved to be powerful for CVD prediction in individuals at medium to high cardiovascular risk.
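The evaluation protocol above (three classifiers, two-fold cross-validation, training folds balanced by oversampling) can be sketched as follows. This is not the study's code: the data are synthetic, and plain random oversampling stands in for SMOTE (the real study used SMOTE, available in the imbalanced-learn package).

```python
# Sketch of the evaluation protocol: RF, SVM, and LDA in two-fold CV with
# the training folds class-balanced by (random) oversampling. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = make_classification(n_samples=542, n_features=46,
                           weights=[0.7, 0.3], random_state=1)

def oversample(X, y):
    """Randomly duplicate minority samples until classes are balanced."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    extra = resample(minority, replace=True,
                     n_samples=len(majority) - len(minority), random_state=1)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

models = {"RF": RandomForestClassifier(random_state=1),
          "SVM": SVC(probability=True, random_state=1),
          "LDA": LinearDiscriminantAnalysis()}
cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=1)
for name, model in models.items():
    aucs = []
    for tr, te in cv.split(X, y):
        Xb, yb = oversample(X[tr], y[tr])   # balance the training fold only
        model.fit(Xb, yb)
        aucs.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
    print(name, round(np.mean(aucs), 3))
```

Note that oversampling is applied inside each training fold only, so the held-out fold stays untouched and the AUC estimate is not inflated.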
Subjects
Arthritis, Rheumatoid/complications , Cardiovascular Diseases/diagnosis , Machine Learning , Plaque, Atherosclerotic/diagnostic imaging , Carotid Arteries/diagnostic imaging , Cross-Sectional Studies , Female , Femoral Artery/diagnostic imaging , Heart Disease Risk Factors , Humans , Male , Pilot Projects , Reproducibility of Results
ABSTRACT
The mitochondrial respiratory chain is the main site of reactive oxygen species (ROS) production in the cell. Although mitochondria possess a powerful antioxidant system, an excess of ROS cannot be completely neutralized and cumulative oxidative damage may lead to decreasing mitochondrial efficiency in energy production, as well as an increasing ROS excess, which is known to cause a critical imbalance in antioxidant/oxidant mechanisms and a "vicious circle" in mitochondrial injury. Due to insufficient energy production, chronic exposure to ROS overproduction consequently leads to the oxidative damage of life-important biomolecules, including nucleic acids, proteins, lipids, and amino acids, among others. Different forms of mitochondrial dysfunction (mitochondriopathies) may affect the brain, heart, peripheral nervous and endocrine systems, eyes, ears, gut, and kidney, among other organs. Consequently, mitochondriopathies have been proposed as an attractive diagnostic target to be investigated in any patient with unexplained progressive multisystem disorder. This review article highlights the pathomechanisms of mitochondriopathies, details advanced analytical tools, and suggests predictive approaches, targeted prevention and personalization of medical services as instrumental for the overall management of mitochondriopathy-related cascading pathologies.
Subjects
Energy Metabolism , Mitochondria/pathology , Mitochondrial Diseases/pathology , Oxidative Stress , Animals , Carcinogenesis/pathology , Humans , Mitochondria/metabolism , Mitochondrial Diseases/diagnosis , Mitochondrial Diseases/metabolism , Neurodegenerative Diseases/diagnosis , Neurodegenerative Diseases/metabolism , Neurodegenerative Diseases/pathology , Precision Medicine , Reactive Oxygen Species/metabolism
ABSTRACT
Medical image segmentation is a key step to assist diagnosis of several diseases, and accuracy of a segmentation method is important for further treatments of different diseases. Different medical imaging modalities have different challenges such as intensity inhomogeneity, noise, low contrast, and ill-defined boundaries, which make automated segmentation a difficult task. To handle these issues, we propose a new fully automated method for medical image segmentation, which utilizes the advantages of thresholding and an active contour model. In this study, a Harris Hawks optimizer is applied to determine the optimal thresholding value, which is used to obtain the initial contour for segmentation. The obtained contour is further refined by using a spatially varying Gaussian kernel in the active contour model. The proposed method is then validated using a standard skin dataset (ISBI 2016), which consists of variable-sized lesions and different challenging artifacts, and a standard cardiac magnetic resonance dataset (ACDC, MICCAI 2017) with a wide spectrum of normal hearts, congenital heart diseases, and cardiac dysfunction. Experimental results show that the proposed method can effectively segment the region of interest and produce superior segmentation results for skin (overall Dice Score 0.90) and cardiac dataset (overall Dice Score 0.93), as compared to other state-of-the-art algorithms.
Subjects
Falconiformes , Magnetic Resonance Imaging , Algorithms , Animals , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
ABSTRACT
Focused Ion Beam-Scanning Electron Microscopy (FIB-SEM) is an invaluable tool to visualize the 3D architecture of cell constituents and map cell networks. Recently, amorphous ice embedding techniques have been associated with FIB-SEM to ensure that the biological material remains as close as possible to its native state. Here we have vitrified human HeLa cells and directly imaged them by cryo-FIB-SEM with the secondary electron InLens detector at cryogenic temperature and without any staining. Image stacks were aligned and processed by denoising, removal of ion beam milling artefacts and local charge imbalance. Images were assembled into a 3D volume and the major cell constituents were modelled. The data illustrate the power of the workflow to provide a detailed view of the internal architecture of the fully hydrated, close-to-native, entire HeLa cell. In addition, we have studied the feasibility of combining cryo-FIB-SEM imaging with live-cell protein detection. We demonstrate that internalized gold particles can be visualized by detecting back scattered primary electrons at low kV while simultaneously acquiring signals from the secondary electron detector to image major cell features. Furthermore, gold-conjugated antibodies directed against RNA polymerase II could be observed in the endo-lysosomal pathway while labelling of the enzyme in the nucleus was not detected, a shortcoming likely due to the inadequacy between the size of the gold particles and the voxel size. With further refinements, this method promises to have a variety of applications where the goal is to localize cellular antigens while visualizing the entire native cell in three dimensions.
Subjects
Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Microscopy, Electron, Scanning , Proteins/ultrastructure , HeLa Cells , Humans , Proteins/isolation & purification , Staining and Labeling
ABSTRACT
For COVID-19, predictive modeling in the literature broadly uses SEIR/SIR, agent-based, and curve-fitting models. In addition, machine-learning models built on statistical tools and techniques are widely used. Predictions aim at making states and citizens aware of possible threats and consequences. However, for the COVID-19 outbreak, state-of-the-art prediction models have failed to exploit crucial and unprecedented uncertainties and factors, such as (a) hospital settings/capacity; (b) test capacity/rate (on a daily basis); (c) demographics; (d) population density; (e) vulnerable people; and (f) income versus commodities (poverty). Depending on which factors are considered in the models, predictions can be short-term or long-term. In this paper, we discuss how such continuous and unprecedented factors lead us to design complex models, rather than relying only on stochastic and/or discrete ones driven by randomly generated parameters. Further, it is time to employ data-driven, mathematically proven models that can dynamically and automatically tune their parameters over time.
Subjects
Betacoronavirus , Coronavirus Infections , Forecasting , Models, Statistical , Pandemics , Pneumonia, Viral , COVID-19 , Data Accuracy , Disease Outbreaks , Humans , Machine Learning , SARS-CoV-2
ABSTRACT
Percutaneous thermal ablation has proven to be an effective modality for treating both benign and malignant tumours in various tissues. Among these modalities, radiofrequency ablation (RFA) is the most promising and widely adopted approach and has been extensively studied in the past decades. Microwave ablation (MWA) is a newly emerging modality that is gaining rapid momentum due to its capability of inducing rapid heating and attaining larger ablation volumes, and its lesser susceptibility to heat-sink effects as compared to RFA. Although the goal of both therapies is to attain cell death in the target tissue by heating it above 50 °C, their underlying mechanisms of action and principles differ greatly. Computational modelling is a powerful tool for studying the effects of electromagnetic interactions within biological tissues and predicting treatment outcomes during thermal ablative therapies. Such a priori estimation can assist clinical practitioners during treatment planning, with the goal of attaining successful tumour destruction while preserving the surrounding healthy tissue and critical structures. This review presents current state-of-the-art developments and associated challenges in the computational modelling of thermal ablative techniques, viz. RFA and MWA, and touches upon several promising avenues in the modelling of laser ablation, nanoparticle-assisted magnetic hyperthermia, and non-invasive RFA. The application of RFA in pain relief is also extensively reviewed from a modelling point of view. Additionally, future directions are provided to improve these models for their successful translation and integration into the hospital workflow.
Subjects
Ablation Techniques/methods , Computer Simulation , Temperature , Animals , Humans
ABSTRACT
BACKGROUND: Tumor purity is the percentage of cancer cells present in a sample of tumor tissue. The non-cancerous cells (immune cells, fibroblasts, etc.) play an important role in tumor biology, so the ability to determine tumor purity is important for understanding the roles of cancerous and non-cancerous cells in a tumor. METHODS: We applied a supervised machine learning method, XGBoost, to data from 33 TCGA tumor types to predict tumor purity from RNA-seq gene expression data. RESULTS: Across the 33 tumor types, the median correlation between observed and predicted tumor purity ranged from 0.75 to 0.87 with small root mean square errors, suggesting that tumor purity can be accurately predicted using expression data. We further confirmed that expression levels of a ten-gene set (CSF2RB, RHOH, C1S, CCDC69, CCL22, CYTIP, POU2AF1, FGR, CCL21, and IL7R) were predictive of tumor purity regardless of tumor type. We then tested whether our set of ten genes could accurately predict the tumor purity of a TCGA-independent data set, and showed that expression levels of these ten genes were highly correlated (ρ = 0.88) with the actual observed tumor purity. CONCLUSIONS: Our analyses suggest that the ten-gene set may serve as a biomarker for tumor purity prediction using gene expression data.
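The regression setup above can be sketched in a few lines. This is illustrative only: the study used XGBoost on TCGA RNA-seq, whereas here scikit-learn's GradientBoostingRegressor and synthetic data stand in, with the ten-gene idea mimicked by making only 10 of 100 features informative.

```python
# Sketch: predicting tumor purity (a fraction in [0, 1]) from gene expression
# with a gradient-boosted regressor, scored by rank correlation.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 100))                    # 100 "genes" (synthetic)
# Hypothetical ground truth driven by 10 genes, squashed into [0, 1]
purity = 1 / (1 + np.exp(-X[:, :10].sum(axis=1)))

X_tr, X_te, y_tr, y_te = train_test_split(X, purity, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rho, _ = spearmanr(y_te, model.predict(X_te))      # observed vs. predicted purity
print(f"Spearman rho = {rho:.2f}")
```

Feature-importance inspection of such a model is one way a small predictive gene set, like the ten-gene set above, could be identified.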
Subjects
Biomarkers, Tumor , Neoplasms/genetics , Computational Biology/methods , Databases, Genetic , Gene Expression Profiling , Gene Expression Regulation, Neoplastic , Humans , Neoplasms/diagnosis , Reproducibility of Results , Sequence Analysis, RNA , Supervised Machine Learning
ABSTRACT
BACKGROUND: Deciphering the meaning of human DNA is an outstanding goal that would revolutionize medicine and our way of treating diseases. In recent years, non-coding RNAs have attracted much attention and have been shown to be at least partially functional, yet the importance of these RNAs, especially for higher biological functions, remains under investigation. METHODS: In this paper, we analyze RNA-seq data, including non-coding and protein-coding RNAs, from lung adenocarcinoma patients, a histologic subtype of non-small-cell lung cancer, with deep learning neural networks and other state-of-the-art classification methods. The purpose of our paper is three-fold. First, we compare the classification performance of different versions of deep belief networks with SVMs, decision trees, and random forests. Second, we compare the classification capabilities of protein-coding and non-coding RNAs. Third, we study the influence of feature selection on the classification performance. RESULTS: First, we find that deep belief networks perform at least competitively with the other state-of-the-art classifiers. Second, data from non-coding RNAs perform better than data from coding RNAs across a number of different classification methods, demonstrating that non-coding RNAs capture at least as much predictive information as the protein-coding RNAs conventionally used in computational diagnostics tasks. Third, we find that feature selection has, in general, a negative effect on classification performance, meaning that unfiltered data with all features give the best classification results. CONCLUSIONS: Our study is the first to use ncRNAs beyond miRNAs for the computational classification of cancer and to perform a direct comparison of the classification capabilities of protein-coding and non-coding RNAs.
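The third comparison above, the same classifier evaluated with and without feature selection, can be sketched as follows. Synthetic data replace the RNA-seq profiles, a random forest stands in for the deep belief networks (which are not reproduced here), and the choice of univariate F-test selection is our assumption.

```python
# Sketch: cross-validated accuracy of one classifier on all features vs. on
# a univariately selected subset, mirroring the feature-selection comparison.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Many features, few samples: the typical transcriptomics regime
X, y = make_classification(n_samples=300, n_features=500, n_informative=50,
                           random_state=0)

full = RandomForestClassifier(random_state=0)
selected = make_pipeline(SelectKBest(f_classif, k=50),   # selection inside CV folds
                         RandomForestClassifier(random_state=0))

acc_full = cross_val_score(full, X, y, cv=5).mean()
acc_sel = cross_val_score(selected, X, y, cv=5).mean()
print(f"all features: {acc_full:.3f}, selected: {acc_sel:.3f}")
```

Putting the selector inside the pipeline ensures it is refit on each training fold, avoiding the selection bias that would come from filtering on the full dataset first.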
Subjects
Lung Neoplasms/classification , Lung Neoplasms/genetics , RNA, Messenger/metabolism , RNA, Untranslated/genetics , Computational Biology/methods , Decision Trees , Humans , Lung Neoplasms/pathology , Machine Learning , MicroRNAs/genetics , Neural Networks, Computer , RNA, Messenger/genetics , Sequence Analysis, RNA/methods
ABSTRACT
BACKGROUND: Cancer patients with advanced disease routinely exhaust available clinical regimens and lack actionable genomic medicine results, leaving a large patient population without effective treatment options when their disease inevitably progresses. To address the unmet clinical need for evidence-based therapy assignment when standard clinical approaches have failed, we have developed a probabilistic computational modeling approach that integrates molecular sequencing data with functional assay data to develop patient-specific combination cancer treatments. METHODS: Tissue taken from a murine model of alveolar rhabdomyosarcoma was used to perform single-agent drug screening and DNA/RNA sequencing experiments; the results, integrated via our computational modeling approach, identified a synergistic personalized two-drug combination. Cells derived from the primary murine tumor were allografted into mouse models and used to validate the personalized two-drug combination. Computational modeling of single-agent drug screening and RNA sequencing of multiple heterogeneous sites from a single patient's epithelioid sarcoma identified a personalized two-drug combination effective across all tumor regions. The heterogeneity-consensus combination was validated in a xenograft model derived from the patient's primary tumor. Cell cultures derived from human and canine undifferentiated pleomorphic sarcoma were assayed by drug screen; computational modeling identified a resistance-abrogating two-drug combination common to both cell cultures, which was validated in vitro via a cell regrowth assay.
RESULTS: Our computational modeling approach addresses three major challenges in personalized cancer therapy: synergistic drug combination predictions (validated in vitro and in vivo in a genetically engineered murine cancer model), identification of unifying therapeutic targets to overcome intra-tumor heterogeneity (validated in vivo in a human cancer xenograft), and mitigation of cancer cell resistance and rewiring mechanisms (validated in vitro in a human and canine cancer model). CONCLUSIONS: These proof-of-concept studies support the use of an integrative functional approach to personalized combination therapy prediction for the population of high-risk cancer patients lacking viable clinical options and without actionable DNA sequencing-based therapy.
Subjects
Computational Biology/methods , Drug Evaluation, Preclinical/methods , Drug Therapy, Combination/methods , Models, Statistical , Precision Medicine/methods , Rhabdomyosarcoma, Alveolar/drug therapy , Animals , Cell Line, Tumor , Disease Models, Animal , Dogs , Drug Synergism , Female , Heterografts , Humans , Kaplan-Meier Estimate , Mice , Mice, Inbred NOD
ABSTRACT
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in each cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 non-pedestrian samples, shows a sensitivity of 81.2%, an accuracy of 96.2%, and a specificity of 96.8%.
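The three-classifier comparison above, scored by sensitivity, specificity, and accuracy, can be sketched as follows. Synthetic features stand in for the projection-derived descriptors of the LIDAR cubes, which the article computes from the XY, XZ, and YZ views.

```python
# Hedged sketch: kNN, naive Bayes, and SVM compared on an imbalanced
# pedestrian/non-pedestrian task, scored from the confusion matrix.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Pedestrians (class 1) are the rare class, as in the evaluation scene
X, y = make_classification(n_samples=1931, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("NBC", GaussianNB()),
                  ("SVM", SVC())]:
    tn, fp, fn, tp = confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)          # pedestrians correctly detected
    specificity = tn / (tn + fp)          # non-pedestrians correctly rejected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"{name}: sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
```

With such skewed classes, sensitivity on the pedestrian class is the safety-critical figure: a high overall accuracy alone can hide many missed pedestrians.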