Results 1 - 20 of 74
1.
Dalton Trans ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829152

ABSTRACT

Herein, we report the hydrogenation of carbon dioxide to sodium formate catalyzed by low-valent molybdenum phosphine complexes. The 1,3-bis(diphenylphosphino)propane (DPPP)-based Mo complex was found to be an efficient catalyst in the presence of NaOH, affording formate with a TON of 975 at 130 °C in THF/H2O after 24 h under 40 bar of total pressure (CO2:H2 = 10:30). The complex was also active in the hydrogenation of sodium bicarbonate and inorganic carbonates to the corresponding formates. Mechanistic investigation revealed that the reaction proceeded via an intermediate formato complex.
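
The turnover number quoted above is simply the molar ratio of product to catalyst. A minimal sketch of that arithmetic, with hypothetical amounts that are not taken from the study:

```python
# Turnover number (TON) = moles of product / moles of catalyst.
# The amounts below are hypothetical, chosen only to illustrate the arithmetic;
# they are not the quantities used in the cited study.

def turnover_number(mol_product: float, mol_catalyst: float) -> float:
    """Return TON, the moles of product formed per mole of catalyst."""
    return mol_product / mol_catalyst

mol_formate = 9.75e-3    # mol of sodium formate produced (hypothetical)
mol_mo_complex = 1.0e-5  # mol of Mo-DPPP catalyst charged (hypothetical)

print(f"TON = {turnover_number(mol_formate, mol_mo_complex):.0f}")  # -> TON = 975
```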

2.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732978

ABSTRACT

Machine learning (ML) models have experienced remarkable growth in their application for multimodal data analysis over the past decade [...].

3.
J Med Syst ; 48(1): 29, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38441727

ABSTRACT

Schizophrenia is a serious chronic mental disorder that significantly affects daily life. Electroencephalography (EEG), a method used to measure mental activity in the brain, is among the techniques employed in the diagnosis of schizophrenia. The symptoms of the disease typically begin in childhood and become more pronounced with age; however, the disease can be managed with specific treatments, and computer-aided methods can help achieve an early diagnosis. In this study, various machine learning algorithms and the emerging technology of quantum-based machine learning were used to detect schizophrenia from EEG signals. Principal component analysis (PCA) was applied so that the data could be processed on quantum systems. The dimensionality-reduced data were encoded into qubit form using various feature maps and provided as input to the Quantum Support Vector Machine (QSVM) algorithm. The QSVM algorithm was thus applied with different qubit numbers and different circuits, in addition to classical machine learning algorithms. All analyses were conducted in the simulator environment of the IBM Quantum Platform. On this EEG dataset, the QSVM algorithm demonstrated superior performance, reaching a 100% success rate with the Pauli X and Pauli Z feature maps. This study serves as proof that quantum machine learning algorithms can be effectively utilized in the field of healthcare.
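
The classical skeleton of the pipeline described above (dimensionality reduction followed by a kernel classifier) can be sketched with scikit-learn; in the quantum variant, the kernel would instead be evaluated from a Pauli feature map acting on qubits. The feature matrix, labels, and component count below are placeholders, not the study's data:

```python
# Classical analogue of the PCA -> (quantum) kernel SVM pipeline described above.
# X (EEG feature vectors) and y (labels) are synthetic placeholders; in the quantum
# version, the RBF kernel would be replaced by a kernel evaluated from a Pauli
# feature map on a small number of qubits.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder EEG feature vectors
y = rng.integers(0, 2, size=200)      # placeholder labels (0 = control, 1 = schizophrenia)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),              # reduce to as many features as qubits
    SVC(kernel="rbf"),                # stand-in for the quantum kernel
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```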


Subjects
Schizophrenia, Humans, Schizophrenia/diagnosis, Algorithms, Brain, Electroencephalography, Machine Learning
4.
Dalton Trans ; 53(7): 3236-3243, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38251673

ABSTRACT

We present here a phosphine-free, quinoline-based pincer Mn catalyst for α-alkylation of methyl ketones using primary alcohols as alkyl surrogates. The C-C bond formation reaction proceeds via a hydrogen auto-transfer methodology. The sole by-product formed is water, rendering the protocol atom efficient. Electronic structure theory studies corroborated the proposed mechanism.

5.
Chem Asian J ; 18(23): e202300758, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37815164

ABSTRACT

Transition metal-catalyzed homogeneous hydrogenation and dehydrogenation reactions for accessing a plethora of organic scaffolds have evolved into a key domain of research in academia and industry. These protocols are atom-economic, greener, and in line with the goal of sustainability, and they pave the way for numerous novel environmentally benign methodologies. Appealing progress has been achieved in the realm of homogeneous catalysis utilizing noble metals; however, their high cost and low abundance, along with toxicity issues, have led the scientific community to search for sustainable alternatives. In this context, earth-abundant base metals have gained substantial attention, culminating in enormous progress in recent years, predominantly with pincer-type complexes of nickel, cobalt, iron, and manganese. In contrast, group VI chromium, molybdenum, and tungsten complexes have been overlooked and remain underdeveloped despite their earth-abundance and biocompatibility. This review delineates a comprehensive overview of homogeneously catalysed (de)hydrogenation reactions using the group VI base metals chromium, molybdenum, and tungsten to date. Various reactions are described, namely hydrogenation, transfer hydrogenation, dehydrogenation, acceptorless dehydrogenative coupling, and hydrogen auto-transfer, along with their scope and brief mechanistic insights.

6.
Sensors (Basel) ; 23(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37631569

ABSTRACT

Anxiety, learning disabilities, and depression are among the symptoms of attention deficit hyperactivity disorder (ADHD), an isogenous pattern of hyperactivity, impulsivity, and inattention. Electroencephalogram (EEG) signals are widely used for the early diagnosis of ADHD. However, direct analysis of an EEG is highly challenging, as the signal is nonlinear and nonstationary in nature and its analysis is time-consuming. Thus, in this paper, a novel approach (LSGP-USFNet) is developed based on the patterns obtained from Ulam's spiral and Sophie Germain's prime numbers. The EEG signals are initially filtered to remove noise and segmented with a non-overlapping sliding window of 512 samples. Then, a time-frequency analysis approach, namely the continuous wavelet transform, is applied to each channel of the segmented EEG signal to interpret it in the time and frequency domains. The obtained time-frequency representation is saved as a time-frequency image, and a non-overlapping n × n sliding window is applied to this image for patch extraction. An n × n Ulam's spiral is localized on each patch, and the gray levels are acquired from the patch as features at the positions where Sophie Germain's primes are located in Ulam's spiral. All gray tones from all patches are concatenated to construct the features for the ADHD and normal classes. A gray-tone selection algorithm, namely ReliefF, is employed on the representative features to acquire the final, most important gray tones. A support vector machine classifier is used with a 10-fold cross-validation strategy. Our proposed approach, LSGP-USFNet, was developed using a publicly available dataset and obtained an accuracy of 97.46% in detecting ADHD automatically. Our model is ready to be validated on a larger database and can also be used to detect other neurological disorders in children.
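
A rough sketch of the sampling mask the abstract describes: an n × n Ulam spiral is built, and the cells holding Sophie Germain primes (p prime and 2p + 1 prime) are marked. The patch size and spiral orientation are assumptions; the paper's exact construction may differ.

```python
# Build an n x n Ulam spiral and mark the cells holding Sophie Germain primes,
# i.e. the positions at which gray levels would be sampled from each patch.
# n is assumed odd so the spiral stays centred inside the patch.
import numpy as np
from sympy import isprime

def ulam_spiral(n: int) -> np.ndarray:
    """Return an n x n array with 1..n*n laid out in an outward square spiral."""
    grid = np.zeros((n, n), dtype=int)
    r = c = n // 2                     # start at the centre cell
    grid[r, c] = 1
    dr, dc = 0, 1                      # initial direction: right
    step, value = 1, 2
    while value <= n * n:
        for _ in range(2):             # each run length is used twice
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < n and 0 <= c < n and value <= n * n:
                    grid[r, c] = value
                value += 1
            dr, dc = dc, -dr           # rotate 90 degrees
        step += 1
    return grid

def sophie_germain_mask(spiral: np.ndarray) -> np.ndarray:
    """Boolean mask of cells whose value p satisfies: p prime and 2*p + 1 prime."""
    mask = np.zeros(spiral.shape, dtype=bool)
    for idx, p in np.ndenumerate(spiral):
        mask[idx] = isprime(int(p)) and isprime(2 * int(p) + 1)
    return mask

spiral = ulam_spiral(9)
mask = sophie_germain_mask(spiral)
print(spiral[mask])                    # Sophie Germain primes inside the 9x9 patch
```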


Subjects
Attention Deficit Disorder with Hyperactivity, Child, Humans, Attention Deficit Disorder with Hyperactivity/diagnosis, Electroencephalography, Algorithms, Anxiety, Anxiety Disorders, Niacinamide
7.
J Digit Imaging ; 36(6): 2441-2460, 2023 12.
Article in English | MEDLINE | ID: mdl-37537514

ABSTRACT

Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI. The motivation behind this problem lies in the need for accurate and efficient methods to assist neurologists in the diagnosis of these disorders. Many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The model proposed in this work uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. The PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain a highly discriminative feature extraction ability, histograms of oriented gradients (HOG) are used within the PFP; hence, the model is named PFP-HOG. Furthermore, iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and a merged dataset) were utilized to develop our model. The PFP-HOG and IChi2-based models attained accuracies of 100%, 94.98%, 98.19%, and 97.80% on the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively. These findings not only provide accurate and robust classification of various neurological disorders from MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
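
Although the paper's PFP architecture and iterative Chi2 selector are not reproduced here, the HOG-plus-kNN backbone it builds on can be sketched as follows; the images and labels are synthetic placeholders:

```python
# Minimal sketch of the handcrafted pipeline: HOG features from an MRI slice,
# then kNN with 10-fold cross-validation. The pyramid/fixed-size patch (PFP)
# stage and the iterative Chi2 selector are omitted.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
images = rng.random((60, 128, 128))    # placeholder grayscale MRI slices
labels = rng.integers(0, 2, size=60)   # placeholder labels (0 = normal, 1 = abnormal)

features = np.array([
    hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    for img in images
])

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, features, labels, cv=10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```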


Subjects
Alzheimer Disease, Brain Neoplasms, Humans, Magnetic Resonance Imaging/methods, Neuroimaging, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Machine Learning, Alzheimer Disease/diagnostic imaging
8.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514877

ABSTRACT

Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
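
A minimal sketch of the brightness/contrast normalisation step described above, assuming scikit-image for histogram equalisation; the crop coordinates stand in for a lung-field segmentation mask, and the file name is hypothetical:

```python
# Histogram equalisation of a chest X-ray followed by a (hypothetical) close crop
# around the lung field. Lung segmentation and rib/bone suppression are outside
# the scope of this sketch.
import numpy as np
from skimage import exposure, io

def preprocess_cxr(path, crop=None):
    """Load a grayscale chest X-ray, equalise its histogram, optionally crop."""
    img = io.imread(path, as_gray=True).astype(np.float64)
    img = exposure.equalize_hist(img)   # removes systematic brightness/contrast shifts
    if crop is not None:                # (row0, row1, col0, col1) from a lung-field mask
        r0, r1, c0, c1 = crop
        img = img[r0:r1, c0:c1]
    return img

# Example usage (file name and crop window are hypothetical):
# x = preprocess_cxr("nodule_case_001.png", crop=(50, 450, 30, 480))
```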


Subjects
Deep Learning, Lung Neoplasms, Humans, Neural Networks, Computer, X-Rays, Early Detection of Cancer, Lung Neoplasms/diagnostic imaging, Lung
9.
Physiol Meas ; 44(3)2023 03 14.
Article in English | MEDLINE | ID: mdl-36599170

ABSTRACT

Objective. Schizophrenia (SZ) is a severe, chronic psychiatric-cognitive disorder. The primary objective of this work is to present a handcrafted model using state-of-the-art techniques to detect SZ accurately from EEG signals. Approach. In our proposed work, the features are generated using a histogram-based generator and an iterative decomposition model. The graph-based molecular structure of the carbon chain is employed to generate low-level features; hence, the developed feature generation model is called the carbon chain pattern (CCP). An iterative tunable Q-factor wavelet transform (ITQWT) technique is implemented in the feature extraction phase to generate various sub-bands of the EEG signal. The CCP was applied to the generated sub-bands to obtain several feature vectors. The clinically significant features were selected using iterative neighborhood component analysis (INCA). The selected features were then classified using k-nearest neighbors (kNN) with a 10-fold cross-validation strategy. Finally, an iterative weighted majority method was used to fuse the results across multiple channels. Main results. The presented CCP-ITQWT and INCA-based automated model achieved accuracies of 95.84% and 99.20% with the kNN classifier using a single channel and the majority voting method, respectively. Significance. Our results highlight the success of the proposed CCP-ITQWT and INCA-based model in the automated detection of SZ using EEG signals.
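
The channel-fusion step can be sketched as a weighted majority vote over per-channel kNN predictions, with each channel weighted by its own cross-validated accuracy; the CCP/ITQWT feature generation and INCA selection are not reproduced, and all inputs below are synthetic placeholders:

```python
# Weighted majority voting across per-channel kNN classifiers. Each channel's vote
# is weighted by its own cross-validated accuracy, loosely mirroring the weighted
# majority idea in the abstract; features and labels are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_channels, n_samples, n_features = 5, 100, 30
X = rng.normal(size=(n_channels, n_samples, n_features))  # per-channel feature matrices
y = rng.integers(0, 2, size=n_samples)                     # 0 = control, 1 = SZ (placeholder)

votes = np.zeros((n_samples, 2))
for ch in range(n_channels):
    knn = KNeighborsClassifier(n_neighbors=3)
    weight = cross_val_score(knn, X[ch], y, cv=10).mean()  # channel reliability
    preds = cross_val_predict(knn, X[ch], y, cv=10)         # out-of-fold predictions
    for i, p in enumerate(preds):
        votes[i, p] += weight

fused = votes.argmax(axis=1)
print("fused accuracy:", (fused == y).mean())
```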


Subjects
Cognitive Dysfunction, Schizophrenia, Humans, Electroencephalography/methods, Schizophrenia/diagnosis, Wavelet Analysis, Carbon, Algorithms
10.
IEEE Trans Technol Soc ; 3(4): 272-289, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36573115

ABSTRACT

This article's main contributions are twofold: 1) to demonstrate how to apply the general European Union High-Level Expert Group (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain, and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic. The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020, during the pandemic. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.

11.
Comput Biol Med ; 150: 106100, 2022 11.
Article in English | MEDLINE | ID: mdl-36182761

ABSTRACT

Automated sleep disorder detection is challenging because physiological symptoms can vary widely. These variations make it difficult to create effective sleep disorder detection models that support human experts during diagnosis and treatment monitoring. From 2010 to 2021, the authors of 95 scientific papers have taken up the challenge of automating sleep disorder detection. This paper provides an expert review of this work. We investigated whether digital technology and Artificial Intelligence (AI) can provide automated diagnosis support for sleep disorders. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines during the content discovery phase. We compared the performance of the proposed sleep disorder detection methods, which involve different datasets or signals. During the review, we found eight sleep disorders, of which sleep apnea and insomnia were the most studied. These disorders can be diagnosed using several kinds of biomedical signals, such as the Electrocardiogram (ECG), Polysomnography (PSG), Electroencephalogram (EEG), Electromyogram (EMG), and snore sound. Subsequently, we established areas of commonality and distinctiveness. Common to all reviewed papers was that AI models were trained and tested with labelled physiological signals. Looking deeper, we discovered that 24 distinct algorithms were used for the detection task. The nature of these algorithms has evolved: before 2017, only traditional Machine Learning (ML) was used, whereas from 2018 onward both ML and Deep Learning (DL) methods were used for sleep disorder detection. The strong emergence of DL algorithms has considerable implications for future detection systems because these algorithms demand significantly more data for training and testing than ML. Based on our review results, we suggest that both the type and the amount of labelled data are crucial for the design of future sleep disorder detection systems, because this will steer the choice of AI algorithm that establishes the desired decision support. As a guiding principle, more labelled data will help to represent the variations in symptoms. DL algorithms can extract information from these larger data quantities more effectively; therefore, we predict that the role of these algorithms will continue to expand.


Subjects
Artificial Intelligence, Sleep Wake Disorders, Humans, Sleep, Algorithms, Machine Learning, Sleep Wake Disorders/diagnosis
12.
Diagnostics (Basel) ; 12(10)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36292199

ABSTRACT

BACKGROUND: Sleep stage classification is a crucial process for the diagnosis of sleep or sleep-related diseases. Currently, this process is based on manual electroencephalogram (EEG) analysis, which is resource-intensive and error-prone. Various machine learning models have been recommended to standardize and automate the analysis process to address these problems. MATERIALS AND METHODS: The well-known cyclic alternating pattern (CAP) sleep dataset is used to train and test an L-tetrolet pattern-based sleep stage classification model in this research. Using this dataset, three cases are created: Insomnia, Normal, and Fused. For each of these cases, the machine learning model is tasked with identifying six sleep stages. The model is structured in terms of feature generation, feature selection, and classification. Feature generation is established with a new L-tetrolet (Tetris letter) function and multiple pooling decomposition for level creation. We fuse ReliefF and iterative neighborhood component analysis (INCA) feature selection using a threshold value. The hybrid, iterative feature selector is named threshold selection-based ReliefF and INCA (TSRFINCA). The selected features are classified using a cubic support vector machine. RESULTS: The presented L-tetrolet pattern and TSRFINCA-based sleep stage classification model yields accuracies of 95.43%, 91.05%, and 92.31% for the Insomnia, Normal, and Fused cases, respectively. CONCLUSION: The recommended L-tetrolet pattern and TSRFINCA-based model push the envelope of current knowledge engineering by accurately classifying sleep stages even in the presence of sleep disorders.
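
A sketch of the threshold-based fusion idea, assuming mutual information and ANOVA F-scores as stand-ins for ReliefF and INCA (neither of which ships with scikit-learn), followed by a cubic support vector machine; the data and the 0.5 threshold are placeholders:

```python
# Threshold-based fusion of two feature selectors, then a cubic SVM. The two
# scorers below are stand-ins used purely to illustrate the fusion logic; they
# are not the ReliefF / INCA selectors used in the paper.
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))          # placeholder L-tetrolet features
y = np.tile(np.arange(6), 20)            # six sleep stages (placeholder labels)

score_a = mutual_info_classif(X, y, random_state=0)  # stand-in for ReliefF weights
score_b = f_classif(X, y)[0]                           # stand-in for INCA ranking

# Keep a feature only if it clears the threshold under BOTH selectors.
threshold = 0.5
keep = (score_a >= np.quantile(score_a, threshold)) & (score_b >= np.quantile(score_b, threshold))
X_sel = X[:, keep]

cubic_svm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
print("selected features:", int(keep.sum()))
print("10-fold CV accuracy:", cross_val_score(cubic_svm, X_sel, y, cv=10).mean())
```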

13.
PDA J Pharm Sci Technol ; 76(6): 485-496, 2022.
Article in English | MEDLINE | ID: mdl-35613741

ABSTRACT

Interventions performed by personnel during an aseptic process can be a key source of microbiological contamination of sterile biopharmaceutical products, irrespective of the type of manufacturing system used. Understanding the relative risk of this source of contamination provides valuable information for decisions on the design, qualification, validation, operation, monitoring, and evaluation of the aseptic process. These decisions can be used to improve the aseptic process and provide assurance of the sterility of the products. To achieve these goals, an assessment of the contamination risk is needed, and this risk assessment should be objective, accurate, and useful. This article presents the philosophy of an Intervention Risk Evaluation Model (IREM) and an objective, accurate, and useful method for determining intervention risk. The IREM uses a key-word approach to identify, obtain, measure, and evaluate intervention risk factors. This article presents a general discussion of the method, with a case study to illustrate the development of the model, while subsequent parts will focus on the application of this model with practical examples. This not only brings objectivity to the entire process but also develops awareness of the associated risks among shop-floor operators, which can lead to a reduction of the overall risk level of the process and an improvement in the sterility assurance level.


Subjects
Drug Contamination, Infertility, Humans, Drug Contamination/prevention & control, Risk Assessment/methods, Risk Factors
14.
Comput Biol Med ; 145: 105464, 2022 06.
Article in English | MEDLINE | ID: mdl-35390746

ABSTRACT

BACKGROUND: Artificial intelligence technologies for the classification/detection of COVID-19-positive cases suffer from poor generalizability. Moreover, accessing and preparing another large dataset is not always feasible and is time-consuming. Several studies have combined smaller COVID-19 CT datasets into "supersets" to maximize the number of training samples. This study aims to assess generalizability by splitting datasets into different portions based on 3D CT images using deep learning. METHOD: Two large datasets, comprising 1110 3D CT images, were split into five segments of 20% each. The first 20% segment of each dataset was separated as a holdout test set. 3D-CNN training was performed with the remaining 80% of each dataset. Two small external datasets were also used to independently evaluate the trained models. RESULTS: The combination of 80% of each dataset achieved an accuracy of 91% on the Iranmehr and 83% on the Moscow holdout test datasets. Results indicated that 80% of the primary datasets is adequate for fully training a model. Additional fine-tuning using 40% of a secondary dataset helps the model generalize to a third, unseen dataset. The highest accuracy achieved through transfer learning was 85% on the LDCT dataset and 83% on the Iranmehr holdout test set when retrained on 80% of the Iranmehr dataset. CONCLUSION: While the combination of both datasets produced the best results, different combinations and transfer learning still produced generalizable results. Adopting the proposed methodology may help to obtain satisfactory results in the case of limited external datasets.
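
The partitioning scheme (five 20% segments per dataset, the first segment held out, the remaining 80% used for training) can be sketched in a few lines; the scan count is taken from the abstract, and everything else is a placeholder:

```python
# Sketch of the data-partitioning scheme described above: shuffle indices, cut them
# into five equal segments, hold the first segment out for testing, and pool the
# remaining 80% for training / fine-tuning.
import numpy as np

def split_into_segments(n_items, n_segments=5, seed=0):
    """Shuffle indices and split them into roughly equal-sized segments."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    return np.array_split(idx, n_segments)

n_scans = 1110                            # total 3D CT volumes in the combined datasets
segments = split_into_segments(n_scans)

holdout_idx = segments[0]                 # first 20%: never used for training
train_idx = np.concatenate(segments[1:])  # remaining 80%: training / fine-tuning pool

print(f"holdout: {len(holdout_idx)} scans, training pool: {len(train_idx)} scans")
```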


Subjects
COVID-19, Deep Learning, Artificial Intelligence, COVID-19/diagnostic imaging, Humans, Neural Networks, Computer, Tomography, X-Ray Computed/methods
15.
Comput Biol Med ; 145: 105407, 2022 06.
Article in English | MEDLINE | ID: mdl-35349801

ABSTRACT

Heart Rate Variability (HRV) is a good predictor of human health because the heart rhythm is modulated by a wide range of physiological processes. This statement embodies both challenges to and opportunities for HRV analysis. Opportunities arise from the wide-ranging applicability of HRV analysis for disease detection. The availability of modern high-quality sensors and the low data rate of heart rate signals make HRV easy to measure, communicate, store, and process. However, there are also significant obstacles that prevent a wider use of this technology. HRV signals are both nonstationary and nonlinear and, to the human eye, they appear noise-like. This makes them difficult to analyze and indeed the analysis findings are difficult to explain. Moreover, it is difficult to discriminate between the influences of different complex physiological processes on the HRV. These difficulties are compounded by the effects of aging and the presence of comorbidities. In this review, we have looked at scientific studies that have addressed these challenges with advanced signal processing and Artificial Intelligence (AI) methods.


Subjects
Artificial Intelligence, Electrocardiography, Electrocardiography/methods, Heart Rate/physiology, Humans, Signal Processing, Computer-Assisted
16.
Article in English | MEDLINE | ID: mdl-35206124

ABSTRACT

Mask usage is one of the most important precautions to limit the spread of COVID-19. Therefore, hygiene rules enforce the correct use of face coverings. Automated mask usage classification might be used to improve compliance monitoring. This study deals with the problem of inappropriate mask use. To address that problem, 2075 face mask usage images were collected. The individual images were labeled as mask, no mask, or improper mask. Based on these labels, the following three cases were created: Case 1: mask versus no mask versus improper mask; Case 2: mask versus no mask + improper mask; and Case 3: mask versus no mask. These data were used to train and test a hybrid deep feature-based masked face classification model. The presented method comprises three primary stages: (i) pre-trained ResNet101 and DenseNet201 were used as feature generators, each extracting 1000 features from an image; (ii) the most discriminative features were selected using an improved RelieF selector; and (iii) the chosen features were used to train and test a support vector machine classifier. The resulting model attained classification accuracy rates of 95.95%, 97.49%, and 100.0% on Case 1, Case 2, and Case 3, respectively. These high accuracy values indicate that the proposed model is fit for a practical trial to detect appropriate face mask use in real time.
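
A sketch of stages (i) and (iii), assuming the weight-enum API of recent torchvision releases (older releases use a `pretrained=True` flag instead); the improved RelieF step is omitted, and the usage lines at the end are illustrative only:

```python
# Concatenate the 1000-dimensional ImageNet-head outputs of pretrained ResNet101
# and DenseNet201 for each image (2000 deep features), to be fed to an SVM.
# Weight-enum names follow recent torchvision releases and may differ in older ones.
import torch
from torchvision import models, transforms

resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT).eval()
densenet = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(pil_image):
    """Return the concatenated 1000-d outputs of both backbones (2000 features)."""
    x = preprocess(pil_image).unsqueeze(0)
    return torch.cat([resnet(x), densenet(x)], dim=1).squeeze(0).numpy()

# Given lists `train_images` (PIL images) and `train_labels`, training might look like:
# from sklearn.svm import SVC
# X = [deep_features(img) for img in train_images]
# clf = SVC(kernel="linear").fit(X, train_labels)
```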


Subjects
COVID-19, Masks, COVID-19/prevention & control, Humans, SARS-CoV-2, Support Vector Machine
17.
Stat Methods Med Res ; 31(5): 917-927, 2022 05.
Article in English | MEDLINE | ID: mdl-35133933

ABSTRACT

The proportion of non-differentially expressed genes is an important quantity in microarray data analysis, and an appropriate estimate of it is used to construct adaptive multiple testing procedures. Most estimators of the proportion of true null hypotheses based on thresholding, maximum likelihood, and density estimation approaches assume independence among the gene expressions. However, a sparse dependence structure is natural when modelling associations in microarray gene expression data, and it is therefore necessary to accommodate sparse dependence within the framework of existing estimators. We propose a clustering-based method that groups genes that are not co-expressed, using the estimated high-dimensional correlation structure under a sparsity assumption as the dissimilarity matrix. This method is applied to three existing estimators of the proportion of true null hypotheses. An extensive simulation study shows that the proposed method improves an existing estimator by making it less conservative and the corresponding adaptive Benjamini-Hochberg algorithm more powerful. The proposed method is applied to a microarray gene expression dataset of colorectal cancer patients, and the results show a gain in the number of differentially expressed genes. The R code is available at https://github.com/aniketstat/Proportiontion-of-true-null-under-sparse-dependence-2021.
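
An adaptive Benjamini-Hochberg procedure of the kind the abstract refers to can be sketched as follows; Storey's lambda estimator is used here purely as a simple stand-in for the clustering-based estimators of the proportion of true nulls:

```python
# Adaptive Benjamini-Hochberg: estimate the proportion of true nulls (pi0), then
# run the BH step-up procedure at the adjusted level alpha / pi0. Storey's
# estimator below is a stand-in for the paper's clustering-based estimators.
import numpy as np

def storey_pi0(pvalues, lam=0.5):
    """Estimate the proportion of true nulls from the p-values above lambda."""
    return min(1.0, np.mean(pvalues > lam) / (1.0 - lam))

def adaptive_bh(pvalues, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under adaptive BH."""
    m = len(pvalues)
    pi0 = storey_pi0(pvalues)
    order = np.argsort(pvalues)
    thresholds = (np.arange(1, m + 1) / m) * (alpha / pi0)
    below = pvalues[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True           # reject the k smallest p-values
    return rejected

rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=900), rng.beta(0.5, 20, size=100)])  # synthetic gene p-values
print("rejections:", int(adaptive_bh(pvals).sum()))
```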


Subjects
Algorithms, Gene Expression Profiling, Computer Simulation, Gene Expression Profiling/methods, Humans, Oligonucleotide Array Sequence Analysis/methods
18.
Comput Methods Programs Biomed ; 216: 106677, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35139459

ABSTRACT

BACKGROUND AND OBJECTIVES: Photoplethysmography (PPG) measures the amount of light absorbed by blood vessels, blood, and tissue, which can, in turn, be translated into various measurements such as variations in blood flow volume, heart rate variability, and blood pressure. Hence, PPG signals can provide a wide variety of biological information that can be useful for the detection and diagnosis of various health problems. In this review, we are interested in the possible health disorders that can be detected using PPG signals. METHODS: We applied the PRISMA guidelines to systematically search various journal databases and identified 43 PPG studies that fit the criteria of this review. RESULTS: Twenty-five health issues were identified from these studies and classified into six categories: cardiac, blood pressure, sleep health, mental health, diabetes, and miscellaneous. The diagnoses in these PPG studies were performed via machine learning, deep learning, and statistical routes. The studies were reviewed and summarized. CONCLUSIONS: We identified limitations such as poor standardization of sampling frequencies and a lack of publicly available PPG databases. We urge that future work consider creating more publicly available databases so that a wide spectrum of health problems can be covered. We also want to promote the use of PPG signals as a potential precision medicine tool in both ambulatory and hospital settings.


Subjects
Machine Learning, Photoplethysmography, Blood Pressure, Delivery of Health Care, Heart Rate, Signal Processing, Computer-Assisted
19.
J Biomol Struct Dyn ; 40(7): 2893-2907, 2022 04.
Article in English | MEDLINE | ID: mdl-33179569

ABSTRACT

A multi-omics-based approach targeting plant-based natural products from Thumbai (Leucas aspera), an important yet untapped source of many therapeutic agents for a myriad of immunological conditions and genetic disorders, was conceptualized to reconnoiter its potential biomedical applications. A library of 79 compounds from this plant was created, out of which 9 compounds qualified on the pharmacokinetic parameters. A reverse pharmacophore technique was executed for target fishing of the screened compounds, through which the renin receptor (ATP6AP2) and thymidylate kinase (DTYMK) were identified as potential targets. Network biology approaches were used to comprehend and validate the functional, biochemical, and clinical relevance of the targets. The target-ligand interactions and subsequent stability parameters at the molecular scale were investigated using multiple strategies, including molecular modeling, pharmacophore approaches, and molecular dynamics simulation. Herein, isololiolide and 4-hydroxy-2-methoxycinnamaldehyde were substantiated as the lead molecules, exhibiting comparatively the best binding affinity against the two putative protein targets. These natural lead products from L. aspera and their combinatorial effects may have plausible medical applications in a wide variety of neurodegenerative, genetic, and developmental disorders. The lead molecules also show promise as alternatives in diagnostics and therapeutics through immuno-modulation targeting natural killer T-cell function in transplantation-related pathogenesis, autoimmune disorders, and other immunological disorders. Communicated by Ramaswamy H. Sarma.


Subjects
Biological Products, Natural Killer T-Cells, Biological Products/pharmacology, Lamiaceae, Molecular Docking Simulation, Molecular Dynamics Simulation
20.
Sensors (Basel) ; 21(23)2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34884045

ABSTRACT

The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior substantiation of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US) using AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review on state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.


Subjects
COVID-19, Pandemics, Artificial Intelligence, Humans, SARS-CoV-2, Tomography, X-Ray Computed