Results 1 - 20 of 76
1.
Funct Integr Genomics; 23(1): 47, 2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36692535

ABSTRACT

Climate change seriously impacts global agriculture, with rising temperatures directly affecting yield. Vegetables are an essential part of the daily human diet and thus hold particular importance among agricultural crops. The human population is growing steadily, so alternative approaches are needed to maximize the harvestable yield of vegetables. Rising temperatures directly affect plants' biochemical and molecular processes, significantly impacting quality and yield. Breeding climate-resilient crops with good yields takes a long time and substantial breeding effort. However, with the advent of new omics technologies, such as genomics, transcriptomics, proteomics, and metabolomics, the efficiency and efficacy of unearthing information on pathways associated with high-temperature stress resilience have improved in many vegetable crops. Besides omics, the use of genomics-assisted breeding and new breeding approaches such as gene editing and speed breeding allows the creation of modern vegetable cultivars that are more resilient to high temperatures. Collectively, these approaches will shorten the time needed to create and release novel vegetable varieties to meet growing demands for productivity and quality. This review discusses the effects of heat stress on vegetables and highlights recent research on how omics and genome editing can produce temperature-resilient vegetables more efficiently and faster.


Subject(s)
Plant Breeding; Vegetables; Humans; Vegetables/genetics; Crops, Agricultural/genetics; Genomics; Proteomics
2.
Mol Biol Rep; 50(8): 6783-6793, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37392286

ABSTRACT

BACKGROUND: Bacterial diseases are a major threat to tomato production. During infection, pathogens alter the biochemical, oxidative, and molecular properties of tomato. It is therefore necessary to study the antioxidant enzymes, oxidation state, and genes involved during bacterial infection in tomato. METHODS AND RESULTS: Bioinformatic analyses were performed to assess homology, analyze the gene promoter, and determine protein structure. Antioxidant, MDA, and H2O2 responses were measured in the Falcon, Rio Grande, and Sazlica tomato cultivars. In this study, the RNA Polymerase II (RNAP) C-Terminal Domain Phosphatase-like 3 (SlCPL-3) gene was identified and characterized. It contains 11 exons and encodes two protein domains, CPDCs and BRCT. The online bioinformatic tools SOPMA and Phyre2 were used to predict secondary structure, the web-based tool CASTp was used to identify protein pockets, and NetPhos and PONDR were used to predict phosphorylation sites and protein disordered regions. Promoter analysis revealed that SlCPL-3 is involved in defense-related mechanisms. We further amplified and sequenced two regions of SlCPL-3, which showed homology to the reference tomato genome. Our results showed that SlCPL-3 was triggered during bacterial stress, with expression upregulated at different time intervals. Rio Grande showed a high level of SlCPL-3 expression after 72 hpi. Biochemical and gene expression analyses showed that, under biotic stress, the Rio Grande cultivar is more sensitive to Pst DC3000. CONCLUSION: This study lays a solid foundation for the functional characterization of the SlCPL-3 gene in tomato cultivars. These findings will support further analysis of SlCPL-3 and may aid the development of resilient tomato cultivars.


Subject(s)
Solanum lycopersicum; Solanum lycopersicum/genetics; RNA Polymerase II/genetics; Antioxidants; Phosphoric Monoester Hydrolases/genetics; Hydrogen Peroxide/metabolism; Stress, Physiological/genetics; Plant Diseases/genetics; Plant Diseases/microbiology; Gene Expression Regulation, Plant/genetics
3.
Sensors (Basel); 23(16), 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37631699

ABSTRACT

In the era of interconnected and intelligent cyber-physical systems (CPSs), preserving privacy has become a paramount concern. This paper presents a groundbreaking proof-of-concept (PoC) design that leverages consortium blockchain technology to address privacy challenges in CPSs. The proposed design introduces a novel approach to safeguarding sensitive information and ensuring data integrity while maintaining a high level of trust among stakeholders. By harnessing the power of consortium blockchain, the design establishes a decentralized and tamper-resistant framework for privacy preservation. Consortium blockchain, with its permissioned nature, provides a trusted framework for governing the network and validating transactions. By employing it, secrets in CPSs can be securely stored, shared, and accessed by authorized entities only, mitigating the risks of unauthorized access and data breaches. The proposed approach offers enhanced security, privacy preservation, increased trust and accountability, as well as interoperability and scalability. It addresses the limitations of traditional security mechanisms in CPSs and harnesses the potential of consortium blockchain to revolutionize the management of secrets, contributing to the advancement of CPS security and privacy. The effectiveness of the design is demonstrated through extensive simulations and performance evaluations. The results indicate that the proposed approach offers significant advancements in privacy protection, paving the way for secure and trustworthy cyber-physical systems in various domains.

4.
Sensors (Basel); 23(17), 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37687931

ABSTRACT

Precision medicine has emerged as a transformative approach to healthcare, aiming to deliver personalized treatments and therapies tailored to individual patients. However, the realization of precision medicine relies heavily on the availability of comprehensive and diverse medical data. In this context, blockchain-enabled federated learning, coupled with electronic medical records (EMRs), presents a groundbreaking solution to unlock revolutionary insights in precision medicine. This abstract explores the potential of blockchain technology to empower precision medicine by enabling secure and decentralized data sharing and analysis. By leveraging blockchain's immutability, transparency, and cryptographic protocols, federated learning can be conducted on distributed EMR datasets without compromising patient privacy. The integration of blockchain technology ensures data integrity, traceability, and consent management, thereby addressing critical concerns associated with data privacy and security. Through the federated learning paradigm, healthcare institutions and research organizations can collaboratively train machine learning models on locally stored EMR data, without the need for data centralization. The blockchain acts as a decentralized ledger, securely recording the training process and aggregating model updates while preserving data privacy at its source. This approach allows the discovery of patterns, correlations, and novel insights across a wide range of medical conditions and patient populations. By unlocking revolutionary insights through blockchain-enabled federated learning and EMRs, precision medicine can revolutionize healthcare delivery. This paradigm shift has the potential to improve diagnosis accuracy, optimize treatment plans, identify subpopulations for clinical trials, and expedite the development of novel therapies. Furthermore, the transparent and auditable nature of blockchain technology enhances trust among stakeholders, enabling greater collaboration, data sharing, and collective intelligence in the pursuit of advancing precision medicine. In conclusion, this abstract highlights the transformative potential of blockchain-enabled federated learning in empowering precision medicine. By unlocking revolutionary insights from diverse and distributed EMR datasets, this approach paves the way for a future where healthcare is personalized, efficient, and tailored to the unique needs of each patient.
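
The federated workflow this abstract outlines can be sketched compactly. The toy below trains a logistic-regression model across three simulated hospitals with federated averaging, appending a record of each round to a plain list that stands in for the blockchain ledger; the names, data shapes, and hashing step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local logistic-regression training on its private EMR data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return w

def federated_round(global_w, clients, ledger):
    """Each site trains locally; only weight updates leave the site.
    The 'ledger' list stands in for the blockchain's append-only record."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    new_w = np.mean(updates, axis=0)            # FedAvg aggregation
    ledger.append({"round": len(ledger), "update_hash": hash(new_w.tobytes())})
    return new_w

# Toy data: three hospitals, each holding (X, y) that is never shared raw.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
w, ledger = np.zeros(4), []
for _ in range(10):
    w = federated_round(w, clients, ledger)
```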


Subject(s)
Blockchain; Precision Medicine; Humans; Electronic Health Records; Information Dissemination; Power, Psychological
5.
Adv Anat Pathol; 29(4): 241-251, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35249993

ABSTRACT

Metastases to the kidney are rare and were historically described in autopsy series, with a reported incidence between 2.36% and 12.6%. However, in the contemporary literature, with improvements in imaging modalities (computed tomography and magnetic resonance imaging) and other health care screening services, metastatic tumors to the kidney are being diagnosed more frequently in surgical specimens. The use of needle core biopsies in the primary evaluation of renal masses has also increased the number of sampled metastases; as a result, only limited histologic material is available for evaluation in some cases, which may lead to diagnostic pitfalls. In the last decade, a few large clinical series have been published. In these series, the majority of metastatic tumors to the kidney are carcinomas, with the lung being the most common primary site. A significant number of the various tumor types with metastasis to the kidney are also associated with widespread metastases to other organs, and the renal metastasis may present several years after diagnosis of the primary tumor. The majority of secondary tumors of the kidney are asymptomatic, incidentally discovered, and solitary. A high index of suspicion for metastasis to the kidney is warranted in patients with a known progressing high-grade nonrenal malignancy and an enlarging renal lesion with minimal to no enhancement on imaging. Secondary tumors of the kidney can be accurately diagnosed by correlating histopathologic features with clinical and radiographic findings and the judicious use of ancillary studies.


Subject(s)
Carcinoma; Kidney Neoplasms; Carcinoma/pathology; Humans; Kidney/diagnostic imaging; Kidney/pathology; Kidney Neoplasms/diagnosis; Kidney Neoplasms/pathology; Tomography, X-Ray Computed
6.
Sensors (Basel); 22(8), 2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35458962

ABSTRACT

Emotions are an essential part of daily human communication. The emotional states and dynamics of the brain can be linked by electroencephalography (EEG) signals that can be used by the Brain-Computer Interface (BCI), to provide better human-machine interactions. Several studies have been conducted in the field of emotion recognition. However, one of the most important issues facing the emotion recognition process, using EEG signals, is the accuracy of recognition. This paper proposes a deep learning-based approach for emotion recognition through EEG signals, which includes data selection, feature extraction, feature selection and classification phases. This research serves the medical field, as the emotion recognition model helps diagnose psychological and behavioral disorders. The research contributes to improving the performance of the emotion recognition model to obtain more accurate results, which, in turn, aids in making the correct medical decisions. A standard pre-processed Database of Emotion Analysis using Physiological signaling (DEAP) was used in this work. The statistical features, wavelet features, and Hurst exponent were extracted from the dataset. The feature selection task was implemented through the Binary Gray Wolf Optimizer. At the classification stage, the stacked bi-directional Long Short-Term Memory (Bi-LSTM) Model was used to recognize human emotions. In this paper, emotions are classified into three main classes: arousal, valence and liking. The proposed approach achieved high accuracy compared to the methods used in past studies, with an average accuracy of 99.45%, 96.87% and 99.68% of valence, arousal, and liking, respectively, which is considered a high performance for the emotion recognition model.
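
The classification stage lends itself to a short sketch. The Keras model below is a minimal stacked Bi-LSTM of the kind the abstract describes; the layer sizes, input shape, and binary high/low target for a single dimension (e.g., valence) are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative shapes: 60 time steps of 32 extracted features per EEG window.
TIMESTEPS, FEATURES = 60, 32

model = tf.keras.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # first Bi-LSTM layer
    layers.Bidirectional(layers.LSTM(32)),                         # stacked second layer
    layers.Dense(1, activation="sigmoid"),  # high/low label for one emotion dimension
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```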


Subject(s)
Brain-Computer Interfaces; Deep Learning; Electroencephalography; Emotions; Memory, Short-Term
7.
Sensors (Basel); 22(9), 2022 Apr 28.
Article in English | MEDLINE | ID: mdl-35591061

ABSTRACT

Web applications have become ubiquitous across many business sectors due to their platform independence and low operating cost, and billions of users visit them to accomplish daily tasks. However, many of these applications are either vulnerable to web defacement attacks or created and managed by attackers, such as fraudulent and phishing websites. Detecting malicious websites is essential to prevent the spread of malware and protect end users from becoming victims. Most existing solutions, however, rely on extracting features from a website's content, which can be harmful to the detection machines themselves and is subject to obfuscation. Detecting malicious Uniform Resource Locators (URLs) is safer and more efficient than content analysis, but URL-based detection is still not well addressed due to insufficient features and inaccurate classification. This study aims to improve the accuracy of malicious URL detection by designing and developing a cyber threat intelligence-based detection model using two-stage ensemble learning. Reports from cybersecurity analysts and users around the globe can provide important information regarding malicious websites; therefore, cyber threat intelligence (CTI) features extracted from Google searches and Whois records are used to improve detection performance. The study also proposes a two-stage ensemble learning model that combines the random forest (RF) algorithm for pre-classification with a multilayer perceptron (MLP) for final decision making: the trained MLP classifier replaces the majority-voting scheme over the three trained random forest classifiers, whose aggregated probabilistic outputs serve as its input. Results show that the extracted CTI-based features with two-stage classification outperform the detection models of other studies. The proposed CTI-based detection model achieved a 7.8% accuracy improvement and a 6.7% reduction in false-positive rate compared with the traditional URL-based model.
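
A minimal sketch of the two-stage idea, assuming scikit-learn and synthetic data in place of the CTI features: three random forests pre-classify, and an MLP decides from their aggregated class probabilities.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)  # stand-in for CTI features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: three random forests pre-classify; their class probabilities are kept.
forests = [RandomForestClassifier(n_estimators=100, random_state=s).fit(X_tr, y_tr)
           for s in (1, 2, 3)]

def stack(X):
    """Aggregate the forests' probabilistic outputs into one feature vector."""
    return np.hstack([f.predict_proba(X) for f in forests])

# Stage 2: an MLP replaces majority voting, deciding from the aggregated probabilities.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(stack(X_tr), y_tr)
print("accuracy:", mlp.score(stack(X_te), y_te))
```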


Subject(s)
Machine Learning; Neural Networks, Computer; Algorithms; Computer Security; Intelligence
8.
Sensors (Basel); 22(9), 2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35590801

ABSTRACT

Data streaming applications such as the Internet of Things (IoT) require processing or prediction over sequential data from various sensors. However, most of the data are unlabeled, making it impossible to apply fully supervised learning algorithms. The online manifold regularization approach allows sequential learning from partially labeled data, which is useful in environments with scarce labeled data. Unfortunately, the manifold regularization technique does not work out of the box, as it requires determining the radial basis function (RBF) kernel width parameter. This parameter directly impacts performance, since it informs the model of the class to which each piece of data most likely belongs. The width parameter is often determined offline via hyperparameter search, which requires a vast amount of labeled data; this limits its utility in applications where collecting much labeled data is difficult, such as data stream mining. To address this issue, we propose eliminating the RBF kernel from the manifold regularization technique altogether by combining it with a prototype learning method, which uses a finite set of prototypes to approximate the entire data set. Rather than relying on the RBF kernel, this approach queries the prototype-based learner to find the most similar samples for each incoming sample, which improves its practicality. In experiments on benchmark data sets, the proposed approach learned faster and achieved higher classification performance than other manifold regularization techniques, performing well even without the RBF kernel and thereby improving the practicality of manifold regularization for semi-supervised learning.
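
The prototype idea can be sketched compactly. The class below is a much-simplified illustration, not the authors' algorithm: a finite set of prototypes tracks the stream, nearest-prototype lookup replaces the RBF-kernel neighbourhood, and labeled samples refine each prototype's running label estimate.

```python
import numpy as np

class PrototypeSemiSupervised:
    """Simplified sketch: a finite prototype set approximates the stream, and
    nearest-prototype lookup stands in for the RBF-kernel neighbourhood."""
    def __init__(self, n_prototypes=20, dim=2, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.P = rng.normal(size=(n_prototypes, dim))   # prototype positions
        self.labels = np.zeros(n_prototypes)            # running label estimates
        self.lr = lr

    def partial_fit(self, x, y=None):
        i = np.argmin(np.linalg.norm(self.P - x, axis=1))   # winning prototype
        self.P[i] += self.lr * (x - self.P[i])              # drift toward the sample
        if y is not None:                                    # labeled samples refine it
            self.labels[i] += self.lr * (y - self.labels[i])

    def predict(self, x):
        i = np.argmin(np.linalg.norm(self.P - x, axis=1))
        return int(self.labels[i] > 0.5)

# Stream usage: labels arrive only occasionally (~10% of samples).
model, rng = PrototypeSemiSupervised(), np.random.default_rng(1)
for t in range(500):
    x = rng.normal(size=2) + (2.5 if t % 2 else 0.0)    # two interleaved clusters
    y = float(t % 2) if t % 10 == 0 else None
    model.partial_fit(x, y)
print(model.predict(np.zeros(2)), model.predict(np.full(2, 2.5)))
```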


Subject(s)
Internet of Things; Supervised Machine Learning; Algorithms; Benchmarking; Data Mining
9.
Sensors (Basel); 22(7), 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35408423

ABSTRACT

A vehicular ad hoc network (VANET) is an emerging technology that improves road safety, traffic efficiency, and passenger comfort. VANET applications rely on co-operation among vehicles, which periodically share context information such as position, speed, and acceleration at a high rate due to high vehicle mobility. However, rogue nodes, which exploit this co-operativeness and share false messages, can disrupt the fundamental operations of any potential application and cause the loss of lives and property. Unfortunately, most current solutions cannot effectively detect rogue nodes due to continuous context change and their failure to consider dynamic data uncertainty during identification. Although a few context-aware solutions have been proposed for VANETs, most are data-centric: a vehicle is considered malicious if it shares false or inaccurate messages. Such a rule is fuzzy and not consistently accurate given the dynamic uncertainty of the vehicular context, which leads to a poor detection rate. To this end, this study proposes a fuzzy-based context-aware detection model to improve overall detection performance. A fuzzy inference system is constructed to evaluate vehicles based on the information they generate, and its output is used to build a dynamic context reference. Vehicles are classified as honest or rogue nodes based on the deviation of their evaluation scores from the context reference. Extensive experiments show that the proposed model outperforms state-of-the-art models, achieving a 7.88% improvement in overall performance and a 16.46% improvement in detection rate compared with the state-of-the-art model. The proposed model can be used to evict rogue nodes and thus improve the safety and traffic efficiency of crewed or uncrewed vehicles designed for different environments: land, naval, or air.
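
As a loose illustration of the fuzzy evaluation step, the sketch below hand-rolls triangular memberships and a two-rule Mamdani-style AND, then flags nodes whose scores deviate from a running context reference. The variables, thresholds, and rules are invented for illustration and are not the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def plausibility(speed_err, pos_err):
    """Tiny Mamdani-style rule: a message is trustworthy when both the reported
    speed and position deviate little from what physics predicts."""
    speed_ok = tri(speed_err, -1, 0, 5)      # membership of 'small speed error' (m/s)
    pos_ok = tri(pos_err, -1, 0, 10)         # membership of 'small position error' (m)
    return min(speed_ok, pos_ok)             # AND of the two antecedents

# Context reference: mean score of the neighbourhood; a large negative deviation
# from it flags a rogue node.
scores = [plausibility(s, p) for s, p in [(0.5, 2.0), (1.0, 4.0), (8.0, 30.0)]]
reference = np.mean(scores)
flags = ["rogue" if sc < reference - 0.2 else "honest" for sc in scores]
print(list(zip(scores, flags)))
```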

10.
Sensors (Basel); 22(10), 2022 May 10.
Article in English | MEDLINE | ID: mdl-35632016

ABSTRACT

The Internet of Things (IoT) is a widely used technology in automated network systems across the world, and its impact on industry has grown in recent years. Many IoT nodes collect, store, and process personal data, an ideal target for attackers. Several researchers have worked on this problem and presented many intrusion detection systems (IDSs), but existing systems have difficulty improving performance and identifying subcategories of cyberattacks. This paper proposes a deep-convolutional-neural-network (DCNN)-based IDS consisting of two convolutional layers and three fully connected dense layers, aiming to improve performance while reducing computational power. Experiments were conducted on the IoTID20 dataset, and the performance of the proposed model was analyzed with several metrics, such as accuracy, precision, recall, and F1-score. A number of optimization techniques were applied to the proposed model, among which Adam, AdaMax, and Nadam performed best. In addition, the proposed model was compared with various advanced deep learning (DL) and traditional machine learning (ML) techniques. All experimental analyses indicate that the accuracy of the proposed approach is high and more robust than existing DL-based algorithms.
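
The stated architecture (two convolutional layers, three dense layers) translates directly into a short Keras sketch; the input size, filter counts, and class count are placeholders, not the IoTID20 configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_FEATURES, N_CLASSES = 80, 10   # illustrative sizes for IoTID20-style input

# Two convolutional layers followed by three fully connected layers,
# mirroring the architecture the abstract describes.
model = tf.keras.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(),   # Adam/AdaMax/Nadam performed best
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```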


Subject(s)
Internet of Things; Algorithms; Machine Learning; Neural Networks, Computer
11.
Int J Mol Sci; 23(21), 2022 Oct 30.
Article in English | MEDLINE | ID: mdl-36362018

ABSTRACT

Determining and modeling the possible behaviour and actions of molecules requires investigating the basic structural features and physicochemical properties that determine their behaviour during chemical, physical, biological, and environmental processes. Computational approaches such as machine learning methods are alternatives for predicting the physicochemical properties of molecules based on their structures. However, the limited accuracy and high error rates of such predictions restrict their use. In this paper, a novel technique based on a deep learning convolutional neural network (CNN) for predicting the bioactivity of chemical compounds is proposed and developed. The molecules are represented in the new matrix format Mol2mat, a molecular matrix representation adapted from the well-known 2D-fingerprint descriptors. To evaluate the performance of the proposed methods, a series of experiments were conducted using two standard datasets, the MDL Drug Data Report (MDDR) and the Sutherland dataset, comprising 10 homogeneous and 14 heterogeneous activity classes. After analysing eight fingerprints, all probable combinations of the five best descriptors were investigated. The results showed that a combination of three fingerprints, ECFP4, EPFP4, and ECFC4, along with a CNN activity prediction process, achieved the highest performance of 98% AUC when compared with the state-of-the-art ML algorithms NaiveB, LSVM, and RBFN.
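
As a hedged sketch of the input representation, the snippet below computes an ECFP4-style Morgan fingerprint with RDKit and reshapes it into a square matrix suitable for a 2D CNN; the paper's actual Mol2mat layout is not reproduced here.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp_matrix(smiles, n_bits=1024):
    """Morgan bits (radius 2, i.e., an ECFP4 analogue) reshaped into a 32x32
    matrix, a rough stand-in for the paper's Mol2mat representation."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr.reshape(32, 32)

m = fp_matrix("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as a toy input for a 2D CNN
print(m.shape, int(m.sum()))
```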


Subject(s)
Machine Learning; Neural Networks, Computer; Algorithms
12.
Sensors (Basel); 22(1), 2021 Dec 28.
Article in English | MEDLINE | ID: mdl-35009725

ABSTRACT

Due to the wide availability and usage of connected devices in Internet of Things (IoT) networks, the number of attacks on these networks is continually increasing. A particularly serious and dangerous type of attack in the IoT environment is the botnet attack, where attackers can control IoT systems to assemble enormous networks of "bot" devices that carry out malicious activities. To detect this type of attack, several Intrusion Detection Systems (IDSs) based on machine learning and deep learning methods have been proposed for IoT networks. As the main characteristics of IoT systems include limited battery power and processor capacity, maximizing the efficiency of intrusion detection systems for IoT networks remains a research challenge: methods must be efficient and effective, using lower computational time while achieving high detection rates. This paper proposes an aggregated mutual information-based feature selection approach with machine learning methods to enhance detection of IoT botnet attacks. The N-BaIoT benchmark dataset, comprising real traffic data gathered from nine commercial IoT devices, was used to detect botnet attack types; the dataset supports both binary and multi-class classification. The feature selection method incorporates the Mutual Information (MI) technique, Principal Component Analysis (PCA), and the ANOVA F-test at a finely granulated detection level to select the features most relevant to improving the performance of IoT botnet classifiers. In the classification step, several ensemble and individual classifiers were used, including Random Forest (RF), XGBoost (XGB), Gaussian Naïve Bayes (GNB), k-Nearest Neighbor (k-NN), Logistic Regression (LR), and Support Vector Machine (SVM). The experimental results showed the efficiency and effectiveness of the proposed approach, which outperformed other techniques across various evaluation metrics.
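
A compact scikit-learn sketch of the aggregated feature-selection idea, with synthetic data in place of N-BaIoT and a simple sum of normalized scores standing in for the paper's aggregation rule, which the abstract does not specify.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8,
                           random_state=0)   # stand-in for N-BaIoT traffic features

# Aggregate three views of feature relevance: mutual information,
# ANOVA F scores, and PCA loading magnitudes.
mi = mutual_info_classif(X, y, random_state=0)
f_scores, _ = f_classif(X, y)
load = np.abs(PCA(n_components=5).fit(X).components_).sum(axis=0)
norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
relevance = norm(mi) + norm(f_scores) + norm(load)

top = np.argsort(relevance)[-10:]            # keep the ten most relevant features
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(rf, X[:, top], y, cv=5).mean())
```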


Subject(s)
Internet of Things; Bayes Theorem; Machine Learning; Principal Component Analysis; Support Vector Machine
13.
Sensors (Basel); 21(23), 2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34884022

ABSTRACT

Wireless Sensor Networks (WSNs) have been the focus of significant research and development attention due to their applications in collecting data from fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are usually unreliable and inaccurate due to the imperfect nature of WSNs, so identifying misbehaviours or anomalies in the network is important for reliable and secure functioning. Due to resource constraints, however, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims to design and develop a lightweight anomaly detection scheme that reduces computational complexity and communication overhead and improves memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction concepts were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabelled and multivariate data. Various OCSVM formulations were investigated, and the Centred-Ellipsoid kernel was adopted as the most effective among the studied formulations. To decrease computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed scheme. Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared with the few existing schemes evaluated: it achieved accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
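
A minimal sketch of the detection pipeline, assuming scikit-learn: IncrementalPCA stands in for CCIPCA (which scikit-learn does not ship), and a one-class SVM is fit on normal readings only.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 10))    # in-range sensor readings
anomalous = rng.normal(5, 1, size=(20, 10))  # drifted/faulty readings

# Reduce dimensions first (IncrementalPCA as a stand-in for CCIPCA),
# then fit the one-class SVM on normal data only.
ipca = IncrementalPCA(n_components=3).fit(normal)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(ipca.transform(normal))

pred = ocsvm.predict(ipca.transform(np.vstack([normal[:5], anomalous[:5]])))
print(pred)   # +1 = normal, -1 = anomaly
```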


Subject(s)
Computer Communication Networks; Wireless Technology; Algorithms; Principal Component Analysis; Support Vector Machine
14.
Sensors (Basel); 21(24), 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34960516

ABSTRACT

This study presents a novel feature-engineered natural gradient descent ensemble-boosting (NGBoost) machine-learning framework for detecting fraud in power consumption data. The proposed framework was executed sequentially in three stages: data pre-processing, feature engineering, and model evaluation. It initially utilized a random forest-based imputation technique to impute the missing entries in the acquired smart meter dataset. In the second phase, the majority weighted minority oversampling technique (MWMOTE) algorithm was used to avoid an unequal distribution of data samples among the classes. The time-series feature-extraction library and the whale optimization algorithm were utilized to extract and select the most relevant features from the consumers' kWh readings. Once the most relevant features were acquired, model training and testing were carried out using the NGBoost algorithm to classify consumers into two distinct categories ("Healthy" and "Theft"). Finally, each input feature's impact (positive or negative) on predicting the target variable was identified with the tree SHAP additive-explanations algorithm. The proposed framework achieved an accuracy of 93%, recall of 91%, and precision of 95%, exceeding all competing models and thus validating its efficacy and significance in the studied field of research.
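
The training stage can be sketched with the open-source ngboost package. In this illustrative sketch, SMOTE stands in for MWMOTE (which imbalanced-learn does not provide), the data are synthetic, and the imputation, tsfresh extraction, whale-optimization selection, and SHAP stages are omitted for brevity.

```python
from imblearn.over_sampling import SMOTE            # stand-in for MWMOTE
from ngboost import NGBClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data standing in for smart-meter kWh features ("Theft" is rare).
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # rebalance classes
ngb = NGBClassifier(n_estimators=300, verbose=False).fit(X_bal, y_bal)

pred = ngb.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```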


Subject(s)
Algorithms; Machine Learning; Electricity; Fraud; Time Factors
15.
J Urol; 204(4): 734-740, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32347780

ABSTRACT

PURPOSE: Accurate preoperative staging of prostate cancer is essential for treatment planning, and conventional imaging is limited in the detection of metastases. Our primary aim was to determine whether [18F]fluciclovine positron emission tomography/computerized tomography is an early indicator of subclinical metastasis among patients with high risk prostate cancer. MATERIALS AND METHODS: A total of 68 patients with unfavorable intermediate to very high risk prostate cancer without systemic disease on conventional imaging were recruited before robotic radical prostatectomy with extended pelvic lymph node dissection. The diagnostic performance of [18F]fluciclovine positron emission tomography/computerized tomography and conventional imaging for detection of metastatic disease, and the correlation of positivity with node and metastatic deposit size, were determined. RESULTS: Overall 57 of 68 patients completed the protocol, of whom 31 had nodal metastasis on histology. [18F]Fluciclovine positron emission tomography/computerized tomography sensitivity and specificity in detecting nodal metastasis were 55.3% and 84.8% per patient, and 54.8% and 96.4% per region (right and left pelvis, presacral and nonregional), respectively. Compared with conventional imaging, [18F]fluciclovine positron emission tomography/computerized tomography had significantly higher sensitivity on patient based (55.3% vs 33.3%, p <0.01) and region based (54.8% vs 19.4%, p <0.01) analysis, detecting metastasis in 7 more patients and 22 more regions, with similarly high specificity. Four additional patients had distant disease or other cancer detected on [18F]fluciclovine positron emission tomography/computerized tomography, which precluded surgery. Detection of metastasis was related to the size of metastatic deposits within lymph nodes and overall metastatic burden. CONCLUSIONS: [18F]Fluciclovine positron emission tomography/computerized tomography detects occult metastases not identified on conventional imaging and may help guide treatment decisions and lymph node dissection due to its high specificity for metastatic disease.


Subject(s)
Carboxylic Acids; Cyclobutanes; Positron Emission Tomography Computed Tomography; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Aged; Humans; Male; Middle Aged; Neoplasm Metastasis; Neoplasm Staging; Positron Emission Tomography Computed Tomography/methods; Preoperative Period; Prospective Studies; Prostatic Neoplasms/surgery
16.
Int J Qual Health Care; 32(Supplement_1): 84-88, 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32026936

ABSTRACT

This paper examines the principles of benchmarking in healthcare and how benchmarking can contribute to practice improvement and improved health outcomes for patients. It uses the Deepening our Understanding of Quality in Australia (DUQuA) study published in this Supplement, and DUQuA's predecessor in Europe, the Deepening our Understanding of Quality improvement in Europe (DUQuE) study, as models. Benchmarking compares the performance of institutions or individuals using agreed indicators or standards. The rationale is that institutions will respond positively to being identified as low outliers or will want to become or remain high performers, or both, and that patients will be empowered to seek care at institutions that perform well. Benchmarking often begins with a conceptual framework based on a logic model; such a framework can drive the selection of indicators to measure performance, rather than selection being based on what is easy to measure. A Donabedian range of indicators can be chosen, including structure, process, and outcomes, created around multiple domains or specialties. Indicators based on continuous variables allow organizations to understand where their performance sits within a population, and the indicators' interdependencies and associations can be examined. Benchmarking should optimally target providers, in order to drive them towards improvement. The DUQuA and DUQuE studies both incorporated some of these principles into their design, thereby creating a model of how to incorporate robust benchmarking into large-scale health services research.


Subject(s)
Benchmarking/methods; Health Services Research/methods; Quality Indicators, Health Care; Australia; Benchmarking/standards; Hospitals, Public/standards; Humans; Patient Safety; Quality Improvement/organization & administration
17.
Molecules; 26(1), 2020 Dec 29.
Article in English | MEDLINE | ID: mdl-33383976

ABSTRACT

Virtual screening (VS) is a computational practice applied in drug discovery research. VS is widely applied in computer-based searches for new lead molecules based on molecular similarity searching. In chemical databases, similarity searching identifies molecules that are similar to a user-defined reference structure, evaluated by quantitative measures of intermolecular structural similarity. Among existing approaches, 2D fingerprints are widely used, with the similarity of a reference structure and a database structure measured by the computation of association coefficients. Most classical similarity approaches assume that molecular features carry the same weight whether or not they relate to biological activity. However, it has been found that, based on the chemical structure, some distinguishable features are more important than others; this difference should be taken into consideration by placing more weight on each important fragment. The main aim of this research is to enhance the performance of similarity searching by using multiple descriptors. In this paper, a deep learning method known as deep belief networks (DBN) has been used to reweight the molecule features. Several descriptors were used for the MDL Drug Data Report (MDDR) dataset, each representing different important features. The proposed method was implemented with each descriptor individually to select the important features based on new weights with a lower error rate, and the new features from all descriptors were then merged to produce a new descriptor for similarity searching. Extensive experiments show that the proposed method outperformed several existing benchmark similarity methods, including Bayesian inference networks (BIN), the Tanimoto similarity method (TAN), the adapted similarity measure of text processing (ASMTP), and the quantum-based similarity method (SQB). The proposed multi-descriptor method based on a stack of deep belief networks (SDBN) demonstrated higher accuracy than existing methods on structurally heterogeneous datasets.
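
As a rough sketch of the reweighting idea, two stacked scikit-learn RBMs stand in for the deep belief network, and a continuous Tanimoto coefficient ranks a toy library against a reference in the learned feature space. This illustrates the flavour of the approach, not the SDBN method itself.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
fps = (rng.random((200, 256)) < 0.1).astype(float)   # toy binary fingerprints

# Two stacked RBMs as a small stand-in for a deep belief network; their
# hidden activations act as reweighted molecular features.
dbn = Pipeline([("rbm1", BernoulliRBM(n_components=128, random_state=0)),
                ("rbm2", BernoulliRBM(n_components=64, random_state=0))]).fit(fps)
Z = dbn.transform(fps)

def tanimoto(a, b):
    """Continuous Tanimoto coefficient on the reweighted features."""
    return float(a @ b / (a @ a + b @ b - a @ b))

ranking = np.argsort([-tanimoto(Z[0], z) for z in Z])   # rank library vs. reference Z[0]
print(ranking[:5])
```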


Subject(s)
Deep Learning; Drug Design; Drug Discovery/methods; Pharmaceutical Preparations/chemistry; Bayes Theorem; Cheminformatics/methods; Databases, Pharmaceutical; Neural Networks, Computer; Principal Component Analysis
18.
Transpl Infect Dis; 21(1): e13005, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30276937

ABSTRACT

Primary effusion lymphoma (PEL) is a rare mature B-cell non-Hodgkin's lymphoma arising in body cavities and presenting with effusions. It has been described predominantly in patients with impaired immunity from the acquired immunodeficiency syndrome and is associated with the Human Herpesvirus-8 (HHV-8). Seldom has PEL been diagnosed in persons negative for the human immunodeficiency virus (HIV), and in such cases it has occurred primarily in the setting of posttransplant immunosuppression. We report an instructive case of a Caribbean-American HIV-negative orthotopic heart transplant recipient with a history of HHV-8-associated Kaposi's sarcoma who developed HHV-8 viremia and PEL of the pleural space early in the posttransplant course. This case highlights the importance of considering PEL in the differential diagnosis of a new pleural effusion in a transplant recipient at risk for HHV-8-associated disease.


Subject(s)
Heart Transplantation/adverse effects; Herpesvirus 8, Human/isolation & purification; Lymphoma, Primary Effusion/diagnosis; Postoperative Complications/diagnosis; Viremia/diagnosis; Diagnosis, Differential; Humans; Immunosuppression Therapy/adverse effects; Immunosuppression Therapy/methods; Lymphoma, Primary Effusion/immunology; Lymphoma, Primary Effusion/pathology; Lymphoma, Primary Effusion/virology; Male; Middle Aged; Neoplasm Recurrence, Local/immunology; Neoplasm Recurrence, Local/pathology; Neoplasm Recurrence, Local/virology; Pleural Cavity/pathology; Pleural Cavity/virology; Postoperative Complications/immunology; Postoperative Complications/virology; Sarcoma, Kaposi/immunology; Sarcoma, Kaposi/pathology; Sarcoma, Kaposi/virology; Skin Neoplasms/immunology; Skin Neoplasms/pathology; Skin Neoplasms/virology; Viremia/immunology; Viremia/virology
19.
J Comput Aided Mol Des; 31(4): 365-378, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28220440

ABSTRACT

Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. The tools of virtual screening are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we developed a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criterion draws an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criterion in LBVS employs quantum concepts at three levels: first, at the representation level, a new framework of molecular representation connects molecular compounds with a mathematical quantum space; second, the similarity between chemical libraries and references is estimated using a quantum-based similarity searching method; finally, the molecules are ranked using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for ranking molecular compounds.
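
To give the flavour of quantum-style similarity (not the paper's QPRP criterion, which is more involved), the sketch below encodes fingerprints as unit-norm amplitude vectors and ranks a toy library by the squared inner product (fidelity) with a reference structure.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = (rng.random(128) < 0.15).astype(float)            # reference fingerprint
library = (rng.random((50, 128)) < 0.15).astype(float)  # toy screening library

def fidelity(a, b):
    """Quantum-flavoured similarity: treat bit vectors as unit-norm amplitude
    states and take the squared inner product |<a|b>|^2."""
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return float((a @ b) ** 2)

order = np.argsort([-fidelity(ref, m) for m in library])  # rank by decreasing score
print(order[:10])
```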


Subject(s)
Drug Discovery/methods; Small Molecule Libraries/chemistry; Algorithms; Computer Simulation; Humans; Ligands; Models, Molecular; Probability; Proteins/metabolism; Quantum Theory; Small Molecule Libraries/pharmacology