ABSTRACT
Saffron is the world's most expensive and most storied crop, widely used in cuisine, medicine, and cosmetics, and global demand for it is rising steadily. Despite this massive demand, saffron cultivation has decreased dramatically, and the crop is now grown in only a few countries. Saffron is an environment-sensitive crop affected by numerous factors, including rapid climate change, light intensity, pH level, soil moisture, salinity, and inappropriate cultivation techniques, and many of these environmental factors cannot be controlled in traditional farming. Although innovative technologies such as Artificial Intelligence and the Internet of Things (IoT) have been used to enhance saffron growth, there is still a dire need for a system that can overcome the primary issues affecting it. In this research, we propose an IoT-based greenhouse system that controls numerous agronomic variables such as corm size, temperature, humidity, pH level, soil moisture, salinity, and water availability. The proposed architecture monitors and controls environmental factors automatically and sends real-time data from the greenhouse to the microcontroller. The sensed values of the agronomic variables are compared with threshold values and saved in the cloud, from which they are sent to the farm owner for efficient management. The experimental results reveal that the proposed system can maximize saffron production in the greenhouse by controlling environmental factors according to the crop's needs.
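The abstract describes a sense-compare-actuate loop: sensor readings are checked against thresholds, out-of-range variables trigger actuation, and all readings are pushed to the cloud for the farm owner. The paper's actual set-points and interfaces are not given here, so the following Python sketch is purely illustrative; the sensor names, threshold values, and the read_sensors/actuate/push_to_cloud stubs are all hypothetical.

```python
import time

# Hypothetical set-points; the paper does not publish its actual thresholds.
THRESHOLDS = {
    "temperature_c": (15.0, 25.0),   # acceptable (min, max)
    "humidity_pct":  (40.0, 70.0),
    "soil_ph":       (6.0, 8.0),
    "soil_moisture": (30.0, 60.0),
    "salinity_ds_m": (0.0, 4.0),
}

def read_sensors():
    """Stand-in for reads from the greenhouse sensor array."""
    return {"temperature_c": 22.1, "humidity_pct": 55.0,
            "soil_ph": 7.2, "soil_moisture": 45.0, "salinity_ds_m": 1.1}

def actuate(variable, value, low, high):
    """Stand-in for driving fans, pumps, or valves via the microcontroller."""
    action = "raise" if value < low else "lower"
    print(f"{variable}={value}: outside [{low}, {high}], {action} it")

def push_to_cloud(readings):
    """Stand-in for the cloud upload that notifies the farm owner."""
    print("uploaded:", readings)

for _ in range(3):  # a deployed controller would loop indefinitely
    readings = read_sensors()
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            actuate(name, value, low, high)
    push_to_cloud(readings)
    time.sleep(60)  # poll once per minute
```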
Subjects
Crocus, Internet of Things, Crocus/growth & development, Agricultural Crops/growth & development, Agriculture/methods, Soil/chemistry, Temperature
ABSTRACT
BACKGROUND AND OBJECTIVE: Lung cancer is an important cause of death and morbidity around the world. Two of the primary computed tomography (CT) imaging markers that can be used to differentiate malignant from benign lung nodules are the inhomogeneity of the nodules' texture and nodular morphology. The objective of this paper is to present a new model that captures both the inhomogeneity of detected lung nodules and their morphology. METHODS: We modified the local ternary pattern to use three different levels (instead of two) together with a new pattern-identification algorithm, capturing the nodule's inhomogeneity and morphology more accurately and flexibly. This modification addresses the wide Hounsfield-unit range of detected nodules, which reduces the ability of the traditional local binary/ternary pattern to classify nodule inhomogeneity accurately. The cut-off values defining the three levels of the novel technique are estimated empirically from the training data. The extracted imaging markers are then fed to a hyper-tuned stacked-generalization classification architecture that labels the nodules as malignant or benign. The proposed system was evaluated on in vivo datasets of 679 CT scans (364 malignant and 315 benign nodules) from the benchmark Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and on an external dataset of 100 CT scans (50 malignant and 50 benign). Classifier performance was quantitatively assessed with leave-one-out cross-validation and externally validated on the unseen external dataset in terms of sensitivity, specificity, and accuracy. RESULTS: The overall accuracy of the system is 96.17%, with 97.14% sensitivity and 95.33% specificity. The area under the receiver-operating-characteristic curve was 0.98, which highlights the robustness of the system. Validating the system on the unseen external dataset led to consistent results, demonstrating the generalization ability of the proposed approach. Moreover, applying the original local binary/ternary pattern or substituting other classification structures yielded inferior performance compared with the proposed approach. CONCLUSIONS: These experimental results demonstrate the feasibility of the proposed model as a novel tool to assist physicians and radiologists in the early assessment of lung nodules based on the new comprehensive imaging markers.
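For context, here is a minimal NumPy sketch of the standard local ternary pattern (the two-threshold, three-state encoding of Tan and Triggs) applied to a CT patch. The paper's modification replaces the single band around the center pixel with three empirically estimated cut-off levels, which are not reproduced here; the threshold t below is an illustrative assumption, not the authors' value.

```python
import numpy as np

def local_ternary_pattern(img, t):
    """Classic 3x3 local ternary pattern: each of the 8 neighbors is coded
    +1 if it exceeds center + t, -1 if it falls below center - t, else 0.
    The ternary map is split into 'upper' (+1) and 'lower' (-1) binary codes."""
    H, W = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbors
    center = img[1:-1, 1:-1]
    upper = np.zeros_like(center, dtype=np.uint8)
    lower = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        upper |= (neigh >= center + t).astype(np.uint8) << bit
        lower |= (neigh <= center - t).astype(np.uint8) << bit
    return upper, lower

# Toy CT patch in Hounsfield units; a real t would be learned from training data.
patch = np.random.randint(-1000, 400, size=(64, 64)).astype(np.int32)
up, lo = local_ternary_pattern(patch, t=50)
feature = np.concatenate([np.bincount(up.ravel(), minlength=256),
                          np.bincount(lo.ravel(), minlength=256)])  # 512-bin histogram
```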
Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung Neoplasms/diagnosis, Lung/pathology, X-Ray Computed Tomography/methods, Algorithms, ROC Curve, Solitary Pulmonary Nodule/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation
ABSTRACT
Lung cancer is among the most common causes of mortality worldwide. This article comprehensively reviews current knowledge regarding screening, subtyping, imaging, staging, and the management of treatment response in lung cancer. The traditional imaging modality for screening and initial diagnosis of lung cancer is computed tomography (CT). More recently, dual-energy CT has been shown to improve the categorization of variable pulmonary lesions. The National Comprehensive Cancer Network (NCCN) recommends the use of fluorodeoxyglucose positron emission tomography (FDG PET) in concert with CT to stage lung cancer properly and to prevent fruitless thoracotomies. Diffusion MR is a radiation-free alternative to FDG PET/CT with comparable diagnostic performance. For response evaluation after treatment, FDG PET/CT is a potent modality that predicts survival better than CT. Updated knowledge of lung cancer genomic abnormalities and treatment regimens helps to improve radiologists' skills. Incorporating radiologic experience is crucial for the precise diagnosis, therapy planning, and surveillance of lung cancer.
ABSTRACT
Pulmonary nodules are precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and to provide more applicable and more easily used tools. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
ABSTRACT
Cell-penetrating peptides (CPPs) are special peptides capable of carrying a variety of bioactive molecules, such as genetic material, short interfering RNAs, and nanoparticles, into cells. Research on CPPs has recently gained substantial interest, and their biological mechanisms have been assessed in the context of safe drug-delivery agents and therapeutic applications. Correct identification and synthesis of CPPs using traditional biochemical methods is an extremely slow, expensive, and laborious task, particularly given the large volume of unannotated peptide sequences accumulating in public sequence repositories. Hence, a powerful bioinformatics predictor that rapidly identifies CPPs with a high recognition rate is urgently needed. To date, numerous computational methods have been developed for CPP prediction; however, the available machine-learning (ML) tools cannot distinguish both the CPPs themselves and their uptake efficiencies. This study aimed to develop a two-layer deep-learning framework, named DeepCPPred, that identifies CPPs in the first phase and peptide uptake efficiency in the second. DeepCPPred first uses four types of descriptors covering evolutionary, energy-estimation, reduced-sequence, and amino-acid-contact information. The extracted features are then optimized through the elastic net algorithm and fed into a cascade deep forest algorithm to build the final CPP model. The proposed method achieved 99.45% overall accuracy on the CPP924 benchmark dataset in the first layer and 95.43% accuracy on the CPPSite3 dataset in the second layer, using a 5-fold cross-validation test. Our proposed bioinformatics tool thus surpasses all existing state-of-the-art sequence-based CPP approaches.
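The abstract names elastic net for feature optimization ahead of a cascade deep forest classifier. The sketch below shows only the elastic net selection step using scikit-learn; the feature matrix, labels, informative-column assumption, and hyperparameters are invented stand-ins, and the downstream cascade deep forest (a gcForest-style model) is not reproduced.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Hypothetical stand-ins: rows are peptides, columns are the concatenated
# evolutionary / energy / reduced-sequence / contact descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(924, 400))                       # CPP924-sized toy matrix
y = (X[:, :5].sum(axis=1)                             # pretend the first five
     + rng.normal(size=924) > 0).astype(float)        # descriptors are informative

# Elastic net combines L1 and L2 penalties; descriptors whose coefficients
# are driven to zero are dropped before the downstream classifier.
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
selected = np.flatnonzero(enet.coef_ != 0)
X_reduced = X[:, selected]                            # input to the deep forest
print(f"kept {selected.size} of {X.shape[1]} descriptors")
```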
Subjects
Cell-Penetrating Peptides, Deep Learning, Amino Acid Sequence, Cell-Penetrating Peptides/chemistry, Computational Biology/methods, Machine Learning
ABSTRACT
Liver cancer is a major cause of morbidity and mortality in the world. The primary goals of this manuscript are the identification of novel imaging markers (morphological, functional, and anatomical/textural) and the development of a computer-aided diagnostic (CAD) system to accurately detect and grade liver tumors non-invasively. A total of 95 patients with liver tumors (M = 65, F = 30, age range = 34-82 years) were enrolled in the study after consents were obtained. 38 patients had benign tumors (LR1 = 19 and LR2 = 19), 19 patients had intermediate tumors (LR3), and 38 patients had malignant hepatocellular carcinoma (HCC) tumors (LR4 = 19 and LR5 = 19). Multi-phase contrast-enhanced magnetic resonance imaging (CE-MRI) was collected to extract the imaging markers. A comprehensive CAD system was developed, comprising the following main steps: i) estimation of morphological markers using a new parametric spherical harmonic model, ii) estimation of textural markers using novel rotation-invariant gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GLRLM) models, and iii) calculation of the functional markers by estimating the wash-in/wash-out slopes, which quantify the enhancement characteristics across the CE-MR phases. These markers were subsequently processed by a two-stage random-forest-based classifier that labels a liver tumor as benign, intermediate, or malignant and determines the corresponding grade (LR1, LR2, LR3, LR4, or LR5). Using all the identified imaging markers, the overall CAD system achieved a sensitivity of 91.8%±0.9%, a specificity of 91.2%±1.9%, and an F1 score of 0.91±0.01 under the leave-one-subject-out (LOSO) cross-validation approach. Importantly, the CAD system achieved overall accuracies of [Formula: see text], 85%±2%, 78%±3%, 83%±4%, and 79%±3% in grading liver tumors into LR1, LR2, LR3, LR4, and LR5, respectively. In addition to LOSO, the developed CAD system was tested with randomly stratified 10-fold and 5-fold cross-validation. Alternative classification algorithms, including support vector machines, the naive Bayes classifier, k-nearest neighbors, and linear discriminant analysis, all produced inferior results compared with the proposed two-stage random forest model. These experiments demonstrate the feasibility of the proposed CAD system as a novel tool to objectively assess liver tumors based on the new comprehensive imaging markers. The identified imaging markers and CAD system can be used as a non-invasive diagnostic tool for the early, accurate detection and grading of liver cancer.
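The functional markers are wash-in/wash-out slopes computed across CE-MRI phases. The abstract does not give the authors' exact formulation, so the following is a minimal sketch of one plausible definition (slope from pre-contrast to the peak phase, and slope from the peak to the last phase), with toy phase timings and signals.

```python
import numpy as np

def wash_in_wash_out(signal, times):
    """One plausible wash-in/wash-out estimate from a multi-phase CE-MRI
    enhancement curve. signal: mean tumor intensity per phase (pre-contrast
    first); times: acquisition time of each phase in seconds."""
    peak = int(np.argmax(signal))
    wash_in = ((signal[peak] - signal[0]) / (times[peak] - times[0])
               if peak > 0 else 0.0)                    # no enhancement observed
    wash_out = ((signal[-1] - signal[peak]) / (times[-1] - times[peak])
                if peak < len(signal) - 1 else 0.0)     # no post-peak phase
    return wash_in, wash_out

# Toy 4-phase curve: pre-contrast, arterial, venous, delayed.
signal = np.array([100.0, 260.0, 220.0, 180.0])
times = np.array([0.0, 30.0, 70.0, 180.0])
print(wash_in_wash_out(signal, times))   # rapid wash-in, negative wash-out
```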
Subjects
Computer-Assisted Diagnosis, Liver Neoplasms/diagnosis, Liver Neoplasms/pathology, Algorithms, Humans, Three-Dimensional Imaging, Liver Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Neoplasm Grading, Probability
ABSTRACT
Oil leaking onto water surfaces from large tankers, ships, and pipeline cracks causes considerable damage and harm to the marine environment. Synthetic Aperture Radar (SAR) images provide an approximate representation of target scenes, including sea and land surfaces, ships, oil spills, and look-alikes. Detecting and segmenting oil spills in SAR images is therefore crucial to aiding leak cleanup and protecting the environment. This paper introduces a two-stage deep-learning framework for identifying oil spill occurrences from a highly imbalanced dataset. The first stage classifies patches by their percentage of oil spill pixels using a novel 23-layer Convolutional Neural Network; the second stage performs semantic segmentation using a five-stage U-Net structure. The generalized Dice loss is minimized to account for the reduced representation of oil spill pixels within the patches. The results of this study are very promising, showing improved precision and Dice score compared with related work.
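The generalized Dice loss (Sudre et al., 2017) counteracts class imbalance by weighting each class with the inverse square of its ground-truth volume, so the scarce oil-spill pixels are not swamped by background. A minimal NumPy sketch, with toy class probabilities and an assumed ~3% oil-spill pixel rate:

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss with inverse squared class-volume weights.
    probs:  (N, C) predicted class probabilities per pixel
    onehot: (N, C) one-hot ground-truth labels"""
    w = 1.0 / (onehot.sum(axis=0) ** 2 + eps)          # rare classes weigh more
    intersect = (w * (probs * onehot).sum(axis=0)).sum()
    union = (w * (probs + onehot).sum(axis=0)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

# Toy example: 1000 pixels, 2 classes, oil spill in roughly 3% of pixels.
rng = np.random.default_rng(1)
labels = (rng.random(1000) < 0.03).astype(int)
onehot = np.eye(2)[labels]
probs = np.clip(onehot + rng.normal(0, 0.1, onehot.shape), 0, 1)
probs /= probs.sum(axis=1, keepdims=True)              # renormalize rows
print(generalized_dice_loss(probs, onehot))
```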
ABSTRACT
Processed pseudogenes are generated by reverse transcription of a functional gene. They are generally nonfunctional after their insertion and, as a consequence, are no longer subject to the selective constraints associated with functional genes. Because of this property, they can be used as neutral markers in molecular evolution. In this work, we investigated the relationship between the evolution of GC content in recently inserted processed pseudogenes and the local recombination pattern in two mammalian genomes (human and mouse). We confirmed, using original markers, that recombination drives GC content in the human genome, and we demonstrated that this is also true for the mouse genome despite its lower recombination rates. Finally, we discuss the consequences for isochore evolution and the contrast between the human and mouse patterns.
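The underlying measurement is simple: each pseudogene's GC content is set against the recombination rate of its insertion region. A toy sketch of that comparison follows; the sequences and rates are invented, whereas the real analysis uses genome-wide pseudogene sets and genetic maps.

```python
def gc_content(seq):
    """Fraction of G or C bases in a DNA sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

# Hypothetical markers: (pseudogene sequence, local recombination rate in cM/Mb).
markers = [
    ("ATGCGCGTTAGC", 2.4),
    ("ATATATCGATAT", 0.6),
    ("GGGCCCGCATGC", 3.1),
    ("AATTTTAAGTTA", 0.3),
]
# Order regions by recombination rate and inspect the GC trend.
for seq, rate in sorted(markers, key=lambda m: m[1]):
    print(f"recombination {rate:.1f} cM/Mb -> GC {gc_content(seq):.2f}")
```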
Subjects
Base Composition, Molecular Evolution, Genome, Pseudogenes, Genetic Recombination, Animals, Computational Biology, Genetic Databases, Human Genome, Humans, Mice, Time Factors
ABSTRACT
In mammals, several studies have suggested that methylation levels are higher in repetitive DNA than in nonrepetitive DNA, possibly reflecting a genome-wide defense mechanism against the deleterious effects associated with transposable elements (TEs). To analyze the determinants of methylation patterns in primate repetitive DNA, we took advantage of the fact that the germ-line methylation rate is reflected by the transition rate at CpG sites. We assessed the variability of CpG substitution rates in nonrepetitive DNA and in various TE and retropseudogene families. We show that, unlike other substitution rates, the rate of transition at CpG sites is significantly (37%) higher in repetitive DNA than in nonrepetitive DNA. Moreover, this CpG transition rate varies with the number of repeats, their length, and their level of divergence from the ancestral sequence (up to 2.7 times higher in long, lowly divergent TEs than in unique sequences). This observation strongly suggests the existence of a homology-dependent methylation (HDM) mechanism in mammalian genomes. We propose that HDM is a direct consequence of interfering-RNA-induced transcriptional gene silencing.
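The proxy used here is that germ-line methylation elevates transitions at CpG sites, so a per-CpG transition rate can be read off an alignment of a repeat copy against its ancestral (consensus) sequence. A minimal sketch of that counting on a toy gap-free alignment; the actual study's alignment and rate-estimation procedure is more involved.

```python
def cpg_transition_rate(ancestral, derived):
    """Fraction of ancestral CpG dinucleotides hit by a transition:
    C->T on the CpG strand, or G->A (i.e. C->T on the complement).
    Sequences must be aligned, gap-free, and of equal length."""
    assert len(ancestral) == len(derived)
    cpg, hit = 0, 0
    for i in range(len(ancestral) - 1):
        if ancestral[i:i + 2].upper() == "CG":
            cpg += 1
            if derived[i].upper() == "T" or derived[i + 1].upper() == "A":
                hit += 1
    return hit / cpg if cpg else 0.0

# Toy aligned pair: ancestral TE consensus vs. a present-day copy.
anc = "AACGTTCGGACGAT"
cur = "AATGTTCGGACAAT"   # two of three CpGs hit: CG->TG and CG->CA
print(cpg_transition_rate(anc, cur))   # 0.666...
```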
Subjects
DNA Methylation, DNA/genetics, DNA/metabolism, Primates/genetics, Repetitive Nucleic Acid Sequences/genetics, Nucleic Acid Sequence Homology, Animals, CpG Islands/genetics, Molecular Evolution, Humans, Genetic Models, Pan troglodytes/genetics, Papio/genetics, Point Mutation/genetics, Pseudogenes/genetics, Genetic Selection, Sequence Alignment, Short Interspersed Nucleotide Elements/genetics
ABSTRACT
Processed pseudogenes result from reverse-transcribed mRNAs. Because processed pseudogenes generally lack promoters, they are no longer functional from the moment they are inserted into the genome, and they subsequently accumulate substitutions, insertions, and deletions freely. Moreover, the ancestral structure of a processed pseudogene can easily be inferred from the sequence of its functional homolog. Owing to these characteristics, processed pseudogenes are good neutral markers for studying genome evolution. Interest in these markers has recently been increasing, particularly for aiding gene prediction in genome annotation, functional genomics, and analyses of genome evolution (patterns of substitution). For these reasons, we developed a method to annotate processed pseudogenes in complete genomes and, to make them useful to different fields of research, stored the annotated sequences in a nucleic acid database. In this work, we screened the complete mouse and human genomes from ENSEMBL for processed pseudogenes generated from functional genes with introns, using a conservative detection method to minimize the rate of false-positive sequences. Some of the detected sequences still have a conserved open reading frame, and some overlap annotated gene locations. We designated all reverse-transcribed sequences as retroelements and, more strictly, designated as processed pseudogenes all retroelements falling into neither of the two former categories (a conserved open reading frame or an overlapping gene location). We annotated 5823 retroelements (5206 processed pseudogenes) in the human genome and 3934 (3428 processed pseudogenes) in the mouse genome. Compared with previous estimations, the total number of processed pseudogenes is underestimated, but the aim of this procedure was to generate a high-quality dataset. To facilitate the use of processed pseudogenes in studying genome structure and evolution, the DNA sequences of the processed pseudogenes and their functional reverse-transcribed homologs are stored in a nucleic acid database, HOPPSIGEN. HOPPSIGEN can be browsed on the PBIL (Pole Bioinformatique Lyonnais) World Wide Web server (http://pbil.univ-lyon1.fr/) or fully downloaded for local installation.
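The classification rule stated in the text (retroelements keeping a conserved ORF or overlapping a gene location are excluded from the processed-pseudogene set) can be made concrete. The sketch below is a hypothetical illustration with invented record names and fields; it encodes only the stated filtering rule, not the HOPPSIGEN detection pipeline itself.

```python
from dataclasses import dataclass

@dataclass
class Retroelement:
    """A reverse-transcribed copy detected in a genome (hypothetical fields)."""
    name: str
    has_conserved_orf: bool   # open reading frame still intact?
    overlaps_gene: bool       # location overlaps an annotated gene?

def is_processed_pseudogene(elem: Retroelement) -> bool:
    """Per the paper's definition: counted as a processed pseudogene only if
    it neither keeps a conserved ORF nor overlaps an annotated gene."""
    return not elem.has_conserved_orf and not elem.overlaps_gene

elements = [
    Retroelement("rp_GAPDH_1", has_conserved_orf=False, overlaps_gene=False),
    Retroelement("rp_RPL21_3", has_conserved_orf=True,  overlaps_gene=False),
    Retroelement("rp_ACTB_2",  has_conserved_orf=False, overlaps_gene=True),
]
pseudogenes = [e for e in elements if is_processed_pseudogene(e)]
print([e.name for e in pseudogenes])   # ['rp_GAPDH_1']
```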