ABSTRACT
Fusarium verticillioides represents a major phytopathogenic threat to maize crops worldwide. In this study, we present genomic sequence data of a phytopathogen isolated from a maize stem showing evident signs of vascular rot. Using rigorous microbiological identification techniques, we correlated the disease symptoms observed in an affected maize-growing region with the presence of the pathogen. The pathogen was then cultured in a suitable fungal growth medium, and extensive morphological characterization was performed. In addition, a pathogenicity test was carried out in a completely randomized design (DCA) with three treatments and seven replicates. De novo assembly of Illumina NovaSeq 6000 reads yielded 456 contigs, which together constitute a 42.8 Mb genome assembly with a GC content of 48.26%. Subsequent comparative analyses were performed with other Fusarium genomes available in the NCBI database.
ABSTRACT
Heat stress impairs animals' productive and reproductive performance and can be monitored through physiological and environmental variables, including body surface temperature measured by infrared thermography. The objective of this work was to develop computational models for classifying heat stress from the respiratory rate variable in dairy cattle using infrared thermography. The database used to build the models was obtained from 10 weaned heifers housed in a temperature-controlled climate chamber and submitted to thermal comfort and heat wave treatments. Physiological and environmental data were collected, as well as thermographic images. The machine learning modeling environment was IBM Watson, IBM's cognitive computing services platform, which offers several data processing and mining tools. Heat stress classifier models were evaluated using confusion matrix metrics and compared with the traditional method based on the Temperature and Humidity Index. The best accuracy obtained for classifying the heat stress level was 86.8%, which is comparable to previous works. The authors conclude that it was possible to develop accurate and practical models for real-time monitoring of heat stress in dairy cattle.
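For context, the traditional Temperature and Humidity Index mentioned above is typically computed from dry-bulb temperature and relative humidity. A minimal sketch, using the widely cited NRC (1971) formulation and illustrative livestock comfort thresholds (neither taken from this study):

```python
def thi(temp_c, rh_percent):
    """Temperature-Humidity Index, NRC (1971) formulation."""
    return 0.8 * temp_c + (rh_percent / 100.0) * (temp_c - 14.4) + 46.4

def thi_class(value):
    """Illustrative comfort thresholds commonly cited for cattle (not from the paper)."""
    if value < 72:
        return "comfort"
    if value < 79:
        return "alert"
    if value < 89:
        return "danger"
    return "emergency"
```

For example, the heat wave condition of 35 °C at 60% relative humidity yields a THI in the "danger" band under these thresholds.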
Subject(s)
Cattle Diseases , Heat Stress Disorders , Machine Learning , Thermography , Animals , Cattle/physiology , Thermography/veterinary , Thermography/methods , Female , Heat Stress Disorders/veterinary , Heat Stress Disorders/physiopathology , Heat Stress Disorders/diagnosis , Cattle Diseases/diagnosis , Dairying/methods , Respiratory Rate , Infrared Rays , Hot Temperature/adverse effects
ABSTRACT
Infrared thermography has been investigated in recent studies to monitor body surface temperature and correlate it with animal welfare and performance factors. In this context, this study proposes the use of the thermal signature method as a feature extractor from the temperature matrix obtained from regions of the body surface of laying hens (face, eye, wattle, comb, leg, and foot) to enable the construction of a computational model for heat stress level classification. In an experiment conducted in climate-controlled chambers, 192 laying hens, 34 weeks old, from two strains (Dekalb White and Dekalb Brown) were divided into groups and housed under heat stress (35 °C and 60% humidity) or thermal comfort (26 °C and 60% humidity) conditions. Weekly, individual thermal images of the hens were collected with a thermographic camera, along with their respective rectal temperatures. Surface temperatures of the six featherless areas of the hens' bodies were cropped from the images. Rectal temperature was used to label each infrared thermography record as "Danger" or "Normal", and five classifier models (Random Forest, Random Tree, Multilayer Perceptron, K-Nearest Neighbors, and Logistic Regression) for the rectal temperature class were generated using the respective thermal signatures. No differences between the strains were observed in the thermal signature of surface temperature or in rectal temperature. The results showed that rectal temperature and the thermal signature express heat stress and comfort conditions. The Random Forest model for the face area achieved the highest performance (89.0%). For the wattle area, a Random Forest model also performed well (88.3%), indicating the significance of this area in strains where it is more developed. These findings validate the method of extracting features from infrared thermography. When combined with machine learning, this method has proven promising for generating classifier models of thermal stress levels in laying hen production environments.
ABSTRACT
Behavior analysis is a widely used non-invasive tool in the practical production routine, as the animal acts as a biosensor capable of reflecting its degree of adaptation and discomfort in the face of an environmental challenge. Conventional statistics use occurrence data for behavioral evaluation and welfare estimation, disregarding the temporal sequence of events. The Generalized Sequential Pattern (GSP) algorithm is a data mining method that identifies recurrent sequences exceeding a user-specified support threshold; its potential has not yet been investigated for broiler chickens in enriched environments. Enrichment aims to increase environmental complexity, with promising effects on animal welfare, stimulating priority behaviors and potentially reducing the deleterious effects of heat stress. The objective here was to validate, through a proof of concept, the application of the GSP algorithm to identify temporal correlations between heat stress and the behavior of broiler chickens in enriched environments. Video images were collected automatically for 48 continuous hours, and a continuous period of seven hours, from 12:00 PM to 6:00 PM, was analyzed on two consecutive test days for chickens housed in enriched and non-enriched environments under comfort and stress temperatures. Chickens at the comfort temperature showed high motivation to perform the behaviors of preening (P), foraging (F), lying down (Ld), eating (E), and walking (W); the sequences <{Ld,P}>, <{Ld,F}>, <{P,F,P}>, <{Ld,P,F}>, and <{E,W,F}> were the only ones observed in both treatments. All other sequential patterns (comfort and stress) were distinct, suggesting that environmental enrichment alters the behavioral pattern of broiler chickens. Heat stress drastically reduced the number of sequential patterns found at the 20% support threshold in the tested environments. The lateral lying behavior (Ll) is a strong indicator of heat stress in broilers and was frequent only in the non-enriched environment, which may suggest that environmental enrichment gives the animal better opportunities to adapt to stress-inducing challenges, such as heat.
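The GSP-style support counting described above can be sketched minimally: a sequential pattern is an ordered list of itemsets, and its support is the fraction of behavioral sequences that contain it in order. This is an illustrative sketch, not the authors' implementation:

```python
def contains(sequence, pattern):
    """True if `pattern` (a list of itemsets) occurs, in order, within
    `sequence` (a list of observed itemsets); greedy earliest matching."""
    i = 0
    for itemset in sequence:
        if i < len(pattern) and pattern[i] <= itemset:  # subset test
            i += 1
    return i == len(pattern)

def support(sequences, pattern):
    """Fraction of sequences that contain the pattern."""
    return sum(contains(s, pattern) for s in sequences) / len(sequences)

def frequent(sequences, candidates, min_support=0.2):
    """Keep candidate patterns that meet the support threshold (GSP's core test)."""
    return [p for p in candidates if support(sequences, p) >= min_support]
```

Note that a pattern such as <{Ld},{P}> requires the two behaviors in separate, successive observations; a single observation containing both does not match.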
ABSTRACT
Soldiers of the Mexican Army with obesity were subjected to an intense 60-day weight-loss course consisting of a controlled diet, daily physical training, and psychological sessions. The nutritional treatment followed the European Society of Cardiology (ESC) recommendations, incorporating elements of the traditional milpa diet into the nutritional intervention. The total energy intake was reduced by 200 kcal every 20 days, starting at 1,800 kcal and ending at 1,400 kcal daily. On average, the participants reduced their body weight by 18 kg. We employed an innovative approach to monitor the progress of the twelve soldiers who completed the entire program, comparing the untargeted metabolomics profiles of their urine samples taken before and after the course. The data obtained through liquid chromatography and high-resolution mass spectrometry (LC-MS) provided insightful results. Classification models perfectly separated the pre- and post-course profiles, indicating a significant reprogramming of the participants' metabolism. The changes were observed in the one-carbon (C1), vitamin, amino acid, and energy metabolism pathways, primarily affecting the liver, biliary system, and mitochondria. This study not only demonstrates the potential of rapid weight loss and metabolic pathway modification but also introduces a non-invasive method for monitoring the metabolic state of individuals through urine mass spectrometry data.
Subject(s)
Military Personnel , Obesity , Weight Loss , Humans , Male , Obesity/metabolism , Obesity/diet therapy , Obesity/therapy , Weight Loss/physiology , Adult , Metabolomics , Young Adult , Energy Metabolism/physiology , Mass Spectrometry , Diet, Reducing , Weight Reduction Programs/methods , Metabolic Reprogramming
ABSTRACT
Leptospirosis is a global disease that impacts people worldwide, particularly in humid and tropical regions, and is associated with significant socio-economic deficiencies. Its symptoms are often confused with those of other syndromes, which can compromise clinical diagnosis when specific laboratory tests are not carried out. In this respect, this paper presents a study of three algorithms (Decision Tree, Random Forest, and AdaBoost) for predicting the outcome (cure or death) of individuals with leptospirosis. Records from the government Notifiable Diseases Information System (SINAN, in Portuguese) from 2007 to 2017 for the state of Pará, Brazil, were used, including the temporal attributes of health care, symptoms (headache, vomiting, jaundice, calf pain), and clinical evolution (renal failure and respiratory changes). In the performance evaluation of the selected models, the Random Forest exhibited an accuracy of 90.81% for the training dataset, considering the attributes of experiment 8, and the Decision Tree presented an accuracy of 74.29% for the validation database, considering the best attributes pointed out by experiment 10: time from first symptoms to medical attention, time from first symptoms to ELISA sample collection, time from medical attention to hospital admission, headache, calf pain, vomiting, jaundice, renal insufficiency, and respiratory alterations. The contribution of this article is the confirmation that artificial intelligence, using the Decision Tree algorithm as the final model to be applied to future data, can help predict the outcome of human leptospirosis cases, supporting diagnosis and disease management and helping to prevent progression to death.
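As a rough illustration of how a decision tree discriminates outcomes from attributes such as those listed above, a one-level tree (decision stump) can be trained by minimizing Gini impurity. This toy sketch is not the study's model:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_stump(X, y):
    """Pick the (feature, threshold) split minimizing weighted Gini impurity."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best[1], best[2]
```

A full decision tree applies this split search recursively to each resulting partition until a stopping criterion is met.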
Subject(s)
Leptospirosis , Machine Learning , Leptospirosis/diagnosis , Humans , Algorithms , Decision Trees , Brazil/epidemiology , Outcome Assessment, Health Care/methods , Male , Female , Adult
ABSTRACT
The continuous improvement of the steelmaking process is a critical issue for steelmakers. In the production of Ca-treated Al-killed steel, the Ca and S contents are controlled for successful inclusion modification treatment. In this study, a machine learning technique was used to build a decision tree classifier and thus identify the process variables that most influence the desired Ca and S contents at the end of ladle furnace refining. The attribute of the root node of the decision tree was correlated with process variables via the Pearson formalism. The attribute of the root node corresponded to the sulfur distribution coefficient at the end of the refining process, and its value allowed satisfactory heats to be discriminated from unsatisfactory ones. The variables with the highest correlation with the sulfur distribution coefficient were the sulfur content in both steel and slag at the end of the refining process, as well as the Si content at that stage of the process. As secondary variables, the Si content and the basicity of the slag at the end of the refining process were correlated with the S content in the steel and slag, respectively, at that stage. The analysis showed that the conditions of steel and slag at the beginning of the refining process and efficient S removal during refining are crucial for reaching the desired Ca and S contents.
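The Pearson formalism used to correlate the root-node attribute with the process variables reduces to the familiar sample correlation coefficient, sketched here in plain Python:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship between a process variable and the sulfur distribution coefficient; values near 0 indicate none.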
ABSTRACT
Artificial intelligence has revolutionized many sectors through the unparalleled predictive capabilities of machine learning (ML). So far, this tool has not delivered the same level of development in pharmaceutical nanotechnology. This review discusses current data science methodologies related to polymeric drug-loaded nanoparticle production from an innovative multidisciplinary perspective, while considering the strictest data science practices. Several methodological and data interpretation flaws were identified by analyzing the few qualified ML studies. Most issues stem from not following appropriate analysis steps, such as cross-validation, data balancing, or testing alternative models. Thus, better-planned studies following the recommended data science analysis steps, along with adequate numbers of experiments, would change the current landscape and allow the full potential of ML to be explored.
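One of the analysis steps the review highlights, cross-validation, can be sketched as a simple shuffled k-fold index splitter (illustrative only; function and parameter names are not from the review):

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for shuffled k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin assignment to k folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Each sample appears in exactly one test fold, so every model is evaluated on data it never saw during training.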
Subject(s)
Artificial Intelligence , Data Science , Machine Learning , Nanoparticles , Nanoparticles/chemistry , Humans , Data Science/methods , Nanotechnology/methods , Polymers/chemistry
ABSTRACT
OBJECTIVE: To determine reference intervals (RI) for fasting blood insulin (FBI) in Brazilian adolescents, 12 to 17 years old, by direct and indirect approaches, and to validate the indirectly determined RI. METHODS: Two databases were used for RI determination. Database 1 (DB1), used to obtain RI through an a posteriori direct method, consisted of prospectively selected healthy individuals. Database 2 (DB2) was retrospectively mined from an outpatient laboratory information system (LIS) and used for the indirect method (Bhattacharya method). RESULTS: From DB1, 29,345 individuals were enrolled (57.65% female) and seven age and sex partitions were statistically determined according to mean FBI values: females: 12 and 13 years old, 14 years old, 15 years old, 16 and 17 years old; and males: 12, 13 and 14 years old, 15 years old, 16 and 17 years old. From DB2, 5,465 adolescents (67.5% female) were selected and grouped according to the DB1 partitions. The mean FBI level was significantly higher in DB2 in all groups. The RI upper limit (URL) determined by the Bhattacharya method was slightly lower than the 90% CI URL directly obtained from DB1, except for the female 12 and 13 years old group. High agreement rates for diagnosing elevated FBI in all groups of DB1 validated the indirect RI presented. CONCLUSION: The present study demonstrates that the Bhattacharya indirect method to determine FBI RI in adolescents can overcome some of the difficulties and challenges of the direct approach.
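The Bhattacharya method estimates the mean and standard deviation of the dominant (healthy) Gaussian component of a mixed laboratory dataset from histogram counts: for a normal density, the difference of log-counts between adjacent bins is linear in the bin midpoint, so a linear fit over the central bins recovers the parameters, from which an RI such as mean ± 1.96 SD follows. A minimal sketch (parameter names and the count filter are illustrative, not from the study):

```python
from math import log
import random

def bhattacharya(values, bin_width, min_count=10):
    """Estimate (mean, sd) of the dominant Gaussian component of `values`.
    For a normal histogram, log(count[b+1]/count[b]) is linear in the bin
    midpoint; an ordinary least-squares fit of that line yields the
    parameters (Bhattacharya's graphical method, in code form)."""
    lo = min(values)
    counts = {}
    for v in values:
        b = int((v - lo) // bin_width)
        counts[b] = counts.get(b, 0) + 1
    xs, ys = [], []
    for b in sorted(counts):
        # use only well-populated consecutive bins (the central linear segment)
        if counts[b] >= min_count and counts.get(b + 1, 0) >= min_count:
            xs.append(lo + (b + 0.5) * bin_width)      # midpoint of bin b
            ys.append(log(counts[b + 1] / counts[b]))  # delta log-count
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    # slope = -h/sigma^2 and intercept = (h/sigma^2)(mu - h/2) for a normal
    sd = (-bin_width / slope) ** 0.5
    mean = -intercept / slope + bin_width / 2
    return mean, sd
```

Because only the central, well-populated bins are fitted, the estimate is largely insensitive to a pathological tail mixed into the data, which is what makes the indirect approach usable on routine LIS records.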
Subject(s)
Data Mining , Fasting , Insulin , Humans , Adolescent , Female , Male , Reference Values , Brazil , Child , Insulin/blood , Fasting/blood , Data Mining/methods , Retrospective Studies , Databases, Factual
ABSTRACT
The development of non-invasive methods and accessible tools for plant phenotyping is considered a breakthrough. This work presents preliminary results using an electronic nose (E-Nose) and machine learning (ML) as affordable tools. An E-Nose is an electronic system for global smell analysis that emulates the structure of the human nose. Soybean (Glycine max) was used to conduct this experiment under water stress. A commercial E-Nose was used, and a chamber was designed and built to measure gas samples from the soybean plants. The experiment was conducted for 22 days, observing the stages of plant growth during this period. The chamber was fitted with relative humidity [RH (%)], temperature (°C), and CO2 concentration (ppm) sensors, and natural light intensity was also monitored. These systems allowed intermittent monitoring of each parameter to create a database. The soil used was of the red-yellow dystrophic type and was covered to avoid evapotranspiration effects. Measurements with the electronic nose were made daily, in the morning and afternoon, for plants in two phenological situations (healthy soybean irrigated with deionized water, and soybean under water stress) until the V5 growth stage, to capture the gases emitted by the plants. Data mining techniques were applied using the "Weka™" software and a decision tree strategy. Evaluation of the sensor database revealed a dynamic variation in the plant respiration pattern, with two distinct behaviors observed in the morning (~9:30 am) and afternoon (~3:30 pm). With the initial results obtained from the E-Nose signals and ML, it was possible to distinguish the two situations, i.e., the irrigated control plants and those under water stress, as well as the influence of the two periods of daylight and of the temporal variability of the weather. As a result of this investigation, a classifier was developed that, through non-invasive analysis of gas samples, can determine water deficiency in soybean plants with 94.4% accuracy. Future investigations should be carried out under controlled conditions to enable early detection of the stress level.
ABSTRACT
The objective of this study was to apply the Knowledge Discovery in Databases process to find out whether beneficiaries of a private healthcare insurance plan would belong, at least once, to the 'very high cost' and 'complex cases' groups during the 12 months following the month when the algorithms were applied. Datasets were built containing information on beneficiaries' effective use of their health plan, as well as their characteristics. Five machine learning algorithms were used, namely Random Forest, Extra Trees, XGBoost, Naïve Bayes, and K-Nearest Neighbors. The K-Nearest Neighbors algorithm had a recall rate of 81.12%, a precision of 83.77%, and an Area Under the Curve (AUC) value of 0.9045. The study also revealed that categorization occurs, on average, 8.11 months before a beneficiary enters a high-risk group for the first time, considering the dataset classification from January 2019 to June 2020.
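The recall and precision figures reported above come from the confusion matrix of the binary classifier; a minimal sketch of these two metrics:

```python
def binary_metrics(y_true, y_pred):
    """Precision and recall for the positive class (label 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Recall measures how many future high-risk beneficiaries the model catches; precision measures how many of its alerts are correct, a natural trade-off in this screening setting.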
Subject(s)
Algorithms , Insurance , Humans , Bayes Theorem , Machine Learning , Databases, Factual
ABSTRACT
Introduction Wegener granulomatosis (WG) presents with clinical symptoms including recurrent respiratory infection, renal manifestations, and nonspecific systemic symptoms. Objective To study the clinical manifestations of WG in Iranian ethnicities; data on 164 patients were recorded from 2013 to 2018. Methods The data included demographics, symptoms, and the Birmingham Vasculitis Activity Score (BVAS). The symptoms involved the following sites: the nose, sinus, glottis, ears, lungs, kidneys, eyes, central nervous system, mucous membranes, skin, heart, stomach, and intestine, as well as general symptoms. The clinical manifestations of nine ethnicities were analyzed. Results In total, 48% of the patients were male and 51% were female, with a median age of 51 years. The BVAS was 15.4, and the sites most involved were the sinus (n = 155), nose (n = 126), lungs (n = 125), and ears (n = 107). Gastrointestinal (n = 14) and cardiac (n = 7) involvement were less common. Among the patients, 48.17% were Persian, 13.41% were Azari, 11.17% were Gilaki, 11.17% were Kurd, and 10.9% were Lor. Conclusion Our findings indicated that the sinus, nose, lungs, and ears were the sites most involved, while gastrointestinal and cardiac involvement were less common. In the present study, involvement of the upper and lower respiratory tract was higher than that reported in Western and Asian case series. Moreover, we report for the first time that, in all patients with ear involvement, the left ear was the first to be affected. The clinical manifestations did not differ among Iranian ethnicities; the Gilaki ethnicity had the highest BVAS, possibly related to the humid climate, as disease rates in Iran were higher in humid areas.
ABSTRACT
Background: The in-hospital treatment of COVID-19 may include medicines from various therapeutic classes, such as the antiviral remdesivir and the immunosuppressant tocilizumab. Safety data for these medicines are based on controlled clinical trials and case reports, limiting knowledge about adverse events that are less frequent, rare, or specific to populations excluded from clinical trials. Objective: This study aims to analyze reports of Adverse Drug Events (ADEs) related to these two medicines, focusing on events in pregnant women and foetuses. Methods: Data from the open-access FDA Adverse Event Reporting System (FAERS) from 2020 to 2022 were used to create a dashboard on the Grafana platform to ease querying and analyzing report events. Potential safety signals were generated using the ROR disproportionality measure. Results: Remdesivir was reported as the primary suspect in 7,147 reports and tocilizumab in 19,602. Three hundred and three potential safety signals were identified for remdesivir, six of which were related to pregnant women and foetuses (including abortion and foetal deaths). Tocilizumab accumulated 578 potential safety signals, three of which were associated with this population (including neonatal death). Discussion: None of the potential signals generated for this population appear in the product labels. According to the NIH and WHO protocols, both medicines are recommended for pregnant women hospitalized with COVID-19. Conclusion: Despite the known limitations of working with open data from spontaneous reporting systems (e.g., absence of certain clinical data, underreporting, and a tendency to report severe events and recent medicines) and of disproportionality analysis, the findings suggest concerning associations that need to be confirmed or rejected in subsequent clinical studies.
ABSTRACT
Cyclosporine is an immunosuppressant used to prevent organ rejection in kidney, liver, and heart allogeneic transplants. This study aimed to assess the safety of cyclosporine through the analysis of adverse events (AEs) related to cyclosporine in the US Food and Drug Administration Adverse Event Reporting System (FAERS). To detect AEs associated with cyclosporine, a pharmacovigilance analysis was conducted using four algorithms on the FAERS database: reporting odds ratio (ROR), proportional reporting ratio (PRR), Bayesian confidence propagation neural network (BCPNN), and empirical Bayes geometric mean (EBGM). A statistical analysis was performed on data extracted from the FAERS database, covering 19,582 case reports spanning from 2013 to 2022. Among these cases, 3,911 AEs were identified, with 476 linked to cyclosporine as the primary suspected drug. Cyclosporine-induced AEs involved 27 System Organ Classes (SOCs). Notably, the SOCs with the most cases were eye disorders; injury, poisoning, and procedural complications; and immune system disorders, all of which are listed on the cyclosporine label. Furthermore, we discovered novel potential AEs associated with hepatobiliary disorders, among others. Moreover, unexpected adverse drug reactions (ADRs), such as biliary anastomosis complication and decreased progressive motility of spermatozoa, were identified. Importantly, these newly identified ADRs, which fall under the injury, poisoning, and procedural complications and the investigations SOCs, were not mentioned on the cyclosporine label. This pharmacovigilance analysis of the FAERS database identified new and unexpected potential ADRs related to cyclosporine, which can provide safety guidance for its clinical use.
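Two of the four disproportionality algorithms mentioned, ROR and PRR, are simple functions of the 2x2 contingency table of reports; a minimal sketch (a commonly used signal criterion requires at least 3 reports and a lower 95% CI bound above 1):

```python
from math import exp, log, sqrt

def ror(a, b, c, d):
    """Reporting odds ratio with a 95% CI.
    a: reports with the drug and the event; b: the drug, other events;
    c: other drugs, the event; d: other drugs, other events."""
    estimate = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(ROR)
    return estimate, exp(log(estimate) - 1.96 * se), exp(log(estimate) + 1.96 * se)

def prr(a, b, c, d):
    """Proportional reporting ratio for the same 2x2 table."""
    return (a / (a + b)) / (c / (c + d))
```

Both measures compare how often an event is reported with the drug of interest versus with all other drugs; values well above 1 flag a potential (not confirmed) signal.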
ABSTRACT
Fraud detection through auditors' manual review of accounting and financial records has traditionally relied on human experience and intuition. However, replicating this task with technological tools has been a challenge for information security researchers. Natural language processing techniques, such as topic modeling, have been explored to extract information from and categorize large sets of documents. Topic modeling methods such as latent Dirichlet allocation (LDA) and non-negative matrix factorization (NMF) have recently gained popularity for discovering thematic structures in text collections. However, unsupervised topic modeling may not always produce the best results for specific tasks such as fraud detection. Therefore, in the present work, we propose to use semi-supervised topic modeling, which allows the incorporation of domain-specific knowledge through keywords to learn latent topics related to fraud. By leveraging relevant keywords, our proposed approach aims to identify patterns related to the vertices of the fraud triangle theory, providing more consistent and interpretable results for fraud detection. The model's performance was evaluated by training on several datasets and testing on another dataset that was not used in training. The results showed efficient average performance, with a 7% increase over a previous work. Overall, the study emphasizes the importance of deepening the analysis of fraud behaviors and of proposing strategies to identify them proactively.
ABSTRACT
BACKGROUND: Dropout and poor academic performance are persistent problems in medical schools in emerging economies. Identifying at-risk students early and knowing the factors that contribute to their success would be useful for designing educational interventions. Educational Data Mining (EDM) methods can identify students at risk of poor academic progress and dropping out. The main goal of this study was to use machine learning models, Artificial Neural Networks (ANN) and Naïve Bayes (NB), to identify first-year medical students who succeed academically, using sociodemographic data and academic history. METHODS: Data from 7,976 students across seven cohorts (2011 to 2017) admitted to the National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City were analysed. Information from admission diagnostic exam results, academic history, sociodemographic characteristics, and family environment was used. The main dataset included 48 variables. The study followed the general knowledge discovery process: pre-processing, data analysis, and validation. ANN and NB models were used for the data mining analysis. RESULTS: ANN models had slightly better performance in accuracy, sensitivity, and specificity. Both models had better sensitivity when classifying regular students and better specificity when classifying irregular students. Of the 25 variables with the highest predictive value in the Naïve Bayes model, the percentage of correct answers in the diagnostic exam was the strongest. CONCLUSIONS: Both ANN and Naïve Bayes methods can be useful for predicting medical students' academic achievement in an undergraduate program, based on information about their prior knowledge and sociodemographic factors. Although ANN offered slightly superior results, Naïve Bayes made it possible to obtain an in-depth analysis of how the different variables influenced the model. The use of educational data mining and machine learning classification techniques has potential in medical education.
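The Naïve Bayes classifier used here (and in several of the studies above) assumes conditional independence of the features given the class, which is what makes per-variable influence easy to inspect. A minimal categorical version with Laplace smoothing (illustrative, not the study's implementation):

```python
from collections import defaultdict
from math import log

def train_nb(X, y):
    """Fit a categorical Naive Bayes model with Laplace smoothing."""
    classes = sorted(set(y))
    class_count = {c: sum(1 for label in y if label == c) for c in classes}
    feature_count = {c: [defaultdict(int) for _ in X[0]] for c in classes}
    for row, c in zip(X, y):
        for f, v in enumerate(row):
            feature_count[c][f][v] += 1
    n_values = [len({row[f] for row in X}) for f in range(len(X[0]))]
    return classes, class_count, feature_count, n_values, len(y)

def predict_nb(model, row):
    """Most probable class under the conditional-independence assumption."""
    classes, class_count, feature_count, n_values, n = model
    best_score, best_class = None, None
    for c in classes:
        score = log(class_count[c] / n)  # log prior
        for f, v in enumerate(row):
            score += log((feature_count[c][f][v] + 1) /
                         (class_count[c] + n_values[f]))  # smoothed likelihood
        if best_score is None or score > best_score:
            best_score, best_class = score, c
    return best_class
```

Because the class score is a sum of per-feature log-likelihood terms, each variable's contribution to a prediction can be read off directly, which matches the interpretability advantage the authors report.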
Subject(s)
Students, Medical , Humans , Bayes Theorem , Educational Status , Achievement , Neural Networks, Computer
ABSTRACT
Background: The factors necessitating referrals for in-person evaluation by a dermatologist are not adequately understood and have not previously been studied using automated text mining. The objective of this study was to compare the prevalence of required in-person dermatologist care in the presence or absence of certain clinical features. Methods: Observational cross-sectional study of 11,661 teledermatology reports made from February 2017 to March 2020. Results: The need for dermoscopy was associated with a 348% increase in the likelihood of referral for in-person dermatologist evaluation (prevalence ratio [PR]: 4.48, 95% confidence interval [CI]: 4.17-4.82). Infectious diseases were associated with a 64% lower likelihood of referral (PR: 0.36, 95% CI: 0.30-0.43). Discussion: Some lesions and poorly documented cases are challenging to assess remotely. This study presents a different approach, using text mining to extract more detailed data from teledermatology reports, and quantifies the magnitude of risk that each analyzed feature poses for requiring in-person dermatologic care. As limitations, variables related to lesion location, size, and extension were not analyzed, and the dictionaries used were originally in Brazilian Portuguese. Conclusions: Teledermatology seems sufficient for the management of 75% of clinical cases, especially acute cases in young patients with inflammatory or infectious lesions. Referrals for in-person dermatologist consultations were strongly associated not only with the need for dermoscopy but also with therapeutic needs such as surgical procedures, phototherapy, and the use of some systemic medications.
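The prevalence ratios reported above compare the prevalence of referral between reports with and without a given feature; a minimal sketch with a log-transform Wald confidence interval (illustrative, not the study's code):

```python
from math import exp, log, sqrt

def prevalence_ratio(a, b, c, d):
    """Prevalence ratio with a 95% CI (log-transform Wald interval).
    a: feature present, referred; b: feature present, not referred;
    c: feature absent, referred; d: feature absent, not referred."""
    p_exposed = a / (a + b)
    p_unexposed = c / (c + d)
    pr = p_exposed / p_unexposed
    se = sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))  # SE of log(PR)
    return pr, exp(log(pr) - 1.96 * se), exp(log(pr) + 1.96 * se)
```

A PR of 4.48, as found for dermoscopy, means referral was 4.48 times as prevalent when the feature was present, i.e., the 348% increase quoted above.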
Subject(s)
Dermatology , Skin Diseases , Telemedicine , Humans , Dermatology/methods , Cross-Sectional Studies , Dermatologists , Telemedicine/methods , Referral and Consultation , Skin Diseases/diagnosis , Skin Diseases/epidemiology , Skin Diseases/therapy
ABSTRACT
Several species within the Acidithiobacillus (At.) genus can derive energy from oxidizing ferrous iron and sulfur. Two bacterial strains, closely related to At. ferridurans and At. ferrivorans according to their 16S rRNA gene sequences, were obtained from the industrial sulfide heap leaching process at Minera Escondida (SLH) and named D2 and DM, respectively. We applied statistical and data mining analyses to the abundance of the At. ferridurans D2 and At. ferrivorans DM taxa in the industrial process over 16 years of operation. In addition, we performed phylogenetic analysis and genome comparison of the type strains, as well as culturing approaches with representative isolates of the At. ferridurans D2 and At. ferrivorans DM taxa, to understand their differential phenotypic features. Throughout the 16 years, two main operational stages were identified based on the predominance of the D2 and DM taxa in solution samples. The better suitability of At. ferrivorans DM to grow over a wide range of temperatures and in micro-oxic environments, and to oxidize S by reducing Fe(III), revealed through the culturing approaches, partly explains the taxa distribution in both operational stages. The isolate At. ferridurans D2 could be considered a specialist in aerobic sulfur oxidation, while the isolate At. ferrivorans DM is a specialist in iron oxidation. In addition, results from ore samples occasionally obtained from the industrial heap suggest that At. ferridurans D2 abundance was more closely related to its abundance in the solution samples than that of At. ferrivorans DM. This dynamic coincides with results previously obtained in in-lab cell-mineral attachment experiments with both strains. This information increases our knowledge of the ecophysiology of Acidithiobacillus and of the importance of diverse physiological traits at industrial bioleaching scales.
Subject(s)
Acidithiobacillus , Iron , Copper , Acidithiobacillus/genetics , Phylogeny , RNA, Ribosomal, 16S/genetics , Sulfur , Sulfides , Oxidation-Reduction