Results 1 - 20 of 75
1.
J Xray Sci Technol ; 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38701131

ABSTRACT

BACKGROUND: The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE: This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS: Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS: Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS: The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION: Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.

2.
Environ Res ; 227: 115740, 2023 06 15.
Article in English | MEDLINE | ID: mdl-36997044

ABSTRACT

Salinity is one of the major abiotic stresses in arid and semiarid climates and threatens global food security. The present study was designed to assess the efficacy of different abiogenic sources of silicon (Si) in mitigating salinity stress on maize grown on salt-affected soil. Abiogenic Si sources, including silicic acid (SA), sodium silicate (Na-Si), potassium silicate (K-Si), and silicon nanoparticles (NPs-Si), were applied to saline-sodic soil. Two consecutive maize crops grown in different seasons were harvested to evaluate growth under salinity stress. Post-harvest soil analysis showed a significant decrease in the electrical conductivity of the soil paste extract (ECe) (-23.0%), sodium adsorption ratio (SAR) (-47.7%), and pH of the soil saturated paste (pHs) (-9.5%) compared with the salt-affected control. The maximum root dry weight was recorded with NPs-Si in the first maize crop (149.3%) and the second (88.6%) over the control, and the maximum shoot dry weight was likewise observed with NPs-Si in the first (42.0%) and second (7.4%) crops. Physiological parameters, including chlorophyll content (52.5%), photosynthetic rate (84.6%), transpiration (100.2%), stomatal conductance (50.5%), and internal CO2 concentration (61.6%), were increased by NPs-Si in the first maize crop compared with the control. NPs-Si also significantly increased the phosphorus (P) concentration in the roots (223.4%), shoots (22.3%), and cobs (130.3%) of the first maize crop. The study concluded that applying NPs-Si and K-Si improved plant growth in a maize-after-maize rotation by increasing the availability of nutrients such as P and potassium (K), improving physiological attributes, and reducing salt stress and cationic ratios.
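The percentage changes reported above (e.g., -23.0% for ECe) are relative changes of treatment means against the salt-affected control. A minimal sketch of that calculation, with hypothetical values (not the study's raw data):

```python
def percent_change(treatment, control):
    """Relative change of a treatment mean versus the control mean, in percent."""
    return (treatment - control) / control * 100.0

# Hypothetical ECe values in dS/m (illustrative only):
ece_control, ece_nps_si = 8.7, 6.7
print(round(percent_change(ece_nps_si, ece_control), 1))  # negative value = reduction
```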


Subjects
Nanoparticles , Zea mays , Silicon/pharmacology , Soil/chemistry , Sodium Chloride/pharmacology , Nanoparticles/chemistry , Potassium/pharmacology
3.
Sensors (Basel) ; 23(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679541

ABSTRACT

Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept, since it allows clients to train models together without sharing their source data with anyone else. However, most existing FL-based COVID-19 detection techniques aim to optimize secondary objectives such as latency, energy usage, and privacy, and few studies focus on improving the model's accuracy and stability. In this work, we design a novel model named the decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis, which distinguishes COVID-19 from four distinct chest disorders: LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, builds the model using DenseNet-169, and produces accurate predictions from information that is kept secure and released only to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. The DMFL_Net model was also compared with the default FL configurations. The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45%, outperforms other approaches in classifying COVID-19 against the four chest diseases, and successfully protects the privacy of the data among diverse clients.
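The abstract does not spell out DMFL_Net's aggregation rule; for orientation, federated learning is commonly built on FedAvg, where each client (here, a hospital) trains locally and only model parameters, never patient data, are averaged on the server. A minimal sketch in plain Python, with hypothetical two-parameter client models:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (the FedAvg aggregation step).

    client_weights: one parameter vector per client.
    client_sizes:   local training-set sizes, used as averaging weights.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical hospitals with tiny 2-parameter models:
w_global = fedavg([[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]], [10, 10, 20])
print(w_global)  # the new global model sent back to all clients
```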


Subjects
COVID-19 , Lung Neoplasms , Humans , X-Rays , COVID-19/diagnostic imaging , Radiography , Thorax/diagnostic imaging , Hospitals
4.
Sensors (Basel) ; 23(20)2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37896548

ABSTRACT

Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is challenging and time-consuming due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer, enabling treatment of patients at an early stage, this systematic literature review (SLR) presents the federated learning (FL) and transfer learning (TL) techniques that have been widely applied. This study evaluates FL and TL classifiers in terms of the performance metrics reported in research studies: true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). The review was assembled and systemized from well-reputed studies published in eminent fora between January 2018 and July 2023, compiled through a systematic search of seven well-reputed databases; a total of 86 articles were included. This SLR contains the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the malignant and non-malignant cancer classes. The results highlight the limitations and challenges of recent research, and future directions and opportunities are identified to help interested researchers advance the automated classification of melanoma and nonmelanoma skin cancers.
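The metrics this SLR compares follow directly from the confusion matrix; a small sketch of TPR, TNR and ACC (AUC additionally requires ranking the classifier's scores, so it is omitted here):

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity (TPR), specificity (TNR) and accuracy from confusion counts."""
    tpr = tp / (tp + fn)                   # true positive rate
    tnr = tn / (tn + fp)                   # true negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)  # overall accuracy
    return tpr, tnr, acc

# Hypothetical counts for a melanoma/nonmelanoma classifier:
print(classification_metrics(tp=90, fn=10, tn=80, fp=20))
```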


Subjects
Melanoma , Skin Neoplasms , Humans , Prospective Studies , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Melanoma/diagnosis , Skin/pathology , Machine Learning
5.
Sensors (Basel) ; 23(9)2023 May 04.
Article in English | MEDLINE | ID: mdl-37177670

ABSTRACT

Hundreds of people are injured or killed in road accidents. These accidents are caused by several intrinsic and extrinsic factors, including the attentiveness of the driver towards the road and its associated features. These features include approaching vehicles, pedestrians, and static fixtures, such as road lanes and traffic signs. If a driver is made aware of these features in a timely manner, a large share of these accidents can be avoided. This study proposes a computer vision-based solution for detecting and recognizing traffic types and signs to help drivers and pave the way for self-driving cars. A real-world roadside dataset was collected under varying lighting and road conditions, and individual frames were annotated. Two deep learning models, YOLOv7 and Faster RCNN, were trained on this custom-collected dataset to detect the aforementioned road features. The models produced state-of-the-art mean Average Precision (mAP) scores of 87.20% and 75.64%, respectively, along with class accuracies of over 98.80%. The proposed model provides an excellent benchmark to build on to help improve traffic situations and enable future technological advances, such as Advanced Driver Assistance Systems (ADAS) and self-driving cars.
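Detection models such as YOLOv7 and Faster RCNN are scored with mAP, which rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal IoU sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction typically counts as a true positive when IoU >= 0.5 (mAP@0.5):
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```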


Subjects
Automobile Driving , Deep Learning , Pedestrians , Humans , Accidents, Traffic/prevention & control , Attention
6.
J Pak Med Assoc ; 73(2): 275-279, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36800709

ABSTRACT

OBJECTIVE: To determine the association of dryness of the eyes with rheumatoid arthritis severity. METHODS: The cross-sectional, observational study was conducted at the Jinnah Medical College Hospital, Karachi, from December 2020 to May 2021, and comprised adult patients of either gender with rheumatoid arthritis diagnosed on the basis of clinical and serological investigations. Data was collected using a structured, pre-tested questionnaire. The Ocular Surface Disease Index questionnaire together with Tear Film Breakup Time was used to assess the severity of dry eyes, and the Disease Activity Score-28 with erythrocyte sedimentation rate was used to assess the severity of rheumatoid arthritis. The association between the two was explored. Data was analysed using SPSS 22. RESULTS: Of the 61 patients, 52 (85.2%) were females and 9 (14.8%) were males. The overall mean age was 41.7±12.8 years, with 4 (6.6%) aged <20 years, 26 (42.6%) aged 21-40 years, 28 (45.9%) aged 41-60 years and 3 (4.9%) aged >60 years. Further, 46 (75.4%) subjects had sero-positive rheumatoid arthritis, 25 (41%) had high disease severity, 30 (49.2%) had a severe Ocular Surface Disease Index score and 36 (59%) had decreased Tear Film Breakup Time. Logistic regression analysis showed 5.45 times higher odds of severe disease among people with an Ocular Surface Disease Index score >33 (p=0.003). In patients with a positive Tear Film Breakup Time, there were 6.25 times higher odds of an increased disease activity score (p=0.001). CONCLUSIONS: Disease activity scores of rheumatoid arthritis were found to have a strong association with dryness of the eyes, a high Ocular Surface Disease Index score and an increased erythrocyte sedimentation rate.
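The reported odds ratios (5.45 and 6.25) come from logistic regression, which for a single binary predictor reduces to the cross-product ratio of a 2x2 table. A sketch with hypothetical counts (not the study's raw data):

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical counts: OSDI > 33 with severe/mild RA = 20/10,
# OSDI <= 33 with severe/mild RA = 5/26.
print(odds_ratio(20, 10, 5, 26))  # odds of severe disease given OSDI > 33
```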


Subjects
Arthritis, Rheumatoid , Dry Eye Syndromes , Keratoconjunctivitis Sicca , Adult , Female , Male , Humans , Middle Aged , Cross-Sectional Studies , Dry Eye Syndromes/diagnosis , Dry Eye Syndromes/epidemiology , Arthritis, Rheumatoid/diagnosis , Arthritis, Rheumatoid/epidemiology , Blood Sedimentation
7.
Sensors (Basel) ; 23(1)2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36616923

ABSTRACT

Industrial automation uses robotics and software to operate equipment and procedures across industries. Many applications integrate IoT, machine learning, and other technologies to provide smart features that improve the user experience. Such technology offers businesses and people tremendous assistance in successfully achieving commercial and noncommercial requirements. Organizations are expected to automate industrial processes owing to the significant risk and inefficiency of conventional processes. Hence, we developed an elaborative stepwise stacked artificial neural network (ESSANN) algorithm to greatly improve automation industries in controlling and monitoring the industrial environment. An industrial dataset provided by KLEEMANN Greece was used, and the collected data were preprocessed. Principal component analysis (PCA) was used to extract features, and feature selection was based on the least absolute shrinkage and selection operator (LASSO). The ESSANN approach was then applied, and its performance was examined and compared with that of existing algorithms. The key factors compared with existing technologies are delay, network bandwidth, scalability, computation time, packet loss, operational cost, accuracy, precision, recall, and mean absolute error (MAE). Compared to traditional algorithms for industrial automation, the proposed techniques achieved strong results: a delay of approximately 52%, network bandwidth of 97%, scalability of 96%, a computation time of 59 s, packet loss at a minimum level of approximately 53%, an operational cost of approximately 59%, accuracy of 98%, precision of 98.95%, recall of 95.02%, and MAE of 80%. These results indicate that the proposed system was effectively implemented.
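PCA-based feature extraction, the preprocessing step described above, can be sketched via the singular value decomposition; the LASSO selection step would then operate on the projected features. The data here is random and illustrative, not the KLEEMANN dataset:

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X onto the top principal components (feature extraction)."""
    Xc = X - X.mean(axis=0)            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 samples, 5 raw sensor features
Z = pca(X, 2)                          # 2 extracted components
print(Z.shape)
```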


Subjects
Internet of Things , Humans , Automation , Industry , Technology , Machine Learning
8.
Sensors (Basel) ; 22(15)2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35957209

ABSTRACT

Skin cancer is a deadly disease, and its early diagnosis enhances the chances of survival. Deep learning algorithms for skin cancer detection have become popular in recent years. A novel deep learning framework is proposed in this study for the multiclassification of skin cancer types: Melanoma, Melanocytic Nevi, Basal Cell Carcinoma and Benign Keratosis. The proposed model, named SCDNet, combines Vgg16 with convolutional neural networks (CNN) for the classification of different types of skin cancer. The accuracy of the proposed method is also compared with four state-of-the-art pre-trained classifiers in the medical domain: Resnet 50, Inception v3, AlexNet and Vgg19. The performance of SCDNet and the four state-of-the-art classifiers is evaluated on the ISIC 2019 dataset. The accuracy rate of the proposed SCDNet is 96.91% for the multiclassification of skin cancer, whereas the accuracy rates for Resnet 50, Alexnet, Vgg19 and Inception-v3 are 95.21%, 93.14%, 94.25% and 92.54%, respectively. The results show that the proposed SCDNet performed better than the competing classifiers.


Subjects
Deep Learning , Melanoma , Skin Neoplasms , Dermoscopy/methods , Humans , Melanoma/diagnostic imaging , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
9.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298412

ABSTRACT

Sensor fusion is the process of merging data from many sources, such as radar, lidar and camera sensors, to provide less uncertain information compared to the information collected from a single source [...].
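A classic way to make fused information "less uncertain" than any single source is inverse-variance weighting, the building block of Kalman-style fusion. A minimal sketch with hypothetical radar and camera range estimates:

```python
def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.

    Less noisy sensors (smaller variance) receive more weight, and the
    fused variance is never larger than the best individual one.
    """
    weights = [1.0 / v for v in variances]
    fused = sum(m * w for m, w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical range estimates (m) from radar (var 1.0) and camera (var 4.0):
est, var = fuse([10.0, 12.0], [1.0, 4.0])
print(est, var)
```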


Subjects
Algorithms , Deep Learning , Radar , Vision, Ocular , Computers
10.
Sensors (Basel) ; 20(15)2020 Jul 27.
Article in English | MEDLINE | ID: mdl-32726915

ABSTRACT

Image-to-image conversion based on deep learning techniques is a topic of interest in the fields of robotics and computer vision. A series of typical tasks, such as applying semantic labels to building photos, converting edges to photos, and de-raining rainy images, can be seen as paired image-to-image conversion problems. In such problems, the image generation network learns from the information in the input images. The input images and the corresponding target images must share the same basic structure for the network to generate target-oriented output images perfectly. However, the shared basic structure between paired images is rarely as ideal as assumed, which can significantly affect the output of the generating model. Therefore, we propose a novel Input-Perceptual and Reconstruction Adversarial Network (IP-RAN) as an all-purpose framework for imperfect paired image-to-image conversion problems. We demonstrate, through experimental results, that our IP-RAN method significantly outperforms the current state-of-the-art techniques.

11.
Pak J Pharm Sci ; 33(4): 1735-1738, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33583810

ABSTRACT

The ongoing outbreak of coronavirus disease (COVID-19) has been declared a pandemic by the World Health Organization and has become a global health emergency. Low- and middle-income countries lack standard pharmacy services in terms of staff, education, training, pharmaceutical care, research, and practice. This review aimed to describe emerging pharmacy services and recommend their implementation in low- and middle-income countries. Pharmacies are easily accessible sites for the community, where trained staff working under the guidance of a pharmacist can help manage visiting customers. During the surge of the disease, pharmacists proved to be a frontline defense for the community, contributing significantly to identifying, reporting, and managing COVID-19 patients through pharmaceutical care services at the community level, at the hospital/clinical level, and through tele-pharmaceutical services.


Subjects
COVID-19/prevention & control , Developing Countries , Disease Notification , Patient Education as Topic , Pharmaceutical Research , Pharmaceutical Services , Professional Role , Community Pharmacy Services , Drug Interactions , Humans , Pharmacists , Pharmacy Service, Hospital , Quarantine , SARS-CoV-2 , Telemedicine , COVID-19 Drug Treatment
12.
Pak J Pharm Sci ; 32(3): 1091-1095, 2019 May.
Article in English | MEDLINE | ID: mdl-31278724

ABSTRACT

Iron deficiency anemia (IDA) is one of the foremost health issues among women of reproductive age. The study aimed to assess the level of awareness about the causes, symptoms, prevention and treatment of IDA among women of reproductive age in district Bahawalpur, province Punjab, Pakistan. A randomized study was conducted using a self-designed standardized questionnaire disseminated to the hostels of female residents and to homes in the immediate vicinity of Islamia University Bahawalpur. Females aged 18-45 years without any previous history of medical or gynecological problems were enlisted, and a total of 200 women were surveyed for awareness of iron deficiency anemia. Seventy-three percent (73%) of the women (n=146) were aware of the term IDA, with the highest proportion falling in the age bracket 20-35 years. Most (66.9%) of the women were aware that their diet contains iron and of its importance for health. It is concluded that IDA in women of reproductive age can be prevented and treated through proper guidance, awareness, and education.


Subjects
Anemia, Iron-Deficiency , Health Knowledge, Attitudes, Practice , Iron, Dietary/administration & dosage , Adolescent , Adult , Anemia, Iron-Deficiency/etiology , Anemia, Iron-Deficiency/prevention & control , Anemia, Iron-Deficiency/therapy , Diet , Dietary Supplements , Educational Status , Female , Humans , Middle Aged , Pakistan , Pregnancy , Surveys and Questionnaires , Young Adult
13.
Sensors (Basel) ; 18(2)2018 Feb 03.
Article in English | MEDLINE | ID: mdl-29401681

ABSTRACT

A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of the gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, reflections on glasses, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements, but the margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment owing to various lighting changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address these issues, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.


Subjects
Machine Learning , Automobile Driving , Automobiles , Eye Movements , Fixation, Ocular , Head Movements , Humans
14.
Sensors (Basel) ; 18(6)2018 May 24.
Article in English | MEDLINE | ID: mdl-29795038

ABSTRACT

Autonomous landing of an unmanned aerial vehicle (UAV), or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve it by combining multiple sensors such as global positioning system (GPS) receivers, an inertial measurement unit, and multiple camera systems. Although these approaches successfully estimate the vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we show how to safely land a drone in a GPS-denied environment using a remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network, named lightDenseYOLO, that extracts trained features from an input image to predict a marker's location from the drone's visible-light camera. Experimental results show that our method significantly outperforms state-of-the-art object trackers, with and without convolutional neural networks, in terms of both accuracy and processing time.

15.
Sensors (Basel) ; 18(5)2018 May 10.
Article in English | MEDLINE | ID: mdl-29748495

ABSTRACT

The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now needed in unconstrained scenarios. Such environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible-light environments makes iris segmentation challenging because of visible-light noise. Deep learning with convolutional neural networks (CNN) has brought considerable breakthroughs in various applications. To address the iris segmentation issues in challenging situations with visible-light and near-infrared-light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments, five datasets from visible-light and NIR environments were used. For the visible-light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms on all five datasets.

16.
Sensors (Basel) ; 17(4)2017 Apr 14.
Article in English | MEDLINE | ID: mdl-28420114

ABSTRACT

Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, can serve as a game interface, and can play a pivotal role in the human-computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used eye blinking for this purpose as well as dwell-time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a fuzzy-system-based target selection method for near-infrared (NIR) camera-based gaze trackers. Experiments, together with usability tests and on-screen keyboard use, show that the proposed method outperforms previous methods.
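The paper's exact fuzzy rules are not given in the abstract; as an illustration of fuzzy-system-based selection, a toy rule might combine dwell time and gaze stability through triangular memberships and a min (AND) operator. All variable names and ranges below are invented for illustration:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_target(dwell_ms, gaze_stability):
    """Toy fuzzy rule: fire the selection when both memberships are high."""
    long_dwell = triangular(dwell_ms, 200, 600, 1000)     # hypothetical range
    stable = triangular(gaze_stability, 0.5, 1.0, 1.5)    # hypothetical range
    return min(long_dwell, stable) > 0.5                  # Mamdani-style AND via min

print(select_target(600, 1.0), select_target(250, 1.0))
```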

17.
Sensors (Basel) ; 16(9)2016 Aug 31.
Article in English | MEDLINE | ID: mdl-27589768

ABSTRACT

Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth of field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing their systems. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers may be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
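The viewing-angle/DOF trade-off discussed here can be made concrete with the standard thin-lens depth-of-field approximation based on the hyperfocal distance; a sketch (the lens parameters below are illustrative, not the paper's):

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for a thin-lens camera.

    Uses the standard hyperfocal-distance approximation; coc_mm is the
    circle of confusion (0.03 mm is a common full-frame default).
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = (subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
           if hyperfocal > subject_mm else float("inf"))
    return near, far

# Hypothetical 50 mm f/2.8 lens focused on a user's eye region 700 mm away:
near, far = depth_of_field(focal_mm=50, f_number=2.8, subject_mm=700)
print(round(near), round(far))  # the DOF is the span between these limits
```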


Subjects
Empirical Research , Fixation, Ocular/physiology , Head Movements/physiology , Photography/instrumentation , Equipment Design , Humans , Imaging, Three-Dimensional , Ultrasonics
18.
Int J Syst Evol Microbiol ; 65(9): 2931-2936, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26016492

ABSTRACT

Bacterial strains ZYY136(T) and ZYY9 were isolated from surface-sterilized rice roots from a long-term experiment of rice-rice-Astragalus sinicus rotation. The 16S rRNA gene sequences of strains ZYY136(T) and ZYY9 showed the highest similarity, of 97.0%, to Rhizobium tarimense PL-41(T). Sequence analysis of the housekeeping genes recA, thrC and atpD clearly differentiated the isolates from currently described species of the genus Rhizobium. The DNA-DNA relatedness value between ZYY136(T) and ZYY9 was 82.3%, and ZYY136(T) showed 34.0% DNA-DNA relatedness with the most closely related type strain, R. tarimense PL-41(T). The DNA G+C content of strain ZYY136(T) was 58.1 mol%. The major cellular fatty acids were summed feature 8 (C18 : 1ω7c and/or C18 : 1ω6c), C16 : 0 and C16 : 0 3-OH. Strains ZYY136(T) and ZYY9 could be differentiated from the previously defined species of the genus Rhizobium by several phenotypic characteristics. Therefore, we conclude that strains ZYY136(T) and ZYY9 represent a novel species of the genus Rhizobium, for which the name Rhizobium oryzicola sp. nov. is proposed (type strain ZYY136(T) = ACCC 05753(T) = KCTC 32088(T)).
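The DNA G+C content reported in taxonomic descriptions (58.1 mol% here) is, at its core, the fraction of G and C bases; a minimal sketch of the calculation on a toy sequence (species descriptions measure it over genomic DNA, not a single gene):

```python
def gc_content(seq):
    """G+C fraction of a DNA sequence, expressed in percent (mol%)."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return 100.0 * gc / len(seq)

print(gc_content("ATGCGC"))  # 4 of 6 bases are G or C
```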


Subjects
Endophytes/classification , Oryza/microbiology , Phylogeny , Rhizobium/classification , Bacterial Typing Techniques , Base Composition , China , DNA, Bacterial/genetics , Endophytes/genetics , Endophytes/isolation & purification , Fatty Acids/chemistry , Genes, Bacterial , Molecular Sequence Data , Nucleic Acid Hybridization , Plant Roots/microbiology , RNA, Ribosomal, 16S/genetics , Rhizobium/genetics , Rhizobium/isolation & purification , Sequence Analysis, DNA
20.
Int J Syst Evol Microbiol ; 64(Pt 4): 1373-1377, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24449787

ABSTRACT

Two strains (J3-AN59(T) and J3-N84) of Gram-stain-negative, aerobic and rod-shaped bacteria were isolated from the roots of fresh rice plants. The 16S rRNA gene sequence similarity results showed that the similarity between strains J3-AN59(T) and J3-N84 was 100 %. Both strains were phylogenetically related to members of the genus Rhizobium, and they were most closely related to Rhizobium tarimense ACCC 06128(T) (97.43 %). Similarities in the sequences of housekeeping genes between strains J3-AN59(T) and J3-N84 and those of recognized species of the genus Rhizobium were less than 90 %. The polar lipid profiles of both strains were predominantly composed of phosphatidylglycerol, diphosphatidylglycerol, phosphatidylethanolamine, phosphatidylcholine and an unknown aminophospholipid. The major cellular fatty acids were summed feature 8 (C18 : 1ω7c and/or C18 : 1ω6c) and C16 : 0. The DNA G+C contents of J3-AN59(T) and J3-N84 were 55.7 and 57.1 mol%, respectively. The DNA-DNA relatedness value between J3-AN59(T) and J3-N84 was 89 %, and strain J3-AN59(T) showed 9 % DNA-DNA relatedness to R. tarimense ACCC 06128(T), the most closely related strain. Based on this evidence, we found that J3-AN59(T) and J3-N84 represent a novel species in the genus Rhizobium and we propose the name Rhizobium rhizoryzae sp. nov. The type strain is J3-AN59(T) ( = ACCC 05916(T) = KCTC 23652(T)).


Subjects
Oryza/microbiology , Phylogeny , Plant Roots/microbiology , Rhizobium/classification , Bacterial Typing Techniques , Base Composition , China , DNA, Bacterial/genetics , Fatty Acids/chemistry , Genes, Bacterial , Molecular Sequence Data , Nucleic Acid Hybridization , Phospholipids/chemistry , RNA, Ribosomal, 16S/genetics , Rhizobium/genetics , Rhizobium/isolation & purification , Sequence Analysis, DNA