Results 1 - 6 of 6

1.
J Environ Biol ; 36(5): 1119-23, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26521554

ABSTRACT

Dry and healthy seeds of two lentil cultivars, LH90-54 (macrosperma) and LH89-48 (microsperma), were treated with three doses of ethyl methanesulphonate (0.1, 0.2 and 0.4%). In both cultivars, all M1 plants with sufficient seed from each treatment and the control were taken to raise independent M2 plant progenies. A wider range of means in both positive and negative directions, along with an overall positive shift in the mean for all polygenic traits except pod-initiation height and 100-seed weight, was observed across treatments in the M2 generation. In both cultivars, the medium dose induced the greatest variation. The estimates of variance, GCV (genotypic coefficient of variation) and PCV (phenotypic coefficient of variation) for the polygenic traits increased significantly over control values in all treatments of both cultivars. Higher estimates of heritability and genetic advance in the M2 population indicated considerable scope for improving seed yield and its component traits through selection in the mutagenized material.
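The variability statistics named in this abstract (GCV, PCV, heritability, genetic advance) follow standard variance-component formulas from quantitative genetics. The sketch below is a minimal illustration of those formulas, not the authors' analysis; the array names, and the use of the control variance as a stand-in for environmental variance, are assumptions.

```python
import numpy as np

def variability_estimates(control_vals, treated_vals, k=2.06):
    """Illustrative variability statistics for one polygenic trait.

    control_vals / treated_vals: per-plant trait measurements for the
    untreated control and an EMS-treated M2 progeny (hypothetical arrays).
    k: selection intensity at 5% selection (standard value 2.06).
    """
    mean_t = np.mean(treated_vals)
    vp = np.var(treated_vals, ddof=1)   # phenotypic variance in the treated M2
    ve = np.var(control_vals, ddof=1)   # environmental variance, approximated by the control
    vg = max(vp - ve, 0.0)              # genotypic variance (kept non-negative)

    gcv = np.sqrt(vg) / mean_t * 100    # genotypic coefficient of variation, %
    pcv = np.sqrt(vp) / mean_t * 100    # phenotypic coefficient of variation, %
    h2 = vg / vp if vp > 0 else 0.0     # broad-sense heritability
    ga = k * np.sqrt(vp) * h2           # expected genetic advance under selection
    return {"GCV": gcv, "PCV": pcv, "h2": h2, "GA": ga}
```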


Subjects
Ethyl Methanesulfonate/pharmacology, Genetic Variation, Lens (Plant)/drug effects, Lens (Plant)/metabolism, Animals, Gene Expression Regulation, Plant/drug effects, Lens (Plant)/genetics, Selection, Genetic
2.
Cureus ; 15(10): e46467, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37927676

ABSTRACT

Background: In this study, we aimed to evaluate optical coherence tomography angiography (OCTA) parameters among Indian patients affected with central serous chorioretinopathy (CSCR).

Methodology: A cross-sectional study of Indian patients with unilateral or bilateral CSCR was conducted at the Department of Ophthalmology, Guru Nanak Eye Centre, and Maulana Azad Medical College, New Delhi. A history of ocular symptoms such as diminution of vision, metamorphopsia, decreased contrast sensitivity (CS), and defective color vision (CV), and their duration, was obtained. A detailed ocular examination for best-corrected visual acuity (BCVA), intraocular pressure (IOP), CV, and CS was done. Following this, fundus fluorescein angiography (FFA) and indocyanine green angiography (ICGA) were performed. OCT was done for central foveal thickness (CFT), subfoveal choroidal thickness (SFCT), neurosensory detachment (NSD), pigment epithelial detachment (PED), and choroidal neovascular membranes (CNVMs). OCTA imaging was done to examine the foveal avascular zone (FAZ) size, perimeter, and circularity, vessel density (VD), and features such as enlarged/distorted FAZ, dark areas, dark spots, abnormal vessels, and choriocapillaris island (CCI) in the retino-choroidal layers. We compared the OCTA features of affected eyes with those of fellow eyes.

Results: The study involved 52 eyes of 40 CSCR patients, including 32 (80%) males and eight (20%) females with a mean age of 39.3 ± 6.1 (range 24-49) years. Of the 40 patients, 12 (30%) had bilateral involvement. The mean CFT was 300.3 ± 158.4 µm, and the SFCT was 258.5 ± 60.4 µm. The mean distance BCVA was 0.58 ± 0.32 logarithm of the minimum angle of resolution (logMAR). OCTA showed features such as enlarged/distorted FAZ (36.53% of eyes), dark areas (NSD/PED) (84.61% of eyes), dark spots (PED) (5.76% of eyes), abnormal vessels (dilated vessels/CNVM) (96.15% of eyes), and CCI (17.30% of eyes). The mean FAZ area, perimeter, and circularity were 0.40 ± 0.71 mm², 41.8 ± 280.0 mm, and 0.48 ± 0.12, respectively. The VD in the superficial capillary plexus (SCP) was 25.4 ± 14.1, deep capillary plexus (DCP) 15.0 ± 11.5, outer retina (OR) 5.9 ± 6.8, outer retinal choriocapillaris (ORCC) 33.7 ± 16.9, choriocapillaris 29.7 ± 17.5, and choroid 29.9 ± 17.5. The fellow eyes showed a mean FAZ area, perimeter, and circularity of 0.34 ± 0.23 mm², 76.8 ± 391.2 mm, and 0.47 ± 0.11, respectively, while the VD of the SCP was 25.9 ± 13.6, DCP 16.5 ± 11.7, OR 14.3 ± 14.9, ORCC 38.0 ± 16.5, choriocapillaris 36.3 ± 17.7, and choroid 35.5 ± 19.2.

Conclusions: The CSCR eyes had a thicker fovea and sub-foveal choroid (SFC). The FAZ area of affected eyes was larger, while the perimeter was smaller, than that of the fellow eye. In the affected eye, the VD in all the retino-choroidal layers was lower, although it was significantly reduced in the OR whole (p = 0.006) and foveal choroid (p = 0.022).
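The FAZ circularity index reported by OCTA software is conventionally defined as 4π·area/perimeter², equal to 1.0 for a perfect circle and smaller for irregular outlines. The sketch below only illustrates that definition; the numeric values in the example are hypothetical and are not taken from this study.

```python
import math

def faz_circularity(area_mm2, perimeter_mm):
    """Circularity of the foveal avascular zone (FAZ): 4*pi*A / P**2."""
    return 4.0 * math.pi * area_mm2 / perimeter_mm ** 2

# Example with hypothetical values in a plausible range for a healthy eye
print(faz_circularity(area_mm2=0.35, perimeter_mm=2.4))  # ~0.76
```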

3.
JAMA Netw Open ; 3(5): e205111, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32432709

ABSTRACT

Importance: Histopathological diagnosis of tumors from tissue biopsy after hematoxylin and eosin (H&E) dye staining is the criterion standard for oncological care, but H&E staining requires trained operators, dyes and reagents, and precious tissue samples that cannot be reused.

Objectives: To use deep learning algorithms to develop models that perform accurate computational H&E staining of native nonstained prostate core biopsy images, and to develop methods for interpretation of H&E staining deep learning models and for analysis of computationally stained images by computer vision and clinical approaches.

Design, Setting, and Participants: This cross-sectional study used hundreds of thousands of native nonstained RGB (red, green, and blue channel) whole slide image (WSI) patches of prostate core tissue biopsies obtained from excess tissue material from prostate core biopsies performed in the course of routine clinical care between January 7, 2014, and January 7, 2017, at Brigham and Women's Hospital, Boston, Massachusetts. Biopsies were registered with their H&E-stained versions. Conditional generative adversarial neural networks (cGANs) that automate conversion of native nonstained RGB WSIs to computational H&E-stained images were then trained. Deidentified whole slide images of prostate core biopsy and medical record data were transferred to the Massachusetts Institute of Technology, Cambridge, for computational research. Results were shared with physicians for clinical evaluations. Data were analyzed from July 2018 to February 2019.

Main Outcomes and Measures: Methods for detailed computer vision image analytics, visualization of trained cGAN model outputs, and clinical evaluation of virtually stained images were developed. The main outcome was interpretable deep learning models and computational H&E-stained images that achieved high performance on these metrics.

Results: Among 38 patients who provided samples, single core biopsy images were extracted from each whole slide, resulting in 102 individual nonstained and H&E dye-stained image pairs that were compared with matched computationally stained and unstained images. Calculations showed high similarities between computationally and H&E dye-stained images, with a mean (SD) structural similarity index (SSIM) of 0.902 (0.026), Pearson correlation coefficient (PCC) of 0.962 (0.096), and peak signal-to-noise ratio (PSNR) of 22.821 (1.232) dB. A second cGAN performed accurate computational destaining of H&E-stained images back to their native nonstained form, with a mean (SD) SSIM of 0.900 (0.030), PCC of 0.963 (0.011), and PSNR of 25.646 (1.943) dB compared with native nonstained images. A single-blind prospective study computed approximately 95% pixel-by-pixel overlap among prostate tumor annotations provided by 5 board-certified pathologists on computationally stained images compared with those on H&E dye-stained images. This study also presents the first visualization and explanation of neural network kernel activation maps during H&E staining and destaining of RGB images by cGANs. High similarities between kernel activation maps of computationally and H&E-stained images (mean-squared errors <0.0005) provide additional mathematical and mechanistic validation of the staining system.

Conclusions and Relevance: These findings suggest that computational H&E staining of native unlabeled RGB images of prostate core biopsy could reproduce Gleason-grade tumor signatures that were easily assessed and validated by clinicians. Methods for benchmarking, visualization, and clinical validation of deep learning models and virtually H&E-stained images communicated in this study have wide applications in clinical informatics and oncology research. Clinical researchers may use these systems for early indications of possible abnormalities in native nonstained tissue biopsies prior to histopathological workflows.
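The similarity metrics reported above (SSIM, PCC, and PSNR in dB) are standard image-comparison measures. A minimal sketch of how they can be computed with NumPy and scikit-image follows; it assumes registered RGB arrays scaled to [0, 1] and scikit-image 0.19 or later, and is not the authors' evaluation pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def staining_similarity(dye_stained, computationally_stained):
    """Compare an H&E dye-stained image with its virtually stained counterpart.

    Both inputs are registered RGB arrays with values in [0, 1];
    loading and registration are assumed to happen elsewhere.
    """
    real = dye_stained.astype(np.float64)
    virt = computationally_stained.astype(np.float64)

    ssim = structural_similarity(real, virt, channel_axis=-1, data_range=1.0)
    pcc = np.corrcoef(real.ravel(), virt.ravel())[0, 1]          # Pearson correlation coefficient
    psnr = peak_signal_noise_ratio(real, virt, data_range=1.0)   # in dB
    return ssim, pcc, psnr
```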


Subjects
Deep Learning, Prostatic Neoplasms/pathology, Staining and Labeling, Aged, Biopsy, Large-Core Needle, Eosine Yellowish-(YS), Hematoxylin, Humans, Male
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4414-4418, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946845

ABSTRACT

Cone beam computed tomography has demonstrated value by offering enhanced visualization of tooth features in 3D space. However, these systems require higher effective radiation doses to image teeth. Previous research from our group has used non-ionizing near-infrared (NIR) light for diagnosing demineralization and caries in human tooth enamel. However, the use of safe NIR radiation for rapid 3D imaging of tooth anatomy has not been described previously. Here we describe an optical setup to rapidly laser-scan teeth ex vivo using a 1310 nm NIR laser diode. We also detail a novel process that uses laser scanning to create stacks of images of extracted teeth and to construct highly accurate 3D models. Our 3D reconstructive models offer promising starting points for recovering anatomical detail, using pixel intensities within these images as projection data to diagnose carious lesions, and can help provide rapid, affordable, technology-enabled early caries screenings to patients.
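One generic way to turn such an image stack into a 3D model is to assemble the slices into a volume and extract an isosurface; the sketch below shows that approach with NumPy and scikit-image. It is only an assumed, simplified stand-in for the reconstruction process described in the paper, and the file names, threshold, and slice ordering are hypothetical.

```python
import numpy as np
from skimage import io, measure

def reconstruct_surface(slice_paths, level=0.5):
    """Stack 2D NIR scan images into a volume and extract a surface mesh.

    slice_paths: ordered file names of grayscale slice images (hypothetical).
    level: intensity threshold separating tooth from background.
    """
    slices = [io.imread(p, as_gray=True) for p in slice_paths]  # one 2D image per scan position
    volume = np.stack(slices, axis=0).astype(np.float64)        # shape: (n_slices, height, width)
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=level)
    return verts, faces
```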


Subjects
Cone-Beam Computed Tomography, Dental Caries, Tooth Demineralization, Tooth, Dental Caries/diagnostic imaging, Humans, Infrared Rays, Lasers, Tooth/anatomy & histology, Tooth/diagnostic imaging
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3387-3393, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946607

ABSTRACT

Imaging fluorescent disease biomarkers in tissues and skin is a non-invasive method to screen for health conditions. We report an automated process that combines intraoral fluorescent porphyrin biomarker imaging, clinical examinations, and machine learning to correlate systemic health conditions with periodontal disease. A total of 1,215 intraoral fluorescent images, from 284 consenting adults aged 18-90, were analyzed using a machine learning classifier that can segment periodontal inflammation. The classifier achieved an AUC of 0.677 with precision and recall of 0.271 and 0.429, respectively, indicating a learned association between disease signatures in the collected images. Periodontal diseases were more prevalent among males (p = 0.0012) and older subjects (p = 0.0224) in the screened population. Physicians independently examined the collected images, assigning localized modified gingival indices (MGIs). MGIs and periodontal disease were then cross-correlated with responses to a medical history questionnaire, blood pressure and body mass index measurements, and optic nerve, tympanic membrane, neurological, and cardiac rhythm imaging examinations. Gingivitis and early periodontal disease were associated with subjects diagnosed with optic nerve abnormalities (p < 0.0001) in their retinal scans. We also report significant co-occurrences of periodontal disease in subjects reporting swollen joints (p = 0.0422) and a family history of eye disease (p = 0.0337). These results indicate cross-correlation of poor periodontal health with systemic health outcomes and stress the importance of oral health screenings at the primary care level. Our screening process and analysis method, using images and machine learning, can be generalized for automated diagnoses and systemic health screenings for other diseases.
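The AUC, precision, and recall quoted for the inflammation classifier are standard classification metrics; the sketch below shows how they can be computed with scikit-learn. The labels and scores are toy values, not data from the study.

```python
from sklearn.metrics import roc_auc_score, precision_score, recall_score

def segmentation_scores(y_true, y_prob, threshold=0.5):
    """Classification metrics of the kind reported for the inflammation classifier.

    y_true: ground-truth labels (1 = periodontal inflammation present).
    y_prob: predicted probabilities from the classifier (hypothetical inputs).
    """
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    return {
        "AUC": roc_auc_score(y_true, y_prob),   # threshold-free ranking quality
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }

# Toy example
print(segmentation_scores([0, 0, 1, 1, 1], [0.2, 0.6, 0.4, 0.7, 0.9]))
```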


Subjects
Health Status, Machine Learning, Oral Health, Periodontal Diseases/complications, Adolescent, Adult, Aged, Aged, 80 and over, Eye Diseases/complications, Gingivitis/complications, Gingivitis/diagnosis, Humans, Joints/physiopathology, Male, Middle Aged, Optic Nerve Diseases/complications, Periodontal Diseases/diagnosis, Physical Examination, Young Adult
6.
PLoS One ; 11(7): e0159781, 2016.
Article in English | MEDLINE | ID: mdl-27472222

ABSTRACT

Advances in automation and data science have led agriculturists to seek real-time, high-quality, high-volume crop data to accelerate crop improvement through breeding and to optimize agronomic practices. Breeders have recently gained massive data-collection capability in genome sequencing of plants. Faster phenotypic trait data collection and analysis relative to genetic data leads to faster and better selections in crop improvement. Furthermore, faster and higher-resolution crop data collection gives scientists and growers greater capability to improve precision-agriculture practices on increasingly larger farms, e.g., site-specific application of water and nutrients. Unmanned aerial vehicles (UAVs) have recently gained traction as agricultural data collection systems. Using UAVs for agricultural remote sensing is an innovative technology that differs from traditional remote sensing in more ways than strictly higher-resolution images; it provides many new and unique possibilities, as well as new and unique challenges. Herein we report on processes and lessons learned from year 1 (the summer 2015 and winter 2016 growing seasons) of a large multidisciplinary project evaluating UAV images across a range of breeding and agronomic research trials on a large research farm. Included are team and project planning, UAV and sensor selection and integration, and data collection and analysis workflow. The study involved many crops and both breeding plots and agronomic fields. The project's goal was to develop methods for UAVs to collect high-quality, high-volume crop data with fast turnaround time to field scientists. The project included five teams: Administration, Flight Operations, Sensors, Data Management, and Field Research. Four case studies involving multiple crops in breeding and agronomic applications add practical descriptive detail. Lessons learned include critical information on sensors, air vehicles, and configuration parameters for both. As the first and most comprehensive project of its kind to date, these lessons are particularly salient to researchers embarking on agricultural research with UAVs.


Subjects
Agriculture, High-Throughput Screening Assays, Phenotype, Remote Sensing Technology/methods, Soil