Results 1 - 18 of 18
1.
Viruses; 14(11), 2022 Nov 02.
Article in English | MEDLINE | ID: mdl-36366534

ABSTRACT

Protein phosphorylation is a post-translational modification that enables various cellular activities and plays essential roles in protein interactions. Phosphorylation is an important process for the replication of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). To shed more light on the effects of phosphorylation, we used an ensemble of neural networks to predict potential kinases that might phosphorylate SARS-CoV-2 nonstructural proteins (nsps), and molecular dynamics (MD) simulations to investigate the effects of phosphorylation on nsp structure, which could be a potential inhibitory target to attenuate viral replication. Eight candidate sites were identified as the top-ranked phosphorylation sites of SARS-CoV-2. During the MD simulations, root-mean-square deviation (RMSD) analysis was used to measure conformational changes in each nsp, and root-mean-square fluctuation (RMSF) was employed to measure the fluctuation of each residue in the 36 systems considered, allowing us to evaluate the most flexible regions. These analyses show significant structural deviations at the residues nsp1 THR 72, nsp2 THR 73, nsp3 SER 64, nsp4 SER 81, nsp4 SER 455, nsp5 SER 284, nsp6 THR 238, and nsp16 SER 132. The identified list of residues suggests how phosphorylation affects SARS-CoV-2 nsp function and stability. This research also suggests that kinase inhibitors could be possible candidates for drug-binding studies, which are crucial in therapeutic discovery research.
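For reference, the RMSD metric described above has a closed form once two conformations are superimposed on the same atoms. The sketch below is illustrative only (the function name and coordinates are ours, not from the study; real trajectory analysis, e.g. with MDAnalysis or GROMACS tools, would also perform rotational alignment such as a Kabsch fit before computing it):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) coordinates, assuming the structures are already
    superimposed (no rotational alignment is performed here)."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Identical structures deviate by zero; a uniform 1 Å shift along x gives RMSD 1.
ref = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
shifted = [(1.0, 0.0, 0.0), (2.0, 1.0, 1.0)]
print(rmsd(ref, ref))      # 0.0
print(rmsd(ref, shifted))  # 1.0
```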


Subject(s)
COVID-19, SARS-CoV-2, Humans, Molecular Dynamics Simulation, Viral Nonstructural Proteins/metabolism, Phosphorylation, Virus Replication
2.
PLoS One; 17(10): e0275446, 2022.
Article in English | MEDLINE | ID: mdl-36201448

ABSTRACT

Glaucoma is the second leading cause of blindness worldwide, and peripapillary atrophy (PPA) is a morphological symptom associated with it. Therefore, it is necessary to clinically detect PPA for glaucoma diagnosis. This study aimed to develop a detection method for PPA from fundus images using deep learning algorithms, to be used by ophthalmologists or optometrists for screening purposes. The model was developed based on localization of the region of interest (ROI) using a mask region-based convolutional neural network (Mask R-CNN) and a CNN classification network for the presence of PPA. A total of 2,472 images, obtained from five public sources and one Saudi-based resource (King Abdullah International Medical Research Center in Riyadh, Saudi Arabia), were used to train and test the model. First, the images from public sources were analyzed, followed by those from local sources, and finally, images from both sources were analyzed together. In testing the classification model, area under the curve (AUC) scores of 0.83, 0.89, and 0.87 were obtained for the local, public, and combined sets, respectively. The developed model will assist in diagnosing glaucoma in screening programs; however, more research is needed on segmenting the PPA boundaries for more detailed PPA detection, which can be combined with optic disc and cup boundaries to calculate the cup-to-disc ratio.
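The AUC figures quoted above can be computed directly from predicted scores via the rank (Mann-Whitney) formulation, without building an explicit ROC curve. A minimal sketch (names and data are ours, not from the study; libraries such as scikit-learn provide `roc_auc_score` for production use):

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation: the probability that a randomly chosen positive
    is scored higher than a randomly chosen negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.1]
print(auc_score(labels, scores))  # 0.75
```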


Subject(s)
Deep Learning, Glaucoma, Optic Disk, Atrophy/pathology, Fundus Oculi, Glaucoma/diagnostic imaging, Glaucoma/pathology, Humans, Optic Disk/diagnostic imaging, Optic Disk/pathology
3.
Cells; 11(14), 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35883687

ABSTRACT

Cytogenetics laboratory tests are among the most important procedures for the diagnosis of genetic diseases, especially in the area of hematological malignancies. Manual chromosomal karyotyping methods are time consuming and labor intensive and, hence, expensive. Therefore, to alleviate the process of analysis, several attempts have been made to enhance karyograms. Current chromosomal image enhancement is based on classical image processing. This approach has its limitations, one of which is that it must be applied uniformly to all chromosomes, whereas customized application to each chromosome is ideal. Moreover, each chromosome needs a different level of enhancement, depending on whether a given area is from the chromosome itself or is just a staining artifact. The analysis of poor-quality karyograms, a difficulty often faced in preparations from cancer samples, is time consuming and might result in missing the abnormality or difficulty in reporting the exact breakpoint within the chromosome. We developed ChromoEnhancer, a novel artificial-intelligence-based method to enhance neoplastic karyogram images. The method is based on Generative Adversarial Networks (GANs) with a data-centric approach. GANs are known for converting images from one domain to another; we used them to convert poor-quality karyograms into good-quality images. Our method of karyogram enhancement led to robust routine cytogenetic analysis and, therefore, to accurate detection of cryptic chromosomal abnormalities. To evaluate ChromoEnhancer, we randomly assigned a subset of the enhanced images and their corresponding original (unenhanced) images to two independent cytogeneticists to measure karyogram quality and the elapsed time to complete the analysis, using four rating criteria, each scaled from 1 to 5. Furthermore, we compared the enhanced images with the original ones using quantitative measures (the PSNR and SSIM metrics).
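Of the two quantitative measures named above, PSNR has a particularly simple definition; a minimal sketch follows (function name and pixel data are illustrative, not from the study; SSIM is more involved and is typically taken from a library such as scikit-image's `structural_similarity`):

```python
import math

def psnr(img_a, img_b, max_val=255):
    """Peak signal-to-noise ratio between two same-sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Two 2x2 images differing by 10 at every pixel: MSE = 100.
a = [100, 110, 120, 130]
b = [110, 120, 130, 140]
print(round(psnr(a, b), 2))  # 28.13
```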


Subject(s)
Chromosome Aberrations, Image Processing, Computer-Assisted, Cytogenetics, Humans, Image Processing, Computer-Assisted/methods, Intelligence, Karyotyping
4.
Clin Ophthalmol; 16: 747-764, 2022.
Article in English | MEDLINE | ID: mdl-35300031

ABSTRACT

Background: Globally, glaucoma is the second leading cause of blindness. Detecting glaucoma in the early stages is essential to avoid disease complications that lead to blindness. Thus, computer-aided diagnosis systems are powerful tools to overcome the shortage of glaucoma screening programs. Methods: A systematic search of public databases, including PubMed, Google Scholar, and other sources, was performed to identify relevant studies and to overview the publicly available fundus image datasets used to train, validate, and test machine learning and deep learning methods. Additionally, existing machine learning and deep learning methods for optic cup and disc segmentation were surveyed and critically reviewed. Results: Eight publicly available fundus image datasets were found, comprising 15,445 images labeled as glaucoma or non-glaucoma with manually annotated optic disc and cup boundaries. Five metrics were identified for evaluating the developed models, and three main deep learning architectural designs were commonly used for optic disc and optic cup segmentation. Conclusion: We provide future research directions for formulating robust optic cup and disc segmentation systems. Deep learning can be utilized in clinical settings for this task; however, many challenges need to be addressed before using this strategy in clinical trials. Finally, two deep learning architectural designs have been widely adopted, notably U-Net and its variants.

5.
J Multidiscip Healthc; 14: 2017-2033, 2021.
Article in English | MEDLINE | ID: mdl-34354361

ABSTRACT

BACKGROUND: Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged in Wuhan, China, in late 2019 and created a global pandemic that overwhelmed healthcare systems. As of July 3, 2021, COVID-19 had yielded 182 million confirmed cases and 3.9 million deaths globally, according to the World Health Organization. Several patients who were initially diagnosed with mild or moderate COVID-19 later deteriorated and were reclassified to the severe disease type. OBJECTIVE: The aim is to create a predictive model for COVID-19 ventilatory support and mortality early on, from baseline (at the time of diagnosis) and routinely collected data for each patient (CXR, CBC, demographics, and patient history). METHODS: Four common machine learning algorithms, three data balancing techniques, and feature selection were used to build and validate predictive models for COVID-19 mechanical ventilation requirement and mortality. Baseline CXR, CBC, demographic, and clinical data were retrospectively collected from April 2, 2020, to June 18, 2020, for 5739 patients with PCR-confirmed COVID-19 at King Abdulaziz Medical City in Riyadh. However, of those patients, only 1508 and 1513 met the inclusion criteria for the ventilatory support and mortality endpoints, respectively. RESULTS: In an independent test set, the ventilation-requirement predictive model, using the top 20 features selected by the ReliefF algorithm from baseline radiological, laboratory, and clinical data with support vector machines and random undersampling, attained an AUC of 0.87 and a balanced accuracy of 0.81. For the mortality endpoint, the top model yielded an AUC of 0.83 and a balanced accuracy of 0.80 using all features with a balanced random forest. This indicates that with only routinely collected data, our models can predict the outcome with good performance. The predictive ability of the combined data consistently outperformed each data set individually for intubation and mortality. For ventilatory support, chest X-ray severity annotations alone performed better than comorbidity, complete blood count, age, or gender, with an AUC of 0.85 and a balanced accuracy of 0.79. For mortality, comorbidity alone achieved an AUC of 0.80 and a balanced accuracy of 0.72, which is higher than models that use chest radiograph, laboratory, or demographic features only. CONCLUSION: The experimental results demonstrate the practicality of the proposed COVID-19 predictive tool for hospital resource planning and patient prioritization in the current COVID-19 pandemic crisis.
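Balanced accuracy, the recurring metric above, is simply the mean of sensitivity and specificity, which is why it is preferred over plain accuracy on imbalanced endpoints like these. An illustrative sketch (ours, not the study's code):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity
    (recall on negatives) for a binary endpoint; unlike plain
    accuracy, it is not inflated by a dominant majority class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# 9 negatives all correct, 1 positive missed: plain accuracy is 0.9,
# but balanced accuracy exposes the missed positive class.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5
```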

6.
BMC Bioinformatics; 22(1): 113, 2021 Mar 09.
Article in English | MEDLINE | ID: mdl-33750288

ABSTRACT

BACKGROUND: Automated text classification has many important applications in the clinical setting; however, obtaining labelled data for training machine learning and deep learning models is often difficult and expensive. Active learning techniques may mitigate this challenge by reducing the amount of labelled data required to effectively train a model. In this study, we analyze the effectiveness of 11 active learning algorithms on classifying subsite and histology from cancer pathology reports using a Convolutional Neural Network as the text classification model. RESULTS: We compare the performance of each active learning strategy using two differently sized datasets and two different classification tasks. Our results show that on all tasks and dataset sizes, all active learning strategies except diversity-sampling strategies outperformed random sampling, i.e., no active learning. On our large dataset (15K initial labelled samples, adding 15K additional labelled samples each iteration of active learning), there was no clear winner between the different active learning strategies. On our small dataset (1K initial labelled samples, adding 1K additional labelled samples each iteration of active learning), marginal and ratio uncertainty sampling performed better than all other active learning techniques. We found that compared to random sampling, active learning strongly helps performance on rare classes by focusing on underrepresented classes. CONCLUSIONS: Active learning can save annotation cost by helping human annotators efficiently and intelligently select which samples to label. Our results show that a dataset constructed using effective active learning techniques requires less than half the amount of labelled data to achieve the same performance as a dataset constructed using random sampling.
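As one concrete instance of the uncertainty-sampling strategies discussed above, margin sampling queries the unlabelled examples whose top two class probabilities are closest. A minimal sketch (function name and probabilities are illustrative, not from the study):

```python
def margin_sampling(prob_rows, k):
    """Pick the k unlabelled samples whose top two class probabilities
    are closest together (smallest margin = most uncertain), for a
    human annotator to label next."""
    def margin(probs):
        top2 = sorted(probs, reverse=True)[:2]
        return top2[0] - top2[1]
    ranked = sorted(range(len(prob_rows)), key=lambda i: margin(prob_rows[i]))
    return ranked[:k]

probs = [
    [0.98, 0.01, 0.01],  # confident: margin 0.97
    [0.40, 0.35, 0.25],  # uncertain: margin 0.05
    [0.70, 0.20, 0.10],  # margin 0.50
]
print(margin_sampling(probs, 1))  # [1]
```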


Subject(s)
Machine Learning, Neoplasms, Algorithms, Humans, Neoplasms/genetics, Neoplasms/pathology, Neural Networks, Computer
7.
IEEE J Biomed Health Inform; 25(9): 3596-3607, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33635801

ABSTRACT

Bidirectional Encoder Representations from Transformers (BERT) and BERT-based approaches are the current state-of-the-art in many natural language processing (NLP) tasks; however, their application to document classification on long clinical texts is limited. In this work, we introduce four methods to scale BERT, which by default can only handle input sequences up to approximately 400 words long, to perform document classification on clinical texts several thousand words long. We compare these methods against two much simpler architectures - a word-level convolutional neural network and a hierarchical self-attention network - and show that BERT often cannot beat these simpler baselines when classifying MIMIC-III discharge summaries and SEER cancer pathology reports. In our analysis, we show that two key components of BERT - pretraining and WordPiece tokenization - may actually be inhibiting BERT's performance on clinical text classification tasks where the input document is several thousand words long and where correctly identifying labels may depend more on identifying a few key words or phrases rather than understanding the contextual meaning of sequences of text.
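One common way to scale a fixed-length encoder such as BERT to documents thousands of words long is to split the token sequence into overlapping windows and pool the per-window predictions; the paper's four methods are not reproduced here, but a hypothetical chunker illustrates the idea:

```python
def chunk_tokens(tokens, max_len=510, stride=255):
    """Split a long token sequence into overlapping windows that each
    fit inside a BERT-style encoder (leaving 2 slots for [CLS]/[SEP]);
    per-chunk predictions can then be pooled into a document label."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks

doc = list(range(1200))  # a "document" of 1200 token ids
chunks = chunk_tokens(doc)
print(len(chunks))                     # 4
print(len(chunks[0]), chunks[-1][-1])  # 510 1199
```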


Subject(s)
Natural Language Processing, Neural Networks, Computer, Humans
8.
IEEE Trans Emerg Top Comput; 9(3): 1219-1230, 2021.
Article in English | MEDLINE | ID: mdl-36117774

ABSTRACT

Population cancer registries can benefit from Deep Learning (DL) to automatically extract cancer characteristics from the high volume of unstructured pathology text reports they process annually. The success of DL in tackling this and other real-world problems is proportional to the availability of large labeled datasets for model training. Although collaboration among cancer registries is essential to fully exploit the promise of DL, privacy and confidentiality concerns are the main obstacles to data sharing across cancer registries. Moreover, DL for natural language processing (NLP) requires sharing a vocabulary dictionary for the embedding layer, which may contain patient identifiers. Thus, even distributing the trained models across cancer registries raises a privacy violation issue. In this paper, we propose DL NLP model distribution via privacy-preserving transfer learning approaches that do not share sensitive data. These approaches are used to distribute a multitask convolutional neural network (MT-CNN) NLP model among cancer registries. The model is trained to extract six key cancer characteristics (tumor site, subsite, laterality, behavior, histology, and grade) from cancer pathology reports. Using 410,064 pathology documents from two cancer registries, we compare our proposed approach to conventional transfer learning without privacy preservation, single-registry models, and a model trained on centrally hosted data. The results show that transfer learning approaches, including data sharing and model distribution, significantly outperform the single-registry model. In addition, the best-performing privacy-preserving model distribution approach achieves statistically indistinguishable average micro- and macro-F1 scores across all extraction tasks (0.823, 0.580) as compared to the centralized model (0.827, 0.585).

9.
J Biomed Inform; 110: 103564, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32919043

ABSTRACT

OBJECTIVE: In machine learning, it is evident that classification performance on a task increases if bootstrap aggregation (bagging) is applied. However, bagging deep neural networks takes tremendous amounts of computational resources and training time. The research question we aimed to answer in this research is whether we could achieve higher task performance scores and accelerate the training by dividing a problem into sub-problems. MATERIALS AND METHODS: The data used in this study consist of free text from electronic cancer pathology reports. We applied bagging and partitioned data training using Multi-Task Convolutional Neural Network (MT-CNN) and Multi-Task Hierarchical Convolutional Attention Network (MT-HCAN) classifiers. We split a big problem into 20 sub-problems, resampled the training cases 2,000 times, and trained a deep learning model for each bootstrap sample and each sub-problem, thus generating up to 40,000 models. We performed the training of many models concurrently in a high-performance computing environment at Oak Ridge National Laboratory (ORNL). RESULTS: We demonstrated that aggregation of the models improves task performance compared with the single-model approach, which is consistent with other research studies; and we demonstrated that the two proposed partitioned bagging methods achieved higher classification accuracy scores on four tasks. Notably, the improvements were significant for the extraction of cancer histology data, which had more than 500 class labels in the task; these results show that data partitioning may alleviate the complexity of the task. In contrast, the methods did not achieve superior scores for the tasks of site and subsite classification. Intrinsically, since data partitioning was based on the primary cancer site, the accuracy depended on the determination of the partitions, which needs further investigation and improvement. CONCLUSION: The results of this research demonstrate that (1) the data partitioning and bagging strategy achieved higher performance scores, and (2) we achieved faster training, leveraged by the high-performance Summit supercomputer at ORNL.
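The core bagging loop described above (bootstrap resampling plus aggregation of per-model predictions) can be sketched in a few lines. This toy version is ours, not the study's pipeline, and each "model" is a trivial stand-in for a trained network:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """One bagging replicate: resample the training set with replacement."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Aggregate the per-model labels for one case into a single label."""
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["histA", "histA", "histB"]))  # histA

# Toy stand-in for 'train one model per bootstrap, then aggregate':
# each 'model' here just memorizes the majority label of its bootstrap sample.
rng = random.Random(0)
train = ["histA"] * 7 + ["histB"] * 3
models = [majority_vote(bootstrap_sample(train, rng)) for _ in range(15)]
print(majority_vote(models))
```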


Subject(s)
Neoplasms, Neural Networks, Computer, Computing Methodologies, Humans, Information Storage and Retrieval, Machine Learning
10.
PLoS One; 15(5): e0232840, 2020.
Article in English | MEDLINE | ID: mdl-32396579

ABSTRACT

Individual electronic health records (EHRs) and clinical reports are often part of a larger sequence; for example, a single patient may generate multiple reports over the trajectory of a disease. In applications such as cancer pathology reports, it is necessary not only to extract information from individual reports, but also to capture aggregate information regarding the entire cancer case based on case-level context from all reports in the sequence. In this paper, we introduce a simple modular add-on for capturing case-level context that is designed to be compatible with most existing deep learning architectures for text classification on individual reports. We test our approach on a corpus of 431,433 cancer pathology reports, and we show that incorporating case-level context significantly boosts classification accuracy across six classification tasks: site, subsite, laterality, histology, behavior, and grade. We expect that, with minimal modifications, our add-on can be applied to a wide range of other clinical text-based tasks.
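One simple way to realize a case-level-context add-on of the kind described above is to concatenate each report's representation with an aggregate (e.g. the mean) over all reports in the case. This is an illustrative simplification of the idea, not the paper's actual architecture:

```python
def add_case_context(report_vectors):
    """Concatenate each report's feature vector with the mean vector of
    all reports in the same case, so a per-report classifier also sees
    case-level context (a simplified stand-in for a context add-on)."""
    n = len(report_vectors)
    dim = len(report_vectors[0])
    case_mean = [sum(v[i] for v in report_vectors) / n for i in range(dim)]
    return [v + case_mean for v in report_vectors]

case = [[1.0, 0.0], [0.0, 1.0]]  # two reports from one cancer case
augmented = add_case_context(case)
print(augmented[0])  # [1.0, 0.0, 0.5, 0.5]
```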


Subject(s)
Electronic Health Records/classification, Neoplasms/pathology, Histological Techniques, Humans, Natural Language Processing, SEER Program
11.
J Am Med Inform Assoc; 27(1): 89-98, 2020 Jan 01.
Article in English | MEDLINE | ID: mdl-31710668

ABSTRACT

OBJECTIVE: We implement 2 different multitask learning (MTL) techniques, hard parameter sharing and cross-stitch, to train a word-level convolutional neural network (CNN) specifically designed for automatic extraction of cancer data from unstructured text in pathology reports. We show the importance of learning related information extraction (IE) tasks leveraging shared representations across the tasks to achieve state-of-the-art performance in classification accuracy and computational efficiency. MATERIALS AND METHODS: Multitask CNN (MTCNN) attempts to tackle document information extraction by learning to extract multiple key cancer characteristics simultaneously. We trained our MTCNN to perform 5 information extraction tasks: (1) primary cancer site (65 classes), (2) laterality (4 classes), (3) behavior (3 classes), (4) histological type (63 classes), and (5) histological grade (5 classes). We evaluated the performance on a corpus of 95 231 pathology documents (71 223 unique tumors) obtained from the Louisiana Tumor Registry. We compared the performance of the MTCNN models against single-task CNN models and 2 traditional machine learning approaches, namely support vector machine (SVM) and random forest classifier (RFC). RESULTS: MTCNNs offered superior performance across all 5 tasks in terms of classification accuracy as compared with the other machine learning models. Based on retrospective evaluation, the hard parameter sharing and cross-stitch MTCNN models correctly classified 59.04% and 57.93% of the pathology reports respectively across all 5 tasks. The baseline models achieved 53.68% (CNN), 46.37% (RFC), and 36.75% (SVM). Based on prospective evaluation, the percentages of correctly classified cases across the 5 tasks were 60.11% (hard parameter sharing), 58.13% (cross-stitch), 51.30% (single-task CNN), 42.07% (RFC), and 35.16% (SVM). 
Moreover, hard parameter sharing MTCNNs outperformed the other models in computational efficiency by using about the same number of trainable parameters as a single-task CNN. CONCLUSIONS: The hard parameter sharing MTCNN offers superior classification accuracy for automated coding support of pathology documents across a wide range of cancers and multiple information extraction tasks while maintaining similar training and inference time as those of a single task-specific model.
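The computational-efficiency claim above follows from hard parameter sharing reusing one encoder across tasks, so only the small task-specific heads multiply. A back-of-the-envelope parameter count (all dimensions here are illustrative, not the paper's actual model sizes):

```python
def n_params(input_dim, hidden_dim, task_classes, shared=True):
    """Count trainable weights (ignoring biases) for a one-hidden-layer
    text classifier: an encoder (input -> hidden) plus one linear head
    per task (hidden -> classes). With hard parameter sharing, all tasks
    reuse a single encoder; otherwise each task gets its own."""
    encoder = input_dim * hidden_dim
    heads = sum(hidden_dim * c for c in task_classes)
    encoders = encoder if shared else encoder * len(task_classes)
    return encoders + heads

tasks = [65, 4, 3, 63, 5]  # site, laterality, behavior, histology, grade
shared = n_params(3000, 300, tasks, shared=True)
separate = n_params(3000, 300, tasks, shared=False)
print(shared, separate)  # 942000 4542000
```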


Subject(s)
Information Storage and Retrieval/methods, Machine Learning, Natural Language Processing, Neoplasms/pathology, Neural Networks, Computer, Registries, Humans, Neoplasms/classification, Support Vector Machine
12.
Artif Intell Med; 101: 101726, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31813492

ABSTRACT

We introduce a deep learning architecture, hierarchical self-attention networks (HiSANs), designed for classifying pathology reports and show how its unique architecture leads to a new state-of-the-art in accuracy, faster training, and clear interpretability. We evaluate performance on a corpus of 374,899 pathology reports obtained from the National Cancer Institute's (NCI) Surveillance, Epidemiology, and End Results (SEER) program. Each pathology report is associated with five clinical classification tasks - site, laterality, behavior, histology, and grade. We compare the performance of the HiSAN against other machine learning and deep learning approaches commonly used on medical text data - Naive Bayes, logistic regression, convolutional neural networks, and hierarchical attention networks (the previous state-of-the-art). We show that HiSANs are superior to other machine learning and deep learning text classifiers in both accuracy and macro F-score across all five classification tasks. Compared to the previous state-of-the-art, hierarchical attention networks, HiSANs not only are an order of magnitude faster to train, but also achieve about 1% better relative accuracy and 5% better relative macro F-score.


Subject(s)
Neoplasms/pathology, Deep Learning, Humans, Natural Language Processing, Neoplasms/classification, Neural Networks, Computer
13.
Article in English | MEDLINE | ID: mdl-36081613

ABSTRACT

Automated text information extraction from cancer pathology reports is an active area of research to support national cancer surveillance. A well-known challenge is how to develop information extraction tools with robust performance across cancer registries. In this study we investigated whether transfer learning (TL) with a convolutional neural network (CNN) can facilitate cross-registry knowledge sharing. Specifically, we performed a series of experiments to determine whether a CNN trained with single-registry data is capable of transferring knowledge to another registry or whether developing a cross-registry knowledge database produces a more effective and generalizable model. Using data from two cancer registries and primary tumor site and topography as the information extraction task of interest, our study showed that TL results in 6.90% and 17.22% improvement of classification macro F-score over the baseline single-registry models. Detailed analysis illustrated that the observed improvement is evident in the low prevalence classes.
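The macro F-score reported above averages per-class F1 without weighting by class frequency, which is why gains concentrated in low-prevalence classes show up clearly in it. A minimal sketch (ours, not the study's evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted
    mean, so low-prevalence classes count as much as common ones."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

# The rare class 'r' drags the macro score down even though plain accuracy is 80%.
y_true = ["c"] * 8 + ["r"] * 2
y_pred = ["c"] * 10
print(macro_f1(y_true, y_pred))  # ≈ 0.444
```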

14.
IEEE Trans Med Imaging; 38(5): 1172-1184, 2019 May.
Article in English | MEDLINE | ID: mdl-30418900

ABSTRACT

Building a data-driven model to localize the origin of ventricular activation from 12-lead electrocardiograms (ECG) requires addressing the challenge of large anatomical and physiological variations across individuals. The alternative of a patient-specific model is, however, difficult to implement in clinical practice because the training data must be obtained through invasive procedures. In this paper, we present a novel approach that overcomes this problem of the scarcity of clinical data by transferring the knowledge from a large set of patient-specific simulation data while utilizing domain adaptation to address the discrepancy between the simulation and clinical data. The method that we have developed quantifies non-uniformly distributed simulation errors, which are then incorporated into the process of domain adaptation in the context of both classification and regression. This yields a quantitative model that, with the addition of 12-lead ECG data from each patient, provides progressively improved patient-specific localizations of the origin of ventricular activation. We evaluated the performance of the presented method in localizing 75 pacing sites on three in-vivo premature ventricular contraction (PVC) patients. We found that the presented model showed an improvement in localization accuracy relative to a model trained on clinical ECG data alone or a model trained on combined simulation and clinical data without considering domain shift. Furthermore, we demonstrated the ability of the presented model to improve the real-time prediction of the origin of ventricular activation with each added clinical ECG data, progressively guiding the clinician towards the target site.


Subject(s)
Electrocardiography/methods, Heart Ventricles/physiopathology, Ventricular Function/physiology, Algorithms, Computer Simulation, Humans, Machine Learning, Patient-Specific Modeling, Ventricular Premature Complexes/physiopathology
15.
Gene; 578(2): 162-8, 2016 Mar 10.
Article in English | MEDLINE | ID: mdl-26723512

ABSTRACT

A long-held presupposition in the field of bioinformatics holds that genetic, and now even epigenetic, 'information' can be abstracted from the physicochemical details of the macromolecular polymers in which it resides. It is perhaps rather ironic that this basic conjecture originated with the first observations of DNA structure itself. This static model of DNA led very quickly to the conclusion that only the nucleobase sequence itself is rich enough in molecular complexity to replicate a complex biology. This idea has been pervasive throughout genomic science, higher education, and popular culture ever since, to the point that most of us would accept it unquestioningly as fact. What is more alarming is that this conjecture is driving a significant portion of the technological development in modern genomics towards methods strongly rooted in DNA sequencing, thereby reducing a dynamic multi-dimensional biology into single-dimensional forms of data. Evidence countering this central tenet of bioinformatics has been quietly mounting over many decades, prompting some to propose that the genome must be studied from the perspective of its molecular reality, rather than as a body of information to be represented symbolically. Here, we explore the epistemological boundary between bioinformatics and molecular biology, and warn against an overly bioinformatic perspective. We review a selection of new bioinformatic methods that move beyond sequence-based approaches to include consideration of database-derived three-dimensional structures. However, we also note that these hybrid methods still ignore the most important element of gene function when attempting to improve outcomes: the fourth dimension of molecular dynamics over time.


Subject(s)
Computational Biology/trends, DNA/genetics, Molecular Dynamics Simulation/trends, Proteins/genetics, DNA/chemistry, Genomics, Mutation, Nucleic Acid Conformation, Protein Conformation, Proteins/chemistry, Sequence Analysis, DNA
16.
Middle East Afr J Ophthalmol; 22(1): 108-14, 2015.
Article in English | MEDLINE | ID: mdl-25624684

ABSTRACT

PURPOSE: To evaluate subjective quality of vision and patient satisfaction after laser in situ keratomileusis (LASIK) for myopia and myopic astigmatism. PATIENTS AND METHODS: A self-administered patient questionnaire consisting of 29 items was prospectively administered to LASIK patients at the Yemen Magrabi Hospital. Seven scales covering specific aspects of the quality of vision were formulated, including global satisfaction; quality of uncorrected and corrected vision; quality of night vision; glare; daytime driving; and night driving. The main outcome measures were responses to individual questions, scale scores, and correlations with clinical parameters. The scoring scale ranged from 1 (dissatisfied) to 3 (very satisfied) and was stratified as follows: 1-1.65 = dissatisfied; 1.66-2.33 = satisfied; and 2.33-3 = very satisfied. Data at 6 months postoperatively are reported. RESULTS: The study sample comprised 200 patients (122 females, 78 males) ranging in age from 18 to 46 years. The preoperative myopic sphere was -3.50 ± 1.70 D and myopic astigmatism was 0.90 ± 0.82 D. Of the eyes, 96% were within ±1.00 D of the targeted correction. Postoperatively, the uncorrected visual acuity was 20/40 or better in 99% of eyes. The mean score for overall satisfaction was 2.64 ± 0.8. A total of 98.5% of patients were satisfied or very satisfied with their surgery, and 98.5% considered that their main goal for surgery was achieved. Satisfaction with uncorrected vision was 2.5 ± 0.50. The mean score for glare at night was 1.98 ± 0.7. Night driving was rated more difficult than preoperatively by 6.2%, whereas 79% had less difficulty driving at night. CONCLUSION: Patient satisfaction with uncorrected vision after LASIK for myopia and myopic astigmatism appears to be excellent and is related to the residual postoperative refractive error.


Subject(s)
Astigmatism/surgery, Keratomileusis, Laser In Situ, Lasers, Excimer/therapeutic use, Myopia/surgery, Patient Satisfaction/statistics & numerical data, Adolescent, Adult, Astigmatism/physiopathology, Female, Humans, Male, Middle Aged, Myopia/physiopathology, Postoperative Period, Prospective Studies, Surveys and Questionnaires, Treatment Outcome, Visual Acuity/physiology
17.
Nucleic Acids Res; 42(17): 10915-26, 2014.
Article in English | MEDLINE | ID: mdl-25200075

ABSTRACT

While mRNA stability has been demonstrated to control rates of translation, generating both global and local synonymous codon biases in many unicellular organisms, it cannot adequately account for why codon bias strongly tracks neighboring intergene GC content, suggesting that the structural dynamics of DNA might also influence codon choice. Because minor groove width is highly governed by 3-base periodicity in GC, the existence of triplet-based codons might imply a functional role for the optimization of local DNA molecular dynamics via GC content at synonymous sites (≈GC3). We confirm a strong association between GC3-related intrinsic DNA flexibility and codon bias across 24 different prokaryotic multiple whole-genome alignments. We develop a novel test of natural selection targeting synonymous sites and demonstrate that GC3-related DNA backbone dynamics have been subject to moderate selective pressure, perhaps contributing to our observation that many genes possess extreme DNA backbone dynamics for their given protein space. This dual function of codons may impose universal functional constraints affecting the evolution of synonymous and non-synonymous sites. We propose that synonymous sites may have evolved as an 'accessory' during an early expansion of a primordial genetic code, allowing for multiplexed protein coding and structural dynamic information within the same molecular context.
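GC content at synonymous third codon positions (GC3), the central quantity above, can be computed directly from a coding sequence. A minimal sketch (function name and sequence are illustrative; this ignores the handful of codons whose third position is not fully synonymous):

```python
def gc3(cds):
    """Fraction of G or C at the third position of each codon in a
    coding sequence (a common proxy for synonymous-site GC content)."""
    third = cds[2::3]  # every third base, starting at codon position 3
    return sum(1 for b in third.upper() if b in "GC") / len(third)

# 4 codons (ATG GCT TAC GAA); third positions are G, T, C, A -> GC3 = 0.5
print(gc3("ATGGCTTACGAA"))  # 0.5
```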


Subject(s)
Codon, DNA/chemistry, Bacterial Proteins/genetics, Base Composition, DNA, Algal/chemistry, DNA, Archaeal/chemistry, DNA, Bacterial/chemistry, Genome, Mutation, Selection, Genetic, Transaminases/genetics
18.
Saudi Med J; 34(9): 913-9, 2013 Sep.
Article in English | MEDLINE | ID: mdl-24043002

ABSTRACT

OBJECTIVE: To compare preoperative and postoperative visual outcomes, determine patient's satisfaction, and evaluate visual symptoms after implantable collamer lens (ICL) implantation. METHODS: One hundred and twelve patients with myopia between -2.75 and -19.50 diopter had ICL or Toric ICL (TICL) implantation. The implantations were carried out at the Cornea and Refractive Unit, Magrabi Eye Hospital, Sana'a, Republic of Yemen between September 2007 and October 2010. Preoperative and postoperative uncorrected visual acuity (UCVA), best spectacle corrected visual acuity (BSCVA), and refraction was evaluated. Patient's satisfaction and visual symptoms were evaluated using a questionnaire. RESULTS: The mean age was 26.74 +/- 5.6 years. The mean preoperative UCVA improved from 0.01 +/- 0.04 to 0.75 +/- 0.22. The mean postoperative UCVA (0.75 +/- 0.23) versus preoperative BSCVA (0.61 +/- 0.23) had a significant statistical change (p<0.001), and Pearson correlation of 0.818. Preoperative BSCVA versus postoperative BSCVA gained 5 lines in 2.5%, 4 lines in 4.4%, 3 lines in 14.2%, 2 lines in 32.8%, and one line improvement in 24%, whereas it was maintained in 20.1%, and lost one or more lines in 2%. The mean score for the overall satisfaction was 2.67 +/- 0.45. A total of 15.2% reported complaint of halos, 13.4% reported perception of stars around lights, and 23.2% had glare. CONCLUSION: Implantation of ICL and TICL is safe and effective and provides predictable refractive results with good satisfaction in the treatment of moderate to high myopia, suggesting its viability as a surgical option for the treatment of myopia.


Subject(s)
Astigmatism/surgery, Lens Implantation, Intraocular, Myopia/surgery, Patient Satisfaction, Adolescent, Adult, Astigmatism/physiopathology, Female, Humans, Male, Middle Aged, Myopia/physiopathology, Young Adult