1.
Neuroimage ; 281: 120376, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37714389

ABSTRACT

Tractography algorithms are prone to reconstructing spurious connections. The set of streamlines generated with tractography can be post-processed to retain the streamlines that are most biologically plausible. Several microstructure-informed filtering algorithms are available for this purpose; however, the comparative performance of these methods has not been extensively evaluated. In this study, we aim to evaluate streamline filtering and post-processing algorithms using simulated connectome phantoms. We first establish a framework for generating connectome phantoms featuring brain-like white matter fiber architectures. We then use our phantoms to systematically evaluate the performance of a range of streamline filtering algorithms, including SIFT, COMMIT, and LiFE. We find that all filtering methods successfully improve connectome accuracy, although filter performance depends on the complexity of the underlying white matter fiber architecture. Filtering algorithms can markedly improve tractography accuracy for simple tubular fiber bundles (F-measure, deterministic: unfiltered 0.49, best filter 0.72; probabilistic: unfiltered 0.37, best filter 0.81), but for more complex brain-like fiber architectures the improvement is modest (deterministic: unfiltered 0.53, best filter 0.54; probabilistic: unfiltered 0.46, best filter 0.50). Overall, filtering algorithms have the potential to improve the accuracy of connectome mapping pipelines, particularly for weighted connectomes and pipelines using probabilistic tractography methods. Our results highlight the need for further advances in tractography and streamline filtering to improve the accuracy of connectome mapping.
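The F-measures reported above compare a reconstructed connectome against the phantom's known ground-truth network. As an illustration only (not the authors' evaluation code), the sketch below shows one common way to compute such an F-measure from binary adjacency matrices; the toy matrices and the zero threshold for edge presence are assumptions.

```python
import numpy as np

def connectome_f_measure(estimated, ground_truth):
    """F-measure (harmonic mean of precision and recall) over the
    presence or absence of edges in two connectivity matrices."""
    est = (np.asarray(estimated) > 0).astype(int)
    ref = (np.asarray(ground_truth) > 0).astype(int)
    iu = np.triu_indices_from(ref, k=1)        # each undirected edge counted once
    est, ref = est[iu], ref[iu]
    tp = np.sum((est == 1) & (ref == 1))
    fp = np.sum((est == 1) & (ref == 0))
    fn = np.sum((est == 0) & (ref == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy 4-node phantom and an imperfect reconstruction (hypothetical values).
truth = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]])
recon = np.array([[0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 0, 1], [0, 1, 1, 0]])
print(connectome_f_measure(recon, truth))      # ~0.67 for this toy example
```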

2.
Brief Bioinform ; 22(5)2021 09 02.
Article in English | MEDLINE | ID: mdl-33834181

ABSTRACT

MOTIVATION: The high accuracy of recent haplotype phasing tools is enabling the integration of haplotype (or phase) information more widely in genetic investigations. One such possibility is phase-aware expression quantitative trait loci (eQTL) analysis, where haplotype-based analysis has the potential to detect associations that may otherwise be missed by standard SNP-based approaches. RESULTS: We present eQTLHap, a novel method to investigate associations between gene expression and genetic variants, considering their haplotypic and genotypic effect. Using multiple simulations based on real data, we demonstrate that phase-aware eQTL analysis significantly outperforms typical SNP-based methods when the causal genetic architecture involves multiple SNPs. We show that phase-aware eQTL analysis is robust to phasing errors, with only a minor impact (<4%) on sensitivity. Applying eQTLHap to real GEUVADIS and GTEx datasets detects numerous novel eQTLs undetected by a single-SNP approach, with 22 eQTLs replicating across studies or tissue types, highlighting the utility of phase-aware eQTL analysis. AVAILABILITY AND IMPLEMENTATION: https://github.com/ziadbkh/eQTLHap. CONTACT: ziad.albkhetan@gmail.com. SUPPLEMENTARY INFORMATION: Supplementary data are available at Briefings in Bioinformatics online.
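A minimal, hypothetical illustration of why phase can matter for eQTL mapping (this is not the eQTLHap implementation): when expression is driven by a specific allele combination residing on the same haplotype, a haplotype-level predictor explains more variance than a single-SNP dosage. All variable names, effect sizes and the simple least-squares fit below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Phased genotypes for two SNPs: haplotypes[i, j, k] is the 0/1 allele of
# individual i, haplotype copy j (maternal/paternal), SNP k.
haplotypes = rng.integers(0, 2, size=(n, 2, 2))

# Simulated causal architecture: only a haplotype carrying the derived allele
# at *both* SNPs raises expression -- a signal a single-SNP test dilutes.
risk_hap = (haplotypes[:, :, 0] == 1) & (haplotypes[:, :, 1] == 1)
expression = risk_hap.sum(axis=1) * 0.8 + rng.normal(0, 1, n)

def r_squared(predictor, outcome):
    """Variance in `outcome` explained by a univariate linear fit."""
    x = np.column_stack([np.ones(len(predictor)), predictor])
    beta, *_ = np.linalg.lstsq(x, outcome, rcond=None)
    resid = outcome - x @ beta
    return 1 - resid.var() / outcome.var()

snp_dosage = haplotypes[:, :, 0].sum(axis=1).astype(float)  # standard SNP test
hap_dosage = risk_hap.sum(axis=1).astype(float)             # phase-aware predictor
print("single-SNP R^2:", r_squared(snp_dosage, expression))
print("haplotype  R^2:", r_squared(hap_dosage, expression))
```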


Subject(s)
Computational Biology/methods, Genome-Wide Association Study/methods, Haplotypes, Single Nucleotide Polymorphism, Quantitative Trait Loci/genetics, Algorithms, Gene Expression Regulation, Genotype, Humans, Internet, Linkage Disequilibrium
3.
Brief Bioinform ; 22(4)2021 07 20.
Article in English | MEDLINE | ID: mdl-33236761

ABSTRACT

Haplotype phasing is a critical step for many genetic applications, but incorrect estimates of phase can negatively impact downstream analyses. One proposed strategy to improve phasing accuracy is to combine multiple independent phasing estimates to overcome the limitations of any individual estimate. However, such a strategy is yet to be thoroughly explored. This study provides a comprehensive evaluation of consensus strategies for haplotype phasing. We explore the performance of different consensus paradigms and the effect of specific constituent tools across several datasets with different characteristics, as well as their impact on the downstream task of genotype imputation. Based on the outputs of existing phasing tools, we explore two different strategies to construct haplotype consensus estimators: voting across outputs from multiple phasing tools, and voting across multiple outputs of a single non-deterministic tool. We find that the consensus approach built from multiple tools reduces switch error (SE) by an average of 10% compared to any constituent tool when applied to European populations, and has the highest accuracy regardless of population ethnicity, sample size, variant density or variant frequency. Furthermore, the consensus estimator improves the accuracy of the downstream task of genotype imputation carried out by the widely used Minimac3, pbwt and BEAGLE5 tools. Our results provide guidance on how to produce the most accurate phasing estimates and on the trade-offs of a consensus approach. Our implementation of consensus haplotype phasing, consHap, is available freely at https://github.com/ziadbkh/consHap. Supplementary information: Supplementary data are available at Briefings in Bioinformatics online.
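One of the two consensus strategies described above is voting across the outputs of multiple phasing tools. A deliberately simplified sketch of that idea follows (not the consHap implementation): each estimate is a 0/1 vector over heterozygous sites, estimates are anchored to a common orientation at the first site, and the consensus is a site-wise majority vote. The example outputs are invented.

```python
import numpy as np

def consensus_phase(estimates):
    """Site-wise majority vote over several phasing estimates for one sample.

    Each estimate gives, for every heterozygous site, which haplotype copy
    carries the alternate allele.  Phase is only defined up to a global flip,
    so estimates are first anchored to agree at the first site."""
    est = np.asarray(estimates, dtype=int).copy()
    flip = est[:, 0] != est[0, 0]
    est[flip] = 1 - est[flip]
    votes = est.sum(axis=0)
    return (votes * 2 > est.shape[0]).astype(int)

# Three hypothetical phasing outputs over six heterozygous sites; the second
# is globally flipped, the third contains a switch error.
tool_a = [0, 0, 1, 1, 0, 1]
tool_b = [1, 1, 0, 0, 1, 0]   # same phase as tool_a, opposite labelling
tool_c = [0, 0, 1, 0, 1, 1]   # disagrees at two sites
print(consensus_phase([tool_a, tool_b, tool_c]))   # -> [0 0 1 1 0 1]
```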


Subject(s)
Algorithms, Nucleic Acid Databases, Single Nucleotide Polymorphism, DNA Sequence Analysis, Haplotypes, Humans
4.
NMR Biomed ; 34(12): e4605, 2021 12.
Article in English | MEDLINE | ID: mdl-34516016

ABSTRACT

Diffusion MRI tractography is the most widely used macroscale method for mapping connectomes in vivo. However, tractography is prone to various errors and biases, and thus tractography-derived connectomes require careful validation. Here, we critically review studies that have developed or utilized phantoms and tracer maps to validate tractography-derived connectomes, either quantitatively or qualitatively. We identify key factors impacting connectome reconstruction accuracy, including streamline seeding, propagation and filtering methods, and consider the strengths and limitations of state-of-the-art connectome phantoms and associated validation studies. These studies demonstrate the inherent limitations of current fiber orientation models and tractography algorithms and their impact on connectome reconstruction accuracy. Reconstructing connectomes with both high sensitivity and high specificity is challenging, given that some tractography methods can generate an abundance of spurious connections, while others can overlook genuine fiber bundles. We argue that streamline filtering can minimize spurious connections and potentially improve the biological plausibility of connectomes derived from tractography. We find that algorithmic choices such as the tractography seeding methodology, angular threshold, and streamline propagation method can substantially impact connectome reconstruction accuracy. Hence, careful application of tractography is necessary to reconstruct accurate connectomes. Improvements in diffusion MRI acquisition techniques will not necessarily overcome current tractography limitations without accompanying modeling and algorithmic advances.


Subject(s)
Diffusion Tensor Imaging/methods, Algorithms, Brain/diagnostic imaging, Humans, Machine Learning, Imaging Phantoms, Sensitivity and Specificity
5.
Neuroimage ; 212: 116654, 2020 05 15.
Article in English | MEDLINE | ID: mdl-32068163

ABSTRACT

We propose a new framework to map structural connectomes using deep learning and diffusion MRI. We show that our framework not only enables connectome mapping with a convolutional neural network (CNN), but can also be straightforwardly incorporated into conventional connectome mapping pipelines to enhance accuracy. Our framework involves decomposing the entire brain volume into overlapping blocks. Blocks are sufficiently small to ensure that a CNN can be efficiently trained to predict each block's internal connectivity architecture. We develop a block stitching algorithm to rebuild the full brain volume from these blocks and thereby map end-to-end connectivity matrices. To evaluate our block decomposition and stitching (BDS) framework independent of CNN performance, we first map each block's internal connectivity using conventional streamline tractography. Performance is evaluated using simulated diffusion MRI data generated from numerical connectome phantoms with known ground truth connectivity. Due to the redundancy achieved by allowing blocks to overlap, we find that our block decomposition and stitching steps per se can enhance the accuracy of probabilistic and deterministic tractography algorithms by up to 20-30%. Moreover, we demonstrate that our framework can improve the strength of structure-function coupling between in vivo diffusion and functional MRI data. We find that structural brain networks mapped with deep learning correlate more strongly with functional brain networks (r = 0.45) than those mapped with conventional tractography (r = 0.36). In conclusion, our BDS framework not only enables connectome mapping with deep learning, but its two constituent steps can be straightforwardly incorporated as part of conventional connectome mapping pipelines to enhance accuracy.
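To make the two geometric ingredients of the framework concrete, here is an illustrative sketch only (not the authors' BDS code) of overlapping block decomposition and of stitching by averaging overlapping block-level connectivity estimates. The block size, stride, node count and the random stand-in for per-block tractography are all assumptions.

```python
import numpy as np
from itertools import product

def block_origins(volume_shape, block_size, stride):
    """Origins of overlapping blocks tiling a 3-D volume; a stride smaller
    than the block size yields the overlap used for redundancy."""
    return list(product(*[range(0, max(dim - block_size, 0) + 1, stride)
                          for dim in volume_shape]))

# A 40x40x40 toy volume split into 16-voxel blocks with 50% overlap.
origins = block_origins((40, 40, 40), block_size=16, stride=8)
print(len(origins), "overlapping blocks")          # 4 * 4 * 4 = 64 blocks

# Stitching sketch: every block contributes an estimate of connectivity between
# the node pairs it covers; overlapping estimates are averaged.
rng = np.random.default_rng(1)
n_nodes = 10
acc = np.zeros((n_nodes, n_nodes))
count = np.zeros((n_nodes, n_nodes))
for _ in origins:
    nodes = rng.choice(n_nodes, size=4, replace=False)  # nodes inside this block
    block_estimate = rng.random((4, 4))                  # stand-in for block tractography
    acc[np.ix_(nodes, nodes)] += block_estimate
    count[np.ix_(nodes, nodes)] += 1
stitched = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
```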


Subject(s)
Brain, Connectome/methods, Deep Learning, Neurological Models, Diffusion Magnetic Resonance Imaging, Humans
6.
Magn Reson Med ; 81(2): 1368-1384, 2019 02.
Article in English | MEDLINE | ID: mdl-30303550

ABSTRACT

PURPOSE: Human connectomics necessitates high-throughput, whole-brain reconstruction of multiple white matter fiber bundles. Scaling up tractography to meet these high-throughput demands yields new fiber tracking challenges, such as minimizing spurious connections and controlling for gyral biases. The aim of this study is to determine which of the two broadest classes of tractography algorithms, deterministic or probabilistic, is most suited to mapping connectomes. METHODS: This study develops numerical connectome phantoms that feature realistic network topologies and that are matched to the fiber complexity of in vivo diffusion MRI (dMRI) data. The phantoms are utilized to evaluate the performance of tensor-based and multi-fiber implementations of deterministic and probabilistic tractography. RESULTS: For connectome phantoms that are representative of the fiber complexity of in vivo dMRI, multi-fiber deterministic tractography yields the most accurate connectome reconstructions (F-measure = 0.35). Probabilistic algorithms are hampered by an abundance of false-positive connections, leading to lower specificity (F = 0.19). While omitting connections with the fewest number of streamlines (thresholding) improves the performance of probabilistic algorithms (F = 0.38), multi-fiber deterministic tractography remains optimal when it benefits from thresholding (F = 0.42). CONCLUSIONS: Multi-fiber deterministic tractography is well suited to connectome mapping, while connectome thresholding is essential when using probabilistic algorithms.
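The thresholding step referred to above (omitting connections supported by the fewest streamlines) can be sketched as follows. This is an illustration under assumptions, not the exact procedure of the paper: edges of a streamline-count matrix are ranked and the weakest removed until a chosen edge density remains.

```python
import numpy as np

def threshold_connectome(streamline_counts, density=0.2):
    """Keep only the strongest edges of a symmetric streamline-count matrix,
    removing those with the fewest streamlines until the requested edge
    density is reached."""
    w = np.triu(np.asarray(streamline_counts, dtype=float), k=1)
    n_possible = np.count_nonzero(np.triu(np.ones_like(w), k=1))
    n_keep = int(round(density * n_possible))
    if n_keep == 0 or not np.count_nonzero(w):
        return np.zeros_like(w)
    cutoff = np.sort(w[w > 0])[::-1][:n_keep][-1]
    kept = np.where(w >= cutoff, w, 0)
    return kept + kept.T                       # restore symmetry

counts = np.array([[0, 50, 2, 0],
                   [50, 0, 1, 30],
                   [2, 1, 0, 4],
                   [0, 30, 4, 0]])
print(threshold_connectome(counts, density=0.5))   # keeps the 3 strongest edges
```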


Subject(s)
Connectome, Diffusion Magnetic Resonance Imaging, Computer-Assisted Image Processing/methods, White Matter/diagnostic imaging, Algorithms, Brain/diagnostic imaging, Computer Simulation, Diffusion Tensor Imaging, Humans, Imaging Phantoms, Probability, ROC Curve, Reproducibility of Results, Sensitivity and Specificity, Computer-Assisted Signal Processing
7.
J Biomed Inform ; 66: 19-31, 2017 02.
Article in English | MEDLINE | ID: mdl-28011233

ABSTRACT

BACKGROUND AND OBJECTIVE: Critical care events such as sepsis or septic shock in intensive care units (ICUs) are dangerous complications that can cause multiple organ failure and eventual death. Predicting such events in advance will allow clinicians to stage effective interventions for averting these critical complications. METHODS: It is widely understood that physiological variables such as blood pressure and heart rate show gradual changes over a certain period of time prior to the occurrence of septic shock. This work investigates the performance of a novel machine learning approach for the early prediction of septic shock. The approach combines highly informative sequential patterns extracted from multiple physiological variables and captures the interactions among these patterns via coupled hidden Markov models (CHMM). In particular, the patterns are extracted from three non-invasive waveform measurements: the mean arterial pressure levels, the heart rates and the respiratory rates of septic shock patients from a large clinical ICU dataset called MIMIC-II. EVALUATION AND RESULTS: As baselines, SVM and HMM models are trained on the continuous time-series data of the given patients, using MAP (mean arterial pressure), HR (heart rate) and RR (respiratory rate). Single-channel-pattern-based HMM (SCP-HMM) and multi-channel-pattern-based coupled HMM (MCP-HMM) are compared against these baseline models using 5-fold cross-validation accuracies over multiple rounds. In particular, the improvement of MCP-HMM over the baseline models is statistically significant (p = 0.0014). Our experiments demonstrate strong competitive accuracy in the prediction of septic shock, especially when the interactions between the multiple variables are coupled by the learning model. CONCLUSIONS: The novelty of the approach stems from the integration of sequence-based physiological pattern markers with the sequential CHMM model to learn dynamic physiological behavior, as well as from the coupling of such patterns to build powerful risk stratification models for septic shock patients.
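The SCP-HMM and MCP-HMM models above are pattern-based and coupled; as background only, the sketch below illustrates the simpler HMM baseline idea mentioned in the abstract: fit one class-conditional Gaussian HMM to MAP/HR/RR sequences from each patient group and classify a new trajectory by likelihood. The simulated vital signs, drift values and hyperparameters are assumptions, and the example relies on the hmmlearn package.

```python
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

rng = np.random.default_rng(0)

def simulate_vitals(n_patients, drift):
    """Toy stand-in for MAP/HR/RR series (60 time points, 3 channels);
    `drift` mimics gradual pre-shock deterioration."""
    series = []
    for _ in range(n_patients):
        t = np.arange(60)[:, None]
        base = np.array([85.0, 80.0, 16.0]) + rng.normal(0, 3, (60, 3))
        series.append(base + drift * t)
    return series

shock_train = simulate_vitals(20, drift=np.array([-0.3, 0.4, 0.1]))
stable_train = simulate_vitals(20, drift=np.zeros(3))

def fit_hmm(sequences, n_states=3):
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    return model.fit(X, lengths)

hmm_shock, hmm_stable = fit_hmm(shock_train), fit_hmm(stable_train)

# Classify an unseen trajectory by which class-conditional HMM scores it higher.
test = simulate_vitals(1, drift=np.array([-0.3, 0.4, 0.1]))[0]
label = "septic shock" if hmm_shock.score(test) > hmm_stable.score(test) else "stable"
print("predicted:", label)
```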


Subject(s)
Intensive Care Units, Machine Learning, Risk Assessment/methods, Septic Shock, Blood Pressure, Critical Care, Forecasting, Heart Rate, Humans, Multiple Organ Failure
9.
BMC Genomics ; 15 Suppl 9: S16, 2014.
Article in English | MEDLINE | ID: mdl-25521201

ABSTRACT

BACKGROUND: Altered expression profiles of microRNAs (miRNAs) are linked to many diseases, including lung cancer. miRNA expression profiling is reproducible and miRNAs are very stable. These characteristics make miRNAs ideal biomarker candidates. METHOD: This work aims to detect 2- and 3-miRNA groups, together with specific expression ranges of these miRNAs, to form simple linear discriminant rules for biomarker identification and biological interpretation. Our method is based on a novel committee of decision trees to derive 2- and 3-miRNA 100%-frequency rules. This method is applied to a data set of lung miRNA expression profiles of 61 squamous cell carcinoma (SCC) samples and 10 normal tissue samples. A distance separation technique is used to select the most reliable rules, which are then evaluated on a large independent data set. RESULTS: We obtained four 2-miRNA and three 3-miRNA top-ranked rules. One important rule is: if the expression level of miR-98 is above 7.356 and the expression level of miR-205 is below 9.601 (log2 quantile-normalized MirVan miRNA Bioarray signals), then the sample is normal rather than cancerous, with specificity and sensitivity both 100%. The classification performance of our best miRNA rules remarkably outperformed that of randomly selected miRNA rules. Our data analysis also showed that miR-98 and miR-205 have two common predicted target genes, FZD3 and RPS6KA3, which are genes associated with carcinoma according to the Online Mendelian Inheritance in Man (OMIM) database. We also found that most of the chromosomal loci of these miRNAs have a high frequency of genomic alteration in lung cancer. On the independent data set (with balanced controls), the three miRNAs miR-126, miR-205 and miR-182 from our best rule can separate the two classes of samples with an accuracy of 84.49%, sensitivity of 91.40% and specificity of 77.14%. CONCLUSION: Our results indicate that rule discovery followed by distance separation is a powerful computational method for identifying reliable miRNA biomarkers. The visualization of the rules and the clear separation between the normal and cancer samples by our rules will help biology experts with their analysis and biological interpretation.
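The quoted 2-miRNA rule is simple enough to state directly in code. The following sketch applies it to hypothetical expression values (not the study data) and computes sensitivity and specificity; only the two cut-offs come from the abstract.

```python
import numpy as np

def rule_normal(mir98, mir205):
    """The 2-miRNA rule quoted above (log2 quantile-normalized array signals):
    miR-98 above 7.356 AND miR-205 below 9.601 -> call the sample normal."""
    return (mir98 > 7.356) & (mir205 < 9.601)

# Hypothetical expression values for six samples (invented for illustration).
mir98 = np.array([7.9, 8.2, 6.1, 5.8, 7.6, 6.4])
mir205 = np.array([8.7, 9.1, 10.5, 11.2, 9.0, 10.9])
is_normal = np.array([True, True, False, False, True, False])

pred = rule_normal(mir98, mir205)
sensitivity = np.mean(pred[is_normal])        # normal samples called normal
specificity = np.mean(~pred[~is_normal])      # cancer samples called cancer
print(pred, sensitivity, specificity)
```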


Subject(s)
Tumor Biomarkers/genetics, Squamous Cell Carcinoma/diagnosis, Squamous Cell Carcinoma/genetics, Lung Neoplasms/diagnosis, Lung Neoplasms/genetics, MicroRNAs/genetics, Case-Control Studies, Human Chromosomes/genetics, Gene Expression Profiling, Genetic Loci/genetics, Genomics, Humans
10.
Article in English | MEDLINE | ID: mdl-36264723

ABSTRACT

Knowledge distillation (KD), as an efficient and effective model compression technique, has received considerable attention in deep learning. The key to its success lies in transferring knowledge from a large teacher network to a small student network. However, most existing KD methods consider only one type of knowledge, learned from either instance features or instance relations via a specific distillation strategy, failing to explore the idea of transferring different types of knowledge with different distillation strategies. Moreover, the widely used offline distillation suffers from a limited learning capacity due to the fixed large-to-small teacher-student architecture. In this article, we devise collaborative KD via multi-knowledge transfer (CKD-MKT), which promotes both self-learning and collaborative learning in a unified framework. Specifically, CKD-MKT utilizes a multiple knowledge transfer framework that assembles self and online distillation strategies to effectively: 1) fuse different kinds of knowledge, allowing multiple students to learn from both individual instances and instance relations, and 2) guide each other through collaborative learning and self-learning. Experiments and ablation studies on six image datasets demonstrate that the proposed CKD-MKT significantly outperforms recent state-of-the-art KD methods.
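CKD-MKT combines several distillation strategies; as background only, the sketch below shows the standard instance-level soft-target term that such instance-feature distillation builds on (temperature-softened KL divergence between teacher and student class distributions). It is not the CKD-MKT objective; the logits and temperature are made up.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation term: KL(teacher || student) on temperature-
    softened distributions, scaled by T^2 so gradient magnitudes stay
    comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

teacher = np.array([[6.0, 1.0, -2.0], [0.5, 4.0, 0.0]])   # hypothetical logits
student = np.array([[3.0, 1.5, -1.0], [0.2, 2.5, 0.3]])
print(kd_loss(student, teacher))
```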

11.
BMC Bioinformatics ; 11 Suppl 1: S46, 2010 Jan 18.
Article in English | MEDLINE | ID: mdl-20122220

ABSTRACT

BACKGROUND: Protein structure comparison is a fundamental task in structural biology. While the number of known protein structures has grown rapidly over the last decade, searching a large database of protein structures is still relatively slow using existing methods. There is a need for new techniques which can rapidly compare protein structures, whilst maintaining high matching accuracy. RESULTS: We have developed IR Tableau, a fast protein comparison algorithm, which leverages the tableau representation to compare protein tertiary structures. IR Tableau compares tableaux using information retrieval style feature indexing techniques. Experimental analysis on the ASTRAL SCOP protein structural domain database demonstrates that IR Tableau achieves two orders of magnitude speedup over the search times of existing methods, while producing search results of comparable accuracy. CONCLUSION: We show that it is possible to obtain very significant speedups for the protein structure comparison problem by employing an information retrieval style approach for indexing proteins. The comparison accuracy achieved is also strong, thus opening the way for large scale processing of very large protein structure databases.
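A rough sketch of the information-retrieval flavour of this approach (not the IR Tableau implementation): treat the orientation codes of a tableau as a bag of features and compare proteins by cosine similarity of their feature vectors, which is what makes indexing and search fast. The example tableaux and codes are invented.

```python
from collections import Counter
import math

def tableau_features(tableau):
    """Bag-of-codes vector: count each pairwise orientation code in the tableau."""
    return Counter(code for row in tableau for code in row if code)

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two hypothetical tableaux (two-letter orientation codes between SSE pairs).
query = [["PE", "RD"], ["OS"]]
subject = [["PE", "RD"], ["RT"]]
print(cosine(tableau_features(query), tableau_features(subject)))   # ~0.67
```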


Subject(s)
Algorithms, Protein Databases, Proteins/chemistry, Molecular Models, Protein Folding
12.
BMC Bioinformatics ; 10 Suppl 1: S19, 2009 Jan 30.
Article in English | MEDLINE | ID: mdl-19208118

ABSTRACT

BACKGROUND: Microarray gene expression profiling has provided extensive datasets that can describe characteristics of cancer patients. An important challenge for this type of data is the discovery of gene sets which can be used as the basis of developing a clinical predictor for cancer. It is desirable that such gene sets be compact, give accurate predictions across many classifiers, be biologically relevant and have good biological process coverage. RESULTS: By using a new type of multiple classifier voting approach, we have identified gene sets that can predict breast cancer prognosis accurately, for a range of classification algorithms. Unlike a wrapper approach, our method is not specialised towards a single classification technique. Experimental analysis demonstrates higher prediction accuracies for our sets of genes compared to previous work in the area. Moreover, our sets of genes are generally more compact than those previously proposed. Taking a biological viewpoint, from the literature, most of the genes in our sets are known to be strongly related to cancer. CONCLUSION: We show that it is possible to obtain superior classification accuracy with our approach and obtain a compact gene set that is also biologically relevant and has good coverage of different biological processes.


Subject(s)
Algorithms, Gene Expression Profiling/methods, Neoplasm-Related Genes, Genetic Databases, Humans, Oligonucleotide Array Sequence Analysis/methods
13.
J Biomed Semantics ; 9(1): 7, 2018 01 30.
Article in English | MEDLINE | ID: mdl-29382397

ABSTRACT

BACKGROUND: Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance, and a method to apply graph kernels for extracting relations expressed in multiple sentences. RESULTS: Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel on the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. CONCLUSIONS: We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel for datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is significantly more accurate than the ASM kernel, achieving better performance on most datasets.


Subject(s)
Computer Graphics, Data Mining/methods, Protein Interaction Mapping
14.
PLoS One ; 13(6): e0198281, 2018.
Article in English | MEDLINE | ID: mdl-29864167

ABSTRACT

In this paper, we propose a novel classification model for automatically identifying individuals with age-related macular degeneration (AMD) or diabetic macular edema (DME) using retinal features from spectral-domain optical coherence tomography (SD-OCT) images. Our classification method uses retinal features such as the thickness of the retina and of the individual retinal layers, and the volume of pathologies such as drusen and hyper-reflective intra-retinal spots. We automatically extract ten clinically important retinal features by segmenting individual SD-OCT images for classification purposes. The effectiveness of the extracted features is evaluated using several classification methods, such as Random Forest, on 251 subjects (59 normal, 177 AMD and 15 DME). We performed 15-fold cross-validation tests for the three phenotypes (DME, AMD and normal) on these data sets and achieved an accuracy of more than 95% on each data set with the Random Forest classifier. When we trained the system as a two-class problem of normal versus eye with pathology, using the Random Forest classifier, we obtained an accuracy of more than 96%. The area under the receiver operating characteristic curve (AUC) reaches 0.99 for each dataset. We also compared four state-of-the-art classification methods on these participants and found that our proposed method achieved the best accuracy.
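As a purely illustrative sketch of the classification set-up described above (not the authors' pipeline), the snippet below runs a Random Forest with 15-fold cross-validation on a random stand-in for the ten extracted retinal features; the class counts are taken from the abstract, while the features and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in feature matrix: 251 subjects x 10 retinal features (layer
# thicknesses, drusen volume, hyper-reflective spot volume, ...).  Real
# values would come from the SD-OCT segmentation step described above.
X = rng.normal(size=(251, 10))
y = np.array([0] * 59 + [1] * 177 + [2] * 15)          # normal / AMD / DME

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=15)             # 15-fold CV as in the abstract
print("mean accuracy on random stand-in features:", scores.mean())
```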


Subject(s)
Retina/diagnostic imaging, Retinal Diseases/classification, Retinal Diseases/pathology, Optical Coherence Tomography/methods, Algorithms, Area Under Curve, Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/pathology, Female, Humans, Macular Degeneration/diagnostic imaging, Macular Degeneration/pathology, Male, ROC Curve, Retina/pathology, Retinal Diseases/diagnostic imaging, Retinal Drusen/diagnostic imaging, Retinal Drusen/pathology
15.
Comput Biol Med ; 74: 18-29, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27160638

ABSTRACT

We present a novel method for the quantification of focal arteriolar narrowing (FAN) in the human retina, a precursor of hypertension, stroke and other cardiovascular diseases. A reliable and robust arteriolar boundary mapping method is proposed in which intensity, gradient and spatial prior knowledge about the arteriolar shape are incorporated into a graph-based optimization method to obtain the arteriolar boundary. Following the mapping of the arteriolar boundaries, arteriolar widths are analysed to quantify the severity of FAN. We evaluate our proposed method on a dataset of 116 retinal arteriolar segments that were manually graded by two expert graders. The experimental results indicate a strong correlation between the quantified FAN measurement scores provided by our method and the FAN severity levels graded by the two experts. Our proposed FAN measurement score, percent narrowing (PN), shows high correlation with the manually graded FAN severity levels (Spearman correlation coefficient of 0.82 (p<0.0001) for grader 1 and 0.84 (p<0.0001) for grader 2). In addition, the proposed method shows better reproducibility (Spearman correlation coefficient ρ = 0.92 (p<0.0001)) than the two expert graders ([Formula: see text] (p<0.0001) and [Formula: see text]) in two successive sessions. The quantitative measurements provided by the proposed method can help establish a more reliable link between FAN and known systemic and eye diseases.
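The percent narrowing (PN) score is the paper's FAN measure; its exact formula is not given in the abstract, so the following is one plausible formalisation offered purely for illustration: compare the narrowest measured width along the segment with a reference calibre estimated away from the narrowing. The median reference and the sample width profile are assumptions.

```python
import numpy as np

def percent_narrowing(widths):
    """Assumed PN definition: percentage drop of the narrowest section of the
    arteriolar segment relative to a reference calibre (median width here).
    The definition used in the paper may differ."""
    widths = np.asarray(widths, dtype=float)
    reference = np.median(widths)
    return 100.0 * (reference - widths.min()) / reference

# Widths (pixels) sampled along a hypothetical arteriolar segment with a
# focal narrowing near the middle.
profile = [14.2, 14.0, 13.8, 9.1, 8.7, 9.4, 13.9, 14.1]
print(f"PN = {percent_narrowing(profile):.1f}%")        # ~37% for this profile
```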


Subject(s)
Hypertension/diagnostic imaging, Computer-Assisted Image Processing/methods, Retinal Artery/diagnostic imaging, Female, Humans, Hypertension/physiopathology, Male, Retinal Artery/physiopathology
17.
Comput Med Imaging Graph ; 45: 102-11, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26398564

ABSTRACT

White matter lesions (WMLs) are small groups of dead cells that clump together in the white matter of the brain. In this paper, we propose a reliable method to automatically segment WMLs. Our method uses a novel filter to enhance the intensity of WMLs. A feature set containing enhanced intensity, anatomical and spatial information is then used to train a random forest classifier for the initial segmentation of WMLs. Following that, a reliable and robust Markov random field (MRF) model based on an edge potential function is proposed to obtain the final segmentation by removing false-positive WMLs. Quantitative evaluation of the proposed method is performed on 24 subjects of the ENVISion study. The segmentation results are validated against manual segmentation performed under the supervision of an expert neuroradiologist. The results show a Dice similarity index of 0.76 for severe lesion load, 0.73 for moderate lesion load and 0.61 for mild lesion load. In addition, we have compared our method with three state-of-the-art methods on 20 subjects of the MS lesion challenge dataset of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society, where our method shows better segmentation accuracy than the state-of-the-art methods. These results indicate that the proposed method can assist neuroradiologists in assessing WMLs in clinical practice.


Subject(s)
Brain/pathology, Diffusion Tensor Imaging/methods, Computer-Assisted Image Interpretation/methods, Leukoencephalopathies/pathology, Automated Pattern Recognition/methods, White Matter/pathology, Computer Simulation, Contrast Media, Statistical Data Interpretation, Humans, Machine Learning, Markov Chains, Statistical Models, Observer Variation, Reproducibility of Results, Sensitivity and Specificity, Subtraction Technique
18.
Article in English | MEDLINE | ID: mdl-25571443

ABSTRACT

Retinal arteriovenous (AV) nicking is a precursor of hypertension, stroke and other cardiovascular diseases. In this paper, an effective method is proposed for the analysis of retinal venular widths to automatically classify the severity level of AV nicking. We use a combination of intensity and edge information from the vein to compute its widths. The widths at various sections of the vein near the crossover point are then used to train a random forest classifier to classify the severity of AV nicking. We analyzed 47 color retinal images obtained from two population-based studies for quantitative evaluation of the proposed method. We compare the detection accuracy of our method with a recently published four-class AV nicking classification method. Our proposed method achieves 64.51% classification accuracy, in contrast to the 49.46% reported for the state-of-the-art method.


Subject(s)
Arteriovenous Malformations/diagnosis, Computer-Assisted Image Processing/methods, Retina/pathology, Algorithms, Arteriovenous Malformations/pathology, Automation, Color, Fundus Oculi, Humans
19.
Comput Biol Med ; 44: 1-9, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24377683

ABSTRACT

Retinal imaging can facilitate the measurement and quantification of subtle variations and abnormalities in the retinal vasculature. Retinal vascular imaging may thus offer potential as a noninvasive research tool to probe the role and pathophysiology of the microvasculature, and as a cardiovascular risk prediction tool. To achieve this, an accurate measurement method is needed that is statistically sound and repeatable. This paper presents the methodology of such a system, which assists physicians in measuring vessel caliber (i.e., diameter or width) from digitized fundus photographs. The system uses texture and edge information to measure and quantify vessel caliber. A graphical user interface is developed to allow retinal image graders to select an individual vessel area, for which the system automatically returns the vessel caliber, even in noisy images. The accuracy of the method is validated against caliber measurements from graders and from an existing method. The system provides highly accurate vessel caliber measurements that are also reproducible with high consistency.


Subject(s)
Cardiovascular Diseases/pathology, Computer-Assisted Image Processing, Retinal Artery/pathology, Software, Cardiovascular Diseases/physiopathology, Humans, Retinal Artery/physiopathology
20.
Methods Mol Biol ; 932: 87-106, 2013.
Article in English | MEDLINE | ID: mdl-22987348

ABSTRACT

In this chapter we provide a survey of protein secondary and supersecondary structure prediction using methods from machine learning. Our focus is on machine learning methods applicable to β-hairpin and β-sheet prediction, but we also discuss methods for more general supersecondary structure prediction. We provide background on the secondary and supersecondary structures that we discuss, the features used to describe them, and the basic theory behind the machine learning methods used. We survey the machine learning methods available for secondary and supersecondary structure prediction and compare them where possible.


Subject(s)
Artificial Intelligence, Computational Biology/methods, Molecular Models, Protein Secondary Structure, Proteins/chemistry, Amino Acid Sequences