Results 1 - 20 of 48
1.
Neuroimage ; 273: 120075, 2023 06.
Article in English | MEDLINE | ID: mdl-37054828

ABSTRACT

Developmental reading disability is a prevalent and often enduring problem with varied mechanisms that contribute to its phenotypic heterogeneity. This mechanistic and phenotypic variation, as well as relatively modest sample sizes, may have limited the development of accurate neuroimaging-based classifiers for reading disability, in part because of the large feature space of neuroimaging datasets. An unsupervised learning model was used to reduce deformation-based data to a lower-dimensional manifold, and then supervised learning models were used to classify these latent representations in a dataset of 96 reading disability cases and 96 controls (mean age: 9.86 ± 1.56 years). A combined unsupervised autoencoder and supervised convolutional neural network approach provided an effective classification of cases and controls (accuracy: 77%; precision: 0.75; recall: 0.78). Brain regions that contributed to this classification accuracy were identified by adding noise to the voxel-level image data, which showed that reading disability classification accuracy was most influenced by the superior temporal sulcus, dorsal cingulate, and lateral occipital cortex. Regions that were most important for the accurate classification of controls included the supramarginal gyrus, orbitofrontal, and medial occipital cortex. The contribution of these regions reflected individual differences in reading-related abilities, such as non-word decoding or verbal comprehension. Together, the results demonstrate an optimal deep learning solution for classification using neuroimaging data. In contrast with standard mass-univariate test results, results from the deep learning model also provided evidence for regions that may be specifically affected in reading disability cases.


Subjects
Deep Learning , Dyslexia , Humans , Child , Dyslexia/diagnostic imaging , Brain/diagnostic imaging , Neuroimaging/methods , Comprehension
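The noise-perturbation analysis described in the abstract above — corrupting one region at a time and measuring the resulting drop in classification accuracy — can be sketched generically. This is an illustrative occlusion-style sketch, not the paper's implementation; the toy classifier, the feature "regions", and all parameter values below are assumptions.

```python
import random

def perturbation_importance(predict, samples, labels, regions, noise_scale=1.0, seed=0):
    """Score each named region by how much Gaussian noise added to that
    region's features degrades accuracy relative to the clean baseline."""
    rng = random.Random(seed)

    def accuracy(xs):
        return sum(predict(x) == y for x, y in zip(xs, labels)) / len(labels)

    baseline = accuracy(samples)
    importance = {}
    for name, feature_idxs in regions.items():
        noisy = []
        for x in samples:
            x2 = list(x)
            for i in feature_idxs:
                x2[i] += rng.gauss(0.0, noise_scale)
            noisy.append(x2)
        # Accuracy drop caused by corrupting only this region.
        importance[name] = baseline - accuracy(noisy)
    return importance

# Toy check: a classifier that only reads feature 0 should make region "a"
# (feature 0) matter and region "b" (feature 1) irrelevant.
predict = lambda x: 1 if x[0] > 0 else 0
samples = [[2.0, 0.0], [-2.0, 0.0], [2.5, 1.0], [-2.5, 1.0]]
labels = [1, 0, 1, 0]
scores = perturbation_importance(predict, samples, labels,
                                 {"a": [0], "b": [1]}, noise_scale=5.0)
```

Because the toy classifier never reads feature 1, region "b" shows exactly zero importance, while region "a" absorbs the entire accuracy drop.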
2.
Proc IEEE Inst Electr Electron Eng ; 111(10): 1236-1286, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37859667

ABSTRACT

The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of "emotion," coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.

3.
Plant Cell ; 29(10): 2413-2432, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28974550

ABSTRACT

Plant cell separation and expansion require pectin degradation by endogenous pectinases such as polygalacturonases, few of which have been functionally characterized. Stomata are a unique system to study both processes because stomatal maturation involves limited separation between sister guard cells and stomatal responses require reversible guard cell elongation and contraction. However, the molecular mechanisms for how stomatal pores form and how guard cell walls facilitate dynamic stomatal responses remain poorly understood. We characterized POLYGALACTURONASE INVOLVED IN EXPANSION3 (PGX3), which is expressed in expanding tissues and guard cells. PGX3-GFP localizes to the cell wall and is enriched at sites of stomatal pore initiation in cotyledons. In seedlings, ablating or overexpressing PGX3 affects both cotyledon shape and the spacing and pore dimensions of developing stomata. In adult plants, PGX3 affects rosette size. Although stomata in true leaves display normal density and morphology when PGX3 expression is altered, loss of PGX3 prevents smooth stomatal closure, and overexpression of PGX3 accelerates stomatal opening. These phenotypes correspond with changes in pectin molecular mass and abundance that can affect wall mechanics. Together, these results demonstrate that PGX3-mediated pectin degradation affects stomatal development in cotyledons, promotes rosette expansion, and modulates guard cell mechanics in adult plants.


Subjects
Arabidopsis Proteins/metabolism , Arabidopsis/metabolism , Plant Stomata/metabolism , Seedlings/metabolism , Arabidopsis/genetics , Arabidopsis Proteins/genetics , Gene Expression Regulation, Plant/genetics , Gene Expression Regulation, Plant/physiology , Plant Leaves/genetics , Plant Leaves/metabolism , Plant Stomata/genetics , Seedlings/genetics
4.
Int J Comput Vis ; 128(1): 1-25, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33664553

ABSTRACT

Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, as a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data for computers to learn to recognize body languages of humans. To accomplish this task, a large and growing annotated dataset with 9876 video clips of body movements and 13,239 human characters, named Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model the emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate computability of bodily expression. We report and compare results of several other baseline methods which were developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans.

5.
Pattern Recognit Lett ; 140: 165-171, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33324026

ABSTRACT

We propose a multi-region saliency-aware learning (MSL) method for cross-domain placenta image segmentation. Unlike most existing image-level transfer learning methods that fail to preserve the semantics of paired regions, our MSL incorporates the attention mechanism and a saliency constraint into the adversarial translation process, which can realize multi-region mappings in the semantic level. Specifically, the built-in attention module serves to detect the most discriminative semantic regions that the generator should focus on. Then we use the attention consistency as another guidance for retaining semantics after translation. Furthermore, we exploit the specially designed saliency-consistent constraint to enforce the semantic consistency by requiring the saliency regions unchanged. We conduct experiments using two real-world placenta datasets we have collected. We examine the efficacy of this approach in (1) segmentation and (2) prediction of the placental diagnoses of fetal and maternal inflammatory response (FIR, MIR). Experimental results show the superiority of the proposed approach over the state of the art.
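The saliency-consistent constraint described above — requiring saliency regions to stay unchanged across translation — amounts to penalizing any difference between the two saliency maps. A minimal sketch, assuming dense saliency maps given as nested lists; the function name and the L1 form of the penalty are illustrative, not the paper's exact formulation:

```python
def saliency_consistency_loss(sal_source, sal_translated):
    """Mean absolute difference between the saliency map of a source image
    and that of its translated counterpart; driving this toward zero keeps
    the salient (semantically important) regions unchanged."""
    total, count = 0.0, 0
    for row_src, row_tr in zip(sal_source, sal_translated):
        for a, b in zip(row_src, row_tr):
            total += abs(a - b)
            count += 1
    return total / count

loss = saliency_consistency_loss([[0.0, 1.0], [1.0, 0.0]],
                                 [[0.0, 0.5], [1.0, 0.0]])
```

In the full method this term would be added to the adversarial translation objective, weighted against the reconstruction and attention-consistency losses.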

6.
J Exp Bot ; 70(14): 3561-3572, 2019 07 23.
Article in English | MEDLINE | ID: mdl-30977824

ABSTRACT

In plants, stomatal guard cells are one of the most dynamic cell types, rapidly changing their shape and size in response to environmental and intrinsic signals to control gas exchange at the plant surface. Quantitative and systematic knowledge of the biomechanical underpinnings of stomatal dynamics will enable strategies to optimize stomatal responsiveness and improve plant productivity by enhancing the efficiency of photosynthesis and water use. Recent developments in microscopy, mechanical measurements, and computational modeling have revealed new insights into the biomechanics of stomatal regulation and the genetic, biochemical, and structural origins of how plants achieve rapid and reliable stomatal function by tuning the mechanical properties of their guard cell walls. This review compares historical and recent experimental and modeling studies of the biomechanics of stomatal complexes, highlighting commonalities and contrasts between older and newer studies. Key gaps in our understanding of stomatal functionality are also presented, along with assessments of potential methods that could bridge those gaps.


Subjects
Cell Wall/chemistry , Plant Stomata/chemistry , Biomechanical Phenomena , Models, Biological , Plants/chemistry
7.
BMC Bioinformatics ; 15 Suppl 12: S1, 2014.
Article in English | MEDLINE | ID: mdl-25474588

ABSTRACT

BACKGROUND: Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. METHODS: G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. RESULTS: Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. CONCLUSIONS: G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user.


Subjects
Biological Ontologies , Information Storage and Retrieval/methods , PubMed , Software , Algorithms , Internet , MEDLINE
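The query-expansion step above scores concept relevance with a Personalized PageRank over the ontology graph, which can be sketched as plain power iteration. The tiny concept graph, damping factor, and iteration count below are illustrative assumptions, not G-Bean's actual configuration or vocabulary:

```python
def personalized_pagerank(graph, seeds, damping=0.85, iterations=100):
    """Power-iteration Personalized PageRank over an adjacency dict.
    Teleportation mass is concentrated on the seed (query) concepts, so
    scores measure relevance to the query rather than global centrality."""
    teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in graph}
    rank = dict(teleport)
    for _ in range(iterations):
        nxt = {n: (1.0 - damping) * teleport[n] for n in graph}
        for node, neighbours in graph.items():
            if not neighbours:                      # dangling node: send mass to seeds
                for s in seeds:
                    nxt[s] += damping * rank[node] / len(seeds)
                continue
            share = damping * rank[node] / len(neighbours)
            for nb in neighbours:
                nxt[nb] += share
        rank = nxt
    return rank

# Illustrative three-concept graph with the query seeded at "dyslexia".
concept_graph = {
    "dyslexia": ["reading disability"],
    "reading disability": ["dyslexia", "brain"],
    "brain": ["reading disability"],
}
rank = personalized_pagerank(concept_graph, seeds={"dyslexia"})
```

In the described pipeline, the resulting scores would then be re-ranked with TF-IDF weighting and the top concepts used to expand the query.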
8.
BMC Med Imaging ; 14: 36, 2014 Oct 13.
Article in English | MEDLINE | ID: mdl-25311811

ABSTRACT

BACKGROUND: Early and accurate diagnosis of melanoma, the deadliest type of skin cancer, has the potential to reduce morbidity and mortality rates. However, early diagnosis of melanoma is not trivial even for experienced dermatologists, as it needs sampling and laboratory tests which can be extremely complex and subjective. The accuracy of clinical diagnosis of melanoma is also an issue, especially in distinguishing between a melanoma and a mole. To solve these problems, this paper presents an approach that makes non-subjective judgements based on quantitative measures for automatic diagnosis of melanoma. METHODS: Our approach involves image acquisition, image processing, feature extraction, and classification. A total of 187 images (19 malignant melanomas and 168 benign lesions) were collected in a clinic by a spectroscopic device that combines single-scattered, polarized light spectroscopy with multiple-scattered, un-polarized light spectroscopy. After noise reduction and image normalization, features were extracted based on statistical measurements (i.e. mean, standard deviation, mean absolute deviation, L1 norm, and L2 norm) of image pixel intensities to characterize the pattern of melanoma. Finally, these features were fed into certain classifiers to train learning models for classification. RESULTS: We adopted three classifiers - artificial neural network, naïve Bayes, and k-nearest neighbour - to evaluate our approach separately. The naïve Bayes classifier achieved the best performance - 89% accuracy, 89% sensitivity and 89% specificity - and was integrated with our approach in a desktop application running on the spectroscopic system for diagnosis of melanoma. CONCLUSIONS: Our work has two strengths. (1) We have used single scattered polarized light spectroscopy and multiple scattered unpolarized light spectroscopy to decipher the multilayered characteristics of human skin.
(2) Our approach does not need image segmentation, as we directly probe tiny spots in the lesion skin and the image scans do not involve background skin. The desktop application for automatic diagnosis of melanoma can help dermatologists get a non-subjective second opinion for their diagnosis decision.


Subjects
Melanoma/classification , Melanoma/diagnosis , Spectrum Analysis/instrumentation , Adult , Aged , Artificial Intelligence , Female , Humans , Image Processing, Computer-Assisted/instrumentation , Image Processing, Computer-Assisted/methods , Male , Middle Aged , Sensitivity and Specificity , Spectrum Analysis/methods , Young Adult
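The five statistical features named in the abstract (mean, standard deviation, mean absolute deviation, L1 norm, L2 norm) are straightforward to compute from a vector of pixel intensities. A minimal sketch; using the population form of the standard deviation is an assumption:

```python
import math

def intensity_features(pixels):
    """Summary statistics of pixel intensities used as classifier features."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)   # population SD
    mad = sum(abs(p - mean) for p in pixels) / n                # mean absolute deviation
    l1 = sum(abs(p) for p in pixels)                            # L1 norm
    l2 = math.sqrt(sum(p * p for p in pixels))                  # L2 norm
    return {"mean": mean, "std": std, "mad": mad, "l1": l1, "l2": l2}

feats = intensity_features([1.0, 2.0, 3.0, 4.0])
```

These feature vectors would then be fed to the classifiers the paper compares (artificial neural network, naïve Bayes, k-nearest neighbour).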
9.
Patterns (N Y) ; 5(5): 100964, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38800363

ABSTRACT

Visual learning often occurs in a specific context, where an agent acquires skills through exploration and tracking of its location in a consistent environment. The historical spatial context of the agent provides a similarity signal for self-supervised contrastive learning. We present a unique approach, termed environmental spatial similarity (ESS), that complements existing contrastive learning methods. Using images from simulated, photorealistic environments as an experimental setting, we demonstrate that ESS outperforms traditional instance discrimination approaches. Moreover, sampling additional data from the same environment substantially improves accuracy and provides new augmentations. ESS allows remarkable proficiency in room classification and spatial prediction tasks, especially in unfamiliar environments. This learning paradigm has the potential to enable rapid visual learning in agents operating in new environments with unique visual characteristics. Potentially transformative applications span from robotics to space exploration. Our proof of concept demonstrates improved efficiency over methods that rely on extensive, disconnected datasets.
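The core ESS signal — treating images captured near one another in the environment as positives for contrastive learning — reduces to labeling image pairs by the distance between their capture locations. A toy sketch; the radius threshold and the all-pairs enumeration are illustrative assumptions, not the paper's exact sampling scheme:

```python
import math

def spatial_positive_pairs(positions, radius):
    """Return (i, j, is_positive) for every image pair, where a pair is a
    contrastive positive when the two capture locations lie within `radius`
    of each other in the agent's environment."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            close = math.dist(positions[i], positions[j]) <= radius
            pairs.append((i, j, close))
    return pairs

# Three capture locations: the first two are nearby, the third is far away.
pairs = spatial_positive_pairs([(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)], radius=1.0)
```

The positive/negative labels produced this way would replace (or complement) the instance-discrimination labels of a standard contrastive objective.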

10.
Patterns (N Y) ; 4(10): 100816, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37876902

ABSTRACT

Bodily expressed emotion understanding (BEEU) aims to automatically recognize human emotional expressions from body movements. Psychological research has demonstrated that people often move using specific motor elements to convey emotions. This work takes three steps to integrate human motor elements to study BEEU. First, we introduce BoME (body motor elements), a highly precise dataset for human motor elements. Second, we apply baseline models to estimate these elements on BoME, showing that deep learning methods are capable of learning effective representations of human movement. Finally, we propose a dual-source solution to enhance the BEEU model with the BoME dataset, which trains with both motor element and emotion labels and simultaneously produces predictions for both. Through experiments on the BoLD in-the-wild emotion understanding benchmark, we showcase the significant benefit of our approach. These results may inspire further research utilizing human motor elements for emotion understanding and mental health analysis.

11.
Article in English | MEDLINE | ID: mdl-38090870

ABSTRACT

Most conventional crowd counting methods utilize a fully-supervised learning framework to establish a mapping between scene images and crowd density maps. They usually rely on a large quantity of costly and time-intensive pixel-level annotations for training supervision. One way to mitigate the intensive labeling effort and improve counting accuracy is to leverage large amounts of unlabeled images. This is attributed to the inherent self-structural information and rank consistency within a single image, offering additional qualitative relation supervision during training. Contrary to earlier methods that utilized the rank relations at the original image level, we explore such rank-consistency relations within the latent feature spaces. This approach enables the incorporation of numerous pyramid partial orders, strengthening the model representation capability. A notable advantage is that it can also increase the utilization ratio of unlabeled samples. Specifically, we propose a Deep Rank-consistEnt pyrAmid Model (DREAM), which makes full use of rank consistency across coarse-to-fine pyramid features in latent spaces for enhanced crowd counting with massive unlabeled images. In addition, we have collected a new unlabeled crowd counting dataset, FUDAN-UCC, comprising 4000 images for training purposes. Extensive experiments on four benchmark datasets, namely UCF-QNRF, ShanghaiTech PartA and PartB, and UCF-CC-50, show the effectiveness of our method compared with previous semi-supervised methods. The code is available at https://github.com/bridgeqiqi/DREAM.
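The rank-consistency constraint rests on a simple partial order: a crop contained inside a larger region can never hold more people than the region itself. On predicted counts this is naturally enforced with a hinge penalty. A sketch of that one idea, assuming counts are already predicted; the margin and the averaging are illustrative choices, not the paper's full pyramid formulation:

```python
def rank_consistency_loss(outer_counts, inner_counts, margin=0.0):
    """Average hinge penalty whenever a contained crop's predicted count
    exceeds the predicted count of the region that contains it."""
    penalties = [max(0.0, inner - outer + margin)
                 for outer, inner in zip(outer_counts, inner_counts)]
    return sum(penalties) / len(penalties)

# First pair is consistent (10 >= 8); second violates the order (5 < 6).
loss = rank_consistency_loss([10.0, 5.0], [8.0, 6.0])
```

Because the penalty needs no ground-truth counts, it can be computed on unlabeled images, which is what makes it useful as semi-supervised training signal.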

12.
Brain Inform (2023) ; 13974: 167-178, 2023 Aug.
Article in English | MEDLINE | ID: mdl-38352916

ABSTRACT

Specific learning disability of reading, or dyslexia, affects 5-17% of the population in the United States. Research on the neurobiology of dyslexia has included studies with relatively small sample sizes across research sites, thus limiting inference and the application of novel methods, such as deep learning. To address these issues and facilitate open science, we developed an online platform for data-sharing and advanced research programs to enhance opportunities for replication by providing researchers with secondary data that can be used in their research (https://www.dyslexiadata.org). This platform integrates a set of well-designed machine learning algorithms and tools to generate secondary datasets, such as cortical thickness, as well as regional brain volume metrics that have been consistently associated with dyslexia. Researchers can access shared data to address fundamental questions about dyslexia and development, replicate research findings, apply new methods, and educate the next generation of researchers. The overarching goal of this platform is to advance our understanding of a disorder that has significant academic, social, and economic impacts on children, their families, and society.

13.
Comput Med Imaging Graph ; 107: 102236, 2023 07.
Article in English | MEDLINE | ID: mdl-37146318

ABSTRACT

Stroke is one of the leading causes of death and disability in the world. Despite intensive research on automatic stroke lesion segmentation from non-invasive imaging modalities including diffusion-weighted imaging (DWI), challenges remain such as a lack of sufficient labeled data for training deep learning models and failure in detecting small lesions. In this paper, we propose BBox-Guided Segmentor, a method that significantly improves the accuracy of stroke lesion segmentation by leveraging expert knowledge. Specifically, our model uses a very coarse bounding box label provided by the expert and then performs accurate segmentation automatically. The small overhead of having the expert provide a rough bounding box leads to a large performance improvement in segmentation, which is paramount to accurate stroke diagnosis. To train our model, we employ a weakly-supervised approach that uses a large number of weakly-labeled images with only bounding boxes and a small number of fully labeled images. The scarce fully labeled images are used to train a generator segmentation network, while adversarial training is used to leverage the large number of weakly-labeled images to provide additional learning signals. We evaluate our method extensively using a unique clinical dataset of 99 fully labeled cases (i.e., with full segmentation map labels) and 831 weakly labeled cases (i.e., with only bounding box labels), and the results demonstrate the superior performance of our approach over state-of-the-art stroke lesion segmentation models. We also achieve performance competitive with a SOTA fully supervised method while using less than one-tenth of the complete labels. Our proposed approach has the potential to improve stroke diagnosis and treatment planning, which may lead to better patient outcomes.


Subjects
Diffusion Magnetic Resonance Imaging , Stroke , Humans , Stroke/diagnostic imaging , Image Processing, Computer-Assisted
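One way a coarse expert bounding box can supervise segmentation is by penalizing predicted lesion probability that falls outside the box. The sketch below is an assumption about how such a constraint could look, not the paper's adversarial formulation; the mask layout and box convention are illustrative:

```python
def outside_box_penalty(pred_mask, bbox):
    """Mean predicted lesion probability outside the expert's bounding box
    (row0, col0, row1, col1), half-open. Minimizing this discourages the
    segmenter from placing lesion mass where the expert says none exists."""
    r0, c0, r1, c1 = bbox
    outside, count = 0.0, 0
    for r, row in enumerate(pred_mask):
        for c, p in enumerate(row):
            if not (r0 <= r < r1 and c0 <= c < c1):
                outside += p
                count += 1
    return outside / max(count, 1)

# 2x2 probability mask; the box covers only the top-left pixel, so the
# spurious prediction at (1, 1) is penalized.
penalty = outside_box_penalty([[1.0, 0.0], [0.0, 1.0]], (0, 0, 1, 1))
```

A term like this can be computed on the many weakly-labeled cases, complementing the full segmentation loss on the scarce fully labeled ones.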
14.
Med Image Comput Comput Assist Interv ; 14225: 116-126, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38911098

ABSTRACT

The placenta is a valuable organ that can aid in understanding adverse events during pregnancy and predicting issues post-birth. Manual pathological examination and report generation, however, are laborious and resource-intensive. Limitations in diagnostic accuracy and model efficiency have impeded previous attempts to automate placenta analysis. This study presents a novel framework for the automatic analysis of placenta images that aims to improve accuracy and efficiency. Building on previous vision-language contrastive learning (VLC) methods, we propose two enhancements, namely Pathology Report Feature Recomposition and Distributional Feature Recomposition, which increase representation robustness and mitigate feature suppression. In addition, we employ efficient neural networks as image encoders to achieve model compression and inference acceleration. Experiments validate that the proposed approach outperforms prior work in both performance and efficiency by significant margins. The benefits of our method, including enhanced efficacy and deployability, may have significant implications for reproductive healthcare, particularly in rural areas or low- and middle-income countries.

15.
IEEE Trans Cybern ; 52(11): 12175-12188, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34133294

ABSTRACT

By utilizing physical models of the atmosphere built from current weather conditions, the numerical weather prediction model developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) can provide indicators of severe weather, such as heavy precipitation, for an early-warning system. However, the performance of precipitation forecasts from ECMWF often suffers from considerable prediction biases due to the high complexity and uncertainty of precipitation formation. Bias correction on precipitation (BCoP) is therefore used to correct these biases by exploiting forecasting variables, including historical precipitation observations and ECMWF predictor variables that are highly relevant to precipitation. Existing BCoP methods, such as model output statistics and the ordinal boosting autoencoder, take advantage of neither the spatiotemporal (ST) dependencies of precipitation nor the scales of related predictors, which can change with precipitation intensity. We propose an end-to-end deep-learning BCoP model, called the ST scale adaptive selection (SSAS) model, to automatically select the ST scales of the predictors via ST Scale-Selection Modules (S3M/TS2M) for acquiring the optimal high-level ST representations. Qualitative and quantitative experiments carried out on two benchmark datasets indicate that SSAS achieves state-of-the-art performance compared with 11 published BCoP methods, especially on heavy precipitation.

16.
Patterns (N Y) ; 3(12): 100627, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36569557

ABSTRACT

Automating the three-dimensional (3D) segmentation of stomatal guard cells and other confocal microscopy data is extremely challenging due to hardware limitations, hard-to-localize regions, and limited optical resolution. We present a memory-efficient, attention-based, one-stage segmentation neural network for 3D images of stomatal guard cells. Our model is trained end to end and achieved expert-level accuracy while leveraging only eight human-labeled volume images. As a proof of concept, we applied our model to 3D confocal data from a cell ablation experiment that tests the "polar stiffening" model of stomatal biomechanics. The resulting data allow us to refine this polar stiffening model. This work presents a comprehensive, automated, computer-based volumetric analysis of fluorescent guard cell images. We anticipate that our model will allow biologists to rapidly test cell mechanics and dynamics and help them identify plants that more efficiently use water, a major limiting factor in global agricultural production and an area of critical concern during climate change.

17.
Med Image Anal ; 80: 102522, 2022 08.
Article in English | MEDLINE | ID: mdl-35810587

ABSTRACT

In an emergency room (ER) setting, stroke triage or screening is a common challenge. A quick CT is usually done instead of MRI due to MRI's slow throughput and high cost. Clinical tests are commonly referred to during the process, but the misdiagnosis rate remains high. We propose a novel multimodal deep learning framework, DeepStroke, to achieve computer-aided stroke presence assessment by recognizing patterns of minor facial muscles incoordination and speech inability for patients with suspicion of stroke in an acute setting. Our proposed DeepStroke takes one-minute facial video data and audio data readily available during stroke triage for local facial paralysis detection and global speech disorder analysis. Transfer learning was adopted to reduce face-attribute biases and improve generalizability. We leverage a multi-modal lateral fusion to combine the low- and high-level features and provide mutual regularization for joint training. Novel adversarial training is introduced to obtain identity-free and stroke-discriminative features. Experiments on our video-audio dataset with actual ER patients show that DeepStroke outperforms state-of-the-art models and achieves better performance than both a triage team and ER doctors, attaining a 10.94% higher sensitivity and maintaining 7.37% higher accuracy than traditional stroke triage when specificity is aligned. Meanwhile, each assessment can be completed in less than six minutes, demonstrating the framework's great potential for clinical translation.


Subjects
Deep Learning , Stroke , Emergency Service, Hospital , Humans , Magnetic Resonance Imaging , Stroke/diagnostic imaging , Triage
18.
Proteomics ; 11(19): 3845-52, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21761563

ABSTRACT

Identification of genes and pathways involved in diseases and physiological conditions is a major task in systems biology. In this study, we developed a novel non-parametric Ising model to integrate protein-protein interaction networks and microarray data for identifying differentially expressed (DE) genes. We also proposed a simulated annealing algorithm to find the optimal configuration of the Ising model. The Ising model was applied to two breast cancer microarray data sets. The results showed that more cancer-related DE sub-networks and genes were identified by the Ising model than by the Markov random field model. Furthermore, cross-validation experiments showed that DE genes identified by the Ising model can improve classification performance compared with DE genes identified by the Markov random field model.


Subjects
Breast Neoplasms/genetics , Gene Expression Profiling/methods , Gene Expression Regulation, Neoplastic , Protein Interaction Maps , Algorithms , Computer Simulation , Female , Humans , Models, Biological , Models, Statistical , Oligonucleotide Array Sequence Analysis/methods , Sensitivity and Specificity
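The optimization step described above — simulated annealing over configurations of an Ising model — can be sketched on a toy graph. Spins ±1 sit on nodes, with energy E = -Σ J_ij s_i s_j - Σ h_i s_i; the annealer proposes single-spin flips and accepts uphill moves with probability exp(-ΔE/T) under a cooling schedule. The tiny ferromagnetic triangle, cooling schedule, and parameter values are illustrative assumptions, not the paper's network or settings:

```python
import math
import random

def anneal_ising(edges, fields, steps=5000, t0=2.0, seed=1):
    """Simulated annealing for a small Ising model given as edge couplings
    {(u, v): J} and node fields {n: h}; returns the best spin configuration
    found and its energy."""
    rng = random.Random(seed)
    nodes = list(fields)
    spin = {n: rng.choice((-1, 1)) for n in nodes}

    def energy(s):
        e = -sum(j * s[a] * s[b] for (a, b), j in edges.items())
        return e - sum(h * s[n] for n, h in fields.items())

    cur_e = energy(spin)
    best, best_e = dict(spin), cur_e
    for step in range(steps):
        t = max(t0 * (1.0 - step / steps), 1e-3)  # linear cooling schedule
        n = rng.choice(nodes)
        spin[n] = -spin[n]                        # propose a single-spin flip
        new_e = energy(spin)
        if new_e <= cur_e or rng.random() < math.exp((cur_e - new_e) / t):
            cur_e = new_e
            if cur_e < best_e:
                best, best_e = dict(spin), cur_e
        else:
            spin[n] = -spin[n]                    # reject: undo the flip
    return best, best_e

# Toy ferromagnet: three mutually coupled nodes with a small positive field,
# whose ground state is all spins +1 with energy -3.3.
edges = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 1.0}
fields = {"a": 0.1, "b": 0.1, "c": 0.1}
best, best_e = anneal_ising(edges, fields)
```

In the paper's setting, spins would encode DE/non-DE gene states over the protein-protein interaction network rather than a three-node toy.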
19.
Nucleic Acids Res ; 37(Web Server issue): W345-9, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19491312

ABSTRACT

We have developed a set of online tools for measuring the semantic similarities of Gene Ontology (GO) terms and the functional similarities of gene products, and for further discovering biomedical knowledge from the GO database. The tools have been used for about 6.9 million times by 417 institutions from 43 countries since October 2006. The online tools are available at: http://bioinformatics.clemson.edu/G-SESAME.


Subjects
Genes , Software , Vocabulary, Controlled , Cluster Analysis , Databases, Genetic , Internet
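G-SESAME's actual measures aggregate the semantics of shared ancestors in the GO graph; as a deliberately simplified stand-in, two terms can be scored by the Jaccard overlap of their ancestor sets. Everything below (the term names, the toy DAG, and the measure itself) is illustrative, not the tool's algorithm:

```python
def ancestors(term, parents):
    """All ancestors of `term` (plus the term itself) in a GO-like DAG given
    as a child -> list-of-parents mapping."""
    seen, stack = {term}, [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def ancestor_jaccard(a, b, parents):
    """Toy semantic similarity: Jaccard overlap of the two ancestor sets."""
    sa, sb = ancestors(a, parents), ancestors(b, parents)
    return len(sa & sb) / len(sa | sb)

dag = {"c": ["b"], "b": ["a"], "d": ["a"]}   # "a" is the root term
sim = ancestor_jaccard("c", "d", dag)
```

Terms sharing only the root score low, while terms sharing a deep common ancestor score high — the same intuition that drives information-content-based GO similarity measures.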
20.
J Xray Sci Technol ; 19(4): 531-44, 2011.
Article in English | MEDLINE | ID: mdl-25214385

ABSTRACT

This study presents a computer-aided classification method to distinguish osteoarthritis finger joints from healthy ones based on the functional images captured by x-ray guided diffuse optical tomography. Three imaging features, joint space width, optical absorption, and scattering coefficients, are employed to train a Least Squares Support Vector Machine (LS-SVM) classifier for osteoarthritis classification. The 10-fold validation results show that all osteoarthritis joints are clearly identified and all healthy joints are ruled out by the LS-SVM classifier. The best sensitivity, specificity, and overall accuracy of the classification by experienced technicians based on manual calculation of optical properties and visual examination of optical images are only 85%, 93%, and 90%, respectively. Therefore, our LS-SVM based computer-aided classification is a considerably improved method for osteoarthritis diagnosis.


Subjects
Finger Joint/pathology , Image Interpretation, Computer-Assisted/methods , Osteoarthritis/diagnosis , Tomography, Optical/methods , Adult , Aged , Aged, 80 and over , Equipment Design , Female , Humans , Middle Aged , Support Vector Machine , Tomography, Optical/instrumentation
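An LS-SVM replaces the standard SVM hinge loss with a squared error on ±1 targets, turning training into a regularized least-squares problem. The linear, gradient-descent version below is a minimal sketch of that idea only: the real classifier is a kernel LS-SVM solved in closed form, and the single toy feature standing in for joint space width (plus all data and parameter values) is an assumption:

```python
def train_lssvm_linear(xs, ys, lam=0.01, lr=0.5, epochs=2000):
    """Fit w, b minimizing mean squared error to +/-1 labels plus an L2
    penalty on w -- the least-squares-SVM objective for a linear model."""
    d, n = len(xs[0]), len(xs)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [lam * wi for wi in w], 0.0
        for x, y in zip(xs, ys):
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
            for i in range(d):
                gw[i] += err * x[i] / n
            gb += err / n
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

def classify(w, b, x):
    """Sign of the decision function: +1 (healthy) or -1 (osteoarthritis)."""
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy data: one illustrative feature (e.g. normalized joint space width),
# with osteoarthritic joints labeled -1 and healthy joints +1.
xs = [[0.2], [0.3], [0.8], [0.9]]
ys = [-1, -1, 1, 1]
w, b = train_lssvm_linear(xs, ys)
preds = [classify(w, b, x) for x in xs]
```

On separable data like this the learned boundary sits between the two clusters, so the training set is classified perfectly; the study's version additionally uses optical absorption and scattering coefficients as features.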