1.
Nat Methods ; 18(11): 1317-1321, 2021 11.
Article En | MEDLINE | ID: mdl-34725480

The scaling of single-cell data exploratory analysis with the rapidly growing diversity and quantity of single-cell omics datasets demands more interpretable and robust data representation that is generalizable across datasets. Here, we have developed a 'linearly interpretable' framework that combines the interpretability and transferability of linear methods with the representational power of non-linear methods. Within this framework we introduce a data representation and visualization method, GraphDR, and a structure discovery method, StructDR, that unifies cluster, trajectory and surface estimation and enables their confidence set inference.


Algorithms , Computational Biology/methods , Computer Graphics/statistics & numerical data , Datasets as Topic , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods , Software , Animals , Humans , RNA-Seq
2.
PLoS One ; 16(8): e0256187, 2021.
Article En | MEDLINE | ID: mdl-34388224

Given a trained deep graph convolution network (GCN), how can we effectively compress it into a compact network without significant loss of accuracy? Compressing a trained deep GCN into a compact GCN is of great importance for deploying the model in environments with limited computing resources, such as mobile or embedded systems. However, previous works on compressing deep GCNs do not consider multi-hop aggregation, even though it is the main purpose of stacking multiple GCN layers. In this work, we propose MustaD (Multi-staged knowledge Distillation), a novel approach for compressing deep GCNs into single-layered GCNs through multi-staged knowledge distillation (KD). MustaD distills the knowledge of 1) the aggregation from multiple GCN layers as well as 2) the task prediction, while preserving the multi-hop feature aggregation of deep GCNs in a single effective layer. Extensive experiments on four real-world datasets show that MustaD achieves state-of-the-art performance compared with other KD-based methods. Specifically, MustaD improves accuracy by up to 4.21 percentage points over the second-best KD model.
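The single-effective-layer idea can be sketched with plain NumPy: propagating K times with a normalized adjacency before one linear map mimics the multi-hop aggregation of K stacked GCN layers. This is a simplification in the spirit of MustaD's single effective layer; the graph, features, and weights below are made-up toys, and MustaD's actual student architecture and distillation losses differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-node path graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

A_hat = A + np.eye(3)                     # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization

X = rng.normal(size=(3, 4))               # toy node features
W = rng.normal(size=(4, 2))               # toy weight matrix

K = 3
# K-hop aggregation collapsed into a single layer: A_norm^K @ X @ W.
H = np.linalg.matrix_power(A_norm, K) @ X @ W
print(H.shape)
```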


Computer Graphics/statistics & numerical data , Neural Networks, Computer , Datasets as Topic , Humans , Knowledge Bases
4.
Brief Bioinform ; 22(5)2021 09 02.
Article En | MEDLINE | ID: mdl-33415333

Predicting disease-related long non-coding RNAs (lncRNAs) aids the discovery of new biomarkers for the prevention, diagnosis and treatment of complex human diseases. In this paper, we propose a machine-learning classification approach, GAERF, that identifies disease-related lncRNAs by combining a graph auto-encoder (GAE) with a random forest (RF). First, we combined the relationships among lncRNAs, miRNAs and diseases into a heterogeneous network. Then, low-dimensional representation vectors of nodes were learned from the network by the GAE, which reduces the dimensionality and heterogeneity of the biological data. Taking these feature vectors as input, we trained an RF classifier to predict new lncRNA-disease associations (LDAs). Experimental results show that the learned representations characterize lncRNA-disease associations accurately. GAERF achieves superior performance owing to its ensemble learning method, significantly outperforming other methods. Moreover, case studies further demonstrated that GAERF is an effective method to predict LDAs.
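As a rough sketch of the pipeline's shape, the heterogeneous network can be assembled as one block adjacency matrix and low-dimensional node vectors derived from it. Truncated SVD stands in here as a simple linear surrogate for the paper's graph auto-encoder, and all nodes and edges are invented:

```python
import numpy as np

# Toy heterogeneous network: 3 lncRNAs (0-2), 2 miRNAs (3-4), 2 diseases (5-6).
n_lnc, n_mi, n_dis = 3, 2, 2
n = n_lnc + n_mi + n_dis
A = np.zeros((n, n))
edges = [(0, 3), (1, 3), (2, 4), (0, 5), (1, 6), (3, 5), (4, 6)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Low-dimensional node representations from the top-k singular vectors
# (surrogate for the GAE's learned embeddings).
k = 3
U, s, _ = np.linalg.svd(A)
Z = U[:, :k] * s[:k]          # one k-dim vector per node

# Feature vector for a candidate lncRNA-disease pair: concatenation,
# which would then be fed to a random-forest classifier.
pair_feat = np.concatenate([Z[0], Z[n_lnc + n_mi]])
print(Z.shape, pair_feat.shape)
```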


Lung Neoplasms/genetics , Machine Learning , Neural Networks, Computer , Prostatic Neoplasms/genetics , RNA, Long Noncoding/genetics , Stomach Neoplasms/genetics , Biomarkers, Tumor/genetics , Biomarkers, Tumor/metabolism , Computational Biology/methods , Computer Graphics/statistics & numerical data , Decision Trees , Gene Expression Regulation, Neoplastic , Humans , Lung Neoplasms/diagnosis , Lung Neoplasms/metabolism , Lung Neoplasms/pathology , Male , MicroRNAs/classification , MicroRNAs/genetics , MicroRNAs/metabolism , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/metabolism , Prostatic Neoplasms/pathology , RNA, Long Noncoding/classification , RNA, Long Noncoding/metabolism , ROC Curve , Risk Factors , Stomach Neoplasms/diagnosis , Stomach Neoplasms/metabolism , Stomach Neoplasms/pathology
5.
J Clin Epidemiol ; 132: 34-45, 2021 04.
Article En | MEDLINE | ID: mdl-33309886

BACKGROUND AND OBJECTIVE: To introduce potential static tabular and graphical techniques for visually presenting overlap between systematic reviews (SRs) included in overviews of systematic reviews (OoSRs). METHODS: The graphical approaches described include Venn and Euler diagrams, as well as matrix-based, node-link, and aggregation-based techniques. We used fundamental concepts from set theory and network theory to develop our novel graphical approaches. The graphical displays were created using R. RESULTS: Overview authors have the flexibility to choose from a variety of visualizations, depending on the characteristics of their study. If an OoSR includes few SRs, a Venn or an Euler diagram can be used. For OoSRs with more SRs, UpSet plots, heatmaps, and node-link graphs are more appropriate for visualizing overlapping SRs. Stacked bar plots constitute an aggregation-based technique for illustrating overlap. Strengths and limitations of each graphical approach are presented. CONCLUSION: The degree of overlap should be explored for the entire study and for specific outcomes of interest. The proposed graphical techniques may assist methodologists and authors in identifying overlap, which in turn may improve validity and transparency in OoSRs. More research is needed to determine which technique is most useful and easiest to understand.
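The matrix-based displays (heatmaps, UpSet plots) start from which primary studies each review includes; a minimal sketch of the underlying overlap computation, using toy data and the Jaccard index as one possible pairwise overlap measure, is:

```python
# Membership of primary studies in each systematic review (toy data).
reviews = {
    "SR1": {"s1", "s2", "s3"},
    "SR2": {"s2", "s3", "s4"},
    "SR3": {"s5"},
}

# Pairwise overlap matrix (Jaccard index) -- the numeric input that a
# heatmap of overlapping SRs would display.
names = sorted(reviews)
overlap = {
    (a, b): len(reviews[a] & reviews[b]) / len(reviews[a] | reviews[b])
    for a in names for b in names
}
print(overlap[("SR1", "SR2")])  # 2 shared of 4 total -> 0.5
```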


Computer Graphics/statistics & numerical data , Research Design , Systematic Reviews as Topic/methods , Evidence-Based Medicine , Humans
6.
Sci Rep ; 10(1): 18250, 2020 10 26.
Article En | MEDLINE | ID: mdl-33106501

Incorrect drug target identification is a major obstacle in drug discovery. Only 15% of drugs advance from Phase II to approval, with ineffective targets accounting for over 50% of these failures [1-3]. Advances in data fusion and computational modeling have independently progressed towards addressing this issue. Here, we capitalize on both approaches with Rosalind, a comprehensive gene prioritization method that combines heterogeneous knowledge graph construction with relational inference via tensor factorization to accurately predict disease-gene links. Rosalind demonstrates a performance increase of 18-50% over five comparable state-of-the-art algorithms. On historical data, Rosalind prospectively identifies 1 in 4 therapeutic relationships eventually proven true. Beyond efficacy, Rosalind accurately predicts clinical trial successes (75% recall at rank 200) and distinguishes likely failures (74% recall at rank 200). Lastly, Rosalind predictions were experimentally tested in a patient-derived in vitro assay for rheumatoid arthritis (RA), which yielded 5 promising genes, one of which is unexplored in RA.
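Tensor-factorization link predictors typically score a (head, relation, tail) triple from learned embeddings; a DistMult-style bilinear score is one common choice, sketched below with random toy embeddings. The exact factorization model Rosalind uses is not specified here, so this is only illustrative of the general technique:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8

# Toy embeddings for a gene, a disease, and an "associated-with"
# relation (in practice these would be learned from the knowledge graph).
gene = rng.normal(size=d)
disease = rng.normal(size=d)
assoc = rng.normal(size=d)

# DistMult-style trilinear score, squashed to a link probability.
score = float(np.sum(gene * assoc * disease))
prob = 1.0 / (1.0 + np.exp(-score))
print(0.0 < prob < 1.0)
```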


Arthritis, Rheumatoid/drug therapy , Computational Biology/methods , Computer Graphics/statistics & numerical data , Computer Simulation/standards , Drug Development/methods , Drug Discovery/methods , Drug Evaluation, Preclinical , Algorithms , Arthritis, Rheumatoid/genetics , Arthritis, Rheumatoid/metabolism , Bayes Theorem , Humans
7.
J Pregnancy ; 2020: 3943498, 2020.
Article En | MEDLINE | ID: mdl-32411465

BACKGROUND: A partograph is a graphic representation of labor used by health professionals to monitor labor progress and fetal and maternal wellbeing. However, its utilization and associated factors have not yet been studied in Hadiya Zone, Southern Ethiopia. Hence, the aim of this study was to determine partograph utilization and associated factors among obstetric care providers at public health facilities in Hadiya Zone, Southern Ethiopia. METHODS: A facility-based cross-sectional study of 436 health professionals was conducted from March 04 to April 07, 2019. Simple random sampling was used to select 19 health facilities and study participants from the selected facilities. Data were collected using a pretested structured questionnaire, entered into EPI-data version 3.1, and exported to the Statistical Package for the Social Sciences (SPSS) version 20. Descriptive statistics and binary and multivariable logistic regression analyses were performed. P values less than 0.05 were used to declare significant associations between dependent and independent variables. RESULTS: The overall magnitude of partograph utilization was found to be 54.4%, and findings from document review revealed that only 10 of 18 parameters were recorded completely. Type of health facility (hospital as compared to health center) (AOR = 2.96; CI = 1.71, 5.12), on-the-job training on the partograph (AOR = 7.06; CI = 4.3, 11.37), knowledge about the partograph (AOR = 2.12; CI = 1.3, 3.9), and a favorable attitude toward partograph use (AOR = 1.8; CI = 1.12, 2.97) were significantly associated with partograph use. CONCLUSION: Overall partograph utilization was low, and incomplete recording of required parameters was observed.
Working in a hospital, on-the-job training on the partograph, knowledge of the partograph, and a favorable attitude toward its use were factors positively affecting partograph use.
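The adjusted odds ratios (AORs) above come from multivariable logistic regression. As a minimal illustration of the underlying quantity, a crude (unadjusted) odds ratio with a Woolf-type 95% confidence interval can be computed from a 2x2 table; the counts below are invented, not the study's data:

```python
import math

# Invented 2x2 table: partograph use by on-the-job training status.
used_trained, unused_trained = 120, 30
used_untrained, unused_untrained = 117, 169

# Crude odds ratio and 95% CI on the log scale (Woolf method).
or_ = (used_trained * unused_untrained) / (unused_trained * used_untrained)
se = math.sqrt(1 / used_trained + 1 / unused_trained +
               1 / used_untrained + 1 / unused_untrained)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A multivariable model would additionally adjust this estimate for facility type, knowledge, and attitude.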


Computer Graphics/statistics & numerical data , Health Facilities , Health Personnel , Labor, Obstetric , Procedures and Techniques Utilization , Attitude of Health Personnel , Ethiopia/epidemiology , Female , Humans , Pregnancy
8.
PLoS One ; 14(11): e0223745, 2019.
Article En | MEDLINE | ID: mdl-31725742

In this paper, we define novel graph measures for directed networks. The measures are based on graph polynomials utilizing the out- and in-degrees of directed graphs. Based on these polynomials, we define further polynomials and use their positive zeros as graph measures. The measures have meaningful properties that we investigate through analytical and numerical results. As the computational complexity of the measures is polynomial, our approach is efficient and can be applied to large networks. We emphasize that our approach clearly complements the literature in this field, as, to the best of our knowledge, existing complexity measures for directed graphs have never been applied on a large scale.
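The general recipe, taking the positive zero of a degree-based polynomial as a graph measure, can be illustrated with an invented polynomial family (the paper defines its own): f(x) = Σ_v x^(d_out(v)+1) − 1 is strictly increasing on x > 0 with f(0) = −1, so it has exactly one positive zero, which bisection finds quickly:

```python
# Directed toy graph as adjacency lists; out-degrees drive the polynomial.
adj = {0: [1, 2], 1: [2], 2: [0], 3: []}
out_deg = [len(succ) for succ in adj.values()]

# Illustrative polynomial: strictly increasing for x > 0, f(0) = -1,
# hence a unique positive zero (the paper's polynomial family differs).
def f(x):
    return sum(x ** (d + 1) for d in out_deg) - 1.0

# Bisection for the unique positive zero, used as the graph measure.
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
print(round(root, 6))
```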


Computational Biology/statistics & numerical data , Computer Graphics/statistics & numerical data , Computer Simulation , Game Theory , Mathematical Concepts , Systems Biology/statistics & numerical data
9.
PLoS One ; 14(4): e0214852, 2019.
Article En | MEDLINE | ID: mdl-30973907

In this paper, we put forward a real-time, multiple-GPU (multi-GPU) accelerated virtual-reality interaction simulation framework in which objects reconstructed from camera images interact with virtual deformable objects. First, based on an extended voxel-based visual hull (VbVH) algorithm, we design an image-based 3D reconstruction platform for real objects. Then, an improved hybrid deformation model, which couples the geometry-constrained fast lattice shape matching method (FLSM) with the total Lagrangian explicit dynamics (TLED) algorithm, is proposed to achieve efficient and stable simulation of the virtual objects' elastic deformations. Finally, one-way virtual-reality interactions, including virtual cutting of soft tissues with bleeding effects, are successfully simulated. Moreover, to significantly improve the computational efficiency of each time step, we propose an entirely multi-GPU implementation of the framework using the compute unified device architecture (CUDA). The experimental results demonstrate that our multi-GPU accelerated virtual-reality interaction framework achieves real-time performance at moderate problem scales, providing a new and effective 3D interaction technique for virtual reality applications.


Computer Graphics , Virtual Reality , Algorithms , Computer Graphics/statistics & numerical data , Computer Simulation , Computer Systems , Computer-Assisted Instruction , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional/methods , Imaging, Three-Dimensional/statistics & numerical data , Models, Anatomic , Surgical Procedures, Operative/education , User-Computer Interface
10.
Brief Bioinform ; 20(4): 1513-1523, 2019 07 19.
Article En | MEDLINE | ID: mdl-29590305

The field of computational biology has become largely dependent on data visualization tools to analyze the increasing quantities of data gathered through the use of new and growing technologies. Aside from the volume, which often results in large amounts of noise and complex relationships with no clear structure, the visualization of biological data sets is hindered by their heterogeneity, as data are obtained from different sources and contain a wide variety of attributes, including spatial and temporal information. This requires visualization approaches that are able to not only represent various data structures simultaneously but also provide exploratory methods that allow the identification of meaningful relationships that would not be perceptible through data analysis algorithms alone. In this article, we present a survey of visualization approaches applied to the analysis of biological data. We focus on graph-based visualizations and tools that use coordinated multiple views to represent high-dimensional multivariate data, in particular time series gene expression, protein-protein interaction networks and biological pathways. We then discuss how these methods can be used to help solve the current challenges surrounding the visualization of complex biological data sets.


Computational Biology/methods , Data Analysis , Algorithms , Animals , Computer Graphics/statistics & numerical data , Data Interpretation, Statistical , Gene Expression Profiling/statistics & numerical data , Humans , Models, Biological , Multivariate Analysis , Protein Interaction Maps , User-Computer Interface
11.
IEEE Trans Vis Comput Graph ; 25(9): 2725-2737, 2019 09.
Article En | MEDLINE | ID: mdl-30028709

We present a volume exploration framework, FeatureLego, that uses a novel voxel clustering approach for efficient selection of semantic features. We partition the input volume into a set of compact super-voxels that represent the finest selection granularity. We then perform an exhaustive clustering of these super-voxels using a graph-based clustering method. Unlike the prevalent brute-force parameter sampling approaches, we propose an efficient algorithm to perform this exhaustive clustering. By computing an exhaustive set of clusters, we aim to capture as many boundaries as possible and ensure that the user has sufficient options for efficiently selecting semantically relevant features. Furthermore, we merge all the computed clusters into a single tree of meta-clusters that can be used for hierarchical exploration. We implement an intuitive user-interface to interactively explore volumes using our clustering approach. Finally, we show the effectiveness of our framework on multiple real-world datasets of different modalities.


Algorithms , Computer Graphics , Cluster Analysis , Computer Graphics/statistics & numerical data , Computer Simulation , Databases, Factual/statistics & numerical data , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional , Models, Anatomic , Semantics , Spinal Cord/anatomy & histology , Spine/anatomy & histology , Tooth/anatomy & histology , User-Computer Interface
12.
Biometrics ; 75(1): 48-57, 2019 03.
Article En | MEDLINE | ID: mdl-30129091

We introduce a novel method for separating amplitude and phase variability in exponential family functional data. Our method alternates between two steps: the first uses generalized functional principal components analysis to calculate template functions, and the second estimates smooth warping functions that map observed curves to templates. Existing approaches to registration have primarily focused on continuous functional observations, and the few approaches for discrete functional data require a pre-smoothing step; these methods are frequently computationally intensive. In contrast, we focus on the likelihood of the observed data, avoid the need for preprocessing, and implement both steps of our algorithm in a computationally efficient way. Our motivation comes from the Baltimore Longitudinal Study of Aging, in which accelerometer data provide valuable insights into the timing of sedentary behavior. We analyze binary functional data with observations each minute over 24 hours for 592 participants, where values represent activity and inactivity. Diurnal patterns of activity are obscured by misalignment in the original data but are clear after the curves are aligned. Simulations designed to mimic the application indicate that the proposed methods outperform competing approaches in terms of estimation accuracy and computational efficiency. Code for our method and simulations is publicly available.
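As a drastically simplified stand-in for the warping step, aligning a shifted binary activity profile to a template by the best circular lag conveys the idea; the actual method estimates smooth, curve-specific warping functions rather than a single shift, and the profile below is invented:

```python
# Toy "diurnal" binary activity template: one block of active minutes.
template = [0] * 6 + [1] * 8 + [0] * 10
curve = template[4:] + template[:4]        # same pattern, rotated by 4

def agreement(a, b):
    # Number of time points on which two binary profiles agree.
    return sum(x == y for x, y in zip(a, b))

n = len(template)
# Crudest possible "registration": pick the circular lag that best
# re-aligns the observed curve with the template.
best_lag = max(range(n),
               key=lambda k: agreement(template, curve[-k:] + curve[:-k]))
print(best_lag)  # 4: rotating back by four slots recovers the template
```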


Data Interpretation, Statistical , Principal Component Analysis/methods , Time , Algorithms , Computer Graphics/statistics & numerical data , Computer Simulation/statistics & numerical data , Humans , Longitudinal Studies , Motor Activity , Sample Size
13.
Biometrics ; 75(1): 36-47, 2019 03.
Article En | MEDLINE | ID: mdl-30081434

The directed acyclic graph (DAG) is a powerful tool to model the interactions of high-dimensional variables. While estimating edge directions in a DAG often requires interventional data, one can estimate the skeleton of a DAG (i.e., an undirected graph formed by removing the direction of each edge in a DAG) using observational data. In real data analyses, the samples of the high-dimensional variables may be collected from a mixture of multiple populations. Each population has its own DAG while the DAGs across populations may have significant overlap. In this article, we propose a two-step approach to jointly estimate the DAG skeletons of multiple populations while the population origin of each sample may or may not be labeled. In particular, our method allows a probabilistic soft label for each sample, which can be easily computed and often leads to more accurate skeleton estimation than hard labels. Compared with separate estimation of skeletons for each population, our method is more accurate and robust to labeling errors. We study the estimation consistency for our method, and demonstrate its performance using simulation studies in different settings. Finally, we apply our method to analyze gene expression data from breast cancer patients of multiple cancer subtypes.
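One ingredient of the soft-label approach can be sketched directly: given probabilistic population labels, each population's covariance matrix (an input to skeleton estimation) becomes a label-weighted sample covariance. The data and soft labels below are invented; the paper computes the labels from the data itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy populations with different dependence structure.
x1 = rng.normal(size=(100, 2))
x1[:, 1] += x1[:, 0]                       # population 1: correlated
x2 = rng.normal(size=(100, 2))             # population 2: independent
X = np.vstack([x1, x2])

# Probabilistic soft labels: probability each sample came from
# population 1 (assumed known up to noise in this sketch).
w = np.concatenate([np.full(100, 0.9), np.full(100, 0.1)])

# Soft-label weighted mean and covariance for population 1.
mu = (w[:, None] * X).sum(axis=0) / w.sum()
Xc = X - mu
cov1 = (w[:, None] * Xc).T @ Xc / w.sum()
print(cov1.shape)
```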


Computer Graphics/statistics & numerical data , Epidemiologic Research Design , Models, Statistical , Breast Neoplasms/genetics , Computer Simulation , Female , Gene Expression , Genes, Neoplasm , Humans
14.
J Proteome Res ; 17(3): 1314-1320, 2018 03 02.
Article En | MEDLINE | ID: mdl-29400476

Label-free quantification has grown in popularity as a means of obtaining relative abundance measures for proteomics experiments. However, easily accessible and integrated tools to perform label-free quantification have been lacking. We describe StPeter, an implementation of Normalized Spectral Index quantification for wide availability through integration into the widely used Trans-Proteomic Pipeline. This implementation has been specifically designed for reproducibility and ease of use. We demonstrate that StPeter outperforms other state-of-the-art packages using a recently reported benchmark data set over the range of false discovery rates relevant to shotgun proteomics results. We also demonstrate that the software is computationally efficient and supports data from a variety of instrument platforms and experimental designs. Results can be viewed within the Trans-Proteomic Pipeline graphical user interfaces and exported in standard formats for downstream statistical analysis. By integrating StPeter into the freely available Trans-Proteomic Pipeline, users can now obtain high-quality label-free quantification of any data set in seconds by adding a single command to the workflow.


Datasets as Topic/statistics & numerical data , Mass Spectrometry/statistics & numerical data , Proteomics/methods , User-Computer Interface , Animals , Benchmarking , Computer Graphics/statistics & numerical data , Databases, Protein , Escherichia coli/chemistry , Humans , Internet , Mass Spectrometry/methods , Proteomics/statistics & numerical data
15.
IEEE Trans Vis Comput Graph ; 24(8): 2298-2314, 2018 08.
Article En | MEDLINE | ID: mdl-28809701

Skeletonization offers a compact representation of an object while preserving important topological and geometrical features. Literature on skeletonization of binary objects is quite mature. However, challenges involved with skeletonization of fuzzy objects are mostly unanswered. This paper presents a new theory and algorithm of skeletonization for fuzzy objects, evaluates its performance, and demonstrates its applications. A formulation of fuzzy grassfire propagation is introduced; its relationships with fuzzy distance functions, level sets, and geodesics are discussed; and several new theoretical results are presented in the continuous space. A notion of collision-impact of fire-fronts at skeletal points is introduced, and its role in filtering noisy skeletal points is demonstrated. A fuzzy object skeletonization algorithm is developed using new notions of surface- and curve-skeletal voxels, digital collision-impact, filtering of noisy skeletal voxels, and continuity of skeletal surfaces. A skeletal noise pruning algorithm is presented using branch-level significance. Accuracy and robustness of the new algorithm are examined on computer-generated phantoms and micro- and conventional CT imaging of trabecular bone specimens. An application of fuzzy object skeletonization to compute structure-width at a low image resolution is demonstrated, and its ability to predict bone strength is examined. Finally, the performance of the new fuzzy object skeletonization algorithm is compared with two binary object skeletonization methods.


Algorithms , Computer Graphics/statistics & numerical data , Fuzzy Logic , Animals , Bone and Bones/diagnostic imaging , Bone and Bones/physiology , Computer Simulation , Humans , Models, Anatomic , Models, Statistical , Phantoms, Imaging/statistics & numerical data , Tomography, X-Ray Computed/statistics & numerical data , X-Ray Microtomography/statistics & numerical data
16.
Pac Symp Biocomput ; 23: 578-589, 2018.
Article En | MEDLINE | ID: mdl-29218916

In this paper, we present VisAGE, a method that visualizes electronic medical records (EMRs) in a low-dimensional space. Effective visualization of new patients allows doctors to view similar, previously treated patients and to identify the new patients' disease subtypes, reducing the chance of misdiagnosis. However, EMRs are typically incomplete or fragmented, resulting in patients who are missing many available features being placed near unrelated patients in the visualized space. VisAGE integrates several external data sources to enrich EMR databases to solve this issue. We evaluated VisAGE on a dataset of Parkinson's disease patients. We qualitatively and quantitatively show that VisAGE can more effectively cluster patients, which allows doctors to better discover patient subtypes and thus improve patient care.


Electronic Health Records/statistics & numerical data , Algorithms , Computational Biology/methods , Computer Graphics/statistics & numerical data , Databases, Factual/statistics & numerical data , Disease Progression , False Positive Reactions , Female , Humans , Information Storage and Retrieval/statistics & numerical data , Knowledge Bases , Male , Parkinson Disease/drug therapy , Parkinson Disease/etiology , Polymorphism, Single Nucleotide , Protein Interaction Maps
17.
Pac Symp Biocomput ; 23: 590-601, 2018.
Article En | MEDLINE | ID: mdl-29218917

Obtaining relevant information about gene interactions is critical for understanding disease processes and treatment. With the rise in text mining approaches, the volume of such biomedical data is rapidly increasing, thereby creating a new problem for the users of this data: information overload. A tool for efficient querying and visualization of biomedical data that helps researchers understand the underlying biological mechanisms for diseases and drug responses, and ultimately helps patients, is sorely needed. To this end we have developed GeneDive, a web-based information retrieval, filtering, and visualization tool for large volumes of gene interaction data. GeneDive offers various features and modalities that guide the user through the search process to efficiently reach the information of their interest. GeneDive currently processes over three million gene-gene interactions with response times within a few seconds. For over half of the curated gene sets sourced from four prominent databases, more than 80% of the gene set members are recovered by GeneDive. In the near future, GeneDive will seamlessly accommodate other interaction types, such as gene-drug and gene-disease interactions, thus enabling full exploration of topics such as precision medicine. The GeneDive application and information about its underlying system architecture are available at http://www.genedive.net.


Epistasis, Genetic , Precision Medicine/statistics & numerical data , Software , Computational Biology/methods , Computer Graphics/statistics & numerical data , Data Mining/statistics & numerical data , Databases, Genetic/statistics & numerical data , Gene Regulatory Networks , Humans , Information Storage and Retrieval/statistics & numerical data , Internet , User-Computer Interface
18.
Genet Sel Evol ; 49(1): 91, 2017 Dec 20.
Article En | MEDLINE | ID: mdl-29262775

BACKGROUND: Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and its potential influencing factors prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. RESULTS: The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus the genetic factors impacting it, requiring only mouse navigation in a web browser. ShinyGPAS is available at https://chikudaisei.shinyapps.io/shinygpas/. CONCLUSION: ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open-source software hosted online as a freely available web-based resource with an intuitive graphical user interface.
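One widely used deterministic formula of the kind such simulators visualize is the Daetwyler-type expression r = sqrt(N h² / (N h² + Me)), where N is the training population size, h² the heritability, and Me the number of independent chromosome segments. Whether ShinyGPAS includes exactly this parameterization is an assumption here; the evaluation below is a single illustrative point on such a curve:

```python
import math

def pred_accuracy(n, h2, me):
    # Daetwyler-type deterministic genomic prediction accuracy:
    # r = sqrt(N * h^2 / (N * h^2 + Me)).
    return math.sqrt(n * h2 / (n * h2 + me))

# Example: 5000 training individuals, heritability 0.5,
# 1000 independent chromosome segments (all values illustrative).
acc = pred_accuracy(n=5000, h2=0.5, me=1000)
print(round(acc, 3))  # 0.845
```

Sweeping `n` or `h2` over a grid and plotting `acc` reproduces the kind of scatter plot the simulator displays interactively.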


Computer Graphics/statistics & numerical data , Data Interpretation, Statistical , Genomics/methods , Software , Animals , Genome
19.
PLoS One ; 12(2): e0171428, 2017.
Article En | MEDLINE | ID: mdl-28182743

Graphlet analysis is an approach to network analysis that is particularly popular in bioinformatics. We show how to set up a system of linear equations relating the orbit counts that can be used in an algorithm significantly faster than existing approaches based on direct enumeration of graphlets. The approach presented in this paper generalizes the currently fastest method for counting 5-node graphlets in bioinformatics. The algorithm requires the existence of a vertex with certain properties; we show that such a vertex exists for graphlets of arbitrary size, except for complete graphs and the four-node cycle, which are treated separately. Empirical analysis of running times agrees with the theoretical results.
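For the 3-node case, the flavor of such linear relations is easy to show: summing C(deg(v), 2) over all vertices counts every path of length two once and every triangle three times, so the number of open wedges follows from the triangle count without enumerating wedges. The paper's system of equations extends this idea to larger graphlets and their orbits:

```python
from itertools import combinations

# Undirected toy graph: edges 0-1, 0-2, 1-2, 1-3.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}

# Triangles by direct enumeration over vertex triples.
tri = sum(1 for a, b, c in combinations(adj, 3)
          if b in adj[a] and c in adj[a] and c in adj[b])

# Linear relation: sum_v C(deg(v), 2) = wedges + 3 * triangles,
# so open wedges come for free once triangles are known.
pairs = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in adj)
wedges = pairs - 3 * tri
print(tri, wedges)  # 1 2
```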


Algorithms , Computational Biology/methods , Computer Graphics , Electronic Data Processing/methods , Computer Graphics/statistics & numerical data , Computer Simulation , Gene Regulatory Networks , Models, Biological , Models, Theoretical , Protein Interaction Mapping
20.
Health Expect ; 20(4): 797-804, 2017 08.
Article En | MEDLINE | ID: mdl-27981688

OBJECTIVE: Patients making treatment decisions require understandable evidence-based information. However, evidence on the graphical presentation of benefits and side-effects of medical treatments is not conclusive. The study evaluated a new space-saving format, CLARIFIG (clarifying risk figures), aiming to facilitate accuracy of comprehension. METHODS: CLARIFIG displays groups of patients with and without treatment benefits as coloured sectors of a proportional bar graph representing 100 patients in total. Supplementary icons indicate the corresponding group's actual condition. The study used an application showing effects of immunotherapy intended to slow disease progression in multiple sclerosis (MS). In a four-arm web-based randomized controlled trial, CLARIFIG was compared to the reference standard, multifigure pictographs (MFP), regarding comprehension (primary outcome) and processing time. Both formats were presented in static and animated versions. People with MS were recruited through the website of the German MS society. RESULTS: Six hundred and eighty-two patients were randomized and analysed for the primary end point. There were no differences in comprehension rates (MFP static: 46%, CLARIFIG static: 44%, P = .59; MFP animated: 23%, CLARIFIG animated: 30%, P = .134). Processing time for CLARIFIG was shorter only in the animated version (MFP static: 162 seconds, CLARIFIG static: 155 seconds, P = .653; MFP animated: 286 seconds, CLARIFIG animated: 189 seconds, P ≤ .001). However, both animated versions caused more wrong answers and longer processing times than the static presentations (MFP static vs animated: P ≤ .001/.001; CLARIFIG static vs animated: P = .027/.017). CONCLUSION: Comprehension of the new format is comparable to MFP. CLARIFIG has the potential to simplify presentation in more complex contexts, such as the comparison of several treatment options in patient decision aids, but further studies are needed.


Communication , Computer Graphics/statistics & numerical data , Decision Support Techniques , Patient Education as Topic , Adult , Decision Making , Female , Humans , Internet , Male , Patient Preference , Risk Assessment
...