Results 1 - 20 of 29
1.
Cell Microbiol ; 23(2): e13280, 2021 02.
Article in English | MEDLINE | ID: mdl-33073426

ABSTRACT

Detailed analysis of secondary envelopment of the herpesvirus human cytomegalovirus (HCMV) by transmission electron microscopy (TEM) is crucial for understanding the formation of infectious virions. Here, we present a convolutional neural network (CNN) that automatically recognises cytoplasmic capsids and distinguishes between three HCMV capsid envelopment stages in TEM images. 315 TEM images containing 2,610 expert-labelled capsids of the three classes were available for CNN training. To overcome the limitation of small training datasets and the resulting poor CNN performance, we used a deep learning method, the generative adversarial network (GAN), to automatically augment our labelled training dataset with 500 synthetic images, raising the total to 9,192 labelled capsids. The synthetic TEM images were added to the ground truth dataset to train the Faster R-CNN deep-learning-based object detector. Training with the 315 ground truth images yielded an average precision (AP) of 53.81% for detection, whereas adding the 500 synthetic training images increased the AP to 76.48%. This shows that generating synthetic labelled images and using them for detector training is an inexpensive way to improve detector performance. This work combines the gold standard of secondary envelopment research with state-of-the-art deep learning technology to speed up automatic image analysis even when large labelled training datasets are not available.
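The average precision (AP) figures reported above can be illustrated with a minimal sketch. The function `average_precision` below is hypothetical and assumes each detection has already been matched to ground truth (true positive or not); it computes the area under the precision-recall curve with the usual precision envelope:

```python
def average_precision(detections, num_gt):
    """Compute AP from confidence-ranked detections.

    detections: list of (score, is_true_positive) tuples.
    num_gt: total number of ground-truth objects (e.g. labelled capsids).
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    points = []  # (recall, precision) pairs along the ranking
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / num_gt, tp / (tp + fp)))
    # Area under the precision-recall curve, using the precision envelope
    # (max precision at any recall >= the current one).
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        prec = max(p for _, p in points[i:])
        ap += (recall - prev_recall) * prec
        prev_recall = recall
    return ap
```

A detector that ranks a false positive between two true positives loses AP exactly where the precision dips, which is why adding training data that fixes such rankings raises the reported score.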


Subjects
Capsid/ultrastructure , Cytomegalovirus/ultrastructure , Deep Learning , Herpesviridae Infections/diagnostic imaging , Image Processing, Computer-Assisted/methods , Virion/ultrastructure , Algorithms , Cytomegalovirus/metabolism , Herpesviridae Infections/virology , Humans , Machine Learning , Microscopy, Electron, Transmission , Neural Networks, Computer , Virion/metabolism
2.
J Microsc ; 277(1): 12-22, 2020 01.
Article in English | MEDLINE | ID: mdl-31859366

ABSTRACT

Detecting crossovers in cryo-electron microscopy images of protein fibrils is an important step towards determining the morphological composition of a sample. Currently, the crossover locations are picked by hand, which introduces errors and is a time-consuming procedure. With the rise of deep learning in computer vision tasks, the automation of such problems has become increasingly feasible. However, because of the insufficient quality of the raw data and missing labels, neural networks alone cannot be applied successfully to the given problem. Thus, we propose an approach combining conventional computer vision techniques and deep learning to automatically detect fibril crossovers in two-dimensional cryo-electron microscopy image data and apply it to murine amyloid protein A fibrils, where we first use direct image processing methods to simplify the image data such that a convolutional neural network can be applied to the remaining segmentation problem. LAY DESCRIPTION: The ability of proteins to form fibrillar structures underlies important cellular functions but can also give rise to disease, as in a group of disorders termed amyloid diseases. These diseases are characterised by the formation of abnormal protein filaments, so-called amyloid fibrils, that deposit inside the tissue. Many amyloid fibrils are helically twisted, which leads to periodic variations in the apparent width of the fibril when observing amyloid fibrils with microscopy techniques such as cryogenic electron microscopy (cryo-EM). Due to the two-dimensional projection, parts of the fibril orthogonal to the projection plane appear narrower than parts parallel to it. The parts of small width are called crossovers. The distance between two adjacent crossovers is an important characteristic for the analysis of amyloid fibrils, because it is informative about the fibril morphology and because it can be determined from the raw data by eye.
A given protein can typically form different fibril morphologies. The morphology can vary depending on the chemical and physical conditions of fibril formation, but even when fibrils are formed under identical solution conditions, different morphologies may be present in a sample. As the crossovers allow one to define fibril morphologies in a heterogeneous sample, detecting crossovers is an important first step in the sample analysis. In the present paper, we introduce a method for the automated detection of fibril crossovers in cryo-EM image data. The data consist of greyscale images, each showing an unknown number of potentially overlapping fibrils. In the first step, techniques from image analysis and pattern detection are employed to detect single fibrils in the raw data. Then, a convolutional neural network is used to find the locations of crossovers on each single fibril. As these predictions may contain errors, further postprocessing steps assess their quality and may slightly alter or reject the predicted crossovers.
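Since crossovers are the narrow points of a helically twisted fibril, their detection can be caricatured as finding local minima in a width profile. The sketch below is a hypothetical simplification (the paper applies a CNN to single-fibril images, not this direct computation); the postprocessing here is just a minimum-separation filter that keeps the deepest minima first:

```python
def detect_crossovers(widths, min_separation=5):
    """Find crossover candidates as local minima of a fibril width profile.

    widths: apparent fibril width at each position along the fibril axis.
    min_separation: minimum index distance between accepted crossovers,
    a crude postprocessing step to reject spurious nearby minima.
    """
    minima = [i for i in range(1, len(widths) - 1)
              if widths[i] < widths[i - 1] and widths[i] <= widths[i + 1]]
    accepted = []
    for i in sorted(minima, key=lambda i: widths[i]):  # deepest minima first
        if all(abs(i - j) >= min_separation for j in accepted):
            accepted.append(i)
    return sorted(accepted)
```

On a periodic width profile this returns one index per crossover, and adjacent return values directly give the crossover distance used to characterise the morphology.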


Subjects
Amyloid/ultrastructure , Cryoelectron Microscopy/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Animals , Mice , Neural Networks, Computer , Protein Conformation , Reproducibility of Results
3.
Article in English | MEDLINE | ID: mdl-38753475

ABSTRACT

In volume rendering, transfer functions are used to classify structures of interest, and to assign optical properties such as color and opacity. They are commonly defined as 1D or 2D functions that map simple features to these optical properties. As the process of designing a transfer function is typically tedious and unintuitive, several approaches have been proposed for their interactive specification. In this paper, we present a novel method to define transfer functions for volume rendering by leveraging the feature extraction capabilities of self-supervised pre-trained vision transformers. To design a transfer function, users simply select the structures of interest in a slice viewer, and our method automatically selects similar structures based on the high-level features extracted by the neural network. Contrary to previous learning-based transfer function approaches, our method does not require training of models and allows for quick inference, enabling an interactive exploration of the volume data. Our approach reduces the amount of necessary annotations by interactively informing the user about the current classification, so they can focus on annotating the structures of interest that still require annotation. In practice, this allows users to design transfer functions within seconds, instead of minutes. We compare our method to existing learning-based approaches in terms of annotation and compute time, as well as with respect to segmentation accuracy. Our accompanying video showcases the interactivity and effectiveness of our method.
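The selection mechanism described above can be sketched as nearest-neighbour classification in feature space. Everything below is a hypothetical simplification: plain cosine similarity over small feature vectors stands in for the high-level self-supervised ViT features, and `classify_by_similarity` is not the paper's API:

```python
def cosine(u, v):
    """Cosine similarity of two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def classify_by_similarity(features, annotations, threshold=0.5):
    """Assign each feature vector the label of its most similar
    user-annotated example; features whose best similarity stays at or
    below the threshold remain unclassified (None), so the user can see
    which structures still require annotation."""
    labels = []
    for f in features:
        best_label, best_sim = None, threshold
        for ref, label in annotations:
            s = cosine(f, ref)
            if s > best_sim:
                best_label, best_sim = label, s
        labels.append(best_label)
    return labels
```

Because no model is trained, each new user annotation only adds one reference vector, which is what makes the interactive feedback loop cheap.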

4.
J Gastrointestin Liver Dis ; 33(2): 226-233, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38944875

ABSTRACT

BACKGROUND AND AIMS: Endoscopy simulators are primarily designed to provide training in interventions performed during procedures. Peri-interventional tasks such as checking patient data, filling out forms for team time-out, patient monitoring, and performing sedation are often not covered. This study assesses the face, content, and construct validity of the ViGaTu (Virtual Gastro Tutor) immersive virtual reality (VR) simulator in teaching these skills. METHODS: 71 nurses and physicians were invited to take part in VR training. The participants experienced an immersive VR simulation of an endoscopy procedure, including setting up the endoscopic devices, checking sign-in and team time-out forms, placing monitoring devices, and performing sedation. The actions performed by the participants and their timing were continuously recorded. Face and content validity, as well as the System Usability Scale (SUS), were then assessed. RESULTS: 43 physicians and 28 nurses from 43 centers took a mean of 27.8 min (standard deviation ± 14.42 min) to complete the simulation. Seventy-five percent of the items for assessing face validity were rated as realistic, and 60% of items assessing content validity and usefulness of the simulation for different learning goals were rated as useful by the participants (four out of five on a Likert scale). The SUS score was 70, demonstrating a high degree of usability. With regard to construct validity, experienced endoscopy staff were significantly faster in setting up the endoscope tower and instruments than beginners. CONCLUSIONS: This multicenter study presents a new type of interdisciplinary endoscopy training system featuring peri-interventional tasks and sedation in an immersive VR environment.


Subjects
Clinical Competence , Simulation Training , Virtual Reality , Humans , Simulation Training/methods , Reproducibility of Results , Female , Adult , Male , Endoscopy, Gastrointestinal/education , Nurses , Middle Aged , Physicians
5.
Article in English | MEDLINE | ID: mdl-37027532

ABSTRACT

Neural networks have shown great success in extracting geometric information from color images. Especially, monocular depth estimation networks are increasingly reliable in real-world scenes. In this work we investigate the applicability of such monocular depth estimation networks to semi-transparent volume rendered images. As depth is notoriously difficult to define in a volumetric scene without clearly defined surfaces, we consider different depth computations that have emerged in practice, and compare state-of-the-art monocular depth estimation approaches for these different interpretations during an evaluation considering different degrees of opacity in the renderings. Additionally, we investigate how these networks can be extended to further obtain color and opacity information, in order to create a layered representation of the scene based on a single color image. This layered representation consists of spatially separated semi-transparent intervals that composite to the original input rendering. In our experiments we show that existing approaches to monocular depth estimation can be adapted to perform well on semi-transparent volume renderings, which has several applications in the area of scientific visualization, like re-composition with additional objects and labels or additional shading.
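The claim that the predicted layers "composite to the original input rendering" refers to the standard front-to-back "over" operator. A minimal 1-pixel sketch (layer ordering front to back is an assumption of this sketch):

```python
def composite_layers(layers):
    """Front-to-back 'over' compositing of semi-transparent layers.

    layers: list of (color, alpha) pairs ordered front to back, with
    color and alpha in [0, 1]. Returns the accumulated color and alpha;
    a layered scene decomposition is consistent if this reproduces the
    original rendered pixel.
    """
    acc_c, acc_a = 0.0, 0.0
    for color, alpha in layers:
        acc_c += (1.0 - acc_a) * alpha * color
        acc_a += (1.0 - acc_a) * alpha
    return acc_c, acc_a
```

Re-composition with additional objects, as mentioned above, amounts to inserting an extra (color, alpha) entry at the depth interval where the object belongs and compositing again.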

6.
IEEE Trans Vis Comput Graph ; 29(12): 5468-5482, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36288226

ABSTRACT

Exploring high-dimensional data is a common task in many scientific disciplines. To address this task, two-dimensional embeddings, such as tSNE and UMAP, are widely used. While these determine the 2D position of data items, effectively encoding the first two dimensions, suitable visual encodings can be employed to communicate higher-dimensional features. To investigate such encodings, we have evaluated two commonly used glyph types, namely flower glyphs and star glyphs. To evaluate their capabilities for communicating higher-dimensional features in two-dimensional embeddings, we ran a large set of crowd-sourced user studies using real-world data obtained from data.gov. During these studies, participants completed a broad set of relevant tasks derived from related research. This article describes the evaluated glyph designs, details our tasks and the quantitative study setup, and discusses the results. Finally, we present insights and provide guidance on the choice of glyph encodings when exploring high-dimensional data.
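The star-glyph encoding discussed above maps each data dimension to one spoke around a centre point. A toy layout sketch (the function `star_glyph_points` and the "first spoke at angle -π/2" convention are assumptions of this sketch, not the paper's design):

```python
import math

def star_glyph_points(values, cx=0.0, cy=0.0, radius=1.0):
    """Vertices of a star glyph: one spoke per data dimension, with
    spoke length proportional to the normalised value in [0, 1].

    The first spoke is placed at angle -pi/2 and the rest follow at
    equal angular steps, so the glyph stays readable regardless of the
    number of dimensions.
    """
    n = len(values)
    pts = []
    for k, v in enumerate(values):
        ang = 2 * math.pi * k / n - math.pi / 2
        pts.append((cx + radius * v * math.cos(ang),
                    cy + radius * v * math.sin(ang)))
    return pts
```

Connecting consecutive vertices yields the star outline; a flower glyph instead draws one petal per spoke, which is the main visual difference the study compares.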

7.
Rofo ; 195(9): 797-803, 2023 09.
Article in English, German | MEDLINE | ID: mdl-37160147

ABSTRACT

BACKGROUND: Artificial intelligence is playing an increasingly important role in radiology. However, it is becoming increasingly common that decisions can no longer be retraced, especially with new and powerful methods from the field of deep learning. The resulting models fulfill their function without the users being able to understand the internal processes and are used as so-called black boxes. Especially in sensitive areas such as medicine, the explainability of decisions is of paramount importance in order to verify their correctness and to be able to evaluate alternatives. For this reason, active research is under way to elucidate these black boxes. METHOD: This review paper presents different approaches to explainable artificial intelligence along with their advantages and disadvantages. Examples are used to illustrate the introduced methods. This study is intended to enable readers to better assess the limitations of the corresponding explanations when encountering them in practice and to strengthen the integration of such solutions in new research projects. RESULTS AND CONCLUSION: Besides methods for analyzing black-box models for explainability, interpretable models offer an interesting alternative. Here, explainability is part of the process, and the learned model knowledge can be verified with expert knowledge. KEY POINTS: · The use of artificial intelligence in radiology offers many possibilities to provide safer and more efficient medical care. This includes, but is not limited to, support during image acquisition and processing or for diagnosis. · Complex models can achieve high accuracy, but make it difficult to understand data processing. · If explainability is already taken into account during the planning of the model, methods can be developed that are powerful and interpretable at the same time. CITATION FORMAT: · Gallée L, Kniesel H, Ropinski T et al. Artificial intelligence in radiology - beyond the black box. Fortschr Röntgenstr 2023; 195: 797-803.


Subjects
Artificial Intelligence , Radiology , Radiography , Knowledge
8.
J Med Imaging (Bellingham) ; 10(4): 044007, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37600751

ABSTRACT

Purpose: Semantic segmentation is one of the most significant tasks in medical image computing, whereby deep neural networks have shown great success. Unfortunately, supervised approaches are very data-intensive, and obtaining reliable annotations is time-consuming and expensive. Sparsely labeled approaches, such as bounding boxes, have shown some success in reducing the annotation time. However, in 3D volume data, each slice must still be manually labeled. Approach: We evaluate approaches that reduce the annotation effort by reducing the number of slices that need to be labeled in a 3D volume. In a two-step process, a similarity metric is used to select slices that should be annotated by a trained radiologist. In the second step, a predictor is used to predict the segmentation mask for the remaining slices. We evaluate different combinations of selectors and predictors on medical CT and MRI volumes. This lets us determine which combination works best and how far slice annotations can be reduced. Results: Our results show that, for instance, on the Medical Segmentation Decathlon heart dataset, some selector and predictor combinations achieve a Dice score of 0.969 when only 20% of slices per volume are annotated. Experiments on other datasets show a similarly positive trend. Conclusions: We evaluate a method that supports experts during the labeling of 3D medical volumes. Our approach makes it possible to drastically reduce the number of slices that need to be manually labeled. We present recommendations on which selector-predictor combination to use for different tasks and goals.
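The Dice score quoted above is the standard overlap measure between a predicted and a ground-truth mask. A minimal sketch on flattened binary masks (list representation is an assumption for brevity):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks, flattened to 0/1 lists.

    Dice = 2 * |pred AND truth| / (|pred| + |truth|); two empty masks
    are treated as a perfect match by convention.
    """
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total
```

Evaluating a selector-predictor combination then amounts to averaging this score over the slices whose masks were predicted rather than annotated.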

9.
Sci Rep ; 13(1): 20260, 2023 11 20.
Article in English | MEDLINE | ID: mdl-37985685

ABSTRACT

Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach "SparK" for convolutional neural networks (CNNs) on medical images. Therefore, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Due to the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By experimenting with gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we propose the SparK pre-training for medical imaging tasks with only small annotated datasets.
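The masked-autoencoder idea referenced above starts by hiding a large fraction of image patches; the encoder sees only the visible ones and the decoder reconstructs the rest. The sketch below shows a generic masking step, not SparK's actual sparse-convolution mechanics; the 60% mask ratio is an illustrative assumption:

```python
import random

def mask_patches(patches, mask_ratio=0.6, seed=0):
    """Randomly split image patches into visible and masked sets, as in
    masked-autoencoder-style pre-training.

    Returns (visible_patches, masked_indices): the encoder processes
    only the visible patches, and the reconstruction loss is computed
    on the masked positions.
    """
    rng = random.Random(seed)
    n_masked = int(round(len(patches) * mask_ratio))
    masked_idx = set(rng.sample(range(len(patches)), n_masked))
    visible = [p for i, p in enumerate(patches) if i not in masked_idx]
    return visible, sorted(masked_idx)
```

The appeal for small annotated datasets is that this pretext task needs no labels at all, so the large unannotated CT dataset can be used in full before fine-tuning.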


Assuntos
Aprendizado Profundo , Humanos , Diagnóstico por Imagem , Redes Neurais de Computação , Processamento de Imagem Assistida por Computador/métodos , Radiografia , Aprendizado de Máquina Supervisionado
10.
IEEE Trans Vis Comput Graph ; 29(10): 4198-4214, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35749328

ABSTRACT

Cryo-electron tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural details. Existing volume visualization methods, however, are not able to reveal details of interest due to the low signal-to-noise ratio. In order to design more powerful transfer functions, we propose leveraging soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. First, a weak segmentation algorithm provides good results for propagating sparse user-provided labels to other voxels in the same volume and is used to generate dense pseudo-labels. Second, a powerful deep-learning-based segmentation algorithm learns from these pseudo-labels to generalize the segmentation to other unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses deep-learning-based segmentation as a component for segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through frequency distribution analysis. Furthermore, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise, and to give structural detail the desired prominence. The cryo-ET data studied in our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique shows high potential in the target sciences for visual analysis of very noisy volumes that cannot be visualized with existing techniques.
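The pseudo-labelling step above turns a handful of user labels into a dense labelling of the volume. As a toy 1D stand-in for the weak segmentation algorithm, the sketch below assigns every voxel the label of the sparse annotation with the most similar intensity; this intensity-based rule is an assumption of the sketch, not the paper's actual weak segmenter:

```python
def propagate_labels(values, sparse_labels):
    """Generate dense pseudo-labels from sparse user annotations.

    values: voxel intensities along a 1D volume.
    sparse_labels: dict mapping a few voxel indices to class labels.
    Each voxel receives the label of the annotated voxel whose
    intensity is closest to its own.
    """
    labeled = [(values[i], lab) for i, lab in sparse_labels.items()]
    dense = []
    for v in values:
        _, lab = min(labeled, key=lambda x: abs(x[0] - v))
        dense.append(lab)
    return dense
```

The resulting dense labelling is what the deep segmentation network trains on, which is why errors of the weak algorithm matter less on unseen volumes: the network learns the pattern, not the rule.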

11.
JMIR Mhealth Uhealth ; 10(6): e32910, 2022 06 23.
Article in English | MEDLINE | ID: mdl-35737429

ABSTRACT

BACKGROUND: Smart sensors have been developed as diagnostic tools for rehabilitation to cover an increasing number of geriatric patients. They promise to enable an objective assessment of complex movement patterns. OBJECTIVE: This research aimed to identify and analyze the conflicting ethical values associated with smart sensors in geriatric rehabilitation and provide ethical guidance on the best use of smart sensors to all stakeholders, including technology developers, health professionals, patients, and health authorities. METHODS: On the basis of a systematic literature search of the scientific databases PubMed and ScienceDirect, we conducted a qualitative document analysis to identify evidence-based practical implications of ethical relevance. We included 33 articles in the analysis. The practical implications were extracted inductively. Finally, we carried out an ethical analysis based on the 4 principles of biomedical ethics: autonomy, beneficence, nonmaleficence, and justice. The results are reported in categories based on these 4 principles. RESULTS: We identified 8 conflicting aims for using smart sensors. Gains in autonomy come at the cost of patient privacy. Smart sensors at home increase the independence of patients but may reduce social interactions. Independent measurements performed by patients may result in lower diagnostic accuracy. Although smart sensors could provide cost-effective and high-quality diagnostics for most patients, minorities could end up with suboptimal treatment owing to their underrepresentation in training data and studies. This could lead to algorithmic biases that would not be recognized by medical professionals when treating patients. CONCLUSIONS: The application of smart sensors has the potential to improve the rehabilitation of geriatric patients in several ways. It is important that patients do not have to choose between autonomy and privacy and are well informed about the insights that can be gained from the data. 
Smart sensors should support and not replace interactions with medical professionals. Patients and medical professionals should be educated about the correct application and the limitations of smart sensors. Smart sensors should include an adequate representation of minorities in their training data and should be covered by health insurance to guarantee fair access.


Subjects
Confidentiality , Privacy , Aged , Ethical Analysis , Humans , Technology
12.
IEEE Trans Vis Comput Graph ; 27(2): 1268-1278, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048686

ABSTRACT

We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.
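The quantity DVAO learns to predict can be illustrated with a toy 1D analogue: the occlusion of a voxel is driven by the opacity of its neighbourhood after the transfer function has been applied. The direct computation below is a hypothetical sketch for intuition only; the paper's point is precisely that a network predicts this per-voxel quantity in 3D instead of recomputing it:

```python
def ambient_occlusion_1d(opacities, radius=2):
    """Toy per-voxel ambient occlusion in 1D.

    opacities: per-voxel opacity after transfer function application.
    Each voxel's occlusion is the mean opacity of its neighbourhood
    (excluding itself), so a voxel surrounded by opaque material is
    fully occluded and one in empty space is not occluded at all.
    """
    ao = []
    for i in range(len(opacities)):
        lo, hi = max(0, i - radius), min(len(opacities), i + radius + 1)
        neigh = [opacities[j] for j in range(lo, hi) if j != i]
        ao.append(sum(neigh) / len(neigh))
    return ao
```

Because the result depends on the transfer function (the "global information" above), it must be recomputed, or re-predicted, whenever that function changes, which is exactly the event that triggers DVAO's network execution.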

13.
IEEE Trans Vis Comput Graph ; 27(6): 2980-2991, 2021 06.
Article in English | MEDLINE | ID: mdl-33556010

ABSTRACT

To convey neural network architectures in publications, appropriate visualizations are of great importance. While most current deep learning papers contain such visualizations, these are usually handcrafted just before publication, which results in a lack of a common visual grammar, significant time investment, errors, and ambiguities. Current automatic network visualization tools focus on debugging the network itself and are not ideal for generating publication visualizations. Therefore, we present an approach to automate this process by translating network architectures specified in Keras into visualizations that can directly be embedded into any publication. To do so, we propose a visual grammar for convolutional neural networks (CNNs), which has been derived from an analysis of such figures extracted from all ICCV and CVPR papers published between 2013 and 2019. The proposed grammar incorporates visual encoding, network layout, layer aggregation, and legend generation. We have further realized our approach in an online system available to the community, which we have evaluated through expert feedback, and a quantitative study. It not only reduces the time needed to generate network visualizations for publications, but also enables a unified and unambiguous visualization design.

14.
IEEE Trans Vis Comput Graph ; 27(10): 3913-3925, 2021 10.
Article in English | MEDLINE | ID: mdl-32406840

ABSTRACT

To enhance depth perception and thus data comprehension, additional depth cues are often used in 3D visualizations of complex vascular structures. There is a variety of different approaches described in the literature, ranging from chromadepth color coding over depth of field to glyph-based encodings. Unfortunately, the majority of existing approaches suffers from the same problem: As these cues are directly applied to the geometry's surface, the display of additional information on the vessel wall, such as other modalities or derived attributes, is impaired. To overcome this limitation we propose Void Space Surfaces which utilizes empty space in between vessel branches to communicate depth and their relative positioning. This allows us to enhance the depth perception of vascular structures without interfering with the spatial data and potentially superimposed parameter information. With this article, we introduce Void Space Surfaces, describe their technical realization, and show their application to various vessel trees. Moreover, we report the outcome of two user studies which we have conducted in order to evaluate the perceptual impact of Void Space Surfaces compared to existing vessel visualization techniques and discuss expert feedback.

15.
IEEE Trans Vis Comput Graph ; 16(6): 1358-65, 2010.
Article in English | MEDLINE | ID: mdl-20975176

ABSTRACT

Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses the uncertainty of a random walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique does not only take into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach.
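Since the random walker yields per-voxel label probabilities, ambiguity can be quantified and ranked. A minimal sketch, assuming normalised Shannon entropy as the uncertainty measure (the paper does not commit to this exact formula) and at least two labels per voxel:

```python
import math

def voxel_uncertainty(probs):
    """Normalised Shannon entropy of one voxel's label probabilities:
    1.0 means maximally ambiguous (uniform), 0.0 means certain.
    Assumes len(probs) >= 2 and probs sums to 1."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))

def most_uncertain(prob_map, k=1):
    """Indices of the k most ambiguous voxels, i.e. the regions to
    which the user's attention should be directed first."""
    order = sorted(range(len(prob_map)),
                   key=lambda i: voxel_uncertainty(prob_map[i]),
                   reverse=True)
    return order[:k]
```

Iterating "show the user the top-k ambiguous regions, let them correct, re-run the segmentation" is the guided loop the abstract describes.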


Subjects
Computer Graphics , Algorithms , Brain/anatomy & histology , Computer Simulation , Data Display , Humans , Imaging, Three-Dimensional , Liver/diagnostic imaging , Magnetic Resonance Imaging/statistics & numerical data , Models, Anatomic , Tomography, X-Ray Computed/statistics & numerical data
16.
IEEE Trans Vis Comput Graph ; 26(11): 3241-3254, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31180858

ABSTRACT

The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, which would be important to achieve optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, as well as layer-independent development by supporting cross-layer documentation and debugging capabilities.

17.
F1000Res ; 9: 295, 2020.
Article in English | MEDLINE | ID: mdl-33552475

ABSTRACT

Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.


Subjects
Knowledge , Researchers , Software , Forecasting , Germany , Humans
18.
IEEE Trans Vis Comput Graph ; 15(6): 1515-22, 2009.
Article in English | MEDLINE | ID: mdl-19834228

ABSTRACT

In this paper, we present a visualization system for the visual analysis of PET/CT scans of aortic arches of mice. The system has been designed in close collaboration between researchers from the areas of visualization and molecular imaging with the objective to get deeper insights into the structural and molecular processes which take place during plaque development. Understanding the development of plaques might lead to a better and earlier diagnosis of cardiovascular diseases, which are still the main cause of death in the western world. After motivating our approach, we will briefly describe the multimodal data acquisition process before explaining the visualization techniques used. The main goal is to develop a system which supports visual comparison of the data of different species. Therefore, we have chosen a linked multi-view approach, which amongst others integrates a specialized straightened multipath curved planar reformation and a multimodal vessel flattening technique. We have applied the visualization concepts to multiple data sets, and we will present the results of this investigation.


Subjects
Aorta/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Positron-Emission Tomography/methods, Tomography, X-Ray Computed/methods, Animals, Aorta/pathology, Aortic Valve Stenosis/pathology, Atherosclerosis/pathology, Disease Models, Animal, Mice, Phantoms, Imaging, Reproducibility of Results
19.
IEEE Trans Vis Comput Graph ; 25(8): 2514-2528, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29994478

ABSTRACT

We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to introducing the model itself, we show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.
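The core idea of the abstract, a chain of spaces with mappings between consecutive ones, can be illustrated with a toy sketch. This is our own simplified illustration, not the paper's formal model: each space-to-space transformation is stood in for by a single scale factor, and a technique counts as "spatially direct" between two spaces when the composed mapping deviates little from the identity.

```python
# Toy sketch (illustrative only, not the paper's formalization):
# the spaces listed in the abstract, and mappings between consecutive
# spaces modeled as scalar scale factors.

SPACES = ["data", "visualization", "output", "user", "manipulation", "interaction"]

def compose(mappings):
    """Compose a chain of scale factors standing in for
    space-to-space transformations."""
    scale = 1.0
    for m in mappings:
        scale *= m
    return scale

def is_direct(mappings, tol=0.05):
    """Direct if the end-to-end mapping is (near-)identity, e.g. touching
    a displayed point exactly where the datum is drawn; a 2x-scaled mouse
    mapping between manipulation and output space would be indirect."""
    return abs(compose(mappings) - 1.0) <= tol
```

For example, `is_direct([1.0, 1.0])` holds for an unscaled touch interaction, while `is_direct([2.0])` fails for a mouse whose movement is doubled on screen.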

20.
IEEE Trans Vis Comput Graph ; 14(6): 1499-506, 2008.
Article in English | MEDLINE | ID: mdl-18989002

ABSTRACT

Myocardial perfusion imaging with single photon emission computed tomography (SPECT) is an established method for the detection and evaluation of coronary artery disease (CAD). State-of-the-art SPECT scanners yield a large number of regional parameters of the left-ventricular myocardium (e.g., blood supply at rest and during stress, wall thickness, and wall thickening during heart contraction) that all need to be assessed by the physician. Today, the individual parameters of this multivariate data set are displayed as stacks of 2D slices, bull's eye plots, or, more recently, surfaces in 3D, which depict the left-ventricular wall. In all these visualizations, the data sets are displayed side by side rather than in an integrated manner, such that the multivariate data have to be examined sequentially and fused mentally. This is time consuming and error-prone. In this paper, we present an interactive 3D glyph visualization, which enables an effective integrated visualization of the multivariate data. Results from semiotic theory are used to optimize the mapping of different variables to glyph properties. This facilitates an improved perception of important information and thus an accelerated diagnosis. The 3D glyphs are linked to the established 2D views, which permit a more detailed inspection, and to relevant meta-information such as known stenoses of coronary vessels supplying the myocardial region. Our method has demonstrated its potential for clinical routine use in real application scenarios, as assessed by nuclear physicians.
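The central mechanism the abstract describes, assigning each variable of the multivariate data set to a distinct glyph property, can be sketched minimally. The parameter names, value ranges, and channel assignments below are illustrative assumptions, not the paper's actual mapping or clinical thresholds.

```python
# Minimal sketch (hypothetical names and ranges): map multivariate
# myocardial measurements to visual glyph channels so they can be read
# in one integrated view instead of side-by-side displays.

def normalize(value, lo, hi):
    """Clamp and rescale a measurement into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def make_glyph(perfusion_rest, perfusion_stress, wall_thickness):
    """Assumed channel assignment:
      - perfusion at rest  -> glyph color (red = low, green = normal)
      - stress/rest ratio  -> glyph height
      - wall thickness     -> glyph radius
    """
    rest = normalize(perfusion_rest, 0.0, 100.0)
    ratio = normalize(perfusion_stress / max(perfusion_rest, 1e-6), 0.5, 1.5)
    thick = normalize(wall_thickness, 5.0, 15.0)   # millimetres, assumed range
    return {
        "color": (1.0 - rest, rest, 0.0),  # RGB: red -> green with perfusion
        "height": 0.5 + ratio,             # taller = better stress response
        "radius": 0.2 + 0.8 * thick,       # wider = thicker wall
    }

glyph = make_glyph(perfusion_rest=80.0, perfusion_stress=72.0, wall_thickness=10.0)
```

Keeping each variable on its own visual channel is what lets the physician assess all parameters of a myocardial region from a single glyph, which is the integration the side-by-side displays lack.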


Subjects
Coronary Artery Disease/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Pattern Recognition, Automated/methods, Tomography, Emission-Computed, Single-Photon/methods, User-Computer Interface, Ventricular Dysfunction, Left/diagnostic imaging, Algorithms, Artificial Intelligence, Computer Graphics, Coronary Artery Disease/complications, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity, Ventricular Dysfunction, Left/etiology