Results 1 - 9 of 9

1.
Cancers (Basel); 15(23), 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38067324

ABSTRACT

Automated brain tumor segmentation has significant importance, especially for disease diagnosis and treatment planning. The study utilizes a range of MRI modalities, namely T1-weighted (T1), T1-contrast-enhanced (T1ce), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR), each providing unique and vital information for accurate tumor localization. While state-of-the-art models perform well on standardized datasets such as the BraTS dataset, their suitability in diverse clinical settings (matrix size, slice thickness, and manufacturer-related differences such as repetition time and echo time) remains a subject of debate. This research aims to address this gap by introducing a novel 'Region-Focused Selection Plus (RFS+)' strategy designed to efficiently improve the generalization and quantification capabilities of deep learning (DL) models for automatic brain tumor segmentation. RFS+ advocates a targeted approach, focusing on one region at a time. It presents a holistic strategy that maximizes the benefits of various segmentation methods by customizing input masks, activation functions, loss functions, and normalization techniques. Upon identifying the top three models for each specific region in the training dataset, RFS+ employs a weighted ensemble learning technique to mitigate the limitations inherent in each segmentation approach. In this study, we explore three distinct approaches, namely multi-class, multi-label, and binary-class brain tumor segmentation, coupled with various normalization techniques applied to individual sub-regions. The combination of different approaches with diverse normalization techniques is also investigated. A comparative analysis is conducted among three U-net model variants, including the state-of-the-art models that emerged victorious in the BraTS 2020 and 2021 challenges. These models are evaluated using the Dice similarity coefficient (DSC) on the 2021 BraTS validation dataset. The 2D U-net model yielded DSC scores of 77.45%, 82.14%, and 90.82% for enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. Furthermore, on our local dataset, the 2D U-net model augmented with the RFS+ strategy demonstrates superior performance compared to the state-of-the-art model, achieving the highest DSC score of 79.22% for gross tumor volume (GTV). The model utilizing RFS+ requires 10% less training data and 67% less memory, and completes training in 92% less time, compared to the state-of-the-art model. These results confirm the effectiveness of the RFS+ strategy for enhancing the generalizability of DL models in brain tumor segmentation.
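
As a point of reference for the evaluation and ensembling described above, the sketch below (Python/NumPy, not the authors' implementation) shows how a Dice similarity coefficient is typically computed from binary masks and how probability maps from the top three models could be combined with validation-derived weights; the array shapes, random masks, and weights are illustrative assumptions.

import numpy as np

def dice_score(pred, truth, eps=1e-7):
    # Dice similarity coefficient (DSC) between two binary segmentation masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return (2.0 * overlap + eps) / (pred.sum() + truth.sum() + eps)

def weighted_ensemble(prob_maps, weights):
    # Combine per-model probability maps with weights (e.g. validation DSCs),
    # normalised to sum to one, as a generic weighted ensemble.
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(prob_maps), axes=1)

# Illustrative use with random placeholder volumes (240 x 240 x 155 is the
# BraTS grid size; the masks here are random, not real predictions).
pred = np.random.rand(240, 240, 155) > 0.5
truth = np.random.rand(240, 240, 155) > 0.5
print(f"whole-tumor DSC: {dice_score(pred, truth):.4f}")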

2.
Entropy (Basel); 25(4), 2023 Mar 26.
Article in English | MEDLINE | ID: mdl-37190356

ABSTRACT

The graph autoencoder (GAE) is a powerful tool for unsupervised representation learning on graph data. However, most existing GAE-based methods typically focus on preserving the graph topological structure by reconstructing the adjacency matrix while ignoring the preservation of node attribute information. Thus, the node attributes cannot be fully learned, and the ability of the GAE to learn higher-quality representations is weakened. To address this issue, this paper proposes a novel GAE model that preserves node attribute similarity. The structural graph and the attribute neighbor graph, which is constructed based on the attribute similarity between nodes, are integrated as the encoder input using an effective fusion strategy. In the encoder, the attributes of the nodes can be aggregated both in their structural neighborhood and, by attribute similarity, in their attribute neighborhood. This allows the structural and node attribute information to be fused into the node representation through a shared encoder. In the decoder module, the adjacency matrix and the attribute similarity matrix of the nodes are reconstructed using dual decoders. The cross-entropy loss of the reconstructed adjacency matrix and the mean-squared error loss of the reconstructed node attribute similarity matrix are used to update the model parameters and to ensure that the node representation preserves the original structural and node attribute similarity information. Extensive experiments on three citation networks show that the proposed method outperforms state-of-the-art algorithms in link prediction and node clustering tasks.
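
To make the dual-decoder objective above concrete, here is a minimal NumPy sketch of the combined loss: binary cross-entropy on the reconstructed adjacency matrix plus mean-squared error on the reconstructed node-attribute-similarity matrix. The cosine similarity and the balancing weight alpha are assumptions for illustration, not details taken from the paper.

import numpy as np

def attribute_similarity(x):
    # Cosine similarity between node attribute vectors (one common choice
    # for building the attribute neighbor graph / similarity matrix).
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    return xn @ xn.T

def dual_decoder_loss(a_hat, a, s_hat, s, alpha=1.0, eps=1e-9):
    # Cross-entropy over the reconstructed adjacency matrix a_hat plus
    # mean-squared error over the reconstructed similarity matrix s_hat;
    # alpha is an assumed trade-off hyperparameter.
    a_hat = np.clip(a_hat, eps, 1.0 - eps)
    bce = -np.mean(a * np.log(a_hat) + (1 - a) * np.log(1 - a_hat))
    mse = np.mean((s_hat - s) ** 2)
    return bce + alpha * mse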

3.
J Digit Imaging; 35(4): 938-946, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35293605

ABSTRACT

Diagnosis of brain tumor gliomas is a challenging task in medical image analysis due to their complexity, the irregularity of tumor structures, and the diversity of tissue textures and shapes. Semantic segmentation approaches using deep learning have consistently outperformed previous methods on this task. However, deep learning alone is insufficient to provide the required local features related to tissue texture changes due to tumor growth. This paper presents a hybrid method, arising from this need, that incorporates machine-learned and hand-crafted features. A semantic segmentation network (SegNet) is used to generate the machine-learned features, while grey-level co-occurrence matrix (GLCM)-based texture features constitute the hand-crafted features. In addition, the proposed approach takes only the region of interest (ROI), which covers the extent of the complete tumor structure, as input, and suppresses the intensity of other, irrelevant areas. A decision tree (DT) is used to classify the pixels of ROI MRI images into different parts of the tumor, i.e., edema, necrosis, and enhanced tumor. The method was evaluated on the BraTS 2017 dataset. The results demonstrate that the proposed model provides promising segmentation of brain tumor structures. The F-measures for automatic brain tumor segmentation against ground truth are 0.98, 0.75, and 0.69 for whole tumor, core, and enhanced tumor, respectively.


Subjects
Brain Neoplasms; Glioma; Algorithms; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Glioma/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer
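
As a sketch of the hand-crafted half of the hybrid pipeline above, the following Python fragment extracts GLCM texture features with scikit-image and feeds per-patch feature vectors to a scikit-learn decision tree; the quantisation level, distances, angles, properties and tree depth are illustrative assumptions rather than the paper's settings.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def glcm_features(patch, levels=32):
    # Quantise an ROI patch and extract a few GLCM texture statistics.
    lo, hi = float(patch.min()), float(patch.max())
    q = ((patch - lo) / (hi - lo + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# A decision tree then labels each ROI pixel/patch as edema, necrosis or
# enhanced tumor; X would stack GLCM (plus SegNet-derived) features and y the
# ground-truth labels.
clf = DecisionTreeClassifier(max_depth=10)   # depth is an assumed setting
# clf.fit(X_train, y_train); labels = clf.predict(X_test)
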
4.
Biodivers Data J; 8: e47051, 2020.
Article in English | MEDLINE | ID: mdl-32269476

ABSTRACT

Digitisation of natural history collections has evolved from creating databases for recording specimens' catalogue and label data to including digital images of specimens. This has been driven by several important factors, such as the need to increase global accessibility to specimens and to preserve the original specimens by limiting their manual handling. The size of the collections pointed to the need for high-throughput digitisation workflows. However, digital imaging of large numbers of fragile specimens is an expensive and time-consuming process that should be performed only once. To achieve this, the digital images produced need to be useful for the largest possible set of applications and have a potentially unlimited shelf life. The constraints on digitisation speed need to be balanced against the applicability and longevity of the images, which, in turn, depend directly on the quality of those images. As a result, the quality criteria that specimen images need to fulfil influence the design, implementation and execution of digitisation workflows. Different standards and guidelines for producing quality research images from specimens have been proposed; however, their actual adaptation to suit the needs of different types of specimens requires further analysis. This paper presents the digitisation workflow implemented by Meise Botanic Garden (MBG). This workflow is relevant because of its modular design, its strong focus on image quality assessment, its flexibility in combining in-house and outsourced digitisation, processing, preservation and publishing facilities, and its capacity to evolve by integrating alternative components from different sources. The design and operation of the digitisation workflow are described to show how it was derived, with particular attention to the built-in audit trail, which ensures the scalable production of high-quality specimen images and ensures that new modules affect neither the speed of imaging nor the quality of the images produced.

5.
IEEE Trans Vis Comput Graph; 20(5): 675-85, 2014 May.
Article in English | MEDLINE | ID: mdl-26357291

ABSTRACT

Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of the depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for the layers. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Unlike previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
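
The key computational step above is solving a sparse linear system that turns a composite normal image into a compressed, detail-preserving depth field. The sketch below is a generic gradient-domain (screened least-squares) integration in Python/SciPy that conveys the idea only; the attenuation factor `compress` and regulariser `reg` merely stand in for the paper's style and target-height controls and are not its actual formulation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def _grad_ops(h, w):
    # Sparse forward-difference gradient operators on an h x w grid
    # (row-major flattening), with zero rows at the far boundary.
    def diff(n):
        d = sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1], shape=(n, n)).tolil()
        d[-1, :] = 0
        return d.tocsr()
    dx = sp.kron(sp.identity(h), diff(w))   # d/dx acts within each image row
    dy = sp.kron(diff(h), sp.identity(w))   # d/dy acts across image rows
    return dx, dy

def depth_from_normals(normals, compress=0.2, reg=1e-3):
    # Reconstruct a height field from an (h, w, 3) normal image by
    # least-squares integration of attenuated gradients; `reg` screens the
    # system so it has a unique, bounded solution.
    h, w, _ = normals.shape
    nz = np.clip(normals[..., 2], 1e-3, None)
    gx = (-normals[..., 0] / nz * compress).ravel()   # target x-gradients
    gy = (-normals[..., 1] / nz * compress).ravel()   # target y-gradients
    dx, dy = _grad_ops(h, w)
    a = dx.T @ dx + dy.T @ dy + reg * sp.identity(h * w)
    b = dx.T @ gx + dy.T @ gy
    return spsolve(a.tocsc(), b).reshape(h, w)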

6.
IEEE Trans Vis Comput Graph; 19(7): 1199-217, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23661012

ABSTRACT

Three-dimensional surface registration transforms multiple three-dimensional data sets into the same coordinate system so as to align overlapping components of these sets. Recent surveys have covered different aspects of either rigid or nonrigid registration, but seldom discuss them as a whole. Our study serves two purposes: 1) to give a comprehensive survey of both types of registration, focusing on three-dimensional point clouds and meshes, and 2) to provide a better understanding of registration from the perspective of data fitting. Registration is closely related to data fitting, in that it comprises three core interwoven components: model selection, correspondences and constraints, and optimization. Study of these components 1) provides a basis for comparing the novelties of different techniques, 2) reveals the similarity of rigid and nonrigid registration in terms of problem representations, and 3) shows how overfitting arises in nonrigid registration and the reasons for the increasing interest in intrinsic techniques. We further summarize some practical issues of registration, including initialization and evaluation, and discuss some of our own observations, insights and foreseeable research trends.
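
As a concrete instance of the data-fitting view above (correspondences and constraints, model fitting, optimization), here is a minimal point-to-point rigid ICP sketch in Python; it is purely illustrative and far simpler than the techniques surveyed, and the iteration count is an arbitrary assumption.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Closed-form least-squares rigid transform (R, t) aligning src to dst
    # via the Kabsch/SVD solution.
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=30):
    # Alternate nearest-neighbour correspondences (data association) with
    # closed-form rigid fitting (optimization) for `iters` iterations.
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nn = tree.query(cur)
        r, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ r.T + t
    return cur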

7.
IEEE Trans Syst Man Cybern B Cybern; 41(3): 749-60, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21134817

ABSTRACT

Cellular automata (CA) with given evolution rules have been widely investigated, but the inverse problem of extracting CA rules from observed data is less studied. Current CA rule extraction approaches are both time-consuming and inefficient at selecting neighborhoods. We give a novel approach to identifying CA rules from observed data and selecting CA neighborhoods based on the identified CA model. Our identification algorithm uses a model that is linear in its parameters and provides a unified framework for representing the identification problem for both deterministic and probabilistic CA. Parameters are estimated based on a minimum-variance criterion. An incremental procedure is applied during CA identification to select an initial coarse neighborhood. Redundant cells in the neighborhood are then removed based on the parameter estimates, and the neighborhood size is determined using the Bayesian information criterion. Experimental results show the effectiveness of our algorithm and that it outperforms other leading CA identification algorithms.


Subjects
Algorithms; Artificial Intelligence; Decision Support Techniques; Models, Theoretical; Pattern Recognition, Automated/methods; Computer Simulation
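
The following Python sketch illustrates the linear-in-parameters formulation and the BIC-based neighborhood comparison described above, for a 1-D binary CA with periodic boundaries; the regressors (raw neighbor states only), the plain least-squares fit, and the candidate radii are simplifications, not the authors' exact estimator.

import numpy as np

def design_matrix(states, offsets):
    # states: (time, cells) array of observed CA configurations.
    # offsets: candidate neighbourhood positions relative to each cell.
    t, n = states.shape
    # Column i of the rolled array holds the state of cell i + o (periodic).
    cols = [np.roll(states[:-1], -o, axis=1).ravel() for o in offsets]
    x = np.column_stack([np.ones((t - 1) * n)] + cols)   # linear in parameters
    y = states[1:].ravel()                                # next-step states
    return x, y

def bic(x, y):
    # Bayesian information criterion for a least-squares CA model.
    theta, *_ = np.linalg.lstsq(x, y, rcond=None)
    resid = y - x @ theta
    n, k = len(y), x.shape[1]
    return n * np.log(np.mean(resid ** 2) + 1e-12) + k * np.log(n)

# Compare candidate neighbourhoods by BIC (smaller is better), e.g. radius
# 1 vs 2 vs 3, given an observed history `obs`:
# scores = {r: bic(*design_matrix(obs, range(-r, r + 1))) for r in (1, 2, 3)}
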
8.
IEEE Trans Vis Comput Graph; 15(4): 642-53, 2009.
Article in English | MEDLINE | ID: mdl-19423888

ABSTRACT

An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to also use gradient weights, in order to enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great an effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple yet largely preserves the original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
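
A rough approximation of this pipeline in Python/scikit-image: run adaptive histogram equalisation on the normalised height field at several neighbourhood sizes, average the results, and rescale to the user-chosen height range. Plain CLAHE is used here, so the paper's gradient weighting and height-dependent scaling limits are not reproduced; the kernel sizes are assumptions.

import numpy as np
from skimage import exposure

def bas_relief_from_height(height, target_height=1.0, kernel_sizes=(8, 16, 32)):
    # Normalise the height field to [0, 1], equalise it adaptively at several
    # neighbourhood (kernel) sizes, average across scales, and rescale to the
    # requested output height range.
    h = (height - height.min()) / (np.ptp(height) + 1e-12)
    eq = np.mean([exposure.equalize_adapthist(h, kernel_size=k)
                  for k in kernel_sizes], axis=0)
    return eq * target_height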

9.
IEEE Trans Vis Comput Graph; 13(5): 925-38, 2007.
Article in English | MEDLINE | ID: mdl-17622677

ABSTRACT

We present a simple and fast mesh denoising method, which can remove noise effectively, while preserving mesh features such as sharp edges and corners. The method consists of two stages. Firstly, noisy face normals are filtered iteratively by weighted averaging of neighboring face normals. Secondly, vertex positions are iteratively updated to agree with the denoised face normals. The weight function used during normal filtering is much simpler than that used in previous similar approaches, being simply a trimmed quadratic. This makes the algorithm both fast and simple to implement. Vertex position updating is based on the integration of surface normals using a least-squares error criterion. Like previous algorithms, we solve the least-squares problem by gradient descent, but whereas previous methods needed user input to determine the iteration step size, we determine it automatically. In addition, we prove the convergence of the vertex position updating approach. Analysis and experiments show the advantages of our proposed method over various earlier surface denoising methods.


Subjects
Algorithms; Artifacts; Computer Graphics; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Reproducibility of Results; Sensitivity and Specificity
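
A compact Python/NumPy sketch of the two-stage scheme described above: face normals are iteratively smoothed with a trimmed-quadratic weight on the dot product of neighbouring normals, and vertices are then updated by gradient descent so that incident faces agree with the filtered normals. The threshold, iteration counts and the fixed step size are illustrative assumptions (the paper chooses the step automatically).

import numpy as np

def filter_normals(normals, adjacency, threshold=0.6, iters=5):
    # Stage 1: average each face normal with its neighbours, weighting by a
    # trimmed quadratic of the normal dot product (zero below `threshold`);
    # the face's own normal is kept with unit weight, an assumed convention.
    n = normals.copy()
    for _ in range(iters):
        out = np.zeros_like(n)
        for f, nbrs in enumerate(adjacency):      # adjacency: neighbour face ids
            dots = n[nbrs] @ n[f]
            w = np.where(dots > threshold, (dots - threshold) ** 2, 0.0)
            acc = n[f] + (w[:, None] * n[nbrs]).sum(axis=0)
            out[f] = acc / (np.linalg.norm(acc) + 1e-12)
        n = out
    return n

def update_vertices(verts, faces, normals, iters=20, step=0.1):
    # Stage 2: move each vertex to reduce sum_f (n_f . (c_f - v))^2 over its
    # incident faces, i.e. gradient descent with a fixed step size.
    v = verts.copy()
    for _ in range(iters):
        centers = v[faces].mean(axis=1)
        delta = np.zeros_like(v)
        count = np.zeros(len(v))
        for f, (i, j, k) in enumerate(faces):
            corr = normals[f] * (normals[f] @ (centers[f] - v[[i, j, k]]).T)[:, None]
            for c, vid in zip(corr, (i, j, k)):
                delta[vid] += c
                count[vid] += 1
        v += step * delta / np.maximum(count, 1)[:, None]
    return v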