Results 1 - 5 of 5
1.
Plant Cell Environ ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780063

ABSTRACT

Plasmodesmata (PDs) are intercellular organelles carrying multiple membranous nanochannels that allow the trafficking of cellular signalling molecules. Channel regulation of PDs occurs dynamically and is required in various developmental and physiological processes. It is well known that callose is a critical component in regulating PD permeability or symplasmic connectivity, but understanding of the signalling pathways and mechanisms behind its regulation is limited. Here, we used a reverse genetic approach to investigate the role of C-type lectin receptor-like kinase 1 (CLRLK1) in PD callose-modulated symplasmic continuity. We found that loss-of-function mutations in CLRLK1 led to excessive PD callose deposits and reduced symplasmic continuity, resulting in an accelerated gravitropic response. A protein interactome study also showed that CLRLK1 interacted with actin depolymerizing factor 3 (ADF3) in vitro and in plants. Moreover, mutations in ADF3 resulted in elevated PD callose deposits and a faster gravitropic response. Our results indicate that CLRLK1 and ADF3 negatively regulate PD callose accumulation, contributing to the fine-tuning of symplasmic apertures. Overall, our study identified two key components involved in PD callose deposition and provides new insights into how symplasmic connectivity is maintained through the control of PD callose homeostasis.

2.
IEEE Trans Med Imaging ; 41(6): 1320-1330, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34965206

ABSTRACT

In recent years, deep learning has dramatically improved performance in a variety of medical image analysis applications. Among deep learning models, convolutional neural networks have been among the most successful and are used in many medical imaging applications. Training deep convolutional neural networks often requires large amounts of image data to generalize well to new, unseen images. Collecting large amounts of data in the medical imaging domain is often time-consuming and expensive, owing to costly imaging systems and the need for experts to manually create ground-truth annotations. A further problem arises if new structures must be added after a decision support system has already been deployed and is in use: since the field of radiation therapy is constantly developing, such new structures would also have to be covered by the system. In the present work, we propose a novel loss function that addresses multiple problems at once: imbalanced datasets, partially labeled data, and incremental learning. The proposed loss function adapts to the data at hand, using every sample even when some annotations are missing. We demonstrate that the proposed loss function also works well in an incremental learning setting, where an existing model is easily adapted to semi-automatically incorporate delineations of new organs when they appear. Experiments on a large in-house dataset show that the proposed method performs on par with baseline models, while greatly reducing training time and eliminating the burden of maintaining multiple models in practice.
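The abstract does not give the exact formulation of the loss, but the general idea of ignoring classes that are not annotated in a given sample can be sketched as follows. The function name, tensor layout, and masking scheme below are illustrative assumptions, not the authors' implementation:

# Hypothetical sketch: a per-class masked Dice loss for partially labeled data.
# Classes without annotations in a given sample contribute nothing to the loss,
# so fully and partially labeled images can be mixed in one training set.
import torch

def masked_dice_loss(logits, targets, label_mask, eps=1e-6):
    # logits:     (B, C, H, W) raw network outputs
    # targets:    (B, C, H, W) one-hot ground truth (zeros where unlabeled)
    # label_mask: (B, C) 1.0 if class c is annotated in sample b, else 0.0
    probs = torch.softmax(logits, dim=1)
    dims = (2, 3)                                      # sum over spatial dims
    intersection = (probs * targets).sum(dim=dims)
    union = probs.sum(dim=dims) + targets.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (union + eps)  # (B, C)
    loss = (1.0 - dice) * label_mask                   # drop unannotated classes
    return loss.sum() / label_mask.sum().clamp(min=1.0)

Under such a scheme, adding a new organ roughly amounts to appending a class channel and supplying a mask that is nonzero only for the samples in which that organ has been delineated, which mirrors the incremental setting described above.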


Subjects
Deep Learning; Diagnostic Imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Semantics
3.
Article in English | MEDLINE | ID: mdl-36998700

ABSTRACT

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist end-users in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analysis. Finally, in the interest of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
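The exact definition and weighting of the QU-BraTS score are given in the paper and the linked repository; the simplified sketch below only illustrates the underlying principle, filtering out voxels above an uncertainty threshold, recomputing Dice on the remainder, and tracking how many correct voxels were discarded. The thresholds and function name are assumptions, not the official scoring code:

# Simplified illustration of uncertainty-filtered evaluation: confident voxels
# are kept, Dice is recomputed on them, and filtering out voxels that were in
# fact correct is recorded so that it can be penalized.
import numpy as np

def filtered_scores(pred, truth, uncertainty, thresholds=(0.25, 0.5, 0.75, 1.0)):
    # pred, truth: boolean arrays; uncertainty: array of values in [0, 1]
    results = []
    for tau in thresholds:
        keep = uncertainty <= tau                      # retain confident voxels
        tp = np.logical_and(pred, truth)[keep].sum()
        denom = pred[keep].sum() + truth[keep].sum()
        dice = 2.0 * tp / denom if denom > 0 else 1.0
        correct = pred == truth                        # voxels the model got right
        filtered_correct = np.logical_and(correct, ~keep).sum() / max(correct.sum(), 1)
        results.append((tau, dice, filtered_correct))
    return results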

4.
Med Phys ; 47(12): 6216-6231, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33169365

ABSTRACT

PURPOSE: When using convolutional neural networks (CNNs) for segmentation of organs and lesions in medical images, the conventional approach is to work with inputs and outputs either as single slices [two-dimensional (2D)] or whole volumes [three-dimensional (3D)]. One common alternative, denoted pseudo-3D in this study, is to use a stack of adjacent slices as input and produce a prediction for at least the central slice. This approach gives the network the ability to capture 3D spatial information, at only a minor additional computational cost. METHODS: In this study, we systematically evaluate the segmentation performance and computational cost of this pseudo-3D approach as a function of the number of input slices, and compare the results to conventional end-to-end 2D and 3D CNNs, and to triplanar orthogonal 2D CNNs. The standard pseudo-3D method treats the neighboring slices as multiple input image channels. We additionally design and evaluate a novel, simple approach in which the input stack is treated as a volume that is repeatedly convolved in 3D to obtain a 2D feature map; this 2D map is in turn fed into a standard 2D network. We conducted experiments using two different CNN backbone architectures and eight diverse data sets covering different anatomical regions, imaging modalities, and segmentation tasks. RESULTS: We found that while both pseudo-3D methods can process a large number of slices at once and remain computationally much more efficient than fully 3D CNNs, a significant improvement over a regular 2D CNN was only observed on two of the eight data sets. Triplanar networks had the poorest performance of all evaluated models. An analysis of the structural properties of the segmentation masks revealed no relation between segmentation performance and the number of input slices. A post hoc rank-sum test combining all metrics and data sets showed that only our newly proposed pseudo-3D method with an input size of 13 slices outperformed almost all other methods. CONCLUSION: In the general case, multislice inputs do not appear to improve segmentation results over 2D or 3D CNNs. For the particular case of 13 input slices, the proposed pseudo-3D method does appear to have a slight advantage across all data sets compared to all other methods evaluated in this work.
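A minimal sketch of the proposed input scheme, collapsing a stack of slices into a single 2D feature map with unpadded 3D convolutions before handing it to a 2D backbone, is given below. The class name, layer sizes, and feature counts are assumptions for illustration and do not reproduce the paper's exact architecture:

# Hypothetical sketch of the proposed pseudo-3D front end: a (B, 1, D, H, W)
# stack of D adjacent slices is convolved in 3D without depth padding until
# the slice dimension collapses to 1, yielding a (B, C, H, W) feature map
# that any standard 2D segmentation network can consume.
import torch
import torch.nn as nn

class SliceCollapse(nn.Module):
    def __init__(self, n_slices=13, features=16):
        super().__init__()
        layers, d, in_ch = [], n_slices, 1
        while d > 1:
            k = 3 if d >= 3 else 2                     # shrink depth by 2 (or to 1)
            layers += [nn.Conv3d(in_ch, features, kernel_size=(k, 3, 3),
                                 padding=(0, 1, 1)),
                       nn.ReLU(inplace=True)]
            d -= k - 1
            in_ch = features
        self.blocks = nn.Sequential(*layers)

    def forward(self, x):                              # x: (B, 1, D, H, W)
        return self.blocks(x).squeeze(2)               # -> (B, features, H, W)

stack = torch.randn(2, 1, 13, 128, 128)                # 13 adjacent slices
features_2d = SliceCollapse(n_slices=13)(stack)        # input to any 2D CNN

By contrast, the standard pseudo-3D baseline would simply reshape the same stack to (B, 13, H, W) and treat the slices as input channels of an ordinary 2D network.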


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Imaging, Three-Dimensional
5.
IEEE Trans Med Imaging ; 39(9): 2856-2868, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32149682

ABSTRACT

Deep learning methods have proven extremely effective at performing a variety of medical image analysis tasks. With their potential use in clinical routine, however, their lack of transparency has been one of their few weak points, raising concerns regarding their behavior and failure modes. While most research on inferring model behavior has focused on indirect strategies that estimate prediction uncertainties and visualize model support in the input image space, the ability to explicitly query a prediction model about its image content offers a more direct way to determine the behavior of trained models. To this end, we present a novel Visual Question Answering approach that allows an image to be queried by means of a written question. Experiments on a variety of medical and natural image datasets show that, by fusing image and question features in a novel way, the proposed approach achieves equal or higher accuracy than current methods.
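The fusion mechanism itself is only described as novel, so the sketch below shows a generic image/question fusion baseline (a small CNN image encoder, an LSTM question encoder, and element-wise multiplication), not the authors' architecture; all module names and sizes are assumptions:

# Generic visual question answering sketch: encode the image and the question
# separately, fuse the two feature vectors, and classify over candidate answers.
import torch
import torch.nn as nn

class SimpleVQA(nn.Module):
    def __init__(self, vocab_size=5000, n_answers=100, dim=512):
        super().__init__()
        self.image_encoder = nn.Sequential(            # stand-in image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.embed = nn.Embedding(vocab_size, 300)
        self.question_encoder = nn.LSTM(300, dim, batch_first=True)
        self.classifier = nn.Linear(dim, n_answers)

    def forward(self, image, question_tokens):
        img = self.image_encoder(image)                # (B, dim)
        _, (h, _) = self.question_encoder(self.embed(question_tokens))
        fused = img * h[-1]                            # element-wise fusion
        return self.classifier(fused)                  # answer logits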


Subjects
Diagnostic Imaging; Radiography