Results 1 - 2 of 2

1.
Med Image Anal; 97: 103287, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39111265

ABSTRACT

Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize across different imaging modalities. This issue is particularly problematic given the limited availability of annotated data in both the target and the source modality, which makes it difficult to deploy these models at scale. To overcome these challenges, we propose MoDATTS, a new semi-supervised training strategy designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between modalities produces synthetic yet annotated images and labels in the desired modality, improving generalization to the unannotated target modality. We also use vision transformer architectures for both the image translation (TransUNet) and segmentation (Medformer) tasks, and introduce an iterative self-training procedure for the latter to further close the domain gap between modalities by also training on unlabeled images in the target modality. MoDATTS can additionally exploit image-level labels through a semi-supervised objective that encourages the model to disentangle tumors from the background. This semi-supervised objective is particularly helpful for maintaining downstream segmentation performance when pixel-level labels are also scarce in the source modality dataset, or when the source dataset contains healthy controls. The proposed model outperforms the methods of participating teams in the CrossMoDA 2022 vestibular schwannoma (VS) segmentation challenge, reaching a top Dice score of 0.87±0.04 for VS segmentation. MoDATTS also yields consistent Dice-score improvements over baselines on a cross-modality adult brain glioma segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, reaching 95% of the performance of a model supervised on the target modality when no target-modality annotations are available. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data, respectively, are additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.
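The abstract above combines modality translation with pseudo-label self-training. Below is a minimal sketch of that general translate-then-segment idea in PyTorch, not the authors' released code: the `translator` and `segmenter` modules are generic stand-ins for TransUNet and Medformer, and the confidence threshold, loss weighting, and single-step structure are assumptions made for illustration only.

```python
# Illustrative sketch only: generic models stand in for TransUNet (translator)
# and Medformer (segmenter); thresholds and loss weights are assumptions.
import torch
import torch.nn.functional as F

def train_step(translator, segmenter, optimizer,
               source_img, source_mask, target_img, conf_thresh=0.9):
    """One hypothetical training step on an unpaired source/target batch."""
    optimizer.zero_grad()

    # 1) Translate annotated source images into the target modality and
    #    reuse the source masks as labels for the synthetic images.
    with torch.no_grad():
        synthetic_target = translator(source_img)
    loss_syn = F.cross_entropy(segmenter(synthetic_target), source_mask)

    # 2) Self-training: pseudo-label real, unannotated target images with
    #    the current segmenter and keep only confident voxels.
    with torch.no_grad():
        probs = torch.softmax(segmenter(target_img), dim=1)
        conf, pseudo = probs.max(dim=1)
        pseudo[conf < conf_thresh] = -100  # ignored by cross_entropy below
    loss_pseudo = F.cross_entropy(segmenter(target_img), pseudo,
                                  ignore_index=-100)

    loss = loss_syn + loss_pseudo
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper this self-training is iterated so that pseudo-labels are refreshed as the segmenter improves; the single step above only shows the shape of one such iteration.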


Subjects
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Neural Networks, Computer; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Algorithms; Deep Learning; Supervised Machine Learning; Image Processing, Computer-Assisted/methods
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 4832-4835, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892291

ABSTRACT

Previous studies have shown that athletic jump-mechanics assessments are valuable tools for identifying indicators of an individual's anterior cruciate ligament injury risk. These assessments, such as the drop jump test, have typically relied on camera systems or sensors that are not always accessible or practical for screening individuals in a sports setting. As deep learning models for human pose estimation improve, we envision transitioning such assessments to mobile devices. Here, we address two of the main obstacles of current state-of-the-art models: the accuracy of lower-limb joint prediction and the slow runtime of in-the-wild inference. We tackle the accuracy issue with a post-processing step that is compatible with any inference method that outputs 3D keypoints. To overcome the slow inference, we propose a depth estimation method that runs in real time and can work with any 2D human pose estimation model that outputs COCO keypoints. Paired with a state-of-the-art model for 3D human pose estimation, our solution significantly increased lower-limb positional accuracy. Combined with our real-time joint depth estimation algorithm, it is a plausible basis for developing the first mobile-device prototype for athlete jump-mechanics assessments.
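The abstract describes the depth-estimation component only at the interface level: any 2D pose model that outputs COCO keypoints can feed it, and it returns per-joint depth in real time. The sketch below shows that interface with a small lifting network as a placeholder; the paper's actual algorithm is not reproduced here, and the architecture, layer sizes, and example input are assumptions.

```python
# Interface sketch only: a placeholder lifting MLP mapping 2D COCO keypoints
# to per-joint depth; not the method proposed in the paper.
import torch
import torch.nn as nn

NUM_COCO_JOINTS = 17  # standard COCO keypoint count

class DepthLifter(nn.Module):
    """Maps a (17, 2) array of 2D COCO keypoints to one depth value per joint."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_COCO_JOINTS * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_COCO_JOINTS),  # one depth per joint
        )

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, 17, 2) pixel coordinates from any 2D pose model
        flat = keypoints_2d.flatten(start_dim=1)
        return self.net(flat)  # (batch, 17) estimated depths

# Usage with dummy keypoints standing in for a 2D pose model's output.
kps = torch.rand(1, NUM_COCO_JOINTS, 2)
depths = DepthLifter()(kps)
print(depths.shape)  # torch.Size([1, 17])
```

The appeal of this kind of interface, as the abstract notes, is that the 2D backbone can be swapped freely, so the depth stage does not constrain the choice of pose model on a mobile device.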


Subjects
Anterior Cruciate Ligament Injuries; Musculoskeletal System; Sports; Athletes; Humans; Lower Extremity