Results 1 - 2 of 2
1.
Med Image Anal; 89: 102875, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37441881

ABSTRACT

Medical images are generally acquired with a limited field-of-view (FOV), which can lead to incomplete regions of interest (ROI) and thus poses a great challenge for medical image analysis. This is particularly evident for learning-based multi-target landmark detection, where algorithms can be misled into learning primarily the background variation caused by the varying FOV and consequently fail to detect the targets. By learning a navigation policy instead of predicting targets directly, reinforcement learning (RL)-based methods have the potential to tackle this challenge efficiently. Inspired by this, we propose a multi-agent RL framework for simultaneous multi-target landmark detection. The framework is designed to learn, from incomplete and/or complete images, an implicit knowledge of global structure, which is consolidated during training for detecting targets in either complete or incomplete test images. To further exploit the global structural information of incomplete images explicitly, we propose to embed a shape model into the RL process. With this prior knowledge, the proposed RL model can not only localize dozens of targets simultaneously, but also work effectively and robustly in the presence of incomplete images. We validated the applicability and efficacy of the proposed method on various multi-target detection tasks with incomplete images from clinical practice, using body dual-energy X-ray absorptiometry (DXA), cardiac MRI and head CT datasets. Results showed that our method could predict the whole set of landmarks with incomplete training images of up to 80% missing proportion (average distance error 2.29 cm on body DXA), and could detect unseen landmarks in regions with missing image information outside the FOV of the target images (average distance error 6.84 mm on 3D half-head CT). Our code will be released via https://zmiclab.github.io/projects.html.
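
To make the navigation-policy idea concrete, below is a minimal, hypothetical Python/NumPy sketch of the kind of inference loop such a framework might use: one agent per landmark steps its estimate through the volume one voxel at a time, and a PCA shape model periodically regularizes the joint configuration. The patch size, the toy_policy stub (standing in for a trained per-agent Q-network), and the shape-model details are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

# The six unit moves an agent can take along the image axes.
ACTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0],
                    [0, 0, 1], [0, 0, -1]])

def crop_patch(volume, center, size=15):
    """Zero-padded local patch around an agent's current position (its observation)."""
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    c = np.asarray(center) + half
    return padded[c[0]-half:c[0]+half+1,
                  c[1]-half:c[1]+half+1,
                  c[2]-half:c[2]+half+1]

def toy_policy(patch, agent_id, rng):
    """Stand-in for a trained per-agent Q-network: here it just picks a random move.
    In a real framework this would map the observed patch to Q-values over ACTIONS."""
    return rng.integers(len(ACTIONS))

def project_to_shape_model(positions, mean_shape, components, eigvals, max_std=3.0):
    """Regularize the joint landmark configuration with a PCA point-distribution model:
    express it in the model's modes and clamp each coefficient to +/- max_std stddevs."""
    x = positions.reshape(-1) - mean_shape
    b = components.T @ x
    b = np.clip(b, -max_std * np.sqrt(eigvals), max_std * np.sqrt(eigvals))
    return (mean_shape + components @ b).reshape(positions.shape)

def detect_landmarks(volume, init_positions, mean_shape, components, eigvals,
                     n_rounds=50, seed=0):
    """One agent per landmark navigates the volume; the shape model couples the agents."""
    rng = np.random.default_rng(seed)
    bounds = np.array(volume.shape) - 1
    pos = np.array(init_positions, dtype=float)
    for _ in range(n_rounds):
        for i in range(len(pos)):
            a = toy_policy(crop_patch(volume, pos[i].astype(int)), i, rng)
            pos[i] = np.clip(pos[i] + ACTIONS[a], 0, bounds)
        # Pull the joint configuration back toward a plausible global structure.
        pos = np.clip(project_to_shape_model(pos, mean_shape, components, eigvals), 0, bounds)
    return np.rint(pos).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.random((64, 64, 64)).astype(np.float32)              # stand-in for a CT volume
    K = 4                                                          # number of landmarks/agents
    mean_shape = rng.uniform(16, 48, size=3 * K)                   # toy PCA shape model ...
    components, _ = np.linalg.qr(rng.standard_normal((3 * K, 2)))  # ... with two modes
    eigvals = np.array([9.0, 4.0])
    init = np.full((K, 3), 32)
    print(detect_landmarks(vol, init, mean_shape, components, eigvals))
```

Because the agents observe only local patches while the shape model constrains the set of positions globally, landmarks whose neighbourhoods fall outside the FOV can still be placed consistently with the visible ones; this is the role the embedded shape prior plays in the described framework.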


Subjects
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Radiography; Absorptiometry, Photon; Head
2.
J Genet Genomics; 49(10): 934-942, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35259542

ABSTRACT

Facial and cranial variation represent a multidimensional set of highly correlated and heritable phenotypes, yet little is known about the genetic basis underlying this correlation. We develop a software package, ALoSFL, for the simultaneous localization of facial and cranial landmarks from head computed tomography (CT) images, apply it to head CT images of 777 Han Chinese women, and obtain a set of phenotypes representing variation in the face, the skull and facial soft tissue thickness (FSTT). Association analysis of 301 single nucleotide polymorphisms (SNPs) from 191 distinct genomic loci previously associated with facial variation reveals more loci showing significant associations (P < 1e-3) with cranial phenotypes than expected under the null (O/E = 3.39), suggesting that facial and cranial phenotypes share a substantial proportion of their genetic components. Adding FSTT to a SNP-only model has a large impact on the facial variance explained. A gene ontology analysis suggests that bone morphogenesis and osteoblast differentiation likely underlie our cranial-significant findings. Overall, this study investigates the genetic effects on facial and cranial variation simultaneously in the same sample, supporting the view that facial variation is a composite phenotype of cranial variation and FSTT.
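
As a rough illustration of the observed/expected (O/E) statistic quoted above, the sketch below computes an O/E ratio for a set of association p-values against a uniform null; the function name and the simulated data are assumptions for illustration only, and the study's actual calculation (collapsing SNP-by-phenotype tests to distinct loci) is more involved.

```python
import numpy as np

def observed_over_expected(pvals, alpha=1e-3):
    """Observed/expected count of significant tests, assuming uniform p-values under the null.
    Under the null, the expected number of tests with p < alpha is alpha * len(pvals)."""
    pvals = np.asarray(pvals)
    observed = np.count_nonzero(pvals < alpha)
    expected = alpha * pvals.size
    return observed / expected

# Illustrative use with simulated p-values (not the study's data):
rng = np.random.default_rng(1)
null_only = rng.uniform(size=10_000)                                  # pure null: O/E ~ 1
with_signal = np.concatenate([null_only, rng.uniform(0, 1e-3, 30)])   # add 30 true signals
print(observed_over_expected(null_only), observed_over_expected(with_signal))
```

An O/E ratio well above 1, as reported for the cranial phenotypes, indicates that the facial-variation loci reach significance on cranial traits far more often than chance alone would produce.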


Subjects
Face; Forensic Anthropology; Female; Animals; Face/anatomy & histology; Anatomic Landmarks; Skull/diagnostic imaging; Skull/anatomy & histology; Phenotype