Results 1 - 20 of 144

1.
Annu Rev Biomed Eng ; 22: 127-153, 2020 06 04.
Article in English | MEDLINE | ID: mdl-32169002

ABSTRACT

Sparsity is a powerful concept for high-dimensional machine learning, offering representational and computational efficiency, and it is well suited to medical image segmentation. We present a selection of techniques that incorporate sparsity, including strategies based on dictionary learning and deep learning, aimed at medical image segmentation and related quantification.
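
As a concrete illustration of the dictionary-learning side of this family of methods, the sketch below computes a sparse code for an image patch over a fixed dictionary using ISTA (iterative soft-thresholding). The dictionary, patch size, and regularization weight are illustrative assumptions, not details from the review; a real pipeline would learn the dictionary (e.g., with K-SVD) from training patches.

```python
# A minimal sketch of sparse coding over a fixed dictionary via ISTA
# (iterative soft-thresholding). D, the patch size, and lambda are
# illustrative assumptions, not details from the review.
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the data-fit gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of 0.5*||x - D a||^2
        a = a - grad / L                     # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))           # 8x8 patches, 256 atoms (toy sizes)
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x = rng.standard_normal(64)                  # one vectorized patch
a = ista_sparse_code(D, x)
print("non-zero coefficients:", np.count_nonzero(np.abs(a) > 1e-6))
```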


Subjects
Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Algorithms , Animals , Brain/diagnostic imaging , Deep Learning , Dogs , Echocardiography/methods , Heart Ventricles/diagnostic imaging , Humans , Machine Learning , Theoretical Models , Neural Networks (Computer) , X-Ray Computed Tomography/methods
2.
Methods ; 115: 100-109, 2017 02 15.
Article in English | MEDLINE | ID: mdl-28219745

ABSTRACT

This paper proposes a novel framework to help biologists explore and analyze neurons through retrieval from neuron morphological databases. In recent years, continuously expanding neuron databases have provided a rich source of information for associating neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow real-time similarity search in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when new neurons are added. To solve this problem, we extend binary coding with online updating schemes, which consider only the newly added neurons and update the model on the fly, without accessing the whole database. At the fine-grained level, we bring domain experts/users into the framework, who can give relevance feedback on the binary-coding-based retrieval results. This interactive strategy improves retrieval performance by re-ranking the coarse results with a newly designed similarity measure that takes the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use in assisting biologists to identify and explore unknown neurons.
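
To make the coarse retrieval step concrete, here is a minimal sketch of binary hashing and Hamming-distance search. The random-hyperplane encoder, 32-bit code length, and feature dimensionality are illustrative stand-ins for the paper's learned, online-updated binary coding model.

```python
# A minimal sketch of binary hashing and Hamming-distance retrieval; the
# random-hyperplane encoder and 32-bit codes are illustrative assumptions.
import numpy as np

def encode(features, hyperplanes):
    """Map real-valued morphology features to binary codes (one row per neuron)."""
    return (features @ hyperplanes.T > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, k=5):
    """Return indices and distances of the k codes closest to the query in Hamming space."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

rng = np.random.default_rng(0)
db_features = rng.standard_normal((17000, 43))   # toy stand-in for morphological features
hyperplanes = rng.standard_normal((32, 43))      # 32-bit codes
db_codes = encode(db_features, hyperplanes)
query_code = encode(db_features[123:124], hyperplanes)[0]
idx, d = hamming_search(query_code, db_codes)
print(idx, d)                                    # neuron 123 returns itself at distance 0
```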


Subjects
Artificial Intelligence , Computer-Assisted Image Processing/methods , Neurons/ultrastructure , Automated Pattern Recognition/methods , Factual Databases , Humans , Computer-Assisted Image Processing/statistics & numerical data , Information Storage and Retrieval , Neurons/classification
3.
J Neural Transm (Vienna) ; 124(1): 3-11, 2017 01.
Article in English | MEDLINE | ID: mdl-26704381

ABSTRACT

Rodents are the most commonly used preclinical model of human disease, used to assess the mechanism(s) involved and the role of genetics, epigenetics, and pharmacotherapy in disease, and to identify vulnerability factors and assess risk, all of which are critical to the development of improved treatment strategies. Unfortunately, most rodent preclinical studies use single-housed approaches, in which animals are either housed and tested in solitary environments or group-housed but tested in solitary environments. This approach ignores the important contribution of social interaction and social behavior. Social interaction in rodents is a major criterion for the ethological validity of rodent species-specific behavioral characteristics (Zurn et al. 2007; Analysis 2011). A significant and growing number of reports also illustrate the important role of social environment and social interaction in disease, with particular significance for neuropsychiatric diseases. Thus, it is imperative that research studies be able to add large-scale evaluations of social interaction and behavior in mice and to benefit from automated tracking and measurement of behaviors, which removes user bias and quantifies aspects of behavior that cannot be assessed by a human observer. Single-mouse setups are used routinely but cannot be easily extended to multiple-animal studies where social behavior is key, e.g., autism, depression, anxiety, substance and non-substance addictive disorders, aggression, sexual behavior, or parenting. While recent efforts focus on multiple-animal tracking alone, a significant limitation remains: the lack of insightful measures of social interaction. We present a novel, non-invasive, single-camera automated tracking method, the Mouse Social Test (MoST), and a set of measures designed to estimate the interactions of multiple mice interacting freely at the same time in the same environment. Our results show that these measures of social interaction are adaptable and applicable to most existing home-cage systems used in research and provide a more detailed analysis of social behavior than previously possible. The present study describes social behaviors assessed in a home-cage setup containing six mice that interact freely over long periods of time, and we illustrate how these measures can be interpreted and combined to classify rodent social behaviors, to analyze comprehensively rodent behaviors involved in several neuropsychiatric diseases, and to open opportunities for basic research on rodent behavior not previously possible.


Subjects
Laboratory Automation/methods , Animal Behavior , Animal Housing , Inbred C57BL Mice , Social Behavior , Actigraphy , Animals , Exploratory Behavior , Male , Inbred C57BL Mice/psychology , Motor Activity , Automated Pattern Recognition/methods , Recognition (Psychology)
4.
PLoS Comput Biol ; 10(7): e1003702, 2014 Jul.
Article in English | MEDLINE | ID: mdl-25033081

ABSTRACT

In the effort to define genes and specific neuronal circuits that control behavior and plasticity, the capacity for high-precision automated analysis of behavior is essential. We report comprehensive computer vision software for analysis of the swimming locomotion of C. elegans, a simple animal model initially developed to facilitate elaboration of genetic influences on behavior. The C. elegans swim test software CeleST tracks the swimming of multiple animals, measures 10 novel parameters of swim behavior that fully report dynamic changes in posture and speed, and generates data in several analysis formats, complete with statistics. Our measures of swim locomotion use a deformable model approach and a novel mathematical analysis of curvature maps that enables even irregular patterns and dynamic changes to be scored without the need for thresholding or dropping outlier swimmers from the study. Operation of CeleST is mostly automated and requires only minimal investigator intervention, such as the selection of videotaped swim trials and the choice of data output format. Data can be analyzed from the level of the single animal to populations of thousands. We document how the CeleST program reveals unexpected preferences for specific swim "gaits" in wild-type C. elegans, uncovers previously unknown mutant phenotypes, efficiently tracks changes in aging populations, and distinguishes "graceful" from poor aging. The sensitivity, dynamic range, and comprehensive nature of CeleST measures elevate swim locomotion analysis to a new level of ease, economy, and detail that enables behavioral plasticity resulting from genetic, cellular, or experience manipulation to be analyzed in ways not previously possible.
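
Since the posture measures rest on curvature maps computed along the body midline, here is a minimal sketch of discrete curvature at sampled midline points. The synthetic sinusoidal midline and the number of sample points are illustrative; CeleST derives the midline from segmented video frames.

```python
# A minimal sketch of a per-point curvature profile along a body midline, the
# kind of quantity a curvature map aggregates over time. The sinusoidal "worm"
# below is synthetic, not data from the paper.
import numpy as np

def curvature(points):
    """Signed curvature at each midline point from finite differences."""
    dx, dy = np.gradient(points[:, 0]), np.gradient(points[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)

s = np.linspace(0.0, 1.0, 50)                                   # arc-length samples
midline = np.stack([s, 0.1 * np.sin(4 * np.pi * s)], axis=1)    # one sinusoidal posture
print(curvature(midline).round(2))
```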


Subjects
Computational Biology/methods , Software , Swimming/physiology , Animals , Caenorhabditis elegans , Factual Databases , Biological Models , Phenotype
5.
Med Image Anal ; 91: 102996, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857067

ABSTRACT

This article discusses the opportunities, applications, and future directions of large-scale pretrained models, i.e., foundation models, which promise to significantly improve the analysis of medical images. Medical foundation models have immense potential for solving a wide range of downstream tasks, as they can help accelerate the development of accurate and robust models, reduce the dependence on large amounts of labeled data, and preserve the privacy and confidentiality of patient data. Specifically, we illustrate the "spectrum" of medical foundation models, ranging from general imaging models to modality-specific and organ/task-specific models, and highlight their challenges, opportunities, and applications. We also discuss how foundation models can be leveraged in downstream medical tasks to enhance the accuracy and efficiency of medical image analysis, leading to more precise diagnosis and treatment decisions.


Subjects
Diagnostic Imaging , Humans , Diagnostic Imaging/methods , Forecasting
6.
IEEE Trans Image Process ; 33: 3486-3495, 2024.
Article in English | MEDLINE | ID: mdl-38814773

ABSTRACT

Continuous sign language recognition (CSLR) aims to recognize the glosses in a sign language video. Enhancing the generalization ability of a CSLR model's visual feature extractor is a worthy area of investigation. In this paper, we model glosses as priors that help to learn more generalizable visual features. Specifically, a signer-invariant gloss feature is extracted by a pre-trained gloss BERT model. We then design a gloss prior guidance network (GPGN). It contains a novel parallel densely-connected temporal feature extraction (PDC-TFE) module for multi-resolution visual feature extraction; the PDC-TFE captures the complex temporal patterns of the glosses. The pre-trained gloss feature guides visual feature learning through a cross-modality matching loss. We formulate the cross-modality feature matching as a regularized optimal transport problem, which can be efficiently solved by a variant of the Sinkhorn algorithm. The GPGN parameters are learned by optimizing a weighted sum of the cross-modality matching loss and the CTC loss. Experimental results on German and Chinese sign language benchmarks demonstrate that the proposed GPGN achieves competitive performance. An ablation study verifies the effectiveness of several critical components of the GPGN. Furthermore, the proposed pre-trained gloss BERT model and cross-modality matching can be seamlessly integrated into other RGB-cue-based CSLR methods as plug-and-play formulations to enhance the generalization ability of the visual feature extractor.
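
For readers unfamiliar with the solver family referenced here, the sketch below runs plain Sinkhorn iterations for an entropically regularized transport problem between two small feature sets. The feature sizes, uniform marginals, and regularization strength are illustrative assumptions; the paper uses a variant of this algorithm inside its matching loss.

```python
# A minimal sketch of entropically regularized optimal transport solved with
# Sinkhorn iterations. Feature sizes, uniform marginals, and eps are toy values.
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iter=300):
    """Return the transport plan between histograms a and b for a given cost matrix."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                    # alternating scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # plan = diag(u) K diag(v)

rng = np.random.default_rng(0)
visual = rng.standard_normal((6, 16))        # e.g., temporal visual features
gloss = rng.standard_normal((4, 16))         # e.g., gloss (prior) features
cost = np.linalg.norm(visual[:, None] - gloss[None, :], axis=-1)
cost /= cost.max()                           # normalize for numerical stability
a, b = np.full(6, 1 / 6), np.full(4, 1 / 4)
P = sinkhorn(cost, a, b)
print(P.sum(axis=1), P.sum(axis=0))          # marginals approximately match a and b
```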

7.
Med Image Anal ; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of the pathological subtypes of lung cancer is important for follow-up treatment and prognosis management. In this paper, we propose a self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies showing that cross-scale associations exist between the image patterns of a case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks to derive from CT images the "gold standard" information contained in the corresponding pathological images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals) comparing our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.


Subjects
Lung Neoplasms , X-Ray Computed Tomography , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , X-Ray Computed Tomography/methods , Neural Networks (Computer) , Computer-Assisted Radiographic Image Interpretation/methods , Algorithms
8.
Med Image Anal ; 91: 102999, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37862866

ABSTRACT

Coronary CT angiography (CCTA) is an effective and non-invasive method for coronary artery disease diagnosis. Extracting an accurate coronary artery tree from a CCTA image is essential for centerline extraction, plaque detection, and stenosis quantification. In practice, data quality varies. Sometimes the arteries and veins have similar intensities and are located close together, which may prevent segmentation algorithms, even deep learning based ones, from obtaining accurate arteries. However, it is not always feasible to re-scan the patient for better image quality. In this paper, we propose an artery and vein disentanglement network (AVDNet) for robust and accurate segmentation by incorporating the coronary veins into the segmentation task. This is the first work to segment coronary arteries and veins at the same time. The AVDNet consists of an image based vessel recognition network (IVRN) and a topology based vessel refinement network (TVRN). IVRN learns to segment the arteries and veins, while TVRN learns to correct segmentation errors based on topology consistency. We also design a novel inverse distance weighted dice (IDD) loss function to recover more thin vessel branches and preserve the vascular boundaries. Extensive experiments are conducted on a multi-center dataset of 700 patients. Quantitative and qualitative results demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods and different variants. Prediction results of the AVDNet on the Automated Segmentation of Coronary Artery Challenge dataset are available at https://github.com/WennyJJ/Coronary-Artery-Vein-Segmentation for follow-up research.
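
The idea behind an inverse-distance-weighted Dice term can be sketched as a soft Dice loss whose voxel weights fall off with depth inside the vessel, so thin branches and boundaries dominate the objective. The exact weighting used by the IDD loss is not given in the abstract; the formulation below is an illustrative variant, not the paper's definition.

```python
# A minimal sketch of a distance-weighted soft Dice loss in the spirit of the
# IDD loss; the exact paper formulation is not reproduced here.
import torch
from scipy.ndimage import distance_transform_edt

def idd_like_loss(pred, target, eps=1e-6):
    """pred, target: (B, 1, D, H, W); pred in [0, 1], target binary."""
    w = torch.ones_like(target)
    for i in range(target.shape[0]):
        mask = target[i, 0].numpy().astype(bool)
        dist = distance_transform_edt(mask)                      # depth inside the vessel
        w[i, 0] = torch.from_numpy(1.0 / (dist + 1.0)).float()   # emphasize boundaries / thin parts
    inter = (w * pred * target).sum()
    denom = (w * pred).sum() + (w * target).sum()
    return 1.0 - 2.0 * inter / (denom + eps)

pred = torch.rand(1, 1, 8, 32, 32)
target = (torch.rand(1, 1, 8, 32, 32) > 0.7).float()
print(idd_like_loss(pred, target).item())
```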


Subjects
Algorithms , Coronary Vessels , Humans , Coronary Vessels/diagnostic imaging , X-Ray Computed Tomography/methods , Coronary Angiography/methods , Computed Tomography Angiography/methods , Computer-Assisted Image Processing/methods
9.
Article in English | MEDLINE | ID: mdl-38082617

ABSTRACT

Tooth segmentation from intraoral scans is a crucial part of digital dentistry, and many deep learning based tooth segmentation algorithms have been developed for this task. In most cases high accuracy has been achieved; however, most of the available tooth segmentation techniques make an implicit, restrictive assumption of a full jaw model and report accuracy based on full jaw models. Medically, however, a full jaw tooth scan is not always required or available. Given this practical issue, it is important to understand the robustness of the widely used deep learning based tooth segmentation techniques currently available. For this purpose, we applied available segmentation techniques to partial intraoral scans and discovered that they under-perform drastically. The analysis and comparison presented in this work help in understanding the severity of the problem and in developing robust tooth segmentation techniques that do not rely on the strong assumption of a full jaw model. Clinical relevance: Deep learning based tooth mesh segmentation algorithms have achieved high accuracy, and in the clinical setting the robustness of such methods is of utmost importance. We discovered that high-performing tooth segmentation methods under-perform when segmenting partial intraoral scans. In the current work, we conduct extensive experiments to show the extent of this problem. We also discuss why adding partial scans to the training data of tooth segmentation models is non-trivial. An in-depth understanding of this problem can help in developing robust tooth segmentation techniques.


Subjects
Deep Learning , Tooth , Algorithms , Tooth/diagnostic imaging , Radionuclide Imaging , Dental Models
10.
Sci Data ; 10(1): 231, 2023 04 21.
Article in English | MEDLINE | ID: mdl-37085533

ABSTRACT

The success of training computer-vision models heavily relies on the support of large-scale, real-world images with annotations. Yet such an annotation-ready dataset is difficult to curate in pathology due to privacy protection and the excessive annotation burden. To aid computational pathology, synthetic data generation, curation, and annotation present a cost-effective means to quickly enable the data diversity required to boost model performance at different stages. In this study, we introduce a large-scale synthetic pathological image dataset paired with annotations for nuclei semantic segmentation, termed Synthetic Nuclei and annOtation Wizard (SNOW). SNOW is built via a standardized workflow that applies an off-the-shelf image generator and nuclei annotator. The dataset contains 20k image tiles and 1,448,522 annotated nuclei in total, released under the CC-BY license. We show that SNOW can be used in both supervised and semi-supervised training scenarios. Extensive results suggest that synthetic-data-trained models are competitive under a variety of model training settings, expanding the scope for using synthetic images to enhance downstream data-driven clinical tasks.


Subjects
Breast Neoplasms , Deep Learning , Privacy , Workflow , Computer-Assisted Image Processing , Semantics , Humans , Female
11.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10409-10426, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022840

ABSTRACT

Modern medical imaging techniques, such as ultrasound (US) and cardiac magnetic resonance (MR) imaging, have enabled the evaluation of myocardial deformation directly from an image sequence. While many traditional cardiac motion tracking methods have been developed for the automated estimation of the myocardial wall deformation, they are not widely used in clinical diagnosis, due to their lack of accuracy and efficiency. In this paper, we propose a novel deep learning-based fully unsupervised method, SequenceMorph, for in vivo motion tracking in cardiac image sequences. In our method, we introduce the concept of motion decomposition and recomposition. We first estimate the inter-frame (INF) motion field between any two consecutive frames, by a bi-directional generative diffeomorphic registration neural network. Using this result, we then estimate the Lagrangian motion field between the reference frame and any other frame, through a differentiable composition layer. Our framework can be extended to incorporate another registration network, to further reduce the accumulated errors introduced in the INF motion tracking step, and to refine the Lagrangian motion estimation. By utilizing temporal information to perform reasonable estimations of spatio-temporal motion fields, this novel method provides a useful solution for image sequence motion tracking. Our method has been applied to US (echocardiographic) and cardiac MR (untagged and tagged cine) image sequences; the results show that SequenceMorph is significantly superior to conventional motion tracking methods, in terms of the cardiac motion tracking accuracy and inference efficiency.
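
The recomposition step described here, accumulating inter-frame motion into a Lagrangian field anchored at the reference frame, can be sketched with plain displacement fields and linear interpolation. The toy 2D fields and the use of scipy interpolation are illustrative assumptions; SequenceMorph implements this composition as a differentiable layer on fields predicted by its registration network.

```python
# A minimal sketch of composing inter-frame (INF) displacement fields into a
# Lagrangian field mapping the reference frame to frame t; fields are toy data.
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u_ref_to_prev, u_prev_to_cur):
    """u_ref_to_cur(x) = u_ref_to_prev(x) + u_prev_to_cur(x + u_ref_to_prev(x))."""
    h, w = u_ref_to_prev.shape[1:]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warped = [ys + u_ref_to_prev[0], xs + u_ref_to_prev[1]]
    sampled = np.stack([map_coordinates(u_prev_to_cur[c], warped, order=1, mode="nearest")
                        for c in range(2)])
    return u_ref_to_prev + sampled

rng = np.random.default_rng(0)
inf_fields = 0.5 * rng.standard_normal((3, 2, 64, 64))   # three consecutive INF fields
lagrangian = inf_fields[0]
for t in range(1, 3):                                    # accumulate frame by frame
    lagrangian = compose(lagrangian, inf_fields[t])
print(lagrangian.shape)                                  # reference-to-last-frame displacement
```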


Subjects
Algorithms , Unsupervised Machine Learning , Heart/diagnostic imaging , Motion (Physics) , Magnetic Resonance Imaging
12.
Article in English | MEDLINE | ID: mdl-38083011

ABSTRACT

Accurate liver tumor segmentation is a prerequisite for data-driven tumor analysis. Multiphase computed tomography (CT) with extensive liver tumor characteristics is typically used as the most crucial diagnostic basis. However, the large variations in contrast, texture, and tumor structure between CT phases limit the generalization capabilities of the associated segmentation algorithms. Inadequate feature integration across phases might also lead to a performance decrease. To address these issues, we present a domain-adversarial transformer (DA-Tran) network for segmenting liver tumors from multiphase CT images. A DA module is designed to generate domain-adapted feature maps from the non-contrast-enhanced (NC) phase, arterial (ART) phase, portal venous (PV) phase, and delay phase (DP) images. These domain-adapted feature maps are then combined with 3D transformer blocks to capture patch-structured similarity and global context attention. The experimental findings show that DA-Tran produces cutting-edge tumor segmentation outcomes, making it an ideal candidate for this co-segmentation challenge.
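
Domain-adversarial feature learning of the kind the DA module performs is commonly implemented with a gradient-reversal layer in front of a domain (here, CT-phase) classifier. Whether DA-Tran uses exactly this mechanism is not stated in the abstract; the feature size and the four-phase (NC/ART/PV/DP) classifier below are illustrative assumptions.

```python
# A minimal sketch of a gradient-reversal layer for domain-adversarial learning;
# the phase classifier and feature size are illustrative, not the paper's design.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None      # flip the gradient seen by the feature extractor

features = torch.randn(8, 128, requires_grad=True)   # per-phase CT features (toy)
phase_classifier = nn.Linear(128, 4)                 # predicts NC / ART / PV / DP
logits = phase_classifier(GradReverse.apply(features, 1.0))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,)))
loss.backward()                                      # feature gradients are reversed
print(features.grad.abs().mean().item())
```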


Subjects
Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Algorithms , Arteries , Electric Power Supplies , Generalization (Psychology)
13.
Article in English | MEDLINE | ID: mdl-38083482

ABSTRACT

Lung cancer is a malignant tumor with rapid progression and a high fatality rate. According to the histological morphology and cell behaviours of cancerous tissues, lung cancer can be classified into a variety of subtypes. Since different subtypes correspond to distinct therapies, early and accurate diagnosis is critical for subsequent treatment and prognostic management. In clinical practice, pathological examination is regarded as the gold standard for cancer subtype diagnosis, but its invasiveness limits its extensive use, making the non-invasive, fast-imaging computed tomography (CT) test a more commonly used modality in early cancer diagnosis. However, CT-based diagnosis is less accurate due to the relatively low image resolution and the atypical manifestations of cancer subtypes. In this work, we propose a novel automatic classification model to assist in accurately diagnosing lung cancer subtypes on CT images. Inspired by findings of cross-modality associations between CT images and their corresponding pathological images, our model incorporates general histopathological information into CT-based lung cancer subtype diagnosis without invasive tissue sample collection or biopsy, thereby augmenting diagnostic accuracy. Experimental results on both internal and external evaluation datasets demonstrate that our model outputs more accurate lung cancer subtype predictions than existing CT-based state-of-the-art (SOTA) classification models, achieving significant improvements in both accuracy (ACC) and area under the receiver operating characteristic curve (AUC). Clinical relevance: This work provides a method for automatically classifying lung cancer subtypes on CT images.


Subjects
Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Lung/pathology , X-Ray Computed Tomography/methods , Thorax , ROC Curve
14.
Med Image Anal ; 89: 102904, 2023 10.
Article in English | MEDLINE | ID: mdl-37506556

ABSTRACT

Generalization to previously unseen images with potential domain shifts is essential for clinically applicable medical image segmentation. Disentangling domain-specific and domain-invariant features is key for Domain Generalization (DG). However, existing DG methods struggle to achieve effective disentanglement. To address this problem, we propose an efficient framework called Contrastive Domain Disentanglement and Style Augmentation (CDDSA) for generalizable medical image segmentation. First, a disentanglement network decomposes the image into a domain-invariant anatomical representation and a domain-specific style code; the former is sent for further segmentation that is not affected by domain shift, and the disentanglement is regularized by a decoder that combines the anatomical representation and style code to reconstruct the original image. Second, to achieve better disentanglement, a contrastive loss is proposed to encourage the style codes from the same domain and from different domains to be compact and divergent, respectively. Finally, to further improve generalizability, we propose a style augmentation strategy to synthesize images with various unseen styles in real time while maintaining anatomical information. Comprehensive experiments on a public multi-site fundus image dataset and an in-house multi-site Nasopharyngeal Carcinoma Magnetic Resonance Image (NPC-MRI) dataset show that the proposed CDDSA achieves remarkable generalizability across domains and outperforms several state-of-the-art methods in generalizable segmentation. Code is available at https://github.com/HiLab-git/DAG4MIA.
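
The style-code contrastive term, compact within a domain and divergent across domains, can be sketched as an InfoNCE-style loss over a batch of style codes tagged with domain labels. The exact form of CDDSA's loss is not given in the abstract; the temperature and batch layout below are illustrative assumptions.

```python
# A minimal sketch of a domain-level contrastive loss over style codes; this
# InfoNCE-style variant is one common instantiation, not the paper's exact loss.
import torch
import torch.nn.functional as F

def domain_contrastive_loss(style_codes, domain_ids, tau=0.1):
    z = F.normalize(style_codes, dim=1)
    sim = z @ z.t() / tau                                        # scaled cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = (domain_ids[:, None] == domain_ids[None, :]).float()
    pos_mask = pos_mask.masked_fill(self_mask, 0.0)              # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    return -(pos_mask * log_prob).sum() / pos_mask.sum().clamp(min=1)

codes = torch.randn(12, 32)                                      # style codes from the encoder
domains = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])     # three sites, four images each
print(domain_contrastive_loss(codes, domains).item())
```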


Subjects
Computer-Assisted Image Processing , Humans , Fundus Oculi
15.
Nat Commun ; 14(1): 5510, 2023 09 07.
Article in English | MEDLINE | ID: mdl-37679325

ABSTRACT

Overcoming barriers to the use of multi-center data for medical analytics is challenging due to privacy protection and data heterogeneity in the healthcare system. In this study, we propose the Distributed Synthetic Learning (DSL) architecture to learn across multiple medical centers while ensuring the protection of sensitive personal information. DSL enables the building of a homogeneous dataset of entirely synthetic medical images via a form of GAN-based synthetic learning. The proposed DSL architecture has the following key functionalities: multi-modality learning, missing-modality completion learning, and continual learning. We systematically evaluate the performance of DSL on different medical applications using cardiac computed tomography angiography (CTA), brain tumor MRI, and histopathology nuclei datasets. Extensive experiments demonstrate the superior performance of DSL as a high-quality synthetic medical image provider, as assessed by a synthetic quality metric called Dist-FID. We show that DSL can be adapted to heterogeneous data and remarkably outperforms the real misaligned-modalities segmentation model by 55% and the temporal-datasets segmentation model by 8%.


Subjects
Brain Neoplasms , Learning , Humans , Angiography , Cell Nucleus , Computed Tomography Angiography
16.
Front Cell Dev Biol ; 11: 1242481, 2023.
Article in English | MEDLINE | ID: mdl-37635874

ABSTRACT

Intra-thymic T cell development is coordinated by the regulatory actions of the SATB1 genome organizer. In this report, we show that SATB1 is involved in the regulation of transcription and splicing, both of which were deregulated in Satb1 knockout murine thymocytes. More importantly, we characterized a novel SATB1 protein isoform and described its distinct biophysical behavior, implicating potential functional differences compared to the commonly studied isoform. SATB1 utilized its prion-like domains to transition through liquid-like states to aggregated structures. This behavior depended on protein concentration as well as on phosphorylation and interaction with nuclear RNA. Notably, the long SATB1 isoform was more prone to aggregation following phase separation. Thus, the tight regulation of SATB1 isoform expression levels, alongside protein post-translational modifications, is imperative for SATB1's mode of action in T cell development. Our data indicate that deregulation of these processes may also be linked to disorders such as cancer.

17.
IEEE J Biomed Health Inform ; 27(7): 3302-3313, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37067963

ABSTRACT

In recent years, several deep learning models have been proposed to accurately quantify and diagnose cardiac pathologies. These automated tools rely heavily on the accurate segmentation of cardiac structures in MRI images. However, segmentation of the right ventricle is challenging due to its highly complex shape and ill-defined borders. Hence, there is a need for new methods that handle this structure's geometrical and textural complexities, notably in the presence of pathologies such as Dilated Right Ventricle, Tricuspid Regurgitation, Arrhythmogenesis, Tetralogy of Fallot, and Inter-atrial Communication. The last MICCAI challenge on right ventricle segmentation was held in 2012 and included only 48 cases from a single clinical center. As part of the 12th Workshop on Statistical Atlases and Computational Models of the Heart (STACOM 2021), the M&Ms-2 challenge was organized to promote the research community's interest in right ventricle segmentation in multi-disease, multi-view, and multi-center cardiac MRI. Three hundred sixty CMR cases, including short-axis and long-axis 4-chamber views, were collected from three Spanish hospitals using nine different scanners from three vendors and included a diverse set of right and left ventricle pathologies. The solutions provided by the participants show that nnU-Net achieved the best results overall. However, multi-view approaches were able to capture additional information, highlighting the need to integrate multiple cardiac diseases, views, scanners, and acquisition protocols to produce reliable automatic cardiac segmentation algorithms.


Subjects
Deep Learning , Heart Ventricles , Humans , Heart Ventricles/diagnostic imaging , Magnetic Resonance Imaging/methods , Algorithms , Heart Atria
18.
Front Cardiovasc Med ; 9: 919810, 2022.
Article in English | MEDLINE | ID: mdl-35859582

ABSTRACT

Recent advances in magnetic resonance imaging are enabling the efficient creation of high-dimensional, multiparametric images, containing a wealth of potential information about the structure and function of many organs, including the cardiovascular system. However, the sizes of these rich data sets are so large that they are outstripping our ability to adequately visualize and analyze them, thus limiting their clinical impact. While there are some intrinsic limitations of human perception and of conventional display devices which hamper our ability to effectively use these data, newer computational methods for handling the data may aid our ability to extract and visualize the salient components of these high-dimensional data sets.

19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4758-4763, 2022 07.
Article in English | MEDLINE | ID: mdl-36086601

ABSTRACT

Multi-modality images are widely used and provide comprehensive information for medical image analysis. However, acquiring all modalities at every institute is costly and often impossible in clinical settings. To leverage more comprehensive multi-modality information, we propose a privacy-secured, decentralized multi-modality adaptive learning architecture named ModalityBank. Our method learns a set of effective domain-specific modulation parameters plugged into a common domain-agnostic network. We demonstrate that, by switching between different sets of configurations, the generator can output high-quality images for a specific modality. Our method can also complete missing modalities across all data centers and can therefore be used for modality completion. A downstream task trained on the synthesized multi-modality samples achieves higher performance than learning from one real data center and close-to-real performance compared with training on all real images.
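
The idea of domain-specific modulation parameters plugged into a shared, domain-agnostic network can be sketched as a FiLM-style scale-and-shift applied inside an otherwise shared convolutional block. ModalityBank's actual parameterization is not spelled out in the abstract; the layer sizes and modality names below are illustrative assumptions.

```python
# A minimal sketch of modality-specific modulation (FiLM-style scale and shift)
# inside a shared convolutional block; not the paper's exact parameterization.
import torch
from torch import nn

class ModulatedBlock(nn.Module):
    def __init__(self, channels, n_modalities):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # shared weights
        self.scale = nn.Embedding(n_modalities, channels)        # per-modality gamma
        self.shift = nn.Embedding(n_modalities, channels)        # per-modality beta
        nn.init.ones_(self.scale.weight)
        nn.init.zeros_(self.shift.weight)

    def forward(self, x, modality_id):
        h = self.conv(x)
        gamma = self.scale(modality_id)[:, :, None, None]
        beta = self.shift(modality_id)[:, :, None, None]
        return torch.relu(gamma * h + beta)

block = ModulatedBlock(channels=16, n_modalities=3)              # e.g., T1 / T2 / FLAIR
x = torch.randn(2, 16, 64, 64)
out = block(x, torch.tensor([0, 2]))                             # switch the modality per sample
print(out.shape)
```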


Subjects
Magnetic Resonance Imaging , Multimodal Imaging , Magnetic Resonance Imaging/methods , Multimodal Imaging/methods
20.
Med Image Anal ; 80: 102517, 2022 08.
Article in English | MEDLINE | ID: mdl-35732106

ABSTRACT

Although Convolutional Neural Networks (CNNs) have achieved promising performance in many medical image segmentation tasks, they rely on a large set of labeled images for training, which is expensive and time-consuming to acquire. Semi-supervised learning has shown the potential to alleviate this challenge by learning from a large set of unlabeled images and limited labeled samples. In this work, we present a simple yet efficient consistency regularization approach for semi-supervised medical image segmentation, called Uncertainty Rectified Pyramid Consistency (URPC). Inspired by pyramid feature networks, we use a pyramid-prediction network that obtains a set of segmentation predictions at different scales. For semi-supervised learning, URPC learns from unlabeled data by minimizing the discrepancy between each of the pyramid predictions and their average. We further present multi-scale uncertainty rectification to boost the pyramid consistency regularization, where the rectification tempers the consistency loss at outlier pixels whose predictions differ substantially from the average, potentially due to upsampling errors or a lack of labeled data. Experiments on two public datasets and an in-house clinical dataset showed that (1) URPC achieves a large performance improvement by utilizing unlabeled data, and (2) compared with five existing semi-supervised methods, URPC achieves better or comparable results with a simpler pipeline. Furthermore, we have built a semi-supervised medical image segmentation codebase to boost research on this topic: https://github.com/HiLab-git/SSL4MIS.
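
The unsupervised part of this objective can be sketched as a consistency loss between each pyramid prediction and their average, with a per-pixel weight that shrinks where the scales disagree. URPC's rectification is described only qualitatively in the abstract; the KL-based uncertainty and exponential weighting below are illustrative choices, not the paper's exact formulation.

```python
# A minimal sketch of pyramid consistency with uncertainty-based down-weighting
# of disagreeing pixels; an illustrative variant rather than URPC's exact loss.
import torch
import torch.nn.functional as F

def pyramid_consistency(preds):
    """preds: list of softmax maps (B, C, H, W) already upsampled to a common size."""
    stack = torch.stack(preds)                       # (S, B, C, H, W)
    mean = stack.mean(dim=0)
    # Per-pixel uncertainty: average KL divergence of each scale from the mean prediction.
    kl = (stack * (stack.clamp_min(1e-8).log() - mean.clamp_min(1e-8).log())).sum(dim=2).mean(dim=0)
    weight = torch.exp(-kl)                          # temper the loss at outlier pixels
    mse = ((stack - mean) ** 2).sum(dim=2).mean(dim=0)
    return (weight * mse).mean()

preds = [F.softmax(torch.randn(2, 4, 64, 64), dim=1) for _ in range(4)]   # four pyramid scales
print(pyramid_consistency(preds).item())
```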


Subjects
Neural Networks (Computer) , Supervised Machine Learning , Humans , Computer-Assisted Image Processing/methods , Uncertainty