ABSTRACT
Neuroimaging studies typically adopt a common feature space for all data, which may obscure aspects of neuroanatomy only observable in subsets of a population, e.g. cortical folding patterns unique to individuals or shared by close relatives. Here, we propose to model individual variability using a distinctive keypoint signature: a set of unique, localized patterns, detected automatically in each image by a generic saliency operator. The similarity of an image pair is then quantified by the proportion of keypoints they share using a novel Jaccard-like measure of set overlap. Experiments demonstrate the keypoint method to be highly efficient and accurate, using a set of 7536 T1-weighted MRIs pooled from four public neuroimaging repositories, including twins, non-twin siblings, and 3334 unique subjects. All same-subject image pairs are identified by a similarity threshold despite confounds including aging and neurodegenerative disease progression. Outliers reveal previously unknown data labeling inconsistencies, demonstrating the usefulness of the keypoint signature as a computational tool for curating large neuroimage datasets.
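The core similarity computation lends itself to a very small sketch. The snippet below is a minimal illustration, not the authors' implementation: the paper describes its measure only as "Jaccard-like", so plain Jaccard overlap of keypoint identifiers is used here as an assumption.

```python
# Minimal sketch: Jaccard overlap between two keypoint signatures,
# each represented as a set of keypoint identifiers (an assumption;
# the paper's actual "Jaccard-like" measure may differ).
def keypoint_similarity(keypoints_a, keypoints_b):
    a, b = set(keypoints_a), set(keypoints_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two scans sharing 3 of 5 unique keypoints score 0.6.
print(keypoint_similarity({1, 2, 3, 4}, {2, 3, 4, 5}))
```

A same-subject pair would then be flagged whenever this score exceeds the similarity threshold mentioned above.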
Subjects
Brain/anatomy & histology , Brain/diagnostic imaging , Datasets as Topic , Neuroimaging/methods , Pattern Recognition, Automated/methods , Siblings , Adolescent , Adult , Aged , Aged, 80 and over , Aging/pathology , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Neurodegenerative Diseases/diagnostic imaging , Neurodegenerative Diseases/pathology , Young Adult
ABSTRACT
Accurate prediction of individuals' brain age is critical to establish a baseline for normal brain development. This study proposes to model brain development with a novel non-negative projective dictionary learning (NPDL) approach, which learns a discriminative representation of multi-modal neuroimaging data for predicting brain age. Our approach encodes the variability of subjects in different age groups using separate dictionaries, projecting features into a low-dimensional manifold such that information is preserved only for the corresponding age group. The proposed framework improves upon previous discriminative dictionary learning methods by incorporating orthogonality and non-negativity constraints, which remove representation redundancy and perform implicit feature selection. We study brain development on multi-modal brain imaging data from the PING dataset (N = 841, age = 3-21 years). The proposed analysis uses our NPDL framework to predict the age of subjects based on cortical measures from T1-weighted MRI and connectome measures from diffusion-weighted imaging (DWI). We also investigate the association between age prediction and cognition, and study the influence of gender on prediction accuracy. Experimental results demonstrate the usefulness of NPDL for modeling brain development.
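As a rough illustration of how separate per-group dictionaries could drive prediction, the sketch below assigns a subject to the age group whose dictionary best reconstructs its feature vector. The variable names, the non-negativity of the code, and the nearest-reconstruction rule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Hedged sketch: one learned dictionary D_k and analysis projection P_k
# per age group k; assign a new subject to the group whose projection
# best reconstructs its feature vector x (illustrative rule only).
def predict_age_group(x, dictionaries, projections):
    errors = []
    for D, P in zip(dictionaries, projections):
        code = np.maximum(P @ x, 0.0)        # non-negative code (assumed)
        errors.append(np.linalg.norm(x - D @ code))
    return int(np.argmin(errors))            # index of best-fitting group
```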
Subjects
Brain/growth & development , Cerebral Cortex/growth & development , Magnetic Resonance Imaging/methods , Models, Theoretical , Neuroimaging/methods , Adolescent , Adult , Age Factors , Brain/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Child , Child, Preschool , Diffusion Tensor Imaging/methods , Female , Humans , Male , Nerve Net/diagnostic imaging , Nerve Net/growth & development , Sex Factors , Young Adult
ABSTRACT
Due to occlusion or detached markers, information is often lost while capturing human motion with optical tracking systems. Based on three natural properties of human gait movement, this study presents two different approaches to recover corrupted motion data. These properties are used to define a reconstruction model combining low-rank matrix completion of the measured data with a group-sparsity prior on the marker trajectories mapped in the frequency domain. Unlike most existing approaches, the proposed methodology is fully unsupervised and does not need training data or kinematic information about the user. We evaluated our methods on four different gait datasets with various gap lengths and compared their performance with a state-of-the-art approach based on principal component analysis (PCA). Our results showed that the proposed methods recover missing data more precisely, reducing the mean reconstruction error by at least 2 mm compared to the literature method. When only a small number of marker trajectories is available, the reduction in mean reconstruction error exceeded 14 mm.
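To make the low-rank component concrete, here is a hedged sketch of gap filling by iterative singular value thresholding; the full model also includes the frequency-domain group-sparsity prior, which is omitted here, and the parameter values are illustrative.

```python
import numpy as np

# Illustrative sketch: recover missing marker data by iterative singular
# value thresholding (SVT), enforcing a low-rank trajectory matrix.
# The group-sparsity prior of the full model is omitted for brevity.
def complete_low_rank(X, mask, tau=1.0, n_iter=200):
    Z = np.where(mask, X, 0.0)                          # init gaps at zero
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Z = np.where(mask, X, Z)                        # keep observed entries
    return Z
```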
Subjects
Algorithms , Gait , Movement , Humans , Monitoring, Physiologic , Motion , Principal Component Analysis
ABSTRACT
This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have generally been avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data.
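The two key design choices, small 3x3x3 kernels and the fusion of intermediate-layer outputs into the final prediction, can be sketched in a few lines of PyTorch. This toy network illustrates the idea only; it is not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Toy sketch: small 3x3x3 kernels keep memory low, and intermediate
# feature maps are concatenated into the final prediction to combine
# local and global context (layer sizes are illustrative).
class Small3DCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv3d(16 + 32, n_classes, 1)  # fuse both scales

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        return self.classifier(torch.cat([f1, f2], dim=1))
```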
Subjects
Corpus Striatum/anatomy & histology , Corpus Striatum/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Thalamus/anatomy & histology , Thalamus/diagnostic imaging , Adolescent , Adult , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/pathology , Child , Datasets as Topic , Humans , Middle Aged , Young Adult
ABSTRACT
This work presents an efficient framework, based on manifold approximation, for generating brain fingerprints from multi-modal data. The proposed framework represents images as bags of local features which are used to build a subject proximity graph. Compact fingerprints are obtained by projecting this graph in a low-dimensional manifold using spectral embedding. Experiments using the T1/T2-weighted MRI, diffusion MRI, and resting-state fMRI data of 945 Human Connectome Project subjects demonstrate the benefit of combining multiple modalities, with multi-modal fingerprints more discriminative than those generated from individual modalities. Results also highlight the link between fingerprint similarity and genetic proximity, monozygotic twins having more similar fingerprints than dizygotic or non-twin siblings. This link is also reflected in the differences of feature correspondences between twin/sibling pairs, occurring in major brain structures and across hemispheres. The robustness of the proposed framework to factors like image alignment and scan resolution, as well as the reproducibility of results on retest scans, suggest the potential of multi-modal brain fingerprinting for characterizing individuals in a large cohort analysis.
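The spectral-embedding step can be illustrated with scikit-learn, assuming the subject proximity graph has already been built from matched local features; the affinity matrix below is a random placeholder.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Sketch under assumptions: a symmetric subject-by-subject proximity
# matrix (e.g., counts of matched local features) is embedded into a
# low-dimensional manifold to obtain compact fingerprints.
proximity = np.random.rand(100, 100)         # placeholder affinities
proximity = (proximity + proximity.T) / 2    # symmetrize the graph

embedder = SpectralEmbedding(n_components=10, affinity="precomputed")
fingerprints = embedder.fit_transform(proximity)   # shape (100, 10)
```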
Subjects
Brain , Functional Neuroimaging/methods , Individuality , Magnetic Resonance Imaging/methods , Siblings , Twins , Adult , Brain/anatomy & histology , Brain/diagnostic imaging , Brain/physiology , Cohort Studies , Connectome/methods , Diffusion Magnetic Resonance Imaging/methods , Female , Humans , Male , Young Adult
ABSTRACT
The human cerebellum plays an essential role in motor control, is involved in cognitive function (i.e., attention, working memory, and language), and helps to regulate emotional responses. Quantitative in-vivo assessment of the cerebellum is important in the study of several neurological diseases including cerebellar ataxia, autism, and schizophrenia. Different structural subdivisions of the cerebellum have been shown to correlate with differing pathologies. To further understand these pathologies, it is helpful to automatically parcellate the cerebellum at the highest fidelity possible. In this paper, we coordinated with colleagues around the world to evaluate automated cerebellum parcellation algorithms on two clinical cohorts, showing that the cerebellum can be parcellated with high accuracy by newer methods. We characterize these various methods at four hierarchical levels: coarse (i.e., whole cerebellum and gross structures), lobe, subdivisions of the vermis, and the lobules. Due to the number of labels, the hierarchy of labels, the number of algorithms, and the two cohorts, we have restricted our analyses to the Dice measure of overlap. Under these conditions, machine-learning-based methods provide a collection of strategies that are efficient and deliver parcellations of a high standard across both cohorts, surpassing previous work in the area. In conjunction with the rank-sum computation, we identified an overall winning method.
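For reference, the Dice overlap used throughout this evaluation is straightforward to compute; a minimal version for binary label masks is:

```python
import numpy as np

# Dice overlap between a predicted and a reference label mask.
def dice(pred, ref):
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0
```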
Subjects
Attention Deficit Disorder with Hyperactivity/diagnostic imaging , Autism Spectrum Disorder/diagnostic imaging , Cerebellar Ataxia/diagnostic imaging , Cerebellum/diagnostic imaging , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adult , Child , Cohort Studies , Female , Humans , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging/standards , Male , Neuroimaging/standards
ABSTRACT
White matter characterization studies use the information provided by diffusion magnetic resonance imaging (dMRI) to draw cross-population inferences. However, structure, function, and white matter geometry vary across individuals. Here, we propose a subject fingerprint, called Fiberprint, to quantify individual uniqueness in white matter geometry using fiber trajectories. We learn a sparse coding representation for fiber trajectories by mapping them to a common space defined by a dictionary. A subject fingerprint is then generated by applying a pooling function for each bundle, thus providing a vector of bundle-wise features describing a particular subject's white matter geometry. These features encode unique properties of fiber trajectories, such as their density along prominent bundles. An analysis of data from 861 Human Connectome Project subjects reveals that a fingerprint based on approximately 3000 fiber trajectories can uniquely identify exemplars from the same individual. We also use fingerprints for twin/sibling identification, with observations consistent with twin studies of white matter integrity. Our results demonstrate that the proposed Fiberprint can effectively capture the variability in white matter fiber geometry across individuals using a compact feature vector (dimension of 50), making this framework particularly attractive for handling large datasets.
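The bundle-wise pooling step can be sketched as follows; the pooling function and code layout are assumptions made for illustration, as the paper's exact pooling may differ.

```python
import numpy as np

# Hedged sketch: sparse codes of individual fiber trajectories are
# average-pooled per bundle into a compact 50-dimensional fingerprint
# (the exact pooling function used in the paper may differ).
def fiberprint(sparse_codes, bundle_labels, n_bundles=50):
    fingerprint = np.zeros(n_bundles)
    for b in range(n_bundles):
        codes_b = sparse_codes[bundle_labels == b]
        if len(codes_b) > 0:
            fingerprint[b] = np.abs(codes_b).sum(axis=1).mean()
    return fingerprint
```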
Subjects
Brain/anatomy & histology , Diffusion Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Neuroimaging/methods , White Matter/anatomy & histology , Humans
ABSTRACT
BACKGROUND: Emerging evidence suggests the presence of neuroanatomical abnormalities in subjects with autism spectrum disorder (ASD). Identifying anatomical correlates could thus prove useful for the automated diagnosis of ASD. Radiomic analyses based on MRI texture features have shown great potential for characterizing differences arising from tissue heterogeneity, and for identifying abnormalities related to these differences. However, only a limited number of studies have investigated the link between image texture and ASD. This paper proposes the study of texture features based on the grey level co-occurrence matrix (GLCM) as a means of characterizing differences between ASD and development control (DC) subjects. Our study uses 64 T1-weighted MRI scans acquired from two groups of subjects: 28 subjects in a typical age range of 4-15 years (14 ASD and 14 DC, age-matched), and 36 subjects in a non-typical age range of 10-24 years (20 ASD and 16 DC). GLCM matrices are computed from manually labeled hippocampus and amygdala regions, and then encoded as texture features by applying 11 standard Haralick quantifier functions. Significance tests are performed to identify texture differences between ASD and DC subjects. An analysis using SVM and random forest classifiers is then carried out to find the most discriminative features, and to use these features for classifying ASD from DC subjects. RESULTS: Preliminary results show that all 11 features derived from the hippocampus (typical and non-typical age) and 4 features extracted from the amygdala (non-typical age) have significantly different distributions in ASD subjects compared to DC subjects, with a significance of p < 0.05 following Holm-Bonferroni correction. Features derived from hippocampal regions also demonstrate high discriminative power for differentiating between ASD and DC subjects, with a classifier accuracy of 67.85%, sensitivity of 62.50%, specificity of 71.42%, and an area under the ROC curve (AUC) of 76.80% for age-matched subjects in the typical age range. CONCLUSIONS: Results demonstrate the potential of hippocampal texture features as a biomarker for the diagnosis and characterization of ASD.
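Several of the Haralick quantifiers used here are available directly in scikit-image; the snippet below is an illustration on a random patch rather than actual hippocampus data, showing the GLCM feature-extraction pattern.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Illustrative sketch: GLCM texture features from a 2D patch of a
# labeled region (random placeholder data; the study uses 11 Haralick
# quantifiers, a subset of which scikit-image provides directly).
region = np.random.randint(0, 8, size=(32, 32), dtype=np.uint8)
glcm = graycomatrix(region, distances=[1], angles=[0], levels=8,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```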
Subjects
Amygdala/physiopathology , Autism Spectrum Disorder/physiopathology , Hippocampus/physiopathology , Image Processing, Computer-Assisted , Adolescent , Area Under Curve , Autism Spectrum Disorder/diagnostic imaging , Biomarkers/analysis , Child , Child, Preschool , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Sensitivity and Specificity
ABSTRACT
Inadequate postures adopted by an operator at work are among the most important risk factors for work-related musculoskeletal disorders (WMSDs). Although several studies have focused on inadequate posture, there is limited information on its identification in a work context. The aim of this study is to automatically differentiate between adequate and inadequate postures using two wearable devices (helmet and instrumented insole) with an inertial measurement unit (IMU) and force sensors. From the force sensors located inside the insole, the center of pressure (COP) is computed, as it is considered an important parameter in the analysis of posture. In a first step, a set of 60 features is computed with a direct approach, and later reduced to eight via hybrid feature selection. A neural network is then employed to classify the current posture of a worker, yielding a recognition rate of 90%. In a second step, an innovative graphic approach is proposed to extract three additional features for the classification. This approach represents the main contribution of this study. Combining both approaches improves the recognition rate to 95%. Our results suggest that neural networks can be applied successfully to the classification of adequate and inadequate postures.
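The COP computation from insole force sensors reduces to a force-weighted average of sensor positions; a minimal sketch follows, with a placeholder sensor layout.

```python
import numpy as np

# Minimal sketch: center of pressure (COP) as the force-weighted
# average of the insole force-sensor positions.
def center_of_pressure(forces, positions):
    forces = np.asarray(forces, dtype=float)
    positions = np.asarray(positions, dtype=float)   # shape (n_sensors, 2)
    return (forces[:, None] * positions).sum(axis=0) / forces.sum()

print(center_of_pressure([10, 20, 30], [[0, 0], [1, 0], [0, 1]]))
```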
Subjects
Wearable Electronic Devices , Humans , Movement , Neural Networks, Computer , Posture , Pressure
ABSTRACT
Despite the remarkable progress in semi-supervised medical image segmentation methods based on deep learning, their application to real-life clinical scenarios still faces considerable challenges. For example, insufficient labeled data often makes it difficult for networks to capture the complexity and variability of the anatomical regions to be segmented. To address these problems, we design a new semi-supervised segmentation framework that aspires to produce anatomically plausible predictions. Our framework comprises two parallel networks, shape-agnostic and shape-aware, which learn from each other, enabling effective utilization of unlabeled data. The shape-aware network implicitly introduces shape guidance to capture fine-grained shape information, while the shape-agnostic network employs uncertainty estimation to provide reliable pseudo-labels for its counterpart. We also employ a cross-style consistency strategy to enhance the networks' utilization of unlabeled data: it enriches the dataset to prevent overfitting and further loosens the coupling between the two networks that learn from each other. Our proposed architecture also incorporates a novel loss term that facilitates the learning of the local context of segmentation, thereby enhancing the overall accuracy of prediction. Experiments on three different medical image datasets show that our method outperforms several state-of-the-art semi-supervised segmentation methods, particularly in capturing anatomical shape. The code is available at https://github.com/igip-liu/SLC-Net.
Subjects
Image Processing, Computer-Assisted , Supervised Machine Learning , Uncertainty
ABSTRACT
Self-supervised representation learning can boost the performance of a pre-trained network on downstream tasks for which labeled data is limited. A popular method based on this paradigm, known as contrastive learning, works by constructing sets of positive and negative pairs from the data, and then pulling closer the representations of positive pairs while pushing apart those of negative pairs. Although contrastive learning has been shown to improve performance in various classification tasks, its application to image segmentation has been more limited. This stems in part from the difficulty of defining positive and negative pairs for dense feature maps without having access to pixel-wise annotations. In this work, we propose a novel self-supervised pre-training method that overcomes the challenges of contrastive learning in image segmentation. Our method leverages Invariant Information Clustering (IIC) as an unsupervised task to learn a local representation of images in the decoder of a segmentation network, but addresses three important drawbacks of this approach: (i) the difficulty of optimizing the loss based on mutual information maximization; (ii) the lack of clustering consistency for different random transformations of the same image; (iii) the poor correspondence of clusters obtained by IIC with region boundaries in the image. Toward this goal, we first introduce a regularized mutual information maximization objective that encourages the learned clusters to be balanced and consistent across different image transformations. We also propose a boundary-aware loss based on cross-correlation, which helps the learned clusters to be more representative of important regions in the image. Compared to contrastive learning applied to dense features, our method does not require computing positive and negative pairs and also enhances interpretability through the visualization of learned clusters. Comprehensive experiments involving four different medical image segmentation tasks reveal the high effectiveness of our self-supervised representation learning method. Our results show the proposed method to outperform by a large margin several state-of-the-art self-supervised and semi-supervised approaches for segmentation, reaching a performance close to full supervision with only a few labeled examples.
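The first of these contributions, a balance-regularized mutual-information objective over paired cluster assignments, can be sketched as below. The weighting scheme is one common way to encourage balanced clusters and is an assumption here, not necessarily the paper's exact formulation.

```python
import torch

# Hedged sketch: IIC-style loss maximizing the mutual information
# between soft cluster assignments p1, p2 (shape (N, K)) of an image
# and its transformed version. A weight lam > 1 on the marginal terms
# encourages balanced clusters (assumed regularization).
def mi_loss(p1, p2, lam=1.0, eps=1e-8):
    joint = (p1.t() @ p2) / p1.shape[0]      # K x K joint distribution
    joint = (joint + joint.t()) / 2          # symmetrize
    m1 = joint.sum(dim=1, keepdim=True)      # marginal of p1
    m2 = joint.sum(dim=0, keepdim=True)      # marginal of p2
    mi = (joint * (torch.log(joint + eps)
                   - lam * torch.log(m1 + eps)
                   - lam * torch.log(m2 + eps))).sum()
    return -mi                               # minimize negative MI
```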
Subjects
Image Processing, Computer-Assisted , Learning , Humans , Supervised Machine Learning
ABSTRACT
Recently, deep reinforcement learning (RL) has been proposed to learn the tractography procedure and train agents to reconstruct the structure of the white matter without manually curated reference streamlines. While the performances reported were competitive, the proposed framework is complex, and little is known about the role and impact of its multiple parts. In this work, we thoroughly explore the different components of the framework, such as the choice of RL algorithm, seeding strategy, input signal and reward function, and shed light on their impact. Approximately 7,400 models were trained for this work, totalling nearly 41,000 h of GPU time. Our goal is to guide researchers eager to explore the possibilities of deep RL for tractography by exposing what works and what does not with this category of approaches. We ultimately propose a series of recommendations concerning the choice of RL algorithm, the input to the agents, the reward function and more, to help future work using reinforcement learning for tractography. We also release the open-source codebase, trained models, and datasets for users and researchers wanting to explore reinforcement learning for tractography.
Subjects
Learning , Reinforcement, Psychology , Humans , Reward , Algorithms
ABSTRACT
The performance of learning-based algorithms improves with the amount of labelled data used for training. Yet, manually annotating data is particularly difficult for medical image segmentation tasks because of the limited expert availability and intensive manual effort required. To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set. On the one hand, most active learning works have focused on the classification or limited segmentation of natural images, despite active learning being highly desirable in the difficult task of medical image segmentation. On the other hand, uncertainty-based AL approaches notoriously offer sub-optimal batch-query strategies, while diversity-based methods tend to be computationally expensive. Over and above these methodological hurdles, random sampling has proven an extremely difficult baseline to outperform when varying learning and sampling conditions. This work aims to take advantage of the diversity and speed offered by random sampling to improve sample selection in uncertainty-based AL methods for medical image segmentation. More specifically, we propose to compute uncertainty at the level of batches instead of samples, through an original use of stochastic batches (SB) during sampling in AL. Stochastic batch querying is a simple and effective add-on that can be used on top of any uncertainty-based metric. Extensive experiments on two medical image segmentation datasets show that our strategy consistently improves conventional uncertainty-based sampling methods. Our method can hence act as a strong baseline for medical image segmentation. The code is available at: https://github.com/Minimel/StochasticBatchAL.git.
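The stochastic-batch idea itself fits in a few lines: score uncertainty over randomly drawn batches rather than single samples, then query the most uncertain batch. The sketch below is a simplified illustration; the mean aggregation and variable names are assumptions.

```python
import numpy as np

# Simplified sketch of stochastic batches (SB): draw random candidate
# batches, score each by its mean sample uncertainty, and query the
# most uncertain batch for annotation.
def select_stochastic_batch(uncertainties, batch_size, n_batches, rng):
    n = len(uncertainties)
    batches = [rng.choice(n, size=batch_size, replace=False)
               for _ in range(n_batches)]
    scores = [uncertainties[idx].mean() for idx in batches]
    return batches[int(np.argmax(scores))]

rng = np.random.default_rng(0)
query = select_stochastic_batch(rng.random(1000), batch_size=16,
                                n_batches=50, rng=rng)
```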
ABSTRACT
Reconstructing and segmenting cortical surfaces from MRI is essential to a wide range of brain analyses. However, most approaches follow a slow multi-step process, such as sequential spherical inflation and registration, which requires considerable computation time. To overcome the limitations arising from this multi-step approach, we propose SegRecon, an integrated end-to-end deep learning method that jointly reconstructs and segments cortical surfaces directly from an MRI volume in one single step. We train a volume-based neural network to predict, for each voxel, the signed distances to multiple nested surfaces and their corresponding spherical representation in atlas space. This is, for instance, useful for jointly reconstructing and segmenting the white-to-gray-matter interface and the gray-matter-to-CSF (pial) surface. We evaluate the performance of our surface reconstruction and segmentation method with a comprehensive set of experiments on the MindBoggle, ABIDE and OASIS datasets. Our reconstruction error is found to be less than 0.52 mm and 0.97 mm in terms of average Hausdorff distance to the FreeSurfer-generated surfaces. Likewise, the parcellation results show over 4% improvement in average Dice with respect to FreeSurfer, in addition to a drastic speed-up from hours to seconds of computation on a standard desktop workstation.
ABSTRACT
Deep learning methods have shown outstanding potential in dermatology for skin lesion detection and identification. However, they usually require annotations beforehand and can only classify lesion classes seen in the training set. Moreover, large-scale, open-source medical datasets normally have far fewer annotated classes than encountered in real life, further aggravating the problem. This paper proposes a novel method called DNF-OOD, which applies a non-parametric, deep-forest-based approach to the problem of out-of-distribution (OOD) detection. By leveraging a maximum probabilistic routing strategy and an over-confidence penalty term, the proposed method can achieve better performance on the task of detecting OOD skin lesion images, which is challenging due to the large intra-class variability in such images. We evaluate our OOD detection method on images from two large, publicly available skin lesion datasets, ISIC2019 and DermNet, and compare it against recently proposed approaches. Results demonstrate the potential of our DNF-OOD framework for detecting OOD skin images.
Subjects
Deep Learning , Skin Diseases , Humans , Skin
ABSTRACT
Despite achieving promising results in a breadth of medical image segmentation tasks, deep neural networks (DNNs) require large training datasets with pixel-wise annotations. Obtaining these curated datasets is a cumbersome process which limits the applicability of DNNs in scenarios where annotated images are scarce. Mixed supervision is an appealing alternative for mitigating this obstacle. In this setting, only a small fraction of the data contains complete pixel-wise annotations and other images have a weaker form of supervision, e.g., only a handful of pixels are labeled. In this work, we propose a dual-branch architecture, where the upper branch (teacher) receives strong annotations, while the bottom one (student) is driven by limited supervision and guided by the upper branch. Combined with a standard cross-entropy loss over the labeled pixels, our novel formulation integrates two important terms: (i) a Shannon entropy loss defined over the less-supervised images, which encourages confident student predictions in the bottom branch; and (ii) a Kullback-Leibler (KL) divergence term, which transfers the knowledge (i.e., predictions) of the strongly supervised branch to the less-supervised branch and guides the entropy (student-confidence) term to avoid trivial solutions. We show that the synergy between the entropy and KL divergence yields substantial improvements in performance. We also discuss an interesting link between Shannon-entropy minimization and standard pseudo-mask generation, and argue that the former should be preferred over the latter for leveraging information from unlabeled pixels. We evaluate the effectiveness of the proposed formulation through a series of quantitative and qualitative experiments using two publicly available datasets. Results demonstrate that our method significantly outperforms other strategies for semantic segmentation within a mixed-supervision framework, as well as recent semi-supervised approaches. Moreover, in line with recent observations in classification, we show that the branch trained with reduced supervision and guided by the top branch largely outperforms the latter. Our code is publicly available: https://github.com/by-liu/ConfKD.
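The two extra terms can be sketched in PyTorch as follows; the loss weights and tensor names are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the two terms described above: (i) Shannon entropy
# on student predictions for less-supervised images, and (ii) a KL term
# transferring teacher predictions to the student.
def mixed_supervision_loss(student_logits, teacher_logits,
                           lambda_ent=0.1, lambda_kl=1.0):
    p_student = F.softmax(student_logits, dim=1)
    entropy = -(p_student * torch.log(p_student + 1e-8)).sum(dim=1).mean()
    kl = F.kl_div(F.log_softmax(student_logits, dim=1),
                  F.softmax(teacher_logits.detach(), dim=1),
                  reduction="batchmean")
    return lambda_ent * entropy + lambda_kl * kl
```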
Subjects
Neural Networks, Computer , Semantics , Humans , Entropy
ABSTRACT
We present an unsupervised domain adaptation method for image segmentation which aligns high-order statistics, computed for the source and target domains, that encode domain-invariant spatial relationships between segmentation classes. Our method first estimates the joint distribution of predictions for pairs of pixels whose relative position corresponds to a given spatial displacement. Domain adaptation is then achieved by aligning the joint distributions of source and target images, computed for a set of displacements. Two enhancements of this method are proposed. The first uses an efficient multi-scale strategy that enables capturing long-range relationships in the statistics. The second extends the joint distribution alignment loss to features in intermediate layers of the network by computing their cross-correlation. We test our method on the task of unpaired multi-modal cardiac segmentation using the Multi-Modality Whole Heart Segmentation Challenge dataset, and on a prostate segmentation task in which images from two datasets are treated as different domains. Our results show the advantages of our method compared to recent approaches for cross-domain image segmentation. Code is available at https://github.com/WangPing521/Domain_adaptation_shape_prior.
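The central statistic, the joint class distribution of pixel pairs at a fixed displacement, can be estimated directly from softmax maps; the sketch below illustrates the idea, with tensor shapes assumed to be (B, C, H, W).

```python
import torch

# Illustrative sketch: joint class distribution of pixel pairs separated
# by displacement (dy, dx), computed from softmax predictions of shape
# (B, C, H, W). Aligning source and target joints (e.g., with an L1
# loss) then drives the adaptation.
def pairwise_joint(probs, dy, dx):
    B, C, H, W = probs.shape
    p1 = probs[:, :, max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
    p2 = probs[:, :, max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    joint = torch.einsum("bchw,bkhw->ck", p1, p2)   # C x C co-occurrences
    return joint / joint.sum()                      # normalize to a pmf
```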
Subjects
Heart , Pelvis , Male , Humans , Heart/diagnostic imaging , Prostate , Image Processing, Computer-Assisted
ABSTRACT
Deep learning models for semi-supervised medical image segmentation have achieved unprecedented performance for a wide range of tasks. Despite their high accuracy, these models may nevertheless yield predictions that clinicians consider anatomically impossible. Moreover, incorporating complex anatomical constraints into standard deep learning frameworks remains challenging due to their non-differentiable nature. To address these limitations, we propose a Constrained Adversarial Training (CAT) method that learns how to produce anatomically plausible segmentations. Unlike approaches focusing solely on accuracy measures like Dice, our method considers complex anatomical constraints, such as connectivity, convexity, and symmetry, which cannot easily be modeled in a loss function. The problem of non-differentiable constraints is solved using the REINFORCE algorithm, which provides a gradient for violated constraints. To generate constraint-violating examples on the fly, and thereby obtain useful gradients, our method adopts an adversarial training strategy which modifies training images to maximize the constraint loss, and then updates the network to be robust to these adversarial examples. The proposed method offers a generic and efficient way to add complex segmentation constraints on top of any segmentation network. Experiments on synthetic data and four clinically-relevant datasets demonstrate the effectiveness of our method in terms of segmentation accuracy and anatomical plausibility.
Subjects
Algorithms , Image Processing, Computer-Assisted , Supervised Machine Learning
ABSTRACT
Neonatal MRI is used increasingly in preterm infants. However, it is not always feasible to analyze these data, and a tool that assesses brain maturation during this period of extraordinary changes would be immensely helpful. Approaches based on deep learning could solve this task since, once properly trained and validated, they can be used in practically any system and provide holistic quantitative information in a matter of minutes. However, one major deterrent for radiologists is that these tools are not easily interpretable. Indeed, it is important that the structures driving the results be identified and survive comparison to the available literature. To address these challenges, we propose an interpretable deep learning pipeline to predict postmenstrual age at scan, a key measure for assessing neonatal brain development. For this purpose, we train a state-of-the-art deep neural network to segment the brain into 87 different regions using normal preterm and term infants from the dHCP study. We then extract informative features for brain age estimation from the segmented MRIs and predict brain age at scan with a regression model. The proposed framework achieves a mean absolute error of 0.46 weeks in predicting postmenstrual age at scan. While our model is based solely on structural T2-weighted images, the results are superior to recent, arguably more complex approaches. Furthermore, based on the knowledge extracted from the trained models, we found that the frontal and parietal lobes are among the most important structures for neonatal brain age estimation.
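The second stage, regression on features extracted from the segmentation, can be illustrated with a simple sketch; the regional-volume features, the ridge regressor, and all data below are placeholder assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simplified sketch: regional volumes from a (pre-computed) 87-region
# segmentation feed a regression model predicting postmenstrual age.
def regional_volumes(segmentation, n_regions=87, voxel_volume=1.0):
    return np.array([(segmentation == r).sum() * voxel_volume
                     for r in range(1, n_regions + 1)])

rng = np.random.default_rng(0)
segs = rng.integers(0, 88, size=(50, 16, 16, 16))   # placeholder label maps
X = np.stack([regional_volumes(s) for s in segs])   # (50, 87) features
y = 28 + 16 * rng.random(50)                        # placeholder ages (weeks)
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X[:1]))                         # predicted age at scan
```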
Subjects
Infant, Premature , Premature Birth , Female , Humans , Infant, Newborn , Infant , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neural Networks, Computer
ABSTRACT
The use of multiparametric magnetic resonance imaging (mpMRI) has become a common technique for guiding biopsy and developing treatment plans for prostate lesions. While this technique is effective, non-invasive methods such as radiomics have gained popularity for extracting imaging features to develop predictive models for clinical tasks, the aim being to minimize invasive procedures for improved management of prostate cancer (PCa). This study reviews recent research progress in MRI-based radiomics for PCa, including the radiomics pipeline and potential factors affecting personalized diagnosis. The integration of artificial intelligence (AI) with medical imaging is also discussed, in line with the development trend of radiogenomics and multi-omics. The survey highlights the need for more data from multiple institutions to avoid bias and improve the generalizability of predictive models. AI-based radiomics models are considered promising clinical tools with good prospects for application.