Results 1 - 20 of 66
1.
Evol Anthropol ; 33(1): e22009, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37961949

ABSTRACT

The theory of punctuated equilibrium (PE) was developed a little over 50 years ago to explain long-term, large-scale appearance and disappearance of species in the fossil record. A theory designed specifically for that purpose cannot be expected, out of the box, to be directly applicable to biocultural evolution, but in revised form, PE offers a promising approach to incorporating not only a wealth of recent empirical research on genetic, linguistic, and technological evolution but also large databases that document human biological and cultural diversity across time and space. Here we isolate the fundamental components of PE and propose which pieces, when reassembled or renamed, can be highly useful in evolutionary anthropology, especially as humanity faces abrupt ecological challenges on an increasingly larger scale.


Subjects
Biological Evolution; Fossils; Humans; Cultural Diversity; Databases, Factual
2.
Mult Scler ; 28(8): 1209-1218, 2022 07.
Article in English | MEDLINE | ID: mdl-34859704

ABSTRACT

BACKGROUND: Active (new/enlarging) T2 lesion counts are routinely used in the clinical management of multiple sclerosis. Thus, automated tools able to accurately identify active T2 lesions would be of great value to neuroradiologists in their clinical practice. OBJECTIVE: To compare the accuracy of different visual and automated methods in detecting active T2 lesions and identifying radiologically active patients. METHODS: One hundred multiple sclerosis patients underwent two magnetic resonance imaging examinations within 12 months. Four approaches were assessed for detecting active T2 lesions: (1) conventional neuroradiological reports; (2) prospective visual analyses performed by an expert; (3) an automated unsupervised tool; and (4) a supervised convolutional neural network. As a gold standard, a reference outcome was created by the consensus of two observers. RESULTS: The automated methods detected more active T2 lesions and more active patients than the visual methods, but also more false-positive active patients. The convolutional neural network model was more sensitive in detecting active T2 lesions and active patients than the other automated method. CONCLUSION: Automated convolutional neural network models show potential as an aid to neuroradiological assessment in clinical practice, although visual supervision of the outcomes is still required.
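
For illustration, the sketch below shows one way such a comparison can be scored against a consensus reference: lesion-wise sensitivity and the number of false-positive "active" patients are computed per method. The data layout, the patient identifiers, and the rule that any detected lesion marks a patient as active are assumptions made for this sketch, not the study's actual evaluation code.

```python
# Minimal sketch (assumed data layout): per-patient sets of lesion IDs detected by a
# method are compared against a consensus reference to obtain lesion-wise sensitivity
# and the number of false-positive "active" patients.

def evaluate_method(detected, reference):
    """detected, reference: dict patient_id -> set of active-lesion identifiers."""
    tp = fn = fp = 0
    fp_active_patients = 0
    for pid, ref_lesions in reference.items():
        det_lesions = detected.get(pid, set())
        tp += len(det_lesions & ref_lesions)
        fn += len(ref_lesions - det_lesions)
        fp += len(det_lesions - ref_lesions)
        # a patient flagged active by the method but inactive in the reference
        if det_lesions and not ref_lesions:
            fp_active_patients += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    return {"sensitivity": sensitivity,
            "false_positive_lesions": fp,
            "false_positive_active_patients": fp_active_patients}

# toy example: two patients, one truly active
reference = {"p01": {"L1", "L2"}, "p02": set()}
cnn_output = {"p01": {"L1", "L2", "L9"}, "p02": {"L7"}}
print(evaluate_method(cnn_output, reference))
```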


Subjects
Multiple Sclerosis; Humans; Magnetic Resonance Imaging/methods; Multiple Sclerosis/pathology; Prospective Studies
3.
J Magn Reson Imaging ; 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34137113

ABSTRACT

BACKGROUND: Manual brain extraction from magnetic resonance (MR) images is time-consuming and prone to intra- and inter-rater variability. Several automated approaches have been developed to alleviate these constraints, including deep learning pipelines. However, these methods tend to lose performance on unseen magnetic resonance imaging (MRI) scanner vendors and different imaging protocols. PURPOSE: To present PARIETAL, a pre-trained deep learning brain extraction method, and evaluate it for clinical use, comparing its reproducibility in a scan/rescan analysis and its robustness across scanners of different manufacturers. STUDY TYPE: Retrospective. POPULATION: Twenty-one subjects (12 women), aged 22-48 years, imaged on three different MRI scanners with scan/rescan acquisitions on each. FIELD STRENGTH/SEQUENCE: T1-weighted images acquired on a 3-T Siemens scanner with a magnetization-prepared rapid gradient-echo sequence and on two 1.5-T scanners, Philips and GE, with spin-echo and spoiled gradient-recalled (SPGR) sequences, respectively. ASSESSMENT: Analysis of the intracranial cavity volumes obtained for each subject on the three scanners and the scan/rescan acquisitions. STATISTICAL TESTS: Parametric permutation tests on the volume differences to rank and statistically evaluate the performance of PARIETAL compared with state-of-the-art methods. RESULTS: The mean absolute intracranial volume differences obtained by PARIETAL in the scan/rescan analysis were 1.88 mL, 3.91 mL, and 4.71 mL for the Siemens, GE, and Philips scanners, respectively. PARIETAL was the best-ranked method on the Siemens and GE scanners, dropping to rank 2 on the Philips images. Intracranial volume differences for the same subject between scanners were 5.46 mL, 27.16 mL, and 30.44 mL for the GE/Philips, Siemens/Philips, and Siemens/GE comparisons, respectively. The permutation tests revealed that PARIETAL was always ranked first, obtaining the most similar volumetric results between scanners. DATA CONCLUSION: PARIETAL accurately segments the brain and generalizes to images acquired at different sites without the need for retraining or fine-tuning. PARIETAL is publicly available. LEVEL OF EVIDENCE: 2 TECHNICAL EFFICACY STAGE: 2.
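
As a rough illustration of the statistical approach, the sketch below runs a paired sign-flip permutation test on per-subject absolute scan/rescan volume differences for two hypothetical methods; the numbers, the number of permutations, and the two-sided p-value convention are assumptions, not the study's data or exact test.

```python
# Hedged sketch of a paired sign-flip permutation test on per-subject absolute
# scan/rescan volume differences (mL) for two brain-extraction methods; the values
# below are illustrative, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
diff_a = np.array([1.5, 2.2, 1.9, 2.4, 1.7])   # method A, |scan - rescan| per subject
diff_b = np.array([3.8, 4.5, 3.1, 5.0, 4.2])   # method B, same subjects

delta = diff_a - diff_b                         # paired differences
observed = delta.mean()

n_perm = 10000
signs = rng.choice([-1.0, 1.0], size=(n_perm, delta.size))
null = (signs * delta).mean(axis=1)             # null distribution under exchangeability
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"mean difference = {observed:.2f} mL, permutation p = {p_value:.4f}")
```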

4.
Neuroimage ; 155: 159-168, 2017 07 15.
Article in English | MEDLINE | ID: mdl-28435096

ABSTRACT

In this paper, we present a novel automated method for white matter (WM) lesion segmentation in multiple sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be highly sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n ≤ 35) set of labeled data of the same MRI contrast, which is very attractive in practice, given the difficulty of obtaining manual annotations and the large amount of available unlabeled magnetic resonance imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MICCAI2008 MS lesion segmentation challenge dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best-ranked approach on the MICCAI2008 challenge, outperforming the other 60 participating methods when using all the available input modalities (T1-w, T2-w, and FLAIR), while still ranking among the top approaches (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation compared with the rest of the evaluated methods, and also correlates highly (r ≥ 0.97) with the expected lesion volume.
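
A minimal sketch of the cascade idea in PyTorch is shown below: a first patch classifier screens for candidates with high sensitivity and a second classifier re-scores only those candidates. The layer widths, the 11-voxel patch size, and the 0.5 thresholds are illustrative assumptions and do not reproduce the published architecture.

```python
# Illustrative PyTorch sketch of a two-stage cascade of 3D patch-wise CNNs:
# the first network proposes candidate lesion voxels, the second re-classifies
# only those candidates to remove false positives. Layer widths and the 0.5
# thresholds are assumptions for the sketch.
import torch
import torch.nn as nn

def make_patch_cnn(in_channels: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(in_channels, 32, kernel_size=3), nn.ReLU(),
        nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(64, 2),                      # lesion vs. background logits
    )

net1 = make_patch_cnn(in_channels=3)           # e.g. T1-w, T2-w, FLAIR patches
net2 = make_patch_cnn(in_channels=3)

patches = torch.randn(128, 3, 11, 11, 11)      # batch of 11^3 multi-channel patches
with torch.no_grad():
    p1 = torch.softmax(net1(patches), dim=1)[:, 1]
    candidates = p1 > 0.5                      # stage 1: sensitive screening
    p2 = torch.zeros_like(p1)
    if candidates.any():
        p2[candidates] = torch.softmax(net2(patches[candidates]), dim=1)[:, 1]
final_lesion = p2 > 0.5                        # stage 2: removes false positives
print(int(candidates.sum()), "candidates ->", int(final_lesion.sum()), "lesion patches")
```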


Subjects
Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Multiple Sclerosis/diagnostic imaging; Neural Networks, Computer; Neuroimaging/methods; Brain/diagnostic imaging; Brain/pathology; Humans; Multiple Sclerosis/pathology; White Matter/diagnostic imaging; White Matter/pathology
5.
Bioessays ; 36(5): 503-12, 2014 May.
Article in English | MEDLINE | ID: mdl-24723412

ABSTRACT

Genomic instability is a hallmark of cancer. Cancer cells that exhibit abnormal chromosomes are characteristic of most advanced tumours, despite the potential threat represented by accumulated genetic damage. Carcinogenesis involves a loss of key components of the genetic and signalling molecular networks; hence some authors have suggested that this is part of a trend of cancer cells to behave as simple, minimal replicators. In this study, we explore this conjecture and suggest that, in the case of cancer, genomic instability has an upper limit that is associated with a minimal cancer cell network. Such a network would include (for a given microenvironment) the basic molecular components that allow cells to replicate and respond to selective pressures. However, it would also exhibit internal fragilities that could be exploited by appropriate therapies targeting the DNA repair machinery. The implications of this hypothesis are discussed.


Subjects
DNA Replication/genetics; Neoplasms/genetics; Epigenesis, Genetic; Genomic Instability; Humans
6.
J Magn Reson Imaging ; 41(1): 93-101, 2015 Jan.
Article in English | MEDLINE | ID: mdl-24459099

ABSTRACT

PURPOSE: Ground-truth annotations from the well-known Internet Brain Segmentation Repository (IBSR) datasets label sulcal cerebrospinal fluid (SCSF) voxels as gray matter. This can bias the evaluation of tissue segmentation methods. In this work we compare the accuracy of 10 brain tissue segmentation methods, analyzing the effect of SCSF ground-truth voxels on accuracy estimates. MATERIALS AND METHODS: The set of methods comprises FAST, SPM5, SPM8, GAMIXTURE, ANN, FCM, KNN, SVPASEG, FANTASM, and PVC. Methods are evaluated using the original IBSR ground truth and ranked by their performance in pairwise comparisons using permutation tests. The evaluation is then repeated using the IBSR ground truth without considering SCSF. RESULTS: The Dice coefficient of all methods is affected by changes in SCSF annotations, especially for SPM5, SPM8, and FAST. When SCSF voxels are not considered, SVPASEG (0.90 ± 0.01) and SPM8 (0.91 ± 0.01) are the methods from our study that appear most suitable for gray matter segmentation, while FAST (0.89 ± 0.02) is the best tool for segmenting white matter. CONCLUSION: The performance and accuracy of methods on IBSR images vary notably when SCSF voxels are not considered. The fact that three of the most common methods (FAST, SPM5, and SPM8) show an important change in their accuracy suggests that these labeling differences should be taken into account in future comparative studies.
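
The sketch below illustrates how excluding SCSF voxels from the evaluation changes the Dice coefficient; the toy arrays and the exclusion-by-mask convention are assumptions made only for this example.

```python
# Minimal sketch: Dice coefficient for a gray-matter segmentation evaluated twice,
# once against the original ground truth and once ignoring voxels flagged as
# sulcal CSF (SCSF). Arrays are toy 1-D stand-ins for brain volumes.
import numpy as np

def dice(seg, gt, ignore=None):
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    if ignore is not None:                      # drop SCSF voxels from the evaluation
        keep = ~np.asarray(ignore, bool)
        seg, gt = seg[keep], gt[keep]
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

gt_gm   = np.array([1, 1, 1, 0, 0, 1, 1, 0], bool)   # GM ground truth (SCSF labelled as GM)
scsf    = np.array([0, 0, 1, 0, 0, 0, 1, 0], bool)   # voxels that are actually sulcal CSF
pred_gm = np.array([1, 1, 0, 0, 0, 1, 0, 0], bool)   # a method that excludes SCSF from GM

print("Dice vs. original ground truth:", round(dice(pred_gm, gt_gm), 3))
print("Dice ignoring SCSF voxels:    ", round(dice(pred_gm, gt_gm, ignore=scsf), 3))
```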


Assuntos
Algoritmos , Encéfalo/anatomia & histologia , Conjuntos de Dados como Assunto/estatística & dados numéricos , Interpretação de Imagem Assistida por Computador/métodos , Imageamento por Ressonância Magnética/métodos , Reconhecimento Automatizado de Padrão/métodos , Mapeamento Encefálico/métodos , Humanos , Reprodutibilidade dos Testes
7.
Neuroradiology ; 57(10): 1031-43, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26227167

ABSTRACT

INTRODUCTION: Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. METHODS: Our approach is based on two main steps: an initial brain tissue segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed on the T1w images, followed by a second step in which lesions are segmented as outliers with respect to the normal apparent GM brain tissue on the FLAIR image. RESULTS: The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation showed better precision while maintaining similar sensitivity and Dice similarity measures compared with other approaches. CONCLUSION: Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities.
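
A simplified sketch of the second step is given below: voxels whose FLAIR intensity lies far above the apparent GM intensity distribution are flagged as lesion candidates. The Gaussian mean-plus-k-standard-deviations rule and k = 3 are assumptions standing in for the tool's actual outlier model.

```python
# Simplified sketch of outlier-based lesion candidate detection: voxels whose FLAIR
# intensity exceeds mean + k*std of the GM intensity distribution are flagged.
# The Gaussian model and k = 3 are assumptions, not the published tool's exact rule.
import numpy as np

rng = np.random.default_rng(1)
flair = rng.normal(100.0, 10.0, size=(32, 32, 32))     # synthetic FLAIR volume
gm_mask = rng.random((32, 32, 32)) > 0.5               # synthetic GM mask from the T1w step
flair[10:12, 10:12, 10:12] = 180.0                     # inject a hyperintense "lesion"

gm_mean = flair[gm_mask].mean()
gm_std = flair[gm_mask].std()
k = 3.0
lesion_candidates = flair > gm_mean + k * gm_std       # hyperintense outliers to GM

print("candidate lesion voxels:", int(lesion_candidates.sum()))
```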


Subjects
Algorithms; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Multiple Sclerosis/pathology; Pattern Recognition, Automated/methods; Software; Humans; Image Enhancement/methods; Machine Learning; Reproducibility of Results; Sensitivity and Specificity; Software Validation
8.
Hum Biol ; 87(3): 224-34, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26932571

ABSTRACT

Our interaction with complex computing machines is mediated by programming languages (PLs), which constitute one of the major innovations in the evolution of technology. PLs allow flexible, scalable, and fast use of hardware and have largely shaped the history of information technology since the rise of computers in the 1950s. The rapid growth and impact of computers were followed closely by the development of PLs. As with natural human languages, PLs have emerged and gone extinct. There has always been a diversity of coexisting PLs that compete with one another while occupying particular niches. Here we show that the statistical patterns of language adoption, rise, and fall can be accounted for by a simple model in which programmers can use several PLs, adopt existing PLs already used by other programmers, or decide not to use them. Our results highlight the influence of strong communities of practice in the diffusion of PL innovations.
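
A toy agent-based sketch of this kind of adoption dynamics is shown below (not the paper's calibrated model): programmers occasionally adopt a language in proportion to its current number of users, or abandon one of theirs. The language list, the adoption rate, and the population size are arbitrary assumptions.

```python
# Toy sketch (not the paper's model) of programming-language adoption dynamics:
# at each step a programmer either adopts a language with probability proportional
# to its current user count (social copying) or abandons one of their languages.
# Rates, language names, and population size are arbitrary assumptions.
import random
from collections import Counter

random.seed(0)
languages = ["Fortran", "C", "Lisp", "Python", "Java"]
programmers = [{random.choice(languages)} for _ in range(300)]
counts = Counter(lang for p in programmers for lang in p)   # users per language

for step in range(20000):
    p = random.choice(programmers)
    if random.random() < 0.8:                     # adopt: copy in proportion to user count
        lang = random.choices(list(counts), weights=list(counts.values()))[0]
        if lang not in p:
            p.add(lang)
            counts[lang] += 1
    elif len(p) > 1:                              # abandon one currently used language
        lang = random.choice(sorted(p))
        p.discard(lang)
        counts[lang] -= 1

print(counts.most_common())
```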


Subjects
Programming Languages; Culture; Humans; Models, Theoretical
9.
Proc Natl Acad Sci U S A ; 108(28): E288-97, 2011 Jul 12.
Article in English | MEDLINE | ID: mdl-21709225

ABSTRACT

Interactions between bacteria and the viruses that infect them (i.e., phages) have profound effects on biological processes, but despite their importance, little is known about the general structure of infection and resistance between most phages and bacteria. For example, are bacteria-phage communities characterized by complex patterns of overlapping exploitation networks, do they conform to a more ordered general pattern across all communities, or are they idiosyncratic and hard to predict from one ecosystem to the next? To answer these questions, we collect and present a detailed meta-analysis of 38 laboratory-verified studies of host-phage interactions representing almost 12,000 distinct experimental infection assays across a broad spectrum of taxa, habitats, and modes of selection. In doing so, we present evidence that currently available host-phage infection networks are statistically different from random networks and that they possess a characteristic nested structure. This nested structure is typified by the finding that hard-to-infect bacteria are infected by generalist phages (and not specialist phages), whereas easy-to-infect bacteria are infected by both generalist and specialist phages. Moreover, we find that currently available host-phage infection networks do not typically possess a modular structure. We explore possible underlying mechanisms and the significance of the observed nested host-phage interaction structure. In addition, given that most of the host-phage infection networks examined here are composed of taxa separated by short phylogenetic distances, we propose that the lack of modularity is a scale-dependent effect, and we then describe experimental studies to test whether modular patterns exist at macroevolutionary scales.
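
The nestedness analysis can be illustrated with the sketch below, which computes the NODF nestedness metric for a toy binary host-phage matrix and compares it against matrices with shuffled entries; the toy matrix and the simple shuffling null model are assumptions, whereas published analyses use more constrained null models.

```python
# Hedged sketch: NODF nestedness of a toy binary host (rows) x phage (columns)
# infection matrix, compared against a naive shuffled-entries null model.
import numpy as np
from itertools import combinations

def nodf(mat):
    """NODF nestedness (0-100) of a binary matrix, averaged over row and column pairs."""
    def axis_scores(m):
        scores = []
        for a, b in combinations(range(m.shape[0]), 2):
            fa, fb = m[a].sum(), m[b].sum()
            if fa == fb or min(fa, fb) == 0:
                scores.append(0.0)               # equal fills (or empty row) contribute 0
            else:
                hi, lo = (m[a], m[b]) if fa > fb else (m[b], m[a])
                scores.append(100.0 * np.logical_and(hi, lo).sum() / min(fa, fb))
        return scores
    return float(np.mean(axis_scores(mat) + axis_scores(mat.T)))

# toy infection matrix with a perfectly nested shape
M = np.array([[1, 1, 1, 1],
              [1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
observed = nodf(M)

rng = np.random.default_rng(0)
null = [nodf(rng.permutation(M.ravel()).reshape(M.shape)) for _ in range(200)]
print(f"observed NODF = {observed:.1f}, shuffled-matrix mean = {np.mean(null):.1f}")
```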


Assuntos
Bactérias/virologia , Bacteriófagos/fisiologia , Interações Hospedeiro-Patógeno/fisiologia , Bactérias/genética , Fenômenos Fisiológicos Bacterianos , Bacteriófago lambda/genética , Bacteriófago lambda/patogenicidade , Bacteriófago lambda/fisiologia , Bacteriófagos/genética , Bacteriófagos/patogenicidade , Evolução Biológica , Bioestatística , Bases de Dados Factuais , Ecossistema , Escherichia coli/genética , Escherichia coli/fisiologia , Escherichia coli/virologia , Interações Hospedeiro-Patógeno/genética , Modelos Biológicos
10.
Comput Biol Med ; 179: 108811, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38991315

ABSTRACT

Brain atrophy measurements derived from magnetic resonance imaging (MRI) are a promising marker for the diagnosis and prognosis of neurodegenerative pathologies such as Alzheimer's disease or multiple sclerosis. However, their use in individualized assessments is currently discouraged due to a series of technical and biological issues. In this work, we present a deep learning pipeline for segmentation-based brain atrophy quantification that improves upon the automated labels of the reference method from which it learns. This goal is achieved through tissue similarity regularization, which exploits the a priori knowledge that scans from the same subject made within a short interval must have similar tissue volumes. To train the presented pipeline, we use unlabeled pairs of T1-weighted MRI scans having a tissue similarity prior, and generate the target brain tissue segmentations in a fully automated manner using the fsl_anat pipeline implemented in the FMRIB Software Library (FSL). Tissue similarity regularization is enforced during training through a weighted loss term that penalizes tissue volume differences between short-interval scan pairs from the same subject. At inference time, the pipeline performs end-to-end skull stripping and brain tissue segmentation from a single T1-weighted MRI scan in its native space, i.e., without performing image interpolation. For longitudinal evaluation, each image is segmented independently first, and then measures of change are computed. We evaluate the presented pipeline on two different MRI datasets, MIRIAD and ADNI1, which have longitudinal and short-interval imaging from healthy controls (HC) and Alzheimer's disease (AD) subjects. In short-interval scan pairs, tissue similarity regularization reduces the quantification error and improves the consistency of measured tissue volumes. In the longitudinal case, the proposed pipeline shows reduced variability of atrophy measures and higher effect sizes for differences in annualized rates between HC and AD subjects. Our pipeline obtains a Cohen's d effect size of d=2.07 on the MIRIAD dataset, an increase from the reference pipeline used to train it (d=1.01), and higher than that of SIENA (d=1.73), a well-known state-of-the-art approach. In the ADNI1 dataset, the proposed pipeline improves its effect size (d=1.37) with respect to the reference pipeline (d=0.80) and surpasses SIENA (d=1.33). The proposed data-driven deep learning regularization reduces the biases and systematic errors learned from the reference segmentation method, which is used to generate the training targets. Improving the accuracy and reliability of atrophy quantification methods is essential to unlock brain atrophy as a diagnostic and prognostic marker in neurodegenerative pathologies.
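
A minimal PyTorch sketch of the regularized training objective is shown below: a segmentation loss on each scan of a short-interval pair plus a weighted penalty on the difference between their soft tissue volumes. The function name pair_loss, the class layout, and the lambda weight are assumptions, not the published implementation.

```python
# Hedged PyTorch sketch of tissue-similarity regularization: cross-entropy on each
# scan of a short-interval pair plus a weighted penalty on the difference between
# their soft tissue volumes. The lambda weight and class layout are assumptions.
import torch
import torch.nn.functional as F

def pair_loss(logits_a, logits_b, target_a, target_b, lam=0.1):
    """logits_*: (B, C, D, H, W) network outputs; target_*: (B, D, H, W) labels
    from the reference method; classes: 0=background, 1=CSF, 2=GM, 3=WM."""
    seg = F.cross_entropy(logits_a, target_a) + F.cross_entropy(logits_b, target_b)
    vol_a = torch.softmax(logits_a, dim=1).sum(dim=(2, 3, 4))   # soft volume per class
    vol_b = torch.softmax(logits_b, dim=1).sum(dim=(2, 3, 4))
    similarity = torch.abs(vol_a - vol_b)[:, 1:].mean()          # ignore background class
    return seg + lam * similarity

# toy tensors standing in for a short-interval scan pair
logits_a = torch.randn(1, 4, 8, 8, 8, requires_grad=True)
logits_b = torch.randn(1, 4, 8, 8, 8, requires_grad=True)
target = torch.randint(0, 4, (1, 8, 8, 8))
loss = pair_loss(logits_a, logits_b, target, target)
loss.backward()
print(float(loss))
```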

11.
Trends Ecol Evol ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38821781

ABSTRACT

For five decades, paleontologists, paleobiologists, and ecologists have investigated patterns of punctuated equilibria in biology. Here, we step outside those fields and summarize recent advances in the theory of and evidence for punctuated equilibria, gathered from contemporary observations in geology, molecular biology, genetics, anthropology, and sociotechnology. Taken in the aggregate, these observations lead to a more general theory that we refer to as punctuated evolution. The quality of recent datasets is beginning to illustrate the mechanics of punctuated evolution in a way that can be modeled across a vast range of phenomena, from mass extinctions hundreds of millions of years ago to the possible future ahead in the Anthropocene. We expect the study of punctuated evolution to be applicable beyond biological scenarios.

12.
Sci Rep ; 13(1): 3295, 2023 02 25.
Article in English | MEDLINE | ID: mdl-36841885

ABSTRACT

Symbiosis is a major engine of evolutionary innovation underlying many extant complex organisms. Lichens are a paradigmatic example that offers a unique perspective on the role of symbiosis in ecological success and evolutionary diversification. Lichen studies have produced a wealth of information regarding the importance of symbiosis, but they frequently focus on a few species, limiting our understanding of large-scale phenomena such as guilds. Guilds are groupings of lichens that assist each other's proliferation and are intimately linked by a shared set of photobionts, constituting an extensive network of relationships. To characterize the network of lichen symbionts, we used a large data set ([Formula: see text] publications) of natural photobiont-mycobiont associations. The entire lichen network was found to be modular, but this organization does not directly match taxonomic information in the data set, prompting a reconsideration of lichen guild structure and composition. The multiscale nature of this network reveals that the major lichen guilds are better represented as clusters with several substructures rather than as monolithic communities. Heterogeneous guild structure fosters robustness, with keystone species functioning as bridges between guilds and whose extinction would endanger global stability.


Subjects
Lichens; Phylogeny; Biological Evolution; Symbiosis
13.
Comput Med Imaging Graph ; 103: 102157, 2023 01.
Article in English | MEDLINE | ID: mdl-36535217

ABSTRACT

Automated methods for segmentation-based brain volumetry may be confounded by the presence of white matter (WM) lesions, which introduce abnormal intensities that can alter the classification of not only neighboring but also distant brain tissue. These lesions are common in pathologies where brain volumetry is also an important prognostic marker, such as multiple sclerosis (MS), and thus reducing their effects is critical for improving volumetric accuracy and reliability. In this work, we analyze the effect of WM lesions on deep learning based brain tissue segmentation methods for brain volumetry and introduce techniques to reduce the error these lesions produce in the measured volumes. We propose a 3D patch-based deep learning framework for brain tissue segmentation which is trained on the outputs of a reference classical method. To deal more robustly with pathological cases having WM lesions, we use a combination of small patches and a percentile-based input normalization. To minimize the effect of WM lesions, we also propose a multi-task double U-Net architecture performing end-to-end inpainting and segmentation, along with a training data generation procedure. In the evaluation, we first analyze the error introduced by artificial WM lesions in our framework as well as in the reference segmentation method, without the use of lesion inpainting techniques. To the best of our knowledge, this is the first analysis of the effect of WM lesions on a deep learning based tissue segmentation approach for brain volumetry. The proposed framework shows a significantly smaller and more localized error introduced by WM lesions than the reference segmentation method, which displays much larger global differences. We also evaluated the proposed lesion effect minimization technique by comparing the measured volumes before and after introducing artificial WM lesions into healthy images. The proposed approach performing end-to-end inpainting and segmentation effectively reduces the error introduced by small and large WM lesions in the resulting volumetry, obtaining absolute volume differences of 0.01 ± 0.03% for GM and 0.02 ± 0.04% for WM. Increasing the accuracy and reliability of automated brain volumetry methods will reduce the sample size needed to establish meaningful correlations in clinical studies and allow its use in individualized assessments as a diagnostic and prognostic marker for neurodegenerative pathologies.
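
The percentile-based input normalization can be sketched as below: intensities are clipped at low and high percentiles computed within the brain mask and rescaled to [0, 1], limiting the influence of hyperintense lesions. The 1st/99th percentiles and the helper name percentile_normalize are illustrative assumptions.

```python
# Hedged sketch of percentile-based input normalization: intensities are clipped to
# low/high percentiles computed inside the brain mask and rescaled to [0, 1], which
# limits the influence of hyperintense WM lesions on the intensity range.
import numpy as np

def percentile_normalize(volume, mask, p_low=1.0, p_high=99.0):
    lo, hi = np.percentile(volume[mask], [p_low, p_high])
    out = np.clip(volume, lo, hi)
    return (out - lo) / (hi - lo)

rng = np.random.default_rng(0)
t1 = rng.normal(600.0, 80.0, size=(64, 64, 64))   # synthetic T1-w volume
brain = np.zeros_like(t1, dtype=bool)
brain[8:56, 8:56, 8:56] = True                    # synthetic brain mask
t1[20:24, 20:24, 20:24] = 2000.0                  # artificial hyperintense lesion

t1_norm = percentile_normalize(t1, brain)
print("normalized range inside brain:", t1_norm[brain].min(), "-", t1_norm[brain].max())
```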


Assuntos
Aprendizado Profundo , Esclerose Múltipla , Substância Branca , Humanos , Substância Branca/diagnóstico por imagem , Substância Branca/patologia , Reprodutibilidade dos Testes , Imageamento por Ressonância Magnética/métodos , Encéfalo/diagnóstico por imagem , Encéfalo/patologia , Esclerose Múltipla/diagnóstico por imagem , Esclerose Múltipla/patologia , Processamento de Imagem Assistida por Computador/métodos
14.
Front Neurosci ; 16: 954662, 2022.
Article in English | MEDLINE | ID: mdl-36248650

ABSTRACT

The assessment of disease activity using serial brain MRI scans is one of the most valuable strategies for monitoring treatment response in patients with multiple sclerosis (MS) receiving disease-modifying treatments. Recently, several deep learning approaches have been proposed to improve this analysis, obtaining a good trade-off between sensitivity and specificity, especially when using T1-w and T2-FLAIR images as inputs. However, acquiring two different types of images is time-consuming, costly, and not always feasible in clinical practice. In this paper, we investigate an approach to generate synthetic T1-w images from T2-FLAIR images and subsequently analyse the impact of using original and synthetic T1-w images on the performance of a state-of-the-art approach for longitudinal MS lesion detection. We evaluate our approach on a dataset containing 136 images from MS patients, with 73 images showing lesion activity (the appearance of new T2 lesions in follow-up scans). To evaluate the synthesis of the images, we analyse the structural similarity index metric and the median absolute error and obtain consistent results. To study the impact of synthetic T1-w images, we evaluate the performance of the new lesion detection approach when using (1) both T2-FLAIR and original T1-w images, (2) only T2-FLAIR images, and (3) both T2-FLAIR and synthetic T1-w images. Sensitivities of 0.75, 0.63, and 0.81, respectively, were obtained at the same false-positive rate (0.14) for all experiments. In addition, we present the results obtained on the data from the international MSSEG-2 challenge, which also show an improvement when synthetic T1-w images are included. In conclusion, we show that synthetic images can compensate for the lack of data, or even replace the original images, to homogenize the contrast of the different acquisitions in new T2 lesion detection algorithms.
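
The two synthesis metrics mentioned above can be computed roughly as in the sketch below, using scikit-image for the structural similarity index on a toy pair of intensity-normalized slices; the shapes, normalization, and noise level are assumptions.

```python
# Hedged sketch of the two synthesis metrics: median absolute error and the
# structural similarity index (via scikit-image), computed on a toy pair of
# intensity-normalized 2-D slices standing in for real and synthetic T1-w images.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
real_t1 = rng.random((128, 128)).astype(np.float32)          # stand-in for a real T1-w slice
synth_t1 = np.clip(real_t1 + rng.normal(0, 0.05, real_t1.shape), 0, 1).astype(np.float32)

mae_median = float(np.median(np.abs(real_t1 - synth_t1)))
ssim = structural_similarity(real_t1, synth_t1, data_range=1.0)
print(f"median absolute error = {mae_median:.4f}, SSIM = {ssim:.3f}")
```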

15.
J R Soc Interface ; 19(196): 20220570, 2022 11.
Article in English | MEDLINE | ID: mdl-36382378

ABSTRACT

Cumulative cultural evolution (CCE) occurs among humans who may be presented with many similar options from which to choose, as well as many social influences and diverse environments. It is unknown what general principles underlie the wide range of CCE dynamics and whether they can all be explained by the same unified paradigm. Here, we present a scalable evolutionary model of discrete choice with social learning, based on a few behavioural science assumptions. This paradigm connects the degree of transparency in social learning to the human tendency to imitate others. Computer simulations and quantitative analysis show that the interaction of three primary factors (information transparency, popularity bias, and population size) drives the pace of CCE. The model predicts a stable rate of evolutionary change for modest degrees of popularity bias. As popularity bias grows, a transition from gradual to punctuated change occurs, with maladaptive subpopulations arising on their own. When popularity bias becomes too severe, CCE stops. This provides a consistent framework for explaining the rich and complex adaptive dynamics taking place in the real world, such as in modern digital media.
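
A toy simulation in the spirit of this model (not its actual specification) is sketched below: agents either copy an option with probability proportional to its popularity raised to a bias exponent, or explore at random, with transparency, bias, and population size as the knobs; all parameter values are arbitrary.

```python
# Toy sketch (not the paper's model specification) of discrete choice with social
# learning: each step an agent either copies an option with probability proportional
# to popularity**beta (popularity bias) or explores a random option. Transparency,
# beta, and population size are the three knobs discussed above; values are arbitrary.
import random
from collections import Counter

def simulate(n_agents=200, n_options=20, beta=1.5, transparency=0.9,
             steps=10000, seed=0):
    rng = random.Random(seed)
    choices = [rng.randrange(n_options) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if rng.random() < transparency:
            counts = Counter(choices)
            opts = list(counts)
            weights = [counts[o] ** beta for o in opts]   # popularity-biased copying
            choices[i] = rng.choices(opts, weights=weights)[0]
        else:
            choices[i] = rng.randrange(n_options)          # individual exploration
    return Counter(choices)

for beta in (0.5, 1.5, 3.0):
    print(f"beta={beta}: top options {simulate(beta=beta).most_common(3)}")
```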


Subjects
Cultural Evolution; Social Learning; Humans; Internet; Biological Evolution; Population Density
16.
Nat Ecol Evol ; 6(3): 307-314, 2022 03.
Article in English | MEDLINE | ID: mdl-35027724

ABSTRACT

Larger geographical areas contain more species, an observation raised to the status of a law in ecology. Less explored is whether biodiversity changes are accompanied by a modification of interaction networks. We use data from 32 spatial interaction networks from different ecosystems to analyse how network structure changes with area. We find that basic community structure descriptors (number of species, links, and links per species) increase with area following a power law. Yet the distribution of links per species varies little with area, indicating that the fundamental organization of interactions within networks is conserved. Our null model analyses suggest that the spatial scaling of network structure is determined by factors beyond species richness and the number of links. We demonstrate that biodiversity-area relationships can be extended from species counts to higher levels of network complexity. Therefore, the consequences of anthropogenic habitat destruction may extend from species loss to a wider simplification of natural communities.
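
A power-law scaling relationship of this kind is commonly estimated by least squares in log-log space, as in the hedged sketch below; the area and link values are synthetic and the exponent is illustrative.

```python
# Hedged sketch of estimating a power-law exponent z in y = c * area**z by ordinary
# least squares in log-log space, the usual first-pass approach for such scaling
# relationships; the area/link values below are synthetic illustrations.
import numpy as np

area = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)   # arbitrary units
links = 12.0 * area ** 0.35 * np.exp(np.random.default_rng(0).normal(0, 0.05, area.size))

slope, intercept = np.polyfit(np.log(area), np.log(links), 1)
print(f"estimated exponent z = {slope:.2f}, prefactor c = {np.exp(intercept):.1f}")
```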


Subjects
Biodiversity; Ecosystem
17.
Neuroinformatics ; 19(3): 477-492, 2021 07.
Article in English | MEDLINE | ID: mdl-33389607

ABSTRACT

Brain atrophy quantification plays a fundamental role in neuroinformatics since it permits studying brain development and neurological disorders. However, the lack of a ground truth prevents testing the accuracy of longitudinal atrophy quantification methods. We propose a deep learning framework to generate longitudinal datasets by deforming T1-w brain magnetic resonance imaging scans as specified by segmentation maps. Our proposal incorporates a cascaded multi-path U-Net optimised with a multi-objective loss, which allows its paths to generate different brain regions accurately. We provided our model with baseline scans and real follow-up segmentation maps from two longitudinal datasets, ADNI and OASIS, and observed that our framework could produce synthetic follow-up scans that matched the real ones (total scans = 584; median absolute error: 0.03 ± 0.02; structural similarity index: 0.98 ± 0.02; Dice similarity coefficient: 0.95 ± 0.02; percentage of brain volume change: 0.24 ± 0.16; Jacobian integration: 1.13 ± 0.05). Compared to two relevant works generating brain lesions using U-Nets and conditional generative adversarial networks (CGAN), our proposal outperformed them significantly in most cases (p < 0.01), except in the delineation of brain edges, where the CGAN took the lead (Jacobian integration: ours, 1.13 ± 0.05 vs. CGAN, 1.00 ± 0.02; p < 0.01). We examined whether changes induced with our framework were detected by FAST, SPM, SIENA, SIENAX, and the Jacobian integration method. We observed that induced and detected changes were highly correlated (adj. R2 > 0.86). Our preliminary results on harmonised datasets showed the potential of our framework to be applied to various data collections without further adjustment.


Subjects
Magnetic Resonance Imaging; Neural Networks, Computer; Atrophy; Brain/diagnostic imaging; Brain/pathology; Humans; Image Processing, Computer-Assisted
18.
Front Neurosci ; 15: 608808, 2021.
Article in English | MEDLINE | ID: mdl-33994917

ABSTRACT

Segmentation of brain images from magnetic resonance imaging (MRI) is an indispensable step in clinical practice. Morphological changes of sub-cortical brain structures and quantification of brain lesions are considered biomarkers of neurological and neurodegenerative disorders and are used for diagnosis, treatment planning, and monitoring disease progression. In recent years, deep learning methods have shown outstanding performance in medical image segmentation. However, these methods suffer from a generalisability problem due to inter-centre and inter-scanner variability of MRI images. The main objective of this study is to develop an automated deep learning segmentation approach that is accurate and robust to variability in scanner and acquisition protocols. In this paper, we propose a transductive transfer learning approach for domain adaptation to reduce the domain-shift effect in brain MRI segmentation. The transductive scenario assumes that there are sets of images from two different domains: (1) source images with manually annotated labels; and (2) target images without expert annotations. The network is then jointly optimised, integrating both source and target images into the transductive training process, to segment the regions of interest and to minimise the domain-shift effect. We propose using a histogram loss at the feature level to carry out this optimisation. To demonstrate the benefit of the proposed approach, the method has been tested on two different brain MRI segmentation problems using multi-centre and multi-scanner databases: (1) sub-cortical brain structure segmentation; and (2) white matter hyperintensity segmentation. The experiments showed that the segmentation performance of a pre-trained model could be significantly improved, by up to 10%. For the first segmentation problem, it was possible to achieve a maximum improvement from 0.680 to 0.799 in the average Dice similarity coefficient (DSC), and for the second problem the average DSC improved from 0.504 to 0.602. Moreover, the improvements after domain adaptation were on par with or better than those of commonly used traditional unsupervised segmentation methods (FIRST and LST), while also achieving faster execution times. Taking this into account, this work presents one more step toward the practical implementation of deep learning algorithms in the clinical routine.
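
One plausible form of a feature-level histogram loss is sketched below: soft, Gaussian-kernel histograms of source and target feature activations are matched with an L1 penalty. The bin count, bandwidth, and distance choice are assumptions and not necessarily the paper's exact formulation.

```python
# Hedged sketch of a feature-level histogram loss for domain adaptation: soft
# (Gaussian-kernel) histograms of source and target feature activations are matched
# with an L1 penalty, so the penalty remains differentiable for training.
import torch

def soft_histogram(features, bins=32, low=-3.0, high=3.0, sigma=0.1):
    centers = torch.linspace(low, high, bins, device=features.device)
    x = features.reshape(-1, 1)                               # (N, 1)
    weights = torch.exp(-0.5 * ((x - centers) / sigma) ** 2)  # (N, bins) soft assignment
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)                         # normalized, differentiable

def histogram_loss(source_feats, target_feats):
    return torch.abs(soft_histogram(source_feats) - soft_histogram(target_feats)).sum()

src = torch.randn(4, 64, 16, 16)                # source-domain feature maps
tgt = torch.randn(4, 64, 16, 16) * 1.4 + 0.3    # shifted/scaled target-domain features
print(float(histogram_loss(src, tgt)))
```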

19.
Philos Trans R Soc Lond B Biol Sci ; 375(1796): 20190325, 2020 04 13.
Article in English | MEDLINE | ID: mdl-32089118

ABSTRACT

A common trait of complex systems is that they can be represented by means of a network of interacting parts. It is, in fact, the network organization (more than the parts) that largely conditions most higher-level properties, which are not reducible to the properties of the individual parts. Can the topological organization of these webs provide some insight into their evolutionary origins? Both biological and artificial networks share some common architectural traits. They are often heterogeneous and sparse, and most exhibit different types of correlations, such as nestedness, modularity or hierarchical patterns. These properties have often been attributed to the selection of functionally meaningful traits. However, a proper formulation of generative network models suggests a rather different picture. Against the standard selection-optimization argument, some networks reveal the inevitable generation of complex patterns resulting from reuse and can be modelled using duplication-rewiring rules lacking functionality. These give rise to the observed heterogeneous, scale-free and modular architectures. Here, we examine the evidence for tinkering in cellular, technological and ecological webs and its impact on shaping their architecture. Our analysis suggests a serious reconsideration of the role played by selection as the origin of network topology. Instead, we suggest that the amplification processes associated with reuse might shape these graphs at the topological level. In biological systems, selection forces would take advantage of emergent patterns. This article is part of the theme issue 'Unifying the essential concepts of biological networks: biological insights and philosophical foundations'.
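
A duplication-rewiring growth rule of the kind referred to above can be sketched as follows: an existing node is duplicated, each copied link is retained with probability p, and the copy stays attached to its parent. The retention probability p = 0.4 and the seed graph are arbitrary assumptions.

```python
# Hedged sketch of a duplication-rewiring ("tinkering") growth rule: at each step an
# existing node is duplicated, each copied link is kept with probability p_keep, and
# the copy is linked back to its parent; p_keep and the seed graph are arbitrary.
import random
import networkx as nx

def duplication_model(n_final=200, p_keep=0.4, seed=0):
    rng = random.Random(seed)
    g = nx.Graph([(0, 1)])                       # minimal seed graph
    while g.number_of_nodes() < n_final:
        parent = rng.choice(list(g.nodes))
        new = g.number_of_nodes()
        g.add_node(new)
        for neigh in list(g.neighbors(parent)):  # duplicate links, dropping some
            if rng.random() < p_keep:
                g.add_edge(new, neigh)
        g.add_edge(new, parent)                  # keep the copy attached to its parent
    return g

g = duplication_model()
degrees = sorted((d for _, d in g.degree()), reverse=True)
print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges(),
      "top degrees:", degrees[:5])
```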


Subjects
Cell Biology; Ecology; Models, Theoretical; Technology; Evolution, Molecular; Phenotype
20.
Comput Methods Programs Biomed ; 194: 105521, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32434099

ABSTRACT

BACKGROUND AND OBJECTIVE: Acute stroke lesion segmentation tasks are of great clinical interest as they can help doctors make better informed, time-critical treatment decisions. Magnetic resonance imaging (MRI) is time-demanding but can provide images that are considered the gold standard for diagnosis. Automated stroke lesion segmentation can provide an estimate of the location and volume of the lesioned tissue, which can help clinicians better assess and evaluate the risks of each treatment. METHODS: We propose a deep learning methodology for acute and sub-acute stroke lesion segmentation using multimodal MR imaging. We pre-process the data to facilitate learning features based on the symmetry of brain hemispheres. The issue of class imbalance is tackled using small patches with a balanced training patch sampling strategy and a dynamically weighted loss function. Moreover, a combination of whole-patch predictions, using a U-Net based CNN architecture, and a high degree of patch overlap reduces the need for additional post-processing. RESULTS: The proposed method is evaluated using two public datasets from the 2015 Ischemic Stroke Lesion Segmentation challenge (ISLES 2015). These involve the tasks of sub-acute stroke lesion segmentation (SISS) and acute stroke penumbra estimation (SPES) from multiple diffusion, perfusion, and anatomical MRI modalities. The performance is compared against state-of-the-art methods with a blind online testing set evaluation for each of the challenges. At the time of submitting this manuscript, our approach is ranked first in the online rankings for the SISS (DSC = 0.59 ± 0.31) and SPES (DSC = 0.84 ± 0.10) sub-tasks. Compared with the rest of the submitted strategies, we achieve top-rank performance with a lower Hausdorff distance. CONCLUSIONS: Better segmentation results are obtained by leveraging the anatomy and pathophysiology of acute stroke lesions and using a combined approach to minimize the effects of class imbalance. The same training procedure is used for both tasks, showing that the proposed methodology can generalize well enough to deal with different unrelated tasks and imaging modalities without hyper-parameter tuning. To promote the reproducibility of our results, a public version of the proposed method has been released to the scientific community.
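
One plausible reading of the dynamically weighted loss is sketched below: class weights for the cross-entropy are recomputed from each batch's label frequencies (inverse frequency), which counteracts the heavy lesion/background imbalance. The weighting formula and tensor shapes are assumptions, not the paper's exact implementation.

```python
# Hedged PyTorch sketch of a dynamically weighted cross-entropy: class weights are
# recomputed from the label frequencies of each training batch (inverse frequency),
# one plausible reading of the dynamic weighting described above.
import torch
import torch.nn.functional as F

def dynamic_weighted_ce(logits, target, n_classes=2, eps=1e-6):
    """logits: (B, C, D, H, W); target: (B, D, H, W) integer labels."""
    counts = torch.bincount(target.reshape(-1), minlength=n_classes).float()
    weights = counts.sum() / (n_classes * (counts + eps))     # inverse class frequency
    return F.cross_entropy(logits, target, weight=weights)

logits = torch.randn(2, 2, 8, 24, 24, requires_grad=True)     # lesion vs. background
target = (torch.rand(2, 8, 24, 24) > 0.95).long()             # heavily imbalanced labels
loss = dynamic_weighted_ce(logits, target)
loss.backward()
print(float(loss))
```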


Subjects
Neural Networks, Computer; Stroke; Humans; Magnetic Resonance Imaging; Multimodal Imaging; Reproducibility of Results; Stroke/diagnostic imaging