Results 1 - 10 of 10
1.
Eur Radiol Exp ; 4(1): 22, 2020 04 03.
Article in English | MEDLINE | ID: mdl-32246291

ABSTRACT

PRIMAGE is one of the largest and most ambitious research projects dealing with medical imaging, artificial intelligence and cancer treatment in children. It is a 4-year European Commission-financed project with 16 European partners in the consortium, including the European Society for Paediatric Oncology, two imaging biobanks, and three prominent European paediatric oncology units. The project is constructed as an observational in silico study involving high-quality anonymised datasets (imaging, clinical, molecular, and genetics) for the training and validation of machine learning and multiscale algorithms. The open cloud-based platform will offer precise clinical assistance for phenotyping (diagnosis), treatment allocation (prediction), and patient endpoints (prognosis), based on the use of imaging biomarkers, tumour growth simulation, advanced visualisation of confidence scores, and machine-learning approaches. The decision support prototype will be constructed and validated on two paediatric cancers: neuroblastoma and diffuse intrinsic pontine glioma. External validation will be performed on data recruited from independent collaborative centres. Final results will be available to the scientific community at the end of the project, ready for translation to other malignant solid tumours.


Subjects
Artificial Intelligence , Biomarkers/analysis , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/therapy , Glioma/diagnostic imaging , Glioma/therapy , Neuroblastoma/diagnostic imaging , Neuroblastoma/therapy , Child , Cloud Computing , Decision Support Techniques , Disease Progression , Europe , Female , Humans , Male , Phenotype , Prognosis , Tumor Burden
2.
BMC Bioinformatics ; 20(Suppl 6): 579, 2019 Dec 10.
Article in English | MEDLINE | ID: mdl-31823716

ABSTRACT

BACKGROUND: In recent years, the study of immune response behaviour using a bottom-up approach, Agent-Based Modeling (ABM), has attracted considerable effort. The ABM approach is a very common technique in the biological domain due to the high demand for large-scale analysis tools for the collection and interpretation of information to solve biological problems. Simulating massive multi-agent systems (i.e. simulations containing a large number of agents/entities) requires major computational effort, which is only achievable through the use of parallel computing approaches. RESULTS: This paper explores different approaches to parallelising the key component of biological and immune system models within an ABM model: pairwise interactions. The focus of this paper is on the performance and algorithmic design choices of cell interactions in continuous and discrete space, where agents/entities compete to interact with one another within a parallel environment. CONCLUSIONS: Our performance results demonstrate the applicability of these methods to a broader class of biological systems exhibiting typical cell-to-cell interactions. The advantages and disadvantages of each implementation are discussed, showing that each can be used as the basis for developing complete immune system models on parallel hardware.
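The pairwise-interaction kernel discussed above can be illustrated with the standard uniform-grid binning technique often used for discrete-space cell interactions. This is a minimal serial sketch only; the function names are hypothetical and this is not the paper's parallel implementation:

```python
import math
from collections import defaultdict

def build_bins(positions, radius):
    """Hash each agent position into a uniform grid cell of side = interaction radius."""
    bins = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        bins[(int(x // radius), int(y // radius))].append(i)
    return bins

def pairwise_interactions(positions, radius):
    """Return index pairs of agents closer than `radius`, checking only the
    3x3 neighbourhood of grid cells instead of all O(n^2) pairs."""
    bins = build_bins(positions, radius)
    pairs = set()
    for (cx, cy), members in bins.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in bins.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j and math.dist(positions[i], positions[j]) < radius:
                            pairs.add((i, j))
    return pairs
```

In a parallel setting each grid cell (or block of cells) becomes an independent unit of work, which is what makes this decomposition attractive for the GPU and multi-core implementations the paper compares.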


Subjects
Computer Simulation , Immune System , Models, Immunological , Algorithms , Humans , Systems Biology
3.
Front Neuroinform ; 13: 19, 2019.
Article in English | MEDLINE | ID: mdl-31001102

ABSTRACT

In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
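The hypergraph formulation above can be made concrete with the "connectivity minus one" cut metric that hypergraph partitioners conventionally minimise. This is a toy sketch under the assumption that each hyperedge is a presynaptic neuron together with its postsynaptic targets; the names are hypothetical, not the paper's code:

```python
def communication_cost(hyperedges, part):
    """Each hyperedge = a presynaptic neuron plus its postsynaptic targets.
    A hyperedge spanning k distinct compute nodes forces k-1 remote sends per
    spike; summing (k-1) over all hyperedges gives the interprocess traffic
    the partitioner tries to minimise (the 'connectivity - 1' objective)."""
    cost = 0
    for edge in hyperedges:
        # Number of distinct partitions touched by this hyperedge.
        cost += len({part[v] for v in edge}) - 1
    return cost
```

For example, placing a neuron and all of its targets on one node contributes zero cost, while scattering the targets across nodes raises the cost, which is exactly the sparsity-of-communication effect the paper exploits: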

4.
Appl Ergon ; 74: 48-54, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30487109

ABSTRACT

BACKGROUND: Predicting the energy costs of human travel over snow can be of significant value to the military and other agencies planning work efforts when snow is present. The ability to quantify, and predict, those costs can help planners determine if snow will be a factor in the execution of dismounted tasks and operations. To adjust predictive models for the effect of terrain, and more specifically of surface conditions, on energy costs, terrain coefficients (η) have been developed. The physiological demands of foot travel over snow have been studied previously, and there are well-established methods of predicting the metabolic costs of locomotion. By applying knowledge gained from prior studies of the effects of terrain and snow, and by leveraging those existing dismounted locomotion models, this paper seeks to outline the steps in developing an improved terrain coefficient (η) for snow to be used in predictive modeling. METHODS: Using published data, methods, and a well-informed understanding of the physical elements of terrain, e.g., characterization of snow sinkage (z), this study made adjustments to η-values specific to snow. RESULTS: This review of published metabolic cost methods suggests that an improved η-value could be developed for use with the Pandolf equation, where z = depth (h)*(1 - (snow density (ρ0)/1.186)) and η = 0.0005z³ + 0.0001z² + 0.1072z + 1.2604. CONCLUSION: While the complexity of variables related to characteristics of snow, speed of movement, and individuals confounds efforts to develop a simple predictive model, this paper provides data-driven improvements to models that are used to predict the energy costs of dismounted movements over snow.
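The reported fit can be expressed directly in code. This sketch assumes depth in cm and snow density ρ0 in g/cm³, units the abstract does not state explicitly, and the function name is hypothetical:

```python
def snow_terrain_coefficient(depth_cm, snow_density):
    """Terrain coefficient for snow from the review's fitted polynomial.
    depth_cm: snow depth h; snow_density: rho_0.
    Sinkage z = h * (1 - rho_0 / 1.186);
    eta = 0.0005 z^3 + 0.0001 z^2 + 0.1072 z + 1.2604."""
    z = depth_cm * (1 - snow_density / 1.186)
    return 0.0005 * z**3 + 0.0001 * z**2 + 0.1072 * z + 1.2604
```

At zero sinkage the polynomial reduces to η = 1.2604, and η grows with deeper, lighter snow, consistent with the higher energy cost of breaking trail.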


Subjects
Behavioral Sciences/methods , Energy Metabolism , Forecasting/methods , Snow , Walking/physiology , Humans , Locomotion
5.
Front Neuroinform ; 12: 68, 2018.
Article in English | MEDLINE | ID: mdl-30455637

ABSTRACT

Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the scale of detail, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
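The code-generation route described above, translating a high-level mathematical description into executable code, can be illustrated with a toy pipeline. This is purely illustrative and does not represent any of the surveyed tools; the equation string and names are invented:

```python
def generate_stepper(rhs_expr, dt):
    """Generate a Python time-stepper from a right-hand-side expression given
    as a string, i.e. dv/dt = rhs_expr. The high-level description is the
    string; the 'generated low-level code' is the compiled function."""
    src = f"def step(v, I):\n    return v + {dt} * ({rhs_expr})\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"]

# A leaky integrate-and-fire membrane update, dv/dt = (I - v) / tau with tau = 10:
step = generate_stepper("(I - v) / 10.0", dt=1.0)
```

Real pipelines emit C, C++ or CUDA rather than Python and perform symbolic checks on the equations first, but the shape of the transformation is the same.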

6.
Phys Rev Lett ; 113(14): 141601, 2014 Oct 03.
Article in English | MEDLINE | ID: mdl-25325628

ABSTRACT

We present the gravity dual of large N supersymmetric gauge theories on a squashed five-sphere. The one-parameter family of solutions is constructed in Euclidean Romans F(4) gauged supergravity in six dimensions, and uplifts to massive type IIA supergravity. By renormalizing the theory with appropriate counterterms we evaluate the renormalized on-shell action for the solutions. We also evaluate the large N limit of the gauge theory partition function, and find precise agreement.

7.
Neuroinformatics ; 12(2): 307-23, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24253973

ABSTRACT

A declarative extensible markup language (SpineML) for describing the dynamics, network and experiments of large-scale spiking neural network simulations is described, which builds upon the NineML standard. It utilises a level of abstraction which targets point neuron representation but addresses the limitations of existing tools by allowing arbitrary dynamics to be expressed. The use of XML promotes model sharing, is human readable, and allows collaborative working. The syntax uses a high-level, self-explanatory format which allows straightforward code generation or translation of a model description to a native simulator format. This paper demonstrates the use of code generation to translate, simulate and reproduce the results of a benchmark model across a range of simulators. The flexibility of the SpineML syntax is highlighted by reproducing a pre-existing, biologically constrained model of a neural microcircuit (the striatum). The SpineML code is open source and is available at http://bimpa.group.shef.ac.uk/SpineML.
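The XML-to-simulator translation step can be sketched as follows. The element and attribute names below are illustrative only, not the actual SpineML schema, and the compiler is a toy stand-in for the real code-generation tooling:

```python
import xml.etree.ElementTree as ET

# A miniature declarative component in the spirit of SpineML;
# the schema here is invented for illustration.
MODEL = """
<ComponentClass name="LIF">
  <Parameter name="tau" value="20.0"/>
  <Parameter name="v_rest" value="-70.0"/>
  <TimeDerivative variable="v" expression="(v_rest - v) / tau"/>
</ComponentClass>
"""

def compile_component(xml_text, dt):
    """Translate the declarative description into an executable Euler step,
    mirroring the route from model XML to native simulator code."""
    root = ET.fromstring(xml_text)
    params = {p.get("name"): float(p.get("value")) for p in root.findall("Parameter")}
    deriv = root.find("TimeDerivative")
    expr, var = deriv.get("expression"), deriv.get("variable")
    def step(state):
        scope = dict(params)
        scope.update(state)
        state[var] = state[var] + dt * eval(expr, {}, scope)
        return state
    return step
```

Because the dynamics are declared rather than coded, the same XML can be retargeted to different simulators by swapping the back end of `compile_component`.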


Subjects
Computer Simulation , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Humans
8.
PLoS One ; 6(5): e18539, 2011 May 04.
Article in English | MEDLINE | ID: mdl-21572529

ABSTRACT

High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule demonstrate the best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x to 42x is achieved versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
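The "democratic" population vector readout described above can be sketched in a few lines (a minimal illustration with hypothetical names, not the paper's GPU routines):

```python
import math

def population_vector_readout(rates, preferred_angles):
    """'Democratic' decision: every neuron votes for its preferred direction,
    weighted by its firing rate; the decoded angle is the direction of the
    resulting vector sum."""
    x = sum(r * math.cos(a) for r, a in zip(rates, preferred_angles))
    y = sum(r * math.sin(a) for r, a in zip(rates, preferred_angles))
    return math.atan2(y, x)
```

Since every neuron contributes independently of the others, the readout needs no recurrent dynamics, which is what distinguishes it from the bump-attractor ("non-democratic") mechanism compared in the paper.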


Subjects
Computer Graphics , Algorithms
9.
Brief Bioinform ; 11(3): 334-47, 2010 May.
Article in English | MEDLINE | ID: mdl-20123941

ABSTRACT

Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.


Subjects
Cell Physiological Phenomena , Computer Graphics , Computer Simulation , Models, Biological , Software , User-Computer Interface , Algorithms , Systems Integration
10.
Org Biomol Chem ; 1(12): 2137-47, 2003 Jun 21.
Article in English | MEDLINE | ID: mdl-12945904

ABSTRACT

Perfluoro-4-isopropylpyridine was used as a building block for the two-step synthesis of a variety of macrocyclic systems bearing pyridine sub-units, which were characterised by X-ray crystallography. Electrospray mass spectrometry revealed that complexation of either cations or, unusually, anions is possible depending on the structure of the macrocycle.


Subjects
Heterocyclic Compounds/chemistry , Hydrocarbons, Fluorinated/chemistry , Pyridines/chemistry , Anions/chemistry , Cations/chemistry , Crystallography, X-Ray , Halogens/chemistry , Heterocyclic Compounds/chemical synthesis , Hydrocarbons, Fluorinated/chemical synthesis , Magnetic Resonance Spectroscopy , Molecular Structure , Pyridines/chemical synthesis , Spectrometry, Mass, Electrospray Ionization