Results 1 - 7 of 7
1.
Philos Trans A Math Phys Eng Sci ; 378(2166): 20190054, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-31955675

ABSTRACT

This paper reviews some of the challenges posed by the huge growth of experimental data generated by the new generation of large-scale experiments at UK national facilities at the Rutherford Appleton Laboratory (RAL) site at Harwell near Oxford. Such 'Big Scientific Data' comes from the Diamond Light Source and Electron Microscopy Facilities, the ISIS Neutron and Muon Facility and the UK's Central Laser Facility. Increasingly, scientists are now required to use advanced machine learning and other AI technologies both to automate parts of the data pipeline and to help make new scientific discoveries from the analysis of their data. For commercially important applications, such as object recognition, natural language processing and automatic translation, deep learning has made dramatic breakthroughs. Google's DeepMind has now used deep learning technology to develop its AlphaFold tool for protein-folding prediction and, remarkably, has achieved some spectacular results for this specific scientific problem. Can deep learning be similarly transformative for other scientific problems? After a brief review of some initial applications of machine learning at the RAL, we focus on challenges and opportunities for AI in advancing materials science. Finally, we discuss the importance of developing some realistic machine learning benchmarks using Big Scientific Data from several different scientific domains. We conclude with some initial examples of our 'scientific machine learning' benchmark suite and of the research challenges these benchmarks will enable. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

2.
Article in English | MEDLINE | ID: mdl-31976892

ABSTRACT

In general, developing adequately complex mathematical models, such as deep neural networks, can be an effective way to improve the accuracy of learning models. However, this is achieved at the cost of reduced post-hoc model interpretability, because what is learned by the model can become less intelligible and tractable to humans as the model complexity increases. In this paper, we target a similarity learning task in the context of image retrieval, with a focus on the model interpretability issue. We propose an effective similarity neural network (SNN) that not only delivers robust retrieval performance but also achieves satisfactory post-hoc interpretability. The network is designed by linking the neuron architecture with the organization of a concept tree and by formulating neuron operations to pass similarity information between concepts. Various ways of understanding and visualizing what is learned by the SNN neurons are proposed. We also exhaustively evaluate the proposed approach on a number of relevant datasets, showing that it offers superior performance compared with state-of-the-art approaches. Neuron visualization results are presented to support the understanding of the trained neurons.
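
The abstract describes the SNN design only at a high level. As a rough sketch of the general idea of passing similarity information along a concept hierarchy, the NumPy fragment below compares two feature vectors leaf by leaf and combines the scores at internal nodes. The tree layout, the cosine leaf measure, the concept names and the fixed combination weights are all illustrative placeholders, not the paper's learned architecture.

```python
import numpy as np

def leaf_similarity(f1, f2):
    # Cosine similarity between matching feature slices of two images.
    denom = np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12
    return float(f1 @ f2) / denom

def tree_similarity(node, feats1, feats2, weights):
    # Leaves compare one feature slice; internal nodes combine their
    # children's similarity scores with per-concept weights.
    if "slice" in node:
        s = node["slice"]
        return leaf_similarity(feats1[s], feats2[s])
    child_scores = [tree_similarity(c, feats1, feats2, weights)
                    for c in node["children"]]
    return float(np.dot(weights[node["name"]], child_scores))

# Hypothetical two-level concept tree over an 8-dimensional feature vector.
tree = {"name": "root", "children": [
    {"name": "colour", "slice": slice(0, 4)},
    {"name": "shape", "slice": slice(4, 8)},
]}
weights = {"root": np.array([0.6, 0.4])}

a, b = np.random.default_rng(0).random((2, 8))
print(tree_similarity(tree, a, b, weights))  # overall similarity score
```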

3.
Sci Total Environ ; 654: 164-176, 2019 Mar 01.
Article in English | MEDLINE | ID: mdl-30448653

ABSTRACT

Optimizing the effectiveness of early wildfire detection systems is of significant interest to the community. To this end, watchtower-based wildfire observation continues to be practical, often in conjunction with state-of-the-art technologies such as automated vision systems and sensor networks. One of the major challenges the community faces is the optimal expansion of existing systems, particularly in multiple stages, due to various practical, political and financial constraints. Incrementally expanding a watchtower network while preserving, or making only minimal changes to, an existing system is a challenging task, particularly under coverage and financial constraints. Conventionally, this problem has been treated as a multi-objective optimization problem, and currently employed methods predominantly re-solve the full optimization problem at every step of the expansion process. In this paper, for the first time, we propose an alternative approach that treats the expansion as a submodular set-function maximization problem. After theoretically proving that the expansion problem is indeed a submodular set-function maximization problem, we provide four different models and matching algorithms to handle the various cases that arise during incremental expansion. Our evaluation of the proposed approach on a practical dataset from a forest park in China, namely the NanJing forest park, shows that our algorithms can provide excellent coverage by integrating visibility analysis and location allocation while meeting stringent budgetary requirements. The proposed approach can be adapted to other regions and countries.
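
The four models and matching algorithms are not spelled out in the abstract. As a hedged illustration of why submodularity matters here, the sketch below applies the textbook greedy heuristic for monotone submodular coverage maximization under a budget; the candidate towers, coverage sets and costs are toy placeholders, and this is not the paper's algorithm. The classic (1 - 1/e) approximation guarantee for greedy selection holds in the unit-cost, cardinality-constrained case.

```python
def greedy_expand(candidates, already_covered, cost, budget):
    """Greedily add towers with the best marginal coverage per unit cost.

    candidates:      dict tower_id -> set of forest cells it can see
    already_covered: cells covered by the existing system
    cost:            dict tower_id -> build cost
    budget:          funds available for this expansion stage
    """
    chosen, covered = [], set(already_covered)
    remaining = dict(candidates)
    while remaining:
        # Marginal gain per unit cost for each still-affordable candidate.
        scored = [(len(cells - covered) / cost[t], t)
                  for t, cells in remaining.items() if cost[t] <= budget]
        if not scored or max(scored)[0] == 0:
            break
        _, best = max(scored)
        chosen.append(best)
        covered |= remaining.pop(best)
        budget -= cost[best]
    return chosen, covered

# Toy instance: ten forest cells, three candidate towers.
cands = {"A": {0, 1, 2, 3}, "B": {3, 4, 5}, "C": {6, 7, 8, 9}}
costs = {"A": 2.0, "B": 1.0, "C": 2.5}
print(greedy_expand(cands, set(), costs, budget=4.0))
# -> (['B', 'C'], {3, 4, 5, 6, 7, 8, 9})
```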

4.
EURASIP J Adv Signal Process ; 2017(1): 71, 2017.
Article in English | MEDLINE | ID: mdl-32010202

ABSTRACT

Particle filtering is a numerical Bayesian technique that has great potential for solving sequential estimation problems involving non-linear and non-Gaussian models. Since the estimation accuracy achieved by particle filters improves as the number of particles increases, it is natural to consider as many particles as possible. MapReduce is a generic programming model that makes it possible to scale a wide variety of algorithms to big data. However, despite the application of particle filters across many domains, little attention has been devoted to implementing particle filters using MapReduce. In this paper, we describe an implementation of a particle filter using MapReduce. We focus on the component that would otherwise be a bottleneck to parallel execution: the resampling component. We devise a new implementation of this component which requires no approximations, has O(N) spatial complexity and deterministic O((log N)^2) time complexity. Results demonstrate the utility of this new component and culminate in consideration of a particle filter with 2^24 particles distributed across 512 processor cores.
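
For readers unfamiliar with the bottleneck in question, the following is a standard single-node sketch of systematic resampling, the O(N) sequential step that the paper distributes; it is shown only to make the component concrete and is not the paper's exact MapReduce formulation.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Indices of surviving particles: one comb of N evenly spaced
    pointers swept over the cumulative weight distribution."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                 # guard against rounding error
    return np.searchsorted(cumulative, positions)

# Toy usage: 8 particles; the dominant weight gets copied several times.
w = np.array([0.02, 0.02, 0.02, 0.80, 0.04, 0.04, 0.03, 0.03])
print(systematic_resample(w / w.sum()))
```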

5.
IEEE Trans Vis Comput Graph ; 21(8): 980-93, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26357260

ABSTRACT

The existing efforts in computer-assisted semen analysis have focused on high-speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at regular intervals and to summarize spatiotemporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both the biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve both as a means of external memorization of video data and as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observations in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.
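
The paper's 20-channel glyph design is not reproduced in the abstract. As a minimal, purely illustrative sketch of the underlying idea, the matplotlib fragment below draws simple star glyphs in which each measurement sets the radius along one spoke, so one small static shape summarizes a multivariate sample; the variables and layout are hypothetical, not the paper's design.

```python
import numpy as np
import matplotlib.pyplot as plt

def star_glyph(ax, values, center=(0, 0), scale=1.0):
    """Draw one closed star glyph for a vector of values in [0, 1]."""
    k = len(values)
    angles = np.linspace(0, 2 * np.pi, k, endpoint=False)
    xs = center[0] + scale * np.asarray(values) * np.cos(angles)
    ys = center[1] + scale * np.asarray(values) * np.sin(angles)
    ax.fill(np.append(xs, xs[0]), np.append(ys, ys[0]), alpha=0.4)

fig, ax = plt.subplots()
rng = np.random.default_rng(0)
for i in range(4):                  # one glyph per (toy) sperm cell
    star_glyph(ax, rng.random(8), center=(i * 2.5, 0))
ax.set_aspect("equal")
plt.show()
```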


Subject(s)
Computer Graphics; Image Interpretation, Computer-Assisted/methods; Semen Analysis/methods; Video Recording/methods; Algorithms; Humans; Male; Multivariate Analysis; Sperm Motility/physiology; Sperm Tail/physiology
6.
Prog Biophys Mol Biol ; 115(2-3): 349-58, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25091538

ABSTRACT

Cardiovascular Magnetic Resonance (CMR) imaging is an essential technique for measuring regional myocardial function. However, interpreting, identifying and comparing various motion characteristics by watching CMR imagery is a time-consuming and cognitively demanding task. In this work, we focus on the problems of visualizing imagery resulting from 2D myocardial tagging in CMR. In particular, we provide an overview of the current state of the art of relevant visualization techniques and a discussion of why the problem is difficult from a perceptual perspective. Finally, we introduce a proof-of-concept multilayered visualization user interface for visualizing CMR data using multiple derived attributes encoded into multivariate glyphs. An initial evaluation of the system by clinicians suggested great potential for this visualization technology to become part of clinical practice in the future.


Subject(s)
Heart/anatomy & histology; Heart/physiology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging, Cine/methods; Myocardial Contraction/physiology; Animals; Humans; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted; Spatio-Temporal Analysis; Subtraction Technique; User-Computer Interface
7.
Int J Biomed Imaging ; 2011: 137604, 2011.
Article in English | MEDLINE | ID: mdl-21869880

ABSTRACT

Images are ubiquitous in biomedical applications, from basic research to clinical practice. With the rapid increase in image resolution and dimensionality, and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this end, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time performance. We also show how the architectural peculiarities of these devices can best be exploited to the benefit of such algorithms, specifically by addressing the challenges related to memory access patterns and the different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results.
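
The enhanced motion estimation algorithm itself is not given in the abstract. As a hedged CPU reference for the kind of kernel being mapped to a GPU, the sketch below performs plain block matching by sum of absolute differences; the block size, search window and test data are illustrative, not the paper's. The nested data-parallel loops and the heavy reuse of the reference block are exactly what GPU shared memory and coalesced access patterns would exploit.

```python
import numpy as np

def block_motion(prev, curr, y, x, block=8, search=4):
    """Best (dy, dx) displacement for the block at (y, x) of `curr`,
    found by exhaustive search over a window in `prev`."""
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0:           # window left the image
                continue
            cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
            if cand.shape != ref.shape:
                continue
            sad = np.abs(cand - ref).sum()  # sum of absolute differences
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
b = np.roll(a, (2, -1), axis=(0, 1))       # frame shifted by a known motion
print(block_motion(a, b, 16, 16))           # expect (-2, 1)
```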
