Results 1 - 14 of 14
1.
Ecol Lett ; 27(3): e14408, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38504459

ABSTRACT

Although plant-soil feedback (PSF) is increasingly recognized as an important driver of plant recruitment, our understanding of its role in species coexistence in natural communities remains limited by the scarcity of experimental studies on multispecies assemblages. Here, we experimentally estimated PSFs affecting seedling recruitment in 10 co-occurring Mediterranean woody species. We estimated weak but significant species-specific feedbacks. Pairwise PSFs impose similarly strong fitness differences and stabilizing-destabilizing forces, most often impeding species coexistence. Moreover, a model of community dynamics driven exclusively by PSFs suggests that few species would coexist stably, with the largest stable assemblage containing no more than six species. Thus, PSFs alone do not suffice to explain coexistence in the studied community. A topological analysis of all subcommunities in the interaction network shows that full intransitivity (with all species involved in an intransitive loop) would be rare but, where it occurs, would lead to species coexistence through either stable or cyclic dynamics.
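
To make the topological analysis above concrete, the sketch below (Python with networkx; the species and pairwise outcomes are entirely hypothetical, not the authors' data or code) checks a directed "dominance" network for intransitive loops:

# Hypothetical illustration: detect intransitive loops in a pairwise-dominance
# graph derived from plant-soil feedback outcomes (made-up species and edges).
import networkx as nx

# Edge (a, b) means species a is predicted to exclude species b via PSF.
dominance = nx.DiGraph([
    ("Quercus", "Pistacia"), ("Pistacia", "Cistus"), ("Cistus", "Quercus"),  # a 3-cycle
    ("Quercus", "Rosmarinus"), ("Pistacia", "Rosmarinus"),
])

loops = [c for c in nx.simple_cycles(dominance) if len(c) >= 3]
print("intransitive loops:", loops)  # e.g. [['Quercus', 'Pistacia', 'Cistus']]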


Subjects
Ecosystem, Soil, Feedback, Plants, Wood
2.
Sensors (Basel) ; 15(9): 23763-87, 2015 Sep 18.
Article in English | MEDLINE | ID: mdl-26393597

ABSTRACT

In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory accesses directly affects the running time of labeling algorithms, the proposed algorithm aims to minimize neighborhood operations. It takes a block-based view of the image and uses a raster scan to select only the pixels required by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connectivity relationships using two procedures based on binary decision trees to reduce unnecessary memory accesses. This greatly simplifies the pixel locations covered by the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We compare the labeling performance of the proposed algorithm with that of other labeling algorithms using high-resolution images and foreground images. Experimental results on synthetic and real image datasets demonstrate that the proposed algorithm is faster than the other methods.
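
As background for readers unfamiliar with labeling algorithms, here is a minimal two-pass, union-find labeling routine in Python/NumPy (4-connectivity). It is only a baseline illustrating the per-pixel neighborhood accesses that block-based scan masks are designed to reduce, not the authors' block-based algorithm:

# Baseline two-pass connected-component labeling (4-connectivity) with union-find.
import numpy as np

def two_pass_label(img):
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    labels = np.zeros(img.shape, dtype=int)
    nxt = 1
    h, w = img.shape
    for y in range(h):               # first pass: provisional labels
        for x in range(w):
            if not img[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                labels[y, x] = nxt
                parent[nxt] = nxt
                nxt += 1
            else:
                labels[y, x] = min(l for l in (up, left) if l > 0)
                if up and left:
                    union(up, left)  # record the label equivalence
    for y in range(h):               # second pass: resolve equivalences
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

print(two_pass_label(np.array([[1, 1, 0], [0, 0, 1], [1, 0, 1]])))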

3.
Big Data ; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38285477

ABSTRACT

Owing to the increasing size of real-world networks, processing them with classical techniques has become infeasible. The storage and CPU time required to process large networks are far beyond the capabilities of a high-end computing machine. Moreover, real-world network data are generally distributed in nature because they are collected and stored on distributed platforms. This has popularized the use of MapReduce, a distributed data-processing framework, for analyzing real-world network data. Existing MapReduce-based methods for connected-component detection struggle to minimize the number of MapReduce rounds and the amount of data generated and forwarded to subsequent rounds. This article presents an efficient MapReduce-based approach for finding connected components that does not forward the complete set of connected components to subsequent rounds; instead, it writes them to the Hadoop Distributed File System as soon as they are found, reducing the amount of data passed on. It also presents an application of the proposed method to contact tracing. The proposed method is evaluated on several network datasets and compared with two state-of-the-art methods. The empirical results show that it performs significantly better and scales to finding connected components in large-scale networks.
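
The general flavour of MapReduce-style connected-component detection can be sketched as iterative min-label propagation. The plain-Python stand-in below mimics the map and reduce phases of one round; it is not the authors' method (which additionally avoids forwarding completed components):

# Schematic of min-label propagation for connected components, phrased as
# map/reduce rounds (plain Python stand-in for an actual MapReduce job).
from collections import defaultdict

def one_round(labels, edges):
    # map: each edge emits the smaller endpoint label to both endpoints
    emitted = defaultdict(list)
    for u, v in edges:
        m = min(labels[u], labels[v])
        emitted[u].append(m)
        emitted[v].append(m)
    # reduce: each node keeps the minimum label it has seen
    new_labels = {n: min([labels[n]] + emitted.get(n, [])) for n in labels}
    return new_labels, new_labels != labels

edges = [(1, 2), (2, 3), (5, 6)]
labels = {n: n for n in {u for e in edges for u in e}}
changed = True
while changed:                      # each iteration = one MapReduce round
    labels, changed = one_round(labels, edges)
print(labels)                       # {1: 1, 2: 1, 3: 1, 5: 5, 6: 5}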

4.
J Med Imaging (Bellingham) ; 11(4): 044002, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988992

ABSTRACT

Purpose: Deep learning is the standard approach for medical image segmentation. However, it may encounter difficulties when the training set is small, and it may generate anatomically aberrant segmentations. Anatomical knowledge can therefore be useful as a constraint in deep learning segmentation methods. We propose a loss function based on projected pooling to introduce soft topological constraints. Our main application is the segmentation of the red nucleus from quantitative susceptibility mapping (QSM), which is of interest in parkinsonian syndromes. Approach: The new loss function introduces soft constraints on the topology by magnifying small parts of the structure to segment, preventing them from being discarded in the segmentation process. To that end, we project the structure onto the three planes and then apply a series of MaxPooling operations with increasing kernel sizes. These operations are performed on both the ground truth and the prediction, and the difference between the two is used as the loss function. As a result, the loss can reduce topological errors as well as defects in the structure boundary. The approach is easy to implement and computationally efficient. Results: When applied to the segmentation of the red nucleus from QSM data, the approach led to very high accuracy (Dice 89.9%) and no topological errors. Moreover, the proposed loss function improved the Dice accuracy over the baseline when the training set was small. We also studied three tasks from the medical segmentation decathlon (MSD) challenge (heart, spleen, and hippocampus). For the MSD tasks, Dice accuracies were similar for both approaches, but topological errors were reduced. Conclusions: We propose an effective method to automatically segment the red nucleus, based on a new loss function for introducing topological constraints in deep learning segmentation.
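
A rough sketch of the projected-pooling idea as described in the abstract is given below in PyTorch; the tensor layout, kernel sizes, and the L1 comparison are assumptions rather than the authors' released implementation:

# Rough sketch of a projected-pooling style loss: project 3D masks onto the
# three planes, then compare max-pooled versions at increasing kernel sizes.
import torch
import torch.nn.functional as F

def projected_pooling_loss(pred, target, kernel_sizes=(2, 4, 8, 16)):
    # pred, target: (batch, depth, height, width) soft masks in [0, 1]
    loss = 0.0
    for axis in (1, 2, 3):                      # project onto the three planes
        p = pred.amax(dim=axis)
        t = target.amax(dim=axis)
        p, t = p.unsqueeze(1), t.unsqueeze(1)   # add channel dim for max_pool2d
        for k in kernel_sizes:                  # magnify small structures
            loss = loss + F.l1_loss(F.max_pool2d(p, k), F.max_pool2d(t, k))
    return loss

pred = torch.rand(2, 32, 64, 64, requires_grad=True)
target = (torch.rand(2, 32, 64, 64) > 0.5).float()
print(projected_pooling_loss(pred, target))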

5.
Int J Comput Assist Radiol Surg ; 19(2): 241-251, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37540449

ABSTRACT

PURPOSE: Radiological follow-up of oncology patients requires quantitative analysis of lesion changes in longitudinal imaging studies, which is time-consuming, requires expertise, and is subject to variability. This paper presents a comprehensive graph-based method for the automatic detection and classification of lesion changes in current and prior CT scans. METHODS: The inputs are the current and prior CT scans and their organ and lesion segmentations. Classification of lesion changes is formalized as bipartite graph matching, where lesion pairings are computed by adaptive overlap-based lesion matching. Six types of lesion changes are computed by connected-components analysis. The method was evaluated on 208 pairs of lung and liver CT scans from 57 patients with 4600 lesions, 1713 lesion matchings, and 2887 lesion changes. Ground-truth lesion segmentations, lesion matchings, and lesion changes were created by an expert radiologist. RESULTS: Our method yields lesion matching rate accuracies of 99.7% (394/395) and 95.0% (1252/1318) for the lung and liver datasets. For the detection of lesion changes, precision is >0.99, and recall is 0.94 and 0.95, respectively. The analysis of lesion changes helped the radiologist detect 48 missed lesions and 8 spurious lesions in the input ground-truth lesion datasets. CONCLUSION: The classification of lesion changes provides the clinician with a readily accessible and intuitive identification of the lesion changes and their patterns in support of clinical decision making. Comprehensive automatic computer-aided lesion matching and analysis of lesion changes may improve quantitative follow-up and evaluation of disease status, assessment of treatment efficacy, and response to therapy.
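
A toy sketch of overlap-based lesion matching and change classification is shown below (Python with NumPy and networkx); the overlap rule and change categories are illustrative assumptions, not the paper's exact algorithm:

# Toy sketch of overlap-based lesion matching between a prior and a current
# labelled scan, with change types read off a bipartite pairing graph.
import numpy as np
import networkx as nx

prior = np.array([[1, 1, 0, 0], [0, 0, 0, 2]])     # labelled prior lesions
current = np.array([[3, 3, 3, 0], [0, 0, 0, 0]])   # labelled current lesions

G = nx.Graph()
for p in np.unique(prior[prior > 0]):
    for c in np.unique(current[current > 0]):
        overlap = np.sum((prior == p) & (current == c))
        if overlap > 0:                            # adaptive threshold in the paper
            G.add_edge(("prior", p), ("current", c), overlap=int(overlap))
G.add_nodes_from([("prior", p) for p in np.unique(prior[prior > 0])])
G.add_nodes_from([("current", c) for c in np.unique(current[current > 0])])

for comp in nx.connected_components(G):            # classify each matched group
    prior_n = sum(1 for t, _ in comp if t == "prior")
    curr_n = sum(1 for t, _ in comp if t == "current")
    kind = ("disappeared" if curr_n == 0 else "new" if prior_n == 0
            else "matched" if prior_n == curr_n == 1 else "merged/split")
    print(sorted(comp), "->", kind)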


Subjects
Algorithms, Liver Neoplasms, Humans, Follow-Up Studies, Tomography, X-Ray Computed/methods, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology
6.
Sci Total Environ ; 912: 169119, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38070559

ABSTRACT

Both droughts and tropical cyclones (TCs) are among the world's most widespread natural disasters. This paper focuses on the effects of TCs on the links between meteorological droughts (MDs) and agricultural droughts (ADs). Specifically, changes in the characteristics of drought events and variations in the propagation features of matched MD and AD event pairs are quantified using the well-known three-dimensional connected-components algorithm; both the alleviation and exacerbation effects of TCs are evaluated; and Spearman's correlation is employed to identify potential contributors to exacerbated droughts after TCs. The results show that TCs exhibit more pronounced and widespread alleviation effects on MD events than on AD events. More than 98 % of small-scale drought events are terminated by TCs, leading to a 65 % reduction in the total area of MD events smaller than 50,000 km2 and a 32 % reduction in AD events of the same scale. Meanwhile, TCs can reshape the spatiotemporal links between MDs and ADs by reducing the overall propagation rate from 77 % to 40 % and by ameliorating, by more than 40 %, the characteristics of drought event pairs with higher propagation efficiency. After TCs, over 55 % of drought exacerbations in TC-affected regions occur first in the vicinity of residual large-scale AD events. This is partially associated with the reduction in moisture exported from these residual droughts downwind into the interior of TC-affected regions, a process potentially facilitated by TC-induced cooling. The in-depth evaluation in this paper provides useful information for better drought preparation and mitigation under TCs.
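
A minimal sketch of the three-dimensional connected-components step, applied to a synthetic (time, lat, lon) drought mask with SciPy, is shown below; it only illustrates the event-tracking idea, not the paper's full workflow:

# Minimal sketch: track drought events as 3D connected components through
# (time, lat, lon) space; the data here are synthetic.
import numpy as np
from scipy import ndimage

# Boolean cube: True where a drought index falls below its threshold.
drought = np.random.default_rng(0).random((12, 20, 20)) < 0.15

structure = ndimage.generate_binary_structure(3, 1)   # 6-connectivity in 3D
events, n_events = ndimage.label(drought, structure=structure)

sizes = ndimage.sum(drought, events, index=range(1, n_events + 1))
print(n_events, "events; largest spans", int(max(sizes)), "grid cells")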

7.
J Imaging ; 8(4)2022 Mar 24.
Article in English | MEDLINE | ID: mdl-35448215

ABSTRACT

The Union-Retire CCA (UR-CCA) algorithm introduced a new paradigm for connected components analysis. Instead of using directed tree structures, UR-CCA focuses on connectivity. This algorithmic change leads to a reduction in required memory, with no end-of-row processing overhead. In this paper, we describe a hardware architecture based on UR-CCA and its realisation on an FPGA. The memory bandwidth and pipelining challenges of hardware UR-CCA are analysed and resolved. We show that up to 36% of memory resources can be saved using the proposed architecture, which translates directly to a smaller device for an FPGA implementation.

8.
R Soc Open Sci ; 8(3): 201784, 2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33959340

ABSTRACT

Sequential region labelling, also known as connected components labelling, is a standard image segmentation problem that joins contiguous foreground pixels into blobs. Despite its long development history and widespread use across diverse domains such as bone biology, materials science and geology, connected components labelling can still form a bottleneck in image processing pipelines. Here, I describe a multithreaded implementation of classical two-pass sequential region labelling and introduce an efficient collision resolution step, 'bucket fountain'. Code was validated on test images and against commercial software (Avizo). It was performance tested on images from 2 MB (161 particles) to 6.5 GB (437 508 particles) to determine whether theoretical linear scaling (O(n)) had been achieved, and on 1-40 CPU threads to measure speed improvements due to multithreading. The new implementation achieves linear scaling (b = 0.905-1.052, time ∝ pixels^b; R² = 0.985-0.996), which improves with increasing thread number up to 8-16 threads, suggesting that it is memory bandwidth limited. This new implementation of sequential region labelling reduces the time required from hours to a few tens of seconds for images of several GB, and is limited only by hardware scale. It is available open source and free of charge in BoneJ.
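
The strip-and-merge strategy behind multithreaded two-pass labelling can be sketched as follows (Python with SciPy and a thread pool; BoneJ's implementation is in Java, and the 'bucket fountain' collision resolution is not reproduced here):

# Python sketch of strip-and-merge multithreaded labelling: label horizontal
# strips in parallel, then merge labels across strip boundaries with union-find.
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy import ndimage

def label_strips(img, n_strips=4):
    bounds = np.linspace(0, img.shape[0], n_strips + 1, dtype=int)
    strips = [img[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
    with ThreadPoolExecutor() as pool:            # first pass: label each strip
        results = list(pool.map(ndimage.label, strips))

    labels = np.zeros(img.shape, dtype=int)
    offset = 0
    for (lab, n), a, b in zip(results, bounds[:-1], bounds[1:]):
        labels[a:b] = np.where(lab > 0, lab + offset, 0)
        offset += n

    parent = list(range(offset + 1))              # union-find over strip labels
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a in bounds[1:-1]:                        # merge across strip boundaries
        for u, v in zip(labels[a - 1], labels[a]):
            if u and v:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[rv] = ru
    return np.vectorize(lambda x: find(x) if x else 0)(labels)

img = (np.random.default_rng(1).random((16, 16)) > 0.6).astype(np.uint8)
print(np.unique(label_strips(img)).size - 1, "blobs")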

9.
Methods Mol Biol ; 2253: 113-135, 2021.
Article in English | MEDLINE | ID: mdl-33315221

ABSTRACT

In this chapter, we focus on topological measurements of adjacent amino acid networks for a dataset of oligomeric proteins and some of their subnetworks. The aim is to present a range of mathematical tools for understanding the protein structures implicitly encoded in such networks and subnetworks. We mainly investigate four important networks by computing the number of connected components, the degree distribution, and assortativity measures. We compare the results in order to show that the four networks have largely independent topologies.
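
For reference, the three measurements named above can be computed with networkx; the toy random graph below merely stands in for the amino acid networks themselves:

# Connected components, degree distribution, and assortativity with networkx;
# the Erdos-Renyi graph is a stand-in for a real amino acid contact network.
import networkx as nx

G = nx.erdos_renyi_graph(60, 0.05, seed=42)

print("connected components:", nx.number_connected_components(G))
print("degree distribution:", nx.degree_histogram(G))
print("degree assortativity:", round(nx.degree_assortativity_coefficient(G), 3))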


Subjects
Amino Acids/metabolism, Computational Biology/methods, Proteins/chemistry, Proteins/metabolism, Algorithms, Databases, Protein, Models, Molecular, Protein Conformation, Protein Interaction Maps
10.
Methods Mol Biol ; 2074: 233-262, 2020.
Article in English | MEDLINE | ID: mdl-31583642

ABSTRACT

We review TD-WGcluster (time-delayed weighted-edge clustering), a software tool that integrates static interaction networks with time-series data in order to detect modules of nodes between which information flows with similar time delays and intensities. The software advanced the state of the art in connected-component identification through its ability to handle directed, weighted graphs in which the attributes of the physical entities represented by nodes vary over time. This chapter aims to deepen the theoretical aspects of the clustering model implemented by TD-WGcluster that may be of greatest interest to the user. We show the instructions necessary to run the software through some exploratory cases and comment on the results obtained.


Subjects
Cluster Analysis, Algorithms, Software
11.
J R Soc Interface ; 16(151): 20180808, 2019 02 28.
Article in English | MEDLINE | ID: mdl-30958202

ABSTRACT

Self-sustaining autocatalytic networks play a central role in living systems, from metabolism at the origin of life, simple RNA networks, and the modern cell to ecology and cognition. A collectively autocatalytic network that can be sustained from an ambient food set is referred to more formally as a 'reflexively autocatalytic food-generated' (RAF) set. In this paper, we first investigate a simplified setting for studying RAFs, which is nevertheless relevant to real biochemistry and which allows an exact mathematical analysis based on graph-theoretic concepts. This, in turn, allows the development of efficient (polynomial-time) algorithms for questions that are computationally intractable (NP-hard) in the general RAF setting. We then show how this simplified setting leads naturally to a more general notion of RAFs that are 'generative' (they can be built up from simpler RAFs), and to which the efficient algorithms carry over. Finally, we show how classical RAF theory can be extended to deal with ensembles of catalysts as well as the assignment of rates to reactions according to which catalysts (or combinations of catalysts) are available.
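
A compact sketch of the standard maxRAF reduction from the RAF literature (repeatedly discarding reactions that the food-set closure cannot support) is given below with a made-up reaction set; it illustrates the classical setting only, not the generative extension introduced in the paper:

# Sketch of the classical maxRAF procedure: iteratively discard reactions whose
# reactants or catalysts are not available in the food-set closure.
def closure(food, reactions):
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= avail and not set(products) <= avail:
                avail |= set(products)
                changed = True
    return avail

def max_raf(food, reactions):
    current = list(reactions)
    while True:
        avail = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= avail and any(c in avail for c in r[2])]
        if len(kept) == len(current):
            return kept
        current = kept

# (reactants, products, catalysts); food set {a, b}
reactions = [(("a", "b"), ("c",), ("d", "c")),   # catalysed within the set
             (("c", "a"), ("d",), ("c",)),
             (("x",), ("y",), ("c",))]           # reactant x never producible
print(max_raf({"a", "b"}, reactions))            # keeps the first two reactions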


Subjects
Algorithms, Models, Biological, RNA/metabolism, Catalysis
12.
J Imaging ; 5(4)2019 Apr 06.
Article in English | MEDLINE | ID: mdl-34460483

ABSTRACT

Single-pass connected components analysis (CCA) algorithms suffer from a time overhead to resolve labels at the end of each image row. This work demonstrates how that overhead can be eliminated by replacing the conventional raster scan with a zig-zag scan, which enables chains of labels to be resolved correctly while the next image row is processed. The result is faster worst-case processing with no end-of-row overheads. CCA hardware architectures using the novel algorithm proposed in this paper are therefore able to process images at higher throughput than other state-of-the-art methods while reducing the hardware requirements. The latency introduced by the conversion from raster scan to zig-zag scan is compensated for by a new method of detecting object completion, which enables the feature vector of a completed connected component to be output at the earliest possible opportunity.
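
For illustration only, a zig-zag (boustrophedon) scan simply reverses direction on alternate rows, so the end of one row is spatially adjacent to the start of the next; a minimal generator:

# Boustrophedon (zig-zag) scan order: alternate rows are traversed in reverse.
def zigzag(height, width):
    for y in range(height):
        cols = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in cols:
            yield y, x

print(list(zigzag(2, 3)))  # [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]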

13.
Epidemics ; 23: 71-75, 2018 06.
Article in English | MEDLINE | ID: mdl-29329958

ABSTRACT

Contact tracing is a crucial component of the control of many infectious diseases, but it is an arduous and time-consuming process. Procedures that increase the efficiency of contact tracing increase the chance that effective controls can be implemented sooner and thus reduce the magnitude of an epidemic. We illustrate a procedure using graph theory in the context of infectious disease epidemics of farmed animals, in which epidemics are driven mainly by the shipment of animals between farms. Specifically, we created a directed graph of the recorded shipments of deer between deer farms in Pennsylvania over a timeframe and asked how the properties of the graph could be exploited to make contact tracing more efficient should Chronic Wasting Disease (a prion disease of deer) be discovered on one of the farms. We show that the presence of a large strongly connected component in the graph has a significant impact on the number of contacts that can arise.
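
A toy version of the shipment-graph analysis (Python with networkx; the farms and shipments are made up) shows how reachability and strongly connected components drive the tracing workload:

# Toy shipment graph: farms as nodes, shipments as directed edges; tracing
# from an index farm uses forward/backward reachability and the SCCs.
import networkx as nx

shipments = [("A", "B"), ("B", "C"), ("C", "A"),   # a strongly connected trio
             ("C", "D"), ("E", "A")]
G = nx.DiGraph(shipments)

index_farm = "B"
forward = nx.descendants(G, index_farm)            # farms that received animals
backward = nx.ancestors(G, index_farm)             # farms that could have sent them
largest_scc = max(nx.strongly_connected_components(G), key=len)

print("trace forward:", forward)                   # {'A', 'C', 'D'}
print("trace backward:", backward)                 # {'A', 'C', 'E'}
print("largest strongly connected component:", largest_scc)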


Subjects
Contact Tracing/methods, Deer, Farms, Wasting Disease, Chronic/epidemiology, Wasting Disease, Chronic/prevention & control, Animals, Contact Tracing/statistics & numerical data, Pennsylvania
14.
J Mach Learn Res ; 13: 781-794, 2012 Mar 01.
Article in English | MEDLINE | ID: mdl-25392704

ABSTRACT

We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
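
A compact sketch of how this screening rule could be used in practice is shown below: threshold the sample covariance at lambda, split the variables into connected components, and solve each block separately. The synthetic two-block data and the use of scikit-learn's graphical_lasso solver are illustrative choices, not the authors' code:

# Threshold the sample covariance at lambda, find connected components of the
# thresholded graph, and solve the graphical lasso block by block.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 2))                # two latent factors -> two blocks
X = np.hstack([Z[:, :1] + 0.3 * rng.standard_normal((200, 4)),
               Z[:, 1:] + 0.3 * rng.standard_normal((200, 4))])
S = np.cov(X, rowvar=False)
lam = 0.2

adj = np.abs(S) > lam                            # thresholded sample covariance graph
np.fill_diagonal(adj, False)
n_blocks, labels = connected_components(csr_matrix(adj), directed=False)

precision = np.zeros_like(S)
for b in range(n_blocks):                        # solve the graphical lasso per block
    idx = np.where(labels == b)[0]
    if len(idx) == 1:
        precision[idx[0], idx[0]] = 1.0 / S[idx[0], idx[0]]
        continue
    _, theta = graphical_lasso(S[np.ix_(idx, idx)], alpha=lam)
    precision[np.ix_(idx, idx)] = theta

print(n_blocks, "blocks of sizes", np.bincount(labels))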
