Results 1 - 20 of 1,151
4.
PLoS One ; 15(4): e0231331, 2020.
Article in English | MEDLINE | ID: mdl-32275731

ABSTRACT

Fault localization, a technique for finding and fixing defects to ensure the dependability of software, is rapidly becoming infeasible due to the increasing scale and complexity of multilingual programs. Compared to other fault localization techniques, slicing can directly narrow the range of code that needs checking by abstracting a program into a reduced one, deleting irrelevant parts. Only a minority of slicing methods take into account the fact that different statements have different probabilities of leading to a failure. Moreover, no existing prioritized slicing technique can work on multilingual programs. In this paper, we propose a new technique called weight prioritized slicing (WP-Slicing), an improved static slicing technique based on constraint logic programming, to help the programmer locate faults quickly and precisely. WP-Slicing first converts the original program into logic facts. It then extracts dependences from the facts, computes the static backward slice and calculates each statement's weight. Finally, WP-Slicing presents the slice in a suggested check order by weighted sorting. By comparing its slicing time and localization effort with three pre-existing slicing techniques on five real-world C projects, we show that WP-Slicing locates faults with less time and effort, i.e., it is more effective.
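The static backward slice the abstract describes can be illustrated with a minimal, generic sketch (not the WP-Slicing implementation): starting from a slicing criterion, transitively collect every statement it depends on via the extracted dependences.

```python
# Generic static backward slicing over a toy dependence graph.
# dependences maps a statement number to the statements it depends on
# (data or control dependences); the slice is the transitive closure.

def backward_slice(dependences, criterion):
    """Return the set of statements the criterion transitively depends on."""
    slice_set, stack = set(), [criterion]
    while stack:
        stmt = stack.pop()
        if stmt in slice_set:
            continue
        slice_set.add(stmt)
        stack.extend(dependences.get(stmt, ()))
    return slice_set

# Statement 5 depends on 3 and 4; 3 on 1; 4 on 2; statement 6 is irrelevant.
deps = {5: [3, 4], 3: [1], 4: [2]}
print(sorted(backward_slice(deps, 5)))  # [1, 2, 3, 4, 5]
```

A prioritized slicer would then order this set by statement weight rather than by number.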


Subjects
Computing Methodologies , Software/standards
5.
BMC Bioinformatics ; 21(Suppl 2): 81, 2020 Mar 11.
Article in English | MEDLINE | ID: mdl-32164557

ABSTRACT

BACKGROUND: The identification of all matches of a large set of position weight matrices (PWMs) in long DNA sequences requires significant computational resources for which a number of efficient yet complex algorithms have been proposed. RESULTS: We propose BLAMM, a simple and efficient tool inspired by high performance computing techniques. The workload is expressed in terms of matrix-matrix products that are evaluated with high efficiency using optimized BLAS library implementations. The algorithm is easy to parallelize and implement on CPUs and GPUs and has a runtime that is independent of the selected p-value. In terms of single-core performance, it is competitive with state-of-the-art software for PWM matching while being much more efficient when using multithreading. Additionally, BLAMM requires negligible memory. For example, both strands of the entire human genome can be scanned for 1404 PWMs in the JASPAR database in 13 min with a p-value of 10⁻⁴ using a 36-core machine. On a dual GPU system, the same task can be performed in under 5 min. CONCLUSIONS: BLAMM is an efficient tool for identifying PWM matches in large DNA sequences. Its C++ source code is available under the GNU General Public License Version 3 at https://github.com/biointec/blamm.
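The matrix-product view of PWM matching can be sketched in a few lines of plain Python (a didactic stand-in for the optimized BLAS calls BLAMM uses; the PWM values below are invented): one-hot encode the sequence, and the motif score at each offset is the inner product of the window's one-hot rows with the PWM rows.

```python
# PWM matching expressed as inner products over a one-hot encoding.
# With one-hot input, each product simply picks pwm[j][base at off+j],
# which is why the whole scan can be batched as a matrix-matrix product.

BASES = "ACGT"

def one_hot(seq):
    return [[1.0 if b == base else 0.0 for base in BASES] for b in seq]

def pwm_scores(seq, pwm):
    """pwm[j][k] = weight of base BASES[k] at motif position j."""
    x = one_hot(seq)
    m, n = len(pwm), len(seq)
    scores = []
    for off in range(n - m + 1):
        s = sum(x[off + j][k] * pwm[j][k]
                for j in range(m) for k in range(4))
        scores.append(s)
    return scores

pwm = [[2.0, -1.0, -1.0, -1.0],   # position 0 favours A
       [-1.0, -1.0, 2.0, -1.0]]   # position 1 favours G
print(pwm_scores("AGCAG", pwm))   # [4.0, -2.0, -2.0, 4.0]
```

Thresholding these scores at the value corresponding to the chosen p-value yields the match positions.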


Assuntos
Algoritmos , Interface Usuário-Computador , Metodologias Computacionais , Humanos , Matrizes de Pontuação de Posição Específica
8.
Islets ; 11(6): 141-151, 2019.
Article in English | MEDLINE | ID: mdl-31743072

ABSTRACT

Background & objectives: The islets of Langerhans, the endocrine pancreas, play a significant role in glucose metabolism. Obesity and insulin resistance are the major factors responsible for beta cell dysfunction. The Asian Indian population has an increased susceptibility to diabetes in spite of having a lower BMI. The morphology of islets plays a significant role in beta cell function. The present study was designed to better understand the morphology, composition and distribution of islets in different parts of the pancreas and their impact on beta cell proportion. Methods: We observed islet morphology and beta cell area proportion by large-scale computer-assisted analysis in 20 adult human pancreases from a non-diabetic Indian population. Immunohistochemical staining with anti-synaptophysin and anti-insulin antibodies was used to detect islets and beta cells, respectively. Whole-slide images were analyzed using ImageJ software. Results: The endocrine proportion increased heterogeneously from head to tail, with the maximum islet and beta cell distribution in the tail region. Larger islets were predominantly confined to the tail region. The islets in the Indian population were relatively smaller in size, but they had more beta cells (20%) when compared to an American population. Interpretation & conclusions: The beta cells of larger islets are functionally more active than those of smaller islets via a paracrine effect. Thus, a reduction in the number of larger islets may be one probable reason for the increased susceptibility of Indians to diabetes even at lower BMI. Knowledge of the regional distribution of islets will help surgeons preserve islet-rich regions during surgery.
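The beta cell area proportion measured above reduces to a ratio of positive pixel counts between the insulin mask and the synaptophysin (islet) mask; a hypothetical sketch of that measurement, with invented toy masks (ImageJ performs the equivalent on whole-slide images):

```python
# Area fraction from two binary masks: beta cell (insulin-positive)
# pixels divided by islet (synaptophysin-positive) pixels.

def area_fraction(beta_mask, islet_mask):
    """Both masks: 2D lists of 0/1 pixels. Returns beta/islet area ratio."""
    islet_px = sum(sum(row) for row in islet_mask)
    beta_px = sum(sum(row) for row in beta_mask)
    return beta_px / islet_px if islet_px else 0.0

islet = [[1, 1, 1, 1],
         [1, 1, 1, 1]]       # 8 islet pixels
beta  = [[1, 1, 0, 0],
         [0, 0, 0, 0]]       # 2 insulin-positive pixels
print(f"{area_fraction(beta, islet):.0%}")  # 25%
```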


Subjects
Insulin Antibodies/analysis , Insulin-Secreting Cells , Islets of Langerhans , Pancreas , Adult , Anatomy, Regional/methods , Autopsy , Biological Variation, Population , Computing Methodologies , Female , Humans , Immunohistochemistry , India/epidemiology , Insulin-Secreting Cells/cytology , Insulin-Secreting Cells/immunology , Islets of Langerhans/cytology , Islets of Langerhans/diagnostic imaging , Islets of Langerhans/immunology , Male , Pancreas/cytology , Pancreas/immunology
9.
BMC Bioinformatics ; 20(1): 564, 2019 Nov 12.
Article in English | MEDLINE | ID: mdl-31718539

ABSTRACT

BACKGROUND: Analysing large and high-dimensional biological data sets poses significant computational difficulties for bioinformaticians due to a lack of accessible tools that scale to hundreds of millions of data points. RESULTS: We developed a novel machine learning command line tool called PyBDA for automated, distributed analysis of big biological data sets. By using Apache Spark in the backend, PyBDA scales to data sets beyond the size of current applications. It uses Snakemake to automatically schedule jobs to a high-performance computing cluster. We demonstrate the utility of the software by analyzing image-based RNA interference data of 150 million single cells. CONCLUSION: PyBDA allows automated, easy-to-use data analysis using common statistical methods and machine learning algorithms. It can be used entirely with simple command line calls, making it accessible to a broad user base. PyBDA is available at https://pybda.rtfd.io.


Subjects
Algorithms , Computational Biology/methods , Automation , Computing Methodologies , HeLa Cells , Humans , Image Processing, Computer-Assisted , Machine Learning
10.
BMC Public Health ; 19(1): 1487, 2019 Nov 08.
Article in English | MEDLINE | ID: mdl-31703655

ABSTRACT

BACKGROUND: Healthcare services are being increasingly digitalised in European countries. However, in studies evaluating digital health technology, some people are less likely to participate than others, e.g. those who are older, those with a lower level of education and those with poorer digital skills. Such non-participation in research - deriving from the processes of non-recruitment of targeted individuals and self-selection - can be a driver of old-age exclusion from new digital health technologies. We aim to introduce, discuss and test an instrument to measure non-participation in digital health studies, in particular, the process of self-selection. METHODS: Based on a review of the relevant literature, we designed an instrument - the NPART survey questionnaire - for the analysis of self-selection, covering five thematic areas: socioeconomic factors, self-rated health and subjective overall quality of life, social participation, time resources, and digital skills and use of technology. The instrument was piloted on 70 older adults in Sweden, approached during recruitment for a trial study. RESULTS: Results indicated that participants, as compared to decliners, were on average slightly younger and more educated, and reported better memory, higher social participation, and higher familiarity with and greater use of digital technologies. Overall, the survey questionnaire was able to discriminate between participants and decliners on the key aspects investigated, along the lines of the relevant literature. CONCLUSIONS: The NPART survey questionnaire can be applied to characterise non-participation in digital health research, in particular, the process of self-selection. It helps to identify underrepresented groups and their needs. Data generated from such an investigation, combined with hospital registry data on non-recruitment, allow for the implementation of improved sampling strategies, e.g. focused recruitment of underrepresented groups, and for the post hoc adjustment of results generated from biased samples, e.g. weighting procedures.
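The post hoc weighting mentioned above can be sketched as inverse-participation-rate weights per stratum; the strata and counts below are invented for illustration, not NPART data.

```python
# Inverse-participation-rate weighting: strata with low participation
# receive proportionally larger weights, so underrepresented groups
# count more in post hoc adjusted estimates.

def participation_weights(participants, decliners):
    """Both args: lists of stratum labels. Returns a weight per stratum."""
    weights = {}
    for s in set(participants) | set(decliners):
        n_part = participants.count(s)
        n_total = n_part + decliners.count(s)
        if n_part:
            weights[s] = n_total / n_part   # inverse participation rate
    return weights

parts = ["high", "high", "high", "low"]   # education level of participants
decls = ["low", "low", "low"]             # education level of decliners
print(sorted(participation_weights(parts, decls).items()))
# [('high', 1.0), ('low', 4.0)]
```

Each participant's survey responses would then be multiplied by the weight of their stratum.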


Subjects
Community Participation/statistics & numerical data , Health Services Research/statistics & numerical data , Research Subjects/statistics & numerical data , Aged , Aged, 80 and over , Community Participation/psychology , Computing Methodologies , Female , Humans , Male , Quality of Life , Research Subjects/psychology , Social Participation , Socioeconomic Factors , Surveys and Questionnaires , Sweden
11.
Nature ; 574(7779): 453-454, 2019 10.
Article in English | MEDLINE | ID: mdl-31645738
13.
BMC Vet Res ; 15(1): 361, 2019 Oct 22.
Article in English | MEDLINE | ID: mdl-31640698

ABSTRACT

BACKGROUND: Since most prostatic diseases are associated with the organ's enlargement, evaluation of prostatic size is a main criterion in the diagnosis of the prostatic state of health. Because enlargement is a non-uniform process, volumetric measurements are believed to be advantageous over any single dimensional parameter for the diagnosis of prostatomegaly. In a previous study, volume was analysed with a slice addition technique (SAT), which was validated as highly accurate. Irrespective of its high accuracy, SAT is a complex and time-consuming procedure, which limits its clinical use. Thus, demand exists for more practical volume assessment methods. In this study, the prostatic volume of 95 canine patients (58 intact males, 37 neutered males) was analysed retrospectively using the ellipsoid formula (Formula) and an imaging "wrap" function tool (Wrap) to assess accuracy and applicability. Accuracy was checked against phantom measurements, and results were compared to SAT measurements of the same patient pool obtained from a previously published paper. Patients were grouped according to prostatic structure (H = homogeneous, I = inhomogeneous, C = cystic) and volume using the SAT (volume group = vg: 1, 2 and 3). RESULTS: A high correlation between the Formula or Wrap volume and the phantom volume was found, the values being higher for the Formula. Mean Formula volumes (vg 1: 2.2 cm3, vg 2: 14.5 cm3, vg 3: 109.4 cm3, respectively) were significantly underestimated, while mean Wrap volumes (vg 1: 3.8 cm3, vg 2: 19.5 cm3, vg 3: 159.2 cm3) were statistically equivalent to SAT measurements (vg 1: 3.1 cm3, vg 2: 18.6 cm3, vg 3: 157.2 cm3, respectively). Differences between Formula and SAT volumes ranged from 22.4-31.1%, while differences between Wrap and SAT volumes were highest in small prostates (vg 1: 22.1%) and fell with increasing prostatic size (vg 3: 1.3%). CONCLUSION: The Wrap function is highly accurate and less time-consuming and complex than SAT, and could serve as a beneficial tool for measuring prostatic volume in clinical routine after further validation in future studies. The Formula method cannot be recommended as an alternative for volumetric measurements of the prostate gland due to its underestimation of volumes compared to SAT results.
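The ellipsoid formula evaluated in the study is the standard V = π/6 · L · W · H approximation; a minimal sketch, with invented example dimensions rather than patient data:

```python
# Standard ellipsoid approximation of organ volume from three
# orthogonal dimensions (length, width, height, e.g. in cm).

import math

def ellipsoid_volume(length, width, height):
    """V = pi/6 * L * W * H, all dimensions in the same unit."""
    return math.pi / 6.0 * length * width * height

# e.g. a prostate measuring 3.0 x 2.5 x 2.0 cm:
print(round(ellipsoid_volume(3.0, 2.5, 2.0), 1))  # 7.9 (cm^3)
```

The study's finding is precisely that this single-formula estimate systematically undershoots slice-addition volumes.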


Subjects
Dogs/anatomy & histology , Prostate/diagnostic imaging , Tomography, X-Ray Computed/veterinary , Animals , Computing Methodologies , Male , Models, Statistical , Organ Size , Reference Values , Retrospective Studies , Tomography, X-Ray Computed/methods
14.
BMC Bioinformatics ; 20(1): 451, 2019 Sep 03.
Article in English | MEDLINE | ID: mdl-31481014

ABSTRACT

BACKGROUND: High-throughput gene expression technologies provide complex datasets reflecting the mechanisms perturbed in an experiment, typically in a treatment versus control design. Analysis of these information-rich data can be guided by a priori knowledge, such as networks of related proteins or genes. Assessing the response of a specific mechanism and investigating its biological basis is extremely important in systems toxicology, as compounds or treatments need to be assessed with respect to a predefined set of key mechanisms that could lead to toxicity. Two-layer networks are suitable for this task, and a robust computational methodology specifically addressing those needs was previously published. The NPA package ( https://github.com/philipmorrisintl/NPA ) implements the algorithm, and a data package of eight two-layer networks representing key mechanisms, such as xenobiotic metabolism, apoptosis, or epithelial innate immune activation, is provided. RESULTS: Gene expression data from an animal study are analyzed using the package and its network models. The functionalities are implemented using R6 classes, making the use of the package seamless and intuitive. The various network responses are analyzed using the leading node analysis, and an overall perturbation, called the Biological Impact Factor, is computed. CONCLUSIONS: The NPA package implements the published network perturbation amplitude methodology and provides a set of two-layer networks encoded in the Biological Expression Language.


Subjects
Computing Methodologies , Gene Expression Regulation , Gene Regulatory Networks , Software , Algorithms , Animals , Apoptosis/genetics , Cell Cycle/genetics , Databases, Genetic , Extracellular Matrix/metabolism , Mice, Inbred C57BL , Oxidative Stress , Transcriptome/genetics
17.
Comput Intell Neurosci ; 2019: 1240162, 2019.
Article in English | MEDLINE | ID: mdl-31379932

ABSTRACT

Biogeography-based optimization (BBO), a recently proposed metaheuristic algorithm, has been successfully applied to many optimization problems due to its simplicity and efficiency. However, BBO is sensitive to the curse of dimensionality; its performance degrades rapidly as the dimensionality of the search space increases. In this paper, a selective migration operator is proposed to scale up the performance of BBO; we name the result selective BBO (SBBO). The differential migration operator is selected heuristically to explore the global area as far as possible, whilst the normally distributed migration operator is chosen to exploit the local area. By means of heuristic selection, an appropriate migration operator can be used to search for the global optimum efficiently. Moreover, the strategy of cooperative coevolution (CC) is adopted to solve large-scale global optimization problems (LSOPs). To deal with the imbalanced contributions of subgroups to the whole solution in the context of CC, a more efficient computing resource allocation scheme is proposed. Extensive experiments are conducted on the CEC 2010 benchmark suite for large-scale global optimization, and the results show the effectiveness and efficiency of SBBO compared with BBO variants and other representative algorithms for LSOPs. The results also confirm that the proposed computing resource allocation is vital to large-scale optimization within a limited computation budget.
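The two migration operators SBBO selects between can be sketched as follows; the update formulas here are illustrative guesses at a differential (exploring) and a normally distributed (exploiting) operator, not the paper's exact definitions.

```python
# Toy migration operators for a BBO habitat (a real-valued vector):
# a differential operator moves the habitat along the difference of
# two donors (exploration); a Gaussian operator samples near a donor
# (local exploitation).

import random

def differential_migration(x, donor_a, donor_b, scale=0.5):
    """Explore: shift x along the difference of two donor habitats."""
    return [xi + scale * (a - b) for xi, a, b in zip(x, donor_a, donor_b)]

def normal_migration(x, donor, sigma=0.1):
    """Exploit: sample each dimension near the donor habitat."""
    return [random.gauss(d, sigma) for d in donor]

random.seed(1)
x = [0.0, 0.0]
print(differential_migration(x, [1.0, 2.0], [0.5, 1.0]))  # [0.25, 0.5]
print(normal_migration(x, [1.0, 2.0]))
```

A heuristic selector would pick one operator per migration step based on search progress.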


Subjects
Algorithms , Computer Simulation , Computing Methodologies , Resource Allocation , Computer Simulation/economics , Heuristics , Normal Distribution , Problem Solving
18.
Comput Intell Neurosci ; 2019: 1939171, 2019.
Article in English | MEDLINE | ID: mdl-31396269

ABSTRACT

The threat posed by fires to people's lives and property has become increasingly serious. To address the high false alarm rate of traditional fire detection, an innovative detection method based on multifeature fusion of flame is proposed. First, we combined motion detection and color detection of the flame as the fire preprocessing stage; this saves considerable computation time when screening fire candidate pixels. Second, although a flame is irregular, it shows a certain similarity across an image sequence. Based on this feature, a novel algorithm for flame centroid stabilization based on spatiotemporal relations is proposed: we calculated the centroid of the flame region in each frame and added temporal information to obtain the spatiotemporal trajectory of the flame centroid. Then, we extracted features including the spatial, shape, and area variability of the flame to improve recognition accuracy. Finally, we used a support vector machine for training, completed the analysis of candidate fire images, and achieved automatic fire monitoring. Experimental results showed that the proposed method improves accuracy and reduces the false alarm rate compared with a state-of-the-art technique. The method can be applied to real-time camera monitoring systems, such as home security, forest fire alarms, and commercial monitoring.
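The flame-centroid step can be sketched generically (not the paper's code): the centroid of a binary candidate-flame mask is the mean of its positive pixel coordinates, tracked frame by frame to build the spatiotemporal signal.

```python
# Centroid of a binary flame-candidate mask: mean row and column of
# the positive pixels. Applying this per frame yields the centroid
# trajectory used as a spatiotemporal feature.

def centroid(mask):
    """mask: 2D list of 0/1 pixels -> (row, col) centroid of positives."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

frame = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
print(centroid(frame))  # (0.5, 1.5)
```

Stability of this point across consecutive frames helps distinguish flames from moving flame-colored objects.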


Subjects
Color , Computer Systems , Fire , Support Vector Machine , Algorithms , Computing Methodologies , Humans
20.
PLoS One ; 14(6): e0218347, 2019.
Article in English | MEDLINE | ID: mdl-31226125

ABSTRACT

We present a new asynchronous quasi-delay-insensitive (QDI) block carry lookahead adder with redundant carry (BCLARC) realized using delay-insensitive dual-rail data encoding and 4-phase return-to-zero (RTZ) and 4-phase return-to-one (RTO) handshaking. The proposed QDI BCLARC is found to be faster and more energy-efficient than existing asynchronous adders, both QDI and non-QDI (i.e., relative-timed). Compared to existing asynchronous adders of various architectures, namely the ripple carry adder (RCA), the conventional carry lookahead adder (CCLA), the carry select adder (CSLA), the BCLARC, and the hybrid BCLARC-RCA, the proposed BCLARC is faster and more energy-efficient. The cycle time (CT), expressed as the sum of the worst-case times taken to process the data and the spacer, governs the speed. The product of average power dissipation and CT, viz. the power-cycle time product (PCTP), defines the energy efficiency. For a 32-bit addition, the proposed QDI BCLARC achieves the following reductions in design metrics on average over its counterparts when considering RTZ and RTO handshaking: i) 20.5% and 19.6% reductions in CT and PCTP respectively compared to an optimum QDI early output RCA, ii) 16.5% and 15.8% reductions in CT and PCTP respectively compared to an optimum relative-timed RCA, iii) 32.9% and 35.9% reductions in CT and PCTP respectively compared to an optimum uniform input-partitioned QDI early output CSLA, iv) 47.5% and 47.2% reductions in CT and PCTP respectively compared to an optimum QDI early output CCLA, v) 14.2% and 27.3% reductions in CT and PCTP respectively compared to an optimum QDI early output BCLARC, and vi) 12.2% and 11.6% reductions in CT and PCTP respectively compared to an optimum QDI early output hybrid BCLARC-RCA. The adders were implemented using a 32/28 nm CMOS technology.
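The carry-lookahead principle underlying the adder (the classical generate/propagate recurrence, independent of the QDI dual-rail circuitry) can be sketched as:

```python
# Carry lookahead via generate/propagate: g = a AND b generates a
# carry, p = a XOR b propagates one, and carry[i+1] = g[i] OR
# (p[i] AND carry[i]); lookahead hardware evaluates this recurrence
# in parallel rather than rippling it.

def cla_add(a_bits, b_bits, carry_in=0):
    """LSB-first bit lists -> (sum bits LSB-first, carry out)."""
    g = [x & y for x, y in zip(a_bits, b_bits)]   # generate
    p = [x ^ y for x, y in zip(a_bits, b_bits)]   # propagate
    carries = [carry_in]
    for gi, pi in zip(g, p):
        carries.append(gi | (pi & carries[-1]))
    s = [pi ^ ci for pi, ci in zip(p, carries)]
    return s, carries[-1]

# 6 + 7 = 13: LSB-first, 6 = [0,1,1] and 7 = [1,1,1]
print(cla_add([0, 1, 1], [1, 1, 1]))  # ([1, 0, 1], 1) -> 13
```

A block CLA partitions the bits into blocks and applies the same recurrence to block-level generate/propagate signals.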


Subjects
Computers/standards , Computing Methodologies