Results 1-20 of 31
1.
Ecotoxicol Environ Saf ; 278: 116379, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38714082

ABSTRACT

Species sensitivity distributions (SSDs) estimated by fitting a statistical distribution to ecotoxicity data are indispensable tools used to derive the hazardous concentration for 5 % of species (HC5) and thereby a predicted no-effect concentration in environmental risk assessment. Whereas various statistical distributions are available for SSD estimation, the fundamental question of which statistical distribution should be used has received limited systematic analysis. We aimed to address this knowledge gap by applying four frequently used statistical distributions (log-normal, log-logistic, Burr type III, and Weibull distributions) to acute and chronic SSD estimation using aquatic toxicity data for 191 and 31 chemicals, respectively. Based on the differences in the corrected Akaike's information criterion (AICc) as well as visual inspection of the fitting of the lower tails of SSD curves, the log-normal SSD was generally better or equally good for the majority of chemicals examined. Together with the fact that the ratios of HC5 values of other alternative SSDs to those of log-normal SSDs generally fell within the range 0.1-10, our findings indicate that the log-normal distribution can be a reasonable first candidate for SSD derivation, which does not contest the existing widespread use of log-normal SSDs.
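The SSD workflow described above (fit candidate distributions, compare them by AICc, derive the HC5 as the 5th percentile of the fitted curve) can be sketched with SciPy. The toxicity values below and the restriction to two candidate distributions are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical acute toxicity data (e.g. LC50 values, mg/L) for one chemical;
# species and values are invented for illustration.
toxicity = np.array([0.8, 1.5, 2.1, 3.0, 4.7, 6.2, 9.8, 14.0, 22.5, 41.0])
log_tox = np.log10(toxicity)
n = len(log_tox)

def aicc(loglik, k, n):
    """Corrected Akaike information criterion."""
    aic = 2 * k - 2 * loglik
    return aic + 2 * k * (k + 1) / (n - k - 1)

candidates = {}

# Log-normal SSD: fit a normal distribution to the log10-transformed data.
mu, sigma = stats.norm.fit(log_tox)
ll = stats.norm.logpdf(log_tox, mu, sigma).sum()
candidates["log-normal"] = (aicc(ll, 2, n), stats.norm(mu, sigma))

# Log-logistic SSD: logistic distribution on the log scale.
loc, scale = stats.logistic.fit(log_tox)
ll = stats.logistic.logpdf(log_tox, loc, scale).sum()
candidates["log-logistic"] = (aicc(ll, 2, n), stats.logistic(loc, scale))

# HC5: the concentration hazardous to 5% of species is the 5th percentile
# of the SSD, back-transformed from the log scale.
best = min(candidates, key=lambda k: candidates[k][0])
hc5 = 10 ** candidates[best][1].ppf(0.05)
print(best, round(hc5, 3))
```

The Burr type III and Weibull candidates used in the study could be added in the same way via `scipy.stats.burr` and `scipy.stats.weibull_min`.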


Subjects
Water Pollutants, Chemical ; Risk Assessment ; Animals ; Water Pollutants, Chemical/toxicity ; Ecotoxicology ; Species Specificity ; Toxicity Tests, Acute ; Aquatic Organisms/drug effects ; Toxicity Tests, Chronic ; Models, Statistical
2.
Article in English | MEDLINE | ID: mdl-37360554

ABSTRACT

To characterize pollutant dispersal across major metropolitan cities in India, daily particulate matter (PM10 and PM2.5) data for the study areas were collected from the National Air Quality Monitoring stations database provided by the Central Pollution Control Board (CPCB) of India. The data were analysed over three temporal ranges: before the pandemic-induced lockdown, during the lockdown, and after the lifting of lockdown restrictions. For this purpose, the study period ran from 1st April to 31st May for the years 2019 (pre), 2020, and 2021 (post). Statistical distributions (lognormal, Weibull, and Gamma), aerosol optical thickness, and back trajectories were assessed for all three time periods. Most cities followed the lognormal distribution for PM2.5 during the lockdown period, except Mumbai and Hyderabad. For PM10, all the regions followed the lognormal distribution. Delhi and Kolkata recorded the largest declines in particulate pollution: 41% and 52% for PM2.5 and 49% and 53% for PM10, respectively. Air mass back trajectories suggest local transmission of air masses during the lockdown period, and a clear decline in aerosol optical thickness was observed from the MODIS sensor. It can be concluded that statistical distribution analysis coupled with pollution models can support studies of dispersal and the development of site-specific pollution abatement policies. Moreover, incorporating remote sensing into pollution studies can improve understanding of the origin and movement of air parcels and support proactive decision-making.
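The distribution-selection step can be sketched as follows, with synthetic PM2.5 data standing in for the CPCB records; the three candidate distributions are compared by their Kolmogorov-Smirnov statistics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic daily PM2.5 concentrations (µg/m³); illustrative stand-in for
# roughly two months (1 April - 31 May) of monitoring data.
pm25 = rng.lognormal(mean=4.0, sigma=0.5, size=61)

fits = {
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
}
results = {}
for name, dist in fits.items():
    params = dist.fit(pm25, floc=0)            # fix location at zero for concentrations
    ks = stats.kstest(pm25, dist.cdf, args=params)
    results[name] = ks.statistic               # smaller KS statistic = better fit

best = min(results, key=results.get)
print(best)
```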

3.
J Biomed Inform ; 119: 103819, 2021 07.
Article in English | MEDLINE | ID: mdl-34029749

ABSTRACT

Atrial fibrillation (AF) is a common and extremely harmful arrhythmia. Automatic ECG-based detection of AF helps identify the condition accurately and in a timely manner. However, existing AF detection methods are mostly based on complex signal transformations or precise waveform localization, which is a serious challenge for complex, variable, and noise-susceptible ECG signals. We therefore propose a simple feature extraction method based on the gradient set (GDS) for AF detection. The method first calculates the GDS of an ECG segment and then computes a statistical-distribution feature and an information-quantity feature of the GDS as the input to a classifier. Experiments on four databases comprising 146 subjects show that the proposed feature extraction method is simple to compute, tolerant of noise, and adaptable to many kinds of classifiers, achieving its best performance with the DNN classifier we designed. It is therefore a good choice for feature extraction in AF detection.
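The gradient-set idea can be illustrated roughly as follows; the specific statistical and entropy features are plausible stand-ins, since the paper's exact feature definitions are not reproduced here:

```python
import numpy as np

def gds_features(segment, bins=32):
    """Sketch of gradient-set (GDS) feature extraction: statistical-distribution
    features plus an information-quantity (entropy) feature of the gradients."""
    grad = np.diff(segment)                       # gradient set of the ECG segment
    # Statistical-distribution features of the GDS.
    stat = [grad.mean(), grad.std(),
            np.percentile(grad, 25), np.percentile(grad, 75)]
    # Information-quantity feature: Shannon entropy of the gradient histogram.
    hist, _ = np.histogram(grad, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return np.array(stat + [entropy])             # classifier input vector

# Toy ECG-like segment: a noisy sine as a stand-in for a real recording.
t = np.linspace(0, 2 * np.pi, 500)
segment = np.sin(5 * t) + 0.05 * np.random.default_rng(1).normal(size=500)
features = gds_features(segment)
print(features.shape)
```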


Subjects
Atrial Fibrillation ; Algorithms ; Atrial Fibrillation/diagnosis ; Databases, Factual ; Electrocardiography ; Humans ; Signal Processing, Computer-Assisted
4.
Chaos Solitons Fractals ; 142: 110512, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33281306

ABSTRACT

The Covid-19 pandemic is the most important health disaster to have gripped the world over the past eight months, and there is still no clear date for when it will end. As of 18 September 2020, more than 31 million people had been infected worldwide, and predicting the Covid-19 trend has become a challenging issue. In this study, COVID-19 data between 20/01/2020 and 18/09/2020 for the USA, Germany, and the world were obtained from the World Health Organization. The dataset consists of weekly confirmed cases and weekly cumulative confirmed cases over 35 weeks. The distribution of the most up-to-date Covid-19 weekly case data was then examined and its parameters were estimated for several statistical distributions. Furthermore, a machine learning time series prediction model was proposed to obtain the disease curve and forecast the epidemic's tendency. Linear regression, multi-layer perceptron, random forest, and support vector machine (SVM) methods were used. The performance of the methods was compared using the RMSE, APE, and MAPE metrics, and SVM captured the trend best. According to the estimates, the global pandemic will peak at the end of January 2021, with approximately 80 million people cumulatively infected.
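The forecast-evaluation loop (hold out the most recent weeks, predict them, score by RMSE and MAPE) can be sketched as below. A simple polynomial regression on synthetic counts stands in for the paper's four machine learning models and the WHO data:

```python
import numpy as np

# Illustrative weekly cumulative case counts (not WHO data) for 35 weeks.
weeks = np.arange(35)
rng = np.random.default_rng(2)
cases = 1000.0 * weeks ** 2 + rng.normal(0, 5000, 35)

# Hold out the last 5 weeks to mimic forecast evaluation.
train, test = weeks[:30], weeks[30:]
y_train, y_test = cases[:30], cases[30:]

# Simple quadratic regression as a stand-in for the paper's ML models.
coef = np.polyfit(train, y_train, deg=2)
pred = np.polyval(coef, test)

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
mape = np.mean(np.abs((pred - y_test) / y_test)) * 100
print(round(rmse, 1), round(mape, 2))
```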

5.
Eur Radiol ; 30(1): 652-662, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31410603

ABSTRACT

OBJECTIVE: To compare the robustness of native T1 mapping using mean and median pixel-wise quantification methods. METHODS: Fifty-seven consecutive patients without overt signs of heart failure were examined in clinical routine for suspicion of cardiomyopathy. MRI included the acquisition of native T1 maps by a motion-corrected modified Look-Locker inversion recovery sequence at 1.5 T. Heart function status according to four established volumetric left ventricular (LV) cardio MRI parameter thresholds was used for retrospective separation into subgroups of normal (n = 26) or abnormal heart function (n = 31). Statistical normality of pixel-wise T1 was tested on each myocardial segment, and mean and median segmental T1 values were assessed. RESULTS: Segments with normally distributed pixel-wise T1 (57/58%) showed no difference between mean and median quantification in either patient group, whereas differences were highly significant (p < 0.001) for the respective 43/42% of non-normally distributed segments. Heart function differentiation between the two patient groups was significant in 14 myocardial segments (p < 0.001-0.040) with median quantification, compared with six segments (p < 0.001-0.042) with the mean. Median quantification also revealed significant differences between the native T1 values of the three coronary artery territories in patients with normal heart function (p = 0.023), but not in the abnormal group (p = 0.053). CONCLUSION: Median quantification increases the robustness of myocardial native T1 definition, regardless of the statistical normality of the data. Compared with the currently prevailing method of mean quantification, differentiation between LV segments and coronary artery territories is better and allows for earlier detection of heart function impairment. KEY POINTS: • Median pixel-wise quantification of native T1 maps is robust and can be applied regardless of the statistical distribution of data points.
• Median quantification is more sensitive to early heart function abnormality compared with mean quantification. • The new method yields significant native T1 value differentiation between the three coronary artery territories.
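The mean-versus-median comparison underlying these key points can be sketched with synthetic pixel values; a Shapiro-Wilk normality check is used here as an assumption, since the paper does not name its normality test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def segment_t1(pixels, alpha=0.05):
    """Mean vs. median T1 for one myocardial segment, with a normality check."""
    normal = stats.shapiro(pixels).pvalue > alpha
    return {"mean": pixels.mean(), "median": np.median(pixels), "normal": normal}

# A normally distributed segment (~1000 ms native T1 at 1.5 T) ...
seg_a = segment_t1(rng.normal(1000, 30, 200))
# ... and a skewed segment, e.g. contaminated by partial-volume pixels.
seg_b = segment_t1(np.concatenate([rng.normal(1000, 30, 180),
                                   rng.normal(1300, 40, 20)]))
# Mean and median agree for the normal segment but diverge for the skewed one.
print(seg_a["normal"], seg_b["normal"])
```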


Subjects
Cardiomyopathies/diagnostic imaging ; Cardiomyopathies/physiopathology ; Magnetic Resonance Imaging/methods ; Adult ; Female ; Heart/diagnostic imaging ; Heart/physiopathology ; Humans ; Male ; Middle Aged ; Retrospective Studies
6.
Sensors (Basel) ; 21(1)2020 Dec 23.
Article in English | MEDLINE | ID: mdl-33374776

ABSTRACT

Positioning systems are used to determine position coordinates in navigation (air, land, and marine). The accuracy of an object's position is described by the position error, and statistical analysis can determine its measures, which usually include Root Mean Square (RMS), twice the Distance Root Mean Square (2DRMS), Circular Error Probable (CEP), and Spherical Error Probable (SEP). It is commonly assumed in navigation that position errors are random and that their distributions are consistent with the normal distribution. This assumption is based on the popularity of the Gauss distribution in science, the simplicity of calculating RMS values for 68% and 95% probabilities, and the intuitive perception of randomness which this distribution reflects. It should be noted, however, that the necessary conditions for a random variable to be normally distributed include the independence of measurements and identical conditions of their realisation, which is not the case given the iterative method of determining successive positions, the filtering of coordinates, and the dependence of the position error on meteorological conditions. In the introduction to this publication, examples are provided indicating that position errors in some navigation systems may not be consistent with the normal distribution. The subsequent section describes basic statistical tests for assessing the fit between empirical and theoretical distributions (Anderson-Darling, chi-square, and Kolmogorov-Smirnov). Next, statistical tests of the position error distributions of very long Differential Global Positioning System (DGPS) and European Geostationary Navigation Overlay Service (EGNOS) campaigns from different years (2006 and 2014) were performed, with 900,000 fixes per measurement session. In addition, the paper discusses selected statistical distributions that fit the empirical measurement results better than the normal distribution. The research showed that the normal distribution is not the optimal statistical distribution for describing the position errors of navigation systems. Distributions that describe navigation positioning system errors more accurately include the beta, gamma, logistic, and lognormal distributions.
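The standard accuracy measures and the normality check described above can be sketched as follows, with simulated (deliberately Gaussian) position errors standing in for real fixes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated 2-D position errors in metres; real fixes, as the paper shows,
# are often autocorrelated and non-Gaussian.
north = rng.normal(0, 1.2, 10_000)
east = rng.normal(0, 0.9, 10_000)
radial = np.hypot(north, east)

rms = np.sqrt(np.mean(radial ** 2))             # RMS radial error
drms2 = 2 * np.sqrt(north.var() + east.var())   # 2DRMS (~95% of fixes)
cep = np.quantile(radial, 0.5)                  # CEP: median radial error

# Kolmogorov-Smirnov test of the northing errors against a fitted normal
# (fitting parameters from the same data is a simplification; a Lilliefors
# correction would be more rigorous).
ks = stats.kstest(north, "norm", args=(north.mean(), north.std()))
print(round(rms, 2), round(drms2, 2), round(cep, 2), ks.pvalue > 0.05)
```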

7.
Sensors (Basel) ; 20(24)2020 Dec 13.
Article in English | MEDLINE | ID: mdl-33322229

ABSTRACT

Positioning systems are used to determine position coordinates in navigation (air, land, and marine). Statistical analysis of their accuracy assumes that the position errors (latitude-δφ and longitude-δλ) are random and that their distributions are consistent with the normal distribution. In practice, however, these errors do not appear in a random way, since position determination in navigation systems is performed iteratively. This causes a so-called "Position Random Walk", analogous to the "Random Walk" known from statistics, and results in the empirical distributions of δφ and δλ being inconsistent with the normal distribution, even for samples of up to several thousand measurements. This phenomenon leads to a significant overestimation of the accuracy of position determination calculated from such short series of measurements, causing these tests to lose their representativeness. This paper attempts to determine the length of a measurement session (number of measurements) that is representative of a positioning system: a session long enough that the position error statistics (δφ and δλ), represented by the standard deviation values, are close to the real values, and that the calculated mean values (φ¯ and λ¯) are also close to the real values. Special attention is also paid to the selection of an appropriate (statistically reliable) number of measurements for statistically testing the hypothesis that the δφ and δλ distributions are consistent with the normal distribution. Empirical measurement data are taken from different positioning systems: Global Positioning System (GPS) (168,286 fixes), Differential Global Positioning System (DGPS) (864,000 fixes), European Geostationary Navigation Overlay Service (EGNOS) (928,492 fixes), and the Decca Navigator system (4052 fixes). The analyses showed that all researched positioning systems (GPS, DGPS, EGNOS, and Decca Navigator) are characterized by the Position Random Walk (PRW), which results in the empirical distributions of δφ and δλ being inconsistent with the normal distribution. The size of the PRW depends on the nominal accuracy of position determination by the system. It was found that measurement sessions consisting of 1000 fixes (for the GPS system) overestimate the accuracy analysis results by 109.1% and cannot be considered representative. Furthermore, when analyzing the results of long measurement campaigns (GPS and DGPS), it was found that the representative length of the measurement session differs for each positioning system and should be determined for each of them individually.
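The Position Random Walk effect can be imitated with a strongly autocorrelated AR(1) error series: a short session then underestimates the campaign-level standard deviation, while a white-noise series of the same length does not. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# White-noise errors vs. a slowly varying AR(1) process, a simple stand-in
# for the correlated fixes behind the "Position Random Walk" effect.
white = rng.normal(0, 1.0, n)

phi = 0.999
eps = rng.normal(0, np.sqrt(1 - phi ** 2), n)   # stationary variance of 1
ar1 = np.empty(n)
ar1[0] = 0.0
for i in range(1, n):
    ar1[i] = phi * ar1[i - 1] + eps[i]

# Std estimated from a short session relative to the full campaign:
# close to 1 for white noise, well below 1 for the correlated series.
short = 1000
print(round(white[:short].std() / white.std(), 2),
      round(ar1[:short].std() / ar1.std(), 2))
```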

8.
Environ Monit Assess ; 192(12): 787, 2020 Nov 26.
Article in English | MEDLINE | ID: mdl-33241491

ABSTRACT

The transportation of container trucks in urban areas not only frequently causes traffic jams but also produces severe air pollution. With this in mind, measurements of particle concentrations and traffic volume on days with different pollution levels were carried out to uncover the characteristics of particles from container truck transportation in the port area. Descriptive statistics were first computed on the original data to reveal the statistical characteristics of particle number concentrations (PNC). The Kolmogorov-Smirnov and Anderson-Darling tests were used to identify the "best-fit" distributions for the PNC data, and maximum likelihood estimation was applied to estimate the parameters of the identified distributions. Additionally, Pearson correlation analysis and principal component analysis were performed to reveal the relationships between traffic volume and PNC. The results showed that on a hazy day, PNC levels in the morning were generally higher than those in the afternoon, while on a non-hazy day the pattern was reversed. Particles of all sizes on a non-hazy day, and particles larger than 0.5 µm on a hazy day, were verified to fit the lognormal distribution. In contrast to particles below 2 µm, particles above 2 µm exhibited higher correlations with container truck traffic flow in the morning on a hazy day. These results underline the importance of reducing air pollution from container trucks and provide policymakers with a foundation for possible measures in a port city.
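The "best-fit" check can be sketched with `scipy.stats.anderson` on log-transformed synthetic PNC values (SciPy's Anderson-Darling test works against the normal family, so the lognormal hypothesis is tested on the log scale), together with MLE of the lognormal parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
# Synthetic particle number concentrations (particles/cm³); illustrative only.
pnc = rng.lognormal(mean=9.0, sigma=0.4, size=500)

# Anderson-Darling on log(PNC) tests the lognormal hypothesis; the third
# critical value corresponds to the 5% significance level.
ad = stats.anderson(np.log(pnc), dist="norm")
lognormal_ok = ad.statistic < ad.critical_values[2]

# Maximum likelihood estimate of the lognormal parameters (location fixed at 0).
shape, loc, scale = stats.lognorm.fit(pnc, floc=0)
print(lognormal_ok, round(shape, 2))
```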


Subjects
Air Pollutants ; Air Pollution ; Air Pollutants/analysis ; Air Pollution/analysis ; Cities ; Environmental Monitoring ; Motor Vehicles ; Particulate Matter/analysis ; Vehicle Emissions/analysis
9.
Magn Reson Med ; 76(3): 963-77, 2016 09.
Article in English | MEDLINE | ID: mdl-26362832

ABSTRACT

PURPOSE: To develop a statistical model of the tridimensional diffusion MRI signal at each voxel that describes the signal arising from each tissue compartment in the voxel. THEORY AND METHODS: In prior work, a statistical model of the apparent diffusion coefficient was shown to characterize well the diffusivity and heterogeneity of the mono-directional diffusion MRI signal. However, that model was unable to characterize the three-dimensional anisotropic diffusion observed in the brain. We introduce a new model that extends the statistical distribution representation to be fully tridimensional, in which apparent diffusion coefficients are extended to diffusion tensors. The set of compartments present at a voxel is modeled by a finite sum of unimodal continuous distributions of diffusion tensors. Each distribution provides measures of the corresponding compartment's microstructural diffusivity and heterogeneity. RESULTS: The ability to estimate the tridimensional diffusivity and heterogeneity of multiple fascicles and of free diffusion is demonstrated. CONCLUSION: Our novel tissue model allows for the characterization of intra-voxel orientational heterogeneity, a prerequisite for accurate tractography, while also characterizing the overall tridimensional diffusivity and heterogeneity of each tissue compartment. The model parameters can be estimated from short-duration acquisitions. The diffusivity and heterogeneity microstructural parameters may provide novel indicators of the presence of disease or injury. Magn Reson Med 76:963-977, 2016. © 2015 Wiley Periodicals, Inc.


Subjects
Brain/cytology ; Brain/diagnostic imaging ; Diffusion Magnetic Resonance Imaging/methods ; Image Interpretation, Computer-Assisted/methods ; Models, Neurological ; Models, Statistical ; Animals ; Anisotropy ; Computer Simulation ; Humans ; Imaging, Three-Dimensional/methods ; Reproducibility of Results ; Sensitivity and Specificity
10.
Med Biol Eng Comput ; 62(4): 1077-1087, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38148414

ABSTRACT

Thermography, as a harmless modality with low equipment complexity and quick, inexpensive access, has emerged in recent years as a method with significant potential for diagnosing some cancers. However, the complexity of the resulting images has motivated the use of deep learning to interpret thermograms. A limiting factor in this process is the strong dependence of deep learning methods on the amount of training data, which is a serious challenge in thermography given the young age of the technology and the scarcity of available images. In this paper, we attempt to reduce this challenge by utilizing the concept of statistical learning: the statistical distribution of the original data is estimated using generative adversarial networks (GANs), and synthetic images are then generated from the estimated distribution in order to enlarge the set of training thermograms. Because the synthetic images are generated from statistics similar to those of the real thermograms in each class, the distinctive features of each class are largely preserved in the generation process. This method yields a significant improvement in separating healthy from cancerous thermograms compared with the benchmark method that does not use GANs, improving sensitivity by 3-9% and accuracy by 3-7%. In terms of specificity, although improvements of up to 9% were observed, small drops of up to 2% also occurred in some cases, which is justifiable given the significant improvements in sensitivity and accuracy.
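The augmentation idea (estimate the class-conditional distribution, then sample synthetic examples from it) can be illustrated with a kernel density estimate as a deliberately simplified stand-in for the paper's GAN; the feature vectors below are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
# Flattened 2-D thermogram feature vectors for one class (illustrative only),
# stored column-wise as gaussian_kde expects: shape (n_features, n_samples).
real = rng.multivariate_normal([36.5, 1.2],
                               [[0.25, 0.05], [0.05, 0.04]], size=40).T

# A kernel density estimate plays the role of the GAN's learned distribution;
# resample() then plays the role of generating synthetic ("fake") samples.
kde = stats.gaussian_kde(real)
fake = kde.resample(200, seed=15)

augmented = np.hstack([real, fake])   # enlarged training set for the classifier
print(augmented.shape)
```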


Subjects
Deep Learning ; Neoplasms ; Thermography ; Benchmarking ; Image Processing, Computer-Assisted
11.
Bioengineering (Basel) ; 11(3)2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38534494

ABSTRACT

Kidney disease remains one of the most common ailments worldwide, and kidney cancer is one of its most serious forms; early diagnosis can significantly improve the patient's prognosis. The development of an artificial-intelligence-based system to assist in kidney cancer diagnosis is crucial because kidney illness is a global health concern and few nephrologists are qualified to evaluate kidney cancer. Diagnosing and categorising the different forms of renal failure presents the biggest treatment hurdle for kidney cancer. This article therefore presents a novel method for detecting and classifying kidney cancer subgroups in Computed Tomography (CT) images based on an asymmetric local statistical pixel distribution. In the first step, the input image is partitioned into non-overlapping windows and a statistical distribution of its pixels is built for each cancer type. The method then builds the asymmetric statistical distribution of the image's gradient pixels. Finally, the cancer type is identified by feeding the two statistical distributions into a Deep Neural Network (DNN). The proposed method was evaluated using a dataset collected and authorised by the Dhaka Central International Medical Hospital in Bangladesh, which includes 12,446 CT images of the whole abdomen and urogram, acquired with and without contrast. The results confirm that the proposed method outperformed state-of-the-art methods in terms of the usual correctness criteria, with a promising accuracy of 99.89% across all kidney cancer subtypes in the dataset.
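A rough sketch of the windowing-plus-distribution step, with per-window histograms of pixel intensities and gradient magnitudes as stand-ins for the paper's exact statistical distributions:

```python
import numpy as np

def window_histograms(img, win=32, bins=16):
    """Non-overlapping windowing of a CT slice, then per-window histograms of
    pixel intensities and gradient magnitudes; the feature layout and all
    parameters are illustrative, not the paper's exact design."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gy, gx)
    feats = []
    for r in range(0, img.shape[0] - win + 1, win):
        for c in range(0, img.shape[1] - win + 1, win):
            p = img[r:r + win, c:c + win]
            g = grad[r:r + win, c:c + win]
            hp, _ = np.histogram(p, bins=bins, range=(0, 255), density=True)
            hg, _ = np.histogram(g, bins=bins,
                                 range=(0, grad.max() + 1e-9), density=True)
            feats.append(np.concatenate([hp, hg]))
    return np.array(feats)   # one row per window, ready for a DNN classifier

img = np.random.default_rng(6).integers(0, 256, (128, 128))
f = window_histograms(img)
print(f.shape)
```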

12.
Biomed Environ Sci ; 26(8): 638-46, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23981549

ABSTRACT

OBJECTIVE: To estimate the frequency of daily average PM10 concentrations exceeding the air quality standard (AQS) and the reduction in particulate matter emissions needed to meet the AQS, based on the statistical properties (probability density functions) of air pollutant concentrations. METHODS: Daily average PM10 concentrations in Beijing, Shanghai, Guangzhou, Wuhan, and Xi'an were measured from 1 January 2004 to 31 December 2008. The PM10 concentration distribution was modelled using the lognormal, Weibull, and Gamma distributions, and the best-fitting distribution for each of the 5 cities was identified using the maximum likelihood method. RESULTS: The daily average PM10 concentrations in the 5 cities were best fitted by the lognormal distribution. The duration of exceedance was predicted, and the estimated PM10 emission source reductions required to meet the AQS in the 5 cities are 56.58%, 93.40%, 80.17%, 82.40%, and 79.80%, respectively. CONCLUSION: Air pollutant concentrations can be predicted from the PM10 concentration distribution, which can be further applied in air quality management and related policy making.
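The exceedance-frequency and emission-reduction calculations can be sketched from a fitted lognormal. The AQS value, the proportional-rollback assumption (concentrations scale with emissions), and the synthetic data are all illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic daily PM10 averages (µg/m³); a stand-in for the five-city data.
pm10 = rng.lognormal(mean=4.9, sigma=0.6, size=5 * 365)
aqs = 150.0                                    # illustrative daily AQS, µg/m³

shape, loc, scale = stats.lognorm.fit(pm10, floc=0)
dist = stats.lognorm(shape, loc, scale)

# Expected fraction (and number) of days exceeding the standard.
p_exceed = 1 - dist.cdf(aqs)
days_per_year = 365 * p_exceed

# Emission reduction needed so the AQS is met on ~99% of days, assuming
# concentrations scale proportionally with emissions (rollback assumption).
c99 = dist.ppf(0.99)
reduction = max(0.0, 1 - aqs / c99)
print(round(p_exceed, 3), round(days_per_year, 1), round(100 * reduction, 1))
```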


Subjects
Air Pollutants/analysis ; Cities ; Particulate Matter/analysis ; China ; Environmental Monitoring ; Likelihood Functions
13.
Comput Methods Programs Biomed ; 236: 107557, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37100023

ABSTRACT

BACKGROUND AND OBJECTIVE: Ultrasound has emerged as a promising modality for detecting middle ear effusion (MEE) in pediatric patients. Among different ultrasound techniques, ultrasound mastoid measurement was proposed to allow noninvasive detection of MEE by estimating the Nakagami parameters of backscattered signals to describe the echo amplitude distribution. This study further developed the multiregional-weighted Nakagami parameter (MNP) of the mastoid as a new ultrasound signature for assessing effusion severity and fluid properties in pediatric patients with MEE. METHODS: A total of 197 pediatric patients (n = 133 for the training group; n = 64 for the testing group) underwent multiregional backscattering measurements of the mastoid for estimating MNP values. MEE, the severity of effusion (mild to moderate vs. severe), and the fluid properties (serous vs. mucous) were confirmed through otoscopy, tympanometry, and grommet surgery and were compared with the ultrasound findings. Diagnostic performance was evaluated using the area under the receiver operating characteristic curve (AUROC). RESULTS: The training dataset revealed significant differences in MNPs between the control and MEE groups, between mild to moderate and severe MEE, and between serous and mucous effusion (p < 0.05). As with the conventional Nakagami parameter, the MNP could be used to detect MEE (AUROC: 0.87; sensitivity: 90.16%; specificity: 75.35%). The MNP could further identify effusion severity (AUROC: 0.88; sensitivity: 73.33%; specificity: 86.87%) and revealed the possibility of characterizing fluid properties (AUROC: 0.68; sensitivity: 62.50%; specificity: 70.00%).
The testing results demonstrated that the MNP method enabled MEE detection (AUROC = 0.88, accuracy = 88.28%, sensitivity = 92.59%, specificity = 84.21%), was effective in assessing MEE severity (AUROC = 0.83, accuracy = 77.78%, sensitivity = 66.67%, specificity = 83.33%), and showed potential for characterizing fluid properties of effusion (AUROC = 0.70, accuracy = 72.22%, sensitivity = 62.50%, specificity = 80.00%). CONCLUSIONS: Transmastoid ultrasound combined with the MNP not only leverages the strengths of the conventional Nakagami parameter for MEE diagnosis but also provides a means to assess MEE severity and effusion properties in pediatric patients, thereby offering a comprehensive approach to noninvasive MEE evaluation.
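The core Nakagami estimation step can be sketched with the standard moment-based (inverse normalized variance) estimator; the multiregional weighting itself is omitted here, and all numbers are illustrative:

```python
import numpy as np

def nakagami_params(envelope):
    """Moment-based estimator of the Nakagami m parameter and scale Ω
    from an ultrasound backscattered envelope: m = Ω² / Var(R²)."""
    r2 = envelope.astype(float) ** 2
    omega = r2.mean()
    m = omega ** 2 / r2.var()
    return m, omega

rng = np.random.default_rng(8)
true_m, omega = 1.8, 1.0
# Nakagami-m envelope samples: the square root of a Gamma(m, Ω/m) variate.
env = np.sqrt(rng.gamma(shape=true_m, scale=omega / true_m, size=50_000))
m_hat, omega_hat = nakagami_params(env)
print(round(m_hat, 2), round(omega_hat, 2))
```

A multiregional-weighted variant would, under the paper's scheme, combine such per-region estimates into a single signature.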


Subjects
Otitis Media with Effusion ; Humans ; Child ; Otitis Media with Effusion/diagnostic imaging ; Otitis Media with Effusion/surgery ; Acoustic Impedance Tests ; Mastoid/diagnostic imaging ; ROC Curve ; Ultrasonography
14.
Soft comput ; : 1-10, 2023 Jun 02.
Article in English | MEDLINE | ID: mdl-37362290

ABSTRACT

In this paper, some statistical properties of the Choquet integral are discussed. As an interesting application of the Choquet integral and fuzzy measures, we introduce a new class of exponential-like distributions related to monotone set functions, called Choquet exponential distributions, by combining the properties of the Choquet integral with the exponential distribution. We show that some well-known statistical distributions, such as the gamma, logistic, exponential, and Rayleigh distributions, are special cases of Choquet distributions. We then show that the proposed Choquet exponential distribution performs better in an analysis of daily gold price data. A real dataset of the daily number of people newly infected with coronavirus in the USA over the period 2020/02/29 to 2020/10/19 is also analyzed. The method presented in this article opens a new horizon for future research.
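The discrete Choquet integral underlying these constructions can be sketched directly. The capacity used below is an illustrative distorted probability; with an additive capacity the Choquet integral reduces to the ordinary mean:

```python
import numpy as np

def choquet_integral(x, capacity):
    """Discrete Choquet integral of values x with respect to a monotone set
    function: sum over sorted values of (x_(i) - x_(i-1)) * capacity({x >= x_(i)})."""
    order = np.argsort(x)                  # indices in ascending value order
    xs = np.asarray(x, dtype=float)[order]
    total, prev = 0.0, 0.0
    for i in range(len(xs)):
        a = frozenset(order[i:])           # indices whose value is >= xs[i]
        total += (xs[i] - prev) * capacity(a)
        prev = xs[i]
    return total

# A simple capacity: normalized cardinality raised to a power (a distorted
# probability); with power 1 the capacity is additive.
n = 4
cap = lambda s: (len(s) / n) ** 1.0
x = [2.0, 7.0, 1.0, 4.0]
print(choquet_integral(x, cap))  # equals the arithmetic mean, 3.5
```

With a convex distortion such as `(len(s)/n) ** 2`, the integral drops below the mean, illustrating how the capacity shapes the resulting distribution family.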

15.
Environ Sci Pollut Res Int ; 27(2): 1521-1532, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31755058

ABSTRACT

The Brahmaputra is one of the perennial rivers of India; it floods every year in the north-eastern state of Assam, disrupting normal life and damaging crops. The availability of temporal Remote Sensing (RS) data makes it possible to study the periodic changes caused by flood events and their eventual effect on the natural environment. Integrating RS and GIS methods paves the way for effective flood mapping over a large spatial extent, which helps assess the damage accurately for mitigation. In the present study, multitemporal Sentinel-1A data are exploited to assess the 2017 flood situation of the Brahmaputra River in Assam state. Five data sets acquired during the flood season and one reference data set acquired during the non-monsoon season are used to estimate the area inundated by floods and quantify the damage. A visual interpretation map is produced using a colour segmentation method, with thresholds estimated from histogram analysis. A new method is developed to identify the optimum threshold, separating flooded from non-flooded water, from the statistical distribution of the Synthetic Aperture Radar (SAR) data. With this method, the range of backscatter values for normal water is identified as -18 to -30 dB, and the range for flooded water as -19 to -24 dB. The results show that the method is able to separate the flooded and non-flooded regions in the microwave data set, and the flood extent derived with this method indicates an inundated area of 3873.14 km2 on the peak flood date for the chosen study area.
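The histogram-based threshold selection can be approximated with Otsu's method, a common choice for splitting a bimodal backscatter histogram; the dB values below are synthetic, not the Sentinel-1A data:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Histogram-based threshold (Otsu's method) that maximizes between-class
    variance; a stand-in for the paper's statistical-distribution thresholding."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

rng = np.random.default_rng(9)
# Synthetic backscatter (dB): water pixels around -22 dB, land around -8 dB.
sigma0 = np.concatenate([rng.normal(-22, 2, 4000), rng.normal(-8, 3, 6000)])
t = otsu_threshold(sigma0)
print(round(t, 1))
```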


Subjects
Environmental Monitoring/methods ; Floods ; Remote Sensing Technology ; Rivers ; India ; Radar
16.
PeerJ Comput Sci ; 6: e298, 2020.
Article in English | MEDLINE | ID: mdl-33816949

ABSTRACT

This paper proposes a management method for slow-moving items in service enterprise inventory models, based on intermittent demand per unit time and the lead-time demand of items. Our method uses the zero-inflated truncated normal statistical distribution, which makes it possible to model intermittent demand per unit time with a mixed statistical distribution. We conducted numerical experiments, based on an algorithm that forecasts intermittent demand over a fixed lead time, to show that the proposed distribution improves the performance of the continuous review inventory model with shortages. We evaluated multi-criteria elements (total cost, fill rate, shortage quantity per cycle, and the adequacy of the statistical distribution of the lead-time demand) for decision analysis using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). We confirmed that our method improves the performance of the inventory model in comparison with other commonly used approaches such as simple exponential smoothing and Croston's method. We found an interesting association between the intermittency of demand per unit time, the square root of this same parameter, and reorder point decisions, which could be explained using a classical multiple linear regression model. We confirmed that the variability parameter of the zero-inflated truncated normal distribution used to model intermittent demand was positively related to the reorder point decisions. Our study demonstrates the decision analysis with an illustrative example. Our suggested approach is original and valuable: in the case of slow-moving item management for service companies, it allows decisions to be verified against multiple criteria.
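Sampling from a zero-inflated truncated normal model of intermittent demand, and aggregating it over a lead time, can be sketched as follows; all parameter values are illustrative:

```python
import numpy as np
from scipy import stats

def zitn_sample(rng, size, p_zero, mu, sigma, lower=0.0):
    """Draw intermittent demand from a zero-inflated truncated normal:
    with probability p_zero the demand is 0, otherwise it is a normal
    variate truncated below at `lower`. Parameters are illustrative."""
    a = (lower - mu) / sigma                       # standardized lower bound
    positive = stats.truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                                   size=size, random_state=rng)
    zeros = rng.random(size) < p_zero
    return np.where(zeros, 0.0, positive)

rng = np.random.default_rng(10)
demand = zitn_sample(rng, 10_000, p_zero=0.7, mu=5.0, sigma=2.0)

# Lead-time demand over L = 4 periods: the sum of per-period demands,
# which would feed the reorder point calculation.
lead_time_demand = zitn_sample(rng, (10_000, 4),
                               p_zero=0.7, mu=5.0, sigma=2.0).sum(axis=1)
print(round((demand == 0).mean(), 2), round(lead_time_demand.mean(), 1))
```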

17.
Materials (Basel) ; 12(10)2019 May 14.
Article in English | MEDLINE | ID: mdl-31091733

ABSTRACT

Manufacturing-process-based imperfections can reduce the theoretical fatigue strength, since they can be considered pre-existing microcracks. The statistical distribution of the defect sizes that initiate fatigue fracture also varies with the highly-stressed volume, since a larger highly-stressed volume is more likely to contain a potentially critical defect. This fact is widely known in the scientific community as the statistical size effect. Its assessment within this paper is based on the statistical distribution of defect sizes in a reference volume V0 compared with an arbitrarily enlarged volume Vα. By incorporating the crack resistance curve in the Kitagawa-Takahashi diagram, a fatigue assessment model based on the volume-dependent probability of occurrence of inhomogeneities is set up, leading to a multidimensional fatigue assessment map. It is shown that state-of-the-art methodologies for evaluating the statistical size effect can lead to noticeable over-sizing in fatigue design of approximately 10%. The presented approach, on the other hand, which links the statistically based distribution of defect sizes in an arbitrary highly-stressed volume to a crack-resistance-dependent Kitagawa-Takahashi diagram, leads to a more accurate fatigue design, with a maximal conservative deviation of 5% from the experimental validation data. The introduced fatigue assessment map therefore improves fatigue design in light of the statistical size effect of lightweight cast aluminium alloys.
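The statistical size effect itself (a larger highly-stressed volume samples more defects and therefore larger extreme defect sizes) can be illustrated with a toy simulation; the defect-size distribution and counts are assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(13)

def largest_defect(volume_ratio, n_defects_v0=50, size=20_000):
    """Largest defect per specimen when the highly-stressed volume is
    volume_ratio times the reference volume V0: more volume means more
    sampled defects and hence larger extremes. Defect sizes (e.g. sqrt(area)
    in µm) are drawn from an illustrative lognormal population."""
    n = int(n_defects_v0 * volume_ratio)
    d = rng.lognormal(mean=3.0, sigma=0.4, size=(size, n))
    return d.max(axis=1)

a_ref = largest_defect(1.0)      # reference volume V0
a_big = largest_defect(5.0)      # enlarged volume, 5 x V0
print(round(a_ref.mean(), 1), round(a_big.mean(), 1))
```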

18.
Int J Food Microbiol ; 281: 72-81, 2018 09 20.
Article in English | MEDLINE | ID: mdl-29870893

ABSTRACT

Heat-resistant moulds (HRMs) are well known for their ability to survive pasteurization and spoil high-acid food products, which is of great concern for processors of fruit-based products worldwide. Whilst the majority of studies on HRMs over the last decades have addressed their inactivation, few data are currently available regarding their contamination levels in fruit and fruit-based products. Thus, this study aimed to quantify and identify heat-resistant fungal ascospores from samples collected throughout the processing of pasteurized high-acid fruit products. In addition, an assessment of the effect of processing on the contamination levels of HRMs in these products was carried out. A total of 332 samples from 111 batches were analyzed from three processing plants (= three processing lines): strawberry puree (n = 88, Belgium), concentrated orange juice (n = 90, Brazil) and apple puree (n = 154, the Netherlands). HRMs were detected in 96.4% (107/111) of the batches and 59.3% (197/332) of the analyzed samples. HRMs were present in 90.9% of the samples from the strawberry puree processing line (1-215 ascospores/100 g), 46.7% of the samples from the orange juice processing line (1-200 ascospores/100 g) and 48.7% of the samples from the apple puree processing line (1-84 ascospores/100 g). Despite the high occurrence, the majority (76.8%, 255/332) of the samples were either not contaminated or presented low levels of HRMs (<10 ascospores/100 g). For both strawberry puree and concentrated orange juice, processing had no statistically significant effect on the levels of HRMs (p > 0.05). In contrast, a significant reduction (p < 0.05) in HRM levels was observed during the processing of apple puree. Twelve species were identified belonging to four genera: Byssochlamys, Aspergillus with Neosartorya-type ascospores, Talaromyces and Rasamsonia. N. fumigata (23.6%), N. fischeri (19.1%) and B. nivea (5.5%) were the predominant species in pasteurized products. The quantitative data (contamination levels of HRMs) were fitted to exponential distributions and will ultimately be included as input to spoilage risk assessment models, allowing better control of the spoilage of heat-treated fruit products caused by heat-resistant moulds.
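The exponential fit mentioned at the end of the abstract reduces to a one-line maximum-likelihood estimate. The counts below are invented for illustration, not the study's data:

```python
import math
import statistics

def fit_exponential(counts):
    """Maximum-likelihood rate for an exponential distribution fitted to
    contamination levels (ascospores/100 g): lambda_hat = 1 / sample mean."""
    return 1.0 / statistics.mean(counts)

def prob_exceeding(rate, level):
    """Exponential survival function: probability that a sample exceeds
    `level` ascospores/100 g, the kind of quantity a spoilage risk
    assessment model would propagate."""
    return math.exp(-rate * level)

# Illustrative (not the paper's) ascospore counts per 100 g:
counts = [1, 2, 2, 3, 5, 8, 12, 20, 35, 60]
lam = fit_exponential(counts)
p10 = prob_exceeding(lam, 10)  # expected share of samples above 10/100 g
```

An exponential distribution concentrates mass at low counts, matching the observation that most samples carried fewer than 10 ascospores/100 g while a few reached 200+.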


Assuntos
Ascomicetos/fisiologia , Microbiologia de Alimentos , Sucos de Frutas e Vegetais/microbiologia , Frutas/microbiologia , Temperatura Alta , Bélgica , Brasil , Manipulação de Alimentos , Fragaria/microbiologia , Malus/microbiologia , Países Baixos , Pasteurização , Esporos Fúngicos/isolamento & purificação , Esporos Fúngicos/fisiologia
19.
Materials (Basel) ; 11(12)2018 Dec 14.
Article in English | MEDLINE | ID: mdl-30558138

ABSTRACT

Cast parts usually inherit internal defects, such as micro shrinkage pores, from the manufacturing process. In order to assess the fatigue behaviour in both the finite-life and long-life fatigue regions, this paper scientifically contributes a defect-based fatigue design model. Extensive fatigue and fracture-mechanical tests were conducted, whereby the crack-initiating defect size population was fractographically evaluated. Complementary in situ X-ray computed tomography scans before and during fatigue testing enabled an experimental estimation of the lifetime until crack initiation, acting as a significant input for the fatigue model. A commonly applied fatigue assessment approach introduced by Tiryakioglu was modified by incorporating the long crack threshold value, which additionally enabled the assessment of the fatigue strength in the long-life fatigue regime. The presented design concept was validated using the fatigue test results, which revealed sound agreement between the experiments and the model. Only minor deviations of up to about 5% in the case of long-life fatigue strength and up to about 9% in the case of finite lifetime were determined. Thus, the provided extension of Tiryakioglu's approach supports a unified fatigue strength assessment of cast aluminium alloys in both the finite- and long-life regimes.
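One common way to turn a defect size and a long-crack threshold into an endurable stress range, in the spirit of the defect-based design described above, is the El Haddad form of the Kitagawa-Takahashi diagram. This is a generic sketch, not the paper's modified Tiryakioglu model; the geometry factor Y and all numeric values are assumptions.

```python
import math

def el_haddad_a0(dk_th, dsigma0, Y=0.65):
    """El Haddad intrinsic crack length a0 (m), combining the long-crack
    threshold dk_th (MPa*sqrt(m)) with the plain-specimen fatigue limit
    dsigma0 (MPa). Y is an assumed crack geometry factor."""
    return (dk_th / (Y * dsigma0)) ** 2 / math.pi

def fatigue_strength(a, dk_th, dsigma0, Y=0.65):
    """Kitagawa-Takahashi-type endurable stress range for a defect of
    size a (m): vanishing defects recover dsigma0, large defects follow
    the long-crack threshold line dk_th / (Y*sqrt(pi*a))."""
    a0 = el_haddad_a0(dk_th, dsigma0, Y)
    return dk_th / (Y * math.sqrt(math.pi * (a + a0)))
```

By construction `fatigue_strength(0, ...)` returns the plain fatigue limit, and the strength decays smoothly toward the long-crack line as the defect size grows, which is how incorporating the threshold extends a defect-based model into the long-life regime.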

20.
ACS Appl Mater Interfaces ; 10(41): 35374-35384, 2018 Oct 17.
Article in English | MEDLINE | ID: mdl-30247016

ABSTRACT

Interest in nanoscale integrated ferroelectric devices using doped HfO2-based thin films is actively reviving in academia and industry. The main driving force for the formation of the metastable non-centrosymmetric ferroelectric phase is considered to be the interface/grain-boundary energy effect of the small grains in the polycrystalline configuration. These small grains, however, can invoke unfavorable material properties, such as nonuniform switching performance. This study provides an in-depth understanding of such aspects of this material through careful measurement and modeling of the ferroelectric switching kinetics. Various previous switching models developed for conventional ferroelectric thin-film capacitors cannot fully account for the observed time- and voltage-dependent switching current evolution. Accurate fitting of the experimental results required careful consideration of the inhomogeneous field distribution across the electrode area, which could be acquired by an appropriate mathematical formulation of polarization as a function of electric field and time. Compared with a conventional polycrystalline Pb(Zr,Ti)O3 film, the statistical distribution of the local field was found to be three times wider, and the activation field and characteristic time for domain switching were larger by more than one order of magnitude. This indicates that doped HfO2 is an inhomogeneous and "hard" ferroelectric material compared with conventional perovskite-based ferroelectrics.
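Switching kinetics with an inhomogeneous local field are often modeled by averaging Kolmogorov-Avrami-Ishibashi (KAI) kinetics over a distribution of characteristic switching times, the nucleation-limited-switching picture. The sketch below uses a Lorentzian spread of log10(t0) purely as an assumption; it is not the fitting formulation of this particular study, and every parameter value is illustrative.

```python
import math

def lorentzian(logt0, center, width):
    """Lorentzian density over log10 of the characteristic switching time,
    a common stand-in for the inhomogeneous local-field distribution
    across grains (wider distribution = less uniform switching)."""
    return (width / math.pi) / ((logt0 - center) ** 2 + width ** 2)

def switched_fraction(t, center, width, n=2.0, lo=-9.0, hi=3.0, steps=2000):
    """Nucleation-limited-switching fraction at time t: KAI kinetics
    1 - exp(-(t/t0)**n) averaged (midpoint rule) over the Lorentzian
    spread of log10(t0). center/width/n are illustrative parameters."""
    dlt = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        logt0 = lo + (i + 0.5) * dlt
        t0 = 10.0 ** logt0
        total += lorentzian(logt0, center, width) \
                 * (1.0 - math.exp(-(t / t0) ** n)) * dlt
    return total
```

Broadening `width` smears the switched fraction over many decades of time, which is the qualitative signature, noted in the abstract, of a local-field distribution several times wider than in Pb(Zr,Ti)O3.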
