ABSTRACT
Pupillary responses serve as sensitive indicators of cognitive processes, attentional shifts and decision-making dynamics. Our study investigates how directional uncertainty and target speed (VT) influence pupillary responses in a foveal tracking task involving the interception of a moving dot. Directional uncertainty, reflecting the unpredictability of the target's direction changes, was manipulated by altering the angular range (AR) from which random directions for the moving dot were extracted. Higher AR values were associated with reduced pupillary diameters, indicating that heightened uncertainty led to smaller pupil sizes. Additionally, an inverted U-shaped relationship between VT and pupillary responses suggested maximal diameters at intermediate speeds. Analysis of saccade-triggered responses showed a negative correlation between pupil diameter and directional uncertainty. Dynamic linear modelling revealed the influence of past successful collisions and other behavioural parameters on pupillary responses, emphasizing the intricate interaction between task variables and cognitive processing. Our results highlight the dynamic interplay between the directional uncertainty of a single moving target, VT and pupillary responses, with implications for understanding attentional mechanisms, decision-making processes and potential applications in emerging technologies.
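As a rough illustration of how such an inverted-U profile can be checked, a quadratic fit of pupil diameter against VT yields a negative second-order coefficient when diameters peak at intermediate speeds. This is a minimal sketch with synthetic data, not the authors' pipeline; the speed range and diameters are invented placeholders.

```python
# Sketch: testing for an inverted-U relation between target speed (VT)
# and pupil diameter with a quadratic fit. All values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
vt = rng.uniform(2, 20, 200)                                      # assumed VT range (deg/s)
pupil = -0.02 * (vt - 11) ** 2 + 4.5 + rng.normal(0, 0.2, 200)    # synthetic diameters (mm)

# Fit pupil = b2*VT^2 + b1*VT + b0; a negative b2 indicates an inverted U
b2, b1, b0 = np.polyfit(vt, pupil, 2)
peak_speed = -b1 / (2 * b2)                                       # vertex of the parabola
print(f"quadratic coefficient: {b2:.4f}, peak at VT = {peak_speed:.1f} deg/s")
```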
ABSTRACT
Urban Heat Islands are a major environmental and public health concern, causing temperature increases in urban areas. This study used satellite imagery and machine learning to analyze the spatial and temporal patterns of land surface temperature distribution in the Metropolitan Area of Merida (MAM), Mexico, from 2001 to 2021. The results show that land surface temperature has increased in the MAM over the study period, while the urban footprint has expanded. The study also found a high correlation (r > 0.8) between changes in land surface temperature and land cover classes (urbanization/deforestation). If the current urbanization trend continues, the difference between the land surface temperature of the MAM and its surroundings is expected to reach 3.12 °C ± 1.11 °C by the year 2030. Hence, the findings of this study suggest that the Urban Heat Island effect is a growing problem in the MAM and highlight the importance of satellite imagery and machine learning for monitoring it and developing mitigation strategies.
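A minimal sketch of the kind of trend extrapolation behind such a 2030 projection: fit a linear trend to the annual urban-surroundings LST difference and evaluate it at 2030. The yearly values below are synthetic placeholders, not the study's data.

```python
# Sketch: extrapolating the LST difference (urban minus surroundings)
# to 2030 with a simple linear trend. Values are illustrative only.
import numpy as np

years = np.arange(2001, 2022)
# hypothetical annual mean LST difference in °C
delta_lst = 1.0 + 0.07 * (years - 2001) + np.random.default_rng(1).normal(0, 0.15, years.size)

slope, intercept = np.polyfit(years, delta_lst, 1)
projection_2030 = slope * 2030 + intercept
print(f"projected LST difference in 2030: {projection_2030:.2f} °C")
```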
ABSTRACT
BACKGROUND: Battling malaria's morbidity and mortality rates demands innovative methods for malaria diagnosis. Thick blood smears (TBS) are the gold standard for diagnosing malaria, but their coloration quality depends on supplies and on adherence to standard protocols. Machine learning has been proposed to automate diagnosis, but the impact of smear coloration on parasite detection has not yet been fully explored. METHODS: To develop Coloration Analysis in Malaria (CAM), an image database containing 600 images was created. The database was randomly divided into training (70%), validation (15%), and test (15%) sets. Nineteen feature vectors were studied based on variances, correlation coefficients, and histograms (specific variables from histograms, full histograms, and principal components from the histograms). The Machine Learning Matlab Toolbox was used to select the best candidate feature vectors and machine learning classifiers. The candidate classifiers were then tuned for validation and tested to ultimately select the best one. RESULTS: This work introduces CAM, a machine learning system designed for automatic TBS image quality analysis. The results demonstrated that the cubic SVM classifier outperformed others in classifying coloration quality in TBS, achieving a true negative rate of 95% and a true positive rate of 97%. CONCLUSIONS: An image-based approach was developed to automatically evaluate the coloration quality of TBS. This finding highlights the potential of image-based analysis to assess TBS coloration quality. CAM is intended to function as a supportive tool for analyzing the coloration quality of thick blood smears.
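A hedged sketch of the winning configuration, translated to scikit-learn rather than the MATLAB toolbox the study used: a degree-3 polynomial ("cubic") SVM trained on histogram-style feature vectors with the 70/15/15 split. The feature matrix, labels, and histogram length are placeholders, not the CAM database.

```python
# Sketch: a cubic-kernel SVM on histogram-based features with the
# 70/15/15 split described above. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((600, 64))            # e.g., 64-bin color histograms per image
y = rng.integers(0, 2, 600)          # 1 = adequate coloration, 0 = inadequate

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

clf = SVC(kernel="poly", degree=3).fit(X_train, y_train)   # "cubic SVM"
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"TNR = {tn / (tn + fp):.2f}, TPR = {tp / (tp + fn):.2f}")
```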
Subject(s)
Image Processing, Computer-Assisted , Machine Learning , Image Processing, Computer-Assisted/methods , Humans , Malaria , Color
ABSTRACT
PURPOSE: We developed a predictive model to assess the risk of major bleeding (MB) within 6 months of primary venous thromboembolism (VTE) in cancer patients receiving anticoagulant treatment. We also sought to describe the prevalence and incidence of VTE in cancer patients, and to describe clinical characteristics at baseline and bleeding events during follow-up in patients receiving anticoagulants. METHODS: This observational, retrospective, and multicenter study used natural language processing and machine learning (ML), to analyze unstructured clinical data from electronic health records from nine Spanish hospitals between 2014 and 2018. All adult cancer patients with VTE receiving anticoagulants were included. Both clinically- and ML-driven feature selection was performed to identify MB predictors. Logistic regression (LR), decision tree (DT), and random forest (RF) algorithms were used to train predictive models, which were validated in a hold-out dataset and compared to the previously developed CAT-BLEED score. RESULTS: Of the 2,893,108 cancer patients screened, in-hospital VTE prevalence was 5.8% and the annual incidence ranged from 2.7 to 3.9%. We identified 21,227 patients with active cancer and VTE receiving anticoagulants (53.9% men, median age of 70 years). MB events after VTE diagnosis occurred in 10.9% of patients within the first six months. MB predictors included: hemoglobin, metastasis, age, platelets, leukocytes, and serum creatinine. The LR, DT, and RF models had AUC-ROC (95% confidence interval) values of 0.60 (0.55, 0.65), 0.60 (0.55, 0.65), and 0.61 (0.56, 0.66), respectively. These models outperformed the CAT-BLEED score with values of 0.53 (0.48, 0.59). CONCLUSIONS: Our study shows encouraging results in identifying anticoagulated patients with cancer-associated VTE who are at high risk of MB.
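A minimal sketch of the model comparison described above, assuming scikit-learn rather than the study's actual stack: the six named predictors feed LR, DT, and RF models, and AUC-ROC is computed on a hold-out set. All data below are synthetic.

```python
# Sketch: training LR, DT, and RF on the named MB predictors and
# comparing hold-out AUC-ROC. Data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "hemoglobin": rng.normal(12, 2, 5000),
    "metastasis": rng.integers(0, 2, 5000),
    "age": rng.integers(30, 95, 5000),
    "platelets": rng.normal(250, 80, 5000),
    "leukocytes": rng.normal(8, 3, 5000),
    "creatinine": rng.normal(1.0, 0.4, 5000),
})
y = rng.integers(0, 2, 5000)  # 1 = major bleeding within 6 months

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("DT", DecisionTreeClassifier(max_depth=5)),
                    ("RF", RandomForestClassifier(n_estimators=200))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC-ROC = {auc:.2f}")
```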
ABSTRACT
Most natural disasters result from geodynamic events such as landslides and slope collapses. These failures cause catastrophes that directly impact the environment and cause financial and human losses. Visual inspection is the primary method for detecting failures in geotechnical structures, but on-site visits can be risky due to unstable soil. In addition, the design of these structures and their hostile, remote installation conditions can make conventional monitoring unfeasible. When a fast and secure evaluation is required, analysis by computational methods becomes feasible. In this study, a convolutional neural network (CNN) approach to computer vision is applied to identify defects on the surface of geotechnical structures, aided by unmanned aerial vehicles (UAVs) and mobile devices, aiming to reduce the reliance on human-led on-site inspections. However, computer vision algorithms remain underexplored in this field due to particularities of geotechnical engineering, such as limited public datasets and redundant images. Thus, this study obtained images of surface failure indicators from slopes near a Brazilian national road, assisted by UAV and mobile devices. We then proposed a custom, low-complexity CNN architecture to build an image-aided binary classifier that detects faults on geotechnical surfaces. The model achieved a satisfactory average accuracy rate of 94.26%. An AUC score of 0.99 from the receiver operating characteristic (ROC) curve and the confusion matrix on a testing dataset show satisfactory results. The results suggest that the capability of the model to distinguish between the classes 'damage' and 'intact' is excellent, enabling the identification of failure indicators. Early detection of failure indicators on the surface of slopes can facilitate proper maintenance and alarms and prevent disasters, as the integrity of the soil directly affects the structures built around and above it.
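The abstract does not publish the architecture, so the following is only a plausible low-complexity Keras sketch of a binary 'damage' vs 'intact' image classifier; the input size and layer widths are assumptions.

```python
# Sketch: a small CNN for binary classification of slope-surface images
# ('damage' vs 'intact'). Layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # binary output: damage vs intact
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```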
ABSTRACT
This dataset includes spectra obtained through Raman spectroscopy of acetylsalicylic acid, paracetamol, and ibuprofen commercialized in San Lorenzo, Central Department of Paraguay. The pharmaceuticals were randomly purchased from pharmacies, official sales points, and street vendors, simulating purchases for self-consumption. These drugs were selected due to their high demand and consumption by the population, aiming to document and facilitate the identification of adulterations or alterations in their original structures caused by poor storage conditions. Additionally, this database will support multivariate studies for clustering using various techniques, both supervised and unsupervised, and will allow for signal processing and spectroscopic data handling.
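One way such a spectral database could feed the unsupervised clustering the description anticipates: project the spectra onto principal components and cluster the scores. The file name and matrix layout below are hypothetical.

```python
# Sketch: PCA plus k-means on Raman spectra (rows = spectra,
# columns = Raman shift channels). File name is hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

spectra = np.loadtxt("raman_spectra.csv", delimiter=",")

scores = PCA(n_components=2).fit_transform(spectra)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(scores)  # e.g., 3 drug classes
print(labels[:10])
```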
ABSTRACT
INTRODUCTION: Pain associated with temporomandibular dysfunction (TMD) is often confused with odontogenic pain, which is a challenge in endodontic diagnosis. Validated screening questionnaires can aid in the identification and differentiation of the source of pain. Therefore, this study aimed to develop a virtual assistant based on artificial intelligence using natural language processing techniques to automate the initial screening of patients with tooth pain. METHODS: The PAINe chatbot was developed in Python (Python Software Foundation, Beaverton, OR) language using the PyCharm (JetBrains, Prague, Czech Republic) environment and the openai library to integrate the ChatGPT 4 API (OpenAI, San Francisco, CA) and the Streamlit library (Snowflake Inc, San Francisco, CA) for interface construction. The validated TMD Pain Screener questionnaire and 1 question regarding the current pain intensity were integrated into the chatbot to perform the differential diagnosis of TMD in patients with tooth pain. The accuracy of the responses was evaluated in 50 random scenarios to compare the chatbot with the validated questionnaire. The kappa coefficient was calculated to assess the agreement level between the chatbot responses and the validated questionnaire. RESULTS: The chatbot achieved an accuracy rate of 86% and a substantial level of agreement (κ = 0.70). Most responses were clear and provided adequate information about the diagnosis. CONCLUSIONS: The implementation of a virtual assistant using natural language processing based on large language models for initial differential diagnosis screening of patients with tooth pain demonstrated substantial agreement between validated questionnaires and the chatbot. This approach emerges as a practical and efficient option for screening these patients.
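The agreement analysis itself is straightforward to reproduce; a sketch using scikit-learn is below, with placeholder labels standing in for the 50 scenario outcomes.

```python
# Sketch: accuracy and Cohen's kappa between chatbot screening outcomes
# and the validated questionnaire. Labels are placeholders.
from sklearn.metrics import accuracy_score, cohen_kappa_score

questionnaire = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = TMD pain suspected
chatbot       = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

print(f"accuracy = {accuracy_score(questionnaire, chatbot):.2f}")
print(f"kappa    = {cohen_kappa_score(questionnaire, chatbot):.2f}")
```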
ABSTRACT
BACKGROUND: The growing availability of big data spontaneously generated by social media platforms allows us to leverage natural language processing (NLP) methods as valuable tools to understand the opioid crisis. OBJECTIVE: We aimed to understand how NLP has been applied to Reddit (Reddit Inc) data to study opioid use. METHODS: We systematically searched for peer-reviewed studies and conference abstracts in PubMed, Scopus, PsycINFO, ACL Anthology, IEEE Xplore, and Association for Computing Machinery data repositories up to July 19, 2022. Inclusion criteria were studies investigating opioid use, using NLP techniques to analyze the textual corpora, and using Reddit as the social media data source. We were specifically interested in mapping studies' overarching goals and findings, methodologies and software used, and main limitations. RESULTS: In total, 30 studies were included, which were classified into 4 nonmutually exclusive overarching goal categories: methodological (n=6, 20% studies), infodemiology (n=22, 73% studies), infoveillance (n=7, 23% studies), and pharmacovigilance (n=3, 10% studies). NLP methods were used to identify content relevant to opioid use among vast quantities of textual data, to establish potential relationships between opioid use patterns or profiles and contextual factors or comorbidities, and to anticipate individuals' transitions between different opioid-related subreddits, likely revealing progression through opioid use stages. Most studies used an embedding technique (12/30, 40%), prediction or classification approach (12/30, 40%), topic modeling (9/30, 30%), and sentiment analysis (6/30, 20%). The most frequently used programming languages were Python (20/30, 67%) and R (2/30, 7%). Among the studies that reported limitations (20/30, 67%), the most cited was the uncertainty regarding whether redditors participating in these forums were representative of people who use opioids (8/20, 40%). The papers were very recent (28/30, 93%), from 2019 to 2022, with authors from a range of disciplines. CONCLUSIONS: This scoping review identified a wide variety of NLP techniques and applications used to support surveillance and social media interventions addressing the opioid crisis. Despite the clear potential of these methods to enable the identification of opioid-relevant content in Reddit and its analysis, there are limits to the degree of interpretive meaning that they can provide. Moreover, we identified the need for standardized ethical guidelines to govern the use of Reddit data to safeguard the anonymity and privacy of people using these forums.
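For readers unfamiliar with the reviewed techniques, a toy topic-modeling pass in the spirit of the included studies might look like the following; the three posts are invented, and any real study would use far larger corpora.

```python
# Sketch: LDA topic modeling on opioid-related posts with scikit-learn.
# The corpus is a tiny invented placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = ["tapering off oxycodone after surgery",
         "kratom helped with my withdrawal",
         "suboxone clinic wait times are long"]

X = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))   # per-post topic distributions
```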
Subject(s)
Natural Language Processing , Social Media , Humans , Opioid-Related Disorders/epidemiology , Analgesics, Opioid/adverse effects , Analgesics, Opioid/therapeutic use
ABSTRACT
The aim of this study was to describe the dietary intake of British vegetarians according to the Nova classification and to evaluate the association between the consumption of ultra-processed foods and the nutritional quality of the diet. We used data from the UK national survey (2008/2019). Foods collected through a 4-d record were classified according to the Nova system. In all tertiles of the energy contribution of ultra-processed foods, differences in the average nutrient intake, as well as in the prevalence of inadequate intake, were analysed, considering the values recommended by international authorities. Ultra-processed foods had the highest dietary contribution (56·3 % of energy intake), followed by fresh or minimally processed foods (29·2 %), processed foods (9·4 %) and culinary ingredients (5 %). A positive linear trend was found between the contribution tertiles of ultra-processed foods and the content of free sugars (β 0·25, P < 0·001), while an inverse relationship was observed for dietary fibre (β -0·26, P = 0·002), potassium (β -0·38, P < 0·001), Mg (β -0·31, P < 0·001), Cu (β -0·22, P = 0·003), vitamin A (β -0·37, P < 0·001) and vitamin C (β -0·22, P < 0·001). As the contribution of ultra-processed foods to total energy intake increased (from the first to the last tertile of consumption), the prevalence of inadequate intake of free sugars increased (from 32·9 % to 60·7 %), as did the prevalence of inadequate fibre intake (from 26·1 % to 47·5 %). The influence of ultra-processed foods on the vegetarian diet in the UK is of considerable magnitude, and the consumption of these foods was associated with poorer diet quality.
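A sketch of the tertile-plus-linear-trend analysis described above, assuming pandas and statsmodels; the UPF energy shares and fibre intakes are simulated, so the β reported here is not the paper's.

```python
# Sketch: assign tertiles of ultra-processed-food energy share and test
# a linear trend in fibre intake across tertiles. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({"upf_share": rng.uniform(20, 85, 500)})        # % of energy from UPF
df["fibre"] = 25 - 0.1 * df["upf_share"] + rng.normal(0, 2, 500)  # g/day, synthetic

df["tertile"] = pd.qcut(df["upf_share"], 3, labels=[1, 2, 3]).astype(int)
model = sm.OLS(df["fibre"], sm.add_constant(df["tertile"])).fit()
print(model.params["tertile"], model.pvalues["tertile"])          # trend beta and P
```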
Subject(s)
Diet, Vegetarian , Fast Foods , Nutritive Value , Vegetarians , Humans , United Kingdom , Adult , Female , Male , Middle Aged , Food Handling , Energy Intake , Young Adult , Diet , Dietary Fiber/analysis , Dietary Fiber/administration & dosage , Food, Processed
ABSTRACT
The continuously increasing production of conventional plastics and the inadequate management of their waste have generated tremendously negative effects in recent decades. This demands the production of materials within a circular economy that are easy to recycle and biodegrade, minimizing the environmental impact and increasing cost competitiveness. Bioplastics represent a sustainable alternative in this scenario. However, the replacement of plastics must be addressed considering several aspects along their lifecycle, from bioplastic processing to the final application of the product. In this review, the effects of using different additives, biomass sources, and processing techniques on the mechanical and thermal behavior, as well as on the biodegradability, of bioplastics are discussed. The importance of using bioplasticizers is highlighted, besides studying the role of surfactants, compatibilizers, cross-linkers, coupling agents, and chain extenders. Cellulose, lignin, starch, chitosan, and composites are analyzed as part of the non-synthetic bioplastics considered. Throughout the study, the emphasis is on the use of well-established manufacturing processes, such as extrusion, injection, compression, or blow molding, since these are the ones that satisfy the quality, productivity, and cost requirements for large-scale industrial production. Particular attention is also given to fused deposition modeling, since this additive manufacturing technique is nowadays not only used for making prototypes but is being integrated into the development of parts for a wide variety of biomedical and industrial applications. Finally, recyclability and the commercial requirements for bioplastics are examined, and some future perspectives and challenges for the development of bio-based plastics are discussed, with the conclusion that technological innovations, economic incentives, and policy changes could be coupled with individually driven solutions to mitigate the negative environmental impacts associated with conventional plastics.
ABSTRACT
BACKGROUND: The introduction of natural language processing (NLP) technologies has significantly enhanced the potential of self-administered interventions for treating anxiety and depression by improving human-computer interactions. Although these advances, particularly in complex models such as generative artificial intelligence (AI), are highly promising, robust evidence validating the effectiveness of the interventions remains sparse. OBJECTIVE: The aim of this study was to determine whether self-administered interventions based on NLP models can reduce depressive and anxiety symptoms. METHODS: We conducted a systematic review and meta-analysis. We searched Web of Science, Scopus, MEDLINE, PsycINFO, IEEE Xplore, Embase, and Cochrane Library from inception to November 3, 2023. We included studies with participants of any age diagnosed with depression or anxiety through professional consultation or validated psychometric instruments. Interventions had to be self-administered and based on NLP models, with passive or active comparators. Outcomes measured included depressive and anxiety symptom scores. We included randomized controlled trials and quasi-experimental studies but excluded narrative, systematic, and scoping reviews. Data extraction was performed independently by pairs of authors using a predefined form. Meta-analysis was conducted using standardized mean differences (SMDs) and random effects models to account for heterogeneity. RESULTS: In all, 21 articles were selected for review, of which 76% (16/21) were included in the meta-analysis for each outcome. Most of the studies (16/21, 76%) were recent (2020-2023), with interventions being mostly AI-based NLP models (11/21, 52%); most (19/21, 90%) delivered some form of therapy (primarily cognitive behavioral therapy: 16/19, 84%). The overall meta-analysis showed that self-administered interventions based on NLP models were significantly more effective in reducing both depressive (SMD 0.819, 95% CI 0.389-1.250; P<.001) and anxiety (SMD 0.272, 95% CI 0.116-0.428; P=.001) symptoms compared to various control conditions. Subgroup analysis indicated that AI-based NLP models were effective in reducing depressive symptoms (SMD 0.821, 95% CI 0.207-1.436; P<.001) compared to pooled control conditions. Rule-based NLP models showed effectiveness in reducing both depressive (SMD 0.854, 95% CI 0.172-1.537; P=.01) and anxiety (SMD 0.347, 95% CI 0.116-0.578; P=.003) symptoms. The meta-regression showed no significant association between participants' mean age and treatment outcomes (all P>.05). Although the findings were positive, the overall certainty of evidence was very low, mainly due to a high risk of bias, heterogeneity, and potential publication bias. CONCLUSIONS: Our findings support the effectiveness of self-administered NLP-based interventions in alleviating depressive and anxiety symptoms, highlighting their potential to increase accessibility to, and reduce costs in, mental health care. Although the results were encouraging, the certainty of evidence was low, underscoring the need for further high-quality randomized controlled trials and studies examining implementation and usability. These interventions could become valuable components of public health strategies to address mental health issues. TRIAL REGISTRATION: PROSPERO International Prospective Register of Systematic Reviews CRD42023472120; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023472120.
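The pooling step can be illustrated with a DerSimonian-Laird random-effects calculation, the standard way to combine SMDs under heterogeneity; the per-study effect sizes and variances below are illustrative, not the review's data.

```python
# Sketch: DerSimonian-Laird random-effects pooling of standardized mean
# differences (SMDs). Per-study inputs are illustrative placeholders.
import numpy as np

smd = np.array([0.9, 0.4, 1.2, 0.6, 0.8])      # per-study SMDs
var = np.array([0.05, 0.04, 0.10, 0.06, 0.05])  # per-study variances

w = 1 / var                                     # fixed-effect weights
q = np.sum(w * (smd - np.sum(w * smd) / np.sum(w)) ** 2)
df = len(smd) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance

w_re = 1 / (var + tau2)                         # random-effects weights
pooled = np.sum(w_re * smd) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled SMD = {pooled:.3f}, 95% CI = ({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")
```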
Subject(s)
Anxiety , Depression , Natural Language Processing , Humans , Depression/therapy , Depression/prevention & control , Anxiety/therapy , Anxiety/prevention & control , Self Care/methods
ABSTRACT
Emoticons have been considered pragmatic cues that enhance emotional expressivity during computer-mediated communication. Yet, it is unclear how emoticons are processed in ambiguous text-based communication when the emoticon's emotional valence and its context are incongruent. In this study, we investigated the electrophysiological correlates of contextual influence on the early emotional processing of emoticons during an emotional congruence judgment task. Participants were instructed to judge the congruence between a text message expressing an emotional situation (positive or negative) and a subsequent emoticon expressing positive or negative emotions. We analyzed early event-related potentials elicited by emoticons related to face processing (N170) and to emotional salience in visual perception (early posterior negativity, EPN). Our results show that accuracy and reaction times depend on the interaction between the emotional valence of the context and that of the emoticon. Negative emoticons elicited a larger N170, suggesting that the emotional information of the emoticon is integrated at the early stages of the perceptual process. During emoticon processing, a valence effect was observed, with enhanced EPN amplitudes in occipital areas for emoticons representing negative valences. Moreover, we observed a congruence effect at parieto-temporal sites within the same time window, with larger amplitudes for the congruent condition. We conclude that, similar to faces, emoticons are processed differently according to their emotional content and the context in which they are embedded. A congruent context might enhance the emotional salience of the emoticon (and therefore its emotional expression) during the early stages of processing.
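As a schematic of the component analysis, averaging epoched EEG and taking the mean amplitude in an N170 window can be done with plain NumPy; the sampling rate, window bounds, and data below are assumptions, not the study's parameters.

```python
# Sketch: trial-averaged ERP and mean amplitude in an assumed N170
# window. Epochs are synthetic (trials x samples, -200..598 ms).
import numpy as np

fs = 500                                                     # Hz, assumed sampling rate
epochs = np.random.default_rng(0).normal(0, 5, (120, 400))   # synthetic single trials
t = np.arange(-0.2, 0.6, 1 / fs)                             # time axis in seconds

erp = epochs.mean(axis=0)                                    # average across trials
n170_win = (t >= 0.15) & (t <= 0.19)                         # assumed 150-190 ms window
print(f"mean N170-window amplitude: {erp[n170_win].mean():.2f} uV")
```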
ABSTRACT
The structural health monitoring (SHM) of buildings provides relevant data for evaluating structural behavior over time, the efficiency of maintenance and strengthening, and post-earthquake conditions. This paper presents the design and implementation of a continuous SHM system based on dynamic properties, base accelerations, crack widths, out-of-plane rotations, and environmental data for the retrofitted church of Kuñotambo, a 17th-century adobe structure located in the Peruvian Andes. The system produces continuous hourly records. The organization, data collection, and processing of the SHM system follow different approaches and stages, concluding with the assessment of the structural and environmental conditions over time against predefined thresholds. The SHM system was implemented in May 2022 and is part of the Seismic Retrofitting Project of the Getty Conservation Institute. The initial results from the first twelve months of monitoring revealed seasonal fluctuations in crack widths, out-of-plane rotations, and natural frequencies, influenced by hygrothermal cycles, as well as an apparent positive trend, but more data are needed to confirm the nature of these changes. This study emphasizes the necessity of extended data collection to establish robust correlations and refine monitoring strategies, aiming to enhance the longevity and safety of historic adobe structures under seismic risk.
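The threshold-comparison stage lends itself to a simple sketch: compare each hourly record against predefined limits and flag exceedances. Column names and limits below are hypothetical, not the project's.

```python
# Sketch: flagging hourly SHM records that exceed predefined thresholds.
# Columns and limits are hypothetical placeholders.
import pandas as pd

records = pd.DataFrame({
    "crack_width_mm": [0.41, 0.44, 0.52, 0.39],
    "rotation_deg":   [0.10, 0.12, 0.18, 0.09],
})
thresholds = {"crack_width_mm": 0.50, "rotation_deg": 0.15}

alarms = records[(records["crack_width_mm"] > thresholds["crack_width_mm"]) |
                 (records["rotation_deg"] > thresholds["rotation_deg"])]
print(alarms)
```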
ABSTRACT
This article presents a comprehensive collection of formulas and calculations for hand-crafted feature extraction from condition monitoring signals. The documented features include 123 for the time domain and 46 for the frequency domain. Furthermore, a machine learning-based methodology is presented to evaluate the performance of the features in fault classification tasks using seven data sets from different rotating machines. The evaluation methodology uses seven ranking methods to select the best ten hand-crafted features per method for each database, which are subsequently evaluated by three types of classifiers. This process is applied exhaustively by evaluation groups, combining our databases with an external benchmark. A summary table of the classifiers' performance results is also presented, including the classification percentage and the number of features required to achieve that value. Through graphic resources, it has been possible to show the prevalence of certain features over others, how they are associated with the database, and the order of importance assigned by the ranking methods. Likewise, it has been possible to find which features have the highest appearance percentages for each database across all experiments. The results suggest that hand-crafted feature extraction is an effective technique with low computational cost and high interpretability for fault identification and diagnosis.
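A few of the catalogued time-domain features are easy to show concretely; the sketch below computes RMS, crest factor, and kurtosis on a synthetic signal, using the usual condition monitoring definitions rather than the article's exact formula set.

```python
# Sketch: three common time-domain hand-crafted features on a
# synthetic vibration signal.
import numpy as np
from scipy.stats import kurtosis

signal = np.random.default_rng(0).normal(0, 1, 4096)   # synthetic vibration signal

rms = np.sqrt(np.mean(signal ** 2))                    # root mean square
crest = np.max(np.abs(signal)) / rms                   # crest factor
kurt = kurtosis(signal, fisher=False)                  # plain fourth-moment kurtosis
print(f"RMS={rms:.3f}, crest factor={crest:.3f}, kurtosis={kurt:.3f}")
```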
ABSTRACT
Processing of berries usually degrades anthocyanin and non-anthocyanin phenolics and diminishes antioxidant activity. In Colombia, jelly produced from the fruit of Vaccinium meridionale Swartz is a popular product among consumers. The aim of this study was to determine the effect of jelly processing steps on bioactive components. Analysis of anthocyanins (ACNs) and non-anthocyanin phenolics was performed via HPLC-PDA. Antioxidant activity was assessed by the ORACFL method. The pulping step had the highest impact on ACNs, whose total content was significantly higher in the pomace (747.6 ± 59.2 mg cyanidin 3-glucoside (cyn 3-glu)/100 g) than in the pulp (102.7 ± 8.3 mg cyn 3-glu/100 g). Similarly, pulping caused a significant decrease in flavonols, procyanidins (PACs) and ORACFL values. Despite the effects of processing, Colombian bilberry jelly can be considered a good source of phenolic compounds with high antioxidant activity. The final concentration of ACNs, hydroxycinnamic acids (HCAs) and flavonols, as well as the ORACFL values in this product were comparable to those of fresh cranberry (Vaccinium oxycoccos) and black currant (Ribes nigrum). The results also suggest that the pomace of V. meridionale can be recovered as an excellent source of bioactive compounds.
ABSTRACT
Following consumer trends and market needs, the food industry has expanded the use of unconventional sources to obtain proteins. In parallel, 3D and 4D food printing have emerged with the potential to enhance food processing. While 3D and 4D printing technologies show promising prospects for improving the performance and applicability of unconventionally sourced proteins (USP) in food, this combination remains relatively unexplored. This review aims to provide an overview of the application of USP in 3D and 4D printing, focusing on their primary sources, composition, and rheological and technical-functional properties. The drawbacks, challenges, potentialities, and prospects of these technologies in food processing are also examined. This review underscores the current necessity for greater regulation of food products processed by 3D and 4D printing. The data presented here indicate that 3D and 4D printing represent viable, sustainable, and innovative alternatives for the food industry, emphasizing the potential for further exploration of 4D printing in food processing. Additional studies are warranted to explore their application with unconventional proteins.
Subject(s)
Food Handling , Printing, Three-Dimensional , Food Handling/methods , Rheology , Proteins , Food Industry
ABSTRACT
This article explores the impact and potential applications of large language models in Occupational Medicine. Large language models have the ability to provide support for medical decision-making, patient screening, summarization and creation of technical, scientific, and legal documents, training and education for doctors and occupational health teams, as well as patient education, potentially leading to lower costs, reduced time expenditure, and a lower incidence of human errors. Despite promising results and a wide range of applications, large language models also have significant limitations in terms of their accuracy, the risk of generating false information, and incorrect recommendations. Various ethical aspects that have not been well elucidated by the medical and academic communities should also be considered, and the lack of regulation by government entities can create areas of legal uncertainty regarding their use in Occupational Medicine and in the legal environment. Significant future improvements can be expected in these models in the coming years, and further studies on the applications of large language models in Occupational Medicine should be encouraged.
ABSTRACT
Sous vide meat is an emerging food category whose consumption has increased owing to greater convenience, sensory traits, acceptance by elderly consumers, and the use of low-cost cuts. However, the prolonged thermal treatment required to achieve the desired tenderness impacts energy consumption and triggers lipid oxidation, undesired off-flavors, and cooked meat profiles. Using a response surface methodology (RSM), this study evaluated the effects of the plant protease papain (0 to 20 mg/kg) and low-temperature sous vide cooking (SVC) time (1 to 8 h at 65°C) on technological characteristics associated with tenderness and on lipid oxidation in low-value marinated M. semitendinosus beefsteaks. Additionally, the sensory profiles of the pre-selected treatments were described using check-all-that-apply (CATA) questions and preference mapping. Shear force (WBsSF) was reduced with greater papain addition, whereas higher cooking losses (CL) were observed with longer SVC times. Both released total collagen and TBARS values increased with increasing papain concentrations and SVC times. Combining high levels of papain (>10 mg/kg) and SVC time (>6 h) resulted in lower WBsSF values (<20 N) but higher CL (>27%) and the CATA descriptors "aftertaste" and "mushy." The optimized conditions (14 mg/kg papain; 2 h SVC) also reduced WBsSF values (<26 N) with lower CL (<20%) and were most preferred and described as "juicy" and "tender" by consumers. These results suggest that combining mild SVC with papain may potentiate tenderness while jointly influencing juiciness and oxidation, representing a promising tool for reducing SVC time without compromising valued sous vide sensory traits.
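RSM studies of this kind typically fit a full second-order polynomial in the two factors; the sketch below does so by least squares on synthetic data spanning the stated papain and SVC-time ranges, so the coefficients are not the study's.

```python
# Sketch: second-order response-surface fit of shear force (WBsSF) as a
# function of papain level and SVC time. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
papain = rng.uniform(0, 20, 60)        # mg/kg, stated design range
time_h = rng.uniform(1, 8, 60)         # hours at 65 °C, stated design range
wbssf = (45 - 1.2 * papain - 2.5 * time_h + 0.03 * papain**2
         + 0.15 * time_h**2 + 0.02 * papain * time_h + rng.normal(0, 1.5, 60))

# Design matrix for the full quadratic model
X = np.column_stack([np.ones_like(papain), papain, time_h,
                     papain**2, time_h**2, papain * time_h])
coef, *_ = np.linalg.lstsq(X, wbssf, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```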
Subject(s)
Cooking , Papain , Taste , Cooking/methods , Animals , Cattle , Humans , Red Meat/analysis , Male , Meat/analysis , Female
ABSTRACT
With the growing concerns about the protection of ecosystem functions and services, governments have developed public policies and organizations have produced a vast volume of digital data freely available through their websites. In parallel, advances in data acquisition from remotely sensed sources and processing through geographic information systems (GIS) and statistical tools have allowed an unprecedented capacity to manage ecosystems efficiently. However, the real-world scenario in that regard remains paradoxically challenging. The reasons can be many and diverse, but a strong candidate is the limited engagement among interested parties, which hampers bringing all these assets into action. The aim of this study is to demonstrate that the management of ecosystem services can be significantly improved by integrating existing environmental policies with environmental big data and low-cost GIS and data processing tools. Using the Upper Rio das Velhas hydrographic basin, located in the state of Minas Gerais (Brazil), as an example, the study demonstrated how Principal Components Analysis based on a diversity of environmental variables assembled sub-basins into urban, agriculture, mining, and heterogeneous profiles, directing the management of ecosystem services to the most appropriate officially established conservation plans. The use of GIS tools, in turn, allowed narrowing the implementation of each plan to specific sub-basins. This optimized allocation of preferential management plans to priority areas was discussed for a number of conservation plans. A paradigmatic example was the so-called Conservation Use Potential (CUP), devoted to the protection of aquifer recharge (a provisioning service) and the control of water erosion (a regulating service), as well as to the allocation of uses as a function of soil capability (a supporting service). In all cases, the efficiency gains in readiness for plan implementation and economy of resources were projected to be noteworthy.
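A compact sketch of the PCA-based profiling described above, assuming standardized land-use shares per sub-basin; the variables, sub-basin names, and values are hypothetical.

```python
# Sketch: grouping sub-basins by environmental profile with PCA.
# Variables and values are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

subbasins = pd.DataFrame({
    "urban_pct":  [60, 5, 10, 30],
    "crop_pct":   [10, 70, 20, 25],
    "mining_pct": [5, 2, 55, 20],
    "forest_pct": [25, 23, 15, 25],
}, index=["SB1", "SB2", "SB3", "SB4"])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(subbasins))
print(pd.DataFrame(scores, index=subbasins.index, columns=["PC1", "PC2"]))
```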
Subject(s)
Conservation of Natural Resources , Ecosystem , Geographic Information Systems , Brazil , Environmental Policy
ABSTRACT
OBJECTIVES: This in vitro study aimed to verify the influence of different ambient light conditions on the accuracy and precision of models obtained from digital scans. METHODOLOGY: A luxmeter was used to measure the tested illuminances at the time of scanning: chair light/reflector, room light, and natural light. From the STL files, nine experimental groups were formed. RESULTS: For all nine combinations of the three intraoral scanners (IOS) and the three types of lighting, accuracy, as indicated by the ICC, was excellent; the measured values were not significantly influenced by the IOS brand (p = 0.994) or by the type of lighting (p = 0.996). For precision data, the GLM indicated a statistically significant interaction between IOS and lighting type. Under LS, precision was significantly higher with 3Shape® than with CS 3600 CareStream®, which in turn was significantly more precise than Virtuo Vivo™ Straumann®. CONCLUSIONS: The models obtained with the three IOS evaluated exhibited excellent accuracy under the different illuminances tested, and 3Shape® was the device that presented the best precision under the three illuminance conditions, specifically when using LC and LS.
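ICC-based agreement of this kind can be computed with the pingouin package; the sketch below assumes a long-format table of model measurements per scanner, with invented values.

```python
# Sketch: intraclass correlation coefficient (ICC) across scanners for
# repeated model measurements. Data layout and values are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "model":   ["m1", "m1", "m1", "m2", "m2", "m2", "m3", "m3", "m3"],
    "scanner": ["A", "B", "C"] * 3,
    "distance_mm": [10.01, 10.02, 9.99, 12.05, 12.04, 12.06, 8.40, 8.41, 8.39],
})
icc = pg.intraclass_corr(data=df, targets="model", raters="scanner",
                         ratings="distance_mm")
print(icc[["Type", "ICC", "CI95%"]])
```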