Results 1 - 20 of 105
1.
Heliyon ; 10(9): e29506, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38698983

ABSTRACT

Public transportation plays a critical role in meeting transportation demands, particularly in densely populated areas. The COVID-19 pandemic highlighted the importance of public health measures, including the need to prevent the spread of the virus on public transport. Here, the spread of the virus on a passenger ship is studied using Computational Fluid Dynamics (CFD) modeling and Monte Carlo simulation. A particular focus was the context of Bangladesh, a populous maritime nation in South Asia, where a significant proportion of the population uses passenger ships to meet transportation demands. A turbulence model is used to simulate the airflow pattern and determine the contamination zone. The parameters under investigation include voyage duration, number of passengers on board, social distance, and the effect of surgical masks. This study shows that the transmission rate of SARS-CoV-2 infection on public transport, such as passenger ships, is not necessarily directly proportional to voyage duration or the number of passengers on board. The model can potentially be applied to other modes of transportation, including public buses and airplanes, and implementing it may help to monitor and address health risks effectively in public transport networks.
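The abstract does not state the dose-response formulation, but a common way to couple CFD-derived ventilation estimates with Monte Carlo sampling is a Wells-Riley-style exposure model. The sketch below is illustrative only: all parameter distributions (quanta emission, air changes, voyage duration, cabin volume, mask effect) are assumed values, not the study's.

```python
import math
import random

def infection_probability(quanta_rate, ventilation_ach, volume_m3,
                          breathing_rate, duration_h):
    """Wells-Riley style risk: P = 1 - exp(-quanta inhaled)."""
    conc = quanta_rate / (ventilation_ach * volume_m3)  # steady-state quanta/m^3
    dose = conc * breathing_rate * duration_h
    return 1.0 - math.exp(-dose)

def monte_carlo_risk(n_trials=10_000, seed=1):
    """Average infection risk over sampled voyage scenarios (toy parameters)."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n_trials):
        q = rng.lognormvariate(math.log(20), 0.5)  # quanta/h per infector (assumed)
        ach = rng.uniform(2, 8)                    # air changes per hour (assumed)
        dur = rng.uniform(0.5, 6)                  # voyage duration in hours
        masked = rng.random() < 0.5                # half of infectors masked
        q_eff = q * (0.5 if masked else 1.0)       # assumed 50% source reduction
        risks.append(infection_probability(q_eff, ach, 500.0, 0.5, dur))
    return sum(risks) / n_trials

print(round(monte_carlo_risk(), 4))
```

Because risk saturates through the exponential and ventilation varies per scenario, the mean risk does not grow linearly with duration, consistent with the abstract's observation.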

2.
Rev. int. med. cienc. act. fis. deporte ; 24(95): 1-23, mar.-2024. graf, tab
Article in English | IBECS | ID: ibc-ADZ-313

ABSTRACT

The CBA is a sports event that allows fans to enjoy themselves and players to perform at their best, and traditional Chinese cultural values have a profound influence on it. This paper takes 100 sets of historical rating data for the fourteen teams in the CBA league as its basis. First, the rating data are processed and Excel formulas are used to compute each team's mean, range, and variance. A SAS normality test then shows that, except for a few strongly deviating values, the historical rating data follow a normal distribution. Outliers are screened with an outlier-detection algorithm, confidence intervals are compared, and hypothesis tests are carried out to explore, objectively and scientifically, each team's probability of winning the CBA championship. The championship probabilities of the fourteen teams are compared and the top four teams in the league are predicted, keeping the prediction results as reasonable as possible. The analytic hierarchy process is used to qualitatively assess the level of each team, cluster analysis is used to compare the data, and, in combination with trends in the development of basketball worldwide, multiple regression and SPSS are used to analyze the factors behind team performance. This deeper analysis of the league suggests more scientific ways to improve a team's probability of winning the championship and to promote the further development of basketball. (AU)


Subject(s)
Humans , Confidence Intervals , Hypothesis Testing , Forecasting , Research Support as Topic , Basketball
3.
Front Digit Health ; 6: 1322555, 2024.
Article in English | MEDLINE | ID: mdl-38370362

ABSTRACT

Introduction: Individuals in the midst of a mental health crisis frequently exhibit instability and face an elevated risk of recurring crises in the subsequent weeks, which underscores the importance of timely intervention in mental healthcare. This work presents a data-driven method to infer the mental state of a patient during the weeks following a mental health crisis by leveraging their historical data. Additionally, we propose a policy that determines how long a patient should be closely monitored after a mental health crisis before being considered stable. Methods: We model the patient's mental state as a hidden Markov process, partially observed through mental health crisis events. We introduce a closed-form solution that leverages the model parameters to optimally estimate the risk of future mental health crises. Our policy determines that a patient should be closely monitored when their estimated risk of crisis exceeds a predefined threshold. The method's performance is evaluated using both simulated data and a real-world dataset comprising 162 anonymized psychiatric patients. Results: In the simulations, 96.2% of the patients identified by the policy were in an unstable state, with an F1 score of 0.74. In the real-world dataset, the policy yielded an F1 score of 0.79, with a sensitivity of 79.8% and a specificity of 88.9%. Under this policy, 67.3% of the patients should undergo close monitoring for one week, 21.6% for two weeks or more, and 11.1% do not need close monitoring. Discussion: The simulation results provide compelling evidence that the method is effective under the specified assumptions. When applied to actual psychiatric patients, the proposed policy showed significant potential for providing an individualized assessment of the duration of close, automatic monitoring required after a mental health crisis to reduce relapse risk.
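The paper's closed-form estimator is not reproduced in the abstract, but the flavor of the approach can be sketched with a two-state hidden Markov model and forward filtering: crisis events update a belief over stable/unstable states, and monitoring continues while the predicted crisis risk exceeds a threshold. All transition and emission probabilities below are assumed values, not the authors' fitted parameters.

```python
import numpy as np

# Daily transition matrix over hidden states [stable, unstable] (assumed)
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
# Probability of observing a crisis event in each hidden state (assumed)
p_crisis = np.array([0.01, 0.30])

def filtered_unstable_prob(observations, prior=(0.5, 0.5)):
    """Forward-filter P(unstable | history) from a binary crisis series."""
    belief = np.array(prior, dtype=float)
    for obs in observations:
        belief = belief @ A                       # predict one day ahead
        like = p_crisis if obs else (1 - p_crisis)
        belief = belief * like                    # update on the observation
        belief /= belief.sum()
    return belief[1]

def needs_monitoring(observations, threshold=0.2):
    """Flag a patient whose predicted next-day crisis risk exceeds the threshold."""
    p_unstable = filtered_unstable_prob(observations)
    belief = np.array([1 - p_unstable, p_unstable])
    risk = (belief @ A) @ p_crisis                # P(crisis tomorrow | history)
    return bool(risk > threshold)

history = [1, 0, 0, 1, 0, 0, 0]   # crisis events over the last 7 days
print(filtered_unstable_prob(history), needs_monitoring(history))
```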

4.
Biotechnol Bioeng ; 121(1): 228-237, 2024 01.
Article in English | MEDLINE | ID: mdl-37902718

ABSTRACT

Improving bioprocess efficiency is important to reduce the current costs of biologics on the market, bring them to market faster, and improve the environmental footprint. Process intensification efforts have historically focused on the main stage, while intensification of the pre-stages has started to gain attention only in the past decade. Performing bioprocess pre-stages in perfusion mode is one of the most efficient ways to achieve higher viable cell densities than traditional batch methods. While perfusion-mode operation reaches higher viable cell densities, it also consumes a large amount of medium, making it cost-intensive. The change of perfusion rate during a process (the perfusion profile) determines how much medium is consumed, so running a process under optimal conditions is key to reducing medium consumption. However, the perfusion profile is often selected empirically, without a full understanding of the bioprocess dynamics, which hinders potential process improvements and cost reductions. In this study, we propose a process modeling approach to identify the optimal perfusion profile during bioprocess pre-stages. The developed process model was used internally during process development. We could reduce the perfused medium volume by 25%-45% (project-dependent), while keeping the difference in final cell density within 5%-10% of the original settings. Additionally, the model helps to reduce the experimental workload by 30%-70% and to predict an optimal perfusion profile when process conditions need to be changed (e.g., a higher seeding density, or a change of operating mode from batch to perfusion). This study demonstrates the potential of process modeling as a powerful tool for optimizing bioprocess pre-stages, thereby guiding process development, improving overall bioprocess efficiency, and reducing operational costs, while strongly reducing the need for wet-lab experiments.


Subject(s)
Bioreactors , Perfusion , Cell Count
5.
Comput Biol Med ; 168: 107753, 2024 01.
Article in English | MEDLINE | ID: mdl-38039889

ABSTRACT

BACKGROUND: Trans-acting factors are of special importance in transcription regulation: they are a group of proteins that can directly or indirectly recognize or bind to the 8-12 bp core sequence of cis-acting elements and regulate the transcription efficiency of target genes. Progressive developments in high-throughput chromatin capture technology (e.g., Hi-C) enable the identification of chromatin-interacting sequence groups in which trans-acting DNA motif groups can be discovered. The difficulty of the problem lies in the combinatorial nature of DNA sequence pattern matching and its underlying sequence-pattern search space. METHOD: Here, we propose MotifHub for trans-acting DNA motif group discovery on grouped sequences. Specifically, the main approach is probabilistic modeling that accommodates the stochastic nature of DNA motif patterns. RESULTS: Based on this modeling, we develop global sampling techniques based on EM and Gibbs sampling to address the global optimization challenge of model fitting with latent variables. The results show that our proposed approaches demonstrate promising performance with linear time complexity. CONCLUSION: MotifHub is a novel algorithm for identifying both DNA co-binding motif groups and trans-acting TFs. Our study paves the way for identifying hub TFs of stem cell development (OCT4 and SOX2) and determining potential therapeutic targets of prostate cancer (FOXA1 and MYC). To ensure scientific reproducibility and long-term impact, its matrix-algebra-optimized source code is released at http://bioinfo.cs.cityu.edu.hk/MotifHub.
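MotifHub's full model (motif groups over grouped sequences) is more elaborate than can be shown here, but the Gibbs-sampling idea the abstract mentions can be illustrated with the classic single-motif site sampler: hold one sequence out, build a position weight matrix from the others' current motif positions, and resample the held-out position. The sequences, motif width, and iteration count below are toy values.

```python
import random

def gibbs_motif_search(seqs, w, iters=2000, seed=0):
    """Gibbs site sampler for one shared length-w motif (toy sketch)."""
    rng = random.Random(seed)
    alphabet = "ACGT"
    pos = [rng.randrange(len(s) - w + 1) for s in seqs]

    def profile(exclude):
        # Pseudocount-smoothed PWM from all sequences except the held-out one
        counts = [{a: 1.0 for a in alphabet} for _ in range(w)]
        for i, s in enumerate(seqs):
            if i == exclude:
                continue
            for j in range(w):
                counts[j][s[pos[i] + j]] += 1.0
        total = len(seqs) - 1 + len(alphabet)
        return [{a: c[a] / total for a in alphabet} for c in counts]

    for _ in range(iters):
        i = rng.randrange(len(seqs))
        pwm = profile(i)
        s = seqs[i]
        weights = []
        for start in range(len(s) - w + 1):
            p = 1.0
            for j in range(w):
                p *= pwm[j][s[start + j]]   # window likelihood under the PWM
            weights.append(p)
        pos[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return pos

seqs = ["TTTTTACGTAGGG", "CCCACGTAGTTTT", "GGGGTACGTAGCC"]  # planted ACGTAG motif
print(gibbs_motif_search(seqs, w=6))
```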


Subject(s)
Algorithms , Software , Nucleotide Motifs/genetics , Reproducibility of Results , Chromatin/genetics
6.
BMC Ecol Evol ; 23(1): 76, 2023 12 14.
Article in English | MEDLINE | ID: mdl-38097959

ABSTRACT

BACKGROUND: Gene duplication is an important process in evolution. What causes some genes to be retained after duplication and others to be lost is not well understood. The most prevalent theory is the gene duplicability hypothesis: something about a gene's function and number of interacting partners (number of subunits in a protein complex, etc.) determines whether copies have more opportunity to be retained over long evolutionary periods. Some genes are also more susceptible to dosage balance effects following whole genome duplication (WGD) events, making them more likely to be retained for longer periods of time. One would expect these processes affecting the retention of duplicate copies to affect the conditional probability ratio after consecutive whole genome duplication events. The ratio calculated here is the probability that a gene is retained after a second whole genome duplication event (WGD2), given that it was retained after the first event (WGD1), divided by the probability that a gene is retained after WGD2, given that it was lost after WGD1. RESULTS: Since duplicate gene retention is a time-heterogeneous process, the time between the events (t1) and the time since the most recent event (t2) are relevant factors in calculating the expectation for observation in any genome. Here, we use a survival analysis framework to predict the probability ratio for genomes with different values of t1 and t2 under the gene duplicability hypothesis, in which some genes are more susceptible to selectable functional shifts, some are more susceptible to dosage compensation, and others merely drift. We also predict the probability ratio for different values of t1 and t2 under the mutational opportunity hypothesis, in which the probability of retention for certain genes changes in subsequent events depending upon how they were previously retained. These models are nested, such that the mutational opportunity model encompasses the gene duplicability model with duplicability shifting over time. We present a formalization of the gene duplicability and mutational opportunity hypotheses to characterize their evolutionary dynamics and explanatory power in a recently developed statistical framework. CONCLUSIONS: This work presents the expectations of the gene duplicability and mutational opportunity hypotheses over time under different sets of assumptions, enabling formal testing of the processes leading to duplicate gene retention.
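The conditional probability ratio described above can be made concrete with a toy version of the gene duplicability hypothesis: a mixture of gene categories whose duplicates are lost at different exponential hazards (all fractions and rates below are invented). Retention after WGD1 enriches the low-loss categories, so the ratio comes out above 1.

```python
import math

# Hypothetical gene categories with different duplicate-loss hazards (per unit time)
categories = [
    {"name": "dosage-balanced", "frac": 0.2, "hazard": 0.05},
    {"name": "neofunctionalizing", "frac": 0.3, "hazard": 0.30},
    {"name": "drifting", "frac": 0.5, "hazard": 1.00},
]

def retention(hazard, t):
    """Exponential survival: probability a duplicate is still retained after time t."""
    return math.exp(-hazard * t)

def conditional_ratio(t1, t2):
    """P(kept after WGD2 | kept after WGD1) / P(kept after WGD2 | lost after WGD1).

    Assumes retention across the two events is independent given the category.
    """
    kept_joint = sum(c["frac"] * retention(c["hazard"], t1) * retention(c["hazard"], t2)
                     for c in categories)
    p_kept1 = sum(c["frac"] * retention(c["hazard"], t1) for c in categories)
    lost_joint = sum(c["frac"] * (1 - retention(c["hazard"], t1)) * retention(c["hazard"], t2)
                     for c in categories)
    return (kept_joint / p_kept1) / (lost_joint / (1 - p_kept1))

print(conditional_ratio(t1=10.0, t2=5.0))
```

Under the mutational opportunity hypothesis the per-category hazards would themselves change after WGD1, which this sketch does not model.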


Subject(s)
Genes, Duplicate , Motivation , Genes, Duplicate/genetics , Genome , Gene Duplication
7.
bioRxiv ; 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-38014338

ABSTRACT

Characterizing cell-cell communication and tracking its variability over time is essential for understanding the coordination of biological processes mediating normal development, progression of disease, or responses to perturbations such as therapies. Existing tools lack the ability to capture time-dependent intercellular interactions, such as those influenced by therapy, and primarily rely on existing databases compiled from limited contexts. We present DIISCO, a Bayesian framework for characterizing the temporal dynamics of cellular interactions using single-cell RNA-sequencing data from multiple time points. Our method uses structured Gaussian process regression to unveil time-resolved interactions among diverse cell types according to their co-evolution and incorporates prior knowledge of receptor-ligand complexes. We show the interpretability of DIISCO in simulated data and new data collected from CAR-T cells co-cultured with lymphoma cells, demonstrating its potential to uncover dynamic cell-cell crosstalk.

8.
World Allergy Organ J ; 16(9): 100813, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37811397

ABSTRACT

Background: Food allergy (FA) has become a major public health concern affecting millions of children and adults worldwide. In Tunisia, published data on FA are scarce. Methods: This study was intended to fill that gap and estimate the frequency of self-reported allergy to different foods in the Sfax region of Tunisia. One hundred twenty-five (125) children (56% male, 1-17 years old) and 306 adults (17% male, 18-70 years old) were interviewed using a bilingual questionnaire. Results: The number of self-reported food allergens in this sample was 105; the allergens clustered into 8 food groups: fruits, seafood, eggs, milk and dairy, cereals, nuts, vegetables, and peanuts. Cutaneous reactions were the most frequent symptoms in both children and adults. About 40% of children and 30% of adults had a family history of FA. About 81% of adults and 38% of children were allergic to at least 1 non-food allergen. The most prevalent food allergen group was fruits in both adults and children, followed by seafood. Most food allergies were mutually exclusive, and 90% of individuals had a single FA. The relationships among self-declared FAs were modeled with a Bayesian network graphical model in order to estimate the conditional probability of each FA when another FA is present. Conclusions: Our findings suggest that the prevalence of self-reported FA in Tunisia depends on dietary habits and food availability, since the most frequent allergens come from foods that are highly consumed by the Tunisian population.

9.
Front Psychol ; 14: 1148275, 2023.
Article in English | MEDLINE | ID: mdl-37771804

ABSTRACT

Introduction: We present a cross-linguistic experimental study that explores the exhaustivity properties of questions embedded under wissen/to know and korrekt vorhersagen/to correctly predict in German and English. While past theoretical literature has held that such embedded questions should only be interpreted as strongly exhaustive (SE), recent experimental findings suggest that an intermediate exhaustive (IE) interpretation is also available and plausible. Methods: Participants were confronted with a decision problem involving the different exhaustive readings and received a financial incentive based on their performance. We employed Bayesian analysis to create probabilistic models of participants' beliefs, linking their responses to readings based on utility maximization in simple decision problems. Results: For wissen/to know, we found that the SE reading was most probable in both languages, in line with the early theoretical literature; however, we also found evidence of IE readings. For korrekt vorhersagen in German, the IE reading was most probable, whereas for the English phrase "to correctly predict," a preference for the SE reading was observed. Discussion: This cross-linguistic variation correlates with independent corpus data, indicating that German vorhersagen and English to predict are not lexically equivalent. By including an explicit pragmatic component, our study complements previous work that has focused solely on the principled semantic availability of the given readings.

10.
Genome Biol ; 24(1): 212, 2023 09 20.
Article in English | MEDLINE | ID: mdl-37730638

ABSTRACT

BACKGROUND: Single-cell sequencing provides detailed insights into biological processes, including cell differentiation and identity. While providing deep, cell-specific information, the method suffers from technical constraints, most notably a limited number of expressed genes per cell, which leads to suboptimal clustering and cell type identification. RESULTS: Here, we present DISCERN, a novel deep generative network that precisely reconstructs missing single-cell gene expression using a reference dataset. DISCERN outperforms competing algorithms in expression inference, resulting in greatly improved cell clustering, cell type and activity detection, and insights into the cellular regulation of disease. We show that DISCERN is robust against technical differences between batches while preserving biological differences between them, a common failure mode of imputation and batch correction algorithms. We use DISCERN to detect two unseen COVID-19-associated T cell types, cytotoxic CD4+ and CD8+ Tc2 T helper cells, with a potential role in adverse disease outcomes. We utilize T cell fraction information from patient blood to classify mild or severe COVID-19 with an AUROC of 80%, which can serve as a biomarker of disease stage. DISCERN can be easily integrated into existing single-cell sequencing workflows. CONCLUSIONS: Thus, DISCERN is a flexible tool for reconstructing missing single-cell gene expression using a reference dataset and can easily be applied to a variety of data sets, yielding novel insights into, e.g., disease mechanisms.


Subject(s)
COVID-19 , Humans , COVID-19/genetics , Algorithms , Cell Cycle , Cell Differentiation , Cluster Analysis
11.
Methods Mol Biol ; 2685: 307-328, 2023.
Article in English | MEDLINE | ID: mdl-37439990

ABSTRACT

LRmix Studio performs statistical analyses on forensic casework samples by calculating a likelihood ratio (LR) following a semi-continuous, unrestricted approach. The software uses a basic probabilistic model to compare two alternative hypotheses regarding the evidence profile, each of which may include known and/or unknown contributors, for a maximum of a 4-person mixture. The model also incorporates multiple drop-out probability values, a drop-in probability, a correction factor for population substructure, assumed-contributor inclusion, and the inclusion of an unknown relative in the defense hypothesis. A range of plausible drop-out probability values can be calculated for the various contributors and hypotheses based on a Monte Carlo probability method and included in the likelihood ratio calculation. The software also includes several ways to test the validity and robustness of the probabilistic model. A sensitivity analysis can be performed by calculating likelihood ratios for the given profile against a range of drop-out values. Additionally, a non-contributor test can be performed on the crime scene sample and the chosen LR parameters to test the robustness of the model; this provides a point of comparison between the likelihood ratio generated for the person of interest (POI) and those of "random man" profiles generated from uploaded allelic frequencies. Finally, the analysis can be printed in a well-structured and user-friendly report that includes all analysis parameters. In this chapter, the reader will learn the steps to calculate a likelihood ratio using the semi-continuous software LRmix Studio. Additional tools supplied with the software are also explained and demonstrated.
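As an illustration of the semi-continuous idea (not LRmix Studio's exact implementation), the sketch below computes a single-locus LR for a single contributor: the prosecution hypothesis puts the POI as the contributor, while the defense hypothesis sums over unknown genotypes at Hardy-Weinberg proportions. Drop-out/drop-in rates and allele frequencies are toy values, and population substructure correction is omitted.

```python
from itertools import product

freqs = {"10": 0.20, "11": 0.30, "12": 0.25, "13": 0.25}  # toy allele frequencies

def p_evidence_given_genotype(evidence, genotype, d=0.1, c=0.05):
    """Semi-continuous P(E | single-contributor genotype) with drop-out/drop-in."""
    total = 0.0
    for dropped in product([False, True], repeat=2):
        p = 1.0
        shown = set()
        for allele, gone in zip(genotype, dropped):
            p *= d if gone else (1 - d)
            if not gone:
                shown.add(allele)
        if not shown <= set(evidence):
            continue                     # a shown allele missing from E: impossible
        extras = set(evidence) - shown   # unexplained evidence alleles need drop-in
        for a in extras:
            p *= c * freqs[a]
        if not extras:
            p *= 1 - c                   # no drop-in occurred
        total += p
    return total

def likelihood_ratio(evidence, poi, d=0.1, c=0.05):
    """LR = P(E | POI is the contributor) / P(E | unknown contributor)."""
    hp = p_evidence_given_genotype(evidence, poi, d, c)
    alleles = list(freqs)
    hd = 0.0
    for i, a in enumerate(alleles):      # sum over unknown genotypes (HWE)
        for b in alleles[i:]:
            gp = freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]
            hd += gp * p_evidence_given_genotype(evidence, (a, b), d, c)
    return hp / hd

print(likelihood_ratio(evidence={"11", "12"}, poi=("11", "12")))
```

An LR above 1 supports the prosecution hypothesis; a sensitivity analysis in the LRmix sense would repeat this calculation across a grid of drop-out values `d`.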


Subject(s)
DNA Fingerprinting , DNA , Male , Humans , Likelihood Functions , DNA Fingerprinting/methods , DNA/analysis , Microsatellite Repeats , Models, Statistical , Software
12.
New Phytol ; 240(3): 918-927, 2023 11.
Article in English | MEDLINE | ID: mdl-37337836
13.
Undersea Hyperb Med ; 50(2): 67-83, 2023.
Article in English | MEDLINE | ID: mdl-37302072

ABSTRACT

The Swedish Armed Forces (SwAF) air dive tables are under revision. Currently, the air dive table from the U.S. Navy (USN) Diving Manual (DM) Rev. 6 is used with an msw-to-fsw conversion. Since 2017, the USN has been diving according to USN DM Rev. 7, which incorporates updated air dive tables derived from the Thalmann Exponential Linear Decompression Algorithm (EL-DCM) with VVAL79 parameters. The SwAF decided to replicate and analyze the USN table development methodology before revising their current tables, with the ambition of finding a table that matches the desired risk of decompression sickness. New compartmental parameters for the EL-DCM algorithm, called SWEN21B, were developed by applying maximum likelihood methods to 2,953 scientifically controlled direct-ascent air dives with known decompression sickness (DCS) outcomes. The targeted probability of DCS for direct-ascent air dives was ≤1% overall and ≤1‰ for neurological DCS (CNS-DCS). One hundred fifty-four wet validation dives were performed with air between 18 and 57 msw. Both direct-ascent and decompression-stop dives were conducted, resulting in two cases of joint-pain DCS (18 msw/59 minutes), one case of leg-numbness CNS-DCS (51 msw/10 minutes with decompression stop), and nine marginal DCS cases, such as rashes and itching. A total of three DCS incidents, including one CNS-DCS, yields a predicted risk level (95% confidence interval) of 0.4-5.6% for DCS and 0.0-3.6% for CNS-DCS. Two of the three divers with DCS had a patent foramen ovale. The SWEN21 table is recommended to the SwAF for air diving, since the validation dive results suggest it is within the desired risk levels for DCS and CNS-DCS.
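The reported 0.4-5.6% interval for three DCS cases in 154 validation dives is consistent with an exact (Clopper-Pearson) binomial confidence interval, which needs nothing beyond the binomial CDF. This sketch computes it by bisection; it is a generic statistical check, not the Navy's or SwAF's actual tooling.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI, found by bisection on the binomial tail probabilities."""
    def solve(f):
        lo, hi = 0.0, 1.0
        for _ in range(60):                 # bisect to ~1e-18 precision
            mid = (lo + hi) / 2
            if f(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) > alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) < alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(3, 154)            # 3 DCS cases in 154 validation dives
print(f"{lo:.1%} - {hi:.1%}")
```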


Subject(s)
Decompression Sickness , Diving , Humans , Diving/adverse effects , Decompression Sickness/etiology , Sweden , Decompression/methods , Algorithms
14.
Appl Meas Educ ; 36(1): 80-98, 2023.
Article in English | MEDLINE | ID: mdl-37223404

ABSTRACT

Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly account for guessing, knowledge and blunder using eight assessments (>9,000 responses) from an undergraduate biotechnology curriculum. A Bayesian implementation of the models, aimed at assessing their robustness to prior beliefs in examinee knowledge, showed that explicit estimators of knowledge are markedly sensitive to prior beliefs with scores as sole input. To overcome this limitation, we examined self-ranked confidence as a proxy knowledge indicator. For our test set, three levels of confidence resolved test performance. Responses rated as least confident were correct more frequently than expected from random selection, reflecting partial knowledge, but were balanced by blunder among the most confident responses. By translating evidence-based guessing and blunder rates to pass marks that statistically qualify a desired level of examinee knowledge, our approach finds practical utility in test analysis and design.
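A minimal version of such a model treats each response as known-then-possibly-blundered or unknown-then-guessed, so P(correct) = k(1-b) + (1-k)g, which can be inverted to estimate knowledge from a score. The guessing rate follows from the number of options; the blunder rate below is an assumed value, not one estimated in the study, and this is a simpler model than the Bayesian treatment the abstract describes.

```python
def p_correct(k, g, b):
    """P(correct) given knowledge k, guessing success rate g, blunder rate b."""
    return k * (1 - b) + (1 - k) * g

def knowledge_from_score(score, g, b):
    """Invert the model: point estimate of knowledge from an observed score."""
    k = (score - g) / ((1 - b) - g)
    return min(1.0, max(0.0, k))   # clamp to a valid probability

# 5-option items: g = 0.2; assume a 5% blunder rate on known items
score = 0.68                        # observed fraction of correct responses
k_hat = knowledge_from_score(score, g=0.2, b=0.05)
print(round(k_hat, 3))
```

A pass mark "statistically qualifying" a target knowledge level follows by running `p_correct` forward: for a required k, the corresponding expected score is the threshold.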

15.
Front Artif Intell ; 6: 1097891, 2023.
Article in English | MEDLINE | ID: mdl-37091302

ABSTRACT

Modeling has actively tried to take the human out of the loop, originally for objectivity and recently also for automation. We argue that an unnecessary side effect has been that modeling workflows and machine learning pipelines have become restricted to only well-specified problems. Putting the humans back into the models would enable modeling a broader set of problems, through iterative modeling processes in which AI can offer collaborative assistance. However, this requires advances in how we scope our modeling problems, and in the user models. In this perspective article, we characterize the required user models and the challenges ahead for realizing this vision, which would enable new interactive modeling workflows, and human-centric or human-compatible machine learning pipelines.

16.
Sci Total Environ ; 881: 163496, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37062312

ABSTRACT

Bisphenol A (BPA) is a chemical with large-scale applications in the manufacturing of industrial products. Concerns have been raised regarding human exposure to BPA, and dietary consumption is the main route of exposure. BPA is recognised as an endocrine disruptor with multiple adverse effects on the reproductive, immune, and nervous systems. This study conducted a probabilistic risk assessment to evaluate the human health risk based on raw concentration data (N = 1266) for BPA in non-canned meat and meat products purchased from supermarkets and local butchers in Dublin and the surrounding area. The mean exposure levels for BPA in non-canned meat and meat products, fresh meat, and processed meat products among children were 0.019, 0.0022, and 0.015 µg (kg bw)-1 day-1, respectively. Simulated human exposures to BPA were therefore far below the current temporary tolerable daily intake (t-TDI) of 4 µg (kg bw)-1 day-1 recommended by the EFSA. Recently, however, the EFSA proposed a draft TDI of 0.04 ng (kg bw)-1 day-1 to replace the current t-TDI. Hence, our results indicate a potential health concern, as the estimated exposure levels (5th-95th percentile) were below the current t-TDI but above the draft TDI. Further investigation into the source of BPA contamination in processed meat products is highly recommended. The research presented here will inform the public, meat producers and processors, and policymakers about potential exposure to BPA.
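A probabilistic dietary exposure assessment of this kind typically Monte Carlo-samples exposure = concentration × intake / body weight and compares percentiles against the health-based guidance values. Every distribution below is invented for illustration; only the two EFSA thresholds are taken from the abstract.

```python
import math
import random

T_TDI = 4.0            # current temporary TDI, µg per kg bw per day
DRAFT_TDI = 0.04e-3    # proposed draft TDI of 0.04 ng/(kg bw·day), in µg

def simulate_exposure(n=100_000, seed=7):
    """Sample per-person-day BPA exposure (µg per kg bw per day), toy inputs."""
    rng = random.Random(seed)
    exposures = []
    for _ in range(n):
        conc = rng.lognormvariate(math.log(5.0), 1.0)  # µg BPA / kg meat (assumed)
        intake = rng.uniform(0.02, 0.15)               # kg meat per day (assumed)
        bw = max(rng.gauss(25.0, 5.0), 10.0)           # child body weight, kg
        exposures.append(conc * intake / bw)
    return exposures

samples = sorted(simulate_exposure())
p95 = samples[int(0.95 * len(samples))]
print(p95 < T_TDI, p95 > DRAFT_TDI)   # below the t-TDI but above the draft TDI
```

With these toy inputs the 95th percentile reproduces the abstract's qualitative finding: comfortably under the 4 µg t-TDI yet orders of magnitude above the draft TDI.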


Subject(s)
Meat Products , Child , Humans , Meat/analysis , Diet , Benzhydryl Compounds/analysis , Risk Assessment
17.
Front Vet Sci ; 10: 1111140, 2023.
Article in English | MEDLINE | ID: mdl-36960143

ABSTRACT

Locomotor kinematics have been challenging inputs for automated diagnostic screening of livestock. Locomotion is a highly variable behavior, influenced by subject characteristics (e.g., body mass, size, age, disease). We assemble a set of methods from different scientific disciplines into an automatic, high-throughput workflow that can disentangle behavioral complexity and generate precise individual indicators of non-normal behavior for application in diagnostics and research. For this study, piglets (Sus domesticus) were filmed from a lateral perspective during their first 10 h of life, an age at which maturation is quick and body mass and size have major consequences for survival. We then apply deep learning methods for point digitization, calculate joint angle profiles, and apply information-preserving transformations to retrieve a multivariate kinematic data set. We train probabilistic models to infer subject characteristics from kinematics. Model accuracy was validated on strides from piglets of normal birth weight (i.e., the category the model was trained on), yet the models infer the body mass and size of low birth weight (LBW) piglets (which were left out of training; out-of-sample inference) to be "normal." The age of some (but not all) low birth weight individuals was underestimated, indicating developmental delay. Such individuals could be identified automatically, inspected, and treated accordingly. This workflow has potential for automatic, precise screening in livestock management.

18.
Proc Natl Acad Sci U S A ; 120(7): e2218909120, 2023 02 14.
Article in English | MEDLINE | ID: mdl-36757892

ABSTRACT

An effective evasion strategy allows prey to survive encounters with predators. Prey are generally thought to escape in a direction that is either random or serves to maximize the minimum distance from the predator. Here, we introduce a comprehensive approach to determine the most likely evasion strategy among multiple hypotheses and the role of biomechanical constraints on the escape response of prey fish. Through a consideration of six strategies with sensorimotor noise and previous kinematic measurements, our analysis shows that zebrafish larvae generally escape in a direction orthogonal to the predator's heading. By sensing only the predator's heading, this orthogonal strategy maximizes the distance from fast-moving predators, and, when operating within the biomechanical constraints of the escape response, it provides the best predictions of prey behavior among all alternatives. This work demonstrates a framework for resolving the strategic basis of evasion in predator-prey interactions, which could be applied to a broad diversity of animals.


Subject(s)
Predatory Behavior , Zebrafish , Animals , Zebrafish/physiology , Larva/physiology , Predatory Behavior/physiology , Escape Reaction , Biomechanical Phenomena
19.
Ultrasound Med Biol ; 49(3): 677-698, 2023 03.
Article in English | MEDLINE | ID: mdl-36635192

ABSTRACT

Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions. Conventionally, reconstruction algorithms have been derived from physical principles. These algorithms rely on assumptions and approximations of the underlying measurement model, limiting image quality in settings where these assumptions break down. Conversely, more sophisticated solutions based on statistical modeling or careful parameter tuning or derived from increased model complexity can be sensitive to different environments. Recently, deep learning-based methods, which are optimized in a data-driven fashion, have gained popularity. These model-agnostic techniques often rely on generic model structures and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning and exploiting domain knowledge. These model-based solutions yield high robustness and require fewer parameters and training data than conventional neural networks. In this work we provide an overview of these techniques from the recent literature and discuss a wide variety of ultrasound applications. We aim to inspire the reader to perform further research in this area and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on model-based deep learning techniques for medical ultrasound.


Subject(s)
Deep Learning , Neural Networks, Computer , Ultrasonography , Algorithms , Radiography , Image Processing, Computer-Assisted/methods
20.
Drug Chem Toxicol ; 46(3): 423-429, 2023 May.
Article in English | MEDLINE | ID: mdl-35266432

ABSTRACT

Tea is consumed widely around the world owing to its refreshing taste and potential health benefits. However, drinking tea is considered a major route of dietary aluminum exposure in areas where tea consumption is relatively high. To assess the health risk associated with drinking tea, the aluminum contamination level was determined in 81 tea samples, and the transfer rate of aluminum during tea brewing was investigated. Then, based on site-specific exposure parameters such as consumption data and body weight for six different subpopulations in Fujian, the exposure risks were estimated using a probabilistic approach. The results demonstrate that the aluminum contents of green tea, white tea, oolong tea, and black tea were significantly different according to one-way ANOVA (p < 0.05). The transfer rates of aluminum were 32.6%, 31.6%, 26.3%, and 14% for white tea, black tea, oolong tea, and green tea, respectively. With respect to the oral reference dose, the exposure of inhabitants of Fujian to aluminum through drinking tea is under control (even at the 99th percentile).


Subject(s)
Aluminum , Camellia sinensis , Tea , Body Weight , Povidone/analysis