Results 1 - 20 of 1,373
1.
Stud Hist Philos Sci ; 108: 19-27, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357248

ABSTRACT

Gravitational redshift effects undoubtedly exist; moreover, the experimental setups which confirm the existence of these effects (the most famous of which is the Pound-Rebka experiment) are extremely well-known. Nonetheless, and perhaps surprisingly, there remains a great deal of confusion in the literature regarding what these experiments really establish. Our goal in the present article is to clarify these issues, in three concrete ways. First, although (i) Brown and Read (2016) are correct to point out that, given their sensitivity, the outcomes of experimental setups such as the original Pound-Rebka configuration can be accounted for using solely the machinery of accelerating frames in special relativity (barring some subtleties due to the Rindler spacetime necessary to model the effects rigorously), nevertheless (ii) an explanation of the results of more sensitive gravitational redshift experiments does in fact require more. Second, although typically this 'more' is understood as the invocation of spacetime curvature within the framework of general relativity, in light of the so-called 'geometric trinity' of gravitational theories, curvature is in fact not necessary to explain even these results. Thus (a) one can often explain the results of these experiments using only the resources of special relativity, and (b) even when one cannot, one need not invoke spacetime curvature. Third, while one might think that the absence of gravitational redshift effects would imply that spacetime is flat (indeed, Minkowskian), this can be called into question given the possibility of gravitational redshift effects being cancelled by charge in the context of the Reissner-Nordström metric. This argument is shown to be valid: both attractive forces and redshift effects can be effectively shielded (and can even be made repulsive or blueshifted, respectively) in the charged setting. Thus, it is not the case that the absence of gravitational redshift effects implies a Minkowskian spacetime setting.
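The third point can be made concrete with a short numerical sketch. Assuming geometrized units (G = c = 1) and a static emitter, the redshift to infinity in the Reissner-Nordström metric follows from the lapse function f(r) = 1 - 2M/r + Q²/r²; the function names and sample values below are illustrative:

```python
import math

def rn_lapse(r, M, Q):
    """f(r) = 1 - 2M/r + Q^2/r^2 for the Reissner-Nordström metric
    (geometrized units, G = c = 1)."""
    return 1.0 - 2.0 * M / r + Q**2 / r**2

def redshift_to_infinity(r_emit, M, Q):
    """1 + z = 1 / sqrt(f(r_emit)) for a static emitter; z < 0 means blueshift."""
    return 1.0 / math.sqrt(rn_lapse(r_emit, M, Q)) - 1.0

# The shift vanishes where f(r) = 1, i.e. Q^2 = 2 M r: charge can cancel
# the mass term, and for Q^2 > 2 M r the emitted light is blueshifted.
M, r = 1.0, 10.0
z_uncharged = redshift_to_infinity(r, M, 0.0)               # redshift, z > 0
z_balanced = redshift_to_infinity(r, M, math.sqrt(2 * M * r))  # exactly zero
```

For Q² between 0 and 2Mr the redshift is reduced but not eliminated, which is the sense in which charge "shields" the effect.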

2.
Med Phys ; 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353140

ABSTRACT

BACKGROUND: Cone beam computed tomography (CBCT) is a widely available modality, but its clinical utility has been limited by low detail conspicuity and quantitative accuracy. Convenient post-reconstruction denoising is subject to back-projected patterned residuals, while joint denoising-reconstruction is typically computationally expensive and complex. PURPOSE: In this study, we develop and evaluate a novel Metric-learning guided wavelet transform reconstruction (MEGATRON) approach to enhance image-domain quality with projection-domain processing. METHODS: Projection-domain processing has the benefit of being simple, efficient, and compatible with various reconstruction toolkits and vendor platforms. However, such methods typically show inferior performance in the final reconstructed image, because the denoising goals in the projection and image domains do not necessarily align. Motivated by these observations, this work aims to translate the demand for quality enhancement from the quantitative image domain to the more easily operable projection domain. Specifically, the proposed paradigm consists of a metric learning module and a denoising network module. Via metric learning, enhancement objectives on the wavelet-encoded sinogram domain data are defined to reflect post-reconstruction image discrepancy. The denoising network maps a measured cone-beam projection to its enhanced version, driven by the learnt objective. In doing so, the denoiser operates in the convenient sinogram-to-sinogram fashion but reflects improvement in the reconstructed image as the final goal. Implementation-wise, metric learning was formalized as optimizing the weighted fitting of wavelet subbands, and a res-Unet (a Unet structure with residual blocks) was used for denoising. To obtain a quantitative reference, cone-beam projections were simulated using the X-ray based Cancer Imaging Simulation Toolkit (XCIST). Both learning modules used a data set of 123 human thoraxes from the Open Source Imaging Consortium (OSIC) Pulmonary Fibrosis Progression challenge. Reconstructed CBCT thoracic images were compared against ground truth FB, and performance was assessed with root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS: MEGATRON achieved an RMSE (in HU), PSNR, and SSIM of 30.97 ± 4.25, 37.45 ± 1.78, and 93.23 ± 1.62, respectively. These values are on par with reported results from sophisticated physics-driven CBCT enhancement, demonstrating the promise and utility of the proposed MEGATRON method. CONCLUSION: We have demonstrated that incorporating the proposed metric learning into sinogram denoising introduces awareness of the reconstruction goal and improves final quantitative performance. The proposed approach is compatible with a wide range of denoiser network structures and reconstruction modules, to suit customized needs or further improve performance.
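The learnt objective itself is not reproduced in the abstract; the following is a minimal sketch of the underlying idea, a weighted fit of wavelet-subband discrepancies on sinogram data, using a hand-rolled one-level Haar transform. The function names and the caller-supplied weights are assumptions for illustration:

```python
import numpy as np

def haar_subbands(x):
    """One-level 2-D Haar transform of an even-sized array -> (LL, LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def weighted_subband_loss(sino_pred, sino_ref, weights):
    """Weighted sum of per-subband MSEs; in MEGATRON's spirit, `weights`
    would be fitted by the metric-learning module so that this sinogram
    loss tracks the image-domain reconstruction error."""
    bands_p, bands_r = haar_subbands(sino_pred), haar_subbands(sino_ref)
    return sum(w * np.mean((p - r) ** 2)
               for w, p, r in zip(weights, bands_p, bands_r))
```

A denoising network trained against this loss stays in the convenient sinogram-to-sinogram regime while the learned weights carry the image-domain objective.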

3.
Bull Math Biol ; 86(11): 132, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39352417

ABSTRACT

There is extensive evidence that network structure (e.g., air transport, rivers, or roads) may significantly enhance the spread of epidemics into the surrounding geographical area. A new compartmental modeling framework is proposed which couples well-mixed (ODE in time) population centers at the vertices, 1D travel routes on the graph's edges, and a 2D continuum containing the rest of the population to simulate how an infection spreads through a population. The edge equations are coupled to the vertex ODEs through junction conditions, while the domain equations are coupled to the edges through boundary conditions. A numerical method based on spatial finite differences for the edges and finite elements in the 2D domain is described to approximate the model, and numerical verification of the method is provided. The model is illustrated on two simple and one complex example geometries, and a parameter study example is performed. The observed solutions exhibit exponential decay after a certain time has passed, and the cumulative infected population over the vertices, edges, and domain tends to a constant in time but varying in space, i.e., a steady state solution.


Subjects
Communicable Diseases, Computer Simulation, Epidemics, Mathematical Concepts, Humans, Epidemics/statistics & numerical data, Communicable Diseases/epidemiology, Communicable Diseases/transmission, Epidemiological Models, Biological Models
4.
Int J MS Care ; 26(Q3): 247-253, 2024 May.
Article in English | MEDLINE | ID: mdl-39268507

ABSTRACT

BACKGROUND: Multiple sclerosis (MS) is a neurological condition leading to significant disability and challenges to quality of life. To slow progression and reduce relapses, it is critical to rapidly initiate disease-modifying therapy (DMT) after diagnosis. Patient demographics may play a role in timely DMT initiation. Financial barriers may also result in delays in DMT access. METHODS: This retrospective, single-center, cross-sectional study included patients seen at a neurology clinic at a large academic medical center for an initial evaluation of MS between January 1, 2022, and June 30, 2022. As an indicator of the quality of care, the primary study outcome was whether patients were offered DMT initiation at their first clinic visit. Secondary outcomes evaluated the time to DMT initiation, including differences in care based on demographic factors and financial coverage. RESULTS: Of the 49 eligible individuals studied, 45 (91.8%) were offered DMT at their initial MS visit. Descriptive statistics suggested that demographic factors did not affect whether DMT was offered. However, the majority of patients experienced access barriers relating to prior authorization requirements (80.0%) and/or the need for co-pay assistance (52.0%). CONCLUSIONS: DMT was appropriately offered to a majority of patients at their initial MS visit, regardless of demographic considerations. Instances in which DMT was not offered, as well as delays in initiation, were primarily due to the need for imaging and specialty referrals, as well as financial barriers. Medication assistance teams may play a crucial role in limiting the delays and financial hurdles associated with insurance coverage and co-pay assistance.

5.
Am J Lifestyle Med ; 18(4): 567-573, 2024.
Article in English | MEDLINE | ID: mdl-39262894

ABSTRACT

Objective: The objective of this expert consensus process was to define performance measures that can be used to document remission or long-term progress following lifestyle medicine (LM) treatment. Methods: Expert panel members with experience in intensive, therapeutic lifestyle change (ITLC) developed a list of performance measures for key disease states, using an established process for developing consensus statements adapted for the topic. Proposed performance measures were assessed for consensus using a modified Delphi process. Results: After a series of meetings and an iterative Delphi process of voting and revision, a final set of 32 performance measures achieved consensus. These were grouped in 10 domains of diseases, conditions, or risk factors, including (1) Cardiac function, (2) Cardiac risk factors, (3) Cardiac medications and procedures, (4) Patient-centered cardiac health, (5) Hypertension, (6) Type 2 diabetes and prediabetes, (7) Metabolic syndrome, (8) Inflammatory conditions, (9) Inflammatory condition patient-centered measures, and (10) Chronic kidney disease. Conclusion: These measures compose a set of performance standards that can be used to evaluate the effectiveness of LM treatment for these conditions.

6.
Heliyon ; 10(16): e36264, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253183

ABSTRACT

In the university laboratory environment, it is not uncommon for individual laboratory personnel to be inadequately aware of laboratory safety standards and to fail to wear protective equipment (helmets, goggles, masks) in accordance with the prescribed norms. Manual inspection is costly and prone to missed detections, so there is an urgent need for an efficient and intelligent detection technology. Video surveillance of laboratory protective equipment reveals that these items have the characteristics of small targets. In light of this, a laboratory protective equipment recognition method based on an improved YOLOv7 algorithm is proposed. The Global Attention Mechanism (GAM) is introduced into the Efficient Layer Aggregation Network (ELAN) structure to construct an ELAN-G module that takes both global and local features into account. The Normalized Gaussian Wasserstein Distance (NWD) metric is introduced to replace the Complete Intersection over Union (CIoU), which improves the network's ability to detect small protective-equipment targets in complex experimental scenarios. To evaluate the robustness of the proposed algorithm and to address the current lack of personal protective equipment (PPE) datasets, a multidimensional laboratory protective equipment dataset was constructed for the detection experiments. The experimental results demonstrated that the improved model achieved a mAP value of 84.2%, representing a 2.3% improvement over the original model, a 5% improvement in the detection rate, and a 2% improvement in the Micro-F1 score. Compared with prevailing algorithms, the accuracy of the proposed algorithm is markedly higher. The approach addresses the difficulty of detecting small protective-equipment targets in complex laboratory scenarios and plays a pivotal role in improving laboratory safety management systems.
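The NWD metric mentioned above has a simple closed form: each box is modeled as a 2-D Gaussian, and the 2-Wasserstein distance between the two Gaussians is mapped through an exponential. A minimal sketch (the constant C is dataset-dependent; the value used here is an assumption):

```python
import math

def nwd(box_a, box_b, C=12.8):
    """Normalized Gaussian Wasserstein Distance between two boxes given as
    (cx, cy, w, h). Each box is modeled as a 2-D Gaussian; the squared
    2-Wasserstein distance between those Gaussians has the closed form
    below, and NWD = exp(-W2 / C). C is a dataset-dependent constant
    (12.8 here is an illustrative assumption)."""
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + ((wa - wb) / 2.0) ** 2 + ((ha - hb) / 2.0) ** 2)
    return math.exp(-math.sqrt(w2_sq) / C)

# Identical boxes give NWD = 1; unlike IoU, the score decays smoothly with
# center offset even when small boxes no longer overlap at all, which is
# why it suits small-target detection.
```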

7.
Med Image Anal ; 99: 103343, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39265362

ABSTRACT

In computed tomography (CT) imaging, optimizing the balance between radiation dose and image quality is crucial due to the potentially harmful effects of radiation on patients. Although subjective assessments by radiologists are considered the gold standard in medical imaging, these evaluations can be time-consuming and costly. Thus, objective methods, such as the peak signal-to-noise ratio and structural similarity index measure, are often employed as alternatives. However, these metrics, initially developed for natural images, may not fully encapsulate the radiologists' assessment process. Consequently, interest in developing deep learning-based image quality assessment (IQA) methods that more closely align with radiologists' perceptions is growing. A significant barrier to this development has been the absence of open-source datasets and benchmark models specific to CT IQA. Addressing these challenges, we organized the Low-dose Computed Tomography Perceptual Image Quality Assessment Challenge in conjunction with the Medical Image Computing and Computer Assisted Intervention 2023. This event introduced the first open-source CT IQA dataset, consisting of 1,000 CT images of various quality, annotated with radiologists' assessment scores. As a benchmark, this challenge offers a comprehensive analysis of six submitted methods, providing valuable insight into their performance. This paper presents a summary of these methods and insights. This challenge underscores the potential for developing no-reference IQA methods that could exceed the capabilities of full-reference IQA methods, making a significant contribution to the research community with this novel dataset. The dataset is accessible at https://zenodo.org/records/7833096.
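The full-reference metrics the abstract contrasts with radiologists' scores can be computed in a few lines; a sketch (the single-window SSIM below is a simplification of the usual locally windowed version):

```python
import numpy as np

def psnr(ref, img, data_range):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range, K1=0.01, K2=0.03):
    """Single-window (global) SSIM: a simplification of the usual
    locally windowed SSIM, enough to show the structure of the formula."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * data_range) ** 2, (K2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

Both metrics need a reference image, which is exactly the limitation motivating the no-reference IQA methods the challenge explores.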

8.
J Forensic Leg Med ; 107: 102755, 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39293286

ABSTRACT

BACKGROUND: Forensic odontology involves the identification of individuals through dental records, making it a crucial tool in legal investigations. Non-metric dental traits (NMDT), which are variations in dental morphology, play a key role, as these inherited characteristics can help establish biological relationships or ancestry. Thus, we aimed to assess the frequency and variability of NMDT in the human dentition of four ethnically mixed populations in Uttar Pradesh. This study can aid future work by maintaining records of ethnic groups and their variability, which can be crucial for disaster victim management and forensic odontology. METHODS: The study was conducted on a total of 100 patients attending the OPD of Oral Medicine and Oral Pathology and Microbiology of King George's Medical University from January 2022 to July 2023. Impressions of both arches were made using irreversible hydrocolloid (alginate), and casts were examined under a stereomicroscope to assess 15 different morphological characteristics. RESULTS: NMDT such as winging, shoveling, double-shoveling, interruption groove, canine mesial ridge, hypocone, metacone, Carabelli's trait, peg-shaped incisors, peg-shaped molars, premolar lingual cusp variation, deflecting wrinkle, protostylid, metaconulid, and entoconulid were evaluated. The NMDT were evaluated in four ethnic groups (Nordic, Mediterranean, Oriental Mediterranean, and Protoaustraloid), among which several traits showed statistically significant variation across the population of Uttar Pradesh. CONCLUSION: In the sample studied, supernumerary traits such as metacone, protostylid, Carabelli's trait, metaconulid, premolar lingual cusp variation, and deflecting wrinkle were most frequent across the Nordic, Mediterranean, Oriental Mediterranean, and Protoaustraloid ethnic groups and showed a significant association with the Uttar Pradesh population.

9.
J Biomed Inform ; 157: 104722, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39244181

ABSTRACT

OBJECTIVE: Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures to improve patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit lamp camera, remote and underserved areas often lack this professional equipment. Smartphones, a widely available device, have recently been found to have potential in keratitis screening. However, given the limited data available from smartphones, employing traditional deep learning algorithms to construct a robust intelligent system presents a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model in the case of insufficient smartphone data by leveraging the prior knowledge acquired from slit-lamp photographs. METHODS: We developed and assessed CNCML based on 13,009 slit-lamp photographs and 4,075 smartphone photographs that were obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we used training sets of different sizes (0 to 20 photographs per class) from the HUAWEI smartphone to train CNCML. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets that were collected by two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. The accuracy and macro-average area under the curve (macro-AUC) were utilized to evaluate the performance of models. RESULTS: With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. 
The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models. CONCLUSIONS: CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
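CNCML's training procedure is not reproduced in the abstract; its classification head, as the name suggests, can be sketched as a cosine nearest-centroid rule over learned embeddings. Function names are illustrative:

```python
import numpy as np

def fit_centroids(embeddings, labels):
    """Mean embedding per class: the 'nearest centroid' part."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0)
                          for c in classes])
    return classes, centroids

def predict_cosine(centroids, classes, queries):
    """Assign each query to the class whose centroid has the highest
    cosine similarity with it: the 'cosine metric' part."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return classes[np.argmax(q @ c.T, axis=1)]
```

Because the centroids are simple class means, only a handful of labeled smartphone photographs per class are needed at adaptation time, matching the few-shot setting described above.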


Subjects
Deep Learning, Keratitis, Smartphone, Humans, Keratitis/diagnosis, Algorithms, Photography/methods, Mass Screening/methods, Mass Screening/instrumentation
10.
Math Mech Solids ; 29(10): 1935-1946, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39323615

ABSTRACT

This brief contribution provides an overview of the Hill-Ogden generalised strain tensors, and some considerations on their representation in generalised (curvilinear) coordinates, in a fully covariant formalism that is adaptable to a more general theory on Riemannian manifolds. These strains may be naturally defined with covariant components or naturally defined with contravariant components. Each of these two macro-families is best suited to a specific geometrical context.
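For readers unfamiliar with the family, one common convention for the Hill-Ogden (Seth-Hill) generalised strains is the following (scalings of the exponent differ between sources, so this should be checked against the paper):

```latex
% For the right stretch tensor U with spectral decomposition
% U = \sum_i \lambda_i \, u_i \otimes u_i :
E^{(m)} \;=\; \frac{1}{m}\left(U^{m} - I\right)
        \;=\; \sum_i \frac{\lambda_i^{m}-1}{m}\, u_i \otimes u_i
        \quad (m \neq 0),
\qquad
E^{(0)} \;=\; \ln U \;=\; \sum_i (\ln \lambda_i)\, u_i \otimes u_i .
% m = 2 gives the Green-Lagrange strain, m = 1 the Biot strain, and the
% limit m -> 0 the Hencky (logarithmic) strain; other sources scale the
% exponent as 2m.
```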

11.
Lipids Health Dis ; 23(1): 316, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39334349

ABSTRACT

BACKGROUND: Retention of apolipoprotein B (apoB)-containing lipoproteins within the arterial wall plays a major causal role in atherosclerotic cardiovascular disease (ASCVD). There is a single apoB molecule in all apoB-containing lipoproteins. Therefore, quantitation of apoB directly estimates the number of atherogenic particles in plasma. ApoB is the preferred measurement to refine the estimate of ASCVD risk. Low-density lipoprotein (LDL) particles are by far the most abundant apoB-containing particles. In patients with elevated lipoprotein(a) (Lp(a)), apoB may considerably underestimate risk because Mendelian randomization studies have shown that the atherogenicity of Lp(a) is approximately 7-fold greater than that of LDL on a per apoB particle basis. In subjects with increased Lp(a), the association between LDL-cholesterol and incident CHD (coronary heart disease) is increased, but the association between apoB and incident CHD is diminished or even lost. Thus, there is a need to understand the mechanisms of Lp(a), LDL-cholesterol and apoB-related CHD risk and to provide clinicians with a simple practical tool to address these complex and variable relationships. How can we understand a patient's overall lipid-driven atherogenic risk? What proportion of this risk does apoB capture? What proportion of this risk do Lp(a) particles carry? To answer these questions, we created a novel metric of atherogenic risk: risk-weighted apolipoprotein B. METHODS: In nmol/L: Risk-weighted apoB = apoB - Lp(a) + Lp(a) x 7 = apoB + Lp(a) x 6. Proportion of risk captured by apoB = apoB divided by risk-weighted apoB. Proportion of risk carried by Lp(a) = Lp(a) × 7 divided by risk-weighted apoB. RESULTS: Risk-weighted apoB agrees with risk estimation from large epidemiological studies and from several Mendelian randomization studies. CONCLUSIONS: ApoB considerably underestimates risk in individuals with high Lp(a) levels. 
In such individuals, the association between apoB and incident CHD is likewise diminished or even lost. These phenomena can be explained, and overcome, by risk-weighted apoB.
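The METHODS formulas translate directly into code; a minimal sketch using the paper's own definitions (all quantities in nmol/L):

```python
def risk_weighted_apob(apob_nmol_l, lpa_nmol_l):
    """Risk-weighted apoB (nmol/L) per the paper's definition:
    apoB - Lp(a) + 7 * Lp(a) = apoB + 6 * Lp(a)."""
    return apob_nmol_l + 6.0 * lpa_nmol_l

def risk_proportions(apob_nmol_l, lpa_nmol_l):
    """Proportion of risk captured by apoB and carried by Lp(a)."""
    rw = risk_weighted_apob(apob_nmol_l, lpa_nmol_l)
    return {"captured_by_apoB": apob_nmol_l / rw,
            "carried_by_Lpa": 7.0 * lpa_nmol_l / rw}

# e.g. apoB 100 nmol/L with Lp(a) 150 nmol/L: risk-weighted apoB is
# 100 + 900 = 1000 nmol/L, so measured apoB captures only 10% of the
# lipid-driven risk while Lp(a) particles carry 105% (the two overlap,
# since apoB also counts the Lp(a) particles).
```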


Subjects
Apolipoproteins B, Atherosclerosis, Cholesterol, LDL, Lipoprotein(a), Humans, Lipoprotein(a)/blood, Atherosclerosis/blood, Atherosclerosis/epidemiology, Apolipoproteins B/blood, Cholesterol, LDL/blood, Risk Factors, Apolipoprotein B-100
12.
Cancer Control ; 31: 10732748241279518, 2024.
Article in English | MEDLINE | ID: mdl-39222957

ABSTRACT

PURPOSE: Performance status (PS), an essential indicator of patients' functional abilities, is often documented in clinical notes of patients with cancer. The use of natural language processing (NLP) to extract PS from electronic medical records (EMRs) has shown promise in enhancing clinical decision-making, patient monitoring, and research studies. We designed and validated a multi-institution NLP pipeline to automatically extract performance status from free-text patient notes. PATIENTS AND METHODS: We collected data from 19,481 patients in the Harris Health System (HHS) and 333,862 patients from the Veterans Affairs Corporate Data Warehouse (VA-CDW) and randomly selected 400 patients from each data source to train and validate (50%) and test (50%) the proposed pipeline. We designed an NLP pipeline using an expert-derived rule-based approach in conjunction with extensive post-processing to solidify its proficiency. To demonstrate the pipeline's application, we tested the compliance of PS documentation suggested by the American Society of Clinical Oncology (ASCO) Quality Metric and investigated potential disparities in PS reporting for stage IV non-small cell lung cancer (NSCLC). We used a logistic regression test, considering patients in terms of race/ethnicity, conversing language, marital status, and gender. RESULTS: The test results on the HHS cohort showed 92% accuracy, and those on the VA data demonstrated 98.5% accuracy. For stage IV NSCLC patients, the proposed pipeline achieved an accuracy of 98.5%. Furthermore, our analysis revealed a documentation rate of over 85% for PS among NSCLC patients, surpassing the ASCO Quality Metric. No disparities were observed in the documentation of PS. CONCLUSION: Our proposed NLP pipeline shows promising results in extracting PS from free-text notes from various health institutions. It may be used in longitudinal cancer data registries.
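The published pipeline's rules and post-processing are far more extensive than can be shown here; the following is a hypothetical sketch of what the rule-based core of such an extractor looks like for ECOG-style scores (pattern and function names are illustrative assumptions):

```python
import re

# Hypothetical sketch: matches phrasings like "ECOG PS: 1" or
# "ECOG performance status of 2" and captures the 0-4 score.
ECOG_PATTERN = re.compile(
    r"\b(?:ECOG|Zubrod)\s*(?:performance\s+status|PS)?\s*"
    r"(?:of|is|=|:)?\s*([0-4])\b",
    re.IGNORECASE)

def extract_ecog(note_text):
    """Return all ECOG scores (0-4) mentioned in a free-text note."""
    return [int(m) for m in ECOG_PATTERN.findall(note_text)]
```

A production pipeline layers negation handling, section filtering, and date resolution on top of rules like this, which is where most of the accuracy reported above comes from.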


Subjects
Electronic Health Records, Natural Language Processing, Humans, Electronic Health Records/statistics & numerical data, Male, Female, Lung Neoplasms/therapy, Carcinoma, Non-Small-Cell Lung/therapy, Middle Aged, Neoplasms/therapy
13.
Phys Imaging Radiat Oncol ; 31: 100622, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39220115

ABSTRACT

Background and purpose: In sliding-window intensity-modulated radiotherapy, increased plan modulation often leads to increased plan complexity and dose uncertainty. Dose calculation and/or measurement checks are usually adopted for pre-treatment verification. This study aims to evaluate the relationship among plan complexities, calculated doses, and measured doses. Materials and methods: A total of 53 plan complexity metrics (PCMs) were selected, emphasizing small-field characteristics and leaf speed/acceleration. Doses were retrieved from two beam-matched treatment devices. The intended dose was computed employing the Anisotropic Analytical Algorithm and validated through Monte Carlo (MC) and Collapsed Cone Convolution (CCC) algorithms. To measure the delivered dose, 3D diode arrays of various geometries, encompassing helical, cross, and oblique-cross shapes, were utilized. Their interrelation was assessed via Spearman correlation analysis and principal component linear regression (PCR). Results: The correlation coefficients between calculation-based (CQA) and measurement-based verification quality assurance (MQA) were below 0.53. Most PCMs showed a higher correlation r_PCM-QA with CQA (max: 0.84) than with MQA (max: 0.65). The proportion of r_PCM-QA ≥ 0.5 was largest in the pelvis compared with the head-and-neck and chest-and-abdomen, and the highest r_PCM-QA occurred at 1%/1 mm. Some modulation indices for MLC speed and acceleration were significantly correlated with both CQA and MQA. PCR's coefficients of determination (R²) indicated that PCMs had higher accuracy in predicting CQA (max: 0.75) than MQA (max: 0.42). Conclusions: CQA and MQA demonstrated a weak correlation. Compared to MQA, CQA exhibited a stronger correlation with PCMs. Certain PCMs related to MLC movement effectively indicated variations in both quality assurances.
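Principal component regression of QA results on complexity metrics can be sketched generically (this is not the study's code, and the synthetic usage in the test is illustrative):

```python
import numpy as np

def pcr_fit_predict(X, y, n_components):
    """Principal component regression: project the centered predictor
    matrix onto its top principal components, then least-squares fit the
    response on those scores. Returns in-sample predictions and R^2."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # principal directions come from the SVD of the centered design matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    coef, *_ = np.linalg.lstsq(scores, yc, rcond=None)
    y_hat = scores @ coef + y.mean()
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return y_hat, r2
```

Because the 53 PCMs are strongly inter-correlated, regressing on a few principal components rather than the raw metrics keeps the R² estimates stable.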

14.
Radiol Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39225919

ABSTRACT

Artificial intelligence (AI) has numerous applications in radiology. Clinical research studies to evaluate the AI models are also diverse. Consequently, diverse outcome metrics and measures are employed in the clinical evaluation of AI, presenting a challenge for clinical radiologists. This review aims to provide conceptually intuitive explanations of the outcome metrics and measures that are most frequently used in clinical research, specifically tailored for clinicians. While we briefly discuss performance metrics for AI models in binary classification, detection, or segmentation tasks, our primary focus is on less frequently addressed topics in published literature. These include metrics and measures for evaluating multiclass classification; those for evaluating generative AI models, such as models used in image generation or modification and large language models; and outcome measures beyond performance metrics, including patient-centered outcome measures. Our explanations aim to guide clinicians in the appropriate use of these metrics and measures.
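As an example of the multiclass metrics the review covers, macro-averaged F1 can be computed from a confusion matrix as follows (a generic sketch):

```python
import numpy as np

def macro_f1(conf):
    """Macro-averaged F1 from a confusion matrix conf[true, pred]:
    per-class precision, recall, and F1 are computed, then averaged
    with equal weight per class regardless of class prevalence."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    precision = tp / np.maximum(conf.sum(axis=0), 1e-12)  # column sums
    recall = tp / np.maximum(conf.sum(axis=1), 1e-12)     # row sums
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return f1.mean()
```

The equal per-class weighting is exactly what distinguishes macro from micro averaging and why the two can diverge sharply on imbalanced clinical datasets.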

15.
Neural Netw ; 180: 106589, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39217864

ABSTRACT

Thin pancake-like neuronal networks cultured on top of a planar microelectrode array have been extensively tried out in neuroengineering, as a substrate for the mobile robot's control unit, i.e., as a cyborg's brain. Most of these attempts failed due to intricate self-organizing dynamics in the neuronal systems. In particular, the networks may exhibit an emergent spatial map of steady nucleation sites ("n-sites") of spontaneous population spikes. Being unpredictable and independent of the surface electrode locations, the n-sites drastically change local ability of the network to generate spikes. Here, using a spiking neuronal network model with generative spatially-embedded connectome, we systematically show in simulations that the number, location, and relative activity of spontaneously formed n-sites ("the vitals") crucially depend on the samplings of three distributions: (1) the network distribution of neuronal excitability, (2) the distribution of connections between neurons of the network, and (3) the distribution of maximal amplitudes of a single synaptic current pulse. Moreover, blocking the dynamics of a small fraction (about 4%) of non-pacemaker neurons having the highest excitability was enough to completely suppress the occurrence of population spikes and their n-sites. This key result is explained theoretically. Remarkably, the n-sites occur taking into account only short-term synaptic plasticity, i.e., without a Hebbian-type plasticity. As the spiking network model used in this study is strictly deterministic, all simulation results can be accurately reproduced. The model, which has already demonstrated a very high richness-to-complexity ratio, can also be directly extended into the three-dimensional case, e.g., for targeting peculiarities of spiking dynamics in cerebral (or brain) organoids. We recommend the model as an excellent illustrative tool for teaching network-level computational neuroscience, complementing a few benchmark models.
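The deterministic spiking model described above is far richer than can be shown here; a toy leaky integrate-and-fire population with a distribution of excitabilities illustrates the role such distributions play. All parameters and the coupling scheme are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Toy leaky integrate-and-fire population with heterogeneous excitability
# (illustrative parameters only; no spatial embedding or synaptic plasticity).
rng = np.random.default_rng(1)
N, T, dt = 200, 2000, 0.5                  # neurons, steps, ms per step
tau, v_th, v_reset = 20.0, 1.0, 0.0
drive = rng.uniform(0.7, 1.3, N)           # distribution of excitability
W = (rng.random((N, N)) < 0.05) * 0.03     # sparse random coupling
v = np.zeros(N)
spike_counts = np.zeros(N, dtype=int)

for _ in range(T):
    spikes = v >= v_th
    spike_counts += spikes
    v[spikes] = v_reset
    v += dt / tau * (drive - v) + W @ spikes

# Neurons with drive > v_th are intrinsic pacemakers; the rest fire only
# through coupling, echoing the abstract's point that a small
# high-excitability fraction controls population activity.
```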

16.
Prev Vet Med ; 233: 106331, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39243438

ABSTRACT

The adoption of standardized metrics and indicators of antimicrobial use (AMU) in the food animal industry is essential for the success of programs aimed at promoting the responsible and judicious use of antimicrobials in this activity. The objective of this study was to introduce the use of standardized AMU metrics and indicators to quantify the use of florfenicol and oxytetracycline in the Chilean salmon industry, and in this way evaluate the feasibility of their use given the type of health and production information currently managed by the National Fisheries and Aquaculture Service (SERNAPESCA), the Chilean agency responsible for regulating aquaculture in Chile. The data available from SERNAPESCA allowed the construction and evaluation of the most data-demanding AMU metrics and indicators. Consequently, the use of florfenicol and oxytetracycline administered by oral and parenteral routes was quantified using the treatment incidence based on both animal defined daily dose (TIDDDvet) and animal used daily dose (TIUDDA). To that end, the study included information from 1320 closed production cycles from farms rearing Atlantic salmon, coho salmon and rainbow trout that were active between January 2017 and December 2021. By applying standardized AMU metrics and indicators, we were able to determine that the median of TIDDDvet for florfenicol was 75.1 (80 % range, 20.0-158.0) DDDvet per ton-year at risk for oral procedures and 0.36 (80 % range, 0.07-1.19) DDDvet per ton-year at risk for parenteral procedures. For oxytetracycline, the median TIDDDvet was 3.09 (80 % range, 0.74-42.8) and 0.47 (80 % range, 0.09-1.68) DDDvet per ton-year at risk for oral and parenteral procedures, respectively. The median TIUDDA for treatments with florfenicol was 45.6 (80 % range, 10.9-96.5) UDDA per ton-year at risk for oral treatments and 0.28 (80 % range, 0.05-0.80) UDDA per ton-year at risk for parenteral treatments. 
For oxytetracycline, the median TIUDDA was 2.63 (80 % range, 0.61-28.2) UDDA per ton-year at risk for oral treatments and 0.41 (80 % range, 0.08-1.29) UDDA per ton-year at risk for parenteral treatments. This study demonstrates that it is feasible to move from traditional AMU metrics and indicators to standardized ones in the Chilean salmon industry. This is possible because the competent authority requires salmon farms to report detailed health and production information at a high frequency. The use of standardized AMU metrics and indicators can help the authority to have a more comprehensive view of the antimicrobial use in the Chilean salmon industry.
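The standardized indicator can be sketched as follows; published TIDDDvet definitions differ in normalization details, so the convention below (one "ton-day dose" = DDDvet × 1000 kg) is an assumption for illustration, not the study's exact formula:

```python
def ti_dddvet(total_mg, dddvet_mg_per_kg_day, ton_years_at_risk):
    """Illustrative sketch of a DDDvet-based treatment incidence: the
    defined daily dose for one ton of fish is dddvet_mg_per_kg_day * 1000
    mg, i.e. the dose needed to treat 1000 kg for one day. The amount
    used is converted into such ton-day doses and normalized per ton-year
    of biomass at risk, yielding 'DDDvet per ton-year at risk'."""
    mg_per_ton_day = dddvet_mg_per_kg_day * 1000.0
    ton_day_doses = total_mg / mg_per_ton_day
    return ton_day_doses / ton_years_at_risk
```

The key property of any such indicator is that it normalizes by a standardized dose rather than by raw mass, so that drugs with very different potencies (such as florfenicol and oxytetracycline) become comparable.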

17.
Narra J ; 4(2): e917, 2024 08.
Artigo em Inglês | MEDLINE | ID: mdl-39280327

ABSTRACT

Since its public release on November 30, 2022, ChatGPT has shown promising potential in diverse healthcare applications despite ethical challenges, privacy issues, and possible biases. The aim of this study was to identify and assess the most influential publications in the field of ChatGPT utility in healthcare using bibliometric analysis. The study employed an advanced search of three databases (Scopus, Web of Science, and Google Scholar) to identify ChatGPT-related records in healthcare education, research, and practice between November 27 and 30, 2023. The ranking was based on the retrieved citation count in each database. The additional alternative metrics evaluated included (1) Semantic Scholar highly influential citations, (2) PlumX captures, (3) PlumX mentions, (4) PlumX social media, and (5) Altmetric Attention Scores (AASs). A total of 22 unique records published in 17 different scientific journals from 14 different publishers were identified in the three databases. Only two publications were in the top 10 list across all three databases. Various publication types were identified, the most common being editorial/commentary publications (n=8/22, 36.4%). Nine of the 22 records had corresponding authors affiliated with institutions in the United States (40.9%). The range of citation counts varied per database, with the highest counts in Google Scholar (121-1019), followed by Scopus (88-242) and Web of Science (23-171). Google Scholar citations correlated significantly with the following metrics: Semantic Scholar highly influential citations (Spearman's correlation coefficient ρ=0.840, p<0.001), PlumX captures (ρ=0.831, p<0.001), PlumX mentions (ρ=0.609, p=0.004), and AASs (ρ=0.542, p=0.009). In conclusion, despite several acknowledged limitations, this study showed the evolving landscape of ChatGPT utility in healthcare.
There is an urgent need for collaborative initiatives by all stakeholders involved to establish guidelines for ethical, transparent, and responsible use of ChatGPT in healthcare. The study revealed a correlation between citations and alternative metrics, highlighting the usefulness of alternative metrics as a supplement for gauging publication impact, even in a rapidly growing research field.
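The Spearman coefficients reported above measure rank agreement between citation counts and altmetrics. For tie-free data, Spearman's ρ reduces to a classical shortcut formula, sketched below; the citation and capture counts are hypothetical, not figures from the study, and real citation data with ties would need rank averaging (e.g. `scipy.stats.spearmanr`).

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free data.

    Uses the classical shortcut rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i.
    """
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n * n - 1))

# Hypothetical citation counts vs. PlumX capture counts for five records;
# one swapped pair keeps the correlation high but below 1.
rho = spearman_rho([1019, 500, 300, 200, 121], [900, 450, 150, 310, 100])
```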


Subjects
Bibliometrics, Humans, Social Media, Anniversaries and Special Events
18.
Sci Total Environ ; 952: 175803, 2024 Nov 20.
Article in English | MEDLINE | ID: mdl-39197779

ABSTRACT

Restoration measures have been widely implemented in wetland ecosystems globally to bend the curve of biodiversity loss and restore associated ecological functions. However, assessments of the effectiveness of wetland restoration have predominantly focused on the recovery of taxonomic composition, and few studies have assessed these efforts from a food-web perspective. Here, we used a stable isotope approach to investigate trophic structure in natural and restored wetlands in Northeast China. The investigated consumers, including zooplankton, macroinvertebrates, and fish, exhibited lower δ15N and higher δ13C values in restored wetlands than in natural wetlands. Natural wetlands exhibited higher trophic positions and a wider range of trophic levels than restored wetlands. Primary consumers in natural wetlands relied more on particulate organic matter (POM, 42.9 % ± 24.1 %), whereas those in restored wetlands depended more on substrate organic matter (SOM, 42.3 % ± 23.9 %). Compared to natural wetlands, isotopic richness was significantly lower in restored wetlands, with smaller standard ellipse areas (SEAs) for basal resources, aquatic invertebrates, and fish. Our findings reveal that the recovery of trophic structure in restored wetlands lags behind that of taxonomic composition. Future restoration efforts should prioritize enhancing habitat heterogeneity and resource availability to support a diverse range of trophic levels. Monitoring trophic dynamics is essential for assessing the progress of wetland restoration and should be integrated into monitoring schemes.
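The trophic positions compared above are typically estimated from δ15N values via a standard baseline formula. The sketch below shows the general Post-style approach; the 3.4 ‰ per-level enrichment, the baseline trophic level of 2, and the input values are common assumptions of ours, not parameters stated in the abstract.

```python
def trophic_position(d15n_consumer, d15n_baseline, lam=2.0, enrichment=3.4):
    """Trophic position from nitrogen stable isotopes.

    TP = lam + (d15N_consumer - d15N_baseline) / Delta15N, where the baseline
    organism is assigned trophic level `lam` (2 for a primary consumer) and
    Delta15N is the assumed per-level enrichment in permil.
    """
    return lam + (d15n_consumer - d15n_baseline) / enrichment

# A consumer enriched by 6.8 permil over a primary-consumer baseline
# sits two trophic levels above it.
tp = trophic_position(d15n_consumer=13.6, d15n_baseline=6.8)
```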


Subjects
Environmental Monitoring, Food Chain, Invertebrates, Wetlands, China, Animals, Invertebrates/physiology, Biodiversity, Fishes, Zooplankton, Conservation of Natural Resources/methods, Environmental Restoration and Remediation/methods
19.
Sci Rep ; 14(1): 18694, 2024 08 12.
Article in English | MEDLINE | ID: mdl-39134599

ABSTRACT

Guaifenesin (GUA) was determined in dosage forms and plasma using two methods. The spectrofluorimetric method relies on measuring native fluorescence intensity at 302 nm upon excitation at 223 nm. The method was validated according to ICH and FDA guidelines. A concentration range of 0.1-1.1 µg/mL was used, with limit of detection (LOD) and limit of quantification (LOQ) values of 0.03 and 0.08 µg/mL, respectively. This method was used to measure GUA in tablets and plasma, with recoveries of 100.44% ± 0.037 and 101.03% ± 0.751, respectively. Furthermore, multivariate chemometric-assisted spectrophotometric methods were used for the determination of GUA, paracetamol (PARA), oxomemazine (OXO), and sodium benzoate (SB) in their laboratory mixtures. Concentration ranges of 2.0-10.0, 4.0-16.0, 2.0-10.0, and 3.0-10.0 µg/mL were used for OXO, GUA, PARA, and SB, respectively. LOD and LOQ values were 0.33, 0.68, 0.28, and 0.29 µg/mL, and 1.00, 2.06, 0.84, and 0.87 µg/mL for PARA, GUA, OXO, and SB, respectively. For the suppository application, the partial least squares (PLS) model gave recoveries of 98.49% ± 0.5, 98.51% ± 0.64, 100.21% ± 0.36, and 98.13% ± 0.51, while the multivariate curve resolution-alternating least squares (MCR-ALS) model gave recoveries of 101.39 ± 0.45, 99.19 ± 0.2, 100.24 ± 0.12, and 98.61 ± 0.32 for OXO, GUA, PARA, and SB, respectively. The Analytical Eco-Scale and the Analytical Greenness Assessment were used to assess the greenness of both techniques.
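The LOD and LOQ values reported above follow the standard ICH relations based on the calibration curve. The sketch below shows that calculation; the σ and slope values are illustrative, not the study's measured calibration data.

```python
def ich_lod_loq(sigma, slope):
    """ICH Q2-style detection and quantification limits from a calibration line.

    LOD = 3.3 * sigma / S and LOQ = 10 * sigma / S, where sigma is the standard
    deviation of the response (e.g. of blanks or regression residuals) and S is
    the slope of the calibration curve, so both limits come out in the
    concentration units of the calibration.
    """
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Illustrative calibration: sigma = 0.9 response units, slope = 100 per ug/mL
lod, loq = ich_lod_loq(sigma=0.9, slope=100.0)
```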


Subjects
Guaifenesin, Limit of Detection, Fluorescence Spectrometry, Guaifenesin/analysis, Guaifenesin/administration & dosage, Humans, Fluorescence Spectrometry/methods, Tablets, Green Chemistry/methods
20.
Heliyon ; 10(14): e33962, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39108853

ABSTRACT

We discuss the existence and uniqueness of a fixed point for a self-mapping satisfying a (ϕ̇, η̇)-generalized contractive condition involving altering distance functions with rational terms in an ordered b-metric space. We also discuss whether two self-maps satisfying the same contraction condition can have coincidence and coupled coincidence points. The results are supported by a number of numerical examples and an application to a nonlinear quadratic integral equation.
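The (ϕ̇, η̇)-generalized condition studied here is far more general than the classical Banach contraction, but the underlying mechanism is the same: Picard iteration of a contractive self-map converges to the unique fixed point. A minimal sketch of that classical case (our simplification, not the paper's condition):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = f(x_n).

    By the Banach fixed-point theorem, this converges to the unique fixed
    point whenever f is a contraction on a complete metric space containing
    the iterates.
    """
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos is a contraction near its fixed point (|sin x| < 1 there),
# so iteration converges to the Dottie number ~0.739085.
root = fixed_point(math.cos, 1.0)
```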
