ABSTRACT
Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.
Subjects
Histological Techniques, Microscopy, Animals, Flow Cytometry, Computer-Assisted Image Processing
ABSTRACT
ABSTRACT
Purpose: To compare the refractive prediction error of the Hill-radial basis function (Hill-RBF) 3.0 formula with those of 3 conventional formulas and 11 combination methods in eyes with short axial lengths. Methods: The refractive prediction error was calculated using 4 formulas (Hoffer Q, SRK-T, Haigis, and Hill-RBF) and 11 combination methods (averages of two or more formulas). The absolute error was determined, and the proportion of eyes within 0.25-diopter (D) increments of absolute error was analyzed. Furthermore, the intraclass correlation coefficient of each method was computed to evaluate the agreement between the target refractive error and the postoperative spherical equivalent. Results: This study included 87 eyes. The Hoffer Q formula exhibited the highest myopic prediction errors, followed by SRK-T, Hill-RBF, and Haigis. Among all the methods, the Haigis and Hill-RBF combination yielded a mean refractive prediction error closest to zero. The SRK-T and Hill-RBF combination showed the lowest mean absolute error, whereas the Hoffer Q, SRK-T, and Haigis combination had the lowest median absolute error. Hill-RBF exhibited the highest intraclass correlation coefficient, whereas SRK-T showed the lowest. Haigis and Hill-RBF, as well as the combination of both, demonstrated the lowest proportion of refractive surprises (absolute error >1.00 D). Among the individual formulas, Hill-RBF had the highest success rate (absolute error ≤0.50 D). Among all the methods, the SRK-T and Hill-RBF combination exhibited the highest success rate. Conclusions: Hill-RBF showed accuracy comparable to or surpassing that of conventional formulas in eyes with short axial lengths. Combining multiple formulas in cataract surgery planning for eyes with short axial lengths may help reduce the incidence of refractive surprises.
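The combination methods evaluated above amount to averaging the refractions predicted by the individual formulas and recomputing the error statistics. A minimal sketch, with hypothetical values standing in for real biometry data:

```python
# A minimal sketch, with hypothetical values in place of real biometry data,
# of the error metrics used above. A "combination method" is simply the mean
# of the refractions predicted by the individual formulas.
import numpy as np

postop_se   = np.array([-0.25, 0.50, -0.75, 0.125])  # postoperative spherical equivalent (D)
pred_haigis = np.array([-0.50, 0.25, -0.50, 0.000])  # predicted refraction, Haigis (D)
pred_rbf    = np.array([ 0.00, 0.50, -1.00, 0.250])  # predicted refraction, Hill-RBF (D)

def prediction_error(predicted, achieved):
    """Refractive prediction error = achieved minus predicted refraction."""
    return achieved - predicted

pred_combo = (pred_haigis + pred_rbf) / 2  # Haigis and Hill-RBF combination

for name, pred in [("Haigis", pred_haigis), ("Hill-RBF", pred_rbf), ("Combination", pred_combo)]:
    pe = prediction_error(pred, postop_se)
    ae = np.abs(pe)
    print(f"{name}: ME {pe.mean():+.3f} D, MAE {ae.mean():.3f} D, "
          f"MedAE {np.median(ae):.3f} D, surprises (AE > 1.00 D): {(ae > 1.0).mean():.0%}")
```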
ABSTRACT
Lipidic mesophase drug carriers have demonstrated the capacity to host and effectively deliver a wide range of active pharmaceutical ingredients, yet they have not been as extensively commercialized as other lipid-based products, such as liposomal delivery systems. Indeed, scientists have focused primarily on investigating the physics of these systems, especially in biological environments. Meanwhile, production methods remain less advanced, and researchers are still uncertain about how the manufacturing process might affect the quality of the formulations. Bringing these products to market will require an industrial translation process. In this scenario, we have developed a robust strategy to produce lipidic mesophase-based drug delivery systems using a dual-syringe setup. We identified four critical process parameters in the newly developed method (the dual-syringe method), compared with eight in the standard production method (the gold standard), and we defined their optimal limits following a Quality by Design approach. The robustness and versatility of the proposed method were assessed experimentally by incorporating drugs with diverse physicochemical properties and augmented by machine learning, which, by predicting drug release from lipidic mesophases, reduces formulation development time and cost.
Subjects
Artificial Intelligence, Lipids, Lipids/chemistry, Controlled Drug Release, Drug Delivery Systems, Drug Carriers/chemistry, Liposomes/chemistry, Particle Size
ABSTRACT
Aqueous zinc-ion batteries (AZIBs) have become a research hotspot, but the inevitable zinc dendrites and parasitic reactions at the zinc anode seriously hinder their further development. In this study, three covalent triazine frameworks (DCPY-CTF, CTF-1, and FCTF) have been synthesized and used as artificial protective coatings, among which the fluorinated triazine framework (FCTF) increases the number of zincophilic sites, thus better promoting dendrite-free zinc deposition and inhibiting the hydrogen evolution reaction. Excitingly, both experimental results and theoretical calculations indicate that the FCTF interface guides the deposition of Zn2+ along the (002) plane, effectively alleviating the formation of zinc dendrites. As expected, Zn@FCTF symmetric cells exhibit cycling stability of over 4000 h (0.25 mA cm-2), while Zn@FCTF//NHVO full cells provide a high specific capacity of 280 mAh g-1 at 1.0 A g-1, both superior to those of the bare Zn anode. This work provides new insights for suppressing hydrogen evolution and promoting dendrite-free zinc deposition to construct highly stable and reversible AZIBs.
ABSTRACT
Photoenzyme-coupled catalytic systems offer a promising avenue for selectively converting CO2 into high-value chemicals or fuels. However, two key challenges currently hinder their widespread application: the heavy reliance on the costly coenzyme NADH, and the need for metal electron mediators or photosensitizers to overcome sluggish reaction kinetics. Herein, we present a robust 2D/2D MXene/C3N5 heterostructured artificial photosynthesis platform for in situ NADH regeneration and photoenzyme-synergistic CO2 conversion to HCOOH. The efficiencies of utilizing and transmitting photogenerated charges are significantly enhanced by the abundant π-π conjugation electrons and well-engineered 2D/2D hetero-interfaces. Noteworthy is the achievement of nearly 100% NADH regeneration efficiency within just 2.5 h by 5% Ti3C2/C3N5 without electron mediators, and an impressive HCOOH production rate of 3.51 mmol g-1 h-1 with nearly 100% selectivity. This study represents a significant advance toward the highest NADH yield attained without an electron mediator and provides valuable insights into the development of superior 2D/2D heterojunctions for CO2 conversion.
ABSTRACT
Purpose: To develop and validate a deep learning algorithm capable of differentiating small choroidal melanomas from nevi. Design: Retrospective multicenter cohort study. Participants: A total of 802 images from 688 patients diagnosed with choroidal nevus or melanoma. Methods: Wide-field and standard-field fundus photographs were collected from patients diagnosed with choroidal nevus or melanoma by ocular oncologists during clinical examinations. A lesion was classified as a nevus if it was followed for at least 5 years without being rediagnosed as melanoma. A neural network optimized for image classification was trained and validated on cohorts of 495 and 168 images, respectively, and subsequently tested on independent sets of 86 and 53 images. Main Outcome Measures: Area under the curve (AUC) in receiver operating characteristic analysis for differentiating small choroidal melanomas from nevi. Results: The algorithm achieved an AUC of 0.88 in both test cohorts, outperforming ophthalmologists using the Mushroom shape, Orange pigment, Large size, Enlargement, and Subretinal fluid (AUC 0.77) and To Find Small Ocular Melanoma Using Helpful Hints Daily (AUC 0.67) risk factors (DeLong's test, P < 0.001). The algorithm performed equally well for wide-field and standard-field photos (AUC 0.89 for both when analyzed separately). Using an optimal operating point of 0.63 (on a scale from 0.00 to 1.00) determined from the training and validation datasets, the algorithm achieved 100% sensitivity and 74% specificity in the first test cohort (F-score 0.72), and 80% sensitivity and 81% specificity in the second (F-score 0.71), which consisted of images from external clinics nationwide. It outperformed 12 ophthalmologists in sensitivity (Mann-Whitney U, P = 0.006) but not specificity (P = 0.54). The algorithm showed higher sensitivity than both resident and consultant ophthalmologists (Dunn's test, P = 0.04 and P = 0.006, respectively) but not ocular oncologists (P > 0.99; all P values Bonferroni corrected). Conclusions: This study develops and validates a deep learning algorithm for differentiating small choroidal melanomas from nevi, matching or surpassing the discriminatory performance of experienced human ophthalmologists. Further research will aim to validate its utility in clinical settings. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
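The evaluation reported above combines a threshold-free metric (AUC) with sensitivity and specificity at a fixed operating point. A minimal sketch of that computation, with hypothetical labels and scores standing in for the model's melanoma probabilities:

```python
# A minimal sketch of the reported evaluation: AUC, plus sensitivity and
# specificity at a fixed operating point (0.63). The labels and scores below
# are hypothetical stand-ins for real model outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])                      # 1 = melanoma, 0 = nevus
y_score = np.array([0.9, 0.2, 0.7, 0.8, 0.1, 0.95, 0.4, 0.3])    # model probabilities

print("AUC:", roc_auc_score(y_true, y_score))

threshold = 0.63                                 # operating point chosen on training/validation data
y_pred = (y_score >= threshold).astype(int)
sensitivity = (y_pred[y_true == 1] == 1).mean()  # true-positive rate
specificity = (y_pred[y_true == 0] == 0).mean()  # true-negative rate
print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")
```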
ABSTRACT
The preparation of brain slices for electrophysiological and imaging experiments was developed several decades ago, and the method is still widely used owing to its simplicity and advantages over other techniques. It can easily be combined with other well-established and recently developed methods such as immunohistochemistry, morphological analysis, and opto- and chemogenetics. Several aspects of this technique are covered by a plethora of excellent and detailed reviews, from which one can gain deep insight into its variations. In this chapter, I briefly describe the solutions, equipment, and preparation techniques routinely used in our laboratory. I also show how certain "old school" brain slice lab devices can be built cost-efficiently and easily adapted to the special needs of an experiment. Finally, I present some differences in the preparatory techniques for acutely isolated human brain tissue.
Subjects
Brain, Humans, Brain/metabolism, Animals, Mice, Aging/physiology
ABSTRACT
Purpose: To compare the utility of ChatGPT-4 as an online uveitis patient education resource with existing patient education websites. Design: Evaluation of technology. Participants: Not applicable. Methods: The term "uveitis" was entered into the Google search engine, and the first 8 nonsponsored websites were enrolled in the study. Information regarding uveitis for patients was extracted from the Healthline, Mayo Clinic, WebMD, National Eye Institute, Ocular Uveitis and Immunology Foundation, American Academy of Ophthalmology, Cleveland Clinic, and National Health Service websites. ChatGPT-4 was then prompted to generate responses about uveitis in both standard and simplified formats. To generate the simplified response, the following request was added to the prompt: "Please provide a response suitable for the average American adult, at a sixth-grade comprehension level." Three dual fellowship-trained specialists, all masked to the sources, graded the appropriateness of the contents (extracted from the existing websites) and responses (generated by ChatGPT-4) in terms of personal preference, comprehensiveness, and accuracy. Additionally, 5 readability indices, including the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, and Simple Measure of Gobbledygook index, were calculated using an online calculator, Readable.com, to assess the ease of comprehension of each answer. Main Outcome Measures: Personal preference, accuracy, comprehensiveness, and readability of contents and responses about uveitis. Results: A total of 497 contents and responses, comprising 71 contents from existing websites, 213 standard responses, and 213 simplified responses from ChatGPT-4, were recorded and graded. Standard ChatGPT-4 responses were preferred and perceived to be more comprehensive by the dually trained (uveitis and retina) specialist ophthalmologists while maintaining a similar accuracy level compared with existing websites. Moreover, simplified ChatGPT-4 responses matched almost all existing websites in terms of personal preference, accuracy, and comprehensiveness. Notably, almost all readability indices suggested that standard ChatGPT-4 responses demand a higher educational level for comprehension, whereas simplified responses required a lower level of education than the existing websites. Conclusions: This study shows that ChatGPT can provide patients with an avenue to access comprehensive and accurate information about uveitis, tailored to their educational level. Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
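Two of the readability indices named above follow closed formulas over word, sentence, and syllable counts. A minimal sketch, using a crude vowel-group syllable heuristic rather than the more careful counting of tools like Readable.com, so exact scores will differ:

```python
# A minimal sketch of the Flesch Reading Ease and Flesch-Kincaid Grade Level
# formulas. The syllable counter is a crude vowel-group heuristic, so results
# will differ somewhat from dedicated tools such as Readable.com.
import re

def count_syllables(word):
    # Count runs of vowels as syllables; floor at 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)  # Flesch Reading Ease
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59    # Flesch-Kincaid Grade Level
    return fre, fkgl

fre, fkgl = readability("Uveitis is inflammation inside the eye. It can blur vision.")
print(f"Reading Ease: {fre:.1f}, Grade Level: {fkgl:.1f}")
```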
ABSTRACT
It is sobering that many liver failure patients die without access to liver transplantation (LT); reducing this morbidity and mortality urgently requires more non-transplant treatment options. Among the several artificial liver support devices available, therapeutic plasma exchange (TPE) is the only one shown to improve survival in acute liver failure (ALF) patients. In many other disorders, data on survival benefit and successful bridging to transplant are encouraging. TPE removes the entire plasma, including damage-associated molecular patterns, and replaces it with healthy donor fresh frozen plasma. In contrast, other artificial liver support systems (ALSS) correct the blood composition through dialysis techniques. TPE has become increasingly popular owing to advances in apheresis techniques and a better understanding of its role in treating the pathophysiology of liver failure. It provides metabolic, detoxification, and synthetic functions and modulates early innate immunity, fulfilling the role of an ALSS. TPE is readily available in intensive care units, dialysis units, and blood banks and has enormous potential to improve survival outcomes. Hepatologists should take advantage of this treatment option by thoroughly understanding its most frequent indications, its rationale, and its techniques. This primer on TPE for liver clinicians covers its current clinical, technical, and practical applications, addresses the knowledge gaps, and provides future directions.
ABSTRACT
One of the great challenges of document analysis is detecting document forgeries. The present work proposes a non-destructive approach to discriminate naturally and artificially aged papers using infrared spectroscopy and soft independent modeling of class analogy (SIMCA) algorithms. This is of particular interest in cases of document falsification by artificial aging. For this study, SIMCA and Data-Driven SIMCA (DD-SIMCA) classification models were built using naturally aged paper samples taken from three time periods: the 1st from 1998 to 2003, the 2nd from 2004 to 2009, and the 3rd from 2010 to 2015. Artificially aged samples (exposed to high temperature or UV radiation) were used as test sets. Promising results in detecting aging-related document falsification were obtained. Samples artificially aged at high temperature were correctly discriminated from the authentic (naturally aged) samples with 100% accuracy. In contrast, the samples subjected to photodegradation showed a lower classification performance, although still above 90%.
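SIMCA-style class modeling reduces to fitting a PCA model on the authentic class and rejecting test spectra whose reconstruction residual exceeds a class-derived threshold. A minimal sketch, with random data standing in for real IR spectra:

```python
# A minimal sketch of SIMCA-style one-class modeling: fit a PCA model on
# spectra of the authentic (naturally aged) class and flag test spectra whose
# reconstruction residual (Q statistic) exceeds a class-derived threshold.
# The random arrays below stand in for real IR spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_authentic = rng.normal(size=(60, 200))      # 60 training spectra, 200 wavenumbers
X_test = rng.normal(loc=0.5, size=(10, 200))  # e.g., artificially aged samples

pca = PCA(n_components=5).fit(X_authentic)

def q_residuals(X):
    """Squared reconstruction error of each spectrum under the class PCA model."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return ((X - X_hat) ** 2).sum(axis=1)

# Accept a sample if its residual lies within the 95th percentile of the class.
threshold = np.percentile(q_residuals(X_authentic), 95)
is_authentic = q_residuals(X_test) <= threshold
print(is_authentic)
```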
ABSTRACT
Objective: Large language models such as ChatGPT have demonstrated significant potential in question-answering within ophthalmology, but there is a paucity of literature evaluating their ability to generate clinical assessments and discussions. The objectives of this study were to (1) assess the accuracy of assessments and plans generated by ChatGPT and (2) evaluate ophthalmologists' ability to distinguish between responses generated by clinicians versus ChatGPT. Design: Cross-sectional mixed-methods study. Subjects: Sixteen ophthalmologists from a single academic center, of whom 10 were board-eligible and 6 were board-certified, were recruited to participate in this study. Methods: Prompt engineering was used to ensure that ChatGPT output discussions in the style of the ophthalmologist author of the Medical College of Wisconsin Ophthalmic Case Studies. Cases in which ChatGPT accurately identified the primary diagnosis were included and then paired. Masked human-generated and ChatGPT-generated discussions were sent to participating ophthalmologists, who identified the author of each discussion. Response confidence was assessed using a 5-point Likert scale, and subjective feedback was manually reviewed. Main Outcome Measures: Accuracy of ophthalmologist identification of the discussion author, as well as subjective perceptions of human-generated versus ChatGPT-generated discussions. Results: Overall, ChatGPT correctly identified the primary diagnosis in 15 of 17 (88.2%) cases. Two cases were excluded from the paired comparison due to hallucinations or fabrications of nonuser-provided data. Ophthalmologists correctly identified the author in 77.9% ± 26.6% of the 13 included cases, with a mean Likert scale confidence rating of 3.6 ± 1.0. No significant differences in performance or confidence were found between board-certified and board-eligible ophthalmologists. Subjectively, ophthalmologists found that discussions written by ChatGPT tended to give more generic responses, include irrelevant information, hallucinate more frequently, and show distinct syntactic patterns (all P < 0.01). Conclusions: Large language models have the potential to synthesize clinical data and generate ophthalmic discussions. While these findings have exciting implications for artificial intelligence-assisted health care delivery, more rigorous real-world evaluation of these models is necessary before clinical deployment. Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
ABSTRACT
In the rapidly evolving landscape of artificial intelligence (AI) and machine learning (ML), understanding and correctly applying the concept of the applicability domain (AD) has emerged as essential. This chapter begins with an introduction and background on this critical area. It then presents the definition of, and the different methodologies associated with, the applicability domain, laying a solid foundation for further exploration. A detailed examination of AD's role within the framework of AI and ML is undertaken, supported by in-depth theoretical foundations. The chapter then delineates the various measures of AD in AI and ML, offering insights into methods such as the DA index (κ, γ, δ), class probability estimation, and techniques involving local vicinity, boosting, classification neural networks, and subgroup discovery (SGD), among others. We also discuss a series of AD methods employed in Quantitative Structure-Activity Relationship (QSAR) studies. Lastly, the diverse applications of AD are addressed, underlining its widespread influence across different sectors. This chapter is intended to offer a thorough understanding of AD and its applications, particularly in AI and ML, where the substantial existing literature on the AD of QSAR modeling can inform research and decision-making.
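As one concrete instance of the local-vicinity measures mentioned above, a k-nearest-neighbor distance check classifies a query as inside the domain when its mean distance to the training set stays below a percentile cutoff. A minimal sketch with a hypothetical descriptor matrix:

```python
# A minimal sketch of a local-vicinity (k-nearest-neighbor distance) AD check:
# a query is inside the domain if its mean distance to the k nearest training
# points stays below a percentile cutoff derived from the training set itself.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 8))  # hypothetical descriptor matrix

nn = NearestNeighbors(n_neighbors=5).fit(X_train)

def in_domain(X_query, percentile=95):
    # Training-set distances include each point's zero self-distance,
    # which slightly lowers the cutoff; acceptable for a sketch.
    d_train, _ = nn.kneighbors(X_train)
    cutoff = np.percentile(d_train.mean(axis=1), percentile)
    d_query, _ = nn.kneighbors(X_query)
    return d_query.mean(axis=1) <= cutoff

print(in_domain(rng.normal(size=(3, 8))))         # near the training cloud
print(in_domain(rng.normal(loc=6, size=(3, 8))))  # far outside it
```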
Subjects
Artificial Intelligence, Machine Learning, Quantitative Structure-Activity Relationship, Neural Networks (Computer), Humans, Algorithms
ABSTRACT
The discovery of molecular toxicity in a clinical drug candidate can have a significant impact on both the cost and timeline of the drug discovery process. Early identification of potentially toxic compounds during screening library preparation or, alternatively, during the hit validation process is critical to ensure that valuable time and resources are not spent pursuing compounds that may possess a high propensity for human toxicity. This report focuses on the application of computational molecular filters, applied either pre- or post-screening, to identify and remove known reactive and/or potentially toxic compounds from consideration in drug discovery campaigns.
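As an illustration of such a filter in practice, the sketch below runs a small hypothetical library through RDKit's built-in PAINS filter catalog; this is one example of the general approach, not the specific filter set used in any given campaign:

```python
# A minimal sketch of a pre-screening molecular filter using RDKit's built-in
# PAINS filter catalog. The SMILES strings are illustrative; catechols are a
# motif commonly flagged by PAINS filters.
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

library = {"toluene": "Cc1ccccc1", "catechol": "Oc1ccccc1O"}

for name, smiles in library.items():
    mol = Chem.MolFromSmiles(smiles)
    match = catalog.GetFirstMatch(mol)  # None if no filter fires
    if match is None:
        print(f"{name}: passes")
    else:
        print(f"{name}: flagged ({match.GetDescription()})")
```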
Subjects
Computational Biology, Drug Discovery, High-Throughput Screening Assays, Small Molecule Libraries, High-Throughput Screening Assays/methods, Small Molecule Libraries/toxicity, Humans, Drug Discovery/methods, Computational Biology/methods, Preclinical Drug Evaluation/methods, Drug Design, Toxicology/methods
ABSTRACT
Developmental toxicity is a key human health endpoint, especially relevant for safeguarding maternal and child well-being. It is the object of increasing attention from international regulatory bodies such as the US EPA (US Environmental Protection Agency) and ECHA (European Chemicals Agency). In this challenging scenario, non-test methods employing explainable artificial intelligence techniques can significantly help derive transparent predictive models whose results can be easily interpreted to assess the developmental toxicity of new chemicals at very early stages. To accomplish this task, we have developed the web platforms TIRESIA and TISBE. Based on a benchmark dataset, TIRESIA employs an explainable artificial intelligence approach combined with SHAP analysis to unveil the molecular features responsible for the predicted developmental toxicity. Descending from TIRESIA, TISBE employs a larger dataset, an explainable artificial intelligence framework based on a fragment-based fingerprint encoding, a consensus classifier, and a new double top-down applicability domain. We report here some practical examples for getting started with TIRESIA and TISBE.
Subjects
Artificial Intelligence, Humans, Internet, Animals, Toxicity Tests/methods, Software
ABSTRACT
RNA ribozyme (Walter and Engelke, Biologist (London, England) 49:199-203, 2002) datasets typically contain from a few hundred to a few thousand naturally occurring sequences. However, the potential sequence space of RNA is huge. For example, the number of possible RNA sequences of length 150 nucleotides is approximately 10^90, a figure that far surpasses the estimated number of atoms in the known universe, which is around 10^80. This disparity highlights a vast realm of sequence variability that remains unexplored by natural evolution. In this context, generative models emerge as a powerful tool. Learning from existing natural instances, these models can create artificial variants that extend beyond the currently known sequences. In this chapter, we will go through the use of a generative model based on direct coupling analysis (DCA) (Russ et al., Science 369:440-445, 2020; Trinquier et al., Nat Commun 12:5800, 2021; Calvanese et al., Nucleic Acids Res 52(10):5465-5477, 2024) applied to the twister ribozyme RNA family, with three key applications: generating artificial twister ribozymes, designing potentially functional mutants of a natural wild type, and predicting mutational effects.
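At its core, a DCA generative model is a Potts model over sequences, from which new variants are drawn by Markov chain Monte Carlo. The sketch below uses random placeholder fields and couplings rather than parameters fitted to the twister family, so its output is illustrative only:

```python
# A minimal sketch of sampling from a Potts model, the probabilistic backbone
# of DCA generative models. The fields h and couplings J are random
# placeholders; a real application would infer them from an alignment of
# twister ribozymes.
import numpy as np

rng = np.random.default_rng(42)
L, q = 30, 4                                   # sequence length, alphabet size (A, C, G, U)
h = rng.normal(scale=0.1, size=(L, q))         # site fields (placeholder)
J = rng.normal(scale=0.05, size=(L, L, q, q))  # pairwise couplings (placeholder)
J = (J + J.transpose(1, 0, 3, 2)) / 2          # enforce symmetry J_ij(a, b) = J_ji(b, a)

def energy_delta(seq, i, new):
    """Energy change when site i mutates to state `new`, with E = -sum h - sum J."""
    old = seq[i]
    d = h[i, old] - h[i, new]
    for j in range(L):
        if j != i:
            d += J[i, j, old, seq[j]] - J[i, j, new, seq[j]]
    return d

def sample_sequence(n_sweeps=200):
    """Metropolis sampling: propose single-site mutations, accept with prob exp(-dE)."""
    seq = rng.integers(q, size=L)
    for _ in range(n_sweeps):
        for i in range(L):
            new = int(rng.integers(q))
            if rng.random() < np.exp(-energy_delta(seq, i, new)):
                seq[i] = new
    return "".join("ACGU"[a] for a in seq)

print(sample_sequence())
```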
Subjects
Molecular Evolution, Nucleic Acid Conformation, Catalytic RNA, Catalytic RNA/genetics, Catalytic RNA/metabolism, Algorithms
ABSTRACT
Purpose: To develop and validate machine learning (ML) models to predict choroidal nevus transformation to melanoma based on multimodal imaging at initial presentation. Design: Retrospective multicenter study. Participants: Patients diagnosed with choroidal nevus on the Ocular Oncology Service at Wills Eye Hospital (2007-2017) or Mayo Clinic Rochester (2015-2023). Methods: Multimodal imaging was obtained, including fundus photography, fundus autofluorescence, spectral-domain OCT, and B-scan ultrasonography. Machine learning models (XGBoost, LGBM, Random Forest, Extra Tree) were created and optimized for the area under the receiver operating characteristic curve (AUROC). The Wills Eye Hospital cohort was used for training and testing (80% training, 20% testing) with fivefold cross-validation. The Mayo Clinic cohort provided external validation. Model performance was characterized by AUROC and the area under the precision-recall curve (AUPRC). Models were interrogated using SHapley Additive exPlanations (SHAP) to identify the features most predictive of conversion from nevus to melanoma. Differences in AUROC and AUPRC between models were tested using 10 000 bootstrap samples with replacement. Main Outcome Measures: AUROC and AUPRC for each ML model. Results: There were 2870 nevi included in the study, with conversion to melanoma confirmed in 128 cases. The Simple AI Nevus Transformation System (SAINTS; XGBoost) was the top-performing model in the test cohort (pooled AUROC 0.864 [95% confidence interval (CI): 0.864-0.865], pooled AUPRC 0.244 [95% CI: 0.243-0.246]) and in the external validation cohort (pooled AUROC 0.931 [95% CI: 0.930-0.931], pooled AUPRC 0.533 [95% CI: 0.531-0.535]). Other models also had good discriminative performance: LGBM (test set pooled AUROC 0.831, validation set pooled AUROC 0.815), Random Forest (test set pooled AUROC 0.812, validation set pooled AUROC 0.866), and Extra Tree (test set pooled AUROC 0.826, validation set pooled AUROC 0.915). A model including only nevi with at least 5 years of follow-up demonstrated the best performance in AUPRC (test: pooled 0.592 [95% CI: 0.590-0.594]; validation: pooled 0.656 [95% CI: 0.655-0.657]). The top 5 features in SAINTS by SHAP values were tumor thickness, largest tumor basal diameter, tumor shape, distance to optic nerve, and subretinal fluid extent. Conclusions: We demonstrate the accuracy and generalizability of an ML model for predicting choroidal nevus transformation to melanoma based on multimodal imaging. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
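A minimal sketch of this style of pipeline, a gradient-boosted classifier scored by AUROC/AUPRC and interrogated with SHAP, is shown below; the data are synthetic and the feature names merely mirror the reported top predictors:

```python
# A minimal sketch of the modeling pipeline described above: a gradient-boosted
# classifier scored by AUROC and AUPRC, then interrogated with SHAP. The data
# are synthetic; feature names mirror the reported top predictors.
import numpy as np
import shap
import xgboost
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["thickness", "basal_diameter", "shape", "distance_to_nerve", "srf_extent"]
X = rng.normal(size=(2870, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2870) > 2.5).astype(int)  # rare positives

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = xgboost.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="auc")
model.fit(X_tr, y_tr)

p = model.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, p), "AUPRC:", average_precision_score(y_te, p))

# Mean absolute SHAP value per feature ranks the predictors' contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(dict(zip(features, np.abs(shap_values).mean(axis=0))))
```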
ABSTRACT
ABSTRACT
The mythological story of Epimetheus and Prometheus, as told by Plato, serves as an introduction to the meaning of artificial intelligence (AI). In this myth, man, unlike other creatures, is endowed with a divine gift: the ability to create tools. AI represents a revolutionary advance, replacing human intellectual labour and emphasising its ability to autonomously generate new knowledge. In the scientific field, AI is speeding up peer review processes and increasing the efficiency of manuscript evaluation, while also contributing creative elements such as rewriting, translating or creating illustrations. However, its use must be ethical, limited to an assisting role, and subject to expert oversight to prevent errors and misuse. AI, an evolving divine tool, requires critical study and application of each of its advances.
ABSTRACT
There is massive hype around artificial intelligence (AI) allegedly revolutionizing medicine. However, algorithms have been at the core of medicine for centuries and have been implemented in technologies such as computed tomography and magnetic resonance imaging machines for decades. They have given decision support in electrocardiogram machines without much attention. So, what is new with AI? To withstand the massive hype, we should learn from the little child in H.C. Andersen's fairytale "The Emperor's New Clothes", who reveals the collective illusion that the emperor is wearing new clothes. While AI certainly accelerates algorithmic medicine, we must learn from history and avoid implementing AI because it is allegedly new; we must implement it because we can demonstrate that it is useful.
ABSTRACT
Imitation learning (IL), a burgeoning frontier in machine learning, holds immense promise across diverse domains. In recent years, its integration into robotics has sparked significant interest, offering substantial advances in autonomous control. This paper presents an exhaustive survey of the implementation of imitation learning techniques in agricultural robotics, rigorously examining varied research endeavors that use imitation learning to address pivotal agricultural challenges. Methodologically, the survey investigates several facets of imitation learning applications in agricultural robotics: it identifies agricultural tasks that can potentially be addressed through imitation learning, analyzes specific models and frameworks in detail, and assesses the performance metrics employed in the surveyed studies. Additionally, it includes a comparative analysis between imitation learning techniques and conventional control methodologies in robotics. The findings highlight the potential of these methods to significantly improve task execution in the dynamic and high-dimensional action spaces prevalent in agricultural settings, such as precision farming. Despite promising advances, the survey discusses considerable challenges that IL must overcome, including data quality, environmental variability, and computational constraints. It also addresses the ethical and social implications of implementing such technologies, emphasizing the need for robust policy frameworks to manage the societal impacts of automation. These findings showcase the potential of imitation learning to revolutionize processes in agricultural robotics. This research contributes to envisioning innovative applications and tools within the agricultural robotics domain, promising heightened productivity and efficiency in robotic agricultural systems and signaling a transformative trajectory for the sector, particularly in robotics and autonomous systems.
ABSTRACT
Purpose: To evaluate the capabilities of large language models (LLMs) in understanding radiation safety and protection. We assessed the performance of generative pre-trained transformer (GPT)-4 (OpenAI, USA) and Gemini Advanced (Google DeepMind, London) using questions from the First-Class Radiation Protection Supervisor Examination in Japan. Methods: GPT-4 and Gemini Advanced answered questions from the 68th First-Class Radiation Protection Supervisor Examination in Japan. The numbers of correct and incorrect answers were analyzed by subject, the presence or absence of calculation, passage length, and format (textual or graphical questions), and the results of GPT-4 and Gemini Advanced were compared. Results: The overall accuracy rates of GPT-4 and Gemini Advanced were 71.0% and 65.3%, respectively. A significant difference was observed across subjects (P < 0.0001 for GPT-4 and P = 0.0127 for Gemini Advanced); the accuracy rate for laws and regulations was lower than for the other subjects. There was no significant difference by presence or absence of calculation or by passage length. Both LLMs performed significantly better on textual questions than on graphical questions (P = 0.0003 for GPT-4 and P < 0.0001 for Gemini Advanced). The performance of the two LLMs did not differ significantly by subject, presence or absence of calculation, passage length, or format. Conclusions: GPT-4 and Gemini Advanced demonstrated sufficient understanding of physics, chemistry, biology, and practical operations to meet the passing standard for the average score. In laws and regulations, however, their performance was insufficient, possibly owing to frequent revisions and the complexity of detailed regulations, and further training is required.
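The subject- and format-level comparisons reported here are tests on counts of correct versus incorrect answers. A minimal sketch with hypothetical counts, using Fisher's exact test:

```python
# A minimal sketch of the kind of comparison reported above: testing whether
# accuracy differs between textual and graphical questions. The counts are
# hypothetical, not the study's data.
from scipy.stats import fisher_exact

#                   correct, incorrect
textual_counts   = [80, 20]
graphical_counts = [10, 15]

odds_ratio, p_value = fisher_exact([textual_counts, graphical_counts])
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")
```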