Results 1 - 20 of 242
1.
Proc Natl Acad Sci U S A ; 120(29): e2216217120, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37428910

ABSTRACT

Animals are often faced with time-critical decisions without prior information about their actions' outcomes. In such scenarios, individuals budget their investment into the task to cut their losses in case of an adverse outcome. In animal groups, this may be challenging because group members can only access local information, and consensus can only be achieved through distributed interactions among individuals. Here, we combined experimental analyses with theoretical modeling to investigate how groups modulate their investment into tasks in uncertain conditions. Workers of the arboreal weaver ant Oecophylla smaragdina form three-dimensional chains using their own bodies to bridge vertical gaps between existing trails and new areas to explore. The cost of a chain increases with its length because ants participating in the structure are prevented from performing other tasks. The payoffs of chain formation, however, remain unknown to the ants until the chain is complete and they can explore the new area. We demonstrate that weaver ants cap their investment into chains, and do not form complete chains when the gap is taller than 90 mm. We show that individual ants budget the time they spend in chains depending on their distance to the ground, and propose a distance-based model of chain formation that explains the emergence of this tradeoff without the need to invoke complex cognition. Our study provides insights into the proximate mechanisms that lead individuals to engage (or not) in collective actions and furthers our knowledge of how decentralized groups make adaptive decisions in uncertain conditions.


Subjects
Ants, Cognition, Animals, Uncertainty, Consensus
2.
Biochem Biophys Res Commun ; 731: 150396, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39018974

ABSTRACT

Individual cells have numerous competencies in physiological and metabolic spaces. However, multicellular collectives can reliably navigate anatomical morphospace towards much larger, reliable endpoints. Understanding the robustness and control properties of this process is critical for evolutionary developmental biology, bioengineering, and regenerative medicine. One mechanism that has been proposed for enabling individual cells to coordinate toward specific morphological outcomes is the sharing of stress (where stress is a physiological parameter that reflects the current amount of error in the context of a homeostatic loop). Here, we construct and analyze a multiscale agent-based model of morphogenesis in which we quantitatively examine the impact of stress sharing on the ability to reach target morphology. We found that stress sharing improves the morphogenetic efficiency of multicellular collectives; populations with stress sharing reached anatomical targets faster. Moreover, stress sharing influenced the future fate of distant cells in the multicellular collective, enhancing cells' movement and their radius of influence, consistent with the hypothesis that stress sharing works to increase the cohesiveness of collectives. During development, anatomical goal states could not be inferred from observation of stress states, revealing the limitations of knowledge of goals by an external observer outside the system itself. Taken together, our analyses support an important role for stress sharing in natural and engineered systems that seek robust large-scale behaviors to emerge from the activity of their competent components.

3.
J Synchrotron Radiat ; 31(Pt 2): 420-429, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38386563

ABSTRACT

Alignment of each optical element at a synchrotron beamline takes days, even weeks, per experiment, costing valuable beam time. Evolutionary algorithms (EAs), efficient heuristic search methods based on Darwinian evolution, can be applied to multi-objective optimization problems in many application areas. In this study, the flux and spot size of a synchrotron beam are optimized for two different experimental setups that include optical elements such as lenses and mirrors. Calculations were carried out with the X-ray Tracer beamline simulator using swarm intelligence (SI) algorithms, and for comparison the same setups were optimized with EAs. The EAs and SI algorithms used in this study for the two setups are the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC). One setup optimizes the lens position, while the other optimizes the focal distances of Kirkpatrick-Baez mirrors. First, mono-objective evolutionary algorithms were used and the spot size or flux values were checked separately. After the comparison of mono-objective algorithms, the multi-objective evolutionary algorithm NSGA-II was run for both objectives: minimum spot size and maximum flux. Every algorithm configuration was run several times for Monte Carlo analysis, since these processes generate random solutions and the simulator itself is stochastic. The results show that the PSO algorithm gives the best values over all setups.
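As a sketch of how such a mono-objective swarm search operates, the following is a minimal global-best PSO in Python. The quadratic "spot size" objective and its optimum at 3.7 are purely illustrative stand-ins, not the actual X-ray Tracer beamline model:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for minimization: each particle remembers its
    personal best, and the swarm shares a single global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)                              # stay in bounds
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Hypothetical 1-parameter analogue: "spot size" grows with lens defocus,
# with an assumed optimum at position 3.7 (illustrative numbers only).
spot_size = lambda p: 1.0 + (p[0] - 3.7) ** 2
best, val = pso(spot_size, ([0.0], [10.0]))
```

Multi-objective methods such as NSGA-II replace the single global best with non-dominated sorting over both objectives, returning a Pareto front rather than one solution.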

4.
Network ; : 1-57, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913877

ABSTRACT

The purpose of this paper is to test the performance of the recently proposed weighted superposition attraction-repulsion algorithms (WSA and WSAR) on unconstrained continuous optimization test problems and constrained optimization problems. WSAR is a successor of the weighted superposition attraction algorithm (WSA). WSAR is built upon the superposition principle from physics and mimics the attractive and repulsive movements of solution agents (vectors). Unlike WSA, WSAR also considers repulsive movements, with updated solution-move equations. WSAR requires very few algorithm-specific parameters and has good convergence and search capability. Through extensive computational tests on many benchmark problems, including CEC'2015 and CEC'2020, the performance of WSAR is compared against WSA and other metaheuristic algorithms. It is statistically shown that WSAR produces good and competitive results in comparison to its predecessor WSA and other metaheuristic algorithms.
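The superposition idea can be illustrated as follows. This is a hedged sketch of the attraction-repulsion principle only, not the paper's exact WSAR move equations: a rank-weighted superposition of all agents forms a target vector, worse agents are attracted to it, better agents are repelled from it, and moves are accepted greedily:

```python
import numpy as np

def sphere(x):
    """Toy minimization objective."""
    return float(np.sum(np.asarray(x) ** 2))

def wsar_step(pop, fitness, objective, step=0.2, rng=None):
    """One illustrative attraction-repulsion move: rank agents by fitness,
    form a rank-weighted superposition 'target', attract agents that are
    worse than the target, repel agents that are better, and keep a move
    only if it improves the agent (greedy acceptance)."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(fitness)                    # best solutions first
    ranks = np.empty(len(pop))
    ranks[order] = np.arange(1, len(pop) + 1)
    w = 1.0 / ranks                                # best agents weigh most
    target = (w / w.sum()) @ pop                   # weighted superposition
    t_fit = objective(target)
    for i in range(len(pop)):
        d = target - pop[i] if t_fit < fitness[i] else pop[i] - target
        cand = pop[i] + step * d + 0.01 * rng.standard_normal(pop[i].shape)
        c_fit = objective(cand)
        if c_fit < fitness[i]:                     # greedy acceptance
            pop[i], fitness[i] = cand, c_fit
    return pop, fitness
```

Repeated application contracts the population toward promising regions while the repulsion term keeps the best agents exploring.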

5.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676223

ABSTRACT

Vector Quantization (VQ) is a technique with a wide range of applications. For example, it can be used for image compression. The codebook design for VQ has great significance in the quality of the quantized signals and can benefit from the use of swarm intelligence. Initialization of the Linde-Buzo-Gray (LBG) algorithm, which is the most popular VQ codebook design algorithm, is a step that directly influences VQ performance, as the convergence speed and codebook quality depend on the initial codebook. A widely used initialization alternative is random initialization, in which the initial set of codevectors is drawn randomly from the training set. Other initialization methods can lead to a better quality of the designed codebooks. The present work evaluates the impacts of initialization strategies on swarm intelligence algorithms for codebook design in terms of the quality of the designed codebooks, assessed by the quality of the reconstructed images, and in terms of the convergence speed, evaluated by the number of iterations. Initialization strategies consist of a combination of codebooks obtained by initialization algorithms from the literature with codebooks composed of vectors randomly selected from the training set. The possibility of combining different initialization techniques provides new perspectives in the search for the quality of the VQ codebooks. Nine initialization strategies are presented, which are compared with random initialization. Initialization strategies are evaluated on the following algorithms for codebook design based on swarm clustering: modified firefly algorithm-Linde-Buzo-Gray (M-FA-LBG), modified particle swarm optimization-Linde-Buzo-Gray (M-PSO-LBG), modified fish school search-Linde-Buzo-Gray (M-FSS-LBG) and their accelerated versions (M-FA-LBGa, M-PSO-LBGa and M-FSS-LBGa) which are obtained by replacing the LBG with the accelerated LBG algorithm. 
The simulation results point to the benefits of the proposed initialization strategies. The results show gains of up to 4.43 dB in PSNR for image Clock with M-PSO-LBG codebooks of size 512, and codebook design time savings of up to 67.05% for image Clock with M-FA-LBGa codebooks of size N = 512, when the initialization strategies are used in place of random initialization.
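For reference, random initialization and one LBG iteration can be sketched as follows (a generic implementation of the textbook algorithm, not the modified or accelerated variants evaluated in the paper):

```python
import numpy as np

def random_codebook(training, N, rng):
    """Random initialization: draw N distinct codevectors from the training set."""
    idx = rng.choice(len(training), size=N, replace=False)
    return training[idx].copy()

def lbg_step(training, codebook):
    """One LBG (generalized Lloyd) iteration: assign each training vector to its
    nearest codevector, then move each codevector to the centroid of its cell.
    Returns the updated codebook and the mean squared distortion of the input
    codebook; distortion is non-increasing across iterations."""
    d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(1)
    new_cb = codebook.copy()
    for j in range(len(codebook)):
        cell = training[nearest == j]
        if len(cell):                      # empty cells keep their codevector
            new_cb[j] = cell.mean(0)
    return new_cb, float(d[np.arange(len(training)), nearest].mean())
```

Because the centroid update can only lower distortion within each cell, the quality of the initial codebook largely determines how many iterations are needed and which local optimum is reached, which is why initialization strategies matter.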

6.
Evol Comput ; : 1-30, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889349

ABSTRACT

Heuristic optimization methods such as Particle Swarm Optimization depend on their parameters to achieve optimal performance on a given class of problems. Some modifications of heuristic algorithms aim at adapting those parameters during the optimization process. We present a novel approach to design such adaptation strategies using continuous fuzzy feedback control. Fuzzy feedback provides a simple interface where probes are sampled in the optimization process and parameters are fed back to the optimizer. The probes are turned into parameters by a fuzzy process optimized beforehand to maximize performance on a training benchmark. Utilizing this framework, we systematically established 127 different Fuzzy Particle Swarm Optimization algorithms featuring a maximum of 7 parameters under fuzzy control. These newly devised algorithms exhibit superior performance compared to both traditional PSO and some of its best parameter control variants. The performance is reported in the single-objective bound-constrained numerical optimization competition of CEC 2020. Additionally, two specific controls, highlighted for their efficacy and dependability, demonstrated commendable performance in real-world scenarios from CEC 2011.
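The feedback interface can be illustrated with a deliberately simplified, crisp (non-fuzzy) analogue: a probe sampled from the optimizer (here, the fraction of particles that improved in an iteration) is fed back to one parameter (the inertia weight). In the paper, a fuzzy process tuned on a training benchmark replaces this proportional rule; the gain, target, and bounds below are assumptions for illustration only:

```python
def adapt_inertia(w, improved_frac, target=0.2, gain=0.1, lo=0.4, hi=0.9):
    """Crisp stand-in for a fuzzy feedback controller: raise the inertia
    weight (more exploration) when many particles are improving, lower it
    (more exploitation) when few are, and clamp to sane PSO bounds."""
    w += gain * (improved_frac - target)
    return min(hi, max(lo, w))
```

A fuzzy controller generalizes this by mapping the probe through membership functions and rules, which is what the 127 Fuzzy PSO variants in the study optimize beforehand.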

7.
Entropy (Basel) ; 26(7)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39056895

ABSTRACT

In recent years, the scientific community has increasingly recognized the complex multi-scale competency architecture (MCA) of biology, comprising nested layers of active homeostatic agents, each forming the self-orchestrated substrate for the layer above, and, in turn, relying on the structural and functional plasticity of the layer(s) below. The question of how natural selection could give rise to this MCA has been the focus of intense research. Here, we instead investigate the effects of such decision-making competencies of MCA agential components on the process of evolution itself, using in silico neuroevolution experiments of simulated, minimal developmental biology. We specifically model the process of morphogenesis with neural cellular automata (NCAs) and utilize an evolutionary algorithm to optimize the corresponding model parameters with the objective of collectively self-assembling a two-dimensional spatial target pattern (reliable morphogenesis). Furthermore, we systematically vary the accuracy with which the uni-cellular agents of an NCA can regulate their cell states (simulating stochastic processes and noise during development). This allows us to continuously scale the agents' competency levels from a direct encoding scheme (no competency) to an MCA (with perfect reliability in cell decision executions). We demonstrate that an evolutionary process proceeds much more rapidly when evolving the functional parameters of an MCA compared to evolving the target pattern directly. Moreover, the evolved MCAs generalize well toward system parameter changes and even modified objective functions of the evolutionary process. Thus, the adaptive problem-solving competencies of the agential parts in our NCA-based in silico morphogenesis model strongly affect the evolutionary process, suggesting significant functional implications of the near-ubiquitous competency seen in living matter.

8.
Artif Life ; 29(4): 433-467, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37432100

ABSTRACT

Collectiveness is an important property of many systems-both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals or even to produce intelligent collective behavior out of not-so-intelligent individuals. Indeed, collective intelligence, namely, the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems-motivated by recent technoscientific trends like the Internet of Things, swarm robotics, and crowd computing, to name only a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognized research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference. The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this article considers a set of broad scoping questions providing a map of collective intelligence research, mostly by the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering.


Subjects
Artificial Intelligence, Robotics, Humans, Intelligence
9.
Sensors (Basel) ; 23(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679554

ABSTRACT

The Aquila Optimizer (AO) is a new meta-heuristic algorithm inspired by the hunting behavior of the Aquila (eagle). The Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm (NCAAO) is proposed to address the problem that, although AO has a strong global exploration capability, it has insufficient local exploitation capability and a slow convergence rate. First, to improve the diversity of populations in the algorithm and the uniformity of their distribution in the search space, DLCS chaotic mapping is used to generate the initial populations so that the algorithm starts in a better exploration state. Then, to improve the search accuracy of the algorithm, an adaptive adjustment strategy of de-searching preferences is proposed. The exploration and exploitation phases of the NCAAO algorithm are effectively balanced by changing the search threshold and introducing a position weight parameter to adaptively adjust the search process. Finally, the idea of small habitats (niches) is used to promote the exchange of information between groups and accelerate their convergence to the optimal solution. To verify the optimization performance of the NCAAO algorithm, it was evaluated on 15 standard benchmark functions (with the Wilcoxon rank-sum test) and on engineering optimization problems. The experimental results show that the NCAAO algorithm has better search performance and faster convergence compared with other intelligent algorithms.
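Chaotic initialization can be sketched as below. Since the DLCS mapping itself is not detailed here, the classic logistic map is used as a stand-in to show the principle of scaling a chaotic sequence into the search bounds:

```python
import numpy as np

def chaotic_init(n_agents, dim, lo, hi, r=4.0, seed=0.7):
    """Chaotic population initialization, illustrated with the logistic map
    x <- r * x * (1 - x) (a stand-in for DLCS): successive values of the
    chaotic orbit in (0, 1) are scaled into [lo, hi] to spread agents more
    evenly than short pseudo-random sequences typically do."""
    x = seed
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        for j in range(dim):
            x = r * x * (1.0 - x)          # logistic map stays in (0, 1)
            pop[i, j] = lo + x * (hi - lo)
    return pop
```

Any other chaotic map (tent, sine, DLCS) slots into the same scheme by replacing the update line.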


Subjects
Eagles, Animals, Algorithms, Benchmarking, Engineering, Heuristics
10.
Sensors (Basel) ; 23(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36905062

ABSTRACT

Recent studies have shown the efficacy of mobile elements in optimizing the energy consumption of sensor nodes. Current data collection approaches for waste management applications focus on exploiting IoT-enabled technologies. However, these techniques are no longer sustainable in the context of smart city (SC) waste management applications due to the emergence of large-scale wireless sensor networks (LS-WSNs) in smart cities with sensor-based big data architectures. This paper proposes an energy-efficient swarm intelligence (SI) Internet of Vehicles (IoV)-based technique for opportunistic data collection and traffic engineering for SC waste management strategies. This is a novel IoV-based architecture exploiting the potential of vehicular networks for SC waste management strategies. The proposed technique involves deploying multiple data collector vehicles (DCVs) traversing the entire network for data gathering via a single-hop transmission. However, employing multiple DCVs comes with additional challenges including costs and network complexity. Thus, this paper proposes analytical-based methods to investigate critical tradeoffs in optimizing energy consumption for big data collection and transmission in an LS-WSN such as (1) finding the optimal number of data collector vehicles (DCVs) required in the network and (2) determining the optimal number of data collection points (DCPs) for the DCVs. These critical issues affect efficient SC waste management and have been overlooked by previous studies exploring waste management strategies. Simulation-based experiments using SI-based routing protocols validate the efficacy of the proposed method in terms of the evaluation metrics.

11.
Sensors (Basel) ; 23(18)2023 Sep 05.
Article in English | MEDLINE | ID: mdl-37765722

ABSTRACT

High-rise building fires pose a serious threat to people's lives and property. The lack of reliable and accurate positioning is one of the main difficulties faced by rescuers. In the absence of prior knowledge of the high-rise building fire environment, the coverage deployment of mobile base stations is a challenging problem that has received little attention in the literature. This paper studies the autonomous optimal deployment of base stations in high-rise building fire environments using a UAV group. A novel problem formulation is proposed that solves the non-line-of-sight (NLOS) positioning problem in complex, unknown environments, with the goal of achieving coverage and deployment of mobile base stations in such fire environments. The NLOS positioning problem in the fire field is turned into a line-of-sight (LOS) positioning problem through the optimization algorithm, ensuring that any point in the fire field has more than three LOS base stations nearby. A control law, formulated from a mathematically precise problem statement, is developed that guarantees the mobile base stations meet their deployment goals while avoiding collisions. Finally, the positioning accuracy of our method and that of the common method were compared across many cases. The simulation results showed that the positioning error of a simulated firefighter in the fire field environment was improved from more than 10 m (the error of the traditional method) to less than 1 m.

12.
Sensors (Basel) ; 23(21)2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37960486

ABSTRACT

Real-time monitoring of rock stability during the mining process is critical. This paper first proposes CCRIME, a RIME algorithm based on vertical and horizontal crossover search strategies, to improve the quality of the solutions obtained by RIME and further enhance its search capabilities. Then, by constructing a binary version of CCRIME, the key parameters of FKNN are optimized using a binary conversion method. Finally, a discrete variant, BCCRIME, is developed, which addresses the feature selection issue by using an S-shaped function to convert each continuous search coordinate into a value that can only be zero or one. The performance of CCRIME is examined from various perspectives using 30 benchmark functions from IEEE CEC2017, including comparison tests against basic algorithms and against sophisticated variant algorithms. In addition, collected microseismic and blasting data are used for classification prediction to verify the ability of the BCCRIME-FKNN model to process real data. This paper provides new ideas and methods for real-time monitoring of rock mass stability during deep-well mineral resource mining.
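The S-shaped binarization step can be sketched generically. This is the standard sigmoid transfer-function approach used by many binary metaheuristics, not necessarily BCCRIME's exact formulation:

```python
import numpy as np

def s_shaped_binarize(position, rng):
    """S-shaped transfer for a binary metaheuristic: squash each continuous
    coordinate through a sigmoid into a probability, then sample a 0/1 bit.
    For feature selection, bit j = 1 means 'keep feature j'."""
    prob = 1.0 / (1.0 + np.exp(-np.asarray(position, float)))  # sigmoid in (0, 1)
    return (rng.random(prob.shape) < prob).astype(int)
```

Strongly positive coordinates almost always map to 1 and strongly negative ones to 0, so the continuous search dynamics carry over directly to the discrete feature mask.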

13.
Sensors (Basel) ; 23(13)2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37447729

ABSTRACT

The template matching technique is one of the most widely applied methods for finding patterns in images, in which a reduced-size image, called a target, is searched for within another image that represents the overall environment. In this work, template matching is performed via a co-design system: a hardware coprocessor is designed for the computationally demanding step of template matching, the calculation of the normalized cross-correlation coefficient. This metric provides invariance to global brightness changes in the images, but it is computationally expensive for images of larger dimensions, or for sets of images. Furthermore, we investigate the performance of seven different swarm intelligence techniques aimed at accelerating the target search process. To evaluate the proposed design, the processing time, the number of iterations, and the success rate were compared. The results show that it is possible to process video images at 30 frames per second with an acceptable average success rate for detecting the tracked target. The search strategies based on PSO, ABC, FFA, and CS meet the 30 frames/s processing-time requirement, yielding average accuracy rates above 80% for the pipelined co-design implementation. However, FWA, EHO, and BFOA could not meet the required timing restriction, achieving success rates of around 60%. Among all the investigated search strategies, PSO provides the best performance, with an average processing time of 16.22 ms and a 95% success rate.
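The coprocessor's core computation, the zero-mean normalized cross-correlation coefficient, can be expressed compactly as:

```python
import numpy as np

def ncc(window, template):
    """Zero-mean normalized cross-correlation between an image window and a
    template of the same shape. Subtracting the means and dividing by the
    norms makes the score invariant to global brightness and contrast
    changes; the result lies in [-1, 1]."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom else 0.0
```

A full search evaluates this score at every candidate window position, which is exactly the cost the swarm strategies avoid by sampling only promising positions.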


Subjects
Algorithms, Artificial Intelligence, Intelligence
14.
Sensors (Basel) ; 23(2)2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36679822

ABSTRACT

Collaborative robots represent an evolution in the field of swarm robotics that is pervasive in modern industrial undertakings, from manufacturing to exploration. Though there has been much work on path planning for autonomous robots employing floor plans, energy-efficient navigation of autonomous robots in unknown environments is gaining traction. This work presents a novel methodology of low-overhead collaborative sensing, run-time mapping and localization, and navigation for robot swarms. The aim is to optimize energy consumption for the swarm as a whole rather than for individual robots. An energy- and information-aware management algorithm is proposed to optimize the time and energy required for a swarm of autonomous robots to move from a launch area to a predefined destination. This is achieved by modifying the classical Partial Swarm SLAM technique, whereby sections of objects discovered by different members of the swarm are stitched together and broadcast to members of the swarm. Thus, a follower can find the shortest path to the destination while efficiently avoiding even far-away obstacles. The proposed algorithm reduces the energy consumption of the swarm as a whole because the leading robots sense and discover respective optimal paths and share their discoveries with the followers. The simulation results show that the robots effectively re-optimized the previous solution while sharing the necessary information within the swarm. Furthermore, the efficiency of the proposed scheme is shown via comparative results, i.e., reducing traveling distance by 13% for individual robots and up to 11% for the swarm as a whole in the performed experiments.


Subjects
Robotics, Robotics/methods, Algorithms, Computer Simulation
15.
Sensors (Basel) ; 23(12)2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37420600

ABSTRACT

Wireless Sensor Networks (WSNs) have been successfully utilized for developing various collaborative and intelligent applications that can provide comfortable and smart-economic life. This is because the majority of applications that employ WSNs for data sensing and monitoring purposes are in open practical environments, where security is often the first priority. In particular, the security and efficacy of WSNs are universal and inevitable issues. One of the most effective methods for increasing the lifetime of WSNs is clustering. In cluster-based WSNs, Cluster Heads (CHs) play a critical role; however, if the CHs are compromised, the gathered data loses its trustworthiness. Hence, trust-aware clustering techniques are crucial in a WSN to improve node-to-node communication as well as to enhance network security. In this work, a trust-enabled data-gathering technique based on the Sparrow Search Algorithm (SSA) for WSN-based applications, called DGTTSSA, is introduced. In DGTTSSA, the swarm-based SSA optimization algorithm is modified and adapted to develop a trust-aware CH selection method. A fitness function is created based on the nodes' remaining energy and trust values in order to choose more efficient and trustworthy CHs. Moreover, predefined energy and trust threshold values are taken into account and are dynamically adjusted to accommodate the changes in the network. The proposed DGTTSSA and the state-of-the-art algorithms are evaluated in terms of the Stability and Instability Period, Reliability, CHs Average Trust Value, Average Residual Energy, and Network Lifetime. The simulation results indicate that DGTTSSA selects the most trustworthy nodes as CHs and offers a significantly longer network lifetime than previous efforts in the literature. 
Moreover, DGTTSSA improves the instability period compared to LEACH-TM, ETCHS, eeTMFGA, and E-LEACH up to 90%, 80%, 79%, 92%, respectively, when BS is located at the center, up to 84%, 71%, 47%, 73%, respectively, when BS is located at the corner, and up to 81%, 58%, 39%, 25%, respectively, when BS is located outside the network.
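The trust- and energy-based cluster-head fitness described above can be sketched as follows; the blending weight and threshold values here are illustrative assumptions, not DGTTSSA's exact formulation:

```python
def ch_fitness(energy, trust, e_thresh=0.2, t_thresh=0.5, alpha=0.5):
    """Illustrative CH-selection fitness: nodes below an energy or trust
    threshold are ineligible (fitness 0); otherwise the score blends
    normalized residual energy and trust value, both in [0, 1]."""
    if energy < e_thresh or trust < t_thresh:
        return 0.0
    return alpha * energy + (1.0 - alpha) * trust
```

In the paper the thresholds are additionally adjusted dynamically as the network's energy and trust levels change, so the eligible set shrinks or grows over rounds.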


Subjects
Algorithms, Awareness, Cluster Analysis, Reproducibility of Results
16.
Sensors (Basel) ; 23(3)2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772503

ABSTRACT

Continuous advancements of technologies such as machine-to-machine interactions and big data analysis have led to the internet of things (IoT) making information sharing and smart decision-making possible using everyday devices. On the other hand, swarm intelligence (SI) algorithms seek to establish constructive interaction among agents regardless of their intelligence level. In SI algorithms, multiple individuals run simultaneously and possibly in a cooperative manner to address complex nonlinear problems. In this paper, the application of SI algorithms in IoT is investigated with a special focus on the internet of medical things (IoMT). The role of wearable devices in IoMT is briefly reviewed. Existing works on applications of SI in addressing IoMT problems are discussed. Possible problems include disease prediction, data encryption, missing values prediction, resource allocation, network routing, and hardware failure management. Finally, research perspectives and future trends are outlined.


Subjects
Internet of Things, Wearable Electronic Devices, Humans, Algorithms, Cognition, Intelligence, Internet
17.
Sensors (Basel) ; 23(3)2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36772680

ABSTRACT

Given its advantages in low latency, fast response, context-aware services, mobility, and privacy preservation, edge computing has emerged as the key support for intelligent applications and 5G/6G Internet of things (IoT) networks. This technology extends the cloud by providing intermediate services at the edge of the network and improving the quality of service for latency-sensitive applications. Many AI-based solutions with machine learning, deep learning, and swarm intelligence have exhibited the high potential to perform intelligent cognitive sensing, intelligent network management, big data analytics, and security enhancement for edge-based smart applications. Despite its many benefits, there are still concerns about the required capabilities of intelligent edge computing to deal with the computational complexity of machine learning techniques for big IoT data analytics. Resource constraints of edge computing, distributed computing, efficient orchestration, and synchronization of resources are all factors that require attention for quality of service improvement and cost-effective development of edge-based smart applications. In this context, this paper aims to explore the confluence of AI and edge in many application domains in order to leverage the potential of the existing research around these factors and identify new perspectives. The confluence of edge computing and AI improves the quality of user experience in emergency situations, such as in the Internet of vehicles, where critical inaccuracies or delays can lead to damage and accidents. These are the same factors that most studies have used to evaluate the success of an edge-based application. In this review, we first provide an in-depth analysis of the state of the art of AI in edge-based applications with a focus on eight application areas: smart agriculture, smart environment, smart grid, smart healthcare, smart industry, smart education, smart transportation, and security and privacy. 
Then, we present a qualitative comparison that emphasizes the main objective of the confluence, the roles and use of artificial intelligence at the network edge, and the key enabling technologies for edge analytics. Next, open challenges, future research directions, and perspectives are identified and discussed. Finally, some conclusions are drawn.

18.
J Digit Imaging ; 36(2): 401-413, 2023 04.
Article in English | MEDLINE | ID: mdl-36414832

ABSTRACT

Radiologists today play a central role in making diagnostic decisions and labeling images for training and benchmarking artificial intelligence (AI) algorithms. A key concern is the low inter-reader reliability (IRR) seen between experts when interpreting challenging cases. While team-based decisions are known to outperform individual decisions, interpersonal biases often creep into group interactions and limit nondominant participants from expressing their true opinions. To overcome the dual problems of low consensus and interpersonal bias, we explored a solution modeled on bee swarms. Two separate cohorts, three board-certified radiologists (cohort 1) and five radiology residents (cohort 2), collaborated on a digital swarm platform in real time and in a blinded fashion, grading meniscal lesions on knee MR exams. These consensus votes were benchmarked against clinical (arthroscopy) and radiological (senior-most radiologist) standards of reference using Cohen's kappa. The IRR of the consensus votes was then compared to the IRR of the majority and most-confident votes of the two cohorts. IRR was also calculated for predictions from a meniscal lesion-detecting AI algorithm. The attending cohort saw an improvement of 23% in the IRR of swarm votes (k = 0.34) over the majority vote (k = 0.11). A similar improvement of 23% in IRR (k = 0.25) was observed for 3-resident swarm votes over the majority vote (k = 0.02). The 5-resident swarm had an even larger improvement of 30% in IRR (k = 0.37) over the majority vote (k = 0.07). The swarm consensus votes outperformed individual and majority-vote decisions in both the radiologist and resident cohorts. The attending and resident swarms also outperformed predictions from a state-of-the-art AI algorithm.
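Cohen's kappa, the agreement statistic used throughout this study, can be computed as:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa between two raters' label sequences: observed agreement
    corrected for the agreement expected by chance from each rater's
    marginal label frequencies. 1 = perfect agreement, 0 = chance level."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)      # chance agreement
    return (po - pe) / (1.0 - pe) if pe != 1.0 else 1.0
```

The chance-correction term is why a cohort can have high raw agreement yet a kappa near zero when one label dominates, as with the residents' majority votes here.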


Subjects
Artificial Intelligence, Radiologists, Animals, Humans, Consensus, Reproducibility of Results, Intelligence
19.
Prep Biochem Biotechnol ; 53(6): 690-703, 2023.
Article in English | MEDLINE | ID: mdl-36269079

ABSTRACT

Serratia marcescens strain UCCM 00009 produces a mixture of gelatinase and keratinase that facilitates feather degradation, and the concomitant production of prodigiosin could make waste feather valorization biotechnologically more attractive. This article describes prodigiosin fermentation through co-valorization of waste feathers and waste peanut frying oil by S. marcescens UCCM 00009 for anticancer, antioxidant, and esthetic applications. The stochastic conditions for waste feather degradation (WFD), modeled by multi-objective particle-swarm-embedded neural network optimization (ANN-PSO), revealed a gelatinase/keratinase ratio of 1.71 for optimal prodigiosin production and WFD. Luedeking-Piret kinetics revealed a non-exclusive, non-growth-associated prodigiosin yield of 9.66 g/L from the degradation of 88.55% of the waste feathers within 96 h. The polyethylene glycol (PEG) 6000/Na+ citrate aqueous two-phase system-purified serratiopeptidase demonstrated gelatinolytic and keratinolytic activities that were stable for 240 h at 55 °C and pH 9.0. In vitro evaluations revealed that the prodigiosin inhibited methicillin-resistant Staphylococcus aureus at an IC50 of 4.95 µg/mL, the plant pathogen Sclerotinia sclerotiorum at an IC50 of 2.58 µg/mL, breast carcinoma at an IC50 of 0.60 µg/mL, and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical at an IC50 of 96.63 µg/mL. The pigment also demonstrated commendable dyeing potential for fiber and cotton textile fabrics. The technology promises cost-effective prodigiosin development through sustainable co-management of waste feathers and waste frying oil.
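For context, the Luedeking-Piret model cited above relates product formation rate to biomass growth; in its conventional form (symbols below are the standard ones, not taken from the paper):

```latex
\frac{dP}{dt} = \alpha \frac{dX}{dt} + \beta X
```

where \(P\) is product (here, prodigiosin) concentration, \(X\) is biomass concentration, \(\alpha\) is the growth-associated coefficient, and \(\beta\) is the non-growth-associated coefficient. A "non-growth-associated" product, as reported above, corresponds to \(\alpha \approx 0\), so formation is driven by the \(\beta X\) term rather than by active growth.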


Subjects
Methicillin-Resistant Staphylococcus aureus, Prodigiosin, Animals, Feathers, Heuristics, Serratia marcescens
20.
Psychiatr Danub ; 35(3): 355-368, 2023.
Article in English | MEDLINE | ID: mdl-37917841

ABSTRACT

BACKGROUND: Anxiety and depression are two leading human psychological disorders. In this work, several swarm intelligence-based metaheuristic techniques were employed to find an optimal feature set for the diagnosis of these two disorders. SUBJECTS AND METHODS: To diagnose depression and anxiety, a dataset comprising 1128 instances and 46 attributes was considered and examined. The dataset was collected and compiled manually by visiting a number of clinics situated in different cities of Haryana (one of the states of India). Nine emerging metaheuristic techniques (Genetic Algorithm, binary Grey Wolf Optimizer, Ant Colony Optimization, Particle Swarm Optimization, Artificial Bee Colony, Firefly Algorithm, Dragonfly Algorithm, Bat Algorithm, and Whale Optimization Algorithm) were then employed to find the optimal feature set for diagnosing depression and anxiety. To avoid local optima and to maintain the balance between exploration and exploitation, a new hybrid feature selection technique called the Restricted Crossover Mutation based Whale Optimization Algorithm (RCM-WOA) was designed. RESULTS: The swarm intelligence-based metaheuristic algorithms were applied to the dataset. Their performance was evaluated using metrics such as accuracy, sensitivity, specificity, precision, recall, F-measure, error rate, execution time, and convergence curve. The proposed RCM-WOA method achieved an accuracy of 91.4%. CONCLUSION: Depression and anxiety are two critical psychological disorders that may lead to other chronic and life-threatening disorders. The proposed algorithm (RCM-WOA) was found to be more suitable than the other state-of-the-art methods.
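RCM-WOA builds on the standard Whale Optimization Algorithm. As a hedged sketch of plain WOA only (the restricted crossover-mutation extension and the clinical feature-selection setup are not reproduced here), minimizing a toy continuous objective:

```python
import math
import random

def woa_minimize(f, dim, bounds, n_whales=20, iters=200, seed=0):
    """Basic Whale Optimization Algorithm: encircling-prey, search-for-prey,
    and spiral (bubble-net) updates, with positions clipped to the bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(whales, key=f)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters  # control parameter shrinks linearly 2 -> 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:
                # |A| < 1: exploit by encircling the best whale;
                # otherwise explore toward a randomly chosen whale.
                target = best if abs(A) < 1 else whales[rng.randrange(n_whales)]
                whales[i] = [clip(target[j] - A * abs(C * target[j] - whales[i][j]))
                             for j in range(dim)]
            else:
                # Spiral (bubble-net) move around the current best.
                l = rng.uniform(-1.0, 1.0)
                whales[i] = [clip(abs(best[j] - whales[i][j]) * math.exp(l)
                                  * math.cos(2.0 * math.pi * l) + best[j])
                             for j in range(dim)]
            if f(whales[i]) < f(best):
                best = whales[i][:]
    return best

def sphere(x):
    return sum(v * v for v in x)

best = woa_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

For feature selection, as in the study, a binary variant is typically used instead: each position component is squashed through a transfer function to a 0/1 mask over the 46 attributes, and the objective trades classifier accuracy against the number of selected features.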


Subjects
Depression, Whales, Animals, Humans, Depression/diagnosis, Depression/genetics, Algorithms, Anxiety/diagnosis, Anxiety Disorders