ABSTRACT
The rapidly advancing field of brain-computer interfaces (BCIs) and brain-to-brain interfaces (BBIs) is stimulating interest across various sectors, including medicine, entertainment, research, and the military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks demand further exploration.
Subject(s)
Brain-Computer Interfaces, Internet, Personal Autonomy, Privacy, Humans, Brain-Computer Interfaces/ethics, Social Responsibility, Blockchain/ethics, Computer Security/ethics, Ownership/ethics, Politics, Cognition, Safety, Technology/ethics
ABSTRACT
This article studies the correlation between the emotional state of a person and the posture of his or her body in the sitting position. To carry out the study, we developed the first version of a hardware-software system based on a posturometric armchair, which allows the characteristics of the posture of a sitting person to be evaluated using strain gauges. Using this system, we revealed a correlation between sensor readings and human emotional states. We showed that a characteristic set of sensor readings forms for each emotional state of a person. We also found that the groups of triggered sensors, their composition, their number, and their location are specific to a particular person, which led to the need to build personalized digital pose models for each person. The intellectual component of our hardware-software complex is based on the concept of co-evolutionary hybrid intelligence. The system can be used during medical diagnostic procedures and rehabilitation processes, as well as for monitoring people whose professional activity involves increased psycho-emotional load, which can cause cognitive disorders, fatigue, and professional burnout and can lead to the development of diseases.
Subject(s)
Emotions, Posture, Humans, Male, Female, Sitting Position, Computers, Software
ABSTRACT
Sleep staging is a vital aspect of sleep assessment, serving as a critical tool for evaluating sleep quality and identifying sleep disorders. Manual sleep staging is laborious, while automatic sleep staging is seldom utilized in clinical practice due to the inadequate accuracy and interpretability of the classification results of automatic sleep staging models. In this work, a hybrid intelligent model is presented for automatic sleep staging that integrates data intelligence and knowledge intelligence to attain a balance between accuracy, interpretability, and generalizability in sleep stage classification. Specifically, it operates on any combination of typical electroencephalography (EEG) and electrooculography (EOG) channels and comprises a temporal fully convolutional network based on the U-Net architecture and a multi-task feature mapping structure. The experimental results show that, compared to current interpretable automatic sleep staging models, our model achieves a Macro-F1 score of 0.804 on the ISRUC dataset and 0.780 on the Sleep-EDFx dataset. Moreover, we use knowledge intelligence to address excessive jumps and unreasonable sleep stage transitions in the coarse sleep graphs produced by the model, and we explore the different ways knowledge intelligence affects coarse sleep graphs by combining different sleep graph correction methods. Our research can offer convenient support for sleep physicians, indicating its significant potential for improving the efficiency of clinical sleep staging.
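The knowledge-based correction of coarse sleep graphs mentioned above can be illustrated in a minimal form. The model's actual correction rules are not given in this abstract; the rule below (replacing an isolated one-epoch stage flanked by two identical stages) and the function name are assumptions for illustration only.

```python
def smooth_hypnogram(stages):
    """Remove implausible one-epoch jumps from a coarse hypnogram.

    Illustrative rule only: an isolated epoch whose two neighbours
    agree with each other but not with it is treated as an
    unreasonable transition and replaced by the neighbours' stage.
    """
    out = list(stages)
    for i in range(1, len(out) - 1):
        if out[i - 1] == out[i + 1] and out[i] != out[i - 1]:
            out[i] = out[i - 1]
    return out
```

For example, the sequence W, W, N2, W, W contains a one-epoch jump into N2 that would typically be rejected on physiological grounds; the rule above smooths it back to W, while a plausible W, N1, N1, N2 progression is left untouched.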
Subject(s)
Sleep Stages, Sleep, Polysomnography/methods, Electroencephalography/methods, Electrooculography/methods
ABSTRACT
It is uncertain how the application of artificial intelligence (AI) technology transforms industrial work. We address this question from the perspective of cognitive systems, which, in this case, includes considerations of AI and process transparency, resilience, division of labor, and worker skills. We draw from a case study on glass tempering that includes a machine-vision-based quality control system and an advanced automation process control system. Based on task analysis and background literature, we develop a concept of hybrid intelligence that implies balanced AI transparency supporting upskilling and resilience. So-called fragmented intelligence, in turn, may result from the complexity of advanced automation combined with the complexity of the process physics, which places critical emphasis on expert knowledge. This combination can produce a so-called "double black box effect", given that designing for understandability for the line workers might not be feasible: expert networks are needed for resilience.
Subject(s)
Artificial Intelligence, Humans, Task Performance and Analysis, Industry, Glass, Automation
ABSTRACT
Organ segmentation is a crucial task in various medical imaging applications. Many deep learning models have been developed for this purpose, but they are slow and require substantial computational resources. To address this problem, attention mechanisms are used that can locate important objects of interest within medical images, allowing the model to segment them accurately even in the presence of noise or artifacts. By paying attention to specific anatomical regions, the model becomes better at segmentation. Medical images carry unique features in the form of anatomical information, which distinguishes them from natural images. Unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models to find attention regions in medical images for deep-learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on unique test data and analyzed metrics including the false negatives quotient and false positives quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models suitable for each anatomic object. This work opens new possibilities for advancements in medical image segmentation and analysis.
ABSTRACT
Organ segmentation is a fundamental requirement in medical image analysis. Many methods have been proposed for segmentation over the past six decades. A unique feature of medical images is the anatomical information hidden within the image itself. To bring natural intelligence (NI) in the form of anatomical information accumulated over centuries into deep learning (DL) AI methods effectively, we have recently introduced the idea of hybrid intelligence (HI), which combines NI and AI, and a system based on HI to perform medical image segmentation. This HI system has shown remarkable robustness to image artifacts, pathology, deformations, etc., in segmenting organs in the Thorax body region in a multicenter clinical study. The HI system utilizes an anatomy modeling strategy to encode NI and to identify a rough container region in the shape of each object via a non-DL-based approach, so that DL training and execution are applied only to the fuzzy container region. In this paper, we introduce several advances related to modeling of the NI component so that it becomes substantially more efficient computationally and, at the same time, is well integrated with the DL portion (AI component) of the system. We demonstrate a 9- to 40-fold computational improvement in the auto-segmentation task for radiation therapy (RT) planning via clinical studies obtained from 4 different RT centers, while retaining the state-of-the-art accuracy of the previous system in segmenting 11 objects in the Thorax body region.
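The container strategy described above, in which DL training and execution are restricted to a rough container region per object, can be sketched in a minimal form. The helper below is an assumption for illustration only: it crops to the bounding box of a thresholded mask, whereas the actual NI component builds fuzzy anatomy models of object shape rather than simple boxes.

```python
import numpy as np

def crop_to_container(image, container_mask, margin=2):
    """Crop a 2-D image to the bounding box of a container mask,
    plus a small margin, so that DL training/inference is applied
    only where the object can plausibly reside.

    Illustrative only: the paper's container is a fuzzy anatomy
    model, not a rectangular bounding box.
    """
    ys, xs = np.nonzero(container_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1], (y0, y1, x0, x1)
```

Restricting the DL stage to such a region is what yields the computational savings: the network never sees the (typically much larger) portion of the image where the object cannot occur.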
ABSTRACT
Introduction: This study explores the role and potential of large language models (LLMs) and generative intelligence in the fashion industry. These technologies are reshaping traditional methods of design, production, and retail, leading to innovation, product personalization, and enhanced customer interaction. Methods: Our research analyzes the current applications and limitations of LLMs in fashion, identifying challenges such as the need for better spatial understanding and design detail processing. We propose a hybrid intelligence approach to address these issues. Results: We find that while LLMs offer significant potential, their integration into fashion workflows requires improvements in understanding spatial parameters and creating tools for iterative design. Discussion: Future research should focus on overcoming these limitations and developing hybrid intelligence solutions to maximize the potential of LLMs in the fashion industry.
ABSTRACT
A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, responsibility, and other key concepts of our moral landscape. In this paper we take a closer look at this class of technologies - Technologies for Collective Minds - to see not only how their implementation may react with commonly held moral values, but also how they challenge our underlying concepts of what constitutes collective or individual agency. We argue that prominent contemporary frameworks for understanding collective agency and responsibility are insufficient in terms of accurately describing the relationships enabled by Technologies for Collective Minds, and that they therefore risk obstructing ethical analysis of the implementation of these technologies in society. We propose a more multidimensional approach to better understand this set of technologies, and to facilitate future research on the ethics of Technologies for Collective Minds.
ABSTRACT
The scientific community has been looking for novel approaches to develop nanostructures inspired by nature. However, due to the complicated processes involved, controlling the height of these nanostructures is challenging. Nanoscale capillary force lithography (CFL) is one such approach, in which the properties of a photopolymer are altered by exposing it to ultraviolet (UV) radiation. Nonetheless, the working mechanism of CFL is not fully understood due to a lack of sufficient information and first principles. One of these obscure behaviors is the sudden jump phenomenon: the sudden change in the height of the photopolymer depending on the UV exposure time and the height of the nano-grating (based on experimental data). This paper uses known physical principles alongside artificial intelligence to uncover the unknown physical principles responsible for the sudden jump phenomenon. The approach showed promise in identifying air diffusivity, dynamic viscosity, surface tension, and electric potential as the previously unknown physical principles that collectively explain the sudden jump phenomenon.
ABSTRACT
Diagnostic errors impact patient health and healthcare costs. Artificial Intelligence (AI) shows promise in mitigating this burden by supporting medical doctors in decision-making. However, the mere display of excellent or even superhuman performance by AI in specific tasks does not guarantee a positive impact on medical practice. Effective AI assistance should target the primary causes of human errors and foster effective collaborative decision-making with human experts, who remain the ultimate decision-makers. In this narrative review, we apply these principles to the specific scenario of AI assistance during colonoscopy. By unraveling the neurocognitive foundations of the colonoscopy procedure, we identify multiple bottlenecks in perception, attention, and decision-making that contribute to diagnostic errors, shedding light on potential interventions to mitigate them. Furthermore, we explore how existing AI devices fare in clinical practice and whether they achieve optimal integration with the human decision-maker. We argue that to foster optimal human-AI collaboration, future research should expand our knowledge of the factors influencing AI's impact, establish evidence-based cognitive models, and develop training programs based on them. These efforts will enhance human-AI collaboration, ultimately improving diagnostic accuracy and patient outcomes. The principles illuminated in this review hold more general value, extending their relevance to a wide array of medical procedures and beyond.
ABSTRACT
Putting real-time medical data processing applications into practice comes with challenges such as scalability and performance. Processing medical images from different collaborators is an example of such an application, in which chest X-ray data are processed to extract knowledge. With central processing techniques, it is difficult to process data and obtain the required information in real time once the data grow very large. In this paper, instead of classical centralized computation, real-time data are filtered and forwarded to the right processing node using the proposed topic-based hierarchical publish/subscribe messaging middleware in a distributed, scalable network of collaborating computation nodes. This enables processing streaming medical data in near real time and makes a warning system possible. End users can filter and search the stream; returned search results are images (COVID-19 or non-COVID-19) together with metadata such as gender and age. Here, COVID-19 is detected from chest X-ray images using a novel capsule-network-based model. The middleware also allows for a smaller search space and shorter times for obtaining search results.
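The topic-based filtering at the heart of such a publish/subscribe middleware can be sketched as follows. The topic grammar here (slash-separated levels, `*` matching exactly one level, `#` matching any remainder) is an assumed convention for illustration, not necessarily the one used by the proposed middleware.

```python
def topic_matches(subscription, topic):
    """Hierarchical topic match for routing messages to subscribers.

    Assumed grammar (illustrative): levels are slash-separated,
    '*' matches exactly one level, '#' matches all remaining levels.
    """
    sub, top = subscription.split("/"), topic.split("/")
    for i, part in enumerate(sub):
        if part == "#":
            return True
        if i >= len(top) or (part != "*" and part != top[i]):
            return False
    return len(sub) == len(top)
```

A broker using such matching forwards, say, a chest X-ray event published under `xray/male/covid` only to subscribers of patterns like `xray/*/covid` or `xray/#`, which is what keeps each node's search space small.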
ABSTRACT
Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have used volumetric information with deep networks to achieve segmentation accuracy. However, these networks suffer from interference, a risk of overfitting, and low accuracy resulting from artifacts in the case of very challenging objects such as the brachial plexuses. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., natural intelligence (NI)) with deep learning (i.e., artificial intelligence (AI)) for the recognition and delineation of the thoracic brachial plexuses (BPs) in computed tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting the thoracic right and left brachial plexuses, consisting of two key stages. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant for the task at hand, wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a dataset of 125 thorax images acquired for radiation therapy planning of tumors in the thorax and achieves a Dice coefficient of 0.659.
ABSTRACT
BACKGROUND: Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE: We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate performance in the application of radiation therapy (RT) planning via multisite clinical evaluation. METHODS: The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination from (iv) facilitate the HI system. RESULTS: The HI system was tested on 26 organs in neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Data sets from one separate independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 data sets from the 4 RT centers were utilized for testing on neck and thorax, respectively. 
In the testing data sets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers, utilizing an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via the Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency via the human time saved in contouring by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved an overall 70% of human time relative to clinical contouring, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS: The HI system is observed to behave like an expert human in robustness in the contouring task, but vastly more efficiently. It appears to draw on NI where image information alone does not suffice to decide: first for the correct localization of the object, and then for the precise delineation of its boundary.
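The Dice coefficient used as the accuracy measure above has a standard definition: twice the overlap of two binary masks divided by the sum of their sizes. A minimal NumPy sketch (the empty-mask convention of returning 1.0 is our assumption, not something stated in the study):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient between two binary masks (1 = object).

    DC = 2 * |A ∩ B| / (|A| + |B|); ranges from 0 (no overlap)
    to 1 (identical masks).
    """
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement (assumption)
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```

The Hausdorff boundary distance complements DC by penalising the worst boundary disagreement rather than rewarding bulk overlap, which is why the two are typically reported together.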
Subject(s)
Artificial Intelligence, Humans, Cone-Beam Computed Tomography
ABSTRACT
Target prioritization is essential for drug discovery and repositioning. Applying computational methods to analyze and process multi-omics data to find new drug targets is a practical approach to achieving this. Despite an increasing number of methods for generating datasets such as genomics, phenomics, and proteomics, attempts to integrate and mine such datasets remain limited in scope. Developing hybrid intelligence solutions that combine human intelligence in the scientific domain and disease biology with the ability to mine multiple databases simultaneously may help augment drug target discovery and identify novel drug-indication associations. We believe that integrating different data sources using a singular numerical scoring system in a hybrid intelligent framework could help to bridge these different omics layers and facilitate rapid drug target prioritization for studies in drug discovery, development, or repositioning. Herein, we describe our prototype of the StarGazer pipeline, which combines multi-source, multi-omics data with a novel target prioritization scoring system in an interactive Python-based Streamlit dashboard. StarGazer displays target prioritization scores for genes associated with 1844 phenotypic traits and is available via https://github.com/AstraZeneca/StarGazer.
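The abstract does not disclose StarGazer's actual scoring formula. As an illustration of the general idea of a singular numerical score bridging omics layers, one could min-max normalise each evidence source across genes and average the available sources per gene; the function, data shape, and gene names below are all hypothetical.

```python
def prioritize_targets(evidence):
    """Rank genes by a single aggregate score across evidence sources.

    evidence: {gene: {source: raw_score}}. Each source is min-max
    normalised across genes, then the available sources per gene
    are averaged. Illustrative scheme only; StarGazer's real
    scoring system is not described in the abstract.
    """
    sources = {s for scores in evidence.values() for s in scores}
    bounds = {}
    for s in sources:
        vals = [v[s] for v in evidence.values() if s in v]
        bounds[s] = (min(vals), max(vals))
    ranked = {}
    for gene, scores in evidence.items():
        norm = []
        for s, raw in scores.items():
            lo, hi = bounds[s]
            norm.append((raw - lo) / (hi - lo) if hi > lo else 1.0)
        ranked[gene] = sum(norm) / len(norm)
    return dict(sorted(ranked.items(), key=lambda kv: -kv[1]))
```

Collapsing heterogeneous omics evidence to one number is what makes rapid ranking across thousands of gene-trait pairs feasible in a dashboard, at the cost of hiding which layer drove each score.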
ABSTRACT
PURPOSE: The integration of artificial intelligence into medical practices has recently been advocated for its promise to bring increased efficiency and effectiveness to these practices. Nonetheless, little research has so far aimed at understanding the best human-AI interaction protocols in collaborative tasks, even in currently more viable settings like independent double-reading screening tasks. METHODS: To this aim, we report on a retrospective case-control study, involving 12 board-certified radiologists, on the detection of knee lesions by means of magnetic resonance imaging, in which we simulated the serial combination of two deep learning models with humans in eight double-reading protocols. Inspired by the so-called Kasparov's laws, we investigate whether the combination of humans and AI models can achieve better performance than AI models alone, and whether weak readers, when supported by fit-for-use interaction protocols, can outperform stronger readers. RESULTS: We discuss two main findings: groups of humans who perform significantly worse than a state-of-the-art AI can significantly outperform it if their judgements are aggregated by majority voting (in concordance with the first part of Kasparov's law); and small ensembles of significantly weaker readers can significantly outperform teams of stronger readers supported by the same computational tool, when the judgments of the former are combined within "fit-for-use" protocols (in concordance with the second part of Kasparov's law). CONCLUSION: Our study shows that good interaction protocols can guarantee improved decision performance that easily surpasses the performance of individual agents, even of realistic super-human AI systems. This finding highlights the importance of focusing on how to guarantee better cooperation within human-AI teams, so as to enable safer and more humanly sustainable care practices.
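The majority-voting aggregation invoked in the first finding can be sketched directly. The tie-breaking rule below is our assumption, since the study's aggregation details are not given in this abstract.

```python
from collections import Counter

def majority_vote(judgements):
    """Aggregate binary reader judgements (True = lesion present).

    Simple majority; ties are broken toward positive to favour
    sensitivity (an assumption -- the study's tie rule is not
    stated in the abstract).
    """
    counts = Counter(judgements)
    return counts[True] >= counts[False]
```

The point of the first Kasparov-style finding is exactly this: each individual judgement may be worse than the AI's, yet the aggregated vote of several such readers can beat it, because independent errors tend to cancel under majority voting.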