ABSTRACT
How neurons detect the direction of motion is a prime example of neural computation: motion vision is found in the visual systems of virtually all sighted animals; it is important for survival; and it requires interesting computations with well-defined linear and nonlinear processing steps, yet the whole process is of moderate complexity. The genetic methods available in the fruit fly Drosophila and the charting of a connectome of its visual system have led to rapid progress and unprecedented detail in our understanding of how neurons compute the direction of motion in this organism. The picture that emerged incorporates not only the identity, morphology, and synaptic connectivity of each neuron involved but also its neurotransmitters, its receptors, and their subcellular localization. Together with the neurons' membrane potential responses to visual stimulation, this information provides the basis for a biophysically realistic model of the circuit that computes the direction of visual motion.
Subject(s)
Motion Perception , Animals , Motion Perception/physiology , Visual Pathways/physiology , Drosophila/physiology , Vision, Ocular , Neurons/physiology , Photic Stimulation
ABSTRACT
Images projected onto the retina of an animal eye are rarely still. Instead, they usually contain motion signals originating either from moving objects or from retinal slip caused by self-motion. Accordingly, motion signals tell the animal in which direction a predator, prey, or the animal itself is moving. At the neural level, visual motion detection has been proposed to extract directional information by a delay-and-compare mechanism, representing a classic example of neural computation. Neurons responding selectively to motion in one but not in the other direction have been identified in many systems, most prominently in the mammalian retina and the fly optic lobe. Technological advances have now allowed researchers to characterize these neurons' upstream circuits in exquisite detail. Focusing on these upstream circuits, we review and compare recent progress in understanding the mechanisms that generate direction selectivity in the early visual system of mammals and flies.
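The delay-and-compare mechanism referred to above can be sketched as a minimal Hassenstein-Reichardt correlator: each of two neighboring inputs is delayed and multiplied with the undelayed signal of its neighbor, and the two mirror-symmetric half-detectors are subtracted. The pulse stimulus, time step, and delay value below are illustrative assumptions, not taken from the reviewed studies.

```python
import numpy as np

def reichardt_output(a, b, delay):
    """Hassenstein-Reichardt correlator on two adjacent photoreceptor
    signals a and b (1-D arrays over time).  Each signal is delayed and
    multiplied with the undelayed neighbor; the difference of the two
    mirror-symmetric half-detectors yields a signed direction estimate."""
    a_d = np.roll(a, delay)   # delayed copy of input A
    b_d = np.roll(b, delay)   # delayed copy of input B
    a_d[:delay] = 0.0         # discard wrap-around samples
    b_d[:delay] = 0.0
    return a_d * b - b_d * a  # >0 for A->B motion, <0 for B->A

# A pulse moving from A to B: B sees it `delay` time steps after A.
t = np.zeros(100)
t[40] = 1.0
a = t
b = np.roll(t, 5)
r = reichardt_output(a, b, delay=5)
print(r.sum())  # positive: preferred-direction motion
```

Swapping the two inputs (motion in the opposite direction) flips the sign of the summed output, which is the direction selectivity the correlator is built to produce.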
Subject(s)
Motion Perception/physiology , Neurons/physiology , Retina/physiology , Visual Pathways/physiology , Animals , Humans , Motion
ABSTRACT
The ability to detect and assess world-relative object-motion is a critical computation performed by the visual system. This computation, however, is greatly complicated by the observer's movements, which generate a global pattern of motion on the observer's retina. How the visual system implements this computation is poorly understood. Since we are potentially able to detect a moving object if its motion differs in velocity (or direction) from the expected optic flow generated by our own motion, here we manipulated the relative motion velocity between the observer and the object within a stationary scene as a strategy to test how the brain accomplishes object-motion detection. Specifically, we tested the neural sensitivity of brain regions that are known to respond to egomotion-compatible visual motion (i.e., egomotion areas: cingulate sulcus visual area, posterior cingulate sulcus area, posterior insular cortex [PIC], V6+, V3A, IPSmot/VIP, and MT+) to a combination of different velocities of visually induced translational self- and object-motion within a virtual scene while participants were instructed to detect object-motion. To this aim, we combined individual surface-based brain mapping, task-evoked activity by functional magnetic resonance imaging, and parametric and representational similarity analyses. We found that all the egomotion regions (except area PIC) responded to all the possible combinations of self- and object-motion and were modulated by the self-motion velocity. Interestingly, we found that, among all the egomotion areas, only MT+, V6+, and V3A were further modulated by object-motion velocities, hence reflecting their possible role in discriminating between distinct velocities of self- and object-motion. We suggest that these egomotion regions may be involved in the complex computation required for detecting scene-relative object-motion during self-motion.
Subject(s)
Motion Perception , Neocortex , Humans , Motion Perception/physiology , Brain Mapping , Motion , Gyrus Cinguli , Photic Stimulation/methods
ABSTRACT
Mechanoluminescence (ML)-based sensors are emerging as promising wearable devices, attracting attention for their self-powered visualization of mechanical stimuli. However, challenges such as weak brightness, high activation threshold, and intermittent signal output have hindered their development. Here, a mechanoluminescent/electric dual-mode strain sensor is presented that offers enhanced ML sensing and reliable electrical sensing simultaneously. The strain sensor is fabricated via an optimized dip-coating method, featuring a sandwich structure with a single-walled carbon nanotube (SWNT) interlayer and two polydimethylsiloxane (PDMS)/ZnS:Cu luminescence layers. The integral mechanical reinforcement framework provided by the SWNT interlayer improves the ML intensity of the SWNT/PDMS/ZnS:Cu composite film. Compared to conventional nanoparticle fillers, the ML intensity is enhanced nearly tenfold with a trace amount of SWNT (only 0.01 wt.%). In addition, the excellent electrical conductivity of SWNT forms a conductive network, ensuring continuous and stable electrical sensing. These strain sensors enable comprehensive and precise monitoring of human behavior through both electrical (relative resistance change) and optical (ML intensity) methods, paving the way for the development of advanced visual sensing and smart wearable electronics in the future.
ABSTRACT
PURPOSE: Widening the availability of fetal MRI with fully automatic real-time planning of radiological brain planes on 0.55T MRI. METHODS: Deep learning-based detection of key brain landmarks on a whole-uterus echo planar imaging scan enables the subsequent fully automatic planning of the radiological single-shot Turbo Spin Echo acquisitions. The landmark detection pipeline was trained on over 120 datasets from varying field strengths, echo times, and resolutions and quantitatively evaluated. The entire automatic planning solution was tested prospectively in nine fetal subjects between 20 and 37 weeks. A comprehensive evaluation of all steps, the distance between manual and automatic landmarks, the planning quality, and the resulting image quality was conducted. RESULTS: Prospective automatic planning was performed in real time without latency in all subjects. The landmark detection accuracy was 4.2 ± 2.6 mm for the fetal eyes and 6.5 ± 3.2 mm for the cerebellum, planning quality was 2.4/3 (compared to 2.6/3 for manual planning), and diagnostic image quality was 2.2 compared to 2.1 for manual planning. CONCLUSIONS: Real-time automatic planning of all three key fetal brain planes was successfully achieved and will pave the way toward simplifying the acquisition of fetal MRI, thereby widening the availability of this modality in nonspecialist centers.
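The planning step described above relies on three detected landmarks (the two fetal eyes and the cerebellum). A generic way to turn three landmarks into an imaging plane, shown here as a sketch rather than the paper's exact planning rule, is to take the plane through the three points, with its normal given by a cross product.

```python
import numpy as np

def plane_from_landmarks(eye_l, eye_r, cerebellum):
    """Define an imaging plane from three detected landmarks.
    Returns the plane's unit normal and a point on the plane
    (the centroid of the landmarks)."""
    p = np.array([eye_l, eye_r, cerebellum], dtype=float)
    n = np.cross(p[1] - p[0], p[2] - p[0])  # normal of the spanned plane
    n /= np.linalg.norm(n)
    return n, p.mean(axis=0)

# Hypothetical landmark coordinates in scanner space (mm), all in z = 0:
n, c = plane_from_landmarks([0, 0, 0], [40, 0, 0], [20, 60, 0])
print(n, c)  # normal along z (up to sign), centroid (20, 20, 0)
```

In a real planner the returned normal and point would parameterize the field-of-view orientation of the subsequent single-shot acquisition.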
Subject(s)
Brain , Fetus , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Brain/diagnostic imaging , Brain/embryology , Magnetic Resonance Imaging/methods , Female , Pregnancy , Fetus/diagnostic imaging , Image Processing, Computer-Assisted/methods , Deep Learning , Prenatal Diagnosis/methods , Prospective Studies , Echo-Planar Imaging/methods , Algorithms , Image Interpretation, Computer-Assisted/methods
ABSTRACT
OBJECTIVE: To address the challenge of assessing sedation status in critically ill patients in the intensive care unit (ICU), we aimed to develop a non-contact automatic classifier of agitation using artificial intelligence and deep learning. METHODS: We collected the video recordings of ICU patients and cut them into 30-second (30-s) and 2-second (2-s) segments. All of the segments were annotated with the status of agitation as "Attention" and "Non-attention". After transforming the video segments into movement quantification, we constructed the models of agitation classifiers with Threshold, Random Forest, and LSTM and evaluated their performances. RESULTS: The video recording segmentation yielded 427 30-s and 6405 2-s segments from 61 patients for model construction. The LSTM model achieved remarkable accuracy (ACC 0.92, AUC 0.91), outperforming other methods. CONCLUSION: Our study proposes an advanced monitoring system combining LSTM and image processing to ensure mild patient sedation in ICU care. LSTM proves to be the optimal choice for accurate monitoring. Future efforts should prioritize expanding data collection and enhancing system integration for practical application.
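The Threshold baseline mentioned above can be sketched in a few lines: quantify movement per segment, then label segments by comparing the score to a calibrated cutoff. The movement-quantification choice (mean absolute frame difference) and the threshold value are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np

def movement_score(frames):
    """Quantify motion in a video segment as the mean absolute
    frame-to-frame intensity difference (a simple stand-in for the
    paper's movement-quantification step)."""
    frames = np.asarray(frames, dtype=float)
    return float(np.abs(np.diff(frames, axis=0)).mean())

def threshold_classifier(frames, thresh):
    """Label a segment 'Attention' (agitation suspected) when its
    movement score exceeds the threshold, else 'Non-attention'."""
    return "Attention" if movement_score(frames) > thresh else "Non-attention"

rng = np.random.default_rng(0)
still = np.full((30, 8, 8), 0.5)       # constant frames: no motion
moving = rng.random((30, 8, 8))        # large frame-to-frame changes
print(threshold_classifier(still, 0.1), threshold_classifier(moving, 0.1))
```

The LSTM model in the study replaces the fixed cutoff with a learned temporal model over the same kind of movement-quantification sequence.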
Subject(s)
Deep Learning , Psychomotor Agitation , Humans , Psychomotor Agitation/diagnosis , Artificial Intelligence , Intensive Care Units , Critical Care
ABSTRACT
Accurately capturing human movements is a crucial element of health status monitoring and a necessary precondition for realizing future virtual reality/augmented reality applications. Flexible motion sensors with exceptional sensitivity are capable of detecting physical activities by converting them into resistance fluctuations. Silver nanowires (AgNWs) have become a preferred choice for the development of various types of sensors due to their outstanding electrical conductivity, transparency, and flexibility within polymer composites. Herein, we present the design and fabrication of a flexible strain sensor based on silver nanowires. Suitable substrate materials were selected, and the sensor's sensitivity and fatigue properties were characterized and tested, with the sensor maintaining reliability after 5000 deformation cycles. Different sensors were prepared by controlling the concentration of silver nanowires to achieve the collection of motion signals from various parts of the human body. Additionally, we explored potential applications of these sensors in fields such as health monitoring and virtual reality. In summary, this work integrated the acquisition of different human motion signals, demonstrating great potential for future multifunctional wearable electronic devices.
Subject(s)
Nanowires , Silver , Wearable Electronic Devices , Nanowires/chemistry , Humans , Silver/chemistry , Movement/physiology , Electric Conductivity , Biosensing Techniques/instrumentation , Biosensing Techniques/methods , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods
ABSTRACT
Fiber-based flexible sensors have promising application potential in human motion and healthcare monitoring, owing to their merits of being lightweight, flexible, and easy to process. High-performance elastic fiber-based strain sensors with high sensitivity, a large working range, and excellent durability are now in great demand. Herein, we have easily and quickly prepared a highly sensitive and durable fiber-based strain sensor by dip coating a highly stretchable polyurethane (PU) elastic fiber in an MXene/waterborne polyurethane (WPU) dispersion. Benefiting from the electrostatic repulsion between the negatively charged WPU and MXene sheets in the mixed solution, a very homogeneous and stable MXene/WPU dispersion was obtained, and interconnected conducting networks formed in the coated MXene/WPU shell layer, giving the as-prepared strain sensor a gauge factor of over 960, a large sensing range of over 90%, and a detection limit as low as 0.5% strain. Because the elastic fiber and the dispersion share the same polymer constituent, tight bonding of the MXene/WPU conductive composite on the PU fibers was achieved, enabling the as-prepared strain sensor to endure over 2500 stretching-releasing cycles and thus show good durability. Full-scale human motion detection was also performed with the strain sensor, and a body posture monitoring, analysis, and correction prototype system was developed by embedding the fiber-based strain sensors into sweaters, strongly indicating great application prospects in exercise, sports, and healthcare.
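The reported gauge factor relates relative resistance change to applied strain, GF = (ΔR/R0)/ε. A minimal computation with hypothetical resistance readings (not measurements from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR/R0) / strain: relative resistance change
    per unit strain.  A GF near the reported ~960 means large, easily
    read-out resistance swings even for sub-percent strains."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: resistance rising from 1.0 kOhm to 5.8 kOhm
# at 0.5% strain would correspond to a GF of 960.
print(round(gauge_factor(1.0, 5.8, 0.005)))  # 960
```

The same formula, inverted, lets a readout circuit map a measured resistance back to an estimated strain.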
Subject(s)
Disgust , Nitrites , Transition Elements , Wearable Electronic Devices , Humans , Polyurethanes , Delivery of Health Care
ABSTRACT
This paper addresses the autonomous detection of humans in off-limits mountains, where humans rarely appear and human detection is therefore an extremely rare event. Owing to advances in artificial intelligence, object detection-classification algorithms based on a Convolutional Neural Network (CNN) can be used for this application. However, since off-limits mountains should in general contain no person, it is not desirable to run object detection-classification algorithms continuously, because they are computationally heavy. This paper presents a time-efficient human detector system based on both motion detection and object classification. The proposed scheme runs a motion detection algorithm from time to time. In the camera image, we define a feasible human space where a human can appear. Once motion is detected inside the feasible human space, object classification is enabled only inside the bounding box where motion was detected. Since motion detection inside the feasible human space runs much faster than an object detection-classification method, the proposed approach is suitable for real-time human detection with low computational loads. To the best of our knowledge, no prior work has used a feasible human space in this way. The performance of our human detector system is verified experimentally by comparison with state-of-the-art object detection-classification algorithms (HOG detector, YOLOv7, and YOLOv7-tiny). The accuracy of the proposed system is comparable to these algorithms, while it outperforms them in computational speed: in environments with no humans, the proposed detector runs 62 times faster than YOLOv7 with comparable accuracy.
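The motion-gating idea can be sketched as follows; the frame-differencing rule, ROI coordinates, and threshold are illustrative assumptions standing in for the paper's motion detector. The expensive classifier would only run on the bounding box this function returns.

```python
import numpy as np

def motion_bbox(prev, curr, roi, thresh=0.1):
    """Cheap motion detection restricted to a 'feasible human space'
    region of interest roi = (y0, y1, x0, x1).  Returns the bounding box
    of changed pixels inside the ROI, or None when nothing moved - only
    in the former case would the heavy object classifier be invoked."""
    y0, y1, x0, x1 = roi
    diff = np.abs(curr[y0:y1, x0:x1] - prev[y0:y1, x0:x1]) > thresh
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    return tuple(int(v) for v in
                 (y0 + ys.min(), y0 + ys.max() + 1,
                  x0 + xs.min(), x0 + xs.max() + 1))

prev = np.zeros((100, 100))
curr = prev.copy()
curr[50:60, 20:30] = 1.0                              # change inside the ROI
print(motion_bbox(prev, curr, roi=(40, 90, 0, 100)))  # (50, 60, 20, 30)
```

Frames with no change return None immediately, which is where the reported speedup over always-on CNN detection comes from.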
Subject(s)
Algorithms , Artificial Intelligence , Humans , Motion , Neural Networks, Computer
ABSTRACT
Due to their potential applications in physiological monitoring, diagnosis, human prosthetics, haptic perception, and human-machine interaction, flexible tactile sensors have attracted wide research interest in recent years. Thanks to advances in material engineering, high-performance flexible tactile sensors have been obtained. Among the representative pressure-sensing materials, 2D layered nanomaterials have many properties superior to those of bulk nanomaterials and are better suited to high-performance flexible sensors. As a class of 2D inorganic compounds in materials science, MXene has excellent electrical, mechanical, and biological compatibility. MXene-based composites have proven to be promising candidates for flexible tactile sensors due to their excellent stretchability and metallic conductivity. Therefore, great efforts have been devoted to the development of MXene-based composites for flexible sensor applications. In this paper, the controllable preparation and characterization of MXene are introduced. Then, recent progress on fabrication strategies, operating mechanisms, and device performance of MXene composite-based flexible tactile sensors, including flexible piezoresistive, capacitive, piezoelectric, and triboelectric sensors, is reviewed. After that, the applications of MXene-based flexible electronics in human motion monitoring, healthcare, prosthetics, and artificial intelligence are discussed. Finally, the challenges and perspectives for MXene-based tactile sensors are summarized.
Subject(s)
Artificial Intelligence , Stereognosis , Humans , Electric Conductivity , Electricity
ABSTRACT
PURPOSE: To improve motion robustness of functional fetal MRI scans by developing an intrinsic real-time motion correction method. MRI provides an ideal tool to characterize fetal brain development and growth. It is, however, a relatively slow imaging technique and therefore extremely susceptible to subject motion, particularly in functional MRI experiments acquiring multiple Echo-Planar-Imaging-based repetitions, for example, diffusion MRI or blood-oxygen-level-dependency MRI. METHODS: A 3D UNet was trained on 125 fetal datasets to track the fetal brain position in each repetition of the scan in real time. This tracking, inserted into a Gadgetron pipeline on a clinical scanner, allows updating the position of the field of view in a modified echo-planar imaging sequence. The method was evaluated in real time in controlled-motion phantom experiments and in ten fetal MR studies (17+4 to 34+3 gestational weeks) at 3T. The localization network was additionally tested retrospectively on 29 low-field (0.55T) datasets. RESULTS: Our method achieved real-time fetal head tracking and prospective correction of the acquisition geometry. Localization performance achieved Dice scores of 84.4% and 82.3% for the unseen 1.5T/3T and 0.55T fetal data, respectively, with values higher for cephalic fetuses and increasing with gestational age. CONCLUSIONS: Our technique was able to follow the fetal brain in real time at 3T, even for fetuses under 18 weeks GA, and was successfully applied "offline" to new cohorts at 0.55T. Next, it will be deployed to other modalities such as fetal diffusion MRI and to cohorts of pregnant participants diagnosed with pregnancy complications, for example, pre-eclampsia and congenital heart disease.
Subject(s)
Fetus , Magnetic Resonance Imaging , Female , Humans , Pregnancy , Prospective Studies , Retrospective Studies , Magnetic Resonance Imaging/methods , Fetus/diagnostic imaging , Brain/diagnostic imaging , Motion
ABSTRACT
Self-healable and stretchable elastomeric materials are essential for the development of flexible electronic devices to ensure their stable performance. In this study, a strain sensor (PIH2T1-tri/CNT-3) composed of a self-repairable crosslinked elastomer substrate (PIH2T1-tri, containing multiple reversible repairing sites such as disulfide, imine, and hydrogen bonds) and a conductive layer (carbon nanotube, CNT) was prepared. The PIH2T1-tri elastomer had excellent self-healing ability (healing efficiency = 91%). It exhibited good mechanical integrity in terms of elongation at break (672%) and tensile strength (1.41 MPa), and its Young's modulus (0.39 MPa) was close to that of human skin. The PIH2T1-tri/CNT-3 sensor also demonstrated an effective self-healing function for electrical conduction and sensing. Meanwhile, it had high sensitivity (gauge factor (GF) = 24.1), short response time (120 ms), and long-term durability (4000 cycles). This study offers a novel self-healable elastomer platform with carbon-based conductive components for developing flexible strain sensors towards high-performance soft electronics.
ABSTRACT
Complex engineered systems are often equipped with suites of sensors and ancillary devices that monitor their performance and maintenance needs. MRI scanners are no different in this regard. Some of the ancillary devices available to support MRI equipment, the ones of particular interest here, have the distinction of actually participating in the image acquisition process itself. Most commonly, such devices are used to monitor physiological motion or variations in the scanner's imaging fields, allowing the imaging and/or reconstruction process to adapt as imaging conditions change. "Classic" examples include electrocardiography (ECG) leads and respiratory bellows to monitor cardiac and respiratory motion, which have been standard equipment in scan rooms since the early days of MRI. Since then, many additional sensors and devices have been proposed to support MRI acquisitions. The main physical properties that they measure may be primarily "mechanical" (e.g., acceleration, speed, and torque), "acoustic" (sound and ultrasound), "optical" (light and infrared), or "electromagnetic" in nature. A review of these ancillary devices, as currently available in clinical and research settings, is presented here. In our opinion, these devices are not in competition with each other: as long as they provide useful and unique information, do not interfere with each other, and are not prohibitively cumbersome to use, they might find their proper place in future suites of sensors. In time, MRI acquisitions will likely include a plurality of complementary signals. A little like the microbiome that provides genetic diversity to organisms, these devices can provide signal diversity to MRI acquisitions and enrich measurements. Machine-learning (ML) algorithms are well suited to combining diverse input signals toward coherent outputs, and they could make use of all such information toward improved MRI capabilities. EVIDENCE LEVEL: 2 TECHNICAL EFFICACY: Stage 1.
Subject(s)
Heart , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Heart/physiology , Electrocardiography , Motion , Movement/physiology
ABSTRACT
The optic flow, i.e., the displacement of retinal images of objects in the environment induced by self-motion, is an important source of spatial information, especially for fast-flying insects. Spatial information over a wide range of distances, from the animal's immediate surroundings over several hundred metres to kilometres, is necessary for mediating behaviours, such as landing manoeuvres, collision avoidance in spatially complex environments, learning environmental object constellations and path integration in spatial navigation. To facilitate the processing of spatial information, the complexity of the optic flow is often reduced by active vision strategies. These result in translations and rotations being largely separated by a saccadic flight and gaze mode. Only the translational components of the optic flow contain spatial information. In the first step of optic flow processing, an array of local motion detectors provides a retinotopic spatial proximity map of the environment. This local motion information is then processed in parallel neural pathways in a task-specific manner and used to control the different components of spatial behaviour. A particular challenge here is that the distance information extracted from the optic flow does not represent the distances unambiguously, but these are scaled by the animal's speed of locomotion. Possible ways of coping with this ambiguity are discussed.
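The speed-scaling ambiguity mentioned at the end can be made concrete: for pure translation, the apparent angular velocity of a point depends only on the ratio of locomotion speed to distance. A minimal sketch of this standard relation (illustrative values, not from the review):

```python
import numpy as np

def translational_flow(v, d, theta):
    """Apparent angular velocity (rad/s) of a point at distance d, seen
    at angle theta from the direction of translation at speed v.  Flow
    scales with nearness 1/d multiplied by speed v, so distance can only
    be recovered from optic flow up to the animal's own speed."""
    return (v / d) * np.sin(theta)

# Doubling speed while doubling distance leaves the flow unchanged:
# this is the ambiguity discussed above.
f1 = translational_flow(1.0, 2.0, np.pi / 2)
f2 = translational_flow(2.0, 4.0, np.pi / 2)
print(f1, f2)  # identical values
```

An animal that knows (or controls) its own speed, for instance via motor commands, can resolve the ambiguity and turn flow into absolute distance.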
Subject(s)
Motion Perception , Optic Flow , Animals , Flight, Animal/physiology , Insecta/physiology , Motion Perception/physiology , Saccades
ABSTRACT
Due to the rapid development of the IoT ecosystem, a large amount of IoT data has been generated, and ensuring the security of these data is a key issue in the current development of the Internet of Things. Traditional IoT security protection is relatively basic; as data sizes grow, these facilities need further improvement in computing, information security, storage, and stability, and traditional security and privacy protection methods are often insufficient for large network topologies. Blockchain is based on distributed networks and is characterized by immutable, decentralized data; it also provides a new approach to encrypting IoT security information. This article introduces blockchain technology into the Internet of Things, which avoids reliance on centralized servers and effectively protects internet data and information security. The article then analyzes motion recognition and detection in community sports, as well as the widespread application of computer vision and machine learning technologies in fields such as data mining and data security. Motion detection is an important technology in the field of computer vision, with applications in many real-world scenarios. This article studies new approaches to information encryption in the computer Internet of Things, analyzes machine learning and motion detection systems, and applies them to community sports, supporting the development of community sports.
Subject(s)
Internet of Things , Sports , Humans , Ecosystem , Internet , Machine Learning
ABSTRACT
Divisive normalization is a model of canonical computation of brain circuits. We demonstrate that two cascaded divisive normalization processors (DNPs), carrying out intensity/contrast gain control and elementary motion detection, respectively, can model the robust motion detection realized by the early visual system of the fruit fly. We first introduce a model of elementary motion detection and rewrite its underlying phase-based motion detection algorithm as a feedforward divisive normalization processor. We then cascade the DNP modeling the photoreceptor/amacrine cell layer with the motion detection DNP. We extensively evaluate the DNP for motion detection in dynamic environments where light intensity varies by orders of magnitude. The results are compared to other bio-inspired motion detectors as well as state-of-the-art optic flow algorithms under natural conditions. Our results demonstrate the potential of DNPs as canonical building blocks modeling the analog processing of early visual systems. The model highlights analog processing for accurately detecting visual motion, in both vertebrates and invertebrates. The results presented here shed new light on employing DNP-based algorithms in computer vision.
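Canonical divisive normalization, the building block discussed above, divides each unit's driven response by a pooled measure of population activity, which implements automatic gain control across orders of magnitude of input intensity. A minimal sketch (the exponent and semi-saturation constant below are illustrative defaults, not the paper's fitted values):

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each unit's driven response
    x_i**n is divided by a pooled term sigma**n + sum_j x_j**n, so the
    relative pattern of activity survives large intensity changes."""
    xn = np.power(np.asarray(x, dtype=float), n)
    return xn / (sigma ** n + xn.sum())

# The same contrast pattern at 1x and 100x intensity normalizes to the
# same relative response pattern - the gain control exploited above.
dim = divisive_normalization([1.0, 2.0, 4.0])
bright = divisive_normalization([100.0, 200.0, 400.0])
print(np.allclose(dim / dim[0], bright / bright[0], rtol=1e-3))  # True
```

Cascading two such stages, one for intensity/contrast gain control and one wrapped around a motion-detection nonlinearity, is the architecture the abstract describes.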
Subject(s)
Drosophila , Motion Perception , Animals , Vision, Ocular , Brain , Motion
ABSTRACT
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
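The delay-then-synchronize idea can be sketched as follows: each input channel is shifted by its synaptic delay so that the target spike motif arrives at the readout simultaneously, and the weighted sum passes through a logistic nonlinearity. The weights, delays, and bias below are illustrative, and the readout stands in only schematically for the paper's trained time-invariant logistic regression.

```python
import numpy as np

def motif_response(spikes, weights, delays, bias=2.5):
    """Detect a temporal spike motif with one readout neuron whose
    synapses have heterogeneous delays: delaying each channel by its
    synaptic delay makes the target motif arrive synchronously, and the
    weighted sum is passed through a logistic nonlinearity."""
    t = spikes.shape[1]
    drive = np.zeros(t)
    for ch, (w, d) in enumerate(zip(weights, delays)):
        shifted = np.zeros(t)
        shifted[d:] = spikes[ch, :t - d]   # delay channel ch by d steps
        drive += w * shifted
    return 1.0 / (1.0 + np.exp(-(drive - bias)))  # logistic readout

# Motif: channel 0 fires 3 steps before channel 1; synaptic delays of
# (3, 0) re-synchronize the two spikes at the readout.
spikes = np.zeros((2, 20))
spikes[0, 5] = 1.0
spikes[1, 8] = 1.0
p = motif_response(spikes, weights=[2.0, 2.0], delays=[3, 0])
print(int(p.argmax()))  # 8: the delayed inputs coincide at t = 8
```

With mismatched delays the inputs never coincide, the drive stays below the bias, and the readout remains near its resting probability.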
Subject(s)
Learning , Neural Networks, Computer , Action Potentials/physiology , Algorithms , Neurons/physiology
ABSTRACT
PURPOSE: This study aimed to develop and validate a deep learning-based method that detects inter-breath-hold motion from an estimated cardiac long axis image reconstructed from a stack of short axis cardiac cine images. METHODS: Cardiac cine magnetic resonance image data from all short axis slices and 2-/3-/4-chamber long axis slices were considered for the study. Data from 740 subjects were used for model development, and data from 491 subjects were used for testing. The method utilized the slice orientation information to calculate the intersection line of a short axis plane and a long axis plane. An estimated long axis image is shown along with a long axis image as a motion-free reference image, which enables visual assessment of the inter-breath-hold motion from the estimated long axis image. The estimated long axis image was labeled as either a motion-corrupted or a motion-free image. Deep convolutional neural network (CNN) models were developed and validated using the labeled data. RESULTS: The method was fully automatic in obtaining long axis images reformatted from a 3D stack of short axis slices and predicting the presence/absence of inter-breath-hold motion. The deep CNN model with EfficientNet-B0 as a feature extractor was effective at motion detection with an area under the receiver operating characteristic (AUC) curve of 0.87 for the testing data. CONCLUSION: The proposed method can automatically assess inter-breath-hold motion in a stack of cardiac cine short axis slices. The method can help prospectively reacquire problematic short axis slices or retrospectively correct motion.
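The slice-geometry step above, computing the intersection line of a short-axis plane and a long-axis plane from slice-orientation information, reduces to standard plane-plane intersection. The normals and points below are illustrative stand-ins for the DICOM orientation vectors a scanner would supply.

```python
import numpy as np

def plane_intersection(n1, p1, n2, p2):
    """Intersection line of two non-parallel planes, each given by a
    unit normal n and a point p.  Returns a direction vector and a point
    on the line - the geometric step used to reformat a long-axis view
    out of a stack of short-axis slices."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    d = np.cross(n1, n2)                       # line direction
    # A point on the line satisfies both plane equations n·x = n·p,
    # plus d·x = 0 to pin down one solution along the line.
    A = np.array([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    return d, np.linalg.solve(A, b)

# z = 0 plane intersected with y = 0 plane: the x axis through the origin.
d, pt = plane_intersection([0, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0])
print(d, pt)  # direction along x (up to sign), point at the origin
```

Sampling the short-axis image intensities along such lines, slice by slice, yields the estimated long-axis image that is then screened for inter-breath-hold motion.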
Subject(s)
Breath Holding , Heart , Humans , Retrospective Studies , Heart/diagnostic imaging , Motion , Neural Networks, Computer
ABSTRACT
Insect nervous systems offer unique advantages for studying interactions between sensory systems and behavior, given their complexity with high tractability. By examining the neural coding of salient environmental stimuli and resulting behavioral output in the context of environmental stressors, we gain an understanding of the effects of these stressors on brain and behavior and provide insight into normal function. The implication of neonicotinoid (neonic) pesticides in contributing to declines of nontarget species, such as bees, has motivated the development of new compounds that can potentially mitigate putative resistance in target species and declines of nontarget species. We used a neuroethologic approach, including behavioral assays and multineuronal recording techniques, to investigate effects of imidacloprid (IMD) and the novel insecticide sulfoxaflor (SFX) on visual motion-detection circuits and related escape behavior in the tractable locust system. Despite similar LD50 values, IMD and SFX evoked different behavioral and physiological effects. IMD significantly attenuated collision avoidance behaviors and impaired responses of neural populations, including decreases in spontaneous firing and neural habituation. In contrast, SFX displayed no effect at a comparable sublethal dose. These results show that neonics affect population responses and habituation of a visual motion detection system. We propose that differences in the sublethal effects of SFX reflect a different mode of action than that of IMD. More broadly, we suggest that neuroethologic assays for comparative neurotoxicology are valuable tools for fully addressing current issues regarding the proximal effects of environmental toxicity in nontarget species.
Subject(s)
Environmental Exposure , Escape Reaction/drug effects , Insecticides/toxicity , Motor Neurons/drug effects , Neonicotinoids/toxicity , Nitro Compounds/toxicity , Pyridines/toxicity , Sulfur Compounds/toxicity , Animals , Habituation, Psychophysiologic/drug effects , Lethal Dose 50 , Locusta migratoria/drug effects , Motion
ABSTRACT
This paper describes the preliminary results of measuring the impact of human body movements on plants. The scope of this project is to investigate if a plant perceives human activity in its vicinity. In particular, we analyze the influence of eurythmic gestures of human actors on lettuce and beans. In an eight-week experiment, we exposed rows of lettuce and beans to weekly eurythmic movements (similar to Qi Gong) of a eurythmist, while at the same time measuring changes in voltage between the roots and leaves of lettuce and beans using the plant spikerbox. We compared this experimental group of vegetables to a control group of vegetables whose voltage differential was also measured while not being exposed to eurythmy. We placed a plant spikerbox connected to lettuce or beans in the vegetable plot while the eurythmist was performing their gestures about 2 m away; a second spikerbox was connected to a control plant 20 m away. Using t-tests, we found a clear difference between the experimental and the control group, which was also verified with a machine learning model. In other words, the vegetables showed a noticeably different pattern in electric potentials in response to eurythmic gestures.