1.
Front Robot AI ; 11: 1393795, 2024.
Article En | MEDLINE | ID: mdl-38873120

Introduction: Flow state, the optimal experience resulting from the equilibrium between perceived challenge and skill level, has been extensively studied in various domains. However, its occurrence in industrial settings has remained relatively unexplored. Notably, the literature predominantly focuses on Flow within mentally demanding tasks, which differ significantly from industrial tasks. Consequently, our understanding of emotional and physiological responses to varying challenge levels, specifically in the context of industry-like tasks, remains limited. Methods: To bridge this gap, we investigate how facial emotion estimation (valence, arousal) and Heart Rate Variability (HRV) features vary with the perceived challenge levels during industrial assembly tasks. Our study involves an assembly scenario that simulates an industrial human-robot collaboration task with three distinct challenge levels. As part of our study, we collected video, electrocardiogram (ECG), and NASA-TLX questionnaire data from 37 participants. Results: Our results demonstrate a significant difference in mean arousal and heart rate between the low-challenge (Boredom) condition and the other conditions. We also found a noticeable trend-level difference in mean heart rate between the adaptive (Flow) and high-challenge (Anxiety) conditions. Similar differences were also observed in a few other temporal HRV features like Mean NN and Triangular index. Considering the characteristics of typical industrial assembly tasks, we aim to facilitate Flow by detecting and balancing the perceived challenge levels. Leveraging our analysis results, we developed an HRV-based machine learning model for discerning perceived challenge levels, distinguishing between low and higher-challenge conditions. Discussion: This work deepens our understanding of emotional and physiological responses to perceived challenge levels in industrial contexts and provides valuable insights for the design of adaptive work environments.
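As an illustration only, the sketch below shows how temporal HRV features such as Mean NN and a triangular-index approximation could feed a simple challenge-level classifier; the synthetic RR intervals, feature set, and logistic-regression model are assumptions, not the authors' pipeline.

```python
# Illustrative only: temporal HRV features from RR intervals plus a binary
# challenge-level classifier (0 = low challenge, 1 = higher challenge).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """Simple temporal HRV features from RR intervals given in milliseconds."""
    mean_nn = rr_ms.mean()                                  # Mean NN
    sdnn = rr_ms.std(ddof=1)                                # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))           # short-term variability
    # HRV triangular index: number of intervals divided by the height of the
    # RR histogram with the conventional 7.8125 ms bin width.
    hist, _ = np.histogram(rr_ms, bins=np.arange(rr_ms.min(), rr_ms.max() + 7.8125, 7.8125))
    return np.array([mean_nn, sdnn, rmssd, len(rr_ms) / hist.max()])

# Synthetic stand-in recordings: higher challenge -> slightly shorter RR intervals.
rng = np.random.default_rng(0)
labels = np.arange(40) % 2
X = np.vstack([hrv_features(rng.normal(900 if y == 0 else 820, 40, 300)) for y in labels])

clf = make_pipeline(StandardScaler(), LogisticRegression())
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```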

2.
PLoS Genet ; 20(2): e1011168, 2024 Feb.
Article En | MEDLINE | ID: mdl-38412177

Artificial intelligence (AI) for facial diagnostics is increasingly used in the genetics clinic to evaluate patients with potential genetic conditions. Current approaches focus on one type of AI called Deep Learning (DL). While DL-based facial diagnostic platforms have a high accuracy rate for many conditions, less is understood about how this technology assesses and classifies (categorizes) images, and how this compares to humans. To compare human and computer attention, we performed eye-tracking analyses of geneticist clinicians (n = 22) and non-clinicians (n = 22) who viewed images of people with 10 different genetic conditions, as well as images of unaffected individuals. We calculated the Intersection-over-Union (IoU) and Kullback-Leibler divergence (KL) to compare the visual attentions of the two participant groups, and then the clinician group against the saliency maps of our deep learning classifier. We found that human visual attention differs greatly from the DL model's saliency results. Averaging over all the test images, the IoU and KL metrics for the successful (accurate) clinician visual attentions versus the saliency maps were 0.15 and 11.15, respectively. Individuals also tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians (IoU and KL of clinicians versus non-clinicians were 0.47 and 2.73, respectively). This study shows that humans (at different levels of expertise) and a computer vision model examine images differently. Understanding these differences can improve the design and use of AI tools, and lead to more meaningful interactions between clinicians and AI technologies.
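A minimal sketch of the two comparison metrics, assuming generic 2D attention maps rather than the study's eye-tracking and saliency data; the function names and toy inputs are illustrative.

```python
# Illustrative comparison of two attention maps via IoU and KL divergence.
import numpy as np

def iou(map_a: np.ndarray, map_b: np.ndarray, threshold: float = 0.5) -> float:
    """IoU of the regions above `threshold` of each map's maximum."""
    a = map_a >= threshold * map_a.max()
    b = map_b >= threshold * map_b.max()
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def kl_divergence(map_p: np.ndarray, map_q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(P || Q) after normalising both maps to probability distributions."""
    p = map_p.ravel() / map_p.sum()
    q = map_q.ravel() / map_q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy inputs standing in for a clinician fixation heatmap and a model saliency map.
rng = np.random.default_rng(1)
fixations, saliency = rng.random((64, 64)), rng.random((64, 64))
print(f"IoU = {iou(fixations, saliency):.2f}, KL = {kl_divergence(fixations, saliency):.2f}")
```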


Artificial Intelligence , Computers , Humans , Computer Simulation
3.
Psychother Res ; : 1-16, 2024 Feb 28.
Article En | MEDLINE | ID: mdl-38415369

OBJECTIVE: Given the importance of emotions in psychotherapy, valid measures are essential for research and practice. As emotions are expressed at different levels, multimodal measurements are needed for a nuanced assessment. Natural Language Processing (NLP) could augment the measurement of emotions. The study explores the validity of sentiment analysis in psychotherapy transcripts. METHOD: We used a transformer-based NLP algorithm to analyze sentiments in 85 transcripts from 35 patients. Construct and criterion validity were evaluated using self- and therapist reports and process and outcome measures via correlational, multitrait-multimethod, and multilevel analyses. RESULTS: The results provide indications in support of the sentiments' validity. For example, sentiments were significantly related to self- and therapist reports of emotions in the same session. Sentiments correlated significantly with in-session processes (e.g., coping experiences), and an increase in positive sentiments throughout therapy predicted better outcomes after treatment termination. DISCUSSION: Sentiment analysis could serve as a valid approach to assessing the emotional tone of psychotherapy sessions and may contribute to the multimodal measurement of emotions. Future research could combine sentiment analysis with automatic emotion recognition in facial expressions and vocal cues via the Nonverbal Behavior Analyzer (NOVA). Limitations (e.g., exploratory study with numerous tests) and opportunities are discussed.
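A minimal sketch of transformer-based sentiment scoring on transcript utterances using the Hugging Face transformers pipeline; the default English model and the example utterances are stand-ins, not the study's model or data.

```python
# Illustrative sentiment scoring of transcript utterances with a default
# (English) transformer model standing in for the study's own setup.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

utterances = [
    "I finally felt able to talk about what happened.",
    "The week was exhausting and I could not sleep.",
]
for utterance, result in zip(utterances, sentiment(utterances)):
    # Each result holds a label (POSITIVE/NEGATIVE) and a confidence score.
    print(f"{result['label']:>8} {result['score']:.2f}  {utterance}")

# A session-level sentiment score (e.g., the share of positive utterances)
# could then be correlated with self- and therapist reports.
```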

4.
IEEE Trans Pattern Anal Mach Intell ; 46(2): 805-822, 2024 Feb.
Article En | MEDLINE | ID: mdl-37851557

Automatically recognising apparent emotions from face and voice is hard, in part because of various sources of uncertainty, including in the input data and the labels used in a machine learning framework. This paper introduces an uncertainty-aware multimodal fusion approach that quantifies modality-wise aleatoric or data uncertainty towards emotion prediction. We propose a novel fusion framework, in which latent distributions over unimodal temporal context are learned by constraining their variance. These variance constraints, Calibration and Ordinal Ranking, are designed such that the variance estimated for a modality can represent how informative the temporal context of that modality is w.r.t. emotion recognition. When well-calibrated, modality-wise uncertainty scores indicate how much their corresponding predictions are likely to differ from the ground truth labels. Well-ranked uncertainty scores allow the ordinal ranking of different frames across different modalities. To jointly impose both these constraints, we propose a softmax distributional matching loss. Our evaluation on AVEC 2019 CES, CMU-MOSEI, and IEMOCAP datasets shows that the proposed multimodal fusion method not only improves the generalisation performance of emotion recognition models and their predictive uncertainty estimates, but also makes the models robust to novel noise patterns encountered at test time.
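As a rough illustration of modality-wise uncertainty in fusion, the sketch below implements simple inverse-variance weighting of two modality heads in PyTorch; it omits the paper's calibration and ordinal-ranking constraints and the softmax distributional matching loss, and all module names are hypothetical.

```python
# Illustrative inverse-variance fusion of two modality heads (all names hypothetical).
import torch
import torch.nn as nn

class ModalityHead(nn.Module):
    """Predicts an emotion score and a log-variance (aleatoric uncertainty)."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.mean = nn.Linear(in_dim, 1)
        self.log_var = nn.Linear(in_dim, 1)

    def forward(self, x):
        return self.mean(x), self.log_var(x)

def fuse(means, log_vars):
    # Precision weighting: modalities with higher predicted variance get less weight.
    precisions = torch.exp(-torch.stack(log_vars))
    weights = precisions / precisions.sum(dim=0, keepdim=True)
    return (weights * torch.stack(means)).sum(dim=0)

face_head, voice_head = ModalityHead(128), ModalityHead(64)
face_feat, voice_feat = torch.randn(8, 128), torch.randn(8, 64)
(m_f, v_f), (m_v, v_v) = face_head(face_feat), voice_head(voice_feat)
print(fuse([m_f, m_v], [v_f, v_v]).shape)   # torch.Size([8, 1])
```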

5.
Front Psychol ; 14: 1245857, 2023.
Article En | MEDLINE | ID: mdl-37954185

Introduction: In Industry 4.0, collaborative tasks often involve operators working with collaborative robots (cobots) in shared workspaces. Many aspects of the operator's well-being within this environment still need in-depth research. Moreover, these aspects are expected to differ between neurotypical (NT) and Autism Spectrum Disorder (ASD) operators. Methods: This study examines behavioral patterns in 16 participants (eight neurotypical, eight with high-functioning ASD) during an assembly task in an industry-like lab-based robotic collaborative cell, enabling the detection of potential risks to their well-being during industrial human-robot collaboration. Each participant worked on the task for five consecutive days, 3.5 h per day. During these sessions, six video clips of 10 min each were recorded for each participant. The videos were used to extract quantitative behavioral data using the NOVA annotation tool and analyzed qualitatively using an ad-hoc observational grid. Also, during the work sessions, the researchers took unstructured notes of the observed behaviors that were analyzed qualitatively. Results: The two groups differ mainly regarding behavior (e.g., prioritizing the robot partner, gaze patterns, facial expressions, multi-tasking, and personal space), adaptation to the task over time, and the resulting overall performance. Discussion: This result confirms that NT and ASD participants in a collaborative shared workspace have different needs and that the working experience should be tailored depending on the end-user's characteristics. The findings of this study represent a starting point for further efforts to promote well-being in the workplace. To the best of our knowledge, this is the first work comparing NT and ASD participants in a collaborative industrial scenario.

6.
medRxiv ; 2023 Jul 28.
Article En | MEDLINE | ID: mdl-37577564

Deep learning (DL) and other types of artificial intelligence (AI) are increasingly used in many biomedical areas, including genetics. One frequent use in medical genetics involves evaluating images of people with potential genetic conditions to help with diagnosis. A central question involves better understanding how AI classifiers assess images compared to humans. To explore this, we performed eye-tracking analyses of geneticist clinicians and non-clinicians. We compared results to DL-based saliency maps. We found that human visual attention when assessing images differs greatly from the parts of images weighted by the DL model. Further, individuals tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians.

7.
Front Psychol ; 14: 1182959, 2023.
Article En | MEDLINE | ID: mdl-37404593

Introduction: Since the COVID-19 pandemic, working environments and private lives have changed dramatically. Digital technologies and media have become more and more important and have found their way into nearly all private and work environments. Communication situations have been largely relocated to virtual spaces. One of these scenarios is digital job interviews. Job interviews, in the digital as well as the non-digital world, are usually perceived as stressful and associated with biological stress responses. We here present and evaluate a newly developed laboratory stressor that is based on a digital job-interview scenario. Methods: N = 45 healthy people participated in the study (64.4% female; mean age: 23.2 ± 3.6 years; mean body mass index = 22.8 ± 4.0 kg/m2). Salivary alpha-amylase (sAA) and cortisol were assessed as measures of biological stress responses. Furthermore, perceived stress was rated at the time points of the saliva samplings. The job interviews lasted between 20 and 25 min. All materials, including instructions for the experimenter (i.e., the job interviewer) and the data set used for statistical analysis, as well as a multimodal data set, which includes further measures, are publicly available. Results: Typical subjective and biological stress-response patterns were found, with peak sAA and perceived stress levels observed immediately after the job interviews and peak cortisol concentrations 5 min afterwards. Female participants experienced the scenario as more stressful than male participants. Cortisol peaks were higher for participants who experienced the situation as a threat than for participants who experienced it as a challenge. Associations between the strength of the stress response and further person characteristics and psychological variables such as BMI, age, coping styles, and personality were not found. Discussion: Overall, our method is well suited to induce biological and perceived stress, mostly independent of person characteristics and psychological variables. The setting is naturalistic and easily implementable in standardized laboratory settings.

8.
Lancet ; 402(10401): 545-554, 2023 Aug 12.
Article En | MEDLINE | ID: mdl-37414064

BACKGROUND: Transcranial direct current stimulation (tDCS) has been proposed as a feasible treatment for major depressive disorder (MDD). However, meta-analytic evidence is heterogeneous and data from multicentre trials are scarce. We aimed to assess the efficacy of tDCS versus sham stimulation as an additional treatment to a stable dose of selective serotonin reuptake inhibitors (SSRIs) in adults with MDD. METHODS: The DepressionDC trial was a triple-blind, randomised, sham-controlled trial conducted at eight hospitals in Germany. Patients aged 18-65 years being treated at a participating hospital were eligible if they had a diagnosis of MDD, a score of at least 15 on the Hamilton Depression Rating Scale (21-item version), no response to at least one antidepressant trial in their current depressive episode, and treatment with an SSRI at a stable dose for at least 4 weeks before inclusion; the SSRI was continued at the same dose during stimulation. Patients were allocated (1:1) by fixed-blocked randomisation to receive either 30 min of 2 mA bifrontal tDCS every weekday for 4 weeks, then two tDCS sessions per week for 2 weeks, or sham stimulation at the same intervals. Randomisation was stratified by site and baseline Montgomery-Åsberg Depression Rating Scale (MADRS) score (ie, <31 or ≥31). Participants, raters, and operators were masked to treatment assignment. The primary outcome was change on the MADRS at week 6, analysed in the intention-to-treat population. Safety was assessed in all patients who received at least one treatment session. The trial was registered with ClinicalTrials.gov (NCT02530164). FINDINGS: Between Jan 19, 2016, and June 15, 2020, 3601 individuals were assessed for eligibility. 160 patients were included and randomly assigned to receive either active tDCS (n=83) or sham tDCS (n=77). Six patients withdrew consent and four patients were found to have been wrongly included, so data from 150 patients were analysed (89 [59%] were female and 61 [41%] were male). No intergroup difference was found in mean improvement on the MADRS at week 6 between the active tDCS group (n=77; -8·2, SD 7·2) and the sham tDCS group (n=73; -8·0, 9·3; difference 0·3 [95% CI -2·4 to 2·9]). Significantly more participants had one or more mild adverse events in the active tDCS group (50 [60%] of 83) than in the sham tDCS group (33 [43%] of 77; p=0·028). INTERPRETATION: Active tDCS was not superior to sham stimulation during a 6-week period. Our trial does not support the efficacy of tDCS as an additional treatment to SSRIs in adults with MDD. FUNDING: German Federal Ministry of Education and Research.

9.
Stud Health Technol Inform ; 302: 917-921, 2023 May 18.
Article En | MEDLINE | ID: mdl-37203536

COVID-19 presence classification and severity prediction via (3D) thorax computed tomography scans have become important tasks in recent times. Especially for capacity planning of intensive care units, predicting the future severity of a COVID-19 patient is crucial. The presented approach follows state-of-the-art techniques to aid medical professionals in these situations. It comprises an ensemble learning strategy via 5-fold cross-validation that includes transfer learning and combines pre-trained 3D versions of ResNet34 and DenseNet121 for COVID-19 classification and severity prediction, respectively. Further, domain-specific preprocessing was applied to optimize model performance. In addition, medical information such as the infection-lung ratio, patient age, and sex was included. The presented model achieves an AUC of 79.0% for predicting COVID-19 severity and an AUC of 83.7% for classifying the presence of an infection, which is comparable with other currently popular methods. This approach is implemented using the AUCMEDI framework and relies on well-known network architectures to ensure robustness and reproducibility.
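A minimal transfer-learning sketch under stated assumptions: torchvision's Kinetics-pretrained 3D ResNet-18 stands in for the pre-trained 3D backbones, and the AUCMEDI pipeline, preprocessing, and DenseNet121 branch are omitted.

```python
# Illustrative 3D transfer learning for binary COVID-19 presence classification.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(weights="DEFAULT")                 # Kinetics-pretrained 3D ResNet-18
model.fc = nn.Linear(model.fc.in_features, 1)     # new binary classification head

# CT volumes as (batch, channels, depth, height, width); single-channel CT
# slices are typically replicated to three channels to match the pretrained stem.
volumes = torch.randn(2, 3, 32, 112, 112)
probabilities = torch.sigmoid(model(volumes))

# In a 5-fold cross-validation ensemble, one such model would be trained per
# fold and the fold-wise probabilities averaged at inference time.
print(probabilities.shape)                        # torch.Size([2, 1])
```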


COVID-19 , Humans , Reproducibility of Results , Intensive Care Units , Learning , Research Design
10.
Stud Health Technol Inform ; 302: 932-936, 2023 May 18.
Article En | MEDLINE | ID: mdl-37203539

Computer vision has useful applications in precision medicine, and recognizing facial phenotypes of genetic disorders is one of them. Many genetic disorders are known to affect the visual appearance and geometry of the face. Automated classification and similarity retrieval aid physicians in decision-making to diagnose possible genetic conditions as early as possible. Previous work has addressed this as a classification problem; however, the sparse label distribution, the small number of labeled samples, and the large class imbalance across categories make representation learning and generalization harder. In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pre-task and transferred it to facial phenotype recognition. Furthermore, we created simple baselines of few-shot meta-learning methods to improve our base feature descriptor. Our quantitative results on the GestaltMatcher Database (GMDB) show that our CNN baseline surpasses previous works, including GestaltMatcher, and that few-shot meta-learning strategies improve retrieval performance in frequent and rare classes.
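A minimal sketch of prototype-based few-shot retrieval over face embeddings, in the spirit of the described meta-learning baselines; the embedding dimensionality and synthetic support set are assumptions, not the GMDB setup.

```python
# Illustrative prototype-based retrieval over face embeddings (synthetic data).
import numpy as np

def class_prototypes(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Average and L2-normalise the support embeddings of each class."""
    protos = {}
    for c in np.unique(labels):
        mean_vec = embeddings[labels == c].mean(axis=0)
        protos[int(c)] = mean_vec / np.linalg.norm(mean_vec)
    return protos

def rank_classes(query: np.ndarray, protos: dict) -> list:
    """Rank classes by cosine similarity between the query and each prototype."""
    query = query / np.linalg.norm(query)
    sims = {c: float(query @ p) for c, p in protos.items()}
    return sorted(sims, key=sims.get, reverse=True)

rng = np.random.default_rng(2)
support_emb = rng.normal(size=(30, 512))          # e.g. 10 classes x 3 support images
support_lbl = np.repeat(np.arange(10), 3)
query_emb = rng.normal(size=512)
print("Top-5 classes:", rank_classes(query_emb, class_prototypes(support_emb, support_lbl))[:5])
```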


Diagnosis, Computer-Assisted , Face , Genetic Diseases, Inborn , Phenotype , Humans , Genetic Diseases, Inborn/diagnostic imaging
11.
Eur Stroke J ; 8(1): 387-397, 2023 03.
Article En | MEDLINE | ID: mdl-37021189

Background: Hypertension is the leading modifiable risk factor for cerebral small vessel diseases (SVDs). Yet, it is unknown whether antihypertensive drug classes differentially affect microvascular function in SVDs. Aims: To test whether amlodipine has a beneficial effect on microvascular function when compared to either losartan or atenolol, and whether losartan has a beneficial effect when compared to atenolol in patients with symptomatic SVDs. Design: TREAT-SVDs is an investigator-led, prospective, open-label, randomised crossover trial with blinded endpoint assessment (PROBE design) conducted at five study sites across Europe. Patients aged 18 years or older with symptomatic SVD who have an indication for antihypertensive treatment and are suffering from either sporadic SVD and a history of lacunar stroke or vascular cognitive impairment (group A) or CADASIL (group B) are randomly allocated 1:1:1 to one of three sequences of antihypertensive treatment. Patients stop their regular antihypertensive medication for a 2-week run-in period followed by 4-week periods of monotherapy with amlodipine, losartan and atenolol in random order as open-label medication in standard dose. Outcomes: The primary outcome measure is cerebrovascular reactivity (CVR) as determined by blood oxygen level dependent brain MRI signal response to hypercapnic challenge with change in CVR in normal appearing white matter as primary endpoint. Secondary outcome measures are mean systolic blood pressure (BP) and BP variability (BPv). Discussion: TREAT-SVDs will provide insights into the effects of different antihypertensive drugs on CVR, BP, and BPv in patients with symptomatic sporadic and hereditary SVDs. Funding: European Union's Horizon 2020 programme. Trial registration: NCT03082014.


Amlodipine , Antihypertensive Agents , Humans , Amlodipine/pharmacology , Antihypertensive Agents/pharmacology , Blood Pressure , Atenolol/pharmacology , Losartan/pharmacology , Cross-Over Studies , Prospective Studies , Randomized Controlled Trials as Topic
12.
Front Psychol ; 14: 1293513, 2023.
Article En | MEDLINE | ID: mdl-38250116

Stress, a natural process affecting individuals' wellbeing, has a profound impact on overall quality of life. Researchers from diverse fields employ various technologies and methodologies to investigate it and alleviate the negative effects of this phenomenon. Wearable devices, such as smart bands, capture physiological data, including heart rate variability, motions, and electrodermal activity, enabling stress level monitoring through machine learning models. However, labeling data for model accuracy assessment poses a significant challenge in stress-related research due to incomplete or inaccurate labels provided by individuals in their daily lives. To address this labeling predicament, our study proposes implementing Semi-Supervised Learning (SSL) models. Through comparisons with deep learning-based supervised models and clustering-based unsupervised models, we evaluate the performance of our SSL models. Our experiments show that our SSL models achieve 77% accuracy with a classifier trained on an augmented dataset prepared using the label propagation (LP) algorithm. Additionally, our deep autoencoder network achieves 76% accuracy. These results highlight the superiority of SSL models over unsupervised learning techniques and their comparable performance to supervised learning models, even with limited labeled data. By relieving the burden of labeling in daily life stress recognition, our study advances stress-related research, recognizing stress as a natural process rather than a disease. This facilitates the development of more efficient and accurate stress monitoring methods in the wild.
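A minimal sketch of the label-propagation idea with scikit-learn, assuming synthetic features in place of the wearable-derived signals; the 10% labeling rate and the downstream random-forest classifier are illustrative choices.

```python
# Illustrative semi-supervised pipeline: label propagation followed by a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import LabelPropagation

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only about 10% of the training data carries a reliable stress label.
rng = np.random.default_rng(0)
y_partial = y_train.copy()
y_partial[rng.random(len(y_partial)) > 0.1] = -1      # -1 marks "unlabeled"

lp = LabelPropagation().fit(X_train, y_partial)
y_augmented = lp.transduction_                        # propagated labels for all samples

clf = RandomForestClassifier(random_state=0).fit(X_train, y_augmented)
print("Held-out accuracy:", clf.score(X_test, y_test))
```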

13.
Front Psychiatry ; 13: 1026015, 2022.
Article En | MEDLINE | ID: mdl-36386975

Background: Emotions play a key role in psychotherapy. However, a problem with examining emotional states via self-report questionnaires is that the assessment usually takes place after the actual emotion has been experienced, which might lead to biases; moreover, continuous human ratings are time- and cost-intensive. Using the AI-based software package Non-Verbal Behavior Analyzer (NOVA), video-based emotion recognition of arousal and valence can be applied in naturalistic psychotherapeutic settings. In this study, four emotion recognition models (ERMs), each based on a specific feature set (facial: OpenFace, OpenFace-Aureg; body: OpenPose-Activation, OpenPose-Energy), were developed and compared in their ability to predict arousal and valence scores correlated with PANAS emotion scores and processes of change (interpersonal experience, coping experience, affective experience) as well as symptoms (depression and anxiety in HSCL-11). Materials and methods: A total of 183 patient therapy videos were divided into a training sample (55 patients), a test sample (50 patients), and a holdout sample (78 patients). The best ERM was selected for further analyses. Then, ERM-based arousal and valence scores were correlated with patient and therapist estimates of emotions and processes of change. Furthermore, using regression models, arousal and valence were examined as predictors of symptom severity in depression and anxiety. Results: The ERM based on OpenFace produced the best agreement with the human coder rating. Arousal and valence correlated significantly with therapists' ratings of sadness, shame, anxiety, and relaxation, but not with the patients' ratings of their own emotions. Furthermore, a significant negative correlation indicates that negative valence was associated with higher affective experience. Negative valence was found to significantly predict higher anxiety but not depression scores. Conclusion: This study shows that emotion recognition with NOVA can be used to generate ERMs associated with patient emotions, affective experiences, and symptoms. Nevertheless, limitations are evident. The ERMs need to be improved using larger databases of sessions, and their validity needs to be further investigated in different samples and applications. Furthermore, future research should consider using ERMs to identify emotional synchrony between patient and therapist.
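A minimal sketch of the kind of correlation and regression analysis described, using synthetic session-level valence scores; the study's multilevel models and NOVA-derived features are not reproduced here.

```python
# Illustrative correlation and regression on synthetic session-level scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
valence = rng.normal(size=80)                                   # mean valence per session
therapist_relaxation = 0.5 * valence + rng.normal(scale=0.8, size=80)
anxiety = -0.4 * valence + rng.normal(scale=0.9, size=80)

r, p = stats.pearsonr(valence, therapist_relaxation)
print(f"valence vs. therapist-rated relaxation: r = {r:.2f}, p = {p:.3f}")

reg = stats.linregress(valence, anxiety)                        # valence as predictor
print(f"valence -> anxiety: slope = {reg.slope:.2f}, p = {reg.pvalue:.3f}")
```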

14.
Front Artif Intell ; 5: 903875, 2022.
Article En | MEDLINE | ID: mdl-35910188

One of the most prominent methods for explaining the behavior of Deep Reinforcement Learning (DRL) agents is the generation of saliency maps that show how much each pixel contributed to the agent's decision. However, there is no work that computationally evaluates and compares the fidelity of different perturbation-based saliency map approaches specifically for DRL agents. It is particularly challenging to computationally evaluate saliency maps for DRL agents since their decisions are part of an overarching policy, which includes long-term decision making. For instance, the output neurons of value-based DRL algorithms encode both the value of the current state as well as the expected future reward after taking each action in this state. This ambiguity should be considered when evaluating saliency maps for such agents. In this paper, we compare five popular perturbation-based approaches to create saliency maps for DRL agents trained on four different Atari 2600 games. The approaches are compared using two computational metrics: dependence on the learned parameters of the underlying deep Q-network of the agents (sanity checks) and fidelity to the agents' reasoning (input degradation). During the sanity checks, we found that a popular noise-based saliency map approach for DRL agents shows little dependence on the parameters of the output layer. We demonstrate that this can be fixed by tweaking the algorithm such that it focuses on specific actions instead of the general entropy within the output values. For fidelity, we identify two main factors that influence which saliency map approach should be chosen in which situation. Particular to value-based DRL agents, we show that analyzing the agents' choice of action requires different saliency map approaches than analyzing the agents' state value estimation.
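As an illustration of the perturbation idea, the sketch below computes an occlusion-style, action-specific saliency map for a small stand-in Q-network; it is not one of the five compared approaches, and the network architecture is hypothetical.

```python
# Illustrative occlusion-style saliency for a small stand-in Q-network.
import torch
import torch.nn as nn

q_network = nn.Sequential(                       # hypothetical 4-action Q-network
    nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 20 * 20, 4),
)

def occlusion_saliency(model, frame, patch: int = 8):
    model.eval()
    with torch.no_grad():
        q = model(frame)
        action = q.argmax(dim=1)                 # explain the greedy action only
        baseline = q[0, action]
        saliency = torch.zeros(frame.shape[-2:])
        for y in range(0, frame.shape[-2], patch):
            for x in range(0, frame.shape[-1], patch):
                occluded = frame.clone()
                occluded[..., y:y + patch, x:x + patch] = 0.0
                # Relevance = drop of the chosen action's Q-value under occlusion.
                saliency[y:y + patch, x:x + patch] = (baseline - model(occluded)[0, action]).item()
    return saliency

frame = torch.rand(1, 1, 84, 84)                 # Atari-style grayscale frame
print(occlusion_saliency(q_network, frame).shape)
```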

15.
Front Artif Intell ; 5: 825565, 2022.
Article En | MEDLINE | ID: mdl-35464995

With the ongoing rise of machine learning, the need for methods for explaining decisions made by artificial intelligence systems is becoming a more and more important topic. Especially for image classification tasks, many state-of-the-art tools to explain such classifiers rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image such that the classifier would have made a different prediction. By doing so, the users of counterfactual explanation systems are equipped with a completely different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. Especially in medical contexts, where relevant information often consists of textural and structural information, high-quality counterfactual images have the potential to give meaningful insights into decision processes. In this work, we present GANterfactual, an approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in an exemplary medical use case. Our results show that, in the chosen medical use case, counterfactual explanations lead to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.

16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2896-2902, 2021 11.
Article En | MEDLINE | ID: mdl-34891852

Cancer is a major public health issue and accounts for the second-highest number of deaths caused by non-communicable diseases worldwide. Automatically detecting lesions at an early stage is essential to increase the chance of a cure. This study proposes a novel dilated Faster R-CNN with modulated deformable convolution and modulated deformable position-sensitive region of interest pooling to detect lesions in computed tomography images. A pre-trained VGG-16 is transferred as the backbone of Faster R-CNN, followed by a region proposal network and a region of interest pooling layer to achieve lesion detection. The modulated deformable convolutional layers are employed to learn deformable convolutional filters, while the modulated deformable position-sensitive region of interest pooling provides enhanced feature extraction on the feature maps. Moreover, dilated convolutions are combined with the modulated deformable convolutions to fine-tune the VGG-16 model with multi-scale receptive fields. In the experiments evaluated on the DeepLesion dataset, the modulated deformable position-sensitive region of interest pooling model achieves the highest sensitivity score of 58.8% on average with dilation of [4, 4, 4] and outperforms state-of-the-art models between 2 and 8 average false positives per image. This research demonstrates the suitability of dilation modifications and the possibility of enhancing performance using a modulated deformable position-sensitive region of interest pooling layer for universal lesion detectors.
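A minimal sketch of a dilated, modulated deformable 3x3 convolution block using torchvision.ops.DeformConv2d, as such a layer could be slotted into a VGG-16 backbone; the detector head and the position-sensitive RoI pooling are omitted, and the block itself is an assumption rather than the authors' implementation.

```python
# Illustrative dilated, modulated deformable convolution block (assumed design).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ModulatedDeformBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 4):
        super().__init__()
        # 3x3 kernel -> 2*9 offset channels plus 9 modulation-mask channels.
        self.offset_mask = nn.Conv2d(in_ch, 27, kernel_size=3,
                                     padding=dilation, dilation=dilation)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3,
                                   padding=dilation, dilation=dilation)

    def forward(self, x):
        off_mask = self.offset_mask(x)
        offset, mask = off_mask[:, :18], torch.sigmoid(off_mask[:, 18:])
        return self.deform(x, offset, mask)

features = torch.randn(1, 512, 32, 32)          # e.g. a VGG-16 conv5 feature map
block = ModulatedDeformBlock(512, 512, dilation=4)
print(block(features).shape)                    # torch.Size([1, 512, 32, 32])
```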


Neural Networks, Computer , Tomography, X-Ray Computed
17.
IEEE Trans Cybern ; 50(3): 1230-1239, 2020 Mar.
Article En | MEDLINE | ID: mdl-30872254

Obtaining meaningful annotations is tedious work, incurring considerable costs and time consumption. Dynamic active learning and cooperative learning are recently proposed approaches to reduce the human effort of annotating data with subjective phenomena. In this paper, we introduce a novel generic annotation framework, with the aim to achieve the optimal tradeoff between label reliability and cost reduction by making efficient use of human and machine work force. To this end, we use dropout to assess model uncertainty and thereby to decide which instances can be automatically labeled by the machine and which ones require human inspection. In addition, we propose an early stopping criterion based on inter-rater agreement in order to focus human resources on those ambiguous instances that are difficult to label. In contrast to the existing algorithms, the new confidence measures are applicable not only to binary classification tasks but also to regression problems. The proposed method is evaluated on the benchmark datasets for non-native English prosody estimation provided in the Interspeech computational paralinguistics challenge. As a result, the novel dynamic cooperative learning algorithm yields a Spearman's correlation coefficient of 0.424, compared to 0.413 with passive learning, while reducing the amount of human annotation by 74%.
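A minimal sketch of dropout-based uncertainty routing, assuming a toy regression network and Monte Carlo dropout; the threshold rule and feature dimensions are illustrative, not the paper's configuration.

```python
# Illustrative Monte Carlo dropout: route uncertain instances to human annotators.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))

def mc_dropout_predict(model, x, passes: int = 30):
    model.train()                                # keep dropout active at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    return preds.mean(dim=0), preds.std(dim=0)   # prediction and uncertainty

x = torch.randn(100, 40)                         # e.g. 100 utterances, 40 features
score, uncertainty = mc_dropout_predict(model, x)

threshold = uncertainty.quantile(0.75)           # send the most uncertain 25% to humans
needs_human = uncertainty.squeeze() > threshold
print(f"machine-labeled: {(~needs_human).sum().item()}, sent to annotators: {needs_human.sum().item()}")
```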


Data Curation/methods , Man-Machine Systems , Supervised Machine Learning , Adult , Algorithms , Databases, Factual , Female , Humans , Male , Middle Aged , Reproducibility of Results , Young Adult
18.
IEEE Trans Vis Comput Graph ; 24(1): 605-615, 2018 01.
Article En | MEDLINE | ID: mdl-28866500

We extend the popular brushing and linking technique by incorporating personal agency in the interaction. We map existing research related to brushing and linking into a design space that deconstructs the interaction technique into three components: source (what is being brushed), link (the expression of relationship between source and target), and target (what is revealed as related to the source). Using this design space, we created MyBrush, a unified interface that offers personal agency over brushing and linking by giving people the flexibility to configure the source, link, and target of multiple brushes. The results of three focus groups demonstrate that people with different backgrounds leveraged personal agency in different ways, including performing complex tasks and showing links explicitly. We reflect on these results, paving the way for future research on the role of personal agency in information visualization.


Computer Graphics , Software , Humans , Personal Autonomy
19.
Front Psychol ; 8: 2342, 2017.
Article En | MEDLINE | ID: mdl-29375448

Despite being a pan-cultural phenomenon, laughter is arguably the least understood behaviour deployed in social interaction. As well as being a response to humour, it has other important functions including promoting social affiliation, developing cooperation and regulating competitive behaviours. This multi-functional feature of laughter marks it as an adaptive behaviour central to facilitating social cohesion. However, it is not clear how laughter achieves this social cohesion. We consider two approaches to understanding how laughter facilitates social cohesion - the 'representational' approach and the 'affect-induction' approach. The representational approach suggests that laughter conveys information about the expresser's emotional state, and the listener decodes this information to gain knowledge about the laugher's felt state. The affect-induction approach views laughter as a tool to influence the affective state of listeners. We describe a modified version of the affect-induction approach, in which laughter is combined with additional factors - including social context, verbal information, other social signals and knowledge of the listener's emotional state - to influence an interaction partner. This view asserts that laughter by itself is ambiguous: the same laughter may induce positive or negative affect in a listener, with the outcome determined by the combination of these additional factors. Here we describe two experiments exploring which of these approaches accurately describes laughter. Participants judged the genuineness of audio-video recordings of social interactions containing laughter. Unknown to the participants the recordings contained either the original laughter or replacement laughter from a different part of the interaction. When replacement laughter was matched for intensity, genuineness judgements were similar to judgements of the original unmodified recordings. When replacement laughter was not matched for intensity, genuineness judgements were generally significantly lower. These results support the affect-induction view of laughter by suggesting that laughter is inherently underdetermined and ambiguous, and that its interpretation is determined by the context in which it occurs.

20.
Neuromuscul Disord ; 24(6): 467-73, 2014 Jun.
Article En | MEDLINE | ID: mdl-24780149

Steroids are now routinely used as a long-term treatment in Duchenne muscular dystrophy (DMD). Their effects on body composition were assessed using dual X-ray absorptiometry. The study followed 29 genetically confirmed DMD patients over 2 years: 21 in the steroid-treated group and 8 in the steroid-naïve group. After 2 years of steroid treatment, the lean tissue mass values increased significantly (p<0.0001), while the percentage of body fat mass remained practically constant (p=0.94) in comparison with the initial visit. In the steroid-naïve patients, there was no significant increase in lean tissue mass, but a deterioration in body composition was confirmed by a significant increase in the percentage of body fat mass. In addition, a significant negative correlation was found between the percentage of body fat mass and the MFM total score (R=-0.79, n=76, p<0.0001). A 2-year steroid treatment significantly improves the body composition of boys with DMD through a significant increase in lean tissue mass. We suggest that a thorough check of body composition should be carried out before steroid treatment discontinuation in the case of excessive weight gain.


Body Composition/drug effects , Muscular Dystrophy, Duchenne/drug therapy , Steroids/therapeutic use , Absorptiometry, Photon , Adolescent , Child , Child, Preschool , Humans , Male , Motor Activity/drug effects , Muscular Dystrophy, Duchenne/diagnostic imaging , Steroids/administration & dosage
...