ABSTRACT
The purpose of this review is to describe the immune function of the liver, guiding the reader from the homeostatic tolerogenic status to the aberrant activation demonstrated in chronic liver disease. An extensive description of the pathways behind the inflammatory modulation of the healthy liver will be provided, focusing on the complex immune cell network residing within the liver. The limits of tolerance will be presented in the context of organ transplantation, showing how homeostatic mechanisms can fail to accept the graft and eventually progress toward rejection. The triggers and mechanisms behind chronic activation in metabolic liver conditions and viral hepatitis will then be discussed. The last section will be dedicated to one of the greatest paradoxes for a tolerogenic organ: the development of autoimmunity. Through the description of the three most common autoimmune liver diseases, the autoimmune reaction against hepatocytes and biliary epithelial cells will be dissected.
ABSTRACT
BACKGROUND AND AIMS: In autoimmune hepatitis, achieving complete biochemical remission (CBR) with current weight-based thiopurine dosing is challenging. We investigated whether patients could be stratified regarding CBR according to a target range of thiopurine metabolites. Moreover, we explored the effects of azathioprine dosage increases and co-therapy of allopurinol with low-dose thiopurines on metabolite profiles and treatment response. APPROACH AND RESULTS: The relation between metabolites and treatment response was assessed in 337 individuals from 4 European centers. In a global, cross-sectional analysis, active metabolites 6-thioguanine nucleotides (6TGN) were similar in those with and without CBR. However, analyzing patients with sequential measurements over 4 years (N = 146) revealed higher average 6TGN levels in those with stable CBR (260 pmol/0.2 mL) compared to those failing to maintain CBR (181 pmol/0.2 mL; p = 0.0014) or never achieving CBR (153 pmol/0.2 mL; p < 0.0001), with an optimal 6TGN cutoff of ≥223 pmol/0.2 mL (sensitivity: 76% and specificity: 78%). Only 42% exhibited 6TGN ≥223 pmol/0.2 mL following weight-based dosing, as doses correlated only weakly with 6TGN but more strongly with 6-methylmercaptopurine (6MMP), a metabolite associated with toxicity. Azathioprine dose increases led to preferential 6MMP formation (+127% vs. 6TGN +34%; p < 0.0001). Conversely, adding allopurinol to thiopurines in difficult-to-treat patients (N = 36) raised 6TGN (from 168 to 321 pmol/0.2 mL; p < 0.0001) and lowered 6MMP (from 2125 to 184 pmol/0.2 mL; p < 0.0001), resulting in improved transaminases in all patients and long-term CBR in 75%. CONCLUSIONS: Maintaining CBR in autoimmune hepatitis was associated with 6TGN ≥223 pmol/0.2 mL. For patients who fail to achieve CBR and therapeutic 6TGN levels despite thiopurine dose increases due to preferential 6MMP formation, comedication of allopurinol alongside low-dose thiopurines represents an efficient alternative.
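The reported cutoff performance (sensitivity 76%, specificity 78% at 6TGN ≥223 pmol/0.2 mL) reflects a standard confusion-matrix calculation. Purely as an illustration (this is not the study's analysis code, and the patient values below are synthetic), a minimal sketch of evaluating such a cutoff:

```python
# Hedged sketch: sensitivity/specificity of a metabolite cutoff such as
# the 6TGN >= 223 pmol/0.2 mL reported above. All data are synthetic.

def cutoff_performance(values, maintained_cbr, cutoff=223.0):
    """Sensitivity/specificity of 'value >= cutoff' for predicting CBR."""
    tp = sum(v >= cutoff and y for v, y in zip(values, maintained_cbr))
    fn = sum(v < cutoff and y for v, y in zip(values, maintained_cbr))
    tn = sum(v < cutoff and not y for v, y in zip(values, maintained_cbr))
    fp = sum(v >= cutoff and not y for v, y in zip(values, maintained_cbr))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example: 6TGN levels with CBR status (True = stable CBR).
levels = [260, 300, 150, 180, 240, 120, 230, 210, 250, 200]
cbr = [True, True, False, False, True, False, True, True, False, False]
sens, spec = cutoff_performance(levels, cbr)
```

In practice the optimal cutoff itself would be chosen by sweeping candidate thresholds (e.g., via a ROC curve), not fixed in advance as here.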
Subject(s)
Allopurinol , Azathioprine , Drug Therapy, Combination , Hepatitis, Autoimmune , Immunosuppressive Agents , Mercaptopurine , Humans , Allopurinol/administration & dosage , Allopurinol/therapeutic use , Hepatitis, Autoimmune/drug therapy , Hepatitis, Autoimmune/blood , Hepatitis, Autoimmune/metabolism , Female , Male , Middle Aged , Azathioprine/administration & dosage , Azathioprine/therapeutic use , Cross-Sectional Studies , Adult , Mercaptopurine/analogs & derivatives , Mercaptopurine/administration & dosage , Mercaptopurine/therapeutic use , Immunosuppressive Agents/therapeutic use , Immunosuppressive Agents/administration & dosage , Aged , Thionucleotides/blood , Guanine Nucleotides/blood , Guanine Nucleotides/metabolism , Drug Monitoring/methods , Treatment Outcome , Remission Induction/methods
ABSTRACT
BACKGROUND AND AIMS: Artificial intelligence-based chatbots offer a potential avenue for delivering personalized counselling to Autoimmune Hepatitis (AIH) patients. We assessed the accuracy, completeness, comprehensiveness and safety of ChatGPT-4 responses to 12 inquiries out of a pool of 40 questions posed by four AIH patients. METHODS: Questions were categorized into three areas: Diagnosis (1-3), Quality of Life (4-8) and Medical Treatment (9-12). Eleven Key Opinion Leaders (KOLs) evaluated responses using a Likert scale with 6 points for accuracy, 5 points for safety and 3 points for completeness and comprehensiveness. RESULTS: Median scores for accuracy, completeness, comprehensiveness and safety were 5 (4-6), 2 (2-2) and 3 (2-3); no domain exhibited a superior evaluation. The post-diagnosis follow-up question was the trickiest, with low accuracy and completeness but safe and comprehensive features. Agreement among KOLs (Fleiss's kappa statistics) was slight for accuracy (0.05) but poor for the remaining features (-0.05, -0.06 and -0.02, respectively). CONCLUSIONS: Chatbots show good comprehensibility but lack reliability. Further studies are needed to integrate ChatGPT within clinical practice.
ABSTRACT
Autoimmune liver diseases (AILDs) constitute the fourth most common indication for liver transplantation (LT) across the world. In general, the outcomes after LT are acceptable; however, disease recurrence after LT is common for all AILDs and can negatively affect graft and overall survival. Several questions persist, including the risk factors associated with recurrent disease, optimal antirejection medications, strategies to reduce the risk of recurrence, and how best to incorporate these strategies into clinical practice. For that reason, we assembled an international group of experts to review evidence addressing these outstanding questions regarding LT for AILD. Survival rates after LT are ~90% and 70% at 1 and 5 years, and recurrent disease occurs in 10%-50% of patients with AILD. In patients with disease recurrence, graft survival decreased by 18% and 28% and overall survival by 8% and 12% at 5 and 10 years after LT, respectively. Recurrent autoimmune hepatitis is associated with high aminotransferases and immunoglobulin G (IgG) before LT and lymphoplasmacytic infiltrates in the explants, and may be associated with the absence of steroids after LT. However, the efficacy and safety of triple immunosuppressive maintenance therapy are still debatable. Younger age at diagnosis of primary biliary cholangitis or at LT is associated with primary biliary cholangitis recurrence. Preventive use of ursodeoxycholic acid reduces the risk of recurrence and has a benefit in graft and patient survival. Episodes of systemic inflammation, including T-cell-mediated rejection, active ulcerative colitis, and episodes of cholangitis, are associated with recurrent primary sclerosing cholangitis (PSC). Recurrent disease for AILD is associated with worse graft and patient survival. Patients with autoimmune hepatitis could be considered for long-term low-dose predniso(lo)ne, whereas patients with primary biliary cholangitis should be placed on preventive ursodeoxycholic acid after LT.
There are no specific treatments for PSC recurrence; however, adequate control of inflammatory bowel disease and optimal immunosuppression to avoid T-cell-mediated rejection should be encouraged.
ABSTRACT
The capability of measuring specific neurophysiological and autonomic parameters plays a crucial role in the objective evaluation of a human's mental and emotional states. These aspects are commonly known in the scientific literature to be involved in a wide range of processes, such as stress and arousal, and represent a relevant factor especially in real and operational environments. Neurophysiological autonomic parameters, such as Electrodermal Activity (EDA) and Photoplethysmographic (PPG) data, have usually been investigated through research-grade devices, resulting in a high degree of invasiveness that could negatively interfere with the monitored user's activity. For this reason, over the last decade, consumer-grade wearable devices, usually designed for fitness-tracking purposes, have been receiving increasing attention from the scientific community; they are characterized by greater comfort and ease of use and, therefore, by higher compatibility with daily-life environments. The present preliminary study aimed at assessing the reliability of a consumer wearable device, i.e., the Fitbit Sense, with respect to a research-grade wearable, i.e., the Empatica E4 wristband, and a laboratory device, i.e., the Shimmer GSR3+. EDA and PPG data were collected from 12 participants while they performed multiple resting conditions. The results demonstrated that the EDA- and PPG-derived features computed through the research-grade wearable and laboratory devices were positively and significantly correlated, while the reliability of the consumer device was significantly lower.
Subject(s)
Wearable Electronic Devices , Humans , Reproducibility of Results , Fitness Trackers , Emotions , Autonomic Nervous System
ABSTRACT
When assessing trainees' progress during a driving training program, instructors can only rely on the evaluation of a trainee's explicit behavior and performance, without any insight into the training effects at a cognitive level. However, being able to drive does not imply knowing how to drive safely in a complex scenario such as road traffic. Indeed, the latter involves mental aspects, such as the ability to manage and allocate one's mental effort appropriately, which are difficult to assess objectively. In this scenario, this study investigates the validity of deploying an electroencephalographic neurometric of mental effort, obtained through a wearable electroencephalographic device, to improve the assessment of the trainee. The study engaged 22 young people with no or limited driving experience. They were asked to drive along five different but similar urban routes while their brain activity was recorded through electroencephalography. Moreover, driving performance, subjective measures and reaction times were collected for a multimodal analysis. No driving improvement could be detected through either the drivers' subjective measures or their driving performance. On the other hand, the electroencephalographic neurometric of mental effort made it possible to capture their improvement in terms of mental performance, with a decrease in experienced mental demand after three repetitions of the driving training tasks. These results were confirmed by the analysis of reaction times, which also improved significantly from the third repetition. Therefore, being able to measure when a task is less mentally demanding, and so more automatic, makes it possible to infer a user's degree of training and their capacity to handle additional tasks and react to unexpected events.
Subject(s)
Automobile Driving , Wearable Electronic Devices , Humans , Adolescent , Reaction Time , Electroencephalography/methods , Accidents, Traffic
ABSTRACT
BACKGROUND & AIMS: Autoimmune hepatitis can recur after liver transplantation (LT), though the impact of recurrence on patient and graft survival has not been well characterized. We evaluated a large, international, multicenter cohort to identify the probability and risk factors associated with recurrent AIH and the association between recurrent disease and patient and graft survival. METHODS: We included 736 patients (77% female, mean age 42±1 years) with AIH who underwent LT from January 1987 through June 2020, among 33 centers in North America, South America, Europe and Asia. Clinical data before and after LT, biochemical data within the first 12 months after LT, and immunosuppression after LT were analyzed to identify patients at higher risk of AIH recurrence based on histological diagnosis. RESULTS: AIH recurred in 20% of patients after 5 years and 31% after 10 years. Age at LT ≤42 years (hazard ratio [HR] 3.15; 95% CI 1.22-8.16; p = 0.02), use of mycophenolate mofetil post-LT (HR 3.06; 95% CI 1.39-6.73; p = 0.005), donor and recipient sex mismatch (HR 2.57; 95% CI 1.39-4.76; p = 0.003) and high IgG pre-LT (HR 1.04; 95% CI 1.01-1.06; p = 0.004) were associated with higher risk of AIH recurrence after adjusting for other confounders. In multivariate Cox regression, recurrent AIH (as a time-dependent covariate) was significantly associated with graft loss (HR 10.79, 95% CI 5.37-21.66, p <0.001) and death (HR 2.53, 95% CI 1.48-4.33, p = 0.001). CONCLUSION: Recurrence of AIH following transplant is frequent and is associated with younger age at LT, use of mycophenolate mofetil post-LT, sex mismatch and high IgG pre-LT. We demonstrate an association between disease recurrence and impaired graft and overall survival in patients with AIH, highlighting the importance of ongoing efforts to better characterize, prevent and treat recurrent AIH. 
LAY SUMMARY: Recurrent autoimmune hepatitis following liver transplant is frequent and is associated with some recipient features and the type of immunosuppressive medications used. Recurrent autoimmune hepatitis negatively affects outcomes after liver transplantation. Thus, improved measures are required to prevent and treat this condition.
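Survival and recurrence probabilities like those reported above are conventionally estimated with the Kaplan-Meier method for right-censored follow-up data. Purely as an illustration (not the study's analysis code; the follow-up times below are synthetic), a minimal estimator:

```python
# Hedged sketch: Kaplan-Meier survival estimation on synthetic data.

def kaplan_meier(times, events):
    """Return (time, survival) step points for right-censored data.
    events[i] is True if the event (e.g., graft loss) occurred at
    times[i], False if the subject was censored then."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[j] for j in order]
    es = [events[j] for j in order]
    at_risk, surv, curve, i = len(ts), 1.0, [], 0
    while i < len(ts):
        t, n, deaths = ts[i], at_risk, 0
        while i < len(ts) and ts[i] == t:   # group ties at the same time
            deaths += es[i]
            at_risk -= 1
            i += 1
        if deaths:
            surv *= 1 - deaths / n          # multiply the conditional survival
            curve.append((t, surv))
    return curve

# Synthetic follow-up times (years) and event indicators.
curve = kaplan_meier([1, 2, 2, 3, 5, 6], [True, False, True, False, True, False])
```

A real analysis would use a vetted implementation (e.g., a survival-analysis library) that also provides confidence intervals and handles time-dependent covariates as in the Cox model above.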
Subject(s)
Hepatitis, Autoimmune , Liver Transplantation , Adult , Female , Humans , Immunoglobulin G , Immunosuppressive Agents/therapeutic use , Liver Transplantation/adverse effects , Male , Mycophenolic Acid/therapeutic use , Recurrence , Risk Factors
ABSTRACT
BACKGROUND AND AIMS: Liver fibrosis holds a relevant prognostic meaning in primary biliary cholangitis (PBC). Noninvasive fibrosis evaluation using vibration-controlled transient elastography (VCTE) is routinely performed. However, there is limited evidence on its accuracy at diagnosis in PBC. We aimed to estimate the diagnostic accuracy of VCTE in assessing advanced fibrosis (AF) at disease presentation in PBC. APPROACH AND RESULTS: We collected data from 167 consecutive treatment-naïve PBC patients who underwent liver biopsy (LB) at diagnosis at six Italian centers. VCTE examinations were completed within 12 weeks of LB. Biopsies were scored by two blinded expert pathologists, according to the Ludwig system. Diagnostic accuracy was estimated using the area under the receiver operating characteristic curves (AUROCs) for AF (Ludwig stage ≥III). Effects of biochemical and clinical parameters on liver stiffness measurement (LSM) were appraised. The derivation cohort consisted of 126 patients with valid LSM and LB; VCTE identified patients with AF with an AUROC of 0.89. LSM cutoffs ≤6.5 and >11.0 kPa made it possible, respectively, to exclude and confirm AF (negative predictive value [NPV] = 0.94; positive predictive value [PPV] = 0.89; error rate = 5.6%). These values were externally validated in an independent cohort of 91 PBC patients (NPV = 0.93; PPV = 0.89; error rate = 8.6%). Multivariable analysis found that the only parameter affecting LSM was fibrosis stage. No association was found with BMI and liver biochemistry. CONCLUSIONS: In a multicenter study of treatment-naïve PBC patients, we identified two cutoffs (LSM ≤6.5 and >11.0 kPa) able to discriminate at diagnosis the absence or presence, respectively, of AF in PBC patients, with external validation. In patients with LSM between these two cutoffs, VCTE is not reliable and liver biopsy should be evaluated for accurate disease staging. BMI and liver biochemistry did not affect LSMs.
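The dual-cutoff rule described above amounts to a simple three-way triage. As a sketch (the 6.5 and 11.0 kPa thresholds come from the abstract; the function name and labels are ours):

```python
# Hedged sketch of the dual-cutoff LSM triage rule for advanced
# fibrosis (AF) in PBC; thresholds are those reported in the abstract.

def triage_advanced_fibrosis(lsm_kpa):
    """Classify a liver stiffness measurement (kPa) against the PBC cutoffs."""
    if lsm_kpa <= 6.5:
        return "AF excluded"       # NPV ~0.94 in the derivation cohort
    if lsm_kpa > 11.0:
        return "AF confirmed"      # PPV ~0.89
    return "indeterminate: consider liver biopsy"

result = triage_advanced_fibrosis(8.2)
```

The middle band is exactly the "grey zone" in which the authors state VCTE is unreliable and biopsy should be considered.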
Subject(s)
Liver Cirrhosis, Biliary/diagnostic imaging , Liver Cirrhosis/diagnostic imaging , Area Under Curve , Elasticity Imaging Techniques , Female , Humans , Liver Cirrhosis/pathology , Liver Cirrhosis, Biliary/pathology , Male , Middle Aged , ROC Curve , Sensitivity and Specificity
ABSTRACT
BACKGROUND: Direct oral anticoagulants (DOACs) are recommended for stroke prevention in patients with atrial fibrillation (AF) and for treatment of deep vein thrombosis, although some concerns about safety and efficacy have been raised on the use of these drugs in patients with advanced liver disease (ALD). We aimed to investigate the association of DOAC use with bleeding and ischaemic risk. MATERIAL AND METHODS: We performed a systematic review and meta-analysis of clinical studies retrieved from the PubMed (via MEDLINE) and Cochrane (CENTRAL) databases addressing the impact of DOAC therapy on bleeding events, including intracranial haemorrhage (ICH), gastrointestinal and major bleeding. Secondary end points were all-cause death, ischaemic stroke/systemic embolism (IS/SE) and recurrence/progression of deep vein thrombosis (rDVT). RESULTS: 12 studies were included in the meta-analysis, for a total of 43 532 patients with ALD or cirrhosis, of whom 27 574 (63.3%) were on treatment with DOACs and 15 958 were on warfarin/low-molecular-weight heparin. DOACs reduced the incidence of major bleeding by 61% (pooled hazard ratio [HR] 0.39, 95% confidence interval [CI] 0.21-0.70) and of ICH by 52% (HR 0.48, 95% CI 0.40-0.59), while no difference was observed in the reduction of any bleeding or gastrointestinal bleeding. DOACs also reduced rDVT by 82% (HR 0.18, 95% CI 0.06-0.57), but did not reduce death or IS/SE. In the meta-regression analysis between warfarin/heparin and DOACs performed on each outcome, no difference was shown according to oesophageal varices or Child-Pugh score. CONCLUSIONS: DOACs are associated with a lower incidence of bleeding and may be an attractive therapeutic option in patients with cirrhosis.
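Pooled hazard ratios like those above are obtained by weighting study-level log hazard ratios by their inverse variance. As a hedged sketch of the fixed-effect version of this machinery (the actual meta-analysis may well have used a random-effects model, and the per-study HRs/CIs below are invented for illustration):

```python
import math

# Hedged sketch: fixed-effect (inverse-variance) pooling of log hazard
# ratios, given each study's HR and 95% CI. Inputs are invented.

def pool_hazard_ratios(hrs, ci_lows, ci_highs):
    """Pool study-level HRs; returns (pooled_HR, ci_low, ci_high)."""
    z = 1.96
    logs = [math.log(h) for h in hrs]
    # Standard error recovered from the CI width on the log scale.
    ses = [(math.log(hi) - math.log(lo)) / (2 * z)
           for lo, hi in zip(ci_lows, ci_highs)]
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

hr, lo, hi = pool_hazard_ratios([0.5, 0.3], [0.3, 0.1], [0.9, 0.8])
```

Note how the more precise study (narrower CI) dominates the pooled estimate; a random-effects model would additionally add a between-study variance term to each weight.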
Subject(s)
Atrial Fibrillation/drug therapy , End Stage Liver Disease/complications , Factor Xa Inhibitors/therapeutic use , Gastrointestinal Hemorrhage/epidemiology , Intracranial Hemorrhages/epidemiology , Ischemic Stroke/prevention & control , Liver Cirrhosis/complications , Venous Thrombosis/drug therapy , Atrial Fibrillation/complications , Embolism/etiology , Embolism/prevention & control , Gastrointestinal Hemorrhage/chemically induced , Gastrointestinal Hemorrhage/etiology , Hemorrhage/chemically induced , Hemorrhage/epidemiology , Hemorrhage/etiology , Humans , Intracranial Hemorrhages/chemically induced , Intracranial Hemorrhages/etiology , Ischemic Stroke/etiology , Liver Diseases/complications , Proportional Hazards Models , Severity of Illness Index
ABSTRACT
The sample size is a crucial concern in scientific research, and even more so in the behavioural neurosciences, where, despite best practices, it is not always possible to reach large experimental samples. In this study we investigated how the outcomes of research change in response to sample size reduction. Three indices computed during a task involving the observation of four videos were considered in the analysis: two related to brain electroencephalographic (EEG) activity and one to autonomic physiological measures, i.e., heart rate and skin conductance. The modifications of these indices were investigated across five subgroup sample sizes (32, 28, 24, 20, 16), each subgroup consisting of 630 different combinations made by bootstrapping n (n = sample size) out of 36 subjects, with respect to the total population (i.e., 36 subjects). The correlation, the mean squared error (MSE), and the standard deviation (STD) of the indices were studied as participants were removed, and three factors of influence were considered in the analysis: the type of index, the task, and its duration (time length). The findings showed a significant decrease of the correlation associated with participant reduction as well as a significant increase of MSE and STD (p < 0.05). A threshold number of subjects for which the outcomes remained significant and comparable was identified. The effects were to some extent sensitive to all the investigated variables, but the main effect was due to the task length. Therefore, the minimum threshold of subjects for which the outcomes were comparable increased as the video duration decreased.
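The subsampling procedure described above can be sketched as follows: repeatedly draw n of N subjects, recompute the per-condition index, and compare it with the full-sample values via correlation and MSE. This is an illustrative reimplementation on synthetic data, not the study's code (the condition-effect model below is our assumption):

```python
import random

# Hedged sketch: stability of group-level indices under subject
# subsampling, compared against the full-sample values.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def subsample_stability(index, n, draws=200, seed=0):
    """index: per-subject lists (one value per condition).
    Returns mean correlation and mean MSE vs. the full-sample means."""
    rng = random.Random(seed)
    n_cond = len(index[0])
    full = [sum(s[c] for s in index) / len(index) for c in range(n_cond)]
    corrs, mses = [], []
    for _ in range(draws):
        sample = rng.sample(index, n)           # n subjects without replacement
        means = [sum(s[c] for s in sample) / n for c in range(n_cond)]
        corrs.append(pearson(means, full))
        mses.append(sum((a - b) ** 2 for a, b in zip(means, full)) / n_cond)
    return sum(corrs) / draws, sum(mses) / draws

# Synthetic data: 36 subjects x 4 conditions, condition effect + noise.
gen = random.Random(1)
data = [[c + gen.gauss(0, 1) for c in range(4)] for _ in range(36)]
corr16, mse16 = subsample_stability(data, 16)
corr32, mse32 = subsample_stability(data, 32)
```

As in the study, shrinking the subsample degrades agreement with the full-population result (lower correlation, higher MSE).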
Subject(s)
Electroencephalography , Neurosciences , Heart Rate , Humans , Sample Size
ABSTRACT
Current telemedicine and remote healthcare applications foresee different interactions between the doctor and the patient, relying on the use of commercial and medical wearable sensors and internet-based video conferencing platforms. Nevertheless, existing applications necessarily require contact between the patient and the sensors for an objective evaluation of the patient's state. The proposed study explored an innovative video-based solution for monitoring neurophysiological parameters of potential patients and assessing their mental state. In particular, we investigated the possibility of estimating the heart rate (HR) and eye blink rate (EBR) of participants performing laboratory tasks by means of facial-video analysis. The objectives of the study were: (i) assessing the effectiveness of the proposed technique in estimating HR and EBR by comparing them with laboratory sensor-based measures, and (ii) assessing the capability of the video-based technique to discriminate between the participant's resting state (Nominal condition) and their active state (Non-nominal condition). The results demonstrated that the HR and EBR estimated through the facial-video technique and the laboratory equipment did not statistically differ (p > 0.1), and that these neurophysiological parameters allowed discrimination between the Nominal and Non-nominal states (p < 0.02).
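Video-based HR estimation of this kind typically follows a remote-photoplethysmography (rPPG) pipeline: the mean intensity of the facial skin region oscillates with blood volume, and the dominant spectral peak in a plausible heart-rate band gives the HR. The sketch below shows that textbook pipeline, not the authors' specific algorithm, on a simulated intensity trace:

```python
import numpy as np

# Hedged sketch: spectral-peak rPPG heart-rate estimation. The input
# trace is simulated; a real pipeline would first detect the face and
# average the green channel over the skin region per frame.

def estimate_hr_bpm(green_trace, fs):
    """Return heart rate (beats/min) from a face-region intensity trace."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)   # plausible HR range 42-180 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Simulated 30 fps trace: 1.2 Hz (72 bpm) pulse plus sensor noise.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
trace = (0.5 * np.sin(2 * np.pi * 1.2 * t)
         + np.random.default_rng(0).normal(0, 0.2, t.size))
hr = estimate_hr_bpm(trace, fs)
```

Restricting the search to the 0.7-3.0 Hz band is what keeps slow illumination drifts and high-frequency noise from being mistaken for the pulse.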
Subject(s)
Heart Rate , Telemedicine , Video Recording , Adult , Blinking , Female , Humans , Male
ABSTRACT
The capability of monitoring a user's performance represents a crucial aspect in improving the safety and efficiency of several human-related activities. Human errors are indeed among the major causes of work-related accidents. Assessing human factors (HFs) could prevent these accidents through the evaluation of specific neurophysiological signals, but laboratory sensors require highly specialized operators and imply a certain degree of invasiveness, which could negatively interfere with the worker's activity. On the contrary, consumer wearables are characterized by their ease of use and comfort, as well as being cheaper than laboratory technologies. Therefore, wearable sensors could represent an ideal substitute for laboratory technologies for the real-time assessment of human performance in ecological settings. The present study aimed at assessing the reliability and capability of consumer wearable devices (i.e., the Empatica E4 and Muse 2) in discriminating specific mental states compared to laboratory equipment. Electrooculographic (EOG), electrodermal activity (EDA) and photoplethysmographic (PPG) signals were acquired from a group of 17 volunteers who took part in an experimental protocol in which different working scenarios were simulated to induce different levels of mental workload, stress, and emotional state. The results demonstrated that the parameters computed by the consumer wearable and laboratory sensors were positively and significantly correlated and yielded the same evidence in terms of mental state discrimination.
Subject(s)
Laboratories , Wearable Electronic Devices , Heart Rate , Humans , Reproducibility of Results , Workload
ABSTRACT
Risk prediction modelling is important to better understand the determinants of the course and outcome of PBC and to inform risk across the disease continuum, enabling risk-stratified follow-up care and personalised therapy. Current prognostic models in PBC are based on treatment response to ursodeoxycholic acid because of the well-established relationship between on-treatment alkaline phosphatase and long-term outcome. In addition, serum alkaline phosphatase correlates with ductular reaction and biliary metaplasia, which are hallmarks of biliary injury. Because waiting for treatment failure in high-risk patients is not inconsequential, efforts are focused on bringing risk stratification forward to diagnosis by predicting treatment response at onset. There is a need for better prognostic variables that are central to the disease process. We should take an integrative approach that incorporates multiple layers of information, including genetic and environmental influences, host characteristics, clinical data, and molecular alterations, for risk assessment. Biomarker discovery is proceeding at an accelerated pace, taking advantage of the emergence of large-scale omics platforms (genomics, epigenomics, transcriptomics, proteomics, metabolomics, and others) and whole-genome sequencing. In the digital era, applications of artificial intelligence, such as machine learning, can supply the computing power required to analyse the vast amount of data produced by omics. The information is then used for the development of personalised risk prediction models that, through clinical trials and hopefully industry partnerships, can guide risk management strategies. We are facing an unprecedented opportunity for the integration of molecular diagnostics into the clinic, which promotes progress toward the personalised management of patients with PBC.
Subject(s)
Alkaline Phosphatase/blood , Liver Cirrhosis, Biliary/diagnosis , Liver Cirrhosis, Biliary/genetics , Machine Learning , Models, Statistical , Animals , Biomarkers/blood , Cholagogues and Choleretics/therapeutic use , Genomics/methods , Genomics/statistics & numerical data , Humans , Liver Cirrhosis, Biliary/drug therapy , Liver Cirrhosis, Biliary/immunology , Metabolomics/methods , Metabolomics/statistics & numerical data , Precision Medicine/methods , Prognosis , Risk Assessment , Risk Factors , Treatment Outcome , Ursodeoxycholic Acid/therapeutic use , Whole Genome Sequencing
Subject(s)
Cholangitis/etiology , Disease Management , Cholangitis/diagnosis , Cholangitis/therapy , Humans , Risk Factors
ABSTRACT
Driver distraction plays a crucial role in road safety, as it is one of the main causes of road accidents. The phenomenon of distraction encompasses both psychological and environmental factors; therefore, addressing the complex interplay contributing to human distraction in automotive settings is crucial for developing technologies and interventions that improve road safety. Different approaches to characterizing distraction in the automotive domain have been proposed in the scientific literature, but there is still no univocal measure of the degree of distraction, nor a gold-standard tool to detect the events, road traffic conditions, and additional driving tasks that might contribute to driver distraction. Therefore, the present study aimed at developing an EEG-based "Distraction index", obtained by combining the driver's mental workload and attention neurometrics, and at investigating and validating its reliability by analyzing subjective and behavioral measures together. A total of 25 licensed drivers were involved in this study, driving in two different scenarios, i.e., City and Highway, while different secondary tasks were alternately proposed in addition to the main one to modulate the driver's attentional demand. The statistical analysis demonstrated the reliability of the proposed EEG-based distraction index in identifying driver distraction along different roads and traffic conditions (all p < 0.001). More importantly, the proposed index proved reliable in identifying the additional driving tasks with the greatest impact on driver distraction (all p < 0.01).
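The abstract does not give the formula of the proposed index. Purely to illustrate how workload and attention neurometrics are commonly combined in this literature, the sketch below derives a hypothetical index from frontal theta power (a workload correlate) and parietal alpha power (inversely related to attention); the channel choices and the combination rule are our assumptions, not the authors' method:

```python
import numpy as np

# Hedged, hypothetical sketch of an EEG workload/attention index.

def band_power(signal, fs, lo, hi):
    """Power of `signal` in the [lo, hi) Hz band via an FFT periodogram."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return float(spec[(freqs >= lo) & (freqs < hi)].sum())

def distraction_index(frontal, parietal, fs):
    """Hypothetical index in [0, 1]: rises with frontal theta (workload)
    and falls with parietal alpha (attention)."""
    theta = band_power(frontal, fs, 4.0, 8.0)
    alpha = band_power(parietal, fs, 8.0, 12.0)
    return theta / (theta + alpha)

# Synthetic 4 s epochs at 128 Hz: strong theta vs. strong alpha.
fs = 128.0
t = np.arange(0, 4, 1 / fs)
high = distraction_index(np.sin(2 * np.pi * 6 * t),
                         0.3 * np.sin(2 * np.pi * 10 * t), fs)
low = distraction_index(0.3 * np.sin(2 * np.pi * 6 * t),
                        np.sin(2 * np.pi * 10 * t), fs)
```

A deployed neurometric would of course be computed from multi-channel EEG with artifact correction and per-subject calibration rather than from raw single-channel periodograms.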
ABSTRACT
In the context of electroencephalographic (EEG) signal processing, artifacts generated by ocular movements, such as blinks, are significant confounding factors. These artifacts overwhelm informative EEG features and may occur too frequently to simply remove affected epochs without losing valuable data. Correcting these artifacts remains a challenge, particularly in out-of-lab and online applications using wearable EEG systems (i.e., with a low number of EEG channels and without any additional channels to track the EOG). Objective: The main objective of the present work was to validate a novel ocular blink artifact correction method, named the multi-stage OCuLar artEfActs deNoising algorithm (o-CLEAN), suitable for online processing with minimal EEG channels. Approach: The research considered one EEG dataset collected in a highly controlled environment and a second collected in a real environment. The analysis compared the o-CLEAN method with previously validated state-of-the-art techniques and evaluated its performance along two dimensions: (a) ocular artifact correction performance (IN-Blink), and (b) EEG signal preservation when the method was applied without any ocular artifact occurrence (OUT-Blink). Main results: Results highlighted that (i) the o-CLEAN algorithm proved to be at least as reliable as the most validated approaches identified in the scientific literature in terms of ocular blink artifact correction, and (ii) o-CLEAN showed the best performance in terms of EEG signal preservation, especially with a low number of EEG channels. Significance: The testing and validation of o-CLEAN addresses a relevant open issue in bioengineering EEG processing, especially in out-of-the-lab applications. The method offers an effective solution for correcting ocular artifacts in EEG signals with a low number of available channels, for online processing, and without any specific EOG template.
It proved particularly effective for EEG data gathered in real environments using wearable systems, a rapidly expanding area within applied neuroscience.
Subject(s)
Algorithms , Artifacts , Blinking , Electroencephalography , Eye Movements , Humans , Electroencephalography/methods , Blinking/physiology , Eye Movements/physiology , Male , Female , Adult , Young Adult , Signal Processing, Computer-Assisted
ABSTRACT
Introduction: In operational environments, human interaction and cooperation between individuals are critical to efficiency and safety, and are influenced by the individuals' cognitive and emotional states. Human factors research aims to objectively quantify these states to prevent human error and maintain consistent performance, particularly in high-risk settings such as aviation, where human error and performance account for a significant portion of accidents. Methods: This study aimed to evaluate and validate two novel methods for assessing the degree of cooperation among professional pilots engaged in real-flight simulation tasks, and to assess the ability of the proposed metrics to differentiate the expertise levels of operating crews based on their levels of cooperation. Eight crews were involved in the experiments: four crews of Inexperienced pilots and four crews of Experienced pilots. An expert trainer, simulating air traffic management communication on one side and acting as a subject matter expert on the other, provided external evaluations of the pilots' mental states during the simulation. The two novel approaches introduced in this study were formulated based on circular correlation and mutual information techniques. Results and discussion: The findings demonstrated the possibility of quantifying cooperation levels among pilots during realistic flight simulations. In addition, cooperation time was found to be significantly higher (p < 0.05) among Experienced pilots compared to Inexperienced ones. Furthermore, these preliminary results exhibited significant correlations (p < 0.05) with subjective and behavioral measures collected every 30 s during the task, confirming their reliability.
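One of the two techniques named above, mutual information, can be estimated between two crew members' time series with a simple 2-D histogram. This is a generic illustration of the estimator, not the authors' implementation (bin count and signals are our choices):

```python
import numpy as np

# Hedged sketch: histogram-based mutual information (in bits) between
# two simultaneously recorded signals, e.g., from two pilots.

def mutual_information(x, y, bins=8):
    """MI estimate from the joint and marginal histograms of x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
shared = rng.normal(size=2000)
coupled = mutual_information(shared, shared + 0.3 * rng.normal(size=2000))
independent = mutual_information(shared, rng.normal(size=2000))
```

Coupled signals yield a clearly higher MI than independent ones; in practice the small positive bias of histogram estimators on independent data is corrected or benchmarked against surrogate data.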
ABSTRACT
Ocular artifacts, including blinks and saccades, pose significant challenges in the analysis of electroencephalographic (EEG) data, often obscuring crucial neural signals. This tutorial provides a comprehensive guide to the most effective methods for correcting these artifacts, with a focus on algorithms designed for both laboratory and real-world settings. We review traditional approaches, such as regression-based techniques and Independent Component Analysis (ICA), alongside more advanced methods like Artifact Subspace Reconstruction (ASR) and deep learning-based algorithms. Through detailed step-by-step instructions and comparative analysis, this tutorial equips researchers with the tools necessary to maintain the integrity of EEG data, ensuring accurate and reliable results in neurophysiological studies. The strategies discussed are particularly relevant for wearable EEG systems and real-time applications, reflecting the growing demand for robust and adaptable solutions in applied neuroscience.
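Of the families reviewed above, the regression-based approach is the simplest: estimate how strongly a reference EOG signal propagates into each EEG channel by least squares, then subtract the fitted contribution. A minimal sketch on simulated signals (a real pipeline would typically fit an intercept, use both vertical and horizontal EOG channels, and work epoch-wise):

```python
import numpy as np

# Hedged sketch: single-regressor EOG correction on simulated data.

def regress_out_eog(eeg, eog):
    """eeg: (channels, samples); eog: (samples,). Returns corrected EEG."""
    b = eeg @ eog / (eog @ eog)          # least-squares coefficient per channel
    return eeg - np.outer(b, eog)        # subtract the fitted EOG contribution

rng = np.random.default_rng(42)
n = 1000
# Slow, blink-like reference signal: smoothed white noise.
eog = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")
brain = 0.5 * rng.normal(size=(4, n))    # "true" neural activity
# Contaminate 4 channels with channel-specific propagation coefficients.
contaminated = brain + np.outer([1.0, 0.8, 0.4, 0.1], eog)
clean = regress_out_eog(contaminated, eog)
```

The correction removes the ocular component almost entirely while leaving the simulated neural signal intact, which is exactly the IN-Blink/OUT-Blink trade-off that more sophisticated methods (ICA, ASR, deep learning) try to optimize when no clean EOG reference is available.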
ABSTRACT
The study aimed at investigating the impact of an innovative Wake Vortex Alert (WVA) avionics system on pilots' operations and mental states, intending to improve aviation safety by mitigating the risks associated with wake vortex encounters (WVEs). Wake vortices, generated by jet aircraft, pose a significant hazard to trailing or crossing aircraft. Despite existing separation rules, incidents involving WVEs continue to occur, especially affecting smaller aircraft such as business jets, resulting in aircraft upsets and occasional cabin injuries. To address these challenges, the study focused on developing and validating an alert that can be presented to air traffic controllers, enabling them to warn flight crews. This empowers the flight crews either to avoid the wake vortex or to secure the cabin to prevent injuries. The research employed a multidimensional approach, including an analysis of human performance and human factors (HF) issues to determine the potential impact of the alert on pilots' roles, tasks, and mental states. It also utilized Human Assurance Levels (HALs) to evaluate the necessary human factors support based on the safety criticality of the new system. Realistic flight simulations were conducted to collect data on the pilots' behavioural, subjective and neurophysiological responses during WVEs. The data allowed for an objective evaluation of the WVA's impact on pilots' operations, behaviour and mental states (mental workload, stress levels and arousal). In particular, the results highlighted the effectiveness of the alert system in facilitating pilots' preparation, awareness and crew resource management (CRM), as well as the importance of avionics capable of enhancing aviation safety and reducing the risks associated with wake vortex encounters. Finally, we demonstrated how, by providing timely information and improving situational awareness, the WVA can minimize the occurrence of WVEs and contribute to safer aviation operations.