ABSTRACT
Adam-type algorithms have become a preferred choice for optimization in the deep learning setting; however, despite their success, their convergence is still not well understood. To this end, we introduce a unified framework for Adam-type algorithms, termed UAdam. It is equipped with a general form of the second-order moment, which makes it possible to include Adam and its existing and future variants as special cases, such as NAdam, AMSGrad, AdaBound, AdaFom, and Adan. The approach is supported by a rigorous convergence analysis of UAdam in the general nonconvex stochastic setting, showing that UAdam converges to the neighborhood of stationary points with a rate of O(1/T). Furthermore, the size of the neighborhood decreases as the parameter β1 increases. Importantly, our analysis only requires the first-order momentum factor to be close enough to 1, without any restrictions on the second-order momentum factor. Theoretical results also reveal the convergence conditions of vanilla Adam, together with the selection of appropriate hyperparameters. This provides a theoretical guarantee for the analysis, applications, and further developments of the whole general class of Adam-type algorithms. Finally, several numerical experiments are provided to support our theoretical findings.
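As an illustration of the unified framework described above, the minimal sketch below performs a single UAdam-style update in which the second-order moment is supplied as a pluggable function; the function names, toy loss, and the choice of bias correction are our own simplifications rather than the paper's exact formulation, and swapping the second-moment rule recovers Adam- or AMSGrad-like behaviour.

```python
import numpy as np

def uadam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, eps=1e-8,
               second_moment=None):
    """One UAdam-style update: `second_moment` maps (v, grad) to the updated
    second-order moment, so different choices recover different Adam variants."""
    if second_moment is None:
        # Adam-style exponential moving average of squared gradients (beta2 = 0.999)
        second_moment = lambda v, g: 0.999 * v + 0.001 * g ** 2
    m = beta1 * m + (1.0 - beta1) * grad        # first-order moment
    v = second_moment(v, grad)                  # generalised second-order moment
    m_hat = m / (1.0 - beta1 ** t)              # bias correction on the first moment
    theta = theta - lr * m_hat / (np.sqrt(v) + eps)
    return theta, m, v

# AMSGrad-like choice: keep the running maximum of the second moment.
amsgrad_moment = lambda v, g: np.maximum(v, 0.999 * v + 0.001 * g ** 2)

theta, m, v = np.ones(3), np.zeros(3), np.zeros(3)
for t in range(1, 101):
    grad = 2 * theta                            # gradient of a toy quadratic loss
    theta, m, v = uadam_step(theta, grad, m, v, t, second_moment=amsgrad_moment)
print(theta)                                    # moves toward the minimiser at zero
```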
ABSTRACT
OBJECTIVE: Sleep monitoring has extensively utilized electroencephalogram (EEG) data collected from the scalp, yielding very large data repositories and well-trained analysis models. Yet, this wealth of data is lacking for emerging, less intrusive modalities, such as ear-EEG. METHODS AND PROCEDURES: The current study seeks to harness the abundance of open-source scalp EEG datasets by applying models pre-trained on scalp-EEG data, either directly or with minimal fine-tuning; this is achieved in the context of effective sleep analysis from ear-EEG data recorded using a single in-ear electrode referenced to the ipsilateral mastoid, with the sensor developed in-house as described in our previous work. Unlike previous studies, our research uniquely focuses on an older cohort (17 subjects aged 65-83, mean age 71.8 years, some with health conditions), and employs LightGBM for transfer learning, diverging from previous deep learning approaches. RESULTS: The initial accuracy of the pre-trained model on ear-EEG was 70.1%, while fine-tuning the model with ear-EEG data improved its classification accuracy to 73.7%. The fine-tuned model exhibited a statistically significant improvement (p < 0.05, dependent t-test) for 10 out of the 13 participants, as reflected by an enhanced average Cohen's kappa score (a statistical measure of inter-rater agreement for categorical items) of 0.639, indicating a stronger agreement between automated and expert classifications of sleep stages. Comparative SHAP value analysis revealed a shift in feature importance for the N3 sleep stage, underscoring the effectiveness of the fine-tuning process. CONCLUSION: Our findings underscore the potential of fine-tuning pre-trained scalp EEG models on ear-EEG data to enhance classification accuracy, particularly within an older population and using feature-based methods for transfer learning. This approach presents a promising avenue for ear-EEG analysis in sleep studies, offering new insights into the applicability of transfer learning across different populations and computational techniques. CLINICAL IMPACT: An enhanced ear-EEG method could be pivotal in remote monitoring settings, allowing for continuous, non-invasive sleep quality assessment in elderly patients with conditions like dementia or sleep apnea.
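A minimal sketch of this feature-based transfer-learning step is given below, assuming pre-computed per-epoch feature matrices; the file names, feature sets and hyperparameters are placeholders rather than those of the study. LightGBM's `init_model` argument is used to continue boosting from the scalp-trained model on the ear-EEG data.

```python
import numpy as np
import lightgbm as lgb

# Hypothetical feature matrices: rows are 30-s epochs, columns are
# spectral/temporal features; labels are sleep stages coded 0..4.
X_scalp, y_scalp = np.load("scalp_features.npy"), np.load("scalp_labels.npy")
X_ear, y_ear = np.load("ear_features.npy"), np.load("ear_labels.npy")

params = {"objective": "multiclass", "num_class": 5, "learning_rate": 0.05}

# 1) Pre-train on the large scalp-EEG corpus.
pretrained = lgb.train(params, lgb.Dataset(X_scalp, y_scalp), num_boost_round=300)

# 2) Fine-tune by continuing boosting on the small ear-EEG set.
finetuned = lgb.train(params, lgb.Dataset(X_ear, y_ear),
                      num_boost_round=50, init_model=pretrained)

stage_probs = finetuned.predict(X_ear)   # per-epoch class probabilities
```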
Subjects
Electroencephalography, Scalp, Humans, Electroencephalography/methods, Aged, Scalp/physiology, Aged, 80 and over, Male, Female, Sleep/physiology, Signal Processing, Computer-Assisted, Ear/physiology, Machine Learning, Polysomnography/methods
ABSTRACT
The rapidly increasing prevalence of debilitating breathing disorders, such as chronic obstructive pulmonary disease (COPD), calls for a meaningful integration of artificial intelligence (AI) into respiratory healthcare. Deep learning techniques are "data-hungry" whilst patient-based data is invariably expensive and time-consuming to record. To this end, we introduce a novel COPD-simulator, a physical apparatus with an easy-to-replicate design which enables rapid and effective generation of a wide range of COPD-like data from healthy subjects, for enhanced training of deep learning frameworks. To ensure the faithfulness of our domain-aware COPD surrogates, the generated waveforms are examined through both flow waveforms and photoplethysmography (PPG) waveforms (as a proxy for intrathoracic pressure) in terms of duty cycle, sample entropy, FEV1/FVC ratios and flow-volume loops. The proposed simulator operates on healthy subjects and is able to generate FEV1/FVC obstruction ratios ranging from greater than 0.8 to less than 0.2, mirroring values that can be observed in the full spectrum of real-world COPD. As a final stage of verification, a simple convolutional neural network is trained on surrogate data alone, and is used to accurately detect COPD in real-world patients. When training solely on surrogate data, and testing on real-world data, a comparison of true positive rate against false positive rate yields an area under the curve of 0.75, compared with 0.63 when training solely on real-world data.
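For context, the FEV1/FVC ratio referred to above can be obtained from a forced-expiration flow waveform by integrating flow to volume, as in the minimal sketch below; the toy exponential flow profile is purely illustrative and not simulator output.

```python
import numpy as np

def fev1_fvc_ratio(flow_l_per_s, fs):
    """FEV1/FVC from a single forced-expiration flow waveform (in L/s), assuming
    the trace starts at the onset of forced expiration: exhaled volumes are
    obtained by integrating flow over time."""
    volume = np.cumsum(flow_l_per_s) / fs                 # exhaled volume in litres
    fev1 = volume[min(len(volume) - 1, int(1.0 * fs))]    # volume exhaled after 1 s
    fvc = volume[-1]                                      # total exhaled volume
    return fev1 / fvc

fs = 100
t = np.arange(0, 6, 1 / fs)
flow = 8 * np.exp(-t / 1.5)      # toy exponentially decaying expiratory flow
print(round(fev1_fvc_ratio(flow, fs), 2))
```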
ABSTRACT
The Ear-ECG provides a continuous Lead I-like electrocardiogram (ECG) by measuring the potential difference related to heart activity via electrodes embedded within earphones. However, the significant increase in wearability and comfort enabled by Ear-ECG is often accompanied by a degradation in signal quality - an obstacle that is shared by the majority of wearable technologies. We aim to resolve this issue by introducing a Deep Matched Filter (Deep-MF) for the highly accurate detection of R-peaks in wearable ECG, thus enhancing the utility of Ear-ECG in real-world scenarios. The Deep-MF consists of an encoder stage, partially initialised with an ECG template, and an R-peak classifier stage. Through its operation as a Matched Filter, the encoder searches for matches with an ECG template in the input signal, prior to filtering these matches with the subsequent convolutional layers and selecting peaks corresponding to the ground-truth ECG. The latent representation of R-peak information is then fed into an R-peak classifier, whose output provides precise R-peak locations. The proposed Deep Matched Filter is evaluated using leave-one-subject-out cross-validation over 36 subjects with an age range of 18-75, with the Deep-MF outperforming existing algorithms for R-peak detection in noisy ECG. The Deep-MF achieves a median R-peak recall of 94.9% and a median precision of 91.2% across subjects. Overall, this Deep-Match framework serves as a valuable step forward for the real-world functionality of Ear-ECG and, through its interpretable operation, the acceptance of deep learning models in e-Health.
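A minimal PyTorch sketch of the matched-filter idea behind the encoder's initialisation is given below: a first convolutional layer whose kernel is set to an ECG template, so that its output is the cross-correlation of the input with that template. The template here is a random placeholder, and the full Deep-MF adds further convolutional layers and an R-peak classifier on top.

```python
import torch
import torch.nn as nn

def matched_filter_encoder(ecg_template: torch.Tensor) -> nn.Conv1d:
    """First encoder layer initialised with an ECG template, so that its output
    approximates matched filtering (cross-correlation with the template)."""
    kernel_len = ecg_template.numel()
    conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=kernel_len,
                     padding=kernel_len // 2, bias=False)
    with torch.no_grad():
        # Conv1d performs cross-correlation, so the template is copied in as-is.
        conv.weight.copy_(ecg_template.view(1, 1, -1))
    return conv

template = torch.hann_window(64) * torch.randn(64)   # placeholder ECG template
encoder = matched_filter_encoder(template)
scores = encoder(torch.randn(1, 1, 1000))            # match scores along the signal
```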
Subjects
Algorithms, Deep Learning, Electrocardiography, Signal Processing, Computer-Assisted, Humans, Electrocardiography/methods, Wearable Electronic Devices, Adult, Ear/physiology
ABSTRACT
The ear is well positioned to accommodate both brain and vital signs monitoring, via so-called hearable devices. Consequently, ear-based electroencephalography has recently garnered great interest. However, despite the considerable potential of hearable based cardiac monitoring, the biophysics and characteristic cardiac rhythm of ear-based electrocardiography (ECG) are not yet well understood. To this end, we map the cardiac potential on the ear through volume conductor modelling and measurements on multiple subjects. In addition, in order to demonstrate real-world feasibility of in-ear ECG, measurements are conducted throughout an extended simulated driving task. As a means of evaluation, the correspondence between the cardiac rhythms obtained via the ear-based and standard Lead I measurements, with respect to the shape and timing of the cardiac rhythm, is verified through three measures of similarity: the Pearson correlation, and measures of amplitude and timing deviations. A high correspondence between the cardiac rhythms obtained via the ear-based and Lead I measurements is rigorously confirmed through agreement between simulation and measurement, while the real-world feasibility was conclusively demonstrated through efficacious cardiac rhythm monitoring during prolonged driving. This work opens new avenues for seamless, hearable-based cardiac monitoring that extends beyond heart rate detection to offer cardiac rhythm examination in the community.
ABSTRACT
This work aims to classify physiological states using heart rate variability (HRV) features extracted from electrocardiograms recorded in the ears (ear-ECG). The physiological states considered in this work are: (a) normal breathing, (b) controlled slow breathing, and (c) mental exercises. Since both (b) and (c) cause higher variance in heartbeat intervals, and are therefore difficult to separate using HRV features alone, breathing-related features (SpO2 and mean breathing interval) from the ear photoplethysmogram (ear-PPG) are used to facilitate classification. This work: 1) proposes a scheme that, after initialization, automatically extracts R-peaks from low signal-to-noise ratio ear-ECG; 2) verifies the feasibility of extracting meaningful HRV features from ear-ECG; 3) quantitatively compares several ear-ECG sites; and 4) discusses the benefits of combining ear-ECG and ear-PPG features.
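The kind of time-domain HRV features referred to above can be computed from detected R-peak times as in the sketch below; the feature choices are generic examples rather than the exact set used in the work.

```python
import numpy as np

def hrv_features(r_peak_times_s: np.ndarray) -> dict:
    """Basic time-domain HRV features from R-peak times given in seconds."""
    rr = np.diff(r_peak_times_s)                        # R-R intervals
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                         # overall variability
        "rmssd": np.sqrt(np.mean(np.diff(rr) ** 2)),    # beat-to-beat variability
        "pnn50": np.mean(np.abs(np.diff(rr)) > 0.05),   # successive differences > 50 ms
    }

# Example: a slightly irregular rhythm around 60 beats per minute.
peaks = np.cumsum(1.0 + 0.05 * np.random.randn(120))
print(hrv_features(peaks))
```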
Subjects
Ear, Photoplethysmography, Heart Rate/physiology, Respiration, Electrocardiography
ABSTRACT
Sleep disorders are a prevalent problem among older adults, yet obtaining an accurate and reliable assessment of sleep quality can be challenging. Traditional polysomnography (PSG) is the gold standard for sleep staging, but is obtrusive, expensive, and requires expert assistance. To this end, we propose a minimally invasive, single-channel, single-ear-EEG automatic sleep staging method for older adults. The method employs features from the frequency, time, and structural complexity domains, which provide a robust classification of sleep stages from a standardised viscoelastic earpiece. Our method is verified on a dataset of older adults and achieves a kappa value of at least 0.61, indicating substantial agreement. This paves the way for a non-invasive, cost-effective, and portable alternative to traditional PSG for sleep staging.
Subjects
Sleep Wake Disorders, Sleep, Humans, Aged, Polysomnography/methods, Sleep Stages, Electroencephalography/methods
ABSTRACT
The success of deep learning methods has enabled many modern wearable health applications, but has also highlighted the critical caveat of their extremely data-hungry nature. While the widely explored wrist and finger photoplethysmography (PPG) sites are less affected, given the large available databases, this issue is prohibitive to exploring the full potential of novel recording locations such as in-ear wearables. To this end, we assess the feasibility of transfer learning from finger PPG to in-ear PPG in the context of deep learning for respiratory monitoring. This is achieved by introducing an encoder-decoder framework which is set up to extract respiratory waveforms from PPG, whereby simultaneously recorded gold-standard respiratory waveforms (capnography, impedance pneumography and air flow) are used as a training reference. Next, the data augmentation and training pipeline is examined for both training on finger PPG and the subsequent fine-tuning on in-ear PPG. The results indicate that, through training on two large finger PPG data sets (95 subjects) and then retraining on our own small in-ear PPG data set (6 subjects), the model achieves lower and more consistent test error for the prediction of the respiratory waveforms, compared to training on the small in-ear data set alone. This conclusively demonstrates the feasibility of transfer learning from finger PPG to in-ear PPG, leading to better generalisation across a wide range of respiratory rates.
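An illustrative PyTorch sketch of such an encoder-decoder set-up and the two-stage (pre-train, then fine-tune) procedure is given below; the layer sizes, kernel lengths and learning rates are assumptions and not the architecture used in the study.

```python
import torch
import torch.nn as nn

class RespEncoderDecoder(nn.Module):
    """Illustrative 1-D encoder-decoder mapping a PPG window to a respiratory waveform."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(16, 32, 15, stride=2, padding=7), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 15, stride=2, padding=7, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 15, stride=2, padding=7, output_padding=1),
        )

    def forward(self, ppg):
        return self.decoder(self.encoder(ppg))

model = RespEncoderDecoder()
resp = model(torch.randn(4, 1, 1000))   # batch of PPG windows -> respiratory waveforms
# Stage 1: pre-train on finger PPG with gold-standard respiration as the target.
# Stage 2: fine-tune on the small in-ear PPG set, typically at a lower learning rate.
finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
```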
Subjects
Fingers, Photoplethysmography, Humans, Photoplethysmography/methods, Feasibility Studies, Physiological Monitoring, Machine Learning
ABSTRACT
Accurate pulse-oximeter readings are critical for clinical decisions, especially when arterial blood-gas tests - the gold standard for determining oxygen saturation levels - are not available, such as when determining COVID-19 severity. Several studies demonstrate that pulse oxygen saturation estimated from photoplethysmography (PPG) introduces a racial bias, owing to the more profound scattering of light in subjects with darker skin caused by the increased presence of melanin. This leads to an overestimation of blood oxygen saturation in those with darker skin, which is more pronounced at low blood oxygen levels and can result in a patient not receiving potentially life-saving supplemental oxygen. This racial bias has been comprehensively studied in conventional finger pulse oximetry, but remains unexplored in other, less commonly used measurement sites such as in-ear pulse oximetry. Different measurement sites can have thinner epidermis compared with the finger and lower exposure to sunlight (such as is the case with the ear canal), and we hypothesise that this could reduce the bias introduced by skin tone on pulse oximetry. To this end, we compute SpO2 in different body locations, during rest and breath-holds, and compare with the index finger. The study involves a participant pool covering the six pigmentation categories of the Fitzpatrick skin pigmentation scale. These preliminary results indicate that locations characterized by cartilaginous, highly vascularized tissues may be less prone to the influence of melanin and pigmentation in the estimation of SpO2, paving the way for the development of non-discriminatory pulse oximetry devices.
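For reference, pulse oximeters conventionally estimate SpO2 from the "ratio of ratios" of the red and infrared PPG components via an empirical calibration, sketched below; the melanin-related bias discussed above enters through this calibration, and the coefficients shown are illustrative only, not those of any specific device.

```python
def spo2_ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc, a=110.0, b=25.0):
    """Classical empirical SpO2 estimate from the 'ratio of ratios' of the red and
    infrared PPG components; a and b are device-specific calibration constants
    (the values here are illustrative only)."""
    R = (red_ac / red_dc) / (ir_ac / ir_dc)
    return a - b * R

# Toy AC/DC component values for the red and infrared channels.
print(spo2_ratio_of_ratios(red_ac=0.02, red_dc=1.0, ir_ac=0.03, ir_dc=1.1))
```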
Subjects
Racism, Skin Pigmentation, Humans, Melanins, Oximetry/methods, Oxygen
ABSTRACT
At present, a mid-range microcontroller is capable of performing edge computing and can handle the computation of neural network kernel functions. This makes it possible to implement a complete end-to-end solution incorporating signal acquisition, digital signal processing, and machine learning for the classification of cardiac arrhythmias on a small wearable device. In this work, we describe the design and implementation of several classifiers for atrial fibrillation detection on a general-purpose ARM Cortex-M4 microcontroller. We used the CMSIS-DSP library, which supports Naïve Bayes and Support Vector Machine classifiers, with different kernel functions. We also developed Python scripts to automatically transfer the Python model (trained in Scikit-learn) to the C environment. To train and evaluate the models, we used part of the data from the PhysioNet/Computing in Cardiology Challenge 2020 and performed simple classification of atrial fibrillation based on heart-rate irregularity. The performance of the classifiers was tested on a general-purpose ARM Cortex-M4 microcontroller (STM32WB55RG). Our study reveals that among the tested classifiers, the SVM classifier with RBF kernel function achieves the highest accuracy of 96.9%, sensitivity of 98.4%, and specificity of 95.8%. The execution time of this classifier was 720 µs per recording. We also discuss the advantages of moving computing tasks to edge devices, including increased power efficiency of the system, improved patient data privacy and security, and reduced overall system operation costs. In addition, we highlight a problem with false-positive detection and unclear significance of device-detected atrial fibrillation.
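A minimal sketch of the desktop side of such a pipeline is shown below: training an RBF-kernel SVM in scikit-learn on heart-rate-irregularity features and exporting the fitted parameters that a Cortex-M implementation (for example via CMSIS-DSP's SVM support) would need. The feature files, feature set and hyperparameter values are placeholders, not those of the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-recording heart-rate irregularity features,
# e.g. [RMSSD, pNN50, Shannon entropy of RR intervals], with AF labels.
X = np.load("rr_features.npy")
y = np.load("af_labels.npy")          # 1 = atrial fibrillation, 0 = other rhythm

gamma = 0.5                           # fixed explicitly so it can be exported
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma=gamma))
clf.fit(X, y)

# Parameters needed to re-implement the scaling and RBF decision function on the MCU.
svm = clf.named_steps["svc"]
scaler = clf.named_steps["standardscaler"]
np.savez("svm_params.npz",
         support_vectors=svm.support_vectors_,
         dual_coef=svm.dual_coef_,
         intercept=svm.intercept_,
         gamma=gamma,
         scale_mean=scaler.mean_,
         scale_std=scaler.scale_)
```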
Subjects
Atrial Fibrillation, Humans, Atrial Fibrillation/diagnosis, Bayes Theorem, Algorithms, Heart Rate, Neural Networks, Computer
ABSTRACT
Modern data analytics applications are increasingly characterized by exceedingly large and multidimensional data sources. This represents a challenge for traditional machine learning models, as the number of model parameters needed to process such data grows exponentially with the data dimensions, an effect known as the curse of dimensionality. Recently, tensor decomposition (TD) techniques have shown promising results in reducing the computational costs associated with large-dimensional models while achieving comparable performance. However, such tensor models are often unable to incorporate the underlying domain knowledge when compressing high-dimensional models. To this end, we introduce a novel graph-regularized tensor regression (GRTR) framework, whereby domain knowledge about intramodal relations is incorporated into the model in the form of a graph Laplacian matrix. This is then used as a regularization tool to promote a physically meaningful structure within the model parameters. By virtue of tensor algebra, the proposed framework is shown to be fully interpretable, both coefficient-wise and dimension-wise. The GRTR model is validated in a multiway regression setting, where it is compared against competing models and shown to achieve improved performance at reduced computational cost. Detailed visualizations are provided to help readers gain an intuitive understanding of the employed tensor operations.
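The role of the graph Laplacian as a regulariser can be seen in the simplified (matrix, rather than tensor) sketch below, where a Laplacian penalty encourages coefficients connected in the graph to take similar values; this illustrates the principle only and is not the GRTR algorithm itself.

```python
import numpy as np

def laplacian_regularised_regression(X, y, L, lam=1.0, gamma=1.0):
    """Simplified analogue of graph-regularised regression: minimise
    ||y - Xw||^2 + lam*||w||^2 + gamma * w^T L w, where L is the graph Laplacian
    encoding known relations between coefficients (closed-form solution)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d) + gamma * L
    return np.linalg.solve(A, X.T @ y)

# Chain graph over 5 coefficients: neighbouring coefficients are encouraged to be similar.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(W.sum(axis=1)) - W
X = np.random.randn(100, 5)
w_true = np.array([1.0, 1.1, 0.9, 1.0, 1.2])
y = X @ w_true + 0.1 * np.random.randn(100)
print(laplacian_regularised_regression(X, y, L, lam=0.1, gamma=5.0))
```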
ABSTRACT
Duplex ultrasound (DUS) is the most widely used method for surveillance of arteriovenous fistulae (AVF) created for dialysis. However, DUS is poor at predicting AVF outcomes and there is a need for novel methods that can more accurately evaluate multidirectional AVF flow. In this study, we aimed to evaluate the feasibility of detecting AVF stenosis using a novel method combining tensor-decomposition of B-mode ultrasound cine loops (videos) of blood flow and machine learning classification. Classification of stenosis was based on the DUS assessment of blood flow volume, vessel diameter size, flow velocity, and spectral waveform features. Real-time B-mode cine loops of the arterial inflow, anastomosis, and venous outflow of the AVFs were analysed. Tensor decompositions were computed from both the 'full-frame' (whole-image) videos and 'cropped' videos (to include areas of blood flow only). The resulting outputs were labelled for the presence of stenosis, as per the DUS findings, and used as a set of features for classification using a Long Short-Term Memory (LSTM) neural network. A total of 61 out of 66 available videos were used for analysis. The whole-image classifier failed to beat random guessing, achieving a mean area under the receiver operating characteristic curve (AUROC) of 0.49 (CI 0.48 to 0.50). In contrast, the 'cropped' video classifier performed better with a mean AUROC of 0.82 (CI 0.66 to 0.96), showing promising predictive power despite the small size of the dataset. The combined application of tensor decomposition and machine learning is promising for the detection of AVF stenosis and warrants further investigation.
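A minimal sketch of the tensor-decomposition step, assuming a cropped cine loop stored as a (frames x height x width) array: a CP (PARAFAC) decomposition via TensorLy yields a temporal factor matrix that can serve as the feature sequence for an LSTM classifier. The file name and rank are placeholders, and the exact decomposition used in the study may differ.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical B-mode cine loop: (frames, height, width), cropped to the flow region.
video = np.load("avf_cine_loop.npy").astype(float)

# Rank-R CP decomposition: factors[0] has shape (frames, R) and captures the
# temporal dynamics of each spatial component.
weights, factors = parafac(tl.tensor(video), rank=8, n_iter_max=200)
temporal_features = tl.to_numpy(factors[0])   # sequence fed to an LSTM classifier
print(temporal_features.shape)
```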
Subjects
Arteriovenous Fistula, Arteriovenous Shunt, Surgical, Humans, Renal Dialysis/methods, Constriction, Pathologic/diagnostic imaging, Blood Flow Velocity, Machine Learning
ABSTRACT
Photoplethysmography is a key sensing technology which is used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities like sleep and exercise. Yet, wearable photoplethysmography has potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
Subjects
Photoplethysmography, Wearable Electronic Devices, Fitness Trackers, Signal Processing, Computer-Assisted, Heart Rate/physiology
ABSTRACT
Graph neural networks (GNNs) tend to suffer from high computation costs due to the exponentially increasing scale of graph data and a large number of model parameters, which restricts their utility in practical applications. To this end, some recent works focus on sparsifying GNNs (including graph structures and model parameters) with the lottery ticket hypothesis (LTH) to reduce inference costs while maintaining performance levels. However, the LTH-based methods suffer from two major drawbacks: 1) they require exhaustive and iterative training of dense models, resulting in an extremely large training computation cost, and 2) they only trim graph structures and model parameters but ignore the node feature dimension, where vast redundancy exists. To overcome the above limitations, we propose a comprehensive graph gradual pruning framework termed CGP. This is achieved by designing a during-training graph pruning paradigm to dynamically prune GNNs within one training process. Unlike LTH-based methods, the proposed CGP approach requires no retraining, which significantly reduces the computation costs. Furthermore, we design a co-sparsifying strategy to comprehensively trim all three core elements of GNNs: graph structures, node features, and model parameters. Next, to refine the pruning operation, we introduce a regrowth process into our CGP framework, to re-establish the pruned but important connections. The proposed CGP is evaluated over a node classification task across six GNN architectures, including the shallow models graph convolutional network (GCN) and graph attention network (GAT); the shallow-but-deep-propagation models simple graph convolution (SGC) and approximate personalized propagation of neural predictions (APPNP); and the deep models GCN via initial residual and identity mapping (GCNII) and residual GCN (ResGCN), on a total of 14 real-world graph datasets, including large-scale graph datasets from the challenging Open Graph Benchmark (OGB). Experiments reveal that the proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of the existing methods.
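The prune-and-regrow idea on the model-parameter side can be illustrated with the simplified sketch below, which drops the smallest-magnitude active weights and reactivates pruned positions with the largest gradients; CGP additionally sparsifies graph structures and node features, which this sketch does not attempt.

```python
import torch

def prune_and_regrow(weight: torch.Tensor, grad: torch.Tensor,
                     mask: torch.Tensor, prune_frac=0.05, regrow_frac=0.01):
    """Simplified prune-and-regrow step on one weight matrix: remove the
    smallest-magnitude active weights, then reactivate pruned positions with the
    largest gradient magnitude (the 'regrowth' of important connections)."""
    active = mask.bool()
    # Prune: among active weights, drop the smallest |w|.
    k_prune = int(prune_frac * active.sum())
    if k_prune > 0:
        w_active = weight.abs().masked_fill(~active, float("inf"))
        drop = torch.topk(w_active.flatten(), k_prune, largest=False).indices
        mask.view(-1)[drop] = 0.0
    # Regrow: among inactive weights, reactivate the largest |grad|.
    inactive = ~mask.bool()
    k_grow = int(regrow_frac * inactive.sum())
    if k_grow > 0:
        g_inactive = grad.abs().masked_fill(~inactive, -float("inf"))
        grow = torch.topk(g_inactive.flatten(), k_grow).indices
        mask.view(-1)[grow] = 1.0
    return mask

w = torch.randn(64, 64, requires_grad=True)
mask = torch.ones_like(w)
loss = (w * mask).pow(2).sum()     # toy loss standing in for the GNN training loss
loss.backward()
mask = prune_and_regrow(w.detach(), w.grad, mask)
```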
ABSTRACT
Monitoring diabetes saves lives. To this end, we introduce a novel, unobtrusive, and readily deployable in-ear device for the continuous and non-invasive measurement of blood glucose levels (BGLs). The device is equipped with a low-cost, commercially available pulse oximeter whose infrared wavelength (880 nm) is used for the acquisition of photoplethysmography (PPG). For rigor, we considered a full range of diabetic conditions (non-diabetic, pre-diabetic, type I diabetic, and type II diabetic). Recordings spanned nine different days, starting in the morning while fasting and continuing for at least two hours after eating a carbohydrate-rich breakfast. The BGLs were estimated from PPG using a suite of regression-based machine learning models, which were trained on characteristic features of PPG cycles pertaining to high and low BGLs. The analysis shows that, as desired, an average of 82% of the BGLs estimated from PPG lie in region A of the Clarke error grid (CEG) plot, with 100% of the estimated BGLs in the clinically acceptable CEG regions A and B. These results demonstrate the potential of the ear canal as a site for non-invasive blood glucose monitoring.
Subjects
Blood Glucose, Photoplethysmography, Photoplethysmography/methods, Blood Glucose Self-Monitoring, Oximetry/methods, Oxygen
ABSTRACT
A class of doubly stochastic graph shift operators (GSOs) is proposed, which is shown to exhibit: (i) lower and upper L2-boundedness for locally stationary random graph signals, (ii) L2-isometry for i.i.d. random graph signals as the incoming neighbourhood size of the vertices increases asymptotically, and (iii) preservation of the mean of any graph signal - all prerequisites for reliable graph neural networks. These properties are obtained through a statistical consistency analysis of the proposed graph shift operator, and by exploiting the dual role of the doubly stochastic GSO as a Markov (diffusion) matrix and as an unbiased expectation operator. For generality, we consider directed graphs which exhibit asymmetric connectivity matrices. The proposed approach is validated through an example on the estimation of a vector field.
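One standard way to obtain a doubly stochastic matrix from a non-negative (possibly asymmetric) connectivity matrix is Sinkhorn-Knopp normalisation, sketched below; this is offered as an illustration of the doubly stochastic property and its mean-preservation consequence, not as the specific GSO construction proposed in the paper.

```python
import numpy as np

def sinkhorn_doubly_stochastic(A, n_iter=500, eps=1e-12):
    """Sinkhorn-Knopp normalisation: alternately rescale rows and columns of a
    non-negative connectivity matrix until it is (approximately) doubly stochastic."""
    S = A.astype(float) + eps               # small offset keeps the iteration well defined
    for _ in range(n_iter):
        S /= S.sum(axis=1, keepdims=True)   # rows sum to 1
        S /= S.sum(axis=0, keepdims=True)   # columns sum to 1
    return S

A = np.random.rand(6, 6)                    # asymmetric (directed) connectivity
S = sinkhorn_doubly_stochastic(A)
x = np.random.randn(6)
print(S.sum(axis=0), S.sum(axis=1))         # both close to 1
print(x.mean(), (S @ x).mean())             # the shift preserves the mean of the signal
```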
Subjects
Algorithms, Neural Networks, Computer, Diffusion
ABSTRACT
Objective: Quality of intraoperative teamwork may have a direct impact on patient outcomes. Heart rate variability (HRV) synchrony may be useful for objective assessment of team cohesion and good teamwork. The primary aim of this study was to investigate the feasibility of using HRV synchrony in surgical teams. Secondary aims were to investigate the association of HRV synchrony with length of procedure (LOP), complications, number of intraoperative glitches and length of stay (LOS). We also investigated the correlation between HRV synchrony and team familiarity, pre- and intraoperative stress levels (STAI questionnaire), NOTECHS score and experience of team members. Methods: Ear, nose and throat (ENT) and vascular surgeons (consultant and registrar team members) were recruited into the study. Baseline demographics including level of team members' experience were gathered before each procedure. For each procedure, continuous electrocardiogram (ECG) recording was performed and questionnaires regarding pre- and intraoperative stress levels and non-technical skills (NOTECHS) scores were collected for each team member. An independent observer documented the time of each intraoperative glitch. Statistical analysis was conducted using stepwise multiple linear regression. Results: Four HRV synchrony metrics that may be markers of efficient surgical collaboration were identified from the data: 1. number of HRV synchronies per hour of procedure, 2. number of HRV synchrony trends per hour of procedure, 3. length of HRV synchrony trends per hour of procedure, 4. area under the HRV synchrony trend curve per hour of procedure. LOP was inversely correlated with number of HRV synchrony trends per hour of procedure (p < 0.0001), area under HRV synchrony trend curve per hour of procedure (p = 0.001), length of HRV synchrony trends per hour of procedure (p = 0.002) and number of HRV synchronies per hour of procedure (p < 0.0001). LOP was positively correlated with FS (p = 0.043; R = 0.358) and intraoperative STAI score of the whole team (p = 0.007; R = 0.493). Conclusions: HRV synchrony metrics within operating teams may be used as an objective marker to quantify surgical teamwork. We have shown that LOP is shorter when the intraoperative surgical teams' HRV is more synchronised.
Subjects
Heart Rate, Humans, Pilot Projects
ABSTRACT
The feasibility of using in-ear SpO2 to track cognitive workload induced by gaming is investigated. This is achieved by examining temporal variations in cognitive workload through the game Geometry Dash, with 250 trials across 7 subjects. The relationship between performance and cognitive load in Dark Souls III boss fights is also investigated, followed by a comparison of the cognitive workload responses across three different genres of game. A robust decrease in in-ear SpO2 is observed in response to cognitive workload induced by gaming, which is consistent with existing results from memory tasks. The results tentatively suggest that in-ear SpO2 may be able to distinguish cognitive workload alone, whereas heart rate and breathing rate respond similarly to both cognitive workload and stress. This study demonstrates the feasibility of low-cost wearable cognitive workload tracking in gaming with in-ear SpO2, with applications to the play testing of games and biofeedback in games of the future.
Subjects
Video Games, Workload, Cognition, Heart Rate, Humans
ABSTRACT
The extension of sample entropy methodologies to multivariate signals has received considerable attention, with traditional univariate entropy methods, such as sample entropy (SampEn) and fuzzy entropy (FuzzyEn), introduced to measure the complexity of chaotic systems in terms of irregularity and randomness. The corresponding multivariate methods, multivariate multiscale sample entropy (MMSE) and multivariate multiscale fuzzy entropy (MMFE), were developed to explore the structural richness within signals at high scales. However, the requirement for high scales limits the selection of the embedding dimension, and thus the performance is unavoidably restricted by the trade-off between the data size and the required scale. More importantly, the scale of interest varies across situations, yet little is known about the optimal setting of the scale range in MMSE and MMFE. To this end, we extend the univariate cosine similarity entropy (CSE) method to the multivariate case, and show that the resulting multivariate multiscale cosine similarity entropy (MMCSE) is capable of quantifying structural complexity through the degree of self-correlation within signals. The proposed approach relaxes the prohibitive constraints between the embedding dimension and data length, and aims to quantify the structural complexity based on the degree of self-correlation at low scales. The proposed MMCSE is applied to the examination of the complex and quaternion circularity properties of signals with varying correlation behaviors, and simulations show the MMCSE outperforming the standard methods, MMSE and MMFE.
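A minimal univariate sketch of the cosine-similarity-entropy recipe that MMCSE extends to the multivariate, multiscale case: embed the signal, compute angular distances between normalised embedding vectors, and take the binary Shannon entropy of the proportion of similar pairs. Parameter values and implementation details are simplified relative to the published method.

```python
import numpy as np

def cosine_similarity_entropy(x, m=3, tau=1, r=0.1):
    """Minimal univariate cosine-similarity-entropy sketch: embed the signal,
    measure angular distances between embedding vectors, and compute the binary
    Shannon entropy of the proportion of 'similar' pairs."""
    N = len(x) - (m - 1) * tau
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(N)])
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cos = np.clip(emb @ emb.T, -1.0, 1.0)
    ang = np.arccos(cos) / np.pi                   # angular distance in [0, 1]
    iu = np.triu_indices(N, k=1)
    b = np.mean(ang[iu] < r)                       # proportion of similar pairs
    b = np.clip(b, 1e-12, 1 - 1e-12)
    return -(b * np.log2(b) + (1 - b) * np.log2(1 - b))

print(cosine_similarity_entropy(np.random.randn(500)))
```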
ABSTRACT
An ability to extract detailed spirometry-like breathing waveforms from wearable sensors promises to greatly improve respiratory health monitoring. Photoplethysmography (PPG) has been researched in depth for estimation of respiration rate, given that it varies with respiration through overall intensity, pulse amplitude and pulse interval. We compare and contrast the extraction of these three respiratory modes from both the ear canal and finger, and show a marked improvement in the respiratory power for respiration-induced intensity variations and pulse amplitude variations when recording from the ear canal. We next employ a data-driven multi-scale method, noise-assisted multivariate empirical mode decomposition (NA-MEMD), which allows for simultaneous analysis of all three respiratory modes, to extract detailed respiratory waveforms from in-ear PPG. For rigour, we considered in-ear PPG recordings from healthy subjects, both young and older; from patients with chronic obstructive pulmonary disease (COPD) and idiopathic pulmonary fibrosis (IPF); and from healthy subjects with artificially obstructed breathing. Specific in-ear PPG waveform changes are observed for COPD, such as a decreased inspiratory duty cycle and an increased inspiratory magnitude, when compared with expiratory magnitude. These differences are used to classify COPD from healthy and IPF waveforms with a sensitivity of 87% and an overall accuracy of 92%. Our findings indicate the promise of in-ear PPG for COPD screening and unobtrusive respiratory monitoring in ambulatory scenarios and in consumer wearables.
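For illustration, the three respiration-modulated PPG modes mentioned above can be estimated beat-by-beat as in the sketch below; the paper itself extracts the respiratory waveform with NA-MEMD, so this is only a simple proxy with assumed peak-detection settings and a synthetic test signal.

```python
import numpy as np
from scipy.signal import find_peaks

def respiratory_modes(ppg, fs):
    """Simple per-beat estimates of the three respiration-modulated PPG modes:
    overall intensity, pulse amplitude and pulse interval."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))      # systolic peaks
    troughs, _ = find_peaks(-ppg, distance=int(0.4 * fs))   # diastolic troughs
    intensity = np.array([ppg[max(0, p - int(0.5 * fs)):p + int(0.5 * fs)].mean()
                          for p in peaks])                  # local baseline per beat
    amplitude = np.array([ppg[p] - ppg[troughs[troughs < p][-1]]
                          for p in peaks if np.any(troughs < p)])   # pulse amplitude
    interval = np.diff(peaks) / fs                          # beat-to-beat pulse interval
    return intensity, amplitude, interval

fs = 100
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)  # pulse + respiration
print([len(v) for v in respiratory_modes(ppg, fs)])
```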