1.
J Cachexia Sarcopenia Muscle ; 14(2): 847-859, 2023 04.
Article En | MEDLINE | ID: mdl-36775841

BACKGROUND: Personalized survival prediction is important in gastric cancer patients after gastrectomy and should be based on large datasets with many variables, including time-varying factors in nutrition and body morphometry. One year after gastrectomy may be the optimal time to predict long-term survival because most patients experience significant nutritional change, muscle loss, and postoperative changes in the first year after gastrectomy. We aimed to develop a personalized prognostic artificial intelligence (AI) model to predict 5-year survival at 1 year after gastrectomy. METHODS: From a prospectively built gastric surgery registry at a tertiary hospital, 4025 gastric cancer patients (mean age 56.1 ± 10.9, 36.2% females) who were treated with gastrectomy and survived more than a year were selected. Eighty-nine variables, including clinical and derived time-varying variables, were used as input variables. We proposed a multi-tree extreme gradient boosting (XGBoost) algorithm, an ensemble AI algorithm based on 100 datasets derived from repeated five-fold cross-validation. Internal validation was performed on a split dataset (n = 1121) by comparing our proposed model with six other AI algorithms. External validation was performed on 590 patients from other hospitals (mean age 55.9 ± 11.2, 37.3% females). We performed a sensitivity analysis to analyze the effect of the nutritional and fat/muscle indices using a leave-one-out method. RESULTS: In the internal validation, our proposed model achieved an AUROC of 0.8237, outperforming the other AI algorithms (0.7988-0.8165), with 80.00% sensitivity, 72.34% specificity, and 76.17% balanced accuracy. In the external validation, our model achieved an AUROC of 0.8903, with 86.96% sensitivity, 74.60% specificity, and 80.78% balanced accuracy. Sensitivity analysis demonstrated that the nutritional and fat/muscle indices influenced the balanced accuracy by 0.31% and 6.29% in the internal and external validation sets, respectively.
Our developed AI model was published on a website for personalized survival prediction. CONCLUSIONS: Our proposed AI model performs well in predicting 5-year survival at 1 year after gastric cancer surgery. The nutritional and fat/muscle indices contributed to improving the prediction performance of our AI model.
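The ensemble described above trains one learner per split of repeated five-fold cross-validation (20 repetitions x 5 folds = 100 datasets). A minimal plain-Python sketch of the split generation, with the actual XGBoost learners omitted:

```python
import random

def repeated_kfold_indices(n_samples, k=5, repeats=20, seed=0):
    """Return (train_idx, valid_idx) pairs for `repeats` shuffled k-fold splits.

    20 repeats x 5 folds = 100 train/validation datasets, matching the
    ensemble size described in the abstract.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    splits = []
    for _ in range(repeats):
        rng.shuffle(indices)
        folds = [indices[i::k] for i in range(k)]  # k interleaved folds
        for i in range(k):
            valid = folds[i]
            train = [x for j, f in enumerate(folds) if j != i for x in f]
            splits.append((train, valid))
    return splits

splits = repeated_kfold_indices(100)
```

In the full method, one gradient-boosted model would be fit per `(train, valid)` pair and the 100 models' predictions aggregated.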


Stomach Neoplasms , Female , Humans , Male , Prognosis , Stomach Neoplasms/surgery , Artificial Intelligence , Gastrectomy/adverse effects , Gastrectomy/methods , Algorithms
2.
J Med Virol ; 95(2): e28462, 2023 02.
Article En | MEDLINE | ID: mdl-36602055

One of the effective ways to minimize the spread of COVID-19 is to diagnose it as early as possible, before the onset of symptoms. Moreover, if the infection can be diagnosed simply using a smartwatch, the effectiveness of prevention will be greatly increased. In this study, we aimed to develop a deep learning model to diagnose COVID-19 before the onset of symptoms using heart rate (HR) data obtained from a smartwatch. For the diagnosis, we proposed a transformer model that learns presymptomatic HR variability patterns by tracking relationships in sequential HR data. In the cross-validation (CV) results from the COVID-19 unvaccinated patients, our proposed deep learning model exhibited high accuracy metrics: sensitivity of 84.38%, specificity of 85.25%, accuracy of 84.85%, balanced accuracy of 84.81%, and area under the receiver operating characteristic curve (AUROC) of 0.8778. Furthermore, we validated our model using multiple external datasets including healthy subjects, COVID-19 patients, and vaccinated patients. In the external healthy subject group, our model also achieved a high specificity of 77.80%. In the external COVID-19 unvaccinated patient group, our model provided accuracy metrics similar to those from the CV: balanced accuracy of 87.23% and AUROC of 0.8897. In the COVID-19 vaccinated patients, the balanced accuracy and AUROC dropped to 66.67% and 0.8072, respectively. The first finding of this study is that our proposed deep learning model can simply and accurately diagnose COVID-19 using HRs obtained from a smartwatch before the onset of symptoms. The second finding is that the model trained on unvaccinated patients may provide less accurate diagnostic performance for vaccinated patients. The last finding is that a model trained in a certain period of time may provide degraded diagnostic performance as the virus continues to mutate.
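The transformer's core mechanism for "tracking relationships in sequential HR data" is self-attention. The NumPy sketch below shows single-head scaled dot-product attention over a sequence of HR embeddings; the random projection weights and the dimensions are illustrative, not the paper's architecture:

```python
import numpy as np

def self_attention(x, d_k=8, seed=0):
    """Single-head scaled dot-product attention over a sequence of HR features.

    x: (seq_len, d_model) array, e.g. one embedding per HR sample.
    Returns the attended sequence and the (seq_len, seq_len) weight matrix.
    """
    rng = np.random.default_rng(seed)
    d_model = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise sample affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

hr = np.random.default_rng(1).standard_normal((60, 16))  # 60 HR samples, 16-dim embedding
out, attn = self_attention(hr)
```

Each row of `attn` tells how strongly one HR sample attends to every other sample in the window, which is what lets the model relate distant points in the sequence.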


COVID-19 , Deep Learning , Humans , Heart Rate , ROC Curve , Tomography, X-Ray Computed/methods
3.
Comput Methods Programs Biomed ; 226: 107126, 2022 Nov.
Article En | MEDLINE | ID: mdl-36130416

BACKGROUND AND OBJECTIVE: Recently, various algorithms have been introduced that use wrist-worn photoplethysmography (PPG) to provide highly accurate instantaneous heart rate (HR) estimation, including during high-intensity exercise. Most studies focus on using acceleration and/or gyroscope signals as the motion artifact (MA) reference, which attenuates or cancels out noise from the MA-corrupted PPG signals. We aim to pave the way toward appropriate MA reference selection for MA cancelation in PPG. METHODS: We investigated how the acceleration and gyroscope reference signals correlate with the MAs of the distorted PPG signals and derived, both mathematically and experimentally, an adaptive MA reference selection approach. We applied our algorithm to five state-of-the-art (SOTA) methods for the performance evaluation. In addition, we compared four MA reference selection approaches, i.e., using the acceleration signal only, the gyroscope signal only, both signals, and our proposed adaptive selection. RESULTS: When applied to 47 PPG recordings acquired during intensive physical exercise from two different datasets, our proposed adaptive MA reference selection method provided higher accuracy than the other MA selection approaches for all five SOTA methods. CONCLUSION: Our proposed adaptive MA reference selection approach can be used in other MA cancelation methods and reduces the HR estimation error. SIGNIFICANCE: We believe that this study helps researchers use acceleration and gyroscope signals as accurate MA references, which eventually improves the overall performance of HR estimation across the various algorithms developed by research groups.
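One simple way to realize adaptive MA reference selection is to pick, per window, whichever candidate signal correlates most strongly with the corrupted PPG. The correlation criterion below is a simplification of the paper's derivation, not its exact rule:

```python
import numpy as np

def select_ma_reference(ppg, candidates):
    """Pick the motion-artifact reference most correlated with the PPG window.

    candidates: dict name -> signal (same length as ppg).
    Returns the chosen reference name and its absolute correlation.
    """
    best_name, best_r = None, -1.0
    for name, sig in candidates.items():
        r = abs(np.corrcoef(ppg, sig)[0, 1])
        if r > best_r:
            best_name, best_r = name, r
    return best_name, best_r

t = np.linspace(0, 8, 800)                       # 8 s window at 100 Hz
motion = np.sin(2 * np.pi * 2.0 * t)             # 2 Hz arm-swing artifact
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.8 * motion # pulse at 90 bpm plus MA
refs = {"accel": motion + 0.1, "gyro": np.cos(2 * np.pi * 5.0 * t)}
name, r = select_ma_reference(ppg, refs)
```

Here the accelerometer trace shares the 2 Hz artifact with the PPG, so it is selected as the MA reference for this window.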


Artifacts , Photoplethysmography , Photoplethysmography/methods , Signal Processing, Computer-Assisted , Motion , Heart Rate/physiology , Algorithms , Acceleration
4.
Sci Rep ; 12(1): 7141, 2022 05 03.
Article En | MEDLINE | ID: mdl-35504945

Photoplethysmography imaging (PPGI) sensors have attracted a significant amount of attention as they enable the remote monitoring of heart rates (HRs) and thus do not require any additional devices to be worn on fingers or wrists. In this study, we mounted PPGI sensors on a robot for active and autonomous HR (R-AAH) estimation. We proposed an algorithm that provides accurate HR estimation, which can be performed in real time using vision and robot manipulation algorithms. By simplifying the extraction of facial skin images using saturation (S) values in the HSV color space, and selecting pixels based on the most frequent S value within the face image, we achieved a reliable HR assessment. The results of the proposed algorithm using the R-AAH method were evaluated by rigorous comparison with the results of existing algorithms on the UBFC-RPPG dataset (n = 42). The proposed algorithm yielded an average absolute error (AAE) of 0.71 beats per minute (bpm). The developed algorithm is simple, with a processing time of less than 1 s (275 ms for an 8-s window). The algorithm was further validated on our own dataset (BAMI-RPPG dataset [n = 14]) with an AAE of 0.82 bpm.
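The saturation-based pixel selection described above can be sketched as follows: find the most frequent S value within the face region and keep the pixels near it. The ±tol band is an assumed implementation detail, not a value stated in the abstract:

```python
import numpy as np

def skin_mask_from_saturation(s_channel, tol=10):
    """Select pixels whose HSV saturation lies near the most frequent S value.

    s_channel: 2-D uint8 array (0-255), the S plane of a face crop in HSV.
    Returns a boolean mask of candidate skin pixels.
    """
    hist = np.bincount(s_channel.ravel(), minlength=256)
    mode = int(hist.argmax())                      # most frequent S value
    return np.abs(s_channel.astype(int) - mode) <= tol

# Toy face crop: uniform "skin" saturation with one outlier pixel
s = np.full((4, 4), 120, dtype=np.uint8)
s[0, 0] = 10
mask = skin_mask_from_saturation(s)
```

The rPPG signal would then be the per-frame mean of the green (or POS-projected) values over `mask`.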


Algorithms , Photoplethysmography , Diagnostic Imaging , Face , Heart Rate/physiology , Photoplethysmography/methods
5.
J Med Internet Res ; 24(1): e34415, 2022 01 03.
Article En | MEDLINE | ID: mdl-34982041

BACKGROUND: Detection and quantification of intra-abdominal free fluid (ie, ascites) on computed tomography (CT) images are essential processes for finding emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites will be beneficial. OBJECTIVE: We aimed to develop an artificial intelligence (AI) algorithm for the automatic detection and quantification of ascites simultaneously using a single deep learning model (DLM). METHODS: We developed 2D DLMs based on deep residual U-Net, U-Net, bidirectional U-Net, and recurrent residual U-Net (R2U-Net) algorithms to segment areas of ascites on abdominopelvic CT images. Based on segmentation results, the DLMs detected ascites by classifying CT images into ascites images and nonascites images. The AI algorithms were trained using 6337 CT images from 160 subjects (80 with ascites and 80 without ascites) and tested using 1635 CT images from 40 subjects (20 with ascites and 20 without ascites). The performance of the AI algorithms was evaluated for diagnostic accuracy of ascites detection and for segmentation accuracy of ascites areas. Of these DLMs, we proposed an AI algorithm with the best performance. RESULTS: The segmentation accuracy was the highest for the deep residual U-Net model with a mean intersection over union (mIoU) value of 0.87, followed by U-Net, bidirectional U-Net, and R2U-Net models (mIoU values of 0.80, 0.77, and 0.67, respectively). The detection accuracy was the highest for the deep residual U-Net model (0.96), followed by U-Net, bidirectional U-Net, and R2U-Net models (0.90, 0.88, and 0.82, respectively). The deep residual U-Net model also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS: We propose a deep residual U-Net-based AI algorithm for automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
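The segmentation metric reported above, intersection over union, and the segmentation-to-detection step can be sketched as follows; the minimum-area detection rule is an assumption, not the paper's exact criterion:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks (1 = ascites)."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                 # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, target).sum() / union

def detect_ascites(pred_mask, min_pixels=1):
    """Classify a slice as 'ascites' when the segmented area is non-trivial."""
    return pred_mask.sum() >= min_pixels

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # model output
gt = np.zeros((8, 8), dtype=int);  gt[3:7, 3:7] = 1      # ground truth
score = iou(pred, gt)
```

Averaging `iou` over all test slices gives the mIoU values (e.g. 0.87 for the deep residual U-Net) reported in the abstract.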


Artificial Intelligence , Deep Learning , Algorithms , Ascites/diagnostic imaging , Emergency Service, Hospital , Humans , Tomography, X-Ray Computed
6.
Front Physiol ; 12: 778720, 2021.
Article En | MEDLINE | ID: mdl-34912242

Artificial intelligence (AI) technologies have been applied in various medical domains to predict patient outcomes with high accuracy. As AI becomes more widely adopted, the problem of model bias is increasingly apparent. In this study, we investigate the model bias that can occur when training a model using datasets for only one particular gender and aim to present new insights into the bias issue. For the investigation, we considered an AI model that predicts severity at an early stage based on the medical records of coronavirus disease (COVID-19) patients. For 5,601 confirmed COVID-19 patients, we used 37 medical records, namely, basic patient information, physical index, initial examination findings, clinical findings, comorbidity diseases, and general blood test results at an early stage. To investigate the gender-based AI model bias, we trained and evaluated two separate models: one trained using only the male group, and the other using only the female group. When the model trained on the male-group data was applied to the female testing data, the overall accuracy decreased: sensitivity from 0.93 to 0.86, specificity from 0.92 to 0.86, accuracy from 0.92 to 0.86, balanced accuracy from 0.93 to 0.86, and area under the curve (AUC) from 0.97 to 0.94. Similarly, when the model trained on the female-group data was applied to the male testing data, the overall accuracy again decreased: sensitivity from 0.97 to 0.90, specificity from 0.96 to 0.91, accuracy from 0.96 to 0.91, balanced accuracy from 0.96 to 0.90, and AUC from 0.97 to 0.95. Furthermore, when we evaluated each gender-dependent model with test data from the same gender used for training, the resultant accuracy was still lower than that of the unbiased model.

7.
Sci Rep ; 11(1): 23534, 2021 12 07.
Article En | MEDLINE | ID: mdl-34876644

The aim of this study is to develop an artificial intelligence (AI) algorithm based on a deep learning model to predict mortality using the Abbreviated Injury Scale (AIS). The performance of the conventional anatomic Injury Severity Score (ISS) system in predicting in-hospital mortality is still limited. AIS data of 42,933 patients registered in the Korean trauma data bank from four Korean regional trauma centers were collected. After excluding patients who were younger than 19 years old and those who died within six hours of arrival, we included 37,762 patients, of whom 36,493 (96.6%) survived and 1269 (3.4%) died. To enhance the AI model performance, we reduced the AIS codes to 46 input values by organizing them according to organ location (Region-46). The total AIS and the six anatomic-region categories of the ISS system (Region-6) were used as comparison input features. The AI models were compared with the conventional ISS and New ISS (NISS) systems. We evaluated the performance of the 12 combinations of features and models. The highest accuracy (85.05%) corresponded to Region-46 with a deep neural network (DNN), followed by Region-6 with DNN (83.62%), AIS with DNN (81.27%), ISS-16 (80.50%), NISS-16 (79.18%), NISS-25 (77.09%), and ISS-25 (70.82%). The highest AUROC (0.9084) corresponded to Region-46 with DNN, followed by Region-6 with DNN (0.9013), AIS with DNN (0.8819), ISS (0.8709), and NISS (0.8681). The proposed deep learning scheme with feature combination exhibited higher accuracy metrics, such as balanced accuracy and AUROC, than the conventional ISS and NISS systems. We expect that our trial will serve as a cornerstone for more complex combination models.
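The Region-6 input construction can be illustrated by aggregating a patient's AIS codes into one worst-severity value per ISS body region, from which the conventional ISS also follows. The region/severity encoding below is schematic; the paper's Region-46 organ-location grouping is not reproduced here:

```python
def ais_to_region_features(ais_codes, n_regions=6):
    """Aggregate a patient's AIS codes into per-region worst-severity features.

    ais_codes: list of (region, severity) pairs, region 1..n_regions,
    severity 1-6 as in the AIS. Illustrates the idea behind the Region-6
    input features.
    """
    features = [0] * n_regions
    for region, severity in ais_codes:
        idx = region - 1
        features[idx] = max(features[idx], severity)  # keep worst injury per region
    return features

def iss(features):
    """Conventional ISS: sum of squares of the three highest region severities.

    (The special case AIS 6 -> ISS 75 is omitted for brevity.)
    """
    top3 = sorted(features, reverse=True)[:3]
    return sum(s * s for s in top3)

# Hypothetical patient: head injuries of severity 3 and 4, chest severity 2
feats = ais_to_region_features([(1, 3), (1, 4), (3, 2)])
```

The DNN inputs in the paper are richer (46 organ-location values), but follow the same code-to-feature aggregation idea.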


Wounds and Injuries/mortality , Abbreviated Injury Scale , Artificial Intelligence/statistics & numerical data , Benchmarking/statistics & numerical data , Databases, Factual/statistics & numerical data , Hospital Mortality , Humans , Injury Severity Score , Trauma Centers/statistics & numerical data
8.
J Cachexia Sarcopenia Muscle ; 12(6): 2220-2230, 2021 12.
Article En | MEDLINE | ID: mdl-34704369

BACKGROUND: Sarcopenia is defined as muscle wasting, characterized by a progressive loss of muscle mass and function due to ageing. Diagnosis of sarcopenia typically involves both muscle imaging and the physical performance of people exhibiting signs of muscle weakness. Despite its worldwide prevalence, a molecular method for accurately diagnosing sarcopenia has not been established. METHODS: We developed an artificial intelligence (AI) diagnosis model of sarcopenia using a published transcriptome dataset comprising patients from multiple ethnicities. For the AI model, we used a transcriptome database comprising 17,339 genes from 118 subjects. Among the 17,339 genes, we selected 27 features as the model inputs. For feature selection, we used a random forest, extreme gradient boosting, and adaptive boosting. Using the top 27 features, we propose a four-layer deep neural network, named DSnet-v1, for sarcopenia diagnosis. RESULTS: On the isolated testing datasets, DSnet-v1 provided high sensitivity (100%), specificity (94.12%), accuracy (95.83%), balanced accuracy (97.06%), and area under the receiver operating characteristic curve (0.99). To extend the number of patient data, we developed a web application (http://sarcopeniaAI.ml/), where the model can be accessed without restriction to diagnose sarcopenia if the transcriptome is available. A focused analysis of the top 27 genes for their differential or co-expression with other genes implied the potential existence of race-specific factors for sarcopenia, suggesting the possibility of identifying causal factors of sarcopenia when a more extended dataset is provided. CONCLUSIONS: Our new AI model, DSnet-v1, accurately diagnoses sarcopenia and is currently publicly available to assist healthcare providers in diagnosing and treating sarcopenia.
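Combining feature importances from several tree ensembles to pick the top genes might look like the following; score averaging is one plausible aggregation rule (the paper may use a different one), and the gene names are purely illustrative:

```python
def top_features(importance_sets, k=27):
    """Rank genes by their average importance across several models.

    importance_sets: list of dicts gene -> importance score, e.g. one dict
    each from a random forest, XGBoost, and AdaBoost model.
    Returns the k genes with the highest average score.
    """
    genes = set().union(*importance_sets)
    avg = {g: sum(s.get(g, 0.0) for s in importance_sets) / len(importance_sets)
           for g in genes}
    return sorted(avg, key=avg.get, reverse=True)[:k]

# Illustrative importance scores from three models (gene names hypothetical
# as sarcopenia markers)
rf  = {"MYH7": 0.9, "ACTN3": 0.4, "FOXO3": 0.1}
xgb = {"MYH7": 0.7, "ACTN3": 0.5, "FOXO3": 0.2}
ada = {"MYH7": 0.8, "ACTN3": 0.3, "FOXO3": 0.6}
selected = top_features([rf, xgb, ada], k=2)
```

The selected subset (27 genes in the paper) then becomes the input layer of the diagnostic network.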


Artificial Intelligence , Sarcopenia , Biomarkers , Humans , Intelligence , Prognosis , Sarcopenia/diagnosis , Sarcopenia/epidemiology , Sarcopenia/genetics
9.
J Med Internet Res ; 23(4): e27060, 2021 04 19.
Article En | MEDLINE | ID: mdl-33764883

BACKGROUND: The number of deaths from COVID-19 continues to surge worldwide. In particular, if a patient's condition is sufficiently severe to require invasive ventilation, it is more likely to lead to death than to recovery. OBJECTIVE: The goal of our study was to analyze the factors related to COVID-19 severity in patients and to develop an artificial intelligence (AI) model to predict the severity of COVID-19 at an early stage. METHODS: We developed an AI model that predicts severity based on data from 5601 COVID-19 patients from all national and regional hospitals across South Korea as of April 2020. The clinical severity of COVID-19 was divided into two categories: low and high severity. The condition of patients in the low-severity group corresponded to no limit of activity, oxygen support with nasal prong or facial mask, and noninvasive ventilation. The condition of patients in the high-severity group corresponded to invasive ventilation, multi-organ failure with extracorporeal membrane oxygenation required, and death. For the AI model input, we used 37 variables from the medical records, including basic patient information, a physical index, initial examination findings, clinical findings, comorbid diseases, and general blood test results at an early stage. Feature importance analysis was performed with AdaBoost, random forest, and eXtreme Gradient Boosting (XGBoost); the AI model for predicting COVID-19 severity among patients was developed with a 5-layer deep neural network (DNN) with the 20 most important features, which were selected based on ranked feature importance analysis of 37 features from the comprehensive data set. The selection procedure was performed using sensitivity, specificity, accuracy, balanced accuracy, and area under the curve (AUC). RESULTS: We found that age was the most important factor for predicting disease severity, followed by lymphocyte level, platelet count, and shortness of breath or dyspnea. 
Our proposed 5-layer DNN with the 20 most important features provided high sensitivity (90.2%), specificity (90.4%), accuracy (90.4%), balanced accuracy (90.3%), and AUC (0.96). CONCLUSIONS: Our proposed AI model with the selected features was able to predict the severity of COVID-19 accurately. We also made a web application so that anyone can access the model. We believe that sharing the AI model with the public will be helpful in validating and improving its performance.


Artificial Intelligence , COVID-19/epidemiology , Adolescent , Adult , Aged , Aged, 80 and over , COVID-19/mortality , Child , Child, Preschool , Female , Humans , Infant , Infant, Newborn , Male , Middle Aged , Models, Statistical , Mortality , Republic of Korea/epidemiology , Research Design , Retrospective Studies , Risk Factors , SARS-CoV-2 , Young Adult
10.
J Med Internet Res ; 22(12): e25442, 2020 12 23.
Article En | MEDLINE | ID: mdl-33301414

BACKGROUND: COVID-19, which is accompanied by acute respiratory distress, multiple organ failure, and death, has spread worldwide much faster than previously thought. However, treatment options remain limited. OBJECTIVE: To overcome this issue, we developed an artificial intelligence (AI) model of COVID-19, named EDRnet (an ensemble learning model based on deep neural network and random forest models), to predict in-hospital mortality using a routine blood sample taken at the time of hospital admission. METHODS: We selected 28 blood biomarkers and used patients' age and gender information as model inputs. To improve the mortality prediction, we adopted an ensemble approach combining deep neural network and random forest models. We trained our model with a database of blood samples from 361 COVID-19 patients in Wuhan, China, and applied it to 106 COVID-19 patients in three Korean medical institutions. RESULTS: On the testing data sets, EDRnet provided high sensitivity (100%), specificity (91%), and accuracy (92%). To extend the number of patient data points, we developed a web application (BeatCOVID19) where anyone can access the model to predict mortality and can register his or her own blood laboratory results. CONCLUSIONS: Our new AI model, EDRnet, accurately predicts the mortality rate for COVID-19. It is publicly available and aims to help health care providers fight COVID-19 and improve patients' outcomes.
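The DNN-plus-random-forest ensemble can be sketched as probability averaging; equal weights and a 0.5 decision threshold are assumptions for illustration, not necessarily EDRnet's exact combination rule:

```python
def edr_ensemble(p_dnn, p_rf, threshold=0.5):
    """Combine two models' mortality probabilities by simple averaging.

    p_dnn, p_rf: predicted in-hospital mortality probabilities from the
    deep neural network and the random forest, respectively.
    Returns the ensemble probability and the binary decision.
    """
    p = (p_dnn + p_rf) / 2.0
    return p, p >= threshold

# Hypothetical patient: DNN says 0.9, random forest says 0.6
p, decision = edr_ensemble(0.9, 0.6)
```

Averaging tends to reduce the variance of either model alone, which is the usual motivation for this kind of heterogeneous ensemble.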


COVID-19/mortality , Adult , Aged , Artificial Intelligence , China , Female , Hospitalization , Humans , Male , Middle Aged , Neural Networks, Computer , Republic of Korea , SARS-CoV-2
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1290-1293, 2020 07.
Article En | MEDLINE | ID: mdl-33018224

Intracranial hemorrhage (ICH) is a life-threatening condition associated with stroke, trauma, aneurysm, vascular malformations, high blood pressure, illicit drugs, and blood clotting disorders. In this study, we demonstrated the feasibility of automatic identification and classification of ICH on head CT images based on a deep learning technique. The ICH subtypes for the classification were intraparenchymal, intraventricular, subarachnoid, subdural, and epidural. We first performed windowing to provide three different images: a brain window, a bone window, and a subdural window, and trained 4,516,842 head CT images using a CNN-LSTM model. We used the Xception model for the deep CNN, and 64 nodes and 32 timesteps for the LSTM. For the performance evaluation, we tested 727,392 head CT images and found that the resultant weighted multi-label logarithmic loss was 0.07528. We believe that our proposed method enhances the accuracy of ICH identification and classification and can assist radiologists in the interpretation of head CT images, particularly for brain-related quantitative analysis.
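The windowing step maps raw Hounsfield units into three display ranges before classification. A sketch using commonly cited center/width settings (the abstract does not state the paper's exact values, so these are assumptions):

```python
import numpy as np

def apply_window(hu, center, width):
    """Map raw CT Hounsfield units to a 0-255 display range for one window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo) * 255.0

hu = np.array([-1000.0, 0.0, 40.0, 80.0, 1000.0])    # air ... soft tissue ... bone
brain    = apply_window(hu, center=40, width=80)      # common brain window
subdural = apply_window(hu, center=80, width=200)     # one common subdural setting
bone     = apply_window(hu, center=600, width=2800)   # one common bone setting
```

Stacking the three windowed images as channels gives the network complementary contrast for parenchyma, extra-axial blood, and skull.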


Intracranial Hemorrhages , Stroke , Brain , Feasibility Studies , Humans , Intracranial Hemorrhages/diagnostic imaging , Tomography, X-Ray Computed
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4425-4428, 2020 07.
Article En | MEDLINE | ID: mdl-33018976

In this paper, we present a Turtlebot-assisted instantaneous heart rate (HR) estimator using camera-based remote photoplethysmography. We used a Turtlebot with a camera to record the human face. For face detection, we used the Haar cascade algorithm. To increase the accuracy of the HR estimation, we combined a plane-orthogonal-to-skin (POS) model with a finite state machine (FSM) framework. By combining the POS and FSM frameworks, we achieved a mean absolute error (MAE) of 1.08 bpm, the lowest error compared with state-of-the-art methods.
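The POS model projects temporally normalized RGB traces onto a plane orthogonal to the skin-tone axis (Wang et al., 2017). A NumPy sketch of the standard algorithm, using the 1.6 s window length suggested in that paper:

```python
import numpy as np

def pos_pulse_signal(rgb, fs=30, win_sec=1.6):
    """Plane-orthogonal-to-skin (POS) rPPG extraction from mean RGB traces.

    rgb: (n_frames, 3) array of spatially averaged face-pixel colors.
    Returns the extracted pulse signal via overlap-add of sliding windows.
    """
    n = rgb.shape[0]
    w = int(win_sec * fs)
    P = np.array([[0.0, 1.0, -1.0], [-2.0, 1.0, 1.0]])  # POS projection matrix
    h = np.zeros(n)
    for start in range(0, n - w + 1):
        block = rgb[start:start + w]
        cn = block / block.mean(axis=0)                 # temporal normalization
        s = cn @ P.T                                    # project onto the POS plane
        pulse = s[:, 0] + (s[:, 0].std() / s[:, 1].std()) * s[:, 1]
        h[start:start + w] += pulse - pulse.mean()      # overlap-add
    return h

# Synthetic face colors: a 72 bpm (1.2 Hz) pulse modulating each channel
fs = 30
t = np.arange(10 * fs) / fs
pulse = 0.01 * np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([1.0 + 0.3 * pulse, 1.0 + 0.8 * pulse, 1.0 + 0.5 * pulse], axis=1)
h = pos_pulse_signal(rgb, fs=fs)
```

The dominant frequency of `h` then yields the HR estimate; in the paper, the FSM framework additionally validates each windowed estimate.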


Robotics , Algorithms , Heart Rate , Humans , Photoplethysmography , Skin
13.
J Med Internet Res ; 22(6): e19569, 2020 06 29.
Article En | MEDLINE | ID: mdl-32568730

BACKGROUND: Coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) is a relevant screening tool due to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely occupied fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE: We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia in CT images and differentiate it from non-COVID-19 pneumonia and nonpneumonia diseases. METHODS: A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning using one of four state-of-the-art pretrained deep learning models (VGG16, ResNet-50, Inception-v3, or Xception) as a backbone. For training and testing of FCONet, we collected 3993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and nonpneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training set and a testing set at a ratio of 8:2. For the testing data set, the diagnostic performance of the four pretrained FCONet models to diagnose COVID-19 pneumonia was compared. In addition, we tested the FCONet models on an external testing data set extracted from embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers. RESULTS: Among the four pretrained models of FCONet, ResNet-50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100.00%, and accuracy 99.87%) and outperformed the other three pretrained models in the testing data set. 
In the additional external testing data set using low-quality CT images, the detection accuracy of the ResNet-50 model was the highest (96.97%), followed by Xception, Inception-v3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS: FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing data set, the FCONet model based on ResNet-50 appears to be the best model, as it outperformed other FCONet models based on VGG16, Xception, and Inception-v3.


Coronavirus Infections/diagnostic imaging , Deep Learning , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards , Betacoronavirus , COVID-19 , Coronavirus Infections/pathology , Female , Humans , Male , Middle Aged , Pandemics , Pneumonia, Viral/pathology , SARS-CoV-2 , Sensitivity and Specificity
14.
Evol Bioinform Online ; 15: 1176934319888904, 2019.
Article En | MEDLINE | ID: mdl-31798298

With an aging population that continues to grow, health care technology plays an increasingly active role, especially for chronic disease management. In the health care market, cloud platform technology is becoming popular, as both patients and physicians demand cost efficiency, easy access to information, and security. For asthma and chronic obstructive pulmonary disease (COPD) patients in particular, it is recommended that a pulmonary function test (PFT) be performed on a daily basis. However, it is difficult for patients to visit a hospital frequently to perform the PFT. In this study, we present an application and cloud platform for remote PFT monitoring in which the test is measured directly by a smartphone microphone with no external devices. We adopted the IBM Watson Internet-of-Things (IoT) platform for PFT monitoring, using a smartphone's built-in microphone with a high-resolution time-frequency representation, and successfully demonstrated real-time PFT monitoring on the cloud platform. The PFT parameter FEV1/FVC (%) could be remotely monitored while a subject performed the test. As a pilot study, we tested 13 healthy subjects and found a mean absolute error of 4.12 with a standard deviation of 3.45 across all subjects. With the developed applications on the cloud platform, patients can freely measure the PFT parameters without restrictions on time and space, and a physician can monitor the patients' status in real time. We hope that the PFT monitoring platform will serve as a means for early detection and treatment of patients with pulmonary diseases, especially those with asthma and COPD.

15.
PLoS One ; 14(4): e0215014, 2019.
Article En | MEDLINE | ID: mdl-30951559

Accurate estimation of the instantaneous heart rate (HR) using a reflectance-type photoplethysmography (PPG) sensor is challenging because the dominant frequency observed in the PPG signal corrupted by motion artifacts (MAs) does not usually overlap the true HR, especially during high-intensity exercise. Recent studies have proposed various MA cancellation and HR estimation algorithms that use simultaneously measured acceleration signals as noise references for accurate HR estimation. These algorithms provide accurate results with a mean absolute error (MAE) of approximately 2 beats per minute (bpm). However, some of their results deviate significantly from the true HRs by more than 5 bpm. To overcome this problem, the present study modifies the power spectrum of the PPG signal by emphasizing the power of the frequency corresponding to the true HR. The modified power spectrum is obtained using a Gaussian kernel function and a previous estimate of the instantaneous HR. Because the modification is effective only when the previous estimate is accurate, a recently reported finite state machine framework is used for real-time validation of each instantaneous HR result. The power spectrum of the PPG signal is modified only when the previous estimate is validated. Finally, the proposed algorithm is verified by rigorous comparison of its results with those of existing algorithms using the ISPC dataset (n = 23). Compared to the method without MA cancellation, the proposed algorithm decreases the MAE value significantly from 6.73 bpm to 1.20 bpm (p < 0.001). Furthermore, the resultant MAE value is lower than that obtained by any other state-of-the-art method. Significant reduction (from 10.89 bpm to 2.14 bpm, p < 0.001) is also shown in a separate experiment with 24 subjects.
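The spectral modification above can be sketched as weighting the PPG power spectrum by a Gaussian kernel centered on the previous validated HR estimate; the kernel width below is an assumed value, since the paper tunes this differently:

```python
import numpy as np

def emphasize_previous_hr(freqs, power, prev_hr_bpm, sigma_bpm=10.0):
    """Weight a PPG power spectrum by a Gaussian centered on the last valid HR.

    freqs: spectrum frequencies in Hz; prev_hr_bpm: previous estimate in bpm.
    Frequencies far from the previous HR are attenuated, so an MA peak there
    no longer dominates the argmax.
    """
    bpm = freqs * 60.0
    kernel = np.exp(-0.5 * ((bpm - prev_hr_bpm) / sigma_bpm) ** 2)
    return power * kernel

freqs = np.linspace(0.5, 4.0, 211)                  # 30-240 bpm search range, 1 bpm steps
power = np.ones_like(freqs)
power[np.argmin(np.abs(freqs - 3.0))] = 5.0         # strong MA peak at 180 bpm
power[np.argmin(np.abs(freqs - 2.0))] = 3.0         # true HR peak at 120 bpm
mod = emphasize_previous_hr(freqs, power, prev_hr_bpm=121.0)
est_bpm = 60.0 * freqs[np.argmax(mod)]
```

Without the kernel, the argmax would land on the 180 bpm artifact; with it, the estimate stays near the previous HR, which is why the paper only applies the modification after the previous estimate has been validated.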


Algorithms , Exercise/physiology , Heart Rate/physiology , Models, Cardiovascular , Photoplethysmography , Adult , Female , Humans , Male
16.
IEEE Trans Biomed Eng ; 66(10): 2789-2799, 2019 10.
Article En | MEDLINE | ID: mdl-30703006

OBJECTIVE: Obtaining accurate estimates of instantaneous heart rates (HRs) using reflectance-type photoplethysmography (PPG) sensors is challenging because the dominant frequency observed in the PPG signal can be corrupted by motion artifacts (MAs), especially during exercise. To address this problem, we propose multi-mode particle filtering (MPF) methods. METHODS: We propose four MPF methods based on different approaches to particle weighting and HR determination. We compare the MPF performances with single-mode particle filtering (SPF) and other state-of-the-art methods. RESULTS: When applied to 47 PPG recordings obtained during intensive physical exercise from two different databases, the proposed MPF methods exhibit an average absolute error of less than two beats per minute, which is less than the errors of the SPF and other state-of-the-art methods. Furthermore, the MPF methods require only 6.4-6.5 ms for an 8 s window. CONCLUSION: The MPF methods significantly reduce the HR estimation error and can be implemented in real time in practical applications. SIGNIFICANCE: Our proposed MPF methods accurately estimate HRs even during intensive physical exercise, with robustness evidenced by their accuracy even when PPG signals are severely corrupted by MAs in several consecutive windows. The proposed methods can also be applied to other time-varying physiological feature-monitoring problems.


Algorithms , Exercise/physiology , Heart Rate/physiology , Photoplethysmography/instrumentation , Photoplethysmography/methods , Wearable Electronic Devices , Electrocardiography/instrumentation , Humans
17.
IEEE J Biomed Health Inform ; 23(4): 1595-1606, 2019 07.
Article En | MEDLINE | ID: mdl-30235152

Accurate estimation of heart rate (HR) using reflectance-type photoplethysmographic (PPG) signals during intensive physical exercise is challenging because of the very low signal-to-noise ratio and unpredictable motion artifacts (MAs), which are frequently uncorrelated with reference signals such as accelerometer signals. In this paper, we propose a novel algorithm for HR estimation and validation based on a finite state machine framework, which exploits the crest factor from the periodogram obtained after MA removal, and the estimated HR changes in consecutive windows, as the estimation accuracy indicators. Our proposed algorithm automatically provides only accurate HR estimation results in real time by ignoring the estimation results when true HRs are not reflected in the PPG signals or when MAs uncorrelated with accelerometer signals are dominant. The performance of the HR estimation is rigorously compared with existing algorithms on the publicly available database of 23 PPG recordings measured during intensive physical exercise. Our algorithm exhibits an average absolute error of 0.99 beats per minute and an average relative error of 0.88%. The algorithm is simple; the computational time is [Formula: see text] for an 8 s window. Moreover, the algorithm framework can be combined with existing methods to improve estimation accuracy.
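The two estimation-accuracy indicators named above, the crest factor of the periodogram and the HR change between consecutive windows, can be sketched as a simple acceptance test; the thresholds below are illustrative, not the paper's tuned values:

```python
import numpy as np

def crest_factor(periodogram):
    """Peak-to-RMS ratio of the periodogram: high when one HR peak dominates."""
    return periodogram.max() / np.sqrt(np.mean(periodogram ** 2))

def accept_estimate(periodogram, hr_bpm, prev_hr_bpm,
                    cf_min=2.0, max_delta_bpm=10.0):
    """Keep an HR estimate only if the spectrum is peaky and the change is small.

    Returns True when both indicators pass; otherwise the window's estimate
    would be ignored, as in the finite state machine framework.
    """
    return bool(crest_factor(periodogram) >= cf_min
                and abs(hr_bpm - prev_hr_bpm) <= max_delta_bpm)

clean = np.ones(100); clean[40] = 30.0    # one dominant spectral peak
noisy = np.ones(100)                      # flat spectrum, no reliable peak
ok = accept_estimate(clean, hr_bpm=122.0, prev_hr_bpm=120.0)
bad = accept_estimate(noisy, hr_bpm=122.0, prev_hr_bpm=120.0)
```

In the full framework, consecutive rejections move the estimator between states (e.g. tracking vs. re-acquisition) rather than simply dropping windows.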


Exercise/physiology , Heart Rate/physiology , Photoplethysmography/methods , Signal Processing, Computer-Assisted , Wearable Electronic Devices , Algorithms , Artifacts , Humans
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3633-3636, 2019 Jul.
Article En | MEDLINE | ID: mdl-31946663

Heart rate (HR) estimation using wearable reflectance-type photoplethysmographic (PPG) signals is challenging due to the low signal-to-noise ratio (SNR). During intensive exercise in particular, motion artifacts (MAs) overwhelm PPG signals in unpredictable ways. To address this issue, an acceleration signal measured simultaneously with the PPG signal has been adopted as a reference. However, MAs are frequently uncorrelated with accelerometer signals across various activities. In this paper, we present a learning-based framework for HR estimation built on a deep neural network (DNN). As a feasibility study, we used a simple network with two fully connected layers. We first extracted power spectra from the measured PPG signal and the acceleration signal; the two power spectra were then fed to the network's input layer. In addition, to convey the PPG signal quality, we appended the acceleration signal intensity to the input layer. The proposed simple DNN was trained for 10 epochs on the IEEE Signal Processing Cup 2015 (ISPC) dataset (n = 23), on which the trained network achieved a low mean absolute error (MAE) of 2.31 bpm. We further tested the network on the new BAMI dataset (n = 5), where it achieved an MAE of 4.72 bpm; without the learning framework, the MAE was 15.73 bpm. These results show that even a simple DNN technique is effective. Further training issues are also discussed.
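The two-fully-connected-layer network described above can be sketched in plain NumPy as a forward pass from the concatenated input (PPG spectrum, acceleration spectrum, acceleration intensity) to a softmax over discretized HR bins; the layer sizes, activation, HR grid, and output parameterization are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

class TwoLayerHRNet:
    """Minimal sketch of a two-fully-connected-layer HR estimator.

    Input  : PPG power spectrum ++ accelerometer power spectrum ++ intensity.
    Output : expected HR (bpm) under a softmax over discretized HR bins.
    """

    def __init__(self, n_spec=64, n_hidden=128, n_bins=120, seed=0):
        rng = np.random.default_rng(seed)
        n_in = 2 * n_spec + 1                      # two spectra + one intensity scalar
        self.W1 = rng.normal(0, np.sqrt(2.0 / n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, np.sqrt(2.0 / n_hidden), (n_hidden, n_bins))
        self.b2 = np.zeros(n_bins)
        self.hr_grid = np.linspace(60.0, 180.0, n_bins)   # bpm bin centers

    def forward(self, ppg_spec, acc_spec, acc_intensity):
        x = np.concatenate([ppg_spec, acc_spec, [acc_intensity]])
        h = np.maximum(0.0, x @ self.W1 + self.b1)        # ReLU hidden layer
        z = h @ self.W2 + self.b2
        p = np.exp(z - z.max())                           # numerically stable softmax
        p = p / p.sum()
        return float(np.sum(p * self.hr_grid))            # expected HR in bpm
```

Feeding the accelerometer spectrum and intensity alongside the PPG spectrum lets the network learn to down-weight spectral peaks that coincide with motion, which is the signal-quality cue the abstract describes.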


Heart Rate , Neural Networks, Computer , Photoplethysmography , Signal Processing, Computer-Assisted , Wearable Electronic Devices , Acceleration , Algorithms , Artifacts , Feasibility Studies , Humans
19.
IEEE J Transl Eng Health Med ; 6: 1800513, 2018.
Article En | MEDLINE | ID: mdl-29910995

OBJECTIVE: Chest computed tomography (CT) images and their quantitative analyses have become increasingly important for a variety of purposes, including lung parenchyma density analysis, airway analysis, diaphragm mechanics analysis, and nodule detection for cancer screening. Lung segmentation is an important prerequisite step for automatic image analysis. We propose a novel lung segmentation method to minimize the juxta-pleural nodule issue, a notorious challenge in these applications. METHOD: We initially used the Chan-Vese (CV) model for active lung contours and adopted a Bayesian approach based on the CV model results, which predicts the lung image based on the segmented lung contour in the previous frame image or neighboring upper frame image. Among the resultant juxta-pleural nodule candidates, false positives were eliminated through concave-point detection and circle/ellipse Hough transforms. Finally, the lung contour was modified by adding the final nodule candidates to the area of the CV model results. RESULTS: To evaluate the proposed method, we collected chest CT Digital Imaging and Communications in Medicine (DICOM) images of 84 anonymous subjects, including 42 subjects with juxta-pleural nodules. There were 16 873 images in total, of which 314 included juxta-pleural nodules. Our method exhibited a Dice similarity coefficient of 0.9809, modified Hausdorff distance of 0.4806, sensitivity of 0.9785, specificity of 0.9981, accuracy of 0.9964, and juxta-pleural nodule detection rate of 96%. It outperformed existing methods, such as the CV model used alone, the normalized CV model, and the snake algorithm. CLINICAL IMPACT: The high accuracy with juxta-pleural nodule detection in the lung segmentation can be beneficial for any computer-aided diagnosis system that uses lung segmentation as an initial step.
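The Bayesian neighboring-slice idea in this abstract can be sketched as a per-pixel fusion of a spatial prior from the previous slice's mask with a Gaussian intensity likelihood in Hounsfield units; the HU means, sigma, and prior weight below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def bayes_lung_mask(slice_hu, prev_mask, lung_mean=-800.0, bg_mean=0.0,
                    sigma=200.0, prior_w=0.3):
    """Predict the current slice's lung mask from the neighboring slice.

    slice_hu  : 2-D array of CT intensities in Hounsfield units (HU)
    prev_mask : binary lung mask of the previous/neighboring slice
    """
    # Gaussian intensity likelihoods (lung parenchyma sits near -800 HU)
    ll_lung = np.exp(-0.5 * ((slice_hu - lung_mean) / sigma) ** 2)
    ll_bg   = np.exp(-0.5 * ((slice_hu - bg_mean) / sigma) ** 2)
    # Spatial prior from the neighboring slice, softened toward 0.5
    prior = prior_w * prev_mask + (1.0 - prior_w) * 0.5
    # Per-pixel posterior probability of "lung"
    post = prior * ll_lung / (prior * ll_lung + (1.0 - prior) * ll_bg + 1e-12)
    return post > 0.5
```

Note that a dense juxta-pleural nodule (near 0 HU) inside the lung still gets a low posterior under this intensity model, which is why the abstract's pipeline adds the separate concave-point and Hough-transform step to recover such nodules into the final contour.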

20.
PLoS One ; 12(10): e0187108, 2017.
Article En | MEDLINE | ID: mdl-29088260

We describe a wearable sensor developed for cardiac rehabilitation (CR) exercise. To effectively guide CR exercise, the dedicated CR wearable sensor (DCRW) automatically recommends the exercise intensity to the patient by comparing the heart rate (HR) measured in real time with a predefined target heart rate zone (THZ) during exercise. The CR exercise comprises three periods: pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as the THZ, exercise type, exercise stage order, and duration of each stage is set up through a smartphone application we developed for iPhone and Android devices. The set-up information is transmitted to the DCRW via Bluetooth communication. In the period of exercise with intensity guidance, the DCRW continuously estimates HR from the pulse signal reflected at the wrist. To achieve accurate HR measurements, we used multichannel photosensors to increase the chance of acquiring a clean signal, and then applied singular value decomposition (SVD) for denoising. In both the median and variance of the RMSEs of the measured HRs, our proposed method with the DCRW yielded lower values than a single-channel method and a template-based multichannel method across all exercise stages. In the post-exercise period, the DCRW transmits all the measured HR data to the smartphone application via Bluetooth communication, and the patient can monitor his or her own exercise history.
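The SVD denoising step across the multichannel photosensors can be sketched as rank truncation: stack the channel windows as rows and keep only the top singular components, on the assumption that the cardiac pulse is shared across channels while the noise is not; the rank choice and window handling here are illustrative, not the paper's exact procedure:

```python
import numpy as np

def svd_denoise(channels, rank=1):
    """Denoise a multichannel PPG window by SVD rank truncation.

    channels : array of shape (n_channels, n_samples), one window per channel.
    Keeps the top `rank` singular components (the pulse waveform common to
    all channels) and discards the rest as channel-specific noise.
    """
    X = np.asarray(channels, float)
    X = X - X.mean(axis=1, keepdims=True)              # remove per-channel DC
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]        # rank-`rank` reconstruction
```

With several photosensor channels observing the same pulse, the dominant singular vector captures the shared waveform, so the rank-1 reconstruction of each channel is cleaner than the raw channel itself.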


Cardiac Rehabilitation , Exercise Therapy/methods , Exercise/physiology , Heart Rate/physiology , Monitoring, Physiologic/instrumentation , Adult , Algorithms , Heart/physiology , Humans , Models, Theoretical , Pilot Projects , Smartphone/statistics & numerical data