ABSTRACT
Egocentric activity recognition has recently attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for recognizing egocentric activities of daily living (ADLs) with a wearable hybrid sensor system comprising motion sensors and cameras. A long short-term memory (LSTM) network and a convolutional neural network (CNN) perform egocentric ADL recognition in separate layers, operating on the motion sensor data and the photo stream, respectively. The motion sensor data are used solely to classify activities by motion state, while the photo stream is used for finer-grained activity recognition within each motion-state group. Thus, each modality works in its most suitable classification mode, significantly reducing the negative influence of sensor differences on the fusion result. Experimental results show that the proposed method is not only more accurate than the existing direct fusion method (by up to 6%) but also avoids that method's time-consuming optical-flow computation, making the proposed algorithm less complex and more suitable for practical application.
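The two-layer decision flow described above can be sketched as follows. This is an illustrative sketch only: the paper's LSTM (motion layer) and CNN (photo layer) are replaced here with a hypothetical nearest-centroid stand-in, and all feature shapes and label names are invented for the example.

```python
import numpy as np

class CentroidClassifier:
    """Hypothetical stand-in base learner (NOT the paper's LSTM/CNN):
    predicts the label of the nearest class centroid."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]

def hierarchical_predict(motion_clf, photo_clfs, X_motion, X_photo):
    """Layer 1: classify motion state from motion-sensor features.
    Layer 2: recognize the specific ADL from photo features, using the
    classifier trained for the predicted motion-state group."""
    states = motion_clf.predict(X_motion)
    activities = np.empty(len(states), dtype=object)
    for i, state in enumerate(states):
        activities[i] = photo_clfs[state].predict(X_photo[i:i + 1])[0]
    return states, activities
```

The key design point the sketch preserves is that the photo-stream classifier is conditioned on the motion state: each motion-state group has its own second-layer classifier, so the two modalities never compete in a single direct fusion step.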
ABSTRACT
OBJECTIVE: To investigate the capability of classification models based on a hierarchical multi-classifier fusion framework with a random projection strategy to differentiate renal cell carcinoma (RCC) from small renal angiomyolipoma without visible fat (AMLwvf, < 4 cm). METHODS: We retrospectively collected clinical data from 163 patients with pathologically proven small renal masses, including 118 with RCC and 45 with AMLwvf. The target region of interest (ROI) was delineated on the unenhanced-phase (UP) CT image slice displaying the largest lesion area. Radiomics features extracted from the ROI were used to establish the hierarchical fusion method: at the projection-based level, homogeneous classifiers were fused, and these fusion results were further fused at the classifier-based level to construct a multi-classifier fusion system based on random projection for differentiating AMLwvf from RCC. The discriminative capability of this model was quantitatively evaluated using 5-fold cross-validation and four evaluation indexes [specificity, sensitivity, accuracy, and area under the ROC curve (AUC)]. We quantitatively compared this multi-classifier fusion framework against classification models using a single classifier and several multi-classifier ensemble models. RESULTS: The proposed hierarchical fusion framework achieved its best results on all evaluation measures at a projection number of 10, where the specificity, sensitivity, average accuracy, and AUC for differentiation between AMLwvf and RCC were 0.853, 0.693, 0.809, and 0.870, respectively.
CONCLUSION: The proposed multi-classifier fusion system based on random projection differentiates RCC from AMLwvf better than discrimination models based on a single classification algorithm and the currently available benchmark ensemble methods.
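The two-level fusion described in the METHODS (projection-based fusion of homogeneous classifiers, then classifier-based fusion across classifier types) can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: the Gaussian random projections, the nearest-centroid base learner, and majority-vote fusion at both levels are assumptions standing in for the paper's radiomics features and classifier pool.

```python
import numpy as np

def random_projections(d, k, n_proj, rng):
    """Draw n_proj Gaussian random projection matrices mapping d -> k dims
    (assumed projection scheme; the paper does not specify the distribution)."""
    return [rng.normal(size=(d, k)) / np.sqrt(k) for _ in range(n_proj)]

def centroid_fit_predict(Xtr, ytr, Xte):
    """Minimal stand-in base learner: nearest class centroid, labels 0/1."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    closer_to_1 = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return closer_to_1.astype(int)

def hierarchical_fusion(base_learners, Xtr, ytr, Xte, n_proj=10, k=8, seed=0):
    """Projection level: fuse each learner's votes over n_proj random
    projections. Classifier level: fuse the per-learner decisions."""
    rng = np.random.default_rng(seed)
    projections = random_projections(Xtr.shape[1], k, n_proj, rng)
    votes_per_type = []
    for learner in base_learners:
        # Level 1: one homogeneous learner, fused across all projections.
        proj_votes = np.stack([learner(Xtr @ P, ytr, Xte @ P) for P in projections])
        votes_per_type.append((proj_votes.mean(axis=0) >= 0.5).astype(int))
    # Level 2: majority vote across the fused per-classifier decisions.
    return (np.stack(votes_per_type).mean(axis=0) >= 0.5).astype(int)
```

In this reading, `n_proj=10` corresponds to the optimal projection number reported in the RESULTS; each base learner sees every random projection of the feature space before its decisions are merged with those of the other learners.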