Title | Human Activity Recognition with Multimodal Sensing of Wearable Sensors |
---|---|
Authors | Chun-Mei Ma, Hui Zhao, Ying Li, Pan-Pan Wu, Tao Zhang, Bo-Jue Wang |
Abstract | Human activity data sensed by wearable sensors has multi-granularity characteristics. Although deep learning-based approaches have greatly improved recognition accuracy, most of them focus on designing new models to extract deeper features, ignoring that different deep features affect recognition accuracy differently. We argue that learning discriminative features improves recognition performance. In this paper, we propose ABLSTM, an end-to-end model that combines an Attention model with a BLSTM model to recognize human activities. Specifically, the BLSTM model extracts deep features of the various activities. The Attention model then obtains a discriminative feature representation by suppressing irrelevant features and enhancing the features positively correlated with each activity. Therefore, compared with traditional deep learning-based approaches, such as CNN- and RNN-based models, the features learned by ABLSTM are more discriminative and respond better to changes in activity. We evaluate our model on two public benchmark datasets, UCI and Opportunity. The results show that our model recognizes human activities well, with F1 scores as high as 99.0% and 92.7% on the two datasets respectively, pushing the state of the art in human activity recognition with mobile sensing. |
Pages | 024-037 |
Keywords | human activity recognition, multimodal sensory data, discriminative features representation, wearable sensors |
Journal | 電腦學刊 (Journal of Computers) |
Issue | December 2021 (Vol. 32, No. 6) |
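The abstract describes an attention step that re-weights per-timestep BLSTM features, suppressing irrelevant ones and emphasizing those positively correlated with each activity. The following is a minimal NumPy sketch of that attention-pooling idea only; the feature matrix `H`, the scoring vector `w`, and the dimensions are placeholders standing in for the paper's trained BLSTM encoder, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Attention pooling over per-timestep features.

    H: (T, D) features for T timesteps, e.g. from a BLSTM encoder (assumed)
    w: (D,)  scoring vector (learned in a real model; random here)
    Returns the attention-weighted feature summary and the weights.
    """
    scores = H @ w            # (T,) relevance score per timestep
    alpha = softmax(scores)   # (T,) weights, non-negative, sum to 1
    z = alpha @ H             # (D,) summary emphasizing high-score timesteps
    return z, alpha

rng = np.random.default_rng(0)
T, D = 128, 64                            # window length, feature dim (assumed)
H = rng.standard_normal((T, D))           # stand-in for BLSTM outputs
w = rng.standard_normal(D)
z, alpha = attention_pool(H, w)
```

In a full model, `z` would feed a softmax classifier over activity classes, and `w` (or a small scoring network) would be trained end-to-end so that timesteps irrelevant to the current activity receive near-zero weight.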