Federated recognition mechanism based on enhanced temporal-spatial learning using mobile edge sensors for firefighters
Abstract
Interest in Human Action Recognition (HAR) is growing in both household and industrial settings. HAR refers to a computer system's ability to accurately recognize and interpret human activities and behaviors, much as humans do through perception. In this work, a real-time federated activity recognition architecture is proposed to monitor smartphone user behavior. The main aims are to reduce accidents in indoor environments and to ensure the safety of older individuals indoors. The approach lends itself to a multitude of uses, including elderly monitoring, entertainment, and surveillance. We present a new smartphone sensor-based federated human motion recognition scheme built on a temporal-spatial weighted BiLSTM-CNN framework, and we verify that this temporal-spatial federated recognition outperforms existing machine learning schemes in activity recognition accuracy. Several methods and strategies in the literature have been used to attain higher HAR accuracy. In particular, six categories of typical everyday human activities are targeted, including walking, jumping, standing, moving from one level to another, and picking up items. Smartphone sensors detect the motion activities carried out by elderly people from raw inertial measurement unit (IMU) data. Weighted bidirectional long short-term memory (BiLSTM) networks then learn temporal motion features, followed by one-dimensional convolutional neural networks (CNNs) that reason about spatial structure features. Additionally, an awareness (attention) mechanism weights the data segments to select discriminative contextual information. Finally, a sizeable HDL activity dataset is gathered for model training and validation. The results confirm that the proposed framework performs 18.7% better in accuracy, 27.9% better in precision, and 24.1% better in F1-score for client 1; for clients 2 and 3, the accuracy improvement is 18.4% and 10.1%, respectively.
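The abstract specifies the pipeline only at a high level. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation, of a weighted BiLSTM feeding an attention-style awareness step and a one-dimensional CNN, together with a server-side federated-averaging step over three clients. The window length (128 samples), nine IMU channels, hidden size, and layer widths are assumptions chosen for illustration only.

```python
# Illustrative sketch of a temporal-spatial weighted BiLSTM-CNN classifier
# with a federated-averaging round; all sizes below are assumed, not taken
# from the paper.
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, in_channels=9, hidden=64, num_classes=6):
        super().__init__()
        # BiLSTM: learns temporal motion features from raw IMU windows.
        self.bilstm = nn.LSTM(in_channels, hidden,
                              batch_first=True, bidirectional=True)
        # Awareness (attention) head: scores each time step so that
        # discriminative segments can be emphasized.
        self.attn = nn.Linear(2 * hidden, 1)
        # 1-D CNN: reasons about spatial structure in the weighted features.
        self.cnn = nn.Sequential(
            nn.Conv1d(2 * hidden, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):                        # x: (batch, time, channels)
        h, _ = self.bilstm(x)                    # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # per-time-step weights
        h = h * w                                # highlight useful segments
        h = self.cnn(h.transpose(1, 2)).squeeze(-1)
        return self.fc(h)

def fed_avg(client_states):
    """Server-side FedAvg: element-wise mean of the clients' model weights."""
    return {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
            for k in client_states[0]}

# Toy aggregation round with three clients, mirroring the client 1-3 setup.
clients = [BiLSTMCNN() for _ in range(3)]
global_state = fed_avg([c.state_dict() for c in clients])
model = BiLSTMCNN()
model.load_state_dict(global_state)
logits = model(torch.randn(4, 128, 9))           # 4 windows -> (4, 6) scores
```

In a full federated round each client would first fine-tune its copy on local IMU windows before the server averages the weights; only model parameters, never raw sensor data, leave the device.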