RestHAR: Residual feature learning transformer for human activity recognition from multi-sensor data


In recent times, the surge in smartphone use in our daily lives has created a huge opportunity for human-centric computing by utilizing the rich data recorded by their multiple sensors. Sensor-based human activity recognition has numerous real-world applications such as health monitoring, surveillance, smart homes, and ambient assisted living. This paper presents a joint residual feature extractor and transformer-based deep neural network for end-to-end human activity recognition from raw multi-sensor data captured by smartphones or wearable devices. Unlike conventional approaches that rely on handcrafted features, this end-to-end approach outperforms existing methods, showing state-of-the-art, generalizable performance across multiple benchmark datasets: it achieves 95.2% test accuracy on the UCI HAR dataset and 96.4% on the WISDM dataset.
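The abstract names two components: a residual feature extractor over the raw sensor window and a transformer-style (self-attention) stage for temporal modeling. The paper's actual layer sizes, depths, and training details are not given here, so the following is only a minimal NumPy sketch of how such a pipeline wires together, not the authors' implementation; all weights are random and all dimensions (a 128-timestep, 9-channel window as in UCI HAR, 6 classes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """'Same'-padded 1-D convolution. x: (T, C_in), w: (k, C_in, C_out)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])]) + b

def residual_block(x, w1, b1, w2, b2):
    """Two convolutions with a skip connection (residual feature learning)."""
    h = np.maximum(conv1d(x, w1, b1), 0.0)   # ReLU
    h = conv1d(h, w2, b2)
    return np.maximum(x + h, 0.0)            # add skip connection, then ReLU

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product attention over the time axis."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    a = np.exp(scores - scores.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)
    return a @ v

# One UCI-HAR-style window: 128 timesteps x 9 inertial channels, 6 classes.
T, C, n_cls = 128, 9, 6
x = rng.normal(size=(T, C))

# Randomly initialised (untrained) weights, for shape illustration only.
w1, w2 = rng.normal(size=(3, C, C)) * 0.1, rng.normal(size=(3, C, C)) * 0.1
b1, b2 = np.zeros(C), np.zeros(C)
wq, wk, wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
w_out, b_out = rng.normal(size=(C, n_cls)) * 0.1, np.zeros(n_cls)

feats = residual_block(x, w1, b1, w2, b2)   # (128, 9) local features
ctx = self_attention(feats, wq, wk, wv)     # (128, 9) with temporal context
logits = ctx.mean(axis=0) @ w_out + b_out   # pool over time -> (6,) class scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

The end-to-end property claimed in the abstract corresponds to the fact that the raw window `x` enters the pipeline directly, with no handcrafted feature step in between.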

Presented at 8th Intl. Conference on Soft Computing & Machine Intelligence
Debaditya Shome