Human activity recognition can be used to devise assistants that provide proactive support by exploiting knowledge of the user's context, determined from on-body sensors. Because on-body sensors are typically integrated into clothing or found in mobile devices, the system can operate at any time, regardless of the user's location. Activity recognition is an enabling technology that can lead to great societal benefits. A wide range of domains can benefit from it, as evidenced in the wearable, mobile, and pervasive computing communities: health care (in particular elderly assistance), industrial worker assistance, sports, entertainment, human-computer interfaces, human-robot interaction, etc. The design and development of activity recognition systems pose important challenges to the machine learning community. They typically involve high-dimensional, multimodal streams of data characterized by large variability, where portions of the data may be missing or labels may be unreliable. Notwithstanding the many research endeavors aimed at tackling these issues, comparing different approaches is often not possible due to the lack of common benchmarking tools and datasets that allow for replicable and fair testing procedures across research groups. The aim of this workshop is to discuss and compare different methods for robust activity recognition, as well as to put forward the need for common resources enabling such comparison. To promote this comparison, the workshop will present the outcome of a machine learning challenge in which contributed methods are evaluated on a public benchmark database of daily activities recorded using a multimodal network of on-body sensors.