Professor of Multimedia Signal Processing; Director, Centre for Intelligent Sensing; School of Electronic Engineering and Computer Science
Andrea Cavallaro is Professor of Multimedia Signal Processing and the founding Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He is a Fellow of the International Association for Pattern Recognition (IAPR) and a Turing Fellow at the Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004/2005 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009.

Prof. Cavallaro is Editor-in-Chief of Signal Processing: Image Communication; Chair of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee; an IEEE Signal Processing Society Distinguished Lecturer; and an elected member of the IEEE Video Signal Processing and Communication Technical Committee. He is a Senior Area Editor for the IEEE Transactions on Image Processing and an Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology. He is a past Area Editor for the IEEE Signal Processing Magazine (2012-2014) and a past Associate Editor for the IEEE Transactions on Image Processing (2011-2015), IEEE Transactions on Signal Processing (2009-2011), IEEE Transactions on Multimedia (2009-2010), IEEE Signal Processing Magazine (2008-2011) and IEEE Multimedia. He is a past elected member of the IEEE Multimedia Signal Processing Technical Committee and past chair of the Awards Committee of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee.

Prof. Cavallaro has published over 270 journal and conference papers, a monograph, Video Tracking (Wiley, 2011), and three edited books: Multi-Camera Networks (Elsevier, 2009); Analysis, Retrieval and Delivery of Multimedia Content (Springer, 2012); and Intelligent Multimedia Surveillance (Springer, 2013).
Robust and privacy-preserving multimodal learning with body cameras
High-quality miniature cameras and associated sensors, such as microphones and inertial measurement units, are increasingly worn by people and embedded in robots. The pervasiveness of these ego-centric sensors offers countless opportunities for new applications and services through the recognition of actions, activities and interactions. However, inference from ego-centric data is challenging due to unconventional and rapidly changing capture conditions. Furthermore, the personal data generated by and through these sensors facilitate non-consensual, non-essential inferences when the data are shared with social media services and health apps. In this talk I will first present the main challenges in learning from, classifying and processing body-camera signals, and then show how exploiting multiple modalities helps address these challenges. In particular, I will discuss action recognition, audio-visual person re-identification and scene recognition as specific application examples using ego-centric data. Finally, I will show how to design on-device machine learning models and feature learning frameworks that enable privacy-preserving services.
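To make the idea of exploiting multiple modalities concrete, the sketch below shows late fusion, one common way to combine audio and visual evidence for tasks such as action recognition. All names, dimensions and the number of classes are illustrative assumptions, not the speaker's actual models; the per-modality classifiers are random stand-ins for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed per-clip embeddings from a body camera:
# a 128-d visual feature and a 64-d audio feature for each of 4 clips.
video_feat = rng.normal(size=(4, 128))
audio_feat = rng.normal(size=(4, 64))

# Toy linear per-modality classifiers over 5 hypothetical action
# classes (random weights; in practice these would be trained).
W_video = rng.normal(size=(128, 5))
W_audio = rng.normal(size=(64, 5))

def late_fusion(video_scores, audio_scores, w_video=0.5, w_audio=0.5):
    """Combine per-modality class scores by a weighted average.

    Each argument is a (clips x classes) score matrix produced by a
    modality-specific classifier; the weights let one modality dominate
    when the other is unreliable (e.g. motion blur, wind noise).
    """
    return w_video * video_scores + w_audio * audio_scores

scores = late_fusion(video_feat @ W_video, audio_feat @ W_audio)
pred = scores.argmax(axis=1)  # one predicted action label per clip
```

Late fusion keeps the modality pipelines independent, which is convenient when a sensor drops out; early fusion (concatenating features before classification) can instead capture cross-modal correlations.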