INDUSTRY TRACK Invited Speakers
Speaker: Florian Michahelles
Thursday, September 17th, 2020 at 02:00PM – 02:25PM (Rome)
Bio: Florian Michahelles is a father of three kids, an avid woodworker, and an enthusiastic cyclist. Very recently, Florian became a Full Professor of Ubiquitous Computing at the Institute of Visual Computing and Human-Centered Technology at TU Wien.
In his research, Florian focuses on next-generation ubiquitous computing systems and their application in authentic, real-world settings. Ubiquitous computing covers a wide variety of topics, including sensor-rich environments, interactive and smart spaces, new interaction paradigms, Internet of Things, mobile and context-aware computing, awareness and privacy, and tangible, situated, and embodied interaction.
Before joining TU Wien, Florian Michahelles was the head of the Siemens research group on artificial & human intelligence in Berkeley. He and his team developed digital companions to support human workers with AI in daily industrial practice, applying machine learning and knowledge graphs to model domain knowledge, and human-computer interaction interfaces to interface seamlessly with the user. During his time at Siemens, Florian collaborated with UC Berkeley in research and education.
In his academic activities, Florian led the Auto-ID Labs at ETH Zurich in the fields of RFID, IoT architecture, and mobile applications. Florian Michahelles received his Ph.D. from ETH Zurich for his research on the participative design of wearable computing applications and the development of innovative business cases for ubiquitous computing. He holds an MSc in Computer Science from the Ludwig-Maximilians University Munich.
Title: Ubiquitous Computing – it’s all about the user
Abstract: With the emergence of cost-efficient sensors, ubiquitous connectivity, an abundance of computation and storage, and lean processes, we tend to forget who we are designing for.
This talk will introduce the notion of a digital companion as an opportunity for successful collaboration between human user and machine. A digital companion is an entity that enhances human capabilities: it digests, integrates, and shares information with humans so they can leverage their intuition and ingenuity while relying on the computational power of the machine. This talk will present the example of a digital lathe as a first embodiment of a digital companion.
Speaker: Chulhong Min
Thursday, September 17th, 2020 at 03:05PM – 03:30PM (Rome)
Bio: Chulhong Min is a research scientist in the Applications, Platforms & Software Systems (APSS) lab at Nokia Bell Labs, Cambridge, UK, and a visiting fellow at the University of Cambridge, UK. His current research explores the design of next-generation sensory AI systems to realise transformative multi-modal, multi-device sensing for disruptive mobile, wearable, and IoT services. Broadly, his research interests include mobile and embedded systems, the Internet of Things (IoT), and human-computer interaction. His work has been published at ACM MobiSys, ACM SenSys, ACM UbiComp, ACM BuildSys, and in prestigious journals. He won the best paper award at ACM CSCW 2014 and multiple demonstration awards. He has served on a number of technical program committees and organising committees of various premier conferences and is an Associate Editor of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
Title: Sensory AI software platform for multi-device environments
Abstract: Sensory devices are now pervasive. These mobile, wearable, and IoT devices on and near our body are increasingly embracing bleeding-edge machine learning algorithms to uncover remarkable sensory applications. In this transformation, we are observing the emergence of multi-device systems as a natural consequence of the multiple sensory devices surrounding us. This multiplicity opens up an exciting opportunity to leverage the sensor redundancy and high availability afforded by multiple devices, thereby enabling rich, powerful, and collaborative sensing applications. However, such multiplicity comes at the expense of increased complexity. Two key factors that contribute to this complexity are device and data variability caused by runtime factors. In this talk, I will introduce the design and development of a brand-new software platform offering best-effort inference in multi-device environments. Specifically, I will cover two key technical perspectives: multi-device model selection and device-to-device data translation. The talk will end with a discussion of the exciting applications we can begin to tackle.