It’s Time to Stop Staring at Your Phone’s Mobile Map: The Importance of HCI Perspectives for Next-generation Navigation Devices

Who and when?
Prof. Johannes Schöning, from the University of Bremen, Germany, is going to visit our University on Thursday, June 8th. His talk will start at 1 pm in the Visual Computing Lab C 061.

Topic: It’s Time to Stop Staring at Your Phone’s Mobile Map: The Importance of HCI Perspectives for Next-generation Navigation Devices
In my talk, I will give a broad overview of my research, which lies at the intersection of human-computer interaction (HCI), geographic information science, and ubiquitous interface technologies. In our lab, we investigate how people interact with digital spatial information and create new methods and novel interfaces to support that interaction.

For example, catastrophic incidents associated with GPS devices and other personal navigation technologies are all too common: a tourist drives his rental car across a beach and directly into the Atlantic Ocean, a person in Belgium intending to drive to a nearby train station ends up in Croatia, a family traveling on a dirt road gets stranded for four days in the Australian outback. I will characterise key patterns that exist in these incidents and enumerate implications for research and design in HCI that emerge from these patterns.

In addition, researchers and mapping platforms have shown growing interest in optimizing routes for criteria other than simple travel time, e.g. identifying the “simplest”, the “safest”, or the “most beautiful” route. However, despite the ubiquity of algorithmic routing and its potential to define how millions of people move around the world, very little is known about the externalities that arise when adopting these new optimization criteria, for instance the potential redistribution of traffic to certain neighborhoods and increased route complexity (with its associated risks). I will present the first controlled examination of these externalities, doing so across multiple mapping platforms, alternative optimizations, and cities.
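The abstract does not spell out how such alternative criteria are implemented, but a minimal sketch makes the idea concrete. The following toy example is my own illustration, not taken from the talk: the graph, the per-edge “beauty” scores, and the scenic_cost weighting are all invented. It routes over a small graph whose edges carry both a travel time and a hypothetical beauty score, and shows how changing the edge cost changes the chosen route:

```python
# A minimal sketch (not any platform's actual algorithm) of routing under an
# alternative optimization criterion: each edge carries a hypothetical
# "beauty" score next to its travel time, and a combined cost trades the
# two off against each other.
import networkx as nx

G = nx.Graph()
# Hypothetical road segments: (from, to, travel time in minutes, beauty in [0, 1]).
for u, v, minutes, beauty in [
    ("A", "B", 4.0, 0.2),
    ("B", "D", 5.0, 0.1),
    ("A", "C", 6.0, 0.9),
    ("C", "D", 6.0, 0.8),
]:
    G.add_edge(u, v, minutes=minutes, beauty=beauty)

def scenic_cost(u, v, data, alpha=1.0):
    # Penalize ugly segments: higher beauty -> lower cost; alpha sets how
    # much scenery is allowed to outweigh travel time.
    return data["minutes"] * (1.0 + alpha * (1.0 - data["beauty"]))

fastest = nx.shortest_path(G, "A", "D", weight="minutes")   # ['A', 'B', 'D']
scenic = nx.shortest_path(G, "A", "D", weight=scenic_cost)  # ['A', 'C', 'D']
print(fastest, scenic)  # the two criteria send the driver different ways
```

Even in this four-node toy, the two criteria disagree, which is exactly the situation in which the externalities discussed in the talk (traffic redistribution, added route complexity) can arise.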

Vita:
Johannes Schöning is a Lichtenberg Professor and Professor of Human-Computer Interaction (HCI) at the University of Bremen in Germany. Before coming to Bremen, he was a visiting lecturer at UCL, UK, helping to set up the Intel Collaborative Research Institute for Sustainable Cities, and held a faculty position at Hasselt University, Belgium. He is also currently a visiting professor at the Madeira Interactive Technologies Institute (M-ITI), Portugal. Previously, he worked in Saarbrücken, where he was a senior consultant at the German Research Centre for Artificial Intelligence (DFKI). During his time at DFKI, he received a PhD in computer science from Saarland University (2010), which was supported by the Deutsche Telekom Labs in Berlin. He obtained his Master’s degree in Geoinformatics from the Institute for Geoinformatics at the University of Münster (2007).

2017/05/31

Value of the Open

Who and when?
Prof. Walid Karam, from the University of Balamand, Balamand Al Kurah, Lebanon, is going to visit our University on Friday, May 19th. His talk will start at 11 am in the Visual Computing Lab C 061.

Topic: Value of the Open
We live in an open world despite all boundaries. Technology and the Internet have brought people closer. Societies have evolved to rely on the knowledge economy. Education has transformed from supplying information and basic scientific facts and skills to fostering innovation and creativity. New business concepts strongly rely on open innovation, along with new mechanisms to create value, and laws have been put in place to support these new methodologies. This talk will shed light on these concepts and highlight the value of “openness” in the knowledge society.

Vita:
Walid Karam is a Professor and researcher at the University of Balamand. He is a founding member of LINC (the Lebanese Internet Center) and LERN (the Lebanese Education & Research Network), as well as a board and senior member of the Internet Society (Lebanon Chapter) and the Computer Society (Lebanon Section). He received his Bachelor of Electrical Engineering from Georgia Tech and his PhD in Computer & Telecommunications from Telecom-ParisTech.

2017/05/16

Visually guided underwater robots

Who and when?
Prof. Michael Jenkin, from York University, Toronto, Canada, is going to visit our University on Wednesday, March 1st. His talk will start at 10 am in the Visual Computing Lab C 061.

Topic: Visually guided underwater robots
Vision has proven to be a particularly effective sensor for robots operating on and above the surface of the earth. In this domain, vision has been used to track features, build environmental representations, solve localization tasks, avoid obstacles, and provide a conduit for human-robot communication. But how well does this sensing modality work underwater? Utilizing the AQUA2 underwater platform, I have been involved in a long-term research project developing solutions to these and other problems associated with underwater vehicles capable of operating in a 6DOF environment. Results for environmental reconstruction, localization and gait planning will be presented, along with some highlights of ongoing work with a new vehicle (Milton) that underwent its first sea trials last summer and will be used extensively in trials early in 2017.
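As a rough illustration of the frame-to-frame feature tracking such vision pipelines build on, here is a generic sketch of my own (not the AQUA2 implementation); the input file dive.mp4 is a hypothetical placeholder for underwater footage:

```python
# Generic corner tracking with Lucas-Kanade optical flow: the kind of
# low-level building block visual localization and reconstruction rest on.
# Illustrative only; not taken from the AQUA2 codebase.
import cv2

cap = cv2.VideoCapture("dive.mp4")  # hypothetical underwater footage
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Pick strong corners to follow; underwater imagery is often low-contrast,
# so a permissive quality level helps find enough of them.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Follow each corner into the new frame; status flags lost features.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```

The per-frame point correspondences produced this way are what downstream modules (pose estimation, mapping) consume; the talk addresses how well such chains hold up in the underwater domain.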

Vita:
Michael Jenkin is a Professor of Electrical Engineering and Computer Science, and a member of the Centre for Vision Research at York University, Canada. Working in the fields of visually guided autonomous robots and virtual reality, he has published over 150 research papers including co-authoring Computational Principles of Mobile Robotics with Gregory Dudek and a series of co-edited books on human and machine vision with Laurence Harris.

Michael Jenkin’s current research interests include work on sensing strategies for AQUA, an amphibious autonomous robot being developed as a collaboration between Dalhousie University, McGill University and York University; the development of tools and techniques to support crime scene investigation; and the understanding of the perception of self-motion and orientation in unusual environments including microgravity.

2017/02/15

Self-motion and self-orientation: studies using Virtual Reality and the human centrifuge

Who and when?
Dr. Laurence Harris, Director of the York Centre for Vision Research at York University, Canada, is going to visit our University on Wednesday, December 14th 2016. His talk will start at 10:30 am in the Visual Computing Lab C 061.

Topic: Self-motion and self-orientation: studies using Virtual Reality and the human centrifuge
More details can be found in this announcement.

2016/12/13

From J9 to OMR: Developing Technologies to Improve Virtual Machines

Who and when?
Dr. Kenneth Kent, Director of the IBM Center for Advanced Studies and Professor at the University of New Brunswick, Canada, is going to visit our University on Wednesday, December 7th 2016. His talk will start at 11:00 am in the Visual Computing Lab C 061.

Topic: From J9 to OMR: Developing Technologies to Improve Virtual Machines
The Java virtual machine forms the underlying platform for many of the technologies that IBM deploys in the cloud and enterprise domains. Having an efficient platform is key to ensuring that client applications operate in an optimal manner. For the last 5 years, CAS-Atlantic has collaborated with IBM to develop several technologies to increase the performance of their JVM. In this talk, Dr. Kent will give a snapshot of some of the projects that have been undertaken and their outcomes. In addition, he will give an overview of the next 5 years of research that IBM and CAS-Atlantic have planned, collaborating on OMR to provide a virtual machine that supports multiple languages, not just Java.

Vita:
Kenneth Kent has served as Director of the Information Technology Centre and cooperates with industrial partners including IBM, Altera, Protocase and Butterfly Energy Systems. His collaboration with IBM led to the creation of the Centre for Advanced Studies – Atlantic at UNB, where he is the founding Director.

His research interests in Virtual Machines and FPGA Architectures have led to numerous publications and a number of tools widely used in the open-source community. He is an active member of the scientific community, having served as co-programme chair, co-general chair and steering committee member of the IEEE Rapid Systems Prototyping Symposium, and as co-programme chair of the Highly Efficient Architectures and Reconfigurable Technologies Workshop.

He is a member of the Natural Sciences and Engineering Research Council Strategic Grant selection committee and an executive board member of Science Atlantic.

2016/12/06

Human-Computer Interaction Research at Otago

Who and when?
Dr. Holger Regenbrecht, Professor at the University of Otago, New Zealand, is going to visit our University on Friday, December 2nd 2016. His talk will start at 10:30 am in the Visual Computing Lab C 061.

Topic: Human-Computer Interaction Research at Otago
After a brief introduction to the research areas of the Information Science department, Holger will present a selection of his own research projects in human-computer interaction, with an emphasis on Virtual and Mixed Reality (VR, MR) technologies. In particular, he will talk about projects in telepresence, mobile systems, and MR interaction.

Holger Regenbrecht has been involved in research and development in the fields of Virtual and Augmented Reality for over 20 years. He leads the Computer-Mediated Realities Lab at the University of Otago. Holger has worked as a computer programmer, project manager, and researcher for clients in civil engineering and architecture, automotive and aerospace, and health and wellbeing. His work spans theory, concepts, techniques, technologies, and applications.

Vita:
Dr. Holger Regenbrecht was initiator and manager of the Virtual Reality Laboratory at Bauhaus University Weimar (Germany) and of the Mixed Reality Laboratory at DaimlerChrysler Research and Technology (Ulm, Germany).

His research interests include Human-Computer Interaction (HCI), Applied Computer Science and Information Technology, (collaborative) Augmented Reality, 3D teleconferencing, psychological aspects of Mixed Reality, three-dimensional user interfaces (3DUI), and computer-aided therapy and rehabilitation.

He is a member of IEEE, ACM, and igroup.org and serves as a reviewer and auditor for several conferences, journals and institutions.

2016/11/25

Sign Language for Cars – 3D Hand Gesture Control Goes Automotive

Who and when?
Dr. Alexander Barth, Technical Manager for Vision Engineering at Delphi, Wuppertal, is going to visit our University on Tuesday, November 15th 2016. His talk will start at 10:00 am in the Visual Computing Lab C 061.

Topic: Sign Language for Cars – 3D Hand Gesture Control Goes Automotive
Hand gestures are a natural and intuitive way of human communication. They can also be used for human-machine interaction, e.g. for controlling a TV, computer game, or, as recently introduced by BMW, the infotainment system of a car.

With the wave of a hand or the flick of a finger, drivers can browse through a music playlist, zoom in and out of navigation maps, or accept phone calls. In future cars, hand gestures could even replace conventional controls like buttons and sliders.

The biggest challenge for such a system is to distinguish intended hand gesture commands from random movements, so that unintended commands are not triggered.

The technology behind this innovation combines 3D imaging, computer vision, and machine learning. This talk will give an introduction to the technical concepts and applications of 3D hand gesture control in cars based on Time-of-Flight cameras.
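To make the intended-vs-random distinction concrete, here is an illustrative sketch of my own, not Delphi's system: hand positions from a Time-of-Flight camera are reduced to a few trajectory features and fed to an off-the-shelf classifier. The swipe and noise trajectories below are synthetic placeholders standing in for labelled recordings:

```python
# Illustrative sketch (not Delphi's system): classify a short hand trajectory
# as an intended swipe vs. random movement using simple hand-crafted
# features and a standard classifier.
import numpy as np
from sklearn.svm import SVC

def trajectory_features(track):
    # track: (N, 3) array of hand positions over time (meters).
    deltas = np.diff(track, axis=0)
    speed = np.linalg.norm(deltas, axis=1)
    return np.array([
        speed.mean(),                          # average speed
        speed.std(),                           # swipes move more uniformly
        np.linalg.norm(track[-1] - track[0]),  # net displacement
    ])

# Placeholder training data; a real system would use labelled recordings.
rng = np.random.default_rng(0)
swipes = [np.cumsum(rng.normal([0.02, 0, 0], 0.002, (20, 3)), axis=0)
          for _ in range(50)]
noise = [np.cumsum(rng.normal(0, 0.01, (20, 3)), axis=0) for _ in range(50)]
X = np.array([trajectory_features(t) for t in swipes + noise])
y = np.array([1] * 50 + [0] * 50)  # 1 = intended swipe, 0 = random motion

clf = SVC().fit(X, y)
print(clf.predict([trajectory_features(swipes[0])]))  # -> [1]
```

The point of the sketch is the pipeline shape (depth sensing, feature extraction, classification), not the specific features; production systems rely on far richer depth-image representations and training data.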

Vita:
Alexander Barth received his BSc and MSc in Computer Science from the Bonn-Rhein-Sieg University of Applied Sciences and his PhD in Engineering Science from the University of Bonn in 2010. After four years with Daimler Research and Development in Sindelfingen, Germany, and three years with Mercedes-Benz Research and Development North America in Silicon Valley, where he worked on stereo vision-based driver assistance systems and automated driving, he joined the automotive supplier Delphi in Wuppertal as Technical Manager for Vision Engineering. His current R&D work focuses on 3D hand gesture control systems for in-vehicle applications.

2016/09/15

Medical Data Understanding

Who and when?
Prof. Dr. Marcin Grzegorzek, from the University of Siegen, is going to visit our University on Wednesday, September 14th 2016. His talk will start at 11:00 am in room C 116.

Topic: Medical Data Understanding
On the one hand, demographic change and the shortage of medical staff (especially in rural areas) critically challenge healthcare systems in industrialised countries. On the other hand, the digitalisation of our society progresses at tremendous speed, so that more and more health-related data are available in digital form. For instance, people wear intelligent glasses and/or smart watches, provide digital data with standardised medical devices (e.g., blood pressure and blood sugar meters following the ISO/IEEE 11073 standard), and/or deliver personal behavioural data via their smartphones. Pattern recognition algorithms that automatically analyse and interpret this huge amount of heterogeneous data towards prevention (early risk detection), diagnosis, assistance in therapy/aftercare/rehabilitation, as well as nursing will gain extremely high scientific, societal and economic priority in the near future.

In this talk, apart from a general overview and introduction to the topic, Marcin Grzegorzek will present his scientific vision addressing the research direction motivated above. It includes the development of original pattern recognition algorithms for holistic health assessment. In his research, Marcin mainly considers the steps of prevention/early risk detection as well as therapy assistance in the context of neurodegenerative diseases. After a general introduction to his scientific vision, Marcin will briefly present two of the related projects he currently leads: (1) Cognitive Village: Adaptively Learning, Technical Support System for Elderly (funded by the German Federal Ministry of Education and Research); (2) My-AHA: My Active and Healthy Ageing (EC Horizon 2020). Apart from the development of adaptive machine learning software, these projects also consider aspects of hardware, user acceptance, and ELSI (Ethical, Legal and Social Implications). Marcin will close his talk with a summary and some insights into possible future scientific directions in the area of medical data understanding.

Vita:
Marcin Grzegorzek is Professor for Pattern Recognition at the University of Siegen, Professor for Multimedia at the University of Economics in Katowice, and Chairman of the Board of Data Understanding Lab Ltd. He studied Computer Science at the Silesian University of Technology and did his PhD at the Pattern Recognition Lab of the University of Erlangen-Nuremberg. He worked as a Postdoc in the Multimedia and Vision Research Group at the Queen Mary University of London and at the Institute for Web Science and Technologies at the University of Koblenz-Landau, and did his habilitation at the AGH University of Science and Technology in Kraków. He has published around 100 papers on pattern recognition, image processing, machine learning, and multimedia analysis, and has acted as examiner in 16 finalised doctoral procedures (reviewer in 7, supervisor in 2).

2016/09/13

Prototyping Cognitive Interaction Technology using Virtual Reality

Who and when?
Dr. Thies Pfeiffer, from CITEC’s Central Lab Facilities at the University of Bielefeld, is going to visit our University on Thursday, June 2nd 2016. His talk will start at 4:00 pm in the Visual Computing Lab C 061.

Topic: Prototyping Cognitive Interaction Technology using Virtual Reality
Cognitive Interaction Technology (CIT) can manifest in many different devices and contexts. In our research projects at CITEC we are targeting such diverse technological platforms as humanoid or industrial robots, smart kitchens or smart homes, eyewear computing or swarming robots, just to give some examples.

As CIT is a very human-centered approach, it is crucial to include the target audience in the design and research process at very early stages. Often, CIT involves a co-development and co-evolution of hardware and software. This process, however, carries a high risk of failure, e.g. by not meeting the requirements of the target audience, and means that working prototypes only become available for testing and evaluation at very late stages of a project.

In this talk, I will present current and future work at CITEC’s Central Lab Facilities on using Virtual Reality technology for prototyping human-computer interactions of future Cognitive Interaction Technologies. In particular, I will present our work on using eye tracking in combination with augmented and virtual reality technology to assess human perception and cognitive processes.

Vita:
Thies Pfeiffer received his Diploma in Informatics and Natural Sciences (2003) and his Doktor rer. nat. (2010) from Bielefeld University, Germany, in the area of human-computer interaction and virtual reality. After working in the areas of psycholinguistics, artificial intelligence and virtual reality, he is now at CITEC’s Central Lab Facilities, where he is responsible for the Virtual Reality Laboratory. His areas of research are gaze-based interaction, multimodal interaction, human-computer interaction, usability, and virtual/augmented reality.

2016/06/01

Moving through virtual reality when you can’t move far in reality

Who and when?
Prof. Dr. Bernhard Riecke, Associate Professor at the School of Interactive Arts and Technology, Simon Fraser University, Surrey/Vancouver, Canada, is going to visit our University on Wednesday, May 25th 2016. His talk will start at 4:00 pm in the Visual Computing Lab C 061.

Topic: Moving through virtual reality when you can’t move far in reality
While computer graphics quality is steadily increasing, most 3D simulations, movies, and games do not give people a compelling sensation of really being in and moving through the simulated space. Why is this? How can we use self-motion illusions to provide a more compelling and embodied sensation of really moving through (and not just staring at) virtual environments? Are self-motion illusions (vection) good for anything? What are the contributions and interactions of the different sensory modalities, including non-visual ones? How can we design human-computer interfaces that facilitate navigation and spatial orientation when free-space walking is unfeasible?

Vita:
Associate Professor Bernhard Riecke joined Simon Fraser University in 2008 after receiving his PhD from Tübingen University and the Max Planck Institute for Biological Cybernetics and working as a postdoctoral fellow at Vanderbilt University and the Max Planck Institute. His research approach combines fundamental scientific research with an applied perspective of improving human-computer interaction. For example, he uses multidisciplinary research approaches and immersive virtual environments to investigate what constitutes effective, robust, embodied and intuitive human spatial cognition, orientation and behaviour as well as presence and immersion. This fundamental knowledge is used to guide the design of novel, more effective human-computer interfaces and interaction paradigms that enable similarly effective processes in computer-mediated environments such as virtual reality, immersive gaming, and multimedia.

2016/05/02