Prototyping Cognitive Interaction Technology using Virtual Reality

Who and when?
Dr. Thies Pfeiffer, from CITEC’s Central Lab Facilities at the University of Bielefeld, is going to visit our University on Thursday, June 2nd 2016. His talk will start at 4:00 pm in the Visual Computing Lab C 061.

Topic: Prototyping Cognitive Interaction Technology using Virtual Reality
Cognitive Interaction Technology (CIT) can manifest in many different devices and contexts. In our research projects at CITEC we are targeting such diverse technological platforms as humanoid or industrial robots, smart kitchens or smart homes, eyewear computing or swarming robots, just to give some examples.

As CIT is a very human-centered approach, it is crucial to include the target audience in the design and research process at very early stages. Often, CIT involves a co-development and co-evolution of hardware and software. This process, however, carries a high risk of failure, e.g. by not meeting the requirements of the target audience, and means that working prototypes only become available for testing and evaluation at very late stages of a project.

In this talk, I will present current and future work at CITEC’s Central Lab Facilities on using Virtual Reality technology for prototyping human-computer interactions of future Cognitive Interaction Technologies. In particular, I will present our work on using eye tracking in combination with augmented and virtual reality technology to assess human perception and cognitive processes.

Vita:
Thies Pfeiffer received his Diploma in Informatics and Natural Sciences (2003) and his doctorate (Dr. rer. nat., 2010) from Bielefeld University, Germany, in the area of human-computer interaction and virtual reality. After working in the areas of psycholinguistics, artificial intelligence and virtual reality, he is now at CITEC’s Central Lab Facilities and responsible for the Virtual Reality Laboratory. His areas of research are gaze-based interaction, multimodal interaction, human-computer interaction, usability and virtual/augmented reality.

2016/06/01

Moving through virtual reality when you can’t move far in reality

Who and when?
Prof. Dr. Bernhard Riecke, Associate Professor at the School of Interactive Arts and Technology, Simon Fraser University, Surrey/Vancouver, Canada, is going to visit our University on Wednesday, May 25th 2016. His talk will start at 4:00 pm in the Visual Computing Lab C 061.

Topic: Moving through virtual reality when you can’t move far in reality
While computer graphics quality is steadily increasing, most 3D simulations/movies/games do not give people a compelling sensation of really being in and moving through the simulated space. Why is this? How can we use self-motion illusions to provide a more compelling and embodied sensation of really moving through (and not just staring at) virtual environments? Are self-motion illusions (vection) good for anything? What are the contributions of, and interactions between, the different sensory modalities, including non-visual ones? How can we design human-computer interfaces that facilitate navigation and spatial orientation when free-space walking is unfeasible?

Vita:
Associate Professor Bernhard Riecke joined Simon Fraser University in 2008 after receiving his PhD from Tübingen University and the Max Planck Institute for Biological Cybernetics and working as a postdoctoral fellow at Vanderbilt University and the Max Planck Institute. His research approach combines fundamental scientific research with an applied perspective of improving human-computer interaction. For example, he uses multidisciplinary research approaches and immersive virtual environments to investigate what constitutes effective, robust, embodied and intuitive human spatial cognition, orientation and behaviour as well as presence and immersion. This fundamental knowledge is used to guide the design of novel, more effective human-computer interfaces and interaction paradigms that enable similarly effective processes in computer-mediated environments such as virtual reality, immersive gaming, and multimedia.

2016/05/02

New Technologies Driving Visual Computing Research

Who and when?
Prof. Dr.-Ing. Marcus Magnor from the Institute of Computer Graphics at TU Braunschweig, is going to visit our university on Wednesday, April 27th. His talk will start at 2:30 pm in the Visual Computing Lab C 061.

Topic: New Technologies Driving Visual Computing Research
Recent developments in consumer electronics have a profound impact even on fundamental research agendas and conference programs in visual computing. Programmable GPUs, 3D movies, Kinect, HDR displays, 4k video projectors, Oculus Rift, or all-in-one smartphones are just a few examples of how sudden, widespread availability and adoption of “new” technologies drive contemporary research (even though most of it had, in fact, already been available in the lab for quite some time). In my talk, I will concentrate on a few ongoing consumer technology trends and demonstrate how they are triggering intriguing new research in visual computing.

Vita:
Marcus Magnor heads the Computer Graphics Lab of the Computer Science Department at Technische Universität Braunschweig (TU Braunschweig). He received his PhD (2000) in Electrical Engineering from Erlangen University.

For his post-graduate studies, he joined the Computer Graphics Lab at Stanford University. In 2002, he established the Independent Research Group Graphics-Optics-Vision at the Max-Planck-Institut Informatik in Saarbrücken. In 2009, he was a Fulbright Scholar at the University of New Mexico, USA, where he holds an appointment as Adjunct Professor at the Physics and Astronomy Department. In 2011, Marcus Magnor was elected a member of the Engineering Class of the Braunschweigische Wissenschaftliche Gesellschaft. He is a laureate of the Wissenschaftspreis Niedersachsen 2012.

His research interests concern visual computing, i.e. visual information processing from image formation, acquisition, and analysis to image synthesis, display, perception, and cognition. Areas of research include, but are not limited to, computer graphics, computer vision, visual perception, image processing, computational photography, astrophysics, imaging, optics, visual analytics, and visualization.

Poster with abstract and vita

2016/04/26

Biological Image Analysis and Visualization: Ongoing Projects and Future Perspectives

Who and when?
Prof. Dr.-Ing. Dorit Merhof from the Institute of Imaging & Computer Vision at RWTH Aachen, is going to visit our university on Thursday, December 17th. Her talk will start at 3:00 pm in the Visual Computing Lab C 061.

Topic: Biological Image Analysis and Visualization: Ongoing Projects and Future Perspectives
Innovative imaging and screening technologies have become fundamental to scientific progress across all disciplines of natural and life sciences. However, the actual bottleneck often lies in the handling and analysis of the vast amounts of complex data generated through these technologies, which requires expert knowledge for analysis and visualization.

Interdisciplinary research between biologists and image processing specialists aims at developing dedicated algorithms for automated analysis and for interactive exploration of complex, large and/or high-dimensional scientific data repositories. Since analysis and visualization have become a real bottleneck in the biological sciences, these data provide interesting and challenging research questions from a computer science point of view.

In this talk, a review of interdisciplinary projects addressing the analysis of challenging biomedical image data, such as super-resolution microscopy images, or the real-time analysis of biological video data, is provided. Finally, future challenges for biomedical image analysis are discussed.

Vita:
Dorit Merhof received her Diploma in computer science (2003) and her PhD degree in biomedical image analysis (2007) from the University of Erlangen-Nuremberg, Germany. After two years with Siemens Molecular Imaging, Oxford, UK, she joined the University of Konstanz, Germany, as an assistant professor for Visual Computing. Since 2013, she has been a full professor at the Institute of Imaging & Computer Vision at RWTH Aachen University, Germany. Her research interests include the acquisition, processing and visualization of image data originating from various biomedical and industrial applications.

2015/12/16

Visualization and small things

Who and when?
Dr. Guido Reina, postdoctoral researcher at the Visualization Research Center of the University of Stuttgart (VISUS), is going to visit our University on Thursday, October 22nd 2015. His talk will start at 2:00 pm in the Visual Computing Lab C 061.

Topic: Visualization and small things
Molecular dynamics has become a commonplace approach for prediction and verification in natural sciences. It is also a good example of how increasing computational resources and capabilities add to the continuous growth and availability of scientific data. VISUS at the University of Stuttgart has several project partners from thermodynamics, physics, and biochemistry who make extensive use of MD. In this talk, I will give an overview of the work I was involved with over the last few years. I will point out our general method and some specific examples of molecular dynamics visualization. I will also touch upon two of our publicly available frameworks, MegaMol and OGL4Core. The former is a GPU-centered research prototyping platform while the latter is mainly aimed at teaching modern OpenGL via continuous abstraction.

Vita:
Dr. Guido Reina is a postdoctoral researcher at the Visualization Research Center of the University of Stuttgart (VISUS). He received his PhD in computer science (Dr. rer. nat.) in 2008 from the University of Stuttgart, Germany. His research interests include large displays, particle-based rendering, and GPU-based methods in general. He is a principal investigator of the subproject “Visualization of Systems with Large Numbers of Particles” of the Collaborative Research Center (SFB) 716.

2015/10/20

BEAMING between Virtual and Reality: Studies in Asymmetric Telepresence

Who and when?
Prof. Anthony Steed, Professor of Virtual Environments and Computer Graphics at the Department of Computer Science of University College London in the United Kingdom, is going to visit our University on Thursday, June 18th 2015. His talk will start at 3:30 pm in the Visual Computing Lab C 061.

Topic: BEAMING between Virtual and Reality: Studies in Asymmetric Telepresence
Beaming is the process of virtually teleporting to a destination. Based (loosely) on the idea from Star Trek, our aim is to transport a visitor, using a high-end virtual reality system, into a real destination environment. We want the visitor both to be able to understand the destination and to appear within it, so that people at the destination can understand and interact with the visitor.

We have investigated a range of solutions to technically achieve Beaming, including novel displays, robotic systems and scene reconstruction. In this talk, I will present research challenges that have arisen from experience with prototype Beaming systems. These will range from technical challenges to open questions about how new virtual reality and robotic systems can transform our experience of interacting with remote people.

Vita:
Professor Anthony Steed is Head of the Virtual Environments and Computer Graphics (VECG) group at University College London. The VECG group is the UK’s largest group in this area with over forty staff and research students. Prof. Steed’s research interests extend from virtual reality systems, through to mobile mixed reality systems, and from system development through to measures of user response to virtual content. He has published over 160 papers in the area, and is the main author of the book “Networked Graphics: Building Networked Graphics and Networked Games”. He is also founder and CTO of Animal Systems, creators of Chirp (chirp.io). He is also director of the UCL Centre in Virtual Environments, Interaction and Visualisation (UCL VEIV), a specialist doctoral training centre supported by a wide variety of companies.

Poster with abstract and vita

2015/06/10

Pushing Virtual Reality: Enhancing Immersion Through Secondary Cues

Who and when?
Prof. Robert W. Lindeman, Associate Professor and Director of the Human Interaction in Virtual Environments Lab in the Department of Computer Science of the Worcester Polytechnic Institute in Massachusetts, USA, is going to visit our University on Friday, March 20th 2015. His talk will start at 11:00 am in the Visual Computing Lab C 061.

Topic: Pushing Virtual Reality: Enhancing Immersion Through Secondary Cues
Visual and audio quality in video games has reached a point where we can now provide sensory realism approaching the threshold of standard human perception. For example, single frames that used to take hours to generate for films just a decade ago can now be produced at interactive rates on commodity hardware. These techniques use a careful combination of captured and synthetic content created by artists using sophisticated, but readily available tools. So, what’s next?

In this talk, I will discuss current efforts at increasing the sense of presence in virtual reality through the use of secondary sensory cues that enhance the high-quality primary visual and sound cues used in current video games. I hope to stimulate thinking beyond standard approaches.

Vita:
Rob Lindeman is an Associate Professor in the Computer Science Department at Worcester Polytechnic Institute (WPI) in Massachusetts, USA. He founded the Human Interaction in Virtual Environments (HIVE) Lab at WPI, where he and his students specialize in Virtual Reality, 3D human-computer interaction, teleoperation, and Augmented Reality. Rob is the Director of the Interactive Media & Game Development (IMGD) program at WPI. He earned the B.A. degree (cum laude) in Computer Science from Brandeis University, the M.S. degree in Systems Management from the University of Southern California, and the Sc.D. degree in Computer Science from The George Washington University. Rob was General Chair of the IEEE Virtual Reality Conference in 2010 & 2011, and of IEEE 3DUI in 2014 & 2015. He is Program Co-Chair of ACM ISMAR in 2014 & 2015. Rob is an Associate Editor of the journal Frontiers in Robotics and AI. He has worked extensively in research labs in Japan and New Zealand, in addition to the US. He is a Senior Member of both the ACM and IEEE, and a member of UPE. Rob enjoys skiing, playing soccer, and geocaching.

Poster with abstract and vita

2015/03/20

Capturing Bispectral Reflectance and Reradiation

Who and when?
Prof. Dr.-Ing. Hendrik Lensch, Professor of Computer Graphics at Tübingen University, is going to visit our University on Monday, January 12th. His talk will start at 4:00 pm in the Visual Computing Lab C 061.

Topic: Capturing Bispectral Reflectance and Reradiation
In fluorescent materials, light from a certain band of incident wavelengths is reradiated at longer wavelengths, i.e., with a reduced per-photon energy. In this talk, we will extend the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between wavelengths, resulting in a Bispectral Bidirectional Reflectance and Reradiation Distribution Function (bispectral BRRDF).
Two different measurement setups will be presented. One is for capturing bidirectional and bispectral reflectance and reradiation data of homogeneous fluorescent materials. The other is a hyperspectral light stage which allows for capturing spatially varying material properties of arbitrary 3D objects. In order to reduce the number of measurement images, we make use of principal component analysis as well as compressive sensing to reconstruct the full BRRDF. Finally, a simple reflectance display will be presented.
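For context, the following is a rough sketch (not taken from the talk itself, and the notation is only a plausible reading of the abstract) of how the bispectral BRRDF generalizes the BRDF: both incident and outgoing wavelengths become parameters, so outgoing radiance integrates over incident wavelengths as well as incident directions.

```latex
% Standard BRDF: outgoing radiance at a single wavelength \lambda
L_o(\omega_o, \lambda)
  = \int_{\Omega} f_r(\omega_i, \omega_o, \lambda)\,
    L_i(\omega_i, \lambda)\, \cos\theta_i \, d\omega_i

% Bispectral BRRDF: light incident at wavelength \lambda_i may be
% reradiated at a longer wavelength \lambda_o, which adds an integral
% over the incident spectrum \Lambda
L_o(\omega_o, \lambda_o)
  = \int_{\Lambda} \int_{\Omega} f(\omega_i, \omega_o, \lambda_i, \lambda_o)\,
    L_i(\omega_i, \lambda_i)\, \cos\theta_i \, d\omega_i \, d\lambda_i
```

The diagonal $\lambda_i = \lambda_o$ recovers ordinary reflectance; the off-diagonal terms capture the fluorescent energy transfer described in the abstract.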

Vita:
Hendrik P. A. Lensch holds the chair for computer graphics at Tübingen University and is currently the head of the computer science department. He received his diploma in computer science from the University of Erlangen in 1999. He worked as a research associate in the computer graphics group at the Max-Planck-Institut für Informatik in Saarbrücken, Germany, and received his PhD from Saarland University in 2003. Hendrik Lensch spent two years (2004-2006) as a visiting assistant professor at Stanford University, USA, followed by a stay at the MPI Informatik as the head of an independent research group. From 2009 to 2011 he was a full professor at the Institute for Media Informatics at Ulm University, Germany. In his career, he received the Eurographics Young Researcher Award 2005, was awarded an Emmy-Noether-Fellowship by the German Research Foundation (DFG) in 2007 and received an NVIDIA Professor Partnership Award in 2010. His research interests include 3D appearance acquisition, computational photography, global illumination and image-based rendering, and massively parallel programming.

2015/01/08

Building Algorithmic Polytopes

Who and when?
Dr. David Bremner, Professor at the Faculty of Computer Science at the University of New Brunswick, Fredericton, Canada, is going to visit our University on Friday, December 12th. His talk will start at 4:30 pm in the Visual Computing Lab C 061.

Topic: Building Algorithmic Polytopes
Researchers have studied the Matching Polytope (the convex hull of the characteristic vectors of all perfect matchings in a complete graph) for almost 50 years. Recent results have shown that, in a certain sense, no polynomial-size inequality representation of this polytope exists. Motivated by this, we are studying alternative “natural” ways of representing problems like matching as linear programs. In this talk I will discuss ongoing work to develop a compiler from a simple ALGOL-like pseudocode to polynomial-sized linear programs. These LPs can compute the output for any input (of a given size) to the corresponding algorithm by a trivial encoding of the input into the objective function. The talk will cover the structure of the inequalities needed to simulate a simple bit-oriented register machine supporting arithmetic and arrays, and a limited kind of integrality guarantee needed to solve these systems as linear, rather than integer linear, programs. I will also give an overview of the current compiler implementation and, time permitting, a demo.
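For background, Edmonds’ classical description of the perfect matching polytope of a graph $G=(V,E)$ (with $|V|$ even) uses exponentially many inequalities; a brief sketch:

```latex
x_e \ge 0                          \quad \forall\, e \in E
\sum_{e \in \delta(v)} x_e = 1     \quad \forall\, v \in V
\sum_{e \in \delta(S)} x_e \ge 1   \quad \forall\, S \subseteq V,\ |S| \text{ odd}
```

Here $\delta(S)$ denotes the set of edges with exactly one endpoint in $S$; the odd-set inequalities are the source of the exponential size. The “recent results” mentioned above presumably refer to Rothvoß’s 2014 theorem that the matching polytope has exponential extension complexity, i.e. even with auxiliary variables no polynomial-size linear description exists.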

2014/12/10

Natural Interaction in Multi-Display Environments

Who and when?
Dr. Raimund Dachselt, Professor for Computer Science at the Technische Universität Dresden is going to visit our University on Thursday, December 11th. His talk will start at 3:00 pm in the Visual Computing Lab C 061.

Topic: Natural Interaction in Multi-Display Environments
We are faced with an ever-increasing multitude of interactive surfaces of every size, ranging from smartwatches and tablets to tabletops and large display walls. The talk will first provide insight into research on combining several modalities to interact directly on the surface. It will then be demonstrated that the interaction space can be extended to the 3D volume above tabletops by adding multiple spatially aware, handheld magic lenses. These tangible displays allow exploring various information spaces mapped onto the 3D space. Furthermore, considering the space in front of a large interactive display wall, we can further extend the interaction space by also including remote interaction. Gaze-supported interaction, handheld tangibles and gestures are discussed as possible input channels. By combining multiple displays and interaction modalities in a seamless way, we can now interact with digital content on, in front of, and behind large wall-sized displays, paving the way for effective collaboration.

Vita:
Raimund Dachselt is a university professor for computer science at the Technische Universität Dresden and head of the Interactive Media Lab Dresden (Chair of Multimedia Technology). His research interests include natural human-computer interaction, interactive information visualization and 3D user interfaces. He has published over 100 peer-reviewed contributions and received 4 ACM Best Paper Awards. Dachselt has co-organized 15 international workshops and has been co-chair, reviewer and PC member of several leading HCI conferences and journals. He was also the general co-chair of ACM Interactive Tabletops and Surfaces 2014.

2014/12/03