Virtual Co-Location: As if being there?

Who and when?
Dr. Stephan G. Lukosch, Associate Professor in the Faculty of Technology, Policy and Management at Delft University of Technology, Netherlands, is going to visit our University on Thursday, October 30th. His talk will start at 4:00 pm in room B 120.

Topic: Virtual Co-Location: As if being there?
Complex problem solving often requires a team of experts to physically meet and interact with each other. Identifying the problem and creating a shared understanding of it is a prerequisite for an efficient solution and one of the major challenges. Typical scenarios include solving complex construction problems, training on complex machinery, analysing complex situations in emergency services, or diagnosing complex medical cases. Unfortunately, it is not always possible to bring a team together to jointly handle a complex situation, due to experts’ limited availability, critical timing, or the accessibility of a location. Virtual co-location allows experts to engage in spatial remote collaboration: experts are virtually present at any place in the world and interact with others who are physically present, solving complex problems as if they were there in person. Virtual co-location relies on augmented reality to create spaces in which people and other objects are either virtually or physically present. The talk will explore the concept of virtual co-location through several scenarios from current and upcoming projects on remote support for emergency services, crime scene investigation, and training of experiment procedures. The talk concludes with an overview of the open issues that have to be addressed in order to answer the question “Virtual co-location, as if being there?” with a clear yes, and with a research agenda for the coming years.

Vita:
Stephan Lukosch is an associate professor at Delft University of Technology. His current research focuses on virtual co-location, in which individuals are virtually present at any place in the world, coordinate their activities with others, and exchange their experiences. By using augmented reality techniques to merge realities, additional information can be provided and visualized, thereby fostering shared understanding. By merging realities, complex problems can be solved, complex training can be supervised, and complex activities can be guided without all interacting individuals being physically in the same place. In his research, he combines his recent results on intelligent and context-adaptive collaboration support, collaborative storytelling for knowledge elicitation and decision-making, and design patterns for computer-mediated interaction.
His articles have appeared in various journals, including the International Journal of Cooperative Information Systems, the International Journal of Human-Computer Studies, and the Journal of Collaborative Computing and Work Practices. Currently, he is a steering committee member of the special interest group on Computer-Supported Cooperative Work (CSCW) of the German computer science society and of the ACM International Conference on Supporting Group Work (GROUP). He further serves on the editorial boards of the Journal of Universal Computer Science (J.UCS) and the International Journal of Cooperative Information Systems (IJCIS).

2014/10/14

Computer Vision for Autonomous Driving – The Bertha Benz Project

Who and when?
Dr. Markus Enzweiler, Daimler AG Research & Development (Environment Perception) in Sindelfingen, is going to visit our University on Tuesday, October 14th. His talk will start at 10:30 am in room B 120.

Topic: Computer Vision for Autonomous Driving – The Bertha Benz Project
Recent Mercedes-Benz cars offer a powerful stereo camera system that sets new standards in vehicle safety and comfort. Autonomous driving has become a reality, at least in low-speed highway scenarios. This raises hope for a fast evolution of autonomous driving that also extends to rural and urban traffic situations. In August 2013, “Bertha”, a Mercedes-Benz S-Class vehicle with close-to-production sensors, drove fully autonomously from Mannheim to Pforzheim, following the 100 km historic Bertha Benz Memorial Route. Next-generation stereo vision was the main sensing component and as such formed the basis for the comprehensive understanding of complex traffic situations.
This talk will sketch the state-of-the-art in robust computer vision for intelligent vehicles and will present the overall system architecture used for autonomous driving through busy cities.
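
How stereo vision yields distance can be made concrete with the standard triangulation relation Z = f·b/d, where f is the focal length in pixels, b the camera baseline in metres, and d the measured disparity in pixels. A minimal sketch follows; the parameter values are illustrative assumptions, not those of the actual Bertha vehicle:

    # Minimal sketch of stereo triangulation: depth from disparity.
    # Focal length and baseline below are assumptions, not Bertha's parameters.

    def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_m=0.30):
        """Distance (m) to a point with the given stereo disparity (pixels)."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # A point matched with 9 px of disparity lies about 40 m ahead:
    print(depth_from_disparity(9.0))   # -> 40.0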

Vita:
Dr. Markus Enzweiler received the BSc degree from the Bonn-Rhein-Sieg University of Applied Sciences, the MSc degree from the University of Ulm, and the PhD degree in computer science from the University of Heidelberg. Since 2010, he has been with Daimler Research & Development in Sindelfingen, Germany, where he co-developed the Daimler vision-based pedestrian detection system which is available in recent Mercedes-Benz cars. His current work focuses on statistical models of object appearance with application to scene understanding, object recognition, and autonomous driving in the domain of intelligent vehicles.
He received graduate and PhD scholarships from the Studienstiftung des deutschen Volkes (German National Academic Foundation). Among several best paper awards, he received both the IEEE Intelligent Transportation Systems Society Best PhD Dissertation Award as well as the Uni-DAS Research Award in 2012 for his contributions to computer vision for intelligent vehicles.

2014/10/02

Talking with Robots: Experiences and Some Lessons Learned

Who and when?
Prof. Michael Jenkin, Director of the York Centre for Field Robotics at York University, Canada, is going to visit our University on Wednesday, September 10th. His talk will start at 11 am in room C 116.

Topic: Talking with Robots: Experiences and Some Lessons Learned
Very few autonomous systems are intended to be fully autonomous. Rather, they are intended to follow the instructions they are given and to communicate the information they obtain back to their operator. Human-robot communication is thus a key aspect of the development and deployment of real robotic systems. For many applications, especially those that take place indoors in a laboratory, the existing infrastructure and communication channels (e.g., wired networks and workstation displays) provide a natural channel for human-robot communication. But as we move outside of this controlled environment, the problem becomes much more complex.
This talk reviews some of our recent research efforts in the development of effective human-robot communication systems for autonomous robots operating in complex unstructured domains. ROS (Robot Operating System) has evolved into a standard middleware for research robots, but accessing its internal state from an external agent that is not ROS-enabled can be difficult. One approach is to expose limited portions of the internal state of ROS to external human-robot interface devices and then to exploit the capabilities of those devices. We have explored this approach both for lightweight interface devices (e.g., Android tablets) in harsh external environments and for the development of virtual reality-based teleoperation systems using the Unity game engine.
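
A common way to realize this idea is the rosbridge protocol, which exposes ROS topics as JSON over a WebSocket so that a tablet or a Unity client can subscribe without being ROS-enabled itself. The sketch below illustrates that pattern using the third-party websocket-client package; the host name, port, and topic name are assumptions, and this is not necessarily the exact interface used in the work described:

    # Minimal sketch: reading a ROS topic from a non-ROS device via rosbridge.
    import json
    import websocket   # third-party 'websocket-client' package

    # Host and topic are assumptions; rosbridge listens on port 9090 by default.
    ws = websocket.create_connection("ws://robot.local:9090")
    ws.send(json.dumps({"op": "subscribe", "topic": "/robot_pose"}))

    for _ in range(10):                # read a few messages, then stop
        update = json.loads(ws.recv())
        print(update.get("msg"))       # the serialized ROS message payload

    ws.close()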

Vita:
Michael Jenkin is a Professor of Computer Science and Engineering, and a member of the Centre for Vision Research, at York University, Canada. Working in the fields of visually guided autonomous robots and virtual reality, he has published over 150 research papers, including co-authoring Computational Principles of Mobile Robotics with Gregory Dudek and a series of co-edited books on human and machine vision with Laurence Harris. Michael Jenkin’s current research interests include sensing strategies for AQUA, an amphibious autonomous robot being developed as a collaboration between Dalhousie University, McGill University and York University; the development of tools and techniques to support crime scene investigation; and the understanding of the perception of self-motion and orientation in unusual environments, including microgravity.

2014/09/08

Trends and Visions of Head Mounted Display Technologies

Who and when?
Dr. Kiyoshi Kiyokawa from the Cybermedia Center, Osaka University, is going to visit our University on Thursday, June 25th. His talk will start at 3 pm in room C 153.

Topic: Trends and Visions of Head Mounted Display Technologies
Thanks to the Google Glass project, the head mounted display (HMD) will likely be accepted by many people at last, nearly fifty years after the first device of its kind was invented. However, it will exhibit only a small portion of the huge potential that an HMD can offer. This talk will explore the history, current research trends and future visions of HMDs. Specifically, studies on head mounted visual displays, head mounted multi-modal displays, and head mounted sensing technologies for Augmented Reality (AR) are introduced, and challenges and visions for the realization of a better AR experience are discussed. The talk introduces both milestone research projects from around the world and the speaker’s past and current research projects. The latter include an occlusion-capable optical see-through display, a super wide view head mounted projective display with a semi-transparent retroreflective screen, and a super wide view parallax-free eye camera.

Vita:
Kiyoshi Kiyokawa received his Ph.D. degree in information systems from the Nara Institute of Science and Technology in 1998. He worked for the National Institute of Information and Communications Technology from 1999 to 2002 and was a visiting researcher at the Human Interface Technology Laboratory of the University of Washington from 2001 to 2002. He has been an Associate Professor at the Cybermedia Center, Osaka University, since 2002. He has co-authored nine book chapters, over 40 journal papers and over 200 conference papers. He is a steering committee member of the IEEE International Symposium on Mixed and Augmented Reality and a board member of the Virtual Reality Society of Japan. He has served as general and program chair at numerous academic conferences, including IEEE 3DUI, IEEE ISMAR, IEEE VR and ACM VRST.

2014/06/11

Fooling your Senses – Perceptually-Inspired Interfaces for the Ultimate Display

Who and when?
Prof. Frank Steinicke from the University of Hamburg is going to visit our University on Thursday, June 12th. His talk will start at 4 pm in room C 116.

Topic: Fooling your Senses – Perceptually-Inspired Interfaces for the Ultimate Display
In his essay “The Ultimate Display” from 1965, Ivan E. Sutherland states that “The ultimate display would […] be a room within which the computer can control the existence of matter […]”. This general notion of a computer-mediated or virtual reality, in which synthetic objects or the entire virtual environment become indistinguishable from the real world, dates back to Plato’s “Allegory of the Cave” and has been reconsidered again and again in science fiction literature as well as in the movie industry.

For instance, virtual reality is often used to question whether we truly “know” if our perceptions are real or not. Movies like “The Matrix”, or the fictional holodeck from the Star Trek universe, are prominent examples of this kind of perceptual ambiguity. Furthermore, in movies like Steven Spielberg’s “Minority Report” or Jon Favreau’s “Iron Man 2”, actors can seamlessly use free-hand gestures in space, combined with speech, to manipulate 3D holographic projections, while they also perceive haptic feedback when touching the virtual objects. In my talk I will revisit some of the most visually impressive 3D user interfaces and experiences of such fictional ultimate displays. As a matter of fact, we cannot let a computer fully control the existence of matter, but we can fool our senses and give a user the illusion that the computer can after all. I will show how different ultimate displays can be implemented with current state-of-the-art technology by exploiting perceptually-inspired interfaces. However, we will see that the resulting ultimate displays are not so ultimate after all, but pose interesting new research challenges and questions.
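
One classic example of such a perceptually-inspired illusion, named here for illustration rather than taken from the abstract, is redirected walking: rotation gains kept below the user’s detection threshold let a limited physical tracking space feel like a much larger virtual one. A minimal sketch with an assumed gain value:

    # Minimal sketch of a rotation gain, one standard way of "fooling the
    # senses" in VR (redirected walking). The gain of 1.1 is an illustrative
    # assumption; gains near 1.0 typically stay below detection thresholds.

    def redirected_yaw(virtual_yaw, real_yaw_delta, gain=1.1):
        """Map a real head rotation onto a slightly scaled virtual one."""
        return virtual_yaw + gain * real_yaw_delta

    # A user physically turning 90 degrees sees a 99-degree virtual turn,
    # which gradually steers their walking path within the tracking space.
    print(redirected_yaw(0.0, 90.0))   # -> 99.0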

Vita:
Frank Steinicke is a professor of Computer Science at the Department of Informatics at the University of Hamburg and chair of the Human-Computer Interaction Group. From 2012 to 2014 he was the director of the newly founded interdisciplinary Institute for Human Computer Media at the University of Würzburg and head of the Immersive Media Group. His research is driven by understanding human perceptual, cognitive and motor skills and limitations in order to reform interaction as well as experience in computer-mediated realities. Frank Steinicke regularly serves as a panelist and speaker at major events in the area of virtual reality and human-computer interaction. The results of his work have been published and presented at several international conferences and in journals including ACM CHI, ACM SIGGRAPH, IEEE VR, ACM TOG, IEEE TVCG, and many others. Furthermore, he serves on the IPC of various national and international conferences and is currently co-chair of ACM SUI 2014 and IEEE 3DUI 2014.

For more information on Frank Steinicke and his contact details, please visit his homepage at Hamburg University.

Scale in Stereoscopic 3D Media

Who and when?
Prof. Robert Allison from York University, Toronto, is going to visit our University on Tuesday, June 10th. His talk will start at 4 pm in room C 118.

Topic: Scale in Stereoscopic 3D Media
A primary concern when producing stereoscopic 3D (S3D) media is to promote an effective and comfortable S3D experience for the audience. The amount of depth produced on-screen can be controlled using a variety of parameters. Over the last decade, advances in technology have made S3D displays widely available, in an ever-expanding variety of technologies, dimensions, resolutions, optimal viewing angles and image qualities. Of these, one of the most variable and unpredictable factors influencing the observer’s S3D experience is display size, which ranges from S3D mobile devices to large-format 3D movie theatres. This variety poses a challenge to 3D content makers who wish to preserve the three-dimensional artistic context and avoid distortions and artefacts related to scaling. This talk will review the primary human factors issues related to S3D image scaling. The amount of depth from disparity alone can be precisely predicted from simple trigonometry; however, perceived depth from disparity in complex scenes is difficult to evaluate and most likely differs from geometrical predictions. This discrepancy is mediated by perceptual and cognitive factors, including the resolution of combinations of, and conflicts between, pictorial, motion and binocular depth cues. I will present the results of experiments which assess S3D distortions in the context of content, cognitive and perceptual influences, and individual differences.
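
That trigonometry is compact enough to state here: a point displayed with on-screen parallax p, viewed from distance D by an observer with interocular distance e, is geometrically predicted to appear at depth z = D·p/(e − p) behind the screen. The sketch below uses illustrative numbers and shows why the same content produces very different predicted depth on differently sized and distanced displays:

    # Minimal sketch of the geometric depth predicted from screen parallax.
    # Viewing distance, interocular distance, and parallax are illustrative.

    def depth_behind_screen(parallax_m, viewing_distance_m, interocular_m=0.063):
        """Predicted depth (m) behind the screen for uncrossed parallax."""
        if parallax_m >= interocular_m:
            raise ValueError("parallax >= interocular distance: depth diverges")
        return viewing_distance_m * parallax_m / (interocular_m - parallax_m)

    # The same 10 mm on-screen parallax at a TV vs. a cinema screen:
    print(depth_behind_screen(0.010, 2.0))    # ~0.38 m behind a TV at 2 m
    print(depth_behind_screen(0.010, 15.0))   # ~2.83 m behind a screen at 15 m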

Vita:
Robert Allison is Associate Professor of Electrical Engineering and Computer Science and of Psychology, and a member of the Centre for Vision Research, at York University. He obtained his PhD, specialising in stereoscopic vision, from York University in 1998 and did post-doctoral research at York University and the University of Oxford. His research investigates how we reconstruct and navigate the three-dimensional world around us based on the two-dimensional images on the retinas. It enables effective technology for advanced virtual and augmented reality and for the design of stereoscopic displays. He is a recipient of the Premier’s Research Excellence Award from the Province of Ontario in recognition of this work.

2014/05/30

Non-photorealistic rendering methods and their application

Who and when?
Prof. Dr. Oliver Deussen, professor for Computer Graphics and Media Informatics at Konstanz University, is going to visit our University on Monday, May 12th. His talk will start at 3:15 pm in room C 118.

Topic: Non-photorealistic rendering methods and their application
Computer graphics traditionally focuses on creating photorealistic images. However, for more than 20 years computer graphics researchers have also worked on creating abstract visual representations. Algorithms model human abstraction, and complex data is represented by only a few strokes.
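
To make “a few strokes” concrete, one of the simplest abstraction operators is colour quantization, which flattens an image to a handful of tones. The sketch below is a generic illustration of algorithmic abstraction, not Prof. Deussen’s method; the file names are hypothetical:

    # Generic sketch of algorithmic abstraction via colour quantization
    # (illustrative only, not the speaker's technique).
    import numpy as np
    from PIL import Image

    def abstract_image(path, levels=4):
        """Quantize each colour channel to 'levels' values."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        step = 256.0 / levels
        quantized = (np.floor(img / step) + 0.5) * step   # snap to bin centres
        return Image.fromarray(quantized.astype(np.uint8))

    # Hypothetical file names, for illustration only.
    abstract_image("photo.jpg", levels=4).save("abstract.png")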

In my talk I will give an overview of our research on abstract rendering methods and show where we have applied such techniques in different fields. Lastly, I will describe the work related to our painting robot (see also http://vimeo.com/68859229), which is our basis for studying human and machine painting.

Vita:
Prof. Deussen graduated from the Karlsruhe Institute of Technology and was appointed professor for Computer Graphics and Media Design by Dresden University of Technology in 2000. Since 2003 he has been professor for Computer Graphics and Media Informatics at Konstanz University. He is a visiting professor at the Chinese Academy of Sciences and serves as co-editor-in-chief of the Computer Graphics Forum. His areas of interest are the modeling and rendering of complex objects, non-photorealistic rendering, and information visualization.

2014/04/16

Scientific Visualization and Parallel Processing

Who and when?
Prof. Dr.-Ing. Stephan Olbrich, holder of the chair of “Scientific Visualization and Parallel Processing” at the University of Hamburg’s Department of Informatics, is going to visit our University on Tuesday, December 10th. His talk will start at 4 pm in room C 116.

Topic: Scalable In-Situ Data Extraction and Distributed Visualization
In the last few years, data analysis and visualization have dramatically gained in importance, since this part of the complete process chain is much more difficult to scale than the numerical cores of simulation models. 3D presentation of the results of scientific computing – especially taking advantage of highly interactive virtual reality environments – has become feasible using low-cost equipment such as 3D monitors or TV sets and advanced 3D graphics cards, whose development was driven by the consumer market. In computational fluid dynamics, 3D grids of up to 10^11 data points can typically be simulated on 4000 cores; a non-stationary scenario (~10^4 time steps) then produces ~10 petabytes of raw result data. Since such an amount of data cannot be transferred, stored, or explored using traditional approaches of separate post-processing, one topic of worldwide research is the development of tools that integrate data extraction into the simulation software, so-called “in-situ data extraction”, and take advantage of distributed systems for remote visualization.

We have developed a visualization middleware that implements parallel in-situ data extraction as a programming library; it minimizes sequential bottlenecks by parallelizing the visualization mapping methods and reduces the data volume by storing polygons and lines instead of raw data. Supporting synchronous and asynchronous, on-demand 3D presentation and interaction scenarios under bandwidth and rendering performance constraints, while limiting the frame update time to maintain interactive rates, requires flexible and efficient reduction and post-filtering techniques. For this purpose, our data extraction library supports MPI-based computing environments and encapsulates a parallel implementation of vertex-cluster-based isosurface simplification and the parallel extraction of property-enhanced pathlines. These pathlines can be interactively post-filtered by a specialized, so-called “3D streaming server”, which combines the storage, filtering, and play-out of sequences of 3D scenes as a 3D movie that can be navigated in real time.
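
Vertex clustering, the simplification scheme named above, can be sketched in a few lines: vertices are snapped to a coarse grid, all vertices falling into the same cell are merged into one representative, and triangles that collapse are dropped. This is a generic serial illustration, not the middleware’s parallel implementation:

    # Generic sketch of vertex-clustering mesh simplification (illustrative,
    # serial; not the actual middleware code).
    import numpy as np

    def vertex_cluster(vertices, triangles, cell_size):
        """vertices: (n, 3) float array; triangles: (m, 3) int array."""
        cell_to_new = {}                          # grid cell -> new vertex index
        new_vertices = []
        remap = np.empty(len(vertices), dtype=np.int64)
        for i, v in enumerate(vertices):
            cell = tuple(np.floor(v / cell_size).astype(np.int64))
            if cell not in cell_to_new:           # first vertex in this cell
                cell_to_new[cell] = len(new_vertices)
                new_vertices.append(v)            # becomes the representative
            remap[i] = cell_to_new[cell]
        tri = remap[triangles]                    # reindex all triangles
        keep = ((tri[:, 0] != tri[:, 1]) &        # drop collapsed triangles
                (tri[:, 1] != tri[:, 2]) &
                (tri[:, 0] != tri[:, 2]))
        return np.array(new_vertices), tri[keep]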

Vita:
Prof. Dr.-Ing. Stephan Olbrich, born in 1961, received his diploma degree in Electrical Engineering from the Leibniz University of Hannover in 1987. After working in the field of commercial IT development at altec electronic GmbH, he joined the Regional Computing Center for Lower Saxony at the Leibniz University of Hannover in 1989, combining advanced IT services with research in data visualization in the context of high-performance scientific computing. He received his doctoral degree in 2000 and became full professor for “IT Management” at the University of Düsseldorf and director of its Center of Information and Media Technology in 2005. Since 2010 he has been director of the Regional Computing Center at the University of Hamburg and, additionally, holder of the chair of “Scientific Visualization and Parallel Processing” at the Department of Informatics.

2013/12/03

Solving Problems with Visual Analytics: Challenges and Applications

Who and when?
Prof. Dr. Daniel Keim, from the Chair for Data Analysis and Visualization at the University of Konstanz, is going to visit our University on Thursday, November 14th. His talk will start at 4:00 pm in room C 116.

Topic: Solving Problems with Visual Analytics: Challenges and Applications
Never before in history has data been generated and collected at such high volumes as it is today. As the volumes of data available to business people, scientists, and the public increase, their effective use becomes more challenging. Keeping up to date with the flood of data using standard tools for data analysis and exploration is fraught with difficulty. The field of visual analytics seeks to provide people with better and more effective ways to understand and analyze large datasets, while also enabling them to act upon their findings immediately. Visual analytics integrates the analytic capabilities of the computer with the abilities of the human analyst, allowing novel discoveries and empowering individuals to take control of the analytical process. Visual analytics enables unexpected and hidden insights, which may lead to beneficial and profitable innovation. The talk presents the challenges of visual analytics and illustrates them with several application examples, which show the exciting potential of current visual analysis techniques but also their limitations.

Vita:
Daniel A. Keim is a full professor and head of the Information Visualization and Data Analysis Research Group in the Computer Science Department of the University of Konstanz, Germany. He has been actively involved in data analysis and information visualization research for more than 20 years and has developed a number of novel visual analysis techniques for very large data sets. He has been program co-chair of the IEEE InfoVis and IEEE VAST as well as the ACM SIGKDD conferences, and he is a member of the IEEE InfoVis, EuroVis, and VAST steering committees. He is the coordinator of the German Strategic Research Initiative (SPP) on Scalable Visual Analytics and the scientific coordinator of the EU Coordination Action on Visual Analytics.
Dr. Keim received his Ph.D. and habilitation degrees in computer science from the University of Munich. Before joining the University of Konstanz, he was an associate professor at the University of Halle, Germany, and a Technology Consultant at AT&T Shannon Research Labs, NJ, USA.

2013/10/17

Dynamic Monitor Allocation in the Java Virtual Machine

Who and when?
M.Sc. Marcel Dombrowski and Prof. Dr. Ken Kent from our Canadian partner, the University of New Brunswick (UNB, Fredericton, Canada), are going to visit our University on Friday, October 11th. Their talk will start at 3:00 pm in room C 118.

Topic: Dynamic Monitor Allocation in the Java Virtual Machine
With the Java language and sandboxed environments becoming more and more popular, research is needed into improving the performance of these environments while decreasing their memory footprints. In this talk we present a dynamic approach to growing monitors for objects in order to reduce the memory footprint and improve the execution time of the IBM Java Virtual Machine. According to the Java Language Specification, every object needs a monitor; however, not all objects require synchronization, so eagerly allocated monitors can have a negative memory impact. Our new approach grows monitors only when required. The impact of this approach on performance and memory has been evaluated using the SPECjbb2005 benchmark, and future work is also discussed. On average, a performance increase of 0.47% and a memory reduction of about 5.51% have been achieved with our approach.
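
The core idea of growing monitors on demand can be illustrated outside the JVM. The following Python sketch is a conceptual analogy, not IBM’s implementation: a lock is created for an object only the first time synchronization on it is requested, so objects that are never synchronized on never pay for a monitor:

    # Conceptual sketch (in Python, not the IBM JVM): allocate a monitor for
    # an object only on first use, instead of reserving one for every object.
    import threading

    _table_guard = threading.Lock()   # protects the monitor table itself
    _monitors = {}                    # id(obj) -> lazily allocated monitor

    def monitor_for(obj):
        """Return obj's monitor, allocating it only on first request."""
        key = id(obj)                 # sketch only: ignores id reuse after GC
        with _table_guard:
            if key not in _monitors:
                _monitors[key] = threading.RLock()   # the "grown" monitor
            return _monitors[key]

    # Usage: the monitor materializes here, on first synchronization.
    account = {"balance": 0}
    with monitor_for(account):
        account["balance"] += 1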

Vita:
Marcel Dombrowski received his dual-degree Master of Autonomous Systems (MAS) in cooperation with UNB. He was awarded an IBM Centre for Advanced Studies (CAS) PhD scholarship at UNB and works on improvements to the IBM JVM.

Additional Information
After the talk, both Marcel Dombrowski and Ken Kent will hold a short information session on the Dual Degree Programme.

2013/10/09