Research at Sentience Lab, Auckland University of Technology

Who and when?
Dr. Stefan Marks, from the Auckland University of Technology, is going to visit our University on Wednesday, July 12th. His talk will start at 3 pm in the Visual Computing Lab C 061.

Topic: Research at Sentience Lab, Auckland University of Technology
“One of the most fantastic capabilities of the human brain is that of complex pattern recognition. If world-encompassing actions were accelerated, or a facsimile of the action presented within the velocity range of human comprehension, not only would the motion become clearly visible, but also some fundamental principles or heretofore unfamiliar forms of behavior probably would be exposed. The brain quickly correlates such new information with previously acquired data and insight gained from other experiences and adds understanding to the new phenomena being examined.” (Fuller, R. B. (1982). Critical Path (2nd edition). New York, NY: St. Martin’s Griffin. p. 183)

With the renaissance of virtual reality technology in 2012, scientific visualisation of complex spatial datasets can now be achieved with COTS hardware at a level that was previously reserved for specialised CAVE facilities. 3D immersive visualisation enables the user to “dive” into data and opens up opportunities for seeing and observing patterns, connections, and spatial and temporal relations. Selected examples of scientific visualisations implemented at Sentience Lab, AUT, will be presented, including the integration of the research and the VR facilities into undergraduate and postgraduate teaching and interdisciplinary projects.

Vita:
Dr. Stefan Marks is a researcher and senior lecturer at Colab, the interdisciplinary unit at Auckland University of Technology. His main areas of research are virtual reality, visualisation, and computer graphics. He combines these interests in his function as director of Sentience Lab, an immersive virtual reality facility for multimodal and multisensory data visualisation and interaction.

Stefan has eight years of industry experience as a hardware and software developer, a Diplom in Microinformatics and a Master’s degree in Human-Computer Interaction from the Westfälische Hochschule, Germany, and a PhD from The University of Auckland.

In his free time, Stefan enjoys hiking and photography.

2017/07/11

Beyond Fun and Games: VR and Visualization as a Tool of the Trade

Who and when?
Prof. Carolina Cruz-Neira and Prof. Dirk Reiners, from the University of Arkansas at Little Rock, USA, are going to visit our University on Monday, June 19th. Their talk will start at 3 pm in the Visual Computing Lab C 061.

Topic: Beyond Fun and Games: VR and Visualization as a Tool of the Trade
The recent resurgence of VR is exciting and encouraging because the technology is at a point where it will soon be available to a wider range of industries and uses, and, being driven by the consumer market, it will be more robust and far less expensive than the large-scale systems of the early 2000s. However, it has also been a little disappointing to see VR technology mostly portrayed as the ultimate gaming environment and the new way to experience movies. VR is much more than that: a large number of groups around the world have been using VR for the past twenty years in engineering, design, training, medical treatment, and many other areas beyond gaming and entertainment that seem to have been forgotten in the public perception. Furthermore, VR technology is also much more than goggles; there are many ways to build devices and systems that immerse users in virtual environments. And finally, there are many challenges in creating engaging, effective, and safe VR applications. This talk will present our experiences in developing VR technology, creating applications for industry, exploring the effects of VR exposure on users, and experimenting with different immersive interaction models. It will provide a much wider perspective on what VR is, its benefits and limitations, and how it has the potential to become a key technology for improving many aspects of human life.

In addition to VR becoming more prevalent, we are experiencing exponential growth of data in all aspects of human life, to the point that these vast amounts of data are becoming overwhelming to manage and are starting to go unused because we lack the tools to extract meaningful information from the raw data. Social media, advances in sensors, new computational models, web surveys, and electronic transactions are just a few examples of data generation and collection technologies that capture far more than we can handle with current approaches to data analysis. Clearly, the human cognitive system only enables us to scrutinize and analyze a limited amount of the raw data we generate, thereby also limiting the quality of our scientific insight into the problem at hand. Consequently, our data-rich world is developing a critical need for visualization as a key component of the scientist’s tool set for discovery and insight into their areas of expertise.

Visualization, and especially interactive visualization, takes advantage of the bandwidth of the human visual system, our ability to visually identify patterns and relationships, and the way we interact with data to extract information. This presentation explores the power of visualization to extract information from big data through an introduction to visual analytics, an overview of current methods and techniques, and some illustrative examples of work being done at the Emerging Analytics Center at the University of Arkansas at Little Rock. The presentation seeks to stimulate the audience’s imagination about what is possible, and to encourage future research with a multidisciplinary approach in which visualization and visual analytics take as central a role as the data-gathering approaches in modeling and analyzing a wide variety of problems, phenomena, situations, training tasks, and other aspects of human life.

Vitae:
Dr. Carolina Cruz-Neira is a pioneer in the areas of virtual reality and interactive visualization, having created and deployed a variety of technologies that have become standard tools in industry, government, and academia. She is known worldwide as the creator of the CAVE virtual reality system, which was her PhD work, and of VR Juggler, an open-source VR application development environment. Her work with advanced technologies is driven by simplicity, applicability, and providing value to a wide range of disciplines and businesses. This drive makes her work highly multidisciplinary and collaborative, having received multi-million-dollar awards from the National Science Foundation, the Army Research Lab, the Department of Energy, Deere and Company, and others. She has dedicated part of her career to transferring research results in virtual reality into daily use in industry and research organizations and to leading entrepreneurial initiatives to commercialize results of her VR research. She is also recognized for having founded and led very successful virtual reality research centers, such as the Virtual Reality Applications Center at Iowa State University, the Louisiana Immersive Technologies Enterprise, and now the Emerging Analytics Center. She serves on many international technology boards and government technology advisory committees, and outside the lab she enjoys combining her technology research with the arts and the humanities through forward-looking public performances and installations. She has been named by BusinessWeek magazine as a “rising research star” in the next generation of computer science pioneers, has been inducted as an ACM Computer Pioneer, and has received the IEEE Virtual Reality Technical Achievement Award and the Distinguished Career Award from the International Digital Media & Arts Society, among other national and international recognitions.

Currently, Dr. Cruz-Neira is the Donaghey Professor and the Director of the Emerging Analytics Center at the University of Arkansas at Little Rock and an Arkansas Research Scholar through the Arkansas Research Alliance.

Dr. Dirk Reiners has been at the heart of immersive visualization and Virtual Reality (VR) for more than 20 years. He holds an MS and a PhD in Computer Graphics from the Technical University of Darmstadt, Germany, and worked at the Fraunhofer Institute for Computer Graphics, the largest Computer Graphics research institute in the world, for more than 10 years on different topics in interactive graphics and VR. He was instrumental in pioneering VR deployment for many of the German car manufacturers. His primary research interests are in interactive 3D graphics, immersive and high-resolution display systems, and the software fundamentals needed to do all of this effectively and efficiently. He was the initiator and project lead for the OpenSG Open Source scenegraph project. He has been an active member of the virtual reality community and has served as Chair for Demos, Videos, and Exhibits and as Program Chair at IEEE Virtual Reality and other conferences.

2017/06/16

Carolina Cruz-Neira – inventor of the CAVE – visiting IVC

The inventor of the CAVE, Carolina Cruz-Neira, will visit our University and give a talk. She will be joined by Dirk Reiners, who, in the same talk, will outline current research at the Emerging Analytics Center at the University of Arkansas at Little Rock. The talk will take place on Monday, June 19th, starting at 3 pm in the Visual Computing Lab C 061.

Further details to be posted here.

2017/06/14

It’s Time to Stop Staring at Your Phone’s Mobile Map: The Importance of HCI Perspectives for Next-generation Navigation Devices

Who and when?
Prof. Johannes Schöning, from the University of Bremen, Germany, is going to visit our University on Thursday, June 8th. His talk will start at 1 pm in the Visual Computing Lab C 061.

Topic: It’s Time to Stop Staring at Your Phone’s Mobile Map: The Importance of HCI Perspectives for Next-generation Navigation Devices
In my talk, I will give a broad overview of my research. My research interests lie at the intersection of human-computer interaction (HCI), geographic information science, and ubiquitous interface technologies. In our lab, we investigate how people interact with digital spatial information and create new methods and novel interfaces to support that interaction.

For example, catastrophic incidents associated with GPS devices and other personal navigation technologies are all too common: a tourist drives his rental car across a beach and directly into the Atlantic Ocean, a person in Belgium intending to drive to a nearby train station ends up in Croatia, a family traveling on a dirt road gets stranded for four days in the Australian outback. I will characterise key patterns that exist in these incidents and enumerate implications for research and design in HCI that emerge from these patterns.

In addition, researchers and mapping platforms have shown growing interest in optimizing routes for criteria other than simple travel time, e.g. identifying the “simplest route”, the “safest route”, or the “most beautiful route”. However, despite the ubiquity of algorithmic routing and its potential to define how millions of people move around the world, very little is known about the externalities that arise when adopting these new optimization criteria, for instance the potential redistribution of traffic to certain neighborhoods and increased route complexity (with its associated risks). I will present the first controlled examination of these externalities, doing so across multiple mapping platforms, alternative optimizations, and cities.

Vita:
Johannes Schöning is a Lichtenberg Professor and Professor of Human-Computer Interaction (HCI) at the University of Bremen in Germany. Before coming to Bremen, he was a visiting lecturer at UCL, UK, where he helped to set up the Intel Collaborative Research Institute for Sustainable Cities, and held a faculty position at Hasselt University, Belgium. He is also currently a visiting professor at the Madeira Interactive Technologies Institute (M-ITI), Portugal. Previously, he worked in Saarbrücken, where he was a senior consultant at the German Research Centre for Artificial Intelligence (DFKI). During his time at DFKI, he received a PhD in computer science from Saarland University (2010), which was supported by the Deutsche Telekom Labs in Berlin. He obtained his Master’s degree in Geoinformatics at the Institute for Geoinformatics at the University of Münster (2007).

2017/05/31

Value of the Open

Who and when?
Prof. Walid Karam, from the University of Balamand, Balamand Al Kurah, Lebanon, is going to visit our University on Friday, May 19th. His talk will start at 11 am in the Visual Computing Lab C 061.

Topic: Value of the Open
We live in an open world despite all boundaries. Technology and the Internet have brought people closer. Societies have evolved to rely on the knowledge economy. Education has transformed from the supply of information and basic scientific facts and skills to innovation and creativity. There are new business concepts that rely strongly on open innovation, and new mechanisms for creating value. Laws have been put in place to support these new methodologies. This talk will shed light on these concepts and highlight the value of “openness” in the knowledge society.

Vita:
Walid Karam is a Professor and researcher at the University of Balamand. He is a founding member of LINC (the Lebanese Internet Center) and LERN (the Lebanese Education & Research Network), as well as a board and senior member of the Internet Society (Lebanon Chapter) and the Computer Society (Lebanon Section). He received his Bachelor of Electrical Engineering from Georgia Tech and his PhD in Computer & Telecommunications from Telecom-ParisTech.

2017/05/16

Visually guided underwater robots

Who and when?
Prof. Michael Jenkin, from York University, Toronto, Canada, is going to visit our University on Wednesday, March 1st. His talk will start at 10 am in the Visual Computing Lab C 061.

Topic: Visually guided underwater robots
Vision has proven to be a particularly effective sensor for robots operating on and above the surface of the earth. In this domain, vision has been used to track features, build environmental representations, solve localization tasks, avoid obstacles, and provide a conduit for human-robot communication. But how well does this sensing modality work underwater? Utilizing the AQUA2 underwater platform, I have been involved in a long-term research project that has been developing solutions to these and other problems associated with underwater vehicles capable of operating in a 6DOF environment. Results for environmental reconstruction, localization, and gait planning will be presented, along with some highlights of ongoing work with a new vehicle (Milton) that underwent its first sea trials last summer and will be used extensively in trials early in 2017.

Vita:
Michael Jenkin is a Professor of Electrical Engineering and Computer Science, and a member of the Centre for Vision Research at York University, Canada. Working in the fields of visually guided autonomous robots and virtual reality, he has published over 150 research papers including co-authoring Computational Principles of Mobile Robotics with Gregory Dudek and a series of co-edited books on human and machine vision with Laurence Harris.

Michael Jenkin’s current research interests include work on sensing strategies for AQUA, an amphibious autonomous robot being developed as a collaboration between Dalhousie University, McGill University and York University; the development of tools and techniques to support crime scene investigation; and the understanding of the perception of self-motion and orientation in unusual environments including microgravity.

2017/02/15

Self-motion and self-orientation: studies using Virtual Reality and the human centrifuge

Who and when?
Dr. Laurence Harris, Director of the York Centre for Vision Research at York University, Canada, is going to visit our University on Wednesday, December 14th 2016. His talk will start at 10:30 am in the Visual Computing Lab C 061.

Topic: Self-motion and self-orientation: studies using Virtual Reality and the human centrifuge
More details can be found in this announcement.

2016/12/13

From J9 to OMR: Developing Technologies to Improve Virtual Machines

Who and when?
Dr. Kenneth Kent, Director of the IBM Center for Advanced Studies and Professor at the University of New Brunswick, Canada, is going to visit our University on Wednesday, December 7th 2016. His talk will start at 11:00 am in the Visual Computing Lab C 061.

Topic: From J9 to OMR: Developing Technologies to Improve Virtual Machines
The Java virtual machine forms the underlying platform for many of the technologies that IBM deploys in the cloud and enterprise domains. Having an efficient platform is key to ensuring that client applications operate in an optimal manner. For the last 5 years, CAS-Atlantic has collaborated with IBM to develop several technologies to increase the performance of its JVM. In this talk, Dr. Kent will give a snapshot of some of the projects that have been undertaken and their outcomes. In addition, he will give an overview of the next 5 years of research that IBM and CAS-Atlantic have planned, collaborating on OMR to provide a virtual machine that supports multiple languages, not just Java.

Vita:
Dr. Kent has served as the Director of the Information Technology Centre and cooperates with industrial partners including IBM, Altera, Protocase, and Butterfly Energy Systems. His collaboration with IBM led to the creation of the Centre for Advanced Studies – Atlantic at UNB, where he is the founding Director.

His research interests in Virtual Machines and FPGA Architectures have led to numerous publications and a number of tools widely used in the open-source community. He is an active member in the scientific community having served as co-programme chair, co-general chair and steering committee member of the IEEE Rapid Systems Prototyping Symposium and co-programme chair of the Highly Efficient Architectures and Reconfigurable Technologies Workshop.

He is a member of the National Science and Engineering Research Council Strategic Grant selection committee and an executive board member of Science Atlantic.

2016/12/06

Human-Computer Interaction Research at Otago

Who and when?
Dr. Holger Regenbrecht, Professor at the University of Otago, New Zealand, is going to visit our University on Friday, December 2nd 2016. His talk will start at 10:30 am in the Visual Computing Lab C 061.

Topic: Human-Computer Interaction Research at Otago
After a brief introduction to the research areas of the Information Science department, Holger will present a selection of his own research projects in human-computer interaction, with an emphasis on Virtual and Mixed Reality (VR, MR) technologies. In particular, he will talk about different projects in telepresence, mobile systems, and MR interaction.

Holger Regenbrecht has been involved in research and development in the fields of Virtual and Augmented Reality for over 20 years. He leads the Computer-Mediated Realities Lab at the University of Otago. Holger has worked as a computer programmer, project manager, and researcher for clients in civil engineering and architecture, automotive and aerospace, and health and wellbeing. His work spans theory, concepts, techniques, technologies, and applications.

Vita:
Dr. Holger Regenbrecht has been working in the fields of Virtual and Augmented Reality for over 15 years. He was initiator and manager of the Virtual Reality Laboratory at Bauhaus University Weimar (Germany) and the Mixed Reality Laboratory at DaimlerChrysler Research and Technology (Ulm, Germany).

His research interests include Human-Computer Interaction (HCI), Applied Computer Science and Information Technology, (collaborative) Augmented reality, 3D Teleconferencing, psychological aspects of Mixed Reality, three-dimensional user interfaces (3DU) and computer-aided therapy and rehabilitation.

He is a member of IEEE, ACM, and igroup.org and serves as a reviewer and auditor for several conferences, journals and institutions.

2016/11/25

Sign Language for Cars – 3D Hand Gesture Control Goes Automotive

Who and when?
Dr. Alexander Barth, Technical Manager for Vision Engineering at Delphi, Wuppertal, is going to visit our University on Tuesday, November 15th 2016. His talk will start at 10:00 am in the Visual Computing Lab C 061.

Topic: Sign Language for Cars – 3D Hand Gesture Control Goes Automotive
Hand gestures are a natural and intuitive way of human communication. They can also be used for human-machine interaction, e.g. for controlling a TV, computer game, or, as recently introduced by BMW, the infotainment system of a car.

With the wave of a hand or the flick of a finger, drivers can browse through a music playlist, zoom in and out of navigation maps, or accept phone calls. In future cars, hand gestures could even replace conventional controls like buttons and sliders.

The biggest challenge for such a system is to distinguish intended hand gesture commands from random movements, so as to avoid triggering unintended commands.

The technology behind this innovation is 3D imaging, computer vision, and machine learning. This talk will give an introduction to the technical concepts and applications of 3D hand gesture control in cars based on Time-of-Flight cameras.

Vita:
Alexander Barth received his BSc and MSc in Computer Science from the Bonn-Rhein-Sieg University of Applied Sciences and his PhD in Engineering Science from the University of Bonn in 2010. After four years with Daimler Research and Development in Sindelfingen, Germany, and three years with Mercedes-Benz Research and Development North America in Silicon Valley, where he worked on stereo-vision-based driver assistance systems and automated driving, Alexander Barth joined the automotive supplier Delphi in Wuppertal as Technical Manager for Vision Engineering. His current R&D work focuses on 3D hand gesture control systems for in-vehicle applications.

2016/09/15