Gestures and Sensors for Manipulating 3D Data Visualizations

Abstract

This research investigates methods for interacting with 3D data visualizations using gestures. It addresses the limitations of the controls ordinarily available to computer users, which are designed primarily for 2D interaction. Gesture interfaces offer promise by tracking users’ hands and bodies in 3D space, recognizing their movements, and manipulating the data in a natural, fluid way. We implement several methods for exploring a 3D data rendering, written as C# scripts in the Unity 3D game engine. We experiment with commercially available depth-imaging sensors, including the Leap Motion controller, Kinect for Windows v2, ASUS Xtion PRO LIVE, and Intel RealSense, and we integrate the Oculus Rift DK2 headset to provide an immersive virtual reality experience. We chose the Unity 3D game engine for its cross-platform portability, low-overhead scripting, and built-in rendering and physics engines. We test each sensor’s strengths and weaknesses by interacting with the data through several gesture-control methods: controlling a vehicle or avatar, controlling the virtual camera, and grasping and manipulating the data as a virtual object. The results are presented as a comparison of sensor performance for each method of manipulating the data visualizations, and each method is itself evaluated for user experience and consistency.
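To illustrate the third method, the sketch below shows how a grasp-and-manipulate interaction might be scripted as a Unity C# component. The IHandTracker interface and its members are hypothetical placeholders standing in for whichever sensor SDK is attached (Leap Motion, Kinect, Xtion, or RealSense); only the Unity engine calls are real. This is a minimal sketch of the general approach, not the implementation used in the study.

```csharp
using UnityEngine;

// Hypothetical sensor abstraction: each depth sensor would supply its own
// implementation. These member names are illustrative placeholders, not any
// vendor SDK's actual API.
public interface IHandTracker
{
    bool IsTracking { get; }         // a hand is currently visible to the sensor
    bool IsGrabbing { get; }         // the hand is closed into a grab pose
    Vector3 PalmPosition { get; }    // palm position mapped into world space
    Quaternion PalmRotation { get; } // palm orientation mapped into world space
}

// Attach to the visualization's root object to let a tracked hand grab it
// and move/rotate it as a virtual object.
public class GraspManipulator : MonoBehaviour
{
    public MonoBehaviour trackerSource; // assign any component implementing IHandTracker

    private IHandTracker tracker;
    private bool held;
    private Vector3 lastPalmPos;
    private Quaternion lastPalmRot;

    void Start()
    {
        tracker = trackerSource as IHandTracker;
    }

    void Update()
    {
        if (tracker == null || !tracker.IsTracking || !tracker.IsGrabbing)
        {
            held = false; // an open hand (or lost tracking) releases the object
            return;
        }

        Vector3 palmPos = tracker.PalmPosition;
        Quaternion palmRot = tracker.PalmRotation;

        if (held)
        {
            // Apply the frame-to-frame hand motion directly to the object,
            // so it follows the grabbing hand in position and orientation.
            transform.position += palmPos - lastPalmPos;
            transform.rotation = palmRot * Quaternion.Inverse(lastPalmRot) * transform.rotation;
        }

        held = true;
        lastPalmPos = palmPos;
        lastPalmRot = palmRot;
    }
}
```

Wrapping every sensor behind one interface like this would let the same manipulation script run unchanged across devices, which matches the study's goal of comparing sensor performance on identical interaction methods.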
