by Anton Shilov
07/05/2013 | 01:50 PM
Microsoft Research has developed a stereoscopic 3D monitor with haptic technology that simulates the sense of touch through tactile feedback mechanisms. Should the prototype device become a commercial product, it could transform personal computers, the way people interact with digital content and with each other, as well as many industries.
But how can a user touch and feel objects inside the virtual world? Can a flat touchscreen convey depth, weight, movement, and shape? Yes, say scientists in the natural interaction research group at Microsoft Research Redmond. Mike Sinclair, Michel Pahud, and Hrvoje Benko mounted a multitouch, stereo-vision 3D monitor on a robot arm to study how the kinesthetic haptic sense, which relates to motion rather than tactile touch, can augment touchscreen interactions.
The result is Actuated 3D Display with haptic feedback, a project that features a haptic device that provides 3D physical simulation with force feedback. The system consists of a touchscreen with a robot arm, engineered for instant, sensitive responsiveness, smooth forward and backward movement, and applications that support multitouch screen interactions, force sensing, 3D visualizations, and depth movement. By moving a finger on the screen, the user can interact with on-screen 3D objects and experience different force responses that correspond to the physical simulation.
Objects that can be touched and manipulated with the Actuated 3D Display.
“I had been interested in the notion of putting a robot behind something you could touch. Originally, I had wanted a robot arm with many degrees of freedom. But complexity, costs, and safety issues narrowed down the options to one dimension of movement. At that point, I was sure that others must have already looked into this scenario, but after looking at the literature, it turned out no one had done this,” said Mr. Sinclair.
It also turned out that being limited to a robot armature with one dimension of movement – the Z-axis of the applications – provided valuable insights into how much or how little data humans need to detect the shape and type of object being touched.
“Contour detection was a major component of the project. The question was, by using your normal haptic sense, could you actually identify the type of object you were touching, even though it’s being presented by a fairly crude device consisting of a 2D touch screen and a robot arm that moves only in one dimension, forward and back along a track,” said Mr. Pahud.
To determine whether the device could simulate contours convincingly, an application presents the user with two rigid shapes of different depths, a cup and a ball. By changing the depth of the screen according to the user’s touch input, the team was able to simulate the surface contour of the 3D object. In contrast to the force-feedback behavior, this mode of operation can be thought of as setting the screen position with infinite resistance so that the user “feels” the contour of the 3D object by tracing a finger along its surface.
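The contour mode described above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual code: it assumes a normalized touch coordinate system and a single ball-shaped object, and commands the screen's Z position from the virtual surface depth under the finger.

```python
# Hypothetical sketch of contour mode: the robot arm holds the screen at the
# virtual surface's depth under the finger, so tracing a finger across the
# flat panel "feels" like tracing the 3D object's shape.

def sphere_depth(x, y, cx=0.5, cy=0.5, r=0.3, base=0.0):
    """Depth of a ball's surface at touch point (x, y), in arm units.

    Touch points outside the ball's silhouette fall back to the base plane.
    All coordinates and the radius are assumed, normalized values.
    """
    dx, dy = x - cx, y - cy
    d2 = r * r - (dx * dx + dy * dy)
    if d2 <= 0:
        return base          # finger is off the ball: flat background
    return base + d2 ** 0.5  # surface bulges toward the user

def command_screen_position(finger_x, finger_y):
    # In contour mode the controller treats this as a rigid position
    # target ("infinite resistance"), not a force target.
    return sphere_depth(finger_x, finger_y)
```

Because the commanded position is a pure function of the touch point, the user can approach the object from any direction and feel a consistent surface.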
“Your finger is always aware of motion. As your finger pushes on the touchscreen and the senses merge with stereo vision, if we do the convergence correctly and update the visuals constantly so that they correspond to your finger’s depth perception, this is enough for your brain to accept the virtual world as real,” said Mr. Pahud.
Taking this experiment a step further, the team “blindfolded” subjects by making the screen blank. The goal was to see whether users could identify shapes by touch alone.
“They could not see anything, and the shapes were simple objects. We knew beforehand that complicated objects wouldn’t work, but some of the objects were reasonably sophisticated – a pyramid, a wedge, a cylinder. But I would say these results were the biggest and most pleasant surprise of the project,” claimed Mr. Sinclair.
The project proved that with just a low-bandwidth data channel – the finger – it is possible to model surface contours. The low bandwidth is simply a function of the fact that, with a finger, the user only touches one point at a time, yet with sufficient haptic feedback as the finger moves, there is enough information to identify shapes.
One feature of the Actuated 3D Display with haptic feedback project that users might not notice, but which was essential to its function, was the implementation of a constant idle force.
“It is a very lightweight force that pushes back to follow the finger and maintain constant contact. At first, it feels as though you are touching a hard wall that’s easy to push, but you get used to it very quickly because it supplies only a few ounces of force against the finger. Since touchscreen interactions require the user’s finger to remain in contact with the surface, the main challenge of the idle mode is to ensure that the screen remains in contact with the fingertip regardless of the direction the fingertip is moving, either away from or toward the user,” explained Mr. Sinclair.
Thanks to this small idle force, the screen can follow the user’s finger in depth excursions, both positive and negative, until a haptic force beyond the idle force is commanded, such as when touching and interacting with an object.
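The idle mode amounts to a small constant bias force that the finger's own pressure balances out. A minimal sketch, assuming a screen-mounted force sensor and a bias of roughly one newton (the "few ounces" in the quote; the exact value is an assumption):

```python
# Hypothetical sketch of idle mode: a small constant bias force presses the
# screen lightly against the fingertip, so the panel follows the finger
# whether it moves toward or away from the user.

IDLE_FORCE_N = 1.0  # "a few ounces" of force, here assumed to be ~1 N

def idle_mode_force(contact_force_n):
    """Net force command (newtons) sent to the arm while no object is touched.

    contact_force_n: force the fingertip currently exerts on the screen,
    as read by the screen's force sensor.
    """
    # Push gently toward the user until the finger's pressure balances the
    # bias; if the finger retreats, the net force is positive and the
    # screen advances to keep contact.
    return IDLE_FORCE_N - contact_force_n
```

When the finger presses harder than the bias, the net command goes negative and the screen yields, which is why the "wall" feels easy to push.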
The team also implemented four additional command modes: force, velocity, position, and detent, where additional forces are added to the idle force depending on the position of the finger and the application requirements. Detent mode, for example, specifies a fixed-position command to the controller, which makes the screen remain exactly at a desired position, effectively canceling the idle force. This adds a brief resistance that creates a haptic signal for the user, analogous to the sensory click of a radio dial when turned off. The detent mode proved extremely useful for exploring volumetric data.
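The four modes can be pictured as different ways of computing the force sent to the arm's controller. The sketch below is an assumption about how such a dispatch might look; the mode names come from the article, but the gains, signatures, and control law are illustrative only:

```python
# Hypothetical sketch of the four command modes layered on the idle force.
# Gains and the idle-force value are assumed, not published figures.

def haptic_command(mode, target, finger_depth, finger_velocity,
                   idle_force=1.0, kp=50.0, kv=5.0):
    """Return a force command (newtons) for the arm controller."""
    if mode == "force":
        # Extra force on top of the idle bias, e.g. simulating stiffness.
        return idle_force + target
    if mode == "velocity":
        # Servo the screen's depth velocity toward a target rate.
        return idle_force + kv * (target - finger_velocity)
    if mode in ("position", "detent"):
        # A stiff position servo: holds the screen at the commanded depth,
        # effectively canceling the idle force. In detent mode this produces
        # the brief "click" of resistance as the finger crosses the stop.
        return kp * (target - finger_depth)
    return idle_force  # fall back to plain idle behavior
```

In position/detent mode the command is zero when the screen sits exactly at the target, which is precisely the "fixed-position" behavior the article describes.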
From the first discussion about this work, Pahud envisioned a 3D image of a brain, built from volumetric data.
“I could see an image of the front of a brain and pushing a finger through the layers of the brain to travel through the data. I could imagine receiving haptic feedback when you encountered an anomaly, such as a tumor, because we can change the haptic response based on what you touch. You could have different responses for when you touch soft tissue versus hard tissue, which makes for a very rich experience,” he said.
In contrast to other project applications that apply to 3D scenes, the volumetric-data-exploration application shows how movement and haptics can enhance interactions with 2D data. Mr. Pahud implemented a volumetric medical-image browser that shows the MRI-scanned data of a human brain. By gently pushing on the screen, the user can explore the data using touch and view different image slices of the brain. The brain browser is not just for exploring data through touch. The prototype is meant to demonstrate how touch interactions can be deployed for meaningful work.
Brain Explorer. The interface provides a method to mark and save depth investigations of interest.
To facilitate search and retrieval of annotated slices, the project attaches a haptic detent to each annotated slice. Thus, when the user’s finger moves past one, either coming or going, the detent provides tactile resistance to alert the user of an annotated slice. The user may continue on and push past the detent, but the detent has done its job by making it easier to find annotations through touch, unaided by visuals.
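One plausible way to realize these annotation detents, sketched here purely as an assumption about the mechanism (the slice units, stiffness, and function names are invented):

```python
# Hypothetical sketch of annotation detents in the brain browser: each
# annotated slice index gets a detent, and the controller adds a centering
# resistance whenever the finger's depth nears one, from either direction.

DETENT_HALF_WIDTH = 0.5   # width of the detent region, in slices (assumed)
DETENT_STIFFNESS = 40.0   # N per slice of offset from the detent (assumed)

def detent_force(finger_slice, annotated_slices):
    """Extra resistive force as the finger crosses an annotated slice."""
    for s in annotated_slices:
        offset = finger_slice - s
        if abs(offset) < DETENT_HALF_WIDTH:
            # Pull back toward the detent center; pushing through the
            # half-width releases the resistance again, so the user can
            # continue past after feeling the "click".
            return -DETENT_STIFFNESS * offset
    return 0.0
```

Because the force is symmetric around each annotated slice, the bump is felt whether the finger is pushing deeper into the stack or backing out of it.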
There is no doubt that the kinesthetic approach creates a productive new paradigm that augments touchscreen interactions. Mr. Pahud and Mr. Sinclair see many opportunities for this type of haptic device.
“There’s always 3D gaming, but also 3D modeling, education, and medical applications. We anticipate improving the experience with crisper, more detailed feedback, such as texture,” concluded Mr. Sinclair.