Here’s the problem:

To navigate most AR and VR interfaces, you have to move your entire arm.

This is fine when the feeling of the motion is important, like when swinging a sword in a video game.

But for mundane tasks like navigating the internet, this quickly becomes tedious and makes you stand out in public (in a bad way).

Here’s my solution:

Lazy Pointer lets you navigate AR/VR interfaces with small wrist movements from any hand orientation.

So you can, say, browse the internet with your hand just chilling on your lap.

For 3D interfaces, I came up with a control scheme where you:

  1. Hold a button

  2. Drag your controller to the desired position

  3. Release the button

This felt the most intuitive, since the user gets real-time feedback on the pointer’s direction, which makes precise control easier.

A pulling motion also just feels natural and satisfying, especially when we add a bit of haptic feedback to mimic tension.
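The hold-drag-release scheme above can be sketched as a simple clutch. This is a minimal illustration, not the project's actual code: the `gain` value, the yaw/pitch representation, and all names are my assumptions. The idea is that the pointer tracks the wrist 1:1 while the button is held (real-time feedback), and releasing locks in a new wrist-to-pointer mapping so small, amplified wrist movements steer from whatever pose the hand ends up in.

```python
# Hypothetical sketch of the hold-drag-release clutch, assuming a
# controller that reports wrist orientation as (yaw, pitch) in degrees.

class LazyPointer:
    def __init__(self, gain=3.0):
        self.gain = gain                  # amplifies small wrist motions
        self.pointer = (0.0, 0.0)         # current pointer direction
        self.anchor_wrist = (0.0, 0.0)    # wrist pose at last press/release
        self.anchor_pointer = (0.0, 0.0)  # pointer direction at that moment
        self.dragging = False

    def press(self, wrist):
        # 1. Hold a button: start dragging from the current pose.
        self.dragging = True
        self.anchor_wrist = wrist
        self.anchor_pointer = self.pointer

    def update(self, wrist):
        # 2. Drag: the pointer tracks the wrist in real time (1:1 while
        # dragging, amplified by `gain` otherwise).
        g = 1.0 if self.dragging else self.gain
        self.pointer = (
            self.anchor_pointer[0] + g * (wrist[0] - self.anchor_wrist[0]),
            self.anchor_pointer[1] + g * (wrist[1] - self.anchor_wrist[1]),
        )
        return self.pointer

    def release(self, wrist):
        # 3. Release: lock in the new mapping, so small wrist movements
        # now steer the pointer from this comfortable hand position.
        self.dragging = False
        self.anchor_wrist = wrist
        self.anchor_pointer = self.pointer
```

For example, dragging the pointer 30° to the right, releasing, and then nudging the wrist 2° moves the pointer a further 6° with `gain=3.0`, so a hand resting on your lap can still sweep the whole scene.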

Here’s the GitHub repo for the project.

For 2D interfaces, I created a physical prototype to match a ring or wristband form factor (using nothing but an IMU, a strap, and some tape).

The reason 2D is different is that we have screen edges to work with.

This means that, unlike in 3D, the pointer (or cursor) won’t escape to Narnia while we’re moving our hand to a new position to control from. So we don’t need a re-center button of any sort.

After this hand reposition, the code automatically remaps the control axes to match how a laser pointer would behave if it were aimed at the screen from the new hand orientation.
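A rough sketch of that 2D mapping, under my own assumptions (the IMU reports wrist roll plus per-frame yaw/pitch deltas; the gain, screen size, and roll-based remap are illustrative, not the project's exact code): the wrist delta is rotated by the current roll so the cursor moves the way a laser pointer would from that hand orientation, and clamping to the screen edges is what makes an explicit re-center button unnecessary.

```python
# Hypothetical 2D cursor mapping for an IMU ring/wristband.
import math

W, H = 1920, 1080  # screen size in pixels (illustrative)

def move_cursor(cursor, d_yaw, d_pitch, roll_deg, gain=40.0):
    """Map a small wrist rotation to a cursor move.

    (d_yaw, d_pitch) is this frame's wrist rotation in degrees; the
    delta is rotated by the wrist's current roll, scaled by `gain`,
    and the result is clamped to the screen edges so the cursor never
    escapes while the hand repositions.
    """
    r = math.radians(roll_deg)
    dx = gain * (d_yaw * math.cos(r) - d_pitch * math.sin(r))
    dy = gain * (d_yaw * math.sin(r) + d_pitch * math.cos(r))
    x = min(max(cursor[0] + dx, 0), W - 1)
    y = min(max(cursor[1] + dy, 0), H - 1)
    return (x, y)
```

With the hand flat (roll 0°), a yaw motion moves the cursor horizontally; rotate the hand 90° and that same yaw motion now moves it vertically, matching what a physical laser pointer would do.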

Here are the objectives I was working with:

High Control

  • Users need to be able to interact with the right UI element every single time

Low Effort

  • Users should be able to control the UI from any hand position

  • There should be as few calibration steps as possible

  • Control motions should be as small and quick as possible

  • Control motions should move as little weight as possible 

Low Cost

  • Don’t use anything that isn’t found in existing commercial devices

  • Don’t train any fancy machine learning models

Low Physicality is ACCEPTABLE

  • In many AR/VR tasks, we want the user to actually feel like they’re doing the action themselves.

  • But when tasks are highly repetitive and mundane, convenience trumps all else. This solution is for those tasks.

Here are some general learnings:

Real-time feedback is vital in 3D

  • In 3D, there are a lot more ways to fail at basic interactions. That makes everything clunky in the absence of real-time feedback.

  • For example, I first tried having the pointer direction be set to wherever the user was looking.

  • This made calibration faster, but it didn’t provide any real-time feedback on the pointer’s direction, making precise targeting harder.

  • The lack of feedback between the uncalibrated and calibrated states also made the interaction feel jarring and unsatisfying.

Efficient motion, sufficient illusion

  • From past projects, I’ve learned that VR game design is all about designing motions that are efficient but still create a sufficient illusion of doing something.

  • Turns out this also applies to common VR interactions, even though I was explicitly trying to prioritize convenience over physicality.

  • Integrating some small motion (as opposed to just a button press) seems to provide more real-time feedback and makes even small interactions more satisfying.

New ideas often come from just having different objectives

  • Nothing about this project was groundbreaking or difficult. Anyone could come up with the above solutions.

  • It was simply the result of setting an objective that others seemingly hadn’t set (“must be able to control with tiny wrist movements”).
