
Scalable Gesture Controls for AR/VR

Gestures are critical for VR games:

VR is the only platform where YOU get to cast the spell or slice the enemy, rather than pressing a button and watching a character do it.

That visceral feeling is one of the main reasons anyone would ever put on a big, inconvenient headset, when they could just play a mobile or console game with much less effort.

But it’s very hard to implement gestures in a way that feels good for players:

Accidental triggers and random failures

  • While playing a VR game, you’re constantly moving your hands around, and some of those incidental movements will look a lot (or even exactly) like one of the game’s gestures, so you’ll end up triggering gestures you never intended.

  • Plus, the more gestures a game has, the more likely it is that the hand motion for one gesture will look a lot like the hand motion for another (or several others). So whenever your motion is slightly off, you can easily trigger the wrong gesture instead.

  • On top of that, anatomical differences mean everyone has their own way of performing a given motion, so your natural version of it will likely differ slightly from what the system is looking for, which means even more opportunities to trigger the wrong gesture.

  • Now, constantly triggering random gestures makes a game genuinely unplayable, so developers inevitably end up making gestures require more precision. And the more competing gestures there are, the more precision they’ll ask for.

  • But this extra precision means that it’ll be easy to fail gestures and very difficult to achieve any degree of consistency, which feels clunky as hell.

  • As a result, developers have to choose between having lots of gestures and having a game that isn’t a clunky piece of crap where abilities only work half the time, a trade-off (sketched below) that rules out some of the coolest VR game ideas.
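
To make that trade-off concrete, here’s a minimal sketch of the kind of thresholded nearest-template matcher this boils down to. It’s purely illustrative (Python, a naive distance metric, made-up names and numbers), not how any particular engine actually does it:

    import numpy as np

    def match_gesture(motion, templates, max_distance):
        """Return the closest template's name, or None if nothing is close enough.

        motion:       (T, 3) array of tracked hand positions
        templates:    dict mapping gesture name -> (T, 3) reference motion
        max_distance: acceptance threshold (lower = more precision demanded)
        """
        best_name, best_dist = None, float("inf")
        for name, ref in templates.items():
            # Naive frame-by-frame distance; assumes both motions are resampled
            # to the same length T. Real recognizers are fancier, but the shape
            # of the problem is the same.
            dist = np.linalg.norm(motion - ref, axis=1).mean()
            if dist < best_dist:
                best_name, best_dist = name, dist
        # The one dial the developer has:
        #   raise max_distance -> stray hand waving starts triggering gestures,
        #                         and similar gestures get accepted for each other
        #   lower max_distance -> honest attempts start failing
        return best_name if best_dist <= max_distance else None

Every gesture added to templates shrinks the gap between the best and second-best match for any given motion, so the only safe move is to tighten max_distance, which is exactly the precision/consistency trade-off described above.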

Communicating errors

  • But requiring precision can work if the player is given feedback on exactly what they did wrong, so that their failures don’t feel random and they can learn the gesture well enough to repeat it consistently.

  • However, this isn’t possible in most gesture control systems, since they tend to use complicated algorithms that just spit out an opaque similarity score between the tracked motion and a recorded gesture (see the sketch after this list).

  • Put simply, this means the developers themselves have no idea why you failed some gesture. All they can see is that your motion’s score was below the threshold they set. And if the developers don’t know why you failed, they can’t communicate it to you either.

  • But even if they knew the exact error, it’s still incredibly hard to communicate that you were, say, 3 degrees off along the y-axis 3.4 seconds into a motion (in a way that you’d actually be able to apply consistently to future attempts). If it were easy, a lot of tennis/golf/baseball/etc. coaches would lose their jobs overnight.

  • The net result is that your failures feel random rather than like something you actually did wrong, which just adds to the clunkiness.
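
For a sense of why the score is so uninformative, here’s a bare-bones sketch of a dynamic-time-warping style comparison, one common family of algorithms for matching motions. Again, this is an illustrative assumption (Python, arbitrary example threshold), not any specific product’s recognizer:

    import numpy as np

    def dtw_score(motion, template):
        """Collapse two motions (arrays of shape (T, 3)) into a single similarity score."""
        n, m = len(motion), len(template)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(motion[i - 1] - template[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        # Per-frame deviations exist on the way here (every `d` above), but the
        # caller only ever sees this one number, which is why "you failed" is
        # all the game can honestly tell the player.
        return cost[n, m] / (n + m)

    def attempt(motion, template, threshold=0.15):  # threshold is a made-up example value
        score = dtw_score(motion, template)
        return "success" if score <= threshold else "failure"  # no 'why' available

Keeping the per-frame deviations (or the full alignment path) instead of just the final number is one prerequisite for the kind of feedback described above: at least then the system knows where in the motion the attempt went wrong, even if explaining that to a player is still hard.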

