Navrtar - the technology behind the smoke and mirrors

Here at Navrtar we provide a virtual reality experience in London to remember. Most of our guests leave pondering their high score or the black lighting in the bar, as well as mentally booking their next visit! But for those who enjoy a deep dive, let's look at the technologies that make Navrtar possible.

In a word, the secret sauce is Vicon, a British, Academy Award™-winning motion capture software developer and hardware supplier with over thirty years in the industry. Vicon owns high-end motion capture algorithms and has developed solutions that bring the complexity of running a VR arena as close to plug-and-play as possible. This allows Navrtar to reduce operating costs and offer competitive prices for our experiences.

What is motion capture?

Motion capture (often referred to as mocap) is the recording of an object's movement through 3D space over a period of time. One example would be the movement of a person while walking: their gait. If a detailed gait can be captured and modelled onto a virtual 3D avatar, the people who know you well should be able to recognise your avatar from its walk, and you theirs. This can be a powerfully immersive part of our virtual reality experience in London.
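
To make that concrete, here is a minimal sketch of how mocap data is commonly represented: a 3D position per tracked joint per frame. The joint names, frame rate and data below are all made up for illustration; this is not Vicon's format.

```python
import numpy as np

# A made-up representation for illustration only -- not Vicon's format.
# Mocap data is commonly a 3D position per tracked joint per frame.
JOINTS = ["head", "left_ankle", "right_ankle"]
FRAME_RATE_HZ = 120  # illustrative capture rate

# Two seconds of fake data standing in for a real capture:
# shape (n_frames, n_joints, 3) -> x, y, z in metres.
rng = np.random.default_rng(0)
frames = rng.normal(size=(2 * FRAME_RATE_HZ, len(JOINTS), 3))

def stride_feature(frames: np.ndarray, joint: str) -> np.ndarray:
    """Per-frame displacement of one joint -- a crude gait feature."""
    idx = JOINTS.index(joint)
    path = frames[:, idx, :]                       # (n_frames, 3)
    return np.linalg.norm(np.diff(path, axis=0), axis=1)

# Statistics of such features are one simple way an avatar could
# inherit a recognisable walk.
print(stride_feature(frames, "left_ankle").mean())
```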

What is optical tracking?

Optical tracking is the solution we use to follow an object around the game space: a whole family of algorithms from computer vision. To minimise ambiguity, we have adopted an outside-in tracking system; pulsing LED markers are worn at strategic locations on the body that have been pre-associated with a generic human model.
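
As a rough illustration of that "pre-association" step, marker labelling can be thought of as matching each detected point to the nearest expected marker on a generic model. The marker names, model positions and greedy matching below are ours, not Vicon's; real systems also use each LED's pulse pattern and kinematic constraints to resolve ambiguity.

```python
import numpy as np

# Illustration only: the marker names, model positions and greedy matching
# are ours, not Vicon's.
MODEL_MARKERS = {                 # expected positions on a generic model (m)
    "head":       np.array([0.0, 0.0, 1.75]),
    "left_hand":  np.array([-0.9, 0.0, 1.40]),
    "right_hand": np.array([0.9, 0.0, 1.40]),
}

def label_detections(detections: np.ndarray) -> dict:
    """Greedily assign each model marker its nearest detected point."""
    labels = {}
    remaining = list(range(len(detections)))
    for name, expected in MODEL_MARKERS.items():
        dists = [np.linalg.norm(detections[i] - expected) for i in remaining]
        labels[name] = detections[remaining.pop(int(np.argmin(dists)))]
    return labels

detections = np.array([[0.88, 0.05, 1.38],    # near the right hand
                       [0.02, -0.01, 1.74],   # near the head
                       [-0.91, 0.00, 1.41]])  # near the left hand
print(label_detections(detections))
```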

Software that brings it all together: Evoke

Ok, so we have the coordinates of some flashing LEDs relative to a 3D game space. So where are the 3D models, the game engine and the physics engine, and how do we get all the numbers crunched with the lowest latency? Not to mention bug fixing over time? How do we avoid needing a team of technicians to keep the VR suite running?

This is the role of Evoke™ by Vicon. Evoke uses the network of ceiling cameras with real-time auto-heal, constantly recalibrating those cameras against the likely locations of a tracked target to maintain an accurate interpretation of the motion unfolding before its many eyes. It is not the case that each tracked LED cluster's location is known at all times; occlusion by other players, or by a player's own body parts, is to be expected during any dynamic activity. It is Evoke's capacity to interpret what is going on in the 3D space, rather than strictly measure the game area, that makes it robust, fast and truly real-time.
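
Evoke's algorithms are proprietary, but a toy constant-velocity predictor conveys the flavour of interpreting rather than strictly measuring: when a marker disappears for a few frames, its track is carried forward from its recent motion. Everything below is an illustrative assumption, not Vicon's method.

```python
import numpy as np

# Toy predictor for illustration only -- not Vicon's method. It shows the
# idea of "interpreting" a briefly occluded marker rather than demanding
# a measurement on every frame.
def track(measurements: list) -> list:
    """Return a position for every frame, extrapolating through gaps."""
    positions = []
    velocity = np.zeros(3)
    for z in measurements:
        if z is not None:                       # marker seen this frame
            if positions:
                velocity = z - positions[-1]    # refresh velocity estimate
            positions.append(z)
        else:                                   # occluded: predict forward
            positions.append(positions[-1] + velocity)
    return positions

# Frames 3 and 4 are occluded, yet the track continues smoothly.
measured = [np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 1.0]),
            None, None, np.array([0.4, 0.0, 1.0])]
print(track(measured))
```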

The second half of Evoke is its simple integration with both Unreal Engine™ and Unity™; this was a primary function of Evoke and has been well maintained as the game engines are updated. It is in the engines that matters of gameplay, rendering and physics are handled, and it is there that both we and Vicon pass the baton to the game designers of our virtual reality experience in London.
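
To show where that baton-pass sits, here is a hypothetical sketch of the hand-off. Vicon's real plugins target Unreal Engine and Unity, and none of the names below come from them: tracking delivers per-frame poses, and everything downstream of applying them belongs to the engine and its designers.

```python
from dataclasses import dataclass

# A hypothetical hand-off, not Vicon's plugin API: tracking supplies poses,
# and everything after `apply` (animation blending, collisions, scoring)
# belongs to the game engine.
@dataclass
class Pose:
    joint: str
    position: tuple            # x, y, z in engine units

class Avatar:
    """Stand-in for an engine-side character that game logic controls."""
    def apply(self, pose: Pose) -> None:
        print(f"move {pose.joint} to {pose.position}")

def frame_update(avatar: Avatar, poses: list) -> None:
    for pose in poses:         # tracking's responsibility ends here
        avatar.apply(pose)

frame_update(Avatar(), [Pose("head", (0.0, 1.7, 0.0))])
```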