Updated on January 23, 2017
Markerless Augmented Reality has arrived
With the recent emergence of better cameras and more accurate sensors in soon-to-be mainstream devices, Augmented Reality is transitioning from image- or QR-code-based activations to markerless experiences. Current implementations of markerless AR rely on device sensors, often using a positioning technique known as ‘dead reckoning’, to accurately map the real-world environment, such as the locations of walls and points of intersection, allowing users to place virtual objects into a real context without the camera needing to read an image.
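To make the dead-reckoning idea concrete, here is a toy one-dimensional sketch (illustrative names, not a production sensor-fusion pipeline): position is estimated by integrating accelerometer samples twice, once into velocity and again into position.

```python
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Toy 1-D dead reckoning: integrate acceleration samples (m/s^2)
    taken every dt seconds into velocity, then position. Real AR systems
    fuse this with gyroscope and camera data to keep drift in check."""
    v, x = v0, x0
    for a in accels:
        v += a * dt  # acceleration -> velocity
        x += v * dt  # velocity -> position
    return x

# Constant 1 m/s^2 for one second (100 samples at 10 ms). The ideal
# closed form gives 0.5 * a * t^2 = 0.5 m; the discrete sum lands
# slightly above it because velocity is updated before position.
print(dead_reckon([1.0] * 100, 0.01))
```

Because small sensor errors accumulate in the double integration, practical systems periodically correct the estimate against visual features, which is exactly why the camera matters as much as the IMU.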
Markerless AR basics
“Markerless AR” is a term used to denote an Augmented Reality application that does not need any pre-knowledge of a user’s environment to overlay 3D content into a scene and anchor it to a fixed point in space. Until recently, most AR fell under the category of “marker-based AR,” which required the user to place a “tracker” (an image encoded with information that software translates to render a 3D object with correct spatial orientation in the scene) in order to achieve the desired effect. Markerless AR solutions have included new hardware packages like Google Tango, though we prefer a proprietary solution developed in-house that produces the same effect without the need for specialized equipment.
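One reason a printed tracker is so useful is that its physical size is known in advance. A minimal sketch of that idea, under a pinhole camera model with an assumed (calibration-derived) focal length in pixels:

```python
def marker_distance(real_width_m, pixel_width, focal_px):
    """Pinhole camera model: apparent size shrinks with distance,
    so distance = focal_px * real_width / pixel_width. The marker's
    known physical width is what gives marker-based AR its absolute
    scale; markerless systems must infer scale from sensors instead."""
    return focal_px * real_width_m / pixel_width

# A 0.10 m wide marker that appears 80 px wide to a camera with an
# 800 px focal length sits 1.0 m away.
print(marker_distance(0.10, 80, 800))
```

Full marker tracking also recovers orientation from the marker's corner positions, but the distance calculation above is the core of how one flat image can anchor a whole 3D scene.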
Project Tango and Smart Terrain
Google’s “Project Tango” platform uses a depth-sensing camera to scan the environment in real time, using the data to build a virtual environment around the user and occlude real objects, like furniture, with a virtual layer. It has been adopted by Lenovo (the Phab 2 Pro) and Asus (the recently unveiled ZenFone AR).
Qualcomm has also worked on a markerless AR SDK feature. Called “Smart Terrain,” it works much like Project Tango in that it uses the camera to scan and calculate the size of objects in a scene, then allows virtual content to be accurately overlaid on them.
This shift in AR user experience opens up endless possibilities for both commercial and entertainment applications. Combining this type of AR with a device like the Microsoft HoloLens creates even more opportunities for incredibly immersive experiences. Here are three key areas that will see significant impact from the new level of experiential immersion that markerless tracking enables.
1) Gaming and entertainment
Imagine walking into a room, scanning it with your mobile device, and then using that information to spawn an entire game world around you. A wearable device such as the HoloLens could be used to view the world, creating one of the most immersive gaming experiences imaginable. Without the need for a television, VR headset, or even a standard method of control, markerless AR enables gaming anywhere by simply transforming a real environment into a game world. This is just the beginning of how this kind of technology could change gaming.
2) Commercial product visualization
Have you ever wanted to see how a new appliance might look in your kitchen? Or wondered whether you even had room for it? Markerless AR makes it possible to grab a mobile device, scan a real-world environment such as a kitchen, and virtually place a product there to see how it would look and fit. It is now possible to “furnish” an entire room virtually, trading out styles of cabinets, flooring, and appliances to see how they work before buying. This type of markerless AR gives consumers the freedom to shop in their own real environments without the restrictions of printable image targets.
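Under the hood, the “tap to place a product on the floor” step typically reduces to intersecting a ray from the camera with a plane the scanner has already detected. A minimal sketch in pure Python (illustrative names; real SDKs expose this as a hit test):

```python
def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Return the 3-D point where a camera ray meets a detected plane,
    or None if the ray is parallel to the plane or the plane lies
    behind the camera. Vectors are plain (x, y, z) tuples."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray runs along the plane
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None  # intersection is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera held 1.5 m above a floor plane (y = 0), looking straight down:
hit = ray_plane_hit((0.0, 1.5, 0.0), (0.0, -1.0, 0.0),
                    (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(hit)  # the virtual appliance would be anchored at this point
```

Once the hit point is known, the virtual product is rendered at that position and kept anchored there as the device's tracked pose changes.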
3) Advertising
Imagine walking down the street, holding up a device, and seeing an entire virtual layer themed after a movie, pro sports team, or video game. The possibilities for this kind of advertising will grow even greater as wearables like the HoloLens become more common. Users could freely cycle between multiple virtual layers and experiences in their real-world environment.
The combination of markerless AR with broad adoption of wearables will make AR an everyday technology. It gives users the freedom to move around and engage with an experience without fear of losing the virtual layer the moment an image target falls out of view.