Markerless Augmented Reality (AR) is a breakthrough technology that promises to bring AR to the masses as never before. So what is Markerless AR, and how are developers harnessing its power? We talked to Marxent Software Engineer Ken Moser, who earned his PhD in Computer Science from Mississippi State University. His concentration was in Augmented Reality, and his dissertation focused on the problems of, and solutions to, calibrating transparent displays. Ken spent two years teaching at the university level, and two summers, in 2014 and 2015, conducting projects at the Nara Institute of Science and Technology in Nara, Japan, under the supervision of Christian Sandor, a leading researcher in the field of Augmented Reality.
Ken Moser: In the most general sense, the phrase “Markerless AR” is used to denote those Augmented Reality (AR) systems and/or applications that do not require any pre-knowledge of the user’s environment in order to place virtual content within a 3D scene. This is in contrast to the more prevalent “Marker-Based AR,” which requires the user to have a pre-defined pattern, object, or marker placed in the real world that is then used by the tracking system to properly orient and register virtual content into the space. Of course, just as there is a wide distinction between Virtual Reality (VR) and AR, as described by Milgram and Kishino’s “Reality-Virtuality Continuum,” there is also a wide distinction between the various modalities of Markerless AR.
In its most primitive form, Markerless AR is achieved by superimposing virtual objects onto a static, pre-captured 2D image. This is, of course, not the state of the art, and actually straddles the line between AR and photo editing. That’s not to say that this method is completely without merit. It’s a straightforward, easy-to-implement solution for applications that want to offer “off-line” AR instead of live experiences. PlanAR’s PlanarView Visualization tool is a good example.
On the complete opposite side of the Markerless spectrum are systems using RGB-D SLAM and/or sensor-fusion approaches, most notably the Microsoft HoloLens and Google Tango devices. These systems integrate information from standard red-green-blue (RGB) cameras with state-of-the-art infrared time-of-flight cameras to construct a 3D map of the user’s surroundings “on-line,” while the application is in use. This is the key component of the SLAM (Simultaneous Localization And Mapping) tracking paradigm, and is what enables applications running on these devices to concretely place virtual content within the space.
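To make the mapping step concrete: a depth camera gives a metric distance at each pixel, and back-projecting those pixels through the pinhole camera model yields the 3D points that seed the map. The sketch below illustrates this in plain Python; the intrinsics (fx, fy, cx, cy) and the tiny depth image are illustrative assumptions, not values from any actual HoloLens or Tango pipeline.

```python
def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of metres) into camera-space 3D
    points via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:                 # 0 = no depth reading at this pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 "depth image" with assumed intrinsics; one pixel is invalid.
depth = [[1.0, 2.0],
         [0.0, 4.0]]
cloud = backproject(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(len(cloud))  # 3 valid points
```

A real RGB-D SLAM system would then fuse each new frame's points into the growing map while estimating the camera pose against it; this sketch covers only the geometry of a single frame.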
No. SLAM is also possible without the use of a depth camera, by using one or more calibrated RGB cameras instead. The “map” in these systems consists of feature points extracted, triangulated, and tracked across the frames of the camera feed. These RGB SLAM systems can suffer from scale and drift errors without rigid calibration, since the exact distance to features is estimated rather than directly measured by a depth camera. The localization aspect of SLAM, though, is the same in both cases and simply refers to the retrieval of the 6 Degree Of Freedom (6DOF) pose of the user with respect to the generated map’s coordinate system.
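The triangulation step described above can be sketched with the classic midpoint method: given the same feature observed along viewing rays from two calibrated camera positions, the estimated 3D point is the midpoint between the closest points of the two rays. The camera centres and ray directions below are made-up values chosen so the rays meet exactly at (0, 0, 5); this is a geometric illustration, not Marxent's tracking code.

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def along(p, t, d): return tuple(pi + t * di for pi, di in zip(p, d))

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point halfway between the closest
    points of two viewing rays (each given as camera centre + direction)."""
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # approaches 0 for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1, p2 = along(c1, t1, d1), along(c2, t2, d2)
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two hypothetical camera centres observing the same feature at (0, 0, 5).
n = 26 ** 0.5
pt = triangulate_midpoint((0, 0, 0), (0, 0, 1),
                          (1, 0, 0), (-1 / n, 0, 5 / n))
print(pt)  # close to (0.0, 0.0, 5.0)
```

With noisy feature detections the rays never quite intersect, which is exactly where the scale and drift errors mentioned above creep in; depth cameras sidestep this by measuring Z directly.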
Unfortunately, the current culture around AR often mislabels any on-line Markerless solution as SLAM or SLAM-like. Unless the tracking is actually building or updating a map of the environment, while also simultaneously localizing the user within that map, it’s not using SLAM. Of course, without a “Map” you’re only left with “SLA,” which doesn’t have quite the same ring to it.
Untethering users from a marker actually comes with numerous advantages. They are, of course, able to initialize the application anywhere, making it far easier to take the experience with them and share it with others at work or on the go. The average range of motion for users is also greatly increased with Markerless AR.
Marxent’s Relative Tracking, for example, allows users to walk around any open space — say 3 to 4 meters on average for most indoor locations — which is far beyond the range of any extended tracking on the market today. Even this will pale in comparison to emerging SLAM technologies that will allow AR experiences to span whole cities. The massive size of the experience will inherently drive multi-user collaborative interactions, where the precise location and orientation of every user is known in the real world despite them all being miles apart. This is something marker-dependent tracking could never achieve. As an added bonus, removing the need for printed markers also reduces the ecological cost of the paper and ink waste generated by marker-based experiences, which is rarely discussed.
Being able to track a person or object’s motion within a space has enormous potential in basically every market domain. We, naturally, are utilizing tracking for AR consumer experiences, which includes placing virtual products in a user’s space. User-centric Markerless tracking is also applicable to VR headsets, where it would facilitate unbounded virtual environments able to adapt to the local space and beyond. Markerless tracking is also essential to autonomous vehicles and robotics. I can immediately envision upgraded motorized wheelchairs with integrated Markerless tracking, allowing people with MS, ALS, or severe paralysis to navigate their chairs with built-in obstacle avoidance and safe path finding by simply looking at a location on a HUD, or perhaps at a place in front of them using integrated eye tracking.
But wait, there’s more! Click here to read part 2 of our conversation about Markerless AR, including information on the development of Markerless systems, and Ken’s observations on what sets the Marxent Markerless AR solution apart from the rest of the industry.