There’s an old saying that reality is what you make of it. That’s more true than ever for 3D artists and designers, who are now working in at least four different “realities”: Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and Diminished Reality (DR). For the moment, it’s VR and AR that are getting most of the press (AR in particular has enjoyed widespread coverage as of late thanks to the smash success of Pokémon Go), while MR and DR remain largely misunderstood.
Today, we’re focusing on Diminished Reality, which is perhaps the least well-known and least understood of the four realities. And who better to explain the concept and its applications than Marxent Software Engineer Ken Moser, PhD? Ken is a Doctoral candidate in Computer Science at Mississippi State University, concentrating in Augmented Reality. His PhD dissertation focuses on the problems of, and solutions for, calibrating transparent displays. Ken has also spent two-plus years teaching at the university level, and spent the summers of 2014 and 2015 conducting research projects at the Nara Institute of Science and Technology in Nara, Japan, under the supervision of Christian Sandor, a leading researcher in the field of Augmented Reality.
So yeah, Ken is an expert in the field. I peppered him with questions via email to get to the bottom of what DR is and how people are using it. Here’s our conversation:
QUESTION: What is Diminished Reality?
KEN MOSER, PhD: Diminished Reality, in the most general sense, is the direct opposite of Augmented Reality. In AR, the goal is to augment, or add to, the real world using virtual imagery, sounds, haptics, synthetic olfactory stimuli, and so on. DR is the process of removing, eliminating, or diminishing the amount of perceivable stimuli from the world. Technically, DR does not lie along the traditional Virtuality Continuum, since it does not explicitly mix reality types. However, DR can be used in conjunction with AR to provide unique visual experiences.
Is there more than one kind of DR?
Traditionally, there are two main categories of DR: Observational and In-painting. These simply describe the underlying technique used to produce the visual reduction. Observational DR utilizes pre-captured or existing images/video of a background scene. Then, when new physical items are incorporated into the space, the background images serve as a reference for recovering the background information obstructed by the new objects. In-painting, by contrast, attempts to paint over objects using texture and patch information drawn from the source image itself. In-painting is less accurate, but provides a more general approach for accomplishing DR when prior knowledge of a scene is not available.
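To make the in-painting idea concrete, here is a minimal sketch in Python/NumPy. It is a toy diffusion-based stand-in for production in-painting algorithms, not any particular library’s method: masked (unknown) pixels are filled by repeatedly averaging their neighbours until colour from the surrounding known region smoothly flows into the hole. All names here are illustrative.

```python
import numpy as np

def inpaint_diffusion(image, mask, iterations=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    image: float array of shape (H, W)
    mask:  bool array of shape (H, W), True where pixels are unknown
    Returns a copy of `image` with the masked region filled in.
    """
    out = image.copy()
    out[mask] = 0.0  # start the hole from a neutral value
    for _ in range(iterations):
        # Average of the four axis-aligned neighbours.
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[mask] = avg[mask]  # only the unknown pixels are updated
    return out

# Example: a flat grey image with a square "object" region masked out.
img = np.full((32, 32), 0.5)
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True
filled = inpaint_diffusion(img, mask)
# The hole converges back toward the surrounding grey value.
```

Real in-painting methods go further, propagating texture and edge structure rather than just smooth colour, which is why results on patterned backgrounds are harder to get right.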
What are some use cases for DR?
DR is well suited for image/video production where unwanted items or features of a scene need to be removed. In general, DR does not impose any constraint on real-time processing of a scene. This means that many of the traditional post-production methods used in film studios can be classified as DR. For example, wire harnesses and assemblies are often used in action sequences where the actors must fall or be hurled a significant distance through the air. In the final edit of the movie, these wires must not be visible to the audience. In-painting techniques are typically employed to remove these features within each frame.
Since these cinematic DR effects do not have to be performed live while the scene is being filmed, but are instead performed in post, the complexity and computational time of the algorithms used can be quite high, perhaps several minutes or more per frame. These same methods are not usable, though, when live video must be “diminished” in real time, since the processing time per frame must be reduced to only a few milliseconds. I’ve only ever seen examples of “real-time” diminished reality within the research community. Most showcase simple scenarios of removing objects from tables or countertops, or small items in outdoor spaces. One could imagine a number of innovative scenarios, however, where live DR, coupled with AR, could be advantageous.
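The observational approach is one reason live DR is feasible at all: once a pre-captured background is registered to the current camera view, diminishing an object is little more than a per-pixel copy, well within a real-time frame budget. A minimal sketch, assuming an already-aligned background image and an object mask (all names are illustrative; in practice the hard parts are the camera tracking and registration, which are omitted here):

```python
import numpy as np

def diminish_observational(live_frame, background, object_mask):
    """Observational DR: hide an object by pasting in pre-captured background.

    live_frame, background: (H, W, 3) float arrays of the same, registered view
    object_mask: bool (H, W), True where the unwanted object appears
    """
    out = live_frame.copy()
    out[object_mask] = background[object_mask]  # per-pixel copy over the object
    return out

# Toy frames: a grey background and a live frame containing a bright "object".
background = np.full((4, 4, 3), 0.2)
live = background.copy()
live[1:3, 1:3] = 1.0                       # the object to diminish
mask = np.all(live > 0.9, axis=2)          # naive detection: bright pixels
diminished = diminish_observational(live, background, mask)
# The object region is replaced by the stored background.
```

This also illustrates the observational method’s limitation: it only works where a registered background view exists, which is why in-painting remains necessary for unprepared scenes.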
What types of occupations will benefit from DR?
Contractors and builders, especially, could benefit from DR technology. One could imagine a city planner designing a new hotel where a derelict parking garage currently resides. DR would allow the planner to remove the parking garage and place the new hotel design in the space. With standard AR, it would be possible to place the hotel in the scene, but if the new design doesn’t fully cover the old structure, a lot of the aesthetics are lost. DR would allow the un-occluded portions of the garage to be removed as well. On a smaller scale, landscapers wanting to remove stumps or shrubs within an outdoor space, or damaged tile and walkways, could also utilize DR. Indoors, interior designers could remove outdated or grimy décor before adding new accents and pieces through AR.
Does DR have any downside?
The primary concern would be safety. If the user forgot they “removed” an end table, a cement step, or a manhole and began walking through the space, the possibility of accidents increases significantly.
Are there examples of DR already in the marketplace?
There are a number of applications that provide off-line DR through photo and video editing features, Photoshop being the most pervasive and commonly known application supporting these “touchup” effects. Within the research domain, there are a number of videos showcasing the possible use cases of live video DR:
While there are a large number of AR applications available for mobile devices, I am not currently aware of any offerings for live DR on Google Play or the App Store.
Is there a specific use of DR that you think is the “Best”?
Since all the real-time DR I’ve seen has been solely within research-related endeavors, it’s hard to say that I’ve seen one use that is “Best.” I can say that they all have their pros and cons, most of which revolve around tracking and the in-painting method of choice. If I had to choose the “Best” use of DR that I’ve seen, it would have to be within a “Large”-scale haptic system built by the Interactive Media Design Lab at the Nara Institute of Science & Technology (skip to about 2:03 in this video).
This system consists of a large haptic device, which provides tactile and force feedback to the user during interaction with virtual objects. DR is used to remove the device from the user’s hand while they are picking up blocks (for the application shown in the video above). I would consider this the “Best” implementation since it works on a live video feed and interacts with a fairly complicated tracking and haptic system. (I need to note here though that this is the same laboratory I did some of my PhD research in, so I may be a little biased.)
When will we start incorporating DR into our designs at Marxent?
We are actively pursuing and developing DR technology so that we can continue to provide state-of-the-art, innovative solutions for our customers. We hope to have a small-scale DR option available in the not-so-distant future.
Ken Moser, PhD, is a Software Engineer at Marxent.