
Welcome back to Season 2 of the In Reality podcast, which covers all things Augmented & Virtual Reality. In Reality features industry news, commentary, and perspective from AR/VR veterans and experts.

In Reality is co-hosted by Marxent’s Joe Johnson and Joe Bardi. Johnson is the Creative Director at Marxent, and has been in the AR/VR industry for 6 years following a stint on Microsoft’s Office UX team. Bardi is Marxent’s Senior Content Strategist, and has been in the industry for 2 years, after having spent more than a decade in print and TV media.

For this week’s episode, the Joes are joined by Dr. Ken Moser, PhD, Marxent’s resident expert. What is Ken an expert on? Why, everything. Don’t believe us? Just click the link below as Ken and the Joes take a rapid-fire trip through: recent developments in the world of Augmented and Virtual Reality, technology, self-driving cars, AI bots talking to each other, and whether or not free will exists at all. To push play or not to push play — is it even up to you?

Show Notes:

Marxent’s MxT Tracking vs. ARKit: A Q&A with Dr. Ken Moser, PhD

Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam

Facebook engineers panic, pull plug on AI after bots develop their own language

Thanks for listening. We really do appreciate it! Enjoyed the show? Please check out previous episodes at the In Reality Soundcloud page, or subscribe at the iTunes Store, Stitcher, or Google Play Music.

Questions, comments, concerns? Email us at


00:00 Joe Johnson: Welcome to the In Reality Podcast. Now starting in three, two, one.


00:08 JJ: Welcome to Season two, Episode three of the In Reality Podcast, where we’re covering all things augmented and virtual reality. The In Reality Podcast is hosted by Joe Bardi and Joe Johnson and features news, commentary and perspective from industry veterans and experts. First up, introductions. I’m Joe Johnson, creative director of Marxent Labs and I’ve been in the AR and VR industry for six years now at Marxent.

00:26 Joe Bardi: And I’m Joe Bardi, I’m the senior content strategist… I’m the senior…

00:31 JJ: You’re so good at this.

00:32 JB: I know, I told you, man. Anyway, I’m Joe Bardi, the senior content strategist here at Marxent, and I’ve been here for about two years.

00:39 JJ: Joining us this week is our resident scientist/genius PhD, Ken Moser. Thanks for coming on the show today, Ken.

00:44 Ken Moser: Are we recording?


00:47 JB: We are on. This thing is live.

00:49 JJ: He’s ready.

00:50 JB: Yes.

00:51 JJ: He seems ready.

00:52 KM: Hello, Joes.

00:53 JJ: Hello.

00:54 JB: Hello. Oh, I’ll say hello too.

00:56 JJ: That’s good. Alright, last week we were all business, but this week we’re evidently not. We’re just gonna pepper Ken with questions and see what happens. So let’s check the big brain on Ken.


01:12 JJ: So, now that we’ve sat down with you, what’s going on in your world, Ken? What gets you excited up in the morning?

01:18 KM: Well…

01:18 JJ: ‘Cause this is just a conversation today.

01:20 KM: That’s true. All the exciting things at Marxent, for sure. We’re doing lots of exciting things. The things we do are exciting.


01:27 KM: I’m very excited about the things that we do.

01:29 JJ: He’s prepped.

01:30 JB: He is. Oh, he’s ready.

01:31 KM: We have lots of new innovative projects, in the pipeline, that will be coming out.

01:36 JJ: Do you have anything self-aggrandizing? Let’s talk about that first.

01:40 KM: So, I guess pertaining to… I’d like to say umm…

[overlapping conversation]

01:43 JJ: I’m hearing some… What sounds like somebody banging on the table.

01:47 JB: So, my foot was moving, I just stopped when you looked up ’cause I was like, “it’s gonna be my foot.”

01:50 KM: I’m being very conscious about what I touch. Yeah, and look at.

01:53 JJ: I appreciate that about you, Ken. He fails every time.

01:55 KM: I’d like to do this closer to me, is that okay?

01:57 JJ: Yeah. Do whatever you want, yeah.

01:58 JB: You can do that. Yeah.

02:00 KM: Can I do that? Can I do that?

02:00 JJ: Yeah, it sounds great.

02:01 KM: I like to lean forward, I’m more of a…

[overlapping conversation]

02:03 JJ: I’m doing this. I’m like, “Hey, I’m into what you’re doing,” I’m leaning.

02:06 JB: I’m purposely not moving. [chuckle] This is what I look like when I’m not moving…

02:10 KM: Remain motionless.

02:11 JB: You can see that I have to think about it.

02:14 JJ: He’s a fidgeter, did you know that?

02:14 JB: I fidget.

02:15 KM: I… I caught on.

02:16 JJ: He’s the fidgeter.

[overlapping conversation]

02:18 JJ: But seriously though, what are you working on right now that you’re really excited about?

02:22 KM: Yes. So the first thing to mention would be the use of our MxT Tracking for the initialization for the ARKit. We’ve got lots of positive feedback…

02:34 JJ: Is it like fully integrated at this point?

02:34 KM: [02:34] ____ about that. It is integrated in the sense that it is in our SDK for any clients out there that are not currently using our SDK. Feel free to use our SDK now for AR. [chuckle] You will gain all the benefits of ARKit. Plus the benefits of our instant start.

02:56 JJ: So, let’s dig into that.

02:57 JB: Let’s take a step back and explain exactly why we’re talking about this. So, ARKit is Apple’s SDK. And explain to the people, Ken, what the one sort of bugaboo is with ARKit.

03:09 KM: The main drawback of ARKit that our clients are finding, so these are people that are gonna have their own retail apps that they want people at home to use. The drawback that they’re finding is that people are having difficulties getting the… What ARKit refers to as the floor anchor, initialized.

03:35 JJ: Is that like the ground plane, it’s like the software detecting ground plane?

03:36 KM: Basically the ground plane. Exactly, yes. ARKit is able to detect horizontal planes. The planes could be a table, it could be a floor, it could be a chair that’s flat, basically anything horizontal, currently. In their 1.5 release they are adding in the vertical planes. But if you want to place stuff on the floor, like furniture or tables, which is also a piece of furniture. It’s furniture in general…

03:55 JJ: Sci-fi scenes with the fighters shooting at them, yeah, any of that stuff. Yeah.

04:00 KM: Characters. Exactly.

04:00 KM: Or the demo that they showed at their [laughter] WWDC [04:03] ____ where the guy’s on the table and he falls off.

04:04 JJ: Solid reference, Joe. That’s right, the yellow grid.

04:07 KM: The yellow grid.

04:08 JB: With the stars, I remember that.

04:10 KM: Then you need a horizontal plane, of course.

04:12 JJ: Yeah.

04:12 KM: And so ARKit does that for you naturally. The drawback is they don’t currently provide the developer any mechanism for getting any numbers around how close it is to finding a horizontal anchor…

04:26 JJ: You mean like a progress on finding it?

04:28 KM: A progress, exactly. You’re basically left there “scanning,” in quotation marks.

04:35 JJ: I know I as a user have booted up an app for filming or something, and I don’t have any idea when it’s gonna start working. So I just wave it around and occasionally, it’ll work when I think it’s gonna work.

04:47 JB: You can get stuck assuming that it’s not working.

04:49 KM: Exactly. There are alternatives. Wayfair’s app in the past… When they first came out anyway for ARKit, used the ARKit point cloud. So ARKit also gives you a point cloud of the space. And this point cloud is basically dependent upon how much movement you’ve done over the last so-and-so amount of seconds, X number of seconds. If you basically start the app up and remain motionless, there is no point cloud ’cause you haven’t moved.

05:14 JJ: I did detect that, yes.

05:15 KM: Yes. You have to actually move around some for it to get the point cloud. It’s just using areas of disparity between consecutive frames to generate that point cloud.
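
[Editor’s note: the depth-from-disparity idea Ken describes can be sketched in a few lines of Python. All numbers here are invented for illustration; a real system like ARKit fuses many frames and IMU data rather than a single stereo pair.]

```python
# Toy depth-from-disparity: two camera positions separated by a small
# lateral "baseline" (the movement between consecutive frames).  A
# point's image position shifts by a disparity inversely proportional
# to its depth; no movement means no disparity, hence no point cloud.

def project_x(point_x, point_z, camera_x, focal=800.0):
    """Pinhole projection of a point's x-coordinate, in pixels."""
    return focal * (point_x - camera_x) / point_z

def depth_from_disparity(focal, baseline, disparity):
    """Triangulated depth from the pixel shift between two frames."""
    return focal * baseline / disparity

focal = 800.0      # focal length in pixels (made-up value)
baseline = 0.25    # 25 cm of lateral device movement between frames
true_depth = 2.0   # the point is 2 m away

x_frame1 = project_x(0.5, true_depth, camera_x=0.0, focal=focal)
x_frame2 = project_x(0.5, true_depth, camera_x=baseline, focal=focal)
disparity = x_frame1 - x_frame2   # 100 pixels of shift

print(depth_from_disparity(focal, baseline, disparity))  # 2.0
```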

05:22 JJ: So, what we’re really talking about here is we’re talking about augmented reality user experience.

05:26 KM: Exactly. It is purely around the UX.

[overlapping conversation]

05:28 JB: And what… Did you come up with… Like is there an average start-up time of just raw ARKit from start of app to initialization when it’s actually, when it pops in, did you get an average time? Or…

05:38 KM: So that is very dependent upon the user. It is dependent upon the space you’re in. Here in the office, we have a good swath of people. They’re all adults, of course. But we have relatively short people…

05:50 JJ: We’re not hiring children. That’s not entirely true. They do let me work here.

05:54 KM: That’s true. But for a child, you’re very tall.

05:56 JJ: Thank you.

05:57 KM: You’re like six foot tall. But we also have people that are sub-five.

06:00 JJ: I’m like a big baby.

06:01 KM: We have sub-five foot, we have some four-foot tall ladies.

06:03 JB: Are we about to talk about the distance between hands to the floor right now?

06:06 KM: Well, so if you want to move of course… If you’re a taller person, if you just swing your arms you’re covering a greater distance than if I was a shorter person with my arms.

06:13 JJ: Oh, that’s interesting. So the point cloud would be different.

06:15 KM: Right. So, we don’t have a full swath of data over body types and height types or different floor types. We have a very dark carpet here… It’s not a pure color; there are some speckles in the carpeting here, but at a distance it looks grey, basically…

06:32 JJ: Like uniform.

06:33 KM: It’s very hard to pull features out of. And so, I find just here in the office that the carpeting is not optimal, and I can often spend 10 seconds or more trying to get it to pick up the floor for the floor anchor; sometimes it does it quick.

06:46 JJ: Yeah, I mean I imagine other people’s environments are similar.

06:47 JB: I was gonna say, for the record, it’s a pretty generic carpet.

06:49 JJ: [06:49] Yeah.

06:51 KM: Yeah, for an office, yes. For an office space…

06:53 JJ: It seems fine, yeah.

06:54 KM: I wouldn’t have this in the house, but for an office space this is like normal.


06:56 JJ: Right. So we know he has better taste at home.

07:00 KM: Well, home may be even worse. Maybe you just have a pure white carpet at home, or a beige carpet.

07:02 JJ: So, how have you solved that problem?

07:04 KM: So our MxT tracking doesn’t do the anchor detection, so it doesn’t require you to already know information about the floor. It makes presumptions about who you are and where you may be holding the device above the floor, and therefore allows you to immediately just place an object in the scene. The scale is not the same as ARKit. ARKit scale is an absolute scale: if I move one foot in ARKit tracking space, then relative to the AR content in the scene it looks like I moved one foot. Whereas with MxT tracking, it is relative to where you start. So wherever I start, if I move a foot it looks like I moved a foot from there. But it’s all relative to where I actually begin the tracking phase. We can, of course, continue to run ARKit in the background. So we’re running MxT, you instantly start, I can immediately start looking at the product I wanted to view. ARKit is still running as I begin to move and look around the product in MxT tracking.
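
[Editor’s note: a rough sketch of the relative-origin idea Ken describes. MxT’s actual implementation is not public; the frame names and numbers below are hypothetical, and real tracking involves rotation as well, omitted here for brevity.]

```python
# Illustrative sketch of relative vs. absolute tracking frames
# (translation only; names and values are hypothetical).
# MxT-style tracking measures motion relative to wherever the session
# started, so content can be placed instantly; once an ARKit-style
# floor anchor is found, the same content can be re-expressed in the
# anchor's frame and the handoff is invisible to the user.

def to_anchor_frame(point_relative, start_in_anchor):
    """Map a point from the start-relative frame into the anchor frame."""
    return tuple(p + s for p, s in zip(point_relative, start_in_anchor))

# User opens the app; a couch is placed 1.5 m ahead in the relative frame.
couch_relative = (0.0, 0.0, -1.5)

# Later, the absolute tracker reports where the session's starting pose
# sits in its floor-anchored world frame (made-up numbers).
start_in_anchor = (0.5, 1.25, 0.25)

couch_world = to_anchor_frame(couch_relative, start_in_anchor)
print(couch_world)  # (0.5, 1.25, -1.25)
```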

08:00 JJ: You get a more confident track.

08:01 KM: Exactly. And ARKit is still trying to find the floor plane. Then, when ARKit does find a plane, we kind of intelligently infer, using distances from where you’re holding the device, whether it could be the floor or a table or some other object that’s not the floor. But once we feel confident it’s detected the floor anchor, then we can just switch right into ARKit, and as the user you wouldn’t have any idea that ever actually happened.

08:23 JJ: That’s great! Yeah, as a former UX professional it’s always good to turn initialization times down from like eight seconds to 0.3 or something.

08:30 KM: Goodness, yeah. So, basically from the analytic data we have so far (some of our clients have given us their own data, and we’ve done a few user tests with our UI/UX department as well), for most people it takes at least five seconds or more, and of course on the high end, yes, 10 seconds up to just not getting it at all. I don’t know how low our threshold was, but it was like 15 to 20 seconds.

08:52 JJ: It’s pretty low, yeah.

08:53 KM: If you didn’t get it at that point, then it’s considered a failure. But sometimes it can take an amount of time. And of all the apps we’ve used, so we are continually surveying the app store for their hit apps…

09:07 JJ: Your Wayfair’s, or your Ikea places…

09:08 KM: And your general games, or your gamified apps that use ARKit for basically no purpose at all. As a time waste.

09:14 JJ: Well, I got a segue for that later. [laughter]

09:16 JB: That’s called marketing, Ken.

09:18 KM: Exactly. They all have their own little mini-game for getting you to do it. Some [09:22] ____ want you to move the phone, and this is one that we just saw recently, I don’t remember the name of the app off-hand… Or do I?


09:29 JJ: I don’t know.

09:31 KM: It’s a pretty good app. They wanted you to move the device in a circular motion, which was, from what I’ve seen, the worst yet. Because you don’t really wanna do a circular motion, you want more lateral motions, because of course, if it’s trying to detect the plane in front of you, it needs that disparity. Like your eyes: they’re separated by distance, not by rotation.

09:48 JB: Yeah, well, I imagine…

09:49 KM: If they were on top of each other, it would be like a rotation. You want to do more of a translational motion.
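
[Editor’s note: Ken’s point about translation versus rotation can be shown numerically. This toy Python sketch, with made-up values, projects two points that lie on the same viewing ray but at different depths: pure rotation about the camera center moves them identically, while lateral translation separates them.]

```python
import math

# Two points on the same viewing ray but at different depths.  Pure
# rotation about the camera center moves their images identically
# (zero parallax, so no depth information), while lateral translation
# separates them (parallax, so depth can be triangulated).  This is
# why "scan the room" prompts should encourage translational motion.
# All values are invented for illustration.

def project_after_rotation(x, z, theta, focal=800.0):
    """Project x after rotating the camera by theta about its center."""
    xc = x * math.cos(theta) - z * math.sin(theta)
    zc = x * math.sin(theta) + z * math.cos(theta)
    return focal * xc / zc

def project_after_translation(x, z, t, focal=800.0):
    """Project x after sliding the camera laterally by t."""
    return focal * (x - t) / z

near = (0.5, 2.0)   # same ray (x/z = 0.25), depth 2 m
far = (1.0, 4.0)    # same ray, depth 4 m
theta = math.radians(5)

rot_parallax = (project_after_rotation(*near, theta)
                - project_after_rotation(*far, theta))
trans_parallax = (project_after_translation(*near, 0.25)
                  - project_after_translation(*far, 0.25))

print(abs(rot_parallax) < 1e-9)  # True: rotation reveals no depth
print(trans_parallax)            # -50.0: translation does
```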

09:53 JJ: I imagine MxT works because generally it behaves the way people hope things behave. Like when you start it up, it basically starts instantaneously and as you do whatever you do, it slowly gets a better track and then it basically conforms to the way you want to use it.

10:08 KM: That is exactly correct, and it’s…

10:09 JJ: Which is clearly ideal.

10:10 KM: Exactly. And it’s optimized for our business model which is products, [10:14] ____ product placement. And so, most of our products so far in AR with our customers like Macy’s or actually that we have made AR apps for…

10:24 JB: What, are you name dropping right now?

10:25 KM: Well, we actually have apps for them.

10:27 JB: There’s a press release, folks. We’ll get it later. That’ll be in the show notes.

10:31 KM: So the mobile apps for iOS and Android have been out for quite some time now, almost a year now.

10:36 JJ: Are you using MxT on those?

10:37 KM: They predate it. When they first came out, ARKit didn’t exist yet, so they were just using MxT.

10:42 JJ: Okay.

10:44 KM: So, for those of course we’re placing this furniture. Large pieces of furniture, tables, desks, chairs, lamps. Things on the floor. But theoretically you can also use MxT to place stuff on a desk, or a countertop or a cooking device. [laughter]

10:54 JB: Like a lamp, or a paperweight?

10:57 KM: Yeah, exactly. Things of that nature.

11:00 JB: And so, Google’s variation on this is called ARCore.

11:03 KM: ARCore.

11:03 JB: So is it the same story basically?

11:05 KM: So they also do the same… They of course don’t call it the same, they don’t call it anchor, it’s not the same…

11:09 JB: Oh, yes. God forbid.

11:10 KM: But it’s the same basic principle in that it does not have any knowledge about the environment until you begin to move around it. And so, it also has the same features where you can detect horizontal planes, and I believe they are also adding the vertical planes. I haven’t tested it recently in their 1.0 release, but I’m pretty sure… And maybe we’ll have a sub-bulletin to look that up… But the 1.0 release doesn’t do vertical planes yet.

11:33 JJ: Okay, I know that wall tracking’s next on the hot list.

11:37 KM: I know they do horizontal planes, and of course in the 1.0 tracking they also incorporate the 2D image tracking that ARKit has in the 1.5 version. We can do the marker tracking.

11:47 JJ: Yeah, of course.

11:48 KM: I’m sorry, I didn’t mean to hit my microphone.

11:48 JB: You’re fine. Let’s jump… Let’s talk about the 1.5 version. Because that’s where I was going with this.

11:52 JJ: I wanted to talk about another thing.

11:53 JB: We’ll jump to that in a second.

11:54 JJ: You’re killing me!

11:55 JB: But Ken has touched ARKit 1.5, whereas I have read about ARKit 1.5.

11:58 JJ: Oh, fun. Yeah, let’s talk about it.

12:00 JB: So my first question would be, did they fix this initialization problem? And then second would be just you’ve messed with it, what’s your general take, what did they add? What you got?

12:11 KM: So it’s still the same initialization. So you still have to get anchors. So they’ve added the vertical planes, which is very similar to what they’re doing with the horizontal planes in that it doesn’t really work with just blank walls. But if you…

12:21 JJ: Oh, that seems problematic.

12:23 JB: Yeah, that seems problematic.

12:24 KM: Well, if you just have a white wall, it’s kind of hard to pick that up.

12:28 JJ: I’m just checking out the office right now…

12:28 KM: If you had general, regular stuff on your wall, or you’re far enough away from the wall where you can see those things on the wall, then it does indeed pick up the vertical anchor.

12:40 JJ: What if you have no family like me? There are no pictures on the wall. You’re just out of luck, you’re SOL?

12:48 KM: So I guess it depends on the texture, gradient, perhaps, of your wall itself. Maybe the lighting you have on there. Obviously, of course, it needs features.

12:55 JJ: Time to paint up a mural.

12:55 KM: It needs features for…

[overlapping conversation]

12:56 JJ: Got it. Well, that’s fair enough.

12:57 KM: So if you are insane and you’re in just a white room and it’s padded and it’s all the same, and somehow you got a mobile device in there… [laughter]

13:04 JJ: Was that the best burn I’ve ever heard?

13:07 JB: That was fantastic.

13:08 KM: And somehow you got a mobile device in there, it may or may not pick up your walls.

13:10 JB: If your personal aesthetic idealizes a clean room, it will not work for you?


13:14 JJ: That’s fair.

13:14 KM: Exactly.

13:17 JJ: Alright. So they’re working on wall tracking, what else are they doing?

13:18 KM: They also, in the 1.5, I have not tried this, but they do have it on the list for the 2D image recognition on the 1.5.

[overlapping conversation]

13:27 KM: So very, very similar. Exactly, which I presume could be sort of their answer to…

13:32 JJ: The initialization problem?

13:34 KM: Well, not necessarily the initialization, but maybe the save perhaps. We can kind of say [13:39] ____ we do this maybe also as well…

13:40 JJ: Do it.

13:41 KM: Or however you wanna edit this, but ARKit and ARCore also do not allow you to save a session that you had worked on.

13:49 JJ: I was literally just gonna ask: okay, so if they’re doing point clouds and they’re doing horizontal tracking, what is the next step for remembering a space that you’ve been in? That seems really vital.

13:56 JB: Yeah.

13:57 JJ: That’s the next step between, “Oh, I have this design tool and I have to rebuild my room every time,” or if let’s say it remembers your space, wherever you’re at, or etcetera, you can save your room design and the next time you wanna screw with it, you’re like, “Well, let me just bring my room up and check it out again in that AR space.”

14:11 JB: And what kind of weird privacy issues does that raise?

14:16 JJ: We’ll get into that later, but I think you were gonna touch on the save sessions part.

14:21 KM: So without being able to save, just like you said, if you have an app and you’ve spent all this time making my configuration, I’m interested in buying these things or maybe I like the way it looks, but maybe someone else I need to confirm with, my designer or whoever it is…

14:35 JJ: Your designer? I think you mean your wife.

14:35 KM: Exactly. I have to consult with… [laughter] I wouldn’t be doing this anyway, she’d be the one doing it in the first place.

14:39 JJ: Yeah.

14:39 KM: But you spend all this time doing it, and then basically now in every single app out there, more or less, you kill the app and then when you reopen the app, you have to do it all over again. You can’t save it. Even if you saved… You could theoretically save your configuration of virtual products, and then maybe you have to reinitialize the floor and reposition everything you’ve done in the space. But maybe you save, like, your building materials, and you just pull things back to where they were before.
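
[Editor’s note: a minimal sketch of the workaround being described, saving a configuration rather than the session. The product names, positions, and anchor values are invented; a shipping app would also persist orientation and handle scale.]

```python
import json

# Minimal sketch of the workaround discussed: ARKit/ARCore won't save
# the session itself, but an app can save its *configuration* -- which
# products were placed, and where, relative to the floor anchor -- and
# restore it after the user re-initializes the floor in a new session.
# Product names, positions, and anchor values are invented.

def save_layout(placements):
    """Serialize anchor-relative placements (e.g., to disk or a cloud)."""
    return json.dumps(placements)

def restore_layout(blob, new_anchor):
    """Re-express saved anchor-relative positions in a new session whose
    freshly detected floor anchor sits at new_anchor in world space."""
    saved = json.loads(blob)
    return {sku: tuple(p + a for p, a in zip(pos, new_anchor))
            for sku, pos in saved.items()}

# Session 1: the user places furniture relative to the detected anchor.
layout = {"sofa-123": (1.0, 0.0, -2.0), "lamp-456": (-0.5, 0.0, -1.0)}
blob = save_layout(layout)

# Session 2: the floor is re-detected; its anchor lands elsewhere in
# tracking space, but the saved layout snaps back around it.
restored = restore_layout(blob, new_anchor=(0.25, 0.0, 0.5))
print(restored["sofa-123"])  # (1.25, 0.0, -1.5)
```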

15:06 JB: I feel like I’m operating an ancient PC where in order to save the state of whatever I’m working on, I’m like, “Well, let me just write down this code.” [laughter]

15:11 KM: Exactly.

15:13 JB: “And we’ll just re-input this the next time.”

15:14 KM: Exactly. We use glyphs for our saving. They’re actually pictures of you; pictures of all our employees are the save code.

15:23 JB: Oh my God.

15:24 KM: It’s like two Joes, two Joe Jays, Joe B.

15:27 JB: Is this real? Are you screwing with me right now?

15:29 KM: I think it should be. I think it should be. It should be an Easter Egg in our apps.

15:32 JB: I’m super flattered by even imagining it.

15:34 JJ: Imagining an image of your own face?

15:35 JB: To label software, yes.

15:35 JJ: I love it. I love it.

15:38 KM: So, ’cause currently ARKit and ARCore do not provide facilities for that. They don’t allow you to save any information about it, and it will be up to you, the developer, to do so. There are some companies that are emerging, that are starting to offer these kind of facilities. I don’t know if I can say the names of these companies, you can edit this part out.

16:00 JJ: Do whatever you want, man. We call it all kinds of names. Who cares?

16:00 JB: We do.

16:03 KM: So two companies that I am aware of… Okay, so the first, SDK, is Placenote. The second is called JIDO, J-I-D-O. And they both offer facilities for saving an AR session. Placenote actually has an example app on the App Store, you can actually download and try it out. JIDO does not, you’d have to contact them directly through their website.

16:22 JJ: What’s their methodology for saving these?

16:23 KM: So both companies are doing a more of a cloud based approach, where basically the processing itself is done on the cloud, on their servers, and basically no information is stored on the device itself.

16:38 JJ: Okay.

16:39 KM: Which, for them, is good because once you’ve used them, you have to use them also to keep their information, so it’s kind of like, “I got you,” at that point. They store the map information on their clouds. And when you run the application, you connect to their servers and download the map.

16:55 JJ: So their service is actually mapping the space and remembering it?

16:57 KM: So their service, yes, exactly is basically… Both companies actually use their own custom point information. They don’t use ARKit’s point cloud, they generate their own point clouds. But a very similar methodology of what ARKit is already using, they of course just correlate their point cloud to ARKit’s tracking information.

17:13 JJ: Sure. For verification purposes?

17:15 KM: Exactly. And then it can re-localize using the standard [17:17] ____. For those in the know, the PnP methodology is for re-localizing from a 3D point cloud…

17:26 JB: That was half Greek to me, I got half of that.

17:29 KM: And 2D image.

17:30 JB: Did you say PnP?

17:31 KM: So PnP is a perspective-n-point calculation, where basically you have 3D points…

17:34 JB: Duh. Idiot.

17:36 KM: Exactly. You have an image that has 2D points. And then you have the 3D points that those 2D points correspond to in the real world, and then you can figure out where the camera would be located in that 3D world space.
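
[Editor’s note: a toy illustration of the PnP idea Ken outlines. Real solvers such as EPnP recover full rotation and translation in closed form; this sketch, with invented points, searches only the camera’s x-position to show the reprojection-error objective being minimized.]

```python
# Toy illustration of the perspective-n-point (PnP) idea: given 3D
# points in the world and their observed 2D projections in an image,
# recover where the camera is.  Real solvers work in closed form and
# recover rotation too; this brute-force sketch searches only the
# camera's x-position, to show the reprojection-error objective.
# All point values are invented.

def project(point, cam_x, focal=500.0):
    """Pinhole projection for a camera translated to cam_x on the x-axis."""
    x, y, z = point
    return (focal * (x - cam_x) / z, focal * y / z)

def reprojection_error(cam_x, points_3d, points_2d):
    """Sum of squared pixel errors between predicted and observed points."""
    err = 0.0
    for p3, p2 in zip(points_3d, points_2d):
        u, v = project(p3, cam_x)
        err += (u - p2[0]) ** 2 + (v - p2[1]) ** 2
    return err

# A "map" of 3D feature points, and the 2D observations that a camera
# at x = 0.7 would have produced (the ground truth for this toy).
points_3d = [(0.0, 0.2, 2.0), (1.0, -0.1, 3.0), (-0.5, 0.4, 2.5)]
points_2d = [project(p, 0.7) for p in points_3d]

# Re-localization: pick the candidate position with minimal error.
candidates = [i / 100.0 for i in range(-200, 201)]
best = min(candidates, key=lambda c: reprojection_error(c, points_3d, points_2d))
print(best)  # 0.7
```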

17:45 JB: Man, that’s great.

17:45 KM: It’s a decades-old method and procedure.

17:49 JB: I didn’t say it was new, I said it was Greek. I love it.


17:53 KM: But both of these companies do that and create the data, but they store the information on the cloud. Nothing is stored on the device itself. So you’re basically subscribed to their services…

18:01 JB: Yeah.

18:01 KM: To use this.

18:02 JJ: Okay, I gotcha. They gotta make money some…

18:04 KM: They have to make money somehow, exactly. So…

18:05 JB: Oh, you know how they make money? They’re gonna make money by selling the contents of my house… Eventually when object recognition comes around they’re gonna be like, “That’s in his house, and that’s in his house. And hey, everybody, this guy who subscribes to my service and I have a whole bunch of information about, he has all these products, do you wanna market to him?”

18:22 KM: Yes.

18:22 JB: Just a guess.

18:23 KM: So if we were to store metadata, it would only be for beneficent purposes, to offer suggestions for things you may like.

18:30 JB: You’re so magnanimous.

18:30 KM: Exactly.

18:32 JB: We’re here with Mark Zuckerberg.


18:34 JB: This is the Zuck.


18:36 JB: Coming to you from Dayton.

18:37 JJ: Alright, I have one more question on ARKit and AR tracking in general, because I know you have other things to discuss.

18:41 JB: Oh, I don’t actually have other things… This is just a…

18:44 KM: I have a lot more to say about everything we’ve touched on so far.

18:46 JB: I know, it’s great. I know.

18:47 JJ: So sort of my question is, rolling all of this up, so you have… ARCore 1.0 is out, ARKit 1.5 is in beta basically, right?

18:56 KM: I don’t believe it’s been… I’m presuming that’s a WWDC release?

19:00 JJ: That’s what I would guess too, right?

19:00 KM: That’s what I gather from it, but yes.

19:01 JJ: And I find it weird that it’s 1.5 and not 2.

19:04 KM: That’s also why I’m not sure if a WWDC… It’s in June, so it’s still some months away.

19:07 JJ: Yeah it is, approaching quickly.

19:09 KM: They may have a 2.0 release or they just decide to make it 1.5. Because iOS usually every year has a new iOS version number…

19:14 JJ: Well, right.

19:15 KM: But ARKit doesn’t have to correspond with that, so it could just be a 1.5.

19:17 JJ: Yeah. Another thought I had was that it could be that 1.5 is out now, and 2.0 won’t be much different than 1.5 when it’s released with iOS 12.

19:24 KM: As far as I know, 1.5 has not been released.

19:28 JJ: Yeah.

19:28 KM: It’s not a release candidate. You cannot download it on your device through the iOS update yet.

19:30 JJ: Right.

19:33 KM: So, yeah. It will either be a patch, or it will be an 11.3… I’m not sure what the latest version of that is…

19:38 JJ: Yeah, you don’t have to be exact on that one, buddy. 11 whatever.

19:42 KM: It’s 11 point whatever, and then of course 12 will be announced in June, but usually a new version doesn’t come out until September.

19:47 JJ: Right, right. So then the second part of this is, knowing what you know about what you’re working on, and what they’re working on. So, are the 1.5 features and what exists in AR core sort of it for 2008, or is there more on the horizon, that means the second half of the year…

20:00 KM: That is it for 2008, I can guarantee you.

20:02 JJ: Oh, right. I’m sorry. 2018, what year is it? Again, with the jet lag, I’m just saying. I’m just saying, that’s what happens, you lose 10 years.

20:08 KM: Gotta get back in time.

20:10 JJ: Yeah. So, for the rest of 2018, are there things sort of in the ether that we should be looking for?

20:16 KM: So as far as ARKit and ARCore, I think that’s basically it. So, I am sure Google will be watching WWDC to see if Apple’s announcing anything extra in their ARKit release, because the 1.0 release for ARCore basically mirrors the 1.5 release for ARKit.

20:35 JJ: Okay.

20:35 JB: No shocker there, I know.

20:36 KM: Of course, Yeah, no shocker there. And so if Apple somehow has managed to circumvent the corporate espionage and prevent Google from…

20:45 JB: What are the odds?

20:46 KM: Getting all the information they have released, then I suspect that probably will be it until later in the year, when they will make the announcements for the next year.

20:52 JJ: Okay, right. Gotcha.

20:53 KM: For the next year.

20:55 JB: Well, that was very enlightening.

20:56 KM: Yeah, that’s true. But they both do not have… So they still have to do the scanning, and they both do not have a map save feature in them. However, I’m sure they are all working on these. So the map save is good for what we talked about. Map save is also essential for multiplayer and multiuser experiences in AR as well, which do not currently exist in ARKit and ARCore. As an easy mode, you could use a marker to co-locate both people, like the marker being the origin of your…

21:23 JJ: When you say multiplayer, do you mean at least two, maybe more people, on say, different devices, say iPads, iPhones, whatever, looking at the same scene and that network content being updated on all of them at the same time. Is that what you mean?

21:34 KM: Yes, that’s exactly what I mean. So basically a collaborative AR session, which could be a game or it could be a more productive modality.

21:39 JJ: Yeah, that would be interesting. I never really thought about how I would collaborate with somebody in AR, but I’m sure there’s all kinds of possibilities.

21:43 JB: Yeah.

21:44 KM: Yeah, it could be markup, if you were a contractor or something, perhaps…

21:49 JJ: Oh man, that sounds exciting.

21:51 KM: And then you wanted to mark up stuff for your employees. Then theoretically you go into the house, you map the house, you mark some stuff up, or maybe there’s more than one person in there…

22:00 JJ: Yeah, that’s great.

22:00 KM: Doing it at the same time, and they’re both uploading stuff to the same app. And then I come in, and I see what my boss had, or manager…

22:06 JJ: I know that…

22:06 KM: I’m not a construction person. Whoever the person was that came in marks…

22:08 JB: I know that IBM is using Watson as a virtual collaborative tool for VR, for object recognition and stuff like that.

22:15 JJ: Yeah.

22:15 JB: Collaborative spaces.

22:16 JJ: Well, and you mentioned ARKit was adding sort of the object recognition with the 1.5, and I know Google’s is called Lens, right?

22:26 KM: Google Lens, that is correct.

22:27 JB: How many objects are they gonna be able to recognize to start?

22:30 KM: So, I’m not sure if it’s still called Google Lens in this app, but the Pixel phones had the Google assistant…

22:35 JB: Yeah, right.

22:36 KM: And with the Google Assistant, you could actually just show it a picture and it would give you information about what you have taken a picture of.

22:43 JJ: No shit.

22:43 KM: And we’ve used it here in the office, and it can be very specific sometimes.

22:49 JJ: Oh, wow!

22:49 KM: So the very first thing we tried it on was a box of Kleenex tissues…

22:53 JJ: Okay.

22:54 KM: And well our developer here Chris Jones, shoutout to Chris Jones…

22:57 JB: Chris Jones!

22:58 KM: Was using his Pixel phone, and took a picture of the tissue box, and then it brought up… It didn’t just highlight in the photo, it literally brought up text that it had produced itself.

23:11 JJ: Okay.

23:11 KM: Saying that it was Kleenex brand tissues, however many counts of tissues were in the box, and like three ply.

23:18 JJ: Okay.

23:18 KM: And we were like, “Okay, maybe… ” And we looked at the box and it was exactly that information.

23:21 JJ: Oh, that’s pretty good.

23:22 JB: Yeah, that’s pretty good.

23:23 KM: So that was it. That was impressive. We also took pictures of other things with logos and emblems on them…

23:28 JB: I was gonna say, it’ll probably start with things that are immediately identifiable to human eyes, you know? Like brand identification, because clearly…

23:35 KM: Yes, logos it did very well.

23:36 JB: Yeah, it’s a commercial play to start of course.

23:39 JJ: And so it works with a photo, though? It’s not live video, I can’t just hold my camera up and have it do it…

23:45 JB: Oh, give it time.

23:45 JJ: Right.

23:45 KM: So you do technically hold it up, you don’t snap a photo, you do hold it up while the video feed’s going.

23:49 JJ: Okay.

23:49 KM: But I’m presuming it’s taking a photograph.

23:51 JJ: It’s taking… Well sure, okay.

23:53 KM: Because you have to hold still. It doesn’t ever do the snapshot thing, the video is always live, but it does do the little reticle, it’s like the little video… The photo reticle is there…

24:01 JB: It probably takes your best resolution still and uses it, yeah.

24:04 KM: Yeah, exactly, exactly.

24:05 JJ: So what is your take, sort of, on… We’ve called it in the past AR Search, right? It seems to me, and to Joe, I think, I’m speaking for you, sorry, that one of the real big sort of growth areas in this is the idea that I can use my phone to identify things in the real world, whether they be addresses, or I’m at the store and I wanna know about a product or whatever. I no longer have to actually type my query into Google or whatever; I just hold up my phone and it gives me all the information.

24:31 KM: Yes. So, I wouldn’t necessarily call it AR Search. Technically, modalities of this type of function have been around since the inception of smartphones…

24:42 JJ: Wow! Okay.

24:42 KM: Going back to, like, the QR code, for example. The QR code was more condensed and limited, I guess, in its information, in that the manufacturer had to directly put the QR code on it. But after that point, it used the QR code to pull up some information for you.

24:57 JB: But as far as modalities go, this is the first one I would describe as being truly human centered, right?

25:02 KM: This would be very neutral. Very neutral in the data. It’s just blanket; the manufacturer has to do nothing.

25:05 JB: Yeah.

25:07 KM: Other than have a very popular item, perhaps, that Google has already sourced through all of the many webpages and searches information on that particular item. But basically, Google has been around now for two decades, more or less.

25:21 JB: And I’m old.

25:21 KM: And so they of course have aggregated a lot of data, and a lot of information.

25:27 JJ: Yes, they have had a project going where they’re just trying to catalog all of it, right? They’re just identifying everything in a giant spreadsheet.

25:33 KM: Everything that has ever been Googled, exactly, on the internet.

25:35 JJ: I’m probably making it simpler than it is.

25:37 JB: But that does seem simpler than it probably is.

25:37 JJ: Regardless, yes, it’s like the ultimate database challenge. It’s like, can you database the world and have it be searchable?

25:43 KM: That’s right, that’s right.

25:44 JB: No singularity?

25:45 KM: Exactly.

25:47 JJ: I mean, yes. How long after becoming self aware does it decide that we don’t deserve to live, Ken? [laughter]

25:50 JB: You’re on this Skynet stuff all the time, man, let it go.

25:53 JJ: I just wanna know when we have to flee to the woods, that’s all I wanna know.

25:55 JB: You’re not safe in the woods.

25:57 JJ: Damnit!

25:57 KM: I mean if we wanna sidetrack the conversation more about AIs, then my… Which I have no problem with…

26:04 JB: We do, let’s do it.

26:05 KM: My feeling on the AI, and like the robots in general. So you see the sci-fi movies, and they try to humanize the AIs, right? The Ex Machina movies, or the movie literally called AI, with the kid from… Was he in Sixth Sense?

26:17 JJ: Sixth Sense, yes.

26:18 JB: That would be Haley Joel Osment. Shoutout.

26:20 KM: Who’s a man now, who’s a grownup.

26:21 JB: Yes he is.

26:22 KM: Who was good in both movies. AI, I liked the movie AI, it’s a bit long, I saw it in the theaters when it came out. And it was a long movie, Jude Law, I believe the man’s name is…

26:31 JJ: Yes.

26:31 KM: Who was basically…

26:32 JB: Gigolo Joe.

26:34 KM: The Gigolo bot, the Gigolo bot? It was all good, aliens in the end, spoiler alert, aliens in the end.

26:38 JB: Spoiler for AI if you haven’t seen it yet.

26:40 JJ: That’s the moment when I felt the length of the movie, when you think the movie is over and all of a sudden it’s like, “Oh, aliens on the ice! Oh god, this has 15 more minutes, oh no!”

26:48 KM: Yes. But they try to humanize the robots, these AI robots. But no matter how much you humanize them, they are still robots, and so I don’t know how you could ever actually prove if an AI was ever sentient in the way that a human is sentient…

27:06 JB: In the way that we assume that human beings are sentient…

27:09 KM: Because humans can be completely random. So I could say something random to you…

27:12 JB: I don’t think you can.

27:13 KM: Just like right now.

27:14 JB: I think you’re a function of your environment and your nature.

27:15 KM: And the number. Or am I?

27:19 JB: Anyways, so once we get to the philosophical discussion about what sentience is and how we can prove it…

27:23 KM: Yes.

27:23 JB: If the robots have the appearance of sentience and we can’t tell the difference, it’s functionally the same thing.

27:28 KM: Or is it?

27:29 JB: Functionally the same thing.

27:31 KM: It may functionally be the same thing, but that would be the same thing as like saying that any advanced enough technology is magic.

27:37 JB: Yeah, I don’t disagree with it, I don’t disagree with Sir Arthur Clarke at that point. [chuckle] What is useful to us is what we can determine through our senses, and if our senses tell us that a robot is sentient, we have to deal with that information that way.

27:50 KM: But, so wait…

27:52 JJ: Can I agree with both of you?

27:53 JB: Yes. Well, of course.

27:54 KM: For the AIs in the movies to basically be sentient, it would mean that our computer technology had advanced to the point where they are no longer deterministic. So basically our whole computer system is based around a logic machine. Which, if you have had computer science courses, is basically automata.

28:11 JB: Yeah.

28:13 KM: The autonomous machines. And so all automata are deterministic, meaning that given an input, you know exactly what it’s going to do.

28:20 JB: Every time.

28:21 KM: Every single time. And so even the AIs we have today are deterministic, ’cause if they weren’t, they would just produce random data. You would have no way of knowing what was what; you couldn’t verify that they were working correctly.

28:32 JB: “You’re not doing what I designed you to do.”

28:34 KM: Exactly.
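The determinism Ken describes, the same input always producing the same output, is the defining property of a deterministic finite automaton. A toy sketch in Python (a bit-parity checker, purely illustrative, not from the conversation):

```python
# Toy deterministic finite automaton (DFA): given a state and an input
# symbol, the transition table fixes exactly one next state, so the same
# input string always produces the same result -- every single time.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run(bits: str, start: str = "even") -> str:
    """Track the parity of 1-bits; deterministic by construction."""
    state = start
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state

# Re-running on the same input can never diverge:
assert run("1101") == run("1101") == "odd"
```

This is exactly why, as the hosts joke later, a misbehaving deterministic system could in principle be caught in advance: every reachable state is knowable from the transition table.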

28:34 JJ: Or wait, so there have… This is interesting. So I was gonna say there have been examples of AI systems that have become non-deterministic, but I don’t know if that’s actually technically correct.

28:42 KM: I don’t think so.

28:43 JJ: I’m thinking of the one where, I think it was a Google project. They assigned computers to learn different languages, or make up languages, and eventually the computers made up their own language and began communicating with each other…

28:54 KM: Supposedly.

28:55 JJ: Allegedly, yes. Because the people were…

28:57 KM: I like allegedly better than supposedly.

29:00 JJ: The people who were running it “didn’t know what the computers were saying” to each other, and eventually they pulled the plug.

29:04 KM: Which I think shows that those people should be fired. Because you wrote the program, you should be able to backtrack, you should be able to back-determine what the language they developed was, you know what I mean? I don’t know what the goal of that project was; if I was their manager, my goal would have been for them to create a language that you can then decipher. And if you didn’t do that, you should be fired.

29:22 JJ: Essentially a phony code, but…

29:23 KM: Yes, exactly.

29:25 JJ: My position is going to always be in the end, we don’t know enough about ourselves to determine whether or not we are deterministic at this point, correct? Come on.

29:35 KM: Well…

29:35 JB: Yeah, I’m gonna agree with you.

29:37 KM: Well, from a creationist standpoint…

29:39 JB: Regretfully.


29:41 KM: From the creationist standpoint, I would say that theoretically, we are not deterministic and we have free will…

29:48 JB: Allegedly.

29:49 JJ: Right.


29:50 KM: And supposedly even as a product of your environment, you can choose against what an identical person in the same situation would have been able to do.

29:58 JB: Sure.

30:00 KM: Which is why you have brothers and sisters growing up in the same environment, some succeed and some don’t. Even growing up in the same household.

30:02 JB: Yes.

30:04 JJ: And in the end, we’re gonna say that both of us are right. Because, I will say that we are deterministic but we need the illusion of free will in order to maintain our sanity.

30:13 KM: I’m walking out… I choose to walk out right now.


30:15 JB: And, I will say… “We now interrupt this podcast for 30 minutes on free will.”

30:20 JJ: No, it’s fine, it’s fine.

30:21 JB: Free will is… It’s one of the great classic philosophical arguments of humanity. Does it exist or does it not? I like the idea that in some way we’ll get some insight on it from our development of artificial intelligence.

30:34 KM: We, of course, like most everything we do, we only base it on our observable things.

30:39 JB: Yeah.

30:39 JJ: Right.

30:39 KM: We’re making the AI to mimic what we already know about intelligence.

30:43 JJ: Ken, I don’t know if you listened to our last couple of podcasts, but I’ve actually expressed a lot of hope for the future of AI. Largely because I think that we will continue to make it relatable to us, because we are making it. That is my essential point.

30:54 KM: Yeah, of course, yeah. And, whoever’s making it, of course, is going to make it to make money. Which is the reason they’re making it in the first place.

31:00 JJ: You’re not wrong.

31:00 KM: The only way they make money is for people to use it because people are the people that have money, and to get money you have to get the money from people. And so you’re making products for people. Any marketing students out there, if you wanna make money, you gotta market it to people, because people have money.

31:14 JJ: Yeah, he’s got a point there.

31:14 KM: Animals do not have money, and the environment and the earth doesn’t have money. And so you’re not really making AI for any reason but to make money.

31:21 JJ: I don’t know, you mentioned that there are people out there that are working on it for a variety of other reasons…

31:25 KM: Hogwash.

31:26 JB: They’re called young and naive, Joe. [chuckle]

31:28 JJ: No, they are certainly people out there who have motivations that are not necessarily tethered to money.

31:34 JB: For sure, yes.

31:34 JJ: They could be tethered to reproduction, they could be tethered to…


31:37 JJ: Hang on, they could be tethered to your legacy, they could be tethered to all sorts of things, yes.

31:40 JB: Social status.

31:41 KM: I consider legacy to be the same as money.

31:44 JJ: Alright, we’re learning a lot about Ken today. [laughter]

31:46 KM: It’s purely selfish.

31:49 JJ: Yeah.

31:49 JB: Yeah.

31:49 KM: You did it for yourself, you did it so that you would be remembered.

31:51 JJ: Alright.

31:52 KM: You didn’t do it totally altruistically… You’re not the Bitcoin guy, the original creator of Bitcoin, who has an Asian name; superimpose your voice over me, the creator of Bitcoin, blah, blah, blah, blah, blah.

32:05 JJ: I’ll deepfake him, I’ll turn him into a puppet and yeah…

32:09 KM: But the theory is that no one actually knows who the guy is.

32:13 JJ: Oh, I’ve heard this comment, yeah.

32:14 KM: He was in contact with some of the early developers, who are real people, had just never actually met him in person. It was always just a correspondence, as he was getting it going. And then he eventually disappeared.

32:25 JJ: Yeah.

32:26 KM: So, you don’t know if he was actually real or not.

32:29 JJ: I’m going to assume…

32:30 KM: I’m not sure why we started talking about this.

32:31 JJ: I don’t either.

32:33 JB: I know exactly why, we were talking about why people create things.

32:36 KM: Oh, yes, yes, legacy, legacy.

32:37 JJ: And, reasons why people might develop an artificial intelligence.

32:39 KM: Legacy, legacy. So in that case, that guy doesn’t care about his legacy with Bitcoin.

32:43 JJ: Yeah.

32:44 KM: Because no one even knows if he’s even a real person or not.

32:45 JJ: That’s fair.

32:46 JB: Why is there no AR solution that touts its use of blockchain technology? Because from everything I’ve heard that would mean at least a tripling of the stock price for that company.

32:54 KM: You would think so.

32:54 JJ: Blockchain, Marxent.

32:56 KM: You’d hope so. You would hope so.

33:00 JJ: So, cryptocurrency aside…

33:01 KM: Can I make one…

33:01 JJ: Yeah, go ahead do it, let’s talk crypto.

33:05 KM: No, I don’t wanna talk cryptocurrency.

33:05 JJ: Crypto bros!

33:07 KM: I actually don’t know anything about cryptocurrency. I was gonna make another comment on the AI. Basically, all of our computer knowledge is deterministic; they’re all automata-based machines, and the logic machines are all deterministic.

33:18 JJ: Fancy abacuses.

33:19 KM: So theoretically, you could determine if the AI was going to determine that you needed to be destroyed.

33:28 JJ: You would know, there would be an error message. You’d make it throw an error message before it wants to kill you.

33:33 KM: Exactly! Exactly, so I think the flaw in the Terminator movies was that those developers weren’t looking at their debug consoles.

33:40 JJ: Yeah, they really need to be careful about debug.

33:42 KM: Where they had that flag turn out or someone…

33:44 JJ: Nobody’s looking at the log files.

33:45 KM: They built in release mode, I guess, instead of maybe debug mode.

33:47 JJ: They need a production environment separate from their test environment.

33:48 KM: Exactly, yes. Exactly, there should have been a fail safe.

33:52 JJ: This is the best conversation we’ve ever had on this podcast.

33:53 JB: I just wanna say, in watching the development of self-driving cars, the idea that they could rush and skip over reading some vital thing that might reveal that the AI was turning sentient and evil seems entirely possible and likely.

34:06 JJ: So here’s the thing, you’re basing this off the fact that self-driving cars have had some accidents, correct?

34:11 JB: No, I’m basing this off of a story I just read about the dude who left whichever company…

34:20 JJ: Was it Tesla?

34:21 JB: Left Google to go to Uber and resulted in those lawsuits…

34:22 JJ: Google to Uber. Goober.

34:24 JB: And it… There’s all these quotes from him talking about… Like the quote, which he denied, was that he was angry they didn’t have the first death, because that meant they weren’t moving fast enough to actually dominate the market.

34:36 KM: I would like to propose that…

34:38 JJ: That’s dark.

34:39 KM: I know, right?

34:40 JB: The current fatality, they say was the first fatality from a fully autonomous vehicle. I would say that wasn’t really the first autonomous vehicle.

34:47 JJ: What are we calling the first fully autonomous vehicle?

34:49 JB: I would consider the very first fully autonomous vehicle would be, there is no person inside of the vehicle, no one is in there.

34:55 JJ: So any car where someone is ghost riding the whip. No? You don’t know what that is? We’ll get into that later, we’ll do some urban dictionary for that later.

35:02 JB: Okay, that’s fine.

35:04 KM: Is that a…

35:04 JJ: It just means, ghost riding the whip, is when you get out of the car, and ride on it.

35:08 JB: Ride on it?

35:09 KM: I would say a person is not even associated with the car; the car itself made the choice to drive.

35:15 JJ: That’s a smart car.

35:16 KM: And to do this route. Which I don’t think… I guess in this case the car probably would make the choice to do this route. But there was a person in the car that was supposedly monitoring the vehicle, because the vehicle wasn’t finished yet.

35:26 JB: Right.

35:27 KM: If the vehicle had been released, there wouldn’t have been a reason to have a person in the…

35:31 JB: The initial report, by the way, is that it was not the car’s fault. That the person emerged from a shadow straight into the road…

35:38 KM: It was going too fast to stop.

35:39 JB: And there’s a question, the car never tried to stop. The car just plowed through the person. And there’s some question as to whether or not the driver, the rider, the tester, whatever you wanna call him, is somehow liable or at fault. Because they did not intervene in the process as it was happening, and this person is now dead.

35:58 KM: Right.

36:00 JB: From what I understand, through the initial reporting says…

36:01 KM: I would say they probably are.

36:02 JB: Because they stepped into the road and they didn’t have the right of way and it was dark and yadda, yadda. But it’s super interesting…

36:08 JJ: What did I just step into?

36:08 KM: I would suspect in this case it would fall into whatever purview it would be, if the car had not even been considered autonomous yet, and the guy was actually driving and plowed into them. What would have been the liability situation?

36:19 JJ: Right. I like how Ken is always technically correct, which is the best form of correctness.


36:23 JJ: Yes, yes, that’s right, technically correct. [chuckle]

36:24 KM: Is it the only form of correctness?

36:26 JJ: No, there are all kinds of other correct forms.

36:28 KM: Fake news.

36:28 JJ: I mean, whatever you wanna call it, there’s all kinds of other corrects. [chuckle] So, we were talking about AI…

36:35 KM: Yes.

36:36 JJ: We were talking about…

36:37 KM: We were trying to link in the AR search with the AI.

36:39 JJ: Oh yeah, so let’s talk about AR search and AI. Where do you see those intersecting, if at all?

36:44 KM: Right. My initial postulation was that it’s not really AR search, but that we’re using the phones to do just generic kinds of searches, going off [36:54] ____ to get to where we are now. Just using image-based searching, I think, again with the database that Google has. And I’ve been trying to think of an alternative to Google, people that may just have information about everything that’s ever been collected, but I think Google pretty much has cornered the market on that.

37:12 JJ: Yeah, they really have. It’s either that or some kind of state actor, right?

[overlapping conversation]

37:15 KM: That or the NSA.

37:16 JJ: Maybe the NSA has something like that, like DARPA.

37:18 KM: Exactly. That thing, exactly. But the other companies, so Apple for example, if they implement a search feature, they can theoretically leverage Google in the interim while they’re recording all of the data from people doing searches…

37:31 JJ: Sure.

37:31 KM: And Google’s only getting better now with their Google Assistant and Google Lens; people are doing searches. On those searches, you can actually rate the quality of the search, and I’m presuming, of course, they’re using that metadata to…

37:41 JJ: Right.

37:42 KM: To gauge the quality of their other algorithms.

37:44 JJ: To qualify whether or not their other stuff is working.

37:45 KM: Exactly, to refine it even more. But Google I think has a leg up and may emerge as, I’m not gonna say sole source, but basically like a Microsoft situation where everyone’s using PCs, everyone’s using Microsoft, but there are some other people.

37:58 JJ: Yeah, right. Your Yahoos, your Bings, your NSAs.

38:01 KM: Eventually over time others will reach perhaps that capacity, or not, I don’t know.

38:06 JJ: Google has to own this though…

38:08 KM: Yes.

38:08 JJ: Because if they don’t, their entire business model is in trouble.

38:10 KM: I think so.

38:11 JJ: The lion’s share of their money is based on advertising revenue built around search…

38:14 KM: That’s it. It is questions.

38:15 JJ: If they lose search to somebody else, that’s it for the company.

38:17 KM: That’s it, pretty much, yeah. And of course they have all that information anyway.

38:21 JJ: Sure.

38:21 KM: So they have the leg up. They have the leg up. Yeah. And I guess their initial business model was search, exactly.

38:25 JJ: Sure. And they also have the expertise, so… Yeah, you would expect them to sort of be in the lead on this. What you just said really resonated with me as a comparison to the Maps fiasco where…

38:33 KM: Who?

[overlapping conversation]

38:33 JJ: Apple tried to develop their…

38:34 KM: Apple Maps? Whoops.

38:35 JJ: Their own map to compete with Google and it sucked. And so… [chuckle] But Apple Maps was available. And now they…

38:40 KM: That’s true. They’re doing better.

38:41 JB: I don’t wanna say they have parity now, but they…

38:44 JJ: Nope. They don’t have [38:45] ____.

38:45 JJ: It’s fine, it’s totally usable. It really doesn’t matter.

38:47 KM: It is, and the more people use it…

38:49 JJ: Occasionally you drive into a lake.

38:49 KM: Exactly, exactly.

38:51 JJ: It’s just occasionally. Occasionally, my friend Hana drives through the Port of Tampa.

[overlapping conversation]

38:54 S?: Exactly. [laughter]

38:57 S?: But it will get better over time, the more people using it.

39:00 JJ: Right. But you described the same way that Apple just let… Google Maps was sort of this fill in until they could come up with their own solution, a similar thing will happen with AI search.

39:08 KM: A similar thing will happen. And also a recent development, since we brought up maps as a kind of searching: Google made the Maps API usable in Unity now, supposedly for VR and AR, and in their little demo video, they show people using maps to… It’s all games, basically. Their focus was on games, not like anything real.

39:28 JJ: Right.

39:29 KM: Not for anything actually useful.

39:30 JB: What do you need the real world for?

39:32 KM: Right. [chuckle]

39:33 JJ: Ken I don’t know if you’ve ever heard me say this, but I’m pretty sure that the gaming industry is the space race of the [39:38] ____, right, the teens, right? The idea that gamers are going to subsidize every piece of critical technology?

39:44 KM: I know.

39:45 JB: The new [39:45] ____.

39:47 KM: Basically, yes.

39:47 JJ: That’s my position.

39:48 KM: I find that to be very scary. I also find…


[overlapping conversation]

39:50 JJ: Hang on, hang on. Slow down, slow down.

39:52 KM: Simply because of the fact that the people doing this are not old enough to be making their own income to be doing this. [laughter]

39:57 JJ: So, you’re saying you don’t want your technological breakthroughs determined by people who don’t have any disposable income?

40:03 KM: Basically, yes.

40:04 JB: What could go wrong? Letting teenagers re-work the entire world.

[overlapping conversation]

40:05 KM: Also, minds of 13-year-olds, good night. [laughter]

40:11 JJ: We just talked about how Google Maps API is available though Unity, but what are they trying to do with it?

40:15 KM: That’s right. In their demo video, so this is Google of course, in their demo video, they’re basically showing people playing games using Google Maps where the application has used the map data, meaning the buildings and the streets, to substitute those buildings and streets…

40:34 JJ: With 3D modeling over them?

40:35 KM: With similar, yes.

40:36 JJ: Interesting.

40:36 KM: With similar virtual items. A building could still be a building, but maybe now it’s a castle, or it is a… I don’t know, you’re playing a game. It is [40:46] ____… I don’t play games.

40:46 JJ: It’s really easy… Hang on, hang on. We’re gonna do it like this. Everybody here knows what Skyrim is, everybody. It’s on every platform.

40:51 KM: I know Skyrim, yes.

40:52 JJ: Alright, so it’s Skyrim except instead of…

40:54 KM: [40:54] ____?

40:55 JJ: Yes, instead of the towns being generated by an artist…

41:00 KM: It is your town.

41:01 JJ: All of your models are based on the actual physical models of your town, and they bear some sort of resemblance to those things in terms of height, whatever.

41:08 KM: Names and… Exactly. You could have sign posts [41:09] ____ could pull from street names.

41:12 JJ: Wait. So the layout, like the actual layout is exactly the same.

41:15 KM: Google Maps.

41:15 JJ: But all of the buildings may be totally different?

41:17 KM: Could be, or they could be intermixed.

41:18 JJ: You could be in the Wild West town but it’s your city’s grid.

41:21 KM: Exactly. So you can navigate using the real world to your design.

41:25 JJ: Gotcha.

41:25 KM: It’s geo-located gaming.
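The substitution Ken describes, real map data reskinned with game assets while the street layout and names stay intact, can be sketched roughly like this in Python (the feature types, themes, and data shapes here are all invented for illustration; this is not Google’s actual Maps-in-Unity API):

```python
# Hypothetical map-to-game reskin: each real-world feature keeps its
# footprint, position, and name, but its appearance is swapped for a
# themed game asset. All names and themes below are made up.
THEME = {
    "office": "castle",
    "house": "cottage",
    "park": "enchanted_forest",
}

def reskin(features, theme=THEME):
    """Replace real building types with game assets, preserving layout."""
    game_world = []
    for f in features:
        game_world.append({
            "position": f["position"],            # real-world layout kept
            "label": f["name"],                   # street/building names kept
            "asset": theme.get(f["type"], "generic_ruin"),
        })
    return game_world

town = [
    {"name": "Main St Tower", "type": "office", "position": (0, 0)},
    {"name": "Elm Park", "type": "park", "position": (3, 1)},
]
world = reskin(town)
# → a castle at (0, 0) and an enchanted forest at (3, 1)
```

The point of the design, as in the hosts’ Skyrim analogy, is that navigation skills transfer: the player walks their real city’s grid even though every building looks different.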

41:27 JJ: And then it’s a short step, it’s a short step to recognizing objects that are moving in reality and then creating things that go over top of those things, like other human beings or cars, or whatever.

41:38 KM: Theoretically, yes. In an AR experience that would be a little more difficult because then you would have to be able to… You’d have to be able to get the depth of things…

41:47 JJ: Yep.

41:47 KM: That you’re doing in AR to make sure that person was still on top of the fake things.

41:50 JJ: Just throw some LIDAR on your headset, it’s fine.

41:51 KM: Exactly, yes. Maybe a magic leap?

41:55 JJ: Yeah, maybe. But that’s why I said it’s a short leap.

41:58 KM: Or maybe a head-worn thing.

42:00 JJ: When I say it’s a short leap, I mean that it is conceptually possible…

42:00 KM: Yes, yes.

42:01 JJ: Which is, I guess, exciting. I don’t know.

42:03 KM: This is the best kind of possible.

42:04 JJ: Yeah. [laughter] Are you gonna get your hands on a Magic Leap developer kit?

42:10 KM: I can neither confirm nor deny because…

42:13 JJ: What would that Magic Leap do?

42:14 KM: That’s a yes. Maybe because, I don’t actually know if I will ever actually do that.

42:18 JJ: I see. I gotcha. [laughter]

42:20 KM: Because no one knows what Magic Leap is actually doing. For all I know, Magic Leap could literally be released tomorrow. You know what I mean?

42:26 S?: Right. Or next year.

42:28 JJ: Did you know that I’m going to release Magic Leep tomorrow? [chuckle]

42:30 S?: That’s… Wow.

42:31 JJ: That’s my big plan. I’m gonna go public soon, the whole thing. [chuckle]

42:34 S?: I look forward to that.

42:35 KM: I think you should start a new company called Magic Leap, but maybe one of the other letters is capitalized or not capitalized.

42:39 JJ: L-E-E-P. Just L-E-E-P. Magic Leep.

42:41 KM: Exactly.

42:43 JJ: That’s great. I love it. Alright, so we’ve established that the Google Maps API is integrating with Unity…

42:50 KM: That’s right.

42:50 JJ: In some very interesting ways.

42:51 KM: Yes.

42:52 JJ: Not currently necessarily for enterprise or business use, but I’m sure there’s something…

42:57 KM: Entertainment.

42:57 JJ: Yeah, entertainment.

42:58 KM: And then they’re marketing more through ARCore. Of course, things are coming out where you’re able to put things in the world and stuff.

43:00 JJ: Of course, yeah.

43:03 KM: They wanna unify…

43:05 JJ: Alright.

43:05 KM: Their platform, their [43:05] ____…

43:06 JJ: We know you’re into MxT plus the ARKits and Cores for increased initialization time, or decreased initialization times. I’m assuming, based on the amount of engagement you’ve had with us about AI and object recognition, those are exciting topics for you. What’s your final exciting topic of the day?

43:24 KM: We can talk more about web AR for apps.

43:26 JJ: Oh, let’s talk about Web AR. WebGL or web…

43:28 KM: Those will be the last. I don’t know how long you guys have. I could literally be here ’til like six o’clock, for another 45 minutes.

43:34 JJ: Well on web AR, man… I’m on your time, Ken. You’ve got 15 seconds.

43:37 KM: On any topic… [laughter]

43:38 JJ: No, I’m kidding. Okay. [chuckle]

43:39 KM: Anytime we’re talking about [43:39] ____.

43:40 JJ: The most rockest podcast. [chuckle] It’s great.

43:43 KM: Yes. So another… Since we’re on the AR theme…

43:46 JJ: So, web AR.

43:47 KM: Is the Web AR, yes. Going along with Google releasing their ARCore 1.0, they’ve also been showing, I don’t wanna say showcasing, but they’ve been showing their forays into using these AR libraries in a web browser on the mobile device.

44:04 JJ: When you say these AR libraries, what are you specifically referring to?

44:06 KM: ARKit and ARCore.

44:08 JJ: Okay, I gotcha.

44:08 KM: Both on the iPhone and also on the Android device.

44:11 JJ: For using Chrome or whatever.

44:12 KM: Exactly. Or Safari, theoretically. Technically it’s not in any actual browser yet, they’re just experimental browsers.

44:18 JJ: Oh interesting.

44:18 KM: So there’s no browser that actually supports it currently. The trick they’re using now to make the AR possible in the browser, is that the browsers themselves are running the library.

44:30 JJ: Got it.

44:30 KM: The browser is running the ARKit, the browser is running the ARCore…

44:32 JJ: Yeah.

44:33 KM: And it’s showing the video frame in the web panel of the browser, and then the web page loads on top of that.
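The layering Ken describes can be sketched schematically (this is not real browser internals; every function and name here is invented for illustration): the experimental browser itself runs the native tracking library, draws the camera frame, then composites the page’s content on top using the tracked pose.

```python
# Schematic of the experimental-browser approach: draw layers
# back-to-front, camera feed first, web content on top, with the web
# layer anchored using the pose the native tracking library computed.
def composite_frame(camera_frame, page_content, tracked_pose):
    """Return draw layers in paint order for one frame."""
    return [
        ("video", camera_frame),            # live camera feed underneath
        ("web", {"content": page_content,   # page DOM layered on top,
                 "anchor": tracked_pose}),  # positioned via native tracking
    ]

layers = composite_frame("frame_0", "<div>AR label</div>", (1.0, 2.0, 0.5))
```

The key architectural point, as Joe notes next, is that the tracking work happens outside the page’s scripts entirely, which is why this isn’t “strictly a web browser” doing AR.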

44:40 JJ: Yeah, but it’s not strictly a web browser. That’s not a… Yeah, it’s a different thing.

44:42 KM: Exactly. So it’s not just purely…

44:44 JJ: An acute dodge.

44:45 KM: Exactly. [44:45] ____ web browser up and it’s doing to the content. The browser’s specifically made to be running those tracking libraries kind of in the background.

44:51 JB: It’s essentially a trojan horse method.

44:53 KM: Basically yes. [laughter]

44:54 JB: For getting AR on…

44:55 KM: That’s basically it.

44:56 JJ: That’s always the best way to refer to software.

44:57 JB: Yeah.

45:00 S?: I mean, but that… Yes. [chuckle] The…

45:00 KM: Exactly.

45:01 JJ: Uncomfortable associations aside…

45:03 KM: Exactly. They say this is just kind of a prototype, to basically allow people to experience and play around with it on the web, but that the actual protocols and paradigms for doing it…

45:18 JJ: Will have to be built.

45:18 KM: Exactly. Would have to be discussed among the major browser corporations, [45:24] ____ and people who are in charge of these things for what may be the best way to do that.

45:30 JJ: That means it’s a ways off.

45:30 KM: Well [45:30] ____ Trojan Horse, anything web-based, there’s always a security thought process. You don’t wanna have any vulnerabilities associated with…

45:37 S?: And for the right machine.

45:38 JB: I look forward to seeing how this gets hacked.

45:39 KM: Exactly.

45:40 JB: It’s gonna be amazing.

45:40 KM: [45:40] ____ core library, exactly. And you don’t necessarily want people to be able to spoof or fake the ARKit or ARCore for nefarious reasons of getting around web page content or things of that nature.

45:53 JJ: Is this similar to the original, when smartphones first arrived, the debate between the web app and the dedicated smartphone app? Similar…

46:01 KM: I would consider it to be very similar, yes. How much is the browser doing? And then how much is the browser interpreting? So browsers are just interpreters of the scripts that run the webpage. So how much is the browser actually permitted to do versus interpret from the scripting commands.

46:20 S?: Okay.

46:20 JJ: Okay. And it seems to me that the web app versus dedicated app conversation, the final answer was, it depends on what you wanna do. There are certain things that dedicated apps are better for. There are certain things that a web app is great…

46:33 KM: Exactly.

46:33 JJ: And totally passable. Is that sort of what you see happening here?

46:36 KM: The current limitations with a lot of the web browsers, especially on mobile, are that they are basically script interpreters. They’re interpreting the web page data. They’re slowly progressing this way, but currently, you can’t do a lot of multi-threading in browsers or on web pages. Your performance is a little bit limited, because it’s a scripted environment. Anyway, you’re doing either compiling, what Java [47:05] ____ compiling, which gives you better performance if you actually compile the script into native code that the device can run locally. But then you have to have access to the hardware. So WebGL, I think, has done a good job at this with their API, where you can access, of course, the graphics card, the GPU, through the web browser interface, and they’ve had to tackle all of the security issues…

47:27 JJ: The multitudinous challenges of actually doing it.

47:30 S?: Yeah.

47:30 KM: Exactly. Allowing people to access hardware from a web page.

47:34 JJ: Yeah.

47:34 KM: So all the API and levels of integration associated with that.
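The interpret-versus-compile distinction Ken is drawing can be sketched with a loose analogy in Python, whose runtime also parses script source into an intermediate compiled form before running it. This is only an illustration of the general idea, not how a real JavaScript engine works internally:

```python
# Loose analogy to how a browser handles page scripts: the source text is
# parsed and compiled into an intermediate form first, then executed.
# (Real JS engines go further and JIT-compile hot code to native machine code,
# which is the performance gain Ken describes.)
src = "result = sum(i * i for i in range(10))"

# Compile step: source string -> code object (parsing happens here too).
code = compile(src, "<page-script>", "exec")

# Execute step: run the compiled form in its own namespace, much as a browser
# confines a page script to its own environment.
ns = {}
exec(code, ns)
print(ns["result"])  # 285
```

The two-phase shape is the point: the cost of parsing and compiling is paid once, and the compiled form is what actually runs.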

47:37 JJ: Would you say that WebGL is mature at this point? Or does it have a ways to go?

47:40 KM: I would say WebGL is probably mature at this point, in that it’s being used enough that they’re aware of the issues at hand and what needs to be done to improve it and make it better.

47:54 JJ: What are you excited about as far as the implementations of Web AR? What applications does it have that you’re excited about?

48:00 KM: So I suppose you don’t have to download an app. Right? You do it from the web page.

48:03 JJ: Well that is nice. I do like that.

48:05 KM: Yeah. You’re using mobile browsers. We were talking about Wayfair and IKEA and Macy’s and Ashley’s…

48:08 JJ: Yeah. So let’s say…

[overlapping conversation]

48:08 KM: All these people [48:10] ____.

[overlapping conversation]

48:12 S?: And it’s very hard for a brand to get you to download… Not only download the app but actually use it. I have the Wayfair app. I downloaded it…

48:21 JJ: I can tell you on no hands, how many apps that I actually downloaded from [48:23] ____. [chuckle]

48:23 S?: I have Wayfair and IKEA Place both on my phone from when they both came out in September, and I have it over… And Wayfair updates constantly. And every time it updates I’m like, “Why is this still on my phone? I have to delete this.”

48:36 KM: And then you…

48:36 S?: But every once in a while I pop it open and show people what it’s like, I look at furniture and so I keep it. But yes it’s not a… Because it’s shopping for furniture, it’s not something I’m necessarily doing every day. It’s something I do maybe once or twice a year or something like that. So a browser makes a lot of sense because I don’t have to have something, I could just go to your page and now I have access to all those tools.

49:00 JJ: Okay, so we’ve established why it would be totally awesome to be able to do it from a webpage. How far off would you think it is? Just make a wild-ass guess.

49:06 KM: So it is…

49:06 JJ: A wag, if you will.

49:08 KM: Well, so the hurdles associated with doing it, like we just talked about, where we’re just getting it basically adopted by the browser creators themselves, are kind of what we have already been talking about in general, particularly with saving. So basically, anything you could do now in an app. You have a lot more facilities in an app ’cause you can save stuff to the device, [49:31] ____ local storage, you can just put stuff in the app itself, just embed it in the app to give you extra data or information, or just algorithms in general that maybe you wanna process more.

49:41 JJ: Okay.

49:43 KM: And especially in an app, you can do GPU processing, with something like Metal on an Apple device.

49:49 S?: I was gonna mention this as well, processing on a cloud somewhere…

49:53 KM: Exactly.

49:54 S?: So that it doesn’t have to be done on the device.

49:56 JB: Right.

49:56 KM: We were talking about the saving. For example, the two current companies that I’m aware of are doing all their processing in the cloud, and nothing is stored on the device. That in itself lends better toward a web AR experience, ’cause you’re already on the web using it, as opposed to an in-app experience. But I can only presume and hope that Google and Apple are both working on their own facilities for allowing developers to save maps generated by ARKit and ARCore. And in that case, the developers need to figure out how they’re distributing them over the web. User accounts, I would presume. Doing that can be easily facilitated through a user account, maybe through an Ashley website or a Macy’s website.
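A hypothetical sketch of the arrangement Ken is presuming, with all names invented: maps scanned on a device get saved to a server-side store keyed by user account, so a later web AR session can fetch the same room without the device keeping anything locally. Nothing here is a real Apple or Google API.

```python
# Hypothetical sketch, all names invented: a cloud-side store for AR world
# maps, keyed by user account and room, along the lines discussed for saving
# ARKit/ARCore-generated maps and distributing them over the web.
class WorldMapStore:
    """Holds raw map blobs server-side so the device keeps nothing locally."""

    def __init__(self):
        self._maps = {}  # (user_id, room_name) -> raw map bytes

    def save(self, user_id, room_name, map_bytes):
        # A scan uploaded from the device replaces any earlier map of the room.
        self._maps[(user_id, room_name)] = map_bytes

    def load(self, user_id, room_name):
        # A later web AR session fetches the saved map; None if never scanned.
        return self._maps.get((user_id, room_name))


store = WorldMapStore()
store.save("shopper42", "living-room", b"feature-point-data")
restored = store.load("shopper42", "living-room")  # b"feature-point-data"
```

Keyed this way, a retailer’s website account becomes the natural distribution point Ken mentions: the map follows the user, not the app install.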

50:35 JJ: So it’s gonna be, what, 2021? Yeah, I was gonna say, it might not be 2021, but it definitely seems like the way it’s going to end up going. I know that the way that people shop is definitely not planned out. People are not visiting… They’re not downloading apps ahead of when they need things. That’s not how anybody behaves. It seems that web AR, for the purposes of what we do, is going to become more important. Are we looking forward to that? Are we moving towards that at all here?

51:04 KM: Well, we of course have an offering for, like, [51:05] ____ the home builder, which I could see. So I have a house, and I’m not looking to make a new house or buy a new house.

51:13 JJ: Yeah.

51:14 KM: But I could see as a person that may add benefit for me in that scenario or [51:17] ____…

51:19 JJ: Well beyond…

51:20 KM: In that particular instance. But as far as apps go, I’m using Google Maps every single day, I’m using messaging apps. It’s not like, okay, I need this AR app for my life.

51:28 JJ: Yeah so back to what I was saying, we do have empirical… Well we have empirical evidence I think that…

51:35 JB: Which is the best kind of evidence.

51:37 JJ: The best kind of evidence. I don’t believe in anecdotal evidence. Let’s just stick to the empirical stuff. That having access to AR and VR tools in a retail location is significantly more effective than engaging people at home with those same tools. Because they’re already ready to buy, they’re already higher up in the funnel, and they need a tool to help them visualize, not, “Oh I wanna screw around in my house, designing my house.” And I think that’s…

52:00 KM: Exactly.

52:02 JB: Yeah. I think that Amazon has given a lot of retailers sort of this false… People use Amazon, the Amazon app almost as an entertainment portal.

52:12 KM: What?

52:12 JJ: Yes.

52:13 KM: Window shopping.

52:13 JJ: I’m on it. People are on it, not me. People are on it at night window shopping, exactly. They’re just scrolling products. “I just saw this thing on TV, let me look.”

52:19 JB: What is wrong with you, people?

52:20 JJ: They have Prime subscriptions. I can have it in three hours. It doesn’t even matter.

52:24 KM: In participating locations.

52:26 JJ: Consumer culture is more than just a phrase, it’s a thing. And so I think that everybody sees what Amazon has been able to build and they’re like, “I want that too.”

52:34 JB: I want that.

52:34 JJ: But the fact of the matter is most shopping is spur of the moment, it’s when inspiration strikes, it’s “I suddenly got a wild hair up my ass… ”

52:41 JB: It goes back to our MxT initialization conversation: if your app doesn’t start fast enough after they download it, they’re like, “Screw this, I’m not even gonna bother, I’m gonna go find something else to do.”

52:49 JJ: So as a retailer you need to be available when the person… When the lightning strikes.

52:54 JB: When they’re ready.

52:55 JJ: You need to be there. It does seem like a web app.

52:58 JB: Then it does seem like Web AR is the natural evolution of that, yeah.

53:01 KM: I think you made a very good point talking about the retailers though, in that I think we will probably see the most gains in AR/VR usage in professional settings, as opposed to in-home usage or app usage.

53:18 JJ: There are all sorts of reasons to use virtual reality and augmented reality in…

53:23 KM: Professional space. Sure.

53:24 JJ: Let’s say you’ve got a collaborative VR space and your 3D modeler, and you wanna critique in real-time. You’ve got… Everybody is in the same space, they’re all seeing the same virtual images. You can do virtual markups, stuff like that. It’s just easier to do those things. Telepresence is so much easier in virtual or augmented reality, provided the processing capability is there and all that stuff, and it’s a smooth experience. There are less uses for it at home. I don’t wanna be inundated with advertising in my house. I don’t want to necessarily invite every brand into my house to have a conversation about my decor. So, yeah.

54:00 S?: They’re all knocking.

[overlapping conversation]

54:01 JJ: I know they’re all knocking. They’re like vampires, if you don’t let them in, they can’t come in.

54:04 KM: Yes. I think a lot of the sci-fi shows out there kinda show the dystopian future where [54:08] ____.

54:10 JJ: Altered Carbon, everybody go watch it on Netflix.

54:12 KM: Exactly. But I can only presume that, in that universe, the people felt that they were getting also some benefit from allowing that to happen. So like… Now it has devolved in a sense that they’re so used to it that it’s just everywhere, but at the time, they had to have been getting some benefit in return for [54:30] ____ occur.

54:30 JJ: As someone who consumes a lot of dystopian fiction, one of my favorite reasons for ubiquitous AR is the world is dingy and sucks, and if you can put a nice digital patina over everything, everybody just feels a little bit better about what they’re looking at.

54:42 JB: And I would add the gamification of everything. So it’s “Life is a game, we’re playing life now”, and “Oh I gotta”…

54:50 JJ: And on the flipside, everybody has pop-up blockers for the very same thing.

54:54 KM: So I always presumed it was like a Magic Leap scenario, where I have Magic Leap, and now because I have a Magic Leap, Magic Leap is selling space in the real world to put this advertisement. It knows you’re there, an advertisement comes up, and Magic Leap is making a billion dollars because everyone likes to use their Magic Leap device…

55:09 S?: Yeah, situational advertising.

55:11 KM: The advertisement is gonna come up.

55:13 JJ: What I think it really means is that retail is not really dead.

55:16 JB: No, just evolving.

55:18 JJ: It just means that people are going to browse less and…

55:21 JB: Or differently.

55:22 JJ: Or differently yeah. Like if you have a purpose and you’re like “Well I know I need furniture, but I don’t know exactly what I want, I’ll go to place where I know I can buy it, and they’ll probably have enough for me to look at that I can make a decision.” And they’re gonna be able to do it faster than before, and it’s gonna require fewer returns, ’cause people are gonna know what fits, they’re gonna know what looks right, etcetera.

55:40 JB: The other thing to remember is if we end up in more of a club or rental sort of culture, a lease culture, the idea that instead of buying a car, I just join the car club and I have access to a car whenever I need it.

55:53 KM: The car drives to you [55:53] ____.

[overlapping conversation]

55:53 JB: Subscription models.

55:54 JJ: Yeah, that’d be nice.

55:54 JB: The same way you used to buy videos but now you use Netflix or whatever; people are trying to bring that to everything: housing, transportation, etcetera.

56:01 JJ: I still buy videos.

56:04 KM: Yeah.

56:05 JJ: I just bought Rogue One.

56:06 KM: The subscription model is definitely the prevailing model.

56:10 JB: I started giving away all of my physical stuff.

56:11 JJ: Well, that’s great. Subscription models are great for software developers or whoever’s selling the software, because you get that evergreen money, I mean I get it.

56:19 JB: My point of bringing that up though was just the idea that we look at purchases right now in a very set way because we are buying the thing forever and it’s ours and we’re not gonna give it back. If it’s just a rental, I just check it out, “Ah, bring it, ah, I don’t like it, I’m gonna send it back in two weeks and get another one [56:34] ____.

56:34 JJ: Apple’s gonna try to sue me for jail-breaking their iPhone. I clearly don’t own anything.

56:37 JB: That’s… You don’t, yes. [chuckle]

56:38 JJ: Yeah. Leasing their software.

56:40 JB: At the end you can’t take it with you, Joe.

56:42 JJ: You’re not wrong about that. So I guess in effect I’m leasing everything.

56:46 KM: That’s true. [laughter]

56:46 JJ: Yep. That’s right. Well, anyway…

56:49 JB: On that note… [laughter]

56:50 JJ: Yeah, on that note, hey, everybody, your personal possessions are an illusion and we’re all gonna die.

56:55 KM: Death is inevitable.

56:56 JJ: And I’m Joe Bardi. And in reality, I’m Joe Johnson.

57:00 JB: And I’m Joe Bardi.

57:00 JJ: And that’s also this Ken Moser guy.

57:02 JB: Yes, thank you Ken.

57:02 KM: Are we recording?

57:03 JJ: We are still recording.

57:03 JB: We are still live.

57:05 JJ: Thank you Ken, for appearing and for sharing your incredible wealth of knowledge.

57:09 KM: No, thank you, Joe.

57:10 JB: It’s always fun to be…

57:12 JJ: You’re so smooth.

57:13 JB: Who? Ken?

57:14 JJ: Yeah.

57:14 JB: Yes, yeah, that’s right.

57:15 JJ: I thought you meant me for a second. I was like, “No”.

57:16 JB: And then the stinger.
