Pokemon, Google and the rise of Augmented Reality wearables
It may not sound like it, but I promise, all of these are related
You’re walking in the park, casually swiping around on your phone, looking for the telltale rustling leaves, ignoring the alarming rate at which your data is draining, and checking how many more steps you have before your egg finally hatches.
On your screen is what every Pokemon master dreams of. You’ve been looking for months, but finally, you’ve come across a Mew, only a few steps ahead of you. Then you look up, make eye contact with the person across from you, and you know that they, too, see the Mew. So you start running, frantically trying to capture the Pokemon.
Such is a day in the life during the 2016 Pokemon GO! craze, a time that saw interest in augmented reality spike enormously.
You’ve probably heard of augmented reality (AR), whether around Pokemon GO! or around Google’s failed attempt at AR wearables, Google Glass (more on that later). But hearing about something doesn’t necessarily mean you know what it is. So, first things first: what is augmented reality?
AR in a nutshell
Augmented reality comes in many different forms, from the Pokemon GO! example to Snapchat filters. Put simply, AR is any computer-generated imagery that interacts with the physical (“real”) world around it.
How it works
Now, for the technical bit, how does AR work?
AR is only possible because of a lot of sensors. You have gyroscopes (which measure rotation), cameras, and accelerometers (which measure acceleration). Based on the data from these sensors, the image in front of your eyes or on your phone changes with your movements and your surroundings.
This is a concept called tracking: the reason the AR device knows to change an object’s size based on your distance, or to hide it if it moves behind a solid object. Tracking comes in two forms: an outside-in model or an inside-out model.
- Outside-in tracking means that the system’s sensors are housed outside of the device, and then the data is transmitted to the AR device. This could mean cameras set up around the room.
- Inside-out tracking means that the sensors are within the device itself, which receives all of its data internally. This is the kind of AR we typically refer to: the kind that uses your phone’s sensors, or even a bulkier wearable headset such as the Microsoft HoloLens.
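To get a feel for how those sensors work together, here is a minimal, purely illustrative Python sketch of a complementary filter, one common way devices blend a gyroscope (fast but drifts over time) with an accelerometer (noisy but drift-free) to estimate tilt. The readings and the alpha constant below are made up for the example:

```python
# Minimal sketch of sensor fusion for one axis of device tilt.
# A complementary filter blends the gyroscope (fast but drifts) with the
# accelerometer (noisy but drift-free). All values are illustrative.
import math

def fuse_tilt(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Estimate tilt in degrees from a gyro rate (deg/s) and accelerometer axes."""
    gyro_angle = prev_angle + gyro_rate * dt                  # integrate rotation rate
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))  # gravity direction
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
# Simulated readings: the device tips forward at 10 deg/s for one second
for _ in range(100):
    angle = fuse_tilt(angle, gyro_rate=10.0, accel_x=1.7, accel_z=9.7, dt=0.01)
print(round(angle, 1))  # estimate settles between the two sensors' answers
```

The filter trusts the gyroscope in the short term and lets the accelerometer slowly correct the drift, which is why the final estimate sits between the integrated gyro angle and the gravity-derived one.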
Even more tracking terminology
Outside-in and inside-out tracking are only the beginning, though. Inside those two categories there are, shockingly, even more categories. So let’s dive in, shall we?
- Simultaneous Localization and Mapping (aka SLAM): Basically, the AR device creates a map of the space by figuring out the approximate locations of key objects in the room and the spatial relationships between them.
- Marker-based AR: In this type of tracking, virtual objects are tied to a specific part of the physical world. For example, if you had a virtual coffee mug, it would be anchored to a specific point on the physical table. A marker is just any easily recognizable object registered by the AR device.
- Markerless AR: This type of tracking is basically the opposite of marker-based AR: virtual objects don’t need to be anchored to a specific point in the physical world.
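The marker-based idea (the coffee mug anchored to the table) can be sketched in a few lines. This is a toy model, not any real AR framework: virtual objects are stored as offsets from a marker, so when the marker is re-detected somewhere else, everything attached to it follows.

```python
# Toy sketch of marker-based anchoring: virtual objects are stored as
# offsets from a marker, so when the marker is detected at a new position,
# every attached object moves with it. 2-D positions, purely illustrative.

class Marker:
    def __init__(self, position):
        self.position = position   # (x, y) in world space
        self.attached = {}         # object name -> offset from the marker

    def attach(self, name, world_pos):
        ox = world_pos[0] - self.position[0]
        oy = world_pos[1] - self.position[1]
        self.attached[name] = (ox, oy)

    def world_positions(self):
        return {name: (self.position[0] + ox, self.position[1] + oy)
                for name, (ox, oy) in self.attached.items()}

table_marker = Marker(position=(2.0, 1.0))
table_marker.attach("coffee_mug", world_pos=(2.5, 1.0))  # mug 0.5 m to the right

table_marker.position = (4.0, 3.0)     # the marker (table) is re-detected here
print(table_marker.world_positions())  # the mug moved with its anchor
```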
But just being able to render a real-time image is not enough. In order to blend seamlessly with the real world around it, AR needs to obey the same principles that physical objects do. This includes:
- Place: This basically means that unless directly moved, objects need to remain in the same place, even if the user moves around the room.
- Scale: Essentially as the user moves closer or farther away, the AR object changes its size to fit the new point of view.
- Occlusion: This is when an object in the physical world blocks an AR object, or vice versa. The AR object has to change its appearance to fit in. For example, if you were looking at an AR painting and there was a physical lamp in front of it, certain portions of the painting would not be visible.
- Lighting: Just like in the real world, depending on the lighting and the amount of light available, the shading on objects appears different.
All of these concepts come together to mesh both physical and virtual aspects to create one enhanced world.
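Two of those principles, scale and occlusion, boil down to simple math. Here’s an illustrative sketch (the focal length and distances are made-up numbers, and real engines do this per pixel with depth maps):

```python
# Sketch of two AR principles: scale (apparent size falls off with distance,
# via perspective projection) and occlusion (a nearer physical object hides
# a farther virtual one). All numbers are illustrative.

FOCAL_LENGTH_PX = 800  # assumed pinhole-camera focal length, in pixels

def apparent_size_px(real_size_m, distance_m):
    """Perspective projection: on-screen size shrinks as distance grows."""
    return FOCAL_LENGTH_PX * real_size_m / distance_m

def visible(virtual_depth_m, physical_depth_m):
    """Depth test: draw the virtual object only if nothing real is in front."""
    return virtual_depth_m < physical_depth_m

painting_px = apparent_size_px(real_size_m=1.0, distance_m=2.0)
print(painting_px)                                         # 400.0 pixels tall
print(visible(virtual_depth_m=3.0, physical_depth_m=1.5))  # lamp in front -> False
```

Double the distance and the painting renders half as tall; put a physical lamp closer to the camera than the virtual painting and that region fails the depth test, so it isn’t drawn.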
Now that you have a basic understanding of augmented reality, let’s talk about some cool contributions to the field. In the past few years there have been a few major players, namely Google and Facebook. Here’s what they’re doing:
Google has worked, and is still working, on a variety of different projects involving AR. A few of them have really taken off and some of them…not so much.
- First up is an initiative that intends to link Google search results to AR. With this, you could search for the word ‘shark’, for example, and by clicking a button, your screen would show a shark in AR.
- Additionally, Google is working on image recognition software called Google Lens that lets users get information about whatever their camera sees. This, of course, uses the power of neural networks to figure out exactly what the image is.
- Another area is ARCore, Google’s open development platform for augmented reality developers to create their own AR experiences.
- Facebook has created AR Studio, a drag-and-drop style platform that lets users create their own AR experiences while applying complex real-world logic with just a click of a button.
Another area that has really taken off is wearables. Wearables are exactly what they sound like: unlike your phone, this is AR that you can, well, wear. Generally it comes in the form of a headset or, when the tech gets scaled down, even glasses. This is one of the most exciting areas of AR, because one day this tech will be so applicable that it permeates literally all parts of our lives, from school to work to play. We’re still a long way from that vision, but here’s what we have now:
Ah, what to say about Google. Google’s first foray into wearables was a 2014 release called Google Glass. The idea behind Glass was that it would be a pair of smart glasses navigated via voice commands and natural language processing (NLP). Eventually, the tech would hopefully become good enough to replace the need for smartphones altogether, which is a really cool, industry-changing idea. It’s almost 2020, nearly 6 years after the original release, so why isn’t everyone walking around with a pair of Google Glasses?
Well, that’s because it turned out to be a total flop.
Google overestimated the public’s willingness to adopt this technology; the world really just wasn’t ready. The issue that overshadowed all of Glass’s potential was privacy.
When Glass was originally released, it had a camera feature that allowed it to record whatever the user saw in front of them…except there was no indication that the device was recording. Yeah, see the issue? People didn’t take very well to the idea of being recorded without their knowledge. Google quickly modified the device to show a red light when the camera was in use, but by that time it was too late: people had already been turned off from using it.
Later, in 2015, Glass was discontinued for public use. Any future venture into smart glasses will have to take into account how important privacy is to people and find a way to address it.
That isn’t to say Glass failed completely: the tech has found a home in industrial environments, in use cases such as letting the wearer check the specifications of a given object with just a look and a voice command. Remember Google’s image recognition software? It’s all coming together now.
Microsoft has one of the coolest devices for AR developers on the market today: the Microsoft HoloLens.
The HoloLens is a head-mounted device (HMD) that packs all of its sensors and computing onboard. Remember the concept of tracking? Well, the HoloLens is the perfect example of inside-out tracking. Anyway, let’s go over some of its coolest features:
- Holograms: One of the coolest things about the HoloLens is its ability to produce holograms. Yes, you heard that right, holograms…well, kind of. Of course, the whole idea of AR is being able to view virtual images in the real world, but the HoloLens actually allows you to manipulate those objects via hand gestures, which are recognized by the various sensors and cameras on the device.
- Eye-tracking: Another thing that makes the HoloLens stand out is its eye tracking. Each HoloLens is calibrated to a specific user’s gaze, which lets the device tell where you’re focusing and render full detail only in that area, just like our eyes naturally do.
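That “render full detail only where you’re looking” idea is often called foveated rendering. Here’s a toy sketch of the concept, not Microsoft’s actual implementation; the screen coordinates, radius, and quality floor are all invented for illustration:

```python
# Toy sketch of gaze-contingent ("foveated") rendering: pixels near the
# user's gaze point get full detail, pixels farther away get less.
# The radius and minimum quality floor are made-up example values.
import math

def detail_level(pixel, gaze, full_radius=100.0):
    """Return a render-quality factor in [0, 1] based on distance from gaze."""
    dist = math.dist(pixel, gaze)
    if dist <= full_radius:
        return 1.0                        # foveal region: full resolution
    return max(0.25, full_radius / dist)  # periphery: progressively coarser

gaze = (640, 360)                      # where calibration says you're looking
print(detail_level((650, 360), gaze))  # near the gaze point -> full detail
print(detail_level((640, 760), gaze))  # far in the periphery -> coarse detail
```

Rendering the periphery at lower quality saves a lot of GPU work without the user noticing, which matters on a battery-powered headset.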
The HoloLens, for all its cool features, is not the future. It’s bulky, ugly, and not very portable; it’s largely meant for industrial/work use, not day-to-day activity. So let’s look at an option that is more consumer-friendly.
Focals by North is an interesting one. This company has been working on smart glasses, but unlike Google Glass and other competitors, they’re doing something different: they’re making their glasses stylish. Oh wow, right? Why is that a big deal exactly? As superficial as it sounds, especially compared to a heavy-hitting device like the Microsoft HoloLens, style is extremely important if AR wearables are ever going to become mainstream. As sad as it sounds, it doesn’t matter how cool something is; if people look like dorks wearing it, they are not going to buy the product.
That’s why what North is doing is so important. Most of the companies interested in AR wearables haven’t focused on this aspect of their products, and that is one reason those products are unlikely to take off.
That’s not to say North’s glasses are amazing; the tech in them can be likened to that of a smartwatch. They have limited capabilities, few apps, and really amount to just a heads-up display at this time. But the possibilities of this technology and the company’s ideas make North one to look out for.
But why should this matter to you?
Now, I realize that was a lot of information. So let’s make clear what all of this actually means: AR and AR wearables are going to completely change how we look at entire industries.
- Education: Imagine a world where your classroom is half virtual. I wrote an article on this, check it out here.
- Workplace: Instead of relying on your computer for notifications, imagine them just popping up before your eyes.
- Manufacturing: AR is already having a significant effect on this industry; the workplace is becoming a lot safer because workers can view information, such as the specifications of a part, without averting their eyes or diverting their attention from the task at hand.
Limiting factors for wearables
So right now what’s keeping us from all rocking our very own computers on our faces? Well, lots of things:
- Cost: Currently, wearables are expensive, but with emerging technology the cost is always an issue at first. As the tech scales down, so will the cost.
- Hardware/Size: The hardware a wearable needs for inside-out tracking takes up an enormous amount of space. That currently leaves companies with two options: leave out the awesome stuff we expect from AR, or make a really bulky device like the Microsoft HoloLens.
When the sensors and cameras can be scaled down, that’s when the real magic will happen and you can finally toss out that outdated block we call a smartphone.
Key takeaways
- Currently, our AR is still in its early stages with things like Pokemon GO! and Snapchat filters
- Tracking is the reason AR is possible
- Companies like Google and Facebook are making development environments for AR developers
- Wearables are going to be a big area of AR, and companies like Google, Microsoft, and North are doing cool work in the space
- AR is disrupting and will disrupt almost every industry in some way
- Cost and hardware space are two areas that need to be addressed before AR wearables hit the streets