The future of augmented reality: Use cases & how we get there
When you think about Extended Reality, you might envision something like Ready Player One.
Everyone is wearing VR headsets, immersed in a digital world and ignoring the real one.
VR will have some amazing applications, especially for training, where total immersion is key. But VR is not a social experience for the people in the real world around you. In fact, it’s very isolating.
Augmented Reality offers an enhanced overlay on our real world, aka “meatspace.” How much it takes over your view of the world will depend on the use case. When driving, you can expect a guided line projected onto the road in front of you.
Relevant signs will be highlighted so you can pay extra attention to the most important aspects of your drive. Imagine a theme park and ditching that cartoonish map for a clean AR overlay. Many theme parks already support mobile apps that track FastPass times, show schedules, and ride maintenance.
Those apps will be replaced with a more intuitive, less intrusive Augmented Reality version.
Another interesting use case for AR will be “cold reading” people. Some can do this more instinctively than others, but AR could be tuned to read microexpressions (facial tics, pupil dilation, blood flow, temperature, etc.). It will be able to tell you a lot more about the person you’re having a conversation with, almost to an invasive degree.
The current problem is computing and battery power.
We have a decent amount of computing power in our pockets. To take AR to these imagined heights, the AR lens needs to stream real-time visual information to our phones, and our phones need to stream the processed results back to the lens.
This is why you either get lightweight hardware with a few basic features (see Google Glass) or a heavier but more capable headset (see Microsoft HoloLens).
Yet as cities connect more and more cameras (London, for example), we’ll realize there’s another way to get a visual readout of the world in real time.
Amazon has already deployed Sidewalk, a feature of its Echo devices that creates a much wider, more available network for other Amazon devices. It’s similar in spirit to Bluetooth tracker networks such as Tile or Apple’s Find My.
Before long, these kinds of networks, provided by our smart devices, will allow anyone with compatible lenses to get an enhanced view of the world. The heavy lifting of all that computing will be distributed across the connected devices nearby: like BitTorrent for real-world information.
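To make that distribution idea concrete, here is a minimal sketch of the scatter-gather pattern it implies: slice a camera frame into tiles, hand each tile to a nearby device, and merge the results for the lens. Everything in it, the device names, the discovery step, and the detection function, is a stand-in for illustration, not a real AR or Sidewalk API.

```python
# Minimal sketch of "BitTorrent for real-world information": split one camera
# frame into tiles and farm each tile out to nearby devices, then merge the
# annotations. Discovery, transport, and the vision step are all stand-ins.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Tile:
    index: int
    pixels: bytes  # a slice of the captured frame


def nearby_devices() -> list[str]:
    # Stand-in for discovery over something like a Sidewalk-style mesh.
    return ["echo-kitchen", "neighbors-doorbell", "my-phone"]


def detect_objects(device: str, tile: Tile) -> dict:
    # Stand-in for the heavy vision work a nearby device would run.
    return {"tile": tile.index, "device": device, "labels": ["sign", "car"]}


def annotate_frame(frame: bytes, tile_count: int = 4) -> list[dict]:
    tiles = [Tile(i, frame[i::tile_count]) for i in range(tile_count)]
    devices = nearby_devices()
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [
            pool.submit(detect_objects, devices[i % len(devices)], tile)
            for i, tile in enumerate(tiles)
        ]
        return [f.result() for f in futures]  # merged annotations for the lens


if __name__ == "__main__":
    print(annotate_frame(b"fake-frame-bytes"))
```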
VR might be isolating, but AR has the power to bring people together in meatspace: Pokémon Go is a prime example.
As more brands adopt models that focus on consumer privacy, we’ll find that recognition and brand loyalty are more important than ever. Musicians busking at metro stations will be able to have elaborate backdrops and light shows provided by AR, rivaling larger, more expensive concerts.
You could actually see a golf ball’s entire flight if both the player and the course had the necessary gear.
Sailing by the stars becomes achievable for anyone with the tech and enough bravery.
Even playing a board game over coffee can be a simple way to bring people together. Nintendo will host Smash Bros. tournaments where real-world architecture becomes the stage the characters fight on.
Though gaming will likely set the stage for unique uses of this technology, there’s one simple thing that will have a tremendous impact: translation. Word Lens proved that AR could translate text on signs in real time back in 2012. It stands to reason that we’ll have translated subtitles overlaid for us when conversing with someone speaking a different language.
This means we could parse any website, conversation, or sign in the world that had previously been foreign to us.
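For a sense of how small the core of that pipeline is, here is a rough Word Lens-style sketch: find text in a frame, translate it, and draw the translation back in the same spot. It assumes OpenCV and pytesseract (with the Tesseract binary installed), and the translate() function is only a placeholder for whatever on-device model or service would do the actual translation.

```python
# Rough sketch of a Word Lens-style loop: locate text in a frame, translate
# it, and render the translation over the original position.
import cv2
import pytesseract


def translate(text: str, target_lang: str = "en") -> str:
    # Placeholder -- swap in an on-device model or a translation service.
    return f"[{target_lang}] {text}"


def overlay_translations(frame, target_lang: str = "en"):
    # OCR the frame and get bounding boxes for every detected word.
    data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        # Cover the original word, then draw the translated word in its place.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), -1)
        cv2.putText(frame, translate(word, target_lang), (x, y + h),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 1)
    return frame


if __name__ == "__main__":
    frame = cv2.imread("street_sign.jpg")  # stand-in for a live camera frame
    cv2.imwrite("street_sign_translated.jpg", overlay_translations(frame))
```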