Developers will soon be able to write applications that teach the brain to perceive custom information about anything it is exposed to, both real and virtual. Powering this capability is a combination of technologies centered on Ultrahaptics and David Eagleman’s VEST concept. Ultrahaptics is a UK-based startup focused on using ultrasound to generate in-air tactile sensations. The VEST is a wearable that uses patterns of vibration to physically feed digital information to the brain. Together, these technologies could fundamentally change how humans and computers interact.
We humans digitized physical data, then integrated that digital data into almost every type of physical device. Now we are starting to introduce our digital products as physical elements of the world, using holograms with technology such as Microsoft’s HoloLens.
The next step is for information itself to become a physical entity by applying the motivation behind Eagleman’s VEST: feeding real-time data streams to the brain as interpretable sensory input. Machine learning and data mining algorithms can now derive valuable information from massive amounts of aggregate data, but computing these workloads rapidly requires processing power not readily available on today’s smartphones. Many are shifting this load onto the cloud, but another novel platform is available: every one of us carries a free supercomputer around in our head. The brain can learn to use nearly any information we serve it. All it takes is the right delivery system.
Eagleman’s prototype device:
What this really means is that any information a machine learning algorithm could interpret can, in principle, be “learned” by our own bodies. All that is needed is a custom data feed to pour into the sensory stream, and the sensory input itself need not be limited to vibration.
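To make the idea concrete, here is a minimal sketch of what “pouring a data feed into the sensory stream” might look like in code. The motor count and the thermometer-style encoding are illustrative assumptions for this example, not details of Eagleman’s actual VEST.

```python
# Hypothetical sketch: mapping a scalar data feed onto a row of
# vibration motors. The encoding scheme here is an assumption made
# for illustration, not Eagleman's real design.

def encode_scalar(value, lo, hi, n_motors=8):
    """Map a value in [lo, hi] to per-motor intensities in [0.0, 1.0].

    Uses a simple "thermometer" code: motors switch on left to right
    as the value grows, so the wearer can learn to read magnitude
    from the spatial extent of the vibration.
    """
    # Normalize into [0, 1], clamping out-of-range readings.
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    level = t * n_motors
    intensities = []
    for i in range(n_motors):
        # Fully on below the level, fractionally on at the boundary.
        intensities.append(max(0.0, min(1.0, level - i)))
    return intensities

# A mid-range reading activates roughly half the motors.
print(encode_scalar(50, 0, 100))
```

Any real-time signal (a stock ticker, a server’s error rate, a drone’s battery level) could be pushed through a mapping like this, frame by frame, and delivered to the skin.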
Here’s an example of how people can gather data from video.
Sensory “streams” can be converted into other types of “streams” as needed, just like any other kind of computer data.
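As one hedged example of such a conversion, the sketch below turns a chunk of an audio-like signal into a frame of vibration intensities by measuring the energy of each segment. The segment-RMS approach is an assumption chosen for simplicity; the VEST’s actual sound-to-touch mapping is more sophisticated.

```python
# Illustrative stream-to-stream conversion: chunks of an audio-like
# signal become frames of vibration intensities, one segment per motor.
import math

def audio_chunk_to_vibration(samples, n_motors=4):
    """Split a chunk of samples into n_motors segments and return
    each segment's RMS energy as that motor's intensity."""
    seg = max(1, len(samples) // n_motors)
    frame = []
    for i in range(n_motors):
        segment = samples[i * seg:(i + 1) * seg]
        if not segment:
            frame.append(0.0)
            continue
        frame.append(math.sqrt(sum(s * s for s in segment) / len(segment)))
    return frame

# A quiet first half and a loud second half map to low and high motors.
print(audio_chunk_to_vibration([0.0] * 4 + [1.0] * 4, n_motors=2))
```

Run on successive chunks of a live signal, this produces a continuous vibration stream from a continuous audio stream; swapping the input for video features or network telemetry would follow the same pattern.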
All manner of mixed reality technologies could facilitate the delivery of custom sensory information. If people intend to “train” themselves to use a custom sense, future applications will likely involve simulated environments in which users practice that sense, e.g. gamified training software.
The applications of this technology are even more open-ended than those driving the AR/VR excitement of 2016. While the technology is still some way off, people should keep an eye out for the rise of tactile-feedback hardware, if only for its potential to bring about a sensory evolution.