What is the Meaning of Zero UI and How Does it Affect Design?


Technology has advanced enormously in recent years. Not so long ago, what we have achieved in artificial intelligence would have seemed like science fiction.

You may have heard mention of zero UI, which could be the next step along that road. So what exactly is zero UI, and how will it affect design professionals who have studied for traditional qualifications such as a Diploma of Digital Media Technologies? Let's take a closer look.

What does zero UI mean?

The term zero UI was first used back in 2015 by Andy Goodman, now VP of Experience Design at BCG Digital Ventures. It describes the way devices are starting to sense our needs, particularly through the Internet of Things that surrounds us in everyday life.

The belief central to zero UI is that there should eventually be no need for screens to interact with interfaces. We should be able to interact simply through movement, gesture and voice. Of course, this already happens to a certain extent when we speak to Siri or use a Microsoft Kinect. However, much of what has happened so far relies on machine learning to recognize a limited set of coded commands. The idea of zero UI is that the next step will be for machines to actually understand our language.
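To see why today's voice interaction still falls short of this vision, consider how a typical assistant maps utterances onto a fixed, hand-coded set of intents. The sketch below is purely illustrative: the intent names and trigger phrases are hypothetical, not any real assistant's API.

```python
# Illustrative sketch: current assistants match utterances against a
# fixed, hand-coded set of intents rather than understanding language.
# All intent names and trigger phrases here are hypothetical.

INTENTS = {
    "lights_on": ["turn on the lights", "lights on"],
    "play_music": ["play some music", "start music"],
}

def match_intent(utterance):
    """Return the intent whose trigger phrase appears in the utterance,
    or None if nothing matches -- the brittleness zero UI aims to remove."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None

print(match_intent("Could you turn on the lights?"))  # lights_on
print(match_intent("Make it brighter in here"))       # None: phrasing not coded for
```

A phrase the designer anticipated works; a natural rewording of the same request fails, which is exactly the gap between coded commands and genuine language understanding.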

What does this mean for the future of design?

If we are to interact with interfaces in this way, designers need to think differently. Current design principles are somewhat linear: the user issues a command, and the interface responds. In the future, designers will have to consider wider concepts. Instead of understanding simple commands, interfaces will need to follow thought processes and handle more complex discussions and questions.

This will make future design more complex. For instance, interfaces will need to understand the gestures and actions of different individuals. The process has begun with the use of smart devices, such as the Nest thermostat, which have the ability to learn from the actions of users.
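A device like the Nest learns routines from repeated user actions. The toy sketch below illustrates the general idea only; it is an assumption for illustration, not Nest's actual algorithm. It records manual setpoint changes by hour of day and predicts a schedule from their average.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy illustration of learning from user actions: record manual
    setpoint changes by hour, then predict a schedule from their average.
    A sketch of the general idea, not Nest's actual algorithm."""

    def __init__(self, default=20.0):
        self.default = default
        self.history = defaultdict(list)  # hour of day -> list of setpoints

    def record_adjustment(self, hour, setpoint):
        """The user manually changed the temperature at this hour."""
        self.history[hour].append(setpoint)

    def predicted_setpoint(self, hour):
        """Predict the setpoint for an hour; fall back to the default."""
        readings = self.history[hour]
        if not readings:
            return self.default
        return sum(readings) / len(readings)

thermostat = LearningThermostat()
thermostat.record_adjustment(7, 21.0)  # user warms the house each morning
thermostat.record_adjustment(7, 23.0)
print(thermostat.predicted_setpoint(7))  # 22.0 -- learned from past actions
print(thermostat.predicted_setpoint(3))  # 20.0 -- no data, default applies
```

The point is the shift in design responsibility: rather than exposing a control for every setting, the designer specifies how the device should observe and adapt to each individual's behaviour.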

However, it’s still early days. For the moment we remain reliant on screens for most of our interactions with interfaces. It remains to be seen how long it will take for that reliance to diminish and for us to interact with interfaces in a less linear manner.

The concept of zero UI is not new. We are already able to communicate with interfaces without using screens. However, the gestures and voice commands that interfaces recognize are restricted and based on the way we code devices. If Andy Goodman is right, there will come a time when interfaces understand what we are talking about when we simply use our own language. It is not clear how long that will take, but it is clear that it will make a huge difference to the way we interact with the devices around us.