Since the first popular GUIs of the computer age, we have used metaphors to convey an easier understanding of the digital world. Examples include the desktop, small trash can icons, and “pages”. While these metaphors did make it easy for new users to understand and interact with digital content, I think the time has come to begin moving away from analog references and rethink interfaces on digital mediums. This post is not an attack on the page interface itself; it still has many use cases where a more advanced or different UX might confuse users, such as static magazine/news apps or sites, books, and so on. Instead, the goal is to remind myself and others to push boundaries and shed antiquated frameworks when approaching a new project.
The bulk of this rant will focus on interfaces for non-desktop devices, as I don’t feel much change can happen until we move past the ubiquitous mouse (although Apple’s gesture-based devices are pushing the envelope). The idea for this post stemmed from a vendor presentation for a mobile application. The deck walked our team through the various “pages” of the application, which I assume they thought was the correct approach since they themselves, and most of our team, come from a web background. It was not.
What we know about apps:
- Applications allow us to push the boundaries of what has traditionally been done on the web. This is true for native and web-based apps. We can destroy the common “page” structure for most apps and explore 3D space, mixed-media environments, and page-less apps that fetch the data they need on demand via Ajax and other technologies.
- Animation, motion, transitions, transforms: it’s 2012, and our browsers and devices are powerful machines. I understand that there are compatibility issues, but the goal should always be to provide the richest experience on the platforms that can support and handle it.
- Native applications have always been able to tap into their devices’ hardware; however, newer web browsers now allow us to tap various hardware as well. We can obtain location data, cache content locally, and serve audio and video, all without plugins.
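To make the page-less idea above concrete, here is a minimal sketch of a single-document app shell that swaps content in place instead of loading new pages. The route table, fragment URLs, and element id are hypothetical, and the browser wiring is shown in comments; this is an illustration of the pattern, not any particular framework’s API.

```javascript
// Hypothetical map from an app state (e.g. taken from location.hash)
// to the URL of a content fragment fetched on demand via Ajax.
var routes = {
  home:    '/fragments/home.html',
  gallery: '/fragments/gallery.html',
  about:   '/fragments/about.html'
};

// Pure lookup: unknown states fall back to the home fragment.
function fragmentFor(state) {
  return routes[state] || routes.home;
}

// In the browser, a hash change swaps content in place -- no page load:
// window.onhashchange = function () {
//   var url = fragmentFor(location.hash.slice(1));
//   var xhr = new XMLHttpRequest();
//   xhr.open('GET', url);
//   xhr.onload = function () {
//     document.getElementById('stage').innerHTML = xhr.responseText;
//   };
//   xhr.send();
// };
```

The point of the sketch is that “navigation” becomes a data fetch: the document never reloads, so the page metaphor quietly disappears.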
As I mentioned above, not much has happened in terms of adopting other input devices on the desktop, but other connected devices offer a plethora of choices.
- Point and tap are necessary, but we should also keep in mind that touch-based devices now support gestures (circle, drag, zoom, swipe, etc.). Apple’s laptops and its touchpad device for desktops support gestures as well.
- Connected televisions and set-top boxes, namely the popular game consoles, take gestures to the next level. Devices like Microsoft’s Kinect understand a vast range of hand and body movements.
- Higher-end mobile devices come equipped with accelerometers, cameras, and gyroscopes. Use them!
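As a sketch of the gesture point above, here is the core of a one-finger swipe classifier. The threshold value and function name are my own illustrative choices; in a real app the coordinates would come from the browser’s `touchstart` and `touchend` events, wired up as shown in the comments.

```javascript
// Minimum movement (in px) before a touch counts as a swipe rather
// than a tap. The value is an assumption for illustration.
var SWIPE_THRESHOLD = 30;

// Pure function: classify a gesture from its start and end points.
function classifySwipe(startX, startY, endX, endY) {
  var dx = endX - startX;
  var dy = endY - startY;
  if (Math.abs(dx) < SWIPE_THRESHOLD && Math.abs(dy) < SWIPE_THRESHOLD) {
    return 'tap';
  }
  if (Math.abs(dx) > Math.abs(dy)) {
    return dx > 0 ? 'swipe-right' : 'swipe-left';
  }
  return dy > 0 ? 'swipe-down' : 'swipe-up';
}

// Browser wiring (sketch): record the first touch point on
// 'touchstart', then call classifySwipe from the 'touchend' handler
// using event.changedTouches[0].clientX / clientY.
```

Keeping the classification pure and separate from the event plumbing makes the gesture logic easy to test and reuse across touch devices.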
We can and should use all of the above to enrich current and future user experiences.
There are many projects rethinking the standard paradigms, especially in the operating system space. Apple is doing amazing work with gestures on iOS and Mac OS X, and even Microsoft is pushing boundaries with Kinect on Xbox and on the desktop with the touch-friendly version of Windows 8.
Notice that there are no buttons, the screens are not contained in a ‘page’, and the interface is mostly chromeless. The content itself is the interface.