Surface Tension: A Case Study on Elegantly Solving the Wrong Problems

About a week ago, Google announced the Pixel C. Immediately, it looked familiar. Isn’t this a lot like the Microsoft Surface that made its debut back in 2012? And is it another competitor to the iPad Pro, too? If this is a trend, was Microsoft the first to catch on?

Why is everyone suddenly putting keyboards on tablets and calling it a feature?

The idea behind these devices is that they’re “all-in-one.” You can use one like a laptop with the keyboard, or like a tablet with a stylus. The need for this is pretty clear: work is becoming increasingly mobile and dynamic, so people need new ways to get stuff done. Adapting their core computing technology makes sense... in a way.

The way we do work is changing, and technology does need to adapt. The tech of the future will accommodate the way we work, create, and play: the devices released by Microsoft, Apple, and Google are attempts to create this future. However, adapting technology doesn’t just mean cramming existing devices together in more elegant ways.

Likewise, there are different ways to enable composers to record and experiment with multiple instruments. Some of these are more practical than others.

So, how do we solve this problem? How do we make technology that adapts to the way we live?

Let’s reexamine the issue. The way we work is changing, but how?

For one, an increasing amount of work is done on the go. Whether we’re traveling, hanging out at a coffee shop, or walking between meetings with our phones, we’ve gradually started depending less and less on designated office spaces. More people are working remotely than ever, as communication technology has gotten good enough to break down most geographical walls.

To add to that, software advancements have enabled a broader scope of digital creation than ever before. Digital sketching, music composition, 3D modeling, virtual reality experiences - these are only a few examples of what modern software can create. We can express ourselves in more ways than ever before, on any machine we use.

All these changes have blurred the lines between life and work. Everything in our lives keeps going even after we leave the office or put down our phones. It’s “always on.” So, our devices need to be able to handle anything, right?

The major tech companies have noticed that certain form factors are better for certain classes of tasks: for example, laptops are best for long-term, intense content creation tasks like word processing. Phones excel at smaller-scale content consumption, and shorter-term interactions.

These use cases form a hierarchy of computing devices, ranked by computational intensity. Below are the main devices we use today, with the main task each is used for:

- Watches mainly help you observe your notification stream.
- Phones help you select content to read or watch, and respond to notifications.
- Tablets probably also exist for a reason.
- Laptops and desktops still reign supreme for any significant content creation.

Popular devices, measured by computational intensity.

Now that we use our devices for more, it’s time to rethink the way we separate them. Computing power isn’t a limiting factor, now that most of our large data tasks are processed “in the cloud.” So, let’s make a new framework for the next age of digital interaction.

Before we dive in, I do want to clarify: this is one approach that we as an industry could take, meant primarily as a thought exercise. If anyone wants to get in touch about it, I’d love the opportunity to discuss alternate opinions on how we could develop the technological experiences of the future. Please don’t hesitate to contact me and start up a conversation!

My Proposal for the Future

This model assumes we use technology for 3 main things: work, play, and imagination.

A visual aid.

The definition of technological work in this context is fairly straightforward. We have tasks that we are required to do to keep our jobs: creating spreadsheets, doing research, configuring systems. Much of the time, these tasks involve long-term use of a workstation, and require a large amount of mental calculation and reasoning, or “left-brained” work.

Next up is defining how we play. In technology, play is the process by which we immerse ourselves in an experience outside of what’s offered by the physical world. Many play-type interactions emulate parts of real life, but they’re often handled in a different way. The main use case that comes to mind is gaming, but technology can also be used to simulate other experiences for play.

Defining how we imagine is a little more complex: it’s the process by which we create new objects, and develop new ideas. A good amount of work involves creative processes, but imagining isn't normally done by the same means as the more “left-brained” tasks. This category includes things like concept prototyping, art, and collaborative ideation.

Now that we’ve laid this out, let’s consider the technology that could enable these types of tasks. I’ve laid out a diagram below with some ideas.

The new range of device classes under this ideology, with smartwatches filtering data / notifying of small changes.

For work, a standard laptop makes a lot of sense. The input interfaces (mouse and keyboard) are positioned conveniently under the user’s fingers, resulting in a very ergonomically sound machine. (To a new user, a keyboard may be daunting, but most of us have grown up with keyboards and gotten quite quick with them.) Barring some smaller models, most laptops have screens big enough and crisp enough to display the large amounts of data we need to do our work. Plus, everything is contained in one machine with no peripherals, meaning it can be set up wherever you happen to be.

Now, let’s move on to play. As it stands, we’ve got the console gamers vs. the PC gamers, and the difference often comes down to gamepad vs. keyboard/mouse. (And modding. But that’s a separate point.) However, neither of these interfaces is actually any good for emulating an experience. You don’t press a key to move in real life, or push a button to swing a sword. And you don’t view everything through a rectangular window, either.

Virtual reality looks to be invading the play space, given the past few years of advancement. There are some virtual reality headsets out there that do a great job at providing emulated visual and auditory input, taking over your sight and hearing in a very controlled way. The experience won’t be complete until we can create a deeper connection with the virtual environment, but there are input devices being developed for more realistic interaction.

In these cases, the goal is to free the experience - and to be clear, this doesn’t necessarily mean freeing the user. While these may seem one and the same at first, they are actually very different goals. The experience we want to offer could be beyond human capability, so we would need to effectively map a user input to a virtual-space interaction. It needs to feel intuitive, and engage us with the experience above all else.

Ever seen those VR demonstrations where you’re a bird flying in the sky? Great example of the experience being free while the user is not.
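To make that mapping concrete, here’s a minimal sketch in TypeScript. Every name in it is invented for illustration - it isn’t any real VR API - but it shows the idea: a small, safe human input (the roll and pitch of a tracked arm) drives an experience the body can’t actually perform.

```typescript
// Hypothetical input sample from a motion tracker watching the user's arm.
interface ControllerSample {
  rollDegrees: number;  // arm roll: tilt left/right
  pitchDegrees: number; // arm pitch: tilt down to dive
}

// The state of the bird in the virtual world.
interface BirdState {
  heading: number;  // compass heading in degrees
  altitude: number; // meters above the virtual ground
  speed: number;    // meters per second
}

// The user stands still; the mapping turns gentle arm movements into
// full flight. The experience is free, the user is not.
function applyFlightInput(bird: BirdState, input: ControllerSample, dt: number): BirdState {
  const turnRate = input.rollDegrees * 0.5;    // a slight roll banks into a wide turn
  const climbRate = -input.pitchDegrees * 0.2; // pitching arms down trades altitude for a dive
  return {
    heading: (bird.heading + turnRate * dt + 360) % 360,
    altitude: Math.max(0, bird.altitude + climbRate * dt),
    speed: bird.speed,
  };
}
```

The point of the sketch: the user’s real motion stays small and comfortable, and the mapping, not the hardware, supplies the freedom.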

To imagine, we need something entirely different. Ergonomics don’t matter if they limit our creativity, and the keyboard and mouse are not versatile enough to encompass the range of media that help us think. We create by drawing, sculpting, experimenting, and sharing - and that's only the beginning. Our interfaces need to be freeing above all else - this is where we must free the user completely.

Creating a device to help people imagine is much harder because of this - for different creative tasks, we may need different devices to help free our ideas. What I've come up with as an example is a device that has seen a few attempted implementations in the past. At its core, it’s a projector that can be mounted on any surface, and used with a pencil or stylus. Once the projector is mounted, we can draw on anything, and the projector’s tracking sensors will record it. (Once we get hologram technology, we could even draw in 3 dimensions!)

We need something that’s ready whenever inspiration strikes, and versatile enough to handle the way we conceive ideas.

The benefit to this is that you can carry it on your person, pull it out regardless of context, and start creating as soon as you get your idea. And if you don’t draw? This tech could potentially handle a variety of input devices. We aren’t limited to pencils; the idea is to enable digital creation anywhere, with anything.
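As a thought experiment, here’s a rough sketch of the kind of input model that would make that possible. There’s no real projector SDK behind these names - everything is hypothetical - but it shows why the device could be tool-agnostic: strokes are recorded as abstract tracked points, so anything the sensors can follow feeds the same canvas.

```typescript
// A point the projector's tracking sensors report from the surface.
interface TrackedPoint {
  x: number;        // position on the projected surface, in millimeters
  y: number;
  pressure: number; // 0..1, or a constant 1 for tools that can't report it
  timeMs: number;   // timestamp, for replaying the stroke later
}

// A single continuous mark made with whatever tool is in hand.
interface Stroke {
  tool: string; // e.g. "pencil", "stylus", "finger" - free-form by design
  points: TrackedPoint[];
}

// The drawing lives here, independent of the tool that produced it.
class SurfaceCanvas {
  private strokes: Stroke[] = [];
  private current: Stroke | null = null;

  beginStroke(tool: string): void {
    this.current = { tool, points: [] };
  }

  // Called by whatever sensor is watching the surface.
  addPoint(p: TrackedPoint): void {
    this.current?.points.push(p);
  }

  endStroke(): void {
    if (this.current) this.strokes.push(this.current);
    this.current = null;
  }
}
```

Swap the tool string for a finger or a paintbrush and nothing else changes - that’s the “anywhere, with anything” property in miniature.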

This is just one way we could change how we create devices. Once again, I’d love the chance to have more conversations about this - if you have other ideas, agreeing or disagreeing, please reach out or leave a comment!