The simple way

A lot of companies struggle with prioritizing which content, products, and information to show users when they arrive at the site. Everybody within the company wants to contribute: Sales, Marketing, PR, Product Owners.

But what it really comes down to is the following:

  • Can we identify our users/visitors/customer segments? 
  • What do these identified users usually want to achieve?
  • What is the simplest way of offering them what they want?

You can answer these questions by analyzing site data, customer segments, customer behavior, focus groups, usability testing, sales statistics, user flows, and customer service input. By going through all of this information you learn what customers want to achieve when visiting your site or using your app. It is not unusual for this to differ from what you as a company want the user to achieve. So how do you combine these two goals?

  1. Make it simple for users to achieve what they came to do (if they have a goal).
  2. If they don’t have a set agenda, guide them through a focused UI, without many distractions, toward your goal.

This means daring not to show every possible offer all the time, and deliberately focusing on one thing at a time. This focus can of course be adjusted according to time of day, the broadcast schedule of commercials, or whether we have a logged-in user profile.
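As a sketch of how that adjustable focus could work in practice, the rules can be expressed as one ordered decision function. This is a minimal illustration with invented segment and content names, not a real recommendation engine:

```python
from datetime import datetime, time

def pick_focus(user_segment=None, now=None, campaign_live=False):
    """Return the one piece of content to highlight right now."""
    now = now or datetime.now()
    # 1. A logged-in, identified user gets content matched to their usual goal.
    if user_segment == "returning_buyer":
        return "reorder_shortcut"
    if user_segment == "support_seeker":
        return "help_center"
    # 2. While a commercial campaign is on air, mirror its offer.
    if campaign_live:
        return "campaign_landing"
    # 3. Otherwise, fall back to a time-of-day default.
    if time(6) <= now.time() < time(12):
        return "morning_offer"
    return "default_offer"
```

The point is the strict ordering: exactly one focus wins, and everything else stays out of the way.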

Dare to be simple.

Assimilating the friendly machines, part 2: The sublime propaganda



There is a silent and secret campaign going on. It’s been active for years. It’s on TV, in books, in video games, on billboards, on the web. It’s not for a specific product, but for a whole line of products. I am of course talking about robots.

The theme of robots has been picked up by many science fiction writers, probably most notably by Isaac Asimov. Early fiction containing robots mostly depicted them as evil, weird or stupid, but this has gradually shifted toward describing them as intelligent, helpful and even compassionate. In today’s media landscape – with robotics more hyped than ever – robots appear everywhere from children’s TV shows (Rob the Robot, Cars, Bob the Builder) to music (Daft Punk) to actual robots looking after our children to weaponized military drones.


”Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him. It would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine was the only one that measured up. In an insane world, it was the sanest choice”

– Sarah Connor, Terminator 2


Why are so many of us suddenly embracing robots more than before? What changed our attitude from paranoid skepticism (HAL 9000, Terminator, Ash in Alien, Blade Runner) to over-the-top optimistic depictions bordering on praise? As commercial media has claimed more influence over the public than actual research, opinions and news are often one-sided and without nuance. Yes, there are voices warning us about embracing new technology too quickly, but they easily disappear in the ocean of how cool, uncomplicated, time-saving (a huge factor) and convenient this technology is. The psychological and moral discussions seldom reach the surface.


It is strange that we apparently are trying to fulfill our own fictional prophecies almost word for word. A lot of science fiction has acted as inspiration for inventions, but the future strategies for robot development seem almost carbon-copied from a book by Asimov. Fiction is fiction, and we have to establish our own moral compass, rules and laws before trying to transform fiction into reality.


Assimilating the friendly machines, part 1: Killer Robots




Will robots ever be an accepted and integrated part of our society? The question is both complex and controversial. For the past 50 years the use of robots in manufacturing has been steadily increasing, and today some production lines are made up entirely of robots. We have become so dependent on industrial robots that we couldn’t easily remove them, as we have created production environments and work activities based on their capability, accuracy, durability and non-organic bodies. Their intelligence has been limited and basic, since their line of work has been repetitive and involved no complex problem solving. But there have always been scientists striving toward a more intelligent machine, so intelligent it could finally be mistaken for a human and therefore integrated into our society.

Science fiction writers have been fantasizing about this for a long time. Isaac Asimov’s lifelong fascination with robotics and its potential future impact on human life has acted as an acknowledged inspiration for many scientists. But Asimov’s obsession was not one-sided. His contribution was as much a philosophical approach to the complex issues awaiting us in a future society inhabited by humans and robots alike. He coined the three laws of robotics that were to be built into every robot to secure the power balance between man and machine:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

To some extent these laws would be sufficient, but as Asimov continually showed in his writing, they could also lead to very complex situations and create confusion and loopholes.
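One way to see why, sketched as toy code with invented attributes: treated as a strict priority ordering, the laws forbid harm outright and rank obedience above self-preservation – and the trouble begins as soon as no candidate action satisfies the First Law:

```python
def choose_action(candidates):
    """Pick an action under a strict First > Second > Third Law ordering.

    Each candidate is a dict of boolean flags; this toy model ignores
    the genuinely hard part, which is judging what counts as 'harm'.
    """
    # First Law: actions that harm a human are excluded outright.
    safe = [a for a in candidates if not a["harms_human"]]
    if not safe:
        # Fall back to inaction—yet the First Law also forbids allowing
        # harm through inaction, which is exactly where Asimov's
        # stories found their loopholes.
        return None
    # Second Law outranks Third: prefer obedience, then self-preservation.
    # (Python sorts False before True, so a tuple of flags encodes the priority.)
    return min(safe, key=lambda a: (a["disobeys_order"], a["endangers_self"]))
```

Even this trivial model has an undefined case; Asimov spent whole novels exploring the undefined cases of the real thing.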

Unsurprisingly, one of the future applications of robots is military. Military robot development resulting in intelligent ”killer robots” – armed robots that can select and kill targets without human command – has recently been debated intensely in the UN, and has also resulted in a large organized anti-campaign, modeled on the movement that banned land mines. A United Nations expert has called for a global moratorium on the testing, production and use of armed robots that can select and kill targets without human command. The United States, Britain, Israel and South Korea (Samsung Techwin) already use technologies that are seen as precursors to fully autonomous systems, and the specific development of ”killer robots” has been officially confirmed in the United States, Britain and Russia.


Is there hope for an enforceable ban on death-dealing robots, or have we gone so far that it will be hard to back out? These new types of autonomous robotic weapons are so technically advanced that there are still many problems to overcome before they become available. The most crucial issue lies in trustworthiness: whether the weapons will do something they’re not directly programmed to do. There are a lot of factors and variables to weigh before making an accurate decision in combat. Noel Sharkey, Professor of Artificial Intelligence and Robotics and one of the more public figures behind the anti-campaign, has given examples where engaging self-selected targets could be disastrous, as the robot could never consider all aspects of an attack and the chain of events it might trigger. One moral example Sharkey has mentioned is taken from real-life combat in Afghanistan, where a US squad identified hostile armed troops near a village and saw an opportunity to engage. The problem was that the hostiles were attending a funeral, so the squad decided not to engage out of respect for the mourners – a decision a robot hardly would or could make.

With drones used in over 50 countries, is there really a difference between pulling a trigger by remote and pulling it by setting rules within software? The difference is the human factor: the ability to feel compassion and to change one’s mind if a decision was not correct. A machine will probably increase the accuracy of finding and neutralizing targets, but the question is whether they are the right targets, and at what cost. The major risk of an autonomous robotic weapon is – as always with robots and programmed systems – the risk of malfunction, where a deadly killer robot could suddenly become an irrational psycho robot on a killing spree without any directives to hold it in check. No three laws. And who will be accountable if there is an accident? The programmer? The manufacturer?

More in depth:
Smart Drones (NY Times)
ICRAC (International Committee for Robot Arms Control)

Interaction and interfaces, part 2: The Future

In my last post I ranted a bit about Apple, their new iOS design and their place within the changing interaction ecosystem. In this post I want to focus on the future of interaction and interfaces: where are we headed, why, and will it be better than today?


”If you want to know where technology is headed,
look at how artists and criminals are using it.”

William Gibson


If you look at current and past science fiction movies, some elements of interaction with computers keep recurring: voice commands, hand gestures and 3D navigation. The first two are well on their way in today’s interaction environment, but the third is remarkably absent as such, though today’s multitasking layer environment in computers can be regarded as semi-3D.

But let’s examine the first two elements in some more depth:

1. Voice Controlled Devices (VCD)
The past 20 years have introduced everything from washing machines that let consumers operate washing controls through vocal commands to mobile phones with voice-activated dialing. The new and modern VCDs are speaker-independent, so they can respond to multiple voices regardless of accent or dialectal influences (instead of thoroughly analyzing one voice through different test sentences). They are also capable of responding to several commands at once, separating vocal messages, and providing ”appropriate” feedback, trying to imitate a natural conversation. VCDs can be found in computer operating systems (Windows, Mac OS X, Android), commercial software for computers, mobile phones (iOS, Windows Phone, Android, BlackBerry), cars (Ford, Chrysler, Honda, Lexus, GM), call center ”agents”, and internet search engines such as Google.

Among the future cross-platform players are Google, which created the text-to-speech engine Pico TTS, and Apple, which has released Siri. Apple’s use of Siri in the iPhone and Google’s use of speech recognition in, for example, Google Glass have not been received without sarcasm or frustration. Both let you give a set of commands: dictate, google/search for information, get directions, send email/message/tweet, open apps and set reminders or meetings.

Siri hasn’t been as big a success as anticipated, mostly because of issues with Siri not understanding commands correctly. But Siri’s technical solution is not an easy one. It is built from two parts: the virtual assistant and the speech-recognition software (made by Nuance). The assistant actually works pretty well, while the speech-recognition engine works…occasionally. This has to do with how the different parts interact, and also with the quality and speed at which the actual sound file can be delivered to the online speech-recognition engine, which then has to send the text back to your phone for the virtual assistant to act on. Sounds complicated? Basically, if you articulate well while connected to Wi-Fi you should be well off. In the future – apart from improving Siri – Nuance has mentioned developing advanced voice-recognition software for use in cars (Dragon Drive), for getting directions or searching for nearby restaurants, but also within TVs (Dragon TV).
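The round trip described above can be sketched as two cooperating parts: a remote recognizer and a local assistant. Everything here – the function names, the command set, the `send` callable standing in for the network hop – is invented for illustration, not Apple’s or Nuance’s actual API:

```python
def recognize_remote(audio_bytes, send):
    """The online half: ship recorded audio to a remote recognizer.

    `send` stands in for the network call; its latency and the audio
    quality are where results degrade when you're off Wi-Fi."""
    return send(audio_bytes)

def assistant_act(text):
    """The local 'virtual assistant' half: map transcribed text to a command."""
    text = text.lower().strip()
    if text.startswith("remind me"):
        return ("set_reminder", text[len("remind me"):].strip())
    if text.startswith("search for"):
        return ("web_search", text[len("search for"):].strip())
    return ("dictate", text)

def handle_utterance(audio_bytes, send):
    # The assistant can only be as good as the text the recognizer returns,
    # which is why the two halves succeed or fail together.
    return assistant_act(recognize_remote(audio_bytes, send))
```

The split explains the uneven experience: a capable assistant stuck behind a recognizer whose output depends on your connection and articulation.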

Among other prominent devices, voice commands were given a lot of room when Microsoft revealed the new Xbox One. Voice is used for starting, ending and switching between different services, but also for giving specific commands within games.

So this is the present situation, but where does the future of voice commands lie? Vlad Sejnoha, chief technology officer of Nuance Communications, believes that within a few years mobile voice interfaces will be much more pervasive and powerful. “I should just be able to talk to it without touching it,” he says in an article in Technology Review. “It will constantly be listening for trigger words, and will just do it — pop up a calendar, or ready a text message, or a browser that’s navigated to where you want to go.”

This future scenario sounds both intriguing and disturbing. A silent spying assistant that is always on call, ready to do your bidding even before you ask for it. Hopefully the privacy and security settings will be as well developed and intelligent as the voice-recognition itself.
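The always-listening scenario Sejnoha describes amounts to keyword spotting over a continuous transcript. A minimal sketch, with the trigger words and actions invented for the example:

```python
# Hypothetical trigger words mapped to assistant actions.
TRIGGERS = {
    "calendar": "open_calendar",
    "text": "compose_text",
    "browse": "open_browser",
}

def scan_stream(words):
    """Yield an action whenever a trigger word shows up in the live transcript."""
    for word in words:
        action = TRIGGERS.get(word.lower().strip(".,!?"))
        if action:
            yield action
```

Note that the same loop is also the privacy problem: to catch the triggers, it has to hear everything else too.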


2. Gestures UI

Kevin Kelly, co-founder of Wired Magazine and technical consultant for the fictional interfaces designed by Jorge Almeida for the iconic movie Minority Report, recently gave a speech where he described the future impact of different disruptive technologies. Among them was gesture-based interaction – Gestures UI – featured in Minority Report when Tom Cruise orchestrated rather than navigated and clicked to find information within a computer (Cruise’s interaction was, by the way, a lot more realistic than Keanu Reeves’s quite ridiculous 3D/VR-glove attempts in Johnny Mnemonic). Kelly states that as screens and displays can be anything and everywhere – something he also managed to put into Minority Report – an easy and accessible approach would be a type of sign language rather than typed commands. Kelly gives eye tracking as an example of existing technology that scans your body language for information. Eye tracking could also be used to identify your mood and level of interest and adapt the presentation accordingly, for example noticing that you don’t understand a word and subtly explaining it to you. Iris-identification software could also be used to identify people to a larger extent than today, possibly even for advertising purposes.

In the gaming industry, PlayStation and Xbox have introduced gesture-based features and developed them further with their next-generation consoles, which offer even more possibilities for commands, navigation and in-game interaction through gestures.

The touch-based revolution that Apple initiated has been a great building block, preparing the public for even more physical interaction patterns to come.
The two disruptive interaction technologies described above will change the design of interfaces massively. For example, when using voice-based interaction, an effective, intelligent and attentive servant could remove the need for an actual interface or menu. Gestures, on the other hand, would bring us closer to the screen and make us more physically active, which would require a totally different approach. Xbox One combines both features and will be an interesting experience.

Both technologies could restore some of our humanity within the digital environment, as we’ll use human language – voice-based or body-based – as the primary tool for interacting with machines.

Interaction and interfaces part 1: Apple





The launch of redesigned interfaces always generates a lot of discussion, especially in fast-paced social media channels. For the past week the iOS7 interface (designed by Jonathan Ive) has caused quite a stir. The choice to move away from skeuomorphism toward a flatter, simpler and more modern-looking design has been regarded as both controversial and thoughtless – harsh accusations to be based mostly on HD screenshots and a short introductory film. John Pavlus’s excellent article at Co.Design, Cliff Kuang’s comment in Wired, and developers’ direct access to iOS7 have provided, and will continue to provide, more nuance and insight into the process behind Apple’s choices. Apple is brave to start moving toward new, unproven interface ground, but I believe they could have been even braver. Why? Because besides the 200 new features iOS7 contains, it’s mostly just a change of design. Most of the familiar styles of interaction will remain – as they have proven extremely successful – unchanged.


”I believe I can see the future
Because I repeat the same routine”

– Nine Inch Nails, Every day is exactly the same


When Apple launched the iPhone it was the first step on a beautiful new journey for all mobile customers. It changed how and why we think about and use mobile phones. It was genuinely groundbreaking, as it managed to overcome the final problems and finally brought our phones over the smartness barrier. Most types of services that Apple offered weren’t unique, but they were offered in a pristine, shining environment that, although new and maybe even frightening, immediately felt like home. Steve Jobs’s thoughts on skeuomorphism might not have been in line with a minimalist designer’s wet dream, but they gave the iPhone its human aura and feel. The real underlying reason for its success, though, was that the iPhone introduced a different way to interact with our phones. We came closer to them, and closer to the information we now could consume and distribute faster and more efficiently than ever – a closeness which translated into an even larger touch-based product, the iPad. Tablets may soon overtake laptop sales, and one can see in research report after research report how Apple has thoroughly changed the way we consume media, communicate with each other and interact with technology. So if they have our consumer loyalty, product worship and user experience in a firm grip, why won’t they dare to take an even greater risk when changing the iOS interface?

First of all, the landscape in which Apple’s products exist has changed dramatically since 2007. There is a lot more serious competition within the digital ecosystem of mobile devices. Android has taken significant market share, and previously huge players like Microsoft should not be underestimated. But the competition is not only coming from within the mobility sphere. Next-generation game consoles, smart TVs and an increasing array of Internet of Things devices are affecting the role and position of the iPhone and iPad, as they introduce both other screens for consumption and new possible ways of interaction. At the digital game conference E3, smartphones and tablets were frequently used as supplements to consoles and computer games, which has widened their ecosystem and range significantly (read this for more info).

Apple has a given place within this ecosystem, but they must think more about the interaction environment within the ecosystem and less about the interaction between their own products. The optimum – for every producing company – would of course be a complete ecosystem containing products from one company only. Service and app developers think and work this way: the service should be recognizable and usable in the same way regardless of platform. A totally Apple-dominated product environment is a utopian dream, both wonderful and troublesome at the same time, and as competing players grow their market shares Apple should focus on making a few friends and try living in fruitful symbiosis as a humble leader instead of an all-knowing pundit. For example, they could be better at meeting the demands and wishes of cross-platform apps and services – such as the new and improved Facebook Home – at least halfway.