Posts Tagged ‘user experience’

Touching the… something

June 5, 2011

You may recall I’ve had words on various touch gadgets in the past, like here or over there – I thought I’d devote this post to a designer’s look at touch technology itself, so let’s dig in:

– undoubtedly riding the coattails of both the ridonkulously successful iPhone/iPad and their own attempts at a mobile OS, Microsoft have started teasing Windows 8. Now, if one did not much like MS, one might ask why one should believe they can make that work when Win-7 clearly ended up as little more than a graphic front-end on XP, but that would be outside the scope of this article, and… whoops, I guess I did ask it after all.
Anyway, that’s not the point – the point is, MS want to be the first to apply what is essentially the user interface of a smartphone to a “PC”, as they call it. Complete with swiping, tiles, apps, the whole shebang.

Behold:

The above video became the straw that made me start writing this, and now I must explain why…

Well, in one of the articles I linked up at the beginning, I mention hotkeys and such, and I also berate the lack of an actual, physical keyboard – at the heart of it, these two complaints are the problem with touch.
Of course, you may think that hotkeys are the stuff o’ geeks, and certainly there are ones you (or even I) have never heard of, but if you’re even a little productive on your computer, chances are you use some – cut/copy/paste, the arrow keys or tab, for example.

Then there’s the physical keyboard itself – it relates to touch like a church organ does to a harmonica. What I mean is, even if you’re the most dexterous person you know, the number of possible gestures you can squeeze out of ten fingers (even if none are used to hold the device) pales in comparison to the number of combinations even a relatively fumbling person can manage with those ten fingers and a qwerty keyboard – there’s just way more material there, which means potential access to a greater number of practical shortcuts.
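Just to put some hand-wavy numbers on that (my own back-of-the-envelope guesses, nothing scientific):

```javascript
// Rough count of keyboard "chords" versus touch gestures – my own guesses.
var printableKeys  = 48;  // letters, digits and punctuation on a typical qwerty
var modifierStates = 8;   // none, Ctrl, Alt, Shift and every combination (2^3)
var keyboardChords = printableKeys * modifierStates;

var touchGestures  = 15;  // tap, double-tap, long-press, four swipes, pinch, rotate...

console.log(keyboardChords); // 384 possible shortcuts, before multi-key sequences
console.log(touchGestures);  // versus a dozen or two gestures an OS can reliably tell apart
```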

But there’s something more to the difference between touch and an actual keyboard, and it’s the same reason that this never became popular:

It’s a laser keyboard, and it’s been around for almost twenty years, but maybe you’ve never seen one in real life, as it never really caught on.
Why? Because it lacks something you probably didn’t know you needed in a keyboard (and indeed in many other cases): Tactile feedback.
Your fingers, flying mostly below the radar of your consciousness, rely heavily on feedback from the keyboard, and therein lies the rub: anyone doing anything other than casual browsing (i.e. non-productive activities) on a device is going to slow down significantly if, instead of the fingers instantly “knowing” whether they hit those buttons or keys and whether they responded, you have to rely on a visual or audio cue (like a blink or a click) that you must consciously take note of.
That’s why the F and J keys on your keyboard, and the 5 if you have a numerical pad, have a small bump – to tell you, at a near-subconscious level, that your fingers are in the right place.

This is hyper-low tech – tactile response, something we’ve had since we were friggin monkeys, has snuck into everyday use (check your non-touch phone if you have one – see the little bumps? See the little nubs on your headphones, telling you which side is left and which is right?), and we rely on it so heavily we’re not even aware of it.

It makes sense – after all, how often do you ever look down at your fingers, or any other part of your body, to check where they are? Usually (as in, when you’re not learning the tightrope, for example) you don’t have to, because the body is exceptionally good at keeping you informed about what it’s doing.
Imagine doing something as simple as walking by consciously deciding what to do with your legs and feet all along – it is head-explodingly difficult, and the reason some people never learn to walk fully again after certain types of injuries, namely the type that makes the body “forget” how it feels and forces the conscious mind to work it out instead.

The conscious mind is good at many things – but being fast is not one of them.

So everyone who does even a small amount of blind-typing – which here means anyone who ever takes their eyes off the keyboard at any time during computer use – is going to have to overcome this. Now granted, for much casual use speed is not of the essence, and you’ll probably be OK once you’ve got new routines, but for most productive uses, touch is going to be a major hurdle.



(if you’re now sighing and rolling your eyes, mumbling to yourself how they’ll overcome that easily and soon, you should know that Nokia announced their tactile feedback touch keyboard as far back as 2007 – also something you’ve probably never seen or even heard of. Apparently it’s not that easy)

OK, so what am I saying here, that I want to outlaw – or at least diss heavily on – touch interfaces?
Of course not. Well, the dissing part may be true. But they’re cool, and what’s more, they are now popular with Microsoft, meaning we’re stuck with them even if they weren’t cool at all.

What I want to say with this article is this: some very important details concerning humans and design are especially apt to fly below the radar and be overlooked, precisely because their place in whatever process we’re talking about is mostly (or completely) outside the spectrum of the conscious experience involved, even if they’re absolutely central to it.
I think we should devote specific attention to noticing these details.
Not because we can’t make things that work otherwise – things like touch technology clearly work – but because we can make things that work even better if we do.

I also think touch interfacing has as much place on an actual computer as wifi on a sledgehammer, and by treating touch-on-everything as the one goal of computer development right now, MS and Apple both (damn, I wish we had more choices!) risk crippling the otherwise incredibly versatile tool a computer truly is.

Broader vision, people, broader vision.


On a great dane

May 26, 2011

I think I may be something of a jaded person sometimes – no, it’s true, at times I am critical, skeptical, even cynical, and find that occasions of real inspiration are few and far between.

But even I get caught up now and then, and it seems this happens almost every time Bjarke Ingels opens his mouth… so I thought I’d share a few hastily scrawled notes on that, if for no other reason than at least to express my admiration (and hopefully explain what drives it), and also to spread the idea that even grumpy old people like myself can be happy about something.

Here, enjoy one of his tour-de-forces first, if you haven’t already:

Now, obviously Bjarke is charming and funny and knows how to work both a presentation and an audience, and that is really cool. As lectures on architecture go, in my limited experience with them, this kind of feels like eating candy for dinner.

But – and this is what I like – it’s not all candy.

By Bjarke’s logic – to continue with the eating analogy – rather than force some politically correct but bland or bad-tasting health sludge down your gullet, or opt out of health entirely and just eat the aforementioned candy, you should make food that is nutritious and also provides you with a lovely eating experience of aroma, taste and appearance.


– and for dinner, raw beets, they’re good for you!

Actually, this is not a bad analogy when I think about it, because it obviously leads to this question: the idea of food that is both healthy and delicious has become quite ubiquitous and should not surprise anyone these days, so why does this trend seem so new and jaw-dropping when applied to architecture, as Bjarke Ingels does?
How come this same, wholesome idea – that what is good for you (for us, the world, society, whatever) should also be a good experience – has not permeated business, politics, architecture and design along with the food and beverage industry?

Immodestly, I like to think I’ve thought along these lines for a while, as I’m sure others also have – like, what if one of the powers of design could be to make the things society or the world desperately needs fashionable somehow? Even identifying said things, which are often not what seems most obvious or alluring?
What if we, designers, found other ways of thinking in terms of added value in our work, besides the (let’s face it) old-hat concept of desperately trying to create the “designer classic” of the future?

This is why I like Bjarke and his work so much – because no matter how hard you may try to point out flaws in his plans or ideas, you’re faced with two facts:

1. – that he and his company, notwithstanding the comic book presentations, rap music and humor, are completely serious about their ideals and willing to do a lot of homework to explain how they’re supposed to work, and

2. – when Bjarke speaks of hedonism and having a good experience, having a conscience is an indispensable part of it; you’re not just indulging yourself egoistically. To him, this is a requisite of any project.


for the sake of your job and/or sanity, don’t do a Google image search for hedonism – massively NSFW

It’s important to note that you may disagree with BIG’s way of realising those ideals, and that it may not at all be the only proposition one might come up with.
That said though, I think we should really be happy about this example of how it’s actually possible (in these ways, and so maybe in other ways as well) to be joyous, humorous and playful about using technological power, granted by mankind’s struggle through history towards modern society, in trying to create even greater societies and even better technology.

We should also appreciate the message, built into the concept of hedonistic sustainability, that you can seek enjoyment while still being a responsible person, and that combining joy and responsibility actually strengthens both.

Rock on, Bjarke!

Update: Wired featured a large piece on Bjarke which is now available online, and I’m happy to find that Williams (the piece’s author) seems to harbor the same fascination with the main strokes of Mr. BIG’s way as I do.

These generous and necessary ideas spread a helluvalot further when guys like that hook on to them than when, well, guys like me do. Kudos, sir.


Technology in clown shoes?

November 4, 2010

Did you get an iPad yet?

You do have an iPhone, right?

Or whatever brand of smartphone you prefer – you know, the kind that has proper internet access (and not WAP, either).
If you do, you’re part of the future, if we are to believe some of the big players. Apple seems to have already relegated their actual computers to an afterthought compared to the efforts placed on the carry-with-you gadgets, Microsoft is moving in, and every mobile service provider out there is scrambling to give you on-the-go internet, as is a plethora of other service providers (here in Denmark we have free internet access on buses and trains, for example).

Mobile internet is the future, apparently.

Well, I’m not gonna poop on that, it’s probably true – I am, however, going to add some comments to it, for you to make of what you will.

The defining factor here is speed. The speed of your internet connection, and the processing speed of your hardware.
By now, the standard internet connection is more than 1 megabit, and many if not most people have way more.
This doesn’t just mean you can download that video of the sneezing panda in mere seconds. It profoundly affects the way web sites are coded; almost all the progress in website design and coding has been dictated by the increasing bandwidth and computational capacity available.

The earliest web sites were just static text, perhaps with an 8-bit image, then later with a few spinning gifs, but try going to, oh, I don’t know, let’s say your Facebook page.
Run your mouse over it a little – see all those tips, hints, pictures and stuff popping out? The constantly updating statuses, chat, whatever?
Or check out your favorite news outlet, and notice all the Flash adverts all over it – they need the advertising for funding; without it they can’t exist.

Each of those things demands a chunk of code to be downloaded into your browser and executed by your computer. They also demand something else: The Mouseover Event.

Not many mobile units have that kind of bandwidth yet, and they also don’t have any mouseover capability (because there is no mouse pointer – your finger is the pointer). We’ll get back to the power in a moment, but that event needs a little explanation, especially if you’re not a web geek.


sadly, this is not live anymore – easily one of the funniest memes ever

You see, most of what makes websites of today non-static (or reactive) is due to the mouseover event family. Regular code (HTML or Javascript) can pay attention to your mouse pointer and make the website react to its location, and most Flash in websites is almost entirely based on mouse pointer detection.
Basically, the only thing most Flash content will do on a mobile device is play video and let you click things like regular links – forget about playing all but the simplest point-and-click games, for example.

This is no small point – websites that rely, say, on expanding menus can’t be used at all on a mobile unit, pure and simple; you can’t access the site content if you can’t get the menus to show.
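Just to make the problem concrete, here’s a bare-bones sketch of such an expanding menu – the markup and function names are made up for illustration, but it’s typical of the hover-driven menus I’m talking about:

```html
<!-- A drop-down that only opens while the mouse pointer hovers over it -->
<ul id="menu">
  <li onmouseover="showSubmenu()" onmouseout="hideSubmenu()">
    Products
    <ul id="submenu" style="display: none;">
      <li><a href="/widgets">Widgets</a></li>
      <li><a href="/gadgets">Gadgets</a></li>
    </ul>
  </li>
</ul>

<script type="text/javascript">
  // On a touch device there is no pointer to hover with, so these two
  // functions may never fire (or fire once and never reverse, depending
  // on the browser) – the submenu stays hidden and its links are unreachable.
  function showSubmenu() { document.getElementById("submenu").style.display = "block"; }
  function hideSubmenu() { document.getElementById("submenu").style.display = "none"; }
</script>
```

Tap “Products” on a touchscreen and, at best, you get one half of that hover behaviour – either way, the navigation behind it is effectively gone.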

Then there’s the issue of power. That’s actually pretty straightforward: if the content requires a lot of stuff to be downloaded and executed in the client (your browser), current mobile chips just don’t have the power, and, perhaps more importantly, neither do the batteries.
And battery technology is not following Moore’s law and can’t keep up – if you want an indication of the power consumption, try running some Flash content on your laptop without the charger plugged in. It will hork that power down like a stoner with a plate of burritos.

*BURP*

Also, it is worth keeping in mind that a lot of stuff gets downloaded or executed behind the scenes even if you don’t point at or click on it, sucking up bandwidth and power beyond your control.

The reason this situation exists is that web content as we know it was designed on, and for, stationary computers and laptops with a charger never too far away – obviously, the content was built to push the available power to its limit.

But what is the new situation – mobility – going to mean then?

Are we going to see all the knowledge and functionality of the 21st century internet suddenly becoming, not obsolete (as it is not being replaced by something better, not yet anyway), but simply not used anymore?
I mean, take Silverlight for example, Microsoft’s cutting-edge web technology, which they’ve spent years developing. Good luck with that on a mobile device – it’s geared specifically towards all that multifunctional, executable stuff, rich media and whatnot, and even if the mobile chips grow powerful enough to handle it, and even if mobile internet becomes fast (and cheap) enough, we still need some pretty exceptional leaps forward in battery technology to be able to use it.
HTML 5 is going to have problems too, as a large part of it is centered around extended mouse pointer detection functionality and rich media.

Will all this carefully developed technology be wasted? What should web developers focus on – powerful stationary machines or iPhones? With the current challenge of getting consistent behavior across platforms and a few years’ worth of computers (which has people like that frothing at the mouth already, let me tell you), they’re not going to be able to cover both flavors at once; the difference is way too big.
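Just to show what “covering both flavors” actually means in code, here’s the hover menu from earlier patched – my own quick sketch, nothing official – so it also reacts to clicks and taps:

```html
<script type="text/javascript">
  // Toggle the submenu on click/tap as well as hover.
  // (Assumes this runs after the #menu markup exists, e.g. at the bottom of
  // the page, and that the hover handlers from before are adjusted to match.)
  function toggleSubmenu() {
    var submenu = document.getElementById("submenu");
    submenu.style.display = (submenu.style.display === "none") ? "block" : "none";
  }
  document.getElementById("menu").onclick = toggleSubmenu; // taps fire click too

  // Every hover-driven widget on a site needs a second code path like this
  // (or a separate mobile version of the page) – multiply by all the menus,
  // tooltips and Flash doodads out there, and you see the size of the job.
</script>
```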

Are we going to see two internets – one for mobile, another for “real” computers? I mean, sure, we want stuff that works on our mobile devices, but we don’t want to come home to our actual computers and load up something that looks like Facebook Mobile. We want the bells and whistles there.
But do we want two internets?

I guess what I’m asking is, are we going to see this state-of-the-art, developed-over-decades internet technology go tripping over its own feet, like the awkward adolescent it is?


– and it designs back…

August 19, 2010

I’ve tried not to get into this, on account of it being a veritable can of ticked-off killer bees, but I think I must, so with a due sense of dread I utter the word: – Microsoft…

(disclaimer: while I am very much a Mac fan, this is not going to be a “Mac vs. PC” article, and I probably won’t respond very well to that line of discussion. Clear? OK, let’s proceed)

So what’s the deal here?

Well, as any designer with a user experience focus should, I have taken note of this thing, the computer – in fact, the very recent history and speedy ascent of this thing presents us with a unique opportunity to see what design actually does in a complex environment.
You see, the computer operating system is the first widely accepted, uniform “object” ever to have taken on its shape purely by design, and to have spread far, far beyond any specific demographic, environment or circumstance.

What I mean is, the chair, the hammer, the glasses, the car, all the other designed objects we have, they were made into their overall shapes by a meeting of function and design – so there’s a million chairs in the world, but they all have a seat, and they all have some manner of footing, and a car won’t work unless you have a reasonably intuitive steering method that is also reliable, and so on – you get the point.


some are weirder than others though

Not so for the computer OS. Nobody had any expectations for this entity, and there were literally no limitations – anything could have been built on the 0s and 1s that make up computing.
To say that something was limited only by imagination has rarely been more true.

Anyway, long story short, someone at Xerox PARC came up with a programming paradigm called Smalltalk, somebody thought of multitasking in windows, there was some borrowing, some stealing, some lawsuits and some business shenanigans, and presto, what we now know as Microsoft Windows became the prevalent operating system of computers all over the world.

And this design, it’s designing back – and this, as they say, is where the plot thickens…

For example, pretty well 90% of all computer users consider crashing and recovering from it a basic condition of working with a computer, much like refueling a car, but this is by design, not by function.
The only reason they think so is because the only OS they’ve ever known, Windows, is prone to crashing. They don’t know that a computer is not a thing that’s supposed to crash any more than the aforementioned car is, and that it should be treated the same way if any particular model is prone to do so.
Similarly, the way most people react when a piece of technology doesn’t do what it should is something along the lines of “oh, I’m so stupid with machines!” – but this, too, may (!) be traced back to the fact that Windows, most people’s first and only direct acquaintance with hi-tech, notoriously chides the user when something goes wrong, and/or talks tech above the user’s head with gibberish alerts that sound ominously complicated yet serve no purpose, since nobody in the room understands what they mean.
It has done so ever since it was first brought to the market, and an entire generation of people just got used to it.

The list could go on but as I said this is not about bashing Windows, so let’s just say the point is made.
(if you feel like a real tirade, albeit an extremely well spoken and interesting one, this guy did his homework and then some)

Now, I’d like to think there’s more of a reason for looking at this, and certainly for writing about it, than just establishing that I don’t like the Windows experience particularly – and there is.
That reason is, there’s a cost – mostly in actual money, too, we’re not talking abstract cost here – and we’re failing to notice.


imagine a pile of these reaching from earth to the moon…

For example, look at the concept called the productivity paradox. If you don’t feel like following that link, the brief version is that, in the business world, there is hardly any visible gain in productivity with the increase in IT expenses.
This is named a paradox and has statisticians and other researchers flailing for an explanation, but I believe the answer is right at hand: the most prevalent computer user interface is Windows, and Windows does not increase productivity, at least not in any way relative to the expenses incurred by using it.

Let’s do a bit of math here – I know, it sucks, but it matters:

Acme is a copywriting company that, so far, has worked on electric typewriters. There are some 25 writers employed, and they each make $40,000 a year.
So let’s get them computerized – each workstation costs $750, and each copy of Windows is $300.
That’s $26,250 right there – a good chunk of a full salary, gone in the first year (and we haven’t bought antivirus software yet, or even an office package). The computers had better make these people that much more effective, but how would they do that? They’re still just writing stuff.

It gets worse though.
A piece of hardware such as a computer should be able to run pretty well for at least 5-6 years (in fact there’s no real reason why it shouldn’t run much longer than that) but, mainly because of Windows, this company is probably going to have to upgrade most of the workstations every 3 years or so – and in addition to that, there are going to be point updates to the system, costing money too (say, a couple of hundred dollars per workstation per year on average).

But it gets worse yet.
Because, unless these people are mostly superusers, having 25 workstations running Windows is going to require at least one full-time IT support employee. If the company has servers, too, one guy probably won’t cut it (if they run Windows Server, IIS and the like).

– and just when you thought it couldn’t get any worse, it probably will; the company is all but guaranteed to experience serious downtime and expenses due to a virus or hacker attack. Most attacks (even taking the spread of Windows into account) exploit weaknesses and security holes in Windows that should not exist in the first place.


Remember these? Gone. By the wheelbarrow.

The calculation is not meant to be textbook, but I’d have to be off by quite a lot before computerizing such a company is going to be worth it.
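If you want to poke at the numbers yourself, here’s the back-of-the-envelope version as a few lines of code – the per-seat prices are the ones from above, while the update costs, replacement cycle and IT salary are my own rough assumptions:

```javascript
// Rough cost of computerizing Acme over six years – assumptions, not research.
var writers          = 25;
var writerSalary     = 40000;  // per year, from the example above
var workstation      = 750;    // hardware, per seat
var windowsLicense   = 300;    // per seat
var updatesPerSeat   = 200;    // point updates etc., per seat per year (assumed)
var itSupportSalary  = 40000;  // one full-time support person (assumed)
var years            = 6;      // the lifetime the hardware *should* have had
var replacementCycle = 3;      // years between forced workstation upgrades

var initialOutlay = writers * (workstation + windowsLicense);               // 26,250
var replacements  = writers * workstation * (years / replacementCycle - 1); // 18,750
var updates       = writers * updatesPerSeat * years;                       // 30,000
var support       = itSupportSalary * years;                                // 240,000

var total = initialOutlay + replacements + updates + support;
console.log(total);                // 315,000 dollars over six years...
console.log(total / writerSalary); // ...or nearly eight writer-salaries,
                                   // before any virus or downtime costs.
```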

Again, my point here is not that they’d be better off with Mac systems – my point is that we, society, businesses, take this as a prerequisite condition of day-to-day operation without question, even though similar conditions in any other field of human endeavour would have us frothing at the mouth.

Could this be because the omnipresence and uniformity of Windows has redesigned our perception? I maintain that it is, simply because there is no other explanation for our glaring blind spots regarding computers.

I think we, people, need to start learning to cope with technology and design – in fact, we’re overdue; the personal computer is spreading in the form of smartphones, and these are already beginning to show the same kind of vulnerability. If we just sigh and resign ourselves to this development, we will react to it in the wrong way, also known as the Microsoft way: the technological equivalent of frantically trying to heal two broken legs and a gunshot wound with a band-aid and a Tylenol.

The world of computers should be teaching us this, to the tune of billions of dollars every year in costs incurred by virus-, bot- and worm-attacks, spam (some estimates have more than 75% of all spam spreading via Windows PCs, thanks to their inherent vulnerabilities) and just regular wasted time.

And it’s not even that we don’t know – articles flood the news every time there’s a major virus or worm attack. It just never gets traced back to Microsoft, which is even more baffling when you realize that significant action could be taken against these attacks, and the costs they cause, by redesigning the computer operating system in a sensible way.

I think this is really an appeal to the true power of design – just imagine if we solved this problem, born from (bad) design, with (good) design, and saved the world billions upon billions of dollars…


“UX is not” manifesto, really…

October 27, 2009

This is going to be just a short post (by my standards anyway) – just thought I’d take a moment to commend this wonderful run-down by UX designer Whitney Hess of what User Experience, and the design thereof, is not.

While I’m usually not particularly fond of defining anything through what it isn’t, the field of UX (as it’s also mentioned in the article linked above) is so new and floaty that, to get more familiar with it, the is-not view is one of the steps that we probably have to go through.

Now, I am not going to go down the list and comment on everything in the article – suffice to say, I mostly agree with the points being made, and you’ll just have to read it to see what they are (I also suggest clicking on some of Whitney’s many links, interesting stuff there too).

And while you’re at it, feel free to compare what’s being communicated in the article to my personal work manifesto (see how much I like that word? Terrible!), as well as the reason for my self-ascribed title of “Design & User Experience Creative Playmaker” (conveniently located in the lower right side of my contact page there).
Yep, we certainly seem to be on roughly the same page.


– and you find all of that over at my website, of course

Finally, a side note: did you peep that “drag to share” widget about halfway down the page? Awesome!
