Tag: Interaction

Microsoft’s Vision of HCI

Although Microsoft has a reputation for building suboptimal user interfaces, its research department actually employs several world-class interaction design researchers (Buxton, Hinckley, Wilson, Benko). No major human-computer interaction conference, be it CHI, SIGGRAPH, UIST, or ITS, goes by without several papers and keynote speakers from Microsoft Research. Recently, Microsoft released several videos about the future of human-computer interaction, and these videos assemble many quite recent research findings, adopted almost one-to-one.

Here’s another one:

Some of the research concepts you see in the videos are:

  • Proxemic interaction (cf. Saul Greenberg, Till Ballendat et al.)
  • See-through displays
  • Multitouch and animation (cf. Takeo Igarashi)
  • Telepresence
  • Back-of-the-device interaction (e.g. Baudisch)
  • In-air gesture control
  • Interaction with and between multiple devices
  • Tangible interaction (cf. Hiroshi Ishii et al.)

Windows 8 Critique by UI Expert Nielsen

Jakob Nielsen is a well-known and highly regarded expert in the world of interface/interaction design and human-computer interaction in general. Shortly after the release of Windows 8 he wrote a critique that caused a lot of controversy on the net (try searching for “Nielsen Windows 8”). Nielsen heavily criticizes the way Windows 8 tries to fuse a desktop and a mobile UI.

What’s interesting is that Nielsen backed his critique with empirical user studies of 12 experienced PC users. The three findings that I find most relevant are these:

  • The double desktop (one traditional, one with big touchable tiles) is confusing since one has to switch between two worlds that work very differently (inconsistency).
  • The flat Metro style, while visually pleasing, makes it hard to distinguish regular text from clickable links.
  • Some of the new gestures, e.g. those that require the user to swipe from outside the touchpad into it, are highly error-prone.

I recently got my own Windows 8 laptop and experienced some of these concerns firsthand. Even now, I find it difficult to know whether I’m in the Metro world or in the traditional desktop world, because ALT+TAB switches between all applications of both worlds. Gesture interaction is a pain. Of course, Microsoft has the problem that it is trying to introduce new interaction techniques across a huge range of actual hardware devices. That may be one reason why the resulting experience does not feel as optimized as it does on Apple products.

Nielsen’s own summary is this:

Hidden features, reduced discoverability, cognitive overhead from dual environments, and reduced power from a single-window UI and low information density. Too bad.

If you want a balanced picture, read some of the counterarguments on the net. I do not link to any here because I haven’t found anything substantial yet.

Facial Expression Replication in Realtime

The FaceShift software manages to use the depth image of the Kinect to replicate the speaker’s facial expression on an avatar’s face in realtime. Awesome. Look at the video to see how fast the approach is: there is hardly any visible latency between the original motion and the avatar, and even really subtle facial motions are translated to the virtual character. The software was developed by researchers from EPFL Lausanne, a research center with an excellent reputation, especially in the area of computer graphics. The software is envisioned for use in video conferencing and online gaming, to allow a virtual face-to-face situation while speaking.
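For a rough idea of the retargeting step: realtime facial animation systems of this kind typically represent the avatar as a blendshape model, i.e. a neutral mesh plus weighted expression offsets, with the weights fitted anew to every depth frame. Here is a minimal numpy sketch of just the blending step; all names and shapes are illustrative, not FaceShift’s actual API:

```python
import numpy as np

def blend_face(neutral, deltas, weights):
    """Deform a neutral face mesh by a weighted sum of expression offsets.

    neutral : (V, 3) vertex positions of the avatar's neutral face
    deltas  : (K, V, 3) vertex offsets of K expressions (smile, blink, ...)
    weights : (K,) expression weights, fitted per frame by the tracker
    """
    # Weighted sum over the K expressions yields the deformed (V, 3) mesh.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy usage: a one-vertex "mesh" with two expressions.
neutral = np.zeros((1, 3))
deltas = np.array([[[1.0, 0.0, 0.0]],   # expression 0 pushes the vertex in x
                   [[0.0, 1.0, 0.0]]])  # expression 1 pushes it in y
print(blend_face(neutral, deltas, np.array([0.5, 0.2])))  # [[0.5 0.2 0. ]]
```

The hard part, of course, is not the blending but fitting the weights to noisy depth data fast enough for realtime use.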

Permanent Subjective Photo Stream

The company “OMG Life” has released a camera called “Autographer” that you can wear around your neck and that takes a stream of pictures throughout the day, based on sensor readings. The sensors (location, acceleration, light, temperature, …) are used to decide whether a moment is “interesting”.

This idea is familiar from the MyLifeBits project at Microsoft Research around 2002. The researchers developed a device called SenseCam and published their rule-based heuristics for detecting interesting moments (e.g. a drastic change in light may indicate that the user is entering or leaving the house); a toy version is sketched below. The MyLifeBits project goes even further back, to 1945, when Vannevar Bush published his vision of the Memex, a computer system that would store all of an individual’s books, notes, and other documents and thus extend one’s memory. You can read his article “As We May Think” online (The Atlantic, July 1945).
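To make the rule idea concrete, here is a toy sketch of such trigger heuristics in Python. It follows the spirit of the published SenseCam rules, but the sensor fields and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    lux: float       # ambient light level
    accel: float     # acceleration magnitude in g
    moved_m: float   # meters moved since the last reading (from GPS)

def should_capture(prev: Reading, cur: Reading) -> bool:
    """Fire the camera when the sensors suggest a change of scene."""
    light_jump = abs(cur.lux - prev.lux) > 500            # e.g. walking through a door
    came_to_rest = prev.accel > 1.3 and cur.accel < 1.05  # user stopped moving
    relocated = cur.moved_m > 50                          # arrived somewhere new
    return light_jump or came_to_rest or relocated

# Example: stepping out into daylight triggers a capture.
print(should_capture(Reading(120, 1.0, 0), Reading(900, 1.0, 2)))  # True
```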

Also see the Autographer homepage and the SenseCam project page.

References
Gemmell et al. (2002). MyLifeBits: Fulfilling the Memex Vision. In: Proc. of ACM Multimedia.

New Finger Controller: Leap Motion

Update: The device appears to have hit the market. Engadget published a hands-on report on May 25, 2012 (see the video further below).

The company Leap Motion has been working for four years on a high-precision detector à la Kinect, but far more precise, as their video shows. Microsoft’s Kinect can track the body as a 3D “skeleton” but is not precise enough to detect individual fingers. If Leap Motion’s device works as shown and is indeed cheaper than the Kinect, it would truly revolutionize the sensor market. The Kinect is not suitable for sign language input, for example; with Leap Motion, sign language recognition would at least be theoretically possible.
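Leap Motion has not published how its sensor works, but to see why depth resolution is the bottleneck for finger-level input, here is a deliberately naive sketch of fingertip detection on a raw depth image. Everything here (thresholds, the nearest-blob assumption) is illustrative and has nothing to do with Leap Motion’s actual method:

```python
import numpy as np

def fingertip_candidates(depth_mm, near=200, far=600, n_tips=5, min_dist_px=20):
    """Naive fingertips on one H x W depth frame (values in millimeters).

    Assumes the hand is the blob closest to the sensor and that fingertips
    are the pixels sticking out farthest toward the camera.
    """
    ys, xs = np.nonzero((depth_mm > near) & (depth_mm < far))  # hand pixels
    tips = []
    for i in np.argsort(depth_mm[ys, xs]):  # closest pixels first
        p = np.array([xs[i], ys[i]], dtype=float)
        # Greedy non-maximum suppression: skip points near an accepted tip.
        if all(np.linalg.norm(p - t) >= min_dist_px for t in tips):
            tips.append(p)
        if len(tips) == n_tips:
            break
    return tips
```

With depth noise on the order of a centimeter at room distances, neighboring fingers simply blur into one blob for the Kinect, which is exactly what a more precise sensor would fix.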

Here is the video from Leap Motion:

Links:

Here is the hands-on test from Engadget:

Google’s projectglass and a Counter-Vision

Google has published a concept video showing smart glasses in action that bring the Internet into everyday life. Reminders are displayed, you are navigated through the streets, photos are shot via the glasses by voice command and uploaded directly to Google+. And of course you can listen to music and Skype… brave new world.

Here is an interesting counter-vision to augmented reality à la Google Glass:

Sight from Sight Systems on Vimeo.

 
