Tag: Avatar

Facial Expression Replication in Realtime

The FaceShift software uses the Kinect's depth image to replicate the speaker's facial expression on an avatar's face in realtime. Awesome. Watch the video to see how fast the approach is – there is hardly any visible latency between the original motion and the avatar, and even really subtle facial motions are carried over to the virtual character. The software was developed by researchers from EPFL in Lausanne, a university with an excellent reputation, especially in computer graphics. It is envisioned for video conferencing and online gaming, allowing a virtual face-to-face situation while speaking.
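FaceShift's exact pipeline is not spelled out here, but the standard technique behind this kind of realtime retargeting is a linear blendshape model: for every depth frame, the tracker estimates one weight per expression (smile, blink, jaw-open, …), and the avatar's mesh is the neutral face plus a weighted sum of pre-sculpted expression shapes. A minimal Python sketch of that deformation step, with all names, shapes, and numbers being illustrative assumptions:

```python
import numpy as np

# Linear blendshape model:
#   V(w) = B0 + sum_i w_i * (B_i - B0)
# where B0 is the neutral mesh, B_i are sculpted expression shapes, and
# the tracker's job (here: estimated from the Kinect depth image) is to
# deliver the weights w for every frame in realtime.

def apply_blendshapes(neutral, deltas, weights):
    """
    neutral: (V, 3) array, the avatar's neutral-face vertex positions.
    deltas:  (K, V, 3) array, displacement of each of K expression shapes
             relative to the neutral face (e.g. smile, blink, jaw-open).
    weights: (K,) array of tracked expression weights, typically in [0, 1].
    Returns the deformed (V, 3) vertex positions for this frame.
    """
    # Contract the K axis: weighted sum of displacement shapes.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: a 4-vertex "mesh" with two expression shapes.
neutral = np.zeros((4, 3))
deltas = np.random.randn(2, 4, 3) * 0.01  # stand-ins for sculpted shapes

for frame_weights in (np.array([0.0, 0.0]),   # neutral face
                      np.array([0.8, 0.1])):  # mostly expression 0
    verts = apply_blendshapes(neutral, deltas, frame_weights)
    # ...hand verts to the renderer / game engine here...
    print(verts.shape)
```

Because only the small weight vector changes per frame, this evaluation is cheap, which is one reason the approach can keep latency as low as the video shows.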

Talk at ACM ASSETS Conference

Research results on sign language avatars will be presented on 25 October at this year's ACM ASSETS conference in Dundee, Scotland (see the programme). The title of our paper is “Assessing the Deaf User Perspective on Sign Language Avatars”, and the talk will be given by Quan Nguyen (DFKI) and Silke Matthes (Univ. Hamburg).

Find more information on our project page at DFKI.

10-Year Anniversary of the André Chair (Uni Augsburg) // Talk

On 28 October 2011, the Human Centered Multimedia chair of Prof. Elisabeth André celebrates its 10th anniversary.

Like me, Elisabeth André did research at DFKI Saarbrücken, and she is, so to speak, my “Doktormutter” (doctoral advisor). She invited me to give a talk at her celebration. Title: “Medium Mensch: Gesten, Gebärden und gutes Schuhwerk” (The Human Medium: Gestures, Signs, and Good Footwear).

The talk starts at 14:15.

Website for the celebration
