I discuss how soulless, jerky avatars are being replaced by the latest in avatar technology and human effort.
Avatars that attempt to use sign language are viewed with suspicion by people who know sign language.
The avatars are often described as soulless and devoid of expression, and people are unable to understand them.
Movements are jerky and do not reflect the magic of a real, live, animated signer. To get an idea of what I mean, check out Tessa and Guido. Bless them, the technology was not quite there yet.
It would seem that there has been a rush to fund projects associated with avatar technology. In the past 15 years, the European Union and multiple universities have thrown hundreds of thousands of pounds at research in the hope of developing avatars that can be used on websites and to record sign language dictionaries.
There has been a range of avatar projects in the UK, like eSign and the IBM-funded SiSi project, which used real signers for their avatars. More recently, Microsoft has jumped on the bandwagon by developing Kinect technology, using synthesised avatars to translate speech. It is a dream of these people to provide automatic signing for speech and television broadcasts without having to drag a real live interpreter into the studio. There could even be an off-the-shelf avatar product that deaf people can use to translate text. Cost-effective in the long run, they say. Really?
The real issue here is that most avatar technologies are NOT at a level that captures the full expressive wealth of sign language (1). We CANNOT substitute an avatar for an interpreter signing the 8 o’clock news. Then again, we could always bring in Thamsanqa Jantjie, the fake sign language interpreter from the Mandela memorial.
Incorporating cultural context, subtle off-the-cuff expressions, prior knowledge, metaphors and examples into a signed conversation is the mark of a fluent signer. We will ALWAYS need real live signers to be represented through avatars.
Sceptical, I stumbled across the latest developments in avatar technology by MocapLab in Paris. I was so impressed by what they had done with their avatar technology that I had to have a go. Here is me signing several phrases in International, British, New Zealand and American Sign Languages.
MocapLab are definitely on the right track. See how they captured shoulder movements, squints, narrowing of the eyes and subtle mouth movements using real people who sign. That’s more like it!
If avatar technology is done with the right signers, then hey-ho! This opens up WHOLE new areas! Excited, I started thinking about the use of proper sign language avatars in children’s TV programmes. We could have a signing dog WITH personality! Avatars could be used in apps and in e-learning resources to teach both deaf and hearing children about emotions and facial expressions, and to represent signing deaf characters properly. People looking to deliver sensitive information (e.g. health or legal matters) could use avatars to conveniently screen the real person from the information itself without boring us rigid. Avatars could be used in rail station announcements declaring a change of platform, as the Gare de Lyon in Paris has done. The avatar’s sign for platform 3 could simply be swapped for the sign for platform 6.
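To see why that last idea is so cheap to keep up to date, here is a minimal sketch, in Python, of how a slot-based signed announcement might be assembled from pre-captured clips. Everything in it is hypothetical: the clip names, the template and the data structures are illustrative placeholders, not any real system’s API.

```python
# A minimal sketch of a slot-based signed announcement, assuming a library
# of pre-captured motion clips. All names here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class SignClip:
    """A pre-recorded motion-capture clip for a single sign."""
    name: str
    duration_s: float  # playback length in seconds


# Hypothetical clip library captured from a real signer.
CLIP_LIBRARY = {
    "TRAIN": SignClip("TRAIN", 0.8),
    "CHANGE": SignClip("CHANGE", 0.6),
    "PLATFORM": SignClip("PLATFORM", 0.7),
    "THREE": SignClip("THREE", 0.5),
    "SIX": SignClip("SIX", 0.5),
}

# Announcement template: fixed signs plus one slot for the platform number.
ANNOUNCEMENT_TEMPLATE = ["TRAIN", "CHANGE", "PLATFORM", "{platform}"]

NUMBER_SIGNS = {3: "THREE", 6: "SIX"}


def build_announcement(platform: int) -> list[SignClip]:
    """Fill the platform slot and return the clip sequence to play."""
    signs = [s.format(platform=NUMBER_SIGNS[platform])
             for s in ANNOUNCEMENT_TEMPLATE]
    return [CLIP_LIBRARY[s] for s in signs]


# Swapping platform 3 for platform 6 changes exactly one clip:
for clip in build_announcement(6):
    print(clip.name, clip.duration_s)
```

Real signing is not word-for-word slot filling, of course: signs blend into one another, so a production system would also need to handle transitions between clips. The point of the sketch is simply that the platform number is the only part that has to change.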
The possibilities are there.
To read more about sign language, literacy and technology, check out the rest of my blog here or follow @playbyeye.
Amanda Everitt
References:
1: Research on the limitations of avatar technology: Bottoni, P., Capuano, D., De Marsico, M., Labella, A. and Levialdi, S. (2012) ‘Experimenting DELE: a Deaf-centred E-Learning Visual Environment’, Proceedings of AVI ’12, the International Working Conference on Advanced Visual Interfaces, New York, 2012, pp. 780–781. Also available online here (accessed 13 November 2013).