[mary-users] MaryTTS Viseme data

idoor idoorlab88 at gmail.com
Sun Apr 16 15:29:06 CEST 2017


Do you mean you can help? Would you please provide more info?
Thanks


On Sun, Apr 16, 2017 at 9:27 AM, idoor <idoorlab88 at gmail.com> wrote:

> Sorry, I do not know what "Pode ajudar" means. Is it Spanish?
>
> Best regards
>
> On Sun, Apr 16, 2017 at 9:21 AM, Jose Carlos de Oliveira <
> oliveirakol at gmail.com> wrote:
>
>> Pode ajudar (Portuguese: "Can you help?")
>>
>>
>>
>>
>>
>> Oliveira/Jose Carlos de
>>
>> Personal
>>
>> Brasilia - DF - Lago Norte
>>
>> SHIN CA05 CJM02 AP309
>>
>>    cel: +55 61 99311-9226
>>
>>
>>
>> From: mary-users-bounces at dfki.de [mailto:mary-users-bounces at dfki.de]
>> On behalf of Joan Pere Sanchez
>> Sent: Saturday, April 15, 2017 15:05
>> To: idoor Du
>> Cc: mary-users at dfki.de
>> Subject: Re: [mary-users] MaryTTS Viseme data
>>
>>
>>
>> Hi Dave,
>>
>> This task is the main goal of my PhD thesis. I do lip-sync from the input
>> text, using the phone duration estimates produced while the speech is
>> generated. You can develop your own strategy for lip/mouth synchronization,
>> but this is often an avatar-dependent (or interface-dependent; I am using a
>> talking head too) task. So, if you are using an avatar, it depends on
>> whether you can use blend shapes to interpolate from the initial pose to
>> the next one. Most MPEG-4 systems are able to do that automatically.
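>>
>> As a rough illustration of that interpolation idea (the rig interface and
>> shape indices below are placeholders, not any particular engine's or
>> MaryTTS's API), blending between two viseme key poses can be as simple as:
>>
>> public final class VisemeBlend {
>>
>>     /** Minimal stand-in for whatever blend-shape API your avatar exposes. */
>>     public interface BlendShapeRig {
>>         void setWeight(int shapeIndex, float weight);
>>     }
>>
>>     /** Linearly blend from one viseme pose to the next; t runs 0.0 -> 1.0. */
>>     public static void blend(BlendShapeRig rig, float[] from, float[] to, float t) {
>>         for (int i = 0; i < from.length; i++) {
>>             rig.setWeight(i, (1.0f - t) * from[i] + t * to[i]);
>>         }
>>     }
>> }
>>
>> Driving t with the phone start/end times gives the pose-to-pose mixing
>> mentioned above.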
>>
>> On one hand, you have each phoneme and its start and finish time. On the
>> other hand, you can adjust a set of visemes, one for each basic expression
>> (no more than 15 are needed), and then choose the sequence corresponding to
>> each word you are generating. It's the simplest and most efficient way to
>> achieve effective lip synchronization.
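>>
>> As a sketch of that phoneme-to-viseme step (the viseme names and phone
>> symbols in the table are made up for illustration; adapt them to your
>> voice's phone set and your avatar's shapes), something like this builds a
>> timed viseme track from aligned phones:
>>
>> import java.util.ArrayList;
>> import java.util.HashMap;
>> import java.util.List;
>> import java.util.Map;
>>
>> public final class VisemeTrack {
>>
>>     /** One viseme held from start to end (seconds). */
>>     public static final class Segment {
>>         public final String viseme;
>>         public final double start, end;
>>         Segment(String viseme, double start, double end) {
>>             this.viseme = viseme; this.start = start; this.end = end;
>>         }
>>     }
>>
>>     // Reduced phone-to-viseme table; illustrative entries only.
>>     private static final Map<String, String> PHONE_TO_VISEME = new HashMap<String, String>();
>>     static {
>>         PHONE_TO_VISEME.put("p", "BMP"); PHONE_TO_VISEME.put("b", "BMP");
>>         PHONE_TO_VISEME.put("m", "BMP");
>>         PHONE_TO_VISEME.put("f", "FV");  PHONE_TO_VISEME.put("v", "FV");
>>         PHONE_TO_VISEME.put("A", "AH");  PHONE_TO_VISEME.put("O", "OH");
>>     }
>>
>>     /** Turn aligned phones plus start/end times into merged viseme segments. */
>>     public static List<Segment> build(String[] phones, double[] start, double[] end) {
>>         List<Segment> track = new ArrayList<Segment>();
>>         for (int i = 0; i < phones.length; i++) {
>>             String vis = PHONE_TO_VISEME.containsKey(phones[i])
>>                     ? PHONE_TO_VISEME.get(phones[i]) : "REST";
>>             int last = track.size() - 1;
>>             if (last >= 0 && track.get(last).viseme.equals(vis)) {
>>                 // Merge consecutive phones that map to the same viseme.
>>                 Segment prev = track.remove(last);
>>                 track.add(new Segment(vis, prev.start, end[i]));
>>             } else {
>>                 track.add(new Segment(vis, start[i], end[i]));
>>             }
>>         }
>>         return track;
>>     }
>> }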
>>
>> Don't hesitate to contact me if you want more info or references about it.
>>
>> Best regards,
>>
>>
>>
>> 2017-04-15 18:27 GMT+02:00 idoor Du <idoorlab88 at gmail.com>:
>>
>> Hi all,
>>
>>
>>
>> I am new to MaryTTS and tried to call its API via:
>>
>>
>>
>> AudioInputStream audio = mary.generateAudio("testing");
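>>
>> For reference, a minimal sketch of the setup around that call might look
>> like this (assuming MaryTTS 5.x and its LocalMaryInterface; the
>> REALISED_DURATIONS output type used below for phone timings, and its exact
>> text format, should be checked against your version):
>>
>> import javax.sound.sampled.AudioInputStream;
>>
>> import marytts.LocalMaryInterface;
>> import marytts.MaryInterface;
>>
>> public class MaryVisemeDemo {
>>     public static void main(String[] args) throws Exception {
>>         MaryInterface mary = new LocalMaryInterface();
>>
>>         // Synthesize audio, as in the call above.
>>         AudioInputStream audio = mary.generateAudio("testing");
>>
>>         // Request phone-level timings for the same text; each line of the
>>         // REALISED_DURATIONS output should pair an end time with a phone.
>>         mary.setOutputType("REALISED_DURATIONS");
>>         String durations = mary.generateText("testing");
>>         System.out.println(durations);
>>     }
>> }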
>>
>>
>>
>> Now I want to animate mouth/lip shapes at runtime based on the audio.
>> How can I achieve that? Is there any viseme data associated with
>> the audio?
>>
>>
>>
>> Thanks in advance.
>>
>>
>>
>> Dave
>>
>>
>> _______________________________________________
>> Mary-users mailing list
>> Mary-users at dfki.de
>> http://www.dfki.de/mailman/cgi-bin/listinfo/mary-users
>>
>>
>>
>>
>> --
>>
>> Joan Pere Sànchez Pellicer
>>
>> kaiserjp at gmail.com
>>
>> www.chamaleon.net
>> +34 625 012 741
>>
>
>

