Using a mix of programs, including FLARToolkit for the AR portion, Javad has created a talking head that you can interact with. Javad does a much better job of explaining the pieces he used to make it happen:
What takes place is that we have an AIR client (built using Cairngorm) communicating with a Java back end using Remote Objects over BlazeDS. The text is sent to the Java server application, where a response is generated using AIML and a Java chatbot framework. This text response is passed to a text-to-speech (TTS) socket server, which produces both an mp3 byte array and something called MBROLA input format. MBROLA input format is a stream of text symbols (phonemes), each with a duration in milliseconds, which map to visemes (mouth shapes).
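To make the MBROLA side of this concrete, here is a minimal Java sketch of parsing that input format, which lists one phoneme per line as a symbol, a duration in milliseconds, and optional pitch targets. The `Phoneme` record and `MbrolaParser` class names are my own illustration, not part of Javad's code:

```java
import java.util.ArrayList;
import java.util.List;

public class MbrolaParser {
    // Hypothetical holder for one phoneme symbol and its duration in ms.
    public record Phoneme(String symbol, int durationMs) {}

    // Parse MBROLA .pho-style input: "symbol duration [pitch targets...]".
    // Only the first two fields matter for driving mouth shapes.
    public static List<Phoneme> parse(String input) {
        List<Phoneme> result = new ArrayList<>();
        for (String line : input.split("\\r?\\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith(";")) continue; // skip comments
            String[] parts = line.split("\\s+");
            result.add(new Phoneme(parts[0], Integer.parseInt(parts[1])));
        }
        return result;
    }

    public static void main(String[] args) {
        String sample = "_ 50\nh 60 0 120\nE 90\nl 70\noU 180";
        for (Phoneme p : parse(sample)) {
            System.out.println(p.symbol() + " " + p.durationMs());
        }
    }
}
```

The pitch targets (the trailing numbers on a line) matter for synthesis but can be ignored for animation, which only needs the symbol and how long it lasts.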
The whole lot is packaged and sent back over the wire via BlazeDS, where an Augmented Reality Viewer, created as an advanced Flex visual component (using Papervision3D and FLARToolkit), takes over. The model head was created in Maya and is an animated Collada with 13 different mouth shapes that have been mapped to the output received from the MBROLA stream.
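The phoneme-to-mouth-shape mapping could be as simple as a lookup table. The real project's assignments onto its 13 Maya shapes aren't published, so the indices and groupings below are purely illustrative (phonemes with similar lip positions, such as the bilabials p/b/m, typically share one viseme):

```java
import java.util.Map;

public class VisemeMap {
    // Hypothetical phoneme-to-viseme table; indices are illustrative only.
    static final Map<String, Integer> PHONEME_TO_VISEME = Map.ofEntries(
        Map.entry("_", 0),                                        // silence -> closed mouth
        Map.entry("p", 1), Map.entry("b", 1), Map.entry("m", 1),  // bilabials share a shape
        Map.entry("f", 2), Map.entry("v", 2),                     // labiodentals
        Map.entry("E", 3), Map.entry("e", 3),                     // spread vowels
        Map.entry("oU", 4), Map.entry("O", 4),                    // rounded vowels
        Map.entry("u", 5), Map.entry("w", 5)                      // tightly rounded
    );

    // Fall back to the neutral/closed shape for unmapped phonemes.
    public static int visemeFor(String phoneme) {
        return PHONEME_TO_VISEME.getOrDefault(phoneme, 0);
    }

    public static void main(String[] args) {
        System.out.println(visemeFor("m"));  // bilabial -> 1
        System.out.println(visemeFor("zz")); // unmapped -> 0
    }
}
```

Collapsing roughly 40+ phonemes down to 13 shapes works because many phonemes are visually indistinguishable on the lips, which is why a modest set of mouth shapes is enough for convincing lip sync.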
To play the speech response, the mp3 byte array is written to a temporary file, read into a sound object, and played back. At the same time, the MBROLA stream is parsed into an ArrayCollection of frames (for the model head) and durations, which is then iterated over in the handler method of a timer.
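That timer-driven iteration can be sketched as follows. The original is ActionScript (an `ArrayCollection` stepped through in a `Timer` handler); this Java translation with a hypothetical `VisemePlayer` class shows the same idea: on each tick, compare elapsed playback time against the accumulated frame durations and advance to the current mouth shape.

```java
import java.util.List;

public class VisemePlayer {
    // One animation frame: a mouth-shape index and how long to hold it (ms).
    public record Frame(int viseme, int durationMs) {}

    private final List<Frame> frames;
    private int index = 0;         // frame we are currently showing
    private long frameStartMs = 0; // when that frame began, in playback time

    public VisemePlayer(List<Frame> frames) { this.frames = frames; }

    // Called from a timer handler with the elapsed playback time;
    // advances past any expired frames and returns the viseme to show now.
    public int currentViseme(long elapsedMs) {
        while (index < frames.size() - 1
                && elapsedMs - frameStartMs >= frames.get(index).durationMs()) {
            frameStartMs += frames.get(index).durationMs();
            index++;
        }
        return frames.get(index).viseme();
    }

    public static void main(String[] args) {
        VisemePlayer player = new VisemePlayer(List.of(
            new Frame(0, 50), new Frame(1, 60), new Frame(3, 90)));
        System.out.println(player.currentViseme(0));   // 0: still in first frame
        System.out.println(player.currentViseme(55));  // 1: past the 50 ms frame
        System.out.println(player.currentViseme(120)); // 3: past 50 + 60 ms
    }
}
```

Driving the animation by elapsed time, rather than one frame per tick, keeps the mouth roughly in sync with the mp3 even if timer ticks fire late or are dropped.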
All this back end work results in an impressive demonstration shown in this video:
This talking head has a lot of potential applications, from gaming to education, though mostly it reminds me of Max Headroom.
Development of AR in all its incarnations will come from a variety of sources. Javad is showing his contribution through this admirable project. Stop by his blog and say hello.