
There’s so much for me to talk about from ISMAR09, and I was only there for half of the conference. I have a half-dozen more posts sketched out for the next couple of weeks. I did get to attend the demo night on Monday, which showcased the real hands-on applications of augmented reality. Gail Carmichael posted a video of some of the demos, so I’ll try to expand on what was shown.

Sony EyePet Demo – Ever since I saw the trailer for this game, I’ve wanted to own it, so much so that I’m willing to buy a PS3. The ability of the camera to pick up hand motions was impressive. In the video, he’s bouncing the head of an AR bobble-head doll to make bubbles come out and tickling the monkey with his fingertips. As a game, it’s mostly a cute demonstration of the technology aimed at the 3-8 year old market (and AR enthusiasts), but it’s a precursor of bigger things. In the future, motion capture will be the new controller.

The Tank and Kid Demo – This one showed how virtual objects and real ones can interact in a seamless manner. Once again, this technology will be best used in games, but it could bleed over into many other applications.

Shooter VR/AR Demo – Notice I’m not using the real demo names, because I’m not even sure what “Computing Alpha Mattes in Real-Time for Noisy Mixed Reality Video Streams” means. Unfortunately, it’s hard to get a feel for what this demo did from the video, which makes it look like a cross between Max Headroom and a VR game. In some ways, that’s all it was, because it used blue screen technology to mix virtual reality dioramas in with the player. I found it interesting when the player looked at the boundary between the real and the virtual; I got a real sense of how the two realities can blend together at the edges. Let’s hope they can figure out how to do this without the blue screen.
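For the curious, here’s a rough sketch of the basic blue-screen idea the demo builds on: estimate an alpha matte from how “blue” each pixel is, then blend the live camera frame over the virtual scene. To be clear, this is just the naive chroma-key version (my own toy Python, with made-up function names), not the researchers’ real-time, noise-robust algorithm.

```python
# Rough sketch of blue-screen (chroma key) alpha matting and compositing.
# NOT the ISMAR authors' algorithm -- just the naive idea it improves on.
import numpy as np


def chroma_key_alpha(frame: np.ndarray, softness: float = 40.0) -> np.ndarray:
    """Estimate a per-pixel alpha matte from a frame shot against a blue screen.

    frame: H x W x 3 uint8 image in RGB order.
    Returns an H x W float array in [0, 1]; 1 = foreground (keep), 0 = background.
    """
    rgb = frame.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # "Blueness": how much blue dominates the other channels.
    blueness = b - np.maximum(r, g)
    # Soft threshold so edges (hair, motion blur) get fractional alpha
    # instead of a hard cut -- this is where matting quality lives.
    return np.clip(1.0 - blueness / softness, 0.0, 1.0)


def composite(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Blend the keyed foreground over a (virtual) background image."""
    alpha = chroma_key_alpha(frame)[..., None]  # H x W x 1
    out = alpha * frame.astype(np.float32) + (1.0 - alpha) * background.astype(np.float32)
    return out.astype(np.uint8)


if __name__ == "__main__":
    # Toy example with synthetic images; in the demo this runs per video frame.
    h, w = 240, 320
    blue_screen = np.zeros((h, w, 3), np.uint8)
    blue_screen[..., 2] = 255                              # pure blue backdrop
    blue_screen[100:140, 140:180] = (200, 150, 120)        # a "player" blob in front of it
    virtual_scene = np.full((h, w, 3), (30, 120, 30), np.uint8)  # stand-in VR diorama
    mixed = composite(blue_screen, virtual_scene)
    print(mixed.shape, mixed.dtype)
```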

ProFORMA Rapid Model Acquisition – Here’s one I can almost understand from the abstract title. The program creates 3D models in real time, which is mind-blowing. The downside is that you have to rotate the object in front of the camera for it to be captured, but the possibilities are crazy. It won Best Demo for a good reason. Mix ProFORMA with other technologies like Photosynth and we could achieve a 3D mapping of the world in short order (4-5 years). More on ProFORMA here.
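As a very loose illustration of the “rotate the object and build a model” idea, here’s a toy two-view sketch using OpenCV: match features between two frames of the rotating object, recover the relative motion, and triangulate a sparse point cloud. This is my own generic sketch, not ProFORMA’s actual method, which builds a textured surface model online as you turn the object.

```python
# Toy two-view reconstruction: the bare-bones version of "turn the object,
# get 3D back." Not ProFORMA itself, just the underlying geometry.
import cv2
import numpy as np


def sparse_points_from_two_views(img1, img2, K):
    """Triangulate matched feature points from two views.

    img1, img2: grayscale frames of the object before/after a small rotation.
    K: 3x3 camera intrinsic matrix.
    Returns an N x 3 array of 3D points (up to scale).
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative pose between the two views (equivalently, the object's rotation).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier matches into 3D (homogeneous coordinates).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T
```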

Animatronic Shader Lamps Avatars – I would have been more impressed by this demo if Mark Mine from the Disney Imagineers hadn’t explained this same technology during his talk. Regardless, it grabbed attention because they had a comic as the face, making fun of passersby.

Thanks to Gail Carmichael, who took the video and also posted more pictures on her blog. I sat next to her during the Disney keynote while she took tons of pictures with her giant, expensive-looking camera and uploaded them to her Flickr stream. I had total camera envy and was afraid she’d laugh at my tiny phone camera. Cheers to you, Gail, for helping put on a great ISMAR and taking fantastic pictures.


  • Thanks for this! I’m going to link back here. I was so tired I didn’t have the energy to write up the descriptions… 😉

  • Greetings. I am one of the researchers working on the “Animatronic Shader Lamps Avatars” project mentioned above. Regarding the comment about Mark Mine from Walt Disney Imagineering having “explained this same technology during his talk,” that is not accurate. I know and sometimes work with Mark (he is a close friend), and was present for his talk, and what he showed was WDI’s work projecting pre-recorded animations onto heads. Our “avatar” demo at ISMAR involved the *live* capture of the comedian’s head pose, shape, and appearance; and then re-mapping/warping, transforming, and finally projecting that onto a moving physical model of another head, along with full-duplex audio and video. This meant that the comedian could look around as if he was sitting in the avatar’s place, the avatar would move correspondingly, and all of his facial expressions, etc. were mapped (live) onto the avatar. Nobody else that we are aware of, including WDI, has done this. (Incidentally, a paper describing the approach was peer-reviewed and accepted for presentation and publication at that same conference. Only about 20% of the submissions were accepted.) Take care.

  • @greg

    Thank you for the explanation and the clarification. Either way, both the Disney and the comedian demonstrations were interesting and enlightening. Thanks for stopping by!
