New Scientist is reporting that Microsoft's lab in Cairo, Egypt, has created a way to combine cell phone videos of the same scene, from multiple sources, into a single video, and they can do it in real time. They are calling it Mobicast, and it will allow them to create live broadcasts of streaming video from multiple angles.
The idea is very similar to Photosynth, but applied to video. Let's say two people are recording video of the Macy's Thanksgiving Day Parade with their cell phones. Mobicast will recognize that they're recording the same scene in the same place and will synchronize their clocks with the server, compare the streams, and align the frames using image-recognition technology to create a single, wider-angle video. While all of this happens, the video is continuously streamed, live, to other phones and devices.
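To get a feel for what "aligning the frames" involves, here's a minimal sketch of the kind of feature-matching and warping step such a system would need, written with OpenCV. This is an illustration of the general technique, not Microsoft's actual code; the function name and parameters are assumptions.

```python
# Sketch: align two overlapping phone frames into one wider composite.
# Uses OpenCV feature matching and a homography estimate (RANSAC).
import cv2
import numpy as np

def stitch_frames(frame_a, frame_b):
    """Return a rough wide-angle composite of two overlapping frames."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Detect and describe keypoints in both frames
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)

    # Match descriptors and keep the strongest correspondences
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:100]

    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate the homography mapping frame_b into frame_a's coordinates
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)

    # Warp frame_b onto a wider canvas and overlay frame_a
    h, w = frame_a.shape[:2]
    canvas = cv2.warpPerspective(frame_b, H, (w * 2, h))
    canvas[0:h, 0:w] = frame_a
    return canvas
```

Doing this per frame pair, on every frame of a live stream, is exactly why real-time operation is the hard part.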
The challenge, according to one of the developers, Ayman Kaheel, is doing all of this in real time. Fortunately (or not, depending on how you look at it), phone video is fairly low quality and has low frame rates, which is what makes real-time processing feasible for Mobicast. Once a user streams video to Mobicast's servers, they receive feedback on their phone highlighting their contribution to the stitched video, so they can reposition their phone to contribute from a more effective angle.
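The feedback step could be as simple as projecting the contributor's frame boundary into the stitched view and sending that outline back to the phone. The sketch below shows one way to do it, building on the hypothetical stitch_frames() example above; again, this is an assumption about how such feedback might work, not the actual Mobicast implementation.

```python
# Sketch: outline one contributor's footprint inside the stitched canvas,
# using the same homography H that mapped their frame into the composite.
import cv2
import numpy as np

def highlight_contribution(canvas, frame_b, H):
    """Draw the outline of frame_b's warped footprint on the composite."""
    h, w = frame_b.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)
    cv2.polylines(canvas, [np.int32(projected)], isClosed=True,
                  color=(0, 255, 0), thickness=3)
    return canvas
```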
Mobicast uses GPS to pinpoint the location of users who are recording. The system requires software on the phone itself (Windows Mobile only, for now), as well as on the server that receives the streams. This could change the way people see breaking news stories. Imagine if multiple people had witnessed the 9/11 attacks with Mobicast running; we'd have had an amazing panoramic video streaming to every phone and news source across the country.
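On the server side, the GPS step presumably amounts to grouping incoming streams whose fixes fall close together before attempting to stitch them. Here's a minimal, hypothetical sketch of that grouping; the Stream type, the 100 m radius, and the greedy clustering are all assumptions for illustration, not details from the article.

```python
# Sketch: group incoming streams by GPS proximity so nearby phones become
# candidates for the same stitched broadcast.
import math
from dataclasses import dataclass

@dataclass
class Stream:
    phone_id: str
    lat: float  # degrees
    lon: float  # degrees

def distance_m(a: Stream, b: Stream) -> float:
    """Approximate ground distance between two GPS fixes (haversine)."""
    r = 6_371_000  # Earth radius in metres
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = math.radians(b.lat - a.lat)
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def group_streams(streams: list[Stream], radius_m: float = 100.0) -> list[list[Stream]]:
    """Greedily group streams within radius_m of a group's first member."""
    groups: list[list[Stream]] = []
    for s in streams:
        for g in groups:
            if distance_m(g[0], s) <= radius_m:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups
```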
Check out this video of it in action...