If you read that CNN link about how they did this, there is a commentary section below. One of the posters posted a link to a YouTube video of a Cisco Corporation meeting where this technology was used. Here's a link to the page: http://gizmodo.com/5076663/how-the-cnn-holographic-interview-system-works
You'll have to scroll down the page a bit until you get to the embedded YouTube video in the post by user MINI Driver. Sorry, but I didn't have time to learn how to embed the actual video while in the office. But it looks pretty neat...
So let me get this straight: could Wolf see her too, or was it just a video effect? I got the impression it was just a video effect and nobody could see her outside of the image on the screen.
Former user
wrote on 11/5/2008, 1:01 PM
No, it was just a chroma key effect. He can't see her like a hologram.
Pretty useless wizardry for news. I would have preferred to see her at the actual location. And what's with the Princess Leia glow around her? I was expecting her to say "Help me, Obi-Bama, you're my only hope."
Got me on that one. I thought Wolf could see her like a hologram. He was probably looking at another monitor "behind her" to make it look like he was talking to someone, or he was just talking to the red dot on the floor.
It was green screen with a twist! The twist was that the New York studio cameras could truck, pan, tilt, and pedestal up and down, and the Illinois image was synced to do the same by using multiple cameras. Yes, it is only an effect seen on video, not in the studio by Blitzer.
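For anyone curious what the core keying step actually does: it's just a per-pixel decision about whether green dominates, then a composite over a background plate. A minimal sketch in Python/numpy (the threshold knob and function name are made up for illustration; a real broadcast keyer does far more, like edge softening and spill suppression):

```python
import numpy as np

def chroma_key(fg, bg, g_thresh=1.3):
    """Composite fg over bg wherever fg is NOT dominantly green.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1].
    g_thresh: how much green must exceed red and blue for a pixel
    to count as 'screen' (an illustrative knob, not a standard).
    """
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel belongs to the green screen when green clearly
    # dominates both the red and blue channels.
    screen = (g > g_thresh * r) & (g > g_thresh * b)
    mask = (~screen)[..., None].astype(fg.dtype)  # 1 = keep talent
    return mask * fg + (1 - mask) * bg
```

The camera-sync part CNN added is separate from this: the remote rig just has to reproduce the studio camera's truck/pan/tilt so the keyed layer keeps the same perspective as the live shot.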
The Cisco TelePresence is a 2D projection that the person on stage can see and interact with. Static images only.
I enjoyed the ability to use the jib and the camera pedestals for movement during the interview. But that's the sort of quality-of-life effect that is similar to the movement of a jib in live television. It adds a 3D feel to an otherwise flat image. I love a good dolly or jib move in anything I watch. It adds that pizazz that gives the image more depth and makes it more enjoyable.
The cost and setup I am sure will result in limited use of the effect, while the Steadicam and jibs used now on the CNN New York set are here to stay.
I applaud CNN and David Bohrman for the effort and look forward to other innovations. The collaboration screen is clearly the best addition to that set, along with the wall of screens.
Couldn't two computer-controlled jibs (one in the studio and the other duplicating its movements in the remote location) do the same with a cyclorama green screen?
I think it looks like crap. CNN obviously wanted to be able to claim they were the first to try some new technology live, but I feel sorry for the guy who wrote the check. With dozens of HD cameras and a week-long tech setup, I'm sure it was expensive.
And what's with the rear-view shot while she's talking?? I personally like to see a person's face when they're speaking. Sheesh.
It reminds me of the kind of web site misdirections and bad ideas that flooded the young internet back in the late '90s. I suppose we'll have to put up with cheese like this until virtual sets become more commonplace.
Someone please explain to CNN that "less is more".
I think that the glow and the kind of "Star Wars" wavy effect is added after the fact and it's a little cheesy. If they could realistically place someone in the chair across from Larry King during an interview, that would be kind of interesting.
The technology seems to be a development from the BBC's Origami project that's been around for a few years. Pretty impressive if you get to see it first hand.
What you are seeing is a 3D model of the talent built in real time from multiple cameras. The 3D model can interact with other CGI elements such as dinosaurs; at least, that's what they were doing in the demo I saw. This is quite different from, and far more advanced than, the old virtual set technology. It does require a massive amount of computation, as every pixel of the talent is tracked in 3D in real time.
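The building block behind that kind of multi-camera capture is triangulation: if two calibrated cameras both see the same point of the talent, you can recover its 3D position. A minimal sketch of the standard linear (DLT) method, with made-up camera matrices; real systems do this densely for every tracked pixel, many cameras at once:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two cameras.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same point in each view.
    Uses standard linear (DLT) triangulation via SVD.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X (from u*P[2]·X = P[0]·X, etc.).
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector minimizing |A X|
    return X[:3] / X[3]        # back to inhomogeneous coordinates
```

Doing this for the whole silhouette of a person, at video frame rates, is why the compute bill is so massive.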
I do think it could save news departments plenty of money.
You don't need microwave ENG/SNG trucks now. Just create the
talent in 3D software. A reporter can take a still picture with a camera phone
and email it to the 3D computer for the background, then text message the words
you want the computer proxy to say. (utility patent pending)
Hopefully there are no computer glitches or you might end up with a "Max Headroom"
type character.
I watched most of it on NBC, and they did use a green screen effect at one point. I think it was 100% better. The female announcer was surrounded by columns and the numbers floated up in the middle. Then they purposely removed the graphics to show her standing in front of the blue screen.