VP8 - why does 32-bit look different from 8-bit?

Shergar wrote on 9/10/2007, 9:43 AM
I must be missing something fundamental here ....

If I switch pixel format from 8-bit to 32-bit on any clip - WITH NO EFFECTS APPLIED - the clip gets darker - both on my display, and according to the Scopes histograms.

Shouldn't it stay the same? An 8-bit pixel value represented in 32 bits should not be a completely different value, surely?

Certainly switching from 8 to 32 in After Effects or Premiere doesn't change the colour of a clip.

So what's happening in Vegas?

Comments

Zelkien69 wrote on 9/10/2007, 10:02 AM
I did the same thing, and it appears that the "extra" color information results in blown-out highs and deeper shadows. After using Levels on the same clip, it does appear to have better detail through the color range and better separation.
rmack350 wrote on 9/10/2007, 12:05 PM
I think turning on 32-bit also does a color space conversion. Maybe Glenn will chime in on this soon.

This morning I put bars up on the timeline and then did a couple of renders to the BMD 10-bit format, one in 8-bit space and one in 32-bit space. Bars look the same regardless of Vegas' bit mode, but the two rendered clips aren't the same. The one rendered while Vegas was in 32-bit mode matches generated bars if Vegas is in 32-bit mode; the one rendered while Vegas was in 8-bit mode matches generated bars if Vegas is currently in 8-bit mode.

Hope that makes sense.

BTW, I couldn't really tell if Vegas actually created 10-bit BMD files, but WMP can't play them. Maybe that's a sign. When I have more time I'll do better tests. These aren't really enough to say much.

Rob
StormMarc wrote on 9/10/2007, 12:11 PM
I just noticed there is a gamma control under the 32-bit setting. It changes when you choose 32-bit, but you can change it back to the setting that is used for 8-bit, and the luminance looks much closer to the 8-bit setting. Anyone have more info on this?

Marc
marts wrote on 9/10/2007, 12:15 PM
Actually, it is described in the Vegas 8 user manual on pg. 249, in the Compositing gamma paragraph.
Eugenia wrote on 9/10/2007, 12:38 PM
To make the 32-bit footage use the same gamma as the 8-bit, just apply the "Computer to studio" filter template under "color correction".
StormMarc wrote on 9/10/2007, 12:56 PM
Well... I read the paragraph on compositing gamma but still do not understand when to change the gamma in real-world editing situations. My best guess is that you just leave it at the default when working on a new project in 32-bit, and perhaps change it when you need to re-render something that was color corrected in 8-bit so that the levels do not look way off?

Marc
Shergar wrote on 9/10/2007, 1:01 PM
Eugenia,
Thanks! You're right... so now all that puzzles me is why Vegas' gamma for 8-bit is different?
Paul

john-beale wrote on 9/10/2007, 1:15 PM
Not that this really answers the question, but as far as I know, "gamma" is historically the worst train-wreck across all still and video editing and display situations. Default PC display gamma is different from the Mac's, computer displays are different from televisions, LCD monitors are different again, web browsers don't treat things uniformly, some image & video formats store intended display gamma but it is interpreted differently and/or ignored by different viewers and different platforms; the list goes on.

Historically, I believe anything more than 8-bit color has been limited to high-end video and cinema work, so it wouldn't surprise me at all if there were conflicting default gamma standards in play.
rmack350 wrote on 9/10/2007, 3:09 PM
I really suspect that this is a conversion from Rec.601 DV color space to Rec.701 HD color space. But since I really don't understand the details...this is just the area I'd be looking at.

<edit> It's Rec.709. Shows how much I know about this. Nothing!

Rob Mack
FrigidNDEditing wrote on 9/10/2007, 4:10 PM
OK, I can't even get white without pulling the Curves filter almost all the way back. I love the extra room to play in, but has anyone tried doing an additive mask with the Mask Generator filter?

It's like they added the reference point (or whatever you might call it) to a point at the top of the spectrum and stretched everything down from it. I have a hard time using some of the coloring controls and getting my levels even up to 50-60 IRE on my waveform.

Am I the only one? (And the default shift in color sans 32-bit processing doesn't happen with DV-AVIs for me, just the HDV.)

Dave
RBartlett wrote on 9/10/2007, 4:38 PM
Float32 is going to hit the points on the RGB palette with more accuracy than rounded 8-bit (integer) numbers. Almost all the compressed moving footage we can ingest has to be converted with addition and multiplication arithmetic, and that math produces a lot of fractional levels. Rounding to 8-bit integers in either direction causes problems. Camcorder imagers are RGB and have 14 bits of DSP precision (plus or minus a couple of bits) before this data is converted and truncated to 8-bit Y'CbCr. So it figures that having many small fractional levels of precision in the NLE can only help get the calculations back to where the imager was prior to the big squeeze and 'normalization'.
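
A rough illustration of the rounding problem (a hypothetical Python sketch of the general issue, not Vegas' actual internals): squeeze a full 0-255 ramp to studio range and expand it back. Quantizing to 8-bit integers at each step loses levels; floats survive the round trip.

import numpy as np

ramp = np.arange(256, dtype=np.float64)          # full 0-255 ramp

# 8-bit path: round to integers after each conversion
studio8 = np.round(ramp * 219 / 255 + 16)        # computer -> studio range
back8   = np.round((studio8 - 16) * 255 / 219)   # studio -> computer range

# float path: keep the fractional levels, no intermediate rounding
studio_f = ramp * 219 / 255 + 16
back_f   = (studio_f - 16) * 255 / 219

print(np.count_nonzero(back8 != ramp))   # 36 of 256 levels come back wrong
print(np.allclose(back_f, ramp))         # True: the float path is lossless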

If you can afford the CPU tax, Float32 is worthwhile for quality. We are nevertheless in a world where quality is often sacrificed. Also, your production values are still best applied during your preparation time, before you click record on your camcorder. Float32 has some scope for correcting bloopers in post. So just turning it on isn't supposed to be a 'conform/legalize' process, although you may see some benefits before you grade/color correct.
rmack350 wrote on 9/10/2007, 5:23 PM
This doesn't look like "fewer rounding errors". It looks like what were once values of 16 and 235 are now zero and 254.

I'm not saying anything's broken, except maybe "me". I'd like to know what to expect here...my current expectation is for a render to look like the original media.

Maybe it does if you aren't trying to break things.

Rob Mack
RBartlett wrote on 9/10/2007, 5:55 PM
Vegas does have a history of opening up formats that are part of the suite into Computer RGB space. But if you are using (probably) Cineform, MPEG-1, MPEG-2, DV or SonyYUV (and now AVC, presumably), then I'd expect both 8-bit and Float32 to expand from Y'CbCr into Studio RGB without 'bolting' all the way up to full-scale values. It is worthwhile to run some tests. Thanks for pointing out what you've found, Rob.
GlennChan wrote on 9/10/2007, 7:19 PM
Here's what's happening:

In 32-bit float projects, you can switch between a "compositing gamma" of 2.222 and 1.000. 2.222 is old school Vegas behaviour. In "1.000" gamma, your values are converted from gamma-corrected values to linear light values. This:
A- Is necessary for optically-correct compositing. Your cross-dissolves will be more film-like. Adding green text on a magenta background won't cause a dark fringe where they meet (see the sketch below). Diffusion effects will look more correct.
B- It also breaks a lot of the existing filters... they will deliver different and possibly wrong results. e.g. "Studio RGB to computer RGB" presets no longer do what they say.
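
A minimal sketch of that dark-fringe effect (made-up values in Python, not Vegas' pipeline): mix pure green and pure magenta 50/50, once on the gamma-corrected values and once in linear light.

g = 2.222                                    # assumed compositing gamma

green   = (0.0, 1.0, 0.0)
magenta = (1.0, 0.0, 1.0)

# mix the gamma-corrected values directly (2.222 behaviour)
mix_gamma = [(a + b) / 2 for a, b in zip(green, magenta)]

# linearize, mix, then re-encode for display (1.000 behaviour)
lin = lambda v: v ** g
enc = lambda v: v ** (1.0 / g)
mix_linear = [enc((lin(a) + lin(b)) / 2) for a, b in zip(green, magenta)]

print(mix_gamma)    # [0.5, 0.5, 0.5]          -- too dark: the fringe
print(mix_linear)   # approx [0.73, 0.73, 0.73] -- brighter, optically correct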

So for most cases, I would stick to 2.222 and nothing wacky will happen.

2- When you change between 8-bit and 32-bit, Vegas changes more than the bit depth. Certain codecs like HDV, SonyYUV will decode differently depending on this setting. Other codecs like DV and Cineform do not do this.

In 8-bit, those codecs will decode to studio RGB levels.
In 32-bit, those codecs will decode to computer RGB levels.
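
In code terms, the two conventions look roughly like this (an illustrative sketch, not actual decoder source):

def decode_8bit_mode(y):    # studio RGB: 8-bit luma 16-235 stays 16-235
    return float(y)

def decode_32bit_mode(y):   # computer RGB: 16-235 expands to 0-255
    return (y - 16) * 255.0 / 219.0

print(decode_8bit_mode(16),  decode_32bit_mode(16))    # 16.0 vs 0.0 (black)
print(decode_8bit_mode(235), decode_32bit_mode(235))   # 235.0 vs 255.0 (white)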

I believe the intention is that you keep your project in either 8-bit or 32-bit mode (and 1.000 or 2.222 compositing gamma)... because flipping between the modes can potentially give you headaches.

Though there may be various workarounds you can use, e.g. transcode to Cineform via scripting tools (Gearshift) or Cineform's capture app (transcode as you capture).
Or just assume that working in 8-bit is color-inaccurate, if you intend to render your project in 32-bit/2.222 compositing gamma.
farss wrote on 9/10/2007, 7:50 PM
Isn't this why one should be using view LUTs?

Having said that, I may be way out of my depth, but from dipping my toes to date into the very deep pond of linear-light grading, this needs a lot of explanation.

For example, bring HDV into Vegas in 32-bit and start grading it; but for what? Output back to Rec 709, or a 16-bit TIFF sequence for a film-out? Depending on what your output is going to be, how you time it and how you view it while timing should be different.

Bob.
GlennChan wrote on 9/10/2007, 8:14 PM
Isn't this why one should be using view LUTs?
No, not really. The difference between the three modes (8-bit, 32-bit 2.222, 32-bit 1.000) is that they change the image processing. At the end you need to make sure the levels are correct.

You also need to make sure your monitoring is correct... certain combinations are messed up. For example, if you have HDV clips in a DV timeline, previewing via firewire will mean that all the HDV clips will have the wrong levels in 32-bit mode. You need to add a computer RGB --> studio RGB conversion to all the HDV clips.


Viewing LUTs are designed for when your target output device is a film stock + film printer combination, not video. If you are grading for film out, it is probably best left for facilities that specialise in it. They will have proper monitoring in place.
riredale wrote on 9/10/2007, 9:18 PM
Coming late to this thread, but this was my initial query last week when Spot showed two clips.

To my way of thinking, the clips should have looked identical with the exception of smoother gradations. Is editing now going to involve a whole new layer of gamma-adjustment complexity?
GlennChan wrote on 9/10/2007, 10:57 PM
You have to pay more attention to getting the right levels. HDV material now behaves differently in 32-bit versus 8-bit... it decodes to different levels depending on what your project is set at.

The "compositing gamma" setting in Vegas controls colorspace conversions. You will be fine using a compositing gamma of 2.222, which will give you the same behaviour that you've always been used to. The other compositing gamma (1.000) is tricky since filters behave differently.
farss wrote on 9/10/2007, 11:27 PM
The other compositing gamma (1.000) is tricky since filters behave differently.
So I guess the primary and secondary color correction FX aren't linear light aware?
And what about the scopes?

Bob.
Shergar wrote on 9/10/2007, 11:44 PM
The "compositing gamma" setting in Vegas controls colorspace conversions

I don't think this is quite true, Glenn - changing the compositing gamma only affects how compositing operations and effects are handled, not the representation of each clip. It looks like M2T clips at 32 bits are presented as Computer RGB (whatever the compositing gamma setting), whereas at 8 bits they are presented as Studio RGB.

As Eugenia said, to make 32bit clips display at the same levels as they do at 8bit, it seems we have to apply the "Computer RGB to Studio RGB" preset in the Color Corrector.
Cheesehole wrote on 9/11/2007, 12:25 AM
That's true, the "Computer RGB to Studio RGB" preset makes the footage look exactly like it does in 8-bit mode as long as the Gamma matches (2.222). I was hoping to find more about the reasoning behind this in the manual but there isn't much.
GlennChan wrote on 9/11/2007, 1:11 PM
Both the compositing gamma and the choice of an 8-bit vs. 32-bit project affect colorspace conversions.

Compositing gamma affects the behaviour of filters. In 1.000, the studio RGB <--> computer RGB presets are broken. To convert from studio RGB --> computer RGB in the Levels filter, use
Input start 0.068
Input end 0.916
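
That is the standard levels remap (a sketch with the numbers above; output range assumed to be 0-1):

def levels(x, in_start=0.068, in_end=0.916):
    return (x - in_start) / (in_end - in_start)   # clamping omitted

print(levels(0.068))   # 0.0 -- input start maps to black
print(levels(0.916))   # 1.0 -- input end maps to white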

8-bit/32-bit has been covered in one of my previous posts.
STSA wrote on 9/11/2007, 1:24 PM
Any ideas on why VP8 is crashing in 32-bit mode and not in 8-bit?
jrazz wrote on 9/11/2007, 1:29 PM
Maybe your system can't handle it? I can't see your specs. My system has yet to crash, and I have encoded four 20-minute vids in 32-bit mode without a hitch. Now, I can say I get 10 fps playback on the timeline for M2T files, but no crashes.

j razz