Why can't I get 32bit pixel format to work?

erikd wrote on 3/31/2011, 2:19 AM
I've hesitated to bring this up for over a year because I believed I was the only one having this problem. I'm using VP9e on Windows XP Pro 32-bit, on an HP xw8600 workstation with dual quad-core 2.66 GHz CPUs and 8 GB RAM.

It doesn't matter what video format I'm using: if I set the project pixel format to 32-bit floating point before rendering, Vegas becomes inoperable.

Let's say I'm doing a 1080i HD project using the "HD 1440x1080-60i" Sony MXF template. My video matches the template exactly, as it was recorded with an XDCAM HD camera. I complete my project and then switch to 32-bit just before rendering to get maximum color depth for my images, and Vegas immediately starts coughing up blood. Playback of the timeline is no longer an option, but more importantly, "red screens" start appearing as thumbnails on the timeline, and most importantly of all, Vegas refuses to render without crashing almost immediately.

I realize this may look like a RAM issue, but I'm wondering how much is actually required to render in 32-bit. I know each process only gets 2 GB of address space on 32-bit XP, but does that mean 32-bit pixel format renders in Vegas are only an option on 64-bit machines? If that's the case then I must have missed the memo. Is anyone rendering 32-bit projects on a 32-bit machine? Really perplexed about this, and I wish Vegas also had a 10-bit option instead of jumping all the way from 8 to 32.

Erik

Comments

LoTN wrote on 3/31/2011, 4:16 AM
It could be a memory issue combined with CPU resource starvation. My observation is that 32-bit mode is very expensive in CPU cycles. For memory, you can estimate that any image will need roughly 4 times as much.
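
A rough sketch of that 4x figure, assuming Vegas holds decoded frames uncompressed with 4 channels (RGBA) per pixel — an assumption on my part, I don't know the internal frame format:

    # Per-frame memory for a 1440x1080 frame, 4 channels (RGBA assumed)
    width, height, channels = 1440, 1080, 4

    bytes_8bit  = width * height * channels * 1   # 1 byte per channel
    bytes_float = width * height * channels * 4   # 4 bytes per channel (float32)

    print(bytes_8bit / 2**20)   # ~5.9 MB per frame at 8-bit
    print(bytes_float / 2**20)  # ~23.7 MB per frame at 32-bit float

Every frame held in RAM costs four times as much in 32-bit mode, which eats into a 2 GB address space quickly.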

You could try blink's memory hack, but honestly, unless you hit a showstopper with Vegas x64, the x64 version is (I think) the best way to work on 32-bit depth HD projects.

Really perplexed about this, and I wish Vegas also had a 10-bit option instead of jumping all the way from 8 to 32.

Fully agree. I'm still wondering why SCS chose float32 instead of int16 arithmetic in the Vegas code...
erikd wrote on 3/31/2011, 9:47 AM
"My observations are that 32 bit mode is very expensive in cpu cycles. For memory, you can estimate that any image will roughtly need 4 times memory."

So I take this to mean that you are successfully rendering 32-bit pixel format files. Which version of Vegas, and on what system specs? Any more crashes with 32-bit than with 8-bit? Etc.

Erik
erikd wrote on 3/31/2011, 10:26 PM
Somebody please jump on board here. I'm getting the feeling that 32-bit video levels aren't really being used by many people here. If you only use 8-bit, please say so. If you use 32-bit, please say so, as I'm trying to find out whether this is a working feature in Vegas. JR, Grazie, Bob, Ferris...? Which Vegas version and rough system specs? I can't get it to work at all, really.

Erik
Marc S wrote on 3/31/2011, 11:44 PM
I tried it in the past and could never get it to work without crashing. However, I recently upgraded to an i7 machine and I plan on rendering out my current project in 32-bit mode using Vegas 10 64-bit. So far the test renders seem to work, but I haven't tried it on a large project yet.

One reason I decided to use it is that I noticed a slight luminance flicker when fading up from black and crossfading my Cineform HD footage in 8-bit mode. Using 32-bit seems to solve the problem.

Marc
LoTN wrote on 4/1/2011, 12:18 AM
So I take this to mean that you are successfully rendering 32-bit pixel format files. Which version of Vegas, and on what system specs? Any more crashes with 32-bit than with 8-bit? Etc.

As far as I know I don't have any 32-bit-per-channel codec, and I can't say whether one exists. I use 32-bit mode for small portions where I need more accurate color grading and want to avoid gradient banding. For those manipulations I use MXF 4:2:2 50 Mb/s or Lagarith.

Footage is HDV, AVCHD, Lagarith, or MXF.

At the moment I do this with V9 x86 or x64 on a Windows 7 x64 spare box (my edit PC has died). It's an old dual-core laptop pressed back into service, with 4 GB RAM. While the usual edit system is broken it serves me as best it can: it takes more than 10 seconds to display a preview frame at half-size preview quality... Needless to say, I wouldn't try editing a full project at that speed.

On this same system I wouldn't even think about doing the same with V10. Even with an 8-bit project it is too unstable and slow.

Based on my limited experience, I'd say that 32-bit float editing in Vegas requires a bloody workhorse of a machine: something like Sandy Bridge + Z68 and a fair amount of memory.
rmack350 wrote on 4/1/2011, 12:29 AM
What you want to know is whether it works in an environment like yours, with a project that's somewhat comparable. I can't tell you that, but I certainly have no trouble at all with a very basic test: a couple of Sony MXF files created in Vegas, slapped on the timeline, overlapped to create a crossfade, and a Gaussian blur applied. This is with VP10-64 on Win7-64 with 8 GB installed and dynamic RAM preview set to 1296 MB (way too high for 32-bit Vegas on 32-bit Windows). So really I'm testing in a situation that is too simple and rigged to work.

My first suggestion would be to open Task Manager before launching Vegas and your project, then watch RAM and page file usage before and after opening the project, and again before and after switching the project to 32-bit color mode.

I usually associate red frames with memory problems. You haven't really said how many tracks of media are under the cursor, how many FX are in play, how much media is in the project media list, how long the project is... all the stuff that adds up.

Sony's MXF is MPEG-2. I'm assuming that means a 15-frame GOP, so figure Vegas has to decode all 15 frames of each clip that's in play at any given point on the timeline. For a simple crossfade that's 30 frames in RAM, and in 32-bit color mode that's 4x the RAM, I guess.
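
To put a hedged number on that (the 15-frame GOP, whole-GOP decoding, and the RGBA frame layout are all guesses on my part):

    # Rough RAM estimate for a crossfade between two long-GOP MXF clips
    frame_bytes = 1440 * 1080 * 4 * 4   # ~23.7 MB per frame at 32-bit float
    gop_frames, clips = 15, 2           # two clips in play during the fade

    total = gop_frames * clips * frame_bytes
    print(total / 2**20)                # ~712 MB just for decoded frames

Roughly 700 MB for decoded frames alone would be more than a third of a 2 GB address space, before Vegas allocates anything else.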

Have you done any tests? Simplified the project? What happens if you set the preview window to bypass all effects?

Rob Mack
rmack350 wrote on 4/1/2011, 12:45 AM
As far as I know I don't have any 32-bit-per-channel codec, and I can't say whether one exists.

I don't know if there are useful 32-bit-per-channel codecs, but when I last rendered an uncompressed AVI in 32-bit mode, Vegas reported it as 720x480x128. The 128 means 32 bits for each of four channels, including alpha.

But you probably wouldn't want to do this. As I understand it there are two scenarios where 32-bit mode helps:

8-bit source -> 32-bit process (FX1) -> 32-bit process (FX2) -> ... -> 32-bit process (transition) -> 8-bit render

8-bit source -> 32-bit process (FX1) -> 32-bit process (FX2) -> ... -> 32-bit process (transition) -> 10-bit render (or anything over 8-bit)

So the idea of a 32-bit chain is that you get rid of a lot of rounding and noise, even if your final output is still 8-bit. And of course you can render to 10-bit (or greater) codecs if you like.
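A toy illustration of the rounding point (nothing Vegas-specific, just the arithmetic): chain two opposing exposure changes and compare rounding at every 8-bit stage against rounding once at the end.

    # Two chained operations: darken to 10%, then brighten 10x
    src = list(range(0, 256, 16))                 # an even 8-bit gradient

    # 8-bit chain: round back to an integer after every stage
    stage1 = [round(v * 0.1) for v in src]
    out_8bit = [min(255, round(v * 10)) for v in stage1]

    # float chain: keep full precision, round only at the end
    out_float = [min(255, round(v * 0.1 * 10)) for v in src]

    print(out_8bit)   # uneven steps -> visible banding in a gradient
    print(out_float)  # the original even ramp comes back intact

The 8-bit chain returns uneven steps (banding), while the float chain preserves the ramp — consistent with the flicker and banding fixes people report after switching to 32-bit.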

LoTN wrote on 4/1/2011, 1:15 AM
Yes, this is what I do: it's the definitive way to get rid of banding caused by rounding errors.
erikd wrote on 4/1/2011, 1:48 AM
Yes, my projects can be complicated, with many tracks, many files in the project media list, and many FX. I can take your suggestion and test to see at what point things start to fall apart, but I'm a little surprised that others aren't more concerned about this, since 10-bit is a minimum standard for broadcast output.

Even my mid-range PDW-335L XDCAM HD camera records everything at its highest setting: 35 Mb/s, 1080, 10-bit. If I bring a single clip from the camera onto the timeline (say, a low-light shot) and then switch the project from 8-bit to 32-bit, it is very, very easy to see the difference. (Assuming, of course, you have a monitor that shows at least 10-bit output.) Likewise, for graphic backgrounds that include gradients, it is very easy to see the difference.

You guys seem to be saying that I should render each of my clips individually in 32-bit and then drop them back into the 8-bit project to help the problem, but this seems counterintuitive if all we really need is 10-bit to solve the problem.

If Sony is going to manufacture cameras that record 10-bit video, please give me a 10-bit option in my project settings, because for any real-world project, 32-bit appears to be a whole lot of pie in the sky. Sounds good, sounds even impressive, but at the end of the day we all seem to be exporting 8-bit renders.

Erik
LoTN wrote on 4/1/2011, 10:47 AM
For my part, I don't see a big difference between footage types or codecs. The fact is that in 32-bit mode, Vegas is very CPU-hungry.

Intermediates with 10-bit (or better) codecs may help, at the cost of a more complicated workflow.

Why 32-bit? Still a mystery to me; maybe it has something to do with R3D.
GenJerDan wrote on 4/1/2011, 7:28 PM
Somebody please jump on board here. I'm getting the feeling that 32-bit video levels aren't really being used by many people here. If you only use 8-bit, please say so. If you use 32-bit, please say so

I haven't used 8-bit since 32-bit became an option. That's been on XP, Vista, and now Win7 (64-bit), never with more than 4 GB of RAM, and only on a multi-core machine since Win7.
erikd wrote on 4/1/2011, 9:41 PM
Gen,

Thanks for the info. Can you describe what your typical project looks like? What video format is used? What video format is rendered to? What version of Vegas?

Erik
rmack350 wrote on 4/1/2011, 10:30 PM
You guys seem to be saying that I should render each of my clips individually in 32-bit and then drop them back into the 8-bit project to help the problem, but this seems counterintuitive if all we really need is 10-bit to solve the problem.

That's certainly NOT what I was saying, although maybe it seemed that way. What I was saying is that when you use Vegas in 32-bit mode it will read your 8-bit media, process each effect/filter/transition at 32-bit precision, and then render output at whatever you're asking for. Nothing fancy to get your head around here.

As to whether Sony should have just gone with 10-bit processing... I don't have much of an opinion. 32-bit float certainly covers the 10-bit space, and the 16-bit space, etc.
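
For what it's worth, a 32-bit float has a 24-bit significand, so every 10-bit and 16-bit integer code value is represented exactly. A quick sanity check:

    # Every 10-bit and 16-bit code value survives a float32 round trip
    import struct

    def to_f32(x):
        # pack/unpack as 'f' forces the value through 32-bit float
        return struct.unpack('f', struct.pack('f', float(x)))[0]

    for bits in (10, 16):
        assert all(to_f32(v) == v for v in range(2**bits))
    print("exact")  # float32 covers both ranges without loss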

Rob
eightyeightkeys wrote on 4/2/2011, 4:44 PM
I use 8-bit during editing and only switch to 32-bit for rendering.
On my i7 rig with 12 GB of RAM, 32-bit mode just bogs the machine down completely. My Nvidia GTX 470 isn't being used at all. 0%!
farss wrote on 4/2/2011, 5:50 PM
I've used 32-bit float exactly once, back when it first came out.
A few months back I tried it again with nothing more than generated media, and I still had problems with red frames. 8-bit int is good enough for what I do, which generally involves a large amount of long-GOP media on long timelines. No one is paying me large sums of money, and my biggest hurdle is what was in front of the camera rather than anything I can do in post.
The few times I might have needed more than 8 bits were when dealing with DigiBeta footage, but that was going onto DVD anyway with no FX, just cuts. I've since learned I can save a lot of angst in post by capturing the tapes via FireWire from the J30, and save a lot of disk space in the bargain. The only recent time I've captured the full 10 bits from tape was for a client whose tapes were telecine transfers from 16mm.

I too have always wondered why SCS went for 32-bit float rather than 16-bit int with a 10-bit pipeline. Even today there aren't a lot of cameras recording 10-bit, and those who can afford them are more than likely not using Vegas anyway. An interesting question arises: does OFX support 32-bit float? Vegas is the only NLE I know of that uses 32-bit float.

My other concern with 32-bit float is that it does seem to cause a lot of issues, and no doubt a few support calls. I've never once heard anyone say they're going to jump ship to Vegas because it has 32-bit float, but I've read plenty of angry posts from people having stability issues with Vegas, and quite often those problems go away when they switch their project to 8-bit int.

Bob.
erikd wrote on 4/2/2011, 9:44 PM
GenJerdan and eightyeightkeys can you:

Describe what your typical project looks like? What video format is used? What video format is rendered to? What version of Vegas?

I'm finding this very interesting overall. It would be great if more people out there could describe their 32-bit experiences. I would especially like to hear what JR's experience is. My guess is that anyone using HDV, the perennial sweet spot for Vegas, would probably have the best possible experience. However, does HDV even record at 10 bits? I don't know.

Erik
RedEyeRob wrote on 4/2/2011, 9:50 PM
Go to my thread about the problems I had with 32-bit floating point and how they were solved here...

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?MessageID=745055

The original post on how to change it using the CFF Explorer program is here:
http://www.sonycreativesoftware.com/forums/ShowMessage.asp?MessageID=647907

I think you want to change
vegas90.exe
mcstdh264dec.dll
sonymvd2pro_xp.dll

for Vegas 10. It worked for me.
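
For anyone who would rather script it than click through CFF Explorer, a minimal sketch of the same patch in Python (back up the files first; the offsets follow the standard PE/COFF layout, and the file names above are whatever applies to your Vegas version):

    # Set the LARGEADDRESSAWARE bit in a PE file's COFF header
    import struct, sys

    LAA = 0x0020  # IMAGE_FILE_LARGE_ADDRESS_AWARE

    path = sys.argv[1]                       # e.g. vegas90.exe
    with open(path, 'r+b') as f:
        f.seek(0x3C)                         # e_lfanew: offset of "PE\0\0"
        pe_off = struct.unpack('<I', f.read(4))[0]
        f.seek(pe_off)
        assert f.read(4) == b'PE\x00\x00'    # sanity-check the signature
        f.seek(pe_off + 22)                  # COFF Characteristics field
        flags = struct.unpack('<H', f.read(2))[0]
        f.seek(pe_off + 22)
        f.write(struct.pack('<H', flags | LAA))

Same caveats as the CFF Explorer route: it only helps if the code behaves with addresses above 2 GB, and a Vegas update or reinstall will likely replace the patched files.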
erikd wrote on 4/2/2011, 11:39 PM
RedEyeRob, very interesting results so far. I applied the 2 GB flag to the file I/O DLLs and the exe for Vegas 9.0, and I can see very obvious and immediate improvements. Now, even on a complex project, when I switch to 32-bit I don't get red thumbnails. The timeline is much more responsive, even allowing half-resolution playback of full HD 35 Mb/s 10-bit MXF files.

The real test will be whether it lets me render, though. Unfortunately, my first attempt on a 60-second spot with a complex multi-track timeline failed pretty quickly.

Erik
LoTN wrote on 4/3/2011, 1:38 AM
Well, your observation is in line with my first answer to your post :)

You could try blink's memory hack, but honestly, unless you hit a showstopper with Vegas x64, the x64 version is (I think) the best way to work on 32-bit depth HD projects.

@rmack350: Yes, I'm aware that going to 32 bits provides room for any pixel color depth; what puzzles me is the use of floats. I don't see any advantage to float over integer. It consumes more CPU cycles and needs numerous integer <-> float conversions through C library functions like ceil() or floor() and cast operators. That said, I don't have the source code, so I may be totally wrong.
erikd wrote on 4/3/2011, 3:18 AM
Right, LoTN! I remember you said the same thing, but I didn't know for sure what you were referring to at the time. I should also say that after applying the flag to the correct DLL for the render type... wait a little... my 32-bit pixel format renders DO NOT FAIL.

That, of course, is the good news. The bad? Five hours to render one 60-second spot! But at least it renders without failing. Now all eight cores are not only working on the render job, they're working much harder than they ever did in the past.

I should have tried this hack a long time ago. Does it work for V10? I would assume there's no difference as long as I'm on 32-bit Windows XP.

Erik
farss wrote on 4/3/2011, 4:08 AM
"However, does HDV even record at 10bits? Don't know. "

No.
You won't find a camera that records 10-bit for under around $20K.
Put simply, there's no real advantage to recording 10-bit unless the camera has a very good signal-to-noise ratio. With HD, even a 3x 2/3" camera isn't really up to it. Arguably the cheapest camera worthy of the expense is the F3, but allow for an external recorder such as the Gemini to capture the 10-bit output.

Bob.
LoTN wrote on 4/3/2011, 5:24 AM
erikd,

Sorry if my answer was confusing. The blink3times era was so vocal that I always assume forum members who joined before him have read his prose. ;)

What blink suggested was to use CFF Explorer to modify one field of the PE/COFF executable header. Upon process creation it tells the kernel that the process can handle more than 2 GB of address space. It does the very same thing as linking the program with the /LARGEADDRESSAWARE switch. Check the MSDN article on that switch for further details.

The hack will work for any x86 application, provided the code doesn't contain lame programmer kludges.
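
If you just want to verify whether a given binary already has the flag set, a minimal read-only sketch along the same lines:

    # Report whether a PE binary has IMAGE_FILE_LARGE_ADDRESS_AWARE set
    import struct, sys

    with open(sys.argv[1], 'rb') as f:
        f.seek(0x3C)                          # e_lfanew
        pe_off = struct.unpack('<I', f.read(4))[0]
        f.seek(pe_off + 22)                   # COFF Characteristics
        flags = struct.unpack('<H', f.read(2))[0]

    print('LAA set' if flags & 0x0020 else 'LAA not set')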
GenJerDan wrote on 4/3/2011, 12:03 PM
GenJerdan and eightyeightkeys can you:

24P and/or 30i SD DV, doing very little other than color correction and curves. Output usually as uncompressed AVI to bring into Sorenson for spitting out FLVs and WMVs.

Doing the same sort of thing in v7, v8, v9, and now 10c. Boring, yes?
rmack350 wrote on 4/3/2011, 12:46 PM
It's beyond my abilities to say why Vegas and other NLEs use 32-bit float, but Vegas isn't alone here:

http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html

http://www.apple.com/finalcutstudio/specs/


The merits of 32-bit float are beside the point, though. The goal here is to get the OP rendering, and it seems like that's being solved by making more memory available to 32-bit Vegas through LAA hacks. That's good!

If you *must* use 32-bit Vegas, then running the self-hacked version on 64-bit Windows would make more RAM available overall: an LAA-flagged 32-bit process gets a full 4 GB of address space there, and Windows itself isn't competing for the same physical memory. The end lesson, though, is that users and developers should be marching steadily toward 64-bit Vegas.

If you think about it, SCS (and Microsoft) probably saw the writing on the wall years ago. They got 64-bit Vegas out the door fairly early as NLEs go, and they must have had several years of development before that. Someone, somewhere at SCS saw that Vegas was going to need a lot more than 2 GB of memory.

Rob Mack