I have, in the past, encoded 23.976 material with the pulldown flag set and gotten beautiful DVDs. However, I've always used an external encoder. This time I am trying to encode using the MPEG encoder built into Vegas, and something is terribly wrong.
My source material is video from a film transfer. The AVI header is set to 24 fps (the original is silent film that was shot at 24 fps, not 18). The project properties in Vegas are set to 720x480 progressive, 24.000 (film). The media properties for the input media (which are DV files) show Frame Rate: 24.000 (film), and I have changed the field order to None (progressive scan): my film transfer process ensures that both fields of every video frame come from the same frame of film, so there is no temporal difference between the upper and lower fields.
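As a sanity check on that 24 fps claim, here is a quick Python sketch I would use to read the rate straight out of the file. It assumes a standard AVI layout where the 'avih' chunk opens the 'hdrl' LIST, and the filename is just a placeholder:

    import struct

    def avi_fps(path):
        # In a standard AVI, the 'avih' chunk opens the 'hdrl' LIST and
        # its first DWORD is dwMicroSecPerFrame.
        with open(path, "rb") as f:
            head = f.read(36)
        assert head[0:4] == b"RIFF" and head[8:12] == b"AVI "
        assert head[12:16] == b"LIST" and head[20:24] == b"hdrl"
        assert head[24:28] == b"avih"
        (usec_per_frame,) = struct.unpack("<I", head[32:36])
        return 1_000_000 / usec_per_frame

    print(avi_fps("transfer.avi"))  # expect ~24.0 for this source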
When I render, Vegas 6.0d offers the "DVD Architect 24p NTSC video stream" template. The template correctly shows "Frame Rate: 23.976 + 2-3 pulldown".
However, the result takes forever to encode and looks as if it were actually encoded at 29.97, with additional crossfaded frames added. When I bring the file into DVDA, it shows as a 23.976 + 2-3 pulldown file, but the resulting DVD looks horrendous. Below is a link to one scene from the MPEG file that Vegas created. As you can see, there are all sorts of extra "ghosting" frames, and since this is a football game, there is lots of motion. The original shows each frame uniquely and crisply, and that is how it should be encoded: the MPEG file should contain each frame individually, and the DVD player should add the pulldown at playback. There should be no pulldown baked into the MPEG file itself.
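To be explicit about what those flags ask the player to do, here is a minimal Python sketch of the 2-3 cadence (mine, not anything from Vegas or MainConcept): four progressive film frames become ten fields, i.e. five interlaced video frames, two of which mix fields from adjacent film frames:

    from fractions import Fraction

    FILM = Fraction(24000, 1001)   # 23.976... fps
    VIDEO = Fraction(30000, 1001)  # 29.97... fps
    assert VIDEO / FILM == Fraction(5, 4)  # 4 film frames -> 5 video frames

    CADENCE = [2, 3, 2, 3]  # fields the player emits per film frame

    def pulldown(frames):
        # Expand progressive frames into the field sequence the DVD player
        # generates at playback when the pulldown flags are set.
        cadence = (CADENCE * ((len(frames) + 3) // 4))[:len(frames)]
        fields = []
        for frame, n in zip(frames, cadence):
            fields += [frame] * n
        # Pair successive fields into interlaced output frames.
        return list(zip(fields[0::2], fields[1::2]))

    print(pulldown("ABCD"))
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]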
Something is terribly wrong. Is this a known bug in Vegas?
Here is a link to a short (6 MB) sample of the bad encode:
Bad Encode
Here is a link to a slightly longer cut from the same portion of the exact same clip (it includes the entire play), encoded via frameserver with the external MainConcept encoder:
Good Encode
It is perfect.
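For anyone who wants to verify the blending objectively rather than by eye, here is a rough numpy sketch of the check I would run: a crossfaded "ghost" frame sits close to the average of its two neighbors, while a genuine film frame in a high-motion shot does not. The grayscale-float input and both thresholds are my assumptions; tune them against your own source:

    import numpy as np

    def looks_blended(prev, cur, nxt, blend_tol=4.0, motion_min=8.0):
        # Frames are grayscale float ndarrays from whatever decoder you use.
        if np.abs(nxt - prev).mean() < motion_min:
            return False  # static shot: everything matches the average trivially
        blend = (prev + nxt) / 2.0
        return float(np.abs(cur - blend).mean()) < blend_tol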
I should also note that the encode in Vegas took forever: almost twelve hours for a two-pass encode of a 90-minute, cuts-only project with nothing more than B&W and levels adjustments. Normally this would take just slightly over real time per pass (so roughly 2x real time for two passes), plus another thirty minutes or so per pass for the levels adjustment. I am now encoding via frameserver to the external encoder, and even though I am using some advanced parameters that considerably slow the encode, the estimated time is six hours, compared to the twelve hours Vegas itself took with faster parameters.
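To put numbers on "forever," here is the back-of-the-envelope math from those figures:

    length = 90              # minutes, cuts-only project
    per_pass = length + 30   # ~real time per pass, plus ~30 min for levels
    expected = 2 * per_pass  # two-pass total: 240 min, i.e. about 4 hours
    vegas = 12 * 60          # what Vegas actually took
    print(f"{vegas / expected:.1f}x slower than expected")  # 3.0x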
Either I'm screwing up somewhere, or something ain't right with Vegas. The external version of the MainConcept encoder (the same encoder Vegas uses internally) works fine on the same project.
[Edit]
Here are four frames from the original AVI:
Original AVI (four frames)