As some of you know, I (and many others, even users of other NLEs) have been grappling with this issue. All manner of solutions have been put forward, and while they all kill the problem, some would either take the patience of a saint or else knock the resolution around.
The most obvious tool that Vegas provides is Gaussian Blur: a tiny amount (0.001) in the vertical direction certainly cures the problem, but even though my source is mostly 3000x2000, the loss of resolution is noticeable. That seemed odd to me. So I'm thinking that Vegas calculates the blur radius based on project settings rather than source resolution.
To test this I dropped the same still image into a 1080 project and applied 0.001 vertical GB, then took that into a PAL DV project.
Firstly, the line jitter is gone. Secondly, through my bleary eyes the result looks better than putting the same still straight into the PAL DV project and applying the 0.001 GB there. I've done a split-screen check on both halves of the image (it's damn hard to judge these things at that resolution with real images), and both halves show better resolution from the 1080 uncompressed source.
Now this seems a pretty simple fix, apart from trying to find space for over an hour of 1080 uncompressed video.
I'm hoping someone can tell me if this makes sense. I'd love to know, when I set GB to 0.001, just what the 0.001 is being calculated against and at what point in the processing chain. Is the radius 0.001 x 576 px in a PAL DV project?
If so, then for a 1080i project it should be 0.001 x 1080.
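To make the hypothesis concrete, here's a tiny sketch of the arithmetic. This is purely my guess at how Vegas interprets the setting (the fraction-of-project-height behaviour is an assumption, not something from any Sony documentation), but it shows why the same 0.001 would produce a physically different blur in each project:

```python
# Hypothesis (unconfirmed): Vegas treats the Gaussian Blur vertical
# setting as a fraction of the PROJECT frame height, not the source's.

def blur_radius_px(setting, project_height):
    """Effective vertical blur radius in pixels under that hypothesis."""
    return setting * project_height

for name, height in [("PAL DV (576)", 576), ("1080i (1080)", 1080)]:
    print(f"{name}: 0.001 -> {blur_radius_px(0.001, height):.3f} px")
```

If the hypothesis holds, 0.001 means roughly half a pixel of blur in a PAL DV project but about a full pixel in a 1080 project, applied before any downscale, which would line up with the two projects behaving differently on the same still.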
In order to avoid chewing up terabytes of storage, I'm going to try going direct from 1080 to MPEG-2 at 720x576.
Any thoughts much appreciated. Maybe we've got this sucker licked.