progressive vs lower field first..

dvideo2 wrote on 3/30/2011, 10:23 AM

Is there any reason why I shouldn't make photo videos in a progressive format? In other words, if my video consists only of still images (JPEGs / Photoshop files / TIFFs), does it make more sense to work in progressive than lower or upper field first? The output
will be for TV monitors and for projectors.
I know this question might be a bit general, but I'm wondering what
people's thoughts are.
Thanks.

Comments

rs170a wrote on 3/30/2011, 10:29 AM
Most of my projects end up on DVD so I have my project properties set to LFF, even if it's just a bunch of stills.

Mike
JohnnyRoy wrote on 3/30/2011, 11:33 AM
I do all of my photo montage projects as 24p. This has two advantages:

(1) They render faster, because I only have to create 24 frames per second instead of 30.

(2) This eliminates line twitter because the photos are progressive and so is the output format.

There is no good reason to interlace perfectly good progressive photos.
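To see why twitter happens with stills: an interlaced display shows only every other row of the image in each field, so a fine horizontal detail that is one pixel high lands in just one of the two fields and blinks on and off at the field rate. A minimal sketch in pure Python (the "frame" here is a hypothetical list of pixel rows, not any real video API):

```python
# Sketch: why a one-pixel-high horizontal detail "twitters" on an
# interlaced display. A frame is a list of rows; interlacing shows even
# rows in one field and odd rows in the next, alternating at 50/60 Hz.

def fields(frame):
    """Split a progressive frame into its two interlaced fields."""
    even = frame[0::2]  # field 1: rows 0, 2, 4, ...
    odd = frame[1::2]   # field 2: rows 1, 3, 5, ...
    return even, odd

# 6-row frame, black except one bright 1-pixel-high line on row 3
frame = [[0] * 8 for _ in range(6)]
frame[3] = [255] * 8

even, odd = fields(frame)
print(any(255 in row for row in even))  # False: line absent from field 1
print(any(255 in row for row in odd))   # True: present in field 2 -> flicker
```

The detail exists in only one of the two fields, which is exactly the on/off flicker you see as line twitter.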

Note: If someone views the DVD on a TV that only supports interlaced, the DVD player will happily add the interlacing but everyone else gets a nice smooth progressive experience.

~jr
musicvid10 wrote on 3/30/2011, 11:37 AM
Since most DVD players now do progressive playback, there is no reason to render progressive footage or slideshows as interlaced. (The analog outputs on all DVD players are always interlaced.)
rs170a wrote on 3/30/2011, 2:56 PM
You learn something new every day :)
24p it is from now on.
Thanks guys!!

Mike
Jøran Toresen wrote on 3/30/2011, 3:12 PM
Johnny (and others): So the DVD / MPEG-2 standard(s) accept 25 fps (PAL) progressive video? (I live in PAL land.)

Jøran
johnmeyer wrote on 3/30/2011, 3:25 PM
I have to disagree with JR and the others, on two counts:

1. You should generally render for the output device on which the video will be viewed. If that is a television set, then you should render 25 fps interlaced (PAL) or 29.97 fps interlaced (NTSC). Why? Because that is what these devices were designed to display. Yes, they can display other things, but they are perfectly capable of displaying interlaced video.

Progressive is NOT better than interlaced; it is simply different.

2. A much more important reason for not rendering 24p is that you will not have as much temporal smoothness. If this is what you want (and there are many excellent artistic reasons why you may want to do this), then by all means render 24p. However, 24p always produces a slightly "removed" feeling to motion, which is one small part of why film has a totally different feel than video, and impacts people quite differently.

I watch a lot of different types of video, but one thing that has made me especially sensitive to what I consider to be bad advice about rendering to 24p, is watching ESPN3.com on my son's Xbox 360. ESPN (the USA sports channel) has an online service that streams sporting events. They are in excellent HD, but are all at what appears to be 24p. While the picture is wonderfully detailed, the impact of the actual sporting event is significantly altered by this dramatically lower (24 events per second instead of 60) temporal cadence.

So, if you want the "feel" of 24p, then go ahead and render to that, but otherwise DON'T render to it if you are going to display on a television.
PeterDuke wrote on 3/30/2011, 3:53 PM
If the stills are truly still, then interlaced or not won't matter. If you go overboard on transitions and/or panning and zooming (which makes me feel seasick), then there would be a difference, of course.
Rory Cooper wrote on 3/31/2011, 2:13 AM
I have to agree with JR

In my case (25p), progressive is better than interlaced because it contains more detail, resamples/resizes better, and won't give you line-blending or deinterlacing problems. Any interlacing or deinterlacing loses detail and causes artifacts, no matter what.

Just wondering, what determines what stays and what goes when you interlace?

A bit of motion blur can improve motion in progressive…it’s an illusion thing just like temporal smoothness.
After all, interlaced clips are still 25 frames per second (in PAL land).

To add: professional motion graphics artists work in progressive because it is better, and your project falls into that category.
The HD gurus (not me) all say "interlace bad, progressive good."

We all agree on one thing: the shortfall is in the frame rate (25/30), but that's all we have. Trying to compensate for that shortfall by interlacing is like the Hollywood answer to every problem: strap nukes onto anything (submarines, space vehicles), blow the crap out of the planet, and things will somehow get better.

It is simply not true.
johnmeyer wrote on 3/31/2011, 9:26 AM
>In my case 25p, progressive is better than interlaced because it contains more detail

This is a common misperception, and it is wrong. The reason people think this is that they look at a freeze frame and, because they are looking at two fields at the same time, and those two fields are NOT from the same moment in time, they see artifacts, including the (correct) perception that the frame is not as sharp. But if you took two progressive frames and looked at BOTH of them at the same time, you'd also see a lot of artifacts and blurring.

[edit](Just after I posted the above, I realized that we might get into a long discussion about whether 25p has more resolution than 50i. Both are 720x576 pixels, and if you videotaped a resolution chart with the camera locked down and played them back (as motion video, not as a freeze frame), you would resolve exactly the same number of lines. So, in that sense they have exactly the same level of detail. And, as the camera moves, you'd see the same number of lines. However, the "twitter" -- which you will get with both interlaced and progressive as the camera moves -- will be different, and probably more pronounced, with interlaced footage, and this could lead to a perception of less detail.) [end edit]

>A bit of motion blur can improve motion in progressive

Sometimes yes, but oftentimes no. You can certainly create an effect that may be pleasing (like the blur often used in still photos by using a slow shutter speed when photographing moving objects, in order to emphasize their motion). But blur is blur, and it means a reduction in detail, something that may or may not be acceptable, depending on what you are trying to achieve.

>The HD gurus "not me" all say "interlace bad progressive good"

No, they don't. The ultimate HD gurus -- the people who sat on the standards committees when HD television was invented in the 1980s, and then re-invented in the 1990s at the dawn of digital video -- included 1080i as the main part of their standard. Interlaced video looks great, and continues to be a wonderful delivery format.

Would I rather have 60p instead of 60i? Yes. Why? Because it has more spatial and, subtly, more temporal information than 60i. However, if you give me any other choice vs. 60i (or 50i for PAL), and if I were shooting or watching fast-moving action, or wanted an "immediacy" to my creation, I'd choose 60i over anything else.

One last point.

Ask the following question of anyone who has never edited a single frame of video in their life: do you see any problems, artifacts, or issues when watching modern 1080i HD television? To help them answer, show them the same scenes, shot first in 60i, and then in 24p or 30p. They may not be able to articulate what they are seeing, but none of them will say that the 60i looks bad. Some may prefer one vs. the other, simply because of their own sensibilities, but they won't use the same words to describe the 60i vs. the alternative that they would use, for instance, to describe 6-hour VHS vs. 2-hour VHS; or VCD vs. DVD; or low-bitrate encodes vs. high-bitrate encodes; or 4:1:1 colorspace vs. 4:4:4 colorspace.

Thus, the end user is sensitive to loss of detail, blocking artifacts, color space, noise, and many other types of artifacts, but none of them ever senses anything relating to interlacing, even with high motion video.

It would be a very interesting thing to screen the same scenes, one shot with 60i and the other shot with 60p, and show them to an audience with no ties to our industry and have them assess which was better. The test would have to be very carefully controlled so that lighting, color, contrast, and other differences didn't creep in. I suppose you could do it simply by taking 60p footage and turning it into 60i. My bet is that probably only half the people would notice the difference on most scenes, and of those who could perceive the difference, most would have great difficulty articulating it, and most would not refer to sharpness.

We are all constantly dealing with the above-mentioned engineering trade-offs, and interlacing is one of the few where there really is virtually no perceptible degradation to the end user. It is only those of us who edit video, who see a single frame frozen in time with both fields (taken at two different moments) displayed together (an incorrect thing to do, but all editing programs do it), who come away with the mistaken perception that interlacing has a problem.
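The frozen-frame artifact editors see can be sketched in a few lines. This is a toy illustration (pure Python, hypothetical pixel lists), not any editor's actual code: two fields sampled at different moments are woven into one frame, so a moving object's edge alternates position on successive rows, which is the familiar "combing."

```python
# Sketch of editing-timeline "combing": weaving two fields captured at
# different moments into one frame makes a moving edge zigzag row by row.
# Frames and fields are lists of rows (each row a list of pixel values).

def weave(field_even, field_odd):
    """Interleave two fields into one frame (even-field rows first)."""
    frame = []
    for e_row, o_row in zip(field_even, field_odd):
        frame.append(e_row)
        frame.append(o_row)
    return frame

W = 8
def bar(x, rows):
    """A field containing a bright vertical bar at column x."""
    return [[255 if c == x else 0 for c in range(W)] for _ in range(rows)]

# Bar at x=2 when field 1 was sampled; it has moved to x=5 by field 2.
woven = weave(bar(2, 3), bar(5, 3))
print([row.index(255) for row in woven])  # [2, 5, 2, 5, 2, 5] -> combing
```

Displayed as motion video, each field is shown at its own moment and no comb is visible; the zigzag only appears when both are frozen into one still.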


John_Cline wrote on 3/31/2011, 12:59 PM
John Meyer: Thanks for your thorough and well-reasoned defense of interlaced video. I concur completely.

Ultimately, I'd like to shoot, edit and deliver in 60p but, until then, 60i works just fine. 24p and 30p simply do not have enough temporal resolution to suit me and the type of video which I shoot.
TeetimeNC wrote on 3/31/2011, 1:48 PM
>Ultimately, I'd like to shoot, edit and deliver in 60p but, until then, 60i works just fine. 24p and 30p simply do not have enough temporal resolution to suit me and the type of video which I shoot.

Interesting discussion. John, what type of video do you shoot?

/jerry
JohnnyRoy wrote on 3/31/2011, 2:12 PM
Just to clarify... the reason I recommended (and use) 24p for photo montages is because line twitter is a huge problem when adding motion to still images. There is nothing more distracting than a massively flickering image. You don't need to be a video professional to scream out, "what the heck is THAT?".

Delivering progressive images as progressive video eliminates this problem entirely. The drawback is that the only progressive format supported by the DVD spec is 24p. This does not pose too much of a problem, however, because I keep the motion in my montages very slow and subtle. Nothing that 24p can't handle. I doubt you could tell the difference in motion. Remember, this is not full-motion... it's slow gentle almost imperceptible pans and zooms.

Given that most modern TV sets can handle progressive natively, and more and more will in the future, the DVDs I deliver to my customers are future-proof. As I said, if someone has an older interlace-only TV, they'll see an interlaced version that may have line twitter. I do use an occasional vertical Gaussian blur to tame some of the worst-offending images, but sometimes twitter still remains. When the customer buys a new progressive TV, it will all be gone.

If I delivered my montages as interlaced, they would have line twitter forever and ever and ever. I personally prefer my solution, and my customers seem happy. Your mileage may vary. Isn't it great that there is always more than one way to do things, and you get to choose which one you like? ;-)
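As a rough illustration of why a vertical blur tames twitter (a sketch, not the actual Vegas filter): a simple 3-tap vertical kernel spreads a one-line-high detail into the neighboring scan lines, so it shows up, dimmer, in both fields instead of flashing in only one.

```python
# Sketch: a 3-tap vertical blur spreads a 1-pixel-high detail across
# adjacent scan lines, putting energy into both fields (less twitter).

def vertical_blur(column, kernel=(0.25, 0.5, 0.25)):
    """Apply a simple 3-tap vertical blur to one column of pixels,
    repeating the edge pixel at the top and bottom boundaries."""
    out = []
    for i in range(len(column)):
        above = column[i - 1] if i > 0 else column[i]
        below = column[i + 1] if i < len(column) - 1 else column[i]
        out.append(kernel[0] * above + kernel[1] * column[i] + kernel[2] * below)
    return out

col = [0, 0, 0, 200, 0, 0]   # 1-pixel detail on row 3 (odd field only)
blurred = vertical_blur(col)
print(blurred)  # [0.0, 0.0, 50.0, 100.0, 50.0, 0.0] -> rows 2 AND 3 lit
```

After the blur, rows 2 (even field) and 3 (odd field) both carry some of the detail, so neither field shows it blinking fully on and off; the price is exactly the loss of sharpness discussed above.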

~jr
John_Cline wrote on 3/31/2011, 2:28 PM
"Interesting discussion. John, what type of video do you shoot?"

I shoot a variety of things, but one that needs the extra temporal resolution is automobile racing. The "judder" at 24p or 30p is just annoying and unacceptable.
Andy_L wrote on 3/31/2011, 3:45 PM
John Meyer, in the age of LCD TVs, I don't think your argument makes much sense, at least for US markets. Modern TVs can't display interlaced footage as interlaced. They all deinterlace it first and show progressive (with some deinterlacing artifacts).

Even if they could show interlaced, without a CRT's phosphor persistence, interlacing still wouldn't look right.

I think there's a case for shooting/outputting 60i to utilize the higher frame rate, but the progressive vs. interlaced argument seems dead to me on any other merits.
tunesmith1801 wrote on 3/31/2011, 6:04 PM
I hope I am understanding this correctly.

first:
If I select lower field first, then I am using interlacing, correct?

Second:
I should normally render my footage as progressive if I am playing back to television.
johnmeyer wrote on 3/31/2011, 6:24 PM
>Interesting discussion. John, what type of video do you shoot?

There are two "Johns" and one "Johnny" in this discussion, so I guess we can each answer. Most of the video I shoot is event video: sports, dance concerts, a few weddings, talent shows, etc. 24p might be interesting to use for the weddings (there used to be a guy on these forums who did some amazing 24p wedding work; can't remember his name at the moment).

>John Meyer, in the age of LCD TVs, I don't think your argument makes much sense, at least for US markets. Modern TVs can't display interlaced footage as interlaced. They all deinterlace it first and show progressive (with some deinterlacing artifacts).

This is definitely a legitimate point and I can't disagree with any part of it. However, I still recommend keeping everything interlaced for three reasons:

1. Lots of people still have CRT TVs, and progressive video will not look as good on these sets as does interlaced.

2. All modern pixel-based TVs are designed to handle interlaced video, and the people who designed them are smarter than any of us. What I mean by this is that they have designed, into the TV, the circuitry that knows how to handle interlacing correctly for that particular set. You'll get better results by letting the hardware in the TV do the work (unless you want to follow Nick Hope's excellent tutorial on how to do really clever deinterlacing).

3. There is absolutely nothing that would stop a set manufacturer from producing a set where alternating rows of pixels could be addressed alternately, thus providing a line-by-line version of interlaced display. Thus, it is possible that future displays may be able to handle interlacing natively.

There are two reasons I take such pains to constantly challenge people about this interlacing vs. progressive issue. First, most people don't know how to do de-interlacing correctly, and they butcher the job. Just look through the posts in this forum over the past years, and note how many of the posts result from someone screwing this up. If they instead just left everything alone, and edited and rendered using the interlaced format of the original video, they would not have these problems.

The second reason for encouraging people not to deinterlace is that once you have done it, you have lost -- forever -- both temporal and spatial fidelity. By any objective measure, your video is worse than it was before, and now contains artifacts that it didn't have before. In addition, depending on how you do the deinterlacing, you may have lost half the resolution, created line twitter, etc.

The post from tunesmith, which I reply to below, shows what I'm talking about.

>If I select lower field first, then I am using interlacing, correct?

Whatever you do, do NOT change the field setting for your source video. As for setting the project properties to lower, upper, or progressive, I recommend that you use the "match source" button in the Project Properties dialog to automatically set the Vegas project properties so they are identical to your source video.

>I should normally render my footage as progressive if I am playing back to television.

Not to be hard on tunesmith, but this is exactly what I'm talking about. If you render to DVD at 29.97, change the render to progressive, and then play this DVD on your TV set, it is not going to look as good as if you left everything alone and rendered using one of the appropriate DVD Architect templates.
tunesmith1801 wrote on 3/31/2011, 6:40 PM
Thanks Johnmeyer that is the information I needed to hear.

Jim
Rory Cooper wrote on 3/31/2011, 9:45 PM
debunk this

The Kell/Interlace Factor Effect on Resolution

For interlaced video the factor is 0.7, which means that the equivalent interlaced resolution is only 0.7 (70%) of the progressive resolution. By this measure the 1080 interlaced lines deliver the equivalent of 756 progressive lines. That's barely more vertical resolution than 720p, which has 720 progressive lines in the vertical dimension.

"When interlaced scanning [drawing all the odd lines then all the even lines] is used, as in all the conventional [video] systems, the 70 percent figure applies only when the image is fully stationary and the line of sight from the viewer does not move vertically by even a small amount. In practice, these conditions are seldom met, so an additional loss of resolution, called the interlace factor, occurs under typical viewing conditions. This additional loss depends on many aspects of the subject matter and viewer attention, so there is a wide range of opinion on its extent. Under favorable conditions, the additional loss reduces the effective value of vertical resolution to not more than 50%; that is, no more than half the scanning lines display the vertical detail of an interlaced image. Under unfavorable conditions, a larger loss can occur. The effective loss also increases with image brightness, as the scanning beam becomes ... [fatter]."

From K. Blair Benson and Donald G. Fink, "HDTV: Advanced Television for the 1990's", 1991, McGraw-Hill, NY; bracketed words added by Allan W. Jayne in his article on the Kell and interlace factors.
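The arithmetic in the excerpt above is easy to check: effective vertical resolution is just scan lines times the factor. A two-line sketch (the function name is mine, for illustration only):

```python
# Checking the Kell/interlace-factor arithmetic from the excerpt above:
# effective vertical resolution = scan lines * factor.

def effective_lines(scan_lines, factor):
    return scan_lines * factor

print(effective_lines(1080, 0.7))  # ~756 lines (Kell factor alone)
print(effective_lines(1080, 0.5))  # 540.0 (unfavorable interlace factor)
```

So under the excerpt's favorable-conditions 0.7 figure, 1080i resolves about 756 effective lines, and under the 50% interlace-factor case only 540, which is the basis of the "720p is subjectively close to 1080i" argument.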
Rory Cooper wrote on 3/31/2011, 9:59 PM
Extract from Philip Hodgetts book “ THE HD SURVIVAL HANDBOOK”

CHAPTER 8 Progressive Good, Interlace Bad
This is one of the rare times when I can be unequivocal: Progressive scanning of the image is good;
interlaced scanning of the image is bad.
We should never have had interlace in HD. We would have been much better off without interlace
in any of the ATSC broadcast formats. There were a couple of factors that led to its inclusion: the
established broadcasters didn’t want the expense of converting existing 29.97 interlaced frames
per second material to a new format. The other potential reason is that Microsoft entered the ATSC
process and pushed heavily for an all-progressive version of HD. Reportedly, because Microsoft was
in favor of all-progressive, everyone else felt the need to support interlace in HD! That may not be
true in fact, but it is true in spirit.

http://www.philiphodgetts.com/about-philip/


Broadcasters are saying where we should go:
http://www.amberfin.com/pdfs/whitepapers/720pwhitepaper.pdf

PeterDuke wrote on 3/31/2011, 10:19 PM
On p5 of the paper:
"Since interlacing shows half resolution per field rather than full resolution per frame, 720p subjective resolution is argued to be close to 1080i."

If the picture is still, 1080i has the same resolution as 1080p. When the picture is still you have time to analyze the picture in detail and notice things such as resolution.
Rory Cooper wrote on 3/31/2011, 10:23 PM
It's TV; the picture isn't still. Check the Kell/interlace article above.
PeterDuke wrote on 3/31/2011, 10:31 PM
Still pictures ARE sometimes shown on TV, particularly documentaries.
Rory Cooper wrote on 3/31/2011, 10:34 PM
Peter, the point is that they are never still; it's impossible. The "still" image is still being scanned.
TeetimeNC wrote on 4/1/2011, 2:00 AM
>Just to clarify... the reason I recommended (and use) 24p for photo montages is because line twitter is a huge problem when adding motion to still images. There is nothing more distracting than a massively flickering image.

JR, I think your point is good regarding low-motion photo montages. What is your preferred approach for high-motion video DVDs? I ask because in the dvxuser forum for my HMC150 cam, one of the most experienced videographers recommends shooting 720p60 rather than 1080i60, and then rendering the 720p60 using the DVDA widescreen template, which puts it into a 60i stream (i.e., one progressive frame per field). My instincts (rather than hard evidence) have told me this would be preferable for putting fast action HD on DVD. Your thoughts?
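The "one progressive frame per field" packing described above can be sketched roughly like this. This is a hypothetical pure-Python illustration of the idea only (not what DVD Architect actually does internally, which also involves scaling to SD):

```python
# Sketch of packing 60 progressive frames/s into a 60i stream: each
# progressive frame supplies one field, with alternating parity, so
# 60 frames/s become 60 fields/s (30 interlaced frames/s) and every
# temporal sample survives.

def p60_to_60i(frames):
    """frames: list of progressive frames (lists of rows) -> list of fields."""
    fields = []
    for i, frame in enumerate(frames):
        parity = i % 2  # even-numbered frame -> even field, odd -> odd field
        fields.append(frame[parity::2])
    return fields

# Four tiny 4-row "frames", each filled with its own index for tracing
frames = [[[i] * 2 for _ in range(4)] for i in range(4)]
fields = p60_to_60i(frames)
print([f[0][0] for f in fields])  # [0, 1, 2, 3]: every source frame survives
```

The appeal is that the full 60-per-second temporal cadence of the 720p60 shoot is preserved in the interlaced DVD stream, at the cost of halving each frame's vertical samples.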

/jerry