In article <6fepk0$m14$1@news1.mnsinc.com> "Greg Storey" writes:
>Okay, let me follow up with this.
>
>If you have a MiniDV Camera that claims to have 550 lines of resolution what
>kind of resolution can be expected from the following --
>
> - The MiniDV tape (some have claimed that due to MiniDV compression the
>resolution is "degraded" when the signal is transferred to tape).
>
> - The S-video out.
>
> - The composite video out.

That's a fairly complex question, and it brings up the influence of
compression on resolution, which of course is a *variable*, so Yeesh!
Tough question.

But assuming we knew the answer to that question, things get simpler.
In order to get to analog there will be a digital to analog conversion,
and there will be a reconstruction filter. An attempt will be made to
preserve the digital resolution (which may be moot if the compression
resolution is significantly lower than the digital resolution) through
this filter. I don't know where 550 comes from; the theoretical limit
of DV would be 540 lines, and accounting for the reconstruction filter
leaves maybe 500 or so. Subtract from that whatever degradation
compression adds, if any, which I don't know.
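
For what it's worth, here's a back-of-the-envelope sketch in Python
(my own round numbers, not anything from the camera spec) of where
that 540 figure comes from: DV carries 720 luminance samples per
active line, and "lines of resolution" are quoted per picture height,
so you scale by 3/4 for a 4:3 frame. The filter-loss factor below is
only a guess.

active_samples = 720      # DV luma samples per active line (13.5 MHz)
aspect_ratio = 4 / 3      # standard-definition 4:3 frame

tvl = active_samples / aspect_ratio   # TV lines per picture height
print(f"Theoretical DV limit: {tvl:.0f} TVL")            # -> 540

# A real reconstruction filter can't stay flat all the way out to the
# sampling limit; the 0.93 here is only a rough assumption.
filter_loss = 0.93
print(f"After reconstruction filter: ~{tvl * filter_loss:.0f} TVL")  # -> ~502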

Whatever you get, whether it's 500 lines or less or a little more,
both the S-video and composite connectors will carry the same
resolution. In fact they carry the exact same signal, except on the
S-video the luminance and chrominance are separate and on the composite
they're mixed together.

It's up to the monitor to determine how much degradation occurs to the
composite signal as compared to the S-video signal. What's more, it's
not something you can easily state as lines of resolution, since lines
of resolution is a scalar quantity similar to the cutoff frequency of
a filter. For a first order filter, cutoff frequency means something,
but for higher-order filters the cutoff frequency may not be as
significant as the behavior of the filter passband.

Just so, the chroma separator of the monitor is going to muck up the
flatness of your luminance (and chrominance) passbands, and the
resultant visual impact of this mucking up may be more significant
than the cutoff frequency (lines of resolution).
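
To put numbers on that filter analogy, here's a little Python sketch
(my own illustration, nothing measured off a monitor): two lowpass
filters with the very same cutoff can behave quite differently below
the cutoff, which is why one lines-of-resolution number doesn't tell
the whole story.

def first_order(f, fc):
    # gentle RC-style rolloff; down to about 0.707 (-3 dB) at fc
    return 1.0 / (1.0 + (f / fc) ** 2) ** 0.5

def higher_order(f, fc, n=4):
    # 4th-order Butterworth-style response: much flatter passband,
    # same -3 dB point at fc
    return 1.0 / (1.0 + (f / fc) ** (2 * n)) ** 0.5

for frac in (0.25, 0.5, 0.75, 1.0):
    print(f"{frac:.2f}*fc: 1st-order {first_order(frac, 1.0):.3f}"
          f"  4th-order {higher_order(frac, 1.0):.3f}")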

Speaking practically (at last), the S-video is going to look
consistently better than the composite in this case. It's probably
going to be a difference you have to look very closely to detect,
except for the case where there are vertical stripes in the image
whose luminance frequencies fall into the chroma range. Most very high
quality monitors with comb filters will do a darned good job of not
mis-interpreting such a signal as chroma, but no chroma separator can
do this perfectly, and there will be some rainbow strobing in the
vicinity of the stripes. In such cases, the S-video signal will look
significantly better, because there will be no confusion at all
between the luminance and the chrominance, and the vertical stripes
won't have rainbows in them.

>And, apparently from your comment, independent of the resolutions above the
>resolution of the monitor (i.e., hi-res) can have the "effect" of improving
>the resolution of the image (degraded from 550 lines) based on the
>monitor's resolution.

No, the monitor cannot improve the resolution, but these "lines of
resolution" numbers are points on a sloping response curve. If you
combine two devices with similar response curves, similar lines of
resolution, the resultant curve gets steeper, and the resolution
decreases. So generally you want to have much more resolution
available in the monitor than in the signal you're monitoring. That
way, you're absolutely sure your monitor isn't *subtracting* any
resolution.
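
Here's a crude Python sketch of that cascading effect (my own model,
treating each device as a simple single-pole response that has fallen
to about 70% at its rated lines of resolution):

def response(lines, limit):
    # single-pole-style rolloff: about 0.707 at 'limit' lines
    return 1.0 / (1.0 + (lines / limit) ** 2) ** 0.5

def combined_limit(limit_a, limit_b, threshold=0.707):
    # cascading devices multiplies their responses; find where the
    # product falls to the same 70% threshold
    lines = 1
    while response(lines, limit_a) * response(lines, limit_b) > threshold:
        lines += 1
    return lines

print(combined_limit(500, 500))    # ~322: a monitor no better than the
                                   # signal costs you plenty
print(combined_limit(500, 2000))   # ~473: an over-spec'd monitor
                                   # subtracts very little

With this toy model a 500-line signal on a 500-line monitor loses a
big chunk, while on a monitor that resolves far more you barely lose
anything -- which is the point.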

-Charlie Hand chand@netcom.com