"Robert
Feinman" <robertdfeinman@netscape.net> wrote in message
news:MPG.1a50c27faf17676c989704@news.acedsl.com...
>
There is a never ending discussion of resolution vs
>
print size and capture media.
>
The mathematics and empirical testing usually
>
show that the usual expected resolution is in the range of
>
40-60 lines per mm. A good print should have 6 to 8 lpm.
> So
simple logic means that the maximum enlargement should
> be
5 to 8x. Thus the best size print one could expect from
>
35mm would be on the order of 8x12 inches with correspondingly
>
smaller sizes from digital cameras.
> In
spite of this people still get prints that are "sharp" with
>
much larger magnifications.
>
Personally, I've been scanning 35mm color negative film with the
>
new Minolta 5400 lately and can print out inkjets that look "sharp"
>
all the way up to the 18x maximum scanning resolution.
>
I'm not one of those who doesn't know what a "sharp" print looks like
>
either, since I use formats all the way up to 4x5.
>
> So
what's going on?
> My
conjecture (a theory in progress):
>
>
For people pictures shot at normal distances we are used to seeing
>
detail only in limited areas of the face such as the eyes (lashes
>
and reflections in the pupil) and perhaps loose strands of hair.
>
For landscapes and the like, we can't see all that much detail in
>
distant leaves and grass, but we do see bare branches, telephone wires
>
and the like as sharp.
>
For buildings and other man made structures the detail is seen in the
>
building edges and things like window frames.
>
> In
all cases the "sharp" things are not those with a lot of fine detail,
>
but rather those with good edge contrast. In other words acutance.
>
Most digital processing involves a certain degree of sharpening. This
>
doesn't do much for real detail, but does increase acutance. This makes
>
those features that we search for in real life appear "sharper" so we
>
read the image as being sharp. We don't expect to see much fine detail
> so
we are not surprised when it is lacking as long as those sharpness
>
indicators have good edge definition.
>
There are categories of images where the detail is important such
> as
scanning electron microscope images and we always comment on
>
how much detail we see in them when viewed. This shows that we don't
>
normally expect to see the fine structures in an image.
> So
I'm guessing that since the images conform to our expectations from
>
viewing such scenes in real life we accept them as sharp even though
>
the resolution figures would indicate that they are not really that
>
detailed.
>
> As
I said, a theory in progress, comments welcome..
> --
>
Robert D Feinman
>
robertdfeinman@netscape.net
>
Landscapes, Cityscapes, Panoramas and Photoshop Tips
>
http://robertdfeinman.com
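
First, a quick sanity check of the enlargement arithmetic quoted above -
a rough sketch in Python, where the 24x36mm frame size and the ~300 dpi
print figure are my own assumed numbers standing alongside your 6-8 lp/mm
criterion:

    MM_PER_INCH = 25.4
    FRAME_MM = (24.0, 36.0)   # assumed nominal 35mm frame dimensions

    def max_enlargement(film_lpmm, print_lpmm):
        # Largest linear magnification before the print drops below print_lpmm.
        return film_lpmm / print_lpmm

    def print_size_inches(magnification):
        # Print dimensions in inches for a given linear magnification of the frame.
        return tuple(round(side * magnification / MM_PER_INCH, 1) for side in FRAME_MM)

    for film_lpmm, print_lpmm in [(40, 8), (60, 8), (60, 6)]:
        m = max_enlargement(film_lpmm, print_lpmm)
        print(f"{film_lpmm} lp/mm film at {print_lpmm} lp/mm in the print: "
              f"{m:.0f}x -> about {print_size_inches(m)} inches")

    # 40/8 = 5x and 60/6 = 10x bracket the quoted 5-8x; an 8x blow-up of a
    # 24x36mm frame is about 7.6 x 11.3 inches, i.e. the "8x12" figure, and
    # a 5400 dpi scan printed at 300 dpi is the 18x mentioned.
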
"Sharpness"
varies with the viewer, of course, and the viewing
distance
(permitting lower unit resolution in larger prints to look
OK). As
you note, supplying good detail in limited important areas
can
suffice for some people (I'm always amazed by how bad
image
edge/corner resolution can be, and some people call the
image
sharp ;-). In digital, one can selectively sharpen (and
selectively
soften) areas to increase the impression of good image
resolution
and smoothness, if done carefully. For me, distant
subject
info MUST be sharp in a "landscape" or an "architectural"
image -
and the "almost-sharp" of stretched-DOF-covered images
looks
bad. Standards of sharpness can vary considerably by
use,
with direct-viewed images requiring greater resolution (I look
at
these close-up, regardless of size...;-), and with reproduced
or
screen images requiring less resolution to look good (I think
the
sharp screen introduced by the dot pattern in reproductions
and
pixels in monitors contributes to the look of sharpness that
isn't
real [as can noticeable sharp-edged film grain] and/or the
expectations
are lower...;-).
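
On the selective sharpening/softening point, in case it is useful to
anyone: one common way to do it is to blend a sharpened copy back into
the original through a grayscale mask. A minimal sketch with Python and
Pillow (the file names and unsharp-mask settings are just placeholders,
not a recommendation):

    from PIL import Image, ImageFilter

    original = Image.open("scan.tif")                   # placeholder file name
    mask = Image.open("sharpen_mask.png").convert("L")  # white = sharpen, black = leave alone

    # Unsharp masking raises edge contrast (acutance) rather than adding real detail.
    sharpened = original.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

    # Take the sharpened copy where the mask is light, the original where it is dark.
    result = Image.composite(sharpened, original, mask)
    result.save("scan_selective.tif")

    # The same composite with an ImageFilter.GaussianBlur copy and a second
    # mask gives the selective softening.
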
Interesting area, image resolution perception...
--
David Ruether
d_ruether@hotmail.com
http://www.David-Ruether-Photography.com