In article , spcmnspf@violet.berkeley.edu says...

>Bob, could you post the message I sent you this morning? Other people
>here might find it interesting. Thanks,
>Spiff.

OK, but I split it up, and added my responses - I trust that you don't
mind. ;-) To keep things clear, "SS:" appears before your comment,
and "DR:" appears before my response to SS, or my original statement
that SS was commenting on. (Keep in mind that SS has not had the
opportunity, yet, to respond to my responses. ;-)


DR:
>> Hmmm, but most people can demonstrably see at least 180 degrees wide....
SS:
>I'm not quite sure about this.... I'll check with Professor DeValois about
>this and see what he has to say, if I get an opportunity that is, he's a
>difficult man to get ahold of. I seem to recall that the issue of
>peripheral vision is an extremely complex one, if you want to include the
>unattended areas of the visual field, as well as the famous 'blind-sight'
>cases, then yeah, we can probably see about 180 degrees or so. But all
>things considered, what we usually are aware of without actively using
>selective attention has usually been shown to be about 160 degrees.
DR:
Ah (not intending to be argumentative, just getting information
out....), I have been trying to show that we see in a way that is
unexpected by most people (in spherical ["fisheye"] perspective), and
one of the indicators that this may be true is the angle of view:
rectangular-type perspective (the kind people associate with "low
distortion") is impossible at the angles involved, be they 160, 180,
or 200 degrees (the last being closer to what I can see). Your earlier
comment, "In angle of (unattended) view, that is to say total peripheral
vision, the human eye is approximated by about 20-28mm of focal length
for a 35mm format." is consistent with what others have been saying, but
does not fit what you say above ("160 degrees"), or what I have been
saying (even a 20mm lens covers FAR less than 160 degrees!).
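That last claim is easy to check with the standard rectilinear
angle-of-view formula. A quick sketch (the formula and the 36x24mm frame
dimensions are standard; the specific printout is mine, not from either
post):

```python
import math

def rectilinear_fov(focal_mm: float, frame_mm: float) -> float:
    """Angle of view (degrees) of an ideal rectilinear lens
    across one frame dimension: theta = 2 * atan(d / (2 * f))."""
    return math.degrees(2 * math.atan(frame_mm / (2 * focal_mm)))

# 35mm film frame: 36mm wide, about 43.27mm on the diagonal.
for f in (20, 28):
    print(f"{f}mm lens: {rectilinear_fov(f, 36):.0f} deg horizontal, "
          f"{rectilinear_fov(f, 43.27):.0f} deg diagonal")
# 20mm lens: 84 deg horizontal, 94 deg diagonal
# 28mm lens: 65 deg horizontal, 75 deg diagonal
```

So a 20mm rectilinear lens covers roughly 84 degrees horizontally
(about 94 on the diagonal); note that the formula blows up toward 180
degrees, so 160-plus degrees is simply out of reach for rectilinear
projection and requires a fisheye (non-rectilinear) mapping.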
SS:
>But, like I said, this is subject to individual variation, and having been
>trained as a developmental and cognitive research psychologist, one maxim
>that has been thoroughly drilled into my head is that the variation within
>groups is almost always greater than the variation between groups, and the
>human species is one damn big group.
DR:
I'd go along with that - though I suspect that variation in the apparent
width of the visual field has more to do with experience than with physical
or "electrical" structure (though there will be some variation there, also).
DR:
>> As is the role of the brain in giving us one horrendously long tonal
>> scale in a single image while maintaining good local contrast, and
>> in selectively color-correcting areas in the field of view (besides all
>> the other "fill-in" abilities the brain has for completing an imperfect
>> image obtained from an incomplete "film plane" (there are parts
>> missing!).
SS:
>I'm not sure what color has to do with the optics of the eye. True, the
>trichromatic and opponent processes have their functional units primarily
>in the retina, but what does that have to do with width of peripheral
>vision? On the other hand, your point about the interpolation of the
>contents of the visual field by the cortical areas v1-v4 is well taken.
>However, this just reaffirms my assertion, (and Ansel's too), that the
>human eye and visual system has only the slightest similarity to a camera
>and lens. That is why I think this whole argument is misplaced.
DR:
With the above, I was referring to your comment, "As for depth of field
no comparison can be made here because the human visual system uses a
wide variety of monocular and binocular cues (binocular cues being
unavailable to the camera lens) in order to perceive and gauge depth of
field. (This is a glossing over and deserves more discussion.)", in which
you appeared to bring up the subject of brain-eye interaction, and I was
extending that concept by showing (photo-related) instances (contrast
compensation and color balance compensation) in which there is clearly
brain signal-processing involved. Though it is obvious that the simple
image produced by the eye structure is not what we ultimately perceive
as what we see (our sight would be pretty bad, if it were!), there are
some similarities to photography that are fairly straightforward and
structure-dependent: the angle of view of the eye; perspective; focus;
and color rendition (the angle attended to, and the color and contrast
balancing, both localized and generalized, are more brain-involving - and
there probably is a bit of signal processing involving sharpening and a
few other things). You may be right that this whole thing is misplaced -
one could argue for a greater similarity of the eye-brain combination to
video, where the basic received light image is signal-processed
electrically before being recorded electrically on tape.... But the basic
optical ideas can be explored in photographic imaging terms, I think.
SS:
>With regard to the issue of contrast (and again, I don't see what
>relevance this has to the optics of the eye), it is primarily a function
>of the functional units of the visual field in the retina, LGN, and V1
>and V2. (Edge detectors and the like.) I think it's a mistake to consider
>the retina as homologous (let alone analogous) to the film plane in a
>camera.
DR:
Yes, but the concept of uneven performance across the image area is
not unknown in photography - witness the images made by those
all-too-common bad zooms and poor wide-angle lenses out there! ;-)
And I cannot believe that our 20-stop (plus) tonal range (in one "frame")
is achieved structurally, without considerable localized signal processing.
I have used film/developer combinations that approach that kind of range,
and the price is considerable loss of overall brilliance and local contrast.
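For a sense of what 20-plus stops means as a contrast ratio: each
photographic "stop" is a doubling of light, so n stops corresponds to a
ratio of 2^n to 1. A trivial sketch (the 20-stop figure is from the
discussion above; the arithmetic is just powers of two):

```python
def stops_to_contrast(stops: int) -> int:
    """Contrast ratio (brightest:darkest) for a given number of
    photographic stops, where one stop doubles the light."""
    return 2 ** stops

print(stops_to_contrast(20))  # -> 1048576, i.e. about a million to one
```

That million-to-one span in a single perceived "frame" is the figure that
seems hard to credit to retinal structure alone, without localized
processing.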
SS:
>> >humans are able to attend to an area of vision quite closely, thus
>> >getting the equivalent of an extremely long focal length, or a wide
>> >area of the total visual field, thus creating a short focal length.
DR:
>> I thought I covered this with an earlier post:
>> "...like having a super-wide-angle, super-wide-range zoom (8mm fisheye
>> to 5000mm super telephoto equivalent for 35mm film), which is only
>> reasonably good from maybe 100mm to 1000mm....."
SS:
>This raises the issue of the mechanism of selective attention and how it
>works. I'm sure you're probably aware this is one damn big can of worms.
>I suppose it depends on where you stand in the debate as to where the
>'filter' in selective attention sits in the visual/attention system. I
>personally find the idea oxymoronic, for if the function of a filter is to
>ease the cognitive processing load, and this is done by selecting, the
>mechanism must necessarily identify ALL elements and then select from
>them. If all elements of the visual field are being identified, how does
>this 'save' cognitive load? Answer: it doesn't. In fact it creates more
>load, if you think about it, because the cognitive system needs not only
>to recognize all the elements (a highly energy-intensive task), but then
>to select from them as well. This amounts to MORE work for the visual
>system.
>
>If you ask me, (you didn't but I'll give you my opinion anyway, hope you
>don't mind... ;) we don't filter at all. All the visual information the
>eye is capable of receiving is received, (barring any organic dysfunction),
>and it is the mechanism of selective attention that parses elements out of
>the stream of visual information by routing them to the executive
>processor. What is not routed via the exec. processor becomes the context
>for the elements constructed from the information parsed out of the visual
>stream, where we then use default values for the frame in question, (the
>frame being what was filled in as you mentioned). Now with regard to the
>routing of info to the executive processor, we have a bandwidth vs. acuity
>trade-off here. The wider the bandwidth we attend to, (a larger area of
>the visual field), the less our acuity, due to the processing limitations
>of the executive. Conversely, the narrower the bandwidth, the greater the
>acuity as the exec. processor is able to devote more resources to the
>parsed info and thus achieve greater detail.
DR:
Well put. I would agree with the above. In my own experience, when I
expand my attention to the whole field of view (or to one or more points
in the field away from center), sharpness goes down in the central area
of view (though not so noticeably in the surrounding area, where it is
limited anyway). I can count fingers on a hand three feet away at 90
degrees (horizontal) from field center if the contrast is sufficient, or
tell what activity is going on beside me without turning my eyes, but I
cannot read even large type at all well even slightly off field center.
SS:
>Thus we get back to the
>'zoom-lens' idea we both mentioned. Again, because of this phenomenon
>that is as far as I know unique to human vision and not possessed by a
>camera, this whole comparison is a bit strange.
DR:
Umm, there are enough similarities.... While the eye doesn't change
its FL, just its "coverage" (area of attention), it is similar enough to
a zoom lens whose edge quality varies with FL (all too common!) to keep
this discussion going....;-)
SS:
>Anyway, to keep a long story from getting even longer, there's a lot to
>this stuff, which we don't really have all the answers to, let alone know
>all of the questions. I think that is a major part of the controversy
>that we have thrown ourselves into here.
DR:
Yes.
SS:
>Feel free to write back if you like, I find such dialogue quite
>entertaining and enjoyable... :)
>Spiff.
DR:
So do I. Thanks.
Hope This Helps (DR, in disguise....;-)