In reply to Defining notan's edge:
> Diverging boumas, if you will, is one of the forces driving script evolution.
I wish. As far as I've seen (and it does make sense), divergence is only
implemented at the level of individual symbols, and even then only in
extreme cases (like "1" vs "7", where the latter is given a middle bar).
This is not surprising, because lay consciousness cannot grasp immersive
reading; it only grasps legibility, and that only in an informal,
reactionary way.
> well-documented and widely accepted constraints on parafoveal vision.
As a rule, simpler data is more reliable. I think these "constraints"
might be the result of poor testing. How do you explain the simpler,
more reliable data showing that saccades can far exceed letterwise
decipherment resolution?
> upper limit saccades are, if I'm not
> mistaken typically followed by regressions
I've never seen that.
If you show me that all saccades that exceed the
fovea's span (4-5 letters) result in a regression
(or faulty comprehension), then I'm with you.
> the parafoveal preview benefit can be enhanced
If it can be enhanced to the point of picking out boumas, that
means the capability is there... Why this parafoveaphobia? :-)
> The actual foveal uptake of information
> takes much less time than 1/4 of a second.
I'd say that reinforces my view!
I posit that the remainder of the effort is going into reading the parafovea.
> if you give someone a black pen and a black piece of paper
> and ask them to write something, they'll still write the same
> letter shapes that they would if writing on white paper.
You can't be serious.
Try writing with your eyes closed.
> We can't claim that a super heavy and a hairline letter 'a' are
> recognised on any basis but their shared figural structure, since
> the black is otherwise so dissimilar and the white even more so.
But they're not dissimilar, in the ways that count.
> The key notion for efficiency seems to me
> whatever constitutes 'bite size' in reading
Actually, I agree.
But it doesn't make sense for that to be letters, because clusters
of them occur at high frequencies* and are furthermore easier to
pick out in the blurry parafovea.
* For example, "th" is much more frequent than "z".
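If anyone wants to check that footnote, here's a quick sketch (the
file name is just a placeholder for any decent chunk of plain English,
not a reference to any particular corpus):

    import re
    from collections import Counter

    # Placeholder corpus: any sizeable plain-English text file will do.
    text = open("sample_english.txt", encoding="utf-8").read().lower()
    words = re.findall(r"[a-z]+", text)

    # Count single letters and within-word letter pairs.
    letter_counts = Counter(c for w in words for c in w)
    bigram_counts = Counter(w[i:i + 2] for w in words for i in range(len(w) - 1))

    print("'z'  share of letters:", letter_counts["z"] / sum(letter_counts.values()))
    print("'th' share of bigrams:", bigram_counts["th"] / sum(bigram_counts.values()))

On any normal stretch of English the "th" share comes out well above
the "z" share.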
What I'd ask you to consider is this: if short and common words are
picked up as wholes, why not increasingly long and decreasingly
common ones as a reader's experience grows? Language, after all, is
incredibly redundant - English is around 50% redundant. So the more
you read, the more you can be sufficiently sure that the blurry
cluster in the parafovea is what you think it is, and take the leap,
literally.
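And to make that 50% figure a little more concrete, a back-of-the-
envelope sketch (again with a placeholder corpus; note that letter
frequencies alone capture only a small slice of the redundancy, most
of which comes from longer-range structure):

    import math
    import re
    from collections import Counter

    # Placeholder corpus: any sizeable plain-English text file will do.
    text = open("sample_english.txt", encoding="utf-8").read().lower()
    letters = re.findall(r"[a-z]", text)
    counts = Counter(letters)
    total = len(letters)

    # First-order entropy of the letter distribution, in bits per letter.
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    max_entropy = math.log2(26)  # all 26 letters equally likely and independent

    print(f"entropy: {entropy:.2f} bits/letter (max {max_entropy:.2f})")
    print(f"first-order redundancy: {1 - entropy / max_entropy:.0%}")

The ~50% (and higher) estimates come from also counting word- and
sentence-level predictability, not just letter frequencies.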
> experienced readers make fewer regressions than inexperienced ones
This is only true when the reader isn't being challenged, either
in terms of time pressure or difficulty of the material. My feeling
is that the proportion of regressions is maintained at the top end;
it's the length of saccades that increases with experience.
> Don't designers try to achieve gestalt cohesion by making
> letters less self-contained and more open to each other?
Not enough. And painting the letters necessarily
impedes that, even if one believes in the White.
hhp