It does appear that we have ended up with the worst of
both worlds: complexity without much intelligence at the
server end, and the expectation that those on the user end are
all using technology compatible with what the server transmits.
The problem with JavaScript is a golden example, but it
wouldn't be a problem at all if servers were equipped with
software that negotiated with the client and ended up speaking
standard HTML if that is what it takes.
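The kind of negotiation described above actually exists in HTTP:
the client advertises what it can handle in an Accept header, and
the server picks a representation to match. Here is a minimal
sketch of the server-side choice, assuming a hypothetical helper
function rather than any particular web server's API:

```python
def negotiate(accept_header):
    """Pick the richest representation the client claims to handle.

    accept_header is the raw HTTP Accept header sent by the client,
    e.g. "text/html,application/xhtml+xml,*/*;q=0.8".
    """
    # Strip quality parameters like ";q=0.8" and surrounding spaces.
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    # Try richer formats first, falling back toward simpler ones.
    for candidate in ("application/xhtml+xml", "text/html",
                      "text/plain"):
        if candidate in accepted or "*/*" in accepted:
            return candidate
    # Lowest common denominator: plain text is always safe.
    return "text/plain"

# A screen-reader or text-only client asking just for plain text:
print(negotiate("text/plain"))  # -> text/plain
# A graphical browser advertising broad support:
print(negotiate("text/html,application/xhtml+xml,*/*;q=0.8"))
```

The point is that the decision costs the server almost nothing;
the fallback to plain HTML or plain text is a one-line default,
not a redesign.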
You hit the nail on the head wondering if we might be
going at this backwards. The only thing worse
would be to just send video bitmaps or fax-like web pages. End
users would need optical character recognition software in the
chain in order to use them with a screen reader or search them
for keywords. Talk about needing high-end client systems, and
think of the utter waste of network resources this would cause.
A server that could speak HTML instead of JavaScript, or
read the text aloud over an ordinary telephone, would resolve an
incredible number of unknowns.
Any server worth its salt probably has more than enough
computing power to do all these things and it is a crying shame
that this way of thinking has not evolved on a large scale.
Maybe it will eventually do so because people will get so
fed up with all the compatibility problems that we will have to
fall back and regroup. I don't think it will be that hard if
there is some momentum to make it happen.
Martin McCormick
Paul Chapin writes:
>It raises an interesting question of whether we've been going about this
>backwards. The strategy has been to put the intelligence at the user's end.