EASI Archives

Equal Access to Software & Information: (distribution list)

EASI@LISTSERV.ICORS.ORG

Subject:
From: Martin McCormick <[log in to unmask]>
Reply To: Equal Access to Software & Information <[log in to unmask]>
Date: Wed, 31 Aug 2011 14:49:19 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (47 lines)
Prof Norm Coombs writes:
> Interesting article from NPR on researchers questioning the concept of
> different learning styles:

	My father taught clinical psychology in college for many
years until he retired, and he says that it really doesn't
matter how the information gets into the brain as long as it
gets there. We tend to process it the same way eventually,
whether we read it or heard it.

	Of course certain channels of input work better for
certain information, but if you get that information in, the
brain can start to work on it.

	As one who was born blind but with some light and color
perception, I can say that certain things are extremely
difficult to understand, such as how a boy properly throws a
ball, because you almost have to see it to learn it. But I bet
I could learn the same thing through virtual reality. What if
you wore a glove that applied pressure one way when you moved
your arm correctly and another way when you moved it wrong, so
to speak? You would basically start slowly, learn what the
motion is, and then speed it up until you could throw
correctly.

	That's just one example pulled out of the air, but there
are many actions like that which are hard to explain to someone
who can't see them, yet which might be communicated through
some other channel.

	People who have been profoundly deaf all their lives but
who can see normally can learn to make every lip and tongue
movement anybody else can, as long as they can see it. When you
get to the more complex movements that can't be seen, such as
how to say the name "Barbara," it comes out "Baba," because it
is hard to get across the extra tongue movement that makes the
R sound in the middle of the word.

	I doubt we will ever figure out how to produce
information for one sense that replaces information lost when
another sense fails, but the brain itself, excluding organic
damage, should be able to process anything we throw at it, as
long as we can somehow get the data around the limitations of
whatever senses remain.

Martin McCormick
