VICUG-L Archives

Visually Impaired Computer Users' Group List

VICUG-L@LISTSERV.ICORS.ORG

From: "Senk, Mark J. (CDC/NIOSH/NPPTL)" <[log in to unmask]>
Date: Thu, 28 Jun 2007 07:34:13 -0400
From
http://www.axistive.com/gamers-to-describe-images-and-help-blind-people.html

Gamers to Describe Images and Help Blind People
Published: Jun 27, 2007 

Researchers at Carnegie Mellon University (CMU) in Pittsburgh,
Pennsylvania, have developed an online game that harnesses players'
brainpower to make web images more accessible to visually impaired
people.

Screen readers, the assistive technology most blind people use to
listen to web page content, convert text into synthesized speech.
However, they cannot convey pictures that lack detailed captions, so
pictures on most websites remain inaccessible to visually impaired
people.

The online game, named "Phetch," is designed to encourage sighted web
users to generate missing captions for pictures. The game will be made
available at the Phetch website.

The game is played in groups of three to five people. One player is
the "describer," who writes a short paragraph about a randomly chosen
web image shown to them. The others are "seekers," who use the
description to find the correct picture on the web with the help of
search engines. The first seeker to find the image becomes the
describer in the next round.

Descriptions that successfully lead seekers to the right pictures are
stored as captions for those images; failed attempts are discarded.
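The round structure described above can be sketched in a few lines of
code. This is a minimal illustration, not CMU's actual implementation:
the function names, the toy search engine, and the player names are all
assumptions made for the demo.

```python
def play_round(players, image_url, caption, search):
    """One illustrative Phetch round (the flow here is an assumption,
    not CMU's actual code). The describer submits a caption; each
    seeker queries a search function with it and wins the round if the
    correct image comes back. The caption is kept only if some seeker
    located the image with it."""
    describer, *seekers = players
    for seeker in seekers:
        results = search(caption)  # seeker feeds the caption to a search engine
        if image_url in results:
            # The caption led a seeker to the image: store it, and the
            # winning seeker describes in the next round.
            return {"caption_kept": True, "winner": seeker,
                    "next_describer": seeker}
    # No seeker found the image: discard the caption.
    return {"caption_kept": False, "winner": None,
            "next_describer": describer}

# Toy search engine over a fixed index (an assumption for the demo).
INDEX = {"a red barn at sunset": "http://example.com/barn.jpg"}

def toy_search(query):
    return [url for text, url in INDEX.items() if query in text]

outcome = play_round(["alice", "bob", "carol"],
                     "http://example.com/barn.jpg",
                     "a red barn at sunset", toy_search)
```

Here "bob," the first seeker whose search returns the image, wins the
round and becomes the next describer, and the caption is kept.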

During the one-week test period, 130 players generated 1,400 captions.
At this rate, 5,000 people could annotate all the pictures indexed by
Google Images in only 10 months. "We hope to collect captions for
every image on the web," says Shiry Ginosar, a member of the Phetch
team.

The CMU team previously developed another game, "Peekaboom," intended
to improve image recognition algorithms. In it, one of two players
gradually reveals parts of an image while the second guesses what is
being uncovered. Players naturally reveal the most important parts of
an image first, so computers can use the accumulated reveal data to
identify unfamiliar images by focusing on their most significant
features.
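One simple way to exploit reveal data of this kind is to aggregate, over
many games, which regions players uncovered earliest. The sketch below
is an illustration of that idea, not CMU's algorithm; the grid-cell
representation and the rank-based weighting are assumptions.

```python
from collections import Counter

def saliency_from_reveals(reveal_logs):
    """Illustrative sketch (not CMU's algorithm): score image grid
    cells by how early players revealed them. Cells consistently
    revealed first score highest, approximating the image's most
    informative regions."""
    scores = Counter()
    for reveals in reveal_logs:  # one log = ordered cells one player revealed
        for rank, cell in enumerate(reveals):
            scores[cell] += len(reveals) - rank  # earlier reveals weigh more
    return scores

# Two players revealing cells of the same image, in order.
logs = [[(1, 1), (1, 2), (0, 0)],
        [(1, 1), (2, 2)]]
scores = saliency_from_reveals(logs)
top_cell = scores.most_common(1)[0][0]  # cell (1, 1), revealed first by both
```

A recognition system could then restrict feature extraction to the
highest-scoring cells, which is the intuition the article describes.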

Source: New Scientist


    VICUG-L is the Visually Impaired Computer User Group List.
Archived on the World Wide Web at
    http://listserv.icors.org/archives/vicug-l.html
    Signoff: [log in to unmask]
    Subscribe: [log in to unmask]
