Speech to text is based on sound patterns and word prediction. On
Android, I get good results overall, but I find that personal (non-work
related) dictation is the most error-prone, probably because it's less
predictable to the recognition engine. I first noticed this while
dictating a list of poems into a note-taking app. Even though I spoke
clearly, the results were way off. Then I thought about the word
sequences I was dictating and realized there was no way anyone, human or
machine, could have predicted what I'd say next. On Android, recognition
improves over time if you enable the setting that mentions personalizing
your recognition and you're consistent about correcting errors. I'm
assuming iOS works in a similar way.
VICUG-L is the Visually Impaired Computer User Group List.
Archived on the World Wide Web at
http://listserv.icors.org/archives/vicug-l.html
Signoff: [log in to unmask]
Subscribe: [log in to unmask]