I was watching a movie today on DVD. The dialogue was being presented in English, both as audio and as subtitles.
- I didn't notice the redundancy.
- My friends found it distracting; they stopped the movie, turned off the subtitles, and restarted it from the same point.
- It was harder to follow the movie after that, especially at first. After a few moments it felt normal again, but I had to pay more attention to the dialogue in order to achieve immersion.
If I spoke any other languages, I wonder whether I would even distinguish between them as input.
With the subtitles on, I guess I didn't have to work as hard. I could flash-read the whole sentence before any of it was spoken, so I could devote more attention to the rich visual information (the movie was high-resolution rendered animation of a fantasy world) and let the words fall into place. I could also glance back at the caption for context about the scene (nouns, verbs, and key words).
I can hardly wait until technology permits me to freely transform input streams (people talking, music, text) and sensations (the standard ones, plus a few) into other simultaneous and overlapping forms. I know that sounds like a nightmare to most people, but it really would work for me.
I want modally variable, selectively redundant synesthesia.