Over at Marginal Revolution, commenter Jan A. asks:
Why is the (global) state of subtitling and closed captioning so bad?
a/ Subtitling and closed captioning are extremely efficient ways of learning new languages, for example for immigrants wanting to learn the language of their new country.
b/ Furthermore video is now offered on phones, tablets, laptops, desktops, televisions… but very frequently these videos cannot be played with sound on (a phone on public transport, a laptop in public places, televisions in busy places like bars or shops,…).
c/ And most importantly of all, it is crucial for the deaf and hard of hearing.
So why is it so disappointingly bad? Is it just the price (lots of manual work still, despite assistive speech-to-text technologies)? Or don’t producers care?
I use closed captioning all the time even though I'm not really hard of hearing. I just have a hard time picking out dialog when there's a lot of ambient noise in the soundtrack, which is pretty routine these days. So I have a vested interest in higher-quality closed captioning. My beef, however, isn't so much with the text itself, which is usually pretty close to the dialog, but with the fact that there are multiple closed captioning standards and sometimes none of them work properly: the captions are either way out of sync with the dialog or only partially available. (That is, about one sentence out of three actually gets captioned.)
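The sync problem is especially frustrating because, in file-based subtitle formats at least, it's mechanically trivial to fix. As an illustrative sketch (not tied to any broadcast captioning pipeline), here's how a constant drift could be corrected in SubRip (SRT), one common subtitle format: every cue carries explicit start and end timestamps, so resyncing is just adding an offset to each one.

```python
import re

# SRT timestamps look like HH:MM:SS,mmm (e.g. 00:01:02,500).
TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def ts_to_ms(ts: str) -> int:
    """Convert an SRT timestamp string to milliseconds."""
    h, m, s, ms = map(int, TS.fullmatch(ts).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def ms_to_ts(ms: int) -> str:
    """Convert milliseconds back to an SRT timestamp string."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def shift_srt(srt_text: str, offset_ms: int) -> str:
    """Shift every timestamp in an SRT document by offset_ms,
    clamping at zero so cues never go negative."""
    return TS.sub(
        lambda m: ms_to_ts(max(0, ts_to_ms(m.group(0)) + offset_ms)),
        srt_text,
    )

cue = "1\n00:00:05,000 --> 00:00:07,250\nHello there.\n"
# Pull the captions 1.5 seconds earlier:
print(shift_srt(cue, -1500))
# 1
# 00:00:03,500 --> 00:00:05,750
# Hello there.
```

Of course, broadcast captioning (CEA-608/708 and its kin) is embedded in the video stream rather than shipped as a text file, but the timing information is just as explicit there, which is what makes the persistent sync failures so puzzling.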
Given (a) the technical simplicity and low bandwidth required for proper closed captions, and (b) the rather large audience of viewers with hearing difficulties, it surprises me that these problems are so common. I don't suppose that captioning problems cost TV stations a ton of viewers, but they surely cost them a few here and there. Why is it so hard to get right?
POSTSCRIPT: Note that I’m not talking here about real-time captioning, as in live news and sports programming. I understand why it’s difficult to do that well.