As if human-generated transcriptions of audio ever came with guarantees of accuracy?
This kind of transformation has always come with flaws, and I think that will continue to be expected implicitly. Far more worrying is the public's trust in _interpretations_ and claims of _fact_ produced by gen AI services, or at least the popular idea that "AI" is more trustworthy/unbiased than humans, journalists, experts, etc.
That still holds true for gen AI. Organisations that provide transcription services can't offload responsibility to a language model any more than they can to steno keyboard manufacturers.
If you are the one feeding content to a model, then you are the responsible entity.