You seem not to have realized that the training data consists, in part, of lies, so the trained model will also lie. I disagree that this is a bug in the strict sense. The rarer cases in which the model 'hallucinates' objects, by contrast, can be considered one.

News Flash: People lie. Some people lie a lot.

Somehow you either haven't yet figured out, or chose not to mention, that many lies are imposed: forced upon us by the powerful.

Your previous article's headline, raising concerns about AI 'safety' with regard to the text a language model emits, is a strong signal that the former is the case.
