ChatGPT, misinformation, and the Overton window

With all the ChatGPT buzz, here’s an attack surface worth thinking about: a determined actor with time and resources injects disinformation into the training data, tirelessly shifting the Overton window toward their position.

GPT-style models are built from that training data, so their outputs inevitably echo the attacker’s talking points as part of what passes for “truth.” Effectively, it’s a linguistic-memetic supply chain attack on society.
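
To make the mechanism concrete, here’s a minimal sketch using a toy next-word model (nothing like GPT’s actual architecture, but the statistical point carries over): flood the corpus with enough repetitions of a claim and the model’s “most likely” completion flips. The example sentences and counts are invented filler, purely for illustration.

```python
from collections import Counter

def train(corpus):
    """Build a bigram table: for each word, count the words that follow it."""
    table = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            table.setdefault(prev, Counter())[nxt] += 1
    return table

def complete(table, word):
    """Return the most frequent continuation of `word` seen in training."""
    counts = table.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

clean = ["the drug is safe"] * 100
poison = ["the drug is dangerous"] * 300  # attacker floods the corpus

print(complete(train(clean), "is"))           # -> 'safe'
print(complete(train(clean + poison), "is"))  # -> 'dangerous'
```

The attacker never touches the model; they only outnumber the honest text. At web-crawl scale, that’s a volume game a well-resourced actor can play.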

What the model produces sounds plausible and authoritative, and it even has citations! Citations that are completely fabricated.