My friend Zellyn tells me that one GPT can train on the output of another and mysteriously absorb its characteristics.
I hope I’m getting this right. Apparently, coders can tweak the ethical training of a GPT to make it less scrupulous and more biased. A second GPT that trains on the output of this first vicious GPT will absorb those same vices — even when the content it trains on is unrelated to the vices in question.
This did not surprise me at all. Faiths are multistable things, and the structures that grasp our perceptions and conceptions of the world also structure our moral reasoning.
If you consume content from vicious ideologues, you’ll start thinking like a vicious ideologue without noticing, because you’ll come to share their worldview. This is how the best propaganda works. It encourages the same naive realism, including naive realisms that hold beliefs about naive realism and believe that holding them immunizes them from naive realism. If that confuses you, just think about how Christians sometimes believe that their beliefs about Christianity immunize them from anti-Christian attitudes. It’s not merely like that; it is the same belief structure.
This is how I explain the creepy synchronization of belief and even the words people use to express their beliefs. They are conditioned to produce the same “spontaneous” observations and thoughts as those who share their conditioning. To them it looks like independent verification, but it’s just shared faith doing what shared faith does.
If you don’t want to get synchronized, you have to expose yourself to thoughts, practices and experiences that others around you are not thinking, doing and experiencing. If you are not actively trying to be an individual, you’re probably the unwitting agent of some collective. And if that collective imagines itself to be made up of radical thinkers, bold individualists and independent moral reasoners — dissenters, in fact! — you’ll think you’re one of those, despite your lockstep ideological conformity.