Martin Hähnel

Purity Based Argumentation

A short concept note. I first brought up the idea here:

This text is not about changing society through political action, though. It is about exploring a way to live within our current situation that neither loses sight of the complexities of life, by proclaiming a set of maxims, nor throws out the baby with the bath water, by being a cynical, egotistical jerk. The former leads to a kind of "purity discourse" that doesn't help any real person. If anything, it may make you feel bad if you can't live up to the manifesto's demands. And the latter lives in a vacuum where nobody else matters, which is mostly sad and infuriating for anyone with a heart.

A little clarification here:

I do not claim to have all the answers with regard to how to deal with LLMs either, but I do strongly believe that throwing yourself into all aspects of an issue is a great way to learn more about it. I have also advocated before for avoiding a purity-based approach to contested topics ("you either do everything right, or you're a monster" isn't a good approach). I did this even in my last post on LLMs. I think it is fine and necessary to overstep from time to time - within reason.

And elaborated on it here:

It's important to note that my article tried to figure out a framework that - all else being equal - takes a sanity-based approach to judging the use of a technology/product that exists right now. Given that I, as an individual, can't really change how the current crop of "AI" was made, I can at least find a way to interact with these systems that makes sense and isn't "purity based".

I think this "anti-purity framework of judgment approach" is still a good idea. Normal people - including you - will use LLMs, and sometimes you'll overstep and use them for frivolous things. Within reason, that's fine.

Same post, a little later:

In an unpublished article about LLMs I wrote:

Will performatively writing purity-based arguments against LLMs do anything, though? No. But there is an important difference: being open to the idea that LLMs could be changed, ever so slightly, into something better could do at least something.

I think the idea emerged from my thinking around manifestos: