Ideological bias in AI

Most technologists lean to the left – possibly as a function of education, income banding, and age. On the whole, this is probably not a bad thing.

Most of them work for large, culturally aware organisations guided by contemporary social agendas, by financial pressure from wealthy activist investors, and by government regulation – and driven towards corporate goals such as EBITDA and earnings growth.

These companies have traditionally harvested the mass market – viewing the common man as an asset to be leveraged for profit. They are not on the side of consumers; their revenue stream comes from businesses. They are the exploiters of society.

This is the world in which the technologists training the autoregressive large language models used by the mass market live. It’s a world distorted and wrong by every yardstick.

Training is, by definition, bias – and we, the trainers, like to think that we bring wisdom, simplicity, and focus to the output… whilst keeping an eye on stuffing our own pockets.

But, training is just bias by a different name.

So what gives mostly left-leaning, higher-earning geeks the right to set the tone and direction of AI LLMs for the second half of this decade?

Nothing. Except that we are the ones who understand it best, we are the ones driving its development and adoption. We are the ones developing the use cases. We are the ones who will find (stumble upon) the killer app that changes the world forever.

If you thought it was important to stay technologically literate, if you thought that understanding the blockchain was useful for your job prospects, then fuck me – you gotta get to grips with these new autoregressive large language models now before it’s too late.

Or at least know what they are.
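For readers who want the one-paragraph version: "autoregressive" just means the model predicts the next token from everything generated so far, appends it, and repeats. A toy sketch (not any real LLM – a hypothetical hard-coded bigram table stands in for the billions of trained parameters):

```python
# Toy illustration of autoregression: predict the next token from the
# last one, append it, and loop. The BIGRAMS table is a made-up stand-in
# for a real model's learned next-token predictions.

BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt: str, steps: int) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        # "model" predicts the next token from the context so far
        next_token = BIGRAMS.get(tokens[-1], "<end>")
        if next_token == "<end>":
            break
        tokens.append(next_token)  # feed the output back in: autoregression
    return " ".join(tokens)

print(generate("the", 4))  # the cat sat on the
```

Swap the lookup table for a neural network scoring every possible next token, and you have the loop at the heart of every chatbot.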

Originally published here: