
A question of trust: AI is unsuited for the real world

5th November 2023

I am not of the opinion that AI is going to destroy humanity; that Bing will come to life and annihilate us all, making no distinction between human and rival search engine. I instead make the case that any real-world use is immediately undermined by the inherent problems of AI models – that they are unreliable and lack accountability – and that they are therefore unsuited to any substantial purpose beyond cutting corners.

To make this argument, we should talk about trust. We do not trust simply as a result of shared humanity; to do so would be stupid, as it ignores any possibility of motives opposed to our own. However, humanity is a necessary precondition for trust: we trust because there are consequences for breaching that trust, and because we accept that, as humans, we have shared rational and emotional foundations. The same is not true of AI models.

AI models are trained on arbitrary datasets and are rewarded for producing outputs of a particular format. The methods by which outputs are produced emerge with no direct human involvement; it is not even possible to stare into these models and understand how they arrive at a given output. But don’t worry, you can just ask the AI itself – except you can’t, because its answers are themselves generated from human-written training data, so you will learn little beyond details of its training process. It is plainly impossible to hold something accountable when there is no understanding of how it makes “decisions”.

But surely then we can’t trust any computer output, not even Microsoft Word’s spellchecker? The difference is that in those cases computers are executing program instructions written by humans: for any given program it can be determined how outputs are produced, and the program’s creators are ultimately responsible. In fairness, though, I will also state that you cannot trust Word’s spellchecker, even if it is made by humans.
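To illustrate the contrast, here is a toy sketch of a rule-based spellchecker (an illustration only – not how Word’s spellchecker actually works). Every suggestion it makes can be traced back to a fixed word list and a handful of human-written rules, which is precisely the auditability an AI model lacks.

```python
# A minimal, deterministic spellchecker: every output is traceable
# to human-written instructions and a fixed word list.

DICTIONARY = {"trust", "humanity", "model", "output", "world"}

def edits1(word: str) -> set[str]:
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    swaps = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    replaces = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return deletes | swaps | replaces | inserts

def suggest(word: str) -> list[str]:
    """Return dictionary words within one edit; deterministic and auditable."""
    if word in DICTIONARY:
        return [word]  # already correct
    return sorted(edits1(word) & DICTIONARY)

print(suggest("trsut"))  # ['trust'] -- and we can explain exactly why
```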

AI models are also remarkably unreliable given the emphasis that corporations are now putting on them. They are prone to hallucination, seemingly inventing alternative facts (or, more plainly, lies) and realities, often without any logical sense, and with enough careful persuasion they can be made to ignore the very rules set for them. Nor are they free of the characteristic flaws of humanity, having been trained on our datasets – they are prone to reinforcing existing biases and stereotypes, and to spewing the misinformation we inevitably feed them.

Despite all these issues, corporations seem keen to jump on the AI bandwagon. Why? Well, I suppose it is easier, and most importantly cheaper, to put ChatGPT in charge of customer service than to pay a large team to deal with the wrath of the general public. And careful assessment is all well and good, but AI is but a click away, and even if it isn’t reliable, it is very good at appearing convincing.

So, we’ve established that AI models are unreliable, unaccountable and untrustworthy. We should now focus on putting regulation in place so that AI is not used in those many circumstances where reliability and accountability are so important.

AI is growing in influence, yet we can never fully understand it. Perhaps it’s reassuring, then, that Word’s spellchecker is so terrible.