Sunday, 02 Apr 2023

Machine-learning systems are problematic. That's why tech bosses call them AI | John Naughton

One of the most useful texts for anyone covering the tech industry is George Orwell's celebrated essay, Politics and the English Language. Orwell's focus in the essay was on political use of the language to, as he put it, "make lies sound truthful and murder respectable and to give an appearance of solidity to pure wind". But the analysis can also be applied to the ways in which contemporary corporations bend the language to distract attention from the sordid realities of what they are up to.

The tech industry has been particularly adept at this kind of linguistic engineering. "Sharing", for example, is clicking on a link to leave a data trail that can be used to refine the profile the company maintains about you. You give your "consent" to a one-sided proposition: agree to these terms or get lost. Content is "moderated", not censored. Advertisers "reach out" to you with unsolicited messages. Employees who are fired are "let go". Defective products are "recalled". And so on.

At the moment, the most pernicious euphemism in the dictionary of double-speak is AI, which over the last two or three years has become ubiquitous. In origin, it's an abbreviation for artificial intelligence, defined by the OED as "the capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this". An Ngram tool (which shows patterns of word usage) reveals that until the 1960s AI and artificial intelligence were more or less synonymous, but that thereafter they diverged and now AI is rampant in the tech industry, mass media and academia.

Now why might that be? No doubt laziness has something to do with it; after all, two letters are typographically easier than 22. But that's a rationalisation, not an explanation. If you look at it through an Orwellian lens you have to ask: what kind of work is this linguistic compression doing? And for whom? And that's where things get interesting.

As a topic and a concept, intelligence is endlessly fascinating to us humans. We have been arguing about it for centuries - what it is, how to measure it, who has it (and who hasn't) and so on. And ever since Alan Turing suggested that machines might be capable of thinking, interest in artificial intelligence has grown and is now at fever pitch with speculation about the prospect of super-intelligent machines - sometimes known as AGI (for artificial general intelligence).

All of which is interesting but has little to do with what the tech industry calls AI, which is its name for machine learning, an arcane and carbon-intensive technology that is sometimes good at solving complex but very well-defined problems. For example, machine-learning systems can play world-class Go, predict the way protein molecules will fold and do high-speed analysis of retinal scans to identify cases that require further examination by a human specialist.

All good stuff, but the reason the tech industry is obsessed by the technology is that it enables companies to build machines that learn from the behaviour of internet users to predict what they might do next and, in particular, what they are disposed to like, value and want to buy. This is why tech bosses boast about having "AI everywhere" in their products and services. And it's why, whenever Mark Zuckerberg and co are attacked for their incapacity to keep toxic content off their platforms, they invariably respond that AI will fix the problem real soon now.
