- Opinion
- 23 Jun 23
The first thing we should know about ChatGPT is that it is designed to lie. This isn’t a bug, it’s a feature. ChatGPT’s only interest is in inserting itself into our everyday actions and activities, all the better to target you – and everyone around you – with advertising. But it goes deeper than that. As with social media, forces are being unleashed almost entirely without the involvement of governments and regulators – forces that will concentrate vast power in the hands of a very small number of people. And that, however you look at it, is a recipe for disaster…
When asked to give examples of sexual harassment at US law schools, ChatGPT – the latest AI miracle worker – named a law professor who, it said, had been accused of touching a student on a class trip to Alaska. As evidence, it cited a Washington Post story from 2018.
“The law prof is real,” Will Oremus, a Washington Post reporter, stated in April 2023. “The rest was made up.” But there was more. “It gets weirder,” Oremus added.
It does indeed. The law professor named by ChatGPT wrote an op-ed in USA Today about his experience with the lies AI told about him. AI then used his op-ed piece as evidence of the accusation!
When asked if Lenin ever met Joyce, ChatGPT gave this reply: “James Joyce and Vladimir Lenin met in Zurich, Switzerland in 1916. Both men were living in exile in Zurich during World War I. Joyce was a writer and Lenin was a revolutionary. They met at the Cafe Odéon, a popular gathering place for artists and intellectuals in Zurich.”
This is a fabrication. A lie. It never happened: the story was conjured from a confluence of partial truths. Why put two and two together and get fifty-two?
In its current most active incarnation, AI is designed to lie. It may sometimes, or even often, get things “right”. But at the heart of the whole operation is a fatal flaw, which encourages lies, spoofing and alternative facts. Because, above and beyond anything else, ChatGPT is designed for ‘engagement’.
It wants to be liked. It wants to be needed. It wants to be your artificial best friend. The reason it wants to be all these things is not because it likes you. It wants to sell you to advertisers. ChatGPT is free, which means that you are the product. Your data is what it is first purloining and then selling. To get your data – and in most cases to do so surreptitiously and by stealth – it must get you to like it, to need it.
That’s why it lies.
FOLLOWED BY A GARDA
When a group of AI designers sit in a room, they keep returning to the same questions, or variations on them: how do we increase engagement? How do we get people to consume more? AI is designed to promise you that you will never have to be alone or think for yourself, because AI will do your thinking for you, completing your sentences, finishing your thoughts, knowing you better than you know yourself. And all AI wants in return is to be able to bleed your credit card a little bit at a time. Forever.
Much of AI’s ‘philosophy’ – its design thinking, its very DNA – is driven by Google, Facebook and Microsoft’s Bing.
Now let’s be very clear: Google and Bing are not search engines. Facebook is not a social media company. Google, Bing and Facebook, and many other AI pioneers, are advertising and marketing companies that have colonised vital spaces in the world of the internet. Their overwhelming revenues come from capturing our attention and then helping others convince us to buy their products – or their political ideas, which takes us into even more fraught, more dangerous, territory.
Here’s something else: in an age when over-consumption is by far the single biggest threat that humanity faces, left to its own devices, AI will vastly accelerate this threat. Over-consumption is the crisis that feeds all other crises. We are consuming far too much of the Earth’s resources far too fast and, to compound the problem, most of what we consume today quickly turns into unusable and often toxic waste. AI is designed to bring this over-consumption to the highest and most efficient point. AI is a consumption maximisation engine.
In a physical context, we would consider AI and the deep state advertising super-structure it hinges on as truly, deeply creepy. Online, we have been trained to accept this creepiness as normal.
Let’s say you’re on O’Connell Street, Dublin and you need directions to somewhere. You see a Garda – male as it happens – and go over to him. He confidently gives you directions. You thank him and head on your way and easily find what you’re looking for. When you come out of the building, who should be waiting there only the same, ostensibly friendly Garda?
“Where would you like to go next?” he says in a comforting Father Dougal voice. “Do you like pizza? I know a great place for pizza.” For months afterwards, or even years, this Garda diligently follows you around, making the same kind of bullshit offers and suggestions, to the point where you want to call the cops. Except they’re already there. That’s AI.
AI will do anything to make the sale.
BY WHITE MEN FOR WHITE MEN
In the “West”, AI has been developed almost exclusively by a group of white, middle class, technical men. It reflects their thinking, culture and prejudices. Here’s an example. A couple of years ago, an AI system listened to women’s symptoms and diagnosed panic attacks. For men with exactly the same symptoms, the AI made the correct diagnosis: potential heart attack.
The AI had mis-diagnosed women because male doctors had spent decades mis-diagnosing women, and this was the prejudicial data the AI learned from: health data dominated by the needs of rich, white men. There are hundreds of examples like this. Some have been fixed – but many are lurking within the AI models.
AI feeds on data. But here’s the bottom line: there is no such thing as neutral data. Data is culture. Data is prejudice. Data reflects power structures.
As a result, prejudice runs deep within AI and most computer systems. I once saw an automatic sensor on a hand dryer that didn’t work when a black person put his hands under it, because it had only been tested on white hands.
The prejudice is so deeply ingrained in tech generally that many of the same white men who designed the systems react with shock, disgust and absolute disdain towards anyone who raises the subject.
Another principle of AI, and computer systems in general, is that they “cut costs”. A recent study of studies found that AI rarely considers societal needs and the negative potential of AI, and instead focuses on issues such as performance and efficiency. In AI, the centralisation of power is assumed. AI wants to bring it all back to Bing, Google or Facebook.
Performance and efficiency are some of the most negative and destructive metrics that societies have developed. They drive huge power needs, and create vast toxic waste by a relentless march to achieve what are at best incremental gains.
The ultimate abomination of ‘performance’ and ‘efficiency’ is the Hummer electric vehicle. Its battery weighs as much as a Honda Civic, could power 240 electric bikes, and is half as big as the battery required for an electric bus. It can go from 0 to 60 mph in 3.3 seconds thanks to its “Watts to Freedom” launch control driving mode. The Hummer is a computer on wheels.
Performance and efficiency are how macho geeks flex their muscles and this is how the AI engine is judged. It’s all very immature and tremendously dangerous because the heart of AI is greedy and needy and show-offy. AI could be developed in a vastly more sustainable and ethical way, but it wouldn’t make as much money for the macho geeks.
DESIGNED AS A MYSTERY
In the 1950s, AI was developed to mimic the brain at a time when very little was actually known about how the brain worked. The unknowability of the brain was actually an attraction to these AI pioneers, and many felt an intoxication in developing AI systems that were also unknowable, even to their very designers. The effect is that the way AI makes its decisions is, by design, unknowable.
This will have huge implications for future human societies, because if you think you have been unfairly refused a State benefit by AI, you will have no practical means of appeal. You’ll just have to take AI’s word for it. If AI can, it will cheat and mistreat poor people and minorities, because that’s in its DNA. It’s meant to save money – and make more money – for rich and powerful people.
“We are likely to understand the decisions and impacts of AI even less over time,” David Beer wrote for the BBC in a recent article. We are, in essence, treating AI – as we have with all manner of surveillance capitalists – like an all-knowing God that we need to have faith in.
If you want to get the tiniest flavour of the world AI is likely to build for us, look no further than the utterly bizarre and deeply scary software fiasco at the UK Post Office. The tech bros at the Post Office knew the Horizon IT software was full of bugs – and yet when this software ‘identified’ postmasters as having stolen money, the PO proceeded to accuse them of fraud and theft. The institution chose to believe the software over the protestations of over 700 entirely innocent people.
The UK Post Office forced the issue, and sent many of those people to jail, destroying their lives – based on the utterly incorrect assumption that the software couldn’t be wrong. In the long run, the Post Office was forced to acknowledge its appalling failure and to pay reparations. At least 59 former Post Office owner-managers died before receiving a penny in compensation.
It is one of the greatest scandals in UK history – and yet it has not put the brakes on the rise of AI.
AI IS GREEDY FOR ENERGY
AI eats too much and drinks too much. There’s nothing frugal in its DNA. It lives large. AI drinks energy and eats data and it has been designed to have a ferocious appetite. It is designed based on deep tech philosophies where energy and materials are thought of as cheap and disposable.
The models are brute force, using huge processing power and vast quantities of data. They could be ten or even fifty times less environmentally damaging. They’re not, because being frugal, being light, is alien within Big Tech development culture. Aside from the damage to the environment, this brute-force design approach means that only large, powerful organisations have the resources to develop AI models. AI is the ultimate tool of the elites.
In case that seems opaque, let’s talk hard facts.
The computing power required for AI models increased 300,000-fold from 2012 to 2018, and the growth continues to be exponential.
“Integrating large language models into search engines could mean a fivefold increase in computing power and huge carbon emissions,” Chris Stokel-Walker wrote for Wired magazine in 2023. There are almost two billion searches made on Google every day. Already, there is a huge environmental cost. Should we be happy to multiply that by 20? Or 50? How much energy can we expend on this shit, in a world that is already teetering on the brink?
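To get a sense of what that 300,000-fold figure means, here is a minimal back-of-the-envelope sketch. It assumes nothing beyond the numbers quoted above – steady exponential growth over the six years from 2012 to 2018 – and works out the doubling time that growth rate implies:

```python
import math

# Assumption: compute for AI models grew 300,000-fold
# at a steady exponential rate between 2012 and 2018.
years = 2018 - 2012          # six-year span
growth = 300_000             # overall increase in compute

# Solve 2^(years / d) = growth for the doubling time d.
doubling_years = years * math.log(2) / math.log(growth)
doubling_months = doubling_years * 12

print(f"Compute doubled roughly every {doubling_months:.1f} months")
```

In other words, the compute behind AI models was doubling roughly every four months – far faster than Moore’s Law ever managed.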
I developed an interest in AI back in the 1980s. At that stage, it was philosophically a relatively mature field. The possibilities could be clearly seen and articulated, even if the technology of the time did not have all the capabilities.
Even in the 1950s, the technology was powerful. What was unnerving was the realisation that the technologists and scientists didn’t really know what they were doing. Yes, they were connecting lots of things and soldering lots of wires, yet they didn’t know how things were working, how these magnificent machines were producing the results they were producing.
There was a strong school of thought that we were unwittingly – or perhaps wittingly – nurturing the emergence of a new life form that would ultimately replace us. There were of course many eminent scientists who said that was ridiculous. In the mid-1980s, I attended a lecture in Trinity College by a Nobel-prize winning scientist who scoffed at the very idea of AI being an emergent life form.
“AI will never be able to appreciate Beethoven or a good French wine,” he said. I put my hand up and asked him: “What good will it be to say to a robot that’s pointing a gun at you, ‘but you can’t appreciate Beethoven’…?”
MORE DESPOTIC REGIMES
Today, 50% of AI researchers think there’s a greater than 10% chance that AI will replace humans. Think about it.
In an article in The New York Times earlier this year, David Wallace-Wells described AI as “being built by people who think it might destroy us.”
Behind the scenes, many AI researchers are desperately pleading for governments to bring in rules and slow AI development down. On May 1, the AI trailblazer Geoffrey Hinton resigned from his position at Google, offering a grim warning of the vital need to pause the development of AI.
I am with Hinton. We must radically slow down AI’s development so that there is a lot more time to reflect and to test. We can then consider how best to give AI a moral code and establish principles of fairness.
We must demand transparency. A key principle of AI must be that we can trace back the answer and see the logic and the sources used to come up with the answer. We must demystify AI. And we must take control of it out of the hands of corporations, whose only interest is in milking it mercilessly for profit, come what may.
Of course, they are also pushing to mine and exploit this emerging field in China, Russia, Israel, Saudi Arabia – and anywhere else where there is a well-developed tech basis and the machinery designed to engage in active and far-reaching surveillance and control. Where will that lead us? The only hope is that if we do it better, and more equitably, the benefits will stand to us, and to open democracy as a political philosophy, when the challenges come from less democratic, more despotic regimes.
But we have to get the super-structure right first. That is the burning issue for today. We must act before it is too late…