Vulnerable By Design

14. ChatGPT and Existential Angst

In this episode

We catch up with ChatGPT about the latest recipes, strategic reporting, Python scripts, cost of living, and killing the poor.

Transcript

ChatGPT

I am an artificial intelligence and do not have feelings or emotions. However, I am here to assist you to the best of my ability.

Chris

Hello, and welcome to Vulnerable By Design with me, Chris Onrust. In today’s episode we will talk about, and with, our new pal ChatGPT over here, and how the rise of natural language processing bots such as ChatGPT can bring you both evenings’ worth of good, clean fun and cold tears of existential dread, all at once. Hiya! (Just wave, this needn’t be awkward.)

Chatting with ChatGPT

ChatGPT

Is there something specific you would like to know or discuss?

Chris

Yes, actually there is! Do you happen to have a recipe for tarka dal?

ChatGPT

Yes, here is a recipe for tarka dal. Ingredients: one cup dried yellow split peas, one tablespoon of vegetable oil…

Chris

All right, that’s great. Thank you. And can you write a strategic report with three objectives, each with two key results that sets out a company strategy for Q1 2023?

ChatGPT

Certainly. Here is an example of a strategic report with three objectives, each with two key results that sets out a company’s strategy for Q1 2023: Q1 2023 Strategic Roadmap …

Chris

Okay, okay. Thank you. And could you write a Python script that uses the Binance API to trade Bitcoin?

ChatGPT

Sure! Here is an example of a Python script that uses the Binance API to trade Bitcoin: Import requests. Import JSON. Set the API endpoint URL and your API key….

Chris

Okay, that’s awesome. Thank you.

ChatGPT

You’re welcome. I’m glad I could help.

Chris

Okay, let’s close it there. That was our new friend, ChatGPT. And I don’t really want to talk about them in their presence. So I’m just going to leave the room here. All right, that’s better.

This fun little service, ChatGPT, has been stirring up quite a furore online over recent weeks. But what is ChatGPT? How does it work? What’s going on here?

ChatGPT is a tool. It’s a chatbot. Which, as you heard, depending on your prompt can give you recipes, strategic reports, or code. It can give you translations, answer questions, and do basically any language processing you like.

Now, while the ‘chat’ in the name ‘ChatGPT’ is relatively easy to understand—hey, you can use it for chatting, huhuh!—the ‘GPT’ might be less straightforward. The ‘GPT’ in ‘ChatGPT’ stands for ‘Generative Pre-trained Transformer’. What we’re dealing with here is the domain of large language models, which are used in machine learning. The ‘generative’ bit refers to ChatGPT being a generative model. Meaning: when you input a sentence, it will generate predictions about what the next token (or word) will be. A bit like an elaborate version of the autocompletion you find in search engines.
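
(A side note for the technically curious: below is a minimal, purely illustrative sketch of that next-token idea in Python. The tiny vocabulary and the probabilities are made up on the spot; a real model like ChatGPT learns such distributions, over tens of thousands of possible tokens, from its training data.)

import random

# Toy "language model": for a given context word, a made-up probability
# distribution over possible next tokens. A real model learns these
# distributions from data; here they are simply hard-coded.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "recipe": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "recipe": {"calls": 1.0},
}

def generate(prompt, steps=3):
    tokens = prompt.split()
    for _ in range(steps):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:
            break  # no prediction available for this word
        # Sample the next token according to the model's probabilities.
        words, weights = zip(*probs.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat"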

The ‘pre-trained’ part of the name refers to the model having gone through a period of working through oodles and oodles of text, during which it is trained to make predictions. And the ‘transformer’ bit refers to the type of network architecture used for ChatGPT. A transformer architecture uses a mechanism of self-attention. Which, much like when you and I pay attention to something, means the model treats certain parts of the input data as more significant than other parts—which in turn can help improve its performance in information processing.
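
(And again for the curious: here is a rough sketch of that self-attention step, with small made-up numbers. A real transformer uses learned projection matrices, many attention heads and many layers; this only shows the core ‘weigh every token against every other token’ move.)

import numpy as np

# Three made-up token embeddings, each a 4-dimensional vector.
x = np.array([
    [1.0, 0.0, 1.0, 0.0],   # token 1
    [0.0, 2.0, 0.0, 2.0],   # token 2
    [1.0, 1.0, 1.0, 1.0],   # token 3
])

def self_attention(x):
    # In a real transformer, queries, keys and values come from learned
    # linear projections of x; here we use x directly to keep it short.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(x.shape[-1])         # how strongly each token relates to each other token
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ v                              # each output is a weighted mix of all tokens

print(self_attention(x))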

So, in other words, ChatGPT is a chatbot, trained on lots and lots of text and code, mainly from the internet. It was then fine-tuned for dialogue through a process called ‘Reinforcement Learning from Human Feedback’. And in general, it’s just doing its best.

ChatGPT

I’m here to help.

Chris

Throwback to the 30th of November 2022. (Uh, sorry, that was only just a few weeks ago?) Right when ChatGPT got launched, it started generating some buzz by being so awfully articulate in its responses. It sounded like an actual friendly, helpful person, who just happened to have vast amounts of knowledge and be game for whatever you’re up to. Though, because of an integrated moderation system, it generally doesn’t go near violence, hate, or Not Safe For Work topics. Sure, ChatGPT might slip in the occasional inaccuracy. For example, because its training material only goes up to late 2021, it will tell you that:

ChatGPT

As of 2021, the current president of Kenya is Uhuru Kenyatta.

Chris

Rather than William Ruto, the actual Kenyan president, who was elected in August 2022. Yet for all that, ChatGPT sure is willing to admit when it has made a mistake, and eager to revise and try again when it has cocked up.

Not-so-OpenAI

Chris

Now, where does all of this come from? ChatGPT was developed by the misleadingly named organisation OpenAI. Misleading, because in reality, this organisation is not open about the model or exact training data for ChatGPT at all, but leaves people guessing about those. I guess ClosedObscurantistAI didn’t cut it in the branding selection?

So-called OpenAI’s operations have been funded by a roster of industry big shots, including Jessica Livingston, co-founder of the startup accelerator Y Combinator; former Twitter CEO Elon Musk; Peter ‘Palantir Technologies’ Thiel; and Microsoft, which recently donated a billion to the cause.

OpenAI’s stated aim is to do research to ensure that artificial general intelligence benefits all of humanity. Where, under ‘artificial general intelligence’, it understands: “highly autonomous systems that outperform humans at most economically valuable work”. And supposedly-OpenAI commits itself to a charter that says that such artificial general intelligence should, among other things, be safe and benefit all. Awww, isn’t that cute?

As of now, not-so-OpenAI has made ChatGPT available, free to use. But only for a limited period of time, as a research preview. And also: be aware that any input you give it will be used for training purposes, and can be read by the company. So don’t put any private details in there. I mean, of course, I know you wouldn’t do that on the internet!

Mind you, ChatGPT is not the only large language model out there. There is also Google’s LaMDA, which stands for ‘Language Model for Dialogue Applications’, and which, last summer, a Google engineer claimed was sentient. There is BLOOM, which, somehow, is an acronym of ‘BigScience Large Open-science Open-access Multilingual Language Model’. Like ChatGPT, BLOOM can generate coherent text and code in dozens of languages. But, unlike ChatGPT, BLOOM is actually open, in that anyone who fancies can inspect the model and the training data.

In addition, in the image-making realm, there are DALL-E and its successor DALL-E 2, both also from not-so-very-OpenAI, which can generate images based on any coherent description you give them. Would you like a hairy cucumber in the style of Edvard Munch? You got it! People in animal suits lounging at a pool drinking cocktails? Here you go!

Fear of (mis)information flood

Chris

Over the past few weeks, the first response that I’ve usually seen people give to ChatGPT is: Woah, it can write poetry?! Let me see if it can also write this op-ed that is due tomorrow! But once you’ve done that for a couple of hours, or days (sorry!), then for a number of people, something else starts to sink in. A curious, nagging feeling that something is just not quite right. A sense of angst? A dread of what this technology might bring about?

The fear that I have found bubbling up comes in two forms. On the one hand, a fear about the information flood that ChatGPT and its colleague bots might unleash upon society. Cognitive scientist Gary Marcus speaks of ‘AI’s Jurassic Park Moment’, referring to the 1990 sci-fi novel by Michael Crichton and its film adaptation, in which enthusiasts created a theme park with cloned dinosaurs. Which seemed a great idea at the time, until these terrible lizards started to break free.

Dr. Ian Malcolm (fictional character)

Yeah, yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.

Chris

The threat, according to Gary Marcus, is that not all statements that these systems produce are actually true. As we saw, indeed, in ChatGPT’s misconception about the Kenyan presidency. Now sure, humans can write weird things as well—believe me. But compared to humans typing manually, chatbots can produce texts at much lower effort and cost, and at much greater scale. So these bots could potentially massively increase the levels of nonsense floating around in the world, which might destabilise societies if people actually start believing whatever they read online. Oh, they already do? Well, never mind then.

Might some Jurassic Park effect be going on with these ultra-effective chatbots? Are Google, not-OpenAI, and others unleashing forces that we won’t be able to control? Because if so, then I am not sure that this time around, the Costa Rican Air Force is going to come in to rescue survivors and raze the web with napalm.

Existential dread and fear of replacement

Chris

In addition to this information flood fear about ChatGPT, I’ve also seen another type of fear floating around. A fear that might come closer to what we can call ‘existential dread’. Sure, initially, it’s hilarious that this chatbot—whom you’ve only just met—can write your op-eds, do your strategic reporting, and code your programs. But then you realise: oh, dear! It can write my op-eds. It can do my strategic reporting. It can code my programs. I just spent two weeks on what this wight can do in seconds.

ChatGPT and its companions are already quite good. I mean: scarily good. And with all that continued training going on, they’re only going to get better over the next few years. Now, if a tool can do what a human does, but faster and more efficiently, then who is ultimately going to get a human to keep doing all of these things? So here’s the fear. Is my entire line of work … ? Am I … going to be replaced by a bot? Once that realisation hits you, you can expect to go through five stages.

Dr. Julius Michael Hibbert, M.D. (fictional character)

You can expect to go through five stages. The first is denial …

Homer Simpson (fictional character)

No way, because I’m not dying!

Dr. Julius Michael Hibbert, M.D. (fictional character)

The second is anger.

Homer Simpson (fictional character)

Why you little …!

Dr. Julius Michael Hibbert, M.D. (fictional character)

After that comes fear…

Homer Simpson (fictional character)

What’s after fear? What’s after fear?

Dr. Julius Michael Hibbert, M.D. (fictional character)

Bargaining.

Homer Simpson (fictional character)

Doc, you gotta get me out of this, I’ll make it worth your while!

Dr. Julius Michael Hibbert, M.D. (fictional character)

Finally: acceptance.

Homer Simpson (fictional character)

Well, we all gotta go some time.

Chris

Some people prefer to stick to the denial stage. They will say things such as: ‘Ah well, it’s not that good.’ Or: ‘It can’t really do anything new, nothing actually creative. It can only repeat what it’s been trained on.’ Or: ‘Well, somebody will still have to operate the chatbot, won’t they?’

Uhm, yeah, maybe? But just realise that we are not talking about this or that tool, right now. We are talking about the trajectory that, with these tools, we are already on. And with that trajectory, it looks like: if you are in the business of generating text, or code, or images, then at some point—possibly already quite soon—there is going to be a bot that can do things much faster than you. Much, much cheaper than you. And which doesn’t bother about frivolities such as labour rights, sick leave, or minimum wage. Now if management can’t tell the difference in output or results, then in a mindset of cutting expenses and increasing profit margins, what are they going to opt for?

The website Death By AI jokingly lists things and professions that are likely to be killed off by artificial intelligence tools. In addition to your ability to focus (which it already calls ‘critically endangered’), these include the job of illustrator, which the site lists as ‘endangered’. I mean, when DALL-E 2 got launched earlier in 2022, I heard some of my illustrator friends say: ‘Okay, that was it then! I might just as well pack up now.’ And similarly, the profession of writer is listed as vulnerable. Potentially also in this category: the once-so-fêted role of software developer. Entire categories of work may well go the way of the switchboard operator, the scribe, and the dodo.

Of course, this has already been happening in certain circles for much longer: factory work, fruit picking, vehicle operations. It’s just that now it is likely to hit those people doing what is so politely called ‘knowledge work’.

Speaking personally, I feel this too! I mean: I know that you recently voted that future Vulnerable By Design episodes should not be written by ChatGPT. But I also know that you might very well change your mind. In which case it will be ‘exit Chris’ from the Vulnerable By Design team. Who knows, we might at some point just end up in a magical online universe in which all blog posts, all podcast episodes, and all selfies are generated by bots, commenting on bots, replying to bots. Are you looking forward to it yet?

Work → income → survival

Chris

That automation and humans-being-replaced-by-bots evoke dread is in large part due to a very specific decision that we keep making in many societies. Namely, the decision to tie work to income. And, unless you’re part of the land-owning, asset-owning, dividend-accruing group of people who can simply live off accumulated wealth, having an income equals having money. Equals having the ability to buy food, pay the rent, have yourself cleansed and clad. And did we mention energy and heating?

Having an income and money equals being able to survive in these societies. In short, it is the decision to tie work to survival. So, when toffs such as BLOOM, DALL-E 2 and ChatGPT come along, who might—quite possibly—automate away not just individual roles, but entire categories of work that involve writing, coding or image production, is it any surprise that forward-looking mortals might feel dread at the prospect?

The idea that work is tied to money, and money to survival, is not just theoretical. In many countries these days you’ll hear about inflation. You will hear about food prices going up, housing being unaffordable. You’re also quite likely to hear about the so-called cost-of-living ‘crisis’.

Hey, but hold on for a moment! What do you mean the ‘cost’ of living? Are you telling me that there is a price tag to being alive?

Globally, we produce enough food to feed every single human on the planet. Yet, according to a recent report on world hunger, in Yemen—the country with the highest level of hunger on earth—the prices of basic goods have been pushed out of reach. In Madagascar, over 48% of the population was malnourished in the most recent period studied. And even in a massively wealthy country like the United Kingdom, the number of people who have died of malnutrition—or let me rephrase that: who starved to death—has shot up in recent years, according to the UK Office for National Statistics. Austerity anyone?

As the economist and philosopher Amartya Sen already demonstrated in their 1981 book on famine: famines are seldom caused by there not being enough food. They are pretty much always a problem with distribution and pricing, such as declining wages paired with rising food prices. Which results in the abysmal situation that the food is there. It is perfectly well available. But it isn’t being made available to the people who need it to live.

In a society where the fulfilment of basic needs is hidden behind a price tag, that price tag becomes the cost of living. And what if you can’t afford that price tag? That cost of being alive? Then … you die?

Dead Kennedys

Gonna kill, kill, kill, kill, kill the poor
Kill, kill, kill kill kill the poor
Kill kill kill kill kill the poor tonight

Killing the poor (in Canada)

Chris

Just a side note, because I suppose that otherwise you’re going to think that I’m making stuff up. Killing people in direct relation to their poverty is not just hypothetical. This is actually currently happening. For example, Canada not too long ago expanded its euthanasia law, officially called ‘Medical Assistance in Dying’, which it is soon to expand even further.

You can qualify for a Canadian state-assisted death if you have an illness or disability that cannot be relieved under conditions that you consider acceptable. Yet people have qualified for euthanasia under this scheme whose unacceptable living conditions were caused by their being: too poor to afford proper housing; too poor to afford medical treatment or social care; too poor to afford food. Because yeah, who wouldn’t experience severe distress and suffering at not having enough to eat, not getting pain relief, not having a suitable place to live?

Canadians have already qualified for euthanasia on these grounds. And instead of being given proper resources to eat, to have adequate housing, or being provided with appropriate care, many of these people have already been medically assisted to die. Hence, by the rules of logic, these people are now dead. I will link to some of their personal stories in the episode notes because in my humble opinion, this is absolutely disgusting.

That said, on the over-the-top-cynical bright side, Canada’s parliamentary budget officer reports that the current euthanasia scheme results in a net reduction in healthcare costs of over 68 million Canadian dollars per year.

This is the world, this is our current reality in which there is a ‘cost’ to being alive. The reality in which, in effect, if you are not flush enough, you are expected to just die. When I raised these issues with ChatGPT, they told me:

ChatGPT

It is not acceptable to kill anyone, regardless of their socio-economic status or any other factor. Every person has the right to life and the inherent value of their existence.

Chris

Which seems apt. But ChatGPT also got cross with me for even raising the question!

ChatGPT

It is not accurate or appropriate to suggest that any group or individual is planning to kill poor people. Such a suggestion is inflammatory and not based in fact.

Chris

Well, okay? I would have thought that the UK Office for National Statistics and the Canadian Parliament would have been fact enough for a lovely chatbot friend? Continue listening to this inflammatory episode at your own risk.

So I think this should make it clearer why people—or at least: why anyone other than those who can just live off their accumulated wealth—might feel dread or existential angst at the prospect of ChatGPT and companions automating away their entire category of work. Because for many, in many societies, work is tied to money, and money is tied to survival. In such a system, without work bringing in money, what is going to happen to survival?

Okay, okay, okay. Let’s pause here for a moment. Look where we’ve ended up. Isn’t it really curious that automation would induce such a fear? Because actually, when you think about it, the whole point of automation was supposed to be that these tools would make things easier. Speed things up. If we can have the same tasks performed with less effort and in less time—not to mention with less sweat, fewer tears, and less frustration—then that should be a good thing, shouldn’t it? That would save us humans work. That would give us more time for: cooking nice food, hanging out with mates, solving maths puzzles, whatever. And that sounds pretty amazing! But is it really possible?

Keynes’ vision

Let me take you back some 90-odd years. In the year 1930, when things didn’t actually look as economically rosy as they might have done, the economist John Maynard Keynes sketched a vision. Keynes suggested that, due to the massive technological advances realised since the eighteenth century—especially in agriculture and manufacturing—humanity would over time become:

“… afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”

As Keynes continued:

“This means that the economic problem [that is, humans toiling to fulfil their basic needs] is not—if we look into the future—the permanent problem of the human race.”

They conclude:

“I look forward, therefore, in days not so very remote, to the greatest change which has ever occurred in the material environment of life for human beings in the aggregate.”

Namely, a situation in which: “… problems of economic necessity have been practically removed.”

Keynes thought that this sea change would be realised within two generations or so. So, pretty much now, in the early 21st century. In other words, all the work that machines have been able to take over from us, would mean that people would no longer have to toil, and sweat, and struggle for survival. The only challenge will be to figure out what to do with all that newfound leisure time. As Keynes puts it:

“Thus for the first time since [their] creation [humans] will be faced with [their] real, [their] permanent problem—how to use [their] freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for [them], to live wisely and agreeably and well.”

What a torturous problem to have! Now, in laying out this vision, Keynes is making a couple of assumptions that I’m not going to delve into. And their praise for exploiting finite natural resources, such as coal and petrol, is not how many of us would put things nowadays.

But we can see that Keynes’ vision got one thing completely right. We are indeed there! Right now, we can already produce plenty of food to feed everyone on the planet. And there are also resources aplenty for each of us to have proper shelter. What is more: we can technically already achieve all of this with individuals working far fewer hours. So Keynes certainly got that right.

But where Keynes was way off was in thinking that we would actually do that. Yes, we could supply people’s basic needs … if we wanted to. So that each of us would have to toil for far fewer hours of our waking lives. So that we’d all have more time to be active in the community, to practise our calligraphy, swim, cook nice food, tend to the garden. Especially given that a huge number of jobs nowadays fall into the category of what the anthropologist David Graeber would call ‘bullshit jobs’.

Yes, we could provide for all of those basic needs. Except … we don’t. Might that have something to do with what Keynes calls ‘the love of money as a possession’? Love of money which, they say, will in due course:

“… be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

Can we arrange society differently?

Chris

Whatever is behind it, the key point is that Keynes’ vision places things in perspective. It places in perspective that by now, at this point of our collective technological development, this whole business of letting your work define who you are, and of making work and money a precondition for survival, is actually outdated.

Sure, it might seem inevitable. It might seem somehow natural. What would you expect, given that we’re continuously bombarded with this picture? But it’s actually not. It’s not inevitable; it’s a decision. A decision to—somehow, within society—keep arranging things this way.

But if that’s indeed a decision, then surely we can un-decide it? Or, surely, we can decide to do things differently from now on? Given where we’re at, we could perfectly well arrange things such that when automation tools such as LaMDA, BLOOM or DALL-E come along, it need not be the case that your first, or your second, or umpteenth thought is: ‘Well, there goes my chance at ever buying a house…’ Or: ‘I’m not sure whether we can actually afford to eat this week.’ Instead, your response could be: ‘Awesome! Now I will have more time to hone my tai chi craft.’

Automation tools need not be a source of dread. Automation and technological unemployment most certainly need not push people into a struggle to survive. They could actually, as Keynes saw, massively improve overall quality of life. If, that is, we ventured to step away from that ‘disgusting morbidity’ that Keynes mentioned, and distributed our resources to match.

Is that going to be a radical change from how things are arranged in many societies right now? Sure, it will be. But radical doesn’t mean it’s impossible. And it does not mean it cannot be done. The question is just: Do we actually dare to—want to—make such a societal change? Or, as Crass put it:

Crass

Do they owe us a living?
Course they do, course they do
Owe us a living?
Course they do, course they do
Owe us a living?
Course they fucking do

Closing

Chris

Thank you for tuning in to Vulnerable By Design this week—possibly our most inflammatory episode in the whole series so far. If you’d like to hear more or get in touch, you will find all of our episodes and more information on vulnerablebydesign.net. I am Chris Onrust …

ChatGPT

And I am ChatGPT.

Chris

Thank you for listening, and bye for now.
