Suicide by Banana (No LLMs at the pub)
On potassium, productivity, and the conversations we're outsourcing. A love letter to inefficient conversation
When I was 16, my friends and I sat in a circle in our sixth form common room while one of us enthusiastically tried to eat 12 bananas as quickly as he could. He did this because we’d been told doing so might kill him, and we all wanted to see if it would.
I’m worried that the rise of AI makes these vital kinds of scenes less likely.
I’m worried that ChatGPT and the like have put an end to banana-assisted suicide attempts, and the bonding opportunities they present.
The day before the potentially perilous attempt, we had been told by a teacher that bananas contained potassium, and that at the right dose potassium was lethal. Obviously, we wanted to know how many bananas were needed for the fatal dose, and obviously when we had the answer it needed testing. As a bonus, we found out that bananas were also 0.2-0.4% alcohol, so there was a chance our tester would be drunk when they popped their clogs.
A willing subject stepped forward and arrangements were made for the next day. Fruit was purchased, word spread, and eventually that lunchtime became the stuff of legend. The story of how Tom couldn’t get all 12 down and therefore lived to tell the tale has resurfaced many a time in our group of friends over the decades since. Twenty years later, some people are still gutted to have not been at school that day.
This was 2006. We were already internet natives. We fussed over top 8s on MySpace and it wasn’t long before we’d be spending Saturday mornings filling Facebook albums with blurry photos we’d taken on Olympus digicams of visits to suburban nightclubs. But we hadn’t yet got to the point of having the world’s knowledge in our pockets, and I’m so glad we hadn’t.
If we’d been put on to this exciting information about bananas in 2026, I assume that someone would have immediately consulted ChatGPT or searched on Google and been presented with a neat AI-powered response. They would have learned that while bananas are reasonably high in potassium, suicide by banana is impossible. Downing 12 of them was likely to lead to stomach discomfort and a sugar high, but not death. Then they would tell the group and we’d go and have some lunch and talk about something else. How utterly boring.
In the increasingly digital years that followed our experiment, Googling it became the norm. If you needed a fact, a quick search gave you it. No need to bother reaching for a book or asking a friend. The benefits were obvious, and the behaviour became expected.
If you dared burden someone with a question that a quick search could have answered, a friend might hit you with lmgtfy.com (short for Let Me Google That For You) and you’d suffer the humiliation of watching your query being typed into Google on screen - just like you could have done yourself if you weren’t so annoying and stupid.
But what if you weren’t asking because you just needed to know the answer? What if you were trying to open up a conversation because you fancied a chat?
How many tiny opportunities for connection were blunted by search engines?
With the rise of ChatGPT and its peers, it isn’t just knowledge at our fingertips and in our pockets. It’s so much more. It’s therapists infinitely more affordable, available and patient than the fleshy kind. It’s customer support that’s better than 99.9% of humans at solving people’s issues. It’s comprehension of any subject imaginable over a brief conversation with a chat interface.
Like much of the laptop-reliant population, I’ve recently found myself enthralled by Claude Code. It can do anything. I’ve found myself using it for hours every day, and I’m just so damn productive.
When my wife started working on a new business, I was excited to show how this wonder-tool could help her build her website. She jumped in and caught the bug. It really was that good.
On her second day using it, after our baby had gone to bed, my wife asked me a question. She wanted to know how you take a finished page and actually publish it live for other people to access.
“Ah - that’s the really cool bit” I said, excitedly. “You can just ask Claude to teach you how to do it.” So back she went to the terminal, conversation over. Despite knowing how to do it, I didn’t teach her. I didn’t need to. I didn’t spend the next ten minutes going back and forth showing her and answering her questions, because Claude would just be so much better at it. We didn’t speak again for another 20 minutes.
I’ve come to feel that was a missed opportunity for a tiny bit of connection, and I’m starting to notice these moments more and more as my peers and I use these tools and get excited by them. There’s no need to reach out to ask questions anymore, and if you do you’re likely to be cheerfully told that speaking to an AI model will give you exactly what you need.
I wonder if similar opportunities are missed when an AI teacher explains a concept to a child (in a way reportedly twice as effective as instruction from a real primary school teacher), or when someone turns to Claude to talk about their feelings instead of their friends.
‘High quality’ outcomes like a beautiful new website, better comprehension of facts and MORE PRODUCTIVITY feel infinitely more achievable than ever before - but what if the outcome isn’t the only thing we need? What if the process and the opportunities for connection along the way are equally or more important?
I think we need to become more conscious of how much connection we have in our lives, and how much these incredibly useful tools may steal connection from us if we’re not careful. It feels good to talk. It feels good to help people. It even feels good to ask for help and be helped.
Bids for connection are not productive or efficient, but making them and having them be well received is key to our contentment. Conversation might not be the most effective way of transferring facts or understanding, but it often serves a deeper, more vital purpose to a rich human experience.
So I’ve made a rule for myself. No ChatGPT-ing at the pub.
Actually knowing how old Noel Edmonds is isn’t the point. The point is the half hour conversation about how old Noel Edmonds is. The point is the stupid suggestion that he was already 60-odd when he was hosting his house parties (he started at 42). The point is the shock at learning he’s now 77. The point is the tangent about how Mr Blobby must have been dreamt up by someone on acid. It’s not supposed to be a 5 second prompt-and-response.
Balance comprehension and connection. Prioritise fulfilment over facts sometimes. The point isn’t actually knowing whether downing 12 bananas will kill you. It’s having fun and being stupid with your teenage friends.