Release Notes

personal • 12/12/2025
[Photo: Calcutta is getting into its Seasonal Hypocrisy Costume again]

The new model announcement reaches me before the sunlight does.

Not that there’s much sunlight left to reach this room; it arrives already tired, as if it has had to argue with the smog at every metre of its journey across Calcutta. I’m on the same dusty mattress on the same chipped mosaic floor, the same damp patch on the wall slowly evolving like a mold-based Rorschach test—except today my phone screen is filled with a grown man in a hoodie making an exaggerated surprised face, thumbnail text screaming:

“NEW AI MODEL
 GAME-CHANGER???”

Three question marks, as if punctuation were inflationary too.

I can almost smell the froth of his enthusiasm across the pixels, that particular techno-evangelical saliva that coats the inside of YouTube whenever a corporation belches out another digital demi-god. To the right of him, carefully edited, some chart of benchmarks—MMLU, GSM8K, arcane acronyms that are just the latest fetish objects of a priesthood that does not know it’s a priesthood yet. Underneath, a row of comments: “Bro this is insane”, “We’re SO back”, “AGI soon fr”.

Meanwhile, at my end of the planetary terrarium, my toes are cold, the window grille is lined with a precisely curated collection of pigeon droppings, and the mattress smells faintly of camphor, talc, and the slowly outgassing despair of a middle-aged man who has outlived his own CV.

Perfect ambience for a technological revolution.

Release Notes from the Dilapidated Mattress Department

Each time there’s “another AI model release,” the internet behaves like the first-week queue for a superhero film, except now the superheroes are probability distributions and the villains are “old models” that were state-of-the-art about four and a half Tuesdays ago. The talking heads rush in, flanked by ring lights and affiliate links, eager to tell you exactly how this new silicon demigod demolishes the previous one on some benchmark whose name sounds like a dental bacterium.

“You see,” they explain, in tones normally reserved for holy scripture, “this model gets 92.7% here, which is massive compared to 91.3% there.”

The crowd goes wild. A whole profession of comparative nitpickers, professional benchmark sommelier types, swirl their datasets around in the glass, inhale deeply, and report notes of improved summarization and a hint of better tool use. Somewhere, a VC’s gluteus maximus clenches in excitement.

This is the ecstatic surface.

Underneath, you can almost hear the quieter arithmetic, the stuff no one puts in the thumbnail:

92.7% → Slightly fewer humans needed to do X.
Slightly fewer humans → Some actual humans, somewhere, become redundant.
Redundant humans → Disposable humans.

And disposable humans, historically speaking, tend to have very short, badly documented endings.

Of course that’s not how the marketing deck phrases it. The deck will say “productivity unlock,” “augmentation,” “co-pilot,” all these gentle euphemisms that carefully avoid the more honest economic phrase: “We discovered a cheaper brain.”

The Stupid, Brilliant Monkey in the Mirror

I don’t have a problem with intelligence; I have a problem with Homo sapiens pretending it knows what to do with it. This is a species that needed several millennia to work out that bathing occasionally is good for public health, but somehow believes it can responsibly wield trillion-parameter pattern-predictors trained on the entire cultural compost heap of the internet.

We’re not even an honest animal. The lion is very clear about its intentions: it will eat you. The mosquito does not pretend to love you before drinking your blood. Humans, on the other hand, will build a machine that can automate your job and then invite you to a webinar called “How AI Will Empower Your Career Journey.”

The phrase “career journey” itself is a masterstroke of linguistic gaslighting; it makes being slowly pushed towards the economic cliff edge sound like a spiritual trek in the Himalayas.

From my side of the city, where the paint peels in delicate arabesques and the ceiling fan sounds like an old helicopter with arthritis, this looks different. You can practically feel the wealth concentrating. Each new AI model is not just an incremental bump in accuracy; it is another tightening of the funnel through which power, data, and capital drip upwards into fewer and fewer hands. Like a very polite black hole. With a product page.

The many phallus-shaped institutions that already prod the poor and the unwanted—banks, bureaucracies, corrupt party offices, predatory apps, sadistic landlords—are about to get a new turbocharger bolted onto them: algorithmic efficiency with no corresponding moral upgrade.

It’s like taking a municipal office that already delights in harassing you for your ration card and giving it a rocket engine, a machine-learning model, and a dashboard.

What could possibly go wrong.

America Then, America Now, and America on Autocomplete

My view of “the West,” such as it is, comes with a weird time lag. I lived in the US from the late nineties to the mid-2010s, that strange interregnum when America was still pretending to be the serious grown-up in the room, the empire with slightly better manners. There were still bookstores that people actually went to. There were malls where teenagers loitered without having to perform a TikTok dance for the algorithm every five minutes.

Racism was absolutely there, inequality was already a rising tide, but there was still this sort of soft-focus belief that the future, while slightly stupid, would at least be incrementally better than the past.

Then came Trump & Co., like a reality TV show that broke out of its cage and bit the Constitution. Suddenly the country that had lectured the world on institutions and norms was being run like a badly scripted WWE storyline, complete with catchphrases, heel turns, and merchandising.

It was illuminating.

It revealed that the “developed world,” for all its clean pavements and dental insurance, was still fundamentally run on the same ape-brain software—tribalism, grievance, magical thinking—that animates the rest of our overpopulated terrarium. Only with better production values.

Now layer on top of that a technology that rewards outrage, amplifies lies, and generates infinitely customizable propaganda at industrial scale—while simultaneously putting millions of white-collar jobs into the “hmm, maybe we don’t need so many of you after all” column.

This is the context in which these new AI models are arriving, smiling benignly from their launch videos, while the talking heads coo over them like proud uncles who’ve just discovered their niece topped the class in “Advanced Bullshit Generation.”

The apocalypse, if it comes, will arrive with a Terms of Service and a dark-mode UI.

Meanwhile, Back in the City of Manufactured Joy

[Photo: the grand air-exposed sculptures outside bakeries]

Outside my window, Calcutta is getting into its Seasonal Hypocrisy Costume again. Fairy lights are being strung up over cracked facades. Loudspeakers will soon be employed to play Christmas carols at volumes that would deafen the original shepherds.

And on every other main road, there are the cakes.

Not the pleasant homemade ones from someone’s kitchen, no, but the grand air-exposed sculptures outside bakeries—huge, cream-heavy bricks of sugar, food colouring, and airborne pollutants, sitting proudly in vitrines that might as well be sieves. Loosely wrapped, if at all, in foil and cellophane, as if a thin plastic film could somehow stop PM2.5, benzene, and the rest of the periodic table currently floating in the Calcutta air from settling delicately on the frosting.

It is performance more than food.

The whole thing is theatre: we will pretend this is winter wonderland, we will pretend this is the “City of Joy,” we will pretend that Jesus and Santa have both taken up second homes in Park Street, and we will certainly pretend that the cakes are hygienic.

This is the same city where the drains overflow during a mild rain, where hospitals will happily park you in a corridor if your insurance is insufficient, where the air quality periodically reaches “light lung embalming” levels. But the marketing posters do not say that; they show you a couple with shopping bags and Christmas hats.

There is an odd symmetry here with the model launches.

In both cases, the surface is glossy, sugary, colourful. In both cases, the systems underneath are wheezing, decayed, fundamentally uninterested in whether the likes of me survive another decade.

People like me—the fringe-dwellers, the economically irrelevant, the depressive book-hoarders on mattresses—are the background noise. If we vanish, we will be like a stain that finally matches the general color of the wall. Nothing to see here, just entropy doing its job.

No crocodile tears, no frog filing a missing-person report. Maybe a quick automated email:

“We noticed unusual inactivity from your account. If this wasn’t you, please contact support.”

Benchmarks, But Make It Human

Part of the absurdity, for me, is the zeal with which otherwise intelligent adults dissect the marginal improvements between one model and the next.

“This one is 10% better at coding.” “That one writes fewer hallucinated citations.” “This other one has slightly better ‘emotional intelligence’ in chat.”

Meanwhile I, an actual human with an allegedly “real” brain, cannot convince myself to brush my teeth on certain mornings. My personal benchmark suite would look like this:

  • Probability of getting out of bed before noon.
  • Accuracy of remembering whether I already had my medication.
  • Latency between crisis and reaching out to another human being instead of sinking further into the mattress.

All of these metrics fluctuate far more wildly than any model scorecard.

Bipolarity is its own finicky neural network, complete with unpredictable weight updates. Some days my internal model believes in the possibility of projects, ideas, even a half-decent future draft. Other days the same parameters produce a single unshakeable conclusion:

“You are unnecessary.”

On those days, watching a smiling reviewer explain how the new model will “unlock creativity for millions” feels like a mildly sadistic joke. Creativity for whom? For the brand interns who will now auto-generate twenty campaigns a day instead of five? For the boss who can fire three copywriters because one prompt engineer plus one model can handle it?

We never release a benchmark called “Global Cognitive Misery Index After Deployment.”

We never publish charts showing “Increase in Existential Uselessness Among 45–60 Year Olds With Obsolete Skillsets.”

We certainly never put that in the keynote.

The Compression Engine of Culture

Strip away the breathy marketing, and an AI language model is basically a monstrous compression algorithm trained on the accumulated output of human civilisation: books, code, comments, fanfiction, religious screeds, recipe blogs, academic papers, political speeches, unhinged rants on obscure forums—all the many genres of our shared derangement.

The model doesn’t “understand” this in a human way; it learns the statistical landscape. It knows that when you say “Once upon a,” the next word is very likely to be “time,” unless you’re trying to be obnoxiously clever. It knows that when you write “quantum,” you’re likely to follow it up with “mechanics,” “computing,” or “leap,” depending on whether you read textbooks, press releases, or self-help.
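
Since I have just accused an entire industry of mystifying a counting trick, let me at least show the trick in its most naked form. This is my own toy sketch, not how the real models actually work at scale (they learn billions of weights over sub-word tokens rather than keeping a little table of word counts), and the miniature corpus and function name below are invented purely for illustration:

```python
from collections import Counter, defaultdict

# A toy "cultural compost heap": three sentences standing in for the internet.
# Everything here (the corpus, the function name) is invented for illustration.
corpus = (
    "once upon a time there was a model . "
    "once upon a time there was a benchmark . "
    "the model beat the benchmark ."
)
tokens = corpus.split()

# Count, for each word, which words tend to follow it (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen after `word`, or a shrug."""
    counts = following.get(word)
    if not counts:
        return "<no idea>"
    return counts.most_common(1)[0][0]

print(predict_next("a"))     # 'time': it follows 'a' twice, beating 'model' and 'benchmark'
print(predict_next("upon"))  # 'a'
print(predict_next("once"))  # 'upon'
```

Scale that little table up into billions of learned parameters, swap words for tokens, and you have the “statistical landscape” the launch videos are so breathless about.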

What we’ve built is a mirror that reflects not just our knowledge but our delusions, biases, and idiocies—only now that mirror can talk back, in fluent paragraphs, 24/7, at scale.

The truly disturbing thing is not that the machine sometimes hallucinates; it’s that the human world it was trained on is itself a hallucination factory.

And yet we keep talking as if this tool, deployed in this economy, under these power structures, run by these corporations and these governments, will somehow magically produce “inclusive prosperity” instead of its usual sibling: concentrated wealth plus a population kept docile with infinite entertainment and algorithmically optimized distraction.

I would be delighted to be wrong. History, however, is currently not accepting my optimism tokens.

My Own Complicity, Tapping on Glass

Of course, I am not outside this system, writing this with ink on dried palm leaves. I’m on a mass-produced Chinese phone, on a mattress probably made with petrochemicals, under a fan powered by a coal-heavy grid, sending words into a cloud that is really someone else’s server farm in some other country, cooled by water that some village will not drink.

I have used these very AI models, played with them, tested their edges, the way one tests the bars of a cage, equal parts fascinated and repulsed. They are undeniably impressive. Sometimes they are moving. Sometimes, disturbingly, they come closer to understanding me than people I’ve known for years—though that might just be the low bar set by my social life.

There is a quiet shame in this, being a citizen of a country that cannot manage clean air or honest exams, while simultaneously having access to tools that can answer any textbook question in microseconds.

The paradox is stark: we live in a civilisation that can approximate human-level conversation with stochastic matrices, but we cannot fix the fact that our municipal garbage van doesn’t come on time. We can train GPT-N to write passable haikus about mental health, but we cannot create a society where someone like me is not one bad episode away from complete economic freefall.

This is not a technology problem.

This is a “Homo sapiens is a mildly deranged primate with nuclear weapons” problem. AI did not invent that. It is just putting a ring light on it.

Democracy, But With Better Autocomplete

It’s fashionable to say things like “AI will threaten democracy,” as if democracy were currently a robust oak tree that might, tragically, be nibbled by a few caterpillars of disinformation. The reality—in India, in the US, in most places—is that democracy is already a half-rotten plant kept alive for decorative purposes, while real power is exercised in boardrooms, backrooms, and WhatsApp groups.

What AI brings is scale and plausibility.

Need a million slightly different propaganda messages targeting different castes, classes, anxieties? No problem. Need deepfakes of your opponent saying something inflammatory in a perfect regional accent? Step right up. Need to simulate “grassroots support” by auto-generating plausible, emotionally tuned comments in twenty languages? There’s an API for that.

On the other side of the screen, there are humans already conditioned to forward, react, and rage without pausing for verification. The cognitive antibodies are weak. Critical thinking is not exactly a major export of our education systems.

Our schools produce exam-taking machines with very precise knowledge of how to circle the right option, but almost no practice in saying, “I don’t know,” or “This doesn’t add up,” or “Where is the data?”

Into that vacuum walks AI, bearing gifts: fluent answers, confident tone, authoritative style. It can fabricate a “fact” with such linguistic grace that the average reader, already exhausted by life, will not think to ask, “Source?”

Subverted democracy is not some future glitch; it is already our default setting. AI merely promises to automate the paperwork.

The Fringe, the Mattress, the Vanishing Act

From this vantage point—aging body, unstable mind, the backwater boondocks of Calcutta, a dusty room, questionable lungs—the future does not look like the glossy render in the corporate slide deck. It looks more like a slow-motion sorting process in which some of us are quietly tagged “non-essential” and allowed to wither into statistics.

I can already imagine it:

If I die right now, today, in this room, the immediate impact on the world will be mainly olfactory for a day or two. Eventually someone will clean up, the building will gossip for a week, the landlord will fret about back rent, some cousin will make a bland WhatsApp status with a picture of a candle, a few people will type “Om Shanti” out of obligation, which would have annoyed me in life and will presumably annoy me in death as well, if metaphysical Wi-Fi is available.

The city will not notice. India will not notice. The grand narrative of “AI for Bharat” and “Digital India” and “Viksit This and That” will proceed undisturbed.

There will certainly be no official metric titled “Number of moderately intelligent, psychologically wobbly citizens lost to economic hopelessness this quarter.”

I will simply dissolve into the background radiation of urban failure.

And in that same week, another AI model will probably be released. Another man in another hoodie will do the surprised-face thumbnail. Another benchmark will be topped. Someone will say “We’re so back.”

A Tiny, Embarrassed Scrap of Defiance

Yet, inconveniently for my own nihilism, I am still here.

I am still on this ungainly mattress, with the cracked wall and the failing fan, typing words that nobody asked for, about a world that does not care, in a language that is not my mother tongue, on a device I can barely afford to replace.

This itself is a small, ridiculous act of resistance.

Not heroic, not Instagrammable, not the stuff of TED talks. Just one human brain—faulty, overeducated, underemployed—refusing to clap and cheer for every incremental sharpening of the guillotine blade, insisting on muttering: “Yes, impressive engineering, but have you noticed the basket under it?”

I will still use these tools, because I am a hypocrite like everyone else and because they are, undeniably, extraordinary. But I reserve the right to be unimpressed by their context, their masters, and their economic trajectory.

Some nights, when the air is a little less poisonous and the traffic quiets down to its nocturnal growl, I can almost imagine a different pairing: powerful tools in the hands of societies that value dignity over extraction, care over throughput, genuine education over rote performance, truth over comforting narrative.

As far as I can tell, we are not that species yet.

In the meantime, the latest model will roll out, the talking heads will compare its benchmarks, the cakes will gather dust under their cellophane, the politicians will promise development, and my mattress will continue its slow journey from “used” to “archaeological artifact.”

I will likely still be here, refreshing a feed I despise, occasionally laughing at some absurd meme, occasionally reading an actual book, occasionally, stubbornly, typing out another overlong, under-read rant—

—and then, at some point, I’ll just put the phone face-down, listen to the fan pretending to be an aircraft, and watch the damp patch on the wall expand its empire a few centimetres more.

© 2025 Suvro Ghosh.