Here’s An Example Of How To Make A Debate Less Stupid


Which is not the norm!


Source: Here’s An Example Of How To Make A Debate Less Stupid
Publisher: Singal-Minded | Author: Jesse Singal
Published: March 26, 2026 | Archived: March 27, 2026

A huge number of public debates are fundamentally pointless and, at least arguably, scammy.

What I mean is this: People express extremely strong opinions about certain subjects without ever explaining exactly what they mean. They’ll say they are ardently pro-X or anti-Y. But if you ask them to explain exactly what they mean by X or Y, you’ll often be met with hostility, the request itself treated as prima facie bad faith.

This sort of half-baked conversation benefits a lot of demagogues. These sorts of people are good at coming up with creative and often vitriolic ways of expressing their ardent support for X or their opposition to Y and, just as importantly, their disdain for people who feel differently. But if they were forced to carefully and transparently define their terms, they’d have a lot less control over the ensuing conversation and might find that they are not intellectually or rhetorically equipped to engage meaningfully in it.

Take the (related) debates over Black Lives Matter and DEI (diversity, equity, and inclusion) programs. When these terms first went viral, good-versus-bad dichotomies emerged. To say you were in favor of BLM or DEI was to say you endorsed progressive views on race, policing, and the like. Likewise the opposite: To oppose BLM or DEI was to express support for conservative views on race, policing, and the like. It was all rather performative.

Eventually, the debates got more complex — it became clear that these terms could be seen as “floating signifiers” that meant different things to different people. Clarification became important. Did endorsing BLM mean that when police shoot someone, they should be subjected to a rigorous and independent investigation? Or did BLM mean, as the website BlackLivesMatter.com at one point endorsed, “disrupt[ing] the Western-prescribed nuclear family structure requirement by supporting each other as extended families and ‘villages’ that collectively care for one another, especially our children, to the degree that mothers, parents, and children are comfortable”?

I’ve found the DEI debate even sillier and even more in need of clarification. Do you “support DEI”? Shouldn’t it depend? If supporting DEI means ensuring everyone follows civil rights law, supported by trainings centered on the nature of those laws, that’s one thing. If supporting DEI means intense workplace encounter sessions in which employees are forced to cop to being (perhaps unconsciously) racist, perhaps exacerbating rather than ameliorating intergroup tensions in the process, that’s another thing. To this day, though, you see a lot of pundits, academics, and others pretend that being for or against DEI is an inherently good/bad position, as though this means much of anything in the absence of a lot more information. It’s a floating signifier and also a political football.

***

Earlier this month, the researcher David Manheim wrote on X, “People keep repeating ‘stochastic parrot’ — often without any mental process behind it to specify what the argument is.” So he decided to do something about it: He wrote “a paper dissecting various possible arguments, and explaining which are valid. Here’s a blog-post version.” He published that version to the rationalist website LessWrong. It’s titled “Hunting Undead Stochastic Parrots: Finding and Killing the Arguments.”

For those unfamiliar with the stochastic parrot argument, it comes from a very influential 2021 conference paper published by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (going by “Shmargaret Shmitchell” here) titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

I promise this isn’t another Singal-Minded piece about AI. It’s a piece about how to think and write clearly. But, briefly, Bender and her colleagues wrote that “Contrary to how it may seem when we observe its output, [a language model] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”

Even though the paper is about a lot of other stuff as well, because stochastic parrot is both in the title and an ingeniously memorable phrase, it has turned into its own sort of viral floating signifier. You’ll commonly see arguments of the form that AI can’t really think or reason or understand, because it’s just a stochastic parrot. In my experience — and to borrow slightly from my last post on this subject — these arguments are often very vague, depending entirely on how the arguers themselves would define terms like think or reason or understand. It’s also gotten harder and harder to treat these models as mere parrots as they’ve become increasingly powerful, unless we’re talking about some sort of mutant super-parrot (again, to be fair, the paper was written in 2021).

Manheim’s post starts:

I argue the “stochastic parrot” critique of LLMs is philosophically undead — refuted under some interpretations, still valid under others, and persistently confused because nobody defined it clearly. This is an attempt to fix that.

He then lays out seven different versions of the “stochastic parrots” claim, all of which have different meanings.

He continues:

The most practically significant observation from this taxonomy is that there’s a conflationary alliance, a term Andrew Critch coined, among groups skeptical of LLMs, built on the ambiguity of “stochastic parrots.”

People with materially different philosophical and empirical commitments can all say “LLMs are just stochastic parrots” and mean completely different things. The Markovian version appeals to critics who wish to claim LLMs are simple. The Social Normative version appeals to critics who think society needs accountability from the LLMs. The Teleological version appeals to those dismissive of increased agency, and to safety researchers worried about goal-directed systems. Clearly, the groups have overlapping rhetoric but widely divergent implications — they’d give different advice about what to do, what evidence would change their minds, and what would constitute a solution.

The stochastic parrot argument was always a cluster of distinct claims, some of which were valid about earlier systems and have since been refuted, some of which remain live but depend on specific philosophical commitments, and some of which are unfalsifiable. Once these are separated, most of the ammunition for the argument disappears — what remains are legitimate concerns about accountability and social norms (Social Normative SPs) and a contingent empirical debate about generalization robustness (Unreasoning and Optimization-Artifact SPs), rather than a fundamental barrier to LLM understanding.

It doesn’t matter if you, personally, don’t get all of this. Parts of this paper were a bit above my own head, technically speaking. The point is that this is an extremely useful and intellectually helpful way of approaching this conversation. It advances it in a way merely having the ten-thousandth yes/no argument about “stochastic parrots” doesn’t.

The next person who wants to come along and participate in this debate can look at Manheim’s list and do all sorts of... stuff. They can refute the claim that anyone’s even arguing for this or that version. They can argue that versions Manheim claims are dead or ill are actually alive. And so on. This post, and the paper it will soon turn into, is fundamentally productive. Whereas a lot of pundits and a distressing number of actual “intellectuals” use slipperiness as a substitute for actual reasoning and argumentation, Manheim is doing genuinely intellectual work, has reputational skin in the game, and makes arguments that are rather specific.

This might sound basic, but I’ve been depressed by how much of intellectual life — again, including intellectual life as carried out by actual intellectuals — operates at a much lower, fuzzier, more demagogic level than this. People like Manheim should be lauded for engaging in productive intellectual work, even if it turns out to be wrong, and the scamsters whose reputations rest on slipperiness should be called out as such.

As public (pseudo)intellectual life gets increasingly fractured and balkanized and shot through with demagogues and scam artists, I’m becoming increasingly interested in finding signals of trustworthiness. In general, if someone makes a strong claim — especially a morally weighted one — and then refuses to define their terms or answer follow-up questions, I think this is an extremely useful heuristic: This probably isn’t someone worth taking seriously. Of course this, like any other heuristic, isn’t 100% accurate, but if you’re out here attempting to build your reputation and/or intellectual project by making public claims, what excuse do you have to not meaningfully elaborate on and defend them?

Questions? Comments? Follow-up questions and requests for clarification? I’m at singalminded@gmail.com. Image: 07 March 2026, Berlin: A pair of parrots play at their nesting place in an alder tree on the Landwehr Canal. The collared parakeets (Psittacula krameri) have been at home in a park in Kreuzberg for two years and are very popular with walkers. They are extremely robust and can therefore spend the winter in freedom in cities. Photo: Jens Kalaene/dpa (Photo by Jens Kalaene/picture alliance via Getty Images)
