Why Does Research Clump?

Since my view of how research works involves a lot of arguing in front of an audience, how about the debate over who should inherit the armour of Achilles? Wrought silver plate in the Hermitage showing Ajax and Odysseus competing for the armour of Achilles, with Athena (number ω-279). Photo by Sean Manning, September 2015.

Robin Hanson, the economist and futurist with a great deadpan, has been thinking about why academic research tends to clump around particular problems. Like many American thinkers today, he appeals to a theory of mind in which most of what people do is really about status and social position, and nobody is sincere. In his post "Idea Talkers Clump," he puts it thus:

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

… I’d estimate that there is now at least one hundred times as much attention given to the scenario of human level AI based on explicit coding (including machine learning code) than to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. …

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less travelled. All those fellow travellers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

Now, I share his frustration when I see large amounts of attention being devoted to some problems, while others which seem just as interesting are ignored. If smart people have been arguing about something for 200 years, and no new sources or methods have appeared, I have trouble believing that my opinion will add anything to the conversation (this is Daniel Kahneman’s principle “thou shalt respect base rates, and not let thyself make excuses about why this time is different” and Edsger W. Dijkstra’s Third Golden Rule for Scientific Research [EWD 637]). On the other hand, as an ancient historian from Canada, I can think of some other reasons why research tends to clump.

First, having other people to talk to is fun. Writing about an accepted topic and addressing other people’s ideas makes it more likely that someone will read your work, even if they disagree. If the topic has been around for some time, you might even get to talk to really smart people who know the evidence and the arguments. On the other hand, if you choose a topic which few people are interested in, or which does not fit firmly within some existing community’s area of interest, you might have to wait a long time for that kind of fun. In ancient history there are plenty of topics where someone contributes something serious every twenty years or so, by which point the author of the last important article has likely forgotten the details or changed his view, and the author of the article before that has retired. And one of the costs of being part of a community is not leaping at every hint of a difference in assumptions, but instead trying to find common ground and showing that your ideas still hold even for people who think differently.

Second, the existence of a community whose members propose different perspectives and criticize each other’s ideas makes progress much more likely. Most people have trouble seeing the weaknesses in their own ideas: psychologists have one way of talking about this, but there are informal versions like Schneier’s Law (“anyone can invent a cipher which they cannot break”). Most people have trouble reasoning on the basis of several contradictory paradigms, but progress often comes when ideas and heuristics from several different fields are brought together. All humans are unreliable observers and reasoners, so the more people who have gone through an argument, the more likely it is that any weaknesses will be exposed and assessed (this is why academics have the custom of peer review). Most educated people learn that before they accept something on the edge of their expertise as true, they should find a response by another expert, or at least read something else on the same topic and look for tension (this is one reason why academics have the custom of publishing reviews of each other’s work). And since academics are rewarded for publishing and teaching, not for being experts, members of a community working on a problem are strongly encouraged to publish something for themselves and not just talk about other people’s work. Nobody is paid for being an expert on Homer: they are paid for having written a definitive book on animal similes in the Iliad (which requires expertise to write, but no university today will let them sit and read and think for thirty years and then publish one excellent book). Works by lone geniuses are just not reliable ways to advance human knowledge, because human beings are very good at fooling themselves, and because the knowledge relevant to even the simplest questions is far wider than any single human being can grasp. So people who want to advance human knowledge have good reasons to seek out a community of peers and run ideas past them.

I find that focusing on the fun of a good conversation and the feebleness of human reason fits my experience better than focusing on posturing and status. Maybe things are different among people who are sure that human-level AI is a coherent concept (some would argue that asking for a computer which thinks like a human is a bit like asking for an aircraft which flies like a bird), and who are sure that we can envision how it might be built better than a 19th-century railroad engineer could have designed the New Horizons probe. It could be that when I see academics squabbling bitterly over details, or fiercely insisting that their supervisors’ generation was all wrong about a topic, that is just an illusion and they are really conformists. If Hanson does not like the intellectual climate in futurism, ancient history is always looking for people who are willing to do hard work!


