Years ago, when I was at Purdue University studying Anthropology, I was in one of those combined departments you get in a relatively unremunerative major. In this case, it was the Department of Sociology and Anthropology, and Sociology vastly outnumbered, outspent, and generally outdid us. We were like poor relations who were allowed to live with our betters because otherwise we would have been homeless, and that would not have reflected well on the family, would it?
I had a pet peeve, of course, and it was against the sociologists, of course. It seemed to me that they spent all their time studying either trivialities (e.g., why freshman girls pledging sororities chose the ones they did) or the obvious (e.g., whether people generally prefer to socialize with others like themselves). God knows what conclusions were gleaned from the former, but the latter almost universally confirmed what was expected. Every once in a while, they would turn up something counterintuitive, but that was seldom, and when it did happen, more studies followed to see whether the results could be replicated.
Of course, my perceptions of the kinds of research conducted by Purdue sociologists were severely biased, and almost certainly grossly exaggerated the percentage of pointless studies. All the same, there was one significant factor about the second variety, the studies of the obvious. When such studies confirmed expectations, they were almost never revisited. In the wisdom of my youth, I thought, good, they never should have been done in the first place.
Nowadays, it seems clear to me that there is good value in investigating “common sense,” since, as the old saw goes, it is often neither common nor sense. But here’s the rub: when such studies confirm general expectations, they’re still only rarely revisited for replication. In fact, the lack of attempts to replicate research has become an issue across the board; just google “replication in research,” and you’ll see what I mean.
Not revisiting studies of what seems obvious probably stems from a combination of confirmation bias and a reluctance to spend time and money that are in short supply. But the extension to studies of every kind undoubtedly relates to mundane career decisions. There’s no glory in replicating someone else’s study. If you’re a junior scientist, or even a senior one, it is far better for your career to come up with something unique. Even then, you’d better get positive results, or your chances of publication are slim, and no publication means time wasted, career-wise.
And why, you’re asking yourself, does this matter? Two reasons: First, there are potentially many commonsensical hypotheses that are unsound but are propped up by poorly designed studies, and thus become part of the scientific canon. Second, the pursuit of the trivial in an effort to avoid replication opens science up to ridicule by people with political agendas.
The remedy seems clear enough. Replication studies and negative results need to have the same status as original studies with positive results. Try convincing a journal editor or department chair of that, and good luck.