Everybody knows that science is a rigorous discipline. All new research is peer-reviewed by expert referees who do everything in their power to highlight the potential flaws in a paper, much to the chagrin of us postdocs. But this is a great way to learn how to do meticulous experiments, so we'll thank them in the end. And if you thought that once your work is published it is no longer under the microscope, be aware that scientists often repeat one another's experiments many times over to determine whether they can replicate the same set of results. What's more, corrections and retractions are published if any results are subsequently found to be not quite as the original author had claimed. As a result of all this critical analysis and self-scrutiny, science ranks quite high in polls of professions the public trusts.
Although these safeguards and checks are the accepted practice in the scientific community, the more I become immersed in the real world of science, the more I come to realise that occasionally things are not quite as neat and tidy as they first seemed. Getting your own work published is not just about your own scientific rigour. It's also about making contacts with the right people at the right time and in the right way. We postdocs should wise up to this reality lest we sleepwalk into an unexpected disappointment when the letter arrives from the journal editor.
Let me say at the outset that I believe that the vast majority of the scientific literature is impartially refereed, faithfully represents the results obtained, and provides a sound interpretation of what it all means. In spite of this, I also maintain that a small but significant proportion of published research is not subjected to the same degree of impartiality and may not be quite as reliable. Although there have been some recent and well-publicised instances of scientific plagiarism and tomfoolery, this is not what this article is about. Fortunately, these acts are, in any event, quite rare. No, I am writing about certain extraneous factors, namely the human ones, that have nothing to do with your science but everything to do with how you communicate it.
First, let me deal with the central issue of referees. Not all referees are 100% dispassionate and impartial. Referees are human beings who have friends and enemies. If authors get their colleagues' backs up by being dogmatic or impolite, I would venture to suggest that they would, on the whole, be less likely to receive a nice smooth ride when these same colleagues review their manuscript. Conversely, the work of someone known, liked, and trusted, perhaps even a personal friend, may not be subjected to the harshest degree of scrutiny. In addition to your own reputation, or lack of it, working in the lab of someone known to your referee adds another layer of complexity to this story.
I'm not proposing that referees generally show favouritism, merely that a referee's personal knowledge of your trustworthiness and scientific rigour must be a beneficial factor when they are considering your new piece of work. Of course, none of us can or should know who our referees are, so the only way to increase your chances of impressing the right people is to play the numbers game. You need to raise your profile and appreciate that the known and trusted get a disproportionate amount of credit relative to the unknown and mistrusted.
So get out there and meet the people in your field. At a meeting, always opt to give a talk rather than present a poster. Be visible. And even if you are exhausted after your conference presentation and you would prefer to slip away unnoticed, hang around and be available to chat about your work. Make use of the chairperson in your session to help you raise your profile. More than likely they will be highly respected in the field, so make sure you have time to exchange ideas and find out the latest news. Get in touch with more-senior scientists in your field; these are your future referees. Consider getting involved in a learned society; new blood is often highly appreciated. Although it is impossible to quantify the effect, if you put in this effort now, you will be less likely to face the peer-review jury as an unknown quantity or as an object of suspicion.
Apart from someone's scientific reputation per se, a referee's potential mistrust is particularly focused on materials and methods. It's a tradeoff between the increased likelihood of getting accepted if you are using tried-and-tested techniques and the pressure to move the technology forward and break new ground. Innovative methods are often favoured in principle, but they are generally treated with suspicion in practice until proven sound and reliable. This is all well and good for scientific rigour; yet it is a little irksome when you have bent over backward to do every control experiment under the sun and still aren't believed.
But what about methods that are widely accepted but are built on a less-than-solid foundation? Consider an imaginary method. It might have been widely used in papers for years, despite its lack of proper controls. Proponents of a new, more rigorous method, which is still unproven, may, at first, find it hard to get their work published, whilst referees will be less hesitant about approving work using the old accepted method, which they have used and published themselves. Techniques used in published work often hang around beyond their sell-by date, even if the results based on them do not survive as long. Don't get me wrong; I'm not proposing that a solid and innovative new method should be ditched in favour of a tried-and-tested albeit scientifically weak one. But if you are presenting work that uses a new approach, you should go for control overkill and include a couple of well-established methods. You should also include a highly explanatory and clear paragraph in your introduction giving decent background on the new method and include really thorough experimental details in the paper. Some publications favour innovative techniques; submitting to one of these journals would also be an option.
The flip side of the argument about tried-and-tested methods is the "flavour of the month" issue. This is when a new method is accepted rapidly before anyone is certain what it shows. Scientists get excited about new breakthroughs, and this can lead to a pack mentality, in which they chase after the money that inevitably follows the arrival of the next big thing that has already been "sold" to the politicians and financiers. We all have to run with the pack and use the correct in-words in our grant proposals, even if we know it's sometimes daft and shortsighted. A colleague who recently landed a permanent job confessed to me that he had been offered it only because his two important papers "caught the wave" of mass hysteria surrounding a breakthrough technology. He said he found this a sobering thought.
So, even if your manuscript is solid, the net result of factors such as being an unknown or using a very novel method shifts the probability, at least to a small degree, toward your manuscript being rejected. Scientists, including the referee who will review your manuscript and the senior scientist who is, or will be, your boss, are not 100% impartial. But that is OK. Despite its flaws, the system still works pretty well. Just don't be afraid to be your own advocate; aim your sales pitch at the right people and win their confidence. But the most important thing is not to let these facts of life push you over into cynicism, because cynicism, especially if it is perceived by others, will hurt your career.