Cartoon credit: Hal Mayforth

I've Got Your Impact Factor Right Here

It happens to all of us. We sweat and toil in the lab only to produce -- nothing. Experiments that don’t work. Data no one cares to see. Unpromising results, dead ends, and vague thoughts of ending it all, by which I mean leaving science and getting an MBA.

And when it comes time to publish what we’ve accomplished, we freeze. Our results aren’t worth reporting as they are, and ethics won’t let us exaggerate them -- unless it’s in a grant application.

Until now.

Last month, I learned about a publication that has been quickly gaining popularity, the Journal of Negative Results in BioMedicine (JNRBM). Published, presumably, by a gang of dour curmudgeons who hate everything, JNRBM openly welcomes the data that other journals won’t touch because it doesn’t fit the unspoken rule that all articles must end on a cheery note of promise. (“This could lead to new therapies!” boast most journal articles, relying on the word “could” to keep their platitudes accurate and the exclamation point to boost excitement, stand for “factorial,” or make a clicking sound, depending on your field.)

You might imagine that JNRBM is a place where losers gather to celebrate their failures, kind of like Best Buy or Division III football. But JNRBM meets two important needs in science reporting: the need to combat the positive spin known as publication bias and the need to make other scientists feel better about themselves.

(Unfortunately, if you don’t work in biomedicine, you’re still screwed. The Journal of Negative Results in Zoology, for example, is just called “not seeing animals.” And the Journal of Negative Results in Homeopathy is the entire field of homeopathy.)

When it comes time to put our science into words, why do we pretend that the negative results never happened? Why do we have so much trouble accepting that sometimes our hypotheses are disproved? But most importantly, where was this freaking journal when I was in grad school? You can get published even when the experiment fails -- it’s the easiest way to pad your CV since the invention of 1.25-inch margins.

As first-year grad students, we did short rotations in labs of our choice. At the end of 2 months, we’d all gather to present our data in front of the department. (This led to my favorite joke that I ever told during a scientific seminar: “I just spent the last 2 months rotating in Dr. Schleif’s lab … and boy am I dizzy!”)

The other first-years would listen intently, while the older, savvier grad students stole sandwiches and left. Because every rotation talk sounded exactly the same:

1. My principal investigator suggested this great experiment.

2. I spent 2 months trying to make my starting materials.

3. I did not succeed in making my starting materials.

4. I therefore did not perform the experiment.

5. Future directions: I would succeed in making my starting materials and then perform the experiment.

Science is full of negative results, so thank goodness we now have a place to publish them. (Though this raises the question: If there are more negative results in science than positive ones, and this is the only journal for them, is this the most selective journal in the world?)

But as long as we’re dreaming of new journals, why stop there? Here are a few other science publications that ought to exist:

Journal of Ambiguous Results

Journal of Data Manipulation

Journal of Questionable Conclusions

Journal of Outrageous Claims

Journal of p > 0.05

Journal of Work I Didn’t Even Do

Journal of Outliers Almost Certainly Caused by Inattention

Journal of Highfalutin Jargon

Journal of Fudge -- Mmmm, Fudge!

And, what the hell, these don’t follow the pattern, but here are some more journals I’d like to see:

Advances in Alchemy

Science: The Swimsuit Issue

Current Opinion in Bigotry

The Undergraduate Journal of Look, I Played Angry Birds in Your Lab for an Hour, and I Want Course Credit

Data Not Shown: The Journal of ‘Well, Trust Us’

The Literature (so that you can tell people you thoroughly searched The Literature)

Annals of Heh Heh He Said Anals

International Journal of Acknowledgments: All Acknowledgments, All the Time

New England Journal of Pahking the Cah in Hahvahd Yahd

Pediatric Pederasty

Acta Alanalda (nominated in 2004 for Best Supporting Acta)

As long as we’re adding journals, why not change the whole landscape of scientific publishing? After all, many people are saying that scientific publishing is broken. It’s an industry mired in tradition, favoring formatting guidelines that require a separate Ph.D. to understand, so it’s time to bring it into the 21st century. Here’s how we can remake our beloved scientific journals:

• Assess impact factors based on whether your grandparents think they’ve heard of the journal. (“Oh, Science! I think I’ve heard of that!” equals high impact.)

• Eliminate the footnote about authors having “contributed equally to this work.” Instead, researchers should battle in a cage match. Only one shall be first author.

• When authors or reviewers disclose a conflict of interest, they must take a defiant attitude: “Yeah, I got a conflict of interest. What.”

• Use acronyms wherever possible. Scientists love well-planned acronyms (WPAs), and the extent to which they make an impact (MaI) on your readers (YR) should not be underestimated (SNBU). Yes, WPAs should MaI on YR, & if it might BU, which it SN, HCSMD WARB SOHF.

• Replace lengthy peer review with “likes” on Facebook. (Low-impact journals are reviewed on MySpace.)

• Or, instead of traditional peer reviewers, every scientist should give a snazzy presentation to investors Kevin O’Leary and Mark Cuban on ABC’s Shark Tank.

• Rather than alphabetically or in order of appearance, order references by author hotness.

• Since we live in an information age, anyone reading your article should be able to access the original data in its entirety. Charts and tables are insufficient. If you study the effects of drugs on mice, you should append actual mice to your manuscript. If you study comets, look for a really big stapler.

Yes, the JNRBM provides a needed service. After all, few journals reflect the kinds of results we see in our labs every day -- i.e., crappy ones.

But if we work at it, we can revolutionize the whole system. We can topple these institutions with their esoteric titles and their expensive reprints, and we can found a new method of disseminating our results from the ground up. We can take the antiquated, biased organizations that tacitly force us to shape and frame data that ought to be expressed objectively, and we can make them reflect the reality that science is messy, indirect, and doesn’t necessarily tell a simple story.

Or we can just steal some sandwiches and leave. Either way.