A common statistical technique is being overused, FiveThirtyEight reports, and it could let incorrect results sneak through. A p-value of less than 0.05 is often taken to mean that a result is “statistically significant,” a boundary that can make the difference between a scientific paper being published or rejected. Such high stakes create heated discussions, and when a group of statisticians got together to write a report about the technique, the process produced a year-long debate.

The resulting statement, released today by the American Statistical Association, describes the many limitations of p-values, such as their inability to distinguish between small and large differences. In other words, a result may have a low enough p-value to be “statistically significant,” but that doesn’t mean it’s important.

P-values still have their place, many of the experts say. But they should be only one tool in a scientist’s toolkit, rather than the bar by which all research is judged.
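To see why a low p-value says nothing about how large an effect is, here is a minimal sketch (not from the ASA statement; the function name and numbers are illustrative). It computes the two-sided p-value for a simple two-sample z-test and shows that a fixed, practically negligible difference of 0.01 standard deviations becomes “statistically significant” once the sample is big enough:

```python
import math

def two_sample_p(mean_diff, sd, n):
    """Two-sided p-value for a two-sample z-test, equal group size n and sd."""
    se = sd * math.sqrt(2.0 / n)          # standard error of the difference
    z = abs(mean_diff) / se               # test statistic
    return math.erfc(z / math.sqrt(2.0))  # two-sided normal tail probability

# The effect size never changes; only the sample size does.
for n in (100, 10_000, 1_000_000):
    print(n, two_sample_p(0.01, 1.0, n))
```

With n = 100 the p-value is far above 0.05; with n = 1,000,000 it is effectively zero, even though the difference itself remains too small to matter in practice. This is the sense in which a p-value cannot distinguish small effects from large ones.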