The “p value” is a number, calculated from a statistical test, that describes how likely you are to have found a particular set of observations if the null hypothesis were true.
P values are used in hypothesis testing to help decide whether to reject the null hypothesis. The smaller the *p* value, the more likely you are to reject the null hypothesis.
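To make that concrete, here's a minimal sketch (assuming Python with scipy and made-up example measurements, not any real dataset) of how a statistical test turns a set of observations into a p value:

```python
# A minimal sketch: a two-sample t-test comparing made-up measurements
# from a control group and a treatment group. The p value it returns is
# the probability of seeing a difference at least this extreme if the
# null hypothesis (no real difference between the groups) were true.
from scipy import stats

control = [5.1, 4.9, 5.0, 5.3, 4.8, 5.2, 5.0, 4.7]
treatment = [5.6, 5.4, 5.8, 5.2, 5.7, 5.5, 5.9, 5.3]

t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Common convention: reject the null hypothesis when p < 0.05.
if p_value < 0.05:
    print("p < 0.05: reject the null hypothesis")
else:
    print("p >= 0.05: fail to reject the null hypothesis")
```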
Adding onto this. p < 0.05 is the somewhat arbitrary standard that many journals have for being able to publish a result at all.
If you do an experiment to see whether X affects Y and get p = 0.05, you can say, “Either X affects Y, or it doesn’t and an unlikely fluke event occurred during this experiment that had a 1 in 20 chance.” (The quick simulation sketch at the end of this comment shows where that 1 in 20 comes from.)
Usually, this kind of thing is publishable, but we’ve decided we don’t want to read the paper if that number gets any higher than 1 in 20. No one wants to read an article titled, “We failed to determine whether X has an effect on Y.”
Which is sad, because a lot of science is just ruling things out. We should still publish papers that say, “We ran an experiment with too small a sample and got an inconclusive result,” because even that starts to put bounds on how strong the effect can be, if there is an effect at all.
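To see where that “1 in 20” figure comes from: if the null hypothesis really is true, a p value below 0.05 should still show up in roughly 5% of experiments purely by chance. A quick simulation sketch (again assuming Python with numpy and scipy, on purely synthetic data) illustrates this:

```python
# Simulate many experiments where the null hypothesis is TRUE
# (both groups are drawn from the same distribution), and count how
# often we still get p < 0.05 purely by chance. It should come out
# near 5%, i.e. the "1 in 20" fluke rate mentioned above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)  # same distribution: no real effect
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"Fraction of 'significant' results under a true null: "
      f"{false_positives / n_experiments:.3f}")  # expect roughly 0.05
```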