A lot of bad statistics in one picture

I found this picture on Twitter, thanks to Kareem Carr, who took a screenshot before the person who uploaded it deleted his tweet. I googled it and found it on a website named Data Science Central (no link, sorry). Can you tell what is wrong with this picture?

p-value in one picture???

Well, almost everything is wrong in this picture.

Let’s start with the flowchart on the top.

The null hypothesis is that the average number of pizza slices is 4, and the alternative hypothesis is that it is much higher than 4. What does “much higher” mean? Normally the hypotheses would be Ho: µ<=4 and H1: µ>4, or maybe Ho: µ=4 and H1: µ>4 if you are looking for a one-sided test. I read “much higher” as µ>>4, or µ>4+Δ. But then the hypotheses are not exhaustive: there is a gap of Δ. If you took a course on statistical inference and are familiar with the Neyman-Pearson lemma, this is not a problem: you can construct a likelihood ratio test. But then you do not need this picture. As the hypotheses are stated, standard tests such as the z-test or t-test are not valid.
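To see what a properly stated one-sided test looks like, here is a minimal sketch of the z-test for Ho: µ<=4 versus H1: µ>4. The sample mean of 5.6 is taken from the picture, but the standard error is an assumed value, since the picture never reports a standard deviation or a sample size.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu0 = 4.0     # mean under the null hypothesis Ho: mu <= 4
xbar = 5.6    # sample mean reported in the picture
se = 0.967    # standard error -- an assumed value, not given in the picture

z = (xbar - mu0) / se   # one-sided z statistic
p_value = 1 - phi(z)    # area to the right of z under the null distribution

print(round(z, 3), round(p_value, 3))
```

With these numbers the test statistic is about 1.65 and the p-value lands just below 0.05; change the assumed standard error and both move accordingly.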

Collect your data: there is not much information on how to collect the data, except for a note to make sure that the observations are random. I suppose the author assumes that readers are familiar with standard sampling methods. Or maybe there is a “Data Collection Explained in One Picture” somewhere. I am too afraid to google it.

The author states that in this example the average is 5.6. What about the standard deviation and the sample size?

Next in the flowchart: test the result. The author requires setting the significance level, which is good. However, a good statistician will set the significance level before collecting the data. He will also set the desired power and calculate the required sample size accordingly.

Then you should run the test, e.g. chi-square, t-test or z-test. Again, this should be decided before collecting the data.

Next you need to decide whether the result is significant. The author suggests four levels of significance according to the value of the p-value. This is totally wrong. The result is either significant or not significant; statistical significance is not a quantitative property. The author suggests using the p-value not just for determining significance but also as a measure of the strength of evidence.

Oh, and one more thing: if p<=0.01 then it is also <=0.05 and <=0.10, so these “levels” are not even mutually exclusive.

Now let’s look at the nice graph with the bell-shaped curve, which is almost certainly the normal distribution’s density curve. But look at the Y-axis label: “probability of X number of slices eaten”.

No, it is not probability, it is density, under the assumption that the null hypothesis is true (and this assumption is not stated anywhere).

And it is not the density of the number of slices eaten per visit as the X-axis label says, but the density of the sample mean. Again, one should state that this distribution is derived assuming the null hypothesis is true.

And even if it were a probability, the maximal value of the curve would be much lower than 100%. In fact, the probability of a customer eating less than one slice is zero (otherwise, why would he pay for the pizza in the first place?), and as far as the restaurant is concerned, if a customer ate 1.5 slices, it counts as two slices – there is no recycling. In any case, the sample mean cannot be lower than 1. This brings me back to the statistical test. The variable under consideration is a count variable, so there are better alternatives for testing these hypotheses, although a normal approximation should work if the sample size is sufficiently large.
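Since the number of slices is a count, one such alternative (a sketch only, with an invented sample size, since the picture reports just the mean) is an exact one-sided Poisson test: under the null, the total number of slices eaten by n customers is Poisson with mean 4n.

```python
from math import exp, factorial

# Hypothetical data: the picture reports only the mean of 5.6, so the
# sample size and total count here are invented for illustration.
n = 10         # number of customers (assumed)
total = 56     # total slices eaten, giving a sample mean of 5.6
lam0 = 4 * n   # mean of the total under Ho: mu = 4

# Exact one-sided test: P(Total >= 56) when Total ~ Poisson(40)
p_value = 1 - sum(exp(-lam0) * lam0**k / factorial(k) for k in range(total))
print(round(p_value, 4))
```

Whether the Poisson assumption itself is reasonable for pizza-slice counts is a separate question; the point is only that count data admit tests that respect the support of the variable.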

Next is the p-value calculation. The calculated p-value is 0.049. This means that if the mean under the null is 4 and the sample mean is 5.6, we can back-calculate and find that the standard error is 0.967 and the one-sided 5% critical value is 5.591. That’s fine.
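For the record, the back-calculation itself takes only a few lines; the numbers 4, 5.6 and 0.049 are the ones given in the picture.

```python
from statistics import NormalDist

mu0, xbar, p = 4.0, 5.6, 0.049   # null mean, sample mean, reported p-value

z = NormalDist().inv_cdf(1 - p)               # z-score implied by the p-value
se = (xbar - mu0) / z                         # implied standard error
crit = mu0 + NormalDist().inv_cdf(0.95) * se  # one-sided 5% critical value

print(round(se, 3), round(crit, 3))  # 0.967 and 5.591
```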

Finally, the explanation of what the p-value actually is is clumsy. The term “Probability value (Observed)” may lead someone to think that this is the probability of the observed sample mean. However, the author states correctly that the p-value is the area (under the normal curve) to the right of the observed result.

So what do we have here? A poor example, negligent descriptions of the statistical process and its concepts (to say the least), a somewhat reasonable explanation of the p-value calculation, no explanation of what the p-value actually is and what it means, and finally, abuse of the p-value and its interpretation.

And on a personal note: why would I care? Someone is wrong on the internet. So what?

I care because I care about statistics. There are similar pictures, blog posts and YouTube videos all over the internet, and they lead to bad statistics and bad science. P-values and other statistical concepts are abused on a daily basis, even in leading journals such as JAMA. And bad statistics lead to loss of money if we are lucky, and to loss of lives if we are not so lucky. So I care. And you should care too.

Credit: some of the issues discussed in this post were brought up by Kareem and other Twitter users.

Nobel Prize in Mathematics

This week, the 2019 Nobel Prize winners are being announced. We can be sure that nobody will win the Nobel Prize for mathematics. The reason is simple: there is no Nobel Prize for mathematics.

There are many urban legends regarding this. Most of them relate to an alleged love affair that Ms. Nobel had with a prominent mathematician. These stories conclude that Alfred Nobel did not want to award a prize for mathematics because his wife’s lover would have been a leading candidate to win it. The most famous of these stories links Ms. Nobel to the French mathematician Augustin Cauchy. A nice story, but Alfred Nobel was never married.

There is no Nobel Prize for mathematics, but the Nobel Prize has been awarded to mathematicians many times. Some of the most prominent winners were the mathematicians John Nash, Robert Aumann and Kenneth Arrow, who all won the Nobel Prize in Economics. However, some will argue that the Nobel Prize in Economics is not a real Nobel Prize but merely a Nobel Memorial Prize. Oh well.

In addition, many mathematicians have won the Nobel Prize in Physics, because one cannot engage in high-level physics without a proper mathematical education, and those engaged in theoretical physics often need to develop innovative mathematical tools themselves.

My research has shown that four mathematicians have won Nobel Prizes that are not in Economics or Physics: two won the Literature Prize, and two won the Chemistry Prize. In this post I will review them and their work.

Four mathematicians who won the Nobel Prize. From left to right: Jose Echegaray (Literature, 1904), Bertrand Russell (Literature, 1950), Herbert Hauptman (Chemistry, 1985), John Pople (Chemistry, 1998).

The first mathematician to win the Nobel Prize was the Spaniard Jose Echegaray. Echegaray was born in Madrid in 1832. He demonstrated his mathematical talent at a very young age, and was appointed professor of mathematics at the University of Madrid at the age of 21. In addition to his work in mathematics, he devoted his time to research in economics and worked to promote Spain’s international trade. With the abolition of the Spanish monarchy in the revolution of 1868, he retired from his academic position and was appointed the Spanish government’s Minister of Finance and Education. With the restoration of the monarchy in 1874, he retired from political life and began a new career as a writer, producing a series of successful satirical plays staged throughout Europe at the end of the 19th century. These plays earned him the Nobel Prize for Literature, awarded in 1904. Were Echegaray’s mathematical skills useful in his literary career? Maybe. Literature scholars praise the meticulous structure of his plays. More likely, he was simply a very talented man who succeeded in everything he did.

Another mathematician won the Nobel Prize for Literature 46 years later. The prize was awarded to Bertrand Russell in 1950. The prize committee noted that it was awarded to him for his writings, which are “a victory for human ideals and freedom of thought”. Among these writings are the Foundations of Geometry (1897), A Critical Exposition of the Philosophy of Leibniz (1900), Principia Mathematica, the monumental work he wrote with Whitehead between 1910 and 1913, and Introduction to Mathematical Philosophy (1919). All these books dealt with logic, alongside his many other writings in many other fields. Bertrand Russell undoubtedly won the Nobel Prize for his mathematical work.

In 1985, another mathematician met the King of Sweden. Herbert A. Hauptman, a mathematician who worked at the Institute for Medical Research in Buffalo, New York, shared the Nobel Prize in Chemistry with the chemist Jerome Karle. Hauptman developed an algorithm combining geometric and probabilistic methods to determine the molecular structure of materials from X-ray data. When implemented on computers in the 1980s, this method shortened the time needed to determine the molecular structure of simple biological molecules from two years to two days, making it possible to easily determine the three-dimensional structure of vitamins, hormones and antibiotics for use in the development of new drugs.

In 1998, another mathematician won the Nobel Prize in Chemistry: John A. Pople. He won the prize for developing new computational methods in quantum chemistry. Pople sought and found methods for solving the Schrödinger equation, the fundamental equation of quantum theory, which was previously considered unsolvable except in a few simple special cases. The software he developed to implement his methods bears the name “Gaussian”, and it is now used as a basic working tool by chemists everywhere.

We must abandon the p-value

Just a couple of weeks ago I wrote here that we should not abandon statistical significance. No, I have not changed my mind. We should not abandon statistical significance, but we must abandon the p-value.

Executive summary: p-values give the false impression that statistical significance is a continuous concept, but it is not. It is dichotomous.

You are welcome to read on.

I will not get into a lengthy discussion of the difference between Fisher‘s significance testing and Neyman and Pearson‘s hypothesis testing. In short: Fisher had a null hypothesis, a test statistic and a p-value. The decision whether or not to reject the null hypothesis based on the p-value was flexible. Indeed, in one place Fisher suggested using the 5% threshold for deciding significance, but in other places he used different thresholds.

Neyman and Pearson’s approach is totally different. In their framework there are two hypotheses: the null hypothesis and the alternative hypothesis. The Neyman-Pearson lemma provides a tool to construct a decision rule. Given the data, you evaluate their likelihood twice: once assuming that the null hypothesis is true, and once assuming that the alternative hypothesis is true. You reject the null hypothesis in favor of the alternative if the ratio of the two likelihoods exceeds a pre-defined threshold. This threshold is determined by the significance level: an acceptable probability of falsely rejecting the null hypothesis (in a frequentist manner) that meets your scientific standards.

When I was a young undergraduate student, we did not have easy access to computers at my university (yes, I am that old). So we calculated the test statistic (Z, t, F, chi-square) and looked in a table similar to this one to see if it exceeded the threshold.

Fisher was a genius, and so was Karl Pearson. They developed statistical tests such as the F and chi-square tests based on geometrical considerations. But when you use the NP lemma to construct a test for equality of means, as in ANOVA, or a test for the independence of two categorical variables, you get the same tests. This opened a computational shortcut: the NP decision rule is equivalent to a p-value decision rule. You can decide to reject the null by comparing your test statistic to the value you look up in a table, or by comparing the p-value to your pre-determined significance level. You will get the same result either way. But still, you either reject the null hypothesis or you don’t. The size of the test statistic or the p-value does not matter. There are no such things as “nearly significant”, “almost significant”, “borderline significant” or “very significant”. If your pre-determined significance level is 0.05 and your p-value is 0.051, that’s really depressing. But if you declare that your result is “almost significant”, it means that you changed your decision rule after the fact.
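The equivalence of the two decision rules is easy to demonstrate. Here is a small sketch for a one-sided z-test at the 5% level; the choice of test and the example statistics are mine, not from any particular table.

```python
from statistics import NormalDist

alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)   # table value, about 1.645

def reject_by_table(z):
    """The old way: compare the test statistic to the table value."""
    return z > z_crit

def reject_by_pvalue(z):
    """The shortcut: compare the p-value to the significance level."""
    return (1 - NormalDist().cdf(z)) < alpha

# The two rules give identical decisions for any test statistic:
for z in (-1.0, 0.5, 1.64, 1.65, 2.33, 4.0):
    assert reject_by_table(z) == reject_by_pvalue(z)
print("identical decisions")
```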

Let me illustrate it with a story my teacher and mentor, Professor Zvi Gilula, told me once.

Someone robbed a convenience store in the middle of nowhere. The police caught him and brought him to court. There were three eyewitnesses and a video from the security camera. The jury found him guilty, and he was sentenced to six months in jail. In statistical jargon, the null hypothesis states that the defendant is not guilty, and the jury rejected the null hypothesis.

Now let’s look at another robber. This man robbed a convenience store in New York City, next to Yankee Stadium. The robbery occurred just as the game ended. The Yankees won, 47,000 happy fans witnessed the robbery, and they all came to testify in court, because the poor robber was a Red Sox fan. The jury found him guilty.

Is the second robber guiltier than the first? Is the first robber “almost not guilty” or “borderline guilty”? Is the second robber “very guilty”? What would be the right punishment for the second robber? Ten years? Life imprisonment? A death sentence? And do you now think that if you had been on the first robber’s jury, you would have acquitted him?

Here is what we need to do if we really want to improve science: when publishing results, one should provide all the necessary information, such as the data, summary statistics, test statistics and more, and then state: “we reject the null hypothesis at the 5% significance level”, or “we do not reject the null hypothesis at the 5% significance level” (or any other reasonable and justified significance level). No p-values. And if someone wants to know whether the null hypothesis would be rejected at the 5.01% significance level, they can go and calculate their own p-value. P-values should be abandoned.

We should not abandon statistical significance

The call to ban statistical significance is not new. Several researchers, the most notable of them Andrew Gelman of Columbia University, have been promoting this idea for the last few years. What is new is a recent article in the prestigious journal Nature that brought up the issue again. It is actually a petition signed by about 800 researchers, some of them statisticians (and yes, Gelman signed it too). Gelman and Blake McShane of Northwestern University, one of the lead authors of the current petition, also expressed this idea in another provocative article in Nature, titled “Five ways to fix statistics”, published in November 2017.

I think that this is not a good idea. It is not throwing the baby out with the bathwater; it is taking the baby and throwing it out of a sixth-floor window.

Let me make it clear: I do not oppose taking into account prior knowledge, plausibility of mechanism, study design and data quality, real-world costs and benefits, and other factors (I am actually quoting here what Gelman and McShane wrote in the 2017 article). I think that when editors are considering a paper submitted for publication, they must take all of this into account, provided that the results are statistically significant.

“Abandoning statistical significance” is just a nice way of saying “abandoning the Neyman-Pearson lemma”. The lemma guarantees that likelihood ratio tests control the rate of false positive results at the acceptable significance level, conventionally 5%. The rate is guaranteed in a frequentist manner. I do not know how many studies are published every year; maybe tens of thousands, maybe more. If all researchers apply the Neyman-Pearson theory with a significance level of 5%, then the proportion of false positive results approaches 0.05. No wonder that Gelman, who is a Bayesian statistician, opposes this.

Once statistical significance is abandoned, bad things will happen. First, the rate of false positive published results will rise. On the other hand, there is no guarantee that the rate of false negative results (which are not published anyway) will fall. The Neyman-Pearson lemma provides protection against false negatives as well. Granted, you need an appropriate sample size to control the false negative rate. But if you no longer care about statistical significance, you will not care about controlling the false negative rate either. Researchers will have an incentive to perform many small-sample studies. Such studies are less expensive and have higher variation, so the chance of getting a nice, publishable effect is larger. This phenomenon is known as “the law of small numbers“. What about external validity and reproducibility? In a world of “publish or perish”, nobody seems to care.
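The small-study incentive is easy to see in a simulation. The sketch below invents studies with no true effect at all and asks how often the observed mean effect looks “large”; the sample sizes and the threshold of half a standard deviation are arbitrary choices of mine, for illustration only.

```python
import random

random.seed(1)

def big_effect_rate(n, trials=20000, threshold=0.5):
    """Fraction of null studies (true effect = 0) whose observed mean
    effect exceeds the threshold, for a study of sample size n."""
    hits = 0
    for _ in range(trials):
        sample_mean = sum(random.gauss(0, 1) for _ in range(n)) / n
        if sample_mean > threshold:
            hits += 1
    return hits / trials

small = big_effect_rate(10)    # small, cheap studies
large = big_effect_rate(100)   # larger, more expensive studies

print(small, large)  # small studies yield "impressive" effects far more often
```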

And this brings us to the question of how one would determine whether an effect is publishable. I can think of two possible mechanisms. One possibility is to calculate a p-value and apply a “soft” threshold. P is 0.06? That’s fine. What about 0.1? It depends. 0.7? No way (I hope). A second possibility is the creation of arbitrary rules of thumb that have no theoretical basis. In this paper, for example, the researchers expressed the effects in terms of Hedges’ g, and considered an effect meaningful if g>1.3. Why 1.3? They did not provide any justification.

I worked for many years in the pharmaceutical industry, and I now work in a research institute affiliated with a large healthcare organization. False positive results, as well as false negative results, can have serious consequences. Forget about the money involved; it can be a matter of life and death. Therefore, I care about the error rates, and I strongly support hypothesis testing and insisting on statistical significance. And you should care too.


Will baseball’s winning streak record be broken?

I came across this story on ESPN: Which of baseball’s most unbreakable records might actually get broken in 2019?

You can skip the story, unless you are a real baseball statistics nerd or, like me, have nothing better to do. But there is an interesting probability question in it.

The longest winning streak in MLB history is 26 wins in a row, a record set by the Giants in 1916. What is the probability that this record will be broken?

The author, Sam Miller, argues that the chance is about 1 in 250. His reasoning goes along these lines: first, he assumes that the best team will win about 100-110 of the season’s 162 games. The next assumption is that the probability of winning is the same for each game; this probability therefore ranges from 0.62 to 0.68. He does not state the third assumption explicitly, but you can’t do the math without it: the games are independent.

All these assumptions translate into a series of 162 Bernoulli trials with a success probability of about 0.65 (plus or minus 0.03). So, what is the probability of getting a streak of at least 27 successes? Can you do the calculations?
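If you do not feel like doing the exact calculation, a Monte Carlo sketch will do. It takes Miller’s assumptions at face value (162 independent games, a fixed win probability of 0.65, both assumptions rather than facts):

```python
import random

random.seed(1916)  # the year of the Giants' streak, for reproducibility

def longest_streak(n_games=162, p_win=0.65):
    """Longest run of consecutive wins in one simulated season."""
    best = run = 0
    for _ in range(n_games):
        if random.random() < p_win:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

trials = 100_000
prob = sum(longest_streak() >= 27 for _ in range(trials)) / trials
print(prob)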

This post was originally posted at Statistically Speaking.

How to select a seed for simulation or randomization

If you need to generate a randomization list for a clinical trial, run some simulations, or perhaps perform a huge bootstrap analysis, you need a way to draw random numbers. Putting many pieces of paper in a hat and drawing them is possible in theory, but you will probably use a computer instead. The computer, however, does not generate random numbers. It generates pseudo-random numbers. They look and feel almost like real random numbers, but they are not random. Each number in the sequence is calculated from its predecessor, so the sequence has to begin somewhere; it begins with the seed, the first number in the sequence.

Knowing the seed is a good idea. It enables reproducing the analysis, the simulation or the randomization list. If you run a clinical trial, reproducibility is crucial. You must know at the end of the trial which treatment each patient was randomized to; otherwise you will have to throw all your data in the garbage. During the years I worked at Teva Pharmaceuticals, we took every possible safety measure: we burned the randomization lists, the randomization SAS code and the randomization seed onto a CD and kept it in a fireproof safe. We also kept all this information on analog media. Yes, we printed the lists, the SAS code and the seed on paper, and these were kept in the safe as well.

Using the same seed every time is not a good idea: you would get the same sequence of pseudo-random numbers every time, and then your numbers would not even be pseudo-random anymore. Selecting a different seed every time is good practice.

How do you select the seed? Taking a number out of your head is still not a good idea: our heads are biased. As with passwords, people tend to use some seeds more often than others. Don’t be surprised if you see code with seeds like 123, 999 or 31415.

The best practice is to choose a random seed, but this creates a chicken-and-egg problem: you need a random number to get random numbers. You can still turn to your old hat and some pieces of paper, but there is a simpler alternative: generate the seed from the computer’s clock.

This is how you do it in R, using the Sys.time() function: get the system time, convert it to an integer, and you’re done. In practice, I take only the last five digits of that integer. And of course, I keep the chosen seed on record. I also save the Sys.time value and its integer value, just in case. Finally, I hardcode the seed I got. This is the R code:

> # set the initial seed as the computer time
> initial_seed=Sys.time()
> print (initial_seed)
[1] "2019-03-16 10:56:37 IST"

> # convert the time into a numeric variable
> initial_seed=as.integer(initial_seed)
> print (initial_seed)
[1] 1552726597

> # take the last five digits of the initial seed
> the_seed=initial_seed %% 100000
> print(the_seed)
[1] 26597

> # do your simulation
> set.seed(26597)
> print(rnorm(3))
[1] -0.4759375 -1.9624872 -0.1601236

> # reproduce your simulation
> set.seed(26597)
> print(rnorm(3))
[1] -0.4759375 -1.9624872 -0.1601236
>

Pi day quiz

Today is Pi Day, an annual celebration of the mathematical constant Pi. It is observed every year on March 14, since Pi can be approximated by 3.14, and this date is written as 3/14 in the month/day format.

To celebrate this day, here is a short quiz about Pi. You can find all the answers on the web, but it is more fun to try answering using only your own knowledge. Let’s go!

1) In the ancient world, many cultures derived approximations to Pi. Which ancient culture had the best approximation?
a. Egypt
b. Babylon
c. Hebrews
d. India

2) Which of these fractions is the best approximation of Pi?
a. 2549491779/811528438
b. 22/7
c. 3927/1250
d. 864/275

3) Who popularized the use of the Greek letter Pi to represent the ratio of a circle’s circumference to its diameter?
a. Carl Friedrich Gauss
b. Leonhard Euler
c. Pierre Simon Laplace
d. Isaac Newton

4) The problem of squaring the circle does not have a solution because Pi is
a. An algebraic number
b. A rational number
c. A transcendental number
d. An irrational number

5) Who proved that the problem of squaring the circle does not have a solution?
a. Carl Friedrich Gauss
b. Adrien Marie Legendre
c. Ferdinand von Lindemann
d. Evariste Galois

6) Pi plays an important role in Statistics because
a. The value of Pi can be approximated by dropping needles on the floor
b. The sample size calculation formula contains Pi
c. The probability density function of the Normal distribution contains Pi
d. Pi is the maximal value of Euler’s population density curve

7) Who was born on Pi day?
a. Johann Strauss
b. Wolfgang Amadeus Mozart
c. Johann Sebastian Bach
d. Georges Bizet

8) How many decimal digits of Pi have been calculated to date?
a. ~ 206 billion
b. ~ 50 quintillion
c. ~ 22.5 trillion
d. ~ 352 million

9) The first known rigorous algorithm for calculating the value of Pi was devised by
a. Shankara Variyar
b. Liu Hui
c. Archimedes
d. Ibn al-Haytham

10) Which of the following series does not converge to Pi?

Good luck! I will post the answers next week.