Cancer clusters and the Poisson distribution

On March 1, 2019, an article was published on Israel’s Ynetnews website under the title “The curious case of the concentration of cancer”. The story reports on a concentration of cancer cases in the town of Rosh Ha’ayin in central Israel.

In the past few years, dozens of cancer cases have been discovered in the center of Rosh Ha’ayin. About 40 people have already died of the disease. Residents are sure that the cause of the disease is the cellular antennas on the roof of a building belonging to the municipality. “For years we have been crying out and no one listens,” they say. “People are dying one after the other.”

I do not underestimate the pain of the residents of Rosh Ha’ayin. Nor do I intend to discuss the numbers mentioned in the article; I accept them as they are. I only want to address the claim that the cause of the disease is the cellular antennas. It is easy (at least for me) to explain why this claim is questionable: there are many more cellular antennas in many other places, and there are no elevated cancer rates around them. If the antennas were carcinogenic, they should cause cancer everywhere, not just in Rosh Ha’ayin. On the other hand, I can’t blame the residents for blaming the antennas. People try to rationalize what they see and find a cause.

I also must emphasize that the case of Rosh Ha’ayin must not be neglected. If the cause is not the cellular antennas, it is possible that there is some other risk factor, and the state authorities must investigate.

So why does Rosh Ha’ayin have such a large cluster of cancer morbidity? One possible answer is that there is an environmental factor (other than the antennas) that does not exist elsewhere. Another is that there may be a non-environmental factor that does not exist elsewhere, perhaps a genetic one (most of the town’s residents are immigrants from Yemen and their descendants). A third, particularly sad possibility is that the local residents simply suffer from really bad luck. The rules of statistics can be cruel.

Clusters happen: if there are no local risk factors (environmental or otherwise) that cause cancer (or any other disease), and the disease strikes at random across the whole country, then clusters still form. If the country is divided into area units of equal size, the number of cases in a given unit follows a Poisson distribution, and there is a small but non-negligible probability that one of these units will contain a large number of cases. The problem is that there is no way of knowing in advance where it will happen.
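A quick back-of-the-envelope calculation shows how likely such an extreme cluster is. The sketch below is in Python (not part of the original analysis) and uses the parameters of the simulation that follows: 400 cases over 100 equal squares, i.e. a Poisson mean of 4 cases per square.

```python
import math

lam = 4.0  # 400 cases spread over 100 squares: 4 expected cases per square

# probability that one given, pre-specified square gets 11 or more cases
p_tail = 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(11))

# probability that at least one of the 100 squares gets 11 or more cases
p_any = 1.0 - (1.0 - p_tail) ** 100

print(p_tail)  # about 0.003: rare for any single square chosen in advance
print(p_any)   # about 0.25: a cluster of 11+ cases somewhere is quite likely
```

This is exactly the point: a cluster of 11 cases would be surprising in a town you named in advance, but it is not surprising that it happens somewhere in the country.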

The opposite is also true: if the distribution of the number of cases per area unit is a Poisson distribution, we can conclude that the spatial dispersion is random.

I will demonstrate the phenomenon using simulation.

Consider a hypothetical country with a perfect square shape, 100 x 100 kilometers in size. I scattered 400 cases of morbidity at random across the country:

> # generate n random points on 0:100 x 0:100
> set.seed(21534)
> n=400
> x=runif(n)*100
> y=runif(n)*100
> dat=data.frame(x,y)
> head(dat)
         x         y
1 15.73088  8.480265
2 12.77018 78.652808
3 45.50406 31.316797
4 86.46181  6.669138
5 27.25488 48.164316
6 17.42388 98.429575

I plotted the 400 cases.

> # plot the points
> plot(dat$x, dat$y, ylim=c(0,100), xlim=c(0,100),
+      asp=1, frame.plot=FALSE, axes=FALSE,
+      xlab=' ', ylab=' ', col='aquamarine', pch=16,
+      las=1, xaxs="i", yaxs="i")
> axis(1, at=0:10*10)
> axis(2, at=0:10*10, las=1, pos=0)
> axis(3, at=0:10*10, labels=FALSE)
> axis(4, at=0:10*10, labels=FALSE, pos=100)

Here’s the map I got.

Next, I divided the map into 100 squares, each 10 x 10 kilometers:

> #draw gridlines
> for (j in 1:9){
+ lines(c(0,100), j*c(10,10))
+ lines(j*c(10,10), c(0,100))
+ }
>

In order to count the cases in each square, I assigned row and column numbers for each of the squares, and recorded the position (row and column) for every case/dot:

> # row and column numbers and case positions
> # (vectorized assignments; no loop needed)
> dat$row=ceiling(dat$y/10)
> dat$col=ceiling(dat$x/10)
> dat$pos=10*(dat$row-1)+dat$col
>

Now I can count the number of points/cases in each square:

> # calculate number of points for each position
> # ppp=points per position
> dat$count=1
> ppp=aggregate(count~pos, dat, sum)
> dat=dat[,-6]

But of course, it is possible that some squares contain zero cases (indeed, the data frame ppp has only 97 rows). Let’s identify them:

> # add positions with zero counts, if any
> npp=nrow(ppp)
> if(npp<100){
+ w=which(!(1:100 %in% ppp$pos))
+ addrows=(npp+1):(npp+length(w))
+ ppp[addrows,1]=w
+ ppp[addrows,2]=0
+ ppp=ppp[order(ppp$pos),]
+ }
>

And now we can get the distribution of number of cases in each of the 100 squares:

> # distribution of number of points/cases in each position
> tb=table(ppp$count)
> print(tb)
0  1  2  3  4  5  6  7  8  9 11 
3  9 12 21 15 17 13  5  1  3  1 
>

We see that there is one very unlucky cluster with 11 cases, and there are also 3 squares with 9 cases each. Let’s paint them on the map:

> # identify largest cluster
> mx=max(ppp$count)
> loc=which(ppp$count==mx)
> clusters=dat[dat$pos %in% loc,]
> points(clusters$x, clusters$y, col='red', pch=16)
> 
> # identify second largest cluster/s
> loc=which(ppp$count==9)
> clusters=dat[dat$pos %in% loc,]
> points(clusters$x, clusters$y, col='blue', pch=16)
>

Let’s also mark the squares with zero points/cases. In order to do this, we first need to identify the row and column locations of these squares:

> # identify squares without cases
> # find row and column locations
> loc=which(ppp$count==0)
> zeroes=data.frame(loc)
> zeroes$row=ceiling(zeroes$loc/10)
> zeroes$col=zeroes$loc %% 10
> w=which(zeroes$col==0)
> if(length(w)>0){
+ zeroes$col[w]=10
+ }
> print(zeroes)
  loc row col
1   8   1   8
2  31   4   1
3  99  10   9
>

So there is one empty square in the 8th column of the first row, one in the first column of the 4th row, and one in the 9th column of the 10th row. Let’s mark them. To do that, we need to know the coordinates of each of the four vertices of these squares:

> # mark squares with zero cases
> for (j in 1:nrow(zeroes)){
+ h1=(zeroes$col[j]-1)*10
+ h2=h1+10
+ v1=(zeroes$row[j]-1)*10
+ v2=v1+10
+ lines(c(h1,h2), c(v1,v1), lwd=3, col='purple')
+ lines(c(h1,h2), c(v2,v2), lwd=3, col='purple')
+ lines(c(h1,h1), c(v1,v2), lwd=3, col='purple')
+ lines(c(h2,h2), c(v1,v2), lwd=3, col='purple')
+ }

Do you see any pattern?

How well does the data fit the Poisson distribution? We can perform a goodness of fit test.
Let’s do the log-likelihood chi-square test (also known as the G-test):

> # log likelihood chi square to test the goodness of fit 
> # of the poisson distribution to the data
> 
> # the observed data
> observed=as.numeric(tb)
> values=as.numeric(names(tb))
> 
> # estimate the poisson distribution parameter lambda
> # it is the mean number of cases per square
> lambda=nrow(dat)/100
> print(lambda)
[1] 4
> 
> # calculate the expected values according to 
> # a poisson distribution with mean lambda
> expected=100*dpois(values, lambda)
> 
> # view the data for the chi-square test
> poisson_data=data.frame(values, observed, expected)
> print(poisson_data)
   values observed   expected
1       0        3  1.8315639
2       1        9  7.3262556
3       2       12 14.6525111
4       3       21 19.5366815
5       4       15 19.5366815
6       5       17 15.6293452
7       6       13 10.4195635
8       7        5  5.9540363
9       8        1  2.9770181
10      9        3  1.3231192
11     11        1  0.1924537
> 
> # calculate the degrees of freedom
> df=max(values)
> print(df)
[1] 11
> 
> # calculate the test statistic and p-value
> g2=sum(observed*log(observed/expected))
> pvalue=1-pchisq(g2,df)
> log_likelihood_chi_square_test=data.frame(g2, df, pvalue)
> print(log_likelihood_chi_square_test)
        g2 df    pvalue
1 4.934042 11 0.9343187
>
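As a cross-check, here is the same goodness-of-fit test in pure Python (a sketch, not part of the original R session). One caveat: the conventional G statistic carries a factor of 2, G = 2·Σ O·ln(O/E), which the g2 line above omits; with that factor the p-value changes, but the conclusion does not. A short series expansion computes the chi-square tail probability, so no external packages are needed.

```python
import math

def chi2_sf(x, df):
    # P(X > x) for a chi-square distribution, via the series expansion
    # of the regularized lower incomplete gamma function
    a, t = df / 2.0, x / 2.0
    term = 1.0 / a
    s = term
    n = 0
    while term > 1e-16 * s:
        n += 1
        term *= t / (a + n)
        s += term
    return 1.0 - s * math.exp(-t + a * math.log(t) - math.lgamma(a))

# the observed distribution of cases per square, copied from the table above
observed = [3, 9, 12, 21, 15, 17, 13, 5, 1, 3, 1]
values   = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11]
lam = 4.0
expected = [100 * math.exp(-lam) * lam**v / math.factorial(v) for v in values]

# conventional G statistic, with the factor of 2
g = 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected))
p = chi2_sf(g, 11)  # df = 11, mirroring the R analysis above
print(g, p)  # roughly 9.87 and 0.54: still far from rejecting the Poisson model
```

(One could also argue for df = 11 − 1 − 1 = 9, since there are 11 categories and lambda was estimated from the data; the conclusion is the same either way.)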

We cannot reject the hypothesis that the data follow a Poisson distribution. This does not imply, of course, that the data actually follow a Poisson distribution, but we can say that the Poisson model fits the data well.

A brief history of clinical trials

The earliest report of a clinical trial is probably the one in the Book of Daniel. Daniel and a group of other Jews staying at the palace of the king of Babylon did not want to eat the king’s non-kosher food, preferring a vegetarian diet. To show that a kosher, vegetarian diet was healthier, Daniel suggested conducting an experiment. There were two “treatment groups” in this trial: one group ate the royal Babylonian food, the other kept the vegetarian diet. The health of the two groups was compared after a follow-up period of 10 days. The conclusion was that the vegetarian diet was healthier.

The first modern clinical trial is James Lind’s scurvy trial, which many consider the starting point of modern medicine. This is the first documented controlled clinical trial (if you ignore Daniel’s). Lind conducted an experiment to test possible treatments for scurvy, a leading cause of death among sailors in the 18th century. During a voyage in 1747, Lind divided 12 sailors who had fallen sick with scurvy into six pairs. They were all hosted in the same place on the ship and were given the same menu, which differed only in the experimental treatment given to them. The treatments were: a liter of cider a day, 25 drops of sulfuric acid three times a day, two tablespoons of vinegar three times a day, half a liter of seawater a day, an ointment made of garlic, mustard, and radish, or two oranges and a lemon a day. The patients given citrus recovered almost completely, and the condition of the cider patients improved slightly. The comparison between the groups allowed Lind to evaluate the efficacy of each treatment relative to the other therapeutic alternatives.

The next milestone is William Watson’s trials of treatments to reduce the risk of smallpox. It had long been known that anyone who had the disease and survived would not get sick again. As a result, a practice of immunization by deliberate “mild infection” of healthy people developed. However, doctors disagreed about the optimal method of inoculation and the accompanying treatment. Watson conducted a series of three clinical trials at the Foundling Hospital in London in 1767. His methodology was similar to Lind’s: the children participating in each trial were divided into groups, and each group underwent controlled infection with material taken from smallpox blisters at an early stage of the disease. Each group was given a different adjuvant treatment that was supposed to reduce the risk of severe infection. Watson’s experiments included a number of innovations compared to Lind’s. Watson ensured that each treatment group contained an equal number of boys and girls, to avoid possible bias in case the response to treatment differed between the sexes. In addition, one group in each trial received no supplementary treatment and served as a control group. Most importantly, Watson was the first to report a quantitative measurement of results: the measure of success was the number of smallpox pustules that appeared on each child participating in the trial. He also performed a basic statistical analysis and published the average number of pustules per child in each group. Watson concluded that the conventional treatments to reduce risk, including mercury, various plants, and laxatives, were ineffective.

The next significant milestone is the milk experiment in the county of Lanarkshire, Scotland, in the early 20th century. Its purpose was to determine whether daily milk intake improves the growth of children compared to children who did not drink milk daily, and to check whether there is a difference in growth between children fed fresh milk and those fed pasteurized milk. The experiment, conducted in 1930, was large-scale: it included about 20,000 children aged 6–12 in 67 schools. About 5,000 were fed fresh milk, about 5,000 pasteurized milk, and approximately 10,000 children were assigned to the control group. The children’s height and weight were measured at the beginning of the experiment (February 1930) and at its end (June 1930). The conclusion was that a daily diet of milk improves the growth of children, and that there is no significant difference between fresh and pasteurized milk. The researchers also concluded that the children’s age did not affect the growth benefit.

This experiment entered my list because of the criticism leveled at it. The critics included Fisher and Bartlett, but the most comprehensive criticism came from William Sealy Gosset, also known as “Student”. In an article published in Biometrika, Gosset effectively set out rules necessary to ensure the validity of a clinical trial. First, he noted that in each school the children were given either fresh milk or pasteurized milk, but no school included both groups. As a result, it is not possible to compare fresh and pasteurized milk directly, because of differences between schools. He also noted that the treatments were assigned by the teachers in each class and not at random; as a result, students in the control group were larger in body size than students in the treatment groups. Third, he noted that although the measurements were conducted in February and June, the weight measurements did not account for the weight of the children’s clothes. Winter clothes are heavier than spring and summer clothes, and the difference in clothing weight could offset the real weight differences. The researchers assumed that the difference in clothing weight would be similar across the groups, but Gosset argued that the socioeconomic bias in the allocation of students (children from affluent families were more often placed in the control groups) meant that the weight of the control group’s winter clothing would be higher.

Gosset concluded that the results did not support the conclusion that there is no difference between a fresh-milk diet and a pasteurized-milk diet, and claimed that it is impossible to conclude that there is no connection between age and the change in growth rate. He also mentioned the analysis of Fisher and Bartlett, which showed that fresh milk has an advantage over pasteurized milk with respect to growth rate.

Following his criticism, Gosset made a number of recommendations, including a proposal to conduct the experiment on pairs of twins, one twin fed milk and the other serving as a control (or one fed fresh milk and the other pasteurized milk, to compare the two types). I doubt such a design would be considered ethical today. A more practical recommendation was to re-analyze the collected data to try to overcome the bias created by the non-random allocation to treatment and control groups. His ultimate recommendation was to repeat the experiment, this time using randomization, accounting for the weight of the clothes worn by each student, and planning the experiment so that each school has representation in all three treatment groups.

Gosset’s main recommendation, to ensure random allocation of patients to groups, was not immediately accepted, as the idea was perceived by part of the scientific community as “unethical”. It should be noted that the principle of randomization had been presented by Fisher only in 1923, and there was still insufficient recognition of its importance.

The first clinical trial with random assignment to treatment and control groups was conducted only in 1947, and is the fourth on my list: an experiment testing the efficacy of the antibiotic streptomycin in treating pulmonary tuberculosis. Because of the short supply of the antibiotic, there was no choice but to decide by “lottery” which patients would receive it and which would not, and thus the design of the experiment overcame the ethical barrier. However, the experiment was not double-blind, and no placebo was used.

It should be noted that there had already been a precedent for a double-blind trial: the first clinical trial using the double-blind method was conducted in 1943, to test the efficacy of patulin as a treatment for the common cold. Patients did not know whether they were given patulin or a placebo, and the doctors who treated them did not know which treatment each patient received. Such a design prevents bias that may result from the doctors’ prior judgment about the efficacy of the treatment, and in fact forces them to give an objective assessment of the patient’s medical condition. However, this trial did not randomize patients to treatment or control.

The debate regarding the importance of the principles outlined by Gosset and Fisher was finally settled in the trial testing the efficacy of Salk’s vaccine against the polio virus, carried out in 1954. In fact, two trials were conducted. The main trial, later analyzed by Paul Meier, was a double-blind randomized trial, showing a 70% reduction in polio-related paralysis in the treatment group compared to the control group. The large sample size (about 400,000 children aged 6–8) helped establish the external validity of the results. At the same time, another trial was conducted in which the allocation of treatment (vaccination or placebo) was not random: 725,000 first and third graders served as a control group, to which were added 125,000 second graders whose parents refused the vaccine. Their data were compared with the data of 225,000 second graders whose parents agreed to vaccinate them. In total, more than one million students participated, almost three times the size of the randomized trial. However, this trial showed a decrease of only 44% in polio-related paralysis. Later analysis found that the effect was diluted by bias related to the socioeconomic status of the treatment group: many children in this group belonged to more affluent families, and in this population stratum polio incidence was higher, because the proportion of children with natural immunity (acquired through mild, undocumented infections) was lower, due to the higher level of sanitation in their environment. The polio trials established that the most important feature of a clinical trial is randomization, and that only random, double-blind allocation ensures the internal validity of the experiment.

References

  • Boylston AW (2002). Clinical investigation of smallpox in 1767. New England Journal of Medicine, 346(17), 1326–1328.
  • Leighton G, McKinlay P (1930). Milk consumption and the growth of school-children. Department of Health for Scotland, Edinburgh and London: HM Stationery Office.
  • Student (1931). The Lanarkshire Milk Experiment. Biometrika, 23, 398–406.
  • Fisher RA, Bartlett S (1931). Pasteurised and raw milk. Nature, 127, 591–592.
  • Medical Research Council Streptomycin in Tuberculosis Trials Committee (1948). Streptomycin treatment of pulmonary tuberculosis. BMJ, 2, 769–782.
  • Hart PDA (1999). A change in scientific approach: from alternation to randomized allocation in clinical trials in the 1940s. BMJ, 319(7209), 572–573.
  • Meier P (1990). Polio trial: an early efficient clinical trial. Statistics in Medicine, 9(1–2), 13–16.

How to make children eat more vegetables

Let’s start from the end: I do not know how to make children eat more vegetables, or even eat some vegetables. At least with my children, my success is minimal. But two researchers from the University of Colorado had an idea: serve children their vegetables on plates with pictures of vegetables. To test whether the idea works, they conducted an experiment whose results were published in the prestigious journal JAMA Pediatrics. Since the results were published, you can guess that the outcome of the experiment was positive. But did they really prove that their idea works?

Design of the experiment and its results

18 kindergarten and school classes (children aged 3–8) were selected in a suburb of Denver. At first, the children were offered fruits and vegetables on white plates. In each class, a bowl of fruits and a bowl of vegetables were placed, and each child took fruits and vegetables and ate as much as they pleased. The fruits and vegetables were weighed before they were served, and when the children had finished their meal, the researchers weighed what remained. The difference between the weights (before and after the meal) was divided by the number of children to obtain the average amount of fruits and vegetables each child ate; separate averages were also calculated for fruits and for vegetables. The researchers repeated these measurements three times per class.

After a while, the measurements were repeated in the same way, but this time the children were given plates with pictures of vegetables and fruits. The result was an average increase of 13.82 grams in vegetable consumption among children aged 3 to 5. This result is statistically significant. In percentages it sounds much better: an increase of almost 47%.
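As a quick sanity check (my own arithmetic, not a calculation that appears in the article), the two reported figures together imply the baseline consumption:

```python
increase_g = 13.82     # reported average increase, in grams
increase_frac = 0.47   # the same increase, reported as "almost 47%"

baseline_g = increase_g / increase_frac
print(round(baseline_g, 1))  # about 29.4 grams of vegetables per child per meal
```

That is consistent with the roughly 30 grams per child per meal eaten before the intervention.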

So, what’s the problem? There are several problems.

First problem — extra precision

I will start with what is seemingly not a problem, but a warning sign: over-precision. When overly precise results are published, you should start worrying. I would like to emphasize: I mean precision, not accuracy. Accuracy refers to the distance between the measured value and the real, unobserved value, and is usually described by a standard deviation or a confidence interval. The issue here is precision: the results are reported to two decimal places. I’m not saying precision is unimportant, but in my experience, when someone overdoes it, you have to look more closely at what is going on. Two decimal places when measuring grams seems excessive to me. You may of course think differently, but this was the warning signal that made me read the article to the end and think about what it described.

Second problem — on whom was the experiment conducted?

The second problem is much more fundamental: the choice of the experimental unit, or unit of observation. The experimental units here are the classrooms; the observations were made at the class level. The researchers measured how many vegetables and fruits were eaten by all the children in a class together. They did not measure how much each child ate. Although they calculated a per-child average, everyone knows that the average alone is a problematic measure: it ignores the variation between children. Before the intervention, each child ate an average of about 30 grams of vegetables per meal, but surely no one disputes that different children ate different amounts. What is the standard deviation? We do not know, and neither do the researchers, but it is essential, because the variation between children affects the final conclusion. Because the researchers ignored (for whatever reason) the variation between children, they effectively assumed that this variance was very low, in fact zero. Had the researchers considered this variation, the conclusions of the experiment would have been different: the confidence intervals would have been wider than the ones they calculated.
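To see how much a class average can hide, here is a toy illustration with invented numbers (not data from the study): two classes with the identical per-child average but completely different variation between children.

```python
import statistics

# hypothetical grams of vegetables eaten by each child in one meal
class_a = [30.0] * 20                # every child eats exactly 30 g
class_b = [0.0] * 10 + [60.0] * 10   # half eat nothing, half eat 60 g

mean_a, mean_b = statistics.mean(class_a), statistics.mean(class_b)
sd_a, sd_b = statistics.stdev(class_a), statistics.stdev(class_b)

print(mean_a, mean_b)  # identical averages: 30.0 and 30.0
print(sd_a, sd_b)      # very different spreads: 0.0 versus about 30.8
```

Class-level weighing can only ever recover the means; the standard deviations, which drive the width of any honest confidence interval, are lost.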

Another type of variance that was not considered is the variation within children. Let me explain: even if we watched one child and saw that on average he ate 30 grams of vegetables at every meal, he still eats a different amount at different meals. The same question arises again: what is the standard deviation? This standard deviation also affects the final conclusion of the experiment. Of course, each child has a different standard deviation, and this variability should also be taken into consideration.

A third type of variation that was not considered is the variation between children of different ages: it is reasonable to assume that an 8-year-old will react differently to a painted plate than a 3-year-old. An 8-year-old will probably eat more vegetables than a 3-year-old.

I think the researchers did not pay attention to any of these issues; the words variation, adjust, and covariate do not appear in the article. Because they ignored these sources of variation, the confidence intervals they calculated are too narrow to reflect the real variation between and within the children.

Finally, although the experimental unit was the class, the results were reported as if the measurements had been made at the child level. In my opinion, this also shows that the researchers were not aware of the variation between and within children. For them, class and child are one and the same.

Third problem — what about the control?

There is no control group in this experiment. At first sight there is no problem: by the design of the experiment, each class serves as its own control, since the children received the vegetables both on white plates and on plates with paintings of vegetables and fruits. But I think that is not enough.

There are many types of children’s plates, with drawings of Bob the Builder, Disney characters, Adventure Bay, Thomas the Tank Engine, and so on. Could it be that the change was due to the mere presence of drawings, and not because the drawings were of vegetables and fruits? Maybe a child whose meal is served on a plate with pictures of his favorite superhero would eat even more vegetables? The experimental design cannot answer this question; a control group is needed. In my opinion, two control groups are needed here. In one, the children would initially get white plates, and then plates with Thomas the Tank Engine, Disney characters, or superheroes, depending on their age and preferences. In the second, the children would initially receive such “ordinary” picture plates and then plates with paintings of vegetables and fruits.

Fourth problem — subgroup analysis

Although the age range of the children in the study was 3–8, the researchers discuss the results only for children aged 3–5. What happened to the children aged 6–8? Was the analysis of the two (or more) age groups pre-defined? The researchers do not provide this information.

Fifth Problem — What does all this mean?

First, a statistically significant change was found in the consumption of vegetables, but no significant change was observed for fruit. The researchers addressed this in a short sentence: a possible explanation, they said, is a ceiling effect. Formally, they may be right: a ceiling effect is a statistical phenomenon, and that may be what happened here. But they did not answer the really important question: why did this effect occur?

And the most important question: is the significant change also meaningful? What does a difference of 14 grams (sorry, 13.82 grams) mean? The researchers did not address this question. I’ll give you some food for thought. I went to my local supermarket and weighed one cucumber and one tomato (yes, it’s a small sample, I know). The cucumber weighed 126 grams, and the tomato 124 grams. In other words, each child ate, on average, an extra half a bite of tomato or cucumber. Is this amount of vegetables meaningful in terms of health and/or nutrition? The researchers did not address this question, nor did the editors of the journal.

Summary

It is possible that plates with paintings of vegetables and fruits cause children to eat more vegetables and fruits. This is indeed an interesting hypothesis. The study described here does not answer the question, and the manner in which it was planned and carried out does not allow even a partial answer, apparently due to a lack of basic statistical thinking.