When working with different statistical distributions, we often want to make probabilistic statements based on the distribution.
We typically want to know one of four things: the density (pdf) at a particular value, the distribution (cdf) at a particular value, the quantile value corresponding to a particular probability, or a random draw of values from a particular distribution.
This used to be done with statistical tables printed in the back of textbooks. Now, R
has functions for obtaining density, distribution, quantile and random values.
The general naming structure of the relevant R
functions is:
- dnorm calculates the density (pdf) at input x.
- pnorm calculates the distribution (cdf) at input x.
- qnorm calculates the quantile at an input probability.
- rnorm generates a random draw from a particular distribution.

Note that norm represents the name of the given distribution.
For example, consider a random variable \(X\) which is \(N(\mu = 2, \sigma^2 = 25)\). (Note, we are parameterizing using the variance \(\sigma^2\). R
however uses the standard deviation.)
To calculate the value of the pdf at x <- 3
, that is, the height of the curve at x <- 3
, use:
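dnorm(x = 3, mean = 2, sd = 5) # density of N(2, 25) at x = 3; R takes the sd (5), not the variance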
## [1] 0.07820854
To calculate the value of the cdf at x <- 3
, that is, \(P(X \leq 3)\), the probability that \(X\) is less than or equal to 3
, use:
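pnorm(q = 3, mean = 2, sd = 5) # P(X <= 3) for X ~ N(2, 25)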
## [1] 0.5792597
Or, to calculate the quantile for probability 0.975, use:
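qnorm(p = 0.975, mean = 2, sd = 5) # value with probability 0.975 to its left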
## [1] 11.79982
Lastly, to generate a random sample of size n <- 10
, use:
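rnorm(n = 10, mean = 2, sd = 5) # random draws; without the original seed, new runs will not reproduce the output below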
## [1] -2.1023419 4.4371453 5.6916235 4.8789068 0.4730581 9.5589058 3.9492162 -1.1062029
## [9] -9.0734994 7.6246546
These functions exist for many other distributions, including but not limited to:
| Command | Distribution |
|---|---|
| *binom | Binomial |
| *t | t |
| *pois | Poisson |
| *f | F |
| *chisq | Chi-Squared |
Where *
can be d
, p
, q
, and r
. Each distribution will have its own set of parameters which need to be passed to the functions as arguments. For example, dbinom()
would not have arguments for mean
and sd
, since those are not parameters of the distribution. Instead, a binomial distribution is usually parameterized by \(n\) and \(p\); however, R
chooses to call them something else. To find the names that R
uses we would use ?dbinom
and see that R
instead calls the arguments size
and prob
. For example:
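dbinom(x = 6, size = 10, prob = 0.75) # P(Y = 6) for Y ~ binomial(size = 10, prob = 0.75)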
## [1] 0.145998
Also note that, when the d functions are used with discrete distributions, they give the pmf of the distribution. For example, the above command is \(P(Y = 6)\) if \(Y \sim b(n = 10, p = 0.75)\). (The probability of flipping an unfair coin 10
times and seeing 6
heads, if the probability of heads is 0.75
.)
A prerequisite for STAT 420 is an understanding of the basics of hypothesis testing. Recall the basic structure of hypothesis tests: a null and an alternative hypothesis, a test statistic, the distribution of the test statistic under the null hypothesis, and a decision made by comparing the resulting p-value to a chosen significance level.
We’ll do some quick review of two of the most common tests to show how they are performed using R
.
Suppose a grocery store sells “16 ounce” boxes of Captain Crisp cereal. A random sample of 9 boxes was taken and weighed. The weights in ounces are stored in the data frame capt_crisp
.
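A data frame of this form, holding nine weights whose sample mean (15.9) and standard deviation (0.25) match the values used in the calculations below, can be created with:
capt_crisp = data.frame(weight = c(15.5, 16.2, 16.1, 15.8, 15.6, 16.0, 15.8, 15.9, 16.2)) # illustrative weights; mean = 15.9, sd = 0.25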
The company that makes Captain Crisp cereal claims that the average weight of a box is at least 16 ounces. We will assume the weight of cereal in a box is normally distributed and use a 0.05 level of significance to test the company’s claim.
To test \(H_{0}: \mu \geq 16\) versus \(H_{1}: \mu < 16\), the test statistic is
\[ t = \frac{\bar{x} - \mu_{0}}{s / \sqrt{n}} \]
The sample mean \(\bar{x}\) and the sample standard deviation \(s\) can be easily computed using R
. We also create variables which store the hypothesized mean and the sample size.
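One way to do this (the variable names are just a convenient choice mirroring the formula):
x_bar = mean(capt_crisp$weight)
s     = sd(capt_crisp$weight)
mu_0  = 16 # hypothesized mean
n     = 9  # sample size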
We can then easily compute the test statistic.
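t = (x_bar - mu_0) / (s / sqrt(n))
t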
## [1] -1.2
Under the null hypothesis, the test statistic has a \(t\) distribution with \(n - 1\) degrees of freedom, in this case 8.
To complete the test, we need to obtain the p-value of the test. Since this is a one-sided test with a less-than alternative, we need the area to the left of -1.2 for a \(t\) distribution with 8 degrees of freedom. That is,
\[ P(t_{8} < -1.2) \]
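pt(-1.2, df = 8) # area to the left of the test statistic under a t distribution with 8 degrees of freedom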
## [1] 0.1322336
We now have the p-value of our test, which is greater than our significance level (0.05), so we fail to reject the null hypothesis.
Alternatively, this entire process could have been completed using one line of R
code.
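t.test(x = capt_crisp$weight, mu = 16, alternative = c("less"), conf.level = 0.95)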
##
## One Sample t-test
##
## data: capt_crisp$weight
## t = -1.2, df = 8, p-value = 0.1322
## alternative hypothesis: true mean is less than 16
## 95 percent confidence interval:
## -Inf 16.05496
## sample estimates:
## mean of x
## 15.9
We supply R
with the data, the hypothesized value of \(\mu\), the alternative, and the confidence level. R
then returns a wealth of information, including the value of the test statistic, the degrees of freedom of the \(t\) distribution under the null hypothesis, the p-value, the confidence interval, and the sample mean.
Since the test was one-sided, R
returned a one-sided confidence interval. If instead we wanted a two-sided interval for the mean weight of boxes of Captain Crisp cereal we could modify our code.
capt_test_results = t.test(capt_crisp$weight, mu = 16,
alternative = c("two.sided"), conf.level = 0.95)
This time we have stored the results. By doing so, we can directly access portions of the output from t.test()
. To see what information is available we use the names()
function.
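names(capt_test_results)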
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate" "null.value"
## [7] "stderr" "alternative" "method" "data.name"
We are interested in the confidence interval which is stored in conf.int
.
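capt_test_results$conf.int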
## [1] 15.70783 16.09217
## attr(,"conf.level")
## [1] 0.95
Let’s check this interval “by hand.” The one piece of information we are missing is the critical value, \(t_{n-1}(\alpha/2) = t_{8}(0.025)\), which can be calculated in R
using the qt()
function.
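qt(0.975, df = 8) # critical value t_8(0.025)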
## [1] 2.306004
So, the 95% CI for the mean weight of a cereal box is calculated by plugging into the formula,
\[ \bar{x} \pm t_{n-1}(\alpha/2) \frac{s}{\sqrt{n}} \]
c(mean(capt_crisp$weight) - qt(0.975, df = 8) * sd(capt_crisp$weight) / sqrt(9),
mean(capt_crisp$weight) + qt(0.975, df = 8) * sd(capt_crisp$weight) / sqrt(9))
## [1] 15.70783 16.09217
Assume that the distributions of \(X\) and \(Y\) are \(\mathrm{N}(\mu_{1},\sigma^{2})\) and \(\mathrm{N}(\mu_{2},\sigma^{2})\), respectively. Given the \(n = 6\) observations of \(X\),
and the \(m = 8\) observations of \(Y\),
we will test \(H_{0}: \mu_{1} = \mu_{2}\) versus \(H_{1}: \mu_{1} > \mu_{2}\).
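These observations are the same values that appear as groups A and B in the data frame used later in this section; they can be stored in two vectors:
x = c(70, 82, 78, 74, 94, 82)
y = c(64, 72, 60, 76, 72, 80, 84, 68)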
First, note that we can calculate the sample means and standard deviations.
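x_bar = mean(x) # sample mean of x
s_x   = sd(x)   # sample standard deviation of x
y_bar = mean(y)
s_y   = sd(y)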
We can then calculate the pooled standard deviation.
\[ s_{p} = \sqrt{\frac{(n-1)s_{x}^{2}+(m-1)s_{y}^{2}}{n+m-2}} \]
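n   = length(x) # 6
m   = length(y) # 8
s_p = sqrt(((n - 1) * s_x ^ 2 + (m - 1) * s_y ^ 2) / (n + m - 2))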
Thus, the relevant \(t\) test statistic is given by
\[ t = \frac{(\bar{x}-\bar{y})-\mu_{0}}{s_{p}\sqrt{\frac{1}{n}+\frac{1}{m}}}. \]
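t = ((x_bar - y_bar) - 0) / (s_p * sqrt(1 / n + 1 / m)) # mu_0 = 0 under the null hypothesis
t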
## [1] 1.823369
Note that \(t \sim t_{n + m - 2} = t_{12}\), so we can calculate the p-value, which is
\[ P(t_{12} > 1.8233692). \]
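1 - pt(1.8233692, df = 12) # area to the right of the observed test statistic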
## [1] 0.04661961
But, then again, we could have simply performed this test in one line of R
.
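t.test(x, y, alternative = c("greater"), var.equal = TRUE)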
##
## Two Sample t-test
##
## data: x and y
## t = 1.8234, df = 12, p-value = 0.04662
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## 0.1802451 Inf
## sample estimates:
## mean of x mean of y
## 80 72
Recall that a two-sample \(t\)-test can be done with or without an equal variance assumption. Here var.equal = TRUE
tells R
we would like to perform the test under the equal variance assumption.
Above we carried out the analysis using two vectors x
and y
. In general, we will have a preference for using data frames.
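For example, the two samples can be combined into one data frame (here named t_test_data; the name is just one convenient choice):
t_test_data = data.frame(values = c(x, y),
                         group  = c(rep("A", length(x)), rep("B", length(y))))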
We now have the data stored in a single variable (values
) and have created a second variable (group
) which indicates which “sample” the value belongs to.
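Printing the data frame shows its contents:
t_test_data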
## values group
## 1 70 A
## 2 82 A
## 3 78 A
## 4 74 A
## 5 94 A
## 6 82 A
## 7 64 B
## 8 72 B
## 9 60 B
## 10 76 B
## 11 72 B
## 12 80 B
## 13 84 B
## 14 68 B
Now to perform the test, we still use the t.test()
function but with the ~
syntax and a data
argument.
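t.test(values ~ group, data = t_test_data, alternative = c("greater"), var.equal = TRUE)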
##
## Two Sample t-test
##
## data: values by group
## t = 1.8234, df = 12, p-value = 0.04662
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## 0.1802451 Inf
## sample estimates:
## mean in group A mean in group B
## 80 72
To test the effectiveness of a drug for a certain medical condition, we will consider a hypothetical case.
Suppose we have 105 patients under study, 50 of whom were treated with the drug, while the remaining 55 patients were kept as a control group. The health condition of all patients was checked after a week.
Using a contingency table of treatment against improvement (constructed from the data below), we can assess whether the patients' condition improved. Can we tell from this table whether the drug had a positive effect?
In this example, 35 of the 50 treated patients (70%) showed improvement, while 26 of the 55 control patients (about 47%) improved. If the drug had no effect, we would expect the treated patients to improve in roughly the same proportion as the untreated patients. Both categorical variables here, treatment and improvement, have exactly two levels, and the table suggests that drug treatment and health condition are dependent.
Like all statistical tests, the chi-squared test of independence is framed in terms of a null hypothesis and an alternative hypothesis, and the decision is based on the p-value: we reject the null hypothesis if the p-value is less than a predetermined significance level, usually 0.05.
H0: The two variables are independent. H1: The two variables are not independent.
data_frame <- read.csv("https://goo.gl/j6lRXD") #Reading CSV
table(data_frame$treatment, data_frame$improvement)
##
## improved not-improved
## not-treated 26 29
## treated 35 15
Let’s do the chi-squared test using the chisq.test() function. It takes the two vectors as the input. We also set correct=FALSE
to turn off Yates’ continuity correction.
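chisq.test(data_frame$treatment, data_frame$improvement, correct = FALSE)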
##
## Pearson's Chi-squared test
##
## data: data_frame$treatment and data_frame$improvement
## X-squared = 5.5569, df = 1, p-value = 0.01841
We have a chi-squared value of 5.5569. Since the p-value is less than the significance level of 0.05, we reject the null hypothesis and conclude that the two variables are in fact dependent.