What are Parametric Tests?
Parametric tests are statistical measures used in the analysis phase of research to draw inferences and conclusions about a research problem. There are various types of parametric tests, such as the z-test, t-test and F-test. The selection of a particular test depends upon various factors, such as the type of population, the sample size, and the Standard Deviation (SD) and variance of the population. It is important for a researcher to identify the appropriate test to maintain the authenticity and validity of research results.
Types of Hypothesis Tests
A hypothesis can be tested by using a large number of tests. Therefore, researchers have found it more convenient to categorise these tests on the basis of their similarities and differences. Hypothesis tests are divided into two types, as mentioned below:
- Parametric tests: In these tests, the researcher makes assumptions about the parameters of the population from which a sample is derived. An example of a parametric test is the z-test.
- Non-parametric tests: These are distribution-free tests of hypotheses. Here, the researcher does not make assumptions about the parameters of the population from which a sample is derived. An example of a non-parametric test is the Kruskal-Wallis test.
Types of Parametric Tests
In parametric tests, researchers assume certain properties of the parent population from which samples are drawn. These assumptions concern properties such as the sample size, the type of population, the mean and variance of the population and the distribution of the variable. For example, the t-test assumes that the variable under study is normally distributed in the population.
Researchers estimate the parameters of the population using various test statistics. Then, they test the hypothesis by comparing the calculated value of the test statistic with the benchmark (critical) value given in the problem. The scale used for the dependent variable in parametric tests is usually the interval or ratio scale.
The various types of parametric tests are described below.

Z-Test

This test is used to study the means and proportions of samples with a sample size of more than 30. It involves comparing the means of two different and unrelated samples drawn from a population whose variance is known. The z-value (test statistic) is calculated for the given data and compared with the critical z-value at the level of significance specified in the question/problem. Based on this comparison, the researcher decides whether to reject or retain the null hypothesis.
The z-test is used in the following cases:
- To compare the mean of a sample with the mean of a hypothesised population when the sample size is large and the population variance is known
- To compare the significant difference between the means of two independent samples in the case of large samples or when the population variance is known
- To compare the proportion of a sample with the proportion of the population
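As a small sketch of the first case above, a one-sample z-test can be computed directly from the formula z = (x̄ − μ)/(σ/√n) using only the Python standard library. The numbers used here are illustrative, not from the text:

```python
import math
from statistics import NormalDist

def z_test_mean(sample_mean, pop_mean, pop_sd, n):
    """One-sample z-test: compare a sample mean against a hypothesised
    population mean when the population SD is known and n is large."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    # Two-tailed p-value from the standard normal distribution
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Illustrative figures: a sample of 50 observations with mean 52,
# tested against a hypothesised population mean of 50 with known SD 7
z, p = z_test_mean(sample_mean=52, pop_mean=50, pop_sd=7, n=50)
reject_null = p < 0.05  # reject H0 at the 5% level of significance
```

Since p falls below 0.05 here, the null hypothesis would be rejected at the 5% level of significance.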
T-Test

This test is used to study the mean of a sample when the sample size is less than 30 and/or the population variance is unknown. It is based on the t-distribution, a probability distribution that is appropriate for estimating the mean of a normally distributed population when the sample size is small and the population variance is unknown.
The t-value (test statistic) is calculated for the given data and compared with the critical t-value at a specified level of significance for the relevant degrees of freedom, in order to accept or reject the null hypothesis. The degrees of freedom are calculated by subtracting one from the number of observations (n – 1) and are used to look up the critical t-value in the t-distribution table.
Sometimes, the t-test is used to compare the means of two related samples when the sample size is small and the population variance is unknown. In such a situation, it is known as the paired t-test.
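The procedure described above can be sketched with the standard library alone: compute the t statistic, then compare it with a critical value taken from a t-table at the chosen significance level. The sample data and the tabled critical value below are illustrative:

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, pop_mean):
    """One-sample t-test statistic for a small sample with
    unknown population variance; returns (t, degrees of freedom)."""
    n = len(sample)
    t = (mean(sample) - pop_mean) / (stdev(sample) / math.sqrt(n))
    return t, n - 1  # degrees of freedom = n - 1

# Illustrative sample of 8 observations (not from the text)
sample = [49.1, 50.3, 51.2, 48.7, 52.0, 50.8, 49.5, 51.6]
t, df = one_sample_t(sample, pop_mean=50.0)

# Two-tailed critical value from a t-table at 5% significance, df = 7
t_crit = 2.365
reject_null = abs(t) > t_crit
```

Here |t| is below the tabled value, so the null hypothesis is retained at the 5% level. A paired t-test follows the same pattern, with the test applied to the differences between the paired observations.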
F-Test

This test is used to compare the variances of two samples by taking their ratio. The F-distribution is a right-skewed distribution most commonly used in Analysis of Variance (ANOVA); here, the test statistic follows an F-distribution. The F-value (test statistic) is calculated for the given data and compared with the critical F-value at the level of significance specified in the question/problem.
In an F-test, there are two independent degrees of freedom, one for the numerator and one for the denominator. The degrees of freedom (d.f.) of the two samples are calculated separately by subtracting one from the number of observations in each sample. The critical F-value is then obtained from the F-distribution table.
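A minimal sketch of this variance-ratio F-test, with the larger sample variance placed in the numerator so that F ≥ 1. The two samples and the tabled critical value are illustrative, not from the text:

```python
from statistics import variance

def f_statistic(sample1, sample2):
    """F-test statistic: ratio of two sample variances (larger over
    smaller). Degrees of freedom are n - 1 for each sample."""
    v1, v2 = variance(sample1), variance(sample2)
    if v1 >= v2:
        return v1 / v2, len(sample1) - 1, len(sample2) - 1
    return v2 / v1, len(sample2) - 1, len(sample1) - 1

# Illustrative samples of sizes 6 and 5 (not from the text)
a = [12.0, 14.5, 11.2, 13.8, 12.9, 14.1]
b = [10.1, 10.4, 9.8, 10.6, 10.2]
f, dfn, dfd = f_statistic(a, b)

# Critical value from an F-table at 5% significance, F(0.05; 5, 4)
f_crit = 6.26
reject_null = f > f_crit
```

The calculated F greatly exceeds the tabled critical value, so the hypothesis of equal variances would be rejected at the 5% level.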
Parametric tests are further divided into two parts – one-sample tests and two-sample tests. You will learn more about them in the next sections.
Assumptions of F-Test
The F-distribution is asymmetric, with a minimum value of zero and no upper bound. Assumptions for using an F-test include:
- Both the samples come from normal distribution.
- Observations in each sample are selected randomly.
The F-statistic can never be negative because it is a ratio of two squared quantities. The degrees of freedom for different tests are calculated in different ways, as follows:
|Test|Degrees of Freedom|
|---|---|
|One-sample t-test|n – 1, where n = sample size|
|Paired t-test|n – 1, where n = number of pairs of data points|
|t-test for two independent populations|(n1 – 1) + (n2 – 1), where n1 and n2 are the sizes of the two samples|
|Chi-square test for independence|(r – 1)(c – 1), where r = number of levels of one categorical variable and c = number of levels of the second categorical variable|
|Chi-square test for goodness of fit|n – 1, where n = number of levels of a single categorical variable|
|One-factor ANOVA (F-test)|Degrees of freedom of numerator (dfn) = k – 1; degrees of freedom of denominator (dfd) = N – k, where N = total number of data values in the experiment and k = number of groups|