How to find critical value in statistics (with definition)

By Indeed Editorial Team

Updated 13 September 2022

Published 3 January 2022

The Indeed Editorial Team comprises a diverse and talented team of writers, researchers and subject matter experts equipped with Indeed's data and insights to deliver useful tips to help guide your career journey.

Calculating a critical value in statistics is an important part of being able to declare statistical significance. A critical value can describe how far the data you're handling is from the average value of the entire data set. By understanding how to use a critical value, you can improve your knowledge of cumulative probability and prepare yourself for more complex concepts. In this article, we learn how to find the critical value, discuss what a critical value is, discover how to use it in a hypothesis test, explain its importance and review some approaches to finding this essential value.

What is a critical value?

A critical value is a point on the test statistic's distribution that you can compare to the test statistic to determine whether to reject the null hypothesis. If the absolute value of your test statistic is greater than the critical value, you can declare statistical significance and reject the null hypothesis. On a graph, a critical value is also a line that splits the distribution into different sections. These lines specify a 'cut-off value' where test results divide into different categories.

A cut-off value shows regions on a graph where a test statistic is least likely to mislead. Beyond the critical value line is the 'rejection region' of a graph. If your test value falls into this rejection region, you can reject the null hypothesis, which is the hypothesis that no significant difference exists in a given set of observations and data.

How to find the critical value in 3 steps

If you're a student of statistics, you may wonder how to find the critical value. You can calculate the critical value based on the given significance level and the type of probability distribution of your idealised model. When the sample distribution of your statistic is normal, you can express the critical value as a 't-score' or as a 'z-score'. Typically, to find the critical value, you follow these steps:

1. Determine the alpha value

An alpha value indicates the probability that a statistical parameter may be false for the population you're measuring. It represents an acceptable probability of error and ranges from 0 to 1. This means that if the alpha value is 0.05, there is a 5% chance of error within the study. For example, a confidence level of 95% within a sample set indicates that your specific statistical parameter has a 95% probability of being true for the entire population. You can determine the alpha value by using the formula:

Alpha value (α) = 1 - (the confidence level / 100)

Yorkshire Statistics is a Yorkshire-based company that provides survey services in the United Kingdom. The company has recently conducted several surveys for a client and wants to know how accurate the studies are. Using the alpha value formula, the company's senior data scientist calculates the value:
Survey's confidence level: 95%
Total confidence level: 100%
Formula: α = 1 - (95 / 100) = 1 - 0.95 = 0.05
The survey's alpha value is 0.05.
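The alpha value calculation above can be sketched in a few lines of Python. The function name and survey figures are illustrative, not part of any standard library:

```python
def alpha_value(confidence_level_percent: float) -> float:
    """Return alpha = 1 - (confidence level / 100)."""
    return 1 - (confidence_level_percent / 100)

# Yorkshire Statistics example: a survey with a 95% confidence level
alpha = alpha_value(95)
print(round(alpha, 2))  # 0.05, i.e. a 5% acceptable chance of error
```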

The critical probability you derive from the alpha value is the critical value. You can then express your critical value as a test statistic or a z-score.

2. Find the critical probability

By using the alpha value you determined, you can then calculate the critical probability. Here is the formula:

Critical probability (p*) = 1 - (alpha value / 2)

The critical probability of the surveys in the previous example is 0.975, or 97.5%. You can calculate it by using the alpha value of 0.05. Here is the calculation:

0.975 = 1 - (0.05 / 2) = 1 - (0.025)
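The step above can be written as a small helper in the same spirit. The function name is illustrative:

```python
def critical_probability(alpha: float) -> float:
    """p* = 1 - (alpha / 2), as used for a two-tailed test."""
    return 1 - alpha / 2

# Using the alpha value of 0.05 from the survey example
p_star = critical_probability(0.05)
print(round(p_star, 3))  # 0.975, or 97.5%
```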

Related: How to become a data analyst

3. Express the critical probability as a z-score or a test statistic

A z-score is a numerical measurement that expresses a value's relationship to the mean of a set of data. Alternatively, a test statistic is a value calculated from sample data during a hypothesis test. The test statistic calculation compares the data to the results expected under the null hypothesis, which is the hypothesis that there is no substantial difference within a specified set of data. To express the critical value as a test statistic, it's necessary to find the degrees of freedom (df). Typically, the degrees of freedom equal the sample size minus one. Here is the formula:

Degrees of freedom (df) = sample size - 1

It's more appropriate to express your critical probability as a test statistic (a t-score) if you're measuring a small sample size. To express the critical value as a z-score, you can find the z-score whose cumulative probability equals the critical probability (p*). A z-score is appropriate for sample sizes larger than about 40.
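Finding the z-score with a cumulative probability equal to p* means inverting the standard normal distribution. As a minimal sketch, Python's built-in `statistics.NormalDist` provides this inverse through its `inv_cdf` method:

```python
from statistics import NormalDist

# Critical probability from the earlier example (alpha = 0.05, two-tailed)
p_star = 0.975

# The z-score whose cumulative probability equals p*
z_critical = NormalDist().inv_cdf(p_star)
print(round(z_critical, 2))  # 1.96, the familiar 95% critical value
```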

Related: What is quantitative analysis? (With definitions and examples)

Using critical value in a hypothesis test

There are four steps you can take to use a critical value approach when you conduct a hypothesis test. These include:

  • Specify the null and alternative hypotheses: State the null hypothesis you're testing and the alternative hypothesis you accept if you reject the null.

  • Calculate the test statistic: Gather your sample data and, assuming the null hypothesis to be true, calculate the value of your test statistic.

  • Determine your critical value from your test statistic's known distribution: Base this on your chosen significance level, which describes the probability of making a type I error, meaning the study rejects a null hypothesis that is actually true.

  • Compare your test statistic to your critical value: If your test statistic is less extreme towards your alternative than your critical value, it's unnecessary to reject your null hypothesis.
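The comparison step above can be sketched for a two-tailed z-test. The function name is illustrative, and the standard normal distribution comes from Python's built-in `statistics` module:

```python
from statistics import NormalDist

def reject_null(test_statistic: float, alpha: float = 0.05) -> bool:
    """Two-tailed z-test: reject H0 if |z| exceeds the critical value."""
    z_critical = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    return abs(test_statistic) > z_critical

print(reject_null(2.3))  # True: 2.3 falls in the rejection region
print(reject_null(1.2))  # False: not extreme enough to reject H0
```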

Related: How to become a data scientist in 4 steps

Why is critical value important?

Critical value calculation can help you determine your margin of error within your sample set. And knowing your margin of error can help you evaluate the validity of your sample set and its accuracy. Critical value can enable you to make predictions and assertions about your data with confidence and statistical significance, and it can provide important insights into the sample set you're assessing.

If you can determine and express the critical value of a small data set as a cumulative probability, you can more accurately evaluate a larger data set. Being able to make confident predictions about validity and accuracy is precisely why critical values are so important. The critical value can also help you assess and gain insight into discrepancies in populations of different sizes.
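As a sketch of how a critical value feeds into the margin of error, the standard formula for estimating a population mean with a known standard deviation is z* x sigma / sqrt(n). The function name and example figures are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(std_dev: float, n: int, confidence: float = 0.95) -> float:
    """Margin of error = z* * sigma / sqrt(n) for a population mean."""
    z_star = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z_star * std_dev / sqrt(n)

# Example: standard deviation of 10 across a sample of 100, at 95% confidence
print(round(margin_of_error(10, 100), 2))  # 1.96
```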

Related: 9 essential business analyst skills

Approaches for finding critical value

There are a few approaches you can take to identify critical value, and they each provide different information:

The p-value approach

The p-value approach determines the likelihood, or p-value, of the test statistic and compares it to the specified significance level (α) of the hypothesis test. This is the level at which you can confirm whether an event is statistically significant. In null hypothesis significance testing, the p-value represents the probability of getting test results at least as extreme as the results observed, assuming that the null hypothesis is correct.

Small p-values provide evidence against the null hypothesis. The smaller the p-value, or the closer it is to 0, the stronger the evidence against the null hypothesis. If the p-value is less than or equal to the specified significance level α, you can reject the null hypothesis. You can also use the p-value to evaluate the strength of the evidence against the null hypothesis without reference to a fixed significance level.
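A minimal sketch of the p-value comparison for a two-sided z-test, using Python's built-in `statistics.NormalDist`; the function name is illustrative:

```python
from statistics import NormalDist

def two_sided_p_value(z: float) -> float:
    """Probability of a result at least as extreme as z, in either tail."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_sided_p_value(2.5)
print(round(p, 4))       # 0.0124
print(p <= 0.05)         # True: reject the null hypothesis at alpha = 0.05
```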

Right-tailed tests

One-tailed tests look at one side of a statistic, such as 'the mean is greater than 5' or 'the mean is less than 5'. One-tailed tests deal with only one tail of the distribution. In these tests, the z-score is on only one side of the statistic, either to the right or the left.

For a right-tailed test, these values represent large positive values, known as the right tail of the distribution. The area in the tail equals the significance level of the hypothesis test. For a right-tailed test, you can reject the null hypothesis if the test statistic is overly large. The rejection region comprises one part, to the right of the centre.

Left-tailed tests

The rejection region can be below the acceptance region or be to the left of it, depending on how you formulated the test. When the rejection region is below the acceptance region, you can state that it's a left-tail test. For a left-tailed test, you can reject the null hypothesis if the test statistic is overly small.

Two-tailed tests

Two-tailed tests deal with both tails of the distribution, and the z-score is on both sides of the statistic. For example, a hypothesis like 'the mean is not equal to 5' involves a two-tailed test because the claim is that the mean can be less than 5 or it can be greater than 5. Two-tailed hypothesis tests, also known as two-sided tests, can test the hypothesis in both directions. When you perform a two-tailed test, you can split the significance level percentage between both tails of the distribution.
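The three cases can be compared in one sketch: for the same significance level, the two-tailed test splits alpha between the tails, so its critical value sits further out than either one-tailed cut-off. Variable names are illustrative:

```python
from statistics import NormalDist

alpha = 0.05
nd = NormalDist()

right_tail = nd.inv_cdf(1 - alpha)    # reject H0 if z > this value
left_tail = nd.inv_cdf(alpha)         # reject H0 if z < this value
two_tail = nd.inv_cdf(1 - alpha / 2)  # reject H0 if |z| > this value

print(round(right_tail, 3))  # 1.645
print(round(left_tail, 3))   # -1.645
print(round(two_tail, 3))    # 1.96
```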

Related:

  • How to value a company (with definition and jobs list)

  • A quick guide to descriptive statistics (with examples)

  • Inferential statistics: definition, tips and applications

  • 8 statistics degree jobs to consider

  • What is skewness in statistics? (Including formulas)


