ID: 3231374 • Letter: 1
Question
1. Define Type I and Type II error. Which error do we work to minimize? How do we minimize Type I and Type II error? Are there any problems that might occur when we use these techniques? What technique works for both types of error?
Explanation / Answer
Type I error, also known as a “false positive”: the error of rejecting a null hypothesis when it is actually true. In other words, this is the error of accepting an alternative hypothesis (the real hypothesis of interest) when the results can be attributed to chance. Plainly speaking, it occurs when we observe a difference when in truth there is none (or, more precisely, no statistically significant difference). So the probability of making a Type I error in a test with rejection region R is α = P(R | H0 is true).
Type II error, also known as a "false negative": the error of not rejecting a null hypothesis when the alternative hypothesis is the true state of nature. In other words, this is the error of failing to detect a real effect, typically because the test lacks adequate power. Plainly speaking, it occurs when we fail to observe a difference when in truth there is one. So the probability of making a Type II error in a test with rejection region R is β = 1 − P(R | Ha is true). The power of the test is P(R | Ha is true) = 1 − β.
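The two probabilities above can be checked by simulation. The sketch below (my own illustration, not part of the original answer) repeatedly runs a two-sided z-test with a known standard deviation: when the null hypothesis is true, the fraction of rejections estimates the Type I error rate α; when the alternative is true, the fraction of rejections estimates the power.

```python
import math
import random

random.seed(0)

Z_CRIT = 1.96    # two-sided critical value for alpha = 0.05
N = 30           # sample size per simulated experiment
TRIALS = 20_000  # number of simulated experiments

def z_test_rejects(true_mean: float) -> bool:
    """Draw a sample from N(true_mean, 1) and test H0: mean = 0."""
    sample = [random.gauss(true_mean, 1.0) for _ in range(N)]
    z = (sum(sample) / N) / (1.0 / math.sqrt(N))  # known sigma = 1
    return abs(z) > Z_CRIT

# Type I error rate: H0 is true (true mean = 0), count false rejections
type1 = sum(z_test_rejects(0.0) for _ in range(TRIALS)) / TRIALS

# Power: Ha is true (true mean = 0.5 here, an arbitrary assumed effect),
# count correct rejections; the Type II rate is its complement
power = sum(z_test_rejects(0.5) for _ in range(TRIALS)) / TRIALS
type2 = 1.0 - power

print(f"Estimated Type I error rate: {type1:.3f} (target alpha = 0.05)")
print(f"Estimated power: {power:.3f}; Type II error rate: {type2:.3f}")
```

The estimated Type I rate should land near the nominal 0.05, while the Type II rate depends on the assumed effect size and sample size.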
A Type I error is conventionally treated as more serious than a Type II error, and is therefore the one we work to control directly.
If you do reject your null hypothesis, it is also essential that you determine whether the size of the effect is practically significant.
The hypothesis test procedure is therefore adjusted so that there is a guaranteed “low” probability of wrongly rejecting the null hypothesis (the significance level α); this probability is never zero.
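Concretely, the rejection region is sized from the chosen α. A minimal sketch (my illustration, using the standard-normal quantile for a two-sided z-test):

```python
from statistics import NormalDist

# The rejection region is chosen so that the probability of wrongly
# rejecting H0 equals the chosen significance level alpha.
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided z-test
print(f"Reject H0 when |z| > {z_crit:.3f}")
```

A smaller α pushes the critical value further out, making rejection of H0 harder (fewer Type I errors, but more Type II errors).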
Example of type I and type II error
To understand the interrelationship between type I and type II error, and to determine which error has more severe consequences for your situation, consider the following example.
A medical researcher wants to compare the effectiveness of two medications. The null and alternative hypotheses are:
Null hypothesis (H0): μ1 = μ2
The two medications are equally effective.
Alternative hypothesis (H1): μ1 ≠ μ2
The two medications are not equally effective.
A type I error occurs if the researcher rejects the null hypothesis and concludes that the two medications are different when, in fact, they are not. If the medications have the same effectiveness, the researcher may not consider this error too severe because the patients still benefit from the same level of effectiveness regardless of which medicine they take. However, if a type II error occurs, the researcher fails to reject the null hypothesis when it should be rejected. That is, the researcher concludes that the medications are the same when, in fact, they are different. This error is potentially life-threatening if the less-effective medication is sold to the public instead of the more effective one.
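A test like the researcher's can be sketched as a two-sample t-test. The effectiveness scores below are made-up numbers purely for illustration (they are not from the example above), and `scipy` is assumed to be available:

```python
from scipy import stats  # assumed available; not part of the stdlib

# Hypothetical effectiveness scores for the two medications
# (made-up numbers for illustration only)
med_a = [68, 71, 74, 66, 70, 73, 69, 72]
med_b = [75, 78, 72, 80, 77, 74, 79, 76]

# Two-sample t-test of H0: mu_a = mu_b against H1: mu_a != mu_b
t_stat, p_value = stats.ttest_ind(med_a, med_b)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0, the medications differ")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0 "
          "(a Type II error if a real difference exists)")
```

Rejecting H0 here risks a Type I error; failing to reject it risks a Type II error, which is the potentially life-threatening outcome in this scenario.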
As you conduct your hypothesis tests, consider the risks of making Type I and Type II errors. If the consequences of one type of error are more severe or costly than the other, choose a level of significance and a power for the test that reflect the relative severity of those consequences. Note the trade-off: lowering α (to reduce Type I errors) raises β, and vice versa; the one technique that reduces both error probabilities at once is increasing the sample size.
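The effect of sample size on the Type II error rate can be sketched with the usual normal approximation for a two-sided z-test. The effect size of 0.5 standard deviations below is an arbitrary assumption for illustration:

```python
import math
from statistics import NormalDist

norm = NormalDist()
alpha = 0.05
z_crit = norm.inv_cdf(1 - alpha / 2)

# Assumed true effect: a mean shift of 0.5 standard deviations
effect, sigma = 0.5, 1.0

powers = []
for n in (10, 30, 100):
    # Approximate power of a two-sided z-test (the far-tail term is negligible)
    power = 1 - norm.cdf(z_crit - effect * math.sqrt(n) / sigma)
    powers.append(power)
    print(f"n = {n:3d}: power ~ {power:.3f}, Type II rate ~ {1 - power:.3f}")
```

With α held fixed, power rises (and the Type II error rate falls) as n grows, which is why increasing the sample size is the technique that helps with both kinds of error.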