Date of Award

Summer 2018

Document Type

Thesis Restricted

Degree Name

Master of Science (MS)

Department

Mathematics

Committee Chairperson

Robert J. Gallop, Ph.D.

Committee Member

Thomas H. Short, Ph.D.

Committee Member

Scott D. McClintock, Ph.D.

Abstract

In statistics, multiple comparison or multiple testing problems arise when a set of statistical tests is carried out simultaneously (Wikipedia, "Multiple comparisons problem"). As the number of tests increases, the overall Type I error rate (the probability of incorrectly rejecting at least one true null hypothesis) also increases. To keep this overall Type I error, the familywise error rate, under control, statisticians have introduced a number of adjustment methods. In this thesis, we discuss four basic adjustment methods: the Bonferroni Adjustment Method, the Tukey Adjustment Method, the Scheffé Adjustment Method, and the False Discovery Rate Adjustment Method.
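
As a minimal numerical sketch (in Python, assuming independent tests each run at the unadjusted level 0.05, and not taken from the thesis), the familywise error rate grows quickly with the number of tests m:

alpha = 0.05
for m in (1, 5, 20, 100):
    # P(at least one false rejection) among m independent tests, each at level alpha
    fwer = 1 - (1 - alpha) ** m
    print(m, round(fwer, 3))

With m = 20 tests, the probability of at least one false rejection is already about 0.64, which is why an adjustment is needed.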

The primary objective of this thesis is to compare the advantages and disadvantages of the four adjustment methods in general and generalized linear models. The criterion for comparing the Bonferroni, Tukey, and Scheffé methods is the width of the confidence interval each produces for the same sub-test: the narrower the interval, the better the corresponding adjustment method.
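
A minimal sketch of this criterion, assuming a balanced one-way layout with k groups of size n and a pooled error variance mse (the function and parameter names are illustrative, not from the thesis), compares the half-widths of simultaneous confidence intervals for a pairwise mean difference under the three adjustments:

from math import sqrt, comb
from scipy import stats

def halfwidths(k, n, mse, alpha=0.05):
    df = k * (n - 1)            # error degrees of freedom
    m = comb(k, 2)              # number of pairwise comparisons
    se = sqrt(2 * mse / n)      # standard error of a difference of two group means
    bonferroni = stats.t.ppf(1 - alpha / (2 * m), df) * se
    tukey = stats.studentized_range.ppf(1 - alpha, k, df) / sqrt(2) * se
    scheffe = sqrt((k - 1) * stats.f.ppf(1 - alpha, k - 1, df)) * se
    return {"Bonferroni": bonferroni, "Tukey": tukey, "Scheffe": scheffe}

print(halfwidths(k=5, n=10, mse=4.0))

For pairwise comparisons the Tukey half-width is the narrowest of the three, which is consistent with the findings summarized below.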

Findings: For a large number of pairwise comparisons, the Tukey Adjustment Method is preferred over the Bonferroni Adjustment Method; for a large number of linear combinations or contrasts, the Scheffé Adjustment Method is preferred over the Bonferroni Adjustment Method. When the number of pairwise comparisons, contrasts, or linear combinations is very large, the False Discovery Rate Adjustment Method offers an alternative. The Bonferroni Adjustment Method remains suitable when the number of pairwise comparisons, contrasts, or linear combinations is small.
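
A minimal sketch of the alternative mentioned above, using the statsmodels implementation of the Bonferroni and Benjamini-Hochberg (false discovery rate) corrections on a hypothetical vector of p-values (the values are illustrative, not results from the thesis):

from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.012, 0.041, 0.049, 0.20, 0.35, 0.62]
reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
reject_fdr, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("Bonferroni rejections:", reject_bonf.sum())   # stricter, fewer rejections
print("FDR (BH) rejections:  ", reject_fdr.sum())    # less conservative with many tests

Here the false discovery rate correction rejects more hypotheses than Bonferroni, illustrating why it can be a useful alternative when the number of comparisons is large.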
