Abstract: Historically, behavior analytic research and practice have been grounded in the single-subject examination of behavior change. Within the behavior analytic community, there remains little doubt that the graphing of behavior is a powerful strategy for demonstrating functional control by the independent variable; however, during the past thirty years, various statistical techniques have become a popular alternative form of evidence for demonstrating a treatment effect. Concurrently, a growing number of behavior analytic investigators are measuring multiple dependent variables when conducting statistical analyses. Without employing strategies that protect the experimentwise error rate, evaluation of multiple dependent variables within a single experiment is likely to inflate the Type I error rate. In fact, with each additional dependent variable examined in univariate fashion, the probability of "incorrectly" identifying statistical significance compounds as a function of chance. Multivariate analysis of variance (MANOVA) and several other statistical techniques can preclude this common error. We provide an overview of the procedural complications arising from methodologies that might inflate the Type I error rate. Additionally, we provide a sample of reviewer comments and suggestions, an enrichment section focusing on this somewhat contentious issue, and a number of statistical and neural network techniques that enhance power and preclude the inflation of Type I error rates.
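The inflation described above can be made concrete with a standard probability calculation (an illustration, not taken from the paper): if k dependent variables are each tested independently at a per-test alpha of .05 with no correction, the experimentwise (familywise) probability of at least one false positive is 1 − (1 − α)^k. The function name below is ours, chosen for this sketch.

```python
# Illustration only: experimentwise (familywise) Type I error inflation
# when k dependent variables are each tested at per-test alpha = .05
# with no correction, assuming independent tests.
def familywise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 3, 5, 10):
    print(f"k={k:2d} tests -> experimentwise alpha = {familywise_error(0.05, k):.3f}")
```

With ten uncorrected univariate tests, the chance of at least one spurious "significant" result rises to roughly .40, which is the rationale for omnibus procedures such as MANOVA or alpha-adjustment strategies such as Bonferroni correction.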