Introduction to Probability Theory—Bayesian Inference Illustrated by Pass/Fail Trials

Wolfram Technology Conference 2011

Champaign, Illinois, USA

This presentation illustrates Bayes' rule as the tool for inductive reasoning in the context of single-parameter binary trials such as a head/tail coin toss, a pass/fail regulatory inspection, or a guilty/not-guilty jury decision. The binomial distribution quantifies the probability of m passes in n trials, given that p is the passing probability in each trial. Bayes' rule infers the probability distribution function (PDF) for p from the data (m passes in n trials) and readily generates stopping criteria by signaling, during sequential trials, when the desired precision for the inferred value of p has been achieved.

When the precision after n trials is insufficient, the Bayesian may simply perform k additional trials, while the frequentist must start over with k + n additional trials, for a total of k + 2n trials. This is because the Bayesian prior incorporates the data already in hand while the implications of new data are being inferred.

As used here, a likelihood function (LKF) differs from its underlying PDF by a constant factor kk: LKF = kk*PDF and PDF = LKF/kk. Bayes' rule, in terms of likelihood functions, says that the posterior LKF is the product of a prior LKF and a data LKF:

    LKFpost[p | m, n, c] = LKFprior[p]*LKFdata[m, n | c]

The Bayesian credible interval (the range of p given m passes in n trials) is contrasted with the frequentist confidence interval (the range of p covered in a stated proportion of repeated experiments, each with n trials). The maximum likelihood estimate for p is shown to be a special case of the more general Bayesian inference for p.
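As a concrete illustration of this product rule, here is a minimal Wolfram Language sketch of my own (it is not code from the attached notebook); the names LKFdata, LKFprior, LKFpost, and kk follow the abstract's symbols, a flat prior is assumed, and the conditioning context c is left implicit:

    (* Data LKF for m passes in n trials; the Binomial[n, m] factor is
       absorbed into kk and may be dropped *)
    LKFdata[p_, m_, n_] := p^m (1 - p)^(n - m)
    LKFprior[p_] := 1                    (* flat prior on 0 <= p <= 1 *)
    LKFpost[p_, m_, n_] := LKFprior[p] LKFdata[p, m, n]

    (* PDF = LKF/kk, with kk the normalizing constant (memoized) *)
    kk[m_, n_] := kk[m, n] = Integrate[LKFpost[p, m, n], {p, 0, 1}]
    PDFpost[p_, m_, n_] := LKFpost[p, m, n]/kk[m, n]

    (* Posterior PDF for p after 7 passes in 10 trials *)
    Plot[PDFpost[p, 7, 10], {p, 0, 1}, AxesLabel -> {"p", "PDF"}]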

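The stopping criterion can be sketched in the same spirit (the details here are my assumptions, not necessarily the talk's exact rule): run pass/fail trials one at a time and stop once the 95% credible interval for p is narrower than a chosen tolerance. With the flat prior above, the posterior after m passes in n trials is BetaDistribution[m + 1, n - m + 1]:

    (* Width of the central 95% credible interval for p *)
    credibleWidth[m_, n_] := Subtract @@ Quantile[
        BetaDistribution[m + 1, n - m + 1], {0.975, 0.025}]

    (* Simulate sequential pass/fail trials with true passing
       probability pTrue; stop once the width drops below tol *)
    sequentialTrials[pTrue_, tol_, nMax_] := Module[{m = 0, n = 0},
      While[n < nMax && credibleWidth[m, n] > tol,
       n++;
       If[RandomReal[] < pTrue, m++]];
      {m, n, credibleWidth[m, n]}]

    sequentialTrials[0.7, 0.1, 10^4]  (* returns {m, n, final width} *)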

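The last claim, that the maximum likelihood estimate is a special case of Bayesian inference, can be checked directly: under a flat prior the posterior mode coincides with the MLE m/n. A one-line check (again my own illustration):

    (* Posterior mode under a flat prior vs. the MLE m/n *)
    With[{m = 7, n = 10},
     {ArgMax[{p^m (1 - p)^(n - m), 0 <= p <= 1}, p], m/n}]
    (* -> {7/10, 7/10} *)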

http://www.wolfram.com/events/technologyconference2011

Attachments:
IntroductionToProbabilityTheory.cdf (278.9 KB), CDF Document
IntroductionToProbabilityTheory.nb (278.2 KB), Mathematica Notebook