// inverse_chi_squared_bayes_eg.cpp

// Copyright Thomas Mang 2011.
// Copyright Paul A. Bristow 2011.

// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt
// or copy at http://www.boost.org/LICENSE_1_0.txt)

// This file is written to be included from a Quickbook .qbk document.
// It can still be compiled by the C++ compiler, and run.
// Any output can also be added here as a comment, or included or pasted in elsewhere.
// Caution: this file contains Quickbook markup as well as code
// and comments: don't change any of the special comment markups!

#include <iostream>
//  using std::cout; using std::endl;

//#define  define possible error-handling macros here?

#include "boost/math/distributions.hpp"
// using ::boost::math::inverse_chi_squared;

int main()
{
  using std::cout; using std::endl;

  using ::boost::math::inverse_chi_squared;
  using ::boost::math::inverse_gamma;
  using ::boost::math::quantile;
  using ::boost::math::cdf;

  cout << "Inverse_chi_squared_distribution Bayes example: " << endl << endl;

  cout.precision(3);
// Examples of using the inverse_chi_squared distribution.

//[inverse_chi_squared_bayes_eg_1
/*`
The scaled-inverse-chi-squared distribution is the conjugate prior distribution
for the variance ([sigma][super 2]) parameter of a normal distribution
with known expectation ([mu]).
As such it has widespread application in Bayesian statistics.

In [@http://en.wikipedia.org/wiki/Bayesian_inference Bayesian inference],
the strength of belief in certain parameter values is
itself described through a distribution; the parameters
are hence themselves modelled and interpreted as random variables.

In this worked example, we perform such a Bayesian analysis, using
the scaled-inverse-chi-squared distribution as prior and posterior distribution
for the variance parameter of a normal distribution.

For more general information on Bayesian analysis, see:

* Andrew Gelman, John B. Carlin, Hal S. Stern, Donald B. Rubin, Bayesian Data Analysis,
2003, ISBN 978-1439840955.

* Jim Albert, Bayesian Computation with R, Springer, 2009, ISBN 978-0387922973.

(As the scaled-inverse-chi-squared distribution is another parameterization of the inverse-gamma distribution,
this example could also have used the inverse-gamma distribution.)

Consider precision machines that produce balls for high-quality ball bearings.
Ideally, each ball should have a diameter of precisely 3000 [mu]m (3 mm).
Assume that the machines produce balls of that size on average (mean),
but that individual balls vary slightly in either direction,
following (approximately) a normal distribution. Depending on various production conditions
(e.g. raw material used for the balls, workplace temperature and humidity, maintenance frequency and quality),
some machines produce balls distributed tightly around the target of 3000 [mu]m,
while others produce balls with a wider distribution.
Therefore the variance parameter of the normal distribution of the ball sizes varies
from machine to machine. An extensive survey by the precision machinery manufacturer, however,
has shown that most machines operate with a variance between 15 and 50,
and near 25 [mu]m[super 2] on average.

Using this information, we want to model the variance of the machines.
The variance is strictly positive, so we look for a statistical distribution
with support on the positive real numbers.
Given that the expectation of the normal distribution of the balls is known (3000 [mu]m),
it is, for reasons of conjugacy, customary practice in Bayesian statistics
to model the variance as scaled-inverse-chi-squared distributed.

In a first step, we use the survey information to model
the manufacturer's general knowledge about the variance parameter of its machines.
This provides us with a generic prior distribution that is applicable
when nothing more specific is known about a particular machine.

In a second step, we then combine this prior information in a Bayesian analysis
with data on a specific machine to derive a posterior distribution for that machine.

[h5 Step 1: Using the survey information]

Using the survey results, we try to find a parameter set
for a scaled-inverse-chi-squared distribution
so that the properties of this distribution match the results.
Using the mathematical properties of the scaled-inverse-chi-squared distribution
as a guideline, we see that both the mean and mode of the scaled-inverse-chi-squared distribution
are approximately given by the scale parameter (s) of the distribution. As the surveyed machines operated at a
variance of 25 [mu]m[super 2] on average, we hence set the scale parameter (s[sub prior]) of our prior distribution
equal to this value. Using some trial-and-error and calls to the global quantile function, we also find that a
value of 20 for the degrees-of-freedom ([nu][sub prior]) parameter is adequate, so that
most of the prior distribution mass is located between 15 and 50 (see the figure below).
A short sketch of such a trial-and-error search is shown after the output below.

We first construct our prior distribution using these values, and then list a few quantiles:

*/
  double priorDF = 20.0;
  double priorScale = 25.0;

  inverse_chi_squared prior(priorDF, priorScale);
  // Using an inverse_gamma distribution instead, we could equivalently write
  // inverse_gamma prior(priorDF / 2.0, priorScale * priorDF / 2.0);
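  // As a quick consistency check of that equivalence (illustrative only, nothing
  // here is printed), we can construct the inverse-gamma form alongside, assuming
  // the usual mapping shape = nu / 2 and scale = nu * s / 2, and compare, for
  // example, the medians of the two parameterizations:
  inverse_gamma prior_ig(priorDF / 2.0, priorDF * priorScale / 2.0);
  double median_scaled = quantile(prior, 0.5);  // about 25.9, as listed below.
  double median_ig = quantile(prior_ig, 0.5);   // the same value, up to rounding.
  (void)median_scaled; (void)median_ig;         // only computed for illustration.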

  cout << "Prior distribution:" << endl << endl;
  cout << "  2.5% quantile: " << quantile(prior, 0.025) << endl;
  cout << "  50% quantile: " << quantile(prior, 0.5) << endl;
  cout << "  97.5% quantile: " << quantile(prior, 0.975) << endl << endl;

//] [/inverse_chi_squared_bayes_eg_1]

//[inverse_chi_squared_bayes_eg_output_1
/*`This produces the following output:

    Prior distribution:

    2.5% quantile: 14.6
    50% quantile: 25.9
    97.5% quantile: 52.1

*/
//] [/inverse_chi_squared_bayes_eg_output_1]
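
// The degrees-of-freedom value of 20 used above was found by trial-and-error,
// as described in the text. A minimal sketch of such a search (illustrative only;
// it is deliberately kept outside the Quickbook snippet markers and prints nothing,
// so the documented output is unchanged): for a candidate degrees-of-freedom value,
// check how much probability mass a trial prior with scale 25 places between 15 and 50.
{
  inverse_chi_squared trial(20.0, 25.0); // candidate degrees-of-freedom = 20, scale = 25.
  double mass_between_15_and_50 = cdf(trial, 50.0) - cdf(trial, 15.0);
  // For 20 degrees of freedom this mass is about 0.94 (= 1 - 0.031 - 0.032,
  // see the probabilities printed below); smaller degrees-of-freedom values
  // spread the distribution out and put less mass in [15, 50].
  (void)mass_between_15_and_50; // computed for illustration only.
}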

//[inverse_chi_squared_bayes_eg_2
/*`
Based on this distribution, we can now calculate the probability that a machine
works with an unusual precision (variance), that is, with a variance of at most 15 or greater than 50.
For this task, we use calls to the `boost::math::` functions `cdf` and `complement`,
respectively, and find a probability of about 0.031 (3.1%) in each case.
*/

  cout << "  probability variance <= 15: " << boost::math::cdf(prior, 15.0) << endl;
  cout << "  probability variance <= 25: " << boost::math::cdf(prior, 25.0) << endl;
  cout << "  probability variance > 50: "
    << boost::math::cdf(boost::math::complement(prior, 50.0))
    << endl << endl;
//] [/inverse_chi_squared_bayes_eg_2]

//[inverse_chi_squared_bayes_eg_output_2
/*`This produces the following output:

    probability variance <= 15: 0.031
    probability variance <= 25: 0.458
    probability variance > 50: 0.0318

*/
//] [/inverse_chi_squared_bayes_eg_output_2]
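
// A small side check (again outside the Quickbook snippets, and printing nothing):
// the complement of the 0.458 figure above gives the probability of a variance
// greater than 25, about 0.542, which is quoted in the discussion that follows.
{
  double prob_variance_above_25 = boost::math::cdf(boost::math::complement(prior, 25.0));
  // prob_variance_above_25 is about 0.542 (= 1 - 0.458).
  (void)prob_variance_above_25; // computed for illustration only.
}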

//[inverse_chi_squared_bayes_eg_3
/*`Therefore, only 3.1% of all precision machines produce balls with a variance of 15 or less
(particularly precise machines),
and likewise only 3.2% of all machines produce balls
with a variance as high as 50 or more (particularly imprecise machines). Moreover, slightly more than
one-half (1 - 0.458 = 0.542, or 54.2%) of the machines work at a variance greater than 25.

Notice here the distinction between a
[@http://en.wikipedia.org/wiki/Bayesian_inference Bayesian] analysis and a
[@http://en.wikipedia.org/wiki/Frequentist_inference frequentist] analysis:
because we model the variance itself as a random variable,
we can calculate, and straightforwardly interpret, probabilities for given parameter values directly,
whereas such an approach is not possible (and, in terms of interpretation, a strict ['must not])
in the frequentist world.

[h5 Step 2: Investigating a single machine]

In the second step, we investigate a single machine
that is suspected to suffer from a major fault,
as the balls it produces show fairly high size variability.
Based on the prior distribution of generic machinery performance (derived above)
and data on balls produced by the suspect machine, we calculate the posterior distribution for that
machine and use its properties to guide the decision whether to keep the machine in operation or suspend it.

It can be shown that if the prior distribution
was chosen to be scaled-inverse-chi-squared distributed,
then the posterior distribution is also scaled-inverse-chi-squared distributed
(the prior and posterior distributions are hence conjugate).
For more details on conjugacy, and on the formulae used to derive the parameter set
of the posterior distribution, see
[@http://en.wikipedia.org/wiki/Conjugate_prior Conjugate prior].

Given the prior distribution parameters and sample data (of size n), the posterior distribution parameters
are given by the two expressions:

__spaces [nu][sub posterior] = [nu][sub prior] + n

which gives the `posteriorDF` below, and

__spaces s[sub posterior] = ([nu][sub prior]s[sub prior] + [Sigma][super n][sub i=1](x[sub i] - [mu])[super 2]) / ([nu][sub prior] + n)

which after some rearrangement gives the formula for the `posteriorScale` below.

The machine-specific data consist of 100 balls which were accurately measured
and show the expected mean of 3000 [mu]m and a sample variance of 55
(calculated with the sample mean taken to be exactly 3000).
From these data, the prior parameterization, and the fact that the term
[Sigma][super n][sub i=1](x[sub i] - [mu])[super 2] equals the sample variance multiplied by n - 1,
it follows that the posterior distribution of the variance parameter
is a scaled-inverse-chi-squared distribution with degrees-of-freedom ([nu][sub posterior]) = 120 and
scale (s[sub posterior]) = 49.54.
*/

  int ballsSampleSize = 100;
  cout << "balls sample size: " << ballsSampleSize << endl;
  double ballsSampleVariance = 55.0;
  cout << "balls sample variance: " << ballsSampleVariance << endl;

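  // nu_posterior = nu_prior + n, i.e. 20 + 100 = 120 degrees of freedom here.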
  double posteriorDF = priorDF + ballsSampleSize;
  cout << "prior degrees-of-freedom: " << priorDF << endl;
  cout << "posterior degrees-of-freedom: " << posteriorDF << endl;

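  // s_posterior = (nu_prior * s_prior + sum of squared deviations from the known mean) / nu_posterior,
  // where the sum of squared deviations equals the sample variance times (n - 1):
  // (20 * 25 + 55 * 99) / 120 = 5945 / 120, or about 49.54 here.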
  double posteriorScale =
    (priorDF * priorScale + (ballsSampleVariance * (ballsSampleSize - 1))) / posteriorDF;
  cout << "prior scale: " << priorScale << endl;
  cout << "posterior scale: " << posteriorScale << endl;

/*`An interesting feature here is that one only needs to know summary statistics of the sample
to parameterize the posterior distribution: the 100 individual ball measurements are irrelevant;
knowledge of the sample variance and the number of measurements is sufficient.
*/

//] [/inverse_chi_squared_bayes_eg_3]

//[inverse_chi_squared_bayes_eg_output_3
/*`This produces the following output:

  balls sample size: 100
  balls sample variance: 55
  prior degrees-of-freedom: 20
  posterior degrees-of-freedom: 120
  prior scale: 25
  posterior scale: 49.5

*/
//] [/inverse_chi_squared_bayes_eg_output_3]

//[inverse_chi_squared_bayes_eg_4
/*`To compare the generic machinery performance with our suspect machine,
we again calculate the same quantiles and probabilities as above,
and find a distribution that is clearly shifted towards greater values (see the figure below).

[graph prior_posterior_plot]

*/

  inverse_chi_squared posterior(posteriorDF, posteriorScale);

  cout << "Posterior distribution:" << endl << endl;
  cout << "  2.5% quantile: " << boost::math::quantile(posterior, 0.025) << endl;
  cout << "  50% quantile: " << boost::math::quantile(posterior, 0.5) << endl;
  cout << "  97.5% quantile: " << boost::math::quantile(posterior, 0.975) << endl << endl;

  cout << "  probability variance <= 15: " << boost::math::cdf(posterior, 15.0) << endl;
  cout << "  probability variance <= 25: " << boost::math::cdf(posterior, 25.0) << endl;
  cout << "  probability variance > 50: "
    << boost::math::cdf(boost::math::complement(posterior, 50.0)) << endl;

//] [/inverse_chi_squared_bayes_eg_4]

//[inverse_chi_squared_bayes_eg_output_4
/*`This produces the following output:

    Posterior distribution:

    2.5% quantile: 39.1
    50% quantile: 49.8
    97.5% quantile: 64.9

    probability variance <= 15: 2.97e-031
    probability variance <= 25: 8.85e-010
    probability variance > 50: 0.489

*/
//] [/inverse_chi_squared_bayes_eg_output_4]

//[inverse_chi_squared_bayes_eg_5
/*`Indeed, the probability that the machine works at a low variance (<= 15) is almost zero,
and even the probability of working at average or better performance (variance <= 25) is negligibly small
(less than one-millionth of a permille).
On the other hand, with a probability of nearly one-half (49%), the machine operates in the
extreme high-variance range of > 50 that is characteristic of poorly performing machines.

Based on this information, the machine is taken out of operation and serviced.

In summary, the Bayesian analysis allowed us to make exact probabilistic statements about a
parameter of interest, and hence provided results with a straightforward interpretation.

*/
//] [/inverse_chi_squared_bayes_eg_5]

} // int main()

//[inverse_chi_squared_bayes_eg_output
/*`
[pre
 Inverse_chi_squared_distribution Bayes example:

   Prior distribution:

    2.5% quantile: 14.6
    50% quantile: 25.9
    97.5% quantile: 52.1

    probability variance <= 15: 0.031
    probability variance <= 25: 0.458
    probability variance > 50: 0.0318

  balls sample size: 100
  balls sample variance: 55
  prior degrees-of-freedom: 20
  posterior degrees-of-freedom: 120
  prior scale: 25
  posterior scale: 49.5
  Posterior distribution:

    2.5% quantile: 39.1
    50% quantile: 49.8
    97.5% quantile: 64.9

    probability variance <= 15: 2.97e-031
    probability variance <= 25: 8.85e-010
    probability variance > 50: 0.489

] [/pre]
*/
//] [/inverse_chi_squared_bayes_eg_output]