Strong Inferencehttp://stronginference.com/2017-01-10T00:00:00-06:00The First Release of PyMC32017-01-10T00:00:00-06:002017-01-10T00:00:00-06:00Christopher Fonnesbecktag:stronginference.com,2017-01-10:/pymc3-release.html<p><img alt="pymc3" src="http://d.pr/i/lJ7d+"></p>
<p>On Monday morning the PyMC dev team pushed the first release of <a href="https://peerj.com/articles/cs-55/">PyMC3</a>, the culmination of over 5 years of collaborative work. We are very pleased to be able to provide a stable version of the package to the Python scientific computing community. For those of you unfamiliar with the history and progression of this project, PyMC3 is a complete re-design and re-write of the PyMC code base, which was primarily the product of the vision and work of John Salvatier. While PyMC 2.3 is still actively maintained and used (I continue to work with it in a number of projects myself), this new incarnation allows us to provide newer methods for Bayesian computation to a degree that would have been impossible previously. </p>
<p>While PyMC2 relied on Fortran extensions (via <code>f2py</code>) for most of the computational heavy-lifting, PyMC3 leverages <a href="http://deeplearning.net/software/theano/">Theano</a>, a library from the LISA lab for array-based expression evaluation, to perform its computation. What this provides, above all else, is fast automatic differentiation, which is at the heart of the gradient-based sampling and optimization methods currently providing inference for probabilistic programming. While the addition of Theano adds a level of complexity to the development of PyMC, fundamentally altering how the underlying computation is performed, we have worked hard to maintain the elegant simplicity of the original PyMC model specification syntax. Since the beginning (over 13 years ago now!), we have tried to provide a simple, black-box interface to model-building, in the sense that the user need only concern herself with the modeling problem at hand, rather than with the underlying computer science. </p>
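<p>Theano performs reverse-mode automatic differentiation over expression graphs, which is considerably more involved than can be shown here. But the underlying idea of propagating derivatives alongside values can be illustrated with a tiny forward-mode sketch using dual numbers (a pedagogical toy, not how Theano actually works):</p>

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # the product rule, carried along with the value
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def grad(f, x):
    # seed the derivative with dx/dx = 1 and read it off the result
    return f(Dual(x, 1.0)).der

# d/dx of 3x^2 + 2x at x = 4 is 6x + 2 = 26
print(grad(lambda x: 3 * x * x + 2 * x, 4.0))   # → 26.0
```

Gradient-based samplers need exactly this kind of machine-generated derivative of the model's log-probability, which is why switching to Theano was the pivotal design decision.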
<p>As a point of comparison, here is what a simple hierarchical model (taken from <a href="https://www.amazon.com/Bayesian-Analysis-Chapman-Statistical-Science/dp/1439840954">Gelman <em>et al.</em>'s book</a>) looked like under PyMC 2.3:</p>
<div class="highlight"><pre><span></span><span class="c1"># Priors</span>
<span class="n">alpha</span> <span class="o">=</span> <span class="n">Normal</span><span class="p">(</span><span class="s1">'alpha'</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mf">0.01</span><span class="p">)</span>
<span class="n">beta</span> <span class="o">=</span> <span class="n">Normal</span><span class="p">(</span><span class="s1">'beta'</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mf">0.01</span><span class="p">)</span>
<span class="c1"># Transformed variables</span>
<span class="n">theta</span> <span class="o">=</span> <span class="n">Lambda</span><span class="p">(</span><span class="s1">'theta'</span><span class="p">,</span> <span class="k">lambda</span> <span class="n">a</span><span class="o">=</span><span class="n">alpha</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">beta</span><span class="p">,</span> <span class="n">d</span><span class="o">=</span><span class="n">dose</span><span class="p">:</span> <span class="n">invlogit</span><span class="p">(</span><span class="n">a</span> <span class="o">+</span> <span class="n">b</span> <span class="o">*</span> <span class="n">d</span><span class="p">))</span>
<span class="c1"># Data likelihood</span>
<span class="n">deaths</span> <span class="o">=</span> <span class="n">Binomial</span><span class="p">(</span><span class="s1">'deaths'</span><span class="p">,</span> <span class="n">n</span><span class="o">=</span><span class="n">n</span><span class="p">,</span> <span class="n">p</span><span class="o">=</span><span class="n">theta</span><span class="p">,</span> <span class="n">value</span><span class="o">=</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">5</span><span class="p">]),</span> <span class="n">observed</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="c1"># Instantiate a sampler, and run</span>
<span class="n">M</span> <span class="o">=</span> <span class="n">MCMC</span><span class="p">(</span><span class="nb">locals</span><span class="p">())</span>
<span class="n">M</span><span class="o">.</span><span class="n">sample</span><span class="p">(</span><span class="mi">10000</span><span class="p">,</span> <span class="n">burn</span><span class="o">=</span><span class="mi">5000</span><span class="p">)</span>
</pre></div>
<p>and here is the same model in PyMC3:</p>
<div class="highlight"><pre><span></span><span class="k">with</span> <span class="n">Model</span><span class="p">()</span> <span class="k">as</span> <span class="n">bioassay_model</span><span class="p">:</span>
<span class="n">alpha</span> <span class="o">=</span> <span class="n">Normal</span><span class="p">(</span><span class="s1">'alpha'</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">sd</span><span class="o">=</span><span class="mi">100</span><span class="p">)</span>
<span class="n">beta</span> <span class="o">=</span> <span class="n">Normal</span><span class="p">(</span><span class="s1">'beta'</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">sd</span><span class="o">=</span><span class="mi">100</span><span class="p">)</span>
<span class="n">theta</span> <span class="o">=</span> <span class="n">invlogit</span><span class="p">(</span><span class="n">alpha</span> <span class="o">+</span> <span class="n">beta</span><span class="o">*</span><span class="n">dose</span><span class="p">)</span>
<span class="n">deaths</span> <span class="o">=</span> <span class="n">Binomial</span><span class="p">(</span><span class="s1">'deaths'</span><span class="p">,</span> <span class="n">n</span><span class="o">=</span><span class="n">n</span><span class="p">,</span> <span class="n">p</span><span class="o">=</span><span class="n">theta</span><span class="p">,</span> <span class="n">observed</span><span class="o">=</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">5</span><span class="p">]))</span>
<span class="n">trace</span> <span class="o">=</span> <span class="n">sample</span><span class="p">(</span><span class="mi">2000</span><span class="p">)</span>
</pre></div>
<p>If anything, model specification has become simpler for the majority of models.</p>
<p>Though the version 2 and version 3 models are superficially similar (by design), very different things happen underneath when <code>sample</code> is called in each case. By default, the PyMC3 model will use a form of gradient-based MCMC sampling, a self-tuning form of Hamiltonian Monte Carlo called <a href="https://arxiv.org/abs/1111.4246">NUTS</a>. Gradient-based methods drastically improve the efficiency of MCMC, without the need for running long chains and discarding large portions of them due to lack of convergence. Rather than conditionally sampling each model parameter in turn, the NUTS algorithm walks in k-space (where k is the number of model parameters), simultaneously updating all the parameters as it leap-frogs through the parameter space. Models of moderate complexity and size that would normally require 50,000 to 100,000 iterations now typically require only 2000-3000.</p>
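<p>The leap-frogging mentioned above refers to the leapfrog integrator for Hamiltonian dynamics, which proposes distant moves while approximately conserving the total "energy" (negative log posterior plus kinetic energy). A minimal one-dimensional sketch, using a standard normal target (illustrative only, not PyMC3's actual implementation):</p>

```python
# Leapfrog integration for a standard normal target: U(q) = q**2 / 2
def grad_U(q):
    return q                              # dU/dq

def leapfrog(q, p, eps, n_steps):
    p -= 0.5 * eps * grad_U(q)            # initial half step for momentum
    for _ in range(n_steps - 1):
        q += eps * p                      # full step for position
        p -= eps * grad_U(q)              # full step for momentum
    q += eps * p
    p -= 0.5 * eps * grad_U(q)            # final half step for momentum
    return q, p

q, p = 1.0, 0.5
H0 = q**2 / 2 + p**2 / 2                  # total energy before
q, p = leapfrog(q, p, eps=0.1, n_steps=20)
H1 = q**2 / 2 + p**2 / 2                  # total energy after
print(abs(H1 - H0))                       # small: energy is ~conserved
```

Because energy is nearly conserved along the trajectory, the Metropolis acceptance probability at the endpoint stays high even for large moves, which is the source of HMC's efficiency.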
<p>When we run the PyMC3 version of the model above, we see this:</p>
<div class="highlight"><pre><span></span>Auto-assigning NUTS sampler...
Initializing NUTS using advi...
Average ELBO = -6.2597: 100%|████████████████████████████████████████| 200000/200000 [00:11<00:00, 16873.12it/s]
Finished [100%]: Average ELBO = -6.27
100%|██████████████████████████████████████████████████████████████████████| 2000/2000 [00:02<00:00, 928.24it/s]
</pre></div>
<p>Unless specified otherwise, PyMC3 will assign the NUTS sampler to all the variables of the model. This happens here because our model contains only <em>continuous</em> random variables; NUTS will not work with discrete variables because it is impossible to obtain gradient information from them. Discrete variables are instead assigned the <code>Metropolis</code> sampling algorithm (<em>step method</em>, in PyMC parlance). The next thing that happens is that the variables' initial values are assigned using Automatic Differentiation Variational Inference (ADVI). This is an approximate Bayesian inference algorithm that we have added to PyMC — more on that later. Though it can be used for inference in its own right, here we are using it merely to find good starting values for NUTS (in practice, this is important for getting NUTS to run well). It's an excessive step for small models like this, but it is the default behavior, designed to help guarantee a good MCMC run.</p>
<p>Another nice innovation includes some new plotting functions for visualizing the posterior distributions obtained with the various estimation methods. Let's look at the regression parameters from our fitted model:</p>
<div class="highlight"><pre><span></span><span class="n">plot_posterior</span><span class="p">(</span><span class="n">trace</span><span class="p">,</span> <span class="n">varnames</span><span class="o">=</span><span class="p">[</span><span class="s1">'beta'</span><span class="p">,</span> <span class="s1">'alpha'</span><span class="p">])</span>
</pre></div>
<p><img alt="posterior plot" src="http://d.pr/i/41uE+"></p>
<p><code>plot_posterior</code> generates histograms of the posterior distributions, annotated with summary statistics of interest, in the style of <a href="https://www.amazon.com/Doing-Bayesian-Data-Analysis-Tutorial/dp/0123814855">John Kruschke's book</a>. This is just one of several options for visualizing output.</p>
<p>The addition of variational inference (VI) methods in version 3.0 is a transformative change to the sorts of problems you can tackle with PyMC3. I showed it being used to initialize a model that was ultimately fit using MCMC, but variational inference can be used as a tool for obtaining statistical inference in its own right. Whereas MCMC approximates a complex posterior by drawing dependent samples from it, variational inference replaces the true posterior with a more tractable form, then iteratively adjusts the approximation so that it resembles the posterior distribution as closely as possible, in terms of the <em>information distance</em> between the two distributions. Where MCMC uses sampling, VI uses optimization to estimate the posterior distribution. The benefit is that Bayesian models informed by very large datasets can be fit in a reasonable amount of time (MCMC notoriously scales poorly with data size); the drawback is that you only get an approximation to the posterior, and that approximation can be unacceptably poor for some applications. Nevertheless, improvements to variational inference methods continue to roll in, and <a href="https://arxiv.org/abs/1505.05770">some have the potential to drastically improve the quality of the approximation</a>. The key advance that allowed PyMC3 to implement variational methods was the development of automated algorithms for specifying a variational approximation generally, across a wide variety of models. In particular, Alp Kucukelbir and colleagues' introduction of <a href="https://arxiv.org/abs/1603.00788">Automatic Differentiation Variational Inference (ADVI)</a> two years ago made VI relatively easy to apply to arbitrary models (again, assuming the model variables are continuous). Here it is, in action, fitting the same model we used NUTS to estimate before:</p>
<div class="highlight"><pre><span></span><span class="k">with</span> <span class="n">model</span><span class="p">:</span>
<span class="n">advi_fit</span> <span class="o">=</span> <span class="n">advi</span><span class="p">(</span><span class="n">n</span><span class="o">=</span><span class="mi">10000</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>Average ELBO = -6.2765: 100%|████████████████████████████████████████████████| 100000/100000 [00:05<00:00, 17072.45it/s]
Finished [100%]: Average ELBO = -6.2835
</pre></div>
<p>ADVI returns the means and standard deviations of the approximating distribution after it has converged to the best approximation. These values can be used to sample from the distribution:</p>
<div class="highlight"><pre><span></span><span class="k">with</span> <span class="n">model</span><span class="p">:</span>
<span class="n">trace</span> <span class="o">=</span> <span class="n">sample_vp</span><span class="p">(</span><span class="n">advi_fit</span><span class="p">,</span> <span class="mi">10000</span><span class="p">)</span>
</pre></div>
<p><img alt="advi samples" src="http://d.pr/i/IT5O+"></p>
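<p>Under the hood, ADVI maximizes the evidence lower bound (ELBO) by stochastic gradient ascent, using the reparameterization trick to push gradients through the expectation over the approximating distribution. The idea can be sketched in a self-contained toy setting — a conjugate normal-normal model with hand-derived gradients, nothing like PyMC3's actual implementation, but the same trick:</p>

```python
import math
import random

random.seed(0)
# toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 10); fit q = N(m, s)
y = [random.gauss(2.0, 1.0) for _ in range(20)]

def dlogjoint(z):
    # derivative of log p(y, theta) with respect to theta
    return -z / 10.0**2 + sum(yi - z for yi in y)

m, log_s = 0.0, 0.0                       # variational parameters
lr = 0.005
for _ in range(10000):
    s = math.exp(log_s)
    eps = random.gauss(0, 1)
    z = m + s * eps                       # reparameterization trick
    g = dlogjoint(z)
    m += lr * g                           # stochastic ascent on the ELBO
    log_s += lr * (g * eps * s + 1.0)     # +1 from the entropy of q

# exact conjugate posterior mean, for comparison
post_prec = len(y) + 1 / 10.0**2
post_mean = sum(y) / post_prec
print(round(m, 2), round(post_mean, 2))   # the two should nearly agree
```

The fitted mean and standard deviation of <code>q</code> are exactly the kind of quantities <code>advi</code> returns above, and sampling from the fitted Gaussian is what <code>sample_vp</code> does.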
<p>As we push past the PyMC3 3.0 release, we have a number of innovations either under development or in planning. For example, in order to improve the quality of approximations using variational inference, we are looking at implementing methods that transform the approximating density to allow it to represent more complicated distributions, such as the application of normalizing flows to ADVI; this work is being led by Taku Yoshioka. Thomas Wiecki is currently working on adding Stein Variational Gradient Descent to the suite of VI algorithms, which should allow much larger datasets to be fit to PyMC models. To more easily accommodate the number of different VI algorithms that are being developed, Maxim Kochurov is leading the development of a flexible base class for variational methods that will unify their interfaces. Work is also underway to allow PyMC3 to take advantage of computation on GPUs, something that Theano allows us to do, but requires some engineering to allow it to work generally. These are just a few notable enhancements, along with all of the incremental but steady improvement throughout the code base.</p>
<p>When I began the PyMC project as a postdoctoral fellow <a href="https://en.wikipedia.org/wiki/Billboard_Year-End_Hot_100_singles_of_2003">back in 2003</a>, it was intended only as a set of functions and classes for personal use, to simplify the business of building and iterating through sets of models. At the time, the world of Bayesian computation was dominated by WinBUGS, a truly revolutionary piece of software that made hierarchical modeling and MCMC available to applied statisticians and other scientists who would otherwise have been unable to consider these approaches. All the same, the BUGS language was not ideal for all problems and workflows, so if you needed something else you were forced to write your own software. We live in a very different scientific computing world today; for example, there are, as of this writing, no fewer than six libraries for building Gaussian process models in Python! The ecosystem for probabilistic programming and Bayesian analysis is rich today, and becoming richer every month, it seems.</p>
<p>I'd like to take the opportunity now to thank the ever-changing and -growing PyMC development team for <a href="https://github.com/pymc-devs/pymc3/graphs/contributors">all of their hard work over the years</a>. I've been truly awestruck by the level of talent and degree of commitment that the project has attracted. Some contributors added value to the project for very short intervals, perhaps in order to facilitate the completion of their own work, and others have stuck around through multiple releases, not only implementing exciting new functionality, but also taking on more mundane chores like squashing bugs and refactoring old code. Of course, every bit helps. Thanks again.</p>
<p>Finally, I'd like to extend an invitation to all who are interested (or just curious) to <a href="https://github.com/pymc-devs/pymc3">come on board and contribute</a>. Now is an exciting time to be a part of the team, with novel methodological innovations in Bayesian modeling arriving at such a rapid pace, and with data science coming into its own as a field. We welcome contributions to all aspects of the project: code development, <a href="https://github.com/pymc-devs/pymc3/issues">issue</a> resolution, <a href="http://pymc-devs.github.io/pymc3/">documentation</a> writing—simply trying out PyMC3 on your own problem and reporting what does and doesn't work is even a great way to get involved. It doesn't take much to get started! </p>Calculating Bayes factors with PyMC2014-11-30T00:00:00-06:002014-11-30T00:00:00-06:00Christopher Fonnesbecktag:stronginference.com,2014-11-30:/bayes-factors-pymc.html<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>Statisticians are sometimes interested in comparing two (or more) models, with respect to their relative support by a particular dataset. This may be in order to select the best model to use for inference, or to weight models so that they can be averaged for use in multimodel inference. </p>
<p>The <a href="http://en.wikipedia.org/wiki/Bayes_factor">Bayes factor</a> is a good choice when comparing two arbitrary models, and the parameters of those models have been estimated. Bayes factors are simply ratios of <em>marginal</em> likelihoods for competing models:</p>
<p>$$ \text{BF}_{i,j} = \frac{L(Y \mid M_i)}{L(Y \mid M_j)} = \frac{\int L(Y \mid M_i,\theta_i)p(\theta_i \mid M_i)d\theta_i}{\int L(Y \mid M_j,\theta_j)p(\theta_j \mid M_j)d\theta_j} $$</p>
<p>While superficially similar to likelihood ratios, Bayes factors are calculated using likelihoods that have been integrated with respect to the unknown parameters, whereas likelihood ratios are calculated at the maximum likelihood values of the parameters. This is an important difference, and it makes Bayes factors a more effective means of comparing models, since they take parametric uncertainty into account; likelihood ratios ignore this uncertainty. In addition, unlike likelihood ratios, the two models need not be nested. In other words, one model does not have to be a special case of the other.</p>
<p>Bayes factors are called Bayes factors because they are used in a Bayesian context by updating prior odds with information from data.</p>
<blockquote>
<p>Posterior odds = Bayes factor x Prior odds</p>
</blockquote>
<p>Hence, they represent the evidence in the data for changing the prior odds of one model over another. It is this interpretation as a measure of evidence that makes the Bayes factor a compelling choice for model selection.</p>
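<p>The updating rule above is simple arithmetic. With hypothetical numbers — equal prior odds and a Bayes factor of 3 in favor of model 1:</p>

```python
# Hypothetical numbers: equally plausible models a priori, and a
# Bayes factor of 3 favoring model 1 over model 2.
prior_odds = 0.5 / 0.5                          # = 1.0
bayes_factor = 3.0
posterior_odds = bayes_factor * prior_odds      # = 3.0
# convert odds back to a posterior model probability
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)                           # → 0.75
```

So evidence worth a factor of 3 moves a 50% prior belief in model 1 up to 75%.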
<p>One of the obstacles to the wider use of Bayes factors is the difficulty associated with calculating them. While likelihood ratios can be obtained simply by the use of MLEs for all model parameters, Bayes factors require the integration over all unknown model parameters. Hence, for most interesting models Markov chain Monte Carlo (MCMC) is the easiest way to obtain Bayes factors.</p>
<p>Here's a quick tutorial on how to obtain Bayes factors from <a href="https://github.com/pymc-devs/pymc">PyMC</a>. I'm going to use a simple example taken from Chapter 7 of <a href="http://amzn.to/gGV2rK">Link and Barker (2010)</a>. Consider a short vector of data, consisting of 5 integers:</p>
<div class="highlight"><pre><span></span><span class="n">Y</span> <span class="o">=</span> <span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">8</span><span class="p">])</span>
</pre></div>
<p>We wish to determine which of two functional forms best models this dataset. The first is a <a href="http://en.wikipedia.org/wiki/Geometric_distribution">geometric model</a>:</p>
<p>$$ f(x|p) = (1-p)^x p $$</p>
<p>and the second a <a href="http://en.wikipedia.org/wiki/Poisson_distribution">Poisson model</a>:</p>
<p>$$ f(x|\mu) = \frac{\mu^x e^{-\mu}}{x!} $$</p>
<p>Both describe the distribution of non-negative integer data, but differ in that the variance of Poisson data is equal to the mean, while the geometric model describes variance greater than the mean. For this dataset, the sample variance would suggest that the geometric model is favored, but the sample is too small to say so definitively.</p>
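<p>A quick check of the sample moments shows why the geometric model looks favorable here:</p>

```python
from statistics import mean, variance

Y = [0, 1, 2, 3, 8]
# sample mean 2.8; sample variance 9.7, far in excess of the mean,
# which is inconsistent with the Poisson's mean-variance equality
print(mean(Y), variance(Y))
```

A sample variance more than three times the mean points toward the overdispersed geometric model, though with only five observations this is weak evidence on its own.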
<p>In order to calculate Bayes factors, we require both the prior and posterior odds:</p>
<blockquote>
<p>Bayes factor = Posterior odds / Prior odds</p>
</blockquote>
<p>The Bayes factor does not depend on the value of the prior model weights, but the estimate will be most precise when the resulting posterior model probabilities are similar in magnitude. For our purposes, we will give 0.1 probability to the geometric model, and 0.9 to the Poisson model:</p>
<div class="highlight"><pre><span></span><span class="n">pi</span> <span class="o">=</span> <span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">)</span>
</pre></div>
<p>Next, we need to specify a latent variable, which identifies the true model (we don't believe either model is "true", but we hope one is better than the other). This is easily done using a Bernoulli random variable, that identifies one model or the other, according to their relative weight.</p>
<div class="highlight"><pre><span></span><span class="n">true_model</span> <span class="o">=</span> <span class="n">Bernoulli</span><span class="p">(</span><span class="s1">'true_model'</span><span class="p">,</span> <span class="n">p</span><span class="o">=</span><span class="n">pi</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">value</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
</pre></div>
<p>Here, we use the specified prior weights as the Bernoulli probabilities, and the variable has been arbitrarily initialized to zero (the geometric model).</p>
<p>Next, we need prior distributions for the parameters of the two models. For the Poisson model, the expected value is given a uniform prior on the interval [0,1000]:</p>
<div class="highlight"><pre><span></span><span class="n">mu</span> <span class="o">=</span> <span class="n">Uniform</span><span class="p">(</span><span class="s1">'mu'</span><span class="p">,</span> <span class="n">lower</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">upper</span><span class="o">=</span><span class="mi">1000</span><span class="p">,</span> <span class="n">value</span><span class="o">=</span><span class="mi">4</span><span class="p">)</span>
</pre></div>
<p>This stochastic node can be used for the geometric model as well, though it needs to be transformed for use with that distribution:</p>
<div class="highlight"><pre><span></span><span class="n">p</span> <span class="o">=</span> <span class="n">Lambda</span><span class="p">(</span><span class="s1">'p'</span><span class="p">,</span> <span class="k">lambda</span> <span class="n">mu</span><span class="o">=</span><span class="n">mu</span><span class="p">:</span> <span class="mi">1</span><span class="o">/</span><span class="p">(</span><span class="mi">1</span><span class="o">+</span><span class="n">mu</span><span class="p">))</span>
</pre></div>
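<p>As an aside, for a model this small the marginal likelihoods can be brute-forced without PyMC at all, by averaging each likelihood over draws of <em>mu</em> from its Uniform(0, 1000) prior. This sketch uses the geometric density with support 0, 1, 2, … as written earlier (the PyMC likelihood below shifts the data by 1 because its geometric starts at 1):</p>

```python
# Monte Carlo estimate of the two marginal likelihoods: average the
# likelihood over prior draws of mu, then take the ratio.
import math
import random

Y = [0, 1, 2, 3, 8]

def log_poisson(y, mu):
    return sum(x * math.log(mu) - mu - math.lgamma(x + 1) for x in y)

def log_geometric(y, p):
    # f(x | p) = (1 - p)**x * p, support 0, 1, 2, ...
    return sum(x * math.log(1 - p) + math.log(p) for x in y)

random.seed(42)
m_pois = m_geom = 0.0
n = 200_000
for _ in range(n):
    mu = random.uniform(0, 1000)         # draw from the prior
    m_pois += math.exp(log_poisson(Y, mu))
    m_geom += math.exp(log_geometric(Y, 1 / (1 + mu)))

bf = m_geom / m_pois                     # Bayes factor favoring geometric
print(bf)                                # roughly 14
```

This crude estimate lands close to the value obtained by MCMC below, which is a reassuring sanity check; for models with more than a couple of parameters, such direct prior-averaging becomes hopelessly inefficient, which is precisely why MCMC is the practical route.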
<p>Finally, the data are incorporated by specifying the appropriate likelihood. We require a mixture of geometric and Poisson likelihoods, depending on which value <em>true_model</em> takes. While BUGS requires an obscure trick to implement such a mixture, PyMC allows for the specification of arbitrary stochastic nodes: </p>
<div class="highlight"><pre><span></span><span class="nd">@observed</span>
<span class="k">def</span> <span class="nf">Ylike</span><span class="p">(</span><span class="n">value</span><span class="o">=</span><span class="n">Y</span><span class="p">,</span> <span class="n">mu</span><span class="o">=</span><span class="n">mu</span><span class="p">,</span> <span class="n">p</span><span class="o">=</span><span class="n">p</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">true_model</span><span class="p">):</span>
<span class="sd">"""Either Poisson or geometric, depending on M"""</span>
<span class="k">if</span> <span class="n">M</span><span class="p">:</span>
<span class="k">return</span> <span class="n">poisson_like</span><span class="p">(</span><span class="n">value</span><span class="p">,</span> <span class="n">mu</span><span class="p">)</span>
<span class="k">return</span> <span class="n">geometric_like</span><span class="p">(</span><span class="n">value</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span> <span class="n">p</span><span class="p">)</span>
</pre></div>
<p>Notice that the function returns the geometric likelihood when M=0, or the Poisson model otherwise. Now, all that remains is to run the model, and extract the posterior quantities to calculate the Bayes factor.</p>
<p>Though we may be interested in the posterior estimate of the mean, all that we care about from a model selection standpoint is the estimate of <em>true_model</em>. At every iteration, the value of this parameter takes the value of zero for the geometric model and one for the Poisson. Hence, the mean (or median) will be an estimate of the probability of the Poisson model: </p>
<div class="highlight"><pre><span></span><span class="n">In</span> <span class="p">[</span><span class="mi">11</span><span class="p">]:</span> <span class="n">M</span><span class="o">.</span><span class="n">true_model</span><span class="o">.</span><span class="n">stats</span><span class="p">()[</span><span class="s1">'mean'</span><span class="p">]</span>
<span class="n">Out</span><span class="p">[</span><span class="mi">11</span><span class="p">]:</span> <span class="mf">0.39654545454545453</span>
</pre></div>
<p>So, the posterior probability that the Poisson model is true is about 0.4, leaving 0.6 for the geometric model. The Bayes factor in favor of the geometric model is simply:</p>
<div class="highlight"><pre><span></span><span class="n">In</span> <span class="p">[</span><span class="mi">18</span><span class="p">]:</span> <span class="n">p_pois</span> <span class="o">=</span> <span class="n">M</span><span class="o">.</span><span class="n">true_model</span><span class="o">.</span><span class="n">stats</span><span class="p">()[</span><span class="s1">'mean'</span><span class="p">]</span>
<span class="n">In</span> <span class="p">[</span><span class="mi">19</span><span class="p">]:</span> <span class="p">((</span><span class="mi">1</span><span class="o">-</span><span class="n">p_pois</span><span class="p">)</span><span class="o">/</span><span class="n">p_pois</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="mf">0.1</span><span class="o">/</span><span class="mf">0.9</span><span class="p">)</span>
<span class="n">Out</span><span class="p">[</span><span class="mi">19</span><span class="p">]:</span> <span class="mf">13.696011004126548</span>
</pre></div>
<p>This value can be interpreted as strong evidence in favor of the geometric model.</p>
<p>If you want to run the model for yourself, <a href="https://github.com/pymc-devs/pymc3/wiki/BayesFactor">you can download the code here</a>.</p>Burn-in, and Other MCMC Folklore2014-08-09T00:00:00-05:002014-08-09T00:00:00-05:00Christopher Fonnesbecktag:stronginference.com,2014-08-09:/burn-in-and-other-mcmc-folklore.html<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>I have been slowly working my way through <a href="http://amzn.to/mR9PVr">The Handbook of Markov Chain Monte Carlo</a>, a compiled volume edited by Steve Brooks <em>et al.</em> that I picked up at last week's Joint Statistical Meetings. The first chapter is a primer on MCMC by <a href="http://www.stat.umn.edu/~charlie/">Charles Geyer</a>, in which he summarizes the key concepts of the theory and application of MCMC. In a particularly provocative passage, Geyer rips several of the traditional practices in setting up, running and diagnosing MCMC runs, including multi-chain runs, burn-in and sample-based diagnostics. Though they are applied regularly, these steps are simply heuristics intended either to help the chain reach its equilibrium distribution or to identify when it has. There are no guarantees on the reliability of any of them.</p>
<p>In particular, he questions the utility of burn-in:</p>
<blockquote>
<p>Burn-in is only one method, and not a particularly good method, for finding a good starting point.</p>
</blockquote>
<p>I can't disagree with this, though I have always viewed MCMC sampling (for most models that I have dealt with) as being cheap enough that there is little cost to simply throwing away thousands of samples. I have often thrown away as many as the first 90 percent of my samples! However, as Geyer notes, there are better ways of getting your chain into a decent region of its support without throwing anything away.</p>
<p>One method is to use an approximation method on your model before applying MCMC. For example, the <a href="http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation">maximum a posteriori (MAP)</a> estimate can be obtained using numerical optimization, then used as the initial values for an MCMC run. It turns out to be pretty easy to do in PyMC. For example, using the built-in bioassay example:</p>
<div class="highlight"><pre><span></span><span class="n">In</span> <span class="p">[</span><span class="mi">3</span><span class="p">]:</span> <span class="kn">from</span> <span class="nn">pymc.examples</span> <span class="kn">import</span> <span class="n">gelman_bioassay</span>
<span class="n">In</span> <span class="p">[</span><span class="mi">4</span><span class="p">]:</span> <span class="kn">from</span> <span class="nn">pymc</span> <span class="kn">import</span> <span class="n">MAP</span><span class="p">,</span> <span class="n">MCMC</span>
<span class="n">In</span> <span class="p">[</span><span class="mi">5</span><span class="p">]:</span> <span class="n">M</span> <span class="o">=</span> <span class="n">MAP</span><span class="p">(</span><span class="n">gelman_bioassay</span><span class="p">)</span>
<span class="n">In</span> <span class="p">[</span><span class="mi">6</span><span class="p">]:</span> <span class="n">M</span><span class="o">.</span><span class="n">fit</span><span class="p">()</span>
</pre></div>
<p>This yields MAP estimates for all the parameters in the model; these become less likely to correspond to true posterior modes as the complexity of the model increases, but they are a pretty good bet to be a decent starting point for MCMC.</p>
<div class="highlight"><pre><span></span><span class="n">In</span> <span class="p">[</span><span class="mi">7</span><span class="p">]:</span> <span class="n">M</span><span class="o">.</span><span class="n">alpha</span><span class="o">.</span><span class="n">value</span>
<span class="n">Out</span><span class="p">[</span><span class="mi">7</span><span class="p">]:</span> <span class="n">array</span><span class="p">(</span><span class="mf">0.8465802225061101</span><span class="p">)</span>
</pre></div>
<p>All that remains is to move these estimates into an MCMC sampler. While one could manually plug the values of each node into the model specification, it's easiest just to extract the variables from the MAP estimator and use them to instantiate an <code>MCMC</code> object:</p>
<div class="highlight"><pre><span></span><span class="n">In</span> <span class="p">[</span><span class="mi">8</span><span class="p">]:</span> <span class="n">M</span><span class="o">.</span><span class="n">variables</span>
<span class="n">Out</span><span class="p">[</span><span class="mi">8</span><span class="p">]:</span>
<span class="nb">set</span><span class="p">([</span><span class="o"><</span><span class="n">pymc</span><span class="o">.</span><span class="n">PyMCObjects</span><span class="o">.</span><span class="n">Stochastic</span> <span class="s1">'alpha'</span> <span class="n">at</span> <span class="mh">0x10f78e810</span><span class="o">></span><span class="p">,</span>
<span class="o"><</span><span class="n">pymc</span><span class="o">.</span><span class="n">PyMCObjects</span><span class="o">.</span><span class="n">Stochastic</span> <span class="s1">'beta'</span> <span class="n">at</span> <span class="mh">0x10f78e910</span><span class="o">></span><span class="p">,</span>
<span class="o"><</span><span class="n">pymc</span><span class="o">.</span><span class="n">PyMCObjects</span><span class="o">.</span><span class="n">Deterministic</span> <span class="s1">'theta'</span> <span class="n">at</span> <span class="mh">0x10f78e9d0</span><span class="o">></span><span class="p">,</span>
<span class="o"><</span><span class="n">pymc</span><span class="o">.</span><span class="n">distributions</span><span class="o">.</span><span class="n">Binomial</span> <span class="s1">'deaths'</span> <span class="n">at</span> <span class="mh">0x10f78ea50</span><span class="o">></span><span class="p">,</span>
<span class="o"><</span><span class="n">pymc</span><span class="o">.</span><span class="n">CommonDeterministics</span><span class="o">.</span><span class="n">Lambda</span> <span class="s1">'LD50'</span> <span class="n">at</span> <span class="mh">0x10f78ec10</span><span class="o">></span><span class="p">])</span>
<span class="n">In</span> <span class="p">[</span><span class="mi">9</span><span class="p">]:</span> <span class="n">MC</span> <span class="o">=</span> <span class="n">MCMC</span><span class="p">(</span><span class="n">M</span><span class="o">.</span><span class="n">variables</span><span class="p">)</span>
<span class="n">In</span> <span class="p">[</span><span class="mi">10</span><span class="p">]:</span> <span class="n">MC</span><span class="o">.</span><span class="n">sample</span><span class="p">(</span><span class="mi">1000</span><span class="p">)</span>
<span class="n">Sampling</span><span class="p">:</span> <span class="mi">100</span><span class="o">%</span> <span class="p">[</span><span class="mo">0000000000000000000000000000000000000000000000</span><span class="p">]</span> <span class="n">Iterations</span><span class="p">:</span> <span class="mi">1000</span>
</pre></div>
<p>Notice that I did not pass a <code>burn</code> argument to <code>MCMC</code>, which defaults to zero. As is evident from the graphical output of the posteriors, this results in what appears to be a homogeneous chain, one that is hopefully already at its equilibrium distribution.</p>
<p><img src="http://f.cl.ly/items/4513263v3x3n1T0m3o27/alpha.png" width="500"></p>
<p><img src="http://f.cl.ly/items/1i0W0k1Q2S3h172E2v0b/beta.png" width="500"></p>
<p>What the MCMC practitioner fears is using a chain for inference that has not yet converged to its target distribution. Unfortunately, diagnostics cannot reliably alert you to this, nor does running several chains from disparate starting values guarantee it. There is also no magical threshold that distinguishes converged from pre-convergence regions in an MCMC trace. Geyer insists that only running chains for a very, very long time will inspire confidence:</p>
<blockquote>
<p>Your humble author has a dictum that the least one can do is make an overnight run. ... If you do not make runs like that, you are simply not serious about MCMC.</p>
</blockquote>Implementing Dirichlet processes for Bayesian semi-parametric models2014-03-07T00:00:00-06:002014-03-07T00:00:00-06:00Christopher Fonnesbecktag:stronginference.com,2014-03-07:/implementing-dirichlet-processes-for-bayesian-semi-parametric-models.html<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>Semi-parametric methods have been preferred for a long time in survival analysis, for example, where the baseline hazard function is expressed non-parametrically to avoid assumptions regarding its form. Meanwhile, the use of non-parametric methods in Bayesian statistics is increasing. However, there are few resources to …</p><script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>Semi-parametric methods have been preferred for a long time in survival analysis, for example, where the baseline hazard function is expressed non-parametrically to avoid assumptions regarding its form. Meanwhile, the use of non-parametric methods in Bayesian statistics is increasing. However, there are few resources to guide scientists in implementing such models using available software. Here, I will run through a quick implementation of a particular class of non-parametric Bayesian models, using PyMC.</p>
<p>Use of the term "non-parametric" in the context of Bayesian analysis is something of a misnomer. This is because the first and fundamental step in Bayesian modeling is to specify a <em>full probability model</em> for the problem at hand. It is rather difficult to explicitly state a full probability model without the use of probability functions, which are parametric. It turns out that Bayesian non-parametric models are not really non-parametric, but rather, are infinitely parametric.</p>
<p>A useful non-parametric approach for modeling random effects is the <a href="http://en.wikipedia.org/wiki/Dirichlet_process">Dirichlet process</a>. A Dirichlet process (DP), just like Poisson processes, Gaussian processes, and other processes, is a stochastic process. This just means that it comprises an indexed set of random variables. The DP can be conveniently thought of as a probability distribution of probability distributions, where the set of distributions it describes is infinite. Thus, an observation under a DP is described by a probability distribution that itself is a random draw from some other distribution. The DP (let's call it $G$) is described by two quantities, a baseline distribution $G_0$ that defines the center of the DP, and a concentration parameter $\alpha$. If you wish, $G_0$ can be regarded as an <em>a priori</em> "best guess" at the functional form of the random variable, and $\alpha$ as a measure of our confidence in our guess. So, as $\alpha$ grows large, the DP resembles the functional form given by $G_0$.</p>
<p>To see how we sample from a Dirichlet process, it is helpful to consider the constructive definition of the DP. There are several representations of this, which include the Blackwell-MacQueen urn scheme, the stick-breaking process and the <a href="http://en.wikipedia.org/wiki/Chinese_restaurant_process">Chinese restaurant process</a>. For our purposes, I will consider the stick-breaking representation of the DP. This involves breaking the support of a particular variable into $k$ disjoint segments. The first break occurs at some point $x_0$, determined stochastically; the first piece of the notional "stick" is taken as the first group in the process, while the second piece is, in turn, broken at some selected point $x_1$ along its length. Here too, one piece is assigned to be the second group, while the other is subjected to the next break, and so on, until $k$ groups are created. Associated with each piece is a probability that is proportional to its length; these $k$ probabilities will have a Dirichlet distribution -- hence, the name of the process. Notice that $k$ can be infinite, making $G$ an infinite mixture.</p>
<p>We require two random samples to generate a DP. First, take a draw of values from the baseline distribution:</p>
<p>$$ \theta_1, \theta_2, \ldots \sim G_0 $$</p>
<p>then, a set of draws $v_1, v_2, \ldots$ from a $\text{Beta}(1,\alpha)$ distribution. These beta random variates are used to assign probabilities to the $\theta_i$ values, according to the stick-breaking analogy. So, the probability of $\theta_1$ corresponds to the first "break", and is just $p_1 = v_1$. The next value corresponds to the second break, which is a proportion of the remainder from the first break, or $p_2 = (1-v_1)v_2$. So, in general:</p>
<p>$$ p_i = v_i \prod_{j=1}^{i-1} (1 - v_j) $$</p>
<p>These probabilities correspond to the set of draws from the baseline distribution, where each of the latter are point masses of probability. So, the DP density function is:</p>
<p>$$ g(x) = \sum_{i=1}^{\infty} p_i I(x=\theta_i) $$</p>
<p>where $I$ is the indicator function. So, you can see that the Dirichlet process is discrete, despite the fact that its values may be non-integer. This can be generalized to a mixture of continuous distributions, which is called a DP mixture, but I will focus here on the DP alone.</p>
<p><strong>Example: Estimating household radon levels</strong></p>
<p>As an example of implementing Dirichlet processes for random effects, I'm going to use the radon measurement and remediation example from <a href="http://amzn.to/gFfJbs">Gelman and Hill (2006)</a>. This problem uses measurements of <a href="http://en.wikipedia.org/wiki/Radon">radon</a> (a carcinogenic, radioactive gas) from households in 85 counties in Minnesota to estimate the distribution of the substance across the state. This dataset has a natural hierarchical structure, with individual measurements nested within households, and households in turn nested within counties. Here, we are certainly interested in modeling the variation in counties, but do not have covariates measured at that level. Since we are more interested in the variation among counties, rather than the particular levels for each, a random effects model is appropriate. Whit Armstrong was kind enough to <a href="https://github.com/armstrtw/pymc_radon">code several parametrizations of this model in PyMC</a>, so I will use his code as a basis for implementing a non-parametric random effect for radon levels among counties.</p>
<p>In the original example from Gelman and Hill, measurements are modeled as being normally distributed, with a mean that is a hierarchical function of both a county-level random effect and a fixed effect that accounted for whether houses had a basement (this is thought to increase radon levels).</p>
<p>$$ y_i \sim N(\alpha_{j[i]} + \beta x_i, \sigma_y^2) $$</p>
<p>So, in essence, each county has its own intercept, but shares a slope among all counties. This can easily be generalized to both random slopes and intercepts, but I'm going to keep things simple, in order to focus on implementing a single random effect.</p>
<p>The constraint that is applied to the intercepts in Gelman and Hill's original model is that they have a common distribution (Gaussian) that describes how they vary from the state-wide mean.</p>
<p>$$ \alpha_j \sim N(\mu_{\alpha}, \sigma_{\alpha}^2) $$</p>
<p>This comprises a so-called "partial pooling" model, whereby counties are neither constrained to have identical means (full pooling) nor are assumed to have completely independent means (no pooling); in most applications, the truth is somewhere between these two extremes. Though this is a very flexible approach to accounting for county-level variance, one might be worried about imposing such a restrictive (thin-tailed) distribution like the normal on this variance. If there are counties that have extremely low or high levels (for whatever reason), this model will fit poorly. To allay such worries, we can hedge our bets by selecting a more forgiving functional form, such as <a href="http://en.wikipedia.org/wiki/Student's_t-distribution">Student's t</a> or <a href="http://en.wikipedia.org/wiki/Cauchy_distribution">Cauchy</a>, but these still impose parametric restrictions (<em>e.g.</em> symmetry about the mean) that we may be uncomfortable making. So, in the interest of even greater flexibility, we will replace the normal county random effect with a non-parametric alternative, using a Dirichlet process.</p>
<p>One of the difficulties in implementing the DP computationally is how to handle an infinite mixture. The easiest way to tackle this is by using a truncated Dirichlet process to approximate the full process. This can be done by choosing a truncation level $N$ that is sufficiently large that it will exceed the number of point masses required. By doing this, we are assuming</p>
<p>$$ \sum_{i=1}^{\infty} p_i I(x=\theta_i) \approx \sum_{i=1}^{N} p_i I(x=\theta_i) $$</p>
<p><a href="http://onlinelibrary.wiley.com/doi/10.1002/sim.2666/abstract">Ohlssen et al. 2007</a> provide a rule of thumb for choosing $N$ such that the sum of the first $N-1$ point masses is greater than 0.99:</p>
<p>$$ N \approx 5\alpha + 2 $$</p>
<p>To be conservative, we will choose an even larger value (100), which we will call <code>N_dp</code>. The truncation makes implementation of DP in PyMC (or JAGS/BUGS) relatively simple.</p>
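<p>As a quick sanity check on this rule of thumb, note that the stick-breaking fractions $v_j$ are independent $\text{Beta}(1,\alpha)$ draws with $E[1-v_j] = \alpha/(1+\alpha)$, so the expected mass left over after $N$ breaks is $(\alpha/(1+\alpha))^N$. A small sketch (the helper function is hypothetical, not part of the model code):</p>

```python
def expected_leftover_mass(alpha, N):
    """Expected probability mass beyond the first N sticks.

    E[prod_{j=1}^{N} (1 - v_j)] = (alpha / (1 + alpha))**N,
    by independence of the Beta(1, alpha) stick-breaking fractions.
    """
    return (alpha / (1.0 + alpha)) ** N

alpha = 5.0
N = int(5 * alpha + 2)                 # rule-of-thumb truncation level: 27
leftover = expected_leftover_mass(alpha, N)
```

<p>For $\alpha = 5$, the rule gives $N = 27$ and an expected leftover mass of roughly 0.007, so the first $N$ point masses do capture more than 99% of the probability, as advertised.</p>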
<p>We first must specify the baseline distribution and the concentration parameter. As we have no prior information to inform a choice for $\alpha$, we will specify a uniform prior for it, with reasonable bounds:</p>
<div class="highlight"><pre><span></span>alpha = pymc.Uniform('alpha', lower=0.5, upper=10)
</pre></div>
<p>Though the upper bound may seem small for a prior that purports to be uninformative, recall that for large values of $\alpha$, the DP will converge to the baseline distribution, suggesting that a continuous distribution would be more appropriate.</p>
<p>Since we are extending a normal random effects model, I will choose a normal baseline distribution, with vague hyperpriors:</p>
<div class="highlight"><pre><span></span>mu_0 = pymc.Normal('mu_0', mu=0, tau=0.01, value=0)
sig_0 = pymc.Uniform('sig_0', lower=0, upper=100, value=1)
tau_0 = sig_0 ** -2
theta = pymc.Normal('theta', mu=mu_0, tau=tau_0, size=N_dp)
</pre></div>
<p>Notice that I have specified a uniform prior on the standard deviation, rather than the more common <a href="http://en.wikipedia.org/wiki/Gamma_distribution">gamma</a>-distributed precision; for hierarchical models this is <a href="http://ba.stat.cmu.edu/journal/2006/vol01/issue03/gelman.pdf">good practice</a>. So, now that we have <code>N_dp</code> point masses, all that remains is to generate corresponding probabilities. Following the recipe above:</p>
<div class="highlight"><pre><span></span>v = pymc.Beta('v', alpha=1, beta=alpha, size=N_dp)

@pymc.deterministic
def p(v=v):
    """Calculate Dirichlet probabilities"""
    # Probabilities from betas
    value = [u * np.prod(1 - v[:i]) for i, u in enumerate(v)]
    # Enforce sum-to-unity constraint
    value[-1] = 1 - sum(value[:-1])
    return value
</pre></div>
<p>This is where you really appreciate Python's <a href="http://docs.python.org/tutorial/datastructures.html#list-comprehensions">list comprehension</a> idiom. In fact, were it not for the need to ensure that the array of probabilities sums to one, <code>p</code> could have been specified in a single line.</p>
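<p>For what it's worth, the cumulative products can also be vectorized with <code>np.cumprod</code>. A sketch using plain NumPy arrays (with an arbitrary $\alpha$ and seed of my choosing) rather than PyMC nodes:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, N_dp = 2.0, 100
v = rng.beta(1, alpha, size=N_dp)

# p_i = v_i * prod_{j<i}(1 - v_j), via a cumulative product shifted by one
p = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
p[-1] = 1.0 - p[:-1].sum()   # same sum-to-unity fix as in the deterministic above
```

<p>Apart from the adjustment to the final element, this produces exactly the same weights as the list comprehension, and avoids the repeated <code>np.prod</code> calls.</p>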
<p>The final step involves using the Dirichlet probabilities to generate indices to the appropriate point masses. This is realized using a categorical mass function:</p>
<div class="highlight"><pre><span></span>z = pymc.Categorical('z', p, size=len(set(counties)))
</pre></div>
<p>These indices, in turn, are used to index the random effects, which are used as random intercepts for the model:</p>
<div class="highlight"><pre><span></span>a = pymc.Lambda('a', lambda z=z, theta=theta: theta[z])
</pre></div>
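<p>In plain NumPy terms, <code>z</code> simply gathers one point mass per county, so any counties assigned the same index share an intercept; this is how the DP induces clustering among the random effects. A toy illustration (the values below are made up):</p>

```python
import numpy as np

theta = np.array([0.1, -0.4, 0.9, 0.3])   # hypothetical point masses
z = np.array([2, 0, 2, 1])                # hypothetical county indices
a = theta[z]                              # county-level intercepts
# counties 0 and 2 draw the same point mass, theta[2]
```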
<p>Substitution of the above code into Gelman and Hill's original model produces reasonable results. The expected value of $\alpha$ is approximately 5, as shown by the posterior output below:</p>
<p><img alt="" src="http://dl.dropbox.com/u/233041/images/alpha.png"></p>
<p>Here is a random sample taken from the DP:</p>
<p><img alt="" src="http://dl.dropbox.com/u/233041/images/dphist.png"></p>
<p>But is the model better? One metric for model comparison is the <a href="http://en.wikipedia.org/wiki/Deviance_information_criterion">deviance information criterion</a> (DIC), which appears to strongly favor the DP random effect (smaller values are better):</p>
<div class="highlight"><pre><span></span>In [11]: M.dic
Out[11]: 2138.7806225675804
In [12]: M_dp.dic
Out[12]: 1993.0894265799602
</pre></div>
<p>If you are interested in viewing the model code in its entirety, I have uploaded it to <a href="https://github.com/fonnesbeck/pymc_radon/blob/master/radon_dp.py">my fork of Whit's code</a>.</p>Automatic Missing Data Imputation with PyMC2013-08-18T00:00:00-05:002013-08-18T00:00:00-05:00Christopher Fonnesbecktag:stronginference.com,2013-08-18:/missing-data-imputation.html<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>A distinct advantage of using Bayesian inference is in its universal application of probability models for providing inference. As such, all components of a Bayesian model are specified using probability distributions for either describing a sampling model (in the case of observed data) or characterizing the uncertainty of an unknown …</p><script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>A distinct advantage of using Bayesian inference is in its universal application of probability models for providing inference. As such, all components of a Bayesian model are specified using probability distributions for either describing a sampling model (in the case of observed data) or characterizing the uncertainty of an unknown quantity. This means that missing data are treated the same as parameters, and so imputation proceeds very much like estimation. When fitting Bayesian models using Markov chain Monte Carlo (MCMC), imputing missing values usually requires only a few extra lines of code, based on the sampling distribution of the missing data and the associated (usually unknown) parameters. Using <a href="http://github.com/pymc-devs/pymc" title="PyMC on GitHhub">PyMC built from the latest development code</a>, missing data imputation can be done automatically.</p>
<h2>Types of Missing Data</h2>
<p>The appropriate treatment of missing data depends strongly on how the data came to be missing from the dataset. These mechanisms can be broadly classified into three groups, according to how much information and effort is required to deal with them adequately.</p>
<h3>Missing completely at random (MCAR)</h3>
<p>If data are MCAR, then the probability that any given datum is missing is equal over the whole dataset. In other words, each datum that is present had the same probability of being missing as each datum that is absent. This implies that ignoring the missing data will not bias inference.</p>
<h3>Missing at random (MAR)</h3>
<p>MAR allows for data to be missing according to a random process, but is more general than MCAR in that units need not have equal probabilities of being missing. The constraint here is that missingness may only depend on information that is fully observed. For example, the reporting of income on surveys may vary according to some measured factor, such as age, race or sex. We can thus account for heterogeneity in the probability of reporting income by controlling for the measured covariate in whatever model is used for inference.</p>
<h3>Missing not at random (MNAR)</h3>
<p>When the probability of missing data varies according to information that is not available, this is classified as MNAR. This can either be because suitable covariates for explaining missingness have not been recorded (or are otherwise unavailable) or the probability of being missing depends on the value of the missing datum itself. Extending the previous example, if the probability of reporting income varied according to income itself, this is missing not at random.</p>
<p>In each of these situations, the missing data may be imputed using a sampling model, though in the case of missing not at random, it may be difficult to validate the assumptions required to specify such a model. For the purposes of quickly demonstrating automatic imputation in PyMC, I will illustrate using data that is MCAR.</p>
<h2>Implementing imputation in PyMC</h2>
<p>One of the recurring examples in the PyMC documentation is the coal mining disasters dataset from <a href="http://biomet.oxfordjournals.org/cgi/content/short/66/1/191" title="Jarrett RG (1979). A Note on the Intervals Between Coal Mining Disasters. Biometrika, 66, 191–193.">Jarrett 1979</a>. This is a simple longitudinal dataset consisting of counts of coal mining disasters in the U.K. between 1851 and 1962. The objective of the analysis is to identify a switch point in the rate of disasters, from a relatively high rate early in the time series to a lower one later on. Hence, we are interested in estimating two rates, in addition to the year after which the rate changed.</p>
<p>In order to illustrate imputation, I have randomly replaced the data for two years with a missing data placeholder value, -999:</p>
<div class="highlight"><pre><span></span>disasters_array = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
</pre></div>
<p>Here, the <code>np</code> prefix indicates that the <code>array</code> function comes from the <a href="http://numpy.scipy.org/">Numpy</a> module. PyMC is able to recognize the presence of missing values when we use Numpy's MaskedArray class to contain our data. The masked array is instantiated via the <code>masked_array</code> function, using the original data array and a boolean mask as arguments: </p>
<div class="highlight"><pre><span></span> masked_values = np.ma.masked_array(disasters_array,
mask=disasters_array==-999)
</pre></div>
<p>Of course, my use of -999 to indicate missing data was entirely arbitrary, so feel free to use any appropriate value, so long as it can be identified and masked (obviously, small positive integers would not have been appropriate here). Let's have a look at the masked array:</p>
<div class="highlight"><pre><span></span>masked_array(data = [4 5 4 0 1 4 3 4 0 6 3 3 4 0 2 6 3 3 5 4 5 3 1 4
4 1 5 5 3 4 2 5 2 2 3 4 2 1 3 -- 2 1 1 1 1 3 0 0 1 0 1 1 0 0 3 1
0 3 2 2 0 1 1 1 0 1 0 1 0 0 0 2 1 0 0 0 1 1 0 2 3 3 1 -- 2 1 1 1
1 2 4 2 0 0 1 4 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1],
mask = [False False False False False False False False False
False False False False False False False False False False False
False False False False False False False False False False False
False False False False False False False False True False False
False False False False False False False False False False False
False False False False False False False False False False False
False False False False False False False False False False False
False False False False False False False False True False False
False False False False False False False False False False False
False False False False False False False False False False False
False False False],
fill_value = 999999)
</pre></div>
<p>Notice that the placeholder values have disappeared from the data, and the array has a <code>mask</code> attribute that identifies the indices for the missing values.</p>
<p>Beyond the construction of a masked array, there is nothing else that needs to be done to accommodate missing values in a PyMC model.</p>
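<p>As a quick aside (plain NumPy, outside of any PyMC model), the mask exposes exactly which positions were flagged, and reductions such as the mean automatically skip the masked entries (the short array below is invented for illustration):</p>

```python
import numpy as np

data = np.array([4, 5, -999, 0, 1, -999, 3])     # -999 marks missing, as above
masked = np.ma.masked_array(data, mask=(data == -999))

missing_idx = np.where(masked.mask)[0]           # indices of the missing entries
observed_mean = masked.mean()                    # computed over observed values only
```

<p>Here <code>missing_idx</code> is <code>[2, 5]</code>, and the mean is taken over the five observed values.</p>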
<p>First, we need to specify prior distributions for the unknown parameters, which I call <code>switch</code> (the switch point), <code>early</code> (the early mean) and <code>late</code> (the late mean). An appropriate non-informative prior for the switch point is a discrete uniform random variable over the range of years represented by the data. Since the rates must be positive, I use identical weakly-informative exponential distributions:</p>
<div class="highlight"><pre><span></span># Switchpoint
switch = DiscreteUniform('switch', lower=0, upper=110)
# Early mean
early = Exponential('early', beta=1)
# Late mean
late = Exponential('late', beta=1)
</pre></div>
<p>The only tricky part of the model is assigning the appropriate rate parameter to each observation. Though the two rates and the switch point are stochastic, in the sense that we have used probability models to describe our uncertainty in their true values, the membership of each observation to either the early or late rate is a deterministic function of the stochastics. Thus, we set up a deterministic node that assigns a rate to each observation depending on the location of the switch point at the current iteration of the MCMC algorithm:</p>
<div class="highlight"><pre><span></span>@deterministic
def rates(s=switch, e=early, l=late):
"""Allocate appropriate mean to time series"""
out = np.empty(len(disasters_array))
# Early mean prior to switchpoint
out[:s] = e
# Late mean following switchpoint
out[s:] = l
return out
</pre></div>
<p>Finally, the data likelihood comprises the annual counts of disasters being modeled as Poisson random variables, conditional on the parameters assigned in the <code>rates</code> node above. The masked array is specified as the value of the stochastic node, and flagged as data via the <code>observed</code> argument.</p>
<div class="highlight"><pre><span></span>disasters = Poisson('disasters', mu=rates, value=masked_values, observed=True)
</pre></div>
<p>If we run the model, then query the <code>disasters</code> node for posterior statistics, we can obtain a summary of the estimated number of disasters in both of the missing years.</p>
<div class="highlight"><pre><span></span>In [9]: DisasterModel.disasters.stats()
Out[9]:
{'95% HPD interval': array([[ 0., 6.],
[ 0., 3.]]),
'mc error': array([ 0.11645149, 0.03479713]),
'mean': array([ 2.2246, 0.91 ]),
'n': 5000,
'quantiles': {2.5: array([ 0., 0.]),
25: array([ 1., 0.]),
50: array([ 2., 1.]),
75: array([ 3., 1.]),
97.5: array([ 7., 3.])},
'standard deviation': array([ 1.88206133, 0.92536479])}
</pre></div>
<p>Clearly, this is a rather trivial example, but it serves to illustrate how easy it can be to deal with missing values in PyMC. Though not applicable here, it would be similarly easy to handle MAR data by constructing a data likelihood whose parameters are functions of one or more covariates. </p>
<p>Automatic imputation is a new feature in PyMC, and is currently available only in the <a href="http://github.com/pymc-devs/pymc">development codebase</a>. It will hopefully appear in the feature set of a future release.</p>