Mixtures can be found in a large number of models, e.g., clustering and classification models, models of annotation, models with hierarchical structures, topic models, and many others. In many of these cases the number of mixture components is unknown: choosing too few clusters means potentially distinctive groups end up merged together, while choosing too many introduces noise (overly fine-grained clusters get created that should have been merged). This motivates the need for a nonparametric mixture in which the number of components can adapt to and grow with the data.

In this blog post I discuss the use of a stick-breaking process in the context of an unbounded mixture. The mixture weights form an infinite categorical distribution, constructed by successively breaking a stick of length one and measuring the broken-off pieces. Using \(\{\nu_i\}_{i=1}^{\infty}\) for the breaking proportions and \(\{ \pi_i(\nu) \}_{i=1}^{\infty}\) for the lengths of the broken pieces, we have:

\[\begin{equation*} \begin{split} \pi_1(\nu) & = \nu_1 \\ \pi_2(\nu) & = \nu_2 [1 - \pi_1(\nu)] = \nu_2 (1 - \nu_1) \\ \pi_3(\nu) & = \nu_3 [1 - \pi_1(\nu) - \pi_2(\nu)] = \nu_3 (1 - \nu_1) (1-\nu_2) \\ ... \\ \pi_i(\nu) & = \nu_i (1 - \sum_{t=1}^{i-1} \pi_t(\nu)) = \nu_i \prod_{t=1}^{i-1} (1-\nu_t) \\ ... \end{split} \end{equation*}\]
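As a quick sanity check, here is a small NumPy sketch of the construction (the helper name `stick_breaking_weights` is mine, not anything standard):

```python
import numpy as np

def stick_breaking_weights(nu):
    """Turn breaking proportions nu_i into stick lengths pi_i(nu).

    pi_i = nu_i * prod_{t<i} (1 - nu_t): each proportion is taken from
    whatever remains of the unit stick after the previous breaks.
    """
    nu = np.asarray(nu, dtype=float)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - nu[:-1])))
    return nu * remaining

# Breaking off half of what is left each time: [0.5, 0.25, 0.125]
print(stick_breaking_weights([0.5, 0.5, 0.5]))
```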

More formally, the breaking proportions are assumed to be drawn from a $\mathsf{Beta}(1,\alpha)$ distribution. The hyperparameter $\alpha$ controls how the number of components grows with the data: small values favor fewer clusters, while larger values favor more. Focusing, for simplicity, on a mixture of unigrams applied to, say, a collection of short documents, we have the following generative process (a small simulation of it is sketched right after the list):

  • For every cluster i = 1 to $\infty$:
    • Draw breaking proportion $\nu_i \sim \mathsf{Beta}(1,\alpha)$
    • Draw a unigram distribution (i.e., a distribution over the vocabulary space) $\beta_i \sim \mathsf{Dirichlet}(\eta)$
  • For every document d = 1 to M:
    • Draw a cluster $z_d \sim \mathsf{Categorical} (\pi(\nu))$
    • For every word position n = 1 to $N_d$:
      • Draw a word $w_{d,n} \sim \mathsf{Categorical} ( \beta_{z_d} )$
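Under the truncation used later for inference (close the stick at some level T), this process is easy to simulate. Below is a minimal sketch under assumed toy settings (fixed document length, symmetric priors; the function name `simulate_corpus` is mine):

```python
import numpy as np

def simulate_corpus(M, N, V, alpha=1.0, eta=0.1, T=50, seed=0):
    """Sample a toy corpus from the (truncated) generative process above.

    M documents of N words each over a vocabulary of size V; T is the
    truncation level used to approximate the infinite set of clusters.
    """
    rng = np.random.default_rng(seed)
    nu = rng.beta(1.0, alpha, size=T)                 # breaking proportions
    nu[-1] = 1.0                                      # close the stick at T
    pi = nu * np.concatenate(([1.0], np.cumprod(1.0 - nu[:-1])))
    beta = rng.dirichlet(np.full(V, eta), size=T)     # per-cluster unigram dists
    z = rng.choice(T, size=M, p=pi)                   # cluster per document
    docs = [rng.choice(V, size=N, p=beta[z[d]]) for d in range(M)]
    return docs, z

docs, z = simulate_corpus(M=5, N=10, V=20)
print(z, docs[0])
```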

Estimating the parameters of the above mixture is straightforward with variational inference because the stick-breaking process nicely complies with conjugacy. The infinite mixture is approximated with truncated variational distributions, i.e., we fix a truncation level T and let the T-th stick take, under the variational distribution, the remaining mass of the unit stick ($q(\nu_T = 1) = 1$); thus subsequent sticks have no mass under the variational distribution.

I showed in a previous post that when the complete conditionals are in the exponential family, the variational distributions take the same form and have the natural parameters equal to the expected value (under the variational distribution) of the natural parameter of the corresponding complete conditional. We will use this elegant result in the derivations below.

Let’s first compute the complete conditional for a unigram distribution:

\[\begin{equation*} \begin{split} p(\beta_i | \beta_{-i}, \nu, z, w, \alpha, \eta) & \propto p(\beta_i | \eta) \prod_{d,n}^{M,N_d} p(w_{d,n} | \beta_i)^{\mathsf{I}(z_d = i)} \\ & \propto \prod_{j=1}^{V} \beta_{i,j}^{\eta_j - 1} \prod_{d,n}^{M,N_d} \beta_{i,w_{d,n}}^{\mathsf{I}(z_d = i)} \\ & \propto \prod_{j=1}^{V} \beta_{i,j}^{\eta_j - 1} \prod_{d,n,j}^{M,N_d,V} \beta_{i,j}^{\mathsf{I}(z_d = i)\mathsf{I}(w_{d,n} = j)} \\ & \propto \prod_{j=1}^{V} \beta_{i,j}^{ \left[ \eta_j + \sum_{d,n}^{M,N_d} \mathsf{I}(z_d = i)\mathsf{I}(w_{d,n} = j) \right] - 1 } \end{split} \end{equation*}\]

The just-derived complete conditional is Dirichlet distributed. The corresponding variational distribution will thus have the same form, i.e., $q(\beta_{i} \mid \lambda_{i}) = \mathsf{Dirichlet}(\lambda_{i})$, with its natural parameter equal to the expected value (under the variational distribution) of the natural parameter of the conditional:

\[\begin{equation*} \lambda_{i,j} = \eta_j + \sum_{d,n}^{M,N_d} \mathsf{I}(w_{d,n} = j) \mathsf{E}_q [ \mathsf{I}(z_d = i) ] \end{equation*}\]
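In code, the sum over word positions and the indicator $\mathsf{I}(w_{d,n} = j)$ collapse into a document-term count matrix, so the update is a single matrix product. A minimal sketch, assuming the responsibilities $\phi$ are stored as an $(M, T)$ array:

```python
import numpy as np

def update_lambda(phi, word_counts, eta):
    """Variational update for the per-cluster Dirichlet parameters.

    phi:         (M, T) responsibilities, phi[d, i] = q(z_d = i)
    word_counts: (M, V) word counts per document
    eta:         scalar (symmetric prior) or length-V array
    Returns lam: (T, V) with lam[i, j] = eta_j + sum_d phi[d, i] * count(d, j)
    """
    return eta + phi.T @ word_counts
```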

Applying the same set of steps for the per-document cluster assignments, we have:

\[\begin{equation*} \begin{split} p(z_d = i | ... ) & \propto p(z_d = i | \nu) \prod_{n=1}^{N_d} p(w_{d,n} | \beta_i) \\ & \propto \pi_i(\nu) \prod_{n=1}^{N_d} \beta_{i,w_{d,n}} \\ & \propto \nu_i \prod_{t=1}^{i-1} (1 - \nu_t) \prod_{n=1}^{N_d} \beta_{i,w_{d,n}} \\ & \propto \exp \left\{ \log \nu_i + \sum_{t=1}^{i-1} \log (1 - \nu_t) + \sum_{n=1}^{N_d} \log \beta_{i,w_{d,n}} \right\} \end{split} \end{equation*}\]

The complete conditional for the per-document cluster assignments is a Categorical distribution, leading to $q(z_{d} \mid \phi_{d}) = \mathsf{Categorical}(\phi_{d})$, where:

\[\begin{equation*} \log \phi_{d,i} = \mathsf{E}_q [ \log \nu_i ] + \sum_{t=1}^{i-1} \mathsf{E}_q [ \log (1 - \nu_t) ] + \sum_{n=1}^{N_d} \mathsf{E}_q [ \log \beta_{i,w_{d,n}} ] + const. \end{equation*}\]
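A sketch of this update, assuming the expectation terms (whose closed forms are derived at the end of the post) are already available as arrays:

```python
import numpy as np
from scipy.special import logsumexp

def update_phi(word_counts, E_log_nu, E_log_1m_nu, E_log_beta):
    """Variational update for the per-document cluster responsibilities.

    word_counts: (M, V); E_log_nu, E_log_1m_nu: (T,); E_log_beta: (T, V).
    """
    # E_q[log pi_i(nu)] = E_q[log nu_i] + sum_{t<i} E_q[log(1 - nu_t)]
    E_log_pi = E_log_nu + np.concatenate(([0.0], np.cumsum(E_log_1m_nu[:-1])))
    # log phi_{d,i} up to an additive constant, then normalize per document.
    log_phi = E_log_pi[None, :] + word_counts @ E_log_beta.T
    return np.exp(log_phi - logsumexp(log_phi, axis=1, keepdims=True))
```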

Finally, let’s compute the complete conditional for the breaking proportions:

\[\begin{equation*} p(\nu_i | ... ) \propto p(\nu_i | 1, \alpha) \prod_{d=1}^{M} p(z_d | \nu) \end{equation*}\]

The second term needs a bit more work, so let’s focus on that for a moment:

\[\begin{equation*} \begin{split} p(z_d | \nu) & = \prod_{i=1}^{\infty} p(z_{d} = i | \nu) ^ { \mathsf{I}(z_d = i) } \\ & = \prod_{i=1}^{\infty} \pi_i(\nu) ^ { \mathsf{I}(z_d = i) } \\ & = \prod_{i=1}^{\infty} \left[ \nu_i \prod_{t=1}^{i-1} (1 - \nu_t) \right] ^ { \mathsf{I}(z_d = i) } \\ & = \prod_{i=1}^{\infty} \nu_i^{\mathsf{I}(z_d = i)} (1 - \nu_i)^{\mathsf{I}(z_d > i)} \end{split} \end{equation*}\]

Going back to the derivation of the complete conditional for the breaking proportions, we have:

\[\begin{equation*} \begin{split} p(\nu_i | ... ) & \propto p(\nu_i | 1, \alpha) \prod_{d=1}^{M} p(z_d | \nu) \\ & \propto \nu_i^{1 - 1} (1 - \nu_i)^{\alpha - 1} \prod_{d=1}^{M} \nu_i^{\mathsf{I}(z_d = i)} (1 - \nu_i)^{\mathsf{I}(z_d > i)} \\ & \propto \nu_i^{ \left[ 1 + \sum_{d=1}^{M} \mathsf{I}(z_d = i) \right] - 1} (1 - \nu_i)^{ \left[ \alpha + \sum_{d=1}^{M} \mathsf{I}(z_d > i) \right] - 1} \end{split} \end{equation*}\]

We can see the conditional is a Beta distribution. We’ll thus have $q(\nu_i \mid \gamma_i) = \mathsf{Beta}(\gamma_{i,1}, \gamma_{i,2})$, where:

\[\begin{equation*} \begin{split} \gamma_{i,1} & = 1 + \sum_{d=1}^{M} \mathsf{E}_q [ \mathsf{I}(z_d = i) ] \\ \gamma_{i,2} & = \alpha + \sum_{d=1}^{M} \mathsf{E}_q [ \mathsf{I}(z_d > i) ] \end{split} \end{equation*}\]
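Both sums are simple functions of the responsibilities; a sketch:

```python
import numpy as np

def update_gamma(phi, alpha):
    """Variational update for the Beta parameters of the breaking proportions.

    phi: (M, T) responsibilities. Returns gamma: (T, 2) with
    gamma[i, 0] = 1 + sum_d phi[d, i]
    gamma[i, 1] = alpha + sum_d sum_{t>i} phi[d, t]
    """
    counts = phi.sum(axis=0)                       # sum_d E_q[I(z_d = i)]
    tail = np.cumsum(counts[::-1])[::-1] - counts  # sum_d E_q[I(z_d > i)]
    return np.stack([1.0 + counts, alpha + tail], axis=1)
```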

Knowing the functional form of the variational distributions, computing the expectations present in the update formulas derived above is straightforward:

\[\begin{equation*} \begin{split} \mathsf{E}_q [ \mathsf{I}(z_d = i) ] & = q(z_d = i) = \phi_{d,i} \\ \mathsf{E}_q [ \mathsf{I}(z_d > i) ] & = q(z_d > i) = \sum_{t=i+1}^{T} q(z_d = t) = \sum_{t=i+1}^{T} \phi_{d,t} \\ \mathsf{E}_q [ \log \nu_i ] & = \psi (\gamma_{i,1}) - \psi (\gamma_{i,1} + \gamma_{i,2}) \\ \mathsf{E}_q [ \log (1 - \nu_i) ] & = \psi (\gamma_{i,2}) - \psi (\gamma_{i,1} + \gamma_{i,2}) \\ \mathsf{E}_q [ \log \beta_{i,w_{d,n}} ] & = \psi (\lambda_{i,w_{d,n}}) - \psi \left( \sum_{j=1}^{V} \lambda_{i,j} \right) \end{split} \end{equation*}\]

For the computation of the second expectation I used the fact that the variational distribution has no mass above the truncation level T. For the remaining expectations I used a well-known result about exponential families, i.e., the first derivative of the log normalizer is equal to the expected value of the sufficient statistics. You can usually find these expectations in relevant textbooks (or Wikipedia).
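For completeness, here is how all five expectations can be computed from the variational parameters (the function name is mine):

```python
import numpy as np
from scipy.special import digamma

def expected_statistics(phi, gamma, lam):
    """Closed-form expectations under the (truncated) variational distributions.

    phi: (M, T) responsibilities, gamma: (T, 2) Beta parameters,
    lam: (T, V) Dirichlet parameters.
    """
    E_z_eq = phi                                   # E_q[I(z_d = i)]
    # E_q[I(z_d > i)] = sum_{t=i+1}^{T} phi_{d,t}; rows of phi sum to one
    # because q places no mass above the truncation level T.
    E_z_gt = 1.0 - np.cumsum(phi, axis=1)
    E_log_nu = digamma(gamma[:, 0]) - digamma(gamma.sum(axis=1))
    E_log_1m_nu = digamma(gamma[:, 1]) - digamma(gamma.sum(axis=1))
    E_log_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    return E_z_eq, E_z_gt, E_log_nu, E_log_1m_nu, E_log_beta
```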

The inference algorithm iterates over the derived updates until the variational objective function (the ELBO) plateaus:

\[\begin{equation*} \mathcal{L} = \mathsf{E}_q[ \log p(\nu, \beta, z, w | \alpha, \eta) ] - \mathsf{E}_q[ \log q(\nu, \beta, z | \gamma, \lambda, \phi) ] \end{equation*}\]

I’ll leave the expansion of the ELBO as an exercise; it should be straightforward given the work up to this point.
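For reference, here is a minimal coordinate-ascent loop tying the updates together. It is a sketch under a few simplifying assumptions (document-term matrix input, symmetric $\eta$, a Beta factor fit for every stick rather than pinning $\nu_T = 1$, a fixed number of sweeps instead of monitoring the ELBO; all names are mine):

```python
import numpy as np
from scipy.special import digamma, logsumexp

def fit(word_counts, T=20, alpha=1.0, eta=0.1, n_iter=100, seed=0):
    """Coordinate-ascent variational inference for the truncated mixture.

    word_counts: (M, V) document-term matrix.
    Returns the variational parameters phi (M, T), lam (T, V), gamma (T, 2).
    """
    rng = np.random.default_rng(seed)
    M, V = word_counts.shape
    lam = eta + rng.gamma(1.0, 1.0, size=(T, V))      # random initialization
    gamma = np.column_stack([np.ones(T), np.full(T, alpha)])
    for _ in range(n_iter):
        # Expectations under q (the digamma formulas derived above).
        E_log_nu = digamma(gamma[:, 0]) - digamma(gamma.sum(axis=1))
        E_log_1m_nu = digamma(gamma[:, 1]) - digamma(gamma.sum(axis=1))
        E_log_pi = E_log_nu + np.concatenate(([0.0], np.cumsum(E_log_1m_nu[:-1])))
        E_log_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
        # Update responsibilities, then the global parameters.
        log_phi = E_log_pi[None, :] + word_counts @ E_log_beta.T
        phi = np.exp(log_phi - logsumexp(log_phi, axis=1, keepdims=True))
        lam = eta + phi.T @ word_counts
        counts = phi.sum(axis=0)
        tail = np.cumsum(counts[::-1])[::-1] - counts
        gamma = np.column_stack([1.0 + counts, alpha + tail])
    return phi, lam, gamma
```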