# Conjugate Models

Conjugate family and base models for Bayesian analysis.

## Conjugate

Definition: A class $P$ of prior distributions for $\theta$ is called conjugate for a sampling model $p(y \mid \theta)$ if $p(\theta) \in P \Rightarrow p(\theta \mid y) \in P$.

The reason for using conjugate priors is that the kernel of the likelihood function and the kernel of the prior's probability density function have the same functional form, which leads to a posterior that belongs to the same distribution family we started with.

Conjugate priors make posterior calculation easy, although they might not accurately represent our actual prior information. Still, conjugate prior families are flexible and computationally tractable.

## Conjugate Family

- Binomial - Beta Model
- Poisson - Gamma Model
- Normal Model

## Binomial - Beta

### Assumption:

$Y \sim Bernoulli (\theta)$ ; $P(y \mid \theta) = \theta^{y} (1-\theta)^{1-y}$

### Likelihood:

For an i.i.d. sample $y_1, \dots, y_n$,

$$p(y_1, \dots, y_n \mid \theta) = \prod_{i=1}^{n} \theta^{y_i} (1-\theta)^{1-y_i} = \theta^{\sum{y_i}} (1-\theta)^{n-\sum{y_i}}$$

### Prior:

$\theta \sim Beta(\alpha, \beta)$ ; $p(\theta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \theta^{\alpha-1} (1-\theta)^{\beta-1}$

### Posterior:

$$p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta) \propto \theta^{\sum{y_i}+\alpha-1} (1-\theta)^{n-\sum{y_i}+\beta-1}$$

Thus we have

$$\theta \mid y \sim Beta(\tilde{\alpha}, \tilde{\beta})$$

where $\tilde{\alpha}=\sum{y_i}+\alpha$, $\tilde{\beta}=n-\sum{y_i}+\beta$

In most cases, $\theta$ refers to the chance that an event will happen. Thus $\sum{y_i}$ is often interpreted as the number of successes and $n-\sum{y_i}$ as the number of failures. The prior may come from empirical records that contain information about $\theta$ obtained from previous studies, or simply from intuition.
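The Binomial - Beta update above can be sketched numerically; this is a minimal illustration (the prior hyperparameters and observed data below are hypothetical), using `scipy.stats` for the posterior distribution:

```python
# Beta-Binomial conjugate update: a minimal sketch with made-up data.
from scipy import stats

alpha, beta = 2.0, 2.0           # hypothetical Beta(2, 2) prior on theta
y = [1, 0, 1, 1, 0, 1, 1, 1]     # hypothetical Bernoulli observations
n, s = len(y), sum(y)            # n = 8 trials, s = 6 successes

# Conjugacy: posterior is Beta(alpha + sum(y_i), beta + n - sum(y_i))
alpha_post = alpha + s
beta_post = beta + n - s
posterior = stats.beta(alpha_post, beta_post)

print(f"posterior: Beta({alpha_post:.0f}, {beta_post:.0f})")
print(f"posterior mean: {posterior.mean():.3f}")  # (alpha + s) / (alpha + beta + n)
```

With these numbers the posterior is $Beta(8, 4)$, with mean $8/12 \approx 0.667$, pulled between the prior mean $0.5$ and the sample proportion $0.75$.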

## Poisson - Gamma

### Assumption:

$Y \sim Poisson (\theta)$ ; $P(y \mid \theta)=\frac{\theta^y}{y!} e^{-\theta}$

### Likelihood:

For an i.i.d. sample $y_1, \dots, y_n$,

$$p(y_1, \dots, y_n \mid \theta) = \prod_{i=1}^{n} \frac{\theta^{y_i}}{y_i!} e^{-\theta} \propto \theta^{\sum{y_i}} e^{-n\theta}$$

### Prior:

$\theta \sim Gamma(\alpha, \beta)$ ; $p(\theta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta\theta}$

### Posterior:

$$p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta) \propto \theta^{\sum{y_i}+\alpha-1} e^{-(n+\beta)\theta}$$

Thus we have

$$\theta \mid y \sim Gamma(\tilde{\alpha}, \tilde{\beta})$$

where $\tilde{\alpha}=\sum{y_i}+\alpha, \tilde{\beta}=n+\beta$.
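The Poisson - Gamma update works the same way; here is a minimal numerical sketch (the prior hyperparameters and counts below are hypothetical). Note that `scipy.stats.gamma` is parameterized by shape and scale, so the rate $\tilde{\beta}$ enters as `scale = 1 / rate`:

```python
# Poisson-Gamma conjugate update: a minimal sketch with made-up data.
from scipy import stats

alpha, beta = 3.0, 1.0        # hypothetical Gamma prior: shape alpha, rate beta
y = [2, 4, 3, 5, 1, 3]        # hypothetical Poisson counts
n, s = len(y), sum(y)         # n = 6 observations, s = 18 total events

# Conjugacy: posterior is Gamma(alpha + sum(y_i), beta + n)
alpha_post = alpha + s        # shape
beta_post = beta + n          # rate
posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)

print(f"posterior: Gamma({alpha_post:.0f}, {beta_post:.0f})")
print(f"posterior mean: {posterior.mean():.3f}")  # (alpha + s) / (beta + n)
```

With these numbers the posterior is $Gamma(21, 7)$ with mean $21/7 = 3$, a weighted compromise between the prior mean $\alpha/\beta = 3$ and the sample mean $\bar{y} = 3$ (which happen to coincide here).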