Conjugate Posterior Inference.

After we use a conjugate prior to obtain a posterior distribution in a familiar family, we get simple formulas for the posterior mean and variance. However, we often want to explore other properties of the posterior distribution, for instance: $P(\theta \in A \mid y)$ for an arbitrary set $A$; the mean and variance of some function $f(\theta)$; or, when comparing two parameters, the distribution of $\theta_1 \pm \theta_2$, $\theta_1/\theta_2$, $\max(\theta_1, \theta_2)$, etc. In these situations the exact distribution is difficult to obtain, but we can generate random samples from the known posterior distribution of the unknown parameter and use them to approximate the distribution of interest. This process is called Monte Carlo approximation.
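As a minimal sketch of this idea, suppose (hypothetically) we compare two binomial proportions under uniform Beta$(1,1)$ priors: group 1 has 30 successes in 50 trials, giving a Beta$(31, 21)$ posterior, and group 2 has 20 successes in 50 trials, giving a Beta$(21, 31)$ posterior. Quantities such as $P(\theta_1 > \theta_2 \mid y)$ or $E[\theta_1/\theta_2 \mid y]$ have no simple closed form, but Monte Carlo approximation handles them directly:

```python
import random

random.seed(42)

# Hypothetical data: conjugate Beta-binomial posteriors for two groups.
# Group 1: 30/50 successes, Beta(1, 1) prior -> Beta(31, 21) posterior.
# Group 2: 20/50 successes, Beta(1, 1) prior -> Beta(21, 31) posterior.
S = 10_000  # number of Monte Carlo samples

theta1 = [random.betavariate(31, 21) for _ in range(S)]
theta2 = [random.betavariate(21, 31) for _ in range(S)]

# Monte Carlo approximations of posterior quantities without simple formulas:
prob_greater = sum(t1 > t2 for t1, t2 in zip(theta1, theta2)) / S  # P(theta1 > theta2 | y)
diff_mean = sum(t1 - t2 for t1, t2 in zip(theta1, theta2)) / S     # E[theta1 - theta2 | y]
ratio_mean = sum(t1 / t2 for t1, t2 in zip(theta1, theta2)) / S    # E[theta1 / theta2 | y]

print(f"P(theta1 > theta2 | y) is approximately {prob_greater:.3f}")
print(f"E[theta1 - theta2 | y] is approximately {diff_mean:.3f}")
print(f"E[theta1 / theta2 | y] is approximately {ratio_mean:.3f}")
```

Each approximation converges to the corresponding posterior quantity as $S$ grows, by the law of large numbers; increasing $S$ shrinks the Monte Carlo error at rate $1/\sqrt{S}$.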