This course will provide an introduction to practical methods for making inferences from data using probabilistic models for observed and missing data. This approach is an alternative to frequentist statistics, the presently dominant inference framework in the sciences, and it supports a common-sense interpretation of statistical conclusions by using probabilities explicitly to quantify the uncertainty of inferences. The course will introduce Bayesian inference from first principles, assuming only basic probability and statistics, elementary calculus, and linear algebra.

We will begin with the fundamental Bayesian principle of treating all unknowns as random variables, introducing the basic concepts (e.g., conjugate and noninformative priors) and the standard probability models (normal, binomial, Poisson) through examples. Next, we will discuss multi-parameter problems and large-sample asymptotic results leading to normal approximations to posterior distributions. We will then continue with hierarchical models, model construction and checking, sensitivity analysis, and model comparison. The course will conclude by explicitly contrasting the frequentist and Bayesian treatments of null hypothesis testing and by presenting Bayesian formulations of classical statistical tests.

Students will become familiar with the software packages R and JAGS, which allow complex Bayesian models to be fit with minimal programming expertise. Familiarity with Matlab or C++ programming is required.
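To give a flavor of the conjugate-prior idea mentioned above, here is a minimal sketch of Bayesian updating for a binomial model with a Beta prior. It is written in Python purely for illustration (the course itself uses R and JAGS), and the function name and the particular numbers are hypothetical examples, not course material.

```python
# Minimal sketch of conjugate Bayesian updating: with a Beta(alpha, beta)
# prior on a binomial success probability p, the posterior after observing
# k successes and (n - k) failures is again a Beta distribution --
# conjugacy means the posterior stays in the same family as the prior.

def beta_binomial_update(alpha, beta, successes, failures):
    """Return the (alpha, beta) parameters of the posterior Beta distribution."""
    return alpha + successes, beta + failures

# Start from a uniform prior, Beta(1, 1), and observe 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(1, 1, successes=7, failures=3)

# Posterior is Beta(8, 4); its mean a/(a+b) = 8/12 quantifies our updated
# belief about p, with uncertainty carried by the full distribution.
post_mean = a_post / (a_post + b_post)
print(a_post, b_post, round(post_mean, 3))
```

The closed-form update is exactly why conjugate priors are pedagogically convenient: no simulation is needed. For the non-conjugate, multi-parameter, and hierarchical models covered later in the course, one turns to samplers such as JAGS instead.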