**Review of Module 6 of Data Analysis for Social Scientists (MITx, edX) – Assessing and Deriving Estimators, Confidence Intervals, and Hypothesis Testing**

The material felt much easier to grasp than in the previous weeks. Maybe that's because I had been regularly using confidence intervals and hypothesis testing, so the concepts were already familiar to me. It would be interesting to see the class's average scores to check whether the topic is simply easier for the class in general.

Here is a brief summary of what was taught this week:

Criteria to consider when **assessing estimators** are bias, efficiency (measured by the mean squared error), consistency, ease of computation, and robustness to the underlying distributional assumptions.
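Bias and mean squared error are easy to see by simulation. A minimal sketch of my own (not from the course) comparing two variance estimators, one dividing by n and one by n−1:

```python
import numpy as np

# Hypothetical illustration: assess two variance estimators by simulation.
# Dividing by n underestimates the variance; dividing by n-1 is unbiased.
rng = np.random.default_rng(0)
true_var = 4.0                     # variance of N(0, 2^2)
n, reps = 10, 100_000

samples = rng.normal(0.0, 2.0, size=(reps, n))
var_n = samples.var(axis=1, ddof=0)    # divide by n (biased)
var_n1 = samples.var(axis=1, ddof=1)   # divide by n-1 (unbiased)

for name, est in [("divide by n", var_n), ("divide by n-1", var_n1)]:
    bias = est.mean() - true_var
    mse = ((est - true_var) ** 2).mean()
    print(f"{name}: bias={bias:+.3f}, MSE={mse:.3f}")
```

The biased estimator shows a bias of roughly −true_var/n, while the n−1 version averages out close to the true variance.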

Frameworks for **deriving estimators** are the method of moments, maximum likelihood estimation (MLE), and simply dreaming them up 😉 MLE is efficient, but it can be biased, difficult to compute, and less robust than the method of moments.
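A classic case where the two frameworks disagree is the Uniform(0, θ) distribution: the method of moments gives 2·(sample mean), while the MLE is the sample maximum, which is biased downward yet typically has lower MSE. A quick simulation sketch (my own example, not from the course):

```python
import numpy as np

# Hypothetical illustration: method of moments vs MLE for Uniform(0, theta).
# MoM estimator: 2 * sample mean (unbiased, but higher variance).
# MLE estimator: sample maximum (biased downward, yet lower MSE here).
rng = np.random.default_rng(1)
theta, n, reps = 5.0, 20, 100_000

samples = rng.uniform(0.0, theta, size=(reps, n))
mom = 2.0 * samples.mean(axis=1)
mle = samples.max(axis=1)

for name, est in [("MoM", mom), ("MLE", mle)]:
    print(f"{name}: bias={est.mean() - theta:+.4f}, "
          f"MSE={((est - theta) ** 2).mean():.4f}")
```

This also illustrates the earlier point that bias alone doesn't settle which estimator is better: the biased MLE wins on mean squared error.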

The **confidence interval** quantifies reliability. Typically, it is constructed from the normal or t-distribution. Correspondingly, **hypothesis testing** is used to assess whether there is enough evidence to contradict some assertion about a population, given a random sample from that population. A test is characterised by its significance level and power, which correspond to Type I and Type II errors.
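A short sketch of both ideas on made-up data (my own example, not from the course): a 95% t-based confidence interval for a mean, and a two-sided one-sample t-test of the assertion that the mean is zero.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 30 draws from a normal with true mean 1.0.
rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=30)

# 95% confidence interval based on the t-distribution with n-1 df.
n = len(x)
mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)
print(f"95% CI for the mean: ({ci[0]:.3f}, {ci[1]:.3f})")

# Hypothesis test of H0: mu = 0. A small p-value is evidence
# against the assertion at the chosen significance level.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Rejecting H0 when it is true is the Type I error (its probability is the significance level); failing to reject a false H0 is the Type II error, and power is one minus that probability.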