“It is really hard to understand human behaviour. The unpredictability of the consciousness of a human being makes it difficult even for a person well trained in public relations.” – Harleen Kaur
The cost function measures the squared error of our hypothesis's predictions.
Cost function for linear regression:

J(θ) = (1 / 2m) * Σ (h(xi) − yi)², summed over i = 1 to m
Where: m – number of training examples
θ – weights (the parameters of the model)
h(xi) – hypothesis prediction for the ith example
yi – expected output for the ith example
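This cost function can be sketched in a few lines of NumPy. The function name `cost` and the variable names below are my own choices, not from the original; the formula is the one defined above.

```python
import numpy as np

def cost(theta, X, y):
    """Squared-error cost: J(theta) = (1 / 2m) * sum((h(xi) - yi)^2)."""
    m = len(y)                # m - number of training examples
    predictions = X @ theta   # h(xi) for every example at once
    errors = predictions - y  # h(xi) - yi
    return (errors ** 2).sum() / (2 * m)
```

Here `X` holds one training example per row (with a leading column of ones for the intercept term) and `theta` holds the weights, so `X @ theta` computes every hypothesis prediction in one step.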
If our hypothesis fits the data perfectly, then (h(xi) − yi) will be zero for every xi, so the cost function will be zero. This makes sense: a hypothesis that always predicts exactly the expected values has no error.
In practice, some predictions will not match the expected outputs, so some of the (h(xi) − yi) terms will be non-zero, and the sum of the squared errors will be a positive value. This is also correct: the positive cost reflects the examples for which our hypothesis fails to predict the expected value.
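The two cases above can be checked directly: a weight vector that reproduces the data exactly yields a cost of zero, while any other yields a positive cost. The toy data below (y = 2x, so θ = [0, 2] is the perfect fit) is an illustrative assumption, not from the original.

```python
import numpy as np

def cost(theta, X, y):
    """Squared-error cost: J(theta) = (1 / 2m) * sum((h(xi) - yi)^2)."""
    m = len(y)
    errors = X @ theta - y
    return (errors ** 2).sum() / (2 * m)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias column + feature x
y = np.array([2.0, 4.0, 6.0])                        # y = 2x exactly

print(cost(np.array([0.0, 2.0]), X, y))  # perfect fit: every (h(xi) - yi) is 0
print(cost(np.array([0.0, 1.5]), X, y))  # imperfect fit: cost is positive
```

The first call prints 0.0; the second prints a positive number, because the squared errors for the mispredicted examples accumulate into the sum.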