Studied Machine Learning
Working on ML Zoomcamp course Week 4. 

This week's content is about Evaluation metrics for Classification. 

Highlights: 
  • Sometimes calculating the accuracy of the model is not enough, because accuracy hides certain kinds of errors: predicting positives that are actually negatives (false positives), or missing actual positives by marking them as negatives (false negatives). 
  • Precision and Recall capture exactly those two kinds of errors: Precision = TP / (TP + FP) tells us how many of the predicted positives are truly positive, while Recall = TP / (TP + FN) tells us how many of the actual positives we managed to catch. Both metrics are especially useful in binary classification for verifying that a seemingly good accuracy is actually meaningful.
  • For example: imagine we are trying to predict customer churn for a mobile company. Our accuracy might be 85%, but a Precision of 67% means that only 67% of the customers we flagged as churning actually churn, while the other 33% were incorrectly identified as positives. 
  • Another useful way to see this is by building a confusion matrix, which lays out the counts of correctly and incorrectly classified observations and helps us spot those misclassified cases (see the sketch after this list).
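
Below is a minimal sketch (not from the course materials, using made-up churn labels) of how these metrics could be computed with scikit-learn:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

# Hypothetical ground truth (1 = churned) and hypothetical model predictions
y_true = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])

print("Accuracy :", accuracy_score(y_true, y_pred))   # overall fraction of correct predictions
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)

# Rows = actual class, columns = predicted class: [[TN, FP], [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```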