Learning PCA
Studying Machine Learning
I just finished the last course of Mathematics for Machine Learning by Imperial College London.

This last course covered Principal Component Analysis, with the following main content:

  1. Measures of center and spread, such as the mean and variance.
  2. How to compute inner products in 1D and generalize them to multiple dimensions.
  3. Orthogonal projections in 1D and their generalization to multiple dimensions.
  4. PCA: how to project data from a high-dimensional space onto a subspace with minimal loss of information.
  • The idea behind PCA is to compress data.
  • The data should be normalized (centered, and optionally scaled).
  • Compute the covariance matrix of the normalized data.
  • Compute the eigenvectors and eigenvalues of the covariance matrix; the eigenvectors with the largest eigenvalues point in the directions that maximize the variance.
  • Project the data onto those directions to obtain the principal components.
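As a small illustration of the projection idea behind points 2 and 3, here is a sketch of an orthogonal projection in 1D using NumPy; the vectors `x` and `b` are made-up toy values:

```python
import numpy as np

# Toy vectors: project x onto the line spanned by b.
x = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])

# Orthogonal projection formula: proj_b(x) = (<b, x> / <b, b>) * b
proj = (b @ x) / (b @ b) * b
print(proj)  # [3. 0.]

# The residual is orthogonal to b (inner product is zero).
residual = x - proj
print(b @ residual)  # 0.0
```

The same formula generalizes to projecting onto a subspace spanned by several basis vectors, which is exactly what PCA does with the top eigenvectors.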
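The PCA steps listed above can be sketched in a few lines of NumPy. This is a minimal illustration on made-up random data, not a production implementation (in practice you might use `sklearn.decomposition.PCA`):

```python
import numpy as np

# Hypothetical toy data: 100 samples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# 1. Normalize: center each feature at zero mean.
X_centered = X - X.mean(axis=0)

# 2. Compute the covariance matrix of the features.
cov = np.cov(X_centered, rowvar=False)

# 3. Eigendecomposition; eigh is suited to symmetric matrices.
eigvals, eigvecs = np.linalg.eigh(cov)

# 4. Sort by descending eigenvalue and keep the top-k directions
#    (the ones that maximize the retained variance).
order = np.argsort(eigvals)[::-1]
k = 2
components = eigvecs[:, order[:k]]

# 5. Project the data onto the principal subspace.
X_projected = X_centered @ components
print(X_projected.shape)  # (100, 2)
```

The compression happens in step 5: each 5-dimensional sample is represented by only 2 coordinates, chosen so that as much variance as possible is preserved.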

This last section of the course was really hard and challenging, and it showed me how important it is to understand the foundations of PCA before applying it.