Code snippets on ML fundamentals
PyTorch + Anaconda packages
-
Plot activation functions along with their gradients to determine where the gradient is meaningful and where it vanishes.
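A minimal sketch of this snippet, using torch.autograd.grad to compute each activation's derivative; the activation choices and input range are assumptions.

```python
import torch
import matplotlib.pyplot as plt

# Input range is an assumption; wide enough to show saturation at the tails.
x = torch.linspace(-6, 6, 500, requires_grad=True)

activations = {"sigmoid": torch.sigmoid, "tanh": torch.tanh, "relu": torch.relu}

fig, axes = plt.subplots(1, len(activations), figsize=(12, 3))
for ax, (name, fn) in zip(axes, activations.items()):
    y = fn(x)
    # Gradient of the activation w.r.t. its input via autograd.
    (grad,) = torch.autograd.grad(y.sum(), x)
    ax.plot(x.detach(), y.detach(), label=name)
    ax.plot(x.detach(), grad.detach(), label="gradient")
    ax.set_title(name)
    ax.legend()
plt.tight_layout()
plt.show()
```

Sigmoid and tanh gradients flatten to near zero at the tails (vanishing gradient), while ReLU's gradient is exactly zero for negative inputs and constant for positive ones.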
-
A simple comparison between a perceptron and an MLP on toy datasets. Shows the evolution of the decision boundary.
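A minimal training skeleton for the comparison, assuming sklearn's make_moons as the toy dataset; the model sizes and training settings are illustrative, and the boundary-evolution plots themselves are omitted here.

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

# Perceptron: a single linear layer, so the boundary stays a straight line.
perceptron = nn.Linear(2, 1)
# MLP: one hidden layer lets the boundary bend around the moons.
mlp = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

for name, model in [("perceptron", perceptron), ("mlp", mlp)]:
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    acc = ((model(X) > 0).float() == y).float().mean()
    print(f"{name}: accuracy {acc:.2f}")
```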
-
Test the effect of regularization on weights.
Note: L1 regularization drives connection weights to exactly zero, deactivating neurons and inducing sparsity; L2 shrinks weights toward zero but rarely to exactly zero. Both reduce overfitting.
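A minimal sketch of the experiment, assuming a linear model on synthetic data where only two features are informative; the penalty strength and the near-zero threshold are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)
# Only the first 2 of 20 features carry signal, so most weights should be prunable.
y = (X[:, :2].sum(dim=1, keepdim=True) > 0).float()

def train(penalty, lam=1e-2):
    model = nn.Linear(20, 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        # Add the regularization term to the data loss.
        if penalty == "l1":
            loss = loss + lam * model.weight.abs().sum()
        elif penalty == "l2":
            loss = loss + lam * (model.weight ** 2).sum()
        loss.backward()
        opt.step()
    near_zero = (model.weight.abs() < 1e-3).float().mean()
    print(f"{penalty}: {near_zero:.0%} of weights near zero")

for p in ("none", "l1", "l2"):
    train(p)
```

Expect the L1 run to report far more near-zero weights than the L2 run, matching the note above.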
-
Weight initialization. Careful weight initialization schemes prevent vanishing or exploding gradients.
More info: https://www.deeplearning.ai/ai-notes/initialization/
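A minimal sketch of why initialization matters, assuming a deep stack of tanh layers; the depth, width, and naive baseline are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(512, 256)

def forward_stats(init_fn, depth=20):
    """Push data through `depth` tanh layers and report the final activation std."""
    h = x
    with torch.no_grad():
        for _ in range(depth):
            layer = nn.Linear(256, 256)
            init_fn(layer.weight)
            nn.init.zeros_(layer.bias)
            h = torch.tanh(layer(h))
    return h.std().item()

# Naive small-std init: the signal shrinks layer by layer toward zero.
print("naive :", forward_stats(lambda w: nn.init.normal_(w, std=0.01)))
# Xavier/Glorot init keeps activation variance roughly stable with tanh.
print("xavier:", forward_stats(nn.init.xavier_uniform_))
```

A stable activation std through depth means the backward gradients also stay well-scaled, which is exactly what the linked note formalizes.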