PyTorch is a fairly new deep-learning framework released by Facebook, and the pace at which these frameworks keep appearing reminds me of the JS framework frenzy. But having played around with PyTorch a bit, it already feels fun.
To keep things short, I liked it because:
Unlike TensorFlow, it allows me to easily print Tensors to the screen (no, seriously, this is a big deal for me, since I usually take several iterations to get a DL implementation right).
TensorFlow adds a layer of indirection between Python and the actual computation: you first build a graph and then run it, and it even has its own variable-scoping mechanism. That is more abstraction than I appreciate for my experimental tinkering.
Interop with NumPy is easy in PyTorch: a simple .numpy() call converts a Tensor to a NumPy array.
Unlike Torch, it is not in Lua (also doesn’t need the LuaRocks package manager).
Unlike Caffe2, I don’t have to write C++ code and write build scripts.
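To get a feel for how models are defined, here is a minimal network definition in the spirit of the official CIFAR-10 tutorial (treat the exact layers and sizes as an illustrative sketch):

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Declare the layers we are going to use
        self.conv1 = nn.Conv2d(3, 6, 5)        # 3 input channels (RGB), 6 filters, 5x5 kernels
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 16 maps of 5x5 after two conv+pool stages
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)           # 10 output classes

    def forward(self, x):
        # Run the input through the layers declared above
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)             # flatten, analogous to numpy's reshape
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```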
As seen, in the __init__ method, we just need to define the various NN layers we are going to be using. Then the forward method just runs through them. The view method is analogous to the NumPy reshape method.
The gradients will be applied after the backward pass, which is auto-computed. The code is self-explanatory and fairly easy to understand.
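As a tiny illustration of that flow (the values here are arbitrary; this just shows the mechanics of the Variable API that PyTorch currently uses):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = (x * x).sum()    # some scalar function of x
y.backward()         # the backward pass auto-computes the gradients
print(x.grad)        # dy/dx = 2 * x, printable like any other Tensor
```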
PyTorch (via the torchvision package) also knows how to download and load standard datasets such as CIFAR-10, MNIST, etc.
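For example, grabbing CIFAR-10 looks roughly like this (the normalization constants and batch size here are typical choices following the tutorial, not something I'm prescribing):

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Convert images to Tensors and normalize them to [-1, 1]
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
```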
After getting the data from the data-loader, you can proceed to play with it. Below are the rest of the pieces required to complete the implementation (almost all of it is from the tutorial):
```python
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

# Create the net and define the optimization criterion
net = Net()
# The loss function
criterion = nn.CrossEntropyLoss()
# Which optimization technique will you apply?
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.5)

for epoch in range(20):  # loop over the dataset multiple times
    # Run per epoch
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # Forward pass
        outputs = net(inputs)
        # Compute loss function
        loss = criterion(outputs, labels)
        # Backward pass and gradient update
        loss.backward()
        optimizer.step()
```
The criterion object is used to compute your loss function. optim has a bunch of gradient-based optimization algorithms such as vanilla SGD, Adam, etc. As promised, simply calling the backward method on the loss object computes the gradients.
Assuming you are working through the tutorial, try solving it for the MNIST data instead of CIFAR-10.
Instead of the 3-channel (RGB) 32x32 pixel CIFAR-10 images, the MNIST images are single-channel 28x28 pixel images.
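If you try this, the main changes are the first conv layer and the flattened size going into the first fully-connected layer. A sketch of what I mean (the name MNISTNet and the layer sizes are my own choices, not from the tutorial):

```python
import torch.nn as nn
import torch.nn.functional as F

class MNISTNet(nn.Module):
    def __init__(self):
        super(MNISTNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)        # 1 input channel instead of 3
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # 28x28 -> conv(5) -> 24x24 -> pool -> 12x12 -> conv(5) -> 8x8 -> pool -> 4x4
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```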
Overall, I could get to 96% accuracy, with the current setup. The complete gist is here.
Taking inspiration from Werner Vogels's 'Back to Basics' blog posts, here is a post of my own about a fundamental topic. While on a long-haul flight with no internet connectivity, having exhausted the books on my Kindle, and with hardly any in-flight entertainment, I decided to code up Linear Regression in Python. Let's look at both the theory and the implementation.
Theory
Essentially, in Linear Regression, we try to estimate a dependent variable $y$ from independent variables $x_1$, $x_2$, $x_3$, $…$, using a linear model.
More formally,
$y = b + W_1 x_1 + W_2 x_2 + … + W_n x_n + \epsilon$.
Where, $W$ is the weight vector, $b$ is the bias term, and $\epsilon$ is the noise in the data.
It can be used when there is a linear relationship between the input $X$ (the input vector containing all the $x_i$) and $y$.
One example could be: given a vending machine's sales of different kinds of soda, predict the total profit made. Let's say there are three kinds of soda, and for each can of a variety sold, the profit is 0.25, 0.15 and 0.20 respectively. Also, we know that there will be a fixed cost in terms of electricity and maintenance for the machine; this will be our bias, and it will be negative. Let's say it is $100. Hence, our profit will be:
$y = -100 + 0.25x_1 + 0.15x_2 + 0.20x_3$.
The problem is usually the inverse of the above example: given the profits made by the vending machine and the sales of the different kinds of soda (i.e., several pairs of $(X_i, y_i)$), find the above equation, which means finding $b$ and $W$. There is a closed-form solution for Linear Regression, but it is expensive to compute, especially when the number of variables is large (tens of thousands).
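(For reference, the closed form is the standard normal-equation solution: with $X$ augmented by a column of ones to absorb $b$, we get $\theta = (X^T X)^{-1} X^T y$; inverting that matrix is what gets expensive as the number of variables grows.)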
Generally in Machine Learning the following approach is taken for similar problems:
Make a guess for the parameters ($b$ and $W$ in our case).
Compute some sort of a loss function, which tells you how far you are from the ideal state.
Find how much you should tweak your parameters to reduce your loss.
Step 1
The first step is fairly easy: we just pick a random $W$ and $b$. Let's say $\theta = (W, b)$; then $h_\theta(X_i) = b + X_i \cdot W$. Given an $X_i$, our prediction would be $h_\theta(X_i)$.
Step 2
For the second step, one loss function could be the average absolute difference between the prediction and the real output. This is called the 'L1 norm'.
L1 norm is pretty good, but for our case, we will use the average of the squared difference between the prediction and the real output. This is called the ‘L2 norm’, and is usually preferred over L1.
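Writing these out (the notation here is mine; say we have $m$ training pairs $(X_i, y_i)$):

$L_1 = \frac{1}{m} \sum_{i=1}^{m} |h_\theta(X_i) - y_i|$, and $L_2 = \frac{1}{m} \sum_{i=1}^{m} (h_\theta(X_i) - y_i)^2$.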
Step 3
We have two sets of params, $b$ and $W$. Ideally, we want $L_2$ to be 0, but that depends on the choice of these params. Initially the params are chosen randomly, and we need to tweak them so that we can minimize $L_2$.
For this, we follow the Stochastic Gradient Descent (SGD) algorithm. We compute the 'partial derivatives' / gradient of $L_2$ with respect to each of the parameters. This tells us the slope of the loss surface, and using this gradient we can adjust the params to reduce the value of the loss.
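The update itself is the standard SGD step, written here in the notation above:

$W \leftarrow W - \eta \frac{\partial L_2}{\partial W}, \qquad b \leftarrow b - \eta \frac{\partial L_2}{\partial b}$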
Where, $\eta$ is what is called the 'learning rate', which dictates how big of an update we make. If we choose it to be small, we make very small updates and learn slowly. If we set it to a large value, we might skip over the local minima. There are a lot of variants of SGD with different tweaks around how the above updates are made.
Eventually we should converge to a value of $L_2$, where the gradients will be nearly 0.
Implementation
The complete implementation with dummy data in about 100 lines is here. A short walkthrough is below.
The only two libraries that we use are numpy (for vector operations) and matplotlib (for plotting losses). We generate random data without any noise.
Here num_rows is the number of training examples, and num_feats is the number of variables ($n$ in the notation above). We define the class LinearRegression, where W and b are initialized randomly. The predict method computes $h_\theta(X)$.
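A rough sketch of what this looks like (the actual code is in the gist; apart from num_rows and num_feats, the names and values here are illustrative):

```python
import numpy as np

num_rows, num_feats = 1000, 5

# Ground-truth parameters and noiseless data
true_W = np.random.randn(num_feats)
true_b = np.random.randn()
X = np.random.randn(num_rows, num_feats)
y = X.dot(true_W) + true_b

class LinearRegression(object):
    def __init__(self, num_feats, lr=0.01):
        # Step 1: a random initial guess for the parameters
        self.W = np.random.randn(num_feats)
        self.b = np.random.randn()
        self.lr = lr

    def predict(self, X):
        # h_theta(X) = b + X.W
        return X.dot(self.W) + self.b

    def step(self, X, y):
        # Steps 2 and 3: L2 loss and its gradients w.r.t. W and b
        err = self.predict(X) - y
        loss = np.mean(err ** 2)
        grad_W = 2 * X.T.dot(err) / len(y)
        grad_b = 2 * np.mean(err)
        # The SGD update
        self.W -= self.lr * grad_W
        self.b -= self.lr * grad_b
        return loss

model = LinearRegression(num_feats)
for epoch in range(200):
    loss = model.step(X, y)   # should steadily head towards ~0 on noiseless data
```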
This is different from ensemble models, where a sub-model is trained separately and its score is used as a feature for the parent model. In this paper, the authors learn a wide model (Logistic Regression, which is trying to "memorize") and a deep model (a Deep Neural Network, which is trying to "generalize") jointly.
The inputs to the wide network are standard features, while the deep network takes dense embeddings of the document to be scored as its input.
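A rough sketch of the joint setup, in PyTorch-flavoured code of my own (this is not the paper's implementation; all names and dimensions here are made up):

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, num_wide_feats, vocab_size, embed_dim, hidden_dim, num_classes):
        super(WideAndDeep, self).__init__()
        # Wide part: a linear / logistic-regression style model over raw and cross features
        self.wide = nn.Linear(num_wide_feats, num_classes)
        # Deep part: dense embeddings fed through a small feed-forward network
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.deep = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, wide_x, item_ids):
        # The two parts are summed before the loss, so one backward pass trains both jointly
        deep_x = self.embed(item_ids).mean(dim=1)   # average the item embeddings
        return self.wide(wide_x) + self.deep(deep_x)
```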
The main benefits, as per the authors, are:
DNNs can learn to over-generalize, while LR models are limited in how much they can memorize from the training data.
Learning the models jointly means that the 'wide' and 'deep' parts are aware of each other, and the 'wide' part only needs to augment the 'deep' part.
Also, training jointly helps reduce the size of the individual models.
The authors employed this model to recommend apps to be downloaded to the user in Google Play, where they drove up app installs by 3.9% using the Wide & Deep model.
However, the Deep model by itself drove up installs by 2.9%. It is natural to expect that the 'wide' part of the model should help in further improving the metric being optimized, but it is unclear to me whether the delta could have been achieved by further bulking up the 'deep' part (i.e., adding more layers / training bigger-dimensional embeddings as inputs to the DNN).