It has been a year and a half since I wrote the first version of this tutorial and it is time for an update.
Knet (pronounced “kay-net”) is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. It supports construction of high-performance deep learning models in plain Julia by combining automatic differentiation with efficient GPU kernels and memory management. Models can be defined and trained using arbitrary Julia code with helper functions, loops, conditionals, recursion, closures, array indexing and concatenation. The training can be performed on the GPU by simply using KnetArray instead of Array for parameters and data. Check out the full documentation and the examples directory for more information.
You can install Knet using Pkg.add("Knet"). Some of the examples use additional packages such as ArgParse, GZip, and JLD. These are not required by Knet and can be installed when needed with additional Pkg.add() commands. See the detailed installation instructions, as well as the section on using Amazon AWS to experiment with GPU machines on the cloud that come with Knet pre-installed.
In Knet, a machine learning model is defined using plain Julia code. A typical model consists of a prediction and a loss function. The prediction function takes model parameters and some input, returns the prediction of the model for that input. The loss function measures how bad the prediction is with respect to some desired output. We train a model by adjusting its parameters to reduce the loss. In this section we will see the prediction, loss, and training functions for five models: linear regression, softmax classification, fully-connected, convolutional and recurrent neural networks.
Here is the prediction function and the corresponding quadratic loss function for a simple linear regression model:
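    # A minimal sketch; see housing.jl for the full example.
    predict(w,x) = w[1]*x .+ w[2]               # w[1]: weight matrix, w[2]: bias

    loss(w,x,y) = sum(abs2, y - predict(w,x)) / size(y,2)   # mean squared error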
Here w is a list of parameters (it could be a Tuple, Array, or Dict), x is the input, and y is the desired output. To train this model, we want to adjust its parameters to reduce the loss on given training examples. The direction in the parameter space in which the loss reduction is maximum is given by the negative gradient of the loss. Knet uses the higher-order function grad from AutoGrad.jl to compute the gradient direction:
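    lossgradient = grad(loss)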
Note that grad is a higher-order function that takes and returns other functions. The lossgradient function takes the same arguments as loss, e.g. dw = lossgradient(w,x,y). Instead of returning a loss value, it returns dw, the gradient of the loss with respect to its first argument w. The type and size of dw are identical to those of w: each entry in dw gives the derivative of the loss with respect to the corresponding entry in w. See @doc grad for more information.
Given some training
data = [(x1,y1),(x2,y2),...], here is how we can
train this model:
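    # A sketch of a simple gradient descent loop matching the description below.
    function train(w, data; lr=0.1)
        for (x,y) in data
            dw = lossgradient(w, x, y)
            for i in 1:length(w)
                w[i] -= lr * dw[i]        # step in the negative gradient direction
            end
        end
        return w
    end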
We simply iterate over the input-output pairs in data, calculate the lossgradient for each example, and move the parameters in the negative gradient direction with a step size determined by the learning rate lr.
Let’s train this model on the Housing dataset from the UCI Machine Learning Repository.
The dataset has housing-related information for 506 neighborhoods in Boston from 1978. Each neighborhood is represented using 13 attributes such as crime rate or distance to employment centers. The goal is to predict the median value of the houses, given in $1000's. After downloading, splitting and normalizing the data, we initialize the parameters randomly and take 10 steps in the negative gradient direction. We can see the loss dropping from 366.0 to 29.6. See housing.jl for more information on this example.
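In code, those ten steps might look something like this (a sketch: x and y stand for the normalized input and output arrays prepared by housing.jl, and the learning rate is an assumption):

    w = Any[0.1*randn(1,13), 0.0]      # random weight matrix, zero bias
    for i in 1:10                      # 10 steps in the negative gradient direction
        train(w, [(x,y)]; lr=0.1)      # assumed learning rate
        println(loss(w, x, y))
    end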
grad was the only function used that is not in the Julia
standard library. This is typical of models defined in Knet.
In this example we build a simple classification model for the MNIST handwritten digit recognition dataset. MNIST has 60000 training and 10000 test examples. Each input x consists of 784 pixels representing a 28x28 image. The corresponding output indicates the identity of the digit 0..9.
Classification models handle discrete outputs, as opposed to regression models which handle numeric outputs. We typically use the cross entropy loss function in classification models:
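    # One possible implementation, assuming ygold is a one-hot matrix with
    # instances in columns; see mnist.jl for the full example.
    function loss(w, x, ygold)
        ypred = predict(w, x)
        ynorm = ypred .- log.(sum(exp.(ypred), dims=1))    # log softmax
        return -sum(ygold .* ynorm) / size(ygold, 2)       # average cross entropy
    end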
Other than the change of loss function, the softmax model is identical to the linear regression model. We use the same train function, and set lossgradient=grad(loss) as before. To see how well our model classifies, let's define an accuracy function which returns the percentage of instances classified correctly:
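    # Counts correct predictions by comparing the argmax of the model output
    # with the one-hot gold labels.
    function accuracy(w, data)
        ncorrect = ninstance = 0
        for (x, ygold) in data
            ypred = predict(w, x)
            ncorrect += sum(ygold .* (ypred .== maximum(ypred, dims=1)))
            ninstance += size(ygold, 2)
        end
        return ncorrect / ninstance
    end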
Now let’s train a model on the MNIST data:
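    # A sketch: dtrn and dtst are the minibatched training and test sets
    # described below; the 0.5 learning rate is an assumption.
    w = Any[0.1f0*randn(Float32,10,784), zeros(Float32,10,1)]
    for epoch in 1:10
        train(w, dtrn; lr=0.5)
        println((epoch, accuracy(w, dtrn), accuracy(w, dtst)))
    end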
mnist.jl loads the MNIST data, downloading it from the
internet if necessary, and provides a training set (xtrn,ytrn), test set
(xtst,ytst) and a
minibatch utility which we use to rearrange the
data into chunks of 100 instances. After randomly initializing the
parameters we train for 10 epochs, printing out training and test set
accuracy at every epoch. The final accuracy of about 92% is close to the
limit of what we can achieve with this type of model. To improve further
we must look beyond linear models.
A multi-layer perceptron, i.e. a fully connected feed-forward neural network, is basically a bunch of linear regression models stuck together with non-linearities in between.
We can define an MLP by slightly modifying the predict function:
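    # A sketch: each pair w[2k-1], w[2k] is a weight matrix and bias vector.
    function predict(w, x)
        for i in 1:2:length(w)-2
            x = max.(0, w[i]*x .+ w[i+1])      # hidden layers with rectifier
        end
        return w[end-1]*x .+ w[end]            # final linear layer
    end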
Here w[2k-1] is the weight matrix and w[2k] is the bias vector for the k'th layer, and max.(0,a) implements the popular rectifier non-linearity. Note that if w only has two entries, this is equivalent to the linear and softmax models. By adding more entries to w, we can define multi-layer perceptrons of arbitrary depth. Let's define one with a single hidden layer of 64 units:
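    # 64 hidden units between the 784-pixel input and the 10 output classes;
    # the 0.1 initialization scale is an assumption.
    w = Any[0.1f0*randn(Float32,64,784), zeros(Float32,64,1),
            0.1f0*randn(Float32,10,64),  zeros(Float32,10,1)]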
The rest of the code is the same as the softmax model. We use the same cross-entropy loss function and the same training script. The code for this example is available in mnist.jl. The multi-layer perceptron does significantly better than the softmax model:
Knet provides the conv4(w,x) and pool(x) functions for the implementation of convolutional nets (see @doc conv4 and @doc pool for more information):
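    # A sketch of a LeNet-style predict: convolution+pooling layers followed
    # by fully connected layers; mat reshapes a 4-D tensor into a matrix.
    function predict(w, x)
        n = length(w) - 4
        for i in 1:2:n
            x = pool(max.(0, conv4(w[i], x) .+ w[i+1]))
        end
        x = mat(x)
        for i in n+1:2:length(w)-2
            x = max.(0, w[i]*x .+ w[i+1])
        end
        return w[end-1]*x .+ w[end]
    end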
The weights for the convolutional net can be initialized as follows:
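    # Layer sizes follow lenet.jl: two 5x5 convolutions with 20 and 50 filters
    # and a 500-unit fully connected layer; the 0.1 scale is an assumption.
    w = Any[0.1f0*randn(Float32,5,5,1,20),  zeros(Float32,1,1,20,1),
            0.1f0*randn(Float32,5,5,20,50), zeros(Float32,1,1,50,1),
            0.1f0*randn(Float32,500,800),   zeros(Float32,500,1),
            0.1f0*randn(Float32,10,500),    zeros(Float32,10,1)]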
Currently convolution and pooling are only supported on the GPU for 4-D
and 5-D arrays. So we reshape our data and transfer it to the GPU along
with the parameters by converting them into KnetArrays (see
@doc KnetArray for more information):
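    # A sketch: reshape each 784-pixel minibatch into a 28x28x1xN tensor and
    # move the data and the parameters to the GPU.
    dtrn = map(d -> (KnetArray(reshape(d[1],28,28,1,:)), KnetArray(d[2])), dtrn)
    dtst = map(d -> (KnetArray(reshape(d[1],28,28,1,:)), KnetArray(d[2])), dtst)
    w = map(KnetArray, w)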
The training proceeds as before giving us even better results. The code for the LeNet example can be found in lenet.jl.
In this section we will see how to implement a recurrent neural network (RNN) in Knet. An RNN is a class of neural network where connections between units form a directed cycle, which allows them to keep a persistent state over time. This gives them the ability to process sequences of arbitrary length one element at a time, while keeping track of what happened at previous elements.
As an example, we will build a character-level language model inspired by “The Unreasonable Effectiveness of Recurrent Neural Networks” from the Andrej Karpathy blog. The model can be trained with different genres of text, and can be used to generate original text in the same style.
It turns out simple RNNs are not very good at remembering things for a very long time. Currently the most popular solution is to use a more complicated unit like the Long Short Term Memory (LSTM). An LSTM controls the information flow into and out of the unit using gates similar to digital circuits and can model long term dependencies. See Understanding LSTM Networks by Christopher Olah for a good overview of LSTMs.
The code below shows one way to define an LSTM in Knet. The first two
arguments are the parameters, the weight matrix and the bias vector. The
next two arguments hold the internal state of the LSTM: the hidden and
cell arrays. The last argument is the input. Note that for performance
reasons we lump all the parameters of the LSTM into one matrix-vector
pair instead of using separate parameters for each gate. This way we can
perform a single matrix multiplication, and recover the gates using
array indexing. We represent input, hidden and cell as row vectors rather than column vectors for more efficient concatenation and indexing. sigm and tanh are the sigmoid and the hyperbolic tangent activation functions. The LSTM returns the updated state consisting of the new hidden and cell values.
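    # A sketch consistent with the description above; the full version is in
    # charlm.jl. Instances are in rows, so gates are recovered by column ranges.
    function lstm(weight, bias, hidden, cell, input)
        gates   = hcat(input, hidden) * weight .+ bias   # one multiplication for all gates
        hsize   = size(hidden, 2)
        forget  = sigm.(gates[:, 1:hsize])
        ingate  = sigm.(gates[:, 1+hsize:2hsize])
        outgate = sigm.(gates[:, 1+2hsize:3hsize])
        change  = tanh.(gates[:, 1+3hsize:end])
        cell    = cell .* forget + ingate .* change
        hidden  = outgate .* tanh.(cell)
        return (hidden, cell)
    end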
The LSTM has an input gate, a forget gate and an output gate that control information flow. Each gate depends on the current input value input and the last hidden state hidden. The memory value cell is computed by blending a new value change with the old cell value under the control of the input and forget gates. The output gate decides how much of cell is shared with the outside world.
If an input gate element is close to 0, the corresponding element in the
input will have little effect on the memory cell. If a forget
gate element is close to 1, the contents of the corresponding memory
cell can be preserved for a long time. Thus the LSTM has the ability to pay attention to the current input or to reminisce about the past, and it can learn when to do which based on the problem.
To build a language model, we need to predict the next character in a
piece of text given the current character and recent history as encoded
in the internal state. The
predict function below implements a
multi-layer LSTM model.
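    # A sketch following the layout described below: s holds hidden/cell pairs,
    # w holds LSTM parameters plus the embedding and output layers.
    function predict(w, s, x)
        x = x * w[end-2]                       # embed the one-hot input
        for i in 1:2:length(s)
            (s[i], s[i+1]) = lstm(w[i], w[i+1], s[i], s[i+1], x)
            x = s[i]
        end
        return x * w[end-1] .+ w[end]          # linear output layer
    end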
Here s[2k-1:2k] hold the hidden and cell arrays and w[2k-1:2k] hold the weight and bias parameters for the k'th LSTM layer. The last three elements of w are the embedding matrix and the weight/bias for the final prediction. predict takes the current character encoded in x as a one-hot row vector, multiplies it with the embedding matrix, passes it through a number of LSTM layers, and converts the output of the final layer to the same number of dimensions as the input using a linear transformation. The state variable s is updated in-place.
To train the language model we will use Backpropagation Through Time (BPTT) which basically means running the network on a given sequence and updating the parameters based on the total loss. Here is a function that calculates the total cross-entropy loss for a given (sub)sequence:
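    # A sketch: sequence entries are one-hot matrices with instances in rows;
    # the atype keyword (the parameters' array type) is an assumption.
    function loss(w, state, sequence, range=1:length(sequence)-1; atype=Array{Float32})
        total = 0.0; count = 0
        input = convert(atype, sequence[first(range)])
        for t in range
            ypred = predict(w, state, input)
            ynorm = ypred .- log.(sum(exp.(ypred), dims=2))   # row-wise log softmax
            ygold = convert(atype, sequence[t+1])
            total += sum(ygold .* ynorm)
            count += size(ygold, 1)
            input = ygold
        end
        return -total / count
    end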
Here w and state hold the parameters and the state of the model, and sequence and range give us the input sequence and a possible range over it to process. We convert the entries in the sequence to inputs that have the same type as the parameters, one at a time (to conserve GPU memory). We use each token in the given range as an input to predict the next token. The average cross-entropy loss per token is returned.
To generate text we sample each character randomly using the probabilities predicted by the model based on the previous character:
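    # A sketch; charlm.jl has the full version. Assumes CPU arrays; for a GPU
    # model the input would be converted to KnetArray as in loss.
    function generate(w, state, vocab, nchar)
        index_to_char = Vector{Char}(undef, length(vocab))
        for (c, i) in vocab; index_to_char[i] = c; end
        input = zeros(Float32, 1, length(vocab))      # start from a zero vector
        for t in 1:nchar
            ypred = predict(w, state, input)
            p = exp.(ypred); p = p ./ sum(p)          # normalized probabilities
            i = sample(vec(p))
            print(index_to_char[i])
            input = zeros(Float32, 1, length(vocab))  # one-hot encode the pick
            input[i] = 1f0
        end
        println()
    end

    function sample(p)                                # weighted random index
        r = rand()
        for i in 1:length(p)
            r -= p[i]
            r <= 0 && return i
        end
        return length(p)
    end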
Here w and state hold the parameters and state variables as usual, vocab is a Char->Int dictionary of the characters that can be produced by the model, and nchar gives the number of characters to generate. We initialize the input as a zero vector and use predict to generate subsequent characters. sample picks a random index based on the normalized probabilities output by the model.
At this point we can train the network on any given piece of text (or other discrete sequence). For efficiency it is best to minibatch the training data and run BPTT on small subsequences. See charlm.jl for details. Here is a sample run on ‘The Complete Works of William Shakespeare’:
Knet is an open-source project and we are always open to new contributions: bug reports and fixes, feature requests and contributions, new machine learning models and operators, inspiring examples, benchmarking results are all welcome. If you need help or would like to request a feature, please consider joining the knet-users mailing list. If you find a bug, please open a GitHub issue. If you would like to contribute to Knet development, check out the knet-dev mailing list and tips for developers. If you use Knet in your own work, the suggested citation is: