Testing PubSub Locally with Python + Docker

Sometimes you just want to quickly prototype a streaming system that uses pubsub. You don’t need the full power of the production version in Google Cloud. You don’t want to go through all the long prerequisite setup steps required by the official pubsub emulator guide. You just want to summon pubsub with a single command. And you don’t want to write verbose Java code.

What do you do? Well, an underrated (or perhaps just not particularly well-documented) alternative is to spin up a docker container:

docker run -it --rm -p 8085:8085 gcr.io/google.com/cloudsdktool/cloud-sdk \
    gcloud beta emulators pubsub start --host-port=0.0.0.0:8085

As simple as that – you have a pubsub emulator running on your machine at port 8085 (the default port). Now we want to use Python to start sending and receiving messages. We can use google-cloud-pubsub (the API reference is in the same link), installing it like so:

pip install google-cloud-pubsub

By default, google-cloud-pubsub connects to Google Cloud’s production endpoint. We can instead point it at our emulator with the PUBSUB_EMULATOR_HOST environment variable:

export PUBSUB_EMULATOR_HOST=localhost:8085
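If you prefer not to export the variable in your shell, you can also set it from inside Python – a minimal sketch, assuming you do it before any client is created:

import os

# Must be set before PublisherClient/SubscriberClient are constructed,
# otherwise the library will try to reach the real Google Cloud endpoint.
os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"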

Now let’s get a publisher up and running. With the emulator environment variable set, launch an interactive Python console and punch in the following:

from google.cloud import pubsub

publisher = pubsub.PublisherClient()
pubsub_topic = 'projects/dummy-project/topics/test-topic'

publisher.create_topic(pubsub_topic)

# Use publish to start shipping messages...
publisher.publish(pubsub_topic, b'Hello?')
publisher.publish(pubsub_topic, b'Is anyone there?')

I just sent over some byte-string payloads, so let’s print them in a subscriber. Spin up another interactive Python console for the subscriber:

from google.cloud import pubsub

subscriber = pubsub.SubscriberClient()
pubsub_topic = 'projects/dummy-project/topics/test-topic'
sub_name = 'projects/dummy-project/subscriptions/subslot123'

subscriber.create_subscription(name=sub_name, topic=pubsub_topic)

# Blocking call to start listening. Passes all messages to print method
subscriber.subscribe(sub_name, print).result()
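One thing to note: the print callback above never acknowledges the messages, so Pub/Sub will redeliver them once the ack deadline passes. If you do not want redeliveries, pass a callback that acks each message – a small variation on the snippet above:

def handle(message):
    print(message.data)  # the raw bytes payload, e.g. b'Hello?'
    message.ack()        # acknowledge so the message is not redelivered

subscriber.subscribe(sub_name, handle).result()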

Now, if you forget to set the environment variable (as you normally would), don’t worry – Google will not fail to remind you to use their production-ready version:

Traceback (most recent call last):
  File "", line 1, in 
  File "/home/bruce/.local/lib/python3.8/site-packages/google/cloud/pubsub_v1/publisher/client.py", line 114, in __init__
    channel = grpc_helpers.create_channel(
  File "/home/bruce/.local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 195, in create_channel
    credentials, _ = google.auth.default(scopes=scopes)
  File "/home/bruce/.local/lib/python3.8/site-packages/google/auth/_default.py", line 321, in default
    raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started

And that’s it. Happy experimenting!

Interactive p5.js Sketches in WordPress via CodePen

Today I have been doing some experiments in embedding interactive content into this blog:

It is a basic example; useful as a template to get started with. In the remainder of this post, I detail how I got here.


A quick search on the internet shows that various people have attempted to embed p5.js content in the past. All of the ones I’ve seen so far either embed an iframe manually, slip in PHP code, or use some plugin. If you are under wordpress.com, the first two options are not possible and the last one requires a business plan.

The approach I use here needs none of that.

Prologue

For the past few weeks, I have been researching ways to create interactive “sketches” that could serve as a more engaging pedagogical tool for intuitively understanding technical ideas. I figured it would be great if they were also convenient to share with everyone – nothing achieves this better than the ubiquity of the web. And is it not even more convenient if I could just embed them in my blog?

Hence began my search into web rendering engines like pixijs, and web game engines like phaser, playcanvas, ctjs and Unity’s Project Tiny (I recommend gamefromscratch’s YouTube channel for more).

While these are great tools, they don’t exactly fit the definition of a “sketch”. They are more of a base for building complex systems, not something you can get up and running with a bit of code.

And so I found myself in p5.js, which takes after the core principles of Processing:

Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts.

As I browsed the p5.js examples, I noted that it comes bundled with math libraries, 3D capabilities, audio + image + video processing, simulation systems – the whole enchilada. The best part about p5.js is that no install is required to get started. You can go to https://editor.p5js.org/ and an editor is ready, with examples that can be loaded in a few clicks.

That blew my mind. How, in all my years of meandering the internet, have I not known this existed?

Fun fact: Processing also has a Python mode https://py.processing.org/

CodePen-Wordpress Integration

So that was p5.js. How did I integrate it to this blog? Well, as you can probably already see, I use CodePen and the magic of iframes. Most frontend developers would recognise CodePen (or its competitor JsFiddle) as a CSS, HTML, javascript playground.

While CodePen describes a few options for embedding pens into WordPress on their blog, the only one that works under wordpress.com is to paste a Pen’s link into a paragraph of its own:

WordPress will then auto-magically convert it into a widget that your visitors can interact with. This behaviour is also documented in the WordPress.com CodePen support page, which has this adorable gif:

OMG Wapuu is so cute!

CodePen & p5.js

While you can find existing p5.js CodePens, the ones I came across are woefully outdated – they use v0.2.13, which does not have touch support, whereas the latest version is v1.0.0. Though you could add or change the js dependencies from the HTML block, I prefer to set these imports under Settings > JS > Add External Scripts:

As of this writing, v1.0.0 is the latest version – you should use whatever the latest version is at the time you are developing. Note that configuring the pen to always use the latest version can break it in the future if there are breaking changes.

For some reason, web browsers add a default margin of 8px to the body element. This will appear as white space in your sketch. You can remove it by adding the following in the CSS block:

body {
  margin: 0;
}

Aside from this, there is practically no difference from the example sketches you see in p5.js, except for these additional lines in the setup function that are tailored for touch devices:

// Fit canvas to available space factoring codepen's margin
let body = document.documentElement;
createCanvas(body.clientWidth, body.scrollHeight - 5);

// Prevent page pan when you drag about the canvas
body.getElementsByTagName("canvas")[0].addEventListener(
  "touchstart",
  (e) => { e.preventDefault(); },
  false
);

That little bit of code just stretches the canvas to whatever space CodePen has available, which varies when you use a phone or tablet or fridge. It also ensures that while you are interacting with the pen you do not also accidentally pan the web page.

Of course, instead of doing all of this, you could simply start off on your own by forking my pen.

The Learning Curve

Wow! That looks exciting – but do you need to know javascript, CSS and HTML?

IMHO, just javascript would be enough. And the p5.js examples do teach you javascript – covering all the bare basics as if you had never written a single line of code before. That bit of CSS you saw earlier is probably all you will ever need (until you decide to style DOM elements or whatnot).

There is also a nice beginner-friendly introductory video by Cassie Tarakajian, who is also the lead maintainer of the p5.js editor (source code on Github):

Closing Thoughts

So that is a brief tour of adding interactive sketches to WordPress – I hope you find it useful, or at least entertaining. In future posts I hope to create and embed more of these interactive widgets, so stay tuned!

How to Setup LUKS2 encrypted Ubuntu 20.04 With Dual Boot

This post is a guide to setup disk encryption on Ubuntu 20.04 using LUKS2, while still being able to dual boot to Windows. Unlike most guides out there, I intend to keep the setup as simple as possible:

  • One partition for boot, and another for everything else (no separate data partition)
  • Boot partition is unencrypted
  • No swap
  • No LVM

I will not try to convince you why you should encrypt your hard disk – there are plenty of resources out there already.

Though I try to keep explanations simple, I presume you are already a little familiar with Linux; this is not a guide to follow if this is your first time installing Linux.

I am using Kubuntu here, but there should not be much difference in the setup procedure across different Ubuntu variants.

Rationale for Setup

You can skip ahead to “Prologue” if you do not care to know.

I have used file-based encryption via eCryptfs on my home folder for a few months. A bit of setup was needed since I did not encrypt during installation, but it was tolerably simple and it has so far worked really well. I opted to switch to disk-based encryption mainly because the Ubuntu team does not intend to support file-based encryption moving forward. Disk-based encryption is also more performant when dealing with many files (see the performance comparison on phoronix.com).

Now although the switch sounds good in theory, the setup is a big pain, especially for a GUI lover like me. Ubuntu and Pop OS offer disk-based encryption out of the box, but the moment you need to customise your disk setup (so as not to wipe your Windows installation), the convenience is thrown out the window. I went through Full_Disk_Encryption_Howto_2019 from the Ubuntu Community Wiki and Encrypting disks on Ubuntu 19.04 from Isuru Perera and found them too much work.

So I came up with my own guide.

I have only 110GB of space allocated for my linux setup, so dividing it between a data partition and the OS is a tough call. I do not keep a lot of data on my laptop anyway.

Encrypting my boot partition and keeping the decryption keys for the root partition inside it makes no sense to me. Even the default Ubuntu setup does not do this. Having said that, GRUB very recently gained LUKS2 support, in case you want to attempt to encrypt it anyway.

I have 16GB of RAM and the concept of swap is foreign to me.

LVM is good if you want to grow your partition space across multiple hard disks, even while your OS is running. I am stuck with the single disk slot in my thinkpad, so this is a little unnecessary.

Prologue

Disclaimer: many things can go wrong when you customize your setup and fiddle with the terminal as the root user. It is recommended to keep a backup of valuable data and do a few trial runs first.

My disk setup is as below.

I have formatted my devices as follows (your device names may vary; do not just copy what you see!):

  • For my root partition I use a 110GB EXT4 partition (/dev/nvme0n1p5)
  • For my boot partition I use a 300MB EXT4 partition (/dev/nvme0n1p6)

Update 9/5/2020: a 300MB boot partition is too small – it should be 500MB. It turns out Ubuntu keeps a few versions of the initrd in the boot partition, and I ran into “Error 24 : Write error : cannot write compressed block” when updating. You can resolve this by removing older kernels (refer here), but it is not pleasant to run into this issue. apt does offer to clean up old kernels for you, but you need to explicitly do so after updating:

apt autoremove

I also have a 260MB EFI partition (/dev/nvme0n1p1), which is mandatory for me because my laptop uses UEFI – you may not have this. Everything else belongs to Windows.

Be sure you jot down the device name of your designated root partition (/dev/nvme0n1p5 in this example). You will use it a lot later.

Partition managers are simple to use, so I skip the steps for setting up the partitions here.

Important: the partition manager offers to encrypt your partition when you format it. Do not enable this. As of this writing it defaults to LUKS1; we want LUKS2.

Setup Encryption

Now we setup encryption on the root partition.

$ sudo -i # proceed the entire guide as root user
# cryptsetup luksFormat /dev/nvme0n1p5

WARNING: Device /dev/nvme0n1p5 already contains a 'ext4' superblock signature.

WARNING!
========
This will overwrite data on /dev/nvme0n1p5 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/nvme0n1p5: 
Verify passphrase:

This replaces our ext4 partition with an encrypted LUKS partition. To verify that we are indeed using LUKS2, check the version (output truncated for brevity):

# cryptsetup luksDump /dev/nvme0n1p5

LUKS header information
Version: 2
Epoch: 3
...

Next, we open the encrypted partition as rootfs so that Ubuntu can be installed inside:

# cryptsetup open /dev/nvme0n1p5 rootfs

Enter passphrase for /dev/nvme0n1p5:

This creates a new device, /dev/mapper/rootfs, that we can read from and write to freely, while encryption and decryption are performed underneath. It is currently unformatted space, so we need to format it as ext4:

# mkfs.ext4 /dev/mapper/rootfs

mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 501760 4k blocks and 125440 inodes 
Filesystem UUID: 31dcba0e-ec56-4321-9578-0abcd162de2f 
Superblock backups stored on blocks:  
       32768, 98304, 163840, 229376, 294912 

Allocating group tables: done                             
Writing inode tables: done                             
Creating journal (8192 blocks): done 
Writing superblocks and filesystem accounting information: done 

Now we can install Ubuntu into /dev/mapper/rootfs.

Supposedly you can also format to ext4 in the graphical installer, but last I tried, it seemed to think it needed to create a partition table, and I could not get Ubuntu to boot properly afterwards.

Install Ubuntu

I will skip ahead a few steps to the disk setup. Here you may notice that there is no guided option that gives you encryption while keeping your dual boot setup. You will need to do a manual disk setup:

Select “Manual” for disk setup

In the “Prepare Partitions” page, let us start with the boot partition /dev/nvme0n1p6:

  • Use as: ext4 journaling file system
  • Format the partition: ✓ 
  • Mount point: /boot

Setup boot partition with parameters as shown

Now it gets a little confusing when it comes to the encrypted partition. For some reason, the Ubuntu installer displays it as if it were a separate device from my hard disk. Select the partition under /dev/mapper/rootfs (itself also named /dev/mapper/rootfs) and set it as follows:

  • Use as: ext4 journaling file system
  • Format the partition: ✓ 
  • Mount point: /

Setup root partition with parameters as shown

After setting both, you will see something like this when you select /dev/nvme0n1 and /dev/mapper/rootfs:

Lastly, if you have an EFI partition (/dev/nvme0n1p1 in my case), that is where you want to install the boot loader:

The steps after this are straightforward, so I will skip ahead to the part where the installation completes. Here, click “Continue Testing” instead of restarting.

Crypttab

If you reboot into your fresh Ubuntu installation now, its kernel will not be able to mount the root partition because it is encrypted. To circumvent this, you need to tell the kernel that the disk is encrypted. This is where crypttab comes in.

crypttab is like fstab in that it tells your system which devices to set up during startup. The key difference is that crypttab is only for encrypted drives, and it is processed before fstab.

Before we can set up crypttab for our fresh installation, we first need to understand that we are currently in the testing image. Any config changes we make in /etc/ therefore affect not our fresh installation (installed in /target/), but the testing image.

To change our terminal environment to the fresh install, execute:

# for n in proc sys dev etc/resolv.conf; do mount --rbind /$n /target/$n; done # change mount points to target
# chroot /target # change root directory to target
# mount -a       # mount all devices in fstab

Now, to inform the kernel to identify your LUKS encrypted root partition /dev/nvme0n1p5 as rootfs, execute the following:

# echo "rootfs UUID=`blkid -s UUID -o value /dev/nvme0n1p5` none luks" >> /etc/crypttab

Verify that your UUID has been added to crypttab:

# cat /etc/crypttab

rootfs UUID=aacc905d-beef-baba-a477-88aa12345fb2 none luks

Now you may be wondering: how is the kernel going to read this setting if /etc/crypttab itself lives on an encrypted disk?

Well, the kernel does not read it from there. You need to execute:

# update-initramfs -u -k all

update-initramfs: Generating /boot/initrd.img-5.3.0-46-generic

As you can see, it updates a file in /boot/, which is unencrypted.

Now restart and boot up your Ubuntu installation and you will be asked to key in a password to decrypt your root partition:

Post Installation

It is important to note that your encryption password is different from your login password; changing one does not change the other. Having said that, it is convenient to keep both passwords the same and disable the login prompt at startup (automatic login) – unless you want to key in your password twice during startup.

Though it is easy to change your login password from your window manager, you need to use the command line to change your LUKS2 password (this process does not re-encrypt your partition, so do not hesitate to change the passphrase on a whim):

# cryptsetup luksChangeKey /dev/nvme0n1p5

Enter passphrase to be changed: 
Enter new passphrase: 
Verify passphrase:

The wording “Enter passphrase to be changed” is very deliberate – LUKS allows multiple passphrases to be registered for the same device.

I now end this post with a screenshot of my desktop. Cheers!

Setting Up Vscode for Single-file C++ Builds

So I have these self-contained C++ files, and with a shortcut key I want to automatically build and run the active file, and then use a debugger as well if I want to. Who would want to do this? Either you are just learning the language and want a quick way to prototype, or you are doing competitive programming (UVa, Codeforces, etc). The latter reason is why I’m writing this; my colleagues and I just started an “Algo Party” where we gather together to solve coding puzzles.

I’ve only tested this on macOS and Linux, but if you are using MinGW and have GDB installed, it should work on Windows too.

Build and Run Any Random C++ File

When you open any C++ file, vscode is smart enough to suggest a bunch of extensions you should install. But after installing them, you will realize that you still can’t immediately run hello world from your editor. To run the file in your active editor, install Code Runner.

Now you can run the active file with the shortcut cmd+alt+n or ctrl+alt+n. There are some nice settings to have, like running in the terminal, saving the file before running, and that juicy C++14 goodness. You can configure them via the extension settings:

"code-runner.runInTerminal": true,
"code-runner.saveFileBeforeRun": true,
"code-runner.executorMap": {
    // ...leave as is
    "cpp": "cd $dir && g++ $fileName -std=c++14 -o $fileNameWithoutExt && $dir$fileNameWithoutExt && echo ''",
    // ...leave as is
}

Note that echo '' is just to print a new line after running the program. If you know you will always execute the program with an input file, simply add <input.txt to the command above (before && echo '').

Press Ctrl+Alt+n to build and run the active file

Now if that’s all you want, then you can stop here. The next section lets you use a debugger, but requires a different setup.

Using Vscode Tasks To Build The Active File

This one is a project level configuration, so it assumes you keep your C++ files in one folder, like how I organize my solutions to my CodeForces problems in this git repository. To get started you can simply copy the json files from my .vscode folder over to yours.

Now on any C++ file in that project, simply hit cmd+shift+b to build the file. In addition to building the file, I’ve also added a -g argument to the compiler to generate debugging symbols as well. You can remove this by tweaking tasks.json. The file itself is fairly straightforward to understand. I already have a task that builds and runs the file at once, called “build and run cpp” (I always run my solutions with a file “input.txt”; if that’s not what you want you can simply tweak the command).

You can assign this to a shortcut key by going to File > Preferences > Keyboard Shortcuts and click the link in the message “For advanced customizations open and edit keybindings.json”. There you can assign the task to a shortcut you want (I use ctrl+9):

    {
        "key": "ctrl+9",
        "command": "workbench.action.tasks.runTask",
        "args": "build and run cpp"
    }

Build and run the active file using vscode tasks

Setting Up The Debugger

Hit f5 (or click the green arrow if you are in debug mode) to launch the debugger. This will launch an external terminal instead of using the integrated one.

Using the debugger in vscode

If you don’t use an input file then all is good; otherwise it is a hassle to constantly key in inputs into the program. Unfortunately for us, we can’t pass in the input file as an argument where the debugger is concerned (at least, for now).

Fortunately there is a simple workaround.

In the STL, both console input and file input are input streams. So what we can do is shadow cin with our own ifstream; because it is declared in a narrower scope, our version takes precedence:

#include <iostream>
#include <fstream> // std::ifstream

using namespace std;

int main()
{
    ifstream cin("input.txt"); // replace cin with our version

    // rest of the code remains the same:
    int p, q;
    cin >> p >> q;
}

Don’t forget to comment out that custom cin before you submit your solution to the online judge!

Deep Q-Learning 101: Part 3 – Deep Q-Learning

This is a 3-part series on Deep Q-Learning, written such that undergrads with high school maths should be able to understand it and hit the ground running on their deep learning projects. This series is really just the literature review section of my final year report (which is on Deep Q-Learning) broken into 3 chunks:

Introduction: The Atari 2600 Challenge

The Atari 2600 (or Atari VCS before 1982) is a home video game console released on September 11, 1977 by Atari, Inc.

Atari 2600 with standard joystick

The challenge is as follows: can an AI, given the same inputs as a human player, play a variety of games without supervision? In other words, if the AI can hold a joystick and see the same screen as a human player, can it teach itself how to play the game just by playing the game?

This means that instead of having the programmer hard-code the rules of the game while the AI learns an optimal way to play (as is the case with most prior RL agents), the AI will need to figure out the rules of the game by observing how the score changes with the moves it makes.

Using an actual Atari 2600 would prove too arduous because there is a great deal of unpredictability in the physical world; to circumvent this, researchers at DeepMind used an emulator called the Arcade Learning Environment or ALE (Bellemare, Naddaf, Veness, & Bowling, 2013), which simulates a virtual Atari 2600 inside our computer. From there we can program inputs into ALE, and receive game screens (210 \times 160 pixel images) and the score (a number).

There are 18 possible actions that the agent can take. I list them in the table below:

Possible actions an agent can take in ALE

The following sections dive into the individual procedures that composes Deep Q-Learning.

Preprocessing

The raw Atari 2600 screens (210 \times 160 pixels with a 128-colour palette) need to be preprocessed to reduce the input dimensionality and remove unwanted artifacts. Frame encoding is used to remove flickering (not all sprites are rendered in every frame, due to a hardware limitation of the Atari 2600) that is usually not noticeable to the human eye. The images are then scaled to 84 \times 84 frames, from which we extract the luminance values. The luminance can be calculated from RGB via the formula 0.2126\cdot R + 0.7152\cdot G + 0.0722\cdot B. This preprocessing step (defined as a function \phi) is applied to the last 4 frames at any given time step, giving an input of dimensions 84 \times 84 \times 4.
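As an illustration (this is not DeepMind’s code), a preprocessing function along these lines could be sketched in Python, assuming NumPy and OpenCV are available for the resizing; flicker removal is omitted for brevity:

import cv2
import numpy as np

def phi(frames):
    """Convert a list of 4 raw 210x160x3 RGB frames into an 84x84x4 input."""
    processed = []
    for frame in frames:
        # Luminance from RGB (assumes channel order R, G, B)
        gray = (0.2126 * frame[..., 0] +
                0.7152 * frame[..., 1] +
                0.0722 * frame[..., 2]).astype(np.uint8)
        # Downscale to 84x84
        processed.append(cv2.resize(gray, (84, 84)))
    return np.stack(processed, axis=-1)  # shape (84, 84, 4)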

Q-Network Architecture

Q-Learning’s iterative update converges to an optimal solution, but in practice it is not feasible: Q-values are kept separately for every state-action pair, with no generalisation. With large state spaces (such as every possible configuration of pixels in an image), we would quickly run out of memory computing every single Q-value. Therefore, it is more common to use an approximator to estimate the Q-values. For this reason a non-linear function approximator (an ANN) is used. Such ANNs are known as Q-networks. The current estimate then becomes:

y = r + \gamma \max_{a'} Q(s', a', w)

where w denotes the weights of the ANN. The table below shows the CNN architecture used in Deep Q-Learning (conv = convolutional layer, fc = fully connected layer):

DeepMind’s Q-Network Architecture

The output of the CNN is the predicted score for each of the 18 possible actions in ALE; Deep Q-Learning then simply chooses the action (an integer between 0 and 17) with the highest predicted score.
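To make the table concrete, here is the same architecture (as specified in Mnih et al., 2013) sketched in PyTorch – the implementation referenced later uses Neon instead, so treat this purely as an illustration of the layer sizes:

import torch.nn as nn

# 84x84x4 input -> conv(16, 8x8, stride 4) -> conv(32, 4x4, stride 2) -> fc(256) -> fc(18)
q_net = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=8, stride=4),   # -> 16 x 20 x 20
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2),  # -> 32 x 9 x 9
    nn.ReLU(),
    nn.Flatten(),                                # -> 2592
    nn.Linear(32 * 9 * 9, 256),
    nn.ReLU(),
    nn.Linear(256, 18),                          # one Q-value per ALE action
)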

Notice that there are no pooling layers in the CNN. What pooling layers do is make the CNN insensitive to the location of an object in the image (we say the CNN becomes translation invariant). This would come in handy for an image classification task, but for a video game we do not want to discard the locations of the sprites – these are crucial in determining the reward.

The agent plays the game one action at a time, and with every action it takes (an action is chosen by running the last 4 frames through the weights of the CNN), a training step is executed in which past frames are sampled from the pool of data and used to train the CNN.

The Loss Function

We now introduce a loss function L (a squared error loss) that tells us how far we are from Bellman (Q^*(s, a)). At a time step t, we set a Bellman backup y_t (also known as the target), followed by the loss function (de Freitas, 2014):

y_t = \mathbb{E}_{s'}\left\lbrace r + \gamma \max_{a'} Q(s', a', w_{t-1})\right\rbrace

L_t(w_t) = \left[y_t - Q(s, a, w_t) \right]^2

What we want to do is update the weights w_t of the Q-function Q(s, a, w_t), but for the target we use the previous weights w_{t-1} (this is not mentioned in the research paper (Mnih et al., 2013), but it is evident in the implementation). What we are essentially doing is approximating Bellman by minimizing the difference between the target estimate y and the current estimate Q(s, a, w). Notice that this difference is actually the TD error discussed earlier. So we use a supervised learning technique (neural networks), but alter the loss function so that it incorporates an RL technique (Q-Learning). Another way to see this is to view y as the critic and Q(s, a, w) as the actor; the critic informs the actor how well it has done.

Now, to update the weights via backpropagation, we need to compute the gradient, which is the derivative of the loss function L_t(w_t):

\nabla_{w_t} L_t(w_t) = \left( r + \gamma \max_{a'} Q(s', a', w_{t-1}) - Q(s, a, w_t) \right) \nabla_{w_t} Q(s, a, w_t)

During implementation this is normally abstracted away by a neural network library like Neon; we simply declare that we are using a mean squared loss, specify the target y and Q(s, a, w) at each iteration, and the library figures out the loss and gradients during forward and backward propagation.
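To make this concrete, a single training step with the PyTorch sketch from earlier (again an illustration, not the Neon code) might look roughly like this, with target_net holding the older weights w_{t-1}:

import torch
import torch.nn.functional as F

def train_step(q_net, target_net, optimizer, s, a, r, s_next, gamma=0.99):
    # s, s_next: float tensors of shape (batch, 4, 84, 84)
    # a: long tensor of shape (batch,); r: float tensor of shape (batch,)
    with torch.no_grad():
        # y = r + gamma * max_a' Q(s', a', w_{t-1}), computed with the older weights
        y = r + gamma * target_net(s_next).max(dim=1).values
    # Q(s, a, w_t) for the actions that were actually taken
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(q, y)  # [y - Q(s, a, w)]^2 averaged over the batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()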

Sampling From Experience

So now, as the AI plays a game in ALE, it receives a stream of inputs, along with the game score. Instead of training the Q-network with this stream of inputs directly, we store these experiences in a replay memory (Lin, 1993). In terms of MDP, for each discrete time step t, we store an experience e_t as (s_t, a_t, r_t, s_{t+1}) in our replay memory D=e_1, \ldots,e_N, where N is the maximum capacity of the replay memory (a fixed constant defined by us).

Now, at each update step of the Q-network, we apply backpropagation to samples of experience drawn at random from our replay memory D. This breaks the similarity of subsequent training samples, and makes the training task much more similar to supervised learning.
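A minimal replay memory with uniform random sampling can be sketched in a few lines of Python (the capacity and batch size below are placeholders, not the values used in the paper):

import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)  # oldest experiences are dropped once full

    def store(self, s, a, r, s_next):
        self.memory.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

memory = ReplayMemory(capacity=100_000)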

Exploration-Exploitation

Now comes another problem: the AI initially explores the game, but will always choose the first strategy it finds. This is because the weights of the CNN are initialized with random values, so the actions of the AI at the start of training will be random as well. In other words, the AI tries out actions it has never seen before at the start of training (exploration). However, as the weights are learned, the AI converges to a solution (a way of playing) and settles down with that solution (exploitation).

What if we do not want the AI to settle for the first solution it finds? Perhaps there could be better solutions if the AI explored a bit longer. A simple fix for this is the \epsilon-greedy policy, where \epsilon is a probability value between 0 and 1: with probability \epsilon the agent selects a random action, and with probability 1-\epsilon it exploits what it has learned.
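In code, the \epsilon-greedy choice is a thin wrapper around the Q-network’s predicted scores – a sketch:

import random

def select_action(q_values, epsilon, num_actions=18):
    # Explore with probability epsilon, otherwise exploit the best known action
    if random.random() < epsilon:
        return random.randrange(num_actions)
    return int(max(range(num_actions), key=lambda a: q_values[a]))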

The Deep Q-Learning Algorithm

We now piece everything we have discussed so far into the Deep Q-Learning Algorithm (Mnih et al., 2013), given in Algorithm 4:

Notice that when we select an action using Q^*, we use the previous weights w_{t-1} instead of the current weights w_t.

I will now clarify the algorithm’s use of the weights w. Remember that in the loss function, to calculate the target y_t, we used the previous weights w_{t-1}. What happens if t=1? We use random weights. In implementation, this simply means we keep track of 2 sets of weights: one from a past time step, and one from the current time step. Note that we do not necessarily need to use the weights immediately before w_t; as long as they belong to a past time step (w_{t-k}, where k is a fixed constant), this will work.
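In the PyTorch sketch from earlier, keeping two sets of weights just means holding a second copy of the network and refreshing it every k steps (k being whatever constant you choose):

# every k steps, copy the current weights into the "past" network
if step % k == 0:
    target_net.load_state_dict(q_net.state_dict())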

Afterword

I took some material from Tambet Matiisen’s great series on Deep Q-Learning (Demystifying Deep Reinforcement Learning, Deep Reinforcement Learning with Neon), and also referred to his Simple DQN implementation. There are also some parts that I referenced from de Freitas’s lectures on Deep Reinforcement Learning (part of his machine learning course).

References

  • Bellemare, M. G., Naddaf, Y., Veness, J., & Bowling, M. (2013, 06). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47, 253–279.
  • de Freitas, N. (2014). Machine learning: 2014-2015. University of Oxford.
  • Lin, L.-J. (1993). Reinforcement learning for robots using neural networks (Tech. Rep.). DTIC Document.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.