Setting Up CUDA + cuDNN for Theano in Ubuntu

These are my personal notes on setting up Theano with my laptop GPU. It is basically an amalgam of various sources that I pieced together to make everything work, which I will link at the end of this post.

As of this writing, this is my setup:

  • Linux Mint 18 (Cinnamon) 64-bit (based on Ubuntu 16.04)
  • NVIDIA GT740M (Kepler architecture)
  • Theano 0.8.2

NVIDIA Graphic Drivers

Linux Mint gives you an option to install the drivers from the settings, but that version may be dated. To get the latest drivers, install them via the graphics-drivers PPA: https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa

IMPORTANT: You need to install the drivers first, before installing CUDA. The order is very important, since CUDA checks what version of the graphics driver you are using and installs accordingly. On a related note – should you upgrade/downgrade your graphics driver, you will need to install CUDA again. I emphasize this because, should you fail to do so, the errors that follow give no indication whatsoever that you screwed this step up.

CUDA

Choose CUDA for download at https://developer.nvidia.com/cuda-downloads. As of this writing the latest version is CUDA 7.5, so that is what I downloaded. After you download, there are instructions that they suggest you run. Not all of them will work, so follow these steps instead:

Open a terminal in the download directory and enter the first command suggested for you on the downloads site. It should look like this:

sudo dpkg -i cuda-repo-ubuntu1504-7-5-local_7.5-18_amd64.deb

Change your /var/cuda-repo-7-5-local/Release to the following:

Origin: NVIDIA
Label: NVIDIA CUDA
Architecture: repogenstagetemp
MD5Sum:
 51483bc34577facd49f0fbc8c396aea0 75379 Packages
 4ef963dfa4276be01db8e7bf7d8a4f12 21448 Packages.gz
SHA256:
 532b1bb3b392b9083de4445dab2639b36865d7df1f610aeef8961a3c6f304d8a 75379 Packages
 2e48cc13b6cc5856c9c6f628c6fe8088ef62ed664e9e0046fc72819269f7432c 21448 Packages.gz

Run the following, ignoring any warnings about invalid signatures:

sudo apt-get update

Then run:

sudo apt-get install cuda

Keep an eye on the output; there should not be any errors. This will install CUDA in /usr/local/cuda/

Add CUDA To Environment Paths

Open ~/.bashrc  and append the following (you may need to do this in sudo mode):

export PATH=/usr/local/cuda/bin:$PATH 
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

Once you save, enter in a terminal:

source ~/.bashrc

This will add the CUDA executables to the environment paths. Note that currently opened terminals will not have CUDA in their environment paths; you will need to restart them (close and reopen) for the changes to take effect.

Then open a new terminal and type nvcc --version (the NVIDIA CUDA Compiler) to see whether the environment is set up correctly. It should print the compiler version rather than complain that the command is not found.

Solving gcc/g++ Incompatibilities

CUDA requires a compatible C/C++ compiler to work, and the gcc/g++ bundled with Ubuntu 16.04 is too new for CUDA 7.5. To fix this, install the older 4.9 versions:

sudo apt-get install gcc-4.9 g++-4.9

Then create soft links to these specific versions inside the CUDA binaries folder, so nvcc picks them up:

sudo ln -s /usr/bin/gcc-4.9 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-4.9 /usr/local/cuda/bin/g++

IMPORTANT! Now, if you run import theano for the first time with the THEANO_FLAGS environment variable containing device=gpu, Theano complains that CUDA is not available. To run any Python script that uses Theano, you need to prepend the command with THEANO_FLAGS=device=gpu,nvcc.flags=-D_FORCE_INLINES. All Python scripts executed here will use this workaround. Alternatively there is a fix here: https://github.com/Theano/Theano/issues/4425 (thanks Anonoz for the suggestion).
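If you would rather not prepend the flags every time, the same settings can also live in a ~/.theanorc file. A minimal sketch that simply mirrors the flags used in this post:

[global]
device = gpu

[nvcc]
flags = -D_FORCE_INLINES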


Now running the following line:

THEANO_FLAGS=device=gpu,nvcc.flags=-D_FORCE_INLINES python -c "import theano; print(theano.sandbox.cuda.device_properties(0))"

Should give you something like this:

Using gpu device 0: GeForce GT 740M (CNMeM is disabled, CuDNN not available)
{'major': 3, 'tccDriver': 0, 'kernelExecTimeoutEnabled': 1, 'deviceOverlap': 1, 'driverVersion': 8000, 'warpSize': 32, 'concurrentKernels': 1, 'maxThreadsPerBlock': 1024, 'computeMode': 0, 'canMapHostMemory': 1, 'maxGridSize2': 65535, 'maxGridSize1': 65535, 'maxGridSize0': 2147483647, 'integrated': 0, 'minor': 0, 'ECCEnabled': 0, 'runtimeVersion': 7050, 'textureAlignment': 512, 'multiProcessorCount': 2, 'clockRate': 895000, 'totalConstMem': 65536, 'name': 'GeForce GT 740M', 'memPitch': 2147483647, 'maxThreadsDim1': 1024, 'maxThreadsDim0': 1024, 'maxThreadsDim2': 64, 'coresCount': -2, 'sharedMemPerBlock': 49152, 'regsPerBlock': 65536}

cuDNN

NVIDIA provides a library for common neural network operations that especially speeds up Convolutional Neural Networks (CNNs). For Lasagne, it is necessary that you install this to get a convnet to work. It can be obtained from NVIDIA (after registering as a developer): https://developer.nvidia.com/cudnn

Don’t expect an instant email upon registration. For some reason it takes quite a while for them to send that email. I waited about 30 minutes.

Once you are in, choose version 4. That’s the one currently supported by Theano.

To install it, copy the *.h files to /usr/local/cuda/include and the lib* files to /usr/local/cuda/lib64
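If you downloaded the cuDNN tarball, the copying boils down to something like the following (a sketch: it assumes the archive extracts into a cuda/ folder, and the exact archive name depends on the cuDNN version you picked):

tar -xzvf cudnn-<version>.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/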

To check whether it is installed, run

THEANO_FLAGS=device=gpu,nvcc.flags=-D_FORCE_INLINES python -c "from theano.sandbox.cuda.dnn import dnn_available as d; print(d() or d.msg)"

It will print True if everything is fine, or an error message otherwise. There are no additional steps required for Theano to make use of cuDNN.

Again, if everything is successful, you can run your Python scripts like so (the following is deep_q_rl, a Theano-based implementation of Deep Q-learning using Lasagne):

 THEANO_FLAGS=device=gpu,nvcc.flags=-D_FORCE_INLINES python run_nips.py --rom breakout

References (in order):


Converting a Java Project to Use JPA

In this post I walk through some of the gotchas when converting a Java application that works with raw SQL strings to one using ORM (Object Relational Mapping) via JPA (Java Persistence API). I will be converting a simple application from “Generic Java GUI Netbeans Project with Embedded Database” that only has two entities: Post and Comment.

The finished product is in a git branch called JPA. If you don’t use git you can download this sample project as a zip from MediaFire.

You can view all the changes to convert to JPA in this Github diff.

Netbeans makes the initial setup very simple by generating persistence.xml for you (this is where you find the persistence unit name), as well as generating the entities from your database.

SQL needs to be rewritten to Java Persistence Query Language

This isn't much of an issue, really; in the long run it does you a favour since JPQL is database vendor independent.

Change from:

ResultSet rs = db.executeQuery("SELECT * FROM post ORDER BY date_created DESC");

To:

List<Post> rs = db.createQuery("SELECT p FROM Post p ORDER BY p.dateCreated DESC").getResultList();

Default Values are Lost

I noticed something strange when adding a Post entity: the date_created attribute shows up as null once I convert to JPA. My DDL (Data Definition Language) looks like this (Derby DB SQL):

CREATE TABLE post ( 
    id INT PRIMARY KEY GENERATED ALWAYS AS IDENTITY(START WITH 1, INCREMENT BY 1),
    name VARCHAR(250) NOT NULL,
    content VARCHAR(500),
    date_created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

So each time I create a Post, I expect the date_created attribute to hold the current date, but it doesn't. Every column where I declared a DEFAULT in SQL basically ends up as null when I insert through JPA.

Great…

The workaround is to code the default values into the attribute fields of the entity classes. Here is the dateCreated attribute inside Post:

@Column(name = "DATE_CREATED")
@Temporal(TemporalType.TIMESTAMP)
private Date dateCreated = new Date(); // new Date() returns the current timestamp

Exceptions in JPA Only Happen at Runtime

So when converting the code, I realized that in the places where SQLException would appear, Netbeans puts up a warning saying that SQLException is never thrown there:

SQLException is never thrown

That's OK, I think. But what's weird is that it offered to remove the try-catch block as a solution! Whoa whoa whoa, stop. Aren't there exceptions? Well, it turns out there's PersistenceException.

The problem? It's a subclass of RuntimeException. I don't know exactly why the exception only surfaces at runtime, but without the try-catch block the operation fails silently (here, the Post entity cannot have a null value for the Name attribute).

Now for a before and after. Before:

String sql = "INSERT INTO post (name, title, content) VALUES (?, ?, ?)";
try {
    PreparedStatement ps = core.DB.getInstance().getPreparedStatement(sql);
    ps.setString(1, authorField.getText());
    ps.setString(2, titleField.getText());
    ps.setString(3, postTxtArea.getText());
    ps.executeUpdate();
    
    JOptionPane.showMessageDialog(this, "Post has successfully been added.", "Successfully added post!", JOptionPane.INFORMATION_MESSAGE);
    dispatchEvent(new WindowEvent(this, WindowEvent.WINDOW_CLOSING));
} catch (SQLException ex) {
    JOptionPane.showMessageDialog(this, ex.toString(), "Invalid content... or some shit like that", JOptionPane.ERROR_MESSAGE);
}

After:

try {
    Post p = new Post();
    p.setContent(postTxtArea.getText());
    p.setName(authorField.getText());
    p.setTitle(titleField.getText());
    core.DB.getInstance().persist(p);
    
    JOptionPane.showMessageDialog(this, "Post has successfully been added.", "Successfully added post!", JOptionPane.INFORMATION_MESSAGE);
    dispatchEvent(new WindowEvent(this, WindowEvent.WINDOW_CLOSING));
} catch (PersistenceException ex) {
    JOptionPane.showMessageDialog(this, ex.toString(), "Invalid content... or some shit like that", JOptionPane.ERROR_MESSAGE);
}

The following dialog should pop up when the Author field is empty:

PersistenceException

Well, this output doesn’t just happen automatically. There’s still one more issue that I’ll get to next:

Empty Strings are not Null

In my DDL, I have a rule that Post cannot have a null value for the Name attribute. Yet for some reason the string “” is not a null value in JPA. It is actually stored in the database as “”.

How is “” not null?

stored as empty string

This kind of shit doesn’t happen when simply working with raw SQL strings.

There are workarounds for this online: one of them is using the @Size(min=1) or @NotNull annotations. Unfortunately I'm using Java SE 8 (@Size is currently supported up to Java EE 7 as of this writing) and I'm not using Spring (for @NotNull).

So what I ended up doing was placing the validation in the setter method:

public void setName(String name) {
    if (name.trim().isEmpty()) return; // empty strings begone!
    this.name = name;
}

As you can imagine, I have to do this for every attribute in every entity that doesn’t accept empty strings.

You Need to Manually Manage One To Many Relationships

I had an issue when adding a one-to-many relationship: Post has many Comments, and when I add a Comment to a Post I don't see the change immediately; I had to restart the application to see the new Comments. I would have thought that JPA would update the Comments collection inside the Post, but it didn't.

Here's what the setPostId method in my Comment entity looks like:

public void setPostId(Post postId) {
    this.postId = postId;
}

This is because (as aggravating as it sounds) in JPA it is the responsibility of the application, or the object model, to maintain relationships. By that they mean it is the programmer's job to manually wire up the link when a setter is called:

public void setPostId(Post postId) {
    this.postId = postId;
    postId.getCommentCollection().add(this); // programmer needs to add this himself
}
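An alternative to wiring this up inside the Comment setter is to expose a helper on the Post side and always add comments through it. A sketch (addComment is a hypothetical helper, using the field names of the generated entities):

public void addComment(Comment comment) {
    comment.setPostId(this);             // owning side: this is what JPA actually persists
    getCommentCollection().add(comment); // inverse side: keep the in-memory collection in sync
}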

I'm going to be a bit honest here: this is kinda inconvenient. In Rails this sort of thing is done automatically, and it should be. In JPA I need to remind myself that each time I have a one-to-many relationship I must manually link both sides, or it doesn't update automatically/only updates after the application is restarted.

Conclusion

There may be other quirks I've missed, since I'm only dealing with a very simple project that doesn't even have update and delete, so feel free to let me know in the comments below.

Yea, I know I complain a bit, but working with raw SQL strings is going to be pretty hard to manage in the long run. You're dealing with plain strings, most errors only pop up at runtime, and your IDE can't assist you much. With JPA, when you're writing in the Java Persistence Query Language, Netbeans can actually figure out data types and attribute names:

autocomplete in java persistence language string

So unless you really need that sort of performance boost, using an ORM would be the right call.

Generic Java GUI Netbeans Project with Embedded Database

So your team just wants to create a Java GUI project that requires a database. How do you minimize the hassle of configuring that database? For this reason I created this simple generic Micropost project to serve as a template to start from.

Link to Project: https://github.com/bruceoutdoors/MicropostsExample


This guide requires that you are already familiar with SQL and java GUI development.

Configuration and Setup

The only thing you need to run this project is Netbeans. Download the ZIP from the site, or git clone the repo if you know git. From there, open the project in Netbeans and click Run.

That’s it.

Did I mention configuration? There is zero configuration. Even when you deploy your application as a JAR file, it is the same thing; it runs automatically. Just like that.

Prerequisite Knowledge

You need to understand a few things:

  1. Embedded databases
  2. Migrations

Embedded Database

An embedded database is a database that does not need a server and is embedded in an application. This means the application itself manages the database directly. Here, I am using Derby DB (or Java DB). The JDK comes prepackaged with it, though to avoid trouble I have packaged that library into the project itself.
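To make "no server" concrete, this is roughly what opening an embedded Derby connection looks like in plain JDBC (a sketch; the template project wraps this in its own DB class, and the "database" folder matches the one described below):

import java.sql.Connection;
import java.sql.DriverManager;

public class EmbeddedDerbyDemo {
    public static void main(String[] args) throws Exception {
        // The URL points at a folder on disk; ;create=true creates it if it doesn't exist.
        // No server process is involved: the Derby engine runs inside this JVM.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:database;create=true")) {
            System.out.println("Connected to embedded Derby in ./database");
        }
    }
}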


Notice that when you launch the application, a folder called “database” and a text file “derby.log” are created. These are the files you remove should you want to start clean.

You can view the schema and data in the “database” folder from Netbeans. Under the services tab, right click on Databases and select New Connection…

netbeans new connection

Select Java DB (Embedded) as the JDBC driver and click Next. Where you see JDBC URL, append the directory of the database folder. Leave the User Name and Password fields blank. Click Test Connection and it should indicate “Connection Succeeded”.

JDBC Url

Then click Next until you cannot click it anymore, followed by Finish. Now you can peep inside your embedded database, and execute queries as you please:

netbeans database service

IMPORTANT NOTE: Embedded Derby DB does not support multiple connections at once. If you connect to the database via the Netbeans database service, you can't run your application. You have to disconnect (right-click the connection and disconnect it) from the database first before running your application again.

Migrations

When you first create your database, you don't always figure out every entity and relationship in one shot. Inevitably, you will want to make incremental changes (add or remove tables, or modify existing ones), migrating the database to a different version. Chances are you also don't want to wipe out your existing data because of it. This is why migrations exist.


This project uses the Flyway migration library. It is as simple as writing a series of SQL files named V1__blablabla.sql, V2__blablabla.sql and so on; these SQL files are then executed in that order. When you make changes to your database, you simply add another migration file (or you can just modify a single migration and wipe out the database every time). If you are working together using a version control system like git, everyone's database will be synchronized automatically to the latest structure with existing data intact.

Migrations are located in PROJECT_DIR/src/db/migrations:

(screenshot of the migrations folder in the project tree)
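Under the hood, applying the migrations at startup boils down to something like this sketch (Flyway 3.x-era API; the exact bootstrap code and location string in the template may differ):

import org.flywaydb.core.Flyway;

public class Migrator {
    public static void migrate() {
        Flyway flyway = new Flyway();
        // Same embedded Derby database the application uses.
        flyway.setDataSource("jdbc:derby:database;create=true", "", "");
        // Classpath folder holding V1__*.sql, V2__*.sql, ...
        flyway.setLocations("db/migrations");
        flyway.migrate(); // applies only the migrations not yet recorded in Flyway's metadata table
    }
}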

Each time the application is launched, it automatically migrates the database to the latest version.

That is all you need to know. For everything else, go figure it out on your own. Good luck!

MMU Time Tabler – Retrospection and Surmise

In July 2015, I launched MMU Time Tabler: a web app designed to ease the frustration of planning subjects in my university.

Birth of Time Tabler

After using my university's original timetable site for one trimester, I concluded it was too arduous to use and decided I could do a better job. Time Tabler was built in Rails in 5-6 weeks; half of the effort was really just screen scraping the data (it was a nightmare and a mess) and putting it together in a meaningful way. Unlike the assortment of things the original timetable site could do, like seeing room bookings, viewing what lecturers are teaching, and viewing student group timetables, Time Tabler has only a single goal: plan your subject sessions.


There was no SEO or Google Ads or any sort of advertising aside from sharing on Facebook, but students were very receptive to the project. For the first time since Seraphim, pretty much every student in MMU (Melaka and Cyberjaya campuses included; not sure about Nusajaya) knew which lecture and tutorial sessions to choose to suit their time, and which lecturers they wanted, in a few seconds. I had fellow MMU students leaving supportive comments on my YouTube video, received emails from people I don't know thanking me, had groups like MMU Confessions 2.0 and MMU Editorial Board publicizing for me, and fellow friends sharing my site. I even had one guy trying to woo me over to Quintiq.

Here's the Google Analytics for the site between 21 September and 21 October (the time when people register their subjects):

(screenshot of the Google Analytics report)

That's over a hundred students using my site every day; a site that cannot be found from a Google search. More than half of all the traffic came from people sharing on Facebook.

Looking Back

I felt a deep sense of gratitude to my fellow MMU students for their support. There's also Hackerspace, headed by Willie, a club that pulled together programmers to build cool stuff. Most of these fellow programmers became my good friends, who offered guidance (most of them are Rails people) and showed great enthusiasm for Time Tabler when I first presented it.

I initially wanted to build the site as something of a proof of concept, then shut it down after I got the publicity I wanted if MMU decided they were not going to support it. The reason being that it takes time to scrape the data from MMU's timetabling site (it's not as automated as it should be), and that if the site changes even a bit, I have to dig through my code to figure out what went wrong. This is exactly what happened when the Trimester 2, 2015 subject sessions were released; I spent half a day banging my head.

I didn't have to scrape MMU's timetabling site daily, so I never bothered to clean up my messy scraping code. Good thing too, because MMU's timetabling is going to get revamped in the future, and I will have to read the data from a CSV file instead of scraping thousands of web pages.

One of the greatest achievements a developer can have is to have built something that benefits other people. Even though I’m not earning revenue (sadly MMU doesn’t want to buy my system ): ), I felt like I’ve done something meaningful with my time.

Surmise

I have in mind to fit in some of the features students have been requesting. Currently what distinguishes Time Tabler from Seraphim is that Seraphim could figure out the combination of sessions that will not clash, whereas Time Tabler expects you to click away and figure it out yourself. Since you can visualize clashes it's a small matter, though having it automatically generated is a neat feature, not to mention a worthy challenge for me.

I don't intend to maintain Time Tabler forever. If the data extraction process can be automated, good, but there is little certainty about what MMU will change next in their timetabling system. Hopefully by that time MMU will either have me integrate Time Tabler into their system, or place Time Tabler's features in their existing system. Otherwise, it is destined to share the same fate as Seraphim.

Encouragement to Fellow Coders

Before I go, I want to leave a message to fellow coders like me.

If you are just starting out wanting to do some good in the world, here is what I learned: you don't have to have "arrived" to make a difference. By "arrived" I mean being some demigod of coding who knows all the computer science knowledge in the world. MMU Time Tabler didn't have any kind of fancy algorithm shit going on; I merely sorted existing data from MMU's timetabling website and displayed it in a user-friendly manner.

I had no experience developing with Rails or Ruby prior to this, so much of the code I've written is probably crap, but people use it anyway. Because at the end of the day it's not how badass your skills and knowledge are (don't neglect them, though); what's more important is finding what frustrates you and using it to drive yourself to a solution. When I saw how hard the original MMU timetabling site was to use, I had three choices: I could continue to complain about how bad it is, I could just accept that it's crappy but probably better than what other universities have, or I could see it as an opportunity to build something that could help others.

VB.NET to Access: Tutorial + Example

Abstract

In this post I run through getting your VB.NET program to read and write to an Access database (*.accdb, *.mdb).

Introduction

So the target audience is my fellow degree friends at MMU taking a notorious subject called Software Engineering Fundamentals. In our batch it just so happens the lecturers decided that for our prototype app we no longer have the freedom to choose whatever language and frameworks we want, but are forced to use Visual Basic.

So anyhow, call my Googling skills subpar, but the most recent tutorial by Microsoft on connecting your VB.NET app to Access is here, and it's pretty dated. So I decided I'd write my own, after banging my head a bit. Here's what I'm using:

  • Visual Studio 2013
  • Access 2013
  • .NET 4.5

The tutorial itself is really a no-brainer, though I expect you to know a thing or two about coding before proceeding.

The Database

I won’t go through much of how to create and manage Access databases in this tutorial. There are plenty of guides out there to refer to. I will be uploading the sample database used though:

Download the example database here.

Here's what it looks like:

(screenshot of the example database opened in Access)

TIP: After you've finalized or updated your database, you should compact it. This makes the database file smaller:

(screenshot of the compact database option in Access)

Once you’re done checking out the database, save whatever changes you’ve made and close Access before proceeding to the next step.

Connect to Database

In Visual Studio, create a Windows Forms Application with Visual Basic as the language of choice.

In Server Explorer (View > Server Explorer to show it), click “Connect to a Database”:

(screenshot of the Connect to a Database button in Server Explorer)

In “Data Source”, change it to use “Microsoft Access Database File”, then select your database directory to locate your *.accdb file.

Now click “Test Connection”.

Should you come across an error that says:

The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine.


You need to install some drivers: the 2007 Office System Driver: Data Connectivity Components. This is important, as it is also needed for deployment; your program will throw this exception when run without it:

(screenshot of the runtime exception)

The link provided is the 32-bit version. I'd recommend not being too smart and getting the 64-bit version; Visual Studio will be looking for the 32-bit driver unless you explicitly ask it to look for the 64-bit one.

Anyhow, after you've done this, clicking “Test Connection” will result in “Connection Succeeded”. Leave the database credentials at their defaults, as they are not important.

Your database should now be registered. Open up “Data Connections” in your Server Explorer and you should see “DB.accdb”. Expand “Tables” and you should see a Maid table there.

Add Data Source

Under the “Data Source” tab, select “Add new data source”.

Choose “Database” -> “Dataset” -> “DB.accdb”, and click “Next”. Now be mindful of the dialog that appears:

(screenshot of the Data Source Configuration Wizard prompt asking whether to copy the database file into your project)

This is often overlooked by your seniors, because we are so used to clicking through yes, yes, yes in prompts.

Click “Yes”. For most intents and purposes this is what you would expect if you are developing your application. Bear in mind that whatever changes you wish to make to your database from now on, you should make them on the database file that's in your project directory.

We will use the default connection string that Visual Studio provides (“DBDataSet” for my case). Click Next.

Under “Tables”, Select the only table there and click Finish.


Notice that in your Solution Explorer there is a new file, “DBDataSet.xsd”. As you will soon learn, you can't delete data sources from the Data Sources tab; you do it by removing this *.xsd file.

Access the Access Database

Drag and drop the Maid table from Data Sources to your form. This is what you’ll see:

(screenshot of the generated DataGridView on the form)

Now comes the part that drives me nuts. You can run the application and change whatever data you want (assuming you don't put any wrong data in, e.g. words into 'age'), but it never seems to write to the database. Is it because you need to click the “save” icon? You probably tried that already.

What happens, if you recall from earlier, is that each time you run from Visual Studio, it copies the database file into your build directory (bin/Debug), overwriting any existing copy. What this means for you is that it is totally confusing that your program never seems to touch the database, when in fact your IDE overwrites it with a fresh copy on each run.

To convince yourself, try running the program outside of Visual Studio. It actually does write to DB.accdb.

Have no fear! This behaviour can be changed. Right click DB.accdb in your solution explorer and select “Properties”.

Under “Advanced”, there is an option called “Copy to Output Directory”. Choose the option that best suits you. Though, as you will learn, the default behaviour is preferred when you’re developing, because you might be constantly making changes to your Access database (adding columns and tables and whatnot), so you’d want a clean slate for each run.


Back to our grid view: you’d probably want to remove the ID column. So let’s do that now. Click the small triangle on the GridView and select “Edit Columns”. From there remove the ID column.


GridView Exception Handling

Now for some code!

So now in your program when you enter some silly string in the age field this happens:

(screenshot of the unhandled DataError exception dialog)

Yikes! It doesn’t even let me leave the application after I click OK! No marks for that kind of sloppy work!

Let's have something more straightforward:

(screenshot of a friendlier validation error message box)

Right click on your DataGridView control and click “Properties”. Under the properties window to your left, click “Events”. Under “Behavior” double click “DataError”.


This will direct you to the code component of the form (alternatively you can get there via pressing F7; shift-F7 to go back to form view).

So this will be the function that executes each time there is a validation error. Have your function look something like this:

Private Sub MaidDataGridView_DataError(sender As Object, e As DataGridViewDataErrorEventArgs) Handles MaidDataGridView.DataError
    e.ThrowException = False

    Dim txt As String
    txt = ("Validation Error in column '" &
            MaidDataGridView.Columns(e.ColumnIndex).HeaderText & "'." &
            vbCrLf & vbCrLf & e.Exception.Message)
    MessageBox.Show(txt, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)

    e.Cancel = False
End Sub

And you should get the result as shown in the picture above. Note that it also reverts the changes you made previously.

The Maid Example Project

Download the source code from this link: https://github.com/bruceoutdoors/MaidSystemExample

Just click “Download ZIP” to your right if you have no idea what git is.


Now, from here on I expect you to figure things out based on the source code I provided.

CRUD Operations with Adapters

There are two ways of manipulating data in this project; both will modify the Access database:

  1. Changing the DataGridView and then updating the TableAdapter (PeanutBtn_Click, DelRowBtn_Click).
  2. Changing the TableAdapter and then updating the DataGridView (DelPeanutBtn_Click, AddMaidBtn_Click).

In #1, the MaidDataGridView object is being modified, and then to save the changes, we call the SaveDataGridViewChanges() method. In #2, the TableAdapter class MaidTableAdapter is being modified, and we call MaidTableAdapter.Fill(DBDataSet.Maid) to update the DataGridView.
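For a rough idea of what that save step involves, a routine along the lines of SaveDataGridViewChanges() can look like the sketch below. The binding source, table adapter and dataset names are the designer-generated ones assumed here; the real implementation is in the linked project:

Private Sub SaveDataGridViewChanges()
    Me.Validate()                                  ' commit the cell currently being edited
    Me.MaidBindingSource.EndEdit()                 ' push pending grid edits into the dataset
    Me.MaidTableAdapter.Update(Me.DBDataSet.Maid)  ' write the dataset changes back to DB.accdb
End Sub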

Some pointers on the MaidTableAdapter class. The methods ScalarQuery() and DeleteByNameQuery(maid_name) are added explicitly by us. You do this by double clicking DBDataSet.xsd in the Solution Explorer. This allows you to edit the dataset with a designer view:

(screenshot of the DataSet designer view)

I won't be guiding you through the procedures that follow, though, as they should be pretty straightforward. As much as possible, I'd recommend delegating any fancy querying logic to the TableAdapter class rather than manually getting row objects and doing stuff with them, or writing raw SQL strings. It makes the code more readable.

[Bonus] Alternative to Using Access?

You can use a service-based database. Microsoft MSDN has a guide which you can read here. You'll be writing to a *.mdf file, and you're basically using a compact embedded version of SQL Server. Visual Studio should come with SQL Server Compact built in; if you don't have it you may download it from the Microsoft download site.

Should you be unable to add tables or execute queries against the database, or in other words you lack these options as displayed here:

(screenshot of the missing context menu options)

You need to install SQL Server Data Tools (get it from here), though in the walk-through it is mentioned:

To complete this walkthrough, install Visual Studio Express 2013 for Windows, Visual Studio Professional 2013, Visual Studio Premium 2013, or Visual Studio Ultimate 2013. These versions of Visual Studio include SQL Server Data Tools.

Well, mine didn’t come installed with SQL Server Data Tools so it got me pretty confused to begin with. Visual Studio could have at least been more explicit than just removing context menu options.

Other than that, the same rules apply as written above. Adapter classes work as usual, and you need to be aware that Visual Studio copies your database on each run unless you tell it not to.

Conclusion

Hope you had a good read! Feel free to leave comments and likes (:

Project Structure for Projects in Qt Creator with Unit Tests

Abstract

For this post I will be proposing a project structure in Qt Creator that lets you keep a unit test project alongside your main project.

Intro

If you think this post looks familiar, you have probably gone through a similar post from Svenn-Arne Dragly. This project is actually a simpler fork of his, so credits to him.

Why fork from Dragly's project structure? (You can skip to the next part if you want.)

In his structure I had to build a library file, which is a bit of a hassle to me. In addition, the example on GitHub (https://github.com/FSund/qtcreator-project-structure) didn't work for me either:

LNK1104: cannot open file 'myapp.lib'.

That's because I'm using the MSVC2013 compiler, and it only builds a *.dll and forgets the *.lib. Turns out I have to add __declspec( dllexport ) like so:

class MyClass {
public:
     __declspec( dllexport )  double addition(double a, double b);
};

Even so, it still asks for the *.lib file, despite it already being built. I had to explicitly add the path to the external library, and then place the compiled *.dll in the same directory as the executable. I guess if I specified a common build directory and had it look for the lib file there it would work... but think about this: __declspec( dllexport ) is Microsoft specific. If I compile the same code on Linux it spits out an error. I could use some macros to check which compiler is being used, but it's a hassle (to me, at least).
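For completeness, the usual way around that is an export macro guarded by compiler checks, along these lines (MYAPP_EXPORT is just an illustrative name; Qt also ships Q_DECL_EXPORT/Q_DECL_IMPORT in <QtGlobal> for the same purpose):

#if defined(_MSC_VER)
  #define MYAPP_EXPORT __declspec(dllexport)   // MSVC needs this to emit the .lib
#else
  #define MYAPP_EXPORT                         // other compilers: expands to nothing
#endif

class MyClass {
public:
    MYAPP_EXPORT double addition(double a, double b);
};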

So if you only intend to develop using a particular compiler that’s fine. I just figured I wanted something more straightforward.

Project Structure

The approach I came up with involves compiling some/most of your code twice, but it is what I stuck with in the end for a test project I was working on (click here to download an empty template):

Project Directory:
 │       DrawingApp.pro
 │
 ├───app -> source code
 │       app.pro
 │       AbstractGroup.hpp
 │       ActiveSelection.cpp
 │       ActiveSelection.hpp
 │       Canvas.cpp
 |       ...
 │
 ├───build -> build directory
 │       DrawingApp.exe
 │
 ├───lib -> libraries
 │       catch.hpp
 │
 └───tests -> test project, which you only add files from the 'app' folder that you want to test
         tests.pro
         TestMain.cpp -> DO NOT call this main.cpp! It will clash with your main.cpp in the 'app' folder, even though they are in separate directories.
         CommandStackTest.cpp
         ...

You can see a real life example in my github project: https://github.com/bruceoutdoors/DrawingApp – I will be referring to this project so do keep a tab open.

It will look something like this in Qt Creator:

(screenshot of the project open in Qt Creator)

This might be a hassle for some, because you have to add the source files to the unit test project every time you create a new class. Otherwise the unit test project will fail to build, and you'd likely confuse that with your actual project failing to build.

But that was just because I wasn't really doing TDD: I created the source files before I wrote the unit tests. Well, it's a drawing app; I can't unit test drawing capabilities, right?

Loose Coupling, Tight Coupling

After the initial frustrations of using this structure, I realized doing it this way disciplines a programmer to think about coupling. After a while it had me asking questions like "If I just want to test object A, do I also need to depend on objects B and C? Can object A be isolated and used separately?"

Great software design means loose coupling; objects should be bundled together only when necessary. Otherwise they should be able to be packaged, isolated, reused, and unit tested separately.

This is why the tests.pro file contains significantly fewer files than app.pro.
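To make that concrete, a tests.pro along these lines is what the structure boils down to (file names here are illustrative; only the app sources you actually test get listed):

TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle

INCLUDEPATH += ../app ../lib

SOURCES += TestMain.cpp \
    CommandStackTest.cpp \
    ../app/CommandStack.cpp   # only the classes under test, not the whole app

HEADERS += ../app/CommandStack.hpp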

Calling headers in different directories

Notice that, for example, in the DrawingApp project fileio/JsonFileReader.hpp is able to include interfaces/IFileReader.hpp with just

#include "IFileReader.hpp"

This isn't default behaviour as it is in Code::Blocks; I had to configure it manually in the project file for both app.pro and tests.pro:

INCLUDEPATH += \
 $$PWD \
 interfaces \
 commands \
 tools \
 shapes \
 fileio

Tedious maybe, but a small inconvenience to me. I'm not sure whether Qt Creator has a setting to make this the default behaviour, but this is what I could come up with.

A Note on Catch

This single-header unit test framework is too easy and convenient not to use. TestMain.cpp simply needs to contain

#define CATCH_CONFIG_MAIN

#include "catch.hpp"

and then you can start adding separate test *.cpp files for each class (or however you want to structure your tests).
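For example, a test file can be as small as this (a self-contained sketch; in practice you would include and exercise one of your own classes instead):

#include <vector>
#include "catch.hpp"

TEST_CASE("vectors can be sized", "[std]") {
    std::vector<int> v(5);   // arrange
    REQUIRE(v.size() == 5);  // assert: Catch reports the expanded values on failure
}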

If you haven’t tried it out I highly recommend you do.

Conclusion

Think there can be a better way? I’m open to ideas.

You can download an empty template project from here: Click here to download an empty template.

 

Table To Tree Algorithm

I was faced with a little riddle (kinda like some of those UVa questions) – can I design an algorithm that can take an Excel sheet like this:

(screenshot of the source table in LibreOffice Calc)

And create a tree structure that a computer program can understand? I can't assume that the number of rows and columns will be fixed. The algorithm should cater for a tree with a large number of branches and leaves (row count) and a practically limitless depth (column count).

I should also not assume that the first cell will be the root of the tree. Shouldn't I be able to set any cell that is a parent as a root?

The algorithm simmered in my head for a Saturday as I was hanging out with friends. When I came home I laid down the pseudocode. By Sunday night, after dinner, I had taken the algorithm to a complete JavaScript implementation in TestComplete 10.

Here’s the pseudocode:

What kind of pseudocode is this, you ask? I don't know myself. I didn't use any reference to write it; I just thought it made very clear what the algorithm should be doing when written this way.

TestComplete script (JScript):

Note that you shouldn't use the "super" keyword in JavaScript, as it is reserved. I simply used "Super" instead. This may be bad practice, as it can confuse people a bit – just an FYI for the reader.
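To give a rough idea of the overall shape, here is an illustrative sketch in plain JavaScript (this is not the actual pseudocode or TestComplete script; it assumes a layout where the column index encodes a node's depth and each row names exactly one node):

function tableToTree(rows) { // rows: array of arrays of cell strings ("" for empty cells)
    var root = null;
    var lastAtDepth = [];    // most recently created node at each depth
    for (var r = 0; r < rows.length; r++) {
        var row = rows[r];
        for (var col = 0; col < row.length; col++) {
            var cell = row[col];
            if (cell === "" || cell === null || cell === undefined) continue;
            var node = { name: cell, children: [] };
            if (col === 0) {
                root = node;                               // first-column cell becomes the root
            } else if (lastAtDepth[col - 1]) {
                lastAtDepth[col - 1].children.push(node);  // attach to the nearest node one column left
            }
            lastAtDepth[col] = node;
            break;                                         // one labelled cell per row
        }
    }
    return root;
}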

Here’s the output from TestComplete:

(screenshot of the output log in TestComplete)

Download the Excel file I used to test it (*.xlsx): TableToTree