# The Programmer’s Guide To FFT – Part 2: FFT

This will be a 2 part series on the fast Fourier transform (FFT). My aim for these posts is to provide a more hands-on and layman-friendly approach to this algorithm, in contrast to a lot of the theoretically heavy material available on the internet. In short: less math, no proofs, examples provided, and working source code (C++11).

In my previous post, I wrote about DFT as the basis to understand FFT. In this final part I will write about the FFT algorithm as outlined in Cooley and Tukey’s seminal work published in 1965. I assume you have understood the material in the prior post before coming here, since everything here builds on top of it.

## Key Ideas

Before we begin, let us assume that the length of the input (N) is always a power of 2 (2, 4, 8, 16, 32…). I will explain why this is important later on.

There are 2 key ideas that enable the FFT algorithm to work. First is to understand that DFT can be separated as a sum of odd and even parts:

$\begin{array}{lcl}F_k & = & \sum\limits_{n=0}^{N-1}x_n\cdot e^{-\frac{i2\pi k n}{N}} \\ & = & \sum\limits_{m=0}^{N/2-1}x_{2m}\cdot e^{-\frac{i2\pi k (2m)}{N}} + \sum\limits_{m=0}^{N/2-1}x_{2m+1}\cdot e^{-\frac{i2\pi k (2m+1)}{N}} \\ & = & \sum\limits_{m=0}^{N/2-1}x_{2m}\cdot e^{-\frac{i2\pi k (m)}{N/2}} + \sum\limits_{m=0}^{N/2-1}x_{2m+1}\cdot e^{-\frac{i2\pi k (m+1/2)}{N/2}} \\ & = & \sum\limits_{m=0}^{N/2-1}x_{2m}\cdot e^{-\frac{i2\pi k (m)}{N/2}} + \sum\limits_{m=0}^{N/2-1}x_{2m+1}\cdot e^{-\frac{i2\pi k (m)}{N/2} - \frac{i\pi k}{N/2}} \\ & = & \sum\limits_{m=0}^{N/2-1}x_{2m}\cdot e^{-\frac{i2\pi k (m)}{N/2}} + e^{-\frac{i2\pi k}{N}} \sum\limits_{m=0}^{N/2-1}x_{2m+1}\cdot e^{-\frac{i2\pi k (m)}{N/2}}\end{array}$

Let us define a function $\omega$ (read as omega):

$\omega(p, q) = e^{-\frac{i2\pi q}{p}}$

Now we simplify the DFT formulation to:

$F_k = \sum\limits_{m=0}^{N/2-1}x_{2m}\cdot \omega(\tfrac{N}{2}, km) + \omega(N, k) \sum\limits_{m=0}^{N/2-1}x_{2m+1}\cdot \omega(\tfrac{N}{2}, km)$

Let’s generalize further to:

$F_k = F_k^{\text{even}} + \omega(N, k) \cdot F_k^{\text{odd}}$

The second key idea is to take advantage of the periodic nature of DFT. Both half-size sums repeat with period $\frac{N}{2}$ in $k$, and the twiddle factor flips sign halfway: $\omega(N, k+\frac{N}{2}) = -\omega(N, k)$. Therefore:

$\begin{array}{lcl}F_k & = & F_k^{\text{even}} + \omega(N, k) \cdot F_k^{\text{odd}} \\ F_{k+\frac{N}{2}} & = & F_k^{\text{even}} - \omega(N, k) \cdot F_k^{\text{odd}}\end{array}$

What this means is that in the process of calculating the resulting sequence $F$ you only need to compute $\omega(N, k) \cdot F_k^{\text{odd}}$ a total of $\frac{N}{2}$ times; we can essentially halve the number of computations using this technique. But why stop there? We can take $F_k^{\text{even}}$ and $F_k^{\text{odd}}$ and split each of them into their own even and odd parts, and repeat the same procedure. If we compute this recursively, the base case is reached when $N = 1$. In this manner there are as many levels of splitting as the number of times we can divide N by 2, which is $\log_2 N$, and each level does work proportional to N. Therefore, for a sequence of size N, FFT computes the DFT in $O(N \log N)$ time.
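This divide-and-conquer structure can also be read off as a recurrence. Expanding it level by level (a quick sketch, not a proof, in the spirit of this series):

```latex
T(N) = 2\,T\!\left(\tfrac{N}{2}\right) + cN
     = 4\,T\!\left(\tfrac{N}{4}\right) + 2cN
     = \cdots
     = N\,T(1) + cN\log_2 N
     = O(N \log N)
```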

## Example

Ok. That was probably hard to grasp, so let us break it down. Take an example where N = 2: a sequence s in coefficient representation is (1, 9), and we want to convert it to point-value representation. The even and odd subsequences are simply (1) and (9) respectively. We can use an auxiliary variable h to store $\omega(N, k) \cdot F_k^{\text{odd}}$:

$h = \omega(2, 0) \cdot 9 = 9$

$F_0 = 1 + h = 1 + 9 = 10$

$F_1 = 1 - h = 1 - 9 = -8$

Notice how we only needed to compute $\omega(N, k) \cdot F_k^{\text{odd}}$ once and reused it for $F_{0 + \frac{N}{2}}$.

Now we go to a more complex example, where N = 8: s = (1, 6, 3, 8, 9, 5, 4, 2). Using the fact that DFT can be expressed as a sum of even and odd parts, we can recursively divide s into smaller subproblems, until N = 1:

I arranged it so that the subsequence on the left contains the even parts, and the subsequence on the right contains the odd parts. Now that we have separated it nicely, we can systematically work on the smaller parts and work our way up to the final answer. I’ve made a diagram that illustrates the computational flow of the FFT algorithm:

As before, the sequences on the left are the even parts and the sequences on the right are the odd parts. The cells show the type: yellow for real numbers, and green for complex numbers. Blue arrows branch out from even sequences, and red arrows branch out from odd sequences. Red arrows also denote that the cell they come from will be multiplied by $\omega(N, k)$, though this is not visually depicted. The whole computation flow shows a sort of “butterfly pattern”, as engineers like to describe it.

IFFT works roughly the same way as FFT in that it uses the same technique to save computation, so if you understand FFT you should get IFFT as well.

## Implementation

The implementation here includes FFT and IFFT. As with the DFT and IDFT implementation in the previous post, it takes a sequence in coefficient representation, spits out a sequence of the same size in point-value representation using FFT, and then puts that sequence through IFFT to get back the original sequence. As before, my emphasis is on readability, not optimization.

#include <iostream>
#include <complex>
#include <cmath>
#include <iomanip>
#include <vector>
#include <algorithm>

using namespace std;

double PI = acos(0) * 2;
typedef complex<double> xd;
typedef vector<double> dvec;
typedef vector<xd> xvec;
const xd J(0, 1); // sqrt(-1)

inline xd omega(const double &p, const double &q)
{
return exp((2. * PI * J * q) / p);
}

xvec _fft(xvec &f)
{
double N = f.size();

if (N == 1) return f;

xvec fe, fo;
fe.reserve(N / 2);
fo.reserve(N / 2);

for (int i = 0; i < N; i += 2) {
fe.push_back(f[i]);     // even
fo.push_back(f[i + 1]); // odd
}

fe = _fft(fe);
fo = _fft(fo);

for (int m = 0; m < N / 2; ++m) {
xd omfo = omega(N, -m) * fo[m];
f[m]         = fe[m] + omfo;
f[m + N / 2] = fe[m] - omfo;
}

return f;
}

xvec fft(const dvec &x)
{
xvec f(x.size());

for (size_t i = 0; i < x.size(); ++i) {
f[i] = xd(x[i], 0);
}

return _fft(f);
}

xvec _ifft(xvec &x)
{
double N = x.size();

if (N == 1) return x;

xvec xe, xo;
xe.reserve(N / 2);
xo.reserve(N / 2);

for (int i = 0; i < N; i += 2) {
xe.push_back(x[i]);     // even
xo.push_back(x[i + 1]); // odd
}

xe = _ifft(xe);
xo = _ifft(xo);

for (int m = 0; m < N / 2; ++m) {
xd iomxo = omega(N, m) * xo[m];
x[m]         = xe[m] + iomxo;
x[m + N / 2] = xe[m] - iomxo;
}

return x;
}

dvec ifft(xvec f)
{
double N = f.size();

xvec xcomplex = _ifft(f);
dvec x(N);

for (int i = 0; i < N; ++i) {
x[i] = xcomplex[i].real() / N;
}

return x;
}

int main()
{
cout << fixed << setprecision(2);

dvec input = { 1,6,3,8,9,5,4,2 };

// convert from time to frequency domain
xvec freqdom = fft(input);

for (const auto &f : freqdom) {
cout << f << endl;
}
cout << endl;

// convert from frequency to time domain
auto timedom = ifft(freqdom);

for (const auto &t : timedom) {
cout << t << ' ';
}
cout << endl;
}


It is a good idea to set breakpoints to see how the recursive implementation of FFT systematically solves DFT from the smaller subproblems.

Because of how similar FFT and IFFT is, it is not hard to merge them into a function and pass a boolean parameter to determine whether it will be FFT and IFFT (most implementations online will call this multi-purpose function “transform”), but for the sake of clarity I refrain from doing so.

Convolution is practically the same as before – we just replace DFT with FFT and IDFT with IFFT (though, as the next section shows, the result is only correct when the transform size is a power of 2):

// vector convolution
dvec convolve(const dvec &a, const dvec &b)
{
// calculate degree of resulting polynomial
size_t N = 2 * a.size() - 1;

// extend size to match result
dvec acof(N), bcof(N);
copy(a.begin(), a.end(), acof.begin());
copy(b.begin(), b.end(), bcof.begin());

xvec apv, bpv, cpv(N);

// evaluation
apv = fft(acof);
bpv = fft(bcof);

// point-wise multiplication
for (size_t i = 0; i < N; ++i) {
cpv[i] = apv[i] * bpv[i];
}

// interpolation
return ifft(cpv);
}


Now we estimate the time complexity of vector convolution using FFT: evaluation ($O(N \log N)$), pointwise multiplication ($O(N)$) and interpolation ($O(N \log N)$) cost a total of $2 \times N \log N + N \approx O(N \log N)$.

## Must N Be In Powers of 2?

You could play around with different input sizes and compare the answer with DFT. What you will see is that FFT returns some funny values if N is not a power of 2. Why is this so? You can see it from the visual depiction of how FFT works: at the simplest non-trivial subproblem (N = 2), you need exactly one even element and one odd element. FFT must be able to keep halving the input so that at some point every subsequence has size 2, and the only way that is possible is if N is a power of 2.

If it doesn’t make sense, you could just play around with the code to convince yourself I guess.

Is this a bad thing? Well, if all you use FFT for is convolution then no. You could first calculate the resulting polynomial degree of the convolution, then pad the input with 0 until N is a power of 2, evaluate it, do pointwise multiplication, interpolate, and resize it to match the resulting degree.

If you use bit shifts to multiply by 2, you can compute N very quickly (see full implementation here):

// degree of resulting polynomial = size of resulting array
size_t deg = a.size() + b.size() - 1;

// transform array size must be in power of 2 for FFT
size_t N = 1;
while (N < deg) N <<= 1;

// set size for arrays in point-wise representation -
// extended space is padded with 0:
xvec a(N), b(N), c(N);

// ** do convolution... **

// Resize to actual size
c.resize(deg);


But wouldn’t enlarging the input array be slow? We can check with a simple example. Say N = 129: the naive DFT takes $129^2 = 16641$ operations. In FFT we resize N to the closest power of 2, which is 256, and $256 \log_2 256 = 2048$. That is still roughly 8 times fewer computations, and it shouldn’t be hard to convince yourself that the gap only grows as N gets larger.

There are variations of FFT where N does not need to be in powers of 2, but it won’t be the focus of this post.

## Optimizations

If you print the N and m used by the $\omega$ (omega) function, you will notice that there are some repetitions. Here is N and m where N = 8:

2 0
2 0
4 0
4 1
2 0
2 0
4 0
4 1
8 0
8 1
8 2
8 3

So this means it is possible to precompute results from $\omega$ and reuse them. In the case where N = 8, we only need to calculate $\omega$ 7 times as opposed to 12. I have an implementation of FFT that does this simple optimization.

What else can you do? In-place calculation – here’s my implementation of in-place FFT. Aside from knowing how to meddle with indices, one key idea to understand is this: the index of each element after all the splitting is the bit-reversal of its original index, where the length in bits is $\log N$. For example if N = 16 ($\log N = 4$), the index 7 (0111) will be swapped with 14 (1110).

## Conclusion

We have gone through DFT, and now FFT. I hope this helped you understand FFT a little better. It’s ok if you’re still a little foggy with the details; play around with the source code long enough and it will become clearer. I intend to work on hands-on applications of FFT in future posts, so keep a lookout for more!

# The Programmer’s Guide To FFT – Part 1: DFT

This will be a 2 part series on the fast Fourier transform (FFT). My aim for these posts is to provide a more hands-on and layman-friendly approach to this algorithm, in contrast to a lot of the theoretically heavy material available on the internet. In short: less math, no proofs, examples provided, and working source code (C++11).

In this first part I will write about discrete fourier transform (DFT) as the basis to understand FFT.

## Sorting as a Metaphor

DFT is to FFT as insertion sort is to merge sort; they both take the same type of input and spit out the same output, it’s just that FFT runs much faster than DFT by utilizing a technique called divide and conquer (merge sort also uses this). The computational difference is the same as merge sort versus insertion sort: $O(N \log N)$ vs $O(N^2)$. How much of a difference is this? Well, if N is 1 billion and each computation takes 1 nanosecond, it is the difference between finishing in 30 seconds as opposed to 30 years.
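The arithmetic behind that comparison, taking $N = 10^9$ at one operation per nanosecond:

```latex
N \log_2 N \approx 10^9 \times 30 = 3 \times 10^{10}\ \text{ns} \approx 30\ \text{s}
\qquad\text{vs}\qquad
N^2 = 10^{18}\ \text{ns} = 10^9\ \text{s} \approx 32\ \text{years}
```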

## What Does DFT Do?

DFT takes a discrete series of input in the time domain and converts it to the frequency domain. Computer scientists describe it as converting from the coefficient representation to the point-value representation (more on that later). In either case, the semantics of the input are retained; it is like translating from one language to another.

Also, the conversion from one domain to another needs to be reversible. To convert from the frequency domain back to the time domain, you use the inverse of the DFT, also called IDFT. In computer science, the process of converting from coefficient representation to point-value representation is called evaluation, and the reverse process is called interpolation.

## Applications and Motivations

The applications of DFT are diverse: how Shazam matches songs, mp3 compression, noise removal from audio, jpeg compression, edge detection, radar, sonar, optics, seismic activity, satellite transmissions, genome matching (I will put up a post on genome matching in the future as an application of DFT)… Put it this way: as long as it involves a sequence (possibly in multiple dimensions) that changes – whether over time or space – it can benefit from DFT.

However, this post focuses on using DFT for vector convolution (I profess that as of this writing this is the only thing I know how to use DFT for).

## Vector Convolution

We can understand what convolution does as a polynomial multiplication:

Example taken from CLRS

Put formally, we are given two sequences of real numbers $a$ and $b$ (in this example (6, 7, -10, 9) and (-2, 0, 4, -5) respectively) of size n – each sequence maps to the coefficients of a polynomial (thus the term coefficient representation) – and we return a sequence of real numbers $c$ (-12, -14, 44, -20, -75, 86, -45) of size 2n – 1. The degree of each term can be read off from its index (starting from 0). We also describe convolution in mathematical notation as $a \otimes b = c$.

Each element $c_j$ ($0 \leq j < 2n-1$) can be calculated like so (formula also from CLRS):

$c_j = \sum\limits_{k=0}^{j}a_kb_{j-k}$

I have an implementation of this formulation below (click here to see a sample using this function):

typedef vector<int> ivec;

ivec polynomialMultiply(const ivec &a, const ivec &b)
{
int N = 2 * a.size() - 1;
ivec c(N, 0);

for (int j = 0; j < N; ++j) {
int p = j >= a.size() ? j - a.size() + 1 : 0;
for (int k = p; k <= j - p; ++k) {
c[j] += a[k] * b[j - k];
}
}

return c;
}


Perhaps you are wondering why there is an auxiliary variable p which is not in the formulation: the formula implicitly treats $a_k$ and $b_{j-k}$ as 0 whenever an index falls outside the sequences, so p simply restricts the summation to indices where both elements actually exist. The point, however, is that this technique takes $O(N^2)$ to run (we can infer this from the nested for loop). The same result can be computed much faster in point-value representation:

In this illustration, yellow indicates the coefficient representation (real numbers) and green indicates the point-value representation (complex numbers). In point-value form, convolution is simply a pointwise multiplication, which can be done in $O(N)$. However, the actual complexity of this method also needs to factor in the computation needed for evaluation and interpolation.

## The Math Behind DFT

We now describe the transformation process with more formal mathematical notation. Let $x$ be the sequence of N real values in the time domain, where the n-th element of $x$ is denoted as $x_n$. DFT then takes $x$ and transforms it to a sequence $F$ of N complex values in the frequency domain, where the k-th element of $F$ is denoted as $F_k$. We can also write DFT as a function $dft(x) = F$, and IDFT as $idft(F) = x$.

The DFT is then given by:

$F_k=\sum\limits_{n=0}^{N-1}x_n\cdot e^{-\frac{i2\pi k n}{N}}$

And IDFT given by:

$x_n=\frac{1}{N}\sum\limits_{k=0}^{N-1}F_k\cdot e^{\frac{i2\pi k n}{N}}$

Where $i$ is the imaginary number $\sqrt{-1}$.

Since each element takes N computations for all N elements, it is not hard to see that both DFT and IDFT takes $O(N^2)$.

## Calculate An Example by Hand

It always helps to compute the DFT by hand first to get a feel for it. To do this we will need Euler’s formula to break down the exponential to its sine cosine equivalents:

$e^{iy}=\cos y + i \sin y$

In this context, $y$ is $\frac{2\pi n k}{N}$.

Example: Let $x$ be a sequence (1, 2, 3, 4) and N = 4, and we wish to convert it to the frequency domain.

$\begin{array}{lcl}F_0 & = & 1\cdot e^{-\frac{i2\pi 0 \cdot 0}{4}} + 2\cdot e^{-\frac{i2\pi 0 \cdot 1}{4}} + 3\cdot e^{-\frac{i2\pi 0 \cdot 2}{4}} + 4\cdot e^{-\frac{i2\pi 0 \cdot 3}{4}} \\ & = & 1\cdot e^0 + 2\cdot e^0 + 3\cdot e^0 + 4\cdot e^0 \\ & = & 10\end{array}$

$\begin{array}{lcl}F_1 & = & 1\cdot e^{-\frac{i2\pi 1 \cdot 0}{4}} + 2\cdot e^{-\frac{i2\pi 1 \cdot 1}{4}} + 3\cdot e^{-\frac{i2\pi 1 \cdot 2}{4}} + 4\cdot e^{-\frac{i2\pi 1 \cdot 3}{4}} \\ & = & 1\cdot e^0 + 2\cdot e^{-\frac{i\pi}{2}} + 3\cdot e^{-i\pi} + 4\cdot e^{-\frac{i3\pi}{2}} \\ & = & 1 + 2\left(\cos \frac{-\pi}{2} + i\sin \frac{-\pi}{2} \right) + 3 \left( \cos(-\pi) + i \sin(-\pi) \right) + 4 \left( \cos\frac{-3\pi}{2} + i\sin\frac{-3\pi}{2}\right) \\ & = & 1 + 2\left(0 - i\right) + 3 \left( -1 + 0 \right) + 4 \left( 0 + i\right) \\ & = & -2+2i\end{array}$

$F_2 = -2$

$F_3 = -2-2i$

Now, if you take the values of $F$ and pass through the IDFT, it should return you back the sequence $x$, though I will not show it here.

## Implementation

The C++ program below simply runs the same sequence from the example through DFT, and passes the result through IDFT to get back the original sequence. I placed more emphasis on readability, so the code is not designed to be optimized but rather to demonstrate the correctness of the mathematical formulations of DFT and IDFT.

#include <iostream>
#include <complex>
#include <cmath>
#include <iomanip>
#include <vector>

using namespace std;

double PI = acos(0) * 2;
typedef complex<double> xd;
typedef vector<double> dvec;
typedef vector<xd> xvec;
const xd J(0, 1); // sqrt(-1)

xvec dft(const dvec &input)
{
double N = input.size();
xvec X(N);

for (double k = 0; k < N; ++k) {
for (double n = 0; n < N; ++n) {
X[k] += (double)input[n] * exp(-2. * J * PI * n * k / N);
}
}

return X;
}

dvec idft(const xvec &input)
{
double N = input.size();
xvec x(N);
dvec out(N);

for (double k = 0; k < N; ++k) {
for (double n = 0; n < N; ++n) {
x[k] += input[n] * exp(2. * J * PI * n * k / N);
}
out[k] = x[k].real() / N;
}

return out;
}

int main()
{
cout << fixed << setprecision(2);

dvec input = { 1, 2, 3, 4 };

// convert from time to frequency domain
xvec freqdom = dft(input);

for (const auto &f : freqdom) {
cout << f << endl;
}
cout << endl;

// convert from frequency to time domain
dvec timedom = idft(freqdom);

for (const auto &t : timedom) {
cout << t << ' ';
}
cout << endl;
}


The output of this program will then be:

(10.00,0.00)
(-2.00,2.00)
(-2.00,-0.00)
(-2.00,-2.00)

1.00 2.00 3.00 4.00

With DFT and IDFT working properly we now use it to implement convolution:

dvec convolve(const dvec &a, const dvec &b)
{
// calculate degree of resulting polynomial
size_t N = 2 * a.size() - 1;

// extend size and pad with 0
dvec acof(N, 0), bcof(N, 0);
copy(a.begin(), a.end(), acof.begin());
copy(b.begin(), b.end(), bcof.begin());

xvec apv, bpv, cpv(N);

// evaluation
apv = dft(acof);
bpv = dft(bcof);

// point-wise multiplcation
for (size_t i = 0; i < N; ++i) {
cpv[i] = apv[i] * bpv[i];
}

// interpolation
return idft(cpv);
}


See the full working DFT convolution program from this gist.

## Conclusion

So, we covered DFT and vector convolution, and applied them to solve polynomial multiplication. What’s the problem now? Pointwise multiplication, together with evaluation and interpolation using DFT and IDFT, has a time complexity of $O(N) + 2 \times O(N^2) \approx O(N^2)$. That’s no different from – if not worse than – the naive method. To circumvent this, I will introduce FFT in the next post, which computes the evaluation and interpolation in $O(N \log N)$ time. Stay tuned!

Update: Part 2 is up!

# Synchronizing BibTeX in Overleaf (BibLaTeX) and Texmaker (MiKTeX, Apacite)

In this post I detail how to get BibTeX working on both Overleaf (previously known as WriteLaTeX) and Texmaker (Windows 10 64-bit, MiKTeX). Note that the citation format I’m using is APA, as specified by my university.

Overleaf may have the advantage of collaborative editing with (almost) live previewing, but I hit a lot of problems getting the documents with BibTeX I wrote there to compile in Texmaker; they just don’t compile. Conversely, copy-pasting working BibTeX code from Texmaker into Overleaf throws compile errors.

So the best workflow I can come out with at the moment is this: Create latex document from my template (get from here: Overleaf to Texmaker Bibtex Template. There should be 2 files: main.tex and ref.bib. If the template link is not working, you can get from this Github gist instead), edit the latex document collaboratively in Overleaf, and then when you need it to compile in Texmaker, download the project as a zip and change some code.

Fortunately, it’s only 2 blocks of code, annotated as “SETUP DOCUMENT” and “END DOCUMENT”. You’ll find this in the start and end of the latex document respectively. These code blocks (provided in the template; the texmaker version is commented out) will need to be changed when moving your code back to TexMaker.

#### Overleaf

SETUP DOCUMENT:

% BEGIN -- SETUP DOCUMENT --
\documentclass[a4paper,12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[british]{babel}
\usepackage{csquotes}
\usepackage[backend=biber,style=apa]{biblatex}
\DeclareLanguageMapping{british}{british-apa}
\usepackage[pdftex]{graphicx}
\let\cite\parencite
\begin{document}
% END -- SETUP DOCUMENT --


END DOCUMENT:

% BEGIN -- END DOCUMENT --
\printbibliography
\end{document}
% END -- END DOCUMENT --


#### TexMaker

SETUP DOCUMENT:

% BEGIN -- SETUP DOCUMENT --
\documentclass[a4paper,12pt]{article}
\usepackage{hyperref}
\usepackage{apacite}
\begin{document}
\bibliographystyle{apacite}
% END -- SETUP DOCUMENT --


END DOCUMENT:

% BEGIN -- END DOCUMENT --
\bibliography{ref}{}
\end{document}
% END -- END DOCUMENT --


If you receive warning messages in TexMaker that goes something like

Citation ‘blablabla2007’ undefined

when you press F1 (quickbuild), you will need to enable bibtex in your build. To do this, go to “options > Configure TexMaker” and under “Quick Build” tab, select the quick-build command “PdfLatex + Bib(la)tex + PdfLaTeX (x2) + View Pdf”

Tip: You can generate BibTeX code easily with bibme.org.

There is another variation of this: separate your content into another *.tex file, and then you have 2 master documents – one for Overleaf (main.tex) and another you use for TexMaker (main-texmaker.tex or whatever name you want) – which both includes the same content file. In TexMaker, you’ll need to set one document to be a master document to work with multiple files.

Have fun with Overleaf!

# phpPgAdmin On Linux On Windows With Vagrant

This tutorial is a continuation of Rails On Linux On Windows With Vagrant.

What we want to achieve at the end of this tutorial is to be able to modify our PostgreSQL database from a convenient web interface (phpPgAdmin) directly from the web browser in our host machine. Observe:

Let’s roll:

## Port Forwarding

For this to work, we need to forward the port 80 in our guest machine to port 8080 in our host.

If you are in SSH mode, exit it by entering

exit

Then halt it:

vagrant halt

In the Vagrantfile in your project directory, uncomment the following line:

config.vm.network "forwarded_port", guest: 80, host: 8080

Though not apparent in the Vagrantfile here, the virtual machine we used forwards port 3000 in the guest machine to port 3000 in the host machine. That is how we are able to access our Rails application from the host machine. You can find it here:

C:\Users\YOURUSERNAME\.vagrant.d\boxes\fiercepunchstudios-VAGRANTSLASH-vagabond\0.1.1\virtualbox\include

In “_Vagrantfile” (the only file in that directory), you can peek in its contents:

Vagrant.configure(2) do |config|
config.vm.network "forwarded_port", guest: 3000, host: 3000
end

After you have run vagrant up and vagrant ssh, in git bash enter:

sudo apt-get update
sudo apt-get install phppgadmin

Congratulations! You have installed phpPgAdmin, but when you go to its URL – http://localhost:8080/phppgadmin/ (make sure it’s all in lower case) – you get a page not found error:

It’s not all bad. At least we know Apache Server is running.

So now we need to configure Apache server to tell it where to find phppgadmin. Edit /etc/apache2/apache2.conf

sudo nano /etc/apache2/apache2.conf

and add the following line to it:

Include /etc/apache2/conf.d/phppgadmin

If you’re not familiar with nano, just press the down key till you reach the end of the document. Copy paste the line above and press Ctrl-O (save), Enter (confirm overwrite), followed by Ctrl-X (exit).

When you have done this, restart apache:

sudo service apache2 reload

Now when you try to access phppgadmin (http://localhost:8080/phppgadmin/), the error message changes:

Basically it’s trying to say that phppgadmin is there; it’s just that you don’t have permission to access it. So let’s change that now. Enter:

sudo nano /etc/apache2/conf.d/phppgadmin

Comment out (add # at beginning of line) “allow from 127.0.0.0/255.0.0.0 ::1/128” and remove # from the line below it, “allow from all”. Your file should look like this:

order deny,allow
deny from all
# allow from 127.0.0.0/255.0.0.0 ::1/128
allow from all

Save it and restart apache again, then try to access phppgadmin once more. You should be able to see the site now. Though it’s not apparent what you should do next (in contrast to the more user-friendly web client, phpMyAdmin), click “PostgreSQL”:

…and it prompts for your login. What’s the username and password? The default username is postgres, but you need to set a password yourself. Let’s do that now. Enter:

sudo -u postgres psql postgres 

That should take you to psql:

To set a password enter:

\password postgres

It will prompt you for the password, which you will not be able to see as you type. This will be the password you use to log in to phppgadmin.

To quit psql, enter:

\quit

If you try to login now, it will block you with the message:

Login disallowed for security reasons.

Well, we’re not so bothered about security reasons at the moment, so in git bash enter:

sudo nano /etc/phppgadmin/config.inc.php

and change

$conf['extra_login_security'] = true; to $conf['extra_login_security'] = false;

and try to log in to phppgadmin again (again, the username is postgres). It should work now.

## Troubleshooting PostgreSQL

You might encounter this FATAL: role “vagrant” does not exist error:

To fix the vagrant role problem, enter this code in ssh mode (make sure you shut down your rails server first):

sudo -u postgres createuser -s -d -r -e vagrant

That should resolve that issue. Note that you need to execute all postgres commands the same way (prefix “sudo -u postgres“). For example:

sudo -u postgres createdb TimeTabler_development

You may also realize that you also can’t access the psql for the exact same error. To work around this, access psql like this:

sudo -u postgres psql postgres

You exit the terminal as such:

\quit

## Conclusion

Once you’re done playing around, you can halt the virtual machine by first exiting SSH, then halt the virtual machine:

exit
vagrant halt


# Rails On Linux On Windows With Vagrant

Rails not running properly on Windows? In this quick tutorial I’ll show you how to get Ruby on Rails running in Ubuntu on Windows via Vagrant and VirtualBox; it’s the fastest way of getting up and running with Rails I’ve found to date.

Let’s begin:

## Install

You will need to restart your computer after installation for vagrant commands to work.

## Setup Vagrant

Create a new folder for your Rails project.

Now inside that folder, right click on a blank area. If you have git installed properly, you should see a context menu option called “git bash”:

This will open a git bash terminal. From here you will execute all vagrant commands:

Enter:

vagrant init fiercepunchstudios/vagabond

You will see a “Vagrantfile” added to the folder. Now Enter:

vagrant up

This will download a virtual machine (pretty big file, about 700Mb) from atlas.hashicorp.com and save it to:

C:/Users/USERNAME/.vagrant.d/boxes

There you will find a folder for each virtual machine you downloaded.

NOTE: If the download gets interrupted, vagrant will not be able to resume. You’ll have to delete the incomplete box file in

C:/Users/USERNAME/.vagrant.d/tmp

## Create Rails Project and Start Running

Once all that is completed, the vagrant up command also launches your virtual machine, though you can’t see it. Enter in git bash:

vagrant ssh

Note you can only do this after you have run vagrant up. This will place you inside the Ubuntu virtual machine (we call this the guest machine, the environment you are currently using will then be the host machine):

It should place you inside a “vagrant” folder (notice the vagrant@vagabond:/vagrant), which is the project folder you created in your host machine. It is here that we create a new Rails project. So now enter (don’t forget that full-stop):

rails new .

This also requires an internet connection. After this runs you should see your project folder populated with a fresh new Rails project. So now you can use your favourite IDE, editor (Ahem, Geany) and git client (Ahem, SourceTree) right in Windows! How cool is that?

To run a Rails server, enter in your git bash:

rails server -b 0.0.0.0

Now open up your favourite web browser and go to http://localhost:3000 – you should see this:

TIP: you can copy paste code by right clicking git bash as such:

Any changes you make in your Rails project in your host machine is immediately picked up by the Rails server running in your guest Ubuntu machine.

When you’re done developing on Rails, press Ctrl-C to close the server.

## Virtual Machine Management

After you have SSHed into the virtual machine via vagrant ssh, enter

exit

while you’re in SSH to get out of the virtual machine. Once you are out, run:

vagrant halt


to shutdown the virtual machine, and

vagrant up


to boot it again.

Finally, to completely wipe the virtual machine from the disk destroying all its contents:

vagrant destroy


In git 2.5.2, the above command in git bash causes the error:

Vagrant is attempting to interface with the UI in a way that requires
a TTY. Most actions in Vagrant that require a TTY have configuration
switches to disable this requirement. Please do that or run Vagrant
with TTY.

You can either run vagrant destroy in the normal command prompt, or use

vagrant destroy --force

Doing the above will not prompt for any confirmation.

## Troubleshooting

### Virtualization Technology Problem

You know you got this issue when you have vagrant and VirtualBox installed (or reinstalled a dozen times) and restarted (a dozen times) and yet when you do vagrant up, it states otherwise:

The guest machine entered an invalid state while waiting for it
to boot. Valid states are ‘starting, running’. The machine is in the
‘poweroff’ state. Please verify everything is configured
properly and try again.

If the provider you’re using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you’re using VirtualBox, run vagrant up while the
VirtualBox GUI is open.

The primary issue for this error is that the provider you’re using
is not properly configured. This is very rarely a Vagrant issue.

Also, when you run VirtualBox and try to start one of the machines there, you get this error:

Failed to open a session for the virtual machine mmutimetabler_default_1448517896835_14736.

VT-x is disabled in the BIOS for both all CPU modes (VERR_VMX_MSR_ALL_VMX_DISABLED).

This issue appeared a few times while I was setting up a couple of different laptops. There are 2 possible cases:

1. Your CPU does not support Virtualization Technology (VT-x).
2. Your CPU supports it, but it is disabled.

For the first case, you must be running some really ancient hardware (i.e. Pentium C), and this Rails setup is not going to be possible for you until you buy new hardware. You are in this case if you boot into the advanced options of your BIOS and find that there is no Virtualization Technology option. For the second case, you just need to boot into your BIOS and enable it.

Check out

for further information to resolve your issue.

### PostgreSQL

If you are using PostgreSQL, you might encounter the error FATAL: role “vagrant” does not exist:

To fix the vagrant role problem, enter this code in ssh mode (make sure you shut down your rails server first):

sudo -u postgres createuser -s -d -r -e vagrant

That should resolve the issue. Note that you need to execute all postgres commands the same way (prefixed with sudo -u postgres). For example:

sudo -u postgres createdb TimeTabler_development

You may also find that you can’t access psql, failing with the exact same error. To work around this, access psql like this:

sudo -u postgres psql postgres

You exit the psql terminal as such:

\quit

## Conclusion

In brief, you now run your Rails application in an Ubuntu machine inside your Windows machine. Stay tuned for more!

UPDATE: A continuation of this post is up: phpPgAdmin On Linux On Windows With Vagrant.

# Using std::unique_ptr as a class member (C++11)

## Abstract

For this post I’m going to go through some of the gotchas most people might face when using unique_ptr as a class member (most examples out there only show unique_ptr as a local variable), all accompanied by examples. I will also compare it with shared_ptr.

## Assumptions

I’m assuming you know your way round classic/naked pointers, basic memory management and object construction and destruction.

## Intro

A unique pointer is a type of smart pointer in the C++11 standard built around the concept of ownership: only one pointer can point to a given resource (it ‘owns’ the resource). In other words, a single pointer is responsible for the destruction of its resource, hence the term ‘unique’.

Theories aside, let’s see some code! To see the source code of the files in this post, just click the link – it will point to a github gist (For example, click Point.hpp below).

## Getting Started

In this scenario we’ll be managing a class Point (Point.hpp) from a manager class PointMan. Then add shared_and_unique.cpp to the same directory to get started.

So we start with our unique pointer q:

Point::UPtr q;


UPtr here refers to a typedef for std::unique_ptr<Point>; you can use the two interchangeably. Why write Point::UPtr? Because it is a common convention, and it reads more concisely. In fact, if you ever intend to use smart pointers to manage a class, you should add such a typedef to it.

Now, notice that the unique_ptr q is initialized in the constructor:

PointMan() : q(new Point("q")) {
q->set(6, 12);
// ...


Well, what if you don’t want to initialize it in the constructor? You do this:

r = Point::UPtr(new Point("r"));
r->set(r_x, r_y);


Here is the output of unique_and_shared.cpp:

## Ownership Transfer

Moving on to the next example: unique_container.cpp

How does one transfer a unique pointer to another unique pointer? An example is when you have created a point:

Point::UPtr temp(new Point("element-" + ss.str()));
temp->set(i, i*2);


And now you want to transfer this temp Point to a container of unique pointers. Why? Currently the unique pointer is initialized inside a for loop, so exiting or restarting the loop will destroy the resource. That is why lifetime management of the object needs to be passed to a variable at class scope. However, doing this:

container[i] = temp;


will result in a compile time error:

unique_container.cpp:16:17: error: use of deleted function ‘std::unique_ptr<_Tp, _Dp>& std::unique_ptr<_Tp, _Dp>::operator=(const std::unique_ptr<_Tp, _Dp>&) [with _Tp = Point; _Dp = std::default_delete<Point>]’

What does it mean? You can’t use the assignment operator from one unique pointer to another. Why is this so? Because unique pointers are designed so that only one pointer can own one resource. However, ownership can be transferred. The following is valid:

container[i] = std::move(temp);


What happens now is that the job of managing the lifetime of the object has been passed down to the container. So the temp pointer no longer points to anything:

assert(temp == nullptr);


Of course, if you don’t want to create temporary variables and move them, you could simply initialize the array element directly:

container[i] = Point::UPtr(new Point("element-" + ss.str()));
container[i]->set(i, i*2);


## Non-owning Pointers

Now comes a tricky problem: say there is an element in my container of points that I want to designate as the location of the current player on a game board. I need a convenient means to access this particular point in the container. I can’t store this Point in a second unique pointer, as only one unique pointer can point to one resource. How can I go about doing so?

Well, what you can do is save the index of that container and use it accordingly:

container[player_idx]->set(45, 12);


but it’s rather tedious, messy, and not very expressive. And what about data structures that are not indexed? This creates overhead simply to get to the resource. What we can do instead is use a non-owning pointer. Non-owning pointers are classic pointers that do everything you would expect normal pointers to do, except that they must not manage the lifetime of the object.

Point *mainPoint;


You assign a raw pointer the resource by using the get() method:

mainPoint = container[1].get();


Now here’s where you need to be careful: do not delete this pointer!

delete mainPoint; // don't do this! no no no!


Your compiler won’t stop you from doing so, but it can cause some really insidious bugs in your program: the unique_ptr still owns the resource and will delete it again when it is destroyed, resulting in a double free. mainPoint should only serve as a pointer you can pass around to access the object on the heap. Under no circumstances should you try to manipulate the lifetime of the resource via this pointer.

But anyhow, I was curious what would happen:

Data corruption! Wohoo! 😀

## VS Shared Pointers

So to conclude, here’s the same Pointer manager class, but now using shared pointers: shared_container.cpp

So it’s really a lot easier to read, and more consistent. In fact, Objective-C relies on reference counting to handle heap memory, and SFGUI, a GUI toolkit built for SFML, forces you to use shared pointers to create GUI components. Seems like shared pointers are one size fits all, right? I would think so.

But of course, people would argue that unique pointers perform better than shared pointers (there is no reference-count bookkeeping), and that shared pointers should only be used as a last resort.

Personally? Shared pointers for the win, man.

So leave a comment if you feel this article could elaborate on certain things or whatever. Bye for now! 😀

# Auto-formatting source code using AStyle

Today I talk about how to get AStyle 2.04, the awesome code beautifier, working with Qt Creator 3.1, and towards the end I share my configuration settings (C++).

## Background

I have a particularly strange coding habit: I can’t stand code that doesn’t look pretty. I make a fuss about formatting, about whether code is readable and follows consistent, standard conventions. In short, messy code is disgusting to me.

I doubt I’m the only one like this (:

For a while now I’ve been formatting code manually, doing seemingly insignificant things like adding spaces between operators and removing spaces around parentheses, until I realized it was actually rather unproductive.

## Enter AStyle

I set up Qt Creator via this link. Apparently Qt Creator 3.1 already ships with a Beautifier plugin, but it is disabled for some reason. You then need to download AStyle and point the plugin to its executable.

Then… after that I started wandering around the web wondering where to find a predefined set of formatting options for it. Truth be told, it was actually quite hard to find what I was looking for. In the end I just looked up the documentation and made it work for me. It wasn’t too bad; it was actually kinda fun seeing the code fix itself up.

So here is my self-defined style (C++):

style=kr
attach-namespaces
indent=tab
keep-one-line-blocks