Tuan Anh

container nerd. k8s || GTFO

So I upgraded my VPS

I have little to no demand for web server power since my blog is static, plus a couple of small web apps I test on the VPS now and then. Recently, my interests shifted to the Ionic framework, nodejs, mongodb and angularjs. I had quite a few ideas for pet projects, mostly for the purpose of learning, so I decided to upgrade my VPS to a slightly better plan: 256MB of RAM. Still a tiny machine, but it should be enough for what I have in mind.

I also did a re-installation of the OS, switching from the 64-bit Ubuntu 14.04 image to the 32-bit version. I have no plans to scale this VPS for anything serious, so 32-bit is plenty for the purpose, plus I get a bit of extra memory back.

The migration process was rather painless. I just had to click Re-install OS, set up a new user account, apt-get a couple of packages (rvm, ruby, jekyll), set up a git server and then git push my blog. Everything is back to normal now, I suppose.

Scaling node.js application with cluster

node.js’s default single-threaded architecture prevents it from utilizing multi-core machines. When you create a new node.js application in WebStorm, for example, this is what you get:

var debug = require('debug')('myApp');
var app = require('../app');

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

// in app.listen method

app.listen = function() {
  var server = http.createServer(this);
  return server.listen.apply(server, arguments);
};

This code snippet creates a simple HTTP server on the given port, executed in a single-threaded process, no matter how many cores your machine has. This is where cluster comes in.

cluster allows you to create multiple processes which can share the same port. Ideally, you should spawn only one worker process per core.

var cluster = require("cluster"),
    http = require("http"),
    cpuCount = require("os").cpus().length,
    port = process.env.PORT || 3000;

if (cluster.isMaster) {
  // spawn one worker per CPU core
  for (var i = 0; i < cpuCount; ++i) {
    cluster.fork();
  }

  // respawn a worker whenever one dies
  cluster.on("exit", function(worker, code, signal) {
    cluster.fork();
  });
} else {
  http.createServer(function(request, response) {
    // ...
    response.end();
  }).listen(port);
}

That’s how you do it on a single machine. With a few more lines of code, your application can now make better use of modern hardware. If this isn’t enough and you need to scale across machines, take a look at load balancing with nginx. I will cover that in another post.

Using AngularJS with jekyll

This blog runs on jekyll, and since I’m learning AngularJS, I want to try AngularJS on my blog as well. The problem is that both use the double curly braces syntax: jekyll will consume them first when generating the site, and AngularJS will be left with nothing.

In order to use AngularJS with jekyll, we have to instruct AngularJS to use different start and end symbols via $interpolateProvider.

var myApp = angular.module('myApp', [], function($interpolateProvider) {
  $interpolateProvider.startSymbol('[[');
  $interpolateProvider.endSymbol(']]');
});

function MyCtrl($scope) {
  $scope.name = 'Clark Kent';
}

Using it with the new symbols:

<div ng-controller="MyCtrl">
    Hello, [[name]]
</div>

That’s it. Enjoy hacking your blog.

P.S. I didn’t push any changes to this blog. I’m just playing with AngularJS and jekyll locally.

io.js v1.0.0 released

A fork of node.js with a faster release cycle and open governance. io.js v1.0.0 also marks the return of Ben Noordhuis, the third most prolific committer to node.js, as a member of the technical committee.

For those who don’t know Ben Noordhuis’s story1, here’s a short version:

  • Ben Noordhuis was a major contributor to NodeJS and a volunteer.

  • Ben Noordhuis rejected a pull request2 that would have made a pronoun in the documentation gender neutral. The documentation was already grammatically correct, but whoever made the pull request had a political preference for using a non-masculine pronoun. Ben rightly saw this as a trivial change and rejected it. Isaacs accepted the PR and Ben attempted to revert3 it.

  • Joyent put up an embarrassing and immature blog post which essentially called Ben an “asshole” and said that if he were an employee he’d be fired.

More related links:

Link to the original article

A minimal iTerm2 setup

My current machine is an 11-inch Macbook Air, so screen real estate is a luxury. I’ve always tried to maximize the use of my screen with things like moving the Dock to the side (auto-hide on, delay set to 0), making use of multiple Mission Control desktops, etc.

It has always struck me that even though iTerm2 supports tmux out of the box, it still has that title bar and tab bar. They are basically useless to me. So why not get rid of them? iTerm2 is open source anyway.

You just have to clone the repo and edit it a bit to get rid of the title bar.

git clone https://github.com/gnachman/iTerm2.git
cd iTerm2
vi sources/PseudoTerminal.m

Search for the method styleMaskForWindowType and, in the default case, remove NSTitledWindowMask.

	// NSTitledWindowMask removed from the default case
	return (NSClosableWindowMask |
	        NSMiniaturizableWindowMask |
	        NSResizableWindowMask);

Rebuild and enjoy the new sexy, minimal look of iTerm2.

Pretender - a mock server library

Pretender is a mock server library in the style of Sinon (but built from microlibs, because JavaScript) that comes with an express/sinatra-style syntax for defining routes and their handlers.

var server = new Pretender(function(){
  this.put('/api/songs/:song_id', function(request){
    return [202, {"Content-Type": "application/json"}, "{}"];
  });
});

Very easy to use for quick prototyping.

Link to the original article

AngularJS diary - Day 4

## Update ng-model of one controller from another

Use $broadcast (or $emit) to broadcast a message, and register a listener for that message on the receiver.

// unrelated controllers -> $rootScope broadcasts
App.controller('Ctrl1', function($rootScope, $scope) {
  $rootScope.$broadcast('msgid', msg);
});

App.controller('Ctrl2', function($scope) {
  $scope.$on('msgid', function(event, msg) {
    // react to msg here
  });
});

When to use $broadcast and when to use $emit?

  • $broadcast — dispatches the event downwards to all child scopes,

  • $emit — dispatches the event upwards through the scope hierarchy.

In case there is no parent-child relation, use $rootScope to $broadcast.
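To make the two directions concrete, here is a tiny plain-JS sketch of a scope tree with both methods (an illustration of the semantics only, not Angular's actual implementation):

```javascript
// Minimal scope tree: each scope knows its parent and children,
// and keeps listeners keyed by event name.
function Scope(parent) {
  this.parent = parent || null;
  this.children = [];
  this.listeners = {};
  if (parent) parent.children.push(this);
}

Scope.prototype.$on = function(name, fn) {
  (this.listeners[name] = this.listeners[name] || []).push(fn);
};

Scope.prototype.fire = function(name, msg) {
  (this.listeners[name] || []).forEach(function(fn) { fn(msg); });
};

// $broadcast: deliver here, then recurse down through all children
Scope.prototype.$broadcast = function(name, msg) {
  this.fire(name, msg);
  this.children.forEach(function(child) { child.$broadcast(name, msg); });
};

// $emit: deliver here, then walk up through the parents
Scope.prototype.$emit = function(name, msg) {
  this.fire(name, msg);
  if (this.parent) this.parent.$emit(name, msg);
};
```

Broadcasting from the root reaches every scope, which is why $rootScope.$broadcast works for unrelated controllers.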

## Making AJAX calls the right way

Use a service, factory or provider for that. It’s advised not to make AJAX calls within the controller.

var app = angular.module("ExampleApp", []);

app.service('ExampleService', function($http) {
  this.my_method = function() {
    var url = 'http://...';
    var req_params = {};
    return $http.post(url, req_params); // returns a promise
  };
});

// using it: inject the service into your controller
app.controller('ExampleCtrl', function($scope, ExampleService) {
  ExampleService.my_method().then(function(result) {
    $scope.my_var = result.data;
  });
});

## Services vs Factory

In most cases, both can be used interchangeably.

When you’re using service, Angular instantiates it behind the scenes with the new keyword. Because of that, you’ll add properties to this and the service will return this. When you pass the service into your controller, those properties on this will now be available on that controller through your service. This is why service is most suitable for generic utilities.

factory is useful for cases where you don’t want to expose private fields directly, only public methods, just like a regular class object.
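To see the difference, here is a plain-JavaScript sketch of how Angular treats the two recipe functions (an illustration of the semantics, not Angular's actual source):

```javascript
// service recipe: Angular calls it with `new`, so you attach the API to `this`
function GreeterService() {
  this.greet = function(name) {
    return 'Hello, ' + name;
  };
}

// factory recipe: Angular calls it as a plain function and uses the return
// value, so private state can hide in the closure
function counterFactory() {
  var count = 0; // private field, never exposed directly
  return {
    increment: function() { return ++count; },
    value: function() { return count; }
  };
}

var greeter = new GreeterService(); // what `.service()` does behind the scenes
var counter = counterFactory();     // what `.factory()` does

greeter.greet('Clark Kent'); // 'Hello, Clark Kent'
counter.increment();         // count stays private; only the methods see it
```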

The next bullet on my resume: AngularJS

I started an HTML5 hybrid mobile app project recently. Before I started coding the frontend, I had to decide which client-side JavaScript framework to use. The choice narrowed down to AngularJS (via the Ionic framework) and Ember.

Of those two, AngularJS is obviously the more popular choice. There are many frameworks built on AngularJS, thousands of questions on StackOverflow, and lots of open-source projects based on AngularJS on GitHub. There are many great frameworks out there, but few have gained as much developer mindshare as AngularJS. There’s clearly something about this framework. I mean, take a look at the search trend of AngularJS vs. other frameworks.

angularjs search trend vs others

So I decided to take some time to evaluate AngularJS first and see what all the hype is about. I downloaded AngularJS, played around, and read the documentation, tutorials and top questions on StackOverflow.

I don’t know much about AngularJS yet, but I will start blogging about it here as I learn. These are a few things that I’ve learnt about AngularJS in the last two days.

AngularJS is easy to start with

The thing about AngularJS is that it’s very intuitive, simple to start with and easy to understand if you’re already familiar with the MVx pattern. It has a very high WOW factor (data binding, for one) at the beginning if you’re coming from, say, jQuery. Seeing things like this makes people want to explore the framework further. This is one of the main reasons AngularJS became popular, I think.

Single source of truth

Single Source Of Truth (SSOT) refers to the practice of structuring information models and associated schemata such that every data element is stored exactly once

Suppose you have a toggle button. If you were to use jQuery, you would have to add an active CSS class to style it when it’s toggled on, then remove it and re-add an inactive class when it’s off. If there’s data dependent on it, you would have to manipulate the DOM yourself. Crazy, right? The problem is that things may go out of sync (the toggle is on but the CSS class is still inactive, for example). Plus, the code will be much longer and unnecessarily complicated.

AngularJS separates data from its presentation and offers you data binding. You create a view and bind a field to an attribute in your model. Your data model is now the single source of truth. You don’t have to manipulate the DOM yourself; you deal with the model instead.
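The toggle example above boils down to a few lines (a plain-JS illustration of the idea, not Angular code): the model holds the state, and everything else is derived from it.

```javascript
function ToggleModel() {
  this.on = false; // the single source of truth
}

ToggleModel.prototype.toggle = function() {
  this.on = !this.on;
};

// The CSS class is always computed from the state, never stored
// separately, so the two can't go out of sync.
ToggleModel.prototype.cssClass = function() {
  return this.on ? 'active' : 'inactive';
};
```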


There’s no base model/object in Angular. Whatever you put inside $scope is your model. This is a bit weird coming from an OOP language. I wonder how we would share the model outside of the $scope; the data is stuck between your template and $scope. So far, I’ve seen examples using $rootScope, but that’s like using global variables, which I’m against unless really necessary. factory seems to be the thing I’m looking for. I shall read more of the documentation about this.

That’s it for today. I will continue once I find more cool stuff worth sharing about Angular.

How to setup rtorrent, rutorrent on Ubuntu

This is a simple and concise tutorial on how to set up a seedbox running rtorrent with rutorrent as the webui on Ubuntu. I’ve tried to simplify it as much as possible to make it easy to understand. It may look a bit lengthy, but it’s mostly copy-paste-fu.

Initial server setup

Log in to your server and create a new user account, then add it to the sudo group. Substitute USER_NAME with your desired username.

ssh root@YOUR_SERVER_IP_ADDRESS
adduser USER_NAME
gpasswd -a USER_NAME sudo

Setup key authentication

It’s better, less hassle and a lot safer. You should use it.

# at your local machine: gen key and upload it to your vps
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh USER_NAME@YOUR_SERVER_IP_ADDRESS 'cat >> .ssh/authorized_keys'

You can now try logging in to your server as the new user. It will not ask you for a password this time.

Disable root login

Use nano to change PermitRootLogin to no. Ctrl+O to save and Ctrl+X to quit afterward.

sudo nano /etc/ssh/sshd_config
sudo service ssh restart

Setup rtorrent

# install libraries required to build rtorrent
sudo apt-get update
sudo apt-get install subversion build-essential automake libtool libcppunit-dev libcurl3-dev libsigc++-2.0-dev unzip unrar-free curl libncurses-dev libxml2-dev

# download source and install
cd ~
mkdir src
cd src
svn checkout http://xmlrpc-c.svn.sourceforge.net/svnroot/xmlrpc-c/stable xmlrpc
cd xmlrpc
./configure --prefix=/usr --enable-libxml2-backend --disable-libwww-client --disable-wininet-client --disable-abyss-server --disable-cgi-server --disable-cplusplus
sudo make install

Install libtorrent

cd ~
wget http://libtorrent.rakshasa.no/downloads/libtorrent-0.13.4.tar.gz
tar xvf libtorrent-0.13.4.tar.gz
cd libtorrent-0.13.4
./configure --prefix=/usr
sudo make install

Install rtorrent

cd ~ # back to home
wget http://libtorrent.rakshasa.no/downloads/rtorrent-0.9.4.tar.gz
tar xvf rtorrent-0.9.4.tar.gz
cd rtorrent-0.9.4
./configure --prefix=/usr --with-xmlrpc-c
sudo make install

Create new user to run rtorrent and required folders

useradd -d /home/rtorrent_usr/ rtorrent_usr
mkdir /home/rtorrent_usr
mkdir /home/rtorrent_usr/downloads
mkdir /home/rtorrent_usr/.session
mkdir /home/rtorrent_usr/watch
mkdir /home/rtorrent_usr/.sockets
touch /home/rtorrent_usr/.sockets/rpc-socket
nano /home/rtorrent_usr/.rtorrent.rc
# update permission
chown -R rtorrent_usr:rtorrent_usr /home/rtorrent_usr/
chown -R www-data:www-data /usr/share/nginx/html

For the rtorrent config, you can copy the default one and mess around. It’s simple and straightforward. I won’t go into details here.

cp /usr/share/doc/rtorrent/rtorrent.rc ~/.rtorrent.rc
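As a starting point, a minimal config matching the folders created above might look like the sketch below (the values are assumptions; adjust to taste). The scgi_local line is what lets the rutorrent webui talk to rtorrent through the rpc-socket we created:

```
# download and session locations
directory = /home/rtorrent_usr/downloads
session = /home/rtorrent_usr/.session

# load any .torrent dropped into the watch folder
schedule = watch_directory,5,5,load_start=/home/rtorrent_usr/watch/*.torrent

# expose XML-RPC over the unix socket for the rutorrent webui
scgi_local = /home/rtorrent_usr/.sockets/rpc-socket
```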

Setup nginx/rutorrent for webui

Install nginx and php5-fpm to run rutorrent

sudo apt-get install nginx php5-fpm php5-cli

By default, your web root will be at /usr/share/nginx/html.

Create a configuration file for rutorrent

nano /etc/nginx/sites-available/rutorrent

Copy and paste the content below

server {
  listen   80;
  listen   [::]:80 default_server ipv6only=on; ## listen for ipv6

  root /usr/share/nginx/html;
  index index.php index.html index.htm;

  server_name localhost;

  location / {
    try_files $uri $uri/ /index.html;
  }

  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }

  # php5-fpm
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # With php5-fpm:
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
  }
}

Create symlink to sites-enabled and restart nginx

cd /etc/nginx/sites-enabled
ln -s ../sites-available/rutorrent
# restart nginx
service nginx restart

Create a test file to verify php5-fpm is working

nano /usr/share/nginx/html/info.php
# use content below
<?php phpinfo(); ?>

Download rutorrent

cd /usr/share/nginx/html
svn checkout http://rutorrent.googlecode.com/svn/trunk/rutorrent
svn checkout http://rutorrent.googlecode.com/svn/trunk/plugins
rm -r rutorrent/plugins
mv plugins rutorrent/

And that’s it. Start rtorrent and things should work as they’re supposed to. Feel free to ask me any questions if you get stuck.

If you’re a casual torrent user like me and still looking for a dirt-cheap, torrent-friendly VPS provider, I recommend you take a look at RamNode. They provide a $15 per YEAR plan with 80GB of space and 500GB of bandwidth. As long as you stay away from heavy torrenting and public trackers, you should be safe.


Using GitHub issue tracker as comment system for your static blog

Ivan Zuzak wrote about it here. It’s actually a very cool idea. All you need is to create a new repo on GitHub, create an issue for each blog post you want to enable comments on, and set commentIssueId in your blog post. A JavaScript AJAX call will pull the comments from GitHub on page load.
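The pulling part is simple enough to sketch. Assuming the comments live in a repo called user/blog-comments (a hypothetical name; use your own), the browser can fetch them straight from GitHub's public issues API:

```javascript
// Build the GitHub API URL for an issue's comments.
// "user/blog-comments" below is a hypothetical repo name.
function commentsUrl(repo, issueId) {
  return 'https://api.github.com/repos/' + repo + '/issues/' + issueId + '/comments';
}

// Fetch and append the comments on page load (browser-only part).
function loadComments(repo, issueId, el) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', commentsUrl(repo, issueId));
  xhr.setRequestHeader('Accept', 'application/vnd.github.v3+json');
  xhr.onload = function() {
    JSON.parse(xhr.responseText).forEach(function(comment) {
      var div = document.createElement('div');
      div.textContent = comment.user.login + ': ' + comment.body;
      el.appendChild(div);
    });
  };
  xhr.send();
}
```

Note the API is rate-limited for unauthenticated requests, which is fine for a low-traffic blog.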

If I were to enable comments on my blog, I would definitely go this route. You already blog like a hacker; you should also comment like one.

I’ve also seen people using Discourse for their website’s comments. The downside is that setting up a new VPS and installing Discourse (an online discussion platform) seems like overkill for this purpose. Also, Discourse is not exactly lightweight: you need a VPS with a minimum of 1GB of RAM (at least that was the case when I last tried it). My VPS has a mere 128MB of memory, not exactly a beastly machine. The reason I went with a static blog is that it’s very lightweight. Using Discourse kind of defeats the purpose now, doesn’t it?

If you don’t want to go through all the trouble, you can use Disqus instead.