Tuan Anh

container nerd. k8s || GTFO

Fuck callbacks! Let's use generators

Let’s write a simple function hello that returns a string when called.

var hello = function (name) { return 'Hello ' + name; }

Now convert it into a generator. Calling hello() this time will return you an object instead: a generator.

var hello = function *(name) { return 'Hello ' + name; }

Let’s consider the following snippet.

var hello = function *(name) {
  yield 'Stopped here!';
  return 'Hello ' + name;
};

var generator = hello('Clark Kent');
console.log(generator.next()); // {value: 'Stopped here!', done: false}
console.log(generator.next()); // {value: 'Hello Clark Kent', done: true}
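As a side note, next() also accepts an argument: it becomes the value of the paused yield expression inside the generator. This is the mechanism that lets you feed results back in. A small sketch (echo is a made-up name):

```javascript
var echo = function *() {
  var received = yield 'ready';  // pauses here; resumes with next()'s argument
  return 'got ' + received;
};

var gen = echo();
console.log(gen.next());      // {value: 'ready', done: false}
console.log(gen.next('hi'));  // {value: 'got hi', done: true}
```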

So what good can generators do for me? Generators can help eliminate callbacks. Using callbacks one or two levels deep is fine. However, imagine having to nest callbacks three levels or more, like the example below. URGH!!

func_1(arg, function(err, result) {
  if (err) { /* ... */ }
  else {
    func_2(result, function(err, result2) {
      if (err) { /* ... */ }
      else {
        func_3(result2, function(err, result3) {
          if (err) { /* ... */ }
          else {
            // LOST IN LIMBO !!!
          }
        });
      }
    });
  }
});

Take an example: consider a scenario where you have to establish connections to a bunch of database servers at application startup. With generators, you can do something like this:

var init_all_connections = function *() {
  var result;
  result = yield init_connection(conn_str1);
  result = yield init_connection(conn_str2);
  result = yield init_connection(conn_str3);
  // ...
};

For the sake of convenience, you can write a function that executes the generator until done is true and throws if an error occurs. The whole thing then looks as simple as this:

var init_all_connections = function *() {
  var result;
  result = yield init_connection(conn_str1);
  result = yield init_connection(conn_str2);
  result = yield init_connection(conn_str3);
  // ...
};

// execute, with run being the executor function described above
run(init_all_connections);
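Here is one possible sketch of such an executor. It assumes every yielded value is a promise; the run name and the promise-based contract are my assumptions, and in practice you would probably reach for a library like co:

```javascript
function run(generatorFn) {
  var generator = generatorFn();
  return new Promise(function (resolve, reject) {
    function step(input) {
      var next;
      try {
        next = generator.next(input);  // resume the generator with the last result
      } catch (err) {
        return reject(err);            // an exception inside the generator rejects the whole run
      }
      if (next.done) { return resolve(next.value); }
      // treat the yielded value as a promise and step again when it settles
      Promise.resolve(next.value).then(step, reject);
    }
    step();
  });
}
```

run itself returns a promise, which resolves once the generator has run to completion.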

Look how cleanly the code is organized and how much more readable it is compared with the callback hell above. Ain’t you glad you read this now ;)

IO performance benchmark on RamNode

Last update: 2015-01-18

I read an IO performance benchmark today comparing AWS and DigitalOcean. It made me curious about the IO performance on RamNode. Of course, I wouldn’t expect anything high since I’m running on one of the lowest tiers they offer (it’s not even 100% SSD, just some kind of SSD-cached hybrid). I will also ignore the read test since all the VPSes are container-based and reads are probably heavily cached anyway. As for the tool, I’m using fio for my IO benchmark.

Installing fio

sudo apt-get install fio
mkdir data # at ~ I suppose
nano testconfig.fio
# content of the file

numjobs=10 # check size and your free space left (df -h)
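For reference, a complete fio job file for this kind of test might look like the following. This is a sketch under assumptions (random-write workload, libaio engine); the post only shows the numjobs line of the actual config:

```ini
[random-write]
rw=randwrite          ; random write workload
directory=data        ; the data dir created above
size=1g               ; per-job file size
blocksize=8k          ; vary this between runs (8k, 16k)
direct=1              ; bypass the page cache
ioengine=libaio
numjobs=10            ; check size and your free space left (df -h)
```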


Result when blocksize=8k

Run status group 0 (all jobs):
   READ: io=2048.0MB, aggrb=6485KB/s, minb=6485KB/s, maxb=6485KB/s, mint=323357msec, maxt=323357msec

Result when blocksize=16k; bw peaks at 21249.

Run status group 0 (all jobs):
   READ: io=1024.0MB, aggrb=8154KB/s, minb=8154KB/s, maxb=8154KB/s, mint=128592msec, maxt=128592msec

Result interpretation

One thing to notice is that the CPU load skyrocketed. Though I only went with numjobs=2 and size=1g, the CPU load hit 3.x (300%). This is expected behavior since I’m on their Massive VPS plan. Due to this CPU issue, this is probably not the highest write speed possible either. I would have to try it on a higher plan where CPU is irrelevant to the test. Too bad RamNode doesn’t allow me to quickly launch an instance to test and destroy afterwards.

So overall, DigitalOcean > AWS > RamNode. RamNode is left behind by a far margin, probably due to the CPU on their lowest plan. If your application is IO-heavy, you should probably look elsewhere.

bluebird - a promise library with unmatched performance

The documentation of bluebird is so much better than that of Q, the library I’m currently using. bluebird with node.js is even better: it can convert an existing promise-unaware API into a promise-returning one, which is awesome. This feature works with most popular libraries (those that follow the error-first callback convention, as they all should).

I’m sold!!


function getConnection(urlString) {
  return new Promise(function(resolve) {
    // Without new Promise, this throwing would throw an actual exception
    // (parse and getAdapter are placeholder helpers)
    var params = parse(urlString);
    resolve(getAdapter(params).getConnection());
  });
}
or with the promisifyAll method:

var Promise = require("bluebird");
var fs = Promise.promisifyAll(require("fs"));
// Now you can use fs as if it was designed to use bluebird promises from the beginning

fs.readFileAsync("file.js", "utf8").then(...)
Link to the original article

So I upgraded my VPS

I have little to no demand for web server power since my blog is static, plus a couple of small web apps I test on the VPS now and then. Recently, my interests shifted to the Ionic framework, node.js, mongodb and angularjs. I had quite a few ideas for pet projects, mostly for the purpose of learning, so I decided to upgrade my VPS to a slightly better plan with 256MB of RAM. Still a tiny machine, but it should be enough for what I have in mind.

I also did a re-installation of the OS, switching from the 64bit Ubuntu 14.04 image to the 32bit version. I have no plan to scale this VPS for anything serious, so 32bit is plenty for the purpose, plus it frees up a bit of memory.

The migration process was rather painless. I just had to click Re-install OS, set up a new user account, apt-get a couple of packages (rvm, ruby, jekyll), set up the git server and then git push my blog. Everything is back to normal now, I suppose.

Scaling node.js application with cluster

node.js’s default single-threaded architecture prevents it from utilizing multi-core machines. When you create a new node.js application in WebStorm, for example, this is what you get:

var debug = require('debug')('myApp');
var app = require('../app');

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

// in app.listen method

app.listen = function(){
  var server = http.createServer(this);
  return server.listen.apply(server, arguments);
};
This code snippet creates a simple HTTP server on the given port, executed in a single-threaded process, no matter how many cores your machine has. This is where cluster comes in.

cluster allows you to create multiple processes which can share the same port. Ideally, you should spawn only one worker process per core.

var cluster = require("cluster"),
    http = require("http"),
    cpuCount = require("os").cpus().length,
    port = process.env.PORT || 3000;

if (cluster.isMaster) {
  for (var i = 0; i < cpuCount; ++i) {
    cluster.fork();
  }

  cluster.on("exit", function(worker, code, signal) {
    cluster.fork(); // replace the worker that died
  });
} else {
  http.createServer(function(request, response) {
    // ...
  }).listen(port);
}
That’s how you do it on a single machine. With a few more lines of code, your application can now make better use of modern hardware. If this isn’t enough and you need to scale across machines, take a look at load balancing with nginx. I will cover that in another post.

Using AngularJS with jekyll

This blog is running on jekyll, and since I’m learning AngularJS, I want to try AngularJS on my blog as well. The problem is that both use the double curly braces syntax: jekyll consumes it first when generating the site, and AngularJS is left with nothing.

In order to use AngularJS with jekyll, we have to instruct AngularJS to use different start and end symbols via $interpolateProvider.

var myApp = angular.module('myApp', [], function($interpolateProvider) {
  $interpolateProvider.startSymbol('[[');
  $interpolateProvider.endSymbol(']]');
});

function MyCtrl($scope) {
  $scope.name = 'Clark Kent';
}

Using it with the new symbols:

<div ng-controller="MyCtrl">
    Hello, [[name]]
</div>

That’s it. Enjoy hacking your blog.

P/S: I didn’t push any change to this blog. I’m just playing with AngularJS and jekyll locally.

io.js v1.0.0 released

A fork of node.js with a faster release cycle and open source governance. io.js v1.0.0 also marks the return of Ben Noordhuis, who has the third-most commits in node.js, as a technical committee member.

For those who don’t know about Ben Noordhuis’s story, here’s a short version:

  • Ben Noordhuis was a major contributor to node.js and a volunteer.

  • Ben Noordhuis rejected a pull request that would have made the pronouns in the documentation gender neutral. The documentation was already grammatically correct, but whoever made the pull request had a political preference for using a non-masculine pronoun. Ben rightly saw this as a trivial change and rejected it. Isaacs accepted the PR and Ben attempted to revert it.

  • Joyent put up an embarrassing and immature blog post which essentially called Ben an “asshole” and said that if he were an employee, he’d be fired.

More related links:

Link to the original article

A minimal iTerm2 setup

My current machine is an 11-inch Macbook Air, so screen real estate is a luxury. I’ve always tried to maximise the use of my screen with things like moving the Dock to the side (with auto hide on and delay set to 0), making use of multiple Mission Control desktops, etc.

It always strikes me that even though iTerm2 already supports tmux out-of-the-box, it still has that title bar and tab bar. They are basically useless to me. So why not get rid of them? iTerm2 is open source anyway.

You just have to clone the repo and edit a bit to get rid of the title bar.

git clone https://github.com/gnachman/iTerm2.git
cd iTerm2
vi sources/PseudoTerminal.m

Search for the method styleMaskForWindowType and, in the default case, remove NSTitledWindowMask from the returned mask:

	return (NSTitledWindowMask |   // <- remove this flag
	        NSClosableWindowMask |
	        NSMiniaturizableWindowMask |
	        NSResizableWindowMask |
	        ...);

Rebuild and enjoy the new sexy, minimal look of iTerm2.

Pretender - a mock server library

Pretender is a mock server library in the style of Sinon (but built from microlibs, because javascript) that comes with an express/sinatra-style syntax for defining routes and their handlers.

var server = new Pretender(function(){
  this.put('/api/songs/:song_id', function(request){
    return [202, {"Content-Type": "application/json"}, "{}"];
  });
});

Very easy to use for quick prototyping.

Link to the original article

AngularJS diary - Day 4

## Update ng-model of one controller from another

Use $broadcast (or $emit) to broadcast a message, and register a listener for that message on the receiver.

// unrelated controllers -> $rootScope broadcasts
App.controller('Ctrl1', function($rootScope, $scope) {
  $rootScope.$broadcast('msgid', msg);
});

App.controller('Ctrl2', function($scope) {
  $scope.$on('msgid', function(event, msg) {
    // update the model here
  });
});

When to use $broadcast and when to use $emit?

  • $broadcast — dispatches the event downwards to all child scopes.

  • $emit — dispatches the event upwards through the scope hierarchy.

In case there is no parent-child relation, use $rootScope to $broadcast.

## Making AJAX calls the right way

Use a service, factory or provider for that. It’s advised not to make AJAX calls within the controller.

var app = angular.module("ExampleApp", []);
app.service('ExampleService', function($http) {
  this.my_method = function() {
    var url = 'http://...';
    var req_params = {};
    return $http.post(url, req_params); // a promise
  };
});

// using it: inject the service into a controller
app.controller('ExampleCtrl', function($scope, ExampleService) {
  ExampleService.my_method().then(function(result) {
    $scope.my_var = result.data;
  });
});

## Service vs Factory

In most cases, both can be used interchangeably.

When you’re using service, Angular instantiates it behind the scenes with the new keyword. Because of that, you’ll add properties to this and the service will return this. When you pass the service into your controller, those properties on this will now be available on that controller through your service. This is why service is most suitable for generic utilities.

factory is useful for cases where you don’t want to expose private fields and want to expose public methods instead, just like a regular class object.
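To make the difference concrete, here is roughly what Angular does behind the scenes, sketched in plain javascript (the names are made up for illustration):

```javascript
// A "service" recipe is instantiated with `new`:
// whatever you attach to `this` is what consumers see.
function GreetService() {
  this.greet = function (name) { return 'Hello ' + name; };
}
var service = new GreetService();

// A "factory" recipe is simply called:
// only what you return is exposed, so `secret` stays private.
function greetFactory() {
  var secret = 'hidden';
  return {
    greet: function (name) { return 'Hello ' + name; }
  };
}
var factory = greetFactory();

console.log(service.greet('Clark'));  // Hello Clark
console.log(factory.greet('Kent'));   // Hello Kent
console.log(factory.secret);          // undefined
```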