Tuan Anh

container nerd. k8s || GTFO

Better MySQL pagination

Consider this

SELECT * FROM Bookings LIMIT 5000, 10

versus this

SELECT * FROM Bookings
INNER JOIN (SELECT id FROM Bookings LIMIT 5000, 10) AS result USING (id)

My Bookings table has merely 6,000 records, yet the first query takes roughly 10 seconds, which is outrageous. Luckily, we can optimize this with a late row lookup, as in the second query.

In the second query, we select only the id column from Bookings and then join the original table back on it. This makes each individual row lookup slightly less efficient, but it reduces the total number of full-row lookups by a lot.

Also, if you can push more conditions into the query, performance improves further. So instead of building your paging URL like example.com/products?page=10, use example.com/products?page=10&last_seen=1023. From that, you can add WHERE id > 1023 to the pagination query, making the whole thing a lot faster.
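As a sketch, here is how the last_seen value might be turned into that query on the server side. buildPageQuery is a made-up helper name, and the values must be validated because they come straight from the query string:

```javascript
// Keyset ("seek") pagination: resume from the last id the client saw
// instead of making MySQL scan and discard an OFFSET worth of rows.
function buildPageQuery(lastSeen, pageSize) {
    // Guard against injection: only accept integers from the URL.
    if (typeof lastSeen !== 'number' || typeof pageSize !== 'number' ||
        lastSeen % 1 !== 0 || pageSize % 1 !== 0) {
        throw new TypeError('lastSeen and pageSize must be integers')
    }
    return 'SELECT * FROM Bookings WHERE id > ' + lastSeen +
           ' ORDER BY id LIMIT ' + pageSize
}

console.log(buildPageQuery(1023, 10))
// SELECT * FROM Bookings WHERE id > 1023 ORDER BY id LIMIT 10
```

Note the ORDER BY id: keyset pagination only works when the page order matches the column you seek on.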

Those are the tricks I use for optimizing pagination. If you know any others, please let me know.

Peace out, everybody.


qmail is a mail transfer agent (MTA) that runs on Unix. It was written, starting December 1995, by Daniel J. Bernstein (djb) as a more secure replacement for the popular Sendmail program.

A bit of background: back in the '80s and '90s, email and DNS servers commonly ran Sendmail and BIND9. Both pieces of software, however, had an enormous number of bugs. djb wrote qmail and djbdns to replace them and declared both to be "bug-free". They ran on millions of domains and were extremely stable. The first bug in qmail was found after nearly a decade, which says a lot about its stability. There has probably never been another programmer in history who reached this level.

Daniel Julius Bernstein (sometimes known simply as djb; born October 29, 1971) is an American mathematician, cryptologist, programmer, and professor of mathematics and computer science at the Eindhoven University of Technology and research professor at the University of Illinois at Chicago. He is the author of the computer software programs qmail, publicfile, and djbdns.

Link to the original article

Performance at Rest


Benchmarking frameworks is fucking stupid.

Link to the original article

Explicit over clever

I always prefer explicit over clever, hacky code. Explicit code looks clearer, is more maintainable, and behaves more predictably (meaning junior developers are less likely to mess it up).

Take this example, where I have a folder called providers. Below is the content of its index.js file, which reads all the files in that folder (except index.js itself), requires them, and exports the result via module.exports. Seems simple enough, right?

var fs = require('fs');
var path = require('path');

var basename = path.basename(module.filename);
var providers = {};
var extension = '.js';

// e.g. 'auth.js' -> 'Auth'
function makeKey(file, ext) {
    var name = path.basename(file, ext);
    return name.charAt(0).toUpperCase() + name.slice(1);
}

fs.readdirSync(__dirname)
    .filter(function(file) {
        return (file.indexOf('.') !== 0) && (file !== basename) && (file.slice(-3) === extension);
    })
    .forEach(function(file) {
        var provider = require(path.join(__dirname, file));
        providers[makeKey(file, extension)] = provider;
    });

module.exports = providers;

Now, take a look at this code where I explicitly require all the files in that folder (sounds exhausting right?)

let providers = {
    Auth: require('./auth'),
    User: require('./user'),
    Health: require('./health')
    /* ... */
}

module.exports = providers

Now, if you were a newly joined member of the project, which one would look less intimidating to you? I’m not saying we can’t be both explicit and clever at the same time; but if I had to choose one over the other, I would always go with explicit.

Step by step how to install Deis on AWS

Deis (pronounced DAY-iss) is an open source PaaS that makes it easy to deploy and manage applications on your own servers. Deis builds upon Docker and CoreOS to provide a lightweight PaaS with a Heroku-inspired workflow.

I struggled installing Deis, and it took me several attempts to get it right. Deis’s documentation is correct but not very straightforward, so I decided to write this to help others who struggle like I did. These steps work for me as of version 1.12.2.


  • Install deisctl. This is needed for the provision script.
$ cd ~/bin
$ curl -sSL http://deis.io/deisctl/install.sh | sh -s <latest-version-here>
$ # on CoreOS, add "sudo" to install to /opt/bin/deisctl
$ curl -sSL http://deis.io/deisctl/install.sh | sudo sh -s <latest-version-here>
  • Install AWS Command line interface and configure it
$ pip install awscli
$ pip install pyyaml
$ aws configure
AWS Access Key ID [None]: ***************
AWS Secret Access Key [None]: ************************
Default region name [None]: us-west-1
Default output format [None]:
  • Generate a key pair and upload the public key to AWS. Also add it to ssh-agent so it can be used while provisioning the cluster.
$ ssh-keygen -q -t rsa -f ~/.ssh/deis -N '' -C deis
$ aws ec2 import-key-pair --key-name deis --public-key-material file://~/.ssh/deis.pub
$ eval `ssh-agent -s`
$ ssh-add ~/.ssh/deis
  • If you want to use more than 3 instances (the default), just export DEIS_NUM_INSTANCES before provisioning, e.g.
$ export DEIS_NUM_INSTANCES=5

Provision the cluster

  • Clone the repo and git checkout the latest tag. At the repo root, run the command below to create a discovery URL. Forgetting to do this will result in etcd not being configured properly.
$ make discovery-url
  • Next, go to the contrib/aws/ folder in the deis repo and create a file named cloudformation.json to override default values. You can take a look at the template file deis.template.json.

  • Run the provision script

$ cd contrib/aws
$ ./provision-aws-cluster.sh
Creating CloudFormation stack deis
    "StackId": "arn:aws:cloudformation:us-east-1:69326027886:stack/deis/1e9916b0-d7ea-11e4-a0be-50d2020578e0"
Waiting for instances to be created...
Waiting for instances to be created... CREATE_IN_PROGRESS
Waiting for instances to pass initial health checks...
Waiting for instances to pass initial health checks...
Waiting for instances to pass initial health checks...
Instances are available:
i-5c3c91aa    m3.large        us-east-1a      running
i-403c91b6    m3.large        us-east-1a      running
i-e36fc6ee    m3.large        us-east-1b      running
Using ELB deis-DeisWebE-17PGCR3KPJC54 at deis-DeisWebE-17PGCR3KPJC54-1499385382.us-east-1.elb.amazonaws.com
Your Deis cluster has been successfully deployed to AWS CloudFormation and is started.
Please continue to follow the instructions in the documentation.

Install platform

$ export DEISCTL_TUNNEL=<ip-address-of-any-of-the-cluster-node>
$ deisctl config platform set sshPrivateKey=~/.ssh/deis
$ deisctl config platform set domain=deis.example.com # create a CNAME point this to the load balancer
$ deisctl install platform
$ deisctl start platform

After this, you should have a properly configured Deis cluster. Just install the client, register an account, and you should be ready to deploy your very first application on Deis.

Metalsmith - a static site generator written in Node.js

An extremely simple, pluggable static site generator.

Metalsmith works in three simple steps:

  • Read all the files in a source directory.

  • Invoke a series of plugins that manipulate the files.

  • Write the results to a destination directory!

Very simple yet powerful. Metalsmith is like gulp, but made for static websites/content.

Link to the original article


Drag a window up to the menubar to maximize it, or drag a window to the side to resize it to the corresponding screen half.


Link to the original article

Possibly the easiest way to setup rtorrent/rutorrent

Installation scripts tend to be cumbersome, not very easy to use, and sometimes unmaintained. You usually don’t find that out until you have already messed up your VPS file system.

In this post, I will show you how to setup rtorrent/rutorrent with Docker. It’s rather easy and requires minimal knowledge about Linux in general.

Grab a KVM VPS from RamNode

I’m going with RamNode here, but you can use any VPS provider you want. Be sure to select KVM, because Docker requires a fairly recent kernel, so OpenVZ won’t work. I selected Ubuntu 14.04 for this tutorial; to make it easy to follow, you can do the same.

After finishing the payment, you will get an email containing your VPS information. You can perform some simple initial server setup to secure your VPS after this.

Install Docker

sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

sudo touch /etc/apt/sources.list.d/docker.list

echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list # sudo does not apply to a shell redirection, so use tee

sudo apt-get update

sudo apt-get install docker-engine

mkdir ~/rtorrent # folder for rtorrent docker

sudo usermod -aG docker <your-username> # use docker without sudo, if your user is in sudo group

Setup rtorrent/rutorrent

You will need to secure your rutorrent installation so that no one except you can access it.

cd ~/rtorrent
printf "<username>:$(openssl passwd -apr1 "<your-password>")\n" > .htpasswd

Now we are going to use the Dockerfile made by diameter to simplify the whole process of setting up rtorrent/rutorrent/nginx. If you want to see the source code, you can check it out here on Docker Hub.

docker run -dt --name rtorrent-rutorrent -p 8080:80 -p 49160:49160/udp -p 49161:49161 -v ~/rtorrent:/downloads diameter/rtorrent-rutorrent:64

Let Docker pull and build for a while (1-2 minutes at most), then navigate to http://<your-server-ip-address>:8080 to start using rutorrent.

rutorrent docker

Stuff you may not know about console


var Person = function(name, age) {
    this.name = name
    this.age = age
}

var person_list = [
    new Person('wayne', 10),
    new Person('cristiano', 20),
    new Person('david', 30),
    new Person('bastian', 40)
]

console.table(person_list, ['name', 'age']) // filtering columns

You’ll get a nice table like this



console.time('Operation A')
var arr = []
for (var i = 0; i < 100000; ++i) {
    arr.push(i)
}
console.timeEnd('Operation A')

> Operation A: 395.557ms


for (var i = 0; i < 100000; ++i) {
    console.assert(i % 1000, 'Iteration #%d', i)
}

This only prints when the first argument (i % 1000) is falsy, i.e. every 1000th iteration.


Grouping lets you collapse/expand messages within a group.

console.group('Group A')
console.log('First line')
console.log('Second line')
console.log('Third line')
console.groupEnd()

Node.js happy coding

I’ve been doing Node.js professionally for roughly 2 years. During that time, I’ve learnt a thing or two that keeps me out of trouble.

Use Promise instead of callback

ES6 already has native Promise, but if you prefer the extra conveniences a third-party library offers, pick something like bluebird (it’s really, really fast). Stay away from Q. I mean, just look at this.

Do not trust developer’s semver practice

Use npm shrinkwrap instead.

This command locks down the versions of a package's dependencies so that you can control exactly which versions of each dependency will be used when your package is installed.
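Running npm shrinkwrap writes an npm-shrinkwrap.json next to your package.json; npm then installs exactly those versions instead of re-resolving semver ranges. As an illustration (the package name and versions here are made up), it looks roughly like this:

```
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": {
      "version": "3.10.1",
      "from": "lodash@^3.10.0",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz"
    }
  }
}
```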

Personally, I would prefer that npm support something like a --save-exact flag. That would be awesome.

Choosing the right dependency

You already have enough bug fixing on your plate. Don’t import more from others. There are several things I take into consideration when I need to install an extra package:

  • statistics on npm/GitHub: popular is good

  • check whether the project is active: last commit date, etc.

  • check whether issues get resolved in a timely manner

  • check whether unit tests cover the code well

  • check its dependencies: apply all of the above to them, too. Personally, I wouldn’t use anything that depends on Q. It shows the author didn’t do their homework, or at the very least didn’t keep the package up to date with the current state of the Node.js ecosystem.

Use linter

Respect the code convention. The ultimate goal is code written by different developers that looks as if it were written by one person.