Tuan Anh

container nerd. k8s || GTFO

Setting up traefik as Ingress controller for Kubernetes

Just my own experience setting up traefik as an Ingress controller on Kubernetes.

Install helm

brew install kubernetes-helm

Init helm

helm init

Install traefik chart with helm

Download the default values.yaml file and edit it to suit your needs. Then run the command below.

I want to install it into the kube-system namespace, hence the --namespace kube-system flag.

helm install --name my-traefik --namespace kube-system --values values.yaml stable/traefik

If you make a mistake and want to remove it

helm delete --purge my-traefik

Use --purge if you want to reuse the release name later. Otherwise, helm will complain that release xxx already exists.

Update your app

You will have to create an Ingress, a service and a deployment.

  • Ingress is a set of rules for routing traffic from a domain to a service in your cluster

  • Service exposes your pods inside the cluster, finding them by label selector (see example)

  • Deployment is, well… your app deployment.
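Roughly, the three objects fit together like this (a minimal sketch for the old extensions/v1beta1 APIs current at the time of writing — the name stilton, the errm/cheese image, and the port are placeholders borrowed from traefik's cheese example):

```yaml
# Deployment: runs the app pods, labelled so the Service can find them
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stilton
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: stilton            # the Service selects pods by this label
    spec:
      containers:
      - name: stilton
        image: errm/cheese:stilton
        ports:
        - containerPort: 80
---
# Service: gives the pods a stable name inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: stilton
spec:
  selector:
    app: stilton                # matches the Deployment's pod labels
  ports:
  - port: 80
---
# Ingress: tells traefik to route the domain to the Service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: stilton
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: stilton.example.com   # the domain used later in this post
    http:
      paths:
      - backend:
          serviceName: stilton
          servicePort: 80
```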

It will be a whole lot easier if you look at this cheese.yaml file.

Download the file and run kubectl create -f cheese.yaml to deploy it to your cluster.

In the example above, I use the domain stilton.example.com.

Update DNS

Create a CNAME record of stilton.example.com to point to your traefik’s ELB public address.
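To find that address, you can read it off the Service that the chart created (a sketch — the exact Service name depends on your chart version and release name; here it assumes my-traefik from the install step above):

```shell
# Public hostname of the ELB fronting traefik
kubectl get svc my-traefik --namespace kube-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Once DNS propagates, confirm the CNAME resolves
dig +short stilton.example.com
```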


If you have the traefik dashboard enabled, you should see the new ingress and the pods it distributes load to in the web UI.

PS: You don’t necessarily need helm. It just makes installing stuff a lot easier.

n-api: add support for abi stable module API

Add support for abi stable module API (N-API) as “Experimental feature”. The goal of this API is to provide a stable Node API for native module developers. N-API aims to provide ABI compatibility guarantees across different Node versions and also across different Node VMs - allowing N-API enabled native modules to just work across different versions and flavors of Node.js without recompilation.

Great news for native module developers :)

link to the original post

codis - proxy based redis cluster

Proxy based Redis cluster solution supporting pipeline and scaling dynamically

Seems more feature-rich than twemproxy or dynomite; however, the documentation seems poor and the community is smaller.

Just something to keep in mind.

link to the original post

Redis as a JSON store

A Redis module that provides native JSON capabilities – get it from the GitHub repository or read the docs online.

link to the original post

Spot instances best practices


  • Build Price-Aware Applications

  • Check the Price History: In general, picking older generations of instances will result in lower net prices and fewer interruptions.

  • Use Multiple Capacity Pools: By having the ability to run across multiple pools, you reduce your application’s sensitivity to price spikes that affect a pool or two (in general, there is very little correlation between prices in different capacity pools). For example, if you run in five different pools your price swings and interruptions can be cut by 80%.
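The 80% figure is just the arithmetic of spreading across independent pools: if a price spike hits one pool at a time, a fleet spread over five pools only has a fifth of its capacity exposed (a back-of-envelope sketch, not anything from the AWS post itself):

```shell
# With N independent capacity pools and a spike hitting one pool,
# the exposed fraction of the fleet is 1/N.
pools=5
exposed=$((100 / pools))        # percent of capacity affected
reduction=$((100 - exposed))    # percent reduction vs. a single pool
echo "exposed: ${exposed}%, reduction: ${reduction}%"
```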

link to the original post

Debugging why k8s autoscaler wouldn't scale down

Symptom: the autoscaler works (it can scale up) but for some reason, it doesn’t scale down after the load goes away.

I spent some time debugging, and it turns out it’s not really a bug per se. More a case of bad-luck pod placement on my Kubernetes cluster.

I first added --v=4 to get more verbose logging in cluster-autoscaler and watched kubectl logs -f cluster-autoscaler-xxx. I noticed this line in the logs:

<node-name> cannot be removed: non-deamons set, non-mirrored, kube-system pod present: tiller-deploy-aydsfy

This node is in fact under-utilized, but there is a non-DaemonSet, non-mirrored kube-system pod present on it, which is why it can’t be removed.

tiller-deploy is a deployment that comes with Helm package manager.

So it seems I just have to migrate the pod to another node and it’s gonna be fine.
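One way to do that (a sketch — the node name is a placeholder) is to cordon the node and drain it so the tiller-deploy Deployment reschedules its pod elsewhere:

```shell
# Stop new pods landing on the node
kubectl cordon <node-name>

# Evict its pods; Deployments (including tiller-deploy) reschedule
# them on other nodes. DaemonSet pods are expected and can be ignored.
kubectl drain <node-name> --ignore-daemonsets
```

After the drain, the node is empty of blocking pods and the autoscaler should be able to remove it.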

You can also read more on how cluster-autoscaler works on GitHub.

smaz.js - a Node.js module binding for smaz

smaz.js is a binding for smaz (antirez’s short-string compression library) for the V8 JavaScript engine.

I’ve just released it on GitHub. My very first attempt at creating a native library binding for Node.js.

link to the original post

Setting up fluentd log forwarding from Kubernetes to AWS Cloudwatch Logs

A Fluentd Docker image to send Kubernetes logs to CloudWatch.

Very easy to set up. A good option for centralized logging if all of your infrastructure is already in AWS.

echo -n "accesskeyhere" > aws_access_key
echo -n "secretkeyhere" > aws_secret_key
kubectl create secret --namespace=kube-system generic fluentd-secrets --from-file=aws_access_key --from-file=aws_secret_key
kubectl apply -f fluentd-cloudwatch-daemonset.yaml

On a side note, I think I will need to move the fluentd configuration file into a secret as well, as I just want to collect logs from certain namespaces/filters.
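Restricting collection to one namespace could look something like this in the fluentd config (a sketch using fluentd’s grep filter; the exact record-key syntax depends on the fluentd version, and it assumes the kubernetes_metadata filter has already added namespace_name to each record — my-namespace is a placeholder):

```
<filter kubernetes.**>
  @type grep
  <regexp>
    key $.kubernetes.namespace_name
    pattern ^my-namespace$
  </regexp>
</filter>
```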

link to the original post

Kubernetes spot termination notice handler

A DaemonSet to be run on each node instance, polling for the termination notice.

The DaemonSet polls every 5 seconds, which gives you approximately 2 minutes to drain the spot node and migrate pods to another node.
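The polling itself boils down to checking the EC2 instance metadata endpoint, which returns 404 until AWS schedules the instance for termination. Conceptually (a sketch, not the handler’s actual code — it assumes the node name matches the hostname):

```shell
# Poll the spot termination notice every 5 seconds; the endpoint
# returns the termination timestamp once a notice has been issued.
while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/termination-time; then
    echo "Termination notice received - draining node"
    kubectl drain "$(hostname)" --ignore-daemonsets --force
    break
  fi
  sleep 5
done
```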

link to the original post

GopherCon 2016: Kelsey Hightower - Building a custom Kubernetes scheduler

How to build a custom Kubernetes scheduler by Mr. Kubernetes

link to the original post