How to set up a home VPN with a Synology NAS
Currently, I'm working on building my homelab. It's still very much a work in progress, but everything is coming along nicely.
I plan to host lots of stuff in my homelab and want to be able to access it while I'm not at home. I don't feel comfortable exposing it all to the Internet, so VPN to the rescue.
The setup is straightforward. The details differ depending on your lab equipment, but the steps are always the same:
- Set up a VPN server in your homelab.
- Set up port forwarding on your router.
- [Optional] If your IP address is dynamic, set up dynamic DNS so you can reach the VPN server by domain name.
The first step is rather easy. I already have a Synology NAS, and they have a built-in VPN Server app ready to install from their package store. It's just one click away. Install it, enable the OpenVPN protocol and it's done. Click export configuration afterward.
The UniFi Security Gateway also has a built-in VPN server, but since the NAS is more powerful, I figured I should offload the work to the NAS.
The second step can be done via your router. In my case, I use UniFi hardware, so I'm gonna do it via the UniFi Controller under Routing & Firewall.
Optionally, if your IP address is dynamic, you may want to set up dynamic DNS (e.g. myvpn.example.com). I already covered this in a previous post using Docker and Cloudflare.
Now, edit the exported configuration and replace the server IP address with your static IP address or the dynamic DNS name above.
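For reference, the line you're changing in the exported .ovpn file looks roughly like this; a sketch assuming the default OpenVPN port 1194 and the hypothetical DDNS name from above:

```
remote myvpn.example.com 1194
```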
Try connecting with an OpenVPN client and, if all is good, you should be connected.
I found that Ubiquiti has an excellent troubleshooting guide available on their website.
Some common problems are:
Double NAT (local): you have two routers on your local network. In that case, you either have to remove one router (or switch it to AP mode) or set up port forwarding on both.
NATed public IP address: if the public IP address your router reports and the one you see via, say, Google don't match, that's probably it; your public IP is itself behind your ISP's NAT.
If you go through all that and it's still not working, it probably has something to do with your ISP.
How to adopt a UniFi Security Gateway into an existing network
I'm by no means a network expert. This is just my personal experience setting up my USG in my existing network.
In my case, I was using an Orbi RBK as my router and access point. With the USG in place, I use the Orbi in access point mode, and the USG replaces the Orbi as my router.
My current network uses the 10.0.0.0/8 IP range. By default, the USG uses the 192.168.1.1 IP, which means I won't be able to adopt it just by plugging it into my current network. So I need to change its IP address first.
You're gonna need a PC/laptop with an Ethernet port in order to connect to the USG and change its IP address. Luckily, I have a desktop PC with me.
So I installed the UniFi Controller and connected the USG to the PC's Ethernet port. I set the Ethernet IP of the desktop to something in the 192.168.1.x range, like 192.168.1.6, with subnet mask 255.255.255.0.
Once that's done, I opened up the UniFi Controller and was able to see and adopt the USG. The default username and password are both ubnt, by the way.
After adopting the device, if you want to keep using your old IP and subnet, you will have to go to Networks and edit the LAN network to use your desired IP and subnet.
Now, in order to replace the old router, you will have to configure the PPPoE info as well. From Networks, edit the WAN network and enter your Internet username & password there.
Now, I'm not sure what you need to do to replace a router where you live, but here in Vietnam I had to call the ISP and ask them to clear the MAC address cache of the old router as well.
After that, I just had to connect the Internet cable to the Internet port of the USG and connect a LAN port of the USG to the Internet port of the Orbi. And that's it: the Internet was back online.
Dynamic DNS with Cloudflare
I use the oznu/docker-cloudflare-ddns project, where the author implements everything in bash and jq. There are a bunch of projects that do DDNS with Cloudflare, but I chose this one because of that unique approach.
To use this, you just have to create an API token with Cloudflare that has these permissions:
- Zone - Zone Settings - Read
- Zone - Zone - Read
- Zone - DNS - Edit
Also, set Zone Resources to All zones. Then run the Docker container:
```bash
docker run -d \
  -e API_KEY=<cloudflare-token> \
  -e ZONE=<example.com> \
  -e SUBDOMAIN=<subdomain> \
  --restart=unless-stopped \
  oznu/cloudflare-ddns
```
Traffic from a #1 post on Hacker News
My last post recently hit #1 on Hacker News. This surprised me a little because when I first posted the link, there was little interest in it (~15 points). I shook it off and went on with my day.
A few days later, while browsing Hacker News, I saw my blog post featured right there on the home page through someone else's submission. The post stayed at #1 for ~12 hours before drifting off to page 2.
It also drew some interest on Twitter, peaking when Jeff Barr tweeted about it.
Here are some numbers for you stats junkies, just so you know what kind of traffic you can expect from Hacker News.
The post attracted 15k unique users on the day of submission, 3k on the second day, and traffic was back to normal by the fourth day.
Traffic stats on Cloudflare seem a bit inflated, with unique visitors at 36k. Maybe they count bots as well.
Since my site is static and hosted with Cloudflare Workers Sites, the load speed was pretty much the same throughout the day. Big fan of Cloudflare and their products.
The story behind my talk: Cloud Cost Optimization at Scale: How we use Kubernetes and spot instances to reduce EC2 billing up to 80%
This is the story behind my talk: “Cloud Cost Optimization at Scale: How we use Kubernetes and spot instances to reduce EC2 billing up to 80%”.
Now, before I tell this story, I will admit up front that the actual number is lower than 80%.
The story began in mid-2015 at one of my ex-employers. It was a .NET Framework shop that, at the time, struggled to scale in both performance and cost. I was hired as a developer to work on API integration, but I couldn't help noticing how much money was being sunk into the AWS EC2 bill. Bear in mind I'm not an ops guy by any means, but you know how startups are: one usually has to wear many hats.
At first, while the AWS credits were still plentiful, we didn't have to worry much about it. But when they ran low, it clearly became one of the biggest pain points of our startup.
The situation at the time was like this:
- There were 2 teams: the core team using .NET Framework and the API team using Node.js.
- The core team mostly used Windows-based instances and the API team used Linux-based ones.
- The core team used a lot more instances than the API team.
- Most EC2 instances were Windows-based. All were on-demand instances. No reserved instances whatsoever 😨.
- A few were Linux-based instances where we installed other Linux applications, but there weren't many of them.
- On-demand Windows-based instances cost about 30% more than on-demand Linux instances.
- We used RDS for the database.
- We didn't have a dedicated ops person like you'd expect these days. Whenever we needed something set up, we paged someone from the India team to create instances for us and then set them up ourselves.
Now, the biggest costs were obviously RDS and EC2. If I had been assigned to optimize this, I would definitely have looked at those two first. But I wasn't working on it at the time; I was hired to do other things.
At that time, I was using Deis (a container management solution, later acquired by Microsoft) for my projects. I experimented briefly with Flynn but ended up not using it.
In 2016, I heard of a startup called Spotinst. I found several useful posts on their blog about EC2 cost optimization and found their whole startup idea fascinating. For those of you not working with infrastructure, the idea of Spotinst is to use spot instances to reduce your infrastructure cost, and they take a cut of it.
Spotinst automates cloud infrastructure to improve performance, reduce complexity and optimize costs.
Spot instances are a very cheap EC2 offering from AWS (think 70-90% cheaper vs on-demand) but come with a small problem: they can go away at any time with just 2 minutes' notice.
I thought if we could design our workloads to be fault tolerant and shut down gracefully, spot instances would make perfect sense. Anything like a queue-and-worker workload would fit as well. Web apps, on the other hand, would be a little more difficult but totally doable.
During 2016, I also learnt about this super duper cool project called Kubernetes. I believe they were at version 1.2 at the time.
Kubernetes comes with the promise of many awesome features, but what caught my eye was the "self-healing" one. A perfect complement to spot instances, I thought.
And so I dug a little deeper to see if I could set up a cluster on spot instances, and yes, it's supported. Awesome!! 🥰
Now, the only problem left was that our core team still needed Windows, and Kubernetes didn't support Windows at the time. So my whole infrastructure revamp idea was useless, or so I thought.
In mid-2016, I learnt about the .NET Core project. It was around the 1.0 release at the time, and one of its features is being cross-platform. I thought to myself: I can still salvage this.
Now, please note that I'm a Node.js guy and I don't know much about .NET aside from my university thesis. So I asked the lead from the core team to look into it, and while there were many quirks, it was actually not very difficult to migrate our core to .NET Core. It would be time consuming, but very much doable. I knew .NET Core was going to be the future, so eventually we would need to migrate to it anyway.
Tests + Migration
While the core team did that, I set up a test cluster with spot instances and learnt Kubernetes. I optimized the cluster setup a little and migrated all my projects over by the end of 2016. The whole process was quite fast because all my apps (Node.js) were already Dockerized and had graceful shutdown implemented; I just needed to learn the ins and outs of Kubernetes.
Some of the changes I made for the production cluster were:
- Set up an instance-termination daemon to notify the containers and trigger graceful shutdown for all the apps (see the sketch after this list).
- Set up multiple instance groups of various sizes and availability zones, mixing spot instances with reserved instances. This protects against price spikes in any one spot instance group and minimizes the chance of all spot instances going down at the same time.
- Calculated and provisioned a slightly bigger fleet than we actually needed, so that when instances were shut off there was no service degradation. Because spot instances are so cheap, we could do this without worrying much about the cost.
- Watched for scheduling failures, and scaled up the reserved instance groups when they happened.
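For illustration, here's a minimal sketch of what such a termination watcher can look like. This is not our actual daemon; it assumes Node.js on the instance, kubectl available on the node, and a Kubernetes node name that matches the machine hostname:

```js
// watcher.js - minimal sketch of a spot termination watcher (hypothetical, not our actual daemon).
// AWS exposes a metadata endpoint that returns 404 until a termination is scheduled,
// then returns the termination timestamp, giving roughly 2 minutes of notice.
const http = require('http')
const os = require('os')
const { execSync } = require('child_process')

const METADATA_URL = 'http://169.254.169.254/latest/meta-data/spot/termination-time'

const timer = setInterval(() => {
  http.get(METADATA_URL, res => {
    res.resume() // discard the body, we only care about the status code
    if (res.statusCode === 200) {
      clearInterval(timer)
      // Assumes the node name matches the hostname; draining evicts the pods so they
      // shut down gracefully and get rescheduled elsewhere before the instance dies.
      execSync(`kubectl drain ${os.hostname()} --ignore-daemonsets --force`, { stdio: 'inherit' })
    }
  }).on('error', () => { /* metadata service hiccup, try again next tick */ })
}, 5000)
```

There are ready-made daemons that do this, but the idea is the same: catch the 2-minute warning, drain the node, and let Kubernetes reschedule the pods.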
At this point, our API apps' EC2 cost was already very manageable, and we were waiting for the core team to migrate over. We did that in 2017. The overall EC2 cost saving was around 60-70%, because we needed to mix in reserved instances and provision a little higher than what we actually needed. We were very happy with the result.
What we did back then is essentially what Spotinst does, but at a much smaller scale. And it's doable for smaller startups with only one ops guy.
And that is my story behind the talk: “Cloud Cost Optimization at Scale: How we use Kubernetes and spot instances to reduce EC2 billing up to 80%”.
Update: #1 on HackerNews. Yay!
Thoughts on Workers KV
Infrequent write / frequent read
I tried to build a todobackend.com backend with Cloudflare Workers and Workers KV. However, the spec runner kept failing, inconsistently.
Meaning the specs would pass on one run and fail on the next. Manual tests usually don't have this problem. This told me that Workers KV is not strongly consistent, or that data replication is slow.
Turns out, it's mentioned right there in the Workers KV docs; emphasis is mine.
Workers KV is generally good for use-cases where you need to write relatively infrequently, but read quickly and frequently. It is optimized for these high-read applications, only reaching its full performance when data is being frequently read. Very infrequently read values are stored centrally, while more popular values are maintained in all of our data centers around the world.
KV achieves this performance by being eventually-consistent. New key-value pairs are immediately available everywhere, but value changes may take up to 60 seconds to propagate. Workers KV isn’t ideal for situations where you need support for atomic operations or where values must be read and written in a single transaction.
With this, the todobackend.com spec runner would never pass reliably.
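To make the failure mode concrete, here's a minimal sketch of the kind of handler involved. It's hypothetical, not the actual todobackend code, and assumes a KV namespace bound to the Worker as TODOS:

```js
// Hypothetical Worker; assumes a KV namespace bound as TODOS.
addEventListener('fetch', event => {
  event.respondWith(handle(event.request))
})

async function handle(request) {
  const id = new URL(request.url).pathname.slice(1)

  if (request.method === 'PUT') {
    // Overwriting an existing key: per the docs above, the new value can take
    // up to ~60 seconds to propagate to other data centers.
    await TODOS.put(id, await request.text())
    return new Response('ok')
  }

  // A read served from a different data center right after that PUT may still
  // return the old value, which is exactly why the spec runner is flaky.
  const value = await TODOS.get(id)
  return new Response(value || 'null', { status: value ? 200 : 404 })
}
```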
The Workers KV API is quite simple for now and I hope they keep it that way, or maybe make it resemble the Redis API with a few more data types like lists / sorted sets. That would be lovely.
Batch load is not yet supported
It's on the roadmap but not yet available. So the result of .list() has to be mapped with a Promise.all, like this:

```js
await Promise.all(keys.map(key => myKvStore.get(key.name)))
```
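Put together, a batch read ends up looking something like this hypothetical helper (myKvStore is assumed to be the KV namespace binding, and .list() pagination is ignored for brevity):

```js
// Hypothetical helper; myKvStore is assumed to be a KV namespace binding.
// Note: list() is paginated, so a real version would follow the cursor.
async function getAll() {
  const { keys } = await myKvStore.list()
  return Promise.all(keys.map(key => myKvStore.get(key.name)))
}
```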
With these limitations, Workers KV is more suitable for build assets or anything else that doesn't need close-to-real-time propagation. Keep that in mind when you want to build something with Workers KV, and watch this space, as Cloudflare is moving pretty fast.
Some other limits can be found here in the Cloudflare Workers docs.
Another experiment with Cloudflare Workers. I haven't used Workers KV here though.
reader is a service that mimics the browser's reader mode and lets users share the reader-mode view on the web. It's still super buggy right now because the lib I use is quite abandoned at the moment. I just wanted to whip out something that works first.
Some things I learnt from reading the Cloudflare Workers docs while doing this:
- HTMLRewriter is delightful even though I didn’t get to use it (much) in this small project.
- Workers KV is another nice bit from them. With it, you could probably build a complete web app.
Next, I'm going to look at Workers KV and HTMLRewriter more, in an attempt to build something that uses both of those features.
I've been meaning to try Cloudflare Workers with my blog. Given that it's a static website, it should be straightforward to do.
They (Cloudflare) make it incredibly easy to migrate. Their tutorial works just fine, with a minor exception regarding the DNS setup. The whole process took like 5 minutes overall.
I just had to do the additional step of setting up an A record for my domain pointing to 192.0.2.1 so that it resolves to Cloudflare Workers.
The site is now up at https://tuananh.net. Eventually, if all is well, I think I'm gonna migrate my site, currently on RamNode, over to them.
There’s this idea that, if you do great work at your job, people will (or should!) automatically recognize that work and reward you for it with promotions / increased pay. In practice, it’s often more complicated than that – some kinds of important work are more visible/memorable than others. It’s frustrating to have done something really important and later realize that you didn’t get rewarded for it just because the people making the decision didn’t understand or remember what you did. So I want to talk about a tactic that I and lots of people I work with have used!
I’ve been doing this for years and it really works. Highly recommend you give this post a read.
Debugging with git bisect
Suppose I have this project with 5 commits. You can clone it from here.
Say there's a regression bug in the master branch, but a lot has been added to master after the feature was first introduced. How would I go about debugging this? Which commit breaks it?
Usually, we would go through the commits manually and see which one could possibly have done this, but if the project is large and active, that's quite a troublesome process.
Luckily, we have git bisect for that.
- Go to the project and issue git bisect start.
- Mark the bad commit with git bisect bad <commit-id>. You can omit the commit id if you're already on it.
- Mark the good commit with git bisect good <commit-id>.
- Add a test for the regression bug. In this case, it's the test.js below.

I'm gonna go ahead and add a failing test case for the regression bug I'm having. You may ask why I didn't add tests in the first place? Well, this is just an example, so it has zero tests.
In a real scenario, you can have extensive test cases and still miss an edge case. In that situation, you add the failing test for that edge case here.

```js
// test.js
const assert = require('assert')
const add = require('./add')

assert(add(1, 2) === 3, 'one plus two should equal to three')
```
- Run git bisect run <test-command>. In this case, that's git bisect run node test.js.
- Run git bisect log and see the result. It would look like this:

```
# bad: [addb180af061bbfbad298cd6a9ad2110df0f873e] feat: add multiply
git bisect bad addb180af061bbfbad298cd6a9ad2110df0f873e
# good: [7688391b1a9b133bef92198e376c9f5979260ade] feat: add add() function
git bisect good 7688391b1a9b133bef92198e376c9f5979260ade
# bad: [d504f94f1d71c93deb9d9bbdf87bfe333bbecff6] chore: add readme
git bisect bad d504f94f1d71c93deb9d9bbdf87bfe333bbecff6
# bad: [d516aaf29331953382a8558f013b683427d7a390] feat: add subtract() function
git bisect bad d516aaf29331953382a8558f013b683427d7a390
# first bad commit: [d516aaf29331953382a8558f013b683427d7a390] feat: add subtract() function
```

There you can see the first commit that makes the test fail: [d516aaf29331953382a8558f013b683427d7a390] feat: add subtract() function.
- Run git bisect reset when you're done.