Distributed tracing is the new structured logging
Structured logging
A best practice that is still recommended to this day is structured logging.
A structured log is a form of logging as key=value pairs, which makes it easy to parse logs and ship them into a log store for convenient querying and analysis.
logger.info({
request_time: 1000,
payload_size: 2000
})
A structured log entry like the one below will then be generated. Besides the fields we log explicitly, it also carries some extra metadata such as timestamp, hostname, deployment name, pod name, etc., depending on what we want to annotate it with.
{"level":30,"time":1531171082399,"pid":657,"hostname":"x300","request_time":1000,"payload_size":2000}
At a glance, structured logging looks a lot like defining a relational table or a schema.
But just because structured logs resemble a relational table doesn't mean you should stuff them into a relational database. That would be a disaster if your app logs a lot.
Up to this point, we can filter, query, and aggregate log data much like with SQL. Except for JOINing in data from the request context.
So far everything is still fine, right?
Enter microservices!!
Everything was fine until we switched to a microservices architecture. With microservices, we only have one piece of the picture: the context available within that particular microservice.
What if we want to log more? For example, you need to log the experiment flags that are part of that request's context.
Simple, right? We just pass that information along to the microservice that needs it, and solve the JOIN at the application-code level by logging the extra information we just passed through.
logger.info({
request_time: 1000,
payload_size: 2000,
experiment_flags: ['ABTEST_1', 'ABTEST_2']
})
But doing this is extremely expensive to maintain, error-prone, and does not scale.
Distributed tracing to the rescue
Distributed tracing rose to prominence around the time the microservices architecture had matured, when people understood microservices better along with their limitations (namely observability).
And this is why I say “distributed tracing is the new structured logging”. It will be an indispensable part of a microservices architecture and can replace (perhaps not entirely) structured logging.
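As a rough illustration (not tied to any specific setup), the same request metadata could be attached to a span using the OpenTelemetry API for Node.js; the tracer name and attribute keys below are placeholders:
const { trace } = require('@opentelemetry/api')

// assumes an OpenTelemetry SDK + exporter is already configured elsewhere
const tracer = trace.getTracer('my-service')

tracer.startActiveSpan('handle-request', (span) => {
  // the fields we used to log, now carried on a span that is
  // correlated across services via the propagated trace context
  span.setAttribute('request_time', 1000)
  span.setAttribute('payload_size', 2000)
  span.setAttribute('experiment_flags', ['ABTEST_1', 'ABTEST_2'])
  span.end()
})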
The state of Linux on desktop (2020)
I got fed up with macOS. While the new hardware (Apple Silicon) got amazing feedback, the OS itself lags far behind.
I have a Windows 10 desktop at home and heck, it was even more pleasant to use than macOS.
- As a typical user (web browsing, mail and office stuff), Windows 10 is very good.
- As a developer, it’s getting a lot better with WSL/Microsoft Terminal/etc…
I decided to give Linux another try. This time I picked Manjaro, an Arch-based distro, over an Ubuntu-based one, after hearing all kinds of praise from its users. But I also don't want to configure everything myself from scratch, hence Manjaro rather than plain Arch.
Manjaro is a user-friendly Linux distribution based on the independently developed Arch operating system. Within the Linux community, Arch itself is renowned for being an exceptionally fast, powerful, and lightweight distribution that provides access to the very latest cutting edge - and bleeding edge - software. However, Arch is also aimed at more experienced or technically-minded users. As such, it is generally considered to be beyond the reach of those who lack the technical expertise (or persistence) required to use it.
via wiki.manjaro.org
So it looks like it has the best of both worlds, right?
The test setup
I built a new mini PC recently. It's the ASRock DeskMini X300W, which uses an AMD processor. If you prefer Intel, you can choose the Intel version of the box.
I went with AMD because I like their Zen offering and I would love to support them.
I threw in a 6-core AMD 4650G processor, 32GB of 3200MHz Crucial memory, and a 512GB Samsung NVMe drive for the OS and other stuff, plus another 1TB 2.5-inch SSD for storage.
For OS, I went with Manjaro KDE variant because I like the look of it.
The experience
Almost everything works out of the box.
- Graphics work right. I don't have an Intel GPU so it's much easier for me, but I hear terrifying stories from the other side of the fence.
- WiFi works. Zero complaints here.
- Bluetooth is almost OK. Most stuff I throw at it works, except an old Xbox One controller of mine. The one that came with the Xbox One S works with one minor additional step (disabling ERTM; see the snippet after this list). I tested with 4 Bluetooth mice, 2 keyboards, 1 speaker and 2 Xbox controllers.
- Since I picked KDE, it's a bit troublesome to set up the i3 window manager. After reading several tutorials, I decided not to bother with it. Instead, I settled on the Krohnkite plugin for KWin. It works really well for my needs, given that my needs are pretty basic.
- I game once in a while and Manjaro even comes bundled with Steam (LOL). One might call that bloat but I'm OK with it. Storage is cheap these days.
- Developer experience is awesome. Linux is usually the first-class platform for open source projects. Everything just works. Docker is fast because no VM is required. It's the best platform for developers, hands down.
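For reference, the ERTM tweak mentioned above is roughly this (the config file name is arbitrary):
# make the setting persistent across reboots
echo "options bluetooth disable_ertm=1" | sudo tee /etc/modprobe.d/bluetooth-ertm.conf
# or flip it at runtime without rebooting
echo 1 | sudo tee /sys/module/bluetooth/parameters/disable_ertm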
Conclusion
So far, I'm loving it. It does everything I need and works with all the peripherals I have, with the exception of that old Xbox One controller (a wired connection still works though). I'm gonna stick with Manjaro for now. I don't see myself moving to Arch since my love for tweaking the system is long gone. I just want something that works, and Manjaro does work very well for me.
Using Cloudflare Warp on Linux
Cloudflare Warp does not currently support Linux. However, since it's just WireGuard underneath, we can still use it unofficially.
Install wgcf and wireguard-tools
- Get wgcf from its repo.
- Install wireguard-tools. I use Manjaro so I will use pacman for this: pacman -S wireguard-tools.
Generate Wireguard config
You can now use wgcf to register and then generate a WireGuard config.
wgcf register
wgcf generate
- The register command will create a file named wgcf-account.toml.
- The generate command will generate a WireGuard config file named wgcf-profile.conf.
Usage
Now, copy the generated profile over to /etc/wireguard and use the wg-quick utility to simplify setting up the WireGuard interface.
sudo cp wgcf-profile.conf /etc/wireguard
wg-quick up wgcf-profile
Verify it's working with wgcf trace or navigate to this page: https://www.cloudflare.com/cdn-cgi/trace. The output should have warp: on.
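If you want the tunnel to come up automatically at boot, wireguard-tools ships a wg-quick@ systemd template unit that you can enable (assuming a systemd-based distro):
sudo systemctl enable --now wg-quick@wgcf-profile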
TLDR: I wrote a SAX parser for Node.js. It's available on GitHub here: https://github.com/tuananh/sax-parser
I get asked about complete XML parsing with camaro from time to time and I haven't managed to find time to implement it yet.
Initially I thought it should be part of the camaro project, but now I think it makes more sense as a separate package.
The package is still in an alpha state and should not be used in production, but if you want to try it, it's available on npm as @tuananh/sax-parser.
Benchmark
The initial benchmark looks pretty good. I just extracted the benchmark script from the node-expat repo and added a few more contenders.
sax x 14,277 ops/sec ±0.73% (87 runs sampled)
@tuananh/sax-parser x 45,779 ops/sec ±0.85% (85 runs sampled)
node-xml x 4,335 ops/sec ±0.51% (86 runs sampled)
node-expat x 13,028 ops/sec ±0.39% (88 runs sampled)
ltx x 81,722 ops/sec ±0.73% (89 runs sampled)
libxmljs x 8,927 ops/sec ±1.02% (88 runs sampled)
Fastest is ltx
The ltx package is the fastest, winning by almost 2x (~1.8x) over the second fastest (@tuananh/sax-parser). However, ltx is not fully compliant with the XML spec. I still include ltx here for reference. If ltx works for you, use it.
module | ops/sec | native | XML compliant | stream |
---|---|---|---|---|
node-xml | 4,335 | ☐ | ✘ | ✘ |
libxmljs | 8,927 | ✘ | ✘ | ☐ |
node-expat | 13,028 | ✘ | ✘ | ✘ |
sax | 14,277 | ☐ | ✘ | ✘ |
@tuananh/sax-parser | 45,779 | ✘ | ✘ | ✘ |
ltx | 81,722 | ☐ | ☐ | ✘ |
API
The API looks simple enough and will feel familiar if you've used other SAX parsers. In fact, I took inspiration from them (sax and node-expat) and mostly copied their APIs to make the transition easier.
An example of using @tuananh/sax-parser to prettify XML would look like this:
const { readFileSync } = require('fs')
const SaxParser = require('@tuananh/sax-parser')
const parser = new SaxParser()
let depth = 0
parser.on('startElement', (name) => {
let str = ''
for (let i = 0; i < depth; ++i) str += ' ' // indentation
str += `<${name}>`
process.stdout.write(str + '\n')
depth++
})
parser.on('text', (text) => {
let str = ''
for (let i = 0; i < depth + 1; ++i) str += ' ' // indentation
str += text
process.stdout.write(str + '\n')
})
parser.on('endElement', (name) => {
depth--
let str = ''
for (let i = 0; i < depth; ++i) str += ' ' // indentation
str += `</${name}>`
process.stdout.write(str + '\n')
})
parser.on('startAttribute', (name, value) => {
// console.log('startAttribute', name, value)
})
parser.on('endAttribute', () => {
// console.log('endAttribute')
})
parser.on('cdata', (cdata) => {
let str = ''
for (let i = 0; i < depth + 1; ++i) str += ' ' // indentation
str += `<![CDATA[${cdata}]]>`
process.stdout.write(str)
process.stdout.write('\n')
})
parser.on('comment', (comment) => {
process.stdout.write(`<!--${comment}-->\n`)
})
parser.on('doctype', (doctype) => {
process.stdout.write(`<!DOCTYPE ${doctype}>\n`)
})
parser.on('startDocument', () => {
process.stdout.write(`<!--=== START ===-->\n`)
})
parser.on('endDocument', () => {
process.stdout.write(`<!--=== END ===-->`)
})
const xml = readFileSync(__dirname + '/../benchmark/test.xml', 'utf-8')
parser.parse(xml)
camaro v6
I recently discovered the piscina project. It's a very fast and convenient Node.js worker thread pool implementation.
Remember when worker_threads was first introduced? Worker startup was rather slow and using a pool was generally advised. However, there wasn't a good enough implementation until piscina.
Since v4, when I moved camaro to WebAssembly, its performance took a huge hit (about 3x) and I had been trying to find a way to fix this perf regression.
Well, piscina (worker_threads) seems to be the answer to that.
Take a look at this piscina example:
const path = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({
filename: path.resolve(__dirname, 'worker.js')
});
(async function() {
const result = await piscina.runTask({ a: 4, b: 6 });
console.log(result); // Prints 10
})();
and worker.js
module.exports = ({ a, b }) => {
return a + b;
};
Sure, it looks simple enough, so I wrote a quick script to wrap camaro with piscina. The performance improvement is sweet: it's about five times faster (ops/sec) and the CPU on my laptop is nicely saturated.
camaro v6: 1,395.6 ops/sec
fast-xml-parser: 153 ops/sec
xml2js: 47.6 ops/sec
xml-js: 51 ops/sec
More importantly, it scales nicely with CPU core count, which camaro v4 with WebAssembly doesn't.
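For the curious, a minimal sketch of such a wrapper might look like this (the file name and input shape are my own; camaro exposes a promise-based transform(xml, template) function):
// worker.js - runs inside each piscina worker thread
const { transform } = require('camaro')

// piscina passes the argument of runTask() straight to this function
module.exports = ({ xml, template }) => transform(xml, template)
Calling it from the main thread is then just await piscina.runTask({ xml, template }), with piscina pointed at worker.js as in the example above.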
In order to use this, I would have to drop support for Node version 11 and older, but a performance improvement of this magnitude should justify such a breaking change, right?
I published the first alpha build to npm if anyone wants to give it a try.
From Zsh to Fish on macOS
I recently gave fish shell another try and it didn't disappoint this time.
Support from various tools has improved tremendously and the ecosystem seems a lot more mature than the last time I tried.
It took me like 15-20 minutes to migrate everything over to fish, and it seems fish provides everything I need from zsh out of the box. Remind me why I need oh-my-zsh again?
Installation
Install via homebrew and set fish as the default shell.
brew install fish
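# note: chsh may refuse a shell not listed in /etc/shells; add fish's install path there first if needed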
chsh -s (which fish)
To go back to zsh, do chsh -s (which zsh).
Migration
fish's configuration is located at $HOME/.config/fish. The equivalent of .zshrc or .bashrc is config.fish at $HOME/.config/fish.
Sourcing
The source command works just like normal. By default, fish will source files in the $HOME/.config/fish/conf.d folder automatically, so you can put your aliases, functions, etc. there.
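For example, a (made-up) ~/.config/fish/conf.d/aliases.fish holding a few aliases would be picked up automatically by every new shell:
# ~/.config/fish/conf.d/aliases.fish -- any *.fish file in conf.d is sourced on startup
alias k='kubectl'
alias gs='git status'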
Fixing functions
A typical function in fish looks like this. I'll take my gi (gitignore.io) function as a simple example. It seems pretty straightforward and even more self-explanatory than in zsh.
function gi -d "gitignore.io cli for fish"
set -l params (echo $argv|tr ' ' ',')
curl -s https://www.gitignore.io/api/$params
end
Checking other stuff you use
If there's no fish support in a tool you use, there's bass, which adds support for bash utilities from the fish shell.
Example with nvm:
bass source ~/.nvm/nvm.sh --no-use ';' nvm use node # latest
However, using bass can make things quite slow in some cases. So if the tool you use does support fish, use its native functions.
Package manager
There are several package managers for fish. I haven't actually checked them all out. I just went with the first result I got (fisher) and it's working pretty well for the purpose.
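With fisher, installing a plugin is a one-liner; for example (plugin picked arbitrarily, and note that older fisher releases used fisher add instead of fisher install):
fisher install jethrokuan/z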
Disable welcome message
set fish_greeting
FAQs
The FAQ is very nice. Be sure to check it out.
kubectl run generators removed
Here is the related merged pull request.
To summarize: previously, if you needed to create a deployment, you only had to run
kubectl run nginx --image=nginx:alpine --port=80 --restart=Always
This feature was used a lot because even a minimal deployment YAML is quite long. Here's an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
Previously, to create a deployment and expose it, you only needed 2 simple commands:
kubectl run nginx --image=nginx:alpine --port=80 --restart=Always
kubectl expose deployment nginx --port=80 --type=LoadBalancer
Now, you need to remember the deployment YAML yourself and expose it with the kubectl expose command.
Usually, people don't remember the deployment format by heart and just use kubectl run with the -o yaml and --dry-run flags to get the output and edit from there.
This command is used extremely often, especially when taking the CKA (Certified Kubernetes Administrator) or CKAD (Certified Kubernetes Application Developer) exams.
kubectl create deployment nginx --image=nginx:alpine -o yaml --dry-run
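On newer kubectl versions, --dry-run requires an explicit mode, so the equivalent would be something along these lines (output file name is just an example):
kubectl create deployment nginx --image=nginx:alpine -o yaml --dry-run=client > nginx-deployment.yaml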
So if you intend to take the CKA/CKAD, try to memorize the format of the basic resource types :)
Using Synology NFS as external storage with Kubernetes
For home usage, I highly recommend microk8s. It can be installed easily with snap. I'm not sure what the deal is with snap for Ubuntu desktop users, but my only experience with it is installing microk8s, and so far it works well for the purpose.
Initially, I went with Docker Swarm because it's so easy to set up, but Docker Swarm feels like a hack. Also, it seems Swarm is already dead in the water. And since I've been using Kubernetes at work for over 4 years, I finally settled on microk8s. The other alternative, k3s, didn't work quite as expected either, but that's a story for another post.
Setup a simple Kubernetes cluster
Setting up Kubernetes is as simple as installing microk8s on each host, plus one more command to join them together. The process is very similar to Docker Swarm. Follow the guides on installing and multi-node setup on the official microk8s website and you should be good to go.
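Roughly, it boils down to something like this (check the microk8s docs for the exact commands for your version):
# on every node
sudo snap install microk8s --classic
# on the first node: prints a join command containing a token
microk8s add-node
# on each additional node: paste the join command printed above
microk8s join <first-node-ip>:25000/<token>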
Now, onto storage. I would like to have external storage so that it's easy to back up my data. I already have my Synology set up and it comes with NFS, so to keep my setup simple, I'm going to use the Synology for that. I know it's not the most secure thing, but for a homelab, this will do.
Please note that most Kubernetes tutorials get outdated quickly. In this setup, I will be using Kubernetes v1.18.
Step 0: Enable Synology NFS
Enable NFS from Control Panel -> File Services.
Enable access for every node in the cluster in the Shared Folder -> Edit -> NFS Permissions settings.
There’re few things to note here
- Because every nodes need to be able to mount the share folder as
root
so you need to selectNo mapping
in theSquash
dropdown ofNFS Permissions
. - Check the
Allow connections from non-previleged ports
also.
With Helm
The nfs-client external storage provisioner is provided as a chart over at the kubernetes incubator. With Helm, installing is as easy as
helm install stable/nfs-client-provisioner --set nfs.server=<SYNOLOGY_IP> --set nfs.path=/example/path
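Note that with Helm 3 a release name is required, so the equivalent would be something like:
helm install nfs-client stable/nfs-client-provisioner --set nfs.server=<SYNOLOGY_IP> --set nfs.path=/example/path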
Without Helm
Step 1: Setup NFS client
You need to install nfs-common on every node.
sudo apt install nfs-common -y
Step 2: Deploy NFS provisioner
Replace SYNOLOGY_IP with your Synology IP address and VOLUME_PATH with the NFS mount point on your Synology.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <SYNOLOGY_IP>
            - name: NFS_PATH
              value: <VOLUME_PATH>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <SYNOLOGY_IP>
            path: <VOLUME_PATH>
Setup RBAC and storage class
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's PROVISIONER_NAME env
parameters:
  archiveOnDelete: "false"
allowVolumeExpansion: true
reclaimPolicy: Delete
Step 3: Set NFS as the new default storage class
Set managed-nfs-storage as the new default storage class instead of your cluster's current default. The example below assumes the current default is rook-ceph-block; replace that name with whatever your cluster actually uses.
kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Testing
We will create a simple pod and PVC to test. Create test-pod.yaml and test-claim.yaml, which look like this, in a test folder.
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
and test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client" # nfs-client is the default value of the helm chart, change accordingly
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Then do kubectl create -f test/. You should see the PVC bound and the pod completed after a while. Browse the NFS share; if you see a folder created with a SUCCESS file inside, everything is working as expected.
Debugging Kubernetes: Unable to connect to the server: EOF
We had an EC2 instance retirement notice email from AWS. It was our Kubernetes master node. I thought to myself: we can simply just terminate and launch a new instance. I’ve done it many times. It’s no big deal.
However, this time, when our infra engineer did that, we were greeted with this error when trying to access our cluster.
Unable to connect to the server: EOF
All the apps are still fine, thanks to Kubernetes's design, so we have all the time we need to fix this.
So kubectl is unable to connect to the Kubernetes API. The API endpoint is a CNAME to the API load balancer in Route53, so that's where we looked first.
Route53 records are wrong
OK, so there are many problems that can cause this error. One of the first things I noticed is that the Route53 DNS record for etcd was not correct: it still pointed to the old master's IP address. Could it be that the init script was somehow unable to update it?
So our first attempt at a fix was to manually update the DNS record for etcd to the new instance's IP address. Nope, the error was still the same.
ELB marks master node as OutOfService
We looked a little bit more into the ELB for the API server. The instance was marked OutOfService. I thought this was it; it makes sense. But what could cause the API server to be down this time? We've done this process many times before.
We SSHed into our master instance and issued docker ps -a. There was nothing. Zero containers whatsoever.
We checked systemctl and there it was: cloud-final.service had failed. We checked the logs with journalctl -u cloud-final.service.
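For reference, those checks boil down to something like:
systemctl --failed                     # list failed units; cloud-final.service showed up here
journalctl -u cloud-final.service -e   # jump to the end of that unit's logs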
We noticed from the logs that many required packages, like ebtables, etc., were missing when the nodeup script ran.
Manual apt update
So if we could fix that issue, it should be OK, right? We issued apt update manually and saw this:
E: Release file for http://cloudfront.debian.net/debian/dists/jessie-backports/InRelease is expired (invalid since ...). Updates for this repository will not be applied.
OK, this still makes sense. Our cluster is old and the release file has expired. If we manually update it, it should work again, right? We did the apt update with the Check-Valid-Until option set to false.
apt-get -o Acquire::Check-Valid-Until=false update
Restart cloud-final service
Restart cloud-final.service or manually run the nodeup script again with
/var/cache/kubernetes-install/nodeup --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8
docker ps -a at this point should show all the containers running again. Wait a little while (~30 seconds) and kubectl should be able to communicate with the API server again.
Final
While your problem may not be exactly the same as this one, I thought I would share my debugging experience in case it helps someone out there.
In our case, the problem was fixed with just 2 commands, but the actual debugging process took more than an hour.
Tips for first time rack buyer
A few weeks ago, I knew nothing about server racks. I frequented /r/homelab a lot in order to learn how to build one for myself at home. These are the lessons I learned while building my very first homelab rack.
Choose the right size
You need to care about 2 things in a rack's size: height and depth. The width is usually the standard 19 inches.
- Rack height is measured in U (1.75 inches or 44.45mm): the smallest height of a rack-mountable unit.
- Rack depth is very important too. It is usually available in 600, 800 or 1000mm. Don't buy anything shallower than 800mm unless you plan to use the rack mostly for network devices; otherwise, your rackmount server options are very limited. If you must go with a 600mm-deep rack, you can choose some half-depth servers like the ProLiant DL20, Dell R220ii, some Supermicro servers, or build one yourself with a desktop rackmount case.
Carefully plan what kind of equipment you want to use to get the correct size. A typical rack usually has these devices:
- 1 or more patch/brush panel for cable management (1U each)
- 1 router (1U)
- 1 or 2 switches. (1U each)
- servers: this depends on how much computing power you need. Also servers come in various sizes (1U/2U/3U/4U) as well.
- NAS maybe (1-2U)
- PSU: usually put at the bottom (1U or 2U)
- PDU: some people put it at the front, some put it at the back. (1U)
Things to look for when selecting a rack
- Rack type: open frame, enclosed, or wall-mounted.
- Wheels or no wheels, that is the question. I recommend going with wheels for home usage.
- If you choose wheels, get a rack that has wheel blockers.
- Can the rack's side panels be taken off? If so, it will make equipment installation a lot easier.
Cable management
In my rack, the top U is a patch panel and the third one is a brush panel. The purpose of these panels is pretty easy to understand; I just didn't know the terms to search for at first when I wanted to buy one.
Here are some accessories that help with cable management:
- Zip tie
- Velcro
- Cable combs
- Patch panel
- Brush panel
- Multi-colored cables: e.g. green for switch-to-patch-panel links, orange for guest VLAN, etc.
Some notes on the patch panel: there is a punch-down type and there is a pass-through (keystone) type. You probably want the keystone one as it's easier to maintain.
If you cannot find cable combs, I've seen people use zip ties to make DIY cable combs. It's pretty cool.
Other tips
Numbering the units on the rack, if it doesn't have numbers already, will help a lot when installing equipment.
Most racks I saw on /r/homelab have this, but the cheap rack I got doesn't. I just had to be creative: run label maker tape along the rack's height and hand-write the numbers on it.
If you know something that isn't on this list, please tweet me at @tuananh_org. I would love to learn about your homelab hacks.