This is my fork of koa-compress that adds support for brotli compression.
Available on npm.
Great idea. I never thought of Docker containers this way because I totally forgot that I can always mount the config into the container.
This totally changes my dev environment setup.
# -v $HOME/.irssi mounts the irssi config into the container;
# --read-only is a cool new feature in 1.5
$ docker run -it \
  -v /etc/localtime:/etc/localtime \
  -v $HOME/.irssi:/home/user/.irssi \
  --read-only \
  --name irssi \
  jess/irssi
The benefit of having a highly competent boss is easily the largest positive influence on a typical worker’s level of job satisfaction
Kubernetes-hosted application checklist (part 2)
This part is about how to define constraints for the scheduler on where/how you want your app containers to be deployed on the k8s cluster.
Node selector
The simplest form of constraint for pod placement: you attach labels to nodes and specify nodeSelector in your pod configuration.
When to use
- you want to deploy a redis instance to a memory-optimized (R3, R4) instance group, for example.
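A minimal sketch of that setup (the label key `instance-type`, its value, and the node/pod names are all made up for illustration):

```yaml
# label the memory-optimized nodes once, e.g.:
#   kubectl label nodes my-r4-node instance-type=memory-optimized
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
    - name: redis
      image: redis:4
  # only schedule onto nodes carrying this label
  nodeSelector:
    instance-type: memory-optimized
```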
Affinity and anti-affinity
Affinity and anti-affinity are like nodeSelector but much more advanced, with more types of constraints you can apply to the default scheduler:
- the language is more expressive (not just “AND of exact match”)
- you can indicate that the rule is “soft”/”preference” rather than a hard requirement, so if the scheduler can’t satisfy it, the pod will still be scheduled
- you can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located
requiredDuringSchedulingIgnoredDuringExecution is the hard type and preferredDuringSchedulingIgnoredDuringExecution is the soft/preference type.
In short, affinity defines rules/preferences for where a pod should be deployed, and anti-affinity the opposite.
When to use
- affinity and anti-affinity should be used only when necessary; they have the side effect of slowing down scheduling.
- affinity use case example: a web server and a redis server should be on the same node.
- anti-affinity example: 3 redis slaves should not be deployed on the same node.
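As a sketch, the anti-affinity example (3 redis slaves never co-located on a node) could look roughly like this; the `app: redis` label and the deployment name are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: never put two pods with this label on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: redis
              topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: redis:4
```

Switching `required...` to `preferredDuringSchedulingIgnoredDuringExecution` would turn this into the soft/preference variant.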
Kubernetes-hosted application checklist (part 1)
At work, we’ve been running Kubernetes (k8s) in production for almost 1 year. During this time, I’ve learnt a few best practices for designing and deploying an application hosted on k8s. I thought I’d share them today; hopefully they’ll be useful to newbies like me.
Liveness and readiness probes
- Liveness probe: check whether your app is running
- Readiness probe: check whether your app is ready to accept incoming requests
The liveness probe is only checked after the readiness probe passes.
If your app does not support a liveness probe, k8s won’t know when to restart your app container; in the event your process crashes, it will stay that way while k8s keeps directing traffic to it.
If your app takes some time to bootstrap, you need to define a readiness probe as well. Otherwise, requests will be directed to your app container before it is ready to serve.
Usually, I just make a single API endpoint for both liveness and readiness probes. E.g. if my app requires a database and a Redis service to work, then in my health check API I will simply check that the database connection and the Redis service are both ready.
router.get('/healthz', async ctx => {
  try {
    // both must succeed for the app to be considered healthy
    await Promise.all([redis.ping(), knex.select(1)])
    ctx.body = 'ok'
  } catch (err) {
    ctx.throw(500, 'not ok')
  }
})
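On the k8s side, both probes can then point at that one endpoint; the path, port, and timing values below are assumptions for illustration:

```yaml
containers:
  - name: my-app
    image: my-app:latest
    readinessProbe:
      httpGet:
        path: /healthz
        port: 3000
      initialDelaySeconds: 5   # give the app time to bootstrap
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 3000
      periodSeconds: 10
      failureThreshold: 3      # restart after 3 consecutive failures
```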
Graceful termination
When an app gets terminated, it receives SIGTERM and then SIGKILL from k8s. The app must be able to handle the signal and terminate itself gracefully.
The flow is like this:
- the container process receives the SIGTERM signal
- if you don’t handle the signal and your app is still running, SIGKILL is sent
- the container gets deleted
Your app should handle SIGTERM and should never get to the SIGKILL step.
An example would be something like below:
process.on('SIGTERM', () => {
  // mark the app as shutting down so health checks start failing
  state.isShutdown = true
  initiateGracefulShutdown()
})

function initiateGracefulShutdown() {
  // close the database pool, then exit with the appropriate code
  knex.destroy(err => {
    process.exit(err ? 1 : 0)
  })
}
Also, the app should start returning an error on the liveness probe.
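A minimal framework-free sketch of that idea (the `state` flag and the `healthStatus` helper are illustrative, not from any library): flip a flag on SIGTERM and have the health endpoint report failure from then on.

```javascript
const state = { isShutdown: false }

process.on('SIGTERM', () => {
  // flag first so health checks start failing immediately
  state.isShutdown = true
  // ...then close servers / DB pools and eventually call process.exit()
})

// what a /healthz handler would return: 500 once shutdown has begun
function healthStatus() {
  return state.isShutdown ? 500 : 200
}
```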
Minimal Node.js docker container
Bitnami recently released a prod version of their bitnami-docker-node image with a much smaller size, due to stripping a bunch of stuff that is unnecessary at runtime.
If your app does not require compiling native modules, you can use it as is. No changes required.
However, if you do need to compile native modules, you can still use their development image as a builder and copy stuff over to the prod image after.
I tried it with one of my apps and the final image size went down from 333 MB to just 56 MB 💪!! All this without the sacrifice of switching to an alpine-based image.
Please note that this is the size reported by Amazon Cloud Registry, so it is probably the compressed size. I don’t build images locally often.
Update: the uncompressed size of my app is 707 MB before and 192 MB after.
# builder stage: full development image, able to compile native modules
FROM bitnami/node:8.6.0-r1 as builder
RUN mkdir -p /usr/src/app/my-app
WORKDIR /usr/src/app/my-app
COPY package.json /usr/src/app/my-app
RUN npm install --production --unsafe
COPY . /usr/src/app/my-app

# prod stage: copy the installed app into the slim runtime image
FROM bitnami/node:8.6.0-r1-prod
RUN mkdir -p /app/my-app
WORKDIR /app/my-app
COPY --from=builder /usr/src/app/my-app .
EXPOSE 3000
CMD ["npm", "start"]
# stage 0: use a full distro only to generate a passwd entry
FROM ubuntu:latest
RUN useradd -u 10001 scratchuser

FROM scratch
COPY dosomething /dosomething
# copy just /etc/passwd so USER works in the scratch image
COPY --from=0 /etc/passwd /etc/passwd
USER scratchuser
ENTRYPOINT ["/dosomething"]
Quite an innovative use of multi-stage Docker builds. Of course, you can create a passwd file yourself, but this approach seems rather more interesting.
Recent Node.js TSC fuss
- meta: vote regarding Rod’s status on the TSC
- Some since-removed information on what Rod did, from the above link (image)
- new Node.js fork ayojs regarding this issue
node-pre-gyp and CI
Note to self: when developing a new feature for a Node.js native module that uses node-pre-gyp, make sure you bump the version higher so that node-pre-gyp will not pull the prebuilt binary.
A better man page. This is insanely useful 👍
tar
Archiving utility.
Often combined with a compression method, such as gzip or bzip2.
- Create an archive from files:
tar cf target.tar file1 file2 file3
- Create a gzipped archive:
tar czf target.tar.gz file1 file2 file3
- Extract an archive in a target folder:
tar xf source.tar -C folder
- Extract a gzipped archive in the current directory:
tar xzf source.tar.gz
- Extract a bzipped archive in the current directory:
tar xjf source.tar.bz2
- Create a compressed archive, using archive suffix to determine the compression program:
tar caf target.tar.xz file1 file2 file3
- List the contents of a tar file:
tar tvf source.tar