node-pre-gyp and CI
Note to self:

When developing a new feature for a Node.js native module that uses node-pre-gyp, make sure you bump the version so that node-pre-gyp will not pull the prebuilt binary instead of building your local changes.
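For context, node-pre-gyp decides which prebuilt binary to fetch from the version interpolated into the `binary` config in package.json. A hedged sketch of what that config typically looks like (module name and host are placeholders, not from any real project):

```json
{
  "name": "my-native-module",
  "version": "1.2.3",
  "binary": {
    "module_name": "my_native_module",
    "module_path": "./lib/binding/",
    "host": "https://example.com/prebuilt/",
    "remote_path": "./{module_name}/v{version}/",
    "package_name": "{module_name}-v{version}-{node_abi}-{platform}-{arch}.tar.gz"
  }
}
```

Because `{version}` is part of the remote package name, bumping the version makes the lookup miss and forces a local build.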
A better man page. This is insanely useful 👍
tar: archiving utility, often combined with a compression method such as gzip or bzip.

- Create an archive from files: `tar cf target.tar file1 file2 file3`
- Create a gzipped archive: `tar czf target.tar.gz file1 file2 file3`
- Extract an archive in a target folder: `tar xf source.tar -C folder`
- Extract a gzipped archive in the current directory: `tar xzf source.tar.gz`
- Extract a bzipped archive in the current directory: `tar xjf source.tar.bz2`
- Create a compressed archive, using the archive suffix to determine the compression program: `tar caf target.tar.xz file1 file2 file3`
- List the contents of a tar file: `tar tvf source.tar`
The power of 2 random choices
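The idea in a nutshell: instead of sending each job to one uniformly random server, sample two servers and pick the less loaded one; the maximum load drops dramatically. A minimal simulation sketch (all names are mine, not from the article):

```javascript
// Power-of-two-choices: sample two random servers, use the less loaded one.
function pickServer(loads) {
  const a = Math.floor(Math.random() * loads.length);
  const b = Math.floor(Math.random() * loads.length);
  return loads[a] <= loads[b] ? a : b;
}

// Simulate 10,000 jobs across 10 servers.
const loads = new Array(10).fill(0);
for (let i = 0; i < 10000; i++) {
  loads[pickServer(loads)] += 1;
}

// The gap between the busiest and idlest server stays tiny,
// compared to picking one random server per job.
console.log(loads, Math.max(...loads) - Math.min(...loads));
```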
Getting started with WebAssembly
wat2js - Compile WebAssembly .wat files to a CommonJS module
WebAssembly spec - WebAssembly Specification
Some example modules
blake2b - Blake2b implemented in WASM
Follow Mathias Buus on GitHub. He has published quite a few interesting things about WebAssembly.
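To get a feel for WebAssembly in Node.js (8+) without any toolchain, you can instantiate a module straight from bytes. The array below hand-encodes a tiny module exporting an `add(a, b)` function:

```javascript
// A minimal WebAssembly binary: magic + version, then type,
// function, export, and code sections for an (i32, i32) -> i32 add.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

const mod = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(mod);
console.log(instance.exports.add(2, 3)); // prints 5
```

Tools like wat2js just automate this step: compile the textual .wat to these bytes and wrap the instantiation in a module.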
Successor to xpath-object-transform
camaro is a utility to transform XML to JSON, using a Node.js binding to the native XML parser pugixml, one of the fastest XML parsers around.
- Faster. A lot faster.
- Build time is also faster.
- Mostly backward compatible with xpath-object-transform. I haven't covered all the cases though.
- Pre-built binaries included for some distros, with more to come.
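A rough usage sketch, assuming camaro's `transform(xml, template)` API, where template values are XPath expressions and an array entry describes repeated nodes; the XML and field names here are made up:

```javascript
// Made-up input document for illustration.
const xml = `
<orders>
  <order id="1"><total>9.99</total></order>
  <order id="2"><total>19.99</total></order>
</orders>`;

// Template values are XPath expressions; an array means
// "for each node matching the first entry, apply the sub-template".
const template = {
  orders: ['//order', {
    id: '@id',
    total: 'number(total)'
  }]
};

// With camaro installed (not bundled here), roughly:
//   const transform = require('camaro');
//   const result = transform(xml, template);
//   // result.orders would be an array of { id, total } objects
```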
This chapter describes various performance tricks that allowed the author to write a very high-performing parser in C++: pugixml. While the techniques were used for an XML parser, most of them can be applied to parsers of other formats or even unrelated software (e.g., memory management algorithms are widely applicable beyond parsers).
Found out about this gem, recommended by the author of “Writing a Really, Really Fast JSON Parser”. This is a really good post as well.
Very young but interesting project.
Might save you from introducing something new to your project just for full-text search (chances are you probably already have Redis in your tech stack).
Here’s a Dockerfile for Redis 4.0 RC3 and RediSearch 0.16 for you to fiddle with.
- The image is based on glibc for wide compatibility
- Uses the apt package manager for access to a large number of packages
- Quicker security updates
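A rough sketch of what such a Dockerfile could look like; the download URL, RediSearch repo/tag, build steps, and module path are all assumptions, not a tested build:

```dockerfile
# Sketch only: URLs, tags, and paths below are assumptions.
FROM debian:stretch-slim

# Build Redis 4.0 RC3 from source (download URL assumed)
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential ca-certificates git wget \
    && wget -O redis.tar.gz https://github.com/antirez/redis/archive/4.0-rc3.tar.gz \
    && tar xzf redis.tar.gz && cd redis-4.0-rc3 \
    && make && make install

# Build the RediSearch module (repo and tag assumed)
RUN git clone --branch v0.16 https://github.com/RedisLabsModules/RediSearch.git \
    && cd RediSearch && make

# Load the module at startup (module path assumed)
CMD ["redis-server", "--loadmodule", "/RediSearch/src/redisearch.so"]
```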
Even though there are many complaints about glibc, it's still very widely adopted. I would hate to debug building libraries with musl-libc. It's just not worth it.
Using alpine as base Docker image
I recently updated all of my personal Dockerfiles, which I keep for multiple purposes, to use alpine as the base image.

Prior to this, I just used ubuntu as the base image and didn't care much about built-image size. However, with Kubernetes, having small images makes rolling out updates much faster.
Some tips for reducing Docker image size that I found during my research:
- Using smaller base image (alpine, busybox, etc..)
- Remove unnecessary dependencies that you only needed for compiling stuff once you're done (also remove caches)
- Use as few layers as possible. However, I don't think you should do it blindly just for the sake of a small image size and destroy readability. I prefer to keep it simple and clean, and optimize later for building only.
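Put together, the tips above look roughly like this for a hypothetical Node.js image (package names are illustrative):

```dockerfile
# Tip 1: small base image
FROM node:8-alpine

WORKDIR /app
COPY package.json .

# Tips 2 and 3 in one layer: install build deps, compile native
# modules, then remove the build deps and caches in the same RUN
# so they never persist in any layer.
RUN apk add --no-cache --virtual .build-deps python make g++ \
    && npm install --production \
    && npm cache clean --force \
    && apk del .build-deps

COPY . .
CMD ["node", "index.js"]
```

The `--virtual .build-deps` group makes the cleanup step a single `apk del`, which keeps the combined RUN readable.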
How to set up twemproxy running on Kubernetes.
- Convert this into a Helm chart.
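For reference, a minimal twemproxy (nutcracker) config that could be mounted into the pods via a ConfigMap; the pool name and backend service DNS names here are made up:

```yaml
# nutcracker.yml (sketch; server addresses are assumptions)
alpha:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - redis-0.redis.default.svc.cluster.local:6379:1
    - redis-1.redis.default.svc.cluster.local:6379:1
```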