Tools I use to create this blog
As you might expect, this blog uses some overengineered tools; after all, it wouldn't be as much fun otherwise. This article is probably not too interesting on its own. It's just "documentation".
// TODO upload source code and link it here.
"Build system"
```
.
├── ... (readme 'n stuff)
├── code
│   ├── Cargo.toml
│   └── src
│       └── ... (rust source)
└── content
    ├── articles
    │   ├── 2022-08-29-blog-start.md
    │   ├── 2022-08-29-blog-test.md
    │   └── 2022-08-30-blog-tools.md
    ├── makefile
    ├── out
    │   └── ... (generated files)
    └── style.css
```
The entry point to this "build system" is `content/makefile`. It has rules for generating HTML from the markdown sources, the index, and the Atom feed. The "compiler" here is a small rust program. (code)
```makefile
# oversimplified, doesn't work
TOOL := blog-tool

out/index: $(ALL_ARTICLES) $(TOOL)
	$(TOOL) render-index > $@

out/feed.atom: $(ALL_ARTICLES) $(TOOL)
	$(TOOL) generate-atom > $@

out/%: articles/%.md $(TOOL)
	$(TOOL) render-article $< > $@
```
A small trick here is to make everything depend on the compiler (`$(TOOL)`) too, so that when it changes, re-generation of all articles is triggered.
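For illustration, here is a hypothetical sketch of what the compiler's command-line dispatch could look like. Since the source isn't uploaded yet, the subcommand names are simply taken from the makefile above and the bodies are placeholders, not the real tool.

```rust
// Hypothetical sketch of the blog "compiler": dispatch on the subcommand the
// makefile passes and write the result to stdout so make can redirect it.
use std::env;
use std::fs;

fn main() {
    let args: Vec<String> = env::args().collect();
    match args.get(1).map(String::as_str) {
        Some("render-article") => {
            let path = args.get(2).expect("usage: blog-tool render-article <article.md>");
            let markdown = fs::read_to_string(path).expect("cannot read article");
            // placeholder: the real tool converts the markdown to HTML here
            println!("<!-- {} bytes of markdown from {} -->", markdown.len(), path);
        }
        Some("render-index") => {
            // placeholder: the real tool prints the HTML index of all articles
            println!("<!-- index -->");
        }
        Some("generate-atom") => {
            // placeholder: the real tool prints the Atom feed
            println!("<!-- feed -->");
        }
        _ => eprintln!("usage: blog-tool <render-article|render-index|generate-atom>"),
    }
}
```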
Rust tools
I use the laby crate for templating the HTML.
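As a rough illustration only (written from memory of laby's README, so the macro names and exact API may not match the version I actually use), templating a page looks roughly like this:

```rust
// Sketch, not the actual blog-tool code: laby exposes one macro per HTML
// element plus a render! macro that turns the node tree into a String.
use laby::*;

fn article_page(title: &str) -> String {
    render!(html!(
        head!(title!(title)),
        body!(
            h1!(title),
            p!("hello, world!"),
        ),
    ))
}

fn main() {
    println!("{}", article_page("blog tools"));
}
```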
// TODO what is important here?!
File server
I wanted to serve all files without the file name extension (`.html`), but my previous HTTP server (http-server) inferred the MIME type exclusively from the extension, which made that impossible. The obvious solution was to reinvent the wheel.
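To give an idea of the approach (a hypothetical sketch, not the actual fileserver code; the `out/` directory and port are assumptions), serving extension-less paths and picking the MIME type ourselves can be as simple as:

```rust
// Sketch: serve "/foo" from "out/foo" and decide the Content-Type ourselves,
// instead of inferring it from a file extension on disk.
use std::{
    fs,
    io::{Read, Write},
    net::TcpListener,
    path::PathBuf,
};

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 4096];
        let n = stream.read(&mut buf)?;
        let request = String::from_utf8_lossy(&buf[..n]);
        // Extract the path from "GET /path HTTP/1.1".
        let path = request
            .split_whitespace()
            .nth(1)
            .unwrap_or("/")
            .trim_start_matches('/');
        let file = PathBuf::from("out").join(if path.is_empty() { "index" } else { path });
        // Extension-less article files are served as HTML.
        let mime = match file.extension().and_then(|e| e.to_str()) {
            Some("css") => "text/css",
            Some("atom") => "application/atom+xml",
            _ => "text/html; charset=utf-8",
        };
        match fs::read(&file) {
            Ok(body) => {
                let header = format!(
                    "HTTP/1.1 200 OK\r\nContent-Type: {mime}\r\nContent-Length: {}\r\n\r\n",
                    body.len()
                );
                stream.write_all(header.as_bytes())?;
                stream.write_all(&body)?;
            }
            Err(_) => {
                stream.write_all(b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n")?;
            }
        }
    }
    Ok(())
}
```

(A real server would also need to guard against path traversal and handle concurrent connections.)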
This had the great side-effect of making my website blazing fast 🚀🚀! :)
http-server (node.js)
```
Running 10s test @ http://127.0.0.1:8080/test-file
  2 threads and 100 connections
Requests/sec:   2314.00
Transfer/sec:    725.38KB
```
fileserver (rust, no caching)
```
Running 10s test @ http://127.0.0.1:8080/test-file
  2 threads and 100 connections
Requests/sec:  24464.69
Transfer/sec:      3.10MB
```
// TODO also upload source code and link it
Syntax highlighting
For that, I chose the crate synoptic. It provides some functions for defining tokens with regexes and returns the start and end points of each match for further processing. For example, this is how I defined comments and types:
```rust
let rust_grammar = &[
    (&["(?m)(//.*)$"], "comment"),
    (&["[A-Z][a-z]*", "bool", "usize", /* simplified */], "type"),
    /* more rules */
];
```
The library finds all the tokens and lets me serialize them to HTML.
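As an illustration of that serialization step (a hypothetical sketch, not the exact code), turning a list of (start, end, token-name) spans into escaped HTML could look like this:

```rust
// Sketch: wrap highlighted spans in classed <span> elements, escaping the text.
// The spans here are hand-written for the example, not produced by synoptic.
fn escape(s: &str) -> String {
    s.replace('&', "&amp;").replace('<', "&lt;").replace('>', "&gt;")
}

fn to_html(code: &str, mut spans: Vec<(usize, usize, &str)>) -> String {
    spans.sort_by_key(|&(start, _, _)| start);
    let mut out = String::new();
    let mut pos = 0;
    for (start, end, kind) in spans {
        out.push_str(&escape(&code[pos..start]));
        out.push_str(&format!(
            "<span class=\"hl-{kind}\">{}</span>",
            escape(&code[start..end])
        ));
        pos = end;
    }
    out.push_str(&escape(&code[pos..]));
    out
}

fn main() {
    let code = "let x: usize = 1; // answer";
    let spans = vec![(7, 12, "type"), (18, 27, "comment")];
    println!("<pre><code>{}</code></pre>", to_html(code, spans));
}
```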
End
I still need to learn how to write this blog well and what is most interesting to read. Please give me some advice or commentary via Matrix, mail or fedi (see contact).
Article written by metamuffin, text licensed under CC BY-ND 4.0, non-trivial code blocks under GPL-3.0-only except where indicated otherwise