Förvillelser

Good stuff in 2018

Happy New Year. It’s 2019. Here’s a short retrospective on some of the things I enjoyed in 2018.

Music

I’ve listened to a lot of different stuff this year. Here are some of my favorites: Top 16 releases 2018

Movies

  • Isle of Dogs
  • Three Billboards Outside Ebbing, Missouri
  • The Death of Stalin
  • Mandy

Books

  • Kazuo Ishiguro - Nocturnes
  • Margaret Atwood - The Handmaid’s Tale
  • Martin Kleppmann - Designing Data-Intensive Applications (my notes)
  • Francesco Cesarini - Designing for Scalability with Erlang/OTP
  • Stanisław Lem - The Invincible

Gigs

  • Aphex Twin at Funkhaus in Berlin
  • Rival Consoles at Prince Charles in Berlin
  • Nils Frahm at Malmö Live konserthus in Malmö
  • Sleaford Mods at Columbiahalle in Berlin
  • Fever Ray at Columbiahalle in Berlin

Exhibitions

Skateboard films

TV series

  • Maniac
  • Dilan & Moa
  • Black Mirror s4
  • The Handmaid’s Tale
  • Stranger Things s2
  • Dark

Misc

  • Cinque e cinque in Livorno (sandwich)
  • Chili pepper plants 🌶
  • Masthugget parking garage (skate spot)
  • Negroni (cocktail)

On Designing Data-Intensive Applications

Designing Data-Intensive Applications — The Big Ideas Behind Reliable, Scalable, and Maintainable Systems — is a book written by Martin Kleppmann, published in 2017 by O’Reilly.

I read this book cover-to-cover, which is probably not necessary or even recommended. But it worked quite well for me, although it took a long time to finish. Towards the end of the book I started thinking about writing a sort of review of it. Partly to make some of the ideas and concepts stick better, and partly because I think it’s a great book that more people should read.

While reading technical books I usually mark sections I’d like to get back to: either to re-read, maybe as a starting point for a small experiment of my own, or just because the page/section is really good. In this post I’ll write a short snippet about each such marking I made for Designing Data-Intensive Applications.

My expectations of the book were different from its actual content. I had the impression that DD-IA was going to be more of a cookbook-style read, with recipes for architecting scalable backend apps. That is not the case at all; instead the book focuses on the underlying theory behind the vital pieces used in such designs: how database features work under the hood, distributed systems theory, concurrency truths, and tradeoffs around consistency, fault tolerance and so on. You might think “why should I care about stuff like that?”. My answer is that this sort of knowledge gives you guidance on which technology to choose for a given use case, and what characteristics you can expect from it. Plus it’s great material if you ever want to build a database-like system yourself.

My markings

Constructing and maintaining SSTables

Where: Page 78, Chapter 3: Storage and retrieval

I think my main motivation for marking this section is that it clearly outlines how some storage engines work. It describes an algorithm that is “essentially what is used in LevelDB and RocksDB”! It’s also cool that red-black trees are mentioned as a good way to keep a sorted structure in memory for a storage engine; I quite recently implemented a persistent red-black tree in Elixir myself.
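As a refresher for myself, here is a minimal sketch of the memtable-to-SSTable idea in Elixir (entirely my own toy code, not from the book or LevelDB): writes go into a sorted in-memory structure (Erlang’s :gb_trees stands in for a red-black tree here), and once it grows past a threshold the entries are written out in key order as an immutable segment file. Keys and values are assumed to be binaries.

defmodule TinyStore do
  @memtable_limit 1_000

  def new, do: %{memtable: :gb_trees.empty(), segment: 0}

  def put(%{memtable: mem, segment: seg} = state, key, value) do
    mem = :gb_trees.enter(key, value, mem)

    if :gb_trees.size(mem) >= @memtable_limit do
      flush(mem, seg)
      %{state | memtable: :gb_trees.empty(), segment: seg + 1}
    else
      %{state | memtable: mem}
    end
  end

  # :gb_trees.to_list/1 returns entries in key order, so the segment file is
  # written sorted -- that sorted, immutable file is the SSTable.
  defp flush(mem, seg) do
    lines = for {key, value} <- :gb_trees.to_list(mem), do: [key, "\t", value, "\n"]
    File.write!("segment-#{seg}.sst", lines)
  end
end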

Partitioning and Secondary Indexes

Where: Page 206, Chapter 6: Partitioning

My interest in search engines triggered this marking. Here we get a deep dive into why secondary indexes are so useful, how they can be implemented, and some tradeoffs.

I really like the way Kleppmann makes it easy to understand how partitioning affects secondary indexes, and how the scatter/gather approach suffers from tail latency amplification (all too familiar after having worked on Elasticsearch-backed projects).
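To illustrate the scatter/gather part, here is a rough Elixir sketch (my own toy code, not from the book): with a document-partitioned (“local”) secondary index, every partition has to be queried, and the slowest partition dictates the overall latency.

defmodule ScatterGather do
  # `partitions` is whatever identifies each partition (node, shard id, ...);
  # query_partition/2 is a placeholder for the real per-partition index lookup.
  def search(partitions, term, timeout \\ 5_000) do
    partitions
    |> Task.async_stream(&query_partition(&1, term), timeout: timeout, ordered: false)
    |> Enum.flat_map(fn {:ok, hits} -> hits end)
  end

  defp query_partition(_partition, _term), do: []
end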

Preventing Lost Updates

Where: Page 242, Chapter 7: Transactions

A description of the very common issue of data loss due to concurrent updates: you read a value, (possibly) modify it, and then write it back. Then follows a list of solutions to the problem, with plenty of detail (one of them is sketched below):

  • Atomic write operations
  • Explicit locking
  • Automatically detecting lost updates
  • Compare-and-set
  • Conflict resolution and replication

Very practical information that helps every developer.
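To make one of them concrete: compare-and-set can be expressed as a conditional UPDATE. Here is a sketch with Ecto, where the Page schema, its content field and MyApp.Repo are all hypothetical names.

defmodule Pages do
  import Ecto.Query

  # The UPDATE only matches if the row still holds the content we originally
  # read; a racing writer makes it match zero rows instead of silently losing data.
  def update_content(id, old_content, new_content) do
    {count, _} =
      from(p in Page, where: p.id == ^id and p.content == ^old_content)
      |> MyApp.Repo.update_all(set: [content: new_content])

    case count do
      1 -> :ok
      0 -> {:error, :conflict}
    end
  end
end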

Safety and liveness

Where: Page 308, Chapter 8: The Trouble with Distributed Systems

I learned that there are two important kinds of properties for the correctness of distributed algorithms: safety and liveness. These two kinds can help us deal with difficult system models. Typically, in distributed systems you want the safety properties to always hold, even if all nodes crash, for example. Liveness properties can be more flexible.

Safety

This property can be informally defined as “nothing bad happens”; the more precise definition is:

If a safety property is violated, we can point at a particular point in time at which it was broken (…). After a safety property has been violated, the violation cannot be undone — the damage is already done.

Liveness

A liveness property works the other way round: it may not hold at some point in time (for example, a node may have sent a request but not yet received a response), but there is always hope that it may be satisfied in the future (namely by receiving a response).

Linearizability is stronger than causal consistency

Where: Page 342, Chapter 9: Consistency and Consensus

This is a section I want to re-read to make sure I understand it completely. I’ve tried to immerse myself more in the distributed systems world and all its interesting and sometimes mind-bending problems. So much research to go through, and so much head scratching.

What caught my eye in this section is this passage:

In fact, causal consistency is the strongest possible consistency model that does not slow down due to network delays, and remains available in the face of network failures.

In many cases, systems that appear to require linearizability in fact only require causal consistency, which can be implemented more efficiently.

Apparently this is fairly recent research, and we may still see cool things implemented based on it!

Chapter 9 Summary

Where: Page 373, Chapter 9: Consistency and Consensus

I’m not going to include everything, but I liked this summary as a sort of reference for some common terms and their meanings in distributed systems.

  • Linearizability: Popular consistency model. Goal is to make replicated data appear as though there is only a single copy. All operations act on the single copy atomically. Appealing because it’s easy to understand. Downside of being slow. Especially in environments with large network delays (cloud).
  • Causality: Imposes an ordering on events in a system. Weaker consistency model than linearizability. Some things can be concurrent. Does not incur coordination overhead like linearizability, also less sensitive to network delays.
  • Consensus: All nodes agree on the decision. Many problems can be reduced to the consensus problem; Linearizable compare-and-set registers, Atomic transaction commit, Total order broadcast, Locks and leases, Membership/coordination service, Uniqueness constraint. Straightforward on single node, or single node decision maker.

Using logs for message storage

Where: Page 447, Chapter 11: Stream Processing

A very good breakdown of how a log can be used to power a message broker (like Kafka or Kinesis), both in the simplest single-node form and partitioned across machines.

I think it would be a really fun experiment to try and build a simple message broker like this myself.
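As a first stab, here is a toy sketch of the core idea (entirely my own, with no persistence, partitioning or acknowledgements): the broker is just an append-only log, and each consumer is nothing more than an offset into it, which is what makes replay so cheap.

defmodule ToyLogBroker do
  use GenServer

  def start_link(_opts \\ []), do: GenServer.start_link(__MODULE__, %{log: [], offsets: %{}})

  def publish(broker, message), do: GenServer.cast(broker, {:publish, message})
  def poll(broker, consumer), do: GenServer.call(broker, {:poll, consumer})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:publish, message}, state) do
    # Appending with ++ is O(n); fine for a toy, not for a real broker.
    {:noreply, %{state | log: state.log ++ [message]}}
  end

  @impl true
  def handle_call({:poll, consumer}, _from, state) do
    offset = Map.get(state.offsets, consumer, 0)
    new_messages = Enum.drop(state.log, offset)
    offsets = Map.put(state.offsets, consumer, offset + length(new_messages))
    {:reply, new_messages, %{state | offsets: offsets}}
  end
end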

Designing for auditability

Where: Page 531, Chapter 12: The Future of Data Systems

I enjoyed this whole chapter a lot, and some of the concerns Kleppmann brings up really resonate with me.

One thing that may become more important in the near future is how auditable our systems are. If you mutate a table in a relational database, the previous state is lost and there might not be a trace of why something was changed. This can be a huge problem.

Event-based systems have an advantage here if the events are kept around (event sourcing). We need to design systems that are deterministic, repeatable and explicit about data flow. Immutable events also allow us to do time-travel debugging.
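A tiny illustration of the difference (my own, not from the book): with an event log, the current state is just a fold over immutable events, and the history of how we got there stays around to audit or replay.

events = [
  %{type: :account_opened, account: "a-1", at: ~U[2018-01-01 10:00:00Z]},
  %{type: :money_deposited, account: "a-1", amount: 100, at: ~U[2018-01-02 09:30:00Z]},
  %{type: :money_withdrawn, account: "a-1", amount: 40, at: ~U[2018-01-03 14:05:00Z]}
]

# Current state is derived; the events themselves are never mutated.
balance =
  Enum.reduce(events, 0, fn
    %{type: :money_deposited, amount: amount}, acc -> acc + amount
    %{type: :money_withdrawn, amount: amount}, acc -> acc - amount
    _other, acc -> acc
  end)
# balance == 60, and the "why" behind every change is still in `events`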

Feedback loops

Where: Page 536, Chapter 12: The Future of Data Systems

This section is about (bad) feedback loops in recommendation systems and other such predictive applications.

We’ve all heard about social media echo chambers or filter bubbles. Recommendation systems can be a huge contributing factor to this phenomenon.

Another example is an employer using credit scores to evaluate potential hires. If you miss a payment on a bill (due to some catastrophic event beyond your control), your credit score suffers, which in turn makes it harder for you to get a job and get back on track.

These are difficult problems that must be tackled head on: more and more of the important parts of our lives are powered by ML, and adoption keeps increasing. Scary stuff, even when built with good intentions.

Data as assets and power

Where: Page 540, Chapter 12: The Future of Data Systems

I can’t even count how many scandals about the misuse of user data were reported in 2018. The problem seems to be prevalent, and it has left me very skeptical and paranoid about all online services.

In this section Kleppmann argues that we should perhaps view data not as an asset like any other, but as a hazardous one. We need to safeguard it against abuse today, and also make sure it does not fall into the wrong hands tomorrow.

When collecting data, we need to consider not just today’s political environment, but all possible future governments. There is no guarantee that every government elected in the future will respect human rights and civil liberties, so “it is poor civic hygiene to install technologies that could someday facilitate a police state”

In conclusion

Read the book. Learn about how databases work. Learn the underlying ideas of the technology you might already be using and what you can expect from it. Get better at deciding on what technology to use and when. The choices you make will have very real impact on the development and operation of your systems.

Think about your system holistically: correctness, behaviour in the face of network partitions, observability, traceability, auditability. Think about how the system affects users and non-users. Think about data as a potentially hazardous asset.

This is a book I recommend all developers read.


Finding bugs with property based testing

Property-based testing has been on my radar ever since being forced to write QuickCheck tests for the introduction to functional programming course at Chalmers University some 10+ years ago.

However, example-based tests have always felt more intuitive and, maybe more importantly, have been the go-to approach to development in the places I’ve worked.

Recently I’ve tried to force myself into the invariant-based reasoning mode and apply property-based testing. This was prompted by some interesting issues around testing inputs consisting of datetimes and time zones, where edge cases led to head-scratching bugs despite good unit test coverage. (If you want a good introduction I recommend Jessitron’s post Property-based testing: what is it?)
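To make that concrete, here is roughly what such a test looks like in Elixir with the stream_data library; the property itself is a made-up example, not the actual bug I was chasing.

defmodule TimePropertyTest do
  use ExUnit.Case
  use ExUnitProperties  # from the stream_data package

  property "unix timestamps round-trip through DateTime" do
    # Generates timestamps from 1970 up to the year 2100.
    check all ts <- integer(0..4_102_444_800) do
      assert ts |> DateTime.from_unix!() |> DateTime.to_unix() == ts
    end
  end
end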

I still haven’t gotten to the stage where I can get property based tests to drive design of the implementation. But I’ve gotten much more comfortable at wielding this amazing tool.

Strengths

Edge cases

This probably doesn’t come as a surprise. Generating inputs instead of selecting them yourself yields impressive coverage and can catch the most subtle cases.

Algorithms and data-structures

Property-based tests seem like the perfect fit for these types of functions. Maybe it’s because there are usually quite clear desired properties in this space.

I was happy and impressed by how fast property based tests found bugs in my own (persistent) data structure implementations in Elixir.
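For instance, a single ordering property catches a surprising number of insertion and balancing bugs. A sketch (inside a test module set up like the one above), where the RedBlackTree API (new/0, insert/2, to_list/1 with set semantics) is a stand-in for my actual implementation:

property "in-order traversal is sorted and contains each inserted key once" do
  check all keys <- list_of(integer()) do
    tree = Enum.reduce(keys, RedBlackTree.new(), &RedBlackTree.insert(&2, &1))
    assert RedBlackTree.to_list(tree) == keys |> Enum.uniq() |> Enum.sort()
  end
end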

Large combination of inputs

More generally, it’s becoming apparent to me that property-based tests might be a good fit for anything where the combined input space is very large.

Conclusion

I think there are way more opportunities to improve confidence in a code base with property-based tests than we usually realize.

I’m also convinced that having the “what needs to go right?” reasoning mode in your arsenal can make some hairy issues a bit easier to work with.


Experiment with gzipped and chunked HTTP responses with Plug

Recently I wanted to add gzip support to an API endpoint which could give back very large amounts of JSON. Internally the API gets data from a source where the size of the payload is not known upfront. To deliver the data it uses an HTTP/1.1 feature called chunked transfer encoding, via Plug.Conn.chunk/2. After finishing the implementation I found some results that, to me, were surprising.

Requirements

  • We do not want to keep the data in memory because it can be very large, and there may be many similar requests coming in.
  • The data itself comes from some source we can stream from, like a database or AWS s3.
  • Elixir implementation

Hypothesis

Adding compression of the data using erlang’s zlib will speed up the transfer iff the data size exceeds some threshold.

Gzip

The easiest thing would be to simply gather the data in memory and then pop it out in one go with :zlib.gzip(mydata) and the correct HTTP headers. But we implemented chunking so that we don’t have to keep the whole payload in memory. So what to do? We can compress the chunks. This was not straightforward to do, and I’m still not sure that what I cooked up is the optimal way to do it.

Headers

Before getting to the individual chunks we first have to indicate to the client that we are going to send gzipped things.

Plug.Conn.put_resp_header(conn, "content-encoding", "gzip")

Of course we should only do this if the client actually asked for compression, i.e. has set the Accept-Encoding request header to application/gzip or gzip, deflate.
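That check could be as simple as looking for “gzip” among the Accept-Encoding values; a sketch of the gzip_requested?/1 helper referenced further down (my own name, not part of Plug):

defp gzip_requested?(conn) do
  conn
  |> Plug.Conn.get_req_header("accept-encoding")
  |> Enum.any?(&String.contains?(&1, "gzip"))
end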

Test first

I knew exactly what outcome I wanted, but not quite how the implementation would look, so I started with a test.

test "should gzip json data if requested" do
  conn =
    build_conn()
    |> put_req_header("accept-encoding", "application/gzip")
    |> get("/data")

  assert conn.status == 200
  assert Enum.member?(conn.resp_headers, {"content-type", "application/json; charset=utf-8"})
  assert Enum.member?(conn.resp_headers, {"content-encoding", "gzip"})
  assert {:ok, _} = conn.resp_body |> :zlib.gunzip() |> Poison.decode()
end

Attempt one

This boiled down to gzipping every chunk payload like so: Plug.Conn.chunk(conn, :zlib.gzip(chunk_payload)). This worked in my ExUnit tests and with curl, but it did not work in any of the browsers I tried (Firefox, Safari, Chrome). In the browsers I either got the first chunk only, or a JSON parse error. Very interesting! This was surely a smell that my implementation was not correct.

It was not clear why the browsers did not understand the data, but this thread on Stack Overflow made me suspect that it had to do with including the full gzip headers in every chunk. In other words, the first attempt produced one complete gzipped “file” per chunk that had to be unpacked individually, instead of one large “file” that was fully assembled once the last chunk was sent.

Attempt two

I did not know exactly how to proceed, but figured I should read up on the Erlang zlib documentation to see what’s possible.

Getting to know erlang’s zlib

The SO thread mentioned details about window bits and init functions. Is there something like that I can control in the Erlang implementation? Yes, :zlib.deflateInit/6 has a WindowBits option. Promising. deflateInit needs a zstream though; what’s that? “A zlib stream, see open/0.” Hmmm, okay, time to play around in iex.

z = :zlib.open()
:ok = :zlib.deflateInit(z, :default, :deflated, 31, 8, :default)

Fancy, and lots of magic parameters. All of them except 31 for the WindowBits are the defaults for deflateInit/6. This is a good start: we have a zstream and have configured it. At this point all I want to do is compress some text and be able to uncompress it again.

first = :zlib.deflate(z, "my first string")
second = :zlib.deflate(z, "my second string", :finish)
:zlib.deflateEnd(z)
:zlib.gunzip(first ++ second) # "my first stringmy second string"

With a bit of imagination we can see how this could fit together with the chunks: call deflate on the chunk payload and send that. Will this approach pass the browser test?

Plugging it in

Awful title aside, here’s how a module wrapping the zlib functionality could look:

defmodule MyApp.ChunkCompressor do
  def init(%Plug.Conn{} = conn) do
    conn
    |> Plug.Conn.assign(:gzip, gzip_requested?(conn)) # gzip_requested?/1 returns a boolean based on the accept-encoding header
    |> content_encoding() # sets the "content-encoding: gzip" response header when gzip is enabled
    |> init_zstream()
  end

  def chunk(%Plug.Conn{assigns: %{gzip: false}} = conn, payload, _deflate_option) do
    Plug.Conn.chunk(conn, payload)
  end

  def chunk(%Plug.Conn{assigns: %{gzip: true, zstream: zstream}} = conn, payload, :sync) do
    compressed_payload = :zlib.deflate(zstream, payload, :sync)
    Plug.Conn.chunk(conn, compressed_payload)
  end

  def chunk(%Plug.Conn{assigns: %{gzip: true, zstream: zstream}} = conn, payload, :finish) do
    compressed_payload = :zlib.deflate(zstream, payload, :finish)
    :zlib.deflateEnd(zstream)
    Plug.Conn.chunk(conn, compressed_payload)
  end

  defp init_zstream(%Plug.Conn{assigns: %{gzip: false}} = conn), do: conn
  defp init_zstream(%Plug.Conn{assigns: %{gzip: true}} = conn) do
    with zstream <- :zlib.open(),
         :ok <- :zlib.deflateInit(zstream, :default, :deflated, 31, 8, :default) do
      Plug.Conn.assign(conn, :zstream, zstream)
    end
  end
end

And here’s a Phoenix controller using our ChunkCompressor:

# ...

def data(conn, _params) do
  conn =
    conn
    |> put_resp_content_type("application/json")
    |> MyApp.ChunkCompressor.init()
    |> send_chunked(200)

  with {:ok, conn} <- stream_data(conn),
       {:ok, conn} <- MyApp.ChunkCompressor.chunk(conn, "", :finish) do
    conn
  end
end

def stream_data(conn) do
  # "Stream from source" is a placeholder; source_stream/0 stands in for the real database/S3 stream
  Enum.reduce_while(source_stream(), {:ok, conn}, fn payload, {:ok, conn} ->
    case MyApp.ChunkCompressor.chunk(conn, payload, :sync) do
      {:ok, conn} -> {:cont, {:ok, conn}}
      error -> {:halt, error}
    end
  end)
end

Using this we can finally verify in a browser that we get all the expected JSON. Celebration!

Comparison against uncompressed transfer

Let’s take a look at how our compressed data transfer compares against the uncompressed baseline. Gotta be honest, I was quite confident that it would be way faster.

Compression   Data size   Time taken   Avg. d/l speed   Notes
No            1125M       0:00:20      55.1M/s          Baseline w/ 2 CPUs
Yes           295M        0:00:55      5427k/s          Default compression

Wha? How can it be!? Average download speed is a tenth of the baseline — WTF!

George Costanza is also confused

Very disappointing. But I slowly realized that the size benefit simply didn’t outweigh the overhead of compressing every chunk. I started experimenting with the zlib parameters to see if I could improve things:

Compression   Data size   Time taken   Avg. d/l speed   Notes
Yes           346M        0:00:35      9.9M/s           :best_speed compression level
Yes           348M        0:00:38      9337k/s          Highest mem_level
Yes           755M        0:00:35      21.5M/s          :huffman_only strategy

It turned out to be tough to beat the baseline’s 20 seconds. The last option got close in download speed, but then the compression ratio was garbage. Not worth it. I also implemented parallel compression, but that wouldn’t fly:

** (EXIT) an exception was raised:
    ** (ErlangError) Erlang error: :not_on_controlling_process
        :zlib.getStash_nif(#Reference<0.3440369274.760610817.152045>)
        :zlib.restore_progress/2
        :zlib.deflate/3

Because a zlib stream may only be used from the process that opened it (its controlling process).

There’s an alternative erlang zlib library called ezlib “optimized for streaming” that sounded very promising — unfortunately I couldn’t get it to work.

Conclusion

It seems like the original idea of compressing chunks does not give enough benefit; the transfer speed is worse than the uncompressed version. In hindsight it makes sense: doing work per chunk, especially computationally intensive work, inflicts a heavy penalty on throughput.

There’s one more thing I can think of that might help the compression, and that is to increase the size of the chunks we get from the data source (right now they are around 150-1000 KB), so that deflate is called fewer times (a rough sketch follows below). I could also try smaller values for WindowBits, but I am not sure that would do much.
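An untested sketch of that batching, reusing the source_stream/0 placeholder from the controller above: group several small source chunks into one payload per deflate call, so the per-chunk compression overhead is paid less often.

source_stream()
|> Stream.chunk_every(10)
|> Stream.map(&IO.iodata_to_binary/1)
|> Enum.reduce_while({:ok, conn}, fn payload, {:ok, conn} ->
  case MyApp.ChunkCompressor.chunk(conn, payload, :sync) do
    {:ok, conn} -> {:cont, {:ok, conn}}
    error -> {:halt, error}
  end
end)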

All in all this was a “failed” experiment, although it was fun and I learned some things.

Future

HTTP/2 does not support chunked transfer encoding (it even forbids it). I don’t know what a solution would look like for HTTP/2, but maybe I’ll write a follow-up if I get to experiment a bit with that.


Game of Life in Elixir and Scenic

After seeing Boyd Multerer’s 2017 talk on Scenic – an OpenGL-backed UI framework for Elixir/Erlang – I have been following its progress. This year at ElixirConf we got a new talk by Mr. Multerer with even more impressive and polished demos, and most importantly the Scenic repo went public.

I knew I wanted to build something simple to get familiar with the framework. Yesterday I came up with the brilliant idea of adding a graphical view to my very old Elixir Game of Life, golex. Golex was one of my first Elixir projects, conceived back in 2013 when Elixir was still in beta (hipster cred?). I cloned the project, built it, and ran the text-based simulation. It worked like a charm, and the code looked modular enough to build a graphical view on top of.

Getting started with Scenic

This was pretty straightforward with clear instructions for macOS in the Getting started guide. It didn’t take long at all before I was looking at the sample app and clicking around.

I created a new scene, and looked at the other ones for clues on how to proceed.

The grid

I decided to start by just drawing a 2D grid in my GameOfLife scene to get my feet wet.

First attempt:

alias Scenic.Graph
alias Scenic.Primitives

@width 800
@height 600
@cell_size 10

@grid Graph.build()
  |> Primitives.line({{0, 0}, {@width, 0}}, stroke: {1, :white})
  |> Primitives.line({{0, @cell_size}, {@width, @cell_size}}, stroke: {1, :white})
  |> Primitives.line({{0, 2 * @cell_size}, {@width, 2 * @cell_size}}, stroke: {1, :white})

🤔

This is already old after three lines. I will write a function since I’ve heard that computers are good at executing repetitive tasks. Second attempt:

def build_grid(graph, {width, height}, spacing) do
  horizontal =
    Enum.reduce(0..height, graph, fn y, acc ->
      Scenic.Primitives.line(acc, {{0, spacing * y}, {width, spacing * y}}, stroke: {1, :white})
    end)

  Enum.reduce(0..width, horizontal, fn x, acc ->
    Scenic.Primitives.line(acc, {{spacing * x, 0}, {spacing * x, height}}, stroke: {1, :white})
  end)
end

# in init/2
Graph.build()
|> build_grid({@width, @height}, @cell_size)
|> push_graph()

– that’s better! I actually found a bug in this function as I was writing this post (proof). Here’s the beautiful output:

golux grid

The cells

Since the grid drawing was so painless I was eager to get the cells out on the board. This turned out to also be quite easy. I stared at the different primitives for a bit, expecting something like rect(x, y, width, height) (like in Processing or Quil). I found quad/3 instead. It wasn’t really clear to me how to translate a rect at first, so I thought let’s just go with quad for now to Get Shit Done.

def cell_graph(graph, %Golex.Cell{alive: false}), do: graph
def cell_graph(graph, %Golex.Cell{position: {x, y}, alive: true}) do
  xp = x * @cell_size
  yp = y * @cell_size

  Scenic.Primitives.quad(
    graph,
    {{xp, yp}, {xp, yp + @cell_size}, {xp + @cell_size, yp + @cell_size}, {xp + @cell_size, yp}},
    fill: :white
  )
end

So there are a couple of things going on here. First off, the pattern matching makes it really clear that we simply ignore dead cells. For living cells we need to calculate where on the grid they should show up; that’s what xp and yp are for (naming is hard, etc.). Now obviously we need to call this function for every single cell in our world.

def world_graph(graph, %Golex.World{cells: cells}) do
  Enum.reduce(cells, graph, fn {_, cell}, acc ->
    cell_graph(acc, cell)
  end)
end

# init/2 can now do
world = Golex.random_world(...) # Get a new world from golex

Graph.build()
|> world_graph(world)
|> build_grid({@width, @height}, @cell_size)
|> push_graph()

We have the living cells on the grid, great. Except the game still sucks. We need to make it move.

Animation

I’m sure there are a bunch of ways to do this, but I really liked the idea of making the scene send a message to itself on a fixed interval, which would trigger the world update and re-render. To achieve that we reach into our Erlang toolbox and find :timer.send_interval/2 (also used in other Scenic demos). I figured that updating the scene once a second should be conservative enough to start with; I had no clue or expectations about how slow or fast Scenic and golex would be.

To handle the message we have to implement handle_info/2 – standard OTP stuff.

# In init/2
:timer.send_interval(1_000, :world_tick)

def handle_info(:world_tick, state) do
  new_world = Golex.world_tick(state.world)

  Graph.build()
  |> world_graph(new_world)
  |> build_grid({@width, @height}, @cell_size)
  |> push_graph()

  {:noreply, %{state | world: new_world}}
end

It’s slow :(

The one-second update interval turned out to not be quite conservative enough; the updates looked a bit dodgy. I measured how long the work in handle_info/2 above took, and it was a bit more than a second (~1.2 s). Was Scenic really this slow? Of course not. It was golex’s world_tick/1 function that was very, very naive – but I fixed that! I didn’t worry about performance after that and could lower the timer interval a lot (to 100 ms).

Animated game of life

Inputs

Part of the fun of Game of Life is seeing different starting conditions play out, so adding a way to restart the game with a fresh world would be cool. Maybe a simple left click on the board to do that? I was a bit confused at first about how to implement this, because I kept trying to use filter_event/3. That was wrong, because I don’t have any components in my scene, and it is components that generate events. In our case we need to deal with lower-level inputs via handle_input/3. A nice way to understand what inputs are available and how to handle them is to add:

def handle_input(msg, _, state) do
  IO.inspect(msg, label: "handle_input")
  {:noreply, state}
end

to your scene. This way, every time some input event happens you will see what it looks like. To handle the left mouse button release I just pattern matched like this:

def handle_input({:cursor_button, {:left, :release, _, _}}, _input_context, state) do
  IO.puts("Generating a new world")
  new_world = Golex.random_world(div(@width, @cell_size), div(@height, @cell_size))

  {:noreply, %{state | world: new_world}}
end
def handle_input(_msg, _, state), do: {:noreply, state} # Need to handle all other events

Voilà we have new worlds on left mouse click.

Faster!

This section came about after getting some feedback from Mr. Multerer himself on Twitter. What happens to performance if we switch the cell drawing from quads to rects?

time difference for quads and rects

Percentile   Quads (µs)   Rects (µs)
90           87123.5      82863.0
50           72575.5      70106.5

Rects with translation are a bit faster than quads, and probably better memory-wise (although I haven’t actually verified that). A note on the numbers here: they were gathered from a small sample of 27 ticks for each type with :timer.tc/1. What I measured was the time it took to update the Game of Life world and render it with Scenic.
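For reference, here is roughly what the rect version of cell_graph/2 looks like (a sketch from memory, not copied verbatim from golux): every cell shares the same {width, height} data and is simply moved into place with a translate transform.

def cell_graph(graph, %Golex.Cell{alive: false}), do: graph
def cell_graph(graph, %Golex.Cell{position: {x, y}, alive: true}) do
  Scenic.Primitives.rect(
    graph,
    {@cell_size, @cell_size},
    fill: :white,
    translate: {x * @cell_size, y * @cell_size}
  )
end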

Wrapping up

This turned out to be a fun way to get to know Scenic at least a little bit. I look forward to doing more with this promising framework.

Here’s the code, which uses rect instead of quads and has some additional controls: golux


Creating a statically generated site - the hard way

Forget about installing Jekyll, Next, Hugo, or any of the other mainstream static site generators out there. To make Förvillelser I opted for the Carl Sagan method:

If you wish to make apple pie from scratch, you must first create the universe

I knew that I wanted a site generator that I could mess around with properly. Right now I am very fond of Elixir, so naturally I looked at the Elixir options out there. Obelisk immediately caught my attention because of its nice mix task UX. I followed the README to create my site, and I hit the first roadblock right away: Obelisk didn’t build out of the box. No problem, I updated one of its dependencies and continued. Next I tried to build my site; this time Obelisk crashed, and it wasn’t obvious why. That prompted me to look closer at its GitHub page – maybe someone else had the same problem? Uh oh, lots of PRs with no attention, and the project seemed generally abandoned by its creator. One guy even wanted to take over the project but hadn’t gotten a response :(

Okay. That messed up my plan, since if I hack on this thing I’d like to give the result back to the community, or at least have the option to do so. Forking seemed unfun, as did the prospect of debugging and fixing. I looked at Serum as well: nice, but not quite what I was after. Oh well, time to roll up my sleeves and get cranking on My Own Static Site Generator! How hard can it be!? Haha!

I, as a software developer by trade, of course had some requirements. First I wanted my shiny new site’s pages and posts to be written in Markdown. I wanted these documents to be version controlled. And I wanted to be able to deploy changes to my site with git commit+push.

Nothing out of the ordinary.

I got a first good enough version of Gonz together in an evening or so. I started working on Förvillelser early so that I could feed the Gonz development with fixes and features I needed for my particular site.

After a few more days of sporadic hacking I realized that I needed to publish Förvillelser somewhere to actually deliver value to my customer (me). Around this time other requirements popped up in my head:

  • I want SSL on the thing. http without s is so lame!
  • I don’t want to actually run a server myself. All the dev without the ops for this please.

With these in mind it seemed my choices were either GitHub Pages or Netlify. I really liked the prospect of not having the baked HTML files in my repo, only the raw Markdown files. Not sure why, I guess something something tidy, ridiculous nerd bullshit. That meant using Netlify, because they can build the site for you. Cool beans 😎

I signed up and set up my site. Boom – “Site deploy failed”: make: mix: Command not found. Not sure why I expected them to support Elixir out of the box, but hey, a guy can dream. I started looking around for some neat workaround or whatever. No real luck, but I did find Netlify’s build-image repo – all open source and beautiful. I forked that, added Erlang and Elixir, and sent a PR. Surprisingly it was approved (eventually) and even merged to master!

In conclusion

  1. Build your own static site generator in a non-mainstream programming language X
  2. Build the actual site
  3. Deploy the site somewhere that doesn’t support X
  4. Contribute changes to that place’s stack to add support for X
  5. P R O F I T 🎉

Goodbye Tumblr

After using Tumblr as my blogging platform of choice for the last 10 years, I have decided to move to something else. The move is mostly for philosophical reasons. I think the internet is getting increasingly walled off into corporate silos, and I don’t like it. Tumblr is not a particularly bad or big player in this regard, but it’s definitely part of the problem.

This time I decided to go back to self-hosting, despite the maintenance work that entails. I like the idea of owning all my content and having full control. While I’m at it I can consolidate my personal site and blog. Maybe later I will go even further and host my git repos myself. We’ll see about that though.

I’ll have to come up with some plan for adding all my old Tumblr posts here. I’ve exported the Tumblr blog and got a 309 MB zip file 😅

Content from joel.vorce.se will also be added here, and I will probably point that domain here as well.