This year, like last year, I participated in Advent of Code, an online event where a new algorithmic puzzle is unlocked every day of December up to and including the 25th. Each day, the same puzzle idea yields two small exercises, one typically easier than the other.

I chose Rust, like last year, to solve those problems. You can find my repository here.

A bit of write-up for the days I found interesting

Most puzzles ranged from interesting to very interesting. The first week or so is easy to solve as long as you have a good handle on simple data structures and logic branching.

A word on boilerplate

In order to speed up coding and testing, I spent the first couple of days perfecting a boilerplate system, learning about Rust's workspace crate mechanism along the way. The setup was very simple:

  • A shell day crate that could be copied, and whose content could be changed to deploy the simple boilerplate code for the day
  • A benchmarking area
  • A lot of testing that the code gave the right results for tests and answers

The shell crates I made had the basic bin/lib structure: all of my logic resided in methods called solve_part_one and solve_part_two in src/, which could be imported by the higher-level crate for testing, or run directly against sample inputs.
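
As an illustration, here is a minimal sketch of what such a shell crate's src/lib.rs could look like. The solve_part_one/solve_part_two names come from the text; the signatures, return types, and placeholder logic are my assumptions:

```rust
// src/lib.rs of a hypothetical shell day crate.
// Each solver takes the raw puzzle input and returns a numeric answer;
// the actual logic is replaced every day.
pub fn solve_part_one(input: &str) -> usize {
    // Placeholder: count non-empty lines of the input.
    input.lines().filter(|l| !l.is_empty()).count()
}

pub fn solve_part_two(input: &str) -> usize {
    // Placeholder building on part one.
    input.lines().filter(|l| !l.is_empty()).count() * 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn sample_input() {
        let sample = "1\n2\n\n3\n";
        assert_eq!(solve_part_one(sample), 3);
        assert_eq!(solve_part_two(sample), 6);
    }
}
```

The binary in the same crate can then just read the real input and print both answers, while the workspace-level crate imports the library for benchmarks and answer tests.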

Day 1: Slice Windows

Day 1 had me a little stumped, but not for any valid reason: I had just never thought about the concept of a window iterator in Rust. I was sure there had to be something like that in the standard library, I just did not know where. It turns out the slice primitive type is equipped with a windows method that yields an iterator over overlapping runs of a given number of elements from the original slice.
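
Day 1 asked us to count how often a value in a list increases from the previous one, and windows makes that a one-liner. A minimal sketch, with some sample depth readings:

```rust
fn count_increases(depths: &[u32]) -> usize {
    // windows(2) yields overlapping slices of length 2: [a, b], [b, c], ...
    depths.windows(2).filter(|w| w[1] > w[0]).count()
}

fn main() {
    let depths = [199, 200, 208, 210, 200, 207, 240, 269, 260, 263];
    println!("{}", count_increases(&depths)); // prints 7 for this sample
}
```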

Day 5: HashMap is good for grids (in moderation)

Continuing a theme from last year: using std::collections::HashMap instead of a fixed-size grid can turn out to be a life-saving spark of genius if your puzzle input is larger than expected, your data is sparse, or your coordinate space expands into negative coordinates. Last year, the various versions of Conway's Game of Life had us play around with multi-dimensional grids that could expand to unreasonable sizes in any direction, and I found that storing one kind of state as coordinates in a HashMap (or HashSet), and treating absence from that structure as the other kind, can be fairly useful.
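
A minimal sketch of the pattern, using a hypothetical sparse Conway-style grid where only live cells are stored and absence means dead:

```rust
use std::collections::HashSet;

type Point = (i64, i64);

// Count the live neighbours of a cell. Coordinates may be negative,
// and the grid never needs an explicit boundary or resizing.
fn live_neighbours(live: &HashSet<Point>, (x, y): Point) -> usize {
    let mut n = 0;
    for dx in -1..=1i64 {
        for dy in -1..=1i64 {
            if (dx, dy) != (0, 0) && live.contains(&(x + dx, y + dy)) {
                n += 1;
            }
        }
    }
    n
}

fn main() {
    // Only the live cells are stored; everything else is implicitly dead.
    let live: HashSet<Point> = [(0, 0), (0, 1), (-1, 0)].into_iter().collect();
    println!("{}", live_neighbours(&live, (0, 0))); // prints 2
}
```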

Day 23: Hashing is good, in moderation (and done well)

A caveat of the previous point is that hashing still comes at a cost, particularly for complicated data structures that you define yourself. On day 23, I needed to store the states of my puzzle state space that I had already explored, to avoid needlessly repeating computations and filling up memory. To do so, I chose a HashSet and implemented std::hash::Hash on a type that was essentially an enum and 30-ish integers in a trench coat.

My first implementation iterated over every single one of these values and hashed each of them individually:

use std::hash::Hasher;

#[derive(Clone, Copy, PartialEq, Eq, Debug, PartialOrd, Ord, Hash)]
enum Amphipod {
    Amber,
    Bronze,
    Copper,
    Desert,
}

#[derive(Clone, Copy, Debug)]
struct AmphipodPuzzleState {
    cost: usize,
    hallway: [Option<Amphipod>; 11],
    chambers: [[Option<(Amphipod, bool)>; 4]; 4],
    part_one: bool,
}

impl std::hash::Hash for AmphipodPuzzleState {
    fn hash<H: Hasher>(&self, state: &mut H) {
        for potential in self.hallway {
            if let Some(amp) = potential {
                // room_no yields a number between 0 and 3 depending
                // on the value of the enum
                state.write_u8(amp.room_no());
            } else {
                state.write_u8(4);
            }
        }
        // .. You get the idea: every chamber slot gets hashed
        // individually as well
    }
}
This was, as I can see now, horrendously slow. The overhead of a separate hashing call for every single number in the structure was simply too much.

A second version of this implementation built a single number by pushing each value onto a base-5 "number stack" and hashing that (you will see that this is a theme this year). However, it was still quite slow.

In the end, just writing a method that yielded a fixed-length array of numbers, which could then be hashed automatically in a single pass, worked best. The overhead of hashing still slowed the code down, but I managed to get under one second in the release target.
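
A simplified sketch of that final approach. The state here is reduced to the hallway only (the real struct also flattens the chambers), and the encoding values are mine:

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Amphipod { Amber, Bronze, Copper, Desert }

struct State {
    hallway: [Option<Amphipod>; 11],
}

impl State {
    // Flatten the state into a fixed-length array of small integers.
    // The standard Hash implementation for [u8; 11] hashes the whole
    // byte array in one pass instead of one call per field.
    fn key(&self) -> [u8; 11] {
        let mut k = [0u8; 11];
        for (i, slot) in self.hallway.iter().enumerate() {
            k[i] = match slot {
                None => 0,
                Some(Amphipod::Amber) => 1,
                Some(Amphipod::Bronze) => 2,
                Some(Amphipod::Copper) => 3,
                Some(Amphipod::Desert) => 4,
            };
        }
        k
    }
}

fn main() {
    let mut seen: HashSet<[u8; 11]> = HashSet::new();
    let s = State { hallway: [None; 11] };
    assert!(seen.insert(s.key()));  // first visit: inserted
    assert!(!seen.insert(s.key())); // already explored: skipped
    println!("ok");
}
```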

Day 24: Sometimes it's not about simulating, it's about reading and counting

On day 6, we were presented with a problem that seemingly asked us to simulate an unreasonable amount of data. It turns out we only needed to realize that some states were independent of the others, and count.
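
The day 6 puzzle (the lanternfish one) boils down to tracking how many fish share each timer value instead of simulating individuals. A sketch, with the timer rules (values 0..=8, parents reset to 6, children start at 8) as I remember them from the puzzle:

```rust
// Count how many fish share each timer value instead of
// simulating every individual fish.
fn simulate(initial: &[usize], days: usize) -> u64 {
    let mut counts = [0u64; 9]; // index = timer value 0..=8
    for &t in initial {
        counts[t] += 1;
    }
    for _ in 0..days {
        let spawning = counts[0];
        // Every timer ticks down by one; the spawning fish wrap
        // around into slot 8, representing their newborn children.
        counts.rotate_left(1);
        // The parents themselves reset to timer 6.
        counts[6] += spawning;
    }
    counts.iter().sum()
}

fn main() {
    // The population grows exponentially, but the state stays 9 numbers.
    println!("{}", simulate(&[3, 4, 3, 1, 2], 80));
}
```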

On day 14, we were once again tricked. The polymerization exercise demanded an outright impossible simulation. People with working short-term memory remembered the lanternfish exercise from a week prior and figured out you could use dynamic programming to cut down on simulation, and count.
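
The day 14 version of the same idea tracks counts of adjacent character pairs rather than building the exponentially growing polymer string. A sketch, with a made-up rule set:

```rust
use std::collections::HashMap;

// One polymerization step: every pair (a, b) with a rule (a, b) -> c
// becomes the two pairs (a, c) and (c, b), carrying its count along.
fn step(
    pairs: &HashMap<(char, char), u64>,
    rules: &HashMap<(char, char), char>,
) -> HashMap<(char, char), u64> {
    let mut next = HashMap::new();
    for (&(a, b), &n) in pairs {
        if let Some(&c) = rules.get(&(a, b)) {
            *next.entry((a, c)).or_insert(0) += n;
            *next.entry((c, b)).or_insert(0) += n;
        } else {
            *next.entry((a, b)).or_insert(0) += n;
        }
    }
    next
}

fn main() {
    // Hypothetical rule: AB -> C, so "AB" becomes "ACB".
    let rules: HashMap<(char, char), char> = [(('A', 'B'), 'C')].into_iter().collect();
    let pairs: HashMap<(char, char), u64> = [(('A', 'B'), 1)].into_iter().collect();
    let next = step(&pairs, &rules);
    println!("{}", next.len()); // prints 2: the pairs AC and CB
}
```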

On day 24, we were presented with assembly instructions. I spent a lot of time building a robust, modular instruction simulator, confident that we would need to simulate this program.


It turns out, and don't read on if you don't want it spoiled, that the program performed a 14-cycle parametric "hashing" process that essentially pushed and popped base-26 digits (see, I said it would be a theme), computed from the test input and some of the aforementioned parameters. The goal was to get the end value down to 0. Solving day 24 required far more program-structure analysis than pure programming skill.
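
A hypothetical model of what one such block does to z, based on the description above. The parameter names and the simplified pop branch are mine; in the real program the pop step re-pushes a digit when the check fails:

```rust
// Hypothetical model of one of the 14 blocks of the day 24 program:
// z behaves like a stack of base-26 digits.
fn step(z: i64, digit: i64, pushes: bool, offset: i64) -> i64 {
    if pushes {
        // Push: append (digit + offset) as a new base-26 digit.
        z * 26 + digit + offset
    } else {
        // Pop: drop the top base-26 digit. This only truly acts as a
        // pop when the input digit matches the value being popped.
        z / 26
    }
}

fn main() {
    let mut z = 0;
    z = step(z, 5, true, 3);  // push 5 + 3 = 8
    z = step(z, 8, false, 0); // pop it back off
    assert_eq!(z, 0); // a valid model number leaves z at 0 after all 14 blocks
    println!("balanced");
}
```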

Day 22: Papers aren't always helpful

I spent a large amount of my time on day 22 trying to see if someone smarter than me had figured out a general algorithm for computing the true volume of a set of intersecting, axis-aligned cuboids in 3D space. I read a couple of brain-melting papers that solved this efficiently for cubes, and more general versions that were well above my understanding.

I just ended up creating a system where each added cuboid was intersected against every cuboid already in the set and split accordingly, shaving it down into a set of smaller, non-intersecting cuboids. Computing the difference in volume and adding it up then became trivial.
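
A sketch of that splitting scheme, under my own representation (half-open coordinate ranges): subtract carves the non-overlapping part of a cuboid into axis-aligned slabs, leaving a set of disjoint pieces:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Cuboid {
    // inclusive min, exclusive max on each axis
    min: [i64; 3],
    max: [i64; 3],
}

impl Cuboid {
    fn volume(&self) -> i64 {
        (0..3).map(|a| self.max[a] - self.min[a]).product()
    }

    fn intersect(&self, other: &Cuboid) -> Option<Cuboid> {
        let mut min = [0; 3];
        let mut max = [0; 3];
        for a in 0..3 {
            min[a] = self.min[a].max(other.min[a]);
            max[a] = self.max[a].min(other.max[a]);
            if min[a] >= max[a] {
                return None; // empty on this axis: no intersection
            }
        }
        Some(Cuboid { min, max })
    }

    // Split `self` into disjoint cuboids covering `self` minus `other`.
    fn subtract(&self, other: &Cuboid) -> Vec<Cuboid> {
        let Some(overlap) = self.intersect(other) else {
            return vec![*self];
        };
        let mut pieces = Vec::new();
        let mut rest = *self;
        for a in 0..3 {
            // slab strictly below the overlap on axis a
            if rest.min[a] < overlap.min[a] {
                let mut below = rest;
                below.max[a] = overlap.min[a];
                pieces.push(below);
                rest.min[a] = overlap.min[a];
            }
            // slab strictly above the overlap on axis a
            if rest.max[a] > overlap.max[a] {
                let mut above = rest;
                above.min[a] = overlap.max[a];
                pieces.push(above);
                rest.max[a] = overlap.max[a];
            }
        }
        pieces // what remains of `rest` is exactly `overlap`, which is dropped
    }
}

fn main() {
    let a = Cuboid { min: [0, 0, 0], max: [2, 2, 2] };
    let b = Cuboid { min: [0, 0, 0], max: [1, 2, 2] };
    let remaining: i64 = a.subtract(&b).iter().map(Cuboid::volume).sum();
    println!("{}", remaining); // prints 4: half of the 2x2x2 cube is left
}
```

In the scheme described above, the added cuboid is the one carved down against each existing cuboid, so the set stays pairwise disjoint and the total volume is just a sum.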

Binary optimization is madness

The worst runtime I had this year was day 19, before a large optimization I made. In debug target, it ran for about 70 seconds before spitting out the right answer on my input. However, building in the release target reduced the runtime to a mere 3 and a half seconds.

More generally, I have to say that the optimization Rust applies to release builds is mind-boggling. Repeated operations seem to fly by, as if the compiler itself had predicted your data and shrunk the binary down to solving just what could change. The world of compiler optimization looks fascinating, and very complex.

More general things I learned

  • Rust is very powerful, yet not very well suited to fast, dirty problem solving.
  • Zero-cost abstraction is wonderful. I could write entire structures to describe my problems very clearly (like Amphipod/AmphipodPuzzleState for day 23) and have close to no overhead from them.
  • The people who strive for the leaderboard in this event are generally American and do not work jobs that require waking up early

And that's about all I can come up with for now! I want to thank Eric Wastl, creator of Advent of Code, along with the whole team, for the work they put into the event every year. Hopefully I'll see y'all for the next one!

And since this post comes out on January 1st, 2022, I also want to wish everyone a better 2022 than 2021 was.