This Xmas I took part in the Kaggle Santa's Workshop Tour 2019 competition. Last year, I managed to get a bronze; code+blog here.

As with the previous year, this was an optimisation problem. Without going into too much detail, the task was to assign 5000 families to a daily schedule over the 100 days before Xmas, where each family had expressed a set of preferences for 10 different days to visit Santa's workshop. Each day was (hard) constrained to hold a minimum of 125 and a maximum of 300 people. The objective was to minimise a score consisting of a penalty for each family not assigned one of its preferred days, plus an "accounting" fee which favoured a smooth assignment (in terms of first differences in daily occupancy) over the 100 days.
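To make the score concrete, here's a Python sketch of the two components. The penalty table and the accounting formula are recalled from the competition rules and may not be exact, so treat the constants as illustrative:

```python
def preference_cost(choice_rank, n):
    """Penalty for a family of n people assigned its choice_rank-th pick.
    (base, per-person) pairs recalled from the rules; treat as illustrative."""
    table = [
        (0, 0), (50, 0), (50, 9), (100, 9), (200, 9),
        (200, 18), (300, 18), (300, 36), (400, 36), (500, 235),
    ]
    if choice_rank < len(table):
        base, per_person = table[choice_rank]
        return base + per_person * n
    return 500 + 434 * n  # assigned a day not on the family's list at all

def accounting_penalty(occupancy):
    """Smoothness fee over daily occupancies: each day's term grows with
    its occupancy above 125 and with the jump to the neighbouring day."""
    total = 0.0
    prev = occupancy[0]  # boundary day compares with itself
    for n in occupancy:
        total += max(0.0, (n - 125) / 400.0 * n ** (0.5 + abs(n - prev) / 50.0))
        prev = n
    return total
```

Note how the accounting term couples adjacent days: a perfectly flat schedule at 125 people per day pays nothing, while spikes are punished exponentially.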

Unlike last year, it became apparent early in the competition that an optimal solution had been found: a few entries with the same score sat at the top of the leaderboard.

It turned out that the problem could be solved (proved to optimality) by formulating it as an Integer Linear Programming problem. I've done some linear programming in the past, and had a go myself. The sheer number of variables involved was too much for an open source solver (CBC or GLPK); it could only be solved with a commercial solver such as CPLEX or Gurobi. I've used Gurobi in the past, and it really is exceptional. The problem is, it is very expensive (10k+), although they offer a free license to academics. The leaderboard quickly became populated with optimal scores from people lucky enough to have access to Gurobi or another commercial solver. It was not a trivial problem to model (it was actually pretty hard, requiring integer linear programming skill), but the competition was somewhat tainted by this.
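For a flavour of the modelling, the core of the formulation can be sketched like this (my own reading of the problem, not the exact model anyone used): binary variables $x_{f,d}$ say whether family $f$ visits on day $d$, $c_{f,d}$ is the preference cost, and $n_f$ the family size:

```latex
\begin{aligned}
\min_{x}\quad & \sum_{f,d} c_{f,d}\, x_{f,d} \;+\; \text{(linearised accounting terms)} \\
\text{s.t.}\quad & \sum_{d} x_{f,d} = 1 && \forall f \\
& 125 \;\le\; \sum_{f} n_f\, x_{f,d} \;\le\; 300 && \forall d \\
& x_{f,d} \in \{0,1\}
\end{aligned}
```

The assignment and occupancy constraints are easy; the hard part is the accounting fee, which is nonlinear in the daily occupancies and has to be linearised, for example with indicator variables over pairs of consecutive-day occupancy levels. That linearisation is what blows up the variable count.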

Nonetheless, I tried to compete with heuristics :) - I want this damn medal!

The first approach I took was to randomly select a single family, and then randomly select a day from their stated preferences to move them to. The move was only considered if the hard constraints (min 125, max 300) for both the current and candidate day were satisfied. If the candidate assignment yielded a better score, it was accepted, and a new random family was tried. This continued until I hit control-c.
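In code, the loop looked something like this. This is a self-contained toy version with made-up sizes and limits, not my actual implementation:

```python
import random

random.seed(0)

N_DAYS, MIN_OCC, MAX_OCC = 5, 4, 40   # toy limits, not the real 125/300
families = [(random.randint(2, 6),                 # family size
             random.sample(range(N_DAYS), 3))      # preferred days, in order
            for _ in range(25)]

def score(assign):
    # toy preference cost: 0 for first choice, 50 per rank, 500 off-list
    return sum(prefs.index(day) * 50 if day in prefs else 500
               for (_, prefs), day in zip(families, assign))

def occupancy(assign):
    occ = [0] * N_DAYS
    for (size, _), day in zip(families, assign):
        occ[day] += size
    return occ

assign = [random.randrange(N_DAYS) for _ in families]  # random start
initial = best = score(assign)
for _ in range(20000):
    f = random.randrange(len(families))
    size, prefs = families[f]
    day, old = random.choice(prefs), assign[f]
    if day == old:
        continue
    occ = occupancy(assign)
    # hard constraints on both the day we leave and the day we join
    if occ[old] - size < MIN_OCC or occ[day] + size > MAX_OCC:
        continue
    assign[f] = day
    new = score(assign)
    if new < best:
        best = new        # accept the improving move
    else:
        assign[f] = old   # revert
```

The real version kept occupancy incrementally updated rather than recomputing it per move.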

This worked quite well initially. I was able to get a reasonable score and could leave it running, checking in once in a while. It soon became apparent that the single move operation was quite limited, so I extended it to include family swaps, where 2 families are selected and, constraints permitting, exchange their assigned days.
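The feasibility check for a swap is gentler than for a single move, because the two family sizes partially cancel on each day. A sketch of the check (function name and shape are mine):

```python
MIN_OCC, MAX_OCC = 125, 300

def swap_feasible(occ, day_a, day_b, size_a, size_b):
    """Would exchanging a family of size_a (on day_a) with a family of
    size_b (on day_b) keep both days inside the hard limits?"""
    if day_a == day_b:
        return True
    new_a = occ[day_a] - size_a + size_b  # day_a loses a, gains b
    new_b = occ[day_b] - size_b + size_a  # day_b loses b, gains a
    return MIN_OCC <= new_a <= MAX_OCC and MIN_OCC <= new_b <= MAX_OCC
```

A day sitting at the 300-person cap can still take part in a swap if the incoming family is no larger than the outgoing one, which a plain single-family move could never achieve.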

By just following this simple heuristic, I was able to stay ahead of the scores from the public kernels. However, as with last year, higher quality public approaches/kernels began to appear, so I had to think of some new approaches.

The previous approach got stuck pretty fast, and I wanted to know for sure whether I was in a local minimum. The problem was that the objective function was quite computationally expensive, requiring iteration over all families (and all days) to compute both the preference and accounting penalties.

After lots of prototyping, I found a way to compute the objective function for a candidate move *very* quickly, using a move delta for both components of the score. This allowed me to assess new moves extremely fast, opening up lots of different possibilities in terms of algorithm implementation.
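The key observation for the accounting side is that each day's term only couples to its neighbour, so moving people on one day touches at most two terms; the preference side of the delta is just the difference of two table lookups. A hedged sketch of the accounting delta (constants recalled from the rules, possibly off):

```python
def accounting_term(n, n_next):
    # per-day accounting contribution (constants illustrative)
    return max(0.0, (n - 125) / 400.0 * n ** (0.5 + abs(n - n_next) / 50.0))

def accounting_full(occ):
    return sum(accounting_term(occ[d], occ[d + 1] if d + 1 < len(occ) else occ[d])
               for d in range(len(occ)))

def accounting_delta(occ, day, change):
    """Delta if occ[day] shifts by `change` people. occ[day] only appears
    in the terms for day-1 and day, so only those are recomputed."""
    days = range(max(0, day - 1), day + 1)
    def local(o):
        return sum(accounting_term(o[d], o[d + 1] if d + 1 < len(o) else o[d])
                   for d in days)
    before = local(occ)
    trial = list(occ)
    trial[day] += change
    return local(trial) - before

# delta agrees with a full recomputation
occ = [130, 210, 180, 160, 140]
trial = list(occ); trial[2] += 5
exact = accounting_full(trial) - accounting_full(occ)
fast = accounting_delta(occ, 2, 5)
```

This turns an O(families + days) evaluation into an O(1) one per candidate move, which is what makes brute force and annealing over millions of moves affordable.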

Working from the first approach, I brute forced all possible moves involving 1-2 families. When an improving move was found, the algorithm restarted from the first family. If it reached the final family without finding a move, the algorithm returned a local-minimum solution (with respect to those move operations).
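The sweep structure, independent of the actual move set, looks like this (first-improvement local search; the objective here is a toy stand-in):

```python
def local_search(assign, score, moves):
    """First-improvement sweep: after each improving move, restart from the
    beginning; a full pass with no improvement certifies a local minimum
    under the given move set."""
    best = score(assign)
    improved = True
    while improved:
        improved = False
        for move in moves(assign):
            trial = move(assign)
            if score(trial) < best:
                assign, best, improved = trial, score(trial), True
                break  # restart the sweep from the first family
    return assign, best

# toy demo: drive an assignment towards a known target with 1-element moves
target = [0, 1, 2, 1, 0]
def mismatch(a): return sum(x != t for x, t in zip(a, target))
def single_moves(a):
    for i in range(len(a)):
        for v in range(3):
            if v != a[i]:
                yield lambda a, i=i, v=v: a[:i] + [v] + a[i + 1:]

result, best = local_search([2, 2, 2, 2, 2], mismatch, single_moves)
```

Swapping in 2-family (or 3-family) move generators is just a different `moves` function; the termination argument is the same.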

It was possible to brute force all 1-2 family combinations in a few seconds using the fast penalty function. The computation time for all combinations of 3-family moves was significantly higher. To handle this, I split the combinations to be considered into a randomised list and placed each unit of work onto a blocking queue. A set of "brute worker" threads consumed from this queue, placing candidate solutions onto a separate incoming-solution queue. A supervisor thread restarted all workers and re-randomised the combination queue upon receiving an improvement in score.
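The producer/consumer skeleton can be sketched with the standard library. My real version also had the supervisor restarting workers on improvement; here workers just drain a queue of candidate chunks and report their best:

```python
import queue
import random
import threading

random.seed(2)
work_q, result_q = queue.Queue(), queue.Queue()

def brute_worker(score):
    while True:
        chunk = work_q.get()     # blocks until a unit of work arrives
        if chunk is None:        # poison pill: shut down
            return
        best = min(chunk, key=score)
        result_q.put((score(best), best))

score = lambda xs: sum(xs)       # stand-in for the fast penalty function
candidates = [[random.randint(0, 9) for _ in range(4)] for _ in range(100)]
random.shuffle(candidates)       # randomise the combinations to explore

workers = [threading.Thread(target=brute_worker, args=(score,)) for _ in range(4)]
for w in workers:
    w.start()
chunks = [candidates[i:i + 10] for i in range(0, len(candidates), 10)]
for c in chunks:                 # one unit of work per chunk
    work_q.put(c)
for _ in workers:                # one pill per worker
    work_q.put(None)
for w in workers:
    w.join()

best_score, best = min(result_q.get() for _ in chunks)
```

`queue.Queue` is thread-safe, so no explicit locking is needed around the work and result queues.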

To escape the local minima, I implemented simulated annealing with a move-selection probability function which favoured less damaging moves (in terms of objective function delta).
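A minimal annealing skeleton with that flavour of acceptance is below; I'm using the standard Metropolis criterion exp(-delta/t), which is one way to make small damage far more likely to pass than large damage (my actual selection function may have differed). The 1-D objective is a toy:

```python
import math
import random

random.seed(3)

def anneal(x0, score, neighbour, t0=10.0, cooling=0.999, steps=20000):
    x, s = x0, score(x0)
    best, best_s = x, s
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        delta = score(y) - s
        # always accept improvements; accept damaging moves with
        # probability exp(-delta / t) -- small damage passes far more
        # often than large damage, and ever less often as t cools
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, s = y, s + delta
            if s < best_s:
                best, best_s = x, s
        t *= cooling
    return best, best_s

# toy: escape the local minima of a bumpy 1-D function
bumpy = lambda x: (x - 3) ** 2 + 4 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
best_x, best_s = anneal(0.0, bumpy, step)
```

Tracking `best` separately from the current state matters: late in the schedule the walker can still drift uphill, but the best solution seen is never lost.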

As with the previous approach, I used multiple simulated-annealing workers, each initialised with its own randomly seeded random-number generator.

The final algorithm was a mix of hill climbing with random moves, brute force, and simulated annealing, all executed concurrently.

In the final few days of the competition I fired up an Amazon EC2 c5.metal spot instance with 96 cores (Cascade Lake Xeon). With these heuristics, and a bit of help from all those cores, I was able to stay in the leaderboard.

I ended up at position 114 out of 1620 - another bronze :) I'm quite happy with that, given that my approach was 100% based on heuristics :]

github/phil8192/kaggle-santa-2019
