All systems are go


Previous Incidents

[Resolved] Deployment issue

This incident lasted less than 1 second.
Fri, 8 Jun 2018
14:48:06 CEST

Today, at 2:23pm CEST, a deployment went wrong, resulting in brief downtime for around half of all requests. For our zero-downtime deployments to work, we automatically spin up new virtual servers before taking the old ones out of rotation. In this particular case, however, the old code was incompatible with a database change introduced by the new version. As a result, the requests still being served by the old servers failed for a period of 2 minutes.
At 2:25pm CEST, the new virtual servers took over and the situation returned to normal.
We’re sorry for the inconvenience and are already working out a strategy to prevent this from happening again.
All systems are go.
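The usual way to avoid this class of failure is to make database changes backward compatible with the code still running on the old servers (often called the expand/contract pattern). Here's a minimal sketch using SQLite; the table and column names are purely illustrative, not our actual schema:

```python
import sqlite3

# Illustrative schema only; not our actual database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, seat TEXT)")

# Old code, still running on the old servers, knows nothing
# about the column the new version introduces:
def old_insert(seat):
    conn.execute("INSERT INTO bookings (seat) VALUES (?)", (seat,))

# Expand step: add the new column with a default, so the old
# write path keeps working while the new servers spin up.
conn.execute("ALTER TABLE bookings ADD COLUMN channel TEXT DEFAULT 'online'")

# Both code versions can now write concurrently without errors.
old_insert("A-1")
conn.execute(
    "INSERT INTO bookings (seat, channel) VALUES (?, ?)",
    ("A-2", "box-office"),
)

rows = conn.execute("SELECT seat, channel FROM bookings ORDER BY seat").fetchall()
print(rows)  # → [('A-1', 'online'), ('A-2', 'box-office')]
```

Only once the old servers are fully drained does the "contract" step (removing or renaming old columns) become safe.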

[Resolved] DNS problems

This incident lasted 56 minutes.
Sun, 27 May 2018
08:19:49 CEST

It looks like an upstream issue with our DNS provider is causing all subdomains to be down.

08:28:08 CEST

We can confirm this is a general problem with Gandi. They reported being under a DDoS attack a little over an hour ago, and are working on resolving the problem.

For us, this means everything is down, including this status page. We'll still update it for future reference.

09:05:32 CEST

Our own monitoring indicates some improvements, but we're waiting for confirmation from Gandi.

09:16:16 CEST

Our metrics tell us the problem is fixed, and that matches what Gandi is reporting.
We're closing this incident for now, but will of course keep monitoring the situation. All systems are go.
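For reference, the kind of check our monitoring runs here boils down to asking the resolver whether a hostname still resolves. A minimal sketch (the function name is ours for illustration; "localhost" stands in for a real subdomain):

```python
import socket

def resolves(hostname, timeout=3.0):
    """Return True if hostname resolves to at least one address."""
    socket.setdefaulttimeout(timeout)
    try:
        return len(socket.getaddrinfo(hostname, None)) > 0
    except socket.gaierror:
        return False

# "localhost" resolves via the local hosts file, so no network is needed.
print(resolves("localhost"))  # → True
```

During an upstream DNS outage like this one, a check of this shape fails for every subdomain at once, which is exactly the signature we saw.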

[Resolved] API 503 Service Unavailable errors

This incident lasted 40 minutes.
Mon, 14 May 2018
04:34:13 CEST

Our API is currently responding with 503 Service Unavailable to an estimated 80% of requests. This is affecting seating chart rendering and the web app. We're sorry, we're on it, and we will get back with an update within the next 15 minutes.

04:49:00 CEST

After restarting all of the services, we're seeing normal numbers again. We'll continue monitoring the situation and will of course post updates here. Expect to hear from us again within the next 30 minutes.

05:15:08 CEST

The API and all other systems have been running normally for the past 30 minutes, so we can close this incident for now. Naturally, we will investigate and fix the root cause. We're sorry for the trouble this has caused!

15:58:39 CEST

We hunted down the root cause of last night's problem: a bug in the best-available-seat API endpoint caused a deadlock under high load. This, in turn, caused the other API endpoints to fail, and ultimately seating chart renderings as well.
We deployed a fix earlier today; all systems are go.
Please do get in touch in case of questions!
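We haven't published the endpoint's internals, but the most common shape of this bug is a lock-ordering deadlock: two requests acquire the same two locks in opposite order and wait on each other forever. A hedged sketch of the problem class and the standard fix (all names here are illustrative, not our actual code):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def with_both_locks(first, second, work):
    # Fix: always acquire locks in one global order (here, by id),
    # so two concurrent requests can never each hold the lock the
    # other is waiting for.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            work()

results = []
# Two "requests" that name the locks in opposite order, as under high load:
t1 = threading.Thread(target=with_both_locks,
                      args=(lock_a, lock_b, lambda: results.append(1)))
t2 = threading.Thread(target=with_both_locks,
                      args=(lock_b, lock_a, lambda: results.append(2)))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # → [1, 2]: both complete, no deadlock
```

Without the sorting step, the same two threads could interleave into a permanent mutual wait, which is the failure mode that then starved the rest of the API.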

No further notices from the past 90 days.