
Rate limit

The API is rate limited to keep the servers running smoothly when they receive an extremely high number of requests.

This limit is set to 100 concurrent requests per account, meaning at most 100 of your requests can be active at any given time.

When you send more simultaneous requests than that, the API responds with status code 429 - Too Many Requests.

Note that this is not a limit on the number of concurrent ticket buyers (see below).

Also, we don't enforce a maximum on the number of requests per second. It's perfectly fine to do hundreds of fast calls per second, as long as you respect the limit of 100 concurrent requests.

How many ticket buyers can enter my site?

In-browser calls made by rendered charts (e.g. to create a hold token, or to hold a seat) also count towards the rate limit - not just API calls from your server.

100 concurrent requests corresponds to about 100 renderings and 200 booked places per second. In other words: 100 new ticket buyers can enter your site every second and book or hold 2 places, without hitting the concurrent requests rate limit.

Since not all ticket buyers hold or release places at exactly the same time, thousands of people can look at the same chart simultaneously without hitting the rate limit.

Be aware: this depends on a number of factors, such as the size of the chart, whether social distancing rules are in place, other API calls you're doing etc. So it's very important to do proper load testing before you go live.

Calls to book best available seats take a little longer than normal booking calls, so you'll only be able to do about 100 of them per second instead of 200 before hitting the rate limit.

An example

As a practical example, suppose 2,500 ticket buyers are simultaneously looking at the same chart.

If we assume they select 4 places on average, and sessions are enabled, that amounts to 5 API calls per user: 1 call to create the hold token + 4 calls to hold the places.

Furthermore, suppose each user takes about a minute to select their places.

That means 5 x 2,500 = 12,500 API calls per minute, or about 200 per second - which we'll be able to handle without hitting the rate limit.
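The arithmetic in this example can be sanity-checked in a few lines:

```python
# Quick sanity check of the capacity estimate above.
buyers = 2500            # simultaneous ticket buyers looking at the chart
calls_per_buyer = 5      # 1 hold-token call + 4 hold calls (4 places on average)
selection_time_s = 60    # each buyer takes about a minute to select places

calls_per_second = buyers * calls_per_buyer / selection_time_s
print(round(calls_per_second))  # about 208 calls per second
```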

Handling rate limited requests

Status 429 should be handled by a retry mechanism.

You could build that yourself (e.g. through exponential backoff), or just use our API client libraries. They have exponential backoff built in.

So if you use one of these, then you don't have to do anything: the client library will automatically and repeatedly retry requests that fail with status 429.
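If you do build retries yourself, a minimal exponential-backoff sketch could look like the following. It assumes a hypothetical send_request callable that performs one HTTP request and returns a response object with a status_code attribute:

```python
import random
import time

def call_with_backoff(send_request, max_retries=5, base_delay=0.5):
    # Retry requests that fail with HTTP 429, waiting exponentially
    # longer (plus random jitter) before each new attempt.
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        if attempt == max_retries:
            break
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    return response  # still rate limited after all retries
```

The jitter spreads retries out over time, so many clients that were rate limited at the same moment don't all retry in lockstep.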

Higher rate limit

Expecting a higher load than what we allow on our shared instance? A dedicated server might be an option. Please get in touch.