Rate Limiting

How Whisper's proactive rate limiter keeps you under Riot's limits

How It Works

Whisper uses proactive rate limiting. Instead of waiting for a 429 response and retrying, the rate limiter reads Riot's rate limit headers (X-App-Rate-Limit, X-App-Rate-Limit-Count, X-Method-Rate-Limit, X-Method-Rate-Limit-Count) from every response and tracks how many requests remain in each time window. When a window is near its limit, requests are queued and released only when capacity is available.

This means you rarely see 429 errors at all -- the limiter prevents them before they happen.
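To make the header-tracking idea concrete, here is a minimal sketch (not Whisper's actual internals) of parsing Riot's `limit:window` header format and deciding whether another request fits. The header value `20:1,100:120` means 20 requests per 1-second window and 100 per 120-second window; the matching `-Count` header uses the same format for requests already consumed.

```typescript
interface RateWindow {
  limit: number;   // max requests allowed in this window
  seconds: number; // window length in seconds
  count: number;   // requests already used (from the -Count header)
}

// Parse paired X-*-Rate-Limit / X-*-Rate-Limit-Count header values.
function parseLimits(limitHeader: string, countHeader: string): RateWindow[] {
  const limits = limitHeader.split(',').map((p) => p.split(':').map(Number));
  const counts = countHeader.split(',').map((p) => p.split(':').map(Number));
  return limits.map(([limit, seconds], i) => ({
    limit,
    seconds,
    count: counts[i]?.[0] ?? 0,
  }));
}

// A request may be sent only if every window still has capacity.
function hasCapacity(windows: RateWindow[]): boolean {
  return windows.every((w) => w.count < w.limit);
}

const windows = parseLimits('20:1,100:120', '20:1,40:120');
console.log(hasCapacity(windows)); // false: the 1-second window is exhausted
```

A limiter built on this check queues the request instead of sending it whenever `hasCapacity` returns false.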

Automatic Behavior

Rate limiting is enabled by default. No configuration required:

import { createClient } from '@wardbox/whisper/core';

// Rate limiting is active out of the box
const client = createClient({
  apiKey: 'RGAPI-your-key-here',
});

The first few requests to a new endpoint run without limits (since the limiter hasn't seen headers yet). After the first response, the limiter calibrates to the exact limits Riot returns.
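The calibration step can be sketched roughly like this (an illustrative model, not Whisper's actual code): per-endpoint state starts empty, requests pass through unthrottled, and the first response's headers populate the windows to enforce from then on.

```typescript
// Hypothetical per-endpoint limiter state.
class EndpointState {
  private limits: { limit: number; seconds: number }[] | null = null;

  // True until the first response for this endpoint has been seen.
  get uncalibrated(): boolean {
    return this.limits === null;
  }

  // Called with the X-Method-Rate-Limit header value of each response.
  calibrate(limitHeader: string): void {
    this.limits = limitHeader.split(',').map((part) => {
      const [limit, seconds] = part.split(':').map(Number);
      return { limit, seconds };
    });
  }
}

const state = new EndpointState();
console.log(state.uncalibrated); // true: no headers seen yet
state.calibrate('500:10');
console.log(state.uncalibrated); // false: now enforcing 500 requests / 10s
```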

Three Types of Rate Limits

Riot enforces three levels of rate limiting:

App-Level Limits

Global limits that apply across all endpoints for your API key. Tracked via the X-App-Rate-Limit and X-App-Rate-Limit-Count headers. A typical development key has limits like 20:1,100:120 (20 requests per second, 100 requests per 2 minutes).

Method-Level Limits

Per-endpoint limits that apply to a specific API method. Tracked via the X-Method-Rate-Limit and X-Method-Rate-Limit-Count headers. Different endpoints have different method limits -- match history might allow 500 requests per 10 seconds while summoner lookups allow 2000 in the same window.

Service-Level Limits (429)

When Riot's infrastructure is under load, you may receive a 429 with no X-Rate-Limit-Type header. These are service-level rate limits and cannot be predicted. Whisper handles them with exponential backoff.
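Since service-level 429s carry no predictable schedule, exponential backoff is the standard response. A minimal sketch of the idea (the base delay, jitter, and attempt cap here are assumptions, not Whisper's documented values):

```typescript
const BASE_DELAY_MS = 1000; // assumed base delay
const MAX_ATTEMPTS = 3;

// attempt 0 -> 1s, attempt 1 -> 2s, attempt 2 -> 4s
function backoffDelay(attempt: number): number {
  return BASE_DELAY_MS * 2 ** attempt;
}

// Retry a failing async operation with exponentially growing waits.
async function withBackoff<T>(fn: () => Promise<T>): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= MAX_ATTEMPTS) throw err;
      await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
    }
  }
}
```

Doubling the wait on each attempt gives an overloaded service progressively more room to recover instead of hammering it at a fixed interval.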

Configuration

You can customize the rate limiter behavior:

import { createClient } from '@wardbox/whisper/core';

const client = createClient({
  apiKey: 'RGAPI-your-key-here',
  rateLimiter: {
    // Throw immediately instead of queuing when rate limited
    throwOnLimit: true,

    // Maximum requests to queue before rejecting
    maxQueueSize: 100,

    // Per-request timeout in milliseconds
    requestTimeout: 30000,

    // Callback when a rate limit is encountered
    onRateLimit: (scope, retryAfter) => {
      console.log(`Rate limited on ${scope}, retry in ${retryAfter}ms`);
    },
  },
});

To disable rate limiting entirely:

const client = createClient({
  apiKey: 'RGAPI-your-key-here',
  rateLimiter: false,
});

Handling 429s

Even with proactive limiting, 429 responses can still occur in edge cases:

  • Race conditions between concurrent requests
  • Shared API keys where another application consumes quota
  • Service-level 429s from Riot infrastructure load

When a 429 does occur, Whisper automatically retries:

  • App/method 429s: Retries using the Retry-After header value
  • Service 429s: Retries with exponential backoff
  • Up to 3 retry attempts per request before throwing a RateLimitError
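The retry policy above can be sketched as a delay-selection function (a hypothetical helper, not part of Whisper's exported API): honor Retry-After when Riot identifies the limit as app- or method-level, and fall back to exponential backoff when X-Rate-Limit-Type is absent (a service 429).

```typescript
interface RateLimitedResponse {
  status: number;
  headers: Record<string, string>; // lowercased header names
}

function retryDelayMs(res: RateLimitedResponse, attempt: number): number {
  const retryAfter = res.headers['retry-after'];
  const type = res.headers['x-rate-limit-type']; // 'application' | 'method' | absent
  if ((type === 'application' || type === 'method') && retryAfter) {
    return Number(retryAfter) * 1000; // Retry-After is given in seconds
  }
  return 1000 * 2 ** attempt; // service 429: exponential backoff
}

const res = { status: 429, headers: { 'retry-after': '5', 'x-rate-limit-type': 'method' } };
console.log(retryDelayMs(res, 0)); // 5000
```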

How It Compares

Most Riot API wrappers use reactive rate limiting -- they send the request, get a 429 back, wait, and retry. This wastes a request and adds latency.

Whisper's proactive approach parses the rate limit headers from every successful response and maintains a token bucket per scope. Requests are held in a queue until a token is available, preventing the 429 from happening in the first place.
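A per-scope token bucket can be sketched as follows (illustrative only; Whisper's internal data structure may differ). Tokens refill continuously at `limit / windowSeconds` per second, and each request consumes one token; a request that finds the bucket empty waits in the queue.

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly limit: number,
    private readonly windowSeconds: number,
    now = Date.now(),
  ) {
    this.tokens = limit; // bucket starts full
    this.lastRefill = now;
  }

  // Refill based on elapsed time, then try to consume one token.
  tryAcquire(now = Date.now()): boolean {
    const elapsed = (now - this.lastRefill) / 1000;
    const refillRate = this.limit / this.windowSeconds; // tokens per second
    this.tokens = Math.min(this.limit, this.tokens + elapsed * refillRate);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// One bucket per scope, e.g. the app limit plus one per method.
const buckets = new Map<string, TokenBucket>();
buckets.set('app', new TokenBucket(20, 1)); // 20 requests per second
console.log(buckets.get('app')!.tryAcquire()); // true: bucket starts full
```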
