Xquik enforces rate limits to ensure fair usage and platform stability. Limits are applied per user account using a fixed window counter.
Quick answer: 120 GET/60s, 30 POST/60s, 15 DELETE/60s per account. If you hit a limit, respect the Retry-After header. To avoid 429s entirely, use the client-side rate limiter below.

Rate limit tiers

Tier     Methods               Limit
Read     GET, HEAD, OPTIONS    120 per 60s
Write    POST, PUT, PATCH      30 per 60s
Delete   DELETE                15 per 60s

Action-specific limits

Some write endpoints have stricter limits that apply on top of the general write tier:
Action            Endpoint                     Limit
Connect account   POST /x/accounts             3 per 15 minutes
Follow            POST /x/users/{id}/follow    20 per 60s, 400 per day
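Endpoints like follow are subject to more than one window at once (the per-minute cap and the per-day cap, plus the general write tier). One way to model this client-side is to check every window before sending. A minimal sketch, assuming only the limits shown in the table above; this is illustrative, not part of the Xquik SDK:

```javascript
// Guards a request against several fixed windows at once: the request is
// allowed only if every window still has capacity, and counts against all.
class MultiWindowLimit {
  constructor(limits) {
    // limits: array of { max, windowMs }
    this.windows = limits.map((l) => ({ ...l, count: 0, start: 0 }));
  }

  allow(now = Date.now()) {
    // Reset any window that has expired.
    for (const w of this.windows) {
      if (now - w.start >= w.windowMs) {
        w.start = now;
        w.count = 0;
      }
    }
    // Reject if any window is already full.
    if (this.windows.some((w) => w.count >= w.max)) return false;
    for (const w of this.windows) w.count += 1;
    return true;
  }
}

// Follow endpoint: 20 per 60s AND 400 per day (limits from the table above).
const followGuard = new MultiWindowLimit([
  { max: 20, windowMs: 60_000 },
  { max: 400, windowMs: 86_400_000 },
]);
```

Before each follow request, call `followGuard.allow()` and only send when it returns true.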

How limits work

Xquik uses a fixed window counter:

1. Window starts on first request. The first request from your account starts a 60-second window for that tier.

2. Each request increments the counter. Every request within the window increments the counter for its tier (read, write, or delete).

3. Counter resets after window expires. After 60 seconds, the window resets and the counter returns to zero.

4. Requests rejected when limit is reached. Once the counter reaches the tier limit within a window, subsequent requests return 429 Too Many Requests until the window resets.
Read tier (120 per 60s):

Time 0s:   [0/120]  → Send 50 GET requests  → [50/120]
Time 10s:  [50/120] → Send 70 GET requests  → [120/120] (limit reached)
Time 30s:  [120/120] → GET request → 429 (rejected, window active)
Time 60s:  [0/120]  → Window resets → requests allowed again
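The steps and trace above can be sketched as a small counter class. This is a simplified in-memory illustration of the fixed-window algorithm, not Xquik's actual server-side implementation (which would track counters per account, per tier, in shared storage):

```javascript
// Minimal fixed-window counter: allow() returns true if the request fits
// in the current window, false if it should be rejected with a 429.
class FixedWindowCounter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.count = 0;
    this.windowStart = null; // no window until the first request
  }

  allow(now = Date.now()) {
    // First request, or the previous window expired: start a fresh window.
    if (this.windowStart === null || now - this.windowStart >= this.windowMs) {
      this.windowStart = now;
      this.count = 0;
    }
    if (this.count >= this.limit) return false; // limit reached → reject
    this.count += 1;
    return true;
  }
}
```

Replaying the read-tier trace: 120 calls to `allow()` within the window succeed, the 121st returns false, and a call at the 60-second mark succeeds again.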

Response headers

When you exceed the rate limit, the response includes:
Header        Description
Retry-After   Seconds to wait before retrying
Example 429 response:
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: application/json

{ "error": "rate_limit_exceeded", "message": "Too many requests. Try again later.", "retryAfter": 60 }
Always respect the Retry-After header. Sending requests before the window resets may extend your cooldown.
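Honoring Retry-After can be as simple as a fetch wrapper that waits out the header before retrying once. A sketch, assuming the numeric-seconds form shown in the example response above (per the HTTP spec, Retry-After may also be an HTTP-date, which this sketch does not handle):

```javascript
// Parse a Retry-After header value (seconds) into milliseconds,
// falling back to a default when the header is missing or malformed.
function retryAfterMs(headerValue, fallbackMs = 60_000) {
  if (headerValue == null) return fallbackMs;
  const seconds = Number(headerValue);
  return Number.isFinite(seconds) && seconds >= 0 ? seconds * 1000 : fallbackMs;
}

// Fetch wrapper: on a 429, wait out Retry-After, then retry once.
async function fetchWithRetryAfter(url, options = {}) {
  const response = await fetch(url, options);
  if (response.status !== 429) return response;

  const waitMs = retryAfterMs(response.headers.get("Retry-After"));
  await new Promise((resolve) => setTimeout(resolve, waitMs));
  return fetch(url, options); // single retry; escalate to backoff if it fails again
}
```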

Client-side rate limiter

Prevent hitting server-side limits by implementing a client-side rate limiter. This is more efficient than relying on 429 responses and backoff.
// Client-side fixed-window limiter: acquire() resolves when a request
// slot is available. Note: this simple sketch is not safe for many
// concurrent callers, since waiters all reset the same shared counter.
class WindowRateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.count = 0;
    this.windowStart = Date.now();
  }

  async acquire() {
    const now = Date.now();

    // Current window has expired: start a fresh one.
    if (now - this.windowStart >= this.windowMs) {
      this.count = 0;
      this.windowStart = now;
    }

    // Window is full: sleep until it resets, then start a new window.
    if (this.count >= this.maxRequests) {
      const waitMs = this.windowMs - (now - this.windowStart);
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      this.count = 0;
      this.windowStart = Date.now();
    }

    this.count += 1;
  }
}

// Read tier: 120 requests per 60s
const readLimiter = new WindowRateLimiter(120, 60_000);

async function apiRequest(url) {
  await readLimiter.acquire();
  return fetch(url, {
    headers: { "x-api-key": "xq_YOUR_KEY_HERE" },
  });
}

Rate limiting libraries

Instead of building your own rate limiter, consider these battle-tested libraries:
Language   Library      Install
Node.js    bottleneck   npm install bottleneck
Node.js    p-limit      npm install p-limit
Python     ratelimit    pip install ratelimit
Go         rate         go get golang.org/x/time/rate
bottleneck example
import Bottleneck from "bottleneck";

const limiter = new Bottleneck({
  reservoir: 120,           // 120 requests per window
  reservoirRefreshAmount: 120,
  reservoirRefreshInterval: 60_000, // 60 seconds
  maxConcurrent: 5,
});

const response = await limiter.schedule(() =>
  fetch("https://xquik.com/api/v1/events?limit=50", {
    headers: { "x-api-key": "xq_YOUR_KEY_HERE" },
  })
);

Best practices

- Batch requests: Fetch events in larger pages (limit=100) instead of many small requests. 1 request for 100 events is better than 10 requests for 10 events each.
- Use webhooks: Webhooks deliver events in real time with zero polling overhead. You only receive traffic when something happens. See the webhooks overview.
- Cache responses: Monitor and webhook configurations change infrequently. Cache GET responses for list endpoints and invalidate only after mutations (create, update, delete).
- Honor Retry-After: When you receive a 429, wait for the Retry-After duration. If the retry also fails, double the wait time. See the error handling guide for complete retry implementations.
- Spread requests: Avoid sending all requests in a burst at the start of each window. Spread them evenly across the 60-second window to avoid hitting the limit early.
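The "double the wait time" advice can be captured in a small backoff helper. A sketch; the cap value here is an arbitrary choice for illustration, not an Xquik requirement:

```javascript
// Wait before retry attempt n (0-based): start from the server-provided
// Retry-After value and double it on each subsequent failure, capped so
// the delay cannot grow without bound.
function backoffMs(retryAfterSeconds, attempt, capMs = 8 * 60_000) {
  const base = retryAfterSeconds * 1000;
  return Math.min(base * 2 ** attempt, capMs);
}
```

With Retry-After: 60, the first retry waits 60s, the second 120s, the third 240s, and later retries hold at the 8-minute cap.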

Related pages

Error Handling: retry strategies and error recovery patterns.
API Overview: base URL, authentication, and conventions.