Quick answer: 120 GET/60s, 30 POST/60s, 15 DELETE/60s per account.
Hit the limit? Respect the Retry-After header. Want to avoid 429s? Use the client-side rate limiter below.
Rate limit tiers
| Tier | Methods | Limit |
|---|---|---|
| Read | GET, HEAD, OPTIONS | 120 per 60s |
| Write | POST, PUT, PATCH | 30 per 60s |
| Delete | DELETE | 15 per 60s |
Action-specific limits
Some write endpoints have stricter limits that apply on top of the general write tier:

| Action | Endpoint | Limit |
|---|---|---|
| Connect account | POST /x/accounts | 3 per 15 minutes |
| Follow | POST /x/users/{id}/follow | 20 per 60s, 400 per day |
How limits work
Xquik uses a fixed window counter:
Window starts on first request
The first request from your account starts a 60-second window for that tier.
Each request increments the counter
Every request within the window increments the counter for its tier (read, write, or delete).
Counter resets after window expires
After 60 seconds, the window resets and the counter returns to zero.
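Taken together, the three steps above amount to one small counter per tier. A minimal sketch (the class name and shape are illustrative, not part of any Xquik SDK):

```javascript
// Minimal fixed-window counter mirroring the server-side behaviour
// described above. Limits and the 60 s window come from the tier table.
class FixedWindowCounter {
  constructor(limit, windowMs) {
    this.limit = limit;       // max requests per window (e.g. 120 for reads)
    this.windowMs = windowMs; // window length (60 000 ms for Xquik)
    this.windowStart = null;  // set by the first request
    this.count = 0;
  }

  // Returns true if a request is allowed right now,
  // false if this window's budget is exhausted.
  allow(now = Date.now()) {
    if (this.windowStart === null || now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // first request starts (or restarts) the window
      this.count = 0;         // counter resets after the window expires
    }
    if (this.count < this.limit) {
      this.count += 1;
      return true;
    }
    return false;
  }
}
```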
Response headers
When you exceed the rate limit, the response includes:

| Header | Description |
|---|---|
| Retry-After | Seconds to wait before retrying |
Always wait the full Retry-After duration before retrying. Sending requests before the window resets may extend your cooldown.
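A minimal sketch of honouring Retry-After on a 429 (the doRequest callback and lower-case header shape are assumptions; adapt to your HTTP client):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Parse a Retry-After value (seconds) into milliseconds to wait,
// falling back to a default when the header is missing or malformed.
function retryAfterMs(headerValue, defaultMs = 60000) {
  const seconds = Number(headerValue);
  return Number.isFinite(seconds) && seconds >= 0 ? seconds * 1000 : defaultMs;
}

// Retry a request only after the server-mandated cooldown has passed.
// `doRequest` is a stand-in for your HTTP call, not an Xquik client method.
async function requestWithRetry(doRequest) {
  while (true) {
    const res = await doRequest();
    if (res.status !== 429) return res;
    // Wait out the cooldown instead of hammering the window.
    await sleep(retryAfterMs(res.headers["retry-after"]));
  }
}
```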
Client-side rate limiter
Prevent hitting server-side limits by implementing a client-side rate limiter. This is more efficient than relying on 429 responses and backoff.
Rate limiting libraries
Instead of building your own rate limiter, consider these battle-tested libraries:

| Language | Library | Install |
|---|---|---|
| Node.js | bottleneck | npm install bottleneck |
| Node.js | p-limit | npm install p-limit |
| Python | ratelimit | pip install ratelimit |
| Go | rate | go get golang.org/x/time/rate |
bottleneck example
Best practices
Batch operations where possible
Fetch events in larger pages (limit=100) instead of many small requests. 1 request for 100 events is better than 10 requests for 10 events each.
Use webhooks instead of polling
Webhooks deliver events in real time with zero polling overhead. You only receive traffic when something happens. See the webhooks overview.
Cache responses client-side
Monitor and webhook configurations change infrequently. Cache GET responses for list endpoints and invalidate only after mutations (create, update, delete).
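A tiny sketch of that pattern (the cache key scheme and the fetchJson callback are assumptions, not part of any Xquik client):

```javascript
// Client-side cache for list GETs. Configurations change rarely,
// so a cached entry stays valid until a mutation invalidates it.
const cache = new Map();

async function cachedGet(path, fetchJson) {
  if (cache.has(path)) return cache.get(path); // serve without a request
  const data = await fetchJson(path);          // one real GET per path
  cache.set(path, data);
  return data;
}

// Call after any create/update/delete so the next GET refetches.
function invalidate(path) {
  cache.delete(path);
}
```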
Implement exponential backoff
When you receive a 429, wait for the Retry-After duration. If the retry also fails, double the wait time. See the error handling guide for complete retry implementations.
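The doubling rule can be sketched as a pure helper that yields the successive wait times (the function name and the cap are illustrative):

```javascript
// Exponential backoff: start from the server's Retry-After value,
// double on each subsequent failure, and cap the wait.
function backoffDelays(retryAfterSeconds, attempts, capMs = 5 * 60 * 1000) {
  const delays = [];
  let waitMs = retryAfterSeconds * 1000; // first wait honours Retry-After
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(waitMs, capMs));
    waitMs *= 2; // double the wait if the retry also fails
  }
  return delays;
}
```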
Spread requests evenly
Avoid sending all requests in a burst at the start of each window. Spread them evenly across the 60-second window to avoid hitting the limit early.
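One simple way to spread calls: leave windowMs / limit between sends, e.g. 60 000 ms / 120 = 500 ms per read. A sketch (runSpread and the task-array shape are illustrative):

```javascript
const pause = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run tasks sequentially with an even gap, instead of bursting them
// all at the start of the window. `tasks` is an array of async functions.
async function runSpread(tasks, limit, windowMs = 60000) {
  const gapMs = Math.ceil(windowMs / limit); // e.g. 60000 / 120 = 500 ms
  const results = [];
  for (const task of tasks) {
    results.push(await task());
    await pause(gapMs);
  }
  return results;
}
```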