Rate Limiting Overview

Our platform employs a distributed rate limiting system to keep performance consistent and allocate resources fairly. Rate limits vary based on your subscription plan:

Plan-Based Limits

  1. Basic Plan:

    • 1,000 requests per minute (RPM)
    • Resets every minute
    • Perfect for most applications
    • Tracked via remaining_minute counter
  2. Pro Plan:

    • 5,000 requests per minute (RPM)
    • Higher throughput for demanding applications
    • Priority request processing
    • Tracked via remaining_minute counter
  3. Enterprise Plan:

    • No rate limits (unlimited request throughput)
    • Custom infrastructure
    • Performance monitoring included

Enterprise customers run on dedicated infrastructure, so performance holds up at any scale without a fixed per-minute cap.

For information about where to find rate limit data in responses, see our Response Structure documentation.
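
As a rough illustration of reading that counter in client code, the sketch below assumes the remaining_minute value is returned in the response body under rate_limit.remaining_minute. That path is a placeholder; the actual field location is defined in the Response Structure documentation.

// Sketch: watch the per-minute counter and warn as it runs low.
// NOTE: rate_limit.remaining_minute is a placeholder path; check the
// Response Structure docs for where the counter actually appears.
async function callApi(url) {
  const response = await fetch(url);
  const body = await response.json();

  const remaining = body.rate_limit?.remaining_minute;
  if (remaining !== undefined && remaining < 50) {
    console.warn(`Only ${remaining} requests left this minute; slowing down`);
  }
  return body;
}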

Caching and Rate Limits

Our caching system helps you optimize your rate limit usage:

  • Public endpoints (like profile information) are automatically cached
  • Cached responses don’t count towards your rate limits
  • Use ?fresh=true when you need real-time data
  • Cache duration varies by endpoint type

Routing repeat reads through cached endpoints effectively gives you headroom beyond your plan's stated limits, since those responses never touch your quota.
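
A minimal client-side sketch, combining the documented ?fresh=true parameter with a short-lived local cache (the endpoint URL below is a placeholder):

// Sketch: request real-time data only when needed, and keep a local
// cache so repeat reads never reach the API at all.
// The URL is a placeholder; substitute your actual endpoint.
const localCache = new Map();

async function getProfile(userId, { fresh = false } = {}) {
  const url =
    `https://api.example.com/profiles/${userId}` + (fresh ? "?fresh=true" : "");

  // Serve from the local cache unless the caller demands fresh data
  if (!fresh && localCache.has(url)) return localCache.get(url);

  const response = await fetch(url);
  const data = await response.json();
  localCache.set(url, data);
  return data;
}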

Rate Limit Monitoring

Track your rate limit usage through:

  1. Response Headers (see the sketch after this list)

    • Current limits
    • Remaining requests
    • Reset timers
  2. Dashboard Analytics

    • Usage patterns
    • Peak usage times
    • Rate limit warnings
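
A sketch of reading those headers after each request. The header names below follow the common X-RateLimit-* convention and are assumptions; substitute the names your responses actually carry (see the Response Structure documentation).

// Sketch: log rate limit state from response headers.
// Header names are assumed (X-RateLimit-* convention); adjust to match
// the headers your responses actually include.
async function logRateLimitState(url) {
  const response = await fetch(url);

  const limit = response.headers.get("x-ratelimit-limit");         // current limit
  const remaining = response.headers.get("x-ratelimit-remaining"); // requests left
  const reset = response.headers.get("x-ratelimit-reset");         // reset timer

  console.log(`Rate limit: ${remaining}/${limit}, resets at ${reset}`);
  return response;
}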

Best Practices

Implement these strategies for optimal throughput:

  1. Monitor Your Limits

    • Track usage patterns
    • Plan for limit resets
    • Set up alerts before limits are reached
  2. Optimize Request Patterns (see the throttle sketch after this list)

    • Distribute requests evenly
    • Avoid request bursts
    • Use batch operations when available
  3. Handle Rate Limits Gracefully

    • Implement exponential backoff
    • Queue requests when near limits
    • Use cached responses when possible
  4. Cache Strategy

    • Leverage cached responses
    • Only use ?fresh=true when necessary
    • Implement local caching when appropriate
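
As a sketch of point 2, spacing requests evenly rather than firing them in bursts: 1,000 RPM on the Basic plan works out to roughly one request every 60 ms, so a simple client-side throttle might look like this.

// Sketch: space requests evenly to avoid bursts.
// 1,000 RPM on the Basic plan is about one request every 60 ms;
// the interval below leaves a little headroom.
const MIN_INTERVAL_MS = 75;
let lastRequestAt = 0;

async function throttledFetch(url) {
  const wait = lastRequestAt + MIN_INTERVAL_MS - Date.now();
  if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
  lastRequestAt = Date.now();
  return fetch(url);
}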

Error Handling

When you exceed rate limits:

  • The response has status code 429 Too Many Requests
  • The retry-after header indicates when the limit resets
  • Implement automatic retry with backoff, as in the example below

Example backoff strategy:

async function fetchWithRetry(url, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url);
      if (response.status !== 429) return response;

      // Prefer the server's retry-after header (in seconds); otherwise
      // fall back to exponential backoff: 1s, 2s, 4s, ...
      const retryAfter =
        Number(response.headers.get("retry-after")) || Math.pow(2, i);
      await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    } catch (error) {
      // Network failure: rethrow only once retries are exhausted
      if (i === maxRetries - 1) throw error;
    }
  }
  throw new Error(`Still rate limited after ${maxRetries} attempts`);
}
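
Usage is straightforward; the endpoint URL below is a placeholder:

// Inside an async context:
try {
  const response = await fetchWithRetry("https://api.example.com/v1/profiles/123");
  const data = await response.json();
  console.log(data);
} catch (error) {
  // Still rate limited (or a network failure) after all retries
  console.error(error);
}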

Enterprise Options

For high-volume requirements:

  • No rate limits
  • Dedicated infrastructure
  • Custom performance tuning
  • Advanced monitoring tools

Contact our enterprise team to discuss your specific needs.