DataBrain includes built-in rate limiting to protect your API against abuse, brute-force attacks, and accidental traffic spikes. In a self-hosted deployment, you have full control over these limits through environment variables.
Rate limiting is enabled by default on all routes. No configuration is required for basic protection, but tuning the values for your specific traffic patterns is recommended.

How It Works

  • Every incoming API request is tracked by the client’s IP address.
  • Each rate limiter defines a time window and a maximum number of requests allowed within that window.
  • If a client exceeds the limit, they receive a 429 Too Many Requests response until the window resets.
  • Standard rate limit headers are included in API responses so clients can monitor their remaining quota.
If DataBrain runs behind a reverse proxy or load balancer (NGINX, AWS ALB, Cloudflare, etc.), make sure the proxy forwards the real client IP. DataBrain checks the X-Client-IP header first and falls back to the standard X-Forwarded-For header (via Express’s built-in trusted proxy support) to identify individual clients. Without proper proxy configuration, all traffic may appear to come from a single IP and get rate-limited together.
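The fixed-window scheme described above can be sketched as follows. This is an illustrative model, not DataBrain's actual implementation; the class and method names are hypothetical:

```typescript
// Minimal sketch of a per-IP fixed-window rate limiter: each IP gets a
// counter that resets when its time window expires. Names are illustrative.
type WindowState = { count: number; windowStart: number };

class FixedWindowLimiter {
  private clients = new Map<string, WindowState>();

  constructor(
    private max: number,      // e.g. RATE_LIMIT=500
    private windowMs: number, // e.g. 2 * 60 * 1000 for a 2-minute window
  ) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(ip: string, now: number = Date.now()): boolean {
    const state = this.clients.get(ip);
    if (!state || now - state.windowStart >= this.windowMs) {
      // New window for this IP: reset the counter.
      this.clients.set(ip, { count: 1, windowStart: now });
      return true;
    }
    state.count += 1;
    return state.count <= this.max;
  }
}
```

Note that the IP used as the map key is whatever the proxy forwards, which is why correct X-Client-IP / X-Forwarded-For configuration matters.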

Environment Variables

Set these in your .env file for the DataBrain API service (Express backend). All values represent the maximum number of requests per client IP within the given time window.

General API Rate Limit

RATE_LIMIT (number, default: 500)
Maximum requests per IP across all API endpoints within a 2-minute window. This is the global rate limiter — it applies to every request before any route-specific limits are checked.
Important: You should always explicitly set this variable in your .env file. If left unset, the rate limiter may not enforce the intended default of 500 requests.

Authentication Rate Limits

These protect login and identity-related endpoints against brute-force and credential-stuffing attacks.
AUTH_ROUTE_RATE_LIMIT (number, default: 30)
Maximum requests per IP to authentication endpoints within a 1-minute window. Covers sign-in, sign-up, SSO, password reset, invitation acceptance, and related auth flows.
REFRESH_ROUTE_RATE_LIMIT (number, default: 30)
Maximum requests per IP to the token refresh endpoint within a 1-minute window. Controls how frequently a client can request new access tokens.
OTP_RATE_LIMIT (number, default: 50)
Maximum requests per IP to OTP (one-time password) endpoints within a 1-minute window. Covers OTP generation and verification.

Other Rate Limits

EMAIL_RATE_LIMIT (number, default: 50)
Maximum requests per IP to email-sending endpoints within a 1-minute window. Covers invitation emails, verification re-sends, and scheduled report triggers.
ONBOARDING_DEMO_DATABASE_RATE_LIMIT (number, default: 50)
Maximum requests per IP to the demo database onboarding endpoint within a 1-minute window. Only relevant if you use the built-in demo database onboarding flow.

Quick Reference

Variable                              Default   Window   Protects
RATE_LIMIT                            500       2 min    All API routes (global)
AUTH_ROUTE_RATE_LIMIT                 30        1 min    Login, sign-up, SSO, password reset
REFRESH_ROUTE_RATE_LIMIT              30        1 min    Token refresh
OTP_RATE_LIMIT                        50        1 min    OTP generation and verification
EMAIL_RATE_LIMIT                      50        1 min    Invitation and verification emails
ONBOARDING_DEMO_DATABASE_RATE_LIMIT   50        1 min    Demo database onboarding
Route-specific limits (auth, OTP, email) are applied in addition to the global limit. A request to a login endpoint must pass both the global RATE_LIMIT check and the AUTH_ROUTE_RATE_LIMIT check.
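The layering described above can be sketched as a simple chain: a request is allowed only if every limiter that applies to its route admits it. The counter below is deliberately simplified (no window reset) and all names are illustrative:

```typescript
// Hypothetical sketch of stacked limiters. A login request must pass both
// the global limiter and the auth-route limiter.
type Limiter = { allow(ip: string): boolean };

function makeCounter(max: number): Limiter {
  // Simplified per-IP counter with no window reset, for illustration only.
  const counts = new Map<string, number>();
  return {
    allow(ip) {
      const n = (counts.get(ip) ?? 0) + 1;
      counts.set(ip, n);
      return n <= max;
    },
  };
}

const globalLimiter = makeCounter(500); // RATE_LIMIT
const authLimiter = makeCounter(30);    // AUTH_ROUTE_RATE_LIMIT

// A login request is rejected if either limiter says no.
function allowLogin(ip: string): boolean {
  return globalLimiter.allow(ip) && authLimiter.allow(ip);
}
```

With the defaults, the auth limit is the binding constraint for login traffic: the 31st login attempt from one IP in a window is blocked long before the global limit of 500 is reached.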

Configuration Example

Add these to your DataBrain API .env file:
# --- Rate Limiting ---
# Global: max requests per IP across all endpoints (2-minute window)
RATE_LIMIT=500

# Auth routes: sign-in, sign-up, SSO, password reset (1-minute window)
AUTH_ROUTE_RATE_LIMIT=30

# Token refresh endpoint (1-minute window)
REFRESH_ROUTE_RATE_LIMIT=30

# OTP generation and verification (1-minute window)
OTP_RATE_LIMIT=50

# Email sending: invitations, verification (1-minute window)
EMAIL_RATE_LIMIT=50

# Demo database onboarding (1-minute window)
ONBOARDING_DEMO_DATABASE_RATE_LIMIT=50
After updating, restart the DataBrain API service for changes to take effect.

Tuning for Your Deployment

The default values are a good starting point for most deployments. Here’s how to think about adjusting them:
If you have a small user base (under ~100 users) and want tighter security, you can reduce limits — for example, halving RATE_LIMIT to 250 or lowering AUTH_ROUTE_RATE_LIMIT to 15. Fewer legitimate users means fewer requests per IP, so tighter limits are less likely to cause false positives.
Increase limits if you see legitimate requests getting 429 errors. Common scenarios:
  • High-traffic embedded dashboards — Many end users loading dashboards simultaneously can generate significant API traffic. Increase RATE_LIMIT as needed (e.g., to 1000 or higher).
  • Shared IP / corporate NAT — If many users share one public IP (offices, VPNs), they collectively consume one IP’s quota. Raise RATE_LIMIT proportionally.
  • Automated workflows — CI/CD pipelines, bulk token generation, or automated testing can trigger auth limits. Raise AUTH_ROUTE_RATE_LIMIT or REFRESH_ROUTE_RATE_LIMIT for those environments.
A practical tuning process:
  1. Start with the defaults — they work well for most medium-sized deployments.
  2. Monitor 429 responses in your logs or monitoring stack.
  3. Increase incrementally if legitimate traffic is being blocked — double the value and observe.
  4. Avoid setting limits excessively high — very high limits (e.g., RATE_LIMIT=100000) effectively disable rate limiting and remove protection against abuse.

What Happens When a Limit Is Hit

When a client exceeds a rate limit:
  1. The API responds with HTTP status 429 Too Many Requests.
  2. The response body contains a descriptive error message (e.g., “Too many requests, please try again later”).
  3. The client should wait until the current window expires before retrying.
Responses include standard rate limit headers that clients can use to manage their request pace:
  • RateLimit — Combined header showing the current limit, remaining requests, and reset time (e.g., limit=500, remaining=498, reset=120)
  • RateLimit-Policy — Describes the rate limit policy in effect
  • Retry-After — Seconds to wait before retrying (included on 429 responses)
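The combined RateLimit header shown above can be parsed into its fields like this. This is a sketch that assumes the `limit=…, remaining=…, reset=…` field format from the example; adjust if your proxy rewrites headers:

```typescript
// Parse a combined RateLimit header value, e.g.
// "limit=500, remaining=498, reset=120".
interface RateLimitInfo {
  limit: number;     // requests allowed in the window
  remaining: number; // requests left in the current window
  reset: number;     // seconds until the window resets
}

function parseRateLimit(header: string): RateLimitInfo | null {
  const fields = new Map<string, number>();
  for (const part of header.split(",")) {
    const [key, value] = part.trim().split("=");
    const n = Number(value);
    if (key && Number.isFinite(n)) fields.set(key, n);
  }
  const limit = fields.get("limit");
  const remaining = fields.get("remaining");
  const reset = fields.get("reset");
  if (limit === undefined || remaining === undefined || reset === undefined) {
    return null; // header missing or in an unexpected format
  }
  return { limit, remaining, reset };
}
```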
If your backend integration receives a 429 response:
  1. Read the Retry-After header to know when to retry.
  2. Implement exponential backoff — wait 1s, then 2s, then 4s, etc.
  3. Do not retry immediately in a tight loop — failed requests still count against the limit, so rapid retries will consume the next window’s quota as soon as it resets.
  4. Guest tokens are reusable — consider caching them on your backend rather than generating a new one for every page load. Refresh before the token’s expiryTime to avoid 401 errors.
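The retry policy above can be reduced to a small delay calculation: honor Retry-After when present, otherwise back off exponentially with a cap. The cap and base values here are illustrative choices, not DataBrain requirements:

```typescript
// Compute how long to wait before the next retry after a 429.
function retryDelayMs(
  attempt: number,             // 0 for the first retry, 1 for the second, ...
  retryAfterSeconds?: number,  // from the Retry-After header, if present
  capMs: number = 30_000,      // illustrative upper bound on the wait
): number {
  // The server's own guidance takes precedence over local backoff.
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  // Exponential backoff: 1s, 2s, 4s, ... capped at capMs.
  return Math.min(1000 * 2 ** attempt, capMs);
}
```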

Best Practices

We strongly recommend explicitly setting the RATE_LIMIT variable in your .env file rather than relying on the default. This ensures predictable behavior and makes your configuration self-documenting.
Rate limiting is based on client IP. If DataBrain sits behind NGINX, a cloud load balancer, or Cloudflare, ensure the proxy forwards the original client IP. DataBrain checks the X-Client-IP header first, then falls back to X-Forwarded-For (via Express trusted proxy support). Either header works — just make sure at least one is set. Without this, all requests appear to come from the proxy’s IP, causing legitimate users to be rate-limited together.
Guest tokens are reusable — once created, the same token can authenticate multiple requests until it expires. Instead of calling the guest token API on every page load, cache the token on your backend and reuse it for the same user and parameter combination. If you set an expiryTime, make sure to refresh the token before it expires. Expired tokens are periodically cleaned up and will return a 401 error once removed.
Authentication endpoints are the most targeted by brute-force attacks. Keep AUTH_ROUTE_RATE_LIMIT and OTP_RATE_LIMIT conservative (30–50 per minute) unless you have a specific reason to increase them.
Track 429 responses in your monitoring stack (DataDog, Grafana, etc.). A spike in 429s may indicate either an attack (good — the rate limiter is protecting you) or limits that are too tight for your traffic (adjust accordingly).

Troubleshooting

Symptom: All traffic is rate-limited together, even across different users.
Cause: The reverse proxy is not forwarding the real client IP.
Fix: Configure your proxy to forward the real client IP. DataBrain checks X-Client-IP first, then X-Forwarded-For. For NGINX, add:
proxy_set_header X-Forwarded-For $remote_addr;
For AWS ALB or Cloudflare, X-Forwarded-For is typically set automatically, but verify the header is reaching DataBrain.
Symptom: Legitimate dashboard traffic receives 429 errors.
Cause: High-traffic pages generating too many API calls per IP.
Fix:
  • Increase RATE_LIMIT to accommodate your peak traffic.
  • Cache guest tokens on your backend to reduce token-creation calls.
  • Ensure dashboards use caching (Workspace Settings → Cache Settings) to reduce repeated data queries.
Symptom: Users are temporarily unable to sign in and receive 429 errors.
Cause: Auth rate limit exceeded, possibly from automated tests or repeated failed logins.
Fix: Wait 1 minute for the window to reset. If this happens regularly for legitimate users, consider increasing AUTH_ROUTE_RATE_LIMIT slightly (e.g., from 30 to 50).