# Rate Limits

LMIF uses rate limits to ensure fair usage and maintain service quality.
## Rate Limit Tiers

| Plan | Requests/minute | Requests/day | Batch Size |
|---|---|---|---|
| Sandbox | 60 | 1,000 | 100 |
| Basic | 300 | 50,000 | 100 |
| Standard | 1,000 | 500,000 | 100 |
| Enterprise | Custom | Custom | Custom |
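As a rough guide, a plan's per-minute quota translates into a minimum spacing between evenly paced requests; for example, the Basic plan's 300 requests/minute works out to one request every 200 ms. A minimal sketch of that arithmetic (the helper is illustrative, not part of the SDK):

```typescript
// Minimum interval in ms between evenly spaced requests for a
// given per-minute limit (illustrative helper, not part of the SDK).
function minIntervalMs(requestsPerMinute: number): number {
  return Math.ceil(60_000 / requestsPerMinute);
}

// minIntervalMs(300) → 200 ms (Basic), minIntervalMs(1000) → 60 ms (Standard)
```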
## Rate Limit Headers

Every response includes rate limit headers:

```
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 299
X-RateLimit-Reset: 1705312860
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests per minute |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
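As a sketch of consuming these headers client-side (assuming the standard `Headers` interface from `fetch`; the parser below is illustrative, not part of the SDK):

```typescript
// Parse the rate limit headers into a typed object.
// X-RateLimit-Reset is a Unix timestamp in seconds, so convert to ms.
interface RateLimitInfo {
  limit: number;      // max requests per minute
  remaining: number;  // requests left in the current window
  resetAt: Date;      // when the window resets
}

function parseRateLimitHeaders(headers: Headers): RateLimitInfo {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit') ?? '0', 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining') ?? '0', 10),
    resetAt: new Date(parseInt(headers.get('X-RateLimit-Reset') ?? '0', 10) * 1000),
  };
}
```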
## Handling Rate Limits

When you exceed the limit, you receive a `429 Too Many Requests` response:

```json
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded",
    "details": {
      "limit": 300,
      "remaining": 0,
      "resetAt": "2024-01-15T10:01:00Z",
      "retryAfter": 45
    }
  }
}
```

### Retry Logic
```typescript
// Sleep helper used throughout the examples below
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithRetry(fn: () => Promise<any>, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.code === 'RATE_LIMITED') {
        const retryAfter = error.details.retryAfter || 60;
        console.log(`Rate limited. Retrying in ${retryAfter}s...`);
        await sleep(retryAfter * 1000);
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const result = await callWithRetry(() => lmif.identity.check({ name, imageUrl }));
```

### Proactive Rate Limit Handling
Monitor headers to avoid hitting limits:
```typescript
class RateLimitedClient {
  private remaining = Infinity;
  private resetAt = 0;

  async request(fn: () => Promise<Response>) {
    // Wait if we're about to hit the limit
    if (this.remaining <= 1 && Date.now() < this.resetAt) {
      const waitTime = this.resetAt - Date.now();
      await sleep(waitTime);
    }

    const response = await fn();

    // Update from headers
    this.remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
    this.resetAt = parseInt(response.headers.get('X-RateLimit-Reset') || '0') * 1000;

    return response;
  }
}
```

## Endpoint-Specific Limits
Some endpoints have additional limits:
| Endpoint | Additional Limit |
|---|---|
| `POST /identity/check/batch` | 100 items per request |
| `POST /licenses/reportUsage` | 100 reports per minute |
| `POST /webhooks/test` | 10 tests per hour |
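Since batch endpoints cap at 100 items per request, larger workloads have to be split client-side. A minimal chunking sketch (the `chunk` helper is illustrative, not part of the SDK):

```typescript
// Split an array into consecutive chunks of at most `size` items,
// so each batch request stays within the 100-item cap.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Usage sketch: 250 avatars become three checkBatch calls of 100/100/50 items.
// for (const batch of chunk(avatars, 100)) {
//   await lmif.identity.checkBatch(batch);
// }
```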
## Best Practices

### 1. Use Batch Endpoints

Instead of multiple single calls:
```typescript
// Bad: 100 separate requests
for (const avatar of avatars) {
  await lmif.identity.check({ name: avatar.name, imageUrl: avatar.imageUrl });
}

// Good: 1 batch request
await lmif.identity.checkBatch(
  avatars.map((a) => ({ name: a.name, imageUrl: a.imageUrl }))
);
```

### 2. Implement Request Queuing
For high-volume applications:
```typescript
import PQueue from 'p-queue';

const queue = new PQueue({
  concurrency: 10,
  interval: 1000,
  intervalCap: 5, // Max 5 requests per second
});

async function checkIdentity(name: string, imageUrl: string) {
  return queue.add(() => lmif.identity.check({ name, imageUrl }));
}
```

### 3. Cache Results
Cache identity check results:
```typescript
import NodeCache from 'node-cache';

const cache = new NodeCache({ stdTTL: 3600 }); // 1 hour

async function cachedIdentityCheck(name: string, imageUrl: string) {
  const cacheKey = `${name}:${imageUrl}`;
  const cached = cache.get(cacheKey);

  if (cached) {
    return cached;
  }

  const result = await lmif.identity.check({ name, imageUrl });
  cache.set(cacheKey, result);
  return result;
}
```

### 4. Implement Exponential Backoff
```typescript
async function callWithBackoff(fn: () => Promise<any>) {
  const maxRetries = 5;
  let delay = 1000;

  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.code === 'RATE_LIMITED') {
        // Use server's retryAfter if available
        const wait = error.details?.retryAfter
          ? error.details.retryAfter * 1000
          : delay;

        await sleep(wait);
        delay *= 2; // Exponential backoff
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}
```

## Monitoring Rate Limits
Track your usage in the dashboard:
- Go to Dashboard → API Usage
- View requests per minute/day
- Set up alerts for approaching limits
## Requesting Higher Limits

If you need higher limits:
- Standard tier: Upgrade from Basic
- Enterprise tier: Contact sales for custom limits
Enterprise plans include:
- Custom rate limits
- Dedicated infrastructure
- Priority support
Contact: enterprise@lookmaimfamous.com
## SDK Rate Limit Handling

The TypeScript SDK handles rate limits automatically:
```typescript
const lmif = new LMIFClient({
  apiKey: process.env.LMIF_API_KEY,
  retry: {
    maxRetries: 3,
    retryDelay: 1000,
  },
});

// Automatic retry on rate limit
const result = await lmif.identity.check({ name, imageUrl });
```

The SDK:
- Automatically retries rate-limited requests
- Uses exponential backoff
- Respects the `retryAfter` value returned by the server
- Logs retry attempts (configurable)