SDK Reference
Official TypeScript and Python SDKs for the Scraper API. Typed clients with built-in retry logic and error handling.
Installation
npm / pip

```shell
npm install @scraper-bot/sdk
# or
pip install scraper-bot
```

TypeScript SDK
TypeScript

```typescript
import { ScraperBot } from '@scraper-bot/sdk'

const client = new ScraperBot({ apiKey: 'scr_live_...' })

// List flows
const flows = await client.flows.list()

// Create a flow
const flow = await client.flows.create({
  name: 'Product Scraper',
  url: 'https://example.com/products',
  mode: 'extract',
  description: 'Extract all product data',
})

// Run a flow
const run = await client.runs.trigger(flow.id)

// Get run results
const result = await client.runs.get(run.id)
console.log(result.outputPreview)

// One-shot extraction
const data = await client.extract({
  url: 'https://example.com/products',
  instructions: 'Get all product names and prices',
  schema: { name: 'string', price: 'number' },
})
```

Python SDK
Python

```python
from scraper_bot import ScraperBot

client = ScraperBot(api_key="scr_live_...")

# List flows
flows = client.flows.list()

# Create a flow
flow = client.flows.create(
    name="Product Scraper",
    url="https://example.com/products",
    mode="extract",
    description="Extract all product data",
)

# Run a flow
run = client.runs.trigger(flow.id)

# Get results
result = client.runs.get(run.id)
print(result.output_preview)

# One-shot extraction
data = client.extract(
    url="https://example.com/products",
    instructions="Get all product names and prices",
    schema={"name": "string", "price": "number"},
)
```

API Methods Reference
Both the TypeScript and Python SDKs expose the same methods. Python uses snake_case for method parameters and response fields.
| Method | Description |
|---|---|
| `client.flows.list()` | List all flows |
| `client.flows.get(id)` | Get a flow by ID |
| `client.flows.create(data)` | Create a new flow |
| `client.flows.update(id, data)` | Update an existing flow |
| `client.flows.delete(id)` | Delete a flow |
| `client.runs.list()` | List all runs |
| `client.runs.trigger(flowId)` | Trigger a new run |
| `client.runs.get(id)` | Get run details with results |
| `client.extract(options)` | One-shot extraction without creating a flow |
| `client.keys.list()` | List all API keys |
| `client.keys.create(data)` | Create a new API key |
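Triggering a run returns immediately, so fetching results typically means polling the run until it reaches a terminal state. Below is a minimal polling sketch in Python; the `queued`/`running` status values, the dict-shaped run, and the injectable `get_run`/`sleep` parameters are illustrative assumptions, not part of the documented API:

```python
import time

def wait_for_run(get_run, run_id, poll_interval=2.0, timeout=120.0, sleep=time.sleep):
    """Poll get_run(run_id) until the run leaves a non-terminal state.

    get_run: callable returning the run as a dict with a "status" field,
             e.g. lambda rid: client.runs.get(rid).
    Raises TimeoutError if the run has not finished within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = get_run(run_id)
        # "queued" / "running" are assumed non-terminal states.
        if run["status"] not in ("queued", "running"):
            return run
        sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")
```

The `sleep` parameter exists only to make the helper testable; adjust the status names to whatever the API actually returns for your account.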
Error Handling
Both SDKs throw typed exceptions for API errors. Catch these to handle rate limits, authentication failures, and validation errors gracefully.
TypeScript

```typescript
import { ScraperBot, ScraperBotError, RateLimitError } from '@scraper-bot/sdk'

const client = new ScraperBot({ apiKey: 'scr_live_...' })

try {
  const data = await client.extract({
    url: 'https://example.com',
    instructions: 'Get all headings',
  })
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${error.retryAfter}s`)
  } else if (error instanceof ScraperBotError) {
    console.log(`API error: ${error.message} (status ${error.status})`)
  } else {
    throw error
  }
}
```

Python
```python
from scraper_bot import ScraperBot, ScraperBotError, RateLimitError

client = ScraperBot(api_key="scr_live_...")

try:
    data = client.extract(
        url="https://example.com",
        instructions="Get all headings",
    )
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except ScraperBotError as e:
    print(f"API error: {e.message} (status {e.status})")
```

Unlike the TypeScript version, no catch-all clause is needed in Python: any other exception propagates by default.

Rate Limits
API rate limits depend on your plan tier. When a limit is exceeded, the API returns a 429 Too Many Requests response with a Retry-After header. The SDK automatically retries with exponential backoff (up to 3 attempts).
| Plan | Requests / min | Concurrent runs | Monthly runs |
|---|---|---|---|
| Free | 30 | 1 | 500 |
| Pro | 120 | 5 | 10,000 |
| Team | 300 | 20 | 50,000 |
| Enterprise | Custom | Custom | Unlimited |
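As a rough capacity check, the monthly-run quotas above translate directly into per-day budgets. A trivial helper, with the quota numbers copied from the table (Enterprise omitted because it is unlimited):

```python
# Monthly run quotas per plan, from the table above.
PLAN_MONTHLY_RUNS = {"Free": 500, "Pro": 10_000, "Team": 50_000}

def fits_monthly_quota(plan: str, runs_per_day: int, days: int = 30) -> bool:
    """True if a schedule of runs_per_day stays within the plan's monthly quota."""
    return runs_per_day * days <= PLAN_MONTHLY_RUNS[plan]
```

For example, on the Free plan 16 runs per day (480/month) fits, while 17 per day (510/month) does not.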
To increase your limits, upgrade your plan in Settings > Billing or contact the sales team for enterprise arrangements.
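For callers who disable the built-in retries or hit the API without an SDK, the behaviour described above can be approximated by hand: honour the server's Retry-After value when present, otherwise back off exponentially. A self-contained sketch — the `RateLimitError` class here is a local stand-in (not imported from the SDK), and the `sleep` parameter exists only for testability:

```python
import time

class RateLimitError(Exception):
    """Local stand-in for the SDK's RateLimitError (carries retry_after seconds)."""
    def __init__(self, retry_after=None):
        super().__init__("rate limited")
        self.retry_after = retry_after

def call_with_retry(fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on rate limiting, wait and retry up to max_attempts times.

    Prefers the server's Retry-After value when present, otherwise falls
    back to exponential backoff (base_delay * 2**attempt).
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError as e:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            sleep(e.retry_after if e.retry_after else base_delay * 2 ** attempt)
```

Production code would usually also add jitter to the backoff delay so that many clients do not retry in lockstep.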