POST /batch
Extract from many URLs with the same schema.
```
POST https://api.scrapewithruno.com/v1/batch
X-API-Key: sk_live_...
Content-Type: application/json
```

Request Body
```json
{
  "urls": [
    "https://example.com/p/1",
    "https://example.com/p/2",
    "https://example.com/p/3"
  ],
  "schema": [
    { "field": "title", "type": "string", "example": "Example title" },
    { "field": "price", "type": "float", "example": 19.99 }
  ],
  "options": {
    "render_js": "auto",
    "concurrency": 4,
    "fail_fast": false,
    "timeout_ms": 15000
  },
  "async_mode": false
}
```

| Field | Type | Required | Notes |
|---|---|---|---|
| urls | array of strings | yes | Absolute URLs. Order is preserved |
| schema | array | dynamic keys only | Applied to all URLs |
| options.concurrency | integer | no | Parallel fetches inside this batch (server-tuned by default) |
| options.fail_fast | boolean | no | If true, return as soon as the first URL errors. Default false |
| options.render_js / timeout_ms / process_images | - | no | Same as /extract, applied per URL |
| async_mode | boolean | no | If true, route LLM extraction through the Batch API (up to 24 h). See Async Mode |
Response
An object whose results field is an array of /extract-shaped entries, in input order:
```json
{
  "results": [
    { "url": "https://example.com/p/1", "status": "success", "render_mode": "fetch", "data": { "title": "...", "price": 19.99 } },
    { "url": "https://example.com/p/2", "status": "error", "error": { "code": "FETCH_BLOCKED", "message": "...", "retryable": true } },
    { "url": "https://example.com/p/3", "status": "success", "render_mode": "headless", "data": { "title": "...", "price": 24.50 } }
  ],
  "cancelled": false
}
```

`cancelled: true` indicates the batch was stopped via `DELETE /v1/jobs/{id}`. Unstarted URLs come back as `JOB_CANCELLED`.
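Since each entry carries its own status and, on failure, an error object with a retryable flag, a client will usually want to split a response into successes, retryable errors, and permanent errors before deciding what to do next. A minimal sketch over the documented fields (partition_results is our own helper, not part of the API):

```python
def partition_results(results):
    """Split batch results into successes, retryable errors, and permanent errors.

    Uses only the documented fields: status, error.code, error.retryable.
    """
    ok, retryable, failed = [], [], []
    for r in results:
        if r["status"] == "success":
            ok.append(r)
        elif r.get("error", {}).get("retryable"):
            retryable.append(r)
        else:
            failed.append(r)
    return ok, retryable, failed

# Sample data matching the documented response shape:
results = [
    {"url": "https://example.com/p/1", "status": "success", "data": {"title": "A"}},
    {"url": "https://example.com/p/2", "status": "error",
     "error": {"code": "FETCH_BLOCKED", "message": "...", "retryable": True}},
    {"url": "https://example.com/p/3", "status": "error",
     "error": {"code": "JOB_CANCELLED", "message": "...", "retryable": False}},
]
ok, retryable, failed = partition_results(results)
```

Note that a cancelled batch's unstarted URLs land in the permanent bucket here, since JOB_CANCELLED entries are not marked retryable.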
Cost
1 request per URL. Failures count. Cancelled URLs that didn't run are refunded.
Examples
```bash
curl -X POST https://api.scrapewithruno.com/v1/batch \
  -H "X-API-Key: $RUNO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://example.com/p/1", "https://example.com/p/2"],
    "schema": [
      { "field": "title", "type": "string", "example": "Example title" },
      { "field": "price", "type": "float", "example": 19.99 }
    ],
    "options": { "concurrency": 4 }
  }'
```

```js
const res = await fetch('https://api.scrapewithruno.com/v1/batch', {
  method: 'POST',
  headers: {
    'X-API-Key': process.env.RUNO_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    urls: ['https://example.com/p/1', 'https://example.com/p/2'],
    schema: [
      { field: 'title', type: 'string', example: 'Example title' },
      { field: 'price', type: 'float', example: 19.99 },
    ],
    options: { concurrency: 4 },
  }),
})
const { results } = await res.json()
for (const r of results) {
  if (r.status === 'success') console.log(r.url, r.data)
  else console.warn(r.url, r.error.code)
}
```

```py
import os
import requests

res = requests.post(
    "https://api.scrapewithruno.com/v1/batch",
    headers={"X-API-Key": os.environ["RUNO_API_KEY"]},
    json={
        "urls": ["https://example.com/p/1", "https://example.com/p/2"],
        "schema": [
            {"field": "title", "type": "string", "example": "Example title"},
            {"field": "price", "type": "float", "example": 19.99},
        ],
        "options": {"concurrency": 4},
    },
    timeout=300,
)
for r in res.json()["results"]:
    if r["status"] == "success":
        print(r["url"], r["data"])
    else:
        print(r["url"], "error:", r["error"]["code"])
```
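Because retryable errors are flagged per URL, a natural follow-up is to re-submit just the failed-but-retryable URLs with the same schema. A hedged sketch (build_retry_body is our own helper, not part of the API; POSTing the resulting body back to /v1/batch is left to the caller):

```python
def build_retry_body(results, schema, options=None):
    """Build a /batch request body that re-submits only the URLs whose
    error is marked retryable, reusing the original schema."""
    urls = [
        r["url"]
        for r in results
        if r["status"] == "error" and r.get("error", {}).get("retryable")
    ]
    if not urls:
        return None  # nothing to retry
    body = {"urls": urls, "schema": schema}
    if options:
        body["options"] = options
    return body

schema = [{"field": "title", "type": "string", "example": "Example title"}]
results = [
    {"url": "https://example.com/p/1", "status": "success", "data": {"title": "A"}},
    {"url": "https://example.com/p/2", "status": "error",
     "error": {"code": "FETCH_BLOCKED", "message": "...", "retryable": True}},
]
retry_body = build_retry_body(results, schema, options={"concurrency": 2})
```

If retry_body is not None, it can be sent as the JSON body of a second POST to /v1/batch; remember that retried URLs are billed again like any other request.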