Inngest Review: Serverless Queues & Workflows (2026)
Inngest turns your serverless functions into reliable, retryable workflows — no Redis, no workers, no extra infrastructure. It's the answer to "what if background jobs were designed for serverless?" Here's our review after running it in production.
What Is Inngest?
Inngest is a workflow engine that runs on top of your existing serverless deployment. You define functions with steps, Inngest handles execution, retries, concurrency, and scheduling.
import { Inngest } from 'inngest'

const inngest = new Inngest({ id: 'my-app' })

export const syncUser = inngest.createFunction(
  { id: 'sync-user', retries: 3 },
  { event: 'user/created' },
  async ({ event, step }) => {
    await step.run('create-in-crm', () =>
      crm.createContact(event.data.email)
    )
    await step.run('send-welcome', () =>
      sendEmail(event.data.email, 'welcome')
    )
    await step.run('notify-team', () =>
      slack.post(`New user: ${event.data.email}`)
    )
  }
)
Key stats:
- Used by Vercel, Resend, SoundCloud, and hundreds of other companies
- Works with Next.js, Express, Hono, Remix, SvelteKit
- No infrastructure to manage
- Step-level retries (not full-function retries)
- YC-backed, active development
What We Love
1. Step Functions That Actually Make Sense
Each step is independently retryable. If step 3 fails, Inngest retries only step 3 — not the whole function:
async ({ step }) => {
  // Step 1: Runs once, result is cached
  const user = await step.run('fetch-user', () => db.getUser(id))

  // Step 2: If this fails, only THIS retries
  const charge = await step.run('charge', () => stripe.charge(user))

  // Step 3: Can fail independently
  await step.run('send-receipt', () => sendReceipt(user.email, charge))
}
Compare to traditional background jobs where a failure anywhere means re-running everything from scratch.
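Conceptually, this works by memoizing completed step results: when the function is re-invoked after a failure, finished steps replay their cached output and only unfinished steps execute. A toy model of that idea (not Inngest's actual implementation):

```typescript
// Toy model of step memoization: completed results are cached, so a
// re-invocation after a failure re-runs only the steps that never finished.
type Memo = Map<string, unknown>;

async function runWithSteps(
  memo: Memo,
  fn: (step: { run: <T>(id: string, f: () => T | Promise<T>) => Promise<T> }) => Promise<void>
): Promise<void> {
  const step = {
    async run<T>(id: string, f: () => T | Promise<T>): Promise<T> {
      if (memo.has(id)) return memo.get(id) as T; // replay cached result
      const result = await f();                   // execute at most once
      memo.set(id, result);
      return result;
    },
  };
  await fn(step);
}

// Demo: step 'two' fails on the first attempt; on retry, 'one' is not re-run.
async function demo(): Promise<number[]> {
  const calls: number[] = [];
  let attempt = 0;
  const memo: Memo = new Map();

  const workflow = async (step: any) => {
    await step.run('one', () => { calls.push(1); return 'a'; });
    await step.run('two', () => {
      if (attempt === 0) { attempt++; throw new Error('boom'); }
      calls.push(2);
      return 'b';
    });
  };

  try { await runWithSteps(memo, workflow); } catch { /* first attempt fails */ }
  await runWithSteps(memo, workflow); // retry: only 'two' executes
  return calls;
}
```

Running `demo()` yields `[1, 2]`: step 'one' executed exactly once across both attempts.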
2. Sleep and Wait — For Real Workflows
async ({ step }) => {
  await step.run('send-trial-welcome', () => sendEmail('welcome'))

  // Actually sleep for 3 days — function suspends, costs nothing
  await step.sleep('wait-3-days', '3d')

  await step.run('send-tips', () => sendEmail('tips-and-tricks'))
  await step.sleep('wait-before-upgrade', '4d')

  // Check whether they upgraded during the wait
  const user = await step.run('check-status', () =>
    db.users.findUnique({ where: { id } })
  )
  if (!user.isPaid) {
    await step.run('send-upgrade-nudge', () => sendEmail('upgrade'))
  }
}
This replaces complex cron job chains with a single, readable function.
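For contrast, the cron-based equivalent usually means a scheduled job that re-derives each user's position in the sequence from database state. A sketch of just the scheduling logic, with hypothetical fields (`signedUpAt`, `lastDrip`, `isPaid`):

```typescript
// Cron-style alternative: a daily job re-derives workflow position from
// stored state. All field names here are hypothetical.
interface User {
  id: string;
  signedUpAt: Date;  // trial start
  lastDrip: string;  // last drip email sent: 'welcome' | 'tips' | 'upgrade'
  isPaid: boolean;
}

const DAY = 24 * 60 * 60 * 1000;

// Decide which email (if any) a user is due for when the cron job runs.
function nextDripEmail(user: User, now: Date): string | null {
  const days = Math.floor((now.getTime() - user.signedUpAt.getTime()) / DAY);
  if (user.lastDrip === 'welcome' && days >= 3) return 'tips-and-tricks';
  if (user.lastDrip === 'tips' && days >= 7 && !user.isPaid) return 'upgrade';
  return null; // nothing due, or sequence complete
}
```

Every new branch in the sequence means another stage flag and another timing condition here; the step/sleep version keeps that state implicit in the function body.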
3. Event-Driven Architecture
Trigger functions from events anywhere in your app:
// Anywhere in your code — emit events
await inngest.send({
  name: 'order/placed',
  data: { orderId: '123', customerId: 'abc', amount: 99.99 },
})

// Multiple functions can react to the same event
const processPayment = inngest.createFunction(
  { id: 'process-payment' },
  { event: 'order/placed' },
  async ({ event }) => { /* handle payment */ }
)

const updateInventory = inngest.createFunction(
  { id: 'update-inventory' },
  { event: 'order/placed' },
  async ({ event }) => { /* update stock */ }
)

const notifyWarehouse = inngest.createFunction(
  { id: 'notify-warehouse' },
  { event: 'order/placed' },
  async ({ event }) => { /* send to warehouse */ }
)
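The fan-out semantics are easy to model: each subscriber is invoked independently, so one handler failing (and retrying) doesn't block the others. A toy in-process sketch of that behavior:

```typescript
// Toy model of event fan-out: several independent handlers subscribe to the
// same event name, and each is invoked (and would be retried) separately.
type Handler = (event: { name: string; data: any }) => Promise<void>;

const registry = new Map<string, Handler[]>();

function on(eventName: string, handler: Handler): void {
  const list = registry.get(eventName) ?? [];
  list.push(handler);
  registry.set(eventName, list);
}

async function send(event: { name: string; data: any }): Promise<number> {
  const handlers = registry.get(event.name) ?? [];
  // Each handler runs independently; one failing doesn't stop the others.
  const results = await Promise.allSettled(handlers.map((h) => h(event)));
  return results.filter((r) => r.status === 'fulfilled').length;
}
```

With three handlers registered for `order/placed`, one `send` call triggers all three, and the count of successful handlers is returned.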
4. Zero Infrastructure on Serverless
Inngest works by calling your existing HTTP endpoints. No separate worker processes:
Traditional: App server → Redis queue → Worker server → execute
Inngest: App server → Inngest cloud → calls your HTTP endpoint → execute
Your Vercel/Netlify/Cloudflare deployment is both the API and the worker. No Redis, no Bull, no separate containers.
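Concretely, that endpoint is a route you expose with the SDK's serve handler. A sketch for a Next.js App Router project, assuming the client and `syncUser` function from earlier live in a hypothetical `@/inngest` module:

```typescript
// app/api/inngest/route.ts — the single HTTP endpoint Inngest cloud calls.
import { serve } from 'inngest/next'
import { inngest, syncUser } from '@/inngest' // hypothetical module path

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [syncUser], // every function must be registered here
})
```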
5. Concurrency and Rate Limiting
Built-in, no configuration headaches:
inngest.createFunction(
  {
    id: 'sync-to-api',
    concurrency: {
      limit: 5,                    // Max 5 running at once
      key: 'event.data.accountId', // Per account
    },
    rateLimit: {
      limit: 100,
      period: '1m', // 100 per minute max
    },
    throttle: {
      limit: 1,
      period: '10s', // At most once per 10 seconds per key
      key: 'event.data.userId',
    },
  },
  { event: 'data/sync' },
  async ({ event }) => { /* ... */ }
)
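The keyed-concurrency setting is the one that's painful to build yourself. A toy sketch of its semantics (an assumption about the behavior, not Inngest's implementation): at most `limit` jobs run at once per key, while jobs for other keys proceed independently.

```typescript
// Toy per-key concurrency limiter: excess jobs for a key queue up and are
// admitted one at a time as running jobs for that key finish.
class KeyedLimiter {
  private active = new Map<string, number>();
  private waiters = new Map<string, Array<() => void>>();

  constructor(private limit: number) {}

  async run<T>(key: string, job: () => Promise<T>): Promise<T> {
    // Wait (and re-check) until this key has a free slot.
    while ((this.active.get(key) ?? 0) >= this.limit) {
      await new Promise<void>((resolve) => {
        const q = this.waiters.get(key) ?? [];
        q.push(resolve);
        this.waiters.set(key, q);
      });
    }
    this.active.set(key, (this.active.get(key) ?? 0) + 1);
    try {
      return await job();
    } finally {
      this.active.set(key, (this.active.get(key) ?? 1) - 1);
      this.waiters.get(key)?.shift()?.(); // wake one queued job for this key
    }
  }
}
```

With a limit of 5 per `accountId`, one noisy account can never starve the others — which is exactly what the `concurrency.key` setting above buys you declaratively.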
What Could Be Better
1. No Self-Hosting (Yet)
Inngest is cloud-only in production. The dev server runs locally, but production workloads go through Inngest's infrastructure. If you need on-prem or full data control, this is a blocker.
2. Step Overhead
Each step adds ~10-50ms of latency (Inngest needs to coordinate). For functions where every millisecond matters, this adds up:
- 3-step function: +30-150ms overhead
- 10-step function: +100-500ms overhead
For most background jobs this is negligible, but not suitable for hot-path latency-sensitive work.
3. Debugging Multi-Step Failures
When step 7 of a 10-step function fails, you see the error in the dashboard. But understanding why in the context of the full workflow requires clicking through multiple step views.
4. Pricing at Scale
- Free: 10,000 runs/mo
- Pro: $50/mo → 100,000 runs
- Business: $250/mo → 1,000,000 runs
- At 500K runs/mo: $250/mo (fine)
- At 5M runs/mo: need enterprise pricing (can get expensive)
For high-volume event processing, BullMQ + Redis is cheaper at scale.
Real-World Use Cases
Onboarding Workflow
User signs up → create in CRM → send welcome email → wait 2 days → send tips → wait 5 days → check if active → if not, send re-engagement.
Order Processing
Order placed → validate inventory → charge payment → generate invoice → notify warehouse → update CRM → wait for shipment event → send tracking email.
Data Pipeline
CSV uploaded → parse rows → process each row (with concurrency limit) → aggregate results → generate report → email to user.
Who Should Use Inngest
Perfect for:
- Serverless apps (Next.js on Vercel, Remix on Cloudflare)
- Event-driven workflows with multiple steps
- Teams that don't want to manage Redis/workers
- Onboarding flows, notification sequences, data processing
- Small-to-medium volume (under 1M events/mo)
Not ideal for:
- High-volume event processing (5M+/mo) — cost adds up
- Sub-10ms latency requirements
- On-premise or air-gapped environments
- Teams already running BullMQ at scale successfully
Verdict
Rating: 9/10
Inngest is the best way to add background jobs to serverless applications. The step function model, event-driven triggers, and zero infrastructure make it dramatically simpler than traditional queue setups. The only meaningful limitations are the lack of self-hosting and pricing at very high volume.
If you're on Vercel/Netlify/Cloudflare and need reliable background processing — Inngest is the obvious choice.