# Inngest vs Trigger.dev vs BullMQ: Best Background Jobs (2026)
Every production app needs background jobs — sending emails, processing images, syncing data, running AI pipelines. In 2026, three solutions dominate: Inngest for serverless workflows, Trigger.dev for long-running tasks, and BullMQ for self-hosted queues.
## Quick Verdict

| | Inngest | Trigger.dev | BullMQ |
|---|---|---|---|
| Best for | Serverless workflows, event-driven | Long-running tasks, AI pipelines | Self-hosted, high throughput |
| Hosting | Managed cloud | Managed cloud + self-host | Self-hosted (needs Redis) |
| Max duration | Unlimited (step functions) | Hours+ | Unlimited |
| Concurrency | Managed | Managed | You configure |
| Retries | Built-in | Built-in | Built-in |
| Pricing | Free → $50/mo+ | Free → $50/mo+ | Free (OSS) + Redis cost |
| Framework | Any (Next.js, Express, etc.) | Any | Node.js |
| Learning curve | Low | Low | Medium |
## The Core Problem

User clicks "Generate Report" →

- ❌ API route: times out after 30 seconds
- ❌ setTimeout: lost if the server restarts
- ❌ Promise.all: no retries, no visibility
- ✅ Background job: runs reliably, retries on failure, shows progress
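Automatic retries with exponential backoff are the heart of what all three systems provide. A minimal sketch of the idea (illustrative only, not any library's actual implementation):

```typescript
// Exponential backoff: the delay doubles on each failed attempt.
// baseMs = 1000 gives delays of 1s, 2s, 4s, ..., capped at maxMs.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 60_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs)
}

// Retry a flaky async job the way a queue worker would.
async function runWithRetries<T>(
  job: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await job()
    } catch (err) {
      lastError = err
      // Short base delay here purely for demonstration
      await new Promise((r) => setTimeout(r, backoffDelay(attempt, 10)))
    }
  }
  throw lastError
}
```

The real systems add what this sketch cannot: persistence across restarts, visibility into failed runs, and distribution across workers.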
## Inngest: Event-Driven Workflows
Inngest turns your functions into reliable, retryable workflows triggered by events:
```typescript
import { Inngest } from 'inngest'

const inngest = new Inngest({ id: 'my-app' })

// Define a multi-step function
// (assumes `stripe` and `sendEmail` are defined elsewhere)
export const processOrder = inngest.createFunction(
  {
    id: 'process-order',
    retries: 3,
    concurrency: { limit: 10 },
  },
  { event: 'order/created' },
  async ({ event, step }) => {
    // Each step is independently retryable
    const payment = await step.run('charge-payment', async () => {
      return await stripe.charges.create({
        amount: event.data.amount,
        customer: event.data.customerId,
      })
    })

    // Wait for an external event (up to 7 days);
    // shipment is null if the timeout elapses
    const shipment = await step.waitForEvent('wait-for-shipment', {
      event: 'shipment/confirmed',
      match: 'data.orderId',
      timeout: '7d',
    })

    // Send confirmation
    await step.run('send-confirmation', async () => {
      await sendEmail(event.data.email, {
        subject: 'Order shipped!',
        tracking: shipment.data.trackingNumber,
      })
    })

    return { payment, shipment }
  }
)

// Trigger from anywhere
await inngest.send({
  name: 'order/created',
  data: { orderId: '123', amount: 4999, customerId: 'cus_abc' },
})
```
### Why Teams Choose Inngest
- Step functions: Each step retries independently — no re-running entire jobs
- Event-driven: Trigger functions from events, not just API calls
- Sleep and wait: `step.sleep()` and `step.waitForEvent()` for complex workflows
- Fan-out: Process thousands of items in parallel with concurrency limits
- Zero infrastructure: No Redis, no queue servers, no workers to manage
- Works with serverless: Runs on Vercel, Netlify, Cloudflare — no long-running server needed
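Conceptually, fan-out is just splitting a large workload into bounded batches that run in parallel. A generic sketch of the batching half (this is plain TypeScript, not Inngest's API):

```typescript
// Split a list into fixed-size batches; a fan-out step can then
// dispatch one event (or one parallel step) per batch.
function partition<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// e.g. 10 items with a batch size of 3 → batches of 3, 3, 3, 1
const batches = partition([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 3)
```

With Inngest, the concurrency limit in the function config then caps how many of those batches execute at once.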
### Inngest Pricing

- Free: 10,000 runs/mo, 5 concurrent
- Pro: $50/mo (100K runs, 50 concurrent)
- Business: $250/mo (1M runs, 200 concurrent)
### Inngest Limitations
- Vendor lock-in: No self-hosted option (yet)
- Step overhead: Each step adds ~10ms latency
- Cold starts: On serverless platforms, first invocation has latency
- Complex debugging: Multi-step functions can be hard to trace
## Trigger.dev: Long-Running Tasks Made Easy
Trigger.dev v3 runs tasks on managed infrastructure with no timeout limits:
```typescript
import { task, logger } from '@trigger.dev/sdk/v3'

// (assumes `fetchAllUserData`, `openai`, `generatePDF`,
// `uploadToS3`, and `sendEmail` are defined elsewhere)
export const generateReport = task({
  id: 'generate-report',
  retry: { maxAttempts: 3 },
  run: async (payload: { userId: string; reportType: string }) => {
    logger.info('Starting report generation', { userId: payload.userId })

    // Step 1: Gather data (can take minutes)
    const data = await fetchAllUserData(payload.userId)

    // Step 2: Process with AI
    const analysis = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: `Analyze: ${JSON.stringify(data)}` }],
    })

    // Step 3: Generate a PDF
    const pdf = await generatePDF(analysis.choices[0].message.content)

    // Step 4: Upload and notify
    const url = await uploadToS3(pdf)
    await sendEmail(payload.userId, { reportUrl: url })

    return { url, pages: pdf.pageCount }
  },
})

// Trigger from your API (in a separate file)
import { generateReport } from './trigger/generate-report'

const handle = await generateReport.trigger({
  userId: 'usr_123',
  reportType: 'quarterly',
})
```
### Why Teams Choose Trigger.dev
- No timeouts: Tasks run for minutes or hours — perfect for AI workloads
- Managed infrastructure: They run your code on their servers
- Real-time logs: Stream logs from running tasks to your dashboard
- Self-hostable: v3 supports self-hosting with Docker
- TypeScript-native: Full type safety on payloads and returns
- Batch operations: Process thousands of items with built-in batching
### Trigger.dev Pricing

- Free: 30,000 runs/mo, 500 compute hours
- Hobby: $50/mo (unlimited runs, 2,500 compute hours)
- Pro: $250/mo (unlimited runs, 10,000 compute hours)
### Trigger.dev Limitations
- Newer platform: Less battle-tested than BullMQ
- Compute-based pricing: Heavy workloads get expensive
- Network dependency: Tasks run on their infra, need network access to your services
- Not event-driven: Primarily task-based, less natural for event workflows
## BullMQ: Self-Hosted Power
BullMQ is the battle-tested open-source queue for Node.js:
```typescript
import { Queue, Worker } from 'bullmq'
import Redis from 'ioredis'

const connection = new Redis({ host: 'localhost', port: 6379 })

// Create a queue
const emailQueue = new Queue('emails', { connection })

// Add jobs
await emailQueue.add('welcome-email', {
  to: 'user@example.com',
  template: 'welcome',
}, {
  attempts: 3,
  backoff: { type: 'exponential', delay: 1000 },
  removeOnComplete: 100, // keep only the last 100 completed jobs
  removeOnFail: 500,
})

// Process jobs (assumes `sendEmail` is defined elsewhere)
const worker = new Worker('emails', async (job) => {
  const { to, template } = job.data
  await sendEmail(to, template)
  return { sent: true }
}, {
  connection,
  concurrency: 5,
  limiter: { max: 100, duration: 60000 }, // rate limiting
})

worker.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed`, result)
})

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed`, err.message)
})
```
### Why Teams Choose BullMQ
- Free and open-source: No per-run charges, ever
- Battle-tested: Used by thousands of companies in production
- Full control: Configure every aspect of job processing
- Redis-backed: Reliable, fast, supports clustering
- Rich features: Priority queues, rate limiting, job dependencies, repeatable jobs
- Dashboard: Bull Board or Arena for monitoring
### BullMQ Limitations
- Redis required: Need to host and manage Redis
- Workers needed: Must run always-on worker processes
- No serverless: Can't run on Vercel/Netlify edge functions
- More code: You handle retries, dead letter queues, monitoring yourself
- Node.js only: No first-party support for other languages
## Decision Framework
### Choose Inngest When
- Using serverless (Vercel, Netlify, Cloudflare)
- Need complex workflows with waits and event coordination
- Want zero infrastructure management
- Event-driven architecture appeals to you
- Budget allows $50-250/mo
### Choose Trigger.dev When
- Running long tasks (AI processing, report generation, data pipelines)
- Need real-time task monitoring and logs
- Want managed infra with self-host option
- Tasks take minutes to hours, not seconds
- Building AI agent workflows
### Choose BullMQ When
- Self-hosting and want zero vendor costs
- High throughput (millions of jobs/day)
- Already running Redis
- Need maximum control over queue behavior
- Budget is tight — only cost is Redis hosting
## FAQ
### Can Inngest replace a traditional queue?
For most use cases, yes. Inngest handles everything BullMQ does plus multi-step workflows. The trade-off is cost at scale and vendor dependency.
### Is Trigger.dev v3 stable?
Yes. v3 is production-ready and a significant improvement over v2. The managed infrastructure model makes it much simpler to operate.
### Can I use BullMQ with serverless?
Not directly — workers need to be always-on. You'd need a separate server/container running workers alongside your serverless frontend.
### What about AWS SQS or Google Cloud Tasks?
Those work but require cloud-specific code and more configuration. Inngest/Trigger.dev/BullMQ offer better DX for TypeScript teams.
## Bottom Line
Inngest for serverless-first teams wanting event-driven workflows with zero infrastructure. Trigger.dev for long-running AI and data processing tasks with great observability. BullMQ for self-hosted teams wanting maximum control at zero software cost.
Our pick: Inngest for most teams — the step-function model and zero-infra approach are the future of background processing.