Customer story - Outtake

How Outtake uses Inngest to ensure the reliability of millions of threat scans every minute, without extra infrastructure

Throttling and durable execution are essential for automating cybersecurity at scale. Inngest is helping our developers confidently create and deploy highly complex agentic AI workflows faster than ever.

Diego Escobedo
Founding Engineer, Outtake

Outtake is a cybersecurity platform that uses always-on AI agents to detect and dismantle digital impersonations as fast as AI-equipped attackers can now spin them up. For security teams focused on protecting online identities, this approach shrinks remediation time from weeks to minutes.

To keep their edge, Outtake needs to continually improve these agents without sacrificing the guardrails that make them resilient at scale. They had a choice: spend months on retry scripts, event guarantees, and concurrency controls... or abstract away reliability infrastructure by bringing durability into their existing codebase. They chose the latter. This is the story of how Inngest helped Outtake sidestep the slowdown of building AI reliability infrastructure and stay ahead of both attackers and the competition.

The problem: Making agents reliable now takes more time than making them better

AI agents are harder to control at scale. At Outtake’s volume, the surface area of things that can go wrong is enormous: data source APIs rate-limit, model providers throttle token spend, webhooks arrive twice, pipelines stall mid-run when a downstream service goes down. And because Outtake’s agents are chained—each step’s output feeding the next—a failure anywhere can cascade.

Diego Escobedo, Founding Engineer at Outtake, describes the complexity plainly:

Our pipelines are intricate, with many interdependent components chained together, each with their own concurrency challenges. Lots can go wrong in that workflow—including unpredictable behavior from the models themselves. Inngest gives us fine-grained control over errors, and how they propagate, so we can just focus on what the agents should do, not what to do when they fail.

The things Outtake needs to handle—rate limits on both sides, unpredictable agent behavior, error propagation, concurrency across chained components—are all complex engineering problems that used to take weeks to build and weekly effort to maintain.

The solution: Durability in existing code, not adjacent infrastructure

Outtake knows what they're up against: if attackers can use AI to spin up incredibly convincing clones of enterprise websites within minutes, Outtake's agents have to work even faster. The team also can't afford to waste any time on things that aren't directly contributing to agent improvement.

They use Inngest for both. At its simplest, Inngest is an event-driven API—engineers wrap existing code in steps, and durability, flow control, and error handling come with it. For Outtake, it's also the operational backbone of their context-building pipeline—the layer they employ to ensure models get the right data, in the right format, at the right moment.
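The pattern described here, wrapping existing code in steps to get durability, can be sketched with Inngest's TypeScript SDK. This is a minimal illustration, not Outtake's actual code: the app id, event name, and `extractFeatures` helper are hypothetical.

```typescript
import { Inngest } from "inngest";

// Hypothetical app client; the id is illustrative.
const inngest = new Inngest({ id: "threat-scanner" });

// Hypothetical feature-extraction stub, standing in for real logic.
function extractFeatures(html: string) {
  return { length: html.length };
}

// Wrapping existing logic in `step.run` makes each step durable:
// once a step succeeds, its result is memoized and never re-executed,
// even if a later step fails and the run is retried.
export const buildContext = inngest.createFunction(
  { id: "build-context" },
  { event: "scan/page.discovered" }, // hypothetical event name
  async ({ event, step }) => {
    // Step 1: fetch the page. Retried on failure, memoized on success.
    const html = await step.run("fetch-page", async () => {
      const res = await fetch(event.data.url);
      return res.text();
    });

    // Step 2: if this throws, Inngest resumes here on retry,
    // not from "fetch-page".
    return step.run("extract-features", async () => extractFeatures(html));
  }
);
```

The function body reads as ordinary sequential code; durability, retries, and flow control come from the step boundaries rather than from separate infrastructure.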

image.png


Our agents reason and act on millions of datapoints at once. With Inngest, we can queue up millions of events without worrying about loss or disruption. That reliability is everything.

Built on that foundation, the context-building pipeline runs in three stages:

1. Scan. Outtake systematically scans the digital landscape — search engines, domains, websites, forums — to uncover unstructured data that may signal threats. At this volume, incoming data sources rate-limit constantly. Inngest's throttling spreads execution over time, respecting API limits on both ends without any bespoke rate-limit logic in the codebase.

2. Detect. That data flows into advanced detection pipelines where AI agents classify abuse types — phishing, impersonation, intellectual property violations. Pipelines this complex, with many interdependent steps, are where failures cascade. Inngest's durable workflows ensure that if a third-party API goes down mid-run, the workflow pauses and picks up from the last successful step — no lost data, no duplicate work, no manual intervention.

3. Remediate. Guided by each client's brand guidelines, IP policies, and enforcement preferences, agents automatically take action. At millions of events daily, triggering a function run per event isn't sustainable. Inngest's batching groups related signals before passing them downstream, meaningfully reducing compute and database load — without Outtake's engineers having to design or maintain a batching layer.
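The three stages above map onto Inngest function configuration. A hedged sketch follows; the event names, keys, limits, and handler stubs are illustrative assumptions, not Outtake's actual values.

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "outtake-demo" }); // hypothetical id

// Hypothetical stubs standing in for real pipeline logic.
const fetchSource = async (id: string) => ({ id, pages: [] as string[] });
const fetchPage = async (url: string) => ({ url, html: "" });
const classify = async (page: { url: string }) => ({ url: page.url, abuse: "phishing" });
const submitTakedowns = async (threats: unknown[]) => threats.length;

// 1. Scan: throttle spreads runs over time so upstream APIs
// are never hit faster than they allow. Illustrative limit:
// at most 600 runs per minute, per data source.
export const scan = inngest.createFunction(
  {
    id: "scan-source",
    throttle: { limit: 600, period: "1m", key: "event.data.sourceId" },
  },
  { event: "scan/source.requested" },
  async ({ event, step }) =>
    step.run("fetch", () => fetchSource(event.data.sourceId))
);

// 2. Detect: each step is durable. If a third-party API dies
// mid-run, the workflow resumes from the last successful step.
export const detect = inngest.createFunction(
  { id: "classify-threat" },
  { event: "scan/page.found" },
  async ({ event, step }) => {
    const page = await step.run("fetch-page", () => fetchPage(event.data.url));
    return step.run("classify", () => classify(page));
  }
);

// 3. Remediate: batching collects up to 100 related events
// (or waits 5s) into a single run, cutting per-event overhead.
export const remediate = inngest.createFunction(
  {
    id: "remediate-batch",
    batchEvents: { maxSize: 100, timeout: "5s", key: "event.data.clientId" },
  },
  { event: "threat/confirmed" },
  async ({ events, step }) =>
    // With batching, the handler receives `events` (the batch),
    // not a single `event`.
    step.run("takedown", () => submitTakedowns(events.map((e) => e.data)))
);
```

None of the rate-limit, retry, or batching behavior lives in the handler bodies; it is declared in each function's configuration.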

Beyond retries: How Outtake handles the sticky parts of scale

Running AI agents at Internet scale means two things are always true: traffic is unpredictable, and something, somewhere, will eventually go wrong. Retries alone aren't enough. Outtake uses Inngest's flow control and observability features to handle both.

Handling bursts. Threat signals arrive in waves, and security workflows need to process all of them without overwhelming downstream systems. Inngest’s debouncing feature collapses rapid-fire events into single function runs, preventing redundant work when the same signal arrives multiple times in quick succession.
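Debouncing is likewise declared on the function itself. A minimal sketch, assuming an illustrative event name, period, and key:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "debounce-demo" }); // hypothetical id

// If the same domain emits a burst of signals, only one run fires,
// 30s after the last matching event, with that last event's payload.
export const processSignal = inngest.createFunction(
  {
    id: "process-signal",
    debounce: { period: "30s", key: "event.data.domain" },
  },
  { event: "threat/signal.received" }, // hypothetical event name
  async ({ event, step }) => {
    await step.run("process", async () => {
      // Hypothetical processing stub.
      console.log(`processing signal for ${event.data.domain}`);
    });
  }
);
```

Keying the debounce on `event.data.domain` means bursts for different domains are debounced independently, so one noisy target cannot delay work for the others.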

Investigating failures. When something breaks in a complex agentic pipeline, the question isn't just "did it fail" — it's "what exactly happened, and where." Inngest's Insights gives Outtake engineers a clear view of every function run: inputs, outputs, steps, and retry history.

Replay. When an issue is identified and fixed, affected events can be replayed through the corrected function logic. That’s continuity without manual triage. Outtake can use replays to recover from infrastructure outages, verify bug fixes against real failed traffic, and re-process runs in bulk after upstream API failures.

Replays and error handling are so critical. Being able to replay functions is incredibly useful because outcomes can be unpredictable. Whether it's an outage or data inconsistency somewhere that affects function runs, having the ability to retry that work ensures we process everything end-to-end without missing a thing.

Replay functions in Inngest

How Outtake has grown with Inngest

Since this case study was originally published, Outtake’s usage of Inngest has accelerated significantly. In the first quarter of 2026, production functions grew 44%, while development velocity over that time tripled.

This acceleration is a direct result of their foundation. When spinning up a new function carries no reliability overhead—no bespoke retry logic, no custom rate-limit handling, no infrastructure to provision—shipping is easy.

Engineering time that would have gone into building equivalent infrastructure now goes into improving agent detection and remediation logic instead. That’s the compounding advantage of Inngest: every hour not spent on reliability plumbing is an hour spent on the actual product.

Conclusion

Outtake's approach demonstrates that effective AI agents require a robust architecture to handle large datasets, manage rate limits, and ensure reliability. Using Inngest's workflows, throttling, and event-driven design, Outtake developed a cybersecurity system that processes hundreds of thousands of digital attack surfaces daily without missing a single threat.

Ready to build reliable AI agents at scale?

Get started with Inngest, or book a call with our experts to learn how we can help you orchestrate reliable, scalable AI workflows—without building the infrastructure yourself.

Read more customer success stories →