Case Study - Aomni

Productionizing AI-driven sales flows using serverless LLMs

For anyone who is building multi-step AI agents (such as AutoGPT-type systems), I highly recommend building on top of Inngest's job queue orchestration framework. The traceability it provides out of the box is super useful, plus you get timeouts & retries for free.

David Zhang
CEO & Co-founder

Aomni provides intelligent sales planning tools that help businesses optimize their sales process through AI-driven insights, all built on top of Inngest.

Aomni's approach requires innovative use of chained LLM calls, including tree of thought, chain of thought, retrieval augmented generation (RAG), and fine-tuning. Everything from parsing websites and analyzing case studies to generating custom sales prompts depends on these flows, meaning they need to run reliably, efficiently, and at scale.

Productionizing LLMs without queues

Complex chains of LLM calls require seamless orchestration, reliability, and scalability. Legacy job queues aren't suited to these modern workflows: they offer little insight into execution, poor flow control, difficult maintenance, and inadequate state management, leaving developers to build and maintain the core flow and state logic themselves.

Production-grade LLM applications require complex state management. For RAG, sources have to be parsed, chunked, and ingested into vector databases within a single workflow. To augment each LLM call, the workflow then needs to query the vector database and update the context before the call runs, reliably, with state and subsequent LLM chains handled by the underlying infrastructure.
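
To make this concrete, here is a minimal sketch of such a RAG workflow written as a single Inngest function. The Inngest API shown (`Inngest`, `createFunction`, `step.run`) is real; the app id, the event name, and the parse, chunk, vector DB, and LLM helpers are hypothetical placeholders standing in for Aomni's own code.

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "aomni-demo" }); // hypothetical app id

// Hypothetical stand-ins for Aomni's own parsing, vector DB, and LLM clients.
declare function parseSource(url: string): Promise<string>;
declare function chunkText(text: string): Promise<string[]>;
declare const vectorDb: {
  upsert(chunks: string[]): Promise<void>;
  query(question: string, opts: { topK: number }): Promise<string[]>;
};
declare function callLLM(input: {
  question: string;
  context: string[];
}): Promise<string>;

export const ragFlow = inngest.createFunction(
  { id: "rag-flow" },
  { event: "source/submitted" }, // hypothetical event name
  async ({ event, step }) => {
    // Each step.run result is persisted by Inngest, so retrying a later
    // step never re-executes the steps that have already succeeded.
    const parsed = await step.run("parse-source", () =>
      parseSource(event.data.url)
    );
    const chunks = await step.run("chunk-source", () => chunkText(parsed));
    await step.run("ingest-chunks", () => vectorDb.upsert(chunks));

    // Query the vector DB to build context just before the augmented call.
    const context = await step.run("retrieve-context", () =>
      vectorDb.query(event.data.question, { topK: 5 })
    );
    return await step.run("augmented-llm-call", () =>
      callLLM({ question: event.data.question, context })
    );
  }
);
```

Because every step is an independently retried, checkpointed unit, the flow picks up exactly where it left off after any failure, which is the state management legacy queues leave to the developer.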

Leveraging Inngest for zero-infrastructure workflows

Aomni turned to Inngest, the leading platform for durable event-driven workflows, to implement its complex AI-driven sales account planning solution, from local development through to production scale. Leveraging Inngest provided several benefits:

  1. Automatic state management and reliability: Inngest automatically tracks function state, ensuring that LLM flows run from start to finish with the correct data, while automatic retries let calls to LLM providers recover from transient failures (see the sketch after this list).
  2. History and audit trails: Built-in history and audit trails allow Aomni to track the usage and responses of each chained LLM call, ensuring consistent, high-quality output. State is tracked and visible at every step.
  3. Infrastructure-free development: With Inngest, Aomni's engineers could focus solely on business logic and workflow chaining instead of building custom infrastructure. Serverless platforms such as Vercel handled the underlying complexity, freeing the team to concentrate on refining their AI models and workflow orchestration.
  4. Product-only focus: By using Inngest's workflow engine in both production and local development, Aomni was able to focus on its core flows and evaluate the cost and response-quality tradeoffs between different LLM models at every call in the flow.
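
As a brief illustration of the first two points, here is a sketch of how retries and audit trails are configured. The `retries` function option and per-step history are real Inngest features; the event name and `generateAccountPlan` helper are hypothetical.

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "aomni-demo" }); // hypothetical app id

// Hypothetical helper wrapping a chained LLM call.
declare function generateAccountPlan(accountId: string): Promise<string>;

export const planFn = inngest.createFunction(
  { id: "generate-account-plan", retries: 4 }, // failed steps retry automatically
  { event: "account/plan.requested" },         // hypothetical event name
  async ({ event, step }) => {
    // The output of every named step is recorded, giving a browsable
    // history of each chained LLM call in the Inngest dashboard.
    return await step.run("draft-plan", () =>
      generateAccountPlan(event.data.accountId)
    );
  }
);
```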

Aomni's AI-driven sales account planning product hit the market faster thanks to Inngest's event-driven workflow platform. If you're interested in productionizing complex LLM flows, on servers or in serverless environments, reach out to our solutions engineering team.

Read more customer success stories →

Talk to a product expert

Chat with sales engineering to learn how Inngest can help your team ship more reliable products, faster.