
How We Build an AI Agent: From Discovery Call to Production in 2 Weeks


Our proven AI agent implementation process takes you from discovery call to production in 14 days. See the exact steps, timelines, and n8n workflows.

Most agencies promise AI agents in 3-6 months. We deliver in 2 weeks.

We've built 47 AI agents for service businesses since 2024. Our fastest implementation took 9 days. Our average is 12 days from discovery call to production.

Here's our exact AI agent implementation process. No theory. Just the workflows, timelines, and decisions that get agents handling real customer queries in under 14 days.

Days 1-2: Discovery and Scoping

The discovery call determines everything. We need to understand three things in 60 minutes:

What queries does the agent handle? We analyse your support tickets, sales emails, or booking requests from the past 90 days. We're looking for patterns. If over 60% of queries fall into 3-5 categories, you're an ideal candidate.

Example: A law firm had 847 enquiries last quarter. We categorised them: 31% were "Can you help with my case type?", 28% were "What are your fees?", 18% were "What documents do I need?", and 23% were everything else. That 77% became the agent's scope.

Where does the agent live? Website widget, WhatsApp, email, or all three? Each channel needs different setup time. Website widgets take 2-3 hours. WhatsApp integration adds 4-6 hours. Email agents need 6-8 hours because of authentication and threading complexity.

What systems does it connect to? Your CRM, calendar, payment processor, or knowledge base. Every integration adds time. We map dependencies in the discovery call to avoid surprises.

After discovery, you get a scope document within 24 hours. It lists:

  • The exact queries the agent handles
  • The channels it operates on
  • The systems it integrates with
  • The handoff triggers to humans
  • The 2-week timeline with milestones

Days 3-5: Knowledge Base Construction

Your AI agent is only as good as its knowledge base. We spend 3 days building it properly.

We don't train on your entire website. That creates vague, generic responses. Instead, we build structured knowledge documents:

Service descriptions: One document per service. 200-400 words. Includes what it is, who it's for, typical timeline, and pricing (if you share that publicly).

FAQs: We take your most common questions and write definitive answers. Not the 2-sentence answers on your website. Proper 100-150 word answers with examples and edge cases.

Process documents: How does someone become a client? What happens after they pay? What do they need to prepare? These documents let the agent guide people through your workflows.

Qualification criteria: When should the agent book a call versus answer directly? We document the business rules that determine handoffs.

A typical knowledge base has 15-25 documents totalling 8,000-12,000 words. We write them in Notion or Google Docs, then import them into the n8n vector store.

Here's our n8n workflow for knowledge base ingestion:

  1. Document Source Node: Connects to Notion or Google Drive
  2. Text Splitter: Breaks documents into 500-token chunks with 50-token overlap
  3. OpenAI Embeddings: Converts chunks to vectors using text-embedding-3-small
  4. Pinecone Insert: Stores vectors with metadata (document name, category, last updated)

This workflow runs nightly. When you update a document, the agent has the new information within 24 hours.
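The Text Splitter step above can be sketched in plain JavaScript. This is an illustrative version, not n8n's actual node: token counts are approximated at roughly 4 characters per token, where a real pipeline would use a proper tokeniser.

```javascript
// Illustrative sketch of 500-token chunks with 50-token overlap.
// Tokens are approximated at ~4 characters each; a production
// pipeline would count real tokens with a tokeniser instead.
const CHUNK_TOKENS = 500;
const OVERLAP_TOKENS = 50;
const CHARS_PER_TOKEN = 4; // rough approximation

function splitIntoChunks(text) {
  const chunkChars = CHUNK_TOKENS * CHARS_PER_TOKEN;
  const stepChars = (CHUNK_TOKENS - OVERLAP_TOKENS) * CHARS_PER_TOKEN;
  const chunks = [];
  for (let start = 0; start < text.length; start += stepChars) {
    chunks.push(text.slice(start, start + chunkChars));
    if (start + chunkChars >= text.length) break; // last chunk emitted
  }
  return chunks;
}
```

The overlap matters: without it, an answer that straddles a chunk boundary gets split across two vectors and neither retrieves well.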

Days 6-8: Agent Development in n8n

This is where the actual agent gets built. We use n8n because it's visual, flexible, and self-hostable. No vendor lock-in.

Our standard agent workflow has seven nodes:

1. Trigger Node: Webhook for website, WhatsApp Cloud API for messaging, IMAP for email

2. Message History Node: Pulls the last 10 messages from your database (we use PostgreSQL). Context matters. The agent needs to know what was discussed.

3. Retrieval Node: Queries your Pinecone vector store with the user's message. Returns the 3 most relevant knowledge chunks. We use cosine similarity with a 0.7 threshold.

4. AI Agent Node: This is the core. We use OpenAI's GPT-4o model with a 450-token response limit. The system prompt is typically 600-800 words and includes:

  • Your brand voice guidelines
  • The agent's role and boundaries
  • Examples of good responses
  • Handoff triggers
  • Data handling rules

5. Decision Node: Checks if the response includes a handoff trigger. If yes, it routes to the notification workflow. If no, it continues to formatting.

6. Format Node: Converts the response to the channel's format. WhatsApp has a 4,096 character limit. Email needs HTML formatting. Website widgets need markdown.

7. Response Node: Sends the message back to the user and logs it to your database.

This workflow handles 93% of our deployments. The other 7% need custom tools for booking calendars, checking inventory, or processing payments.
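The Decision node (step 5) comes down to a few lines. A minimal sketch, assuming the system prompt tells the agent to emit a [HANDOFF] marker when a handoff trigger fires; the marker and output shape here are illustrative, not our exact production convention.

```javascript
// Illustrative routing logic for the Decision node. The [HANDOFF]
// marker is an assumed convention the system prompt asks the
// agent to follow when a handoff trigger fires.
const HANDOFF_MARKER = "[HANDOFF]";

function routeResponse(aiResponse) {
  if (aiResponse.includes(HANDOFF_MARKER)) {
    return {
      route: "handoff", // goes to the notification workflow
      // strip the marker so the user never sees it
      message: aiResponse.replace(HANDOFF_MARKER, "").trim(),
    };
  }
  return { route: "respond", message: aiResponse }; // continues to formatting
}
```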

Days 9-10: Integration and Tool Building

If your agent needs to take actions (not just answer questions), we build tools. Tools are functions the AI can call.

Common tools we build:

Calendar booking: Connects to Calendly or Cal.com API. The agent checks availability and books directly. Implementation time: 4-6 hours.

CRM logging: Creates contacts and deals in your CRM when someone qualifies. We've built these for HubSpot, Pipedrive, and Copper. Implementation time: 3-4 hours.

Document generation: Creates proposals or contracts using user inputs. We use Docupilot or Google Docs API. Implementation time: 5-7 hours.

Payment processing: Sends payment links via Stripe. The agent includes the link in its response. Implementation time: 2-3 hours.

Here's an n8n tool example for calendar booking:

The agent's system prompt includes: "You have access to a calendar_check function. Use it when someone wants to book a call."

The n8n workflow has a Function node that:

  • Accepts parameters (date preference, duration)
  • Calls the Calendly API
  • Returns available slots

The AI then formats those slots into a natural response.

When the user picks a time, the agent calls calendar_book with the slot ID. It confirms the booking in its next message.

Tool building is where implementation time varies most. Simple tools take 2-3 hours. Complex multi-step tools can take 15+ hours.
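The filtering step inside a tool like calendar_check can be sketched like this. The slot shape and field names are assumptions for illustration, not Calendly's actual API contract; in the real workflow an HTTP Request node fetches availability first, and the Function node works on the response.

```javascript
// Hedged sketch of calendar_check's core logic. Slot shape and
// field names (id, start, durationMinutes) are illustrative; the
// availability data itself comes from a prior Calendly API call.
function calendarCheck(slots, { datePreference, durationMinutes }) {
  return slots
    .filter((s) => s.start.startsWith(datePreference)) // e.g. "2026-04-02"
    .filter((s) => s.durationMinutes === durationMinutes)
    .map((s) => ({ slotId: s.id, start: s.start })); // what the AI sees
}
```

The AI only ever receives the filtered list, which keeps its response grounded in slots that actually exist.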

Days 11-12: Testing and Refinement

We test every agent for 2 full days before you see it. We use a 47-point checklist covering:

Response quality: We send 25-30 test queries covering common scenarios, edge cases, and attempts to confuse the agent. We're looking for accurate, on-brand responses that match your voice.

Handoff reliability: We trigger every handoff scenario. Does it notify the right person? Does it include enough context? Does it stop responding after handoff?

Integration reliability: If the agent books calendars or logs CRMs, we test each integration 10+ times. API errors happen. We add retry logic and error handling.

Speed: We measure response latency. Our target is under 3 seconds from message received to response sent. If retrieval is slow, we optimise the vector search. If the AI is slow, we adjust token limits.

Conversation flow: We run 5-10 multi-turn conversations. Does the agent remember context? Does it stay on topic? Does it handle topic switches gracefully?

After internal testing, you get access to a staging environment. You test it with real scenarios. We refine based on your feedback. This typically requires 3-5 adjustment cycles over 48 hours.

Days 13-14: Launch and Monitoring

Launch day is anticlimactic. That's intentional. We soft launch to 20% of traffic first.

For website agents, we use a JavaScript condition that shows the agent to every fifth visitor. For WhatsApp or email, we forward 20% of incoming messages to the agent.
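The traffic-split condition is small. A hedged sketch, assuming a stable per-visitor ID; the hashing scheme itself is an illustrative choice, and any stable hash works as long as the same visitor always gets the same answer.

```javascript
// Illustrative soft-launch condition: deterministically bucket
// each visitor so the same person always sees the same experience.
// The hash function here is a simple stand-in, not a requirement.
function inSoftLaunch(visitorId, rolloutPercent = 20) {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep unsigned 32-bit
  }
  return hash % 100 < rolloutPercent; // e.g. 20 => ~1 in 5 visitors
}
```

Determinism is the point: a visitor who saw the agent yesterday shouldn't get the old experience today.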

We monitor four metrics in real-time:

Resolution rate: What percentage of conversations end without human handoff? Our average is 68%. Anything above 60% is solid. Below 50% means the scope was too broad.

Response accuracy: We manually review 20 conversations per day. Are responses factually correct? Are they helpful? We score each on a 3-point scale.

Handoff appropriateness: When the agent hands off, should it have? We review every handoff. False negatives (should have handed off but didn't) are worse than false positives.

User satisfaction: For website agents, we ask "Was this helpful?" after each conversation. For WhatsApp and email, we track response sentiment using a simple classifier.
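As a sketch of what "a simple classifier" can mean here, a keyword counter is enough to start with. The word lists are illustrative; in practice this could just as well be a single cheap LLM call.

```javascript
// Deliberately simple keyword-based sentiment classifier,
// sketched for illustration. The word lists are assumptions;
// a production version would be tuned on real conversations.
const NEGATIVE = ["unhappy", "frustrated", "wrong", "useless", "complaint"];
const POSITIVE = ["thanks", "great", "perfect", "helpful", "brilliant"];

function classifySentiment(message) {
  const text = message.toLowerCase();
  const neg = NEGATIVE.filter((w) => text.includes(w)).length;
  const pos = POSITIVE.filter((w) => text.includes(w)).length;
  if (neg > pos) return "negative";
  if (pos > neg) return "positive";
  return "neutral";
}
```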

After 24-48 hours at 20% traffic, we scale to 100%. The agent is live.

Post-launch, we monitor for 2 weeks. We have a Slack channel where you report issues. Response time for critical bugs: under 2 hours. For improvements: we batch them into weekly updates.

The Numbers Behind Our Process

47 agents built. Here's what we've learned:

Average implementation time: 12 days (range: 9-16 days)

Average knowledge base size: 19 documents, 9,400 words

Average resolution rate: 68% (range: 51-84%)

Average cost to build: £8,500-£12,000 depending on integrations

Average monthly operating cost: £340-£580 (API calls, vector storage, hosting)

Average time saved per client: 15-22 hours per week

The fastest implementations had clear scope, existing documentation, and simple integrations. The longest had vague requirements, no documentation, and complex multi-system workflows.

Why Two Weeks Works

Most agencies take months because they overthink it. They try to handle every edge case on day one. They build complex workflows before testing simple ones.

We ship fast by scoping tightly. Your agent doesn't handle everything. It handles the 60-80% of queries that follow patterns. Humans handle the rest.

We use n8n because it's faster than custom code but more powerful than no-code tools. We can build, test, and iterate in hours, not days.

We've done this 47 times. We know which questions to ask, which integrations are straightforward, and which prompts work. Experience compounds.

Ready to Build Your AI Agent?

We have 3 implementation slots available in April 2026. Discovery calls are 60 minutes. No sales pitch. We'll tell you honestly if an agent makes sense for your business.

If it does, you'll have a working agent handling real customer queries in 2 weeks.
