Generating Smart, Context-Aware Support Responses

Overview

Once the message has been classified and routed, the AI Support Agent takes over. This is where GPT-4o (or a similar LLM) dynamically generates a structured, helpful, and brand-aligned response — drawing from past data, order history, FAQs, and your custom instructions.

This layer isn't just about replying; it's about doing so intelligently.


1. Model Configuration

| Parameter | Value |
| --- | --- |
| Model | OpenAI GPT-4o |
| Max Tokens | 1000–1500 |
| Temperature | 0.3–0.5 (balanced precision and tone) |
| JSON Mode | Enabled if you want structured actions |
| Memory/Context | Enabled (if using Qdrant or Pinecone) |
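
If you want to prototype this step outside of n8n, the same configuration maps directly onto the OpenAI Python SDK. The sketch below is a minimal example under those assumptions; the system prompt and context strings are placeholders for the values described in the next two sections.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "..."     # the Support Agent prompt from section 2
customer_context = "..."  # order info, policies, FAQs (see section 3)

response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=1200,    # within the 1000-1500 range above
    temperature=0.4,    # balanced precision and tone
    # JSON mode: the API requires the word "JSON" to appear somewhere
    # in your messages when this is enabled.
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": customer_context},
    ],
)

print(response.choices[0].message.content)
```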

2. System Prompt (Support Agent)

You’ll paste this prompt into the System Message section of the OpenAI node:


You are an AI customer support assistant.

You reply to messages on behalf of a company, using a helpful, calm, and respectful tone.

You may receive:

Based on provided context (order info, company policies, FAQs), respond with:

  1. A short, professional message to the customer
  2. A flag if the issue should be escalated
  3. A summary of what action was taken

If you don’t have enough info, ask a clarifying question or suggest a next step.
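
If you enable JSON mode, it helps to pin down the exact keys you expect back. The field names below (reply, escalate, summary) are one possible convention rather than something the prompt above guarantees; if you adopt them, spell them out in the prompt so the model returns them consistently. A minimal parsing sketch under that assumption:

```python
import json
from dataclasses import dataclass

@dataclass
class SupportReply:
    reply: str      # short, professional message to the customer
    escalate: bool  # True if the issue should be handed to a human
    summary: str    # what action was taken

def parse_support_reply(raw: str) -> SupportReply:
    """Parse the model's JSON output into a typed object."""
    data = json.loads(raw)
    return SupportReply(
        reply=data.get("reply", ""),
        escalate=bool(data.get("escalate", False)),
        summary=data.get("summary", ""),
    )

# Example with a hypothetical model response:
result = parse_support_reply(
    '{"reply": "Your refund has been issued.", "escalate": false, '
    '"summary": "Processed refund for the order."}'
)
if result.escalate:
    print("Route to a human agent:", result.summary)
else:
    print("Send to customer:", result.reply)
```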


3. Input Data Passed to GPT-4o

Your input payload should include: