Anthropic BDR Evidence / 03

Grounded AI Customer-Support Workflow

An AI support workflow whose answer quality improved through routing logic, guardrails, and evaluation, not model choice alone.

Problem

Context

The customer-support workflow needed more reliable answers and clearer routing between different AI response paths.

What I Owned

  1. Routing policy
  2. RAG strategy
  3. Guardrails
  4. Evaluation protocol
  5. Grounded-answer constraints
Design Principle

The goal was not to make the model sound more confident. The goal was to make the system answer only when it had enough grounding, route uncertain cases properly, and evaluate performance under the same test conditions.
Routing Logic

Separated policy and structured FAQ questions from document-grounded explanatory cases, then used routing rules and output constraints to reduce hallucination, over-generation, and cost exposure.
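The routing policy can be sketched as a small decision function. Everything below is an illustrative assumption, not the production logic: the function name, the threshold values, and the idea of passing in precomputed FAQ-match and retrieval scores are all invented for demonstration.

```python
# Hypothetical sketch of the routing policy described above.
# Thresholds and score inputs are illustrative assumptions.

def route(query: str, faq_match_score: float, retrieval_score: float) -> str:
    """Decide which response path a support query takes."""
    FAQ_THRESHOLD = 0.90        # high-confidence structured/policy match
    GROUNDING_THRESHOLD = 0.75  # minimum retrieval support for a grounded answer

    if faq_match_score >= FAQ_THRESHOLD:
        return "faq"        # answer from the structured FAQ, no free generation
    if retrieval_score >= GROUNDING_THRESHOLD:
        return "rag"        # document-grounded explanatory answer
    return "escalate"       # not enough grounding: route to a human

# Example decisions:
print(route("How do I reset my password?", 0.95, 0.40))   # faq
print(route("Why was my invoice prorated?", 0.30, 0.82))  # rag
print(route("Ambiguous edge case", 0.20, 0.10))           # escalate
```

The design choice this illustrates: uncertain cases fall through to escalation by default, so the system abstains rather than generating an ungrounded answer.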
Evaluation Protocol

Compared before-and-after results on the same internal test set of roughly 200 queries, with human-reviewed criteria and failure-mode review.
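A minimal sketch of that protocol, assuming a fixed test set and binary human-reviewed pass/fail labels per query. The per-query data here is invented to match the aggregate accuracy figures reported in this case study; the real review used graded human criteria, not synthetic labels.

```python
# Illustrative before/after comparison on a fixed test set.
# The label lists are synthetic, constructed to reproduce the
# reported ~27% -> ~85% accuracy on a 200-query set.

def accuracy(results: list[bool]) -> float:
    """Fraction of queries whose answer met the human-reviewed criteria."""
    return sum(results) / len(results)

before = [True] * 54 + [False] * 146   # 54/200 correct
after = [True] * 170 + [False] * 30    # 170/200 correct

print(f"before: {accuracy(before):.0%}")  # before: 27%
print(f"after:  {accuracy(after):.0%}")   # after:  85%
```

Keeping the test set identical across runs is what makes the before/after numbers comparable; changing the queries between measurements would confound the result.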
Outcome

Improved answer accuracy from 27% to 85% on the same internal test set.
Learning

For AI workflows, the commercial explanation should include constraints: when to use RAG, when not to use it, what the router decides, and how quality is evaluated.

Why this matters for Anthropic BDR

Role fit

Selling frontier AI requires disciplined communication. A BDR needs to explain what an AI system can do, where it should be constrained, and how a customer can adopt it responsibly without overpromising.