Support Enablement at Scale: Faster Answers, Fewer Tickets

TL;DR

  • Implemented a knowledge-grounded assistant in Slack to speed up answers for Salesforce users.
  • Pilot measurement: ~250 questions/week, ~20% ticket deflection, 4.4–4.6/5 satisfaction.
  • Designed for reliability: guardrails, logging, and human-in-the-loop workflows.

Context

Internal Salesforce users frequently needed quick answers to configuration and process questions. Support teams were overwhelmed with repetitive questions, and users experienced delays waiting for responses. Knowledge was scattered across wikis, Slack history, and tribal knowledge.

Problem

Support noise slows shipping. When teams can't get quick answers to common questions, they either wait (blocking progress) or guess (creating errors). This created a drag on delivery velocity and increased support burden.

Goals & Success Metrics

  • Reduce time-to-answer for common Salesforce questions
  • Deflect repetitive tickets from the support queue
  • Maintain high answer quality and user trust
  • Build reliable, auditable AI system with appropriate guardrails

My Role

Product Owner & Builder — designed the solution architecture, built the RAG pipeline, established operational practices, and measured outcomes.

Constraints

  • Must use existing approved infrastructure (Slack, internal APIs)
  • Cannot expose sensitive data or hallucinate authoritative answers
  • Must be transparent about limitations and confidence
  • Human-in-the-loop required for edge cases

Approach

1. Discovery

Analyzed Slack question patterns and support ticket themes to identify high-frequency, low-complexity questions suitable for automation.

2. Plan

Designed RAG architecture with vector search over curated knowledge base. Defined confidence thresholds, fallback behaviors, and logging requirements.
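
The confidence-threshold and fallback design can be sketched as follows. The cutoff values, names, and routing labels here are illustrative assumptions, not the production configuration:

```python
from dataclasses import dataclass, field

# Illustrative thresholds only -- the real values were tuned during the pilot.
ANSWER_THRESHOLD = 0.75    # above this, answer directly with citations
ESCALATE_THRESHOLD = 0.40  # below this, hand off to a human

@dataclass
class Draft:
    answer: str
    confidence: float            # retrieval/answer confidence in [0, 1]
    sources: list = field(default_factory=list)

def route(draft: Draft) -> str:
    """Decide how a drafted answer is delivered, per the fallback design."""
    if draft.confidence >= ANSWER_THRESHOLD:
        return "answer"          # post the answer with source citations
    if draft.confidence >= ESCALATE_THRESHOLD:
        return "answer_hedged"   # answer, flag low confidence, offer escalation
    return "escalate"            # route to the support channel and log for review
```

Keeping the routing rule as a pure function of the draft makes the fallback behavior easy to test and to audit from logs.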

3. Ship

Built retrieval pipeline with Claude, implemented Slack integration, added logging and monitoring, deployed with gradual rollout.
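
A minimal sketch of the retrieval step, using a toy in-memory index and hand-made embeddings in place of the real vector store. Everything named here is illustrative; in production the top passages were placed in the model prompt for Claude to answer from:

```python
import math

# Toy in-memory index standing in for the vector store; the embeddings are
# hand-made 3-dimensional vectors purely for illustration.
KB = {
    "How do I reset a Salesforce sandbox?": [0.9, 0.1, 0.0],
    "How are opportunity stages configured?": [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the top-k knowledge-base entries ranked by cosine similarity."""
    ranked = sorted(KB.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return ranked[:k]

# The retrieved passages are then grounded into the prompt, roughly:
#   "Context:\n{passages}\n\nQuestion: {user_question}"
```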

4. Measure

Tracked question volume, deflection rate, satisfaction scores, and edge case escalations. Iterated based on feedback patterns.

What Shipped

  • Slack-native assistant with natural language interface
  • RAG pipeline grounded in curated internal knowledge
  • Confidence indicators and source citations
  • Automatic escalation for low-confidence responses
  • Comprehensive logging for audit and improvement
  • Admin dashboard for monitoring and content updates
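
The per-interaction audit logging can be sketched as one structured record per question. The field names below are assumptions for illustration, not the production schema:

```python
import json
import datetime

def log_interaction(question, answer, confidence, sources, escalated):
    """Serialize one assistant interaction as a JSON audit record."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "confidence": confidence,  # drives escalation analysis later
        "sources": sources,        # citations shown to the user
        "escalated": escalated,    # whether a human was looped in
    }
    return json.dumps(record)  # shipped to the monitoring store in production
```

Structured records like this are what make the "logging everything enabled rapid iteration" learning below possible: answer quality can be sliced by confidence, source, and escalation outcome.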

Impact

  • Questions handled: ~250/week (steady-state usage)
  • Ticket deflection: ~20% (estimated from resolution patterns)
  • User satisfaction: 4.4–4.6/5 (based on feedback ratings)
  • Time saved per request: 25–40 min (pilot estimate)
  • Active users: ~150/week (regular weekly usage)
  • Repeat usage: ~40% (users returning weekly)

Artifacts

Available upon request for hiring discussions

  • Architecture diagram
  • Sample conversation logs (anonymized)
  • Monitoring dashboard screenshot

Evidence & Assumptions

Verified

Question volume and satisfaction scores from Slack analytics and feedback collection. Usage patterns from logging infrastructure.

Estimated

Ticket deflection was estimated by comparing question themes to the reduction in matching support tickets. Time saved is based on self-reported resolution times in the pilot.

Notes

Deflection rate is a pilot measurement with defined methodology. Time savings are extrapolations from sampled interactions.

Measurement Methodology

Questions defined as direct queries to the assistant (excluding greetings and follow-ups). Timeframe: 4-week pilot period. Satisfaction captured via reaction emoji prompt after each response. Deflection estimated by matching question themes to support ticket categories and measuring category-level reduction during pilot.
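
The category-level deflection estimate reduces to simple arithmetic: compare ticket counts per matched category before and during the pilot. The ticket counts below are made up purely to illustrate the method:

```python
# Hypothetical weekly ticket counts per matched category (illustrative only).
baseline = {"config": 120, "process": 80}  # pre-pilot
pilot    = {"config": 95,  "process": 65}  # during pilot

# Deflection = category-level reduction relative to the baseline total.
deflected = sum(baseline[c] - pilot[c] for c in baseline)
rate = deflected / sum(baseline.values())
print(f"estimated deflection: {rate:.0%}")  # 40 of 200 tickets -> 20%
```

This is deliberately an estimate: it attributes the whole category-level drop to the assistant, which is why the figure is labeled as estimated rather than verified.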

Learnings

  • Transparency about AI limitations increased user trust
  • Human escalation path is essential; AI should augment, not replace, human support
  • Logging everything enabled rapid iteration on answer quality
  • Starting with narrow scope (Salesforce only) built confidence for expansion

How Modern Tooling Helped

AI was used to retrieve the right internal knowledge quickly, with safeguards. This reduced repetitive questions, improved time-to-answer, and freed teams to focus on higher-value work.

Want to discuss this work?

I'm happy to walk through the details, share artifacts, or discuss how similar approaches might apply to your challenges.