Service Hub Data Points
The Service Hub section evaluates customer support operations and service quality across four analysis blocks. These blocks require Service Hub Starter or above for basic ticket data and Service Hub Professional or above for feedback surveys, SLA tracking, and knowledge base features.
Ticket Pipeline
The Ticket Pipeline block analyzes the structure and flow of support ticket processing, measuring how efficiently tickets move from creation to resolution.
Data points captured:
- Pipeline count — Number of ticket pipelines configured. Multiple pipelines are common for different support tiers (general support, VIP, billing, technical) but should have clear purposes.
- Stage configuration — Number of stages per pipeline and the logical flow from open to closed. Stages should represent distinct actions or handoff points.
- Ticket volume — Total tickets created over the past 30 and 90 days. Broken down by pipeline and creation source (form, email, chat, manual).
- Stage distribution — Current distribution of open tickets across stages. A high concentration in a single stage indicates a bottleneck.
- SLA configuration — Whether Service Level Agreements are configured with time-to-first-response and time-to-close targets. SLAs require Service Hub Professional.
- SLA compliance rate — Percentage of tickets meeting SLA targets versus tickets that breached. Broken down by SLA type (response vs resolution).
- Ticket priority usage — Whether ticket priority levels are configured and consistently applied. Missing priority data prevents proper triage and reporting.
- Automatic ticket routing — Whether tickets are automatically assigned to teams or reps based on properties, pipeline, or rotation rules.
What good looks like: Clear pipeline stages with logical progression, SLAs configured and compliance above 90%, priority levels used on all tickets, automatic routing configured to reduce manual triage, and no single stage holding more than 40% of open tickets.
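The SLA compliance rate described above is a straightforward percentage. A minimal sketch follows; the `Ticket` fields and targets are illustrative placeholders, not actual HubSpot API properties:

```python
from dataclasses import dataclass

# Hypothetical ticket records; field names are illustrative,
# not real HubSpot ticket properties.
@dataclass
class Ticket:
    first_response_minutes: float  # time to first agent response
    close_minutes: float           # time from creation to close
    response_sla_minutes: float    # SLA target for first response
    close_sla_minutes: float       # SLA target for resolution

def sla_compliance(tickets, kind="response"):
    """Percentage of tickets meeting the given SLA target."""
    if not tickets:
        return 0.0
    if kind == "response":
        met = sum(t.first_response_minutes <= t.response_sla_minutes for t in tickets)
    else:
        met = sum(t.close_minutes <= t.close_sla_minutes for t in tickets)
    return 100.0 * met / len(tickets)

tickets = [
    Ticket(30, 400, 60, 480),  # met both targets
    Ticket(90, 400, 60, 480),  # breached the response SLA
    Ticket(45, 600, 60, 480),  # breached the resolution SLA
    Ticket(10, 100, 60, 480),  # met both targets
]
print(sla_compliance(tickets, "response"))    # 75.0
print(sla_compliance(tickets, "resolution"))  # 75.0
```

Reporting response and resolution compliance separately, as shown, matches the breakdown by SLA type described above.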
Resolution Metrics
The Resolution Metrics block measures the speed and quality of support responses, providing quantitative data on service team effectiveness.
Data points captured:
- First response time — Average time from ticket creation to the first agent response. This is one of the strongest predictors of customer satisfaction. Measured as both mean and median (the median is more representative because it is not skewed by outlier tickets).
- Resolution time — Average time from ticket creation to ticket close. Broken down by pipeline, priority level, and ticket category for meaningful comparisons.
- Resolution time by priority — How resolution times vary across priority levels (critical, high, medium, low). Critical tickets should resolve significantly faster than low-priority ones.
- Customer Satisfaction (CSAT) — The average CSAT score from post-resolution surveys. Requires Service Hub Professional with feedback surveys configured.
- Tickets per agent — Average open and resolved ticket volume per support team member. Identifies capacity issues and workload imbalances.
- Reopen rate — Percentage of tickets that are reopened after being marked as closed. High reopen rates indicate incomplete resolutions.
- One-touch resolution rate — Percentage of tickets resolved in a single interaction without requiring escalation or follow-up. Higher rates indicate effective frontline support.
- Escalation rate — Percentage of tickets escalated to a different tier or team. Tracked alongside resolution outcomes to measure escalation effectiveness.
What good looks like: Median first response time under 1 hour during business hours, CSAT above 4.0 out of 5.0, reopen rates below 5%, and one-touch resolution rates above 50%.
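The mean-versus-median distinction for first response time matters in practice. A short illustration with made-up response times, using Python's standard `statistics` module:

```python
import statistics

# Hypothetical first response times in minutes; the last value is
# a single outlier ticket that sat overnight.
response_minutes = [12, 18, 25, 30, 35, 40, 45, 55, 60, 1440]

mean = statistics.mean(response_minutes)
median = statistics.median(response_minutes)
print(f"mean:   {mean:.1f} min")    # 176.0 — inflated by one outlier
print(f"median: {median:.1f} min")  # 37.5 — typical ticket experience
```

Here the mean suggests a response problem that nine of ten customers never experienced, which is why the median is the better headline number against the under-1-hour target.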
Customer Feedback
The Customer Feedback block evaluates how the organization collects, measures, and acts on customer sentiment data.
Data points captured:
- NPS (Net Promoter Score) — The overall NPS score and trend over time (Full mode only). NPS measures customer loyalty by asking how likely customers are to recommend the product or service.
- NPS response rate — The percentage of surveyed customers who respond to NPS surveys. Low response rates reduce the statistical significance of the score.
- NPS distribution — Breakdown of responses into promoters (9-10), passives (7-8), and detractors (0-6). Understanding the distribution is as important as the aggregate score.
- Survey types in use — Which feedback survey types are configured: NPS, CSAT (Customer Satisfaction), CES (Customer Effort Score). Using multiple survey types provides a more complete picture.
- Survey frequency — How often surveys are sent and whether there are controls to prevent survey fatigue (e.g., suppression periods between surveys).
- Feedback collection channels — Whether feedback is collected via email, in-app, post-ticket, or at key journey milestones. Multiple touchpoints capture feedback at different stages of the customer experience.
- Feedback follow-up — Whether detractor or low-scoring responses trigger automated follow-up actions (alerts, tasks, or workflows) to close the feedback loop.
- Trend analysis — NPS and CSAT trends over the past 6-12 months (Full mode only). Trending data reveals whether service quality is improving or declining.
What good looks like: NPS above 30 (above 50 is excellent), survey response rates above 20%, at least two survey types in active use, detractor responses triggering automated follow-up, and feedback collected at multiple customer journey points.
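NPS is derived directly from the promoter/passive/detractor distribution described above: the percentage of promoters minus the percentage of detractors. A minimal sketch with hypothetical survey scores:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the response total but neither add
    to nor subtract from the score.
    """
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical sample: 50 promoters, 30 passives, 20 detractors
scores = [10] * 50 + [8] * 30 + [4] * 20
print(nps(scores))  # 30.0 — meets the "good" benchmark above
```

Because passives dilute the score without moving it, two portals with identical NPS can have very different distributions, which is why the breakdown is worth tracking alongside the aggregate number.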
Knowledge Base
The Knowledge Base block assesses the self-service support content library that helps customers find answers without contacting support.
Data points captured:
- Article count — Total published knowledge base articles. Compared against ticket volume to assess self-service coverage.
- Category organization — How articles are organized into categories and subcategories. Good structure makes content discoverable for both customers and search engines.
- Article freshness — Percentage of articles updated within the past 6 months. Stale articles may contain outdated information that causes customer confusion.
- Search effectiveness — How often knowledge base searches return relevant results versus producing no results or leading to support tickets. Low search effectiveness indicates content gaps.
- Article traffic — Views per article over the past 30 and 90 days. Identifies high-value articles and content that may not be discoverable.
- Self-service ratio — The ratio of knowledge base views to support tickets created. A higher ratio indicates customers are successfully finding answers on their own.
- Article feedback — Whether articles include helpful/not helpful ratings and the aggregate rating across the knowledge base.
- Search queries without results — The most common search terms that return zero results. These represent gaps in the knowledge base that should be addressed with new content.
What good looks like: At least 50 articles for portals with active support operations, all articles updated within the past 12 months (80% within 6 months), a self-service ratio above 5:1 (5 KB views per ticket), and a system in place to monitor and address zero-result searches.
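The self-service ratio above is a simple quotient of knowledge base views to tickets created over the same period. A sketch with made-up traffic numbers:

```python
def self_service_ratio(kb_views, tickets_created):
    """Knowledge base views per support ticket created in the same period."""
    if tickets_created == 0:
        # No tickets at all: self-service is fully absorbing demand.
        return float("inf")
    return kb_views / tickets_created

# Hypothetical 30-day figures
ratio = self_service_ratio(kb_views=2600, tickets_created=400)
print(f"{ratio:.1f}:1")  # 6.5:1 — above the 5:1 benchmark
```

Comparing both figures over the same window matters: pairing 90-day views with 30-day ticket volume would overstate the ratio roughly threefold.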
Next Steps
- Automation Data Points — Workflows, sequences, and lead scoring
- Scoring: Section Scoring — How Service Hub scores are calculated
- AI Insights: Block Insights — How AI generates Service Hub recommendations