Automation Data Points

The Automation section evaluates how effectively the portal uses HubSpot’s automation capabilities across three analysis blocks. Workflows require any Hub at the Professional tier or above. Sequences require Sales Hub Professional or above. Lead scoring requires Marketing Hub Professional or above (or Operations Hub Professional for custom scoring).

The Workflow Health block provides a comprehensive assessment of all workflows in the portal, measuring their operational status, effectiveness, and maintenance.

Data points captured:

  • Active workflow count — Total number of currently active (turned on) workflows. Compared against the portal’s complexity and team size to assess whether automation is underutilized or sprawling.
  • Inactive workflow count — Workflows that exist but are turned off. A high number of inactive workflows indicates either experimentation (good) or abandoned automations that add confusion (bad).
  • Workflow types — Distribution of workflows by type: contact-based, company-based, deal-based, ticket-based, quote-based, custom object-based, and scheduled. Reveals which areas of the business are most automated.
  • Enrollment rates — The number of contacts or objects currently enrolled in each workflow and the enrollment rate over the past 30 days. Workflows with zero enrollments may have trigger conditions that are too narrow or are no longer relevant.
  • Completion rates — The percentage of enrolled objects that reach the end of a workflow without being unenrolled, suppressed, or encountering an error. Low completion rates indicate workflow design issues.
  • Error rates — The percentage of workflow executions that encounter errors. Common errors include failed email sends, missing property values, and integration action failures.
  • Workflow complexity — The number of actions, branches, and delays in each workflow. Highly complex workflows (20+ actions, multiple branches) are harder to maintain and debug.
  • Goal criteria usage — Whether workflows have goal criteria configured (available in Professional). Goals allow contacts to exit the workflow when they achieve the desired outcome, improving efficiency.
  • Suppression list usage — Whether workflows use suppression lists to prevent enrollment of contacts who should not be in the workflow (e.g., existing customers in a lead nurture workflow).
  • Naming conventions — Consistency of workflow naming. Standardized prefixes or categories (e.g., “[Marketing] Lead Nurture - New Subscribers”) make large workflow libraries manageable.

What good looks like: Active workflows have enrollment rates above zero, completion rates above 70%, error rates below 5%, goal criteria configured where applicable, suppression lists preventing inappropriate enrollments, and consistent naming conventions across the library.
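As a rough illustration, the per-workflow checks above can be sketched as a single pass over workflow records. The field names (`enrolled_30d`, `completed`, and so on) are illustrative placeholders, not the HubSpot API schema, and the complexity cutoff on branches is an assumption:

```python
# Minimal sketch of the Workflow Health checks, assuming each workflow is a
# dict with illustrative field names (not the real HubSpot API schema).

def assess_workflow(wf: dict) -> list[str]:
    """Return a list of health flags for one workflow record."""
    flags = []
    if wf["enrolled_30d"] == 0:
        flags.append("zero enrollments: trigger may be too narrow or stale")
    total = wf["completed"] + wf["unenrolled"] + wf["errored"]
    if total:
        completion_rate = wf["completed"] / total
        error_rate = wf["errored"] / total
        if completion_rate < 0.70:
            flags.append(f"low completion rate ({completion_rate:.0%} < 70%)")
        if error_rate > 0.05:
            flags.append(f"high error rate ({error_rate:.0%} > 5%)")
    if wf["action_count"] >= 20 or wf["branch_count"] > 3:
        flags.append("high complexity: harder to maintain and debug")
    if not wf["has_goal"]:
        flags.append("no goal criteria configured")
    return flags

example = {
    "enrolled_30d": 120, "completed": 60, "unenrolled": 30, "errored": 10,
    "action_count": 25, "branch_count": 2, "has_goal": False,
}
print(assess_workflow(example))  # four flags for this sample workflow
```

The same thresholds from the summary above (70% completion, 5% errors, 20+ actions) drive each flag, so the output maps directly onto the audit's "what good looks like" criteria.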

The audit groups workflows into functional categories for analysis:

  • Lead nurture — Drip campaigns, onboarding sequences, re-engagement
  • Internal notification — New deal alerts, ticket assignments, escalation triggers
  • Data management — Property setting, lifecycle stage updates, record cleanup
  • Sales enablement — Lead rotation, task creation, follow-up reminders
  • Service automation — Ticket routing, SLA escalation, feedback triggers

This categorization helps the AI insights engine provide targeted recommendations for each automation use case.
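The audit's actual grouping logic is not documented here, but a simple keyword pass over workflow names (leaning on the naming conventions described earlier) illustrates the idea. The keyword lists and match order are assumptions:

```python
# Illustrative keyword-based categorization of workflows by name.
# Keyword lists are assumptions, not the audit's real classifier.
CATEGORY_KEYWORDS = {
    "Lead nurture": ["nurture", "drip", "onboarding", "re-engagement"],
    "Internal notification": ["alert", "notify", "notification", "escalation"],
    "Data management": ["cleanup", "lifecycle", "property"],
    "Sales enablement": ["rotation", "task", "follow-up"],
    "Service automation": ["ticket", "routing", "feedback"],
}

def categorize(workflow_name: str) -> str:
    """Return the first category whose keywords appear in the name."""
    name = workflow_name.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return category
    return "Uncategorized"

print(categorize("[Marketing] Lead Nurture - New Subscribers"))  # Lead nurture
```

A name-based classifier like this is only as good as the portal's naming conventions, which is one reason the audit checks naming consistency in the first place.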

The Sequence Performance block evaluates sales sequences — the semi-automated email and task cadences used by sales reps for outbound outreach.

Data points captured:

  • Active sequence count — Total sequences currently in use across the sales team. Compared against the number of sales reps to assess adoption.
  • Sequence adoption rate — Percentage of sales reps actively using sequences. Low adoption suggests training gaps or tool preference issues.
  • Enrollment volume — Contacts enrolled in sequences over the past 30 and 90 days. Broken down by sequence and by rep.
  • Completion rate — Percentage of enrolled contacts who receive all steps in the sequence without being unenrolled. Contacts are unenrolled when they reply, book a meeting, or are manually removed.
  • Reply rate — Percentage of sequenced contacts who reply to at least one email. This is the primary effectiveness metric for outbound sequences.
  • Meeting booking rate — Percentage of sequenced contacts who book a meeting through a meeting link included in the sequence.
  • Bounce rate — Email bounces within sequences, indicating data quality issues in the prospecting lists.
  • Opt-out rate — Contacts who unsubscribe as a result of sequence emails. High opt-out rates suggest poor targeting or overly aggressive cadences.
  • Step performance — Email open rates, click rates, and reply rates broken down by step number within each sequence. Identifies which steps perform best and where engagement drops off.
  • Sequence length — Number of steps and total duration of each sequence. Excessively long sequences (more than 8-10 steps over 30+ days) often see diminishing returns.

What good looks like: Sequence adoption above 70% of the sales team, reply rates above 5%, meeting booking rates above 2%, opt-out rates below 1%, and sequences optimized to 5-7 steps based on step-level performance data.
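The benchmark thresholds above can be applied mechanically to a sequence's aggregate stats. This is a sketch; the field names on the stats dict are illustrative, not HubSpot's reporting schema:

```python
# Sketch of the Sequence Performance benchmarks applied to one sequence.
# Thresholds come from the audit's "what good looks like" criteria;
# stat field names are illustrative.
BENCHMARKS = {
    "reply_rate": (0.05, "above"),      # reply rate should be above 5%
    "meeting_rate": (0.02, "above"),    # meeting booking rate above 2%
    "opt_out_rate": (0.01, "below"),    # opt-out rate below 1%
}

def flag_sequence(stats: dict) -> list[str]:
    """Return a list of benchmark violations for one sequence."""
    flags = []
    for metric, (threshold, direction) in BENCHMARKS.items():
        value = stats[metric]
        if direction == "above" and value < threshold:
            flags.append(f"{metric} {value:.1%} is below the {threshold:.0%} benchmark")
        elif direction == "below" and value > threshold:
            flags.append(f"{metric} {value:.1%} exceeds the {threshold:.0%} benchmark")
    if stats["steps"] > 7:
        flags.append(f"{stats['steps']} steps: consider trimming toward 5-7")
    return flags

stats = {"reply_rate": 0.03, "meeting_rate": 0.025, "opt_out_rate": 0.02, "steps": 9}
print(flag_sequence(stats))  # flags reply rate, opt-out rate, and length
```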

The Lead Scoring block assesses the configuration, logic, and adoption of lead scoring models used to prioritize leads for sales outreach.

Data points captured:

  • Scoring model count — Number of lead scoring models configured. At least one model should be active. Multiple models may represent different scoring strategies for different segments or products.
  • Scoring criteria — The properties, behaviors, and demographic attributes used as scoring criteria. Evaluated for breadth (using multiple signal types) and relevance (using signals that correlate with conversion).
  • Score distribution — How lead scores are distributed across the contact database. A healthy distribution shows a clear separation between high-score and low-score contacts. If most contacts cluster at the same score, the model lacks discrimination.
  • Positive vs negative signals — Whether the model includes both positive signals (e.g., page views, form submissions, email clicks) and negative signals (e.g., unsubscribes, bounces, inactivity). Models without negative scoring tend to inflate scores over time.
  • Score threshold definitions — Whether clear thresholds are defined for MQL (Marketing Qualified Lead) and SQL (Sales Qualified Lead) designations. Without thresholds, lead scores are informational but not actionable.
  • Score decay — Whether score decay is configured to reduce scores for contacts who become inactive over time. Without decay, old leads retain artificially high scores.
  • Sales team adoption — Whether sales reps actively use lead scores in their prioritization and workflow. Measured through sort and filter usage in CRM views.
  • Scoring model age — When the scoring model was last updated. Models not reviewed in over 6 months may use outdated criteria that no longer reflect current buying behavior.

What good looks like: At least one active scoring model with both positive and negative signals, clear MQL/SQL thresholds defined and integrated into workflows, score decay configured, the model reviewed and updated within the past 6 months, and sales reps actively sorting or filtering by lead score.
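To make the decay and threshold concepts concrete, here is a minimal sketch using an exponential half-life. The 90-day half-life and the 40/70 MQL/SQL thresholds are assumptions chosen for illustration, not HubSpot defaults:

```python
# Illustrative score decay and threshold classification.
# HALF_LIFE_DAYS and the MQL/SQL cutoffs are assumed values, not
# HubSpot's actual decay mechanics.
HALF_LIFE_DAYS = 90

def decayed_score(raw_score: float, days_inactive: int) -> float:
    """Halve the score for every HALF_LIFE_DAYS of inactivity."""
    return raw_score * 0.5 ** (days_inactive / HALF_LIFE_DAYS)

def classify(score: float, mql_threshold: float = 40, sql_threshold: float = 70) -> str:
    """Map a score to a lifecycle designation using fixed thresholds."""
    if score >= sql_threshold:
        return "SQL"
    if score >= mql_threshold:
        return "MQL"
    return "unqualified"

# A lead scored 80 six months ago, with no activity since, decays to 20
# and drops below both thresholds:
score = decayed_score(80, days_inactive=180)
print(round(score), classify(score))  # 20 unqualified
```

Without a decay step like this, that stale lead would still read as an SQL, which is exactly the "artificially high scores" problem the audit flags.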