
Understanding Recommendations

Recommendations are actionable items generated from your audit findings. While AI insights explain what the data means, recommendations tell you exactly what to fix and in what order. They are designed to give you and your clients a clear improvement roadmap.

Recommendations are generated using a rule-based system. Each recommendation rule defines:

  1. Conditions — What audit data triggers this recommendation
  2. The recommendation itself — What to do about it
  3. Impact and effort scores — How important it is and how much work it takes
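To make the three parts of a rule concrete, here is a minimal sketch of how such a rule might be represented. This is illustrative only; the field names and structure are assumptions, not the product's actual schema.

```python
# Hypothetical sketch of a recommendation rule.
# Field names are illustrative, not the actual schema.
rule = {
    # 1. Conditions: what audit data triggers this recommendation
    "conditions": [
        {"field": "spf_configured", "operator": "is_false"},
        {"field": "email_volume", "operator": "greater_than", "value": 1000},
    ],
    # 2. The recommendation itself: what to do about it
    "recommendation": "Configure an SPF record for your sending domain.",
    # 3. Impact and effort scores (each on a 1-5 scale)
    "impact": 5,
    "effort": 1,
}
```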

Each rule has one or more conditions that are evaluated against the audit data. Conditions use these operators:

Operator     | Description                  | Example
greater_than | Value exceeds a threshold    | Score > 80
less_than    | Value is below a threshold   | Percentage under 50
equals       | Value matches exactly        | Status equals “inactive”
contains     | Value includes a substring   | Name contains “test”
is_true      | Boolean value is true        | DKIM configured is true
is_false     | Boolean value is false       | SPF configured is false

A recommendation is triggered when all of its conditions are met. If a rule has three conditions, all three must evaluate to true for the recommendation to appear.
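The six operators and the all-conditions rule can be sketched as follows. This is a minimal illustration, assuming a simple dictionary shape for conditions and audit data; names are not the actual implementation.

```python
# Sketch of operator evaluation; the condition/audit-data shapes
# are illustrative assumptions.
OPERATORS = {
    "greater_than": lambda value, target: value > target,
    "less_than":    lambda value, target: value < target,
    "equals":       lambda value, target: value == target,
    "contains":     lambda value, target: target in value,
    "is_true":      lambda value, _: value is True,
    "is_false":     lambda value, _: value is False,
}

def rule_triggers(conditions, audit_data):
    """A rule fires only when every one of its conditions is met."""
    return all(
        OPERATORS[c["operator"]](audit_data[c["field"]], c.get("value"))
        for c in conditions
    )

conditions = [
    {"field": "spf_configured", "operator": "is_false"},
    {"field": "score", "operator": "less_than", "value": 50},
]
triggered = rule_triggers(conditions, {"spf_configured": False, "score": 42})
# triggered is True: both conditions are met. If either failed,
# the recommendation would not appear.
```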

Each triggered recommendation includes:

Impact score — how much improving this item will affect the portal’s health, efficiency, or business outcomes:

Score | Meaning
5     | Critical impact. Directly affects core functionality, compliance, or revenue tracking.
4     | High impact. Significantly improves operational efficiency or data quality.
3     | Moderate impact. Meaningful improvement in a specific area.
2     | Low impact. Minor improvement or optimization.
1     | Minimal impact. Polish or nice-to-have enhancement.

Effort score — how much work is required to implement the recommendation:

Score | Meaning
5     | Major effort. Requires a multi-day project, cross-team coordination, or significant technical work.
4     | High effort. Several hours of focused work with some complexity.
3     | Moderate effort. A few hours of straightforward work.
2     | Low effort. Under an hour. Simple configuration or update.
1     | Minimal effort. A few minutes. Single setting change or toggle.

Time estimate — an estimated number of hours to complete the recommendation. This helps scope improvement projects and set expectations with clients.

Category — the functional area the recommendation belongs to (e.g., Email Deliverability, Pipeline Configuration, Data Hygiene, Workflow Optimization). Categories help group related recommendations for planning.
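Putting these fields together, a triggered recommendation can be modeled as a small record. The attribute names below are illustrative assumptions, not the product's actual data model.

```python
from dataclasses import dataclass

# Hypothetical shape of a triggered recommendation;
# attribute names are illustrative.
@dataclass
class Recommendation:
    description: str        # what to do
    impact: int             # 1-5
    effort: int             # 1-5
    estimated_hours: float  # time estimate
    category: str           # functional area

rec = Recommendation(
    description="Configure an SPF record for your sending domain.",
    impact=5,
    effort=1,
    estimated_hours=0.5,
    category="Email Deliverability",
)
```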

Recommendations are sorted by a priority score that combines impact and effort:

Priority Score = (Impact x 2) + (5 - Effort)

This formula favors high-impact, low-effort items. Examples:

Impact | Effort | Calculation       | Priority Score
5      | 1      | (5 x 2) + (5 - 1) | 14
5      | 3      | (5 x 2) + (5 - 3) | 12
3      | 1      | (3 x 2) + (5 - 1) | 10
4      | 4      | (4 x 2) + (5 - 4) | 9
2      | 5      | (2 x 2) + (5 - 5) | 4

Recommendations are displayed in descending priority score order, so the most impactful and easiest fixes appear at the top.
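The formula and sort order above can be sketched in a few lines. The recommendation names and dictionary shape here are illustrative assumptions.

```python
# Priority Score = (Impact x 2) + (5 - Effort), as defined above.
def priority_score(impact, effort):
    return (impact * 2) + (5 - effort)

# Hypothetical recommendations, for illustration only.
recs = [
    {"name": "Rename test properties", "impact": 2, "effort": 5},
    {"name": "Rebuild pipeline stages", "impact": 4, "effort": 4},
    {"name": "Fix SPF record",          "impact": 5, "effort": 1},
]

# Sort descending so high-impact, low-effort items come first.
recs.sort(key=lambda r: priority_score(r["impact"], r["effort"]), reverse=True)
# Order is now: Fix SPF record (14), Rebuild pipeline stages (9),
# Rename test properties (4).
```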

[Image: JetStack AI audit recommendations list sorted by priority score, showing impact, effort, time estimates, and categories]

The recommendations section of an audit report:

  • Shows each recommendation with its description, impact, effort, time estimate, and category
  • Sorts the list by priority score (highest first)
  • Lets you group or filter recommendations by category

This gives clients a ready-made action list they can work through sequentially or hand off to their team.

Recommendations and AI insights are complementary but distinct:

  • AI insights are generated per-block and explain findings in natural language
  • Recommendations are generated from rules across the entire audit and focus on specific actions

A single block insight might reference multiple data points. A recommendation targets a specific condition and action. Together, they give a complete picture: the insight explains the context, and the recommendation provides the next step.