Performance Remediation (DAX + Model)

Refactoring a slow semantic model and high-cost DAX into a scalable, reliable reporting system

Performance issues in Power BI rarely start as “performance issues.” They start as reasonable shortcuts—an extra relationship here, a calculated column there, a measure that works fine in testing—until the model grows, refresh windows tighten, and report adoption depends on speed.

This engagement began in a familiar place: the reports mattered, leadership relied on them, and the business had outgrown the original design.

The situation

The organization depended on Power BI for day-to-day operational and financial decision-making. Over time, the reporting environment accumulated complexity:

  • New source tables were added without a consistent modeling pattern

  • Measures evolved organically across multiple report files

  • Refresh schedules expanded while data volumes increased

  • Users experienced lag, timeouts, and occasional refresh failures

The outcome was predictable: confidence eroded, adoption dropped, and teams fell back to “workarounds” (exporting, reconciling in Excel, rebuilding one-off views).

The problem

Three symptoms showed up together:

  1. Slow report interactivity
    Common interactions (filtering by date, drilling, slicing by business unit) felt sluggish and unpredictable.

  2. Refresh instability
    Refresh duration and reliability fluctuated, creating situations where reports were stale or unavailable when needed.

  3. A model that didn’t scale
    As new requirements arrived, every change risked breaking performance somewhere else. Internal teams were hesitant to extend the model because it was hard to predict impact.

This was not a “single bad measure” issue. It was an architecture issue: semantic model design, DAX patterns, and refresh strategy were misaligned with scale.

What we changed (high level)

The remediation focused on four areas:

  • Refactor the semantic model for scalable filtering and aggregation

  • Optimize high-cost DAX patterns driving query latency

  • Reduce unnecessary complexity that multiplied compute

  • Align refresh strategy to data volumes and usage patterns

The goal was not only to make the reports faster today, but to keep the model maintainable as complexity continues to grow.

Step 1: Diagnose the bottlenecks (before touching the model)

The first step was measurement—because “slow” can come from multiple layers (Power Query transformations, model relationships, DAX, visuals, or capacity constraints).

A typical diagnostic pass included:

  • Power BI Performance Analyzer to identify slow visuals and expensive queries

  • DAX Studio to inspect query plans and identify high-cost measures and storage-engine vs formula-engine behavior

  • VertiPaq analysis to understand cardinality drivers, compression issues, and large columns that inflated memory footprint

  • Model review for relationship directionality, ambiguity, and filter propagation patterns

  • Refresh review to isolate what was failing: source latency, transformations, incremental strategy gaps, or gateway constraints
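A typical way to isolate a high-cost measure during this pass is to run it as a standalone query in DAX Studio with Server Timings enabled, so storage-engine and formula-engine time can be compared directly. A minimal sketch, with hypothetical table and measure names:

```
// Run in DAX Studio with Server Timings on.
// 'Date' and [Total Sales] are placeholder names for illustration.
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Total Sales", [Total Sales]
)
```

If most of the elapsed time lands in the formula engine, the measure itself is usually the problem; if the storage engine dominates with many scans, the model shape or cardinality is the more likely driver.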

This clarified the real drivers: model shape and a set of measures that worked logically but were computationally expensive at scale.

Step 2: Refactor the semantic model for scale

Most performance improvements came from making the model “predictable” again.

Common remediations in this phase included:

Establishing a clean star schema where possible

  • Consolidated and clarified fact tables

  • Introduced/standardized dimensions (Date, Customer, Product, Organization, etc.)

  • Reduced reliance on snowflaked chains when they created ambiguous filtering

Fixing relationship issues that amplify query cost

  • Removed or minimized bi-directional relationships unless there was a controlled, justified reason

  • Eliminated ambiguous paths and unnecessary many-to-many relationships

  • Ensured filtering behavior aligned with the questions the business actually asks
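Where bi-directional filtering was genuinely needed, one controlled alternative is to keep the relationship single-direction in the model and enable the reverse direction only inside the specific measure that requires it. A sketch with hypothetical table and column names:

```
-- Relationship Sales[CustomerKey] -> Customer[CustomerKey] stays single-direction.
-- CROSSFILTER enables BOTH only for this one calculation.
Customers With Sales :=
CALCULATE (
    DISTINCTCOUNT ( Customer[CustomerKey] ),
    CROSSFILTER ( Sales[CustomerKey], Customer[CustomerKey], BOTH )
)
```

This keeps the default filter propagation unambiguous for every other measure in the model.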

Reducing model bloat

  • Removed unused columns and tables

  • Moved “nice-to-have” derived fields out of the core model when they increased size and cardinality

  • Standardized data types and optimized columns that were inflating memory usage

The guiding principle: if the model is easy to explain on a whiteboard, it will usually perform better and be easier to maintain.

Step 3: Optimize expensive DAX patterns (without changing business meaning)

DAX performance work is typically about making calculations storage-engine friendly and reducing unnecessary row-by-row evaluation.

In this engagement, the highest-impact changes came from:

Replacing iterator-heavy measures where they weren’t necessary

Iterators (e.g., SUMX, FILTER over large tables) can be valid, but they can also become the default hammer. Where possible, measures were rewritten to push more work to the storage engine and reduce row context.
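As an illustration of the pattern (table and measure names are hypothetical), a FILTER-over-the-fact-table iterator can often be rewritten as a CALCULATE with a boolean column filter, which the engine expands into a narrow single-column filter instead of scanning every column of the table:

```
-- Iterator form: FILTER materializes Sales rows and evaluates row by row
High Value Sales (slow) :=
SUMX (
    FILTER ( Sales, Sales[Amount] > 1000 ),
    Sales[Amount]
)

-- Boolean filter form: the predicate is pushed to the storage engine
High Value Sales :=
CALCULATE (
    SUM ( Sales[Amount] ),
    Sales[Amount] > 1000
)
```

Both return the same result in the same filter context; the second typically resolves in the storage engine with far less materialization.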

Reducing repeated evaluation

Measures that recompute the same intermediate logic across multiple branches were refactored using variables and a consistent measure layering approach (base measures → derived measures → KPIs).
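A minimal sketch of the variable pattern, with hypothetical base measures: each intermediate value is computed once and reused, instead of being re-evaluated in every branch of the expression.

```
-- Base measures [Total Revenue] and [Total Cost] are assumed to exist
Margin % :=
VAR Revenue = [Total Revenue]
VAR Cost    = [Total Cost]
RETURN
    DIVIDE ( Revenue - Cost, Revenue )
```

Layering derived measures like this on top of a small set of base measures also makes the dependency chain explicit, which helps both the optimizer and the next developer.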

Tightening filter context control

Overly broad filters and unconstrained context transitions can cause significant overhead. The remediation focused on making filters explicit, removing accidental context transitions, and simplifying conditions.
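As a sketch of what "making filters explicit" means in practice (names hypothetical): filtering the entire fact table applies a filter over every column of that table, while a column predicate filters only the column involved.

```
-- Table filter: locks all Sales columns and iterates the current rows
Sales West (slow) :=
CALCULATE ( [Total Sales], FILTER ( Sales, Sales[Region] = "West" ) )

-- Column filter: equivalent here, but far narrower for the engine
Sales West :=
CALCULATE ( [Total Sales], Sales[Region] = "West" )
```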

Standardizing time intelligence patterns

Time calculations are common culprits when implemented inconsistently. We aligned time intelligence to a single Date dimension and standardized patterns so performance and results remained consistent across report pages.
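The standardized pattern can be sketched as follows (measure and table names are hypothetical): every time-intelligence measure filters through the same marked Date dimension rather than through dates scattered across fact tables.

```
-- All time intelligence routes through the single 'Date' dimension
Sales YTD :=
CALCULATE ( [Total Sales], DATESYTD ( 'Date'[Date] ) )

Sales PY :=
CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
```

Once these patterns are uniform, a year-to-date figure on one page cannot silently disagree with the same figure computed differently on another.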

Cleaning up measure sprawl

Measures were consolidated, named consistently, and organized to reduce duplication. This helped both performance and maintainability.

The key constraint throughout: no “optimization” was accepted if it changed the meaning of KPIs. Correctness came first.

Step 4: Align refresh strategy to data volumes and usage

Performance remediation is incomplete if refresh remains fragile.

Depending on data volume and access constraints, this phase typically involved:

  • Reducing transformation load by pushing appropriate logic upstream (where feasible)

  • Ensuring refresh sequence and dependencies were stable

  • Implementing an incremental approach where it fit the data shape and business requirements

  • Validating gateway and source behavior under realistic refresh conditions

  • Tightening partitioning strategy and ensuring the model wasn’t reprocessing unnecessary history

The objective: predictable refresh cycles that support reporting SLAs and reduce operational surprises.

Results and impact

After remediation:

  • Reports became responsive and usable for day-to-day decisions—users could filter and navigate without delays that disrupted meetings or analysis flow.

  • Refresh reliability improved, reducing “stale data” scenarios and the need for manual fallbacks.

  • The model became easier to extend—internal teams could add measures and dimensions with less risk of performance regressions, because the architecture was cleaner and patterns were standardized.

Just as important: stakeholders regained confidence that the platform could support growth rather than becoming a recurring fire drill.

Deliverables (what the client walked away with)

To make the improvement durable, the engagement produced more than a faster report:

  • Updated semantic model structure and relationship rationale

  • A standardized measure layer (base measures and KPI measures)

  • A performance-focused DAX pattern set for the team to reuse

  • Refresh strategy notes and operational recommendations

  • A prioritized backlog of remaining enhancements (nice-to-haves separated from must-fix)

This ensured the organization didn’t “drift back” into the same performance traps over time.

Takeaways: what causes Power BI performance to degrade over time

If your environment is experiencing similar symptoms, these are the patterns I see most often:

  • A model that grew without a consistent schema strategy

  • Too many ambiguous relationships and uncontrolled filter propagation

  • DAX measures that are correct but expensive due to iterators and repeated evaluation

  • Refresh approaches that don’t match the scale and change rate of the data

  • Report pages with too many high-cardinality visuals competing for compute

The fix is rarely a single trick. It is disciplined architecture plus measurable optimization.

If this sounds familiar

If your Power BI environment has become slow, unreliable, or hard to maintain, the fastest responsible starting point is typically a focused diagnostic. From there, remediation can be targeted to the highest-impact issues affecting trust and adoption.

