
Auto-Scaling with Microsoft Fabric Capacity for Power BI

Auto-scaling with Microsoft Fabric and Power BI ensures that your environment can dynamically
adjust to fluctuating workloads, using resources efficiently to maintain performance during peak
demand. In this document, we'll explore how to set up, test, and evaluate auto-scaling with
Microsoft Fabric capacity for Power BI reports and datasets, especially under scenarios with varying
user loads.

Goal

•  Automatically scale compute and memory resources to handle high user concurrency in Power BI.

•  Test dynamic allocation of resources in Microsoft Fabric (Azure Synapse Analytics integration).

•  Ensure that the auto-scaling mechanism adjusts in real time to the demands placed by Power BI reports on the underlying capacity.

•  Identify any performance bottlenecks, such as delays in scaling or insufficient scaling during peak times.

Prerequisites

•  Microsoft Fabric (Azure Synapse Analytics): Ensure that your Fabric workspace is provisioned with Synapse SQL pools or a Lakehouse, which support auto-scaling.

•  Power BI Premium or Premium Per User (PPU): A Premium SKU (e.g., P3, P5, or PPU) that allows Power BI datasets and reports to connect directly with Microsoft Fabric.

•  Azure Monitor/Log Analytics: Use this to track auto-scaling metrics and monitor resource usage dynamically.

•  Load Testing Tool: Use Azure Load Testing, Apache JMeter, or Gatling to simulate user loads for testing.

Implementation

1. Set Up Microsoft Fabric and Power BI Premium

Step 1: Provision Microsoft Fabric Workspace

1. Create a Synapse SQL Pool (or Lakehouse) within the Microsoft Fabric workspace.

o Navigate to the Microsoft Fabric Portal, and under the Analytics section, create a
dedicated SQL pool or Lakehouse.

o For the purpose of this document, ensure that auto-scaling is enabled for the SQL
pools or Lakehouse.

2. Configure Auto-Scaling:
o In your Synapse Analytics workspace, go to SQL Pool Settings.

o Enable autoscale for your SQL pool and set the minimum and maximum scaling limits in DWUs (Data Warehouse Units).

▪  E.g., start at a minimum of 100 DWUs and scale up to 3000 DWUs based on load requirements.

o Configure auto-scaling policies:

▪  Set a minimum idle time before scaling down (e.g., after 10 minutes of no activity).

▪  Define an auto-scaling cooldown period to avoid unnecessary up/down scaling transitions.
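
Example (a minimal sketch, assuming the Azure CLI is installed and signed in; placeholders follow the same angle-bracket convention used elsewhere in this document): you can confirm the pool's current scale setting before and after an autoscale event:

# Show the pool's current status and DWU service level (sku.name, e.g. DW100c)
az synapse sql pool show \
  --name <SQL pool name> \
  --workspace-name <workspace name> \
  --resource-group <resource group> \
  --query "{status:status, performanceLevel:sku.name}" \
  --output table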

3. Integrate Synapse with Power BI:

o In Power BI, connect your dataset to the Synapse SQL pool using DirectQuery or
Import Mode.

o DirectQuery is preferable for large datasets, as queries are pushed down to the SQL
pool, allowing auto-scaling to handle fluctuating query loads.

Example: If using DirectQuery, connect Power BI to the SQL pool via the Azure Synapse Analytics
connector:


Server Name: <synapse-workspace>.sql.azuresynapse.net

Database Name: <SQL pool name>

o Test the connection and ensure the dataset is functioning correctly with Synapse.
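
Example (a hedged sketch, assuming the sqlcmd utility with Azure AD authentication is available): a quick query from the command line confirms the pool is reachable before wiring up the Power BI dataset:

# -G uses Azure Active Directory authentication; the query just proves connectivity
sqlcmd -S <synapse-workspace>.sql.azuresynapse.net -d <SQL pool name> -G \
  -Q "SELECT TOP 5 name FROM sys.tables;"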

Step 2: Set Up Power BI Premium

1. Configure Power BI Premium Capacity:

o Go to Power BI Admin Portal > Premium Capacity Settings.

o Allocate workspaces to the Power BI Premium capacity (P3 or P5).

o Set autoscale for Power BI Premium, which automatically adds one vCore at a time if
more capacity is needed.

2. Set Thresholds for Autoscaling:

o Define when auto-scaling will be triggered:

▪  Monitor metrics like CPU utilization, memory usage, and query processing times.

o Use the Power BI Premium Capacity Metrics App to track memory consumption and query performance.

o Ensure that the Concurrent Dataset Queries Limit is appropriate (adjust default limits if needed).
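
Example (a minimal sketch; $TOKEN is assumed to be a valid Azure AD access token for the Power BI service): the Power BI REST API can confirm the SKU and state of the Premium capacity before you start load testing:

# List the Power BI capacities visible to the caller
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://api.powerbi.com/v1.0/myorg/capacities"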

2. Test Auto-Scaling Setup with Load Testing

Scenario 1: Simulate Baseline Load (500 Users)

1. Set up Load Testing Tool:

o Use Apache JMeter, Gatling, or Azure Load Testing to simulate users logging into
Power BI and accessing reports.

o Create test scripts that simulate users performing common actions such as the following (a sample run command is sketched after this list):

▪  Opening reports

▪  Applying filters

▪  Drilling down into data

▪  Refreshing visuals

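Example (a hedged sketch for Apache JMeter; the test-plan file name and the users/rampup/duration properties are placeholders that your own .jmx plan would define): running in non-GUI mode keeps the load generator lightweight:

# 500 virtual users, 5-minute ramp-up, 30-minute hold; write raw results and an HTML report
jmeter -n -t powerbi_report_load.jmx \
  -Jusers=500 -Jrampup=300 -Jduration=1800 \
  -l results_500_users.jtl -e -o report_500_users
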
2. Run Initial Load Test with 500 Users:

o Start with 500 users simulating concurrent report interactions.

o Monitor resource usage in Power BI Premium Capacity Metrics and Azure Monitor
to see if the initial load is well within the base capacity (no auto-scaling triggered).

Example metrics to observe:

o CPU usage (%) in Power BI capacity

o DWU usage in Microsoft Fabric

o Query execution times from Power BI queries sent to Synapse SQL pool

3. Monitor for Auto-Scaling:

o Verify that no autoscaling events are triggered, as 500 users should be well within
the baseline resource capacity.
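
Example (a minimal sketch; the resource ID format and the DWUUsed metric name reflect the dedicated SQL pool metrics namespace, so verify them against your environment): Azure Monitor can confirm from the CLI that DWU consumption stays flat during the baseline run:

# Per-minute DWU consumption for the SQL pool over the test window
az monitor metrics list \
  --resource "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Synapse/workspaces/<workspace name>/sqlPools/<SQL pool name>" \
  --metric "DWUUsed" \
  --interval PT1M \
  --output table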

Scenario 2: Increase Load to 3000 Users

1. Increase Concurrent Users to 3000:

o Gradually increase the load to 3000 users by modifying the load test parameters.

o Observe resource usage in Power BI Premium and Microsoft Fabric as the load
increases.

2. Observe Auto-Scaling Events in Synapse SQL Pool:

o In Azure Monitor, track when the Synapse SQL pool auto-scales. The DWU should
increase automatically as query demand rises from the Power BI DirectQuery mode.

o Use Azure Synapse Studio to view resource scaling events in the SQL pool.

Example: You may notice the SQL pool scaling up from 100 DWUs to 500 DWUs or more, depending on how many concurrent queries are hitting the system.


# Manually set the pool's DWU service level (autoscale would normally adjust this)
az synapse sql pool update \
  --name <SQL pool name> \
  --resource-group <resource group> \
  --workspace-name <workspace name> \
  --performance-level <DWU level after autoscale, e.g. DW500c>

3. Monitor Power BI Premium Capacity Autoscaling:

o Track if Power BI Premium capacity adds more vCores automatically to handle the
increase in concurrent users.

o Use Power BI Capacity Metrics to view autoscale events, showing additional vCores
added.
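
Example (a hedged sketch, as an alternative to browsing Synapse Studio; the operationName filter is an assumption, so adjust it to match the entries you actually see): scale operations are recorded in the Azure activity log and can be listed from the CLI:

# Activity-log entries for the resource group from the last 2 hours, filtered to SQL pool operations
az monitor activity-log list --resource-group <resource group> --offset 2h \
  --query "[?contains(operationName.value, 'sqlPools')].{time:eventTimestamp, operation:operationName.value, status:status.value}" \
  --output table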

Scenario 3: Peak Load (5000+ Users)

1. Simulate Maximum Load of 5000 Users:

o Increase the load to 5000 or more concurrent users.

o Observe the behavior of both Microsoft Fabric and Power BI Premium autoscaling in
real time.

2. Check Autoscaling Efficiency:

o Monitor how quickly Synapse SQL pool scales up to handle the additional query
load.

o Track the efficiency of Power BI Premium autoscaling in response to increased report rendering and processing demands.
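
Example (a minimal sketch): while a resize is in progress the pool's status reports "Scaling", so a simple polling loop gives a rough measure of how long scale-up takes:

# Poll every 30 seconds; the timestamps bracket the window where status shows "Scaling"
while true; do
  date +%T
  az synapse sql pool show \
    --name <SQL pool name> \
    --workspace-name <workspace name> \
    --resource-group <resource group> \
    --query "{status:status, level:sku.name}" --output tsv
  sleep 30
done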

3. Evaluate User Experience:

o Measure report loading times, query execution times, and failure rates as the
system scales up.

o Analyze whether user experience remains consistent during peak load times or if
there are any delays introduced by the autoscaling process.

3. Analyzing Results

Key Metrics to Capture:

•  Response Time: How quickly Power BI dashboards load when scaling is triggered.

•  DWU Scaling Patterns: How the SQL pool DWUs increase as the number of concurrent queries rises.

•  vCore Additions: How many Power BI Premium vCores are added automatically to manage user load.

•  Query Execution Times: Average time for queries to execute in Synapse SQL pools before and after scaling.

•  Resource Utilization: Peak CPU, memory, and storage usage in both Power BI and Synapse environments.
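
Example (a hedged sketch, assuming JMeter's default CSV .jtl output where column 2 is the elapsed time in milliseconds and column 3 is the sampler label): a one-liner is enough to compare average response times per report action across the 500-, 3000-, and 5000-user runs:

# Average elapsed time (ms) per sampler label from a JMeter CSV results file
awk -F',' 'NR > 1 { sum[$3] += $2; n[$3]++ }
  END { for (l in sum) printf "%-40s %10.1f ms\n", l, sum[l] / n[l] }' results_3000_users.jtl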

Analysis:

•  Ensure that auto-scaling happens proactively before significant performance degradation occurs.

•  Identify if there are any delays in scaling (e.g., if Power BI reports slow down significantly before the Synapse SQL pool scales up).

•  Check if the min and max DWU settings were appropriate for handling the user load.

•  Determine if any manual interventions were necessary or if the autoscaling mechanisms handled all loads without manual adjustments.

4. Optimize Based on Findings

1. Tune SQL Pool Auto-Scaling:

o Adjust min and max DWU settings based on actual usage patterns. If scaling was too
slow, increase the max DWU to accommodate higher concurrency.

2. Adjust Power BI Premium Settings:

o If autoscaling was triggered too frequently or too slowly, adjust the CPU utilization
thresholds or concurrent query limits in Power BI Premium.

3. Optimize Dataset and Query Design:

o Consider optimizing reports (e.g., simplifying visuals, using aggregations, or partitioning datasets) to reduce load and improve query performance during scaling events.

5. Reporting and Documentation

Key Deliverables:

•  Detailed logs of auto-scaling events (both for Power BI and Synapse SQL pools).

•  Graphs showing response time improvements after scaling.

•  Analysis of scaling delays and their impact on performance.

•  Recommendations for optimizing auto-scaling thresholds and resource utilization.


Conclusion

This document provides a step-by-step framework for testing and implementing auto-scaling with
Microsoft Fabric and Power BI Premium. By simulating varying loads, you can verify that your
infrastructure automatically scales to handle peaks in user activity, ensuring consistent performance
and efficient resource utilization under high concurrency.
