Microsoft Fabric Updates Blog

Introducing the Job-Level Bursting Switch in Microsoft Fabric

We’re introducing a new feature that gives you more granular control over your Spark compute resources in Microsoft Fabric: The Job-Level Bursting Switch. This highly anticipated addition empowers capacity administrators to fine-tune how Spark jobs utilize burst capacity, optimizing for either peak performance or higher concurrency based on your specific workload needs.

Microsoft Fabric’s Capacity Units (CUs) offer a powerful 3× bursting capability, allowing a single Spark job to temporarily consume significantly more compute than your base capacity provides. This design helps accelerate job performance during intensive periods, ensuring full utilization of your available resources when it matters most.

Taking Control – The ‘Disable job-level bursting’ Switch

With this new release, capacity administrators now have direct control over this behavior via the ‘Disable job-level bursting’ switch, conveniently located in the Admin Portal:

Location: Admin Portal → Capacity Settings → [Select Capacity] → Data Engineering/Science Settings → Job Management

How it Works

  • Enabled (default): A single Spark job can leverage the full burst limit, consuming up to 3× CUs. This is ideal for demanding ETL processes or large analytical tasks that benefit from maximum immediate compute power.
  • Disabled: Individual Spark jobs are capped at the base capacity allocation. This prevents a single job from monopolizing the burst capacity, preserving concurrency and improving the experience in multi-user, interactive scenarios.
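To make the trade-off concrete, here is a minimal sketch (not the actual Fabric scheduler) of how capping each job at the base allocation preserves concurrency. The numbers assume an F64 capacity: 64 base CUs with a 3× burst limit of 192 CUs.

```python
# Hypothetical admission sketch, assuming an F64 capacity (64 base CUs, 3x burst).
# This illustrates the switch's effect; it is not Fabric's actual scheduler logic.
BASE_CUS = 64
BURST_FACTOR = 3

def admit_jobs(job_requests, job_level_bursting=True):
    """Greedily admit jobs against the burstable CU pool, optionally
    capping each individual job at the base capacity allocation."""
    pool = BASE_CUS * BURST_FACTOR                      # 192 burstable CUs total
    per_job_cap = pool if job_level_bursting else BASE_CUS
    admitted = []
    for requested in job_requests:
        grant = min(requested, per_job_cap, pool)       # cap per job, then by pool
        if grant > 0:
            admitted.append(grant)
            pool -= grant
    return admitted

# One greedy job requesting the full burst, followed by two interactive jobs:
print(admit_jobs([192, 64, 64], job_level_bursting=True))   # [192] -> later jobs wait
print(admit_jobs([192, 64, 64], job_level_bursting=False))  # [64, 64, 64] -> all run
```

With bursting enabled, the first job consumes the entire 192-CU pool and the interactive jobs must queue; with it disabled, the first job is capped at 64 CUs and all three run concurrently.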

Important Considerations for Autoscale Billing

Note that this switch is only available when running Spark jobs on Fabric Capacity. If the Autoscale Billing option is enabled for your capacity, the switch becomes unavailable. This is because Autoscale Billing operates on a pure pay-as-you-go model with no smoothing window: all Spark usage is billed on demand, without relying on reserved-capacity bursting.

Optimizing for Your Workloads: Use Cases and Examples

The Job-Level Bursting Switch provides flexibility to cater to diverse data engineering and science requirements:

| Scenario | Setting | Behavior |
| --- | --- | --- |
| Heavy ETL workload | Bursting enabled | Job can use the entire burst capacity (e.g., 192 CUs in an F64 capacity), accelerating execution. |
| Multi-user interactive notebooks | Bursting disabled | Job usage is capped (e.g., 64 CUs in an F64 capacity), improving overall concurrency for many users. |
| Autoscale Billing enabled | Bursting control unavailable | All Spark usage is billed on demand; no bursting from base capacity, as usage follows a pay-as-you-go model. |
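The CU figures in the table follow from a simple rule: with job-level bursting enabled, a single job may use up to 3× the base capacity; with it disabled, the job is capped at the base allocation. A small sketch of that arithmetic (the F64 sizing is the example from the table):

```python
# Illustrative arithmetic behind the table's CU limits; the 3x factor and
# F64 sizing come from the scenarios above.
BURST_FACTOR = 3

def max_job_cus(base_cus, bursting_enabled):
    """Maximum CUs a single Spark job may consume on a Fabric capacity."""
    return base_cus * BURST_FACTOR if bursting_enabled else base_cus

print(max_job_cus(64, bursting_enabled=True))   # F64, bursting enabled  -> 192
print(max_job_cus(64, bursting_enabled=False))  # F64, bursting disabled -> 64
```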

Choose Your Optimization – Throughput vs. Concurrency

This new switch is a powerful tool to help you fine-tune your Fabric Spark environment:

  • Keep bursting enabled for large-scale jobs, critical ETL pipelines, and compute-intensive tasks where maximizing single-job throughput is paramount.
  • Disable it for interactive development, shared environments, or scenarios with many concurrent users where maintaining consistent responsiveness for multiple jobs is more important.

We believe the Job-Level Bursting Switch will provide our customers with even greater control and flexibility in managing their Spark workloads on Microsoft Fabric. This feature, combined with our existing capabilities like Optimistic Job Admission, empowers you to build highly efficient and responsive data solutions.

To learn more about optimizing your Spark workloads in Microsoft Fabric, please refer to our documentation.

December 10, 2025 by Ted Vilutis