3 posts tagged with "Broadcast Join"

The Complete Apache Spark and Iceberg Performance Tuning Checklist

· 35 min read
Cazpian Engineering
Platform Engineering Team

You have a Spark job running on Iceberg tables. It works, but it is slow, expensive, or both. You have read a dozen blog posts about individual optimizations — broadcast joins, AQE, partition pruning, compaction — but you do not have a single place that tells you what to check, in what order, and what the correct configuration values are. Every tuning session turns into a scavenger hunt across documentation, Stack Overflow, and tribal knowledge.

This post is the checklist you run through every time. We cover every performance lever in the Spark and Iceberg stack, organized from highest impact to lowest, with the exact configurations, recommended values, and links to our deep-dive posts for the full explanation. If you only have 30 minutes, work through the first five sections. If you have a day, work through all sixteen. Every item has been validated in production workloads on the Cazpian lakehouse platform.

Spark Caching and Persistence: The Complete Guide for Iceberg and Cazpian

· 30 min read
Cazpian Engineering
Platform Engineering Team

You are running the same 500 GB join three times in a single pipeline — once for a daily summary, once for a top-products report, once for customer segmentation. Each query reads from S3, shuffles terabytes across the network, builds hash maps, and aggregates from scratch. Everything after the first pass is wasted work: 1 TB of redundant I/O, two redundant shuffles, and two redundant sort-merge joins.

Spark caching eliminates this waste. You compute the expensive join once, store the result in executor memory, and every subsequent query reads from that in-memory copy instead of going back to object storage. The improvement is not incremental — it is typically 10-100x faster for repeated access patterns.

But caching does something else that is less obvious and equally powerful: it makes Spark's query optimizer smarter. When a table is cached, Spark knows its exact in-memory size. If that size falls below the broadcast join threshold, the optimizer automatically converts a Sort-Merge Join into a Broadcast Hash Join — eliminating the shuffle entirely, without you writing a single hint.
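The size check behind that conversion can be sketched in a few lines of plain Python. This is a simplified model, not Spark's actual planner code — but the 10 MB figure is Spark's real default for `spark.sql.autoBroadcastJoinThreshold`, and the comparison it performs is essentially this:

```python
# Simplified sketch of how the planner picks a join strategy once the
# smaller relation's size is known. Spark's default broadcast threshold
# (spark.sql.autoBroadcastJoinThreshold) is 10 MB.
AUTO_BROADCAST_JOIN_THRESHOLD = 10 * 1024 * 1024  # 10485760 bytes

def choose_join_strategy(smaller_side_bytes: int,
                         threshold: int = AUTO_BROADCAST_JOIN_THRESHOLD) -> str:
    """Return the join strategy the planner would pick for an equi-join."""
    if 0 <= smaller_side_bytes <= threshold:
        return "BroadcastHashJoin"  # ship the small side to every executor
    return "SortMergeJoin"          # shuffle both sides by the join key

# An uncached table's size is estimated from file sizes on object storage,
# which can overshoot. Once cached, Spark knows the exact in-memory size —
# which may now fall under the threshold and flip the strategy.
print(choose_join_strategy(8 * 1024 * 1024))   # 8 MB cached dimension table
print(choose_join_strategy(2 * 1024 ** 4))     # 2 TB fact table
```

The key point is the input, not the comparison: caching replaces a file-size estimate with an exact in-memory measurement, which is what lets the optimizer make the swap safely.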

This post covers every dimension of Spark caching. We start with internals, walk through every storage level, show all the ways to cache, explain the columnar storage format that makes DataFrame caching special, dive into memory management, discuss how much data you should actually cache (spoiler: not terabytes), show you how to read the Spark UI Storage tab, and cover the pitfalls that catch production workloads.
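As a preview of the "how much should you cache" question: under Spark's unified memory model, each executor's evictable storage region is roughly (heap − 300 MB reserved) × `spark.memory.fraction` × `spark.memory.storageFraction`, with defaults of 0.6 and 0.5. Storage can borrow unused execution memory, so treat this as a conservative floor. A back-of-envelope sketch using those defaults:

```python
# Back-of-envelope cache capacity under Spark's unified memory model.
# Defaults: spark.memory.fraction = 0.6, spark.memory.storageFraction = 0.5,
# and roughly 300 MB of reserved memory per executor JVM.
RESERVED_MB = 300
MEMORY_FRACTION = 0.6
STORAGE_FRACTION = 0.5

def storage_pool_mb(executor_heap_mb: float) -> float:
    """Protected storage region of one executor, in MB (a floor:
    storage may also borrow unused execution memory)."""
    usable = max(executor_heap_mb - RESERVED_MB, 0)
    return usable * MEMORY_FRACTION * STORAGE_FRACTION

def cluster_cache_floor_gb(executor_heap_gb: float, num_executors: int) -> float:
    """Conservative cluster-wide cache capacity, in GB."""
    return storage_pool_mb(executor_heap_gb * 1024) * num_executors / 1024

# 20 executors with 8 GB heaps: ~2.3 GB of protected storage each,
# ~46 GB across the cluster — nowhere near terabytes.
print(round(cluster_cache_floor_gb(8, 20), 1))  # prints 46.2
```

Numbers like these are why "cache everything" fails: the storage pool is a small slice of the heap, and anything that does not fit is evicted or spilled.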

Spark Broadcast Joins: The Complete Guide for Iceberg and Cazpian

· 27 min read
Cazpian Engineering
Platform Engineering Team

Your 500-node Spark cluster is shuffling a 2 TB fact table across the network — serializing every row, hashing it, writing it to disk, sending it over the wire, and deserializing it on the other side — just to join it with a 50 MB dimension table. Every single query. Every single day.

The shuffle is typically the most expensive operation in a distributed query. It consumes network bandwidth, disk I/O, CPU cycles, and memory on every executor in the cluster. And for joins where one side is small, it is completely unnecessary.

Broadcast join eliminates the shuffle entirely. Instead of redistributing both tables across the cluster, Spark sends the small table to every executor, where it is stored as an in-memory hash map. Each executor then joins its local partitions of the large table against this hash map — no network shuffle, no disk spill, no cross-executor coordination. The result is typically a 5-20x speedup over Sort-Merge Join for eligible queries.
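That speedup is not free: the broadcast table is collected once, then one deserialized copy lives on every executor as a hash map, and the in-memory hash map is usually several times larger than the compressed columnar files on disk. A rough cost model — the 3x expansion factor here is an illustrative assumption, not a constant; real inflation depends on schema, encoding, and compression, so measure your own tables:

```python
# Rough cost model for broadcasting a dimension table.
# on_disk_mb: size of the compressed, columnar files on object storage.
# expansion:  in-memory hash-map size vs on-disk size. 3x is an
#             illustrative assumption — measure before relying on it.
def broadcast_cost_mb(on_disk_mb: float, num_executors: int,
                      expansion: float = 3.0) -> dict:
    in_mem = on_disk_mb * expansion
    return {
        "per_executor_mb": in_mem,                    # one full copy each
        "driver_mb": in_mem,                          # collected here first
        "cluster_total_mb": in_mem * (num_executors + 1),
    }

# The "50 MB" dimension table from above, on a 500-executor cluster:
cost = broadcast_cost_mb(50, 500)
print(cost["per_executor_mb"])   # 150.0 MB resident on each executor
print(cost["cluster_total_mb"])  # 75150.0 MB (~73 GB) cluster-wide
```

A 50 MB file becoming ~150 MB on every one of 500 executors is still a bargain next to shuffling 2 TB — but the same math explains why broadcasting a "small" 500 MB table can take down a cluster.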

This post goes deep. We cover exactly what happens inside a broadcast join, all five ways to trigger one, how AQE converts joins at runtime, the real memory math that catches people off guard, when broadcast hurts instead of helps, and how to monitor and debug broadcast behavior in production.