6 posts tagged with "Databricks"

The Serverless Black Box: What You Lose on Databricks Serverless Compute

· 13 min read
Cazpian Engineering
Platform Engineering Team

Databricks serverless compute promises a simple deal: stop managing clusters and just run your workloads. No instance selection. No autoscaling policies. No driver sizing. Just submit your query or job and let Databricks handle the rest.

The pitch is compelling. The reality is a black box that removes not just infrastructure management, but your ability to observe what is happening, tune how it runs, and control what it costs.

This is Part 3 of our Databricks observability series. In the previous post, we documented how system tables leave critical metrics gaps. Serverless makes those gaps dramatically worse — because on serverless, you lose even the tools that classic compute provides.

Databricks System Tables: The Observability Gap — What They Expose vs What You Actually Need for Cost Control

· 17 min read
Cazpian Engineering
Platform Engineering Team

Databricks System Tables: The Observability Gap

Databricks system tables look comprehensive on paper. Sixteen tables across ten schemas. Billing, compute, jobs, queries, lineage, audit. When Databricks deprecated Overwatch and pointed everyone to system tables, the message was clear: this is the future of observability.

But if you have ever tried to answer these questions using only system tables, you already know the gap:

  • Why is my job's GC time at 15%? (System tables do not track GC time.)
  • Which stage is spilling to disk? (System tables do not track per-stage spill.)
  • Which executor is the memory bottleneck? (System tables do not track executor-level JVM metrics.)
  • How many files does my Delta table have? Is it healthy? (System tables do not track Delta table physical metrics.)
  • What did my serverless job's infrastructure look like? (System tables do not populate node_timeline for serverless.)

This post documents exactly what Databricks system tables contain — column by column — and exactly what is missing. Every claim is verifiable against Databricks' own documentation.
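The gap pattern in the questions above can be made concrete with a small sketch. This is illustrative only: the metric labels below are shorthand for the categories the post discusses, not real system-table schema identifiers.

```python
# A minimal sketch of the observability gap: encode which metric categories
# the system tables cover (per the post's claims) and check a wishlist of
# debugging questions against them. All names are illustrative labels,
# not actual Databricks schema identifiers.

SYSTEM_TABLE_METRICS = {
    "billing_usage",     # DBU consumption per workload
    "job_run_timeline",  # job start/end times and run states
    "query_history",     # SQL query text and durations
    "node_timeline",     # per-node utilization (classic compute only)
}

# None = no system table backs this metric.
WISHLIST = {
    "gc_time": None,
    "per_stage_spill": None,
    "executor_jvm_metrics": None,
    "delta_table_file_stats": None,
    "serverless_node_timeline": None,  # node_timeline is empty for serverless
    "dbu_cost": "billing_usage",
}

def find_gaps(wishlist, available):
    """Return the wishlist metrics with no backing system table."""
    return sorted(metric for metric, source in wishlist.items()
                  if source is None or source not in available)

print(find_gaps(WISHLIST, SYSTEM_TABLE_METRICS))
```

Five of the six questions a debugging engineer actually asks come back unanswered; only cost attribution is covered.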

How to Profile Spark Jobs After Completion: The Complete Guide to Collecting Metrics on Databricks, EMR, and Dataproc

· 22 min read
Cazpian Engineering
Platform Engineering Team

How to Profile Spark Jobs After Completion

Your Spark job finished. It ran for 2 hours. It used 50 executors. The bill arrived. Now what?

Most teams have no systematic way to answer the obvious follow-up questions: Was 50 executors the right number? Did we waste memory? Was there data skew? Should we use different instance types? They either overprovision "just in case" — burning 2-5x the needed budget — or reactively debug when jobs fail with OOM errors.

The root cause is not a lack of tooling. It is a data collection problem. Spark generates extraordinarily detailed metrics at every level — task, stage, executor, application — but those metrics vanish when the application terminates. If you did not capture them during or immediately after execution, they are gone.

This post solves that problem. We cover every method for collecting Spark profiling metrics, show exactly how to set them up on Databricks, EMR, and Dataproc, and then show you how to turn those metrics into right-sizing decisions with concrete formulas and thresholds.
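As a taste of the approach: if the Spark event log was enabled before the run (`spark.eventLog.enabled=true`), it survives the application as a JSON-lines file that can be aggregated afterwards. The sketch below parses trimmed, illustrative `SparkListenerTaskEnd` records (hand-written samples, not output from a real run) and rolls up task-level metrics.

```python
import json

# Two trimmed, hand-written sample lines in the shape of Spark's event log
# (JSON lines, one event per line). Real logs carry many more fields.
SAMPLE_EVENT_LOG = """\
{"Event":"SparkListenerTaskEnd","Stage ID":1,"Task Metrics":{"Executor Run Time":4000,"JVM GC Time":600,"Disk Bytes Spilled":1048576}}
{"Event":"SparkListenerTaskEnd","Stage ID":1,"Task Metrics":{"Executor Run Time":6000,"JVM GC Time":900,"Disk Bytes Spilled":0}}
"""

def aggregate_task_metrics(lines):
    """Sum run time, GC time, and disk spill across all TaskEnd events."""
    totals = {"run_ms": 0, "gc_ms": 0, "spill_bytes": 0}
    for line in lines:
        event = json.loads(line)
        if event.get("Event") != "SparkListenerTaskEnd":
            continue
        metrics = event.get("Task Metrics") or {}
        totals["run_ms"] += metrics.get("Executor Run Time", 0)
        totals["gc_ms"] += metrics.get("JVM GC Time", 0)
        totals["spill_bytes"] += metrics.get("Disk Bytes Spilled", 0)
    return totals

totals = aggregate_task_metrics(SAMPLE_EVENT_LOG.splitlines())
gc_fraction = totals["gc_ms"] / totals["run_ms"]
print(totals, f"GC fraction: {gc_fraction:.0%}")
```

On this sample, GC consumes 15% of executor run time, exactly the kind of signal that vanishes if the event log was never captured.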

Why Every Data Company Is Betting on Apache Iceberg — And What It Means for AI

· 13 min read
Cazpian Engineering
Platform Engineering Team

Why Every Data Company Is Betting on Apache Iceberg

Something unusual is happening in the data industry. Companies that have spent years — and billions of dollars — building proprietary storage formats are now rallying behind an open-source table format created at Netflix. Snowflake, Databricks, Dremio, Starburst, Teradata, Google BigQuery, AWS — the list keeps growing. They are not just adding Iceberg as a checkbox feature. They are making it central to their platform strategy.

If you are a data engineer, you have almost certainly heard of Apache Iceberg by now. But the more interesting question is not what Iceberg is — it is why every major vendor has decided that their own proprietary format is no longer enough.

Databricks vs. EMR vs. Cazpian: The 2026 Compute Cost Showdown

· 13 min read
Cazpian Engineering
Platform Engineering Team

"Which platform is cheapest for Spark?" is one of the most common questions data teams ask — and one of the most misleading. The honest answer is: it depends entirely on your workload shape.

A platform that saves you thousands on large nightly batch jobs might quietly waste thousands on your fleet of small ETL runs. The billing model that looks transparent at first glance might hide costs in cold starts, minimum increments, or idle compute you never asked for.

In this post — Part 3 of our compute cost series — we compare Databricks, Amazon EMR, and Cazpian across three realistic workload scenarios. No hypotheticals. Real pricing. Real math.
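To preview why workload shape dominates the answer, here is the shape of the math with deliberately made-up rates (placeholders, not real Databricks or EMR pricing): a per-hour platform fee on top of an instance price, plus an optional minimum billing increment per run.

```python
# Illustrative only: all rates below are placeholders, not real pricing.
# The point is that per-run minimums and fee structure can flip the ranking
# between a big nightly batch job and a fleet of tiny ETL runs.

def job_cost(hours, instance_rate, platform_fee, min_billed_hours=0.0):
    """Cost of one run: billed hours times (instance + platform) rate."""
    billed = max(hours, min_billed_hours)
    return billed * (instance_rate + platform_fee)

# Same total compute time (10 hours) in two very different shapes.
scenarios = {
    "nightly_batch": [4.0],          # one 4-hour job
    "small_etl_fleet": [0.05] * 200, # 200 three-minute jobs
}

# Two hypothetical platforms: higher fee with no minimum, versus lower fee
# with a 0.1-hour minimum billing increment per run.
platforms = {
    "high_fee_no_min": dict(instance_rate=1.0, platform_fee=0.60),
    "low_fee_with_min": dict(instance_rate=1.0, platform_fee=0.20,
                             min_billed_hours=0.1),
}

for scenario, runs in scenarios.items():
    for platform, terms in platforms.items():
        total = sum(job_cost(h, **terms) for h in runs)
        print(f"{scenario:16s} on {platform:16s}: ${total:6.2f}")
```

With these placeholder numbers, the low-fee platform wins the nightly batch but loses badly on the small-job fleet, because each three-minute run is billed as six minutes. That is the "it depends on workload shape" answer in miniature.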

The Small Job Tax: How Spark Cold Starts Are Silently Draining Your Data Budget

· 10 min read
Cazpian Engineering
Platform Engineering Team

Most data teams obsess over optimizing their biggest, most complex Spark jobs. Meanwhile, hundreds of tiny ETL jobs — each processing a few gigabytes — quietly rack up a bill that nobody questions.

We call it the Small Job Tax: the disproportionate cost of running lightweight workloads on infrastructure designed for heavy lifting. And for many organizations, it is the single largest source of wasted compute spend.
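The tax is easy to quantify in back-of-the-envelope form. Assuming a four-minute cluster cold start (a placeholder, not a measured number), the fraction of billed time spent on startup grows sharply as the useful work shrinks:

```python
# Back-of-the-envelope Small Job Tax: share of billed time lost to startup.
# The 4-minute cold start is an assumed placeholder, not a measurement.

def cold_start_tax(work_minutes, cold_start_minutes=4.0):
    """Fraction of total billed time consumed by cluster startup."""
    return cold_start_minutes / (cold_start_minutes + work_minutes)

for work in (240, 30, 4):
    print(f"{work:3d} min of work -> {cold_start_tax(work):.0%} overhead")
```

A four-hour batch job barely notices the cold start; a four-minute ETL job pays a 50% surcharge on every single run.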