You have 47,000 sales transactions from last month. Your dashboard needs to show 'total revenue by region.' You could load all 47,000 records into memory and loop through them...
Or you've been asked how many support tickets were opened each day for the past year. 365 individual database queries? A single massive query that times out? Neither feels right.
The data is all there. But turning 'every transaction' into 'what actually happened' requires a different approach.
Aggregation is the math that turns individual events into understanding - sums, counts, averages, and groupings that make data meaningful.
PERFORMANCE MULTIPLIER - Good aggregation can replace hours of computation with milliseconds of database work.
Aggregation takes many records and produces fewer, more meaningful ones. A thousand orders become 'total sales this quarter.' Ten thousand user sessions become 'average time on site.' The individual records are valuable, but the summary tells the story.
Every aggregation answers a question: How many? How much? What's the average? What's the trend? The question determines the function - COUNT, SUM, AVG, MIN, MAX, and their combinations. Group by time, by category, by user, by region. The output is always smaller than the input, always more actionable.
You're not losing data when you aggregate - you're extracting signal from noise. The raw events are still there when you need them. The aggregations let you understand what the events mean.
Aggregation applies a universal pattern: reduce many things to one number (or a few numbers grouped by category). The same pattern works everywhere data needs summarizing.
Take a set of records, apply a function (sum, count, average, min, max), optionally group by dimensions (time, category, region), output summary records. This pattern scales from spreadsheet formulas to petabyte data warehouses.
Start with 10 raw orders. Click each aggregation type to see how many rows you get back and what questions each approach can answer.
SELECT * FROM orders (input: 10 rows)
One function, one result
SELECT SUM(amount) FROM orders. SELECT COUNT(*) FROM users WHERE active = true. You're asking one question about the whole dataset. Fast, simple, but limited - you get a single number back.
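A minimal sketch of the whole-dataset case, using Python's sqlite3 with a small hypothetical orders table (the sample data and schema here are illustrative, not from the lesson):

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders (amount, status) VALUES (?, ?)",
    [(120.0, "completed"), (75.5, "completed"), (0.0, "cancelled"), (40.0, "completed")],
)

# One question about the whole dataset, one number back.
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(total, count)  # 235.5 4
```

Four input rows collapse to a single value per query: fast and simple, but you lose any breakdown by category.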
Same function, broken down by category
SELECT region, SUM(amount) FROM orders GROUP BY region. Now you're getting multiple answers - one per group. This is where aggregation becomes powerful: same computation, applied to meaningful segments.
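The same query run against a toy dataset shows the shape of the result: one summary row per group rather than one number total (region names and amounts below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 200.0), ("west", 25.0), ("north", 75.0)],
)

# Same SUM, applied separately to each region: one output row per group.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('north', 75.0), ('west', 225.0)]
```

Five input rows become three summary rows, one per region.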
Aggregations with context from surrounding rows
Running totals, moving averages, rank within groups. These don't collapse rows - they add computed columns that look at neighboring data. Essential for time-series analysis and comparative rankings.
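A running total is the simplest window function to see in action. This sketch assumes SQLite 3.25 or newer (which added window-function support); the daily_sales table and its values are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_sales (day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO daily_sales VALUES (?, ?)",
    [("2024-01-01", 100.0), ("2024-01-02", 50.0), ("2024-01-03", 200.0)],
)

# Window function: every row survives, and each gains a running total
# computed over the rows that precede it in day order.
rows = conn.execute("""
    SELECT day, amount,
           SUM(amount) OVER (ORDER BY day) AS running_total
    FROM daily_sales
    ORDER BY day
""").fetchall()
print(rows)
# [('2024-01-01', 100.0, 100.0), ('2024-01-02', 50.0, 150.0), ('2024-01-03', 200.0, 350.0)]
```

Note that three rows go in and three come out: unlike GROUP BY, the window version adds a column instead of collapsing rows.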
A month of e-commerce transactions needs to become a dashboard showing revenue by region, daily trends, and top-performing products. Raw orders are filtered, validated, then aggregated into summary tables that dashboards can query in milliseconds instead of minutes.
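The filter-then-aggregate-into-a-summary-table step of that pipeline can be sketched in a few lines; the daily_revenue table name, schema, and sample orders are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_date TEXT, region TEXT, amount REAL, status TEXT);
    INSERT INTO orders VALUES
        ('2024-03-01', 'east', 100.0, 'completed'),
        ('2024-03-01', 'west',  80.0, 'completed'),
        ('2024-03-01', 'east',   0.0, 'cancelled'),
        ('2024-03-02', 'east',  60.0, 'completed');

    -- Filter out invalid orders, then pre-aggregate into a summary table
    -- that the dashboard queries instead of the raw orders.
    CREATE TABLE daily_revenue AS
        SELECT order_date, region, SUM(amount) AS revenue
        FROM orders
        WHERE status = 'completed'
        GROUP BY order_date, region;
""")
rows = conn.execute(
    "SELECT * FROM daily_revenue ORDER BY order_date, region"
).fetchall()
print(rows)
# [('2024-03-01', 'east', 100.0), ('2024-03-01', 'west', 80.0), ('2024-03-02', 'east', 60.0)]
```

The dashboard now reads a table that is already small and already answers its question, which is why it responds in milliseconds.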
You calculate 'average order value' but include cancelled orders with $0 values. The average looks lower than expected because you're counting things that shouldn't be in the set.
Instead: Filter to relevant records BEFORE aggregating. WHERE status = 'completed' comes before SUM().
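A small sketch of the difference, with made-up order amounts, shows how much two $0 cancelled orders distort the naive average:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(100.0, "completed"), (200.0, "completed"), (0.0, "cancelled"), (0.0, "cancelled")],
)

# Naive average: the two cancelled $0 orders drag the number down.
naive = conn.execute("SELECT AVG(amount) FROM orders").fetchone()[0]

# Filter first, then aggregate: the WHERE clause runs before AVG.
filtered = conn.execute(
    "SELECT AVG(amount) FROM orders WHERE status = 'completed'"
).fetchone()[0]
print(naive, filtered)  # 75.0 150.0
```

Half the rows shouldn't be in the set, and the naive average is exactly half the real one.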
COUNT(*) counts every row. COUNT(email) counts only rows where email is not null. You think you're counting customers, but you're counting rows - including duplicates and rows with missing data.
Instead: Be explicit about what you're counting. COUNT(DISTINCT customer_id) is often what you really mean.
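The three counting variants give three different answers on the same table. This sketch uses a hypothetical users table with a duplicate row and a missing email:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (customer_id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "a@x.com"), (1, "a@x.com"), (2, None), (3, "c@x.com")],
)

total_rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]       # every row
emails = conn.execute("SELECT COUNT(email) FROM users").fetchone()[0]       # non-null emails only
customers = conn.execute(
    "SELECT COUNT(DISTINCT customer_id) FROM users"
).fetchone()[0]                                                             # unique customers
print(total_rows, emails, customers)  # 4 3 3
```

Three different numbers from one table: be explicit about which question you're actually asking.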
GROUP BY user_id on a table with 10 million users. You just asked the database to return 10 million rows. That's not aggregation - that's a slow SELECT with extra steps.
Instead: Aggregate on meaningful categories with reasonable cardinality. Time buckets, not timestamps. Product categories, not SKUs.
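Bucketing timestamps into days is the standard fix for the support-ticket question from the opening. A minimal sketch, assuming a hypothetical tickets table and using SQLite's date() function to truncate each timestamp to its day:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (opened_at TEXT)")
conn.executemany(
    "INSERT INTO tickets VALUES (?)",
    [("2024-03-01 09:15:00",), ("2024-03-01 17:40:00",), ("2024-03-02 08:05:00",)],
)

# Group by day (low cardinality), not by raw timestamp
# (which would give one group per row).
rows = conn.execute("""
    SELECT date(opened_at) AS day, COUNT(*) AS opened
    FROM tickets
    GROUP BY day
    ORDER BY day
""").fetchall()
print(rows)  # [('2024-03-01', 2), ('2024-03-02', 1)]
```

One query returns the whole year's daily counts - no 365 round trips, and no group explosion.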
You've learned how to turn raw records into meaningful summaries. The natural next step is understanding how to store and query time-based aggregations efficiently.