# Level Up Your Queries: 7 Intermediate SQL Tricks Every Analyst Should Know

Agnieszka Kozubek-Krycuń | Data Analysis | 5th Nov 2025 | 10 minutes read

Table of Contents:

1. Use CASE WHEN for Conditional Columns
2. Combine CASE WHEN with GROUP BY and SUM
3. Simplify Summaries with ROLLUP
4. Handle Missing Data with COALESCE and NULLIF
5. Master Date and Time Manipulation
6. Use JSON Functions for Semi-Structured Data
7. Organize Queries with Common Table Expressions (CTEs)

Keep Advancing Your SQL Superpowers

SQL mastery doesn’t stop at writing simple queries — it begins when you start transforming data into insights. Learn seven intermediate SQL tricks every analyst should know to analyze smarter, work faster, and uncover patterns your basic queries can’t show.

If you already know your way around SELECT, JOIN, and WHERE, it’s time to step up your game. SQL doesn’t end at SELECT — if you want to become a professional data analyst, you need to go beyond the basics and start thinking like someone who turns data into answers. Intermediate SQL is where analysis gets interesting — when you stop pulling raw data and start building insights that actually drive business decisions.

In this article, we’ll go through seven intermediate-level SQL tricks that will help any data analyst work smarter and faster. These techniques are practical, real-world skills that will make your queries more powerful, efficient, and ready for advanced analysis.

At LearnSQL.com, our SQL for Data Analysis track helps you move from writing basic queries to understanding data at a deeper level. Below are seven SQL tricks every data analyst should master to make their queries faster, cleaner, and more insightful.

## 1. Use CASE WHEN for Conditional Columns

Often, analysts need to create their own categories — for example, separating VIP customers from regular ones or labeling transactions as “high” or “low” value. While filters show you existing groups, sometimes you need to define your own.
That’s where CASE WHEN helps. It’s a standard SQL expression that works like an IF statement, letting you build conditional logic directly into your queries to segment data or apply business rules dynamically.

Example:

```sql
SELECT
  customer_id,
  total_spent,
  CASE
    WHEN total_spent > 1000 THEN 'VIP'
    WHEN total_spent BETWEEN 500 AND 1000 THEN 'Regular'
    ELSE 'New'
  END AS customer_segment
FROM customers;
```

Here’s what’s happening in this query. It starts by selecting each customer’s ID and total spending from the customers table. Then, the CASE expression runs through each row, checking the value of total_spent in order. If a customer has spent more than 1000, SQL assigns them the label 'VIP'. If not, it moves to the next condition — customers who spent between 500 and 1000 get labeled 'Regular'. Everyone else falls into the 'New' category defined by the ELSE clause. The CASE expression stops as soon as it finds the first true condition, ensuring that every customer gets exactly one category.

The final output includes a new column, customer_segment, which reflects the label determined by those rules. This approach lets you embed simple business logic directly in your SQL, making your reports or dashboards immediately more insightful without extra data cleaning or manual classification.

You can read more about the CASE expression in our article SQL CASE WHEN Explained: 10 Easy Examples for Beginners.

## 2. Combine CASE WHEN with GROUP BY and SUM

Sometimes you need to create your own summary metrics — for example, counting how many high-value, medium-value, and low-value orders each region generates. Standard grouping can only summarize existing columns, but business analysis often requires custom definitions.

That’s where combining CASE WHEN with GROUP BY and aggregate functions like SUM or COUNT becomes powerful. It lets you calculate multiple, condition-based statistics in one query — a clean way to turn raw data into meaningful business insights.
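If you want to try the segmentation query from the previous section without setting up a database server, here is a minimal sketch using Python's built-in sqlite3 module; the customers table and its sample spending figures are made up for illustration:

```python
import sqlite3

# In-memory database with a hypothetical customers table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER, total_spent REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, 1500), (2, 750), (3, 120)],
)

# The CASE WHEN expression assigns exactly one label per row:
# the first condition that evaluates to true wins.
rows = conn.execute("""
    SELECT customer_id,
           total_spent,
           CASE
             WHEN total_spent > 1000 THEN 'VIP'
             WHEN total_spent BETWEEN 500 AND 1000 THEN 'Regular'
             ELSE 'New'
           END AS customer_segment
    FROM customers
""").fetchall()

for customer_id, total_spent, segment in rows:
    print(customer_id, total_spent, segment)
# 1 1500.0 VIP
# 2 750.0 Regular
# 3 120.0 New
```

Because CASE WHEN is standard SQL, the same expression runs unchanged on PostgreSQL, MySQL, or SQL Server.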
Example:

```sql
SELECT
  region,
  SUM(CASE WHEN total_amount > 1000 THEN 1 ELSE 0 END) AS high_value_orders,
  SUM(CASE WHEN total_amount BETWEEN 500 AND 1000 THEN 1 ELSE 0 END) AS mid_value_orders,
  SUM(CASE WHEN total_amount < 500 THEN 1 ELSE 0 END) AS low_value_orders
FROM orders
GROUP BY region;
```

This query groups orders by region and counts how many fall into different value categories. For each order, the CASE expression checks the total_amount and returns 1 if the condition is true or 0 if it’s not. The SUM then adds up the ones to produce totals for each category within a region.

This pattern is useful for comparing business performance — for example, spotting which regions generate more high-value transactions or tracking how order sizes differ across markets. Instead of writing multiple queries, you can summarize everything in one view using CASE WHEN with GROUP BY.

You can learn how to work with CASE, COUNT, SUM, and GROUP BY in our intermediate-level course Creating Basic SQL Reports.

## 3. Simplify Summaries with ROLLUP

When analyzing data across time — such as sales by month and year — you often need multiple levels of aggregation: monthly totals, yearly totals, and a grand total. Usually, that would require several separate queries or UNION statements.

ROLLUP solves this problem neatly. It extends GROUP BY to include subtotals and grand totals automatically, making it ideal for financial reports, dashboards, or executive summaries.
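To see why ROLLUP is such a convenience, it helps to write out the manual alternative it replaces: one query per aggregation level, glued together with UNION ALL. The sketch below uses Python's built-in sqlite3 module (SQLite does not support ROLLUP) with a made-up orders table; strftime stands in for EXTRACT:

```python
import sqlite3

# Hypothetical orders table with a few sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, total_amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("2023-01-15", 100), ("2023-02-10", 200), ("2024-01-05", 300)],
)

# Each aggregation level is a separate SELECT: monthly totals, yearly
# subtotals, and a grand total. ROLLUP generates all three from one GROUP BY.
rows = conn.execute("""
    SELECT strftime('%Y', order_date) AS year,
           strftime('%m', order_date) AS month,
           SUM(total_amount) AS total_sales
    FROM orders
    GROUP BY year, month
    UNION ALL
    SELECT strftime('%Y', order_date), NULL, SUM(total_amount)
    FROM orders
    GROUP BY strftime('%Y', order_date)
    UNION ALL
    SELECT NULL, NULL, SUM(total_amount) FROM orders
""").fetchall()

# As with ROLLUP, NULL in year or month marks a subtotal or grand-total row.
for year, month, total in rows:
    print(year, month, total)
```

Maintaining three separate SELECTs for one report is exactly the duplication ROLLUP eliminates.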
Example:

```sql
SELECT
  EXTRACT(YEAR FROM order_date) AS year,
  EXTRACT(MONTH FROM order_date) AS month,
  SUM(total_amount) AS total_sales
FROM orders
GROUP BY ROLLUP(
  EXTRACT(YEAR FROM order_date),
  EXTRACT(MONTH FROM order_date))
ORDER BY year, month;
```

Result:

| year | month | total_sales |
|------|-------|-------------|
| 2023 | 1     | 85,000      |
| 2023 | 2     | 90,500      |
| 2023 | NULL  | 175,500     |
| 2024 | 1     | 92,000      |
| 2024 | 2     | 95,300      |
| 2024 | NULL  | 187,300     |
| NULL | NULL  | 362,800     |

Here, ROLLUP(year, month) tells SQL to group data first by year and month, then add an extra row summarizing each year, and finally another row showing the grand total across all years. Wherever you see NULL in the result, that’s a subtotal or total level automatically generated by ROLLUP.

This makes it easy to prepare time-based summaries in one query — for example:

- Monthly and yearly sales reports.
- Website visits by day, month, and overall.
- Expense tracking per department with company-wide totals.

In short, ROLLUP is a quick, reliable way to produce hierarchical summaries directly in SQL — no spreadsheet formulas or manual aggregation needed.

You can learn about ROLLUP and other GROUP BY extensions in our intermediate-level course GROUP BY Extensions in SQL.

## 4. Handle Missing Data with COALESCE and NULLIF

The ROLLUP example shows a common issue: subtotal rows use NULL to represent missing grouping levels, which can make results harder to read. More generally, NULL values in data often cause confusion — blank cells in reports, missing text in dashboards, or incomplete calculations in aggregates.

Every data analyst should know how to handle NULLs effectively, because unmanaged missing values can lead to wrong conclusions or failed queries. Fortunately, SQL provides simple tools to make these values more meaningful and easier to work with: COALESCE and NULLIF.

### Using COALESCE: Replace NULL with Readable Values

COALESCE returns the first non-NULL value from a list, letting you replace the NULL label with something more meaningful.
Example:

```sql
SELECT
  COALESCE(region, 'Total') AS region_label,
  SUM(total_amount) AS total_sales
FROM orders
GROUP BY ROLLUP(region);
```

Result:

| region_label | total_sales |
|--------------|-------------|
| East         | 85,000      |
| West         | 92,300      |
| Total        | 177,300     |

Here, COALESCE(region, 'Total') replaces the NULL from the subtotal row with a readable label. You can use it the same way to fill in missing numbers (COALESCE(discount, 0)) or text values (COALESCE(name, 'Unknown')).

### Using NULLIF: Prevent Division by Zero Errors

NULLIF does the opposite — it converts a specific value into NULL. Its most practical use is preventing division by zero in calculations.

Example:

```sql
SELECT
  product_id,
  sales,
  returns,
  sales / NULLIF(returns, 0) AS sales_to_returns_ratio
FROM product_stats;
```

If returns equals zero, NULLIF(returns, 0) returns NULL, stopping SQL from throwing a division-by-zero error.

By mastering COALESCE and NULLIF, analysts can make their queries more reliable and readable — ensuring that missing data never gets in the way of accurate insights.

## 5. Master Date and Time Manipulation

Dates are at the core of nearly every analysis — tracking monthly revenue, weekly retention, quarterly churn, or yearly growth. But raw timestamps are rarely in the format you need. To get meaningful insights, you often have to extract parts of a date, calculate time intervals, or adjust time zones.

A data analyst should know how to:

- sort records chronologically,
- extract specific parts of a date or time (year, month, day, hour, minute),
- add or subtract time intervals,
- format dates for reports,
- convert between time zones, and
- handle conversions between date and string or timestamp types.

SQL provides functions for all of these.

Example:

```sql
SELECT
  EXTRACT(YEAR FROM order_date) AS year,
  EXTRACT(MONTH FROM order_date) AS month,
  COUNT(order_id) AS orders
FROM orders
GROUP BY EXTRACT(YEAR FROM order_date), EXTRACT(MONTH FROM order_date)
ORDER BY year, month;
```

This query groups all orders by year and month to count how many were placed in each period.
EXTRACT pulls out parts of a date — here, the year and month from order_date. The GROUP BY uses those extracted values to aggregate data by time period, and ORDER BY ensures results appear chronologically.

Once you understand how to extract and manipulate date parts, you can easily expand your analysis — compare month-over-month growth, measure average order intervals, or calculate retention windows. These skills let you turn timestamps into insights that reflect real business cycles.

(Get our SQL for Data Analysis Cheat Sheet to see these functions in one place.)

## 6. Use JSON Functions for Semi-Structured Data

More and more business data today comes from REST APIs — web services that return information in JSON (JavaScript Object Notation) format. JSON is a lightweight, semi-structured way to store data using key–value pairs, arrays, and nested objects. For example, an API might return this:

```json
{"user_id": 42, "device": "mobile", "location": {"country": "US", "city": "Boston"}}
```

This kind of data is flexible, but it doesn’t fit neatly into standard SQL tables. A single column might contain an entire JSON object, making it impossible to use traditional SQL operators like = or LIKE to extract values. That’s why modern databases include JSON functions such as JSON_EXTRACT() in MySQL or JSON_VALUE() in SQL Server. They let you query, filter, and aggregate semi-structured data directly.

Example:

```sql
SELECT
  JSON_EXTRACT(user_data, '$.device') AS device_type,
  COUNT(*) AS users
FROM events
GROUP BY device_type;
```

Here, the column user_data stores JSON objects describing user sessions. The function JSON_EXTRACT(user_data, '$.device') retrieves the value of the device key (like 'mobile' or 'desktop') for each row. The query then groups the results by device type and counts how many users belong to each category.

Being able to work with JSON means you can analyze data from modern systems — API responses, app logs, or tracking events — without needing to reshape it first.
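If you want to experiment with this pattern locally, SQLite's built-in json_extract function behaves much like MySQL's JSON_EXTRACT. Here is a minimal sketch using Python's sqlite3 module, with a made-up events table:

```python
import sqlite3

# Hypothetical events table where each row stores a JSON session object.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_data TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?)",
    [
        ('{"user_id": 1, "device": "mobile"}',),
        ('{"user_id": 2, "device": "desktop"}',),
        ('{"user_id": 3, "device": "mobile"}',),
    ],
)

# json_extract pulls the value of the "device" key out of each JSON object,
# so the aggregation can group by it like a regular column.
rows = conn.execute("""
    SELECT json_extract(user_data, '$.device') AS device_type,
           COUNT(*) AS users
    FROM events
    GROUP BY device_type
    ORDER BY device_type
""").fetchall()

print(rows)  # [('desktop', 1), ('mobile', 2)]
```

The '$.device' path syntax is the same idea across MySQL, SQLite, and SQL Server, even though the exact function names differ by database.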
It’s an essential skill for analysts dealing with data that’s not fully structured but still full of valuable insights.

## 7. Organize Queries with Common Table Expressions (CTEs)

As your SQL skills grow, your queries naturally get longer — multiple joins, filters, and aggregations all in one statement. That’s great for analysis but hard to read and maintain.

Common Table Expressions (CTEs) make complex queries cleaner and easier to manage. They let you define a temporary, named result set that you can reference within the same query. Think of a CTE as a reusable subquery with a name.

CTEs are especially useful when:

- you need to break down a multi-step analysis into logical parts,
- the same calculation is used more than once, or
- you want to debug and test sections of a larger query separately.

Example:

```sql
WITH monthly_sales AS (
  SELECT
    EXTRACT(YEAR FROM order_date) AS year,
    EXTRACT(MONTH FROM order_date) AS month,
    SUM(amount) AS total_sales
  FROM orders
  GROUP BY EXTRACT(YEAR FROM order_date), EXTRACT(MONTH FROM order_date)
)
SELECT year, month, total_sales
FROM monthly_sales
WHERE total_sales > 50000
ORDER BY year, month;
```

Here, the CTE named monthly_sales calculates total sales per month and year. It acts like a temporary table that exists only for the duration of the query. The outer query then filters this result to show only months where sales exceeded 50,000.

This structure improves readability and makes it easier to extend the query — for example, you could join monthly_sales with a targets table or compute growth rates without rewriting the aggregation logic.

CTEs are an essential tool for analysts who write layered, data-driven reports or need to keep complex SQL organized and maintainable. You can learn more about Common Table Expressions in our advanced course Recursive Queries.

## Keep Advancing Your SQL Superpowers

Each of these techniques helps you write smarter, more flexible queries — the kind that save time and impress teams.
If you’re ready to move beyond basics and build real analytical skills, explore SQL for Data Analysis. It covers everything from intermediate to advanced reporting, all with hands-on practice.

Want full access to every course we offer? The All Forever SQL Package gives you lifetime access to all LearnSQL.com current and future courses and tracks — the best value for money and a single payment that pays off for your entire career.

Keep learning, keep querying, and keep leveling up.

Tags: Data Analysis