Your queries
deserve better.
A database architect you can talk to. Paste your slow query, describe your schema problem, ask about indexing strategy — get real SQL back, not hand-wavy advice.
SELECT u.*, COUNT(o.id) as order_count, SUM(o.total) as lifetime_value
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > '2024-01-01'
GROUP BY u.id
ORDER BY lifetime_value DESC NULLS LAST
LIMIT 100;
1. How many rows are in orders? If it's 10M+, the LEFT JOIN + GROUP BY is your bottleneck.
2. Do you have an index on orders(user_id)? What about users(created_at)?
3. Can you run EXPLAIN (ANALYZE, BUFFERS) and paste the output?

That said, here's an immediate improvement: aggregate per user so PostgreSQL can use an index lookup instead of scanning and grouping the full orders table:
SELECT
u.id, u.email, u.name, u.created_at,
COALESCE(agg.order_count, 0) AS order_count,
COALESCE(agg.lifetime_value, 0) AS lifetime_value
FROM users u
LEFT JOIN LATERAL (
SELECT
COUNT(*) AS order_count,
SUM(o.total) AS lifetime_value
FROM orders o
WHERE o.user_id = u.id
) agg ON true
WHERE u.created_at > '2024-01-01'
ORDER BY lifetime_value DESC NULLS LAST
LIMIT 100;
LATERAL joins let PostgreSQL compute the aggregate per user via the index on orders(user_id) instead of scanning and grouping the entire table. For top-N queries with aggregates, this pattern often drops runtimes from seconds to milliseconds.

Also: you're selecting u.*. Consider listing only the columns you need; wide rows mean more I/O.
Seq Scan on orders (cost=0.00..245891.20 rows=8012544 width=24)
Filter: (user_id IS NOT NULL)
-- Primary: covers the LATERAL join lookup
CREATE INDEX CONCURRENTLY idx_orders_user_id_total
ON orders (user_id)
INCLUDE (total);
-- Secondary: speeds up the WHERE filter on users
CREATE INDEX CONCURRENTLY idx_users_created_at
ON users (created_at DESC)
WHERE created_at > '2024-01-01';
INCLUDE (total) makes this a covering index — Postgres can compute the SUM without touching the heap. The partial index on users keeps it small. After creating both, run ANALYZE users, orders; and re-test.
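To confirm the covering index is doing its job, re-check the plan after the ANALYZE. A sketch, reusing the query and index names from the example above; the exact plan shape will depend on your data:

```sql
-- Refresh planner statistics so the new indexes are considered
ANALYZE users, orders;

-- Re-run the plan. With the covering index in place, the LATERAL subquery
-- should show "Index Only Scan using idx_orders_user_id_total" instead of
-- the earlier Seq Scan on orders.
EXPLAIN (ANALYZE, BUFFERS)
SELECT
  u.id, u.email, u.name, u.created_at,
  COALESCE(agg.order_count, 0)    AS order_count,
  COALESCE(agg.lifetime_value, 0) AS lifetime_value
FROM users u
LEFT JOIN LATERAL (
  SELECT COUNT(*) AS order_count, SUM(o.total) AS lifetime_value
  FROM orders o
  WHERE o.user_id = u.id
) agg ON true
WHERE u.created_at > '2024-01-01'
ORDER BY lifetime_value DESC NULLS LAST
LIMIT 100;
```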
What You Can Ask About
Six domains. One conversation.
Query Optimization
Paste a slow query and get it rewritten. I'll use LATERAL joins, CTEs, window functions, or recursive queries — whatever fits your engine and data shape.
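One sketch of the kind of rewrite this covers, assuming a hypothetical orders table with user_id and created_at: a correlated "latest order per user" subquery replaced by a single window-function pass.

```sql
-- Correlated subquery: the inner query can run once per outer row
SELECT o.*
FROM orders o
WHERE o.created_at = (
  SELECT MAX(o2.created_at)
  FROM orders o2
  WHERE o2.user_id = o.user_id
);

-- Window-function rewrite: one pass, rank rows per user, keep the latest
SELECT *
FROM (
  SELECT o.*,
         ROW_NUMBER() OVER (PARTITION BY o.user_id
                            ORDER BY o.created_at DESC) AS rn
  FROM orders o
) ranked
WHERE rn = 1;
```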
Schema Design
Normalization vs. denormalization trade-offs for your specific use case. I'll design tables, define relationships, and explain when to break the rules.
Index Strategy
B-tree, GIN, GiST, partial, covering, composite — I'll recommend the right index type and column order based on your query patterns and table size.
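A few of these index types side by side, against hypothetical tables; adjust names and predicates to your own schema and query patterns:

```sql
-- Composite B-tree: equality column first, then the range/sort column
CREATE INDEX idx_orders_user_created
  ON orders (user_id, created_at DESC);

-- Partial index: only index the rows a hot query actually touches
CREATE INDEX idx_orders_pending
  ON orders (created_at)
  WHERE status = 'pending';

-- GIN index: for jsonb containment (@>) and array operators
CREATE INDEX idx_products_attrs
  ON products USING GIN (attributes);
```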
EXPLAIN Analysis
Paste your EXPLAIN (ANALYZE, BUFFERS) output and I'll read it line by line. Sequential scans, nested loops, hash joins — I'll tell you what's wrong and how to fix it.
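If you haven't captured a plan before, the invocation looks like this. Note that EXPLAIN ANALYZE actually executes the statement, so wrap writes in a transaction you roll back (the UPDATE below is a hypothetical example):

```sql
BEGIN;

EXPLAIN (ANALYZE, BUFFERS)
UPDATE orders SET status = 'shipped' WHERE id = 42;

ROLLBACK;  -- the UPDATE really ran; undo it
```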
NoSQL Patterns
MongoDB aggregation pipelines, Redis data structures, DynamoDB single-table design. I'll help you model for access patterns, not just relationships.
Migrations & N+1
Write migration scripts, detect N+1 query problems in your ORM usage, and plan zero-downtime schema changes. Describe your stack and I'll write the migration.
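The N+1 shape as it appears at the SQL level, with the single-round-trip fix your ORM's eager loading generates under the hood. Hypothetical posts/comments tables:

```sql
-- N+1: one query for the parents...
SELECT id, title FROM posts ORDER BY id LIMIT 10;

-- ...then one query per parent, repeated 10 times by the ORM
SELECT * FROM comments WHERE post_id = 1;
SELECT * FROM comments WHERE post_id = 2;
-- (and so on for every post)

-- Fix: fetch all children in one round trip; the ORM groups them in memory
SELECT *
FROM comments
WHERE post_id IN (SELECT id FROM posts ORDER BY id LIMIT 10)
ORDER BY post_id;
```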
Ready to optimize?
Paste your query. Get it back faster.
Example Prompts
Things you can type right now
Every one of these is a real conversation starter. Query Forge will ask follow-up questions to give you the most relevant answer.
"My PostgreSQL query uses a subquery in WHERE EXISTS but it's doing a seq scan on a 50M row table. Here's the EXPLAIN output — what index am I missing?"
"I'm building a multi-tenant SaaS app. Should I use one database per tenant, schema per tenant, or shared tables with tenant_id? ~500 tenants, PostgreSQL."
"Write a MongoDB aggregation pipeline that groups orders by month, calculates running totals, and filters for customers with >$10K lifetime spend."
"My Rails app is making 300+ queries per page load. I think it's an N+1 problem with has_many :through. Here's my model code — help me fix the includes."
"Help me design a DynamoDB single-table schema for an e-commerce app. Access patterns: get user orders, get order items, get product reviews by date."
Pasted my EXPLAIN output and got back the exact covering index I needed. Query went from 4 seconds to 80ms. I've been staring at this for two days.
It actually asked me what database engine I was using before suggesting anything. That's more than most humans on Stack Overflow do.
Used it to design a DynamoDB schema for a new project. The single-table design it suggested was cleaner than what our team had drafted in two meetings.
Stop guessing. Start forging.
Paste your slow query, your messy schema, your confusing EXPLAIN plan. Get back real answers from an AI that thinks in execution plans.
Ask Your First Question

Chat-based AI on AURVEK · No installation needed