
Postgres Traceability: Use SQL Comments Like a Stacktrace

Have you ever traced a SQL statement back to the source application? Of course you have! Was it time-consuming? Probably! SQL queries can hide in plain sight.

If you've read through any of Crunchy's advice on query performance or healthy databases, you know there's a goldmine of optimization waiting for you in your database. But when you use an ORM, the queries it generates are obfuscated, and tracing a specific query back to the application code can be challenging.

Using Ruby as an example (BTW, I <3 Ruby; I will see you at RailsConf!), ORM code generally looks like the following. If this were the source of your performance issues, how would you know?

	Message
	.where(recipient_id: params[:id])
	.includes(:recipient, :sender)
	.order(sent_at: :desc)

The SQL that is generated looks nothing like the ORM code that is written. The above code will output the following SQL statements:

-- get messages
SELECT "messages".* FROM "messages" WHERE "messages"."recipient_id" = $1
ORDER BY "messages"."sent_at" DESC

-- get recipient
SELECT "users".* FROM "users" WHERE "users"."id" = $1

-- get senders
SELECT "users".* FROM "users" WHERE "users"."id" IN
(/* list of args whose length depends on number of senders*/)

But that's just Ruby, right? Node's Prisma …

I built a similar structure in the Prisma ORM and ran the following code:

	const messages = await prisma.message.findMany({
		where: { recipient_id: 10088 },
		include: { recipient: true, sender: true },
		orderBy: [{ sent_at: 'desc' }],
	})
The queries generated look like the following:

-- get messages
SELECT "public"."Message"."id", "public"."Message"."message",
"public"."Message"."recipient_id", "public"."Message"."sender_id",
"public"."Message"."sent_at", "public"."Message"."created_at",
"public"."Message"."updated_at" FROM "public"."Message"
WHERE "public"."Message"."recipient_id" = $1 ORDER BY
"public"."Message"."sent_at" DESC OFFSET $2

-- get recipient
SELECT "public"."User"."id", "public"."User"."email",
"public"."User"."name", "public"."User"."created_at",
"public"."User"."updated_at" FROM "public"."User" WHERE
 "public"."User"."id" IN ($1) OFFSET $2

-- get sender
SELECT "public"."User"."id", "public"."User"."email",
"public"."User"."name", "public"."User"."created_at",
"public"."User"."updated_at" FROM "public"."User" WHERE
"public"."User"."id" IN
(/* list of args whose length depends on number of senders*/) OFFSET $25

What do SQL comments do?

Enough with the ORM bashing! But you get the point, right? Picture a system with high load, slow responses, and a database log overflowing with slow queries. Then a land mine of a query is thrown into the database. How can we fix this? How can we point you in the right direction?

Use SQL Comments!

What does a commented SQL statement look like?

-- get messages from Rails
SELECT "messages".* FROM "messages" WHERE "messages"."recipient_id" = $1
ORDER BY "messages"."sent_at" DESC /*application:CustomerFacingFrontEndApp,
line:/app/controllers/messages_controller:5:in `<main>'*/

By looking at the comment, we know the query was run by executing line 5 of the MessagesController in the CustomerFacingFrontEndApp. Query comments are recorded in your slow query logs and in pg_stat_statements.
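How those tags get attached varies by framework, but the mechanics are just string concatenation. A minimal plain-Ruby sketch (the `tag_sql` helper and the application name are hypothetical, not a Rails or library API):

```ruby
APP_NAME = "CustomerFacingFrontEndApp"

# Append a traceability comment to a SQL string, mimicking the
# application/line tags shown above.
def tag_sql(sql)
  loc = caller_locations(1, 1).first  # the frame that called tag_sql
  "#{sql} /*application:#{APP_NAME},line:#{loc.path}:#{loc.lineno}*/"
end

puts tag_sql(%(SELECT "messages".* FROM "messages" WHERE "messages"."recipient_id" = $1))
```

Whatever shows up in the logs now points back at the exact file and line that issued the query.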

NICE! So, how can we do that? It all depends …

Python, Java, Node.js, & PHP:

  • sqlcommenter: this project has a wide breadth, so it may or may not be plug-and-play for your environment, but it is a good starting point.

Rails (which I love) has 2 native features in Rails 7:

  • Automatic query log tags: enable config.active_record.query_log_tags_enabled and Rails appends tags like the application, controller, and action to each query
  • Write your own comments in the SQL with the annotate method :)
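One of those Rails 7 features is automatic query log tags, and enabling them is just a config change. A sketch of what that might look like (assuming Rails 7; the tag list is an example):

```ruby
# config/application.rb -- sketch, assuming Rails 7
config.active_record.query_log_tags_enabled = true
config.active_record.query_log_tags = [
  :application,  # the Rails app name
  :controller,   # which controller issued the query
  :action        # which action within that controller
]
```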

But wait, there is more

Combine SQL comments with Postgres features like log_min_duration_statement, auto_explain, and pg_stat_statements, and you'll level up your traceability.
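On the Postgres side, slow-query logging is driven by settings like these. A minimal postgresql.conf sketch (the thresholds are examples, not recommendations):

```ini
# postgresql.conf -- example values
shared_preload_libraries = 'pg_stat_statements,auto_explain'
log_min_duration_statement = 250      # log statements slower than 250 ms
auto_explain.log_min_duration = 250   # also log the plan for those statements
```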

Below is a query that returns the recipient_id with the most received messages. I've also added the Ruby on Rails annotate method to track how a slow query is being executed:

	Message
	.select("recipient_id, COUNT(id) AS max_received")
	.group(:recipient_id)
	.order("max_received DESC")
	.limit(1)
	.annotate("Github Issue #345")

In my log management system, I can search for "Github Issue #345" (of course, you should tie this back to your issue tracking system!). The logs would show the following:

LOG:  duration: 248.578 ms  plan:
  Query Text: SELECT recipient_id, COUNT(id) AS max_received
  FROM "messages" GROUP BY "messages"."recipient_id"
  /* Github Issue #345 */ ORDER BY max_received DESC LIMIT $1
  Limit  (cost=126246.71..126246.72 rows=1 width=12)
    ->  Sort  (cost=126246.71..126401.77 rows=62022 width=12)
          Sort Key: (count(id)) DESC
          ->  Finalize GroupAggregate  (cost=110223.35..125936.60 rows=62022 width=12)
                Group Key: recipient_id
                ->  Gather Merge  (cost=110223.35..124696.16 rows=124044 width=12)
                      Workers Planned: 2
                      ->  Sort  (cost=109223.33..109378.38 rows=62022 width=12)
                            Sort Key: recipient_id
                            ->  Partial HashAggregate  (cost=94535.88..104286.23 rows=62022 width=12)
                                  Group Key: recipient_id
                                  Planned Partitions: 4
                                  ->  Parallel Seq Scan on messages  (cost=0.00..34642.25 rows=934925 width=12)

Yuck! This query just performed parallel sequential scans over 934,925 rows of the messages table!

Step #1: read Postgres Indexes for Newbies. Step #2: add indexes. Step #3: measure improvement or revert. Step #4: rinse and repeat.
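Step #2 might look like the following (a hypothetical Rails migration; recipient_id is the column the slow query filters and groups on, so it is the natural index candidate):

```ruby
# Hypothetical migration -- the index choice follows from the
# GROUP BY recipient_id in the slow plan above.
class AddRecipientIdIndexToMessages < ActiveRecord::Migration[7.0]
  def change
    add_index :messages, :recipient_id
  end
end
```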

Where to start?

Even if you don't have slow queries now, go ahead and start commenting your queries. Your future-self will thank your past-self.


Written by

Christopher Winslett

May 16, 2022