
SQL Review: From Linting to Governance

Adela · 8 min read

SQL review is the practice of checking SQL changes (schema migrations and data operations) for correctness, safety, and policy compliance before they run against a database. It sits between SQL linting, which catches syntax and style, and code review, which is a human reading a pull request. The scope of SQL review is narrower than either. It is automated, database-aware, and in mature implementations it lives inside the change workflow with an approval gate attached. See what is database change management for the broader category.

The three layers of SQL review

Most tools called "SQL review" occupy one, two, or all three of these layers. The category confusion in the market comes from the fact that a layer 1 tool and a layer 3 tool both answer "do you do SQL review?" with "yes."

Layer 1: linting

SQLFluff is the clearest example. A linter checks SQL for syntax correctness and style conventions: consistent keyword casing, correct indentation, no forbidden functions. This is the same category of work as Prettier or ESLint for JavaScript. Useful, and it prevents style arguments in code review, but it does not know what the SQL does.
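To make the layer concrete, here is a minimal sketch of a style-only check in Python: flag SQL keywords that are not uppercase. This illustrates the category of work a linter does, not SQLFluff's actual rule engine; the keyword list and message format are assumptions for the example.

```python
import re

# Illustrative keyword set -- a real linter knows the full dialect grammar.
KEYWORDS = {"select", "from", "where", "insert", "update", "delete", "join"}

def lint_keyword_casing(sql: str) -> list[str]:
    """Return one violation per keyword that is not written in uppercase."""
    violations = []
    for match in re.finditer(r"[A-Za-z_]+", sql):
        word = match.group()
        if word.lower() in KEYWORDS and word != word.upper():
            violations.append(
                f"col {match.start()}: keyword '{word}' should be uppercase"
            )
    return violations

print(lint_keyword_casing("select id FROM users where id = 1"))
```

Note what the rule never asks: what table `users` is, how big it is, or what the statement will do to it. That is the boundary between layer 1 and layer 2.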

Layer 2: semantic rules

Layer 2 adds rules that understand the database. A missing primary key, an ALTER TABLE that rewrites a large table synchronously, a DELETE with no WHERE clause, a foreign key column with no index: these are not syntax errors, but they will hurt in production. Example rules at this layer:

  • Require a primary key on every table, so rows are addressable and replication works.
  • Prohibit cascading deletes in foreign key constraints, so a DELETE on one row does not silently wipe dependent rows.
  • Require new check constraints to be added NOT VALID and validated in a separate step, so adding them does not hold a long table lock and unvalidated constraints do not linger.
  • Require an index on every foreign key column, so joins on the FK do not scan the entire child table.
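Two of the rules above can be sketched in a few lines of Python. A real review engine parses the statement against the dialect grammar and the live schema; the regexes here are a deliberately simplified illustration of what "database-aware" means.

```python
import re

def check_delete_without_where(sql: str) -> list[str]:
    """Flag any DELETE statement that has no WHERE clause (simplified parse)."""
    errors = []
    for stmt in sql.split(";"):
        s = stmt.strip()
        if re.match(r"(?i)^DELETE\s+FROM\b", s) and not re.search(r"(?i)\bWHERE\b", s):
            errors.append(f"DELETE without WHERE: {s!r}")
    return errors

def check_table_has_primary_key(create_stmt: str) -> list[str]:
    """Flag a CREATE TABLE that declares no PRIMARY KEY (simplified parse)."""
    if re.match(r"(?i)^CREATE\s+TABLE\b", create_stmt.strip()) and \
       not re.search(r"(?i)\bPRIMARY\s+KEY\b", create_stmt):
        return ["CREATE TABLE without PRIMARY KEY"]
    return []

print(check_delete_without_where("DELETE FROM logs; DELETE FROM users WHERE id = 1"))
print(check_table_has_primary_key("CREATE TABLE t (id INT)"))
```

Both statements are syntactically valid, so layer 1 passes them; only a rule that understands what the SQL does will stop them.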

SQLFluff does not cover these. SonarQube covers a handful. Database-native review tools (Liquibase Quality Checks, Bytebase SQL Review, Flyway Teams SQL checks) cover dozens to hundreds.

Layer 3: policy and approval

Layer 3 is where SQL review becomes governance. The layer decides not only whether the SQL is clean, but whether the person can apply it, to which environment, under what approval. A rule that warns in dev may block in prod. A change touching a PII-classified table may require DBA sign-off regardless of the SQL itself. Exemptions are logged with the justification. This is the layer where static linters stop.
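The shape of layer 3 can be sketched as a policy lookup: the same rule resolves to a different severity per environment, and some environments attach an approval role on top. The rule names, role names, and table layout below are illustrative assumptions, not any vendor's actual configuration schema.

```python
# Per-environment severity for each rule: the rule is shared,
# the policy is not.
POLICY = {
    "dev":  {"delete-without-where": "warn",  "drop-column": "warn"},
    "prod": {"delete-without-where": "error", "drop-column": "error"},
}
APPROVAL = {"prod": "DBA"}  # prod changes additionally route to a DBA

def resolve(rule: str, env: str) -> dict:
    """Resolve a rule to its severity and required approver for one environment."""
    return {
        "severity": POLICY.get(env, {}).get(rule, "off"),
        "approver": APPROVAL.get(env),
    }

print(resolve("drop-column", "dev"))   # warns, no approver required
print(resolve("drop-column", "prod"))  # blocks and routes to a DBA
```

The point of the sketch is the lookup key: severity is a function of (rule, environment), not of the rule alone, which is exactly what a static linter cannot express.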

Static linters vs review platforms vs DBA review

Not every tool called a "SQL review tool" does the same work. The practical differences show up in where the review runs, what rules it can enforce, whether policy is per-environment, and whether human approval lives inside the review or has to be stitched in from the pull request.

| Tool | Rule count | Per-env policy | Human approval in the review | Where review runs |
| --- | --- | --- | --- | --- |
| SQLFluff | ~30 style rules | No | No | CLI · CI · Python lib |
| SonarQube | ~20 SQL rules | No | No | CI · IDE · REST API |
| Liquibase Quality Checks | 60+ (Pro) | Partial | No (PR gate external) | CLI · CI · Java lib |
| Flyway Teams SQL checks | 74+ (SQLFluff + Redgate rules) | No | No (PR gate external) | CLI · CI · Java lib |
| Redgate SQL Prompt | Linting only | No | No | IDE |
| Bytebase | 200+ (all tiers) | Yes | Yes (in change issue) | GUI · CI · REST API |
| Manual DBA review | Varies | Ad hoc | Yes | Email / Slack |

Two columns in this table matter more than they might appear. Per-environment policy is where governance starts: a linter that applies the same rule in dev and prod forces teams to pick one setting, either lax enough that nothing ever blocks or strict enough that dev becomes painful. Per-environment policy lets the same rule warn in dev and block in prod, which is what teams actually need.

"Human approval in the review" is where the approval flow lives. In most stacks the approver reads a PR diff in GitHub while the SQL review results sit in a CI log or a separate console. Seeing the rule result and the approval decision in the same pane is a different UX, and it is the one that changes whether the approver notices the violations.

How Bytebase does SQL review

Bytebase exposes the same rule engine through three paths: the GUI change issue, a CI pipeline check, and a REST API. The rules, the per-environment policy, and the audit trail are shared. The difference is where the developer or connected tool submits the SQL.

Change issue (GUI). The primary path. A developer opens an issue for the target database, pastes or uploads the SQL, and submits. Rules run inline against the target environment's policy: warnings for style, errors for safety violations. The approver opens the same issue. Rule results, the SQL itself, the environment policy, and the approval button all sit in the same pane. The approver is reviewing the SQL and its rule output at the same time, which is the point of the design.

A concrete example: a developer submits ALTER TABLE users DROP COLUMN email. The rule engine flags the change as non-additive (destructive, because an active application may still reference the column). The issue routes to a DBA reviewer per the production environment's policy. The DBA opens the issue, sees the destructive-change warning alongside the SQL, checks the deployment plan, and rejects with a comment pointing at the application code still reading the column. One issue holds the SQL, the rule violation, the review decision, the approver, and the audit record. For the compliance side of that trail, see audit log for SOC 2.

CI / GitOps. For teams with a GitOps database workflow, the same rule engine runs as a check on pull requests in GitHub or GitLab. Violations show up as PR comments and a CI status. The trade-off against the GUI path: the approver is now a code reviewer reading the PR diff, and the rule output lives in the CI log, a click away. Setup details are in integrate SQL review into GitHub.

REST API. The same engine is also exposed via a REST API, language-agnostic, so calling it does not require running a JVM the way embedding Liquibase or Flyway does. This is how teams wire Bytebase into Jenkins shell steps, ServiceNow-driven approvals, or internal dashboards that are not covered by the native GitHub/GitLab integration.
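As an illustration of that wiring, the sketch below builds a review request from a shell-driven pipeline step in stdlib Python. The endpoint path, payload fields, and token header are hypothetical placeholders, not Bytebase's documented API; the vendor's API reference defines the real contract.

```python
import json
import urllib.request

def build_review_request(base_url: str, token: str, database: str, sql: str):
    """Construct (but do not send) a POST submitting SQL for review.

    The "/sql/review" path and the payload field names are hypothetical.
    """
    payload = {"database": database, "statement": sql}
    return urllib.request.Request(
        f"{base_url}/sql/review",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_review_request(
    "https://bytebase.internal/api", "TOKEN",
    "prod-users-db", "ALTER TABLE users DROP COLUMN email",
)
print(req.get_method(), req.full_url)
```

Because the engine is behind plain HTTP, the caller can be a Jenkins shell step, a ServiceNow workflow, or an internal dashboard; nothing in the pipeline needs a JVM.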

In the teams we work with, the reason Bytebase ends up in the stack is usually this: their CI already runs SQLFluff or a schema check, but they still need a human to sign off on production DDL. The automated check and the human approval normally live in separate tools and separate queues, and the audit trail gets stitched together after the fact. Consolidating them into one workflow is the differentiator they hired Bytebase for.

FAQ

What is SQL review? SQL review is the automated check of SQL changes against a set of rules (semantic, performance, and policy) before the SQL runs against a database. It sits between SQL linting (syntax and style only) and code review (a human reading a pull request). In mature implementations, SQL review also includes a human approval gate where the approver sees the rule results alongside the SQL.

Does SonarQube do SQL review? SonarQube lints SQL for code-quality issues at the source level. It does not know which environment the SQL will run in, does not gate on approval, and does not sit in the database change workflow. It is a different category of tool from a database-aware SQL review platform like Bytebase or Liquibase Quality Checks.

What rules does SQL review check? Rule categories across major SQL review tools fall into five groups: syntax and style (casing, forbidden functions), naming (table/column conventions), safety (non-additive schema changes, DELETE without WHERE), performance (missing indexes, inefficient joins, large synchronous ALTER TABLE), and schema compatibility (breaking changes, FK integrity). Rule counts range from ~30 in SQLFluff to 200+ in Bytebase. Not every tool covers every category.

What's the difference between SQL linting and SQL review? Linting is layer 1: syntax, style, and format. SQL review adds layer 2 (semantic rules that understand the database) and layer 3 (per-environment policy with approval gates). A linter can live in an IDE or a CLI. A full SQL review runs inside the change workflow because the rule results inform the approval decision.

Is SQL review the same as code review? No. Code review is a human reading a pull request to evaluate design and correctness. SQL review is automated rule enforcement plus, in platform implementations, a human approval gate. The two are complementary: code review reads the SQL's intent, SQL review checks it against the rules and the production policy before it runs.
