Two weeks ago, the automated T-SQL to PostgreSQL converter behind ReportBridge was passing 11% of the queries in a multi-jurisdictional regulatory report library — 34 out of 307 reports compiled and ran against the target PostgreSQL. The other 273 failed somewhere between the prompt and the execution plan.
Today that number is 85.7%: 263 of 307. The remaining 44 reports are a mix of structural problems (recursive CTEs, cross-database references) that no amount of better prompting will solve on their own.
This is what it took to get there. None of the changes were individually heroic — but the order mattered, and the shape of the fallback matters even more.
Step 1: Diagnosing a low pass rate
When an AI-powered converter passes only 11%, the first instinct is to blame the model. It's usually wrong. Three instrumentation changes told us the real story:
- Failure taxonomy: every failed query was tagged with a category (syntax, name resolution, type mismatch, reserved word, function not found). 71% were name-resolution failures — the converter couldn't find a table or column in the target schema.
- Schema snapshot diff: we compared the target PostgreSQL schema the converter had cached against the live schema. There was a six-week drift.
- Prompt-level audit: we captured the exact prompt sent to the model for a sample of failures and reviewed what schema context was actually being passed.
The model wasn't hallucinating. It was faithfully translating against a schema that no longer existed. That turned out to be ~60% of the total gap.
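In practice the tagging step can be as simple as pattern-matching the error text PostgreSQL returns. A minimal sketch, with illustrative category names, error patterns, and helper name:

```javascript
// Bucket a PostgreSQL error message into a failure-taxonomy category.
// The matched strings are standard PostgreSQL error texts; the category
// names and classifyFailure itself are illustrative. Note that reserved-word
// collisions surface as plain syntax errors, so separating them out needs a
// second, identifier-level pass.
function classifyFailure(errorMessage) {
  const rules = [
    [/function .+ does not exist/i, "function-not-found"],
    [/(relation|column) .+ does not exist/i, "name-resolution"],
    [/(operator does not exist|cannot cast|invalid input syntax for type)/i, "type-mismatch"],
    [/syntax error at or near/i, "syntax"],
  ];
  for (const [pattern, category] of rules) {
    if (pattern.test(errorMessage)) return category;
  }
  return "uncategorized";
}
```

Run over a night's worth of failures, counts from this kind of bucketing are what pointed us at name resolution, and from there at the schema cache.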
Step 2: Fixing the schema cache
The converter had been pulling schema from a cached snapshot that only refreshed on manual trigger. Three migrations on the source side — a column rename, a table split into two tables, and a new view replacing a deprecated one — had never been reflected. Every report that touched those objects failed at conversion time.
```javascript
// Before: cached once, never invalidated
const schema = await readCachedSchema(targetDbId);

// After: TTL-bounded cache with explicit invalidation hooks
const schema = await getSchema(targetDbId, {
  maxAgeMs: 30 * 60 * 1000,
  onMigrationEvent: invalidateSchemaCache,
});
```

The fix was small: a 30-minute TTL plus an invalidation hook fired by the migration runner. Once the next conversion run picked up the fresh schema, the pass rate jumped from 11% to 48% in a single deploy.
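The getSchema call can be a thin wrapper over a map of timestamped entries. A sketch of that shape, assuming a fetchLiveSchema helper for the actual schema read (the Map-based store and option names here are illustrative):

```javascript
// Minimal TTL-bounded schema cache. fetchLiveSchema stands in for the real
// information_schema read on the target database.
const schemaCache = new Map(); // dbId -> { schema, fetchedAt }

async function getSchema(dbId, { maxAgeMs, fetchLiveSchema }) {
  const entry = schemaCache.get(dbId);
  if (entry && Date.now() - entry.fetchedAt < maxAgeMs) {
    return entry.schema; // still fresh: serve the cached snapshot
  }
  const schema = await fetchLiveSchema(dbId);
  schemaCache.set(dbId, { schema, fetchedAt: Date.now() });
  return schema;
}

// Invalidation hook for the migration runner: drop the entry so the next
// conversion run re-reads the live schema immediately instead of waiting
// out the TTL.
function invalidateSchemaCache(dbId) {
  schemaCache.delete(dbId);
}
```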
Step 3: Reserved-word collisions
PostgreSQL reserves a different set of keywords than SQL Server, and decades-old T-SQL schemas are full of identifiers that only became a problem on the PostgreSQL side. Columns named window, offset, or limit are perfectly legal in SQL Server and fail instantly in PostgreSQL unless quoted.

T-SQL (works)

```sql
SELECT window, offset, limit
FROM records
WHERE offset > 0
```

PostgreSQL after conversion (fails)

```sql
-- ERROR: syntax error at or near "window"
SELECT window, offset, limit
FROM records
WHERE offset > 0

-- Needs:
SELECT "window", "offset", "limit"
FROM records
WHERE "offset" > 0
```

We added a reserved-word scrub pass that runs after the model output: any identifier that collides with a PostgreSQL reserved word gets double-quoted. The scrub is deterministic, which matters: asking the model to do it probabilistically introduced occasional errors when the reserved word appeared inside string literals or aliases. Doing it in code fixed ~12% of previously-failing reports.
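The scrub pass reduces to a set lookup once identifiers have been isolated. A minimal sketch, assuming the converter's parser supplies identifiers one at a time (quoteIfReserved and the truncated word list are illustrative):

```javascript
// Deterministic reserved-word scrub, reduced to its core decision. Assumes
// the converter's parser hands identifiers over individually; running a bare
// regex over raw SQL instead would also quote genuine keywords, such as the
// ORDER in ORDER BY. The word list is truncated for illustration; the real
// pass uses PostgreSQL's full reserved-word list.
const PG_RESERVED = new Set([
  "user", "order", "desc", "grant", "limit", "offset", "window",
]);

function quoteIfReserved(identifier) {
  return PG_RESERVED.has(identifier.toLowerCase())
    ? `"${identifier}"`
    : identifier;
}
```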
Step 4: Stripped TOP 1 and the silent bug
The worst class of failure isn't the one that throws an error at conversion time. It's the one that compiles, passes a smoke test on small data, and breaks only in production. This is the "stripped TOP 1" bug:
T-SQL

```sql
SELECT e.name,
       LastReview = (SELECT TOP 1 r.review_date
                     FROM reviews r
                     WHERE r.emp_id = e.id
                     ORDER BY r.review_date DESC)
FROM employees e
```

Naive conversion (compiles, fails on real data)
```sql
SELECT e.name,
       (SELECT r.review_date
        FROM reviews r
        WHERE r.emp_id = e.id
        ORDER BY r.review_date DESC) AS last_review
FROM employees e
-- Missing LIMIT 1: fine while every employee has at most one review,
-- then PostgreSQL throws at runtime:
-- "more than one row returned by a subquery used as an expression"
```

The TOP 1 was stripped during the T-SQL-to-ANSI normalization step, but the compensating LIMIT 1 wasn't added. We caught this during validation only because the pipeline now runs every converted query through a row-count sanity check. Without that, a perfectly compiling report could sail through a small-data smoke test and blow up on production data. This is the bug that convinced us validation matters more than conversion.
Step 5: The correction tools that make 85.7% enough
A high pass rate is not the goal of an automated conversion pipeline. The goal is a shipped report library. To close the gap between "automated" and "shipped" we built five manual-correction tools that live inside the same admin UI:
- Editable T-SQL and PostgreSQL panes: the original T-SQL and the AI-converted PostgreSQL are both editable in place. You can tweak the conversion directly without triggering a full re-run.
- RDL XML viewer: sometimes the problem is the RDL itself — a parameter with a hardcoded default that references a retired lookup. The RDL pane lets you read the XML inline without jumping to a separate tool.
- Fix-hint panel: for known failure categories, the UI surfaces a one-sentence suggestion. "This query uses a recursive CTE; PostgreSQL requires the RECURSIVE keyword." Not generated — pattern-matched against the failure taxonomy from Step 1.
- Deterministic reserved-word scrub: the same post-processing pass from Step 3, exposed as a one-click button so users can apply it after a manual edit.
- History revert: every save creates a version. If a manual edit breaks something that worked, one click restores the previous state. This sounds obvious — but it's what makes people willing to experiment without fear.
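The fix-hint panel behind the third tool is little more than a dictionary keyed by failure category. A sketch, with hypothetical category keys and hint texts:

```javascript
// Fix-hint lookup: canned, pattern-matched suggestions keyed by the failure
// taxonomy from Step 1. Nothing is generated at request time; the keys and
// hint strings here are examples.
const FIX_HINTS = {
  "recursive-cte":
    "This query uses a recursive CTE; PostgreSQL requires the RECURSIVE keyword.",
  "reserved-word":
    "An identifier collides with a PostgreSQL reserved word; apply the scrub pass or quote it manually.",
  "name-resolution":
    "A referenced table or column is missing from the target schema; check for renames in recent migrations.",
};

function fixHint(category) {
  return FIX_HINTS[category] ?? null; // null: no canned hint, show nothing
}
```

Keeping the hints static means they can be reviewed like any other copy, and a bad hint can never be worse than no hint.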
The economics are the point. At 11% automated, a migration team has to manually rewrite 89% of the library — that's weeks of work. At 86% automated with good correction tools, the team hand-finishes 14% in an afternoon. That's the difference between a migration that ships and one that stalls.
What I'd tell myself at 11%
- Instrument the failure mode before tuning the prompt. A failure taxonomy pointed us at schema drift in an hour; prompt engineering would have taken weeks to stumble across it.
- Deterministic post-processing beats probabilistic prompting for well-defined transformations. Reserved-word scrub is the canonical example.
- Invest in the correction UI before you invest in squeezing the last 5% out of the model. You'll need the UI anyway, and it makes every other number look better.
- Measure pass rate against a real report library, not a synthetic benchmark. The shape of real T-SQL in production — decades of accumulated patterns, reserved words from back when they weren't reserved, and dialect mixing — is unlike any benchmark.
ReportBridge's converter ships these fixes and correction tools by default. If you're evaluating tools for an SSRS migration of your own and your current approach is anywhere near the 11% we started at, I'd love to hear about the failure categories you're seeing; they're often domain-specific, and the taxonomy benefits everyone.
See the conversion pipeline on your own reports
Upload an RDL, watch the pipeline convert, and step through any failures in the manual-correction UI. No commitment, no sales call.
Try ReportBridge

FAQ
What does a T-SQL to PostgreSQL conversion pass rate actually measure?
Pass rate in the ReportBridge pipeline means the converted PostgreSQL query compiles AND executes successfully against the target database AND returns a non-error result. It does not yet measure row-for-row equivalence with the original report — that comes in a later validation stage. Starting at 11% means 89% of converted queries failed before producing any output at all.
Why was the pass rate so low to begin with?
The single largest contributor was a stale schema cache: the converter was translating queries against an older snapshot of the target schema, so references to renamed columns and retired tables were getting emitted verbatim. Three data migrations on the source side had never been replayed into the converter's cached schema. Reserved-word collisions and stripped subquery aliases made up most of the remaining gap.
Is 85.7% good enough to ship?
For automated conversion, yes — but only if there's a productive path for the remaining failures. We shipped five manual-correction tools alongside the converter fixes so a human can inspect the failed query, edit T-SQL or PostgreSQL directly, view the RDL XML, revert to an earlier version, and apply targeted fix hints. Automation takes care of the 86%; the correction UI makes the remaining 14% a ten-minute task instead of a full-day one.
What kind of reports are still in the 14% that requires manual correction?
Almost all the remaining failures fall into three buckets: recursive CTEs where the converter loses the RECURSIVE keyword, cross-database stored procedure calls that reference objects in databases that don't exist in the target, and legacy reports that use undocumented T-SQL features like ROWVERSION comparisons or deprecated xp_* extended procedures. None of these are solvable with better prompting alone — they require either migration work on the source side or targeted manual edits.