Three days ago, ReportBridge's T-SQL to PostgreSQL conversion pipeline hit a 95.8% effective pass rate on a 190-RDL full-corpus baseline (nearly 200 production reports from a multi-jurisdictional regulatory program). That's a number we spent weeks getting to. It's also the number where we stopped trying to push it higher with AI and started building the editor.
The remaining 4-5% gap is genuinely heterogeneous. Hallucinated columns the model can't recover from even with an explicit "here are the actual columns" retry. Niche operator-mismatch cases. Single-file edge bugs that don't cluster into a pattern. The AI does what AIs are good at — pattern matching across thousands of examples — and stops where the pattern thins out. The path forward from there isn't a bigger model. It's giving the human in the loop better tools.
Companion reading: the triage post covers the methodology for deciding which residuals are worth fixing in code vs deferring to the editor; the recursive-CTE post covers some of the deterministic scrub work that got the pipeline to ~90% in the first place. This post is what came after, when scrubs and prompts ran out of leverage.
1. Live EXPLAIN feedback
The previous loop was: edit SQL, click Save, click Test, wait ~5 seconds, read the error, edit again. About a minute per iteration. Now the editor runs a debounced EXPLAIN check 800ms after the last keystroke and shows a small green check or red error line right under the SQL pane. Same backend call, same database round trip — just no save, no test-history pollution, no manual click.
One technical detail: each dataset keeps its own request sequence counter, so out-of-order responses get dropped. If the user is typing fast, only the last debounce fires; in-flight requests from earlier keystrokes don't race the latest one. That avoids the "wait, why does it say PASS?" flicker.
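For the curious, here's a minimal sketch of that debounce-plus-sequence pattern. The explainCheck and render callbacks are illustrative placeholders, not the actual ReportBridge code:

```typescript
// Hypothetical sketch: debounced EXPLAIN check with a per-dataset sequence
// counter so stale responses never overwrite the latest result.
type ExplainResult = { ok: boolean; error?: string };

const timers = new Map<string, ReturnType<typeof setTimeout>>();
const seq = new Map<string, number>();

function onSqlEdited(
  datasetId: string,
  sql: string,
  explainCheck: (datasetId: string, sql: string) => Promise<ExplainResult>, // assumed backend call
  render: (datasetId: string, result: ExplainResult) => void,
) {
  // Restart the 800ms debounce window on every keystroke.
  clearTimeout(timers.get(datasetId));
  timers.set(datasetId, setTimeout(async () => {
    const mySeq = (seq.get(datasetId) ?? 0) + 1;
    seq.set(datasetId, mySeq);
    const result = await explainCheck(datasetId, sql);
    // Drop out-of-order responses: only the newest request may paint the UI.
    if (seq.get(datasetId) === mySeq) render(datasetId, result);
  }, 800));
}
```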
2. AI-Fix history + editable hint round trip
Every dataset has a per-row fix_hint field that survives regeneration. The user writes one line ("the column is testdatetime not keydatetime", "use COALESCE for the join") and the next AI-Fix call carries that hint to the model. The hint already existed in storage; the round trip was what was broken: the user had to save the hint, navigate elsewhere, click Re-run, wait, then navigate back.
Now there's a single "Save Hint & Re-run AI Fix" button right next to the hint input. The AI-Fix history is also surfaced as an inline expandable list with each round's full error, not just a count badge with a tooltip. Combined: a 5-minute round trip becomes a 30-second one. Steve's directive after we shipped this was "this is the obvious shape of how the editor should work" — which it is, in retrospect.
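In spirit, the button just collapses two existing calls into one handler. A hedged sketch, with hypothetical saveFixHint and runAiFix endpoint names:

```typescript
// Hypothetical sketch: the "Save Hint & Re-run AI Fix" button saves the hint
// and immediately triggers the next AI-Fix attempt. Endpoint names are illustrative.
async function saveHintAndRerun(
  datasetId: string,
  hint: string,
  api: {
    saveFixHint: (datasetId: string, hint: string) => Promise<void>;
    runAiFix: (datasetId: string) => Promise<{ round: number; passed: boolean; error?: string }>;
  },
) {
  await api.saveFixHint(datasetId, hint); // persist the per-row fix_hint
  return api.runAiFix(datasetId);         // the next AI-Fix call carries the hint
}
```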
3. Schema reference
The hallucinated-column class is the single largest source of residual failures. The AI references ec.keydatetime when the actual column is ec.testdatetime. CONV-18 closes a chunk of these via an automatic retry-with-allowed-columns prompt, but the long tail still slips through.
The Schema Reference disclosure is the prevention layer. It's a lazy-loaded panel that shows, per dataset, the alias-to-table mapping pulled from the SQL plus every column on each table. Click any column name to copy the qualified reference (ec.testdatetime) to the clipboard. It doesn't fix anything automatically, but it turns "wait, what columns does this table actually have?" from a 30-second multi-tab lookup into a single click. Most editing speed gains come from removing this kind of context switch, not from cleverer algorithms.
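A rough sketch of how an alias-to-table map can be pulled from the SQL. This is a simplification; the real panel has to cope with quoted identifiers, schemas, and subqueries:

```typescript
// Hypothetical sketch: derive an alias -> table map from FROM/JOIN clauses.
function aliasMap(sql: string): Record<string, string> {
  const map: Record<string, string> = {};
  // Matches "FROM schema.table alias" and "JOIN table AS alias" style clauses.
  const re = /\b(?:FROM|JOIN)\s+([\w.]+)\s+(?:AS\s+)?(\w+)\b/gi;
  for (const m of sql.matchAll(re)) {
    const [, table, alias] = m;
    // Skip keywords that can follow a table name when no alias is present.
    if (!/^(ON|WHERE|GROUP|ORDER|LEFT|RIGHT|INNER|OUTER|JOIN|CROSS)$/i.test(alias)) {
      map[alias.toLowerCase()] = table.toLowerCase();
    }
  }
  return map;
}

// Example: aliasMap("FROM events ec JOIN cases c ON c.id = ec.case_id")
// -> { ec: "events", c: "cases" }
```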
4. Side-by-side T-SQL ↔ PostgreSQL diff
The editor already showed both T-SQL (source) and PostgreSQL (converted) panes — but as plain text. There was no fast way to see what the AI actually changed. The diff view adds line-level annotations: same / changed / added / removed, with a small count summary at the top.
The diff itself is a tiny LCS-based line algorithm with two niceties: whitespace and case differences are normalized away (so cosmetic-only changes don't mark as "changed"), and adjacent removed/added lines that share more than 50% of tokens get paired as a single "changed" line. Catches the "AI rewrote this CASE statement when it didn't need to" class of issue at a glance.
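A compressed sketch of that shape, assuming plain line splitting and the thresholds described above; the production version surely differs in detail:

```typescript
type DiffLine = { kind: "same" | "changed" | "added" | "removed"; left?: string; right?: string };

// Normalize away whitespace and case so cosmetic differences don't register as "changed".
const norm = (s: string) => s.trim().replace(/\s+/g, " ").toLowerCase();

// Pair a removed/added line as "changed" when they share more than 50% of tokens.
function similar(a: string, b: string): boolean {
  const ta = new Set(norm(a).split(" "));
  const tb = new Set(norm(b).split(" "));
  const common = [...ta].filter((t) => tb.has(t)).length;
  return common / Math.max(ta.size, tb.size) > 0.5;
}

function diffLines(left: string[], right: string[]): DiffLine[] {
  const n = left.length, m = right.length;
  // Standard LCS table over normalized lines.
  const lcs = Array.from({ length: n + 1 }, () => new Array<number>(m + 1).fill(0));
  for (let i = n - 1; i >= 0; i--)
    for (let j = m - 1; j >= 0; j--)
      lcs[i][j] = norm(left[i]) === norm(right[j])
        ? lcs[i + 1][j + 1] + 1
        : Math.max(lcs[i + 1][j], lcs[i][j + 1]);

  // Walk the table to classify each line.
  const out: DiffLine[] = [];
  let i = 0, j = 0;
  while (i < n && j < m) {
    if (norm(left[i]) === norm(right[j])) out.push({ kind: "same", left: left[i++], right: right[j++] });
    else if (lcs[i + 1][j] >= lcs[i][j + 1]) out.push({ kind: "removed", left: left[i++] });
    else out.push({ kind: "added", right: right[j++] });
  }
  while (i < n) out.push({ kind: "removed", left: left[i++] });
  while (j < m) out.push({ kind: "added", right: right[j++] });

  // Collapse adjacent removed/added pairs that look like edits of the same line.
  for (let k = 0; k < out.length - 1; k++) {
    if (out[k].kind === "removed" && out[k + 1].kind === "added" && similar(out[k].left!, out[k + 1].right!)) {
      out.splice(k, 2, { kind: "changed", left: out[k].left, right: out[k + 1].right });
    }
  }
  return out;
}
```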
5. Test Preview (parameter-binding visibility)
Reports from SSRS use @Param placeholders. Bold Reports passes them as text. PostgreSQL needs them cast to typed values for any non-trivial comparison. The platform does the substitution automatically in the test path — but the substitution was invisible. User clicks Test, sees a pass or fail, but doesn't know what their @StartDate became before EXPLAIN ran.
The Test Preview disclosure shows the actual SQL that EXPLAIN runs, with the parameter substitution applied. Each substitution is annotated with its mode: typed CAST (when the parameter has a registered data type) or heuristic (when the platform infers from the parameter name). It catches the "test passes but production fails" class, which usually means the heuristic substitution chose differently than the production parameter binding will.
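A hedged sketch of the two substitution modes. The actual type registry and name heuristics live server-side; everything here is illustrative:

```typescript
// Hypothetical sketch: substitute @Params into the preview SQL, annotating
// whether each substitution used a registered type (typed CAST) or a guess.
type ParamInfo = { value: string; pgType?: string }; // pgType present when the type is registered

function substituteParams(sql: string, params: Record<string, ParamInfo>) {
  const notes: string[] = [];
  const preview = sql.replace(/@(\w+)/g, (match, name: string) => {
    const p = params[name];
    if (!p) return match; // leave unknown parameters untouched
    if (p.pgType) {
      notes.push(`@${name}: typed CAST to ${p.pgType}`);
      return `CAST('${p.value}' AS ${p.pgType})`;
    }
    // Heuristic mode: infer a type from the parameter name, e.g. *StartDate -> timestamp.
    // (The real path would also need to escape the literal value.)
    const guessed = /date|time/i.test(name) ? "timestamp" : "text";
    notes.push(`@${name}: heuristic ${guessed}`);
    return `CAST('${p.value}' AS ${guessed})`;
  });
  return { preview, notes };
}
```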
6. Snippet library
Twelve curated patterns from the CONV-N work, each one click to copy: PG reserved-word quoting, RECURSIVE keyword for self-referencing CTEs, UNION text-cast direction (always cast to text; never cast text to int), ISNULL → COALESCE, TOP → LIMIT, CROSS APPLY → LATERAL JOIN, ROUND with explicit ::NUMERIC cast, string concatenation with ||, type-aware @Param casts, and a few others. Things the AI mostly knows but applies inconsistently.
Pure client-side; no backend dependency. The snippet content is in src/lib/sqlSnippets.js as a flat array — easy to extend as new patterns emerge from customer use. The category grouping is for browseability; the descriptions are the knowledge transfer.
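The entry shape is roughly what you'd expect. A hedged sketch (field names are illustrative, not the actual sqlSnippets.js schema):

```typescript
// Hypothetical shape of one snippet entry; the real sqlSnippets.js schema may differ.
interface SqlSnippet {
  id: string;
  category: string;     // grouping for browseability only
  description: string;  // the knowledge-transfer part
  sql: string;          // the one-click-to-copy pattern
}

const snippets: SqlSnippet[] = [
  {
    id: "isnull-to-coalesce",
    category: "Functions",
    description: "T-SQL ISNULL(a, b) becomes COALESCE(a, b) in PostgreSQL.",
    sql: "COALESCE(column_name, fallback_value)",
  },
  {
    id: "top-to-limit",
    category: "Row limiting",
    description: "SELECT TOP 10 ... becomes an appended LIMIT 10 in PostgreSQL.",
    sql: "SELECT ... LIMIT 10",
  },
];
```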
How the six fit together
They stack vertically as expandable disclosures inside the existing per-dataset editor card. Default state is collapsed; user opens what they need. None of them affect the existing save, test, or publish paths — every disclosure fails open if its data isn't available.
The architecture is AI-first, editor-second. Bedrock serves every conversion call as the primary AI, with Anthropic as the failover. The editor extensions only matter when the AI's output needs human review — which, on a real customer report library, is the residual ~5%. For the 95% the AI handles cleanly, the user never opens an editor disclosure at all.
The deeper takeaway from building this arc: every percentage point of residual after the obvious AI work is a different shape, and shapes don't have a unified solution. They have unified-enough tools — fast feedback, ground truth in view, easy correction patterns. Build those, then trust the human + AI pair to find the edge cases.
What this means for SSRS migrations
If your team is sizing an SSRS-to-PostgreSQL migration:
- The honest expectation is that the AI gets you to 90-95% automated conversion on a real-world report library. That last 5-10% is human judgment, period.
- Without an editor that closes the loop — fast EXPLAIN, schema reference, AI-fix round trip, snippet library — the long tail eats your migration timeline. Manual SQL fixes without supportive tooling are 5-10 minutes each. With tooling, they're 30-60 seconds each.
- On nearly 200 reports, that's the difference between a two-week migration and a two-month one. Same AI, same SQL — different time to production.
FAQ
Why six and not one big AI improvement?
We tried the big AI improvement path first. The CONV-1 through CONV-19 prompt and scrub stack moved the pipeline from 11% to 94.5% effective pass rate over a few weeks. That's where the AI-side ROI inverted: each CONV-N item closed a smaller and smaller class. The remaining ~5% turned out to be genuinely heterogeneous — hallucinated columns the model can't recover from even with explicit allowed-columns retry, niche operator-mismatch cases, single-file edge bugs. Different problem shapes need different tools, so we went wide on editor affordances instead of deep on more prompts. Six small extensions beat one heroic prompt rewrite for the long tail.
Why live EXPLAIN feedback specifically?
The previous loop was: edit SQL, click Save, click Test, wait for the result, read the error, edit again. About 60 seconds per iteration. Live EXPLAIN runs a debounced check (800ms after the last keystroke) against the customer's database without saving the SQL or polluting test history. The user sees 'column ec.keydatetime does not exist' appear under the editor 1 second after the typo, not 60 seconds later. Same call, same engine — just a tighter loop. That tightening is more productive than any single prompt fix because it converts every manual edit into a fast feedback cycle.
How does the AI-Fix-with-hint round trip work?
Every dataset already stored a per-row fix_hint that survives regeneration: text the user could write to steer the next AI-Fix attempt ('the column is testdatetime not keydatetime', 'use COALESCE for the join'). The hint existed but the round trip was broken: the user had to save the hint, navigate away, click Re-run AI Fix, wait, read the result, navigate back. Now there's a single 'Save Hint & Re-run AI Fix' button right next to the hint input. The AI-Fix history is also surfaced inline so the user can see what r1 and r2 actually returned before deciding what to hint. The combined flow turns a 5-minute round trip into a 30-second one.
Are these AI-replacement features or AI-augmentation features?
Augmentation. Every editor affordance feeds back into the AI's success rate when used. Schema reference helps the user write a hint that the AI then understands. Live EXPLAIN catches mistakes early so the AI isn't asked to fix something the user could have caught by typing carefully. Side-by-side diff makes wrong-conversion failures visible so the user can hint the AI about what to preserve. The snippet library is mostly things the AI already knows but applies inconsistently — having them one click away means the user can apply the right pattern when the AI doesn't. The architecture stays AI-first; the editor extensions are how a human power-user closes the residual gap that bigger models or longer prompts can't.
ReportBridge ships these editor extensions in v1.9.0 (April 2026). Every affordance described in this post is in the admin app today. Schema reference, live EXPLAIN, AI-Fix round trip, side-by-side diff, test preview, snippet library — all one click away inside each per-dataset card.
Try ReportBridge for your migration