Discussion about this post

Neural Foundry:

The fundamental problem you nailed is that LLMs learned SQL from Stack Overflow's greatest hits—not from versioned docs. That's why every schema looks like it's from 2015. The MCP approach is clever because it solves the retrieval problem: instead of hoping the model memorized PG17 features during pretraining, you're just giving it the right manual at query time. The 420% increase in indexes alone tells me most codegen tools are basically ignoring performance entirely.
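To make the "right manual at query time" idea concrete, here is a minimal sketch of what such a docs-retrieval tool could look like, assuming the Python MCP SDK's FastMCP interface. The tool name, the `PG_DOCS` lookup table, and its contents are illustrative stand-ins, not the actual server described in the post.

```python
# Minimal sketch: an MCP tool that serves version-specific PostgreSQL doc
# excerpts, so the model can consult the manual instead of relying on what
# it memorized during pretraining. Assumes the Python MCP SDK (FastMCP).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pg-docs")

# Toy in-memory "manual": version -> topic -> excerpt. A real server would
# back this with the actual versioned PostgreSQL documentation.
PG_DOCS = {
    "17": {
        "identity columns": "Prefer GENERATED ALWAYS AS IDENTITY over serial.",
        "merge": "MERGE supports a RETURNING clause as of PostgreSQL 17.",
    },
}

@mcp.tool()
def lookup_pg_docs(topic: str, version: str = "17") -> str:
    """Return the manual excerpt for `topic` in the given PostgreSQL version."""
    excerpt = PG_DOCS.get(version, {}).get(topic.lower())
    return excerpt or f"No entry for '{topic}' in PostgreSQL {version} docs."

if __name__ == "__main__":
    # stdio transport by default; the client invokes the tool at query time.
    mcp.run()
```

The point of the pattern is that the retrieval step happens per query, against docs pinned to the target version, rather than depending on whatever SQL the model absorbed from training data.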
