My sentiments exactly. Anyone at the low end of the scale thinking about MS SQL should seriously do a current survey of the DBMS space. There is absolutely no need to pay for a DBMS in 2026. Those old dinosaurs only still exist because of the data-hijacking nature of past DB designs and coding. Everybody and their grandmother was obfuscating code and designs in order to bake in customer loyalty and repeat patronage. Those old projects are keeping the lights on at proprietary DB Inc. At the high end of things, you're gonna need DB engineers, and if you get yourself Microsoftie hammersharks disguised as professional engineers, they're gonna see everything as a nail.
It really is a good database. Give it lots of room. If you can distribute your workload across multiple machines, though, you can't beat Postgres' licensing terms vs SQL Server's.
Why is it a good database? Integration with Entra? I've heard arguments in favor of Oracle DB, but I've never heard anything good about MSSQL besides integration with the MS ecosystem.
I love Postgres and use it for _everything_. I've also used SQL Server for a couple of years.
I've lost count of the number of times I've read about some new Postgres or MySQL feature, only to find out that Oracle or SQL Server implemented it 20 years ago. Yes, they always put it behind expensive SKUs. But they're hardly slouches in the technical competence department.
I found Oracle to just be a lot more unwieldy from a tooling perspective than SQL Server (which IMO had excellent tools like SSMS and the query planner/profiler to do all your DB management).
But overall, these paid databases have been very technically sound and were solving some of these problems many, many years ago. It's still nice to see the rest of us benefit from these features in free databases nowadays.
As others have said, the query planners I used 25 years ago with Oracle (cost based, rule based, etc.) were amazing. The Oracle one wasn't visual, but the MSSQL one was totally visual and actually gave you a whole graph of how the query was assembled. And I last used the MSSQL one 15 years ago.
Maybe pgAdmin does that now (I haven't used pgAdmin), but I miss the polished tools that came with SQL Server.
The SQL Server query planner is head and shoulders above what Postgres offers in the types of optimizations it will apply to your queries. It also properly caches query plans.
It offers heap tables as well as index-organized tables, depending on what you need.
The protocol supports running multiple queries and getting multiple resultsets back at once, saving some round-trips and resources.
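For example, here's a rough sketch of that batching from Python with pyodbc; the driver string, table names, and queries below are made up for illustration:

    import pyodbc

    # Connection string is illustrative; adjust driver/server/credentials
    # to your environment.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=localhost;DATABASE=demo;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # One batch, one round-trip: the server streams back two resultsets.
    cur.execute("SELECT id, name FROM customers; SELECT COUNT(*) FROM orders;")

    customers = cur.fetchall()       # consume the first resultset
    cur.nextset()                    # advance to the second resultset
    order_count = cur.fetchone()[0]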
Also supports things like global temp tables and in-memory tables, which are helpful for some use cases.
The parallelism story for a single query is still stronger with SQL Server.
I'm sure I could think of more, but it's been a few years since I've used it myself and I've forgotten a bit.
It is a good database. I just wouldn't use it for my startup. I could never justify the license cost, or the way the cost and license terms restrict how you design your infrastructure.
That’s kind of my point. They’re not really in competition. I bet they’d have an easier time with this scale if they were on SQL Server, but obviously that migration isn’t happening and startups don’t reach for it for many reasons.
I had never written an iOS app until a couple months ago and was initially very put off when I hit the same wall. The alternative is to host on a cheap VPS and find some way to prevent other people from using your app. When you cost it out, it's close enough to the 100 bucks a year for the Apple account. However, the kicker for me is the sideloading process. Way too much headache compared to a deploy script that has my changes running nearly instantly.
Yep, as a manager, I explain this conundrum often. You can be a rockstar SDE 2 or senior, but not be ready for a promotion because you aren’t leading enough.
I have a feeling most of these folks are talking about personal projects or work on relatively small products. I have a good number of personal projects that I haven’t written a line of code for. After bootstrapping an MVP, I can almost entirely drive by having Claude pick up GitHub issues. They’re small codebases though.
My day job is mostly gigantic codebases that still seem to choke the best models. Also, there’s zero way I’d be allowed to Tailscale into my work computer from my phone.
It’s a fast-moving field. People aren’t coming up with new ideas to be performative. They see issues with the state of the art and make something that may or may not advance things. MCP is huge for getting agents to do things in the “real world”. However, it’s costly! Skills are a cheap way to fill that gap for many cases. People are finding immediate value in both of these. Try not to be so pessimistic.
It's not pessimism, but actual compatibility issues,
like the Deno vs npm package ecosystems that didn't work together for many years.
There are multiple intermixed and inconsistent concepts out in the wild: AGENTS.md vs CLAUDE.md vs .github/instructions; skills vs commands; and so on.
When I work on a project, do all the files align? If I work in an org where developers have agent choice, how many of these instruction and skill "distros" do I need to put in (pollute?) my repo?
Skills have been really helpful on my team as we've been encoding tribal knowledge into something that other developers can easily take advantage of. For example, our backend architecture has some hidden patterns that, once encoded in a skill, can be followed by full-stack devs doing work there, saving a ton of time in coding and PR review.
We then hit the problem of how to best share these and keep them up to date, especially with multiple repositories. It led us to build sx - https://github.com/sleuth-io/sx, a package manager for AI tools.
While I do a lot of agentic development in personal projects at this point, at work it's super rare beyond quick lookups of things I should already know but can't be arsed to remember exactly (like writing a one-off SQL script that does batched mutations and similar).
I’m incredibly biased (I work at Microsoft) but I love Teams. It’s a great meeting app and a great chat app. It blows my mind that there are companies that have totally separate apps for each (Zoom/Slack).
It’s more incredible to me that Microsoft has different versions of Teams that don’t work with each other but are named the same thing, and that the home version of Teams that doesn’t work with enterprise Teams comes forcibly bundled with a Pro or Enterprise OS.
Teams is the only meeting app where I am usually late because it doesn't just let me join my meeting. Zoom will never lock up when you try to join a meeting because someone decided you need to reauthenticate regularly; Teams will.
This would be understandable if it happened quickly, but normally Teams has a seizure for a minute or two when you try to join the call, and then you get told to sign in. Whoever allowed this behavior to ship should be fired out of a cannon... when I click join a call, absolutely nothing should stop me from joining the call.
In fairness, this might not be explicitly Teams' fault. It's built on top of a terrible authentication platform, which also seems to be down at least four or five days a year. 365 is one of those things that could not exist if not for the incredible monopoly Microsoft has over Excel.
I definitely agree with you, but it’s probably a little apples and oranges. An MCP server is a one-stop shop for discovering “tools”. To leverage a CLI tool “from scratch”, your agent has to do a web search to find out if a CLI tool even exists, figure out how to install it, and then install it. Not saying those steps are impossible, but it’s way less automated and “deterministic” than what MCP provides.
I don't quite follow your meaning. Are you referring to an MCP registry of some kind that the agent would operate itself to discover and install new tools? I would say that's a separate concern from the tool form factor itself. Also, there are CLI-focused solutions for this as well (e.g. brew, npm).
> This separation also avoids the threading and memory-safety limitations that would arise from embedding DuckDB directly inside the Postgres process, which is designed around process isolation rather than multi-threaded execution. Moreover, it lets us interact with the query engine directly by connecting to it using standard Postgres clients.
- Separation of concerns: a single external process can share object store caches without complicated locking dances between multiple processes.
- Memory limits are easier to reason about with a single external process.
- Postgres backends end up being more robust, as you can restart the pgduck_server process separately.
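And since pgduck_server speaks the Postgres wire protocol, any standard client can talk to it, as the quoted passage notes. A minimal sketch in Python with psycopg; the host, port, and database name are placeholders, not the project's actual defaults:

    import psycopg

    # pgduck_server accepts ordinary Postgres connections, so a stock
    # client library works. Connection details below are placeholders;
    # use whatever address your instance listens on.
    with psycopg.connect("host=localhost port=5433 dbname=postgres") as conn:
        with conn.cursor() as cur:
            # The query runs on the DuckDB-backed engine directly.
            cur.execute("SELECT 42 AS answer")
            print(cur.fetchone()[0])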