Is this something that can happen? We just ran into this limitation and I really want to keep using pgvectorscale... am exploring other solutions on EKS but RDS would be so much easier. From my reading it seems like this isn't something we can get done as a single AWS customer though.
Yeah, I know what you mean. I used to roll my eyes every time someone said “agentic,” too. But after using Claude Code myself, and seeing how our best engineers build with it, I changed my mind. Agents aren’t hype, they’re genuinely useful, make us more productive, and honestly, fun to work with. I’ve learned to approach this with curiosity rather than skepticism.
We just launched a bunch around “Postgres for Agents” [0]:
forkable databases, an MCP server for Postgres (with semantic + full-text search over the PG docs), a new BM25 text search extension (pg_textsearch), pgvectorscale updates, and a free tier.
Thanks. We are already using the Timescale Postgres image for pgvectorscale, with some customizations around tsvector and GIN indexes, so it would be nice to have BM25 as well. Any specific reason this wasn't made open source from the get-go, or is it the usual phased approach by Timescale (now Tigerdata)? If it's going to stay closed, that's a worrying signal, as the same could happen with pgvectorscale development.
Anyway, I really appreciate the free offerings by Timescale. They make things easy.
That's interesting. Personally I did not find it vague and ambiguous.
ClickHouse was fast but required a lot of extra pieces for it to work:
Writing data to ClickHouse
Your service must generate logs in a clear format, using Cap'n Proto or Protocol Buffers. Logs should be written to a socket for logfwdr to transport to PDX, then to a Kafka topic. Use an inserter to read from Kafka, batching data to achieve a write rate of less than one batch per second.
Oh. That’s a lot. Including ClickHouse and the WARP client, we’re looking at five boxes to be added to the system diagram.
So it became clear that ClickHouse is a sports car and to get value out of it we had to bring it to a race track, shift into high gear, and drive it at top speed. But we didn’t need a race car — we needed a daily driver for short trips to a grocery store. For our initial launch, we didn’t need millions of inserts per second. We needed something easy to set up, reliable, familiar, and good enough to get us to market. A colleague suggested we just use PostgreSQL, quoting “it can be cranked up” to handle the load we were expecting. So, we took the leap!
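For what it's worth, the batching step in that quoted runbook — an inserter reading from Kafka and flushing to ClickHouse at under one batch per second — boils down to a small accumulate-and-flush loop. A minimal sketch of just that logic (the `Batcher` class and `flush` callback are hypothetical names I'm using for illustration; the actual Kafka consumer and ClickHouse client wiring are omitted):

```python
import time


class Batcher:
    """Accumulate rows and hand them to a flush callback at most once per
    flush_interval seconds, or when the batch reaches max_rows."""

    def __init__(self, flush, max_rows=10_000, flush_interval=1.0,
                 clock=time.monotonic):
        self.flush = flush                  # callback receiving a list of rows
        self.max_rows = max_rows
        self.flush_interval = flush_interval
        self.clock = clock                  # injectable for testing
        self.rows = []
        self.last_flush = clock()

    def add(self, row):
        self.rows.append(row)
        self.maybe_flush()

    def maybe_flush(self, force=False):
        # Flush when forced (e.g. on shutdown), when the interval has
        # elapsed, or when the batch hit its size cap.
        due = self.clock() - self.last_flush >= self.flush_interval
        if self.rows and (force or due or len(self.rows) >= self.max_rows):
            self.flush(self.rows)
            self.rows = []
            self.last_flush = self.clock()
```

In a real deployment the `flush` callback would be a single batched INSERT against ClickHouse, and `add` would be fed from a Kafka consumer poll loop — the point is just that all the "less than one batch per second" machinery is a few lines of bookkeeping, on top of two more services (Kafka, the inserter) you now have to run.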
PostgreSQL with TimescaleDB did the job. Why overcomplicate things?
I’ve learned (sometimes the hard way!) that every design choice comes with real trade-offs. There’s no magic database architecture that optimizes every dimension (e.g., scalability, performance, ease-of-use) simultaneously.
Social media often pushes us into oversimplified "winner vs. loser" narratives, but this hides the actual complexity of building great infrastructure.
Recognizing and respecting these differences makes us smarter engineers, better community members, and frankly, just more enjoyable people to chat with.
PS Thank you for helping me add a new book to my list :-)
That's fair. We referenced that quote because it captured a lot of the skepticism in the early days (and because that comment is public). No hard feelings though!
That’s fair, but I would never single out an individual over this, even if they were really mean about it. It’s just not a good look.
Together with the other paragraph bashing the competition, this makes it look like your company is starting to develop an echo chamber: internally you’re so comfortable speaking like that that it leaks out to the public. For comparison, at my company we don’t even speak about our competition like that internally. Let your readers come to the conclusion that you’re better than the competition by showing them the facts, and nothing more.
Please ask your RDS rep to support it. We (Tiger Data) are also happy to help push that along if we can.