
These changes look fantastic.

If I may hijack the thread with some more general complaints though, I wish the Postgres team would someday prioritize migration. Like make it easier to make all kinds of DB changes on a live DB, make it easier to upgrade between postgres versions with zero (or low) downtime, etc etc.

Warnings when the migration you're about to do is likely to take ages because for some reason it's going to lock the entire table, instant column aliases to make renames easier, instant column aliases with runtime typecasts to make type migrations easier, etc etc etc. All this stuff is currently extremely painful for, afaict, no good reason (other than "nobody coded it", which is of course a great reason in OSS land).

I feel like there's a certain level of stockholm syndrome in the sense that to PG experts, these things aren't that painful anymore because they know all the pitfalls and gotchas and it's part of why they're such valued engineers.



We currently use MongoDB and while Postgres is attractive for so many reasons, even with Amazon Aurora's Postgres we still need legacy "database maintenance windows" in order to achieve major version upgrades.

With MongoDB, you're guaranteed single-prior-version replication compatibility within a cluster. This means you spin up an instance running the updated version of MongoDB and it catches up to the cluster. Zero downtime, seamless transition. There may be a handful of cancelled queries, but those are retryable, and with retryable writes and write concern preferences there is no loss of writes. E.g. MongoDB 3.6 can be upgraded to MongoDB 4.0 without downtime.
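The retryable-writes and write-concern preferences mentioned here can be expressed in the connection string; a sketch with placeholder hostnames and replica set name:

```
mongodb://node1.example.com,node2.example.com,node3.example.com/?replicaSet=rs0&retryWrites=true&w=majority
```

With `retryWrites=true` the driver transparently retries a write that failed due to an election, and `w=majority` ensures a write is acknowledged only once a majority of the replica set has it.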

Edit: Possibly misinformed but the last deep dive we did indicated there was not a way to use logical replication for seamless upgrades. Will have to research.


The problem with MongoDB though is that you're on MongoDB.


As someone that mostly shared that opinion for the last decade or more, I recently set up a cluster for work, and everything seems much more production level quality than I remember or what I assumed it was going to be like. I'm not the one using it for queries every day, but I did do a bunch of testing for replication and failed nodes to confirm that I understood (and could rely) on the claims of robustness, and it seemed to be stable and with good documentation of what to expect in different scenarios and how to configure it (which is not what I experienced doing the same testing back in 2010-2011).

All in all, my impression of MongoDB now is that they're one of those "fake it till you make it" success stories, where they leveraged their popularity into enough momentum to fix most of their major problems.


One thing that turned me away from MongoDB was their utter lack of care for your data integrity that they displayed for years. Some of those instances were even documented. Then there were some bad defaults - some could _also_ cause data loss.

For any component that's viewed as a database (as opposed to, say, cache), data integrity is one of the most important metrics (if not THE most).

In contrast, PostgreSQL data loss bugs are rare - and are treated extremely seriously. Defaults are sane and won't lose data. It's one of the few databases I'm pretty confident that data will be there even if you yank a server power cord mid writes.

Has MongoDB improved? Yes, leaps and bounds (seems to still fail Jepsen tests though). But I can't help but feel that it should have been released as a beta product, instead of claiming it was production ready. It wasn't. Maybe it is now. I'd still evaluate other alternatives before considering it.

That said, one thing that always amuses me is how MongoDB gets mentioned in the same context as PostgreSQL. If PostgreSQL would meet your needs, it's unlikely that MongoDB would. And vice versa (but maybe something else like Cassandra would).


Postgres with tables that are just an ID and a JSONB column nowadays give you practically everything you'd want out of MongoDB.

You can add deep and customized indices as desired, you can easily shard with Citus, and if you want to live without transactions you'll see equally good if not better performance - with the option to add ACID whenever you want. The developer experience argument, where the ->> operator was more confusing than brackets, is now moot.
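A minimal sketch of the id-plus-JSONB pattern described above (table, column, and field names are made up for illustration):

```sql
-- Documents live in a single JSONB column; ids are ordinary Postgres keys.
CREATE TABLE docs (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body jsonb NOT NULL
);

-- A GIN index accelerates containment queries over the whole document...
CREATE INDEX docs_body_gin ON docs USING gin (body);

-- ...and an expression index targets one deep field, analogous to a
-- single-field index in MongoDB.
CREATE INDEX docs_user_email ON docs ((body -> 'user' ->> 'email'));

-- Containment query served by the GIN index:
SELECT body FROM docs WHERE body @> '{"status": "active"}';
```

The `@>` containment operator and the `->`/`->>` accessors cover most document-store query patterns, and the table can grow ordinary relational columns later without a rewrite of the application's data model.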

As a former MongoDB user, there were good synergies between MongoDB and Meteor back in the day, and I loved that tech, but between Materialize and Supabase, you have vastly more options for realtime systems in the Postgres ecosystem.


Although MongoDB claims in an undated article entitled "MongoDB and Jepsen"[65] that their database passed Distributed Systems Safety Research company Jepsen's tests, which it called “the industry’s toughest data safety, correctness, and consistency Tests”, Jepsen published an article in May 2020 stating that MongoDB 3.6.4 had in fact failed their tests, and that the newer MongoDB 4.2.6 has more problems including “retrocausal transactions” where a transaction reverses order so that a read can see the result of a future write.[66][67] Jepsen noted in their report that MongoDB omitted any mention of these findings on MongoDB's "MongoDB and Jepsen" page.

from https://en.wikipedia.org/wiki/MongoDB#Bug_reports_and_critic...


Those defaults were changed a decade ago, and were never an issue if you used a driver, e.g. the Python one.

And the Jepsen tests are part of the core test suite, so do you have any evidence they are still failing?

It’s so ridiculous and pointless to be rehashing the same issues a decade later.

Actually more a testament to the company that it’s still hugely successful and depended on by some very large applications.


Perhaps, but mongodb was responsible for something I have bookmarked as "the worst line of code ever".

Which decided whether or not to log connection warnings based on Math.random().

https://github.com/mongodb/mongo-java-driver/blob/1d2e6faa80...


a) This is a line of code from 2013 and was fixed weeks later.

b) Based on the JIRA [1] it was designed to only log 10% of subsequent failures where there is no connection to prevent log flooding. You would still get the initial failure message.

Pretty reasonable technique and hardly the worst code ever.

[1] https://jira.mongodb.org/browse/JAVA-836



Jepsen test suite completely tore them a new one. I don't trust any allegedly distributed database that gets excoriated that badly by Aphyr.

https://jepsen.io/analyses/mongodb-4.2.6

That's just a bit more than a year ago. Come on.

MongoDB is like Mysqldb. I am so so so tired of hearing "that's been fixed, it's great now", doing a paper-thin dive into things, and seeing there are massive problems still.

I used MongoDB with Spring Data, it is impressively seamless.

It's just that there are way too many people who have sold snake oil for a decade-plus now, and I don't trust what they say anymore, and won't for a long long time.


Even worse, MongoDB lied about having fixed these bugs.

https://en.wikipedia.org/wiki/MongoDB#Bug_reports_and_critic...


Let's be clear, I definitely don't think it's great. It's just that my immediate response prior to six months ago was to laugh at the mere suggestion it be put into production.

The only reason it actually was put into production is because we had a vendor requirement on it (and why they thought it was sufficient, I'm not sure).

There's a difference between "not suitable for anything because it's so buggy and there's been so many problems over the years" and "not suitable as a replacement for a real RDBMS for important data". For the former, I think my opinion was possibly a little harsh for the current state of it. For the latter, yeah, I'm not going to blindly trust it for billing data and processing yet, that's for sure.


So did you do app-level code to verify writes? Double checking, etc?


I wrote a few small test programs that ran continuous inserts against the master, then tested shutting down, firewalling off, and killing the process of different members of the cluster, checking how it recovered and whether data loss occurred by comparing data sets.

It was sufficient for me to not feel like we were taking on undue risk by using it, and since our use case is not one where we're in major trouble if a problem does come about (restoring from daily backups should be sufficient) and we're not doing anything transactional, that's good enough. As I mentioned earlier, it was a vendor requirement, so we just wanted to make sure it wasn't something that was problematic enough to make us question the vendor's decision making.


> All in all, my impression of MongoDB now is that they're one of those "fake it till you make it" success stories, where they leveraged their popularity into enough momentum to fix most of their major problems.

The downside being that their reputation is now somewhat charred.


> All in all, my impression of MongoDB now is that they're one of those "fake it till you make it" success stories, where they leveraged their popularity into enough momentum to fix most of their major problems.

That's not all bad. The same could be said of MySQL. Both DBMS prioritized ease of use over data integrity in the early going.


And yet PostgreSQL making the exact opposite choice has really paid off in the longer run. People used to dismiss it as simply a toy for academics to play with, and look where the project is today. It can easily surpass most NoSQL databases on their home turf.


To be fair, PostgreSQL 15 years ago also had a lot of problems storing data reliably. Some of them manifested as performance issues. I also heard a fair number of war stories about corruption with "large" databases (e.g., 1TB+). PG replication lagged MySQL for many years as well. These seem to be non-issues today.

At this point there's effectively no difference in the durability of data stored in MySQL or PostgreSQL, so it's hard to argue that one or the other made a better choice. They just got there by different paths.

In fact, PostgreSQL is winning back share in part because of licensing. GPLv2 is limiting for a lot of applications, and there continue to be concerns about Oracle ownership. It's also absorbed a lot of features from other databases like JSON support. That's not special to PostgreSQL though. It's been a trend since the beginning for SQL RDBMS and explains why they have stayed on top of the OLTP market for decades.


And how exactly is that a problem?


Some things seem to have changed from 2018, but MongoDB was by far the worst database I ever had the displeasure of using (and Amazon DocumentDB was even worse).

https://jepsen.io/analyses/mongodb-3-6-4

https://jepsen.io/analyses/mongodb-4.2.6


Posting old Jepsen analyses is like pointing at old bug reports. Every time Jepsen finds a bug we fix it lickety-split. I know it's not cool to focus on that fact, but it is a fact. The Jepsen tests are part of the MongoDB test suite, so when we fix those problems they stay fixed.

I would love to hear your personal experience of MongoDB as opposed to reposting old Jepsen reports. Perhaps there is something that we can address in 5.1 that is still a problem?

(I work in developer relations at MongoDB)


The latest "old Jepsen report" is barely a year old. It's not like digging up dirt from years ago.

It also seems like there was quite a lot wrong even a year ago, quoting from there:

> Roughly 10% of transactions exhibited anomalies during normal operation, without faults.

It's just not a very reassuring response to say "when someone goes to dig a bit and finds a lot of show-stopping bugs, we address those specific bugs quickly".

To me it sounds like the architecture and care just isn't there for a robust data storage layer?


Something that was drilled into me decades ago is that there is no such thing as fixing multi-threaded (or distributed) code via debugging or patching it "until it works".

You either mathematically prove that it is correct, or it is wrong for certain.

This sounds like an oddly strong statement, but the guy who wrote the textbook that contained it went on to dig up trivial-looking examples from other textbooks that were subtly wrong. His more qualified statement is that if a professor writing simplified cases in textbooks can't get it right, then an overworked developer under time pressure writing something very complex has effectively zero chance.

The MongoDB guys just don't understand this. They're convinced that if they plug just one more hole in the wire mesh, then it'll be good enough for their submarine.

PS: The professor I was referring to is Doug Lea, who wrote the "EDU.oswego.cs.dl.util.concurrent" library for Java. This was then used as the basis for the official "java.util.concurrent".


If you use MongoDB as a document store, arguably its core functionality, you're not exposed to the weaknesses that Jepsen rightly identified and exploited.

Transactions are new to MongoDB and they are not necessary for most uses. Structure your data model so you only perform single-document atomic updates ($inc, $push, $pull) rather than making use of multi-document ACID transactions. It's possible; we're doing it for our ERP.

Sharding is something we've intentionally avoided opting for application-layer regional clusters. We specifically were avoiding other complexities related to shards that are not a concern for replica sets. Durability and maximum recovery time during emergency maintenance caused us to avoid them.


where is the latest jepsen test results published?


That was sarcasm. But yeah, you can search for MongoDB and you'll come across many many posts criticizing it.

It could be said that MongoDB is hated as much as Postgres is loved.

Personally I have no opinion about mongodb.


Yes, and most of these love/hate memes are blown out of proportion by people who don't actually have any real expertise in those technologies, but just parrot whatever they've read in some memes.


You're exactly correct. Tons of "XYZ is bad" because of some meme that they don't even understand or have context on that hasn't been relevant for years.

I have no idea if MongoDB is good or bad at this point, but the comments of "haha it's mongo" are completely devoid of meaningful content and should be flagged.


I was part of a team that operated a large Mongo cluster for most of the last decade. I would not have advised anyone to use Mongo as their durable source of truth database then, and I still don't think it's advisable to do so now. On numerous occasions, Mongo demonstrated the consequences of poor engineering judgment and an addled approach to logic in critical components responsible for data integrity. In addition, Mongo internalized many poor patterns with respect to performance and change management. Mongo did not, and does not provide the data integrity or performance guarantees that other databases internalize by design (the WiredTiger transition helped, but did not cure many of the issues).

PostgreSQL introduced JSONB GIN index support sometime around 2015, making Postgres a better fit for most JSON-based applications than Mongo.


My issue isn't with people not liking Mongo. It's with contentless meme posts. Your post has real information that adds value to the conversation, and I appreciate that you took the time to write it out.


Because it's (or at least it definitely WAS) true.

There are valid use-cases for mongo but for vast majority of things, you're better to start with postgres. And I say that as an early adopter - I really wanted mongo to succeed but it just failed all of my expectations. All of them.

BTW: this is post about postgres.


You can find posts criticising every database.

Most of the ones for MongoDB are from a decade ago and not at all relevant today.


It's easier to ask how that is NOT a problem, because that list will be much, much shorter.


> Edit: Possibly misinformed but the last deep dive we did indicated there was not a way to use logical replication for seamless upgrades. Will have to research.

It is possible since PG10

https://severalnines.com/database-blog/how-upgrade-postgresq...
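A rough sketch of the publication/subscription flow for a cross-version upgrade (host, database, user, and object names are placeholders):

```sql
-- On the old-version primary:
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the new-version server, after copying the schema over
-- (e.g. with pg_dump --schema-only):
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=app user=replicator'
    PUBLICATION upgrade_pub;

-- Once the subscriber has caught up, point clients at the new server
-- and drop the subscription:
DROP SUBSCRIPTION upgrade_sub;
```

Caveats apply: logical replication does not carry DDL, sequences need to be resynced manually, and tables need primary keys (or a replica identity) for updates and deletes to replicate.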


We seem to have been misled by all of the Amazon RDS and Aurora documentation. It seems Amazon prefers to implement postgres logical replication through their database migration service. All upgrades are typically done through pg_upgrade, which does require downtime.

Interesting. I can't wait to see how PG12 influences future offerings from the cloud providers for more seamless major version upgrades.


MongoDB and Postgres are like apples and oranges tho.

I'm not gonna choose MongoDB if I need a relational model… even if it offers zero downtime upgrades out-of-the-box.


You might choose Postgres with JSON as an alternative to MongoDB though. There are plenty of people pushing the limits of MongoDB who are researching it if not just for access to a larger pool of DBAs who can work for them.


Logical replication across major releases for seamless upgrades has been supported and documented since pgSQL 10.


Migrations are actually quite highly prioritized in PostgreSQL, which has by far the best migration support of any database I have worked with. There is of course a lot of work left to do, but that is true for many other areas in PostgreSQL.

Also, I can't agree at all with "nobody coded it"; patches in this area are generally welcome. My first real patch for PostgreSQL was about improving migrations. And there has been talk about several of the ideas you propose, but nobody is working on any of them right now. So I mostly think it is a lack of resources in general.


Sorry, I didn't mean offense. What I meant with "nobody coded it" is that the migration DX features that don't exist yet, likely don't exist simply because they haven't been made yet (and not because eg they're architecturally impossible or because the postgres team are stupid or sth).

It's hard to complain about OSS without attacking the creators. I tried to get that right, but clearly I failed nevertheless :-) Thanks for your contributions!


[flagged]


What does SJP mean?


"Social Justice" something, probably, like in "SJW" = "Social Justice Warrior".

Some people see politeness or compassion as weakness, and therefore use terms acknowledging it as pejoratives. The funny thing is, they think this makes their targets objects of derision, without realising that it actually does so to themselves.


Ah, I suppose that makes sense. It's weird to me how 'Social Justice Warrior' is derogatory, because it puts you against people fighting for justice. I get the mind games (I think) you have to play to arrive at the conclusion that SJW is a good insult, but it's got this "are we the baddies?" energy, to me.


DDL migrations are amazing. Migrations between different postgres versions not so much.


I love postgresql, as long as I don’t have to do upgrade. I have yet to see a successful zero downtime upgrade. That being said, other databases aren’t that much better. Maybe except SQLite.


> far best migration support of any database I have worked with

BS, have you never worked with mysql or sqlite?


MySQL migrations are beyond a joke.

It’s absolutely impossible that you’re not trolling with such a statement.


Copy/hardlink the data files and start the new engine, instead of having to make a dump and import it?

It's a joke that you have to make an offline dump and import for upgrades with pgsql.



Works one time out of three without hiccups.


Sounds like a fun bug then.

I’ve never had it fail for me. :/


They are slowly getting there. For example, postgres 12 added REINDEX CONCURRENTLY. Under the hood it's just recreating the index and then name swapping and dropping the old one. Basically what pg_repack was doing.

There's a huge collection of tricks out there that just need to become formal features. The trick I'm working with today is adding a check constraint with NOT VALID and then immediately calling VALIDATE, because otherwise it takes a very aggressive lock that blocks writes. That could easily become ALTER TABLE CONCURRENTLY or something.
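The NOT VALID trick described above looks like this (table and constraint names are hypothetical):

```sql
-- A plain ADD CONSTRAINT ... CHECK holds an ACCESS EXCLUSIVE lock while it
-- scans the whole table. NOT VALID skips the scan, so the lock is brief:
ALTER TABLE orders
    ADD CONSTRAINT orders_amount_positive CHECK (amount > 0) NOT VALID;

-- VALIDATE then scans the table under a weaker lock (SHARE UPDATE
-- EXCLUSIVE) that allows concurrent reads and writes:
ALTER TABLE orders VALIDATE CONSTRAINT orders_amount_positive;
```

New rows are checked against the constraint from the moment it is added; the VALIDATE pass only verifies the pre-existing rows.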


Do you know a good resource with these tricks? I often struggle to predict exactly which schema migrations will lock tables aggressively and what the smartest workaround is.


We've had good success with https://github.com/sbdchd/squawk to lint migrations. It tells you if a query is going to lock your table (as long as it's written in SQL, not some ORM DSL)


An orthogonal migration issue which I'm hitting right now: we need to migrate from heroku postgres to aws rds postgres, and I'm stressed about the risk and potential downtime in doing so. If there was a way to make a replica in rds based on heroku, promote the rds replica to be the primary, hard switch our apps over to rds, that'd be a lifesaver.

I'm working through this blog post [1] now, but there is still a bit to be defined (including a dependency on heroku's support team) to get this rolling.

Why is the migration required? Heroku postgres doesn't support logical replication, and logical replication is required for any ELT vendor (Fivetran, Stitch, Airbyte) to use Change Data Capture to replicate data from postgres to snowflake (while replicating deleted rows efficiently).

Note: I've also read this ebook [2], but this approach requires downtime.

Note 2: I reached out to heroku support and asked if logical replication was on their short term roadmap. They said they've heard this quite a bit, but nothing tangible is on the roadmap.

If anyone has any thoughts on the above migration, I'd be all ears. :)

1) https://vericred.com/how-we-migrated-a-1tb-database-from-her...

2) https://pawelurbanek.com/heroku-migrate-postgres-rds


I did that exact migration. Unfortunately, to my knowledge, there's no way to do it with zero downtime. You need to make your app read only until the RDS instance has ingested your data, then you can cut over. For me, that was roughly one gigabyte of data and took about forty seconds.

My best advice is to automate the whole thing. You can automate it with the Heroku and AWS CLIs. Test on your staging site until you can run through the whole process end to end a few times with no interruptions.


Yep, absolutely garbage that these clouds (Azure is another one) don't allow you to replicate with external systems. Pretty much devalues their entire hosted postgresql offering if you ask me, since it's just designed to keep you locked in (duh).

If you have any significant amount of data where you're worried about a migration, stay far away from hosted postgres offerings. You'll never get your data out without significant downtime.


There are other ways to handle this at the application level, to be clear, using dual read & write and backfill. More relevant when you have TB+++ of data.


Interesting. I've done dual-writes at the application level to migrate the datastore for a smaller feature (branch by abstraction), but never for an entire application. And the code path was quite simple, so it was easy to think about all of the edge cases at one time in your head.

Do you have any resources which talk through the read/write/backfill approach?

Here's what I found so far:

* https://medium.com/google-cloud/online-database-migration-by...

* https://aws.amazon.com/blogs/architecture/middleware-assiste...


Jumping in again... Your post reminded me that I actually typed my migration up!

https://mattbasta.medium.com/migrating-from-heroku-to-aws-6d...

Hopefully it's somewhat helpful!


Thank you for this - extremely helpful in validating the current approach and de-risking the developer time.


So, basically, Postgres would have a replication port which can be used for both replication/clustering and transfer across cloud providers. And sharding. </dreaming>


I mean, it essentially does. Heroku's managed postgres has it disabled.


We've moved a number of customers from Heroku over to Crunchy Bridge with essentially no down time, am currently helping one customer with 7TB through that process. It's not over to RDS, but would be happy to talk through process if helpful. And we do support logical replication and have many people using wal2json/logical replication with us.


> Why is the migration required? Heroku postgres doesn't support logical replication

You could possibly hack together some form of higher-layer logical replication via postgres_fdw and database triggers. A comment ITT references this as a known technique.
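A very rough sketch of that technique, assuming a single source table; the server address, credentials, schema, and table names are all invented, and updates, deletes, and the initial backfill would need similar handling:

```sql
-- Expose the target cluster's table locally through postgres_fdw.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER rds_target FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'target.example.com', dbname 'app');
CREATE USER MAPPING FOR CURRENT_USER SERVER rds_target
    OPTIONS (user 'app', password 'secret');
CREATE SCHEMA mirror;
IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
    FROM SERVER rds_target INTO mirror;

-- Mirror every insert on the source table to the target.
CREATE FUNCTION mirror_orders() RETURNS trigger AS $$
BEGIN
    INSERT INTO mirror.orders SELECT (NEW).*;
    RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER orders_mirror AFTER INSERT ON orders
    FOR EACH ROW EXECUTE FUNCTION mirror_orders();
```

Note the trade-off: every write now synchronously round-trips to the remote server, so this is more of a stopgap for a cutover window than a durable replication setup.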


One possible solution for the ETL stuff might be to use Heroku Kafka for the Change Data Capture and then from that Kafka you can move it someplace else.

See https://blog.heroku.com/streaming-data-connectors-beta. Heroku's own Kafka seems to have slightly more native support than if you use a 3rd party like Confluent.

We've not yet tried any of this, but it's been bookmarked as a possible solution to explore.


Interesting that you bring this up. I looked into heroku's streaming connectors to facilitate an integration with materialize.com, but Heroku's support team wasn't confident we could sync all 187 postgres tables under 1 connection.

I thought about using Debezium and Kafka to roll my own micro-batch ETL solution, but listening to this podcast made me walk away slowly: https://www.dataengineeringpodcast.com/datacoral-change-data...


Interesting, what was it from that podcast that made you reconsider? Always eager to learn about opportunities for improving the experience of using Debezium.

Disclaimer: I work on Debezium


Oh wow, by "work on" you mean "the core maintainer of". Thank you for replying. :)

The main part I reconsidered was the level of effort of taking the data from kafka and landing it into snowflake, especially around handling postgres schema changes safely. I also have no experience with kafka, so I'd be out of my depth pretty quickly for a critical part of the architecture. He also expressed the need for building quality checks into the kafka-to-snowflake code, but those details were a bit sparse (if I recall correctly).

Note: all of the above are probably outside the scope of debezium. :)

Note 2: your article [1] on using cdc to build audit logs w/ a "transactions" table blew my mind. Once I listened to your data engineering podcast interview [2], I knew there was some implementation of "event sourcing lite w/ a crud app" possible, so I was excited to see you had already laid it out.

1) https://debezium.io/blog/2019/10/01/audit-logs-with-change-d...

2) https://www.dataengineeringpodcast.com/debezium-change-data-...


Gotcha, yeah, there's many things to consider indeed when setting up end-to-end pipelines. Thanks for the nice feedback, so happy to hear those resources are useful for folks. As far as event sourcing is concerned, we got another post [1] which might be interesting to you, discussing how "true ES" compares to CDC, pros/cons of either approach, etc.

[1] https://debezium.io/blog/2020/02/10/event-sourcing-vs-cdc/


We faced this migration, too. My sympathies.

Adding to your list of options that still require _some_ downtime: we used Bucardo [0] in lieu of logical replication. It was a bit of a pain, since Bucardo has some rough edges, but we made it work. Database was ~2 TiB.

[0] https://bucardo.org/


When you subscribe to managed services instead of running the software yourself, these are the kinds of trade-offs that get made


logical replication, but this is one of the walls that heroku creates.


Coming from the outside, with zero understanding of the internal details, my hunch is the same: lack of support for logical replication is more of a business decision than a technical decision. (But again, this is a hunch -- partially based on how good heroku is from a technical perspective.)


It's absolutely an evil business decision, and all the clouds are playing this game. Don't ever use a hosted database solution if you're thinking about storing any significant amount of data. You will not be able to get it out without downtime.


It looks like gcp supports logical replication now: https://cloud.google.com/blog/products/databases/you-can-now...


The big question is: Can you enter arbitrary IP addresses, or do you have to replicate to another GCP instance? Azure does the latter.



> I feel like there's a certain level of stockholm syndrome in the sense that to PG experts, these things aren't that painful anymore

I don't know if I’m a PG expert, but I just prefer “migration tool” to be a separate thing, and for the DB server engine to focus on being an excellent DB server engine, with the right hooks to support a robust tooling ecosystem, rather than trying to be the tooling ecosystem.


As a developer I fully support the notion of splitting the tools out from the server engine, like things are today.

But, realistically, pg_upgrade's functionality would need to be integrated into the server itself if we're ever going to have zero-downtime upgrades, right?

I don't know how other RDBMSs handle this, if at all


Yeah, the comment I was responding to addressed two different kinds of migration—schema migration and version upgrades—and my comment really applies more to schema migration than version upgrades; more support for smoothing the latter in the engine makes sense.


Agreed: Postgres' transactional schema migration is freaking sublime.

I used and abused it pretty hard at my previous gig and now it's hard to imagine ever living without it.

At my gig before THAT, we had MySQL and schema migrations were so very very painful.
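Transactional DDL means a whole migration either applies or doesn't; a sketch with a hypothetical table:

```sql
BEGIN;

-- Multiple schema changes in one transaction:
ALTER TABLE accounts ADD COLUMN display_name text;
ALTER TABLE accounts RENAME COLUMN username TO user_name;

-- If any statement fails, or you issue ROLLBACK, neither change is
-- applied; COMMIT applies both atomically.
COMMIT;
```

In MySQL, by contrast, most DDL statements historically caused an implicit commit, so a migration that failed halfway left the schema in an intermediate state you had to repair by hand.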


> If I may hijack the thread with some more general complaints though, I wish the Postgres team would someday prioritize migration.

A thing I'm interested in is a 'simple' replication setup to reduce single points of failure. We currently use Galera with MySQL/MariaDB/Percona and it's quite handy for HA-ish needs: we can have two DBs and the garbd running on the web app server.

Pointers to tutorials for Debian/Ubuntu to accomplish something similar would be appreciated. (We run things on-prem in a private cloud.)



There's no such thing as "simple" when it comes to HA setups, the requirements are simply too varied. PostgreSQL has great documentation for their HA featureset, but when it comes to systems-level concerns (detecting that a primary is down and arranging promotion of a replica to primary) you're expected to address those on your own.


With our Galera setup we have a keepalived health check look at the local system, and if it fails/times out it stops sending heartbeats so the other side takes over the vIP. If one system crashes, the vIP fails over as well.
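A hedged sketch of that kind of keepalived setup; the health-check script path, interface, router id, and VIP are all placeholders:

```
vrrp_script chk_db {
    script "/usr/local/bin/db_health_check.sh"   # exits non-zero on failure
    interval 2      # run every 2 seconds
    fall 3          # mark failed after 3 consecutive failures
}

vrrp_instance VI_DB {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.50/24
    }
    track_script {
        chk_db
    }
}
```

When the tracked script fails, the node drops its VRRP advertisements and the peer with the next-highest priority claims the virtual IP.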


Doesn’t PG already support inplace version upgrade?

Also PG is one of the few that support schema/DDL statements inside a transaction.


"one of the few" is a pretty low bar though, I don't know a DB that doesn't suck at this.


Or maybe it is you who are underestimating the technical complexity of the task? A lot of effort has been spent on making PostgreSQL as good as it is on migrations. Yes, it is not as highly prioritized as things like performance or partitioning but it is not forgotten either.


My complaint would be that there is no standard multi-master solution for postgres, whereas mysql now has group replication as a native multi-master solution.


There’s a Ruby gem called strong_migrations that does this. It’s fantastic and I include it in all of my Ruby projects.


The gem does not solve these issues, merely tells you about them (and even then it can't catch all of them, only the ones that Rails defines).


It gives you mitigation strategies.

The mere presence of it on the dev workflow keeps your developers thinking about these types of issues as well, which goes a long way.



