
I can't think of a single time I've used grep where I thought "I wish this was faster".


Even if the answer is instant, you have a 50% performance improvement in your search just from typing "rg" instead of "grep"!

From my perspective it's a no brainer. I don't HAVE a grep (because I don't have a Unix) so when I install a grep, any grep, reaching for rg is natural. It's modern and maintained. I have no scripts anywhere that might expect grep to be called "grep".

Of course if you already have a grep (e.g. you run Unix/Linux) then the story is different. Your system probably already has a grep. Replacing it takes effort and that effort needs to have some return.


Well, a cmd script for msys64 grep in my \CmdTools is named `gr`. It feels more natural, because the index-then-middle-finger motion does too. Come to think of it, I actually hate starting anything with a middle finger (no pun intended). I also hate learning new things that do the same thing as the old one.


You can alias `grep` to `gr` or even `rg` (see the sketch below). Installing a whole different program just to type a shorter name is a crazy contrived justification.

I imagine a lot of devs have grep preinstalled. In fact, where is grep not installed, now that WSL exists?
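For example, in ~/.bashrc (a minimal sketch; assumes a POSIX-ish shell, and the ripgrep line assumes rg is installed):

    # pick one: a shorter name for the stock grep...
    alias gr='grep --color=auto'
    # ...or point the same short name at ripgrep instead
    # alias gr='rg'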


Even faster, I have an alias 'ss' (mnemonic for 'super search') for rg. Fitts' Law to the max!


What do you use a single "s" for?


git status --untracked-files=all

sn is 'git status --untracked-files=no'.
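Spelled out as shell aliases, the scheme described above would look something like this (a sketch; the actual definitions weren't shown):

    alias s='git status --untracked-files=all'
    alias sn='git status --untracked-files=no'
    alias ss='rg'   # 'super search'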


I am amused by this comment, because it shows a dramatically different type of thinking. I have probably thought "I wish this was faster" for nearly everything I do on a computer :)


Multi-GB log files. Even with LC_ALL=C, grep is painfully slow.
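For reference, the locale trick mentioned here forces byte-oriented matching instead of multibyte handling, which speeds GNU grep up considerably but often still not enough (the file name and pattern are hypothetical):

    # force the byte-oriented C locale for this one invocation
    LC_ALL=C grep 'deadlock' db.log
    # ripgrep equivalent; rg is fast without the locale hack
    rg 'deadlock' db.log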


That's probably true - but good Lord, one should do something to reduce the size of log files that large.


DB logs can get HUGE. logrotate for them is currently daily. I briefly wanted to tune it to help alleviate the issue, but honestly it didn’t and doesn’t matter, given the infrequency with which they’re directly accessed. No risk of running out of disk space, and the DBAs like them how they are, so meh. There are other things to worry about.
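A daily rotation like the one described might look roughly like this (a sketch; the path, retention count, and compression settings are all assumptions):

    # /etc/logrotate.d/db -- hypothetical config for large DB logs
    /var/log/db/*.log {
        daily
        rotate 14
        compress
        delaycompress
        missingok
        notifempty
    }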


A few years ago I worked on a Solaris box that would lock the whole machine up whenever I grepped through the log files. Like it wouldn't just be slow, the web server that was running on it would literally stop serving requests while it was grepping.

I never worked out how that could be happening.


My best guess is your grep search was saturating I/O bandwidth, which slowed everything else to a crawl.

Another possibility is that your grep search was hogging up your system's memory. That might make it swap. On my systems which do not have swap enabled but do have overcommit enabled, I experience out-of-memory conditions as my system essentially freezing for some period of time until Linux's OOM-killer kicks in and kills the offending process.

I would say the first is more likely than the second. In order for grep to hog up memory, you need to be searching some pretty specific kinds of files. A simple log file probably won't do it. But... a big binary file? Sure:

    # /proc/self/pagemap is a huge binary pseudo-file; -a makes grep treat it as text
    grep -a burntsushi /proc/self/pagemap
Don't try that one at home, kids. You've been warned. (ripgrep should suffer the same fate.)

(There are other reasons for a system to lock up, but the above two are the ones that are pretty common for me. Well, in the past anyway. Now my machines have oodles of RAM and lots of I/O bandwidth.)
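On Linux, one mitigation for the I/O-saturation case mentioned above is to run the search in the idle I/O scheduling class so it yields to everything else (a sketch; this wouldn't have helped on that Solaris box, which has no ionice):

    # idle I/O class (-c3) plus lowest CPU priority for the search
    ionice -c3 nice -n19 grep -r 'pattern' /var/log/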


On extremely slow systems, such as Windows, I can only search across multiple repos with rg.



