
You can do that with standard Linux tooling available on every distribution; see https://man7.org/linux/man-pages/man8/tc.8.html. What you're specifically looking for is `qdisc netem`; it can inject packet loss, reordered packets, duplicate packets, delay and more.
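For example (a minimal sketch; `eth0` is a placeholder for your actual interface, and you need root):

    # add 100ms of delay and 1% random packet loss to egress traffic on eth0
    tc qdisc add dev eth0 root netem delay 100ms loss 1%

    # inspect the qdisc, then remove it when you're done
    tc qdisc show dev eth0
    tc qdisc del dev eth0 root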


I am lucky enough to have worked on all three of these systems (TAO, ZippyDB, and currently MySQL), so I can shed some light here.

Both MySQL and ZippyDB are datastores that use RocksDB under the hood, each in a slightly different way and with different querying capabilities exposed to the end user. ZippyDB uses it exclusively, but MySQL supports both the traditional InnoDB and RocksDB (MyRocks). TAO is an in-memory graph database layered above both of these, and it doesn't persist anything by itself; it talks to the database layer (MyRocks).


/wave


/wave


I took a look and don't see any strong reasons not to use and support the official app. Tweetbot was so successful because it was competing against the official client, which was (and still is) truly horrible, but in this case I don't see that Ivory has many advantages over the official client.


I used to use the official client prior to Ivory. It's a fine app, but it's a far cry from Ivory's polish and overall quality; Ivory feels snappier and more “at home” on iOS than Mastodon's official app.

Anyway, Mastodon has no reason to stand against third-party apps. Its official app was released only last year, and by no means to threaten others or become the only app in town. Experimentation is good, and Mastodon's third-party apps are shining right now. There are dozens of them, each one bringing fresh ideas and new concepts.

edit: typos.


The official app makes it awkward to read the local timeline, which is where the real fun is on smaller instances.


Most of the "new" features are just eye candy, but usability is still very poor and way behind what they had 10 years ago. You can't do even the most trivial things, like sorting stocks in your watch-list by market value or P/E ratio, comparing two stocks side by side etc.


There is also `colordiff`, which works both as a standalone diff tool and as a colorizer filter for other diff tools. For example:

  diff -u a.txt b.txt | colordiff


Git ships with `diff-highlight` which is more than enough for me.

I use the following config.

    [core]
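      # note: the diff-highlight path below is distro-specific; adjust it to wherever your distribution installs it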
      pager = /usr/share/git-core/contrib/diff-highlight | less

    [color "diff-highlight"]
      oldNormal = red
      oldHighlight = 16 bold red
      newNormal = green
      newHighlight = 16 bold green


It seems that the person in question was #2 on the "patent trolls top list" in 2016:

https://www.rpxcorp.com/wp-content/uploads/sites/2/2017/05/t...


Heh, it seems like being #2 on a scumbag top list is the worst place you can be; you don't even get to claim you're the biggest scumbag around.


Manifest V3 and the killing of the webRequest API were the reasons that finally convinced me to go back to Firefox. Almost a month later, I haven't found a single thing that would make me go back to Chrome.

My main complaint about fresh Firefox installations is that it takes a lot of time to fine-tune everything. Every single time, I have to spend hours changing settings in about:config: disabling telemetry and Pocket, changing networking/DNS settings, pipelining, etc. A vanilla installation is just not well optimized.
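For example, a chunk of that can be captured in a user.js file in the profile directory so it survives reinstalls (a minimal sketch with a few real prefs, not a complete list):

    // user.js -- applied on every Firefox start
    user_pref("toolkit.telemetry.enabled", false);   // disable telemetry reporting
    user_pref("extensions.pocket.enabled", false);   // turn off the Pocket integration
    user_pref("network.dns.disablePrefetch", true);  // no speculative DNS lookups
    user_pref("network.prefetch-next", false);       // no link prefetching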


> My main complaint about fresh Firefox installations is that it takes a lot of time to fine tune everything.

And a lot of it is barely documented, and there's no easy way to set or test it. It took me quite a while to go through the different settings for scroll-wheel scrolling until I found a combination of speed, distance, acceleration, etc. that I feel good using.


Do the changes to pipelining settings have a noticeable effect?


Yes, a small but measurable improvement. The biggest perf bump you can get is probably enabling various kinds of prefetching, but most of the time you are making a performance/privacy tradeoff (e.g. when enabling DNS prefetching).


If you use and like `ag`, I suggest taking a look at ripgrep (`rg`). It seems to be by far the fastest of the three (`ack`, `ag`, `rg`). And it has a pretty interesting codebase (written in Rust).


If you're working in a git repository then IMO the most appropriate search tool is simply `git grep`. I don't think there's any reason to use ripgrep, ag, ack etc in that situation. (Personally, if I'm working with text files, then I'm nearly always in a git repo.)
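For example (a generic illustration; `needle` is a placeholder pattern):

    $ git grep -n needle            # search tracked files only, with line numbers
    $ git grep -n needle -- '*.py'  # limit the search with a pathspec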


(author of ripgrep here)

Well, at least one reason is that ripgrep is faster. On simple literal queries they'll have comparable speed, but beyond that, `git grep` is _a lot_ slower. Here's an example on a checkout of the Linux kernel:

    $ time rg '\w+_PM_RESUME' | wc -l
    8
    
    real    0.127
    user    0.689
    sys     0.589
    maxmem  19 MB
    faults  0
    
    $ time LC_ALL=C git grep -E '\w+_PM_RESUME' | wc -l
    8
    
    real    4.607
    user    28.059
    sys     0.442
    maxmem  63 MB
    faults  0
    
    $ time LC_ALL=en_US.UTF-8 git grep -E '\w+_PM_RESUME' | wc -l
    8
    
    real    21.651
    user    2:09.54
    sys     0.413
    maxmem  64 MB
    faults  0

ripgrep supports Unicode by default, so it's actually comparable to the LC_ALL=en_US.UTF-8 variant.

There are other reasons. It is nice to use a single tool for searching in all circumstances, and ripgrep can fit that role. Maybe you don't know this, but ripgrep respects your .gitignore file.
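For example (illustrative commands; `needle` is a placeholder pattern):

    $ rg needle             # recursive by default, honors .gitignore
    $ rg --no-ignore needle # also search files that .gitignore excludes
    $ rg -uu needle         # additionally search hidden files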


Thanks! I knew ripgrep was praised in particular for its performance but I didn't know the difference was that large. The repo I usually work in has 8.7M lines of code and I had been finding `git grep` performance very adequate (I use it in combination with the Emacs helm library where it forms part of an incremental search UI, and hence gets called multiple times in quick succession in response to changing search input.) It looks like it will be fun to try swapping in ripgrep as the helm search backend; I'll try it.


I wouldn’t recommend anyone use this. Beyond the really poor implementation quality of the algorithms (some of which are outright incorrect), the code in that repo is anything but Pythonic: it reimplements a lot of things from the standard library, avoids list, dict and set comprehensions, uses indexes instead of iterators, copies things around for no reason, etc. They didn’t even bother to run a linter to PEP8-ify it.
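To illustrate the kind of difference meant here (a toy sketch, not code from that repo):

    xs = [3, 1, 2]

    # index-based style of the sort criticized above
    squares = []
    for i in range(len(xs)):
        squares.append(xs[i] * xs[i])

    # idiomatic: iterate directly with a comprehension
    squares = [x * x for x in xs]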


To the people commenting that $35 is too much for the content included: if you want an equivalent set of channels from Comcast/XFINITY, it will cost you almost 3x more. So it's a no-brainer for me, and the fact that I don't have to deal with Comcast is worth even more than saving ~60% of my monthly cable bill.


This, of course, is only true if you pay for cable TV.

I have Netflix and Amazon and the rest of the internet. I had an HDHomeRun hooked up to an over-the-air antenna, but since I moved I haven't hooked it up again yet, as I need a huge mast to get reception.

You can also use a VPN-like service to get worldwide streams.

I do pay Comcast for a business internet connection to stay away from the data caps, however.

