Hacker News | flipped's comments

Almost every hobbyist reverse engineer uses cracked IDA, which is easily available. I have never seen Ghidra recommended for serious work.

And everyone uses Ghidra exclusively where I work. I'd say we're a serious operation.

Cracking IDA yourself was, and maybe still is, a "rite of passage" in certain communities.

This is changing: Ghidra is increasingly replacing IDA for commercial work.

I recommend it for serious work. Well, serious enough that I got paid for doing it, and/or given talks about it.

(not if you're only doing x86/ARM stuff, though)


Agree. IDA is surely the “primary” tool for anything that runs on an OS on a common arch, but once you get into embedded, Ghidra is heavily used for serious work, and once you get to heavily automation-based scenarios or obscure microarchitectures it’s the best solution and certainly a “serious” product used by “real” REs.
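For the automation case, this is roughly what it looks like in practice: Ghidra ships a headless analyzer (in its `support/` directory) for batch work. The install path, project name, `firmware.bin`, and `ListFunctions.py` below are made-up placeholders for illustration:

```shell
# Sketch: one-shot batch analysis with Ghidra's headless analyzer.
# Paths, project name, input binary, and post-script are illustrative.
GHIDRA_HOME=/opt/ghidra
"$GHIDRA_HOME/support/analyzeHeadless" /tmp/ghidra-projects DemoProject \
    -import ./firmware.bin \
    -processor "ARM:LE:32:v7" \
    -postScript ListFunctions.py \
    -deleteProject
```

The explicit `-processor` language ID matters for raw embedded images, where there's no file header for auto-detection; `-deleteProject` keeps one-shot runs from accumulating project state.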

The NSA doesn't do serious work?

That wasn't the claim. Ability + interest + time + budget + ... are what makes a serious tool.

You're just giving the troll an audience by reacting to it.

Can you be more specific? Is it getting easier to reverse Rust and Go? I have read they are among the hardest to reverse.

It's not perfect, but in my personal experience those languages are still tough due to the sheer volume of indirection and noise, which makes them hard to follow. For example, Go's calling convention is a little nutty compared to other languages, and you'll encounter a few *****ppppppppVar values that are otherworldly to make sense of, but the ability to recognize library functions and syscalls is for sure better.

Jia Tan wouldn't be interested in secret spyware firms. They hide their code in plain sight.

Regulated by whom, exactly? Did you even read it? The spyware is being used by governments all over the world. Regulation never works; if you need a secure phone, use GrapheneOS.

There's always a comment calling for "regulation" from an ignorant HN normie under anything related to surveillance. I feel like it's mostly bots at this point.


> Regulation never works

Whoa there, cowboy, sure you want to make such a broad and strong claim? Maybe you've eaten too much asbestos, breathed too many leaded-gasoline fumes, or otherwise inhaled something strange, because there are countless examples of regulation working just fine. Not to say it's without problems, but come on, "never"?


Regulation never works in the interwebs*

In the age of LLMs, debugging is going to be the largest part of time spent.


Interesting, I actually find LLMs very useful at debugging. They are good at mindless grunt work, and a great deal of debugging in my case is going through APIs and figuring out which of the many layers of abstraction ended up passing a wrong argument into a method call because of some misinterpretation of the documentation.

Claude Code can do this in the background tirelessly while I can personally focus more on tasks that aren't so "grindy".


They are good at purely mechanical debugging: throw them an error, and they can figure out which line threw it and therefore take a reasonable stab at how to fix it. Anything where the bug is actually in the code, sure, you'll get an answer. But they are terrible at weird runtime behaviors caused by unexpected data.


> In the age of LLMs, debugging is going to be the largest part of time spent.

That seems a premature conclusion. LLMs excel at meeting the requirements of users having little if any interest in debugging. Users who have a low tolerance for bugs likewise have a low tolerance for coding LLMs.


I don't think so. I think reviewing (and learning) will be. I actually think that the motivation to become better will vanish. AI will produce applications as good as we have today, but will be incapable of delivering better because AI lacks the motivation.

In other words, the "cleverness" of AI will eventually be pinned. Therefore only a certain skill level will be required to debug the code. Debug and review. Which means innovation in the industry will slow to a crawl.

AI will never be able to get better either (once it plateaus) because nothing more clever will exist to train from.

Though it's a bit worse than that. AI is trained on lots of information, and that means averages/medians. It can't discern good from bad. It doesn't understand what clever is. So it not only will plateau, but it will ultimately rest at a level that is below the best. It will be average, and average right now is pretty bad.


> In the age of LLMs, debugging is going to be the largest part of time spent.

That seems a premature conclusion. LLMs are quite good at debugging and much faster than people.


Nftables has a really good doc site: https://wiki.nftables.org/wiki-nftables/index.php/Main_Page. I wouldn't rely on any book.
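And the syntax that wiki documents is quite readable once you see it. A minimal illustrative stateful ruleset (the ports and policy here are placeholders, not a recommendation):

```nft
# Minimal stateful firewall sketch; adapt ports and policy to your host.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport { 22, 80, 443 } accept
        icmp type echo-request accept
    }
}
```

Load it with `nft -f ruleset.nft`; `nft list ruleset` shows the active state.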


https://toni.cunyat.net/2019/11/nftables-vs-pf-ipv4-filterin.... According to this article, it depends on the use case.


Has anyone tried using distributed versions of sqlite, such as rqlite? How reliable is it?


rqlite creator here, happy to answer any questions.

As for reliability - it's a fault-tolerant, highly available system. Reliability is the reason it exists. :-) If you're asking about quality and test coverage, you might like to check out these resources:

- https://rqlite.io/docs/design/

- https://rqlite.io/docs/design/#blog-posts

- https://philipotoole.com/how-is-rqlite-tested/
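If it helps give a feel for it: rqlite speaks HTTP rather than the SQLite C API, so a single-node session looks roughly like this (default HTTP port 4001 assumed; requires a running node):

```shell
# Writes go to /db/execute as a JSON array of statements; the leader
# replicates them through Raft before acknowledging.
curl -XPOST 'localhost:4001/db/execute?pretty' \
    -H 'Content-Type: application/json' \
    -d '["CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)"]'

# Reads go to /db/query; read consistency (none/weak/strong) is
# selectable per request via the level parameter.
curl -G 'localhost:4001/db/query?pretty' \
    --data-urlencode 'q=SELECT * FROM foo' \
    --data-urlencode 'level=weak'
```

That per-request consistency level is the main knob for trading read latency against staleness.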


Forgejo does all that while being lightweight and run by a non-profit. Gitlab is awfully resource hungry.


> Gitlab is awfully resource hungry.

Yes... and no.

Gitlab doesn't make sense for a low-volume setup (single private user or small org) because it's a big boat in itself.

But when you reach a certain org size (hundreds of users, thousands of repos), it's impressive how well it behaves with such modest resource requirements!


Forgejo scales too; even for a large org it's a perfect choice.

