Agree. IDA is surely the “primary” tool for anything that runs on an OS on a common arch, but once you get into embedded work Ghidra is heavily used for serious jobs, and once you get to heavily automation-based scenarios or obscure microarchitectures it’s the best solution, and certainly a “serious” product used by “real” REs.
It's not perfect, but in my personal experience it's still tough with languages like that due to the sheer volume of indirection and noise, which makes it hard to follow. For example, Go's calling convention is a little nutty compared to other languages, and you'll encounter a few *****ppppppppVar values that are otherworldly to make sense of, but the ability to recognize library functions and syscalls is for sure better.
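For anyone who hasn't stared at that output: below is a rough, hand-written sketch of the kind of pseudo-C Ghidra's decompiler tends to emit for pointer-heavy code. The function and variable names are hypothetical, not from any real binary, and the typedef only stands in for Ghidra's built-in undefined8 type so the sketch compiles.

    /* Hypothetical imitation of Ghidra decompiler output; not taken from
       any real binary. Ghidra uses its own "undefined" types in pseudo-C,
       so the typedef below is just here to make this sketch compile. */
    typedef unsigned long long undefined8;

    void hypothetical_go_func(long *param_1)
    {
      undefined8 *puVar1;
      undefined8 *****pppppVar2;

      /* The p-prefix encodes pointer depth, so code that walks Go
         interfaces and runtime structures fills up with names like this. */
      puVar1 = (undefined8 *)*param_1;
      pppppVar2 = (undefined8 *****)puVar1[2];
      *****pppppVar2 = 0; /* five dereferences to reach the actual value */
    }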
Regulated by whom, exactly? Since you apparently didn't even read it: the spyware is used exclusively by governments around the world. Regulation never works; if you need a secure phone, use GrapheneOS.
There's always a comment calling for "regulation" from an ignorant HN normie under anything related to surveillance. I feel like it's mostly bots at this point.
Woah there cowboy, are you sure you want to make such a broad and strong claim? Maybe you've eaten too much asbestos, breathed too many leaded-gasoline fumes, or otherwise inhaled something strange, because I'm sure there are countless examples of regulation working just fine. Not to say it's without problems, but come on, "never"?
Interesting; I actually find LLMs very useful for debugging. They are good at doing mindless grunt work, and a great deal of debugging in my case is going through APIs and figuring out which of the many layers of abstraction ended up passing the wrong argument into a method call because of some misinterpretation of the documentation.
Claude Code can do this tirelessly in the background while I focus on tasks that aren't so "grindy".
They are good at purely mechanical debugging: throw them an error and they can figure out which line threw it, and therefore take a reasonable stab at how to fix it. When the bug is actually in the code, sure, you'll get an answer. But they are terrible at weird runtime behavior caused by unexpected data.
> In the age of LLMs, debugging is going to be the large part of time spent.
That seems like a premature conclusion. LLMs excel at meeting the requirements of users who have little, if any, interest in debugging. Users who have a low tolerance for bugs likewise have a low tolerance for coding LLMs.
I don't think so. I think reviewing (and learning) will be. I actually think the motivation to become better will vanish. AI will produce applications as good as the ones we have today, but it will be incapable of delivering anything better, because AI lacks the motivation.
In other words, the "cleverness" of AI will eventually be pinned. Therefore only a certain skill level will be required to debug the code. Debug and review. Which means innovation in the industry will slow to a crawl.
AI will never be able to get better either (once it plateaus) because nothing more clever will exist to train from.
Though it's a bit worse than that. AI is trained on lots of information, and that means averages/medians. It can't discern good from bad. It doesn't understand what clever is. So not only will it plateau, it will ultimately rest at a level that is below the best. It will be average, and average right now is pretty bad.
rqlite creator here, happy to answer any questions.
As for reliability - it's a fault-tolerant, highly available system. Reliability is the reason it exists. :-) If you're asking about quality and test coverage, you might like to check out these resources: