I'll preface this by saying I've only used Ida free, so I can't comment on Ida pro. I'm also a very light user of both: I name functions/vars, save bookmarks, and occasionally work out custom types, and that's about it, none of the real fancy stuff.
I was recently trying to analyse a 600mb exe (denuvo/similar). I wasted a week after ghidra crashed 30h+ in, multiple times. A separate project with a 300mb exe took about 5h, so there's some horrible scaling going on. So I tried out Ida for the first time, and it finished in less than an hour. Faced with having decomp vs not, I started learning how to use it.
So first difference, given the above: Ida is far, far better at task interruption/crash recovery. Every time ghidra crashed I was left with nothing; when Ida crashes you get a prompt to recover from autosave. Even if you don't crash, in general it feels like Ida will let you interrupt a task and still get partial results, which you might even be able to pick back up from later, while ghidra just leaves you with nothing.
In terms of pure decomp quality, I don't really think either wins; decomp is always awkward, just awkward in different ways for each. I prefer ghidra's, but that might just be because I've used it much longer. Ida does do better at suggesting function/variable names - if a variable is passed to a bunch of functions taking a GameManager*, it might automatically call it game_manager.
When defining types, I far prefer Ida's approach of just letting me write C/C++. Ghidra's struct editor is awkward, and I've never worked out a good way of dealing with inheritance. For defining functions/args, on the other hand, Ida gives you a raw text box but just doesn't let you change some things? There I prefer the way ghidra does it; I especially like it showing which registers each arg is assigned to.
Another big difference I've noticed between the two is that ghidra seems to operate on more of a push model, while Ida is more of a pull model - i.e. when you make a change, ghidra tends to hang for a second propagating it to everything referencing it, while Ida tries pulling the latest version when you look at the reference? I have no idea if this is how they actually work internally; it's just what it feels like. Ida's pull model is a lot more responsive on a large exe, but multiple times I've had some decomp not update after editing one of the functions it called.
Overall, I find Ida's probably slightly better. I'm not about to pay for Ida pro though, and I'm really uneasy about how the free version uploads all my executables to do decomp. At the same time, ghidra is proper FOSS and gives comparable results (for small executables). So I'll probably stick with ghidra where I can.
> I was recently trying to analyse a 600mb exe (denuvo/similar). I wasted a week after ghidra crashed 30h+ in, multiple times.
During the startup auto analysis? For large binaries it makes sense to dial back the number of analysis passes and only trigger them if you really need them, manually, one by one. You also get to save in between different passes.
Yup. It was actually an openjdk crash, which was extra interesting.
I figured I probably could remove some passes, but being a light user I don't really know, and didn't want to spend the time learning, how important each one is and how long they take. Ida's defaults were just better.
When I used it, the one use case I had was automatically launching a Jekyll server - if I'm working on a site I'm almost certainly going to want to look at my changes in the browser. Now that I've switched I just run one extra command; it wasn't a big saving, but it was kind of nice.
The MSVC linker has a feature (identical COMDAT folding, enabled with /OPT:ICF) where it will merge byte-for-byte identical functions. It's most noticeable for default constructors: you might get hundreds of functions which all boil down to "zero the first 32 bytes of this type".
My most common source of unintentional BOMs is PowerShell. By default, 'echo 1 > test' writes the raw bytes 'FF FE 31 00 0D 00 0A 00' - a UTF-16LE BOM, "1", and a CRLF. Not too likely for that to end up in a shell script though.
That's UTF-16, though, which is basically the only place where byte-order marks are actually useful. (It was arguably the thing they were designed for: being able to tell whether you have little-endian or big-endian UTF-16. I've never encountered UTF-16BE personally so I can't easily tell how widespread it is, but I suspect that while its use is rare, it's unfortunately not zero, so you still need to know what byte order you have when encountering UTF-16.)
Though that would have been a far worse problem a decade ago. Thankfully, these days any random document you encounter on the Web is 99% likely to be UTF-8, so the correct heuristic these days is "if you don't know the encoding, try decoding with UTF-8 first. If that fails, try a different encoding." Decoding UTF-8 is practically guaranteed to fail on a document that wasn't actually UTF-8, so trying UTF-8 first is a very safe bet. (Though there's one failure mode: ASCII encoded as UTF-16. So you might also try a heuristic that says "if the UTF-8 parses successfully but has a null byte every other character in the first 256 bytes, then try reparsing the document as UTF-16").
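The heuristic above can be sketched roughly like this - guess_encoding, its return strings, and the deliberately simplified UTF-8 validation (no overlong/surrogate checks) are all my own illustration, not any library's API:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// "Try UTF-8 first" heuristic: validate the byte stream as UTF-8; if it
// fails, fall back to legacy encodings. If it passes but looks like ASCII
// stored as UTF-16LE (a null byte every other position early on), suggest
// reparsing as UTF-16 instead.
std::string guess_encoding(const std::vector<std::uint8_t>& b) {
    for (std::size_t i = 0; i < b.size();) {
        std::uint8_t c = b[i];
        std::size_t len = c < 0x80         ? 1
                        : (c >> 5) == 0x06 ? 2
                        : (c >> 4) == 0x0E ? 3
                        : (c >> 3) == 0x1E ? 4
                        : 0;
        if (len == 0 || i + len > b.size()) return "not-utf-8";
        for (std::size_t j = 1; j < len; ++j)
            if ((b[i + j] >> 6) != 0x02) return "not-utf-8";  // bad continuation
        i += len;
    }
    // Valid UTF-8, but NUL bytes are valid UTF-8 too - check the
    // ASCII-as-UTF-16LE failure mode over the first 256 bytes.
    std::size_t limit = b.size() < 256 ? b.size() : 256;
    if (limit >= 4) {
        bool nulls = true;
        for (std::size_t i = 1; i < limit; i += 2)
            if (b[i] != 0) { nulls = false; break; }
        if (nulls) return "maybe-utf-16le";
    }
    return "utf-8";
}
```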
I'm rambling and getting off my point. The point is, I don't find BOMs in UTF-16 documents to be a problem, because UTF-16 actually needs them. (Well, probably needs them). UTF-8 doesn't need them at all.
In normal use it's essentially the same, yes. The one interesting edge case that might catch some people out is that there's actually nothing special about std::exception - you can throw anything. "throw 123;" is valid and would skip any std::exception handlers - but you can also just catch anything with a "catch (...)".
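A minimal sketch of that edge case (classify and its return values are made up for illustration):

```cpp
#include <stdexcept>

// Thrown objects don't have to derive from std::exception,
// so a thrown int flies straight past those handlers.
int classify() {
    try {
        throw 123;                 // legal: any copyable type can be thrown
    } catch (const std::exception&) {
        return -1;                 // skipped: int isn't a std::exception
    } catch (int v) {
        return v;                  // this handler matches
    } catch (...) {
        return 0;                  // would catch absolutely anything else
    }
}
```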
I recently started working on a repo which historically had 100+ parallel branches. I've been jumping between a few tools trying to find something which handles browsing these sanely - I've always liked browsing graphs, but the sheer number of branches ruins most of them.
Currently the best workflow I've worked out is just a plain
git log --oneline --graph -- <dir>
Followed by showing the specific commits. But this doesn't work so well with MRs that touched many files across different dirs. Anyone have suggestions for tools that might handle this better?
An interesting example was Borderlands 2. The game originally had a native port by Aspyr, which worked quite well. 5 years pass and they release a new update and DLC - but they neglected to update the ports. 5 more years pass, then at some point in the past year or two they just plain removed the native Linux packages. They didn't announce it or anything, so I'm not 100% sure when. So these days when you install it you always get the (updated) Windows build under proton.
I'm a little sad we lost the native version, but on the other hand running it under proton just works, as I'd been doing for years, and the game would never have been updated otherwise.
If you have an older game which still uses dx9, dxvk can give a decent little performance boost - even if you're running on Windows. It's kind of magical that adding a translation layer is able to improve performance.
In more modern games using dx11/12, I've always noticed a small loss, like you'd expect. I haven't properly benchmarked any, but I suspect a game with native Vulkan support would perform similarly, and might come out on top due to lower CPU load.
> an older game which still uses dx9, dxvk can give a decent little performance boost
Old games already run at insane frame rates on modern hardware, so while it's impressive from a technical standpoint, the performance boost is largely irrelevant. Even with a small loss, Proton is completely worth it -- no more Windows spyware, activation/registration nags, cloud/AI bloatware.
They exist, but one of the problems is they're not particularly good cubes. While it might help you learn the basics, not being able to handle it like a speedcube means they're probably not going to help you get faster.
That being said, while looking up those links, I found out that, since I got out of the hobby, smart cubes have become a thing, and are made by real speedcube manufacturers.
This is an easier problem to solve. I'm not sure if you have to solve it first or if it can identify pieces on power up, but after that it's just tracking rotations, which can be done from the (fixed position) centres alone. But if an actual speedcube manufacturer can already fit those electronics in without compromising performance, I can't imagine it's that much harder to fit some addressable LEDs on some slip-ring-esque connections. Must just not be much of a market.
I'll preface this by saying my experience is with embedded, not kernel, but I can't imagine MMIO is significantly different.
There would still be ways to make it work with a more restricted intrinsic, if you didn't want to open up the ability for full pointer forging. At a high level, you're basically just saying "This struct exists at this constant physical address, and doesn't need initialisation". I could imagine a "#define UART zmmio_ptr(UART_Type, 0x1234)" - which perhaps requires a compile time constant address. Alternatively, it's not uncommon for embedded compilers to have a way to force a variable to a physical address, maybe you'd write something like "UART_Type UART __at(0x1234);". I believe this is technically already possible using sections, it's just a massive pain creating one section per struct for dozens and dozens.
Unfortunately the way existing code does it is pretty much always "#define UART ((UART_Type*)0x1234)". I feel like trying to identify this pattern is probably too risky a heuristic, so source code edits seem required to me.