If the servers in question happen to be on the list of supported hardware, quite possibly. (I don't know of any up-to-date online list, but running `source setup` in the root of the source tree will print it.)
Really? If you're going to do that, it seems like you might as well just merge all those users into a single account anyway, considering how trivial it would be for any one of them to execute arbitrary code as any other.
Until I can, as an unprivileged user, watch my own process's syscalls with dtruss, it's not an alternative to strace. Requiring root is a major hindrance.
> When using little-endian, you imagine that the bits are in little-endian order, to be consistent with the bytes, and then everything is nice and consistent.
But isn't that kind of at odds with how shifting works? (i.e. that a left shift moves towards the "bigger" bits and a right shift moves toward the "smaller" ones.) Perhaps for a Hebrew or Arabic speaker this all works out nicely, but for those of us accustomed to progressing from left to right it seems a bit backwards...
> Case Sensitivity: Filenames are currently case-sensitive only.
First thought: they have seen the light!
A moment later: wait...they consider this a "limitation", and it's only "currently" the case. So maybe they're going to perpetuate the brain-damage anyway.
It pushes a localization and UI problem down into the filesystem layer. Case-insensitivity is pretty easy for US-ASCII, but in release 2 of your filesystem, you realized you didn't properly handle LATIN WIDE characters, the Cyrillic alphabet, etc. In release 7 of your FS, you get case sensitivity correct for Klingon, but some popular video game relied on everything except Klingon being case-insensitive on your FS, and now all of the users are complaining.
How do you handle the case where the only difference between two file names is that one uses Latin wide characters and the other uses Latin characters? This one bit me when writing a CAPTCHA system back in 2004. (Long story, but existing systems wouldn't work between a credit card processing server that had to validate in Perl and a web form that had to be written in PHP, where the two systems couldn't share a file system. It's simple enough to do using HMAC and a shared key between the two servers, but for some reason none of the available solutions did it.)

I noticed that Japanese users had a disturbingly high CAPTCHA failure rate. It turns out that many East Asian languages have characters that are roughly square, while most Latin characters are roughly half as wide as they are tall, so mixing the two looks odd. So Unicode has a whole set of Latin wide characters that are the same as the Latin characters we use in English, except they're roughly square, so they look better when mixed with Unified Han and other characters. Apparently most Japanese web browsers (or maybe it's an OS-level keyboard layout setting) will by default emit Latin wide Unicode code points when the user types Latin characters.

Whether or not to normalize wide Latin characters to Latin characters is a highly context-dependent choice. In my case it was definitely necessary, but in other cases it will throw out necessary information and make documents look ugly/odd. Good arguments can be made both ways about how a case-insensitive filesystem should handle Latin wide characters, and that's a relatively simple case.
Most users don't type the names of existing files; they access files exclusively through menus, file pickers, and the OS's graphical command shell (Finder/Explorer). So if you want to keep users from getting confused by similar file names, that can be handled at file creation time via UI improvements (along with subtler issues that are actually more likely to confuse users, such as file names containing two consecutive spaces, etc.).
Fails to mention what is, in my opinion, the most devious and subtle pitfall of `set -e`: assigning an arithmetic zero (or even just evaluating one). `foo=0` won't do anything surprising, but `let foo=0` will return 1, and thus abort your script if you're not careful.
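A minimal illustration (bash):

```shell
#!/bin/bash
set -e

foo=0        # plain assignment: exit status 0, nothing surprising
echo "plain assignment: fine"

let bar=1    # expression evaluates to 1 (true), so 'let' returns 0
echo "let bar=1: fine"

# 'let' returns 1 when its last expression evaluates to 0, so under
# set -e a bare 'let baz=0' aborts the shell. Demonstrated in a child
# shell so this script itself survives:
bash -c 'set -e; let baz=0; echo "never printed"' || echo "child aborted with status $?"
```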
Also, as an alternative to the proposed `set +e; ...; set -e` wrapper for retrieving the exit status of something expected to exit non-zero (generally cleaner in my opinion, if slightly "clever"):
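(The snippet seems to have been lost in formatting; the idiom I mean is the `&&`/`||` trick, sketched here with `grep -q` standing in for any command expected to exit non-zero:)

```shell
#!/bin/bash
set -e

# A command that fails on the left side of && / || does not trigger
# set -e, and this compound always succeeds as a whole, so the exit
# status gets captured either way.
echo "hello" | grep -q "goodbye" && status=$? || status=$?

echo "grep exited with status $status"
```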
While this is a legitimate complaint, it also applies (though in fewer ways) to C. How do you parse this?
(x)(y);
Is that a call of function (or function pointer) `x` with argument `y`? Or is it a cast of `y` to type `x`? You need to have kept track of all the typedefs in the code prior to that point to know.
Good point. But keep in mind that C predates C++ by over 15 years. There were tremendous advances in theory in that time. It's not completely fair to compare a hacker's tool from 1970 with a state-of-the-art academic exercise from 1987.
Early versions of C date to 1972, while early versions of C++ (then called "C with Classes") date to 1979, only seven years later, developed probably a few rooms down the hall from where C was originally created.
True, templates came much later, but by then C++ was already hopelessly unparsable by LALR(1) parsers. In truth, though, it was that way from moment zero, since it was based on C's syntax. Making it parsable by yacc was a lost cause, since yacc followed C and not the other way around.
I don't think there's just one problem with that advice. Perhaps the most relevant is that looping exactly N times without a "ceremonial" counter is only very rarely useful. Truly, how often do you see the canonical `for (i = 0; i < N; i++)` pattern in C with no references to `i` anywhere in the loop body? (Hint: if there are any, the counter isn't "ceremonial".) Blindly repeating the exact same sequence of code just isn't something you want to do very often.
To address the issue of "okay, in what languages could we do this anyway?": taken in full mathematical generality, such a capability would necessarily require bignum support, which cuts down your options significantly. I'm not sure if that was actually the intent, but if we instead accept the limitation of "up to some largish power of two", constructing a macro to do this "directly" in C would also be quite easy. So...(shrug)
I agree that it's rare, but aren't those exactly the cases where this kind of optimization is relevant? In the cases where you're using `i` and it could overflow, the optimization couldn't be applied anyway.