Usually the packages themselves are not signed with GPG; only the Release file is (which contains the hashes of all the .deb files). This is actually the default for both Debian and Ubuntu. I never quite understood the reasons behind it... I wouldn't have expected this vuln to come out of it, though.
I kind of get why they did it that way (my guess: managing lots of dev keys was problematic, so they used one key to sign a list of "official-seeming" files). But then why weren't the Release file's contents verified (assuming this doesn't involve generating collisions for those packages)?
"The parent process will trust the hashes returned in the injected 201 URI Done response, and compare them with the values from the signed package manifest. Since the attacker controls the reported hashes, they can use this vulnerability to convincingly forge any package."
Wtf? This sounds like Apt is just downloading a gpg file and checking if it matches a hash in an HTTP header, and if it does, it just uses whatever is specified, regardless of whether your system already had the right key imported? This makes no sense. Any mirror could return whatever headers it wanted.
This is the real vuln, not header injection. If Apt isn't verifying packages against the keys I had before I started running Apt, there was never any security to begin with. An attacker on a mirror could just provide their own gpg key and Releases file and install arbitrary evil packages.
The hashing is done locally in the http worker process. I think you may be confusing headers in the HTTP response with headers in the internal protocol used to communicate with the worker process; the 201 response is not an HTTP response.
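For context, apt's download workers report back to the parent over a simple line-based text protocol. A completed fetch looks roughly like this (field names from apt's method interface; treat the exact set and values here as illustrative, not authoritative):

```
201 URI Done
URI: http://deb.example.org/pool/main/f/foo/foo_1.0_amd64.deb
Filename: /var/cache/apt/archives/partial/foo_1.0_amd64.deb
Size: 12345
MD5-Hash: <computed by the worker>
SHA256-Hash: <computed by the worker>
```

The vuln was that attacker-controlled data (e.g. a redirect URL) could smuggle a newline into one of these fields, letting the attacker inject a whole fake 201 block, hashes included, into the stream the parent trusts.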
Basically, it provides you with a simple API to store data in a location your user provides (and trusts). This might be in their local network, but could be Dropbox as well.
    for (pos = 0; !found; pos++) {
        row = get_next_row();
        if (strstr(row, "user")) {
            // this is obviously a header row
            continue;
        }
        if (strstr(row, user_name)) {
            found = 1;
        }
    }
That should be a logical AND, not a logical OR: if you go over the limit, the right side is false, but the left side stays true as long as the item hasn't been found, and true || false is still true, so the loop continues infinitely.
When you are over the iteration limit you want to trigger true && false, which is false, to break out of the loop; the false on the right side is the pos > limit condition.
Btw this is covered in Code Complete, Second Edition (Microsoft Press), pages 378-380.
Allowing the same statement to do any kind of arbitrary nonsense is not the good kind of expressiveness. Far better to separate your concerns, and keep the orthogonal parts orthogonal. C-style for mixes too many different things.
Semantics of the C-style for loop are pretty well defined - it's: for(initialize form ; condition form ; step form) { loop body forms }. Ascribing any different meaning to it, like e.g. that all three forms in the loop header must be present or must refer to the same set of variables, is a mistake.
That said, if one has an alternative that can express their intention better (like e.g. mapcar, foreach loop, Lisp's dotimes, etc.), one should use it instead.
I disagree - debugging costs usually come directly from choosing a construct that does not express the programmer's intentions (like using a naked for when foreach would be more adequate - or, conversely, messing with conditionals inside the loop body when putting the condition in the loop header would be more adequate).
Debugging a higher order function application is harder than stepping through a loop, at least in most debuggers I've used. I use them only when the logic called indirectly is not complicated enough to require much debugging (and I still wind up converting comprehensions into loops because they are too much trouble).
Oh ok, I can imagine more complex expressions being harder to debug from machine code level.
That said, I rarely if ever have to resort to low-level debugging in such cases. I've yet to encounter a problem that couldn't be solved by looking at the code and (very occasionally) stepping through it in the debugger. Here, using higher-order constructs in code is actually very helpful - the closer your code gets to expressing your intent, the smaller the set of possible bugs is. For instance, you can't make an off-by-one error in a foreach or a mapcar.
There are many other things to debug beyond the basics. I admire those people who can do all the computer simulation in their head, but many of us want good debugging support, ever since the mid 90s (well, it really started with Microsoft's CodeView in the late 80s). PL abstractions should consider that in their designs, not just elegant conciseness. Code isn't just read.
What you call "arbitrary nonsense" is for others good practice. For loops allow you to check all the conditions of the loop in a single line, without having to read the body of the loop, which I consider good practice.
What if I prefer a predictable release cycle for the base OS, but still need the latest LibreOffice/VLC/_____ for one reason or another?
I like the approach taken by e.g. Nginx and MariaDB, where I add additional vendor repositories, but this workflow is probably neither user-friendly enough for my mom, nor do all vendors have the resources to maintain several repositories for different distributions and their respective versions.
Use an OS with backports, like Debian. For example, if you took Debian Jessie, you could pull LibreOffice 5 from backports. VLC is already almost up-to-date (2.2.1).
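For the record, the Jessie workflow is roughly this (mirror URL and package name are just examples; check the backports instructions for your release):

```
# add to /etc/apt/sources.list:
deb http://ftp.debian.org/debian jessie-backports main

# then opt in explicitly, per package:
apt-get update
apt-get -t jessie-backports install libreoffice
```

The nice part is that backports are opt-in per package, so the rest of the base system keeps its stable versions.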
In which country is he legally able to do so? He shouldn't be able to control everything you do... I've experienced attempts at that, but not all such clauses are enforceable.
I agree with your points when it comes to storing the timing of events that happened in the past, but I disagree when it comes to storing display preferences for the user. As a user, I do not want to change my timezone settings just because my country went into Daylight Saving.
Came here to make the same point. Named timezones like America/Los_Angeles are still important because they refer to both the timezone offset and the DST rules that govern that particular political zone.
That being said, the gist of the post (time normalization) is obviously good. Stored time should always be UTC and converted for display later.