Your comment sounds good, but that's about all it does; it's flawed. You don't hire 10x engineers, just people who make other people 10x? Then the person you just hired is 10x.
The person is clearly a liability, but it's still possible he's an asset. In simple business terms, you have a guy who'll get you sued for 1 million but will make you 10 million. That person, while a douche, is still a valuable asset. Whether that's ever actually the case, or whether anyone would ever choose to keep a known douche around, is another thing.
> You don't hire 10x engineers, just people who make other people 10x? Then the person you just hired is 10x.
The point was more that I value a person's ability to lift those around them far more highly than their ability to outperform them. It's hard to scale one person, but one person who can help scale an entire team is infinitely more valuable.
> In simple business terms, you have a guy who'll get you sued for 1 million but will make you 10 million.
He's not a liability because of potential lawsuits. He's a liability because he brings the team down (antics like this do have a huge impact on the rest of the team).
> The point was more that I value a person's ability to lift those around them far more highly than their ability to outperform them. It's hard to scale one person, but one person who can help scale an entire team is infinitely more valuable.
It seems like, while trying to dislike the 10x idea, you fundamentally want 10x. All of those things are, in my opinion, what 10x is: they bring 10x more value to the team. Some may be 10x because they literally code faster and better, or they could be 10x because their code helps other people also code faster (I suspect the two are pretty closely linked), or they could just be the leader who gives them more confidence to do stuff, etc.
10x may or may not be a myth. I don't think it is; I just don't think it's someone who codes 10x faster than everyone else. I think they just make the entire team better. I also think someone who is 10x in one place isn't always going to be 10x on another team due to team dynamics; i.e., the team also makes them better.
> He's a liability because he brings the team down (antics like this do have a huge impact on the rest of the team).
While I get what you mean, I don't see how the team fits into this. It's not like it happened at a team event or the team was involved; they most likely will never know it happened. This could just as easily be a one-off weird thing from an otherwise cool guy as it could be that he's an all-around weirdo who does weird things and is constantly belittling people.
Sure, but they don't have to be caustic and quite so unpleasant in the chat room.
I am outing, specifically, Terenko. In all my years as a forum contributor/mod I don't think I've encountered such a poisonous dick as that person.
I know you do a lot of good work for the PHP community, along with Gordon and many others, but, my friend, Terenko is truly a knob end and sadly encourages others who are up-and-coming PHP debutants to behave the same.
It really made me weep, as a diamond mod, to see you guys tolerate his snide comments and activities.
Why did we tolerate it? Because we weren't given the tools to do anything about it. We couldn't kick or ban people. We couldn't moderate our own room. All we (as owners) could do was move messages or flag. We got yelled at every time we flagged something, so we learned to live with it. The only other option we had was to leave (which many users did, even leaders).
Today, we have the ability to kick-ban. Awesome. But he's also calmed down a lot. And is seen as a resource.
It's gotten a lot better over the years, but there is still work to be done in there. But now we're starting to get the tools to handle it. Which is awesome.
The problem was that you, Gordon, and the reasonable crowd ignored it and allowed him to keep being a snidey tool. I'm not blaming you specifically, but you guys never complained about that individual - and there were ways to reach out to mods outside of the regular chat room tools. In fact, Tim Post and I tried for a while to assist the PHP community with things like creating canonical questions and answers; you knew who we were, we were on your side, and my email address was always on my profile.
If I'd gotten a message from you or Gordon about this kind of thing (out of band), we'd have acted - I knew who you guys were, you're the life and soul of the PHP community on SO, and you're too good to lose.
But...as mods we kept a fairly light touch with regards to stomping into chat rooms and reading the riot act (even politely), because otherwise we'd be meta'd as Nazis, Fascists and Stalinists. I remember many occasions when we were lambasted for trying to "moderate" out unpleasantness (in the PHP and C++ rooms); I eventually gave up my mod diamond because I'd run out of "trying to be nice and diplomatic" energy.
Such is the way of curating a community.
I used to sit in the PHP chat room and become quite depressed that his attitude became the standard.
You know something, as a developer/ops person for a web hoster I was about to throw my weight into the PHP project back in ~2010 - contributions, bug reporting from live bulk-hosted environments, that kind of thing. (We probably host around 12k busy PHP sites on Linux and IIS - not huge, we're a business-focused hoster where uptime and rapid support are paramount [and we also do ASP.NET, Perl, and Classic ASP] - but for a ten-man company with some heavily customised environments it's a chunk of work.) But that room turned me off (that and the PHP dev mailing list - but that's another story).
How do you solve the torrent of duplicates and low-quality questions, then? The review queue is backed up all the time (currently 11,900 questions with 1+ close vote).
Ignoring the problem does not make it go away. If you think one of the ways people are trying to solve it should be forbidden, that's fine. Please share an effective alternative.
> How do you solve the torrent of duplicates and low-quality questions, then?
This is what voting/scoring is for on all the SE sites; the cream will rise to the top. It would be easy enough for SE to keep questions with a low or negative score out of Google results but leave them open for people to discuss, clarify, or solve. The moderators on SO don't give the real community a chance when they close things within minutes. Duplicates aren't even a real problem; there is nothing wrong with seeing the same question answered with a different approach or method.
> If you think one of the ways people are trying to solve it should be forbidden, that's fine. Please share an effective alternative.
We all agree about "send me teh codez", obvious duplicates etc.
If, however, this is the biggest problem, then why is asking a question still one of the few things you can do without having any rep?
If this is really the problem, then raise the bar ever so slightly, until you no longer need to organize voting rings and detect duplicates at a speed that results in more than a few false positives.
Basically, there's nothing wrong with the concept, unless it's used incorrectly. So if someone closes something incorrectly (you can point to a definite reason it's incorrectly closed), then re-open it or raise a meta post.
If you want to remove the ability for the people helping moderate a community to moderate, then how do you expect it to be moderated?
The big issue is that there are a LOT of low-quality questions being asked. Duplicates. Many times, literally copy/pasting the question title into Google will give you the answer. Should these questions remain open because you want to repwhore? Should they remain open and further reduce the ability of Google to take you to a good canonical answer?
Or should they be closed and point to the good canonical answer? That way people can find their way to good content, rather than littering the site with duplication and poor copies of other answers.
The meaning of the CV reasons has changed over time as the community matures and figures out what works and what doesn't.
I do disagree with closing questions about a particular framework (unless there's a dedicated SE site for it).
But bitching doesn't help. Raise a question on Meta. Step into the chat rooms and have a discussion. Get involved and help us fix things.
All bitching does is make the people who are putting time and effort into the community feel like they are doing something bad. Which is the fastest way to kill a community.
Happily this isn't Stack Overflow, and this discussion hasn't been closed yet just because a few admins didn't think it would fit within "Anything that good hackers would find interesting. That includes more than hacking and startups."
Edit: to clarify - there are a number of reasons why this topic is interesting to quite a few of us[1]. Here are two:
* the usability issue of what we experience as someone destroying a good resource.
* reputation systems: a fascinating thing in itself.
[1]: as can be seen by the simple fact that this post is still on the frontpage despite - I guess - having been flagged multiple times : )
That's definitely valid. However, in the vast majority of cases where someone complains about CV-PLS in my experience, it's because they had their question closed.
Look in this very thread. You have people saying it's a horrible practice. Yet nobody really saying what should be done instead.
The fact of the matter is that there is a huge problem on SO of under-moderation. Over 11,000 questions have at least one close vote right now. CV-PLS is one technique that the community has found effective at keeping the site searchable and full of good content.
For the people who are against it: I'd love to hear ideas about other effective methods. But to say it should be forbidden is a bit short-sighted.
Are we SURE closing questions is that important? Maybe the original article is on to something: if Stack Overflow were built on a more Google-like assumption, relevance would influence the ranking and visibility of questions/answers/posts ... but there would be less need for outright removal/closing, which is alienating to posters on a human and emotional level.
What is the argument for closing questions ... logistical? Database is too big?
I am sure there are valid reasons, but maybe they should be reexamined in the context of the human cost, and some tweaks could be made.
I counted any point release since the latest security release as secure.
So for PHP's 5.6 line, only 5.6.4 is secure, since 5.6.4 is a security release.
I'm currently crunching numbers for other platforms. For example, Nginx's last security release (in the 1.7 line) was 1.7.5, so 1.7.5 through 1.7.9 are all considered secure.
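For what it's worth, the rule is simple enough to express in a few lines; this is just an illustrative sketch, and the release lists below are made up for the example rather than taken from my actual dataset:
// Illustrative sketch of the counting rule: any release at or after the
// line's latest security release counts as secure. Release data is made up.
function secureReleases(array $releases, $latestSecurityRelease) {
    return array_filter($releases, function ($version) use ($latestSecurityRelease) {
        return version_compare($version, $latestSecurityRelease, '>=');
    });
}
// PHP 5.6 line: only 5.6.4 counts as secure
print_r(secureReleases(['5.6.0', '5.6.1', '5.6.2', '5.6.3', '5.6.4'], '5.6.4'));
// Nginx 1.7 line: 1.7.5 through 1.7.9 count as secure
print_r(secureReleases(['1.7.4', '1.7.5', '1.7.6', '1.7.7', '1.7.8', '1.7.9'], '1.7.5'));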
The problem you'll run into with PHP specifically is that reading an undefined string offset (past the end) will result in a notice: http://3v4l.org/nIkf5
This means that errors are triggered. So you can increase the length of the user string and observe a linear increase in runtime until you go past the length of the safe string, at which point it becomes MUCH slower on a per-character basis (even if you don't do anything with the notice, the error mechanism is still triggered internally, which isn't cheap).
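To make that concrete, here's a trivial snippet (not from the original post) showing where the notice kicks in:
$safe = "abc";
$c = $safe[1]; // in bounds: no notice, cheap
$c = $safe[5]; // past the end: "Notice: Uninitialized string offset: 5"
// Even if the notice is never displayed, PHP's error machinery still runs
// internally for every out-of-bounds read, which is what slows things down.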
Actually, my original code was more robust as it never read past the end of the string, preventing the notice:
/**
 * A timing safe equals comparison
 *
 * To prevent leaking length information, it is important
 * that user input is always used as the second parameter.
 *
 * @param string $safe The internal (safe) value to be checked
 * @param string $user The user submitted (unsafe) value
 *
 * @return boolean True if the two strings are identical.
 */
function timingSafeEquals($safe, $user) {
    // Prevent issues if string length is 0
    $safe .= chr(0);
    $user .= chr(0);
    $safeLen = strlen($safe);
    $userLen = strlen($user);
    // Set the result to the difference between the lengths
    $result = $safeLen - $userLen;
    // Note that we ALWAYS iterate over the user-supplied length
    // This is to prevent leaking length information
    for ($i = 0; $i < $userLen; $i++) {
        // Using % here is a trick to prevent notices
        // It's safe, since if the lengths are different
        // $result is already non-0
        $result |= (ord($safe[$i % $safeLen]) ^ ord($user[$i]));
    }
    // They are only identical strings if $result is exactly 0...
    return $result === 0;
}
Basically, while it may keep the length safe modulo 64 (since cache lines are 64 bytes wide), it doesn't keep the length safe in general. Some length information will be leaked with larger strings. And considering it's impossible to protect the length in the general case, providing a function which claims to protect the length is a lie. Therefore I don't even try, and hence save the complexity.
But let me ask this: in what cases are you actually trying to protect the length? Anything with variable-length input (like a password) should likely be one-way hashed anyway, so you'd be comparing fixed-length hashes. So where's the possible leak?
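That said, if you really did need to compare variable-length secrets, one sketch (the HMAC key here is just a placeholder) is to reduce both sides to fixed-length digests first and compare those with the function above:
// Sketch: hash both sides to a fixed-length digest before comparing, so the
// comparison itself can't leak the real lengths. $hmacKey is a placeholder.
function timingSafeEqualsHashed($safe, $user, $hmacKey) {
    return timingSafeEquals(
        hash_hmac('sha256', $safe, $hmacKey),
        hash_hmac('sha256', $user, $hmacKey)
    );
}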
OP here. Seeing as this question is getting asked a lot, I'll edit something into the post, but I wanted to answer you here as well.
So, there are a few problems with this technique.
1. It ignores the local timing leak
An attacker who can get code running on the server (on shared hosts, for example) can carefully monitor CPU usage to see when the process is actually doing work vs. when it sleeps. So really, it's not hiding anything.
2. The resolution of the sleep call is WAY too coarse. We're talking about detecting differences down to 15 nanoseconds; sleeping in blocks of microseconds or even milliseconds is far too coarse. It will introduce block-like patterns in the requests that should be pretty easy to detect by statistical means.
3. It's basically identical to a random delay. Considering it depends on the system clock, and the original request comes in at a random point, it's functionally identical to calling sleep(random(1, 100)). And over time (many requests), that will average out.
Now, what if we took a different approach? What if we made the operation fixed-time?
$execStart = microtime(true);
// ... whatever code ...
// clamp so the whole thing always takes 500 microseconds
$elapsedUs = (microtime(true) - $execStart) * 1e6;
usleep(max(0, (int) (500 - $elapsedUs)));
That might work (assuming you have a high enough resolution sleep function). Again, it suffers the local attacker problem (which may or may not matter in your case).
However, there are two reasons I wouldn't recommend it: It requires guesswork and idle CPU.
You would either need to guess a time for every single operation (and remember to clamp it) or clamp the overall application.
If you do it for every operation, that sleep time can become expensive (if you have a lot of them).
If you do it at the application level and you clamp too little, an attacker can use other expensive controls (like larger input introducing memory-allocation latency) to push the runtime past the sleep clamp (hence allowing them to attack the vulnerability anyway). If you clamp too much, the attacker can leverage it to DoS your site (since even a sleeping process is non-trivially expensive).
There are two valid ways of protection IMHO:
1. Make sensitive operations actually constant time.
2. Implement strong IP-based protections to prevent the large number of requests that would be needed to collect enough data to analyze noisy environments. (I need to add this to the post now that I've written it.)
Personally, I think you should be doing #2 anyway. But since I also believe in defense in depth, I'd do #1 as well.
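A very rough sketch of what I mean by #2, just to illustrate the shape of it (APCu as the counter store, and the limit and window, are arbitrary placeholders rather than a recommendation):
// Rough sketch of #2: throttle per-IP attempts against a sensitive endpoint.
// APCu is just a placeholder backend; the limit and window are arbitrary.
function tooManyAttempts($ip, $limit = 100, $windowSeconds = 3600) {
    $key = 'attempts:' . $ip;
    if (apcu_fetch($key) === false) {
        apcu_store($key, 0, $windowSeconds);
    }
    return apcu_inc($key) > $limit;
}
if (tooManyAttempts($_SERVER['REMOTE_ADDR'])) {
    http_response_code(429); // back off before enough timing samples can be gathered
    exit;
}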
Thank you for answering. It is true that if someone has access to the server you might be in trouble, but if they have access to a CPU monitor, they might also have access to RAM and could just get the data from there.
Also, you would only need to lightly clamp very important functions, so DoS attacks against them aren't that likely (and a constant-time function would also take the same time).
Most operating systems will not idle on a sleep() call, as far as I remember. Since the server is executing multiple applications, it is very likely that the processor will be assigned to another running application. The only way to really know would be to know the state of the specific process PHP is using for the request, which seems infeasible in a production environment (unless you have admin access, of course).
Well, it won't idle if there is another process ready to execute (load is greater than 1). If there is no process wanting to execute, it will idle.
Again, I'm not saying this is practical. I'm saying it might be possible (even if improbable).
And don't get me wrong, I'm not saying "OMG YOU ARE BAD IF YOU DON'T PROTECT THIS RIGHT". I'm leaning more toward the side of "if there's a chance, I assume someone could possibly figure out a way".
I'm not an expert on timing attacks, but without clamping it seems quite tricky to guarantee that sensitive operations actually take constant time. There can be numerous subtle ways that timing information leaks while the code appears to be constant time. And programmers who touch sensitive code can easily forget the requirement for constant-time behaviour. Yes, having constant time operation without clamping is the best solution but it seems too easy to accidentally slip from this ideal.
I'm leaning toward the approach of having a simple clamping library at the application level that (a) throws an exception if the sensitive code takes longer than the 'clamp time'; and (b) has some simple heuristic to determine the clamp time, such as "double the maximum execution time recorded during the first 20 runs". It might have a drawback if the CPU is not idle, but the benefit is that it is dead simple to implement. (Assuming the platform supports nanosecond wait times)
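Something like this minimal sketch of that idea (the class name, the 20-run calibration, and the 2x factor are all assumptions of mine, not an existing library):
// Minimal sketch of the clamping idea above; names and numbers are assumptions.
class SensitiveClamp {
    private $samples = [];
    private $clampSeconds = null;

    public function run(callable $sensitive) {
        $start = microtime(true);
        $result = $sensitive();
        $elapsed = microtime(true) - $start;

        if ($this->clampSeconds === null) {
            // Calibration: record the first 20 runs, then clamp at double the max.
            $this->samples[] = $elapsed;
            if (count($this->samples) >= 20) {
                $this->clampSeconds = 2 * max($this->samples);
            }
        } elseif ($elapsed > $this->clampSeconds) {
            // (a) refuse to silently leak: the operation ran past the clamp time
            throw new RuntimeException('Sensitive operation exceeded clamp time');
        } else {
            // Pad every call out to the same wall-clock duration (usleep is in microseconds).
            usleep((int) round(($this->clampSeconds - $elapsed) * 1e6));
        }
        return $result;
    }
}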
The far better approach is to just make the operations not depend on the secret.
You only really need to worry about timing attacks for values that the attacker doesn't know, and you don't want them to know.
So it's only things like encryption keys, passwords, session identifiers, reset tokens, etc that you need to worry about.
> And programmers who touch sensitive code can easily forget the requirement for constant-time behaviour.
And that's why I support the discussion we were having on PHP's internals list where we talked about making functions which are commonly used with secrets timing safe by default. As long as there isn't a non-trivial performance penalty to it at least.
As far as worrying about it goes, I'd rather people understood SQLi and XSS better. They are both FAR bigger attack surfaces than a timing attack ever will be, and likely going to be the bigger threat to 99.99% of applications.
How can you actually make sensitive operations take constant time? This sounds impossibly hard. For example, your operating system could be context switching thousands of times per second. Your password comparison function could cause a page fault because the trailing end of the password spans onto another page of virtual memory. These are all factors that would throw any calculation for constant time out of the window.
> How can you actually make sensitive operations take constant time? This sounds impossibly hard. For example, your operating system could be context switching thousands of times per second.
Sorry, it appears that I didn't actually define constant time anywhere. What I really mean is that:
Runtime does not depend in any way on the *value* of secret data.
So while actual runtime may vary, it's not varying because of the value of something we want to protect.
So it's not about keeping "absolute" time constant, but only the impact of the secret on runtime.
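A contrived illustration of the difference (not production code): the first comparison's runtime depends on the secret's value; timingSafeEquals() earlier in the thread does not.
// Contrived: bails out at the first mismatching byte, so runtime depends on
// how many leading bytes of the secret the attacker has guessed correctly.
function leakyEquals($secret, $user) {
    $len = strlen($secret);
    if ($len !== strlen($user)) {
        return false; // also leaks the secret's length via timing
    }
    for ($i = 0; $i < $len; $i++) {
        if ($secret[$i] !== $user[$i]) {
            return false; // early exit: timing leaks how long the matching prefix is
        }
    }
    return true;
}
// timingSafeEquals() above always loops over the full user-supplied length,
// so its runtime doesn't vary with the *value* of the secret.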
I hire and value engineers that elevate those around them to 10x.
And this behavior lowers people; it doesn't elevate them. So no, that person is not a valuable asset. He's the very definition of a liability.