
This isn’t necessarily just a WhatsApp problem. The same applies to IRC if you set away states.

Even if you don’t set away states, one can simply monitor every channel you’re in, every message you send, and then quickly determine what timezone you’re in, when you sleep, when you’re on vacation, etc.

Here’s an example graph of a user, where every dot is a message: https://i.imgur.com/DrgVvVw.png and here is one from a user with more regular sleep patterns: https://i.imgur.com/a1xdSqR.png (notice the timezone transition when daylight saving time starts? And notice how the user takes about two weeks to adjust?)
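
To make this concrete, here is a rough sketch of how such a plot can be produced. It assumes the message timestamps are already available as ISO 8601 strings and uses matplotlib, so treat it as illustrative rather than the exact script behind the graphs above:

    # Sketch: scatter-plot message times (date on x, hour-of-day on y).
    # Empty horizontal bands reveal sleep hours; sudden vertical shifts
    # reveal travel or DST transitions.
    from datetime import datetime
    import matplotlib.pyplot as plt

    # Hypothetical input: one ISO 8601 timestamp per message.
    timestamps = ["2017-03-11T23:42:10", "2017-03-12T08:05:33"]

    dates = []
    hours = []
    for ts in timestamps:
        dt = datetime.fromisoformat(ts)
        dates.append(dt.date())
        hours.append(dt.hour + dt.minute / 60.0)

    plt.scatter(dates, hours, s=2)
    plt.xlabel("date")
    plt.ylabel("hour of day")
    plt.show()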


He would have been interested in Ed Lorenz's work (of butterfly-effect fame), I suppose. According to Tim Palmer:

"Despite being known for his pioneering work on chaotic unpredictability, the key discovery at the core of meteorologist Ed Lorenz's work is the link between space-time calculus and state-space fractal geometry. Indeed, properties of Lorenz's fractal invariant set relate space-time calculus to deep areas of mathematics such as Gödel's Incompleteness Theorem."

"Consider a point p in the three-dimensional Lorenz state space. Is there an algorithm for determining whether p belongs to IL? There are certainly large parts of state space which don’t contain any part of IL. However, suppose p was a point which ‘looked’ as if it might belong to IL. How would one establish whether this really is the case or not? If we could initialise the Lorenz equations at some point which was known to lie on IL, we could then run (1) forward to see if the trajectory passes through p. If the integration is terminated after any finite time and the trajectory still hasn’t passed through p, we can’t really deduce anything. We can’t be sure that if the integration was continued, it would pass through p at some future stage. The Lorenz attractor provides a geometric illustration of the Gödel/Turing incompleteness theorems: not all problems in mathematics are solvable by algorithm. This linkage has been made rigorous by the following theorem [7]: so-called Halting Sets must have integral Hausdorff dimension. IL has fractional Hausdorff dimension - this is why it is called a fractal. Hence we can say that IL is formally non-computational. To be a bit more concrete, consider one of the classic undecidable problems of computing theory: the Post Correspondence Problem [46]. Dube [16] has shown that this problem is equivalent to asking whether a given line intersects the fractal invariant set of an iterated function system [4]. In general, non-computational problems can all be posed in this fractal geometric way."

Source: Lorenz, Gödel and Penrose: New Perspectives on Determinism and Causality in Fundamental Physics https://arxiv.org/pdf/1309.2396.pdf
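
To make the "run the equations forward" step concrete, here is a minimal sketch (my addition, not from the paper) that integrates the Lorenz system numerically with the classic parameters sigma = 10, rho = 28, beta = 8/3; deciding whether an arbitrary point lies on the resulting invariant set is the part the quote argues has no terminating algorithm:

    # Sketch: integrate the Lorenz system forward from an initial point
    # and print where the trajectory ends up after 50 time units.
    from scipy.integrate import solve_ivp

    SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classic Lorenz parameters

    def lorenz(t, state):
        x, y, z = state
        return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

    sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
    print(sol.y[:, -1])  # the state after 50 time units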


In the last few hours, this has been silently added to the post, and I cannot post a reply on that site:

Jumping on the internet to say that “they” (specifically, the people you’re not paying a cent to and who aren’t bothered by the GIL because it only penalises a programming style many of us consider ill advised in the first place) should “just fix it” (despite the serious risk of breaking user code that currently only works due to the coarse grained locking around each bytecode) is also always an option. Generally speaking, such pieces come across as “I have no idea how much engineering effort, now and in the future, is required to make this happen”.

Fine, then don't bother. But don't insult us or cry foul when we point out the downside impact of this decision on us. Your assumption that we are naive rubes who don't know how to code is really, really wrong.

I have a very good idea how much engineering effort would be involved in fixing the GIL, and I am well aware that Python has involved many person-millennia of gratis work, and am appreciative of both. However, I still disagree with the Python devs' obviously entrenched position that fixing the GIL isn't worth the effort, and I will continue -- even when shouted down by the likes of you -- to advocate for the GIL's removal or some equivalently good solution. (As I said earlier, I am not opposed to STM solutions, but the current one performs unacceptably without special-purpose hardware.)

Why? Because I have a single, selfish interest in this. I depend heavily on Python now, and want the language to be better. I have written many lines of Python 2 code that rely on the threading primitives in the standard library. Perhaps it was foolish of me to expect that the threading model offered by the standard library, modeled on Java's threading primitives, would some day work in the same way as Java threads do in practice. Nonetheless, I am left with a real world problem: my CPU-bound threaded Python code does not scale well to multiple cores. I need the GIL fixed, or to rewrite my Python code, or to migrate to another language that supports the standard model of threaded programming that real-world programmers have been using for several decades, and which has built-in support from all major operating systems. Or, sure, wait for STM to be ready for prime time and migrate my thread-based semantics to the new STM-based semantics.

The best path for me right now is migration to Jython or IronPython. But then we are still unsupported orphans, living in the third world of Python 2.X.

I guess it comes down to: do you want people to actually use this language to write programs they want to write, or do you want Python to be an advocacy platform for "correct" programming? Python's pragmatism has always appealed to me, so the ivory tower reaction to the practical concerns around the GIL really seems dissonant. (And this is coming from an MIT AI Lab Lisp guy who would rather write everything in Lisp. But Lisp lacks Python's wonderful, high-quality third party libraries and developer-centric pragmatism regarding unit testing, documentation, etc.)

I know you are tired of hearing people bitch about the GIL, but, really: people write multithreaded programs. They should work as advertised, using native OS threading primitives and taking advantage of the native OS thread scheduler. Why does Python offer threading primitives if the language is not meant to support, from a practical standpoint, multithreaded programs?
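
To make the scaling complaint concrete, here is a minimal sketch (not from the original comment) of the behaviour being described: on CPython, CPU-bound work split across threads barely speeds up because of the GIL, while the same work split across processes does:

    # Sketch: the same CPU-bound work run on threads vs. processes.
    # On CPython the threaded version gains little from extra cores
    # because of the GIL; the process pool scales roughly with core count.
    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def burn(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls, workers=4, n=5_000_000):
        start = time.perf_counter()
        with executor_cls(max_workers=workers) as ex:
            list(ex.map(burn, [n] * workers))
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:   %.2fs" % timed(ThreadPoolExecutor))
        print("processes: %.2fs" % timed(ProcessPoolExecutor))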


Oh, man. This is a VM in JS (Google Apps Script) with a UI in Google Sheets.

If I had more time I'd post a VM in Google Sheets formulas only.

Anyone want to take it up?

Edit: hold my beer (see my reply below)


I'm in my mid-50s, and I definitely agree that I'm a lot less risk-averse than younger people. (Hell, I still ride a bicycle and roller skate without a helmet haha!) I don't really give a shit if what I say on the internet becomes associated with me, even though I use the standard 90s protocol of using fake names. I know it can be traced by someone, somewhere, in some government agency, and certainly by some asshole like Mark Zuckerberg. So what? I have very little to lose. The people who know me already know what a jerk I can be and yet they still let me come around, and my employer wouldn't bat an eye, since my private opinions have zero effect on their corporate image.

Every generation feels like they're waiting for the next generation to retire. I feel that way about Baby Boomers. (No, I'm not a Baby Boomer. I'm more like Gen X, but that's not quite right either. I'm in between those two, which is why I have feelings of contempt for the people who invent sociological models....but I digress.)

I am at an age where my experience counts for something with some organizations, and is considered a complete liability for other organizations. I have changed fields entirely several times in my life. Some of the skills I have that make me uniquely qualified to do valuable things for big organizations are completely unmarketable to those same organizations because I do not have an appropriate credential they can accept to certify my knowledge and expertise. I have given up on them, too.

My point is that there is always room for younger people to really shake up the dinosaurs, but if you are trying to influence my generation, you'd better be willing to take some risks and really just say it straight. I know that's difficult for people younger than me, and I know that people in my daughter's generation (she's 29) aren't very good at being blunt and usually screw it up when they try, but truth-telling without worrying about political nuance or whatever the hell it is that everyone worries about that keeps them from just saying what they're thinking is very attractive and useful, at least to people my age.

Oh, and don't whine when you're telling the truth. Not many people who are my age like whiners.


Say it with me: "Obfuscation on top of solid security practices provably reduces risk against talented hackers." It's why the Skype attack took forever. They spent so long on reverse engineering, where open software with obvious weaknesses would've cracked almost instantly. Now apply that concept to runtime protections, NIDS, recovery mechanisms, etc. The hacker has much more work to do, which increases the odds of discovery when something fails.

Obfuscation was also critical to spies helping their nations win many wars. There are so many ways for well-funded, smart attackers to beat you when they know your whole playbook. When they don't, you can last a while. If it changes a lot, you might last a lot longer. If they're not well-funded, or are maximizing ROI (most malware authors and hackers), they might move on to other targets that aren't as hard.


I would go with: "To err is human; to cascade, DevOps."

Is there a list of companies with over-the-top/invasive hiring practices somewhere?

It would make it easier to avoid them, and maybe send a message.


Alan Kay's intro quote is from this interview with Dr. Dobb's [0]. Here's some more context to that quote:

"Binstock: Are you still programming?

Kay: I was never a great programmer. That's what got me into making more powerful programming languages. I do two kinds of programming. I do what you could call metaprogramming, and programming as children from the age of 9 to 13 or 14 would do. I spend a lot of time thinking about what children at those developmental levels can actually be powerful at, and what's the tradeoff between…Education is a double-edged sword. You have to start where people are, but if you stay there, you're not educating.

The most disastrous thing about programming — to pick one of the 10 most disastrous things about programming — there's a very popular movement based on pattern languages. When Christopher Alexander first did that in architecture, he was looking at 2,000 years of ways that humans have made themselves comfortable. So there was actually something to it, because he was dealing with a genome that hasn't changed that much. I think he got a few hundred valuable patterns out of it. But the bug in trying to do that in computing is the assumption that we know anything at all about programming. So extracting patterns from today's programming practices ennobles them in a way they don't deserve. It actually gives them more cachet.

The best teacher I had in graduate school spent the whole semester destroying any beliefs we had about computing. He was a real iconoclast. He happened to be a genius, so we took it. At the end of the course, we were free because we didn't believe in anything. We had to learn everything, but then he destroyed it. He wanted us to understand what had been done, but he didn't want us to believe in it.

Binstock: Who was that?

Kay: That was Bob Barton, who was the designer of the Burroughs B5000. He's at the top of my list of people who should have received a Turing Award but didn't. The award is given by the Association for Computing Machinery (ACM), so that is ridiculous, but it represents the academic bias and software bias that the ACM has developed. It wasn't always that way. Barton was probably the number-one person who was alive who deserved it. He died last year, so it's not going to happen unless they go to posthumous awards.

Binstock: I don't think they do that.

Kay: They should. It's like the problem Christian religions have with how to get Socrates into heaven, right? You can't go to heaven unless you're baptized. If anyone deserves to go to heaven, it's Socrates, so this is a huge problem. But only the Mormons have solved this — and they did it. They proxy-baptized Socrates.

Binstock: I didn't realize that. One can only imagine how thankful Socrates must be.

Kay: I thought it was pretty clever. It solves a thorny problem that the other churches haven't touched in 2,000 years."

[0] http://www.drdobbs.com/cpp/interview-with-alan-kay/240003442


Sad in a way, but no surprise. I recently summarized my opinions on Hacker News [1] in response to a question about why Netflix uses Linux instead of Solaris, which might be of interest here:

"I worked on Solaris for over a decade, and for a while it was usually a better choice than Linux, especially due to price/performance (which includes how many instances it takes to run a given workload). It was worth fighting for, and I fought hard. But Linux has now become technically better in just about every way. Out-of-box performance, tuned performance, observability tools, reliability (on patched LTS), scheduling, networking (including TCP feature support), driver support, application support, processor support, debuggers, syscall features, etc. Last I checked, ZFS worked better on Solaris than Linux, but it's an area where Linux has been catching up. I have little hope that Solaris will ever catch up to Linux, and I have even less hope for illumos: Linux now has around 1,000 monthly contributors, whereas illumos has about 15.

In addition to technology advantages, Linux has a community and workforce that's orders of magnitude larger, staff with invested skills (re-education is part of a TCO calculation), companies with invested infrastructure (rewriting automation scripts is also part of TCO), and also much better future employment prospects (a factor that can influence people wanting to work at your company on that OS). Even with my considerable and well-known Solaris expertise, the employment prospects with Solaris are bleak and getting worse every year. With my Linux skills, I can work at awesome companies like Netflix (which I highly recommend), Facebook, Google, SpaceX, etc.

Large technology-focused companies, like Netflix, Facebook, and Google, have the expertise and appetite to make a technology-based OS decision. We have dedicated teams for the OS and kernel with deep expertise. On Netflix's OS team, there are three staff who previously worked at Sun Microsystems and have more Solaris expertise than they do Linux expertise, and I believe you'll find similar people at Facebook and Google as well. And we are choosing Linux.

The choice of an OS includes many factors. If an OS came along that was better, we'd start with a thorough internal investigation, involving microbenchmarks (including an automated suite I wrote), macrobenchmarks (depending on the expected gains), and production testing using canaries. We'd be able to come up with a rough estimate of the cost savings based on price/performance. Most microservices we have run hot in user-level applications (think 99% user time), not the kernel, so it's difficult to find large gains from the OS or kernel. Gains are more likely to come from off-CPU activities, like task scheduling and TCP congestion, and indirect, like NUMA memory placement: all areas where Linux is leading. It would be very difficult to find a large gain by changing the kernel from Linux to something else. Just based on CPU cycles, the target that should have the most attention is Java, not the OS. But let's say that somehow we did find an OS with a significant enough gain: we'd then look at the cost to switch, including retraining staff, rewriting automation software, and how quickly we could find help to resolve issues as they came up. Linux is so widely used that there's a good chance someone else has found an issue, had it fixed in a certain version or documented a workaround.

What's left where Solaris/SmartOS/illumos is better? 1. There's more marketing of the features and people. Linux develops great technologies and has some highly skilled kernel engineers, but I haven't seen any serious effort to market these. Why does Linux need to? And 2. Enterprise support. Large enterprise companies where technology is not their focus (eg, a breakfast cereal company) and who want to outsource these decisions to companies like Oracle and IBM. Oracle still has Solaris enterprise support that I believe is very competitive compared to Linux offerings.

So you've chosen to deploy on Solaris or SmartOS? I don't know why you would, but this is also why I wouldn't rush to criticize your choice: I don't know the process whereby you arrived at that decision, and for all I know it may be the best business decision for your set of requirements.

I'd suggest you give other tech companies the benefit of the doubt for times when you don't actually know why they have decided something. You never know, one day you might want to work at one."

I feel sorry for the Solaris engineers (and likely ex-colleagues) who are about to lose their jobs. My advice would be to take a good look at Linux or FreeBSD, both of which we use at Netflix. Linux has been getting much better in recent years, including reaching DTrace capabilities in the kernel.[2] It's not as bad as it used to be, although to really evaluate where it's at you need to be on a very new kernel (4.9 is currently in development), as features have been pouring in.

Also, since I was one of the top Solaris performance experts, I've been creating new Linux performance content on a website [3] that should also be useful. (I've already been thanked for this by a few Solaris engineers who have switched.) I've been meaning to create a FreeBSD page too (better, a similar page on the FreeBSD wiki so others can contribute).

FreeBSD feels to me to be the closest environment to Solaris, and would be a bit easier to switch to than Linux. And it already has ZFS and DTrace.

[1] https://news.ycombinator.com/item?id=12837972

[2] http://www.brendangregg.com/blog/2016-10-27/dtrace-for-linux...

[3] http://www.brendangregg.com/linuxperf.html


> As opposed to just taking it, profiting off it (like ~90% of people who 'eschew' the GPL because they can't make as much money as possible off it) and never doing anything in return.

This right here - though I would extend it to also mention those who take it, extend it, profit off of it, and then next-to-never (and many times never) return any of those changes to the community.

If we're lucky, they contribute something else back (and those contributions are welcomed, mostly) - but never the changes which allowed them to grow and expand well beyond anyone's expectations. We all know who you are.

I understand the desire to see my code used by "the big guys" (or the "up-and-comers" who become "the big guys"); it does look good on one's CV. At the same time, I also understand Stallman's reasoning and message; he has personal experiences which led to the creation of the GPL and his philosophy on software - and he has been around long enough to see how the corporate world really works when it comes to the lifecycle of software. From the birth of an application or system, to its use and adoption, to its death.

Sometimes, it's the latter end-of-life phase that we seem to forget about. So often as developers we have to "re-invent the wheel" when, had old proprietary source code been released at EOL, a base for building something better would already be in place. Unfortunately, then as now, software has been looked on as an "asset" to be traded and sold for profit, despite the fact that it also depreciates faster than a new car. Yet, like an old Vega, some desperately cling onto that old software hoping to profit "someday, you'll see!".

And in many cases that source code - and even the binaries - suffers the ignoble fate of "bit rot", sitting in a warehouse on an old hard drive (if we're lucky; if not, it's on some old, nearly unreadable 7-bit paper tape literally rotting away, in a format used by a computer system that hasn't been turned on - if one even exists and hasn't been scrapped for gold - in over 50 years).

The original Symbolics Lisp? Some of the code behind the Apollo program (heck, and that was government funded!)? Cray Research operating systems, compilers, libraries, etc., for the old Cray supercomputers? This list could be extended nearly forever.

The sad thing is, we have lost man-centuries of work in the form of software, due to want-of-profit and bit-rot - and Stallman knows this - we should all know this. Furthermore, even when the code does survive (in whatever form), inevitably, the systems that can run it do not (in general); in some cases, it has become impossible to run code written for certain machines because no examples of those machines, nor their designs (to create an emulator) exist any longer. All we can do is guess at what the program should do (assuming we have at least documentation of the mnemonics and opcodes for the machine language binary to reverse engineer - if you don't have that, nor the real hardware, the binary code might as well be noise).

Crazy enough - in some cases we have the machine and the software extant - but only one operating copy of that machine! For example, there's a company in Texas which uses an old IBM accounting machine to this day; it uses punched cards for its data input/output, plus it has a printer. Its "programs" are hard-wired plugboards. A computer history museum delegation has begged them to donate the system to a museum for preservation, as it is the last operating example of such a system. But the company has an attitude of "if it ain't broke, why fix it" - and you can't fault them for that. Even so, one day it will break, and likely whomever is in charge will decide to sell it for scrap rather than donate it (due to the likely amount of precious metals in the machine) - and that machine, plus all of its programs, will be lost forever.

Heck - we treat our automobiles and automotive history with greater respect, and that has arguably had a much smaller impact over the last 100 years than computer technology has had over the last 20.

/end ranting (and book)


GCM is extremely tricky to implement safely.

ECDH itself is very easy to implement; it's just DH (which is probably the simplest algorithm in cryptography), but in a different group.

ECC (the group) is hard to implement safely. The NIST P-curves are tricky to implement relative to Curve25519.

But there's also a lot more study of how to safely implement the NIST P-curves than there is of how to make a constant-time GCM.

I don't know. They seem like comparably difficult tasks.
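
The practical takeaway from comparisons like this is usually "use a vetted library rather than implementing the primitive yourself". As a hedged illustration only - it assumes the Python `cryptography` package, which ships X25519 and HKDF - a Curve25519 key agreement looks roughly like this:

    # Sketch: Curve25519 ECDH via the `cryptography` package, rather than
    # hand-rolled group arithmetic. Both sides derive the same shared secret,
    # which is then run through HKDF before use as a symmetric key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Each side combines its own private key with the peer's public key.
    alice_shared = alice_private.exchange(bob_private.public_key())
    bob_shared = bob_private.exchange(alice_private.public_key())
    assert alice_shared == bob_shared

    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"illustrative handshake").derive(alice_shared)
    print(key.hex())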


The hilarious part here is Wikipedia calling PBKDF2 modern. It's a minimal 2000 update (to generate more bits, and to be kind of UTF-8 aware) of a 1993 standard.

At the time of the RFC's publication it was already obvious that its security was way behind bcrypt, which had been used in OpenBSD since 2.1 (June 1997) and which did its best to be ASIC-hostile - something that isn't the case for PBKDF2.

In retrospect, NIST choosing PBKDF2 over bcrypt in NIST SP800-132 could be seen as part of the effort to weaken standards for NSA profit.
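
For reference (my addition, not the original commenter's): both schemes are a single call away in Python. PBKDF2 is in the standard library, bcrypt needs the third-party `bcrypt` package, and the cost parameters below are illustrative rather than a recommendation:

    # Sketch: PBKDF2 from the standard library vs. bcrypt from the
    # third-party `bcrypt` package.
    import hashlib
    import os
    import bcrypt

    password = b"correct horse battery staple"

    # PBKDF2-HMAC-SHA256 with a random 16-byte salt.
    salt = os.urandom(16)
    derived = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    # bcrypt with a work factor of 12; the salt is embedded in the hash.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    assert bcrypt.checkpw(password, hashed)
    print(derived.hex(), hashed.decode())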


Nothing has changed. The premise of that blog post is that we'd like to do better than ephemeral intersection requests using truncated hashes of contact identifiers, but that we don't believe it's currently possible to do better.

The author is incorrect. While Signal Protocol is used to communicate an SRTP master secret and a session ID, the clients still need to do an ICE handshake in order to establish communication with each other before the responder can even ring. It is very straightforward for SA to block that traffic, and it is an established fact that they do.

It seems as if WhatsApp is short-circuiting this frustrating series of timeouts to improve a flaky-seeming UX. That strategy does negatively affect people on the internet who register for WhatsApp with Saudi VoIP numbers when they're in France, but it is a much clearer UX for almost everyone who is actually a Saudi WhatsApp user or calling actual Saudi users. What the author is demanding is a worse UX for the same outcome.

It sounds like there might be room for improvement, but I have a feeling that if WhatsApp were recording their users' locations in order to provide a more advanced location-aware version of the same strategy, people would not be very happy about that.


VoIP traffic is UDP, and will always be very easy to identify. We barely have functional circumvention strategies for higher-latency traffic types, so building a circumvention strategy for traffic like realtime voice with ultra low-latency requirements is a tall order. Honestly getting VoIP to work well in the best possible network scenarios is enough of a challenge.

I assume that higher latency communication like WhatsApp's push-to-talk "voice notes" works just fine in SA though.


> Doesn't it? Not to challenge the authority, but I believe the only requirements OTR impose are constraints on the initial presence (thus, lack of offline operation) and constraints on message ordering.

It's an asynchronous world; trying to use a synchronous protocol in that world doesn't make a lot of sense. If you want to initiate an OTR session with your friend on an iPhone, you have to wait for them to pull the device out of their pocket and physically tap the notification (which just says something like 'you might get a message soon'), then receive the response (which might involve pulling the device out of your pocket and physically tapping the notification which says something like 'you can send a message now'), before you can send the actual message.

This isn't just "initial presence," either. OTR is a three-step ratchet, so if you want the benefits of forward secrecy, you have to "end" your OTR session after each conversation and "start" an OTR session at the beginning of the next one. Except we don't live in a world where there are "beginnings" and "endings" anymore. People aren't sitting down at their computers and chatting until they get up again, it's just one long asynchronous conversation now.

> I guess, it's a wrong sort of ubiquity, probably not something that works for the goal of "encryption for everyone", but it still works if I don't want to hop the services but layer security instead. Maybe that's too old-fashioned. Hope there will be Signal Protocol-based addons/libraries like this one day.

You can do this today if you want to, but what's the point when Signal Protocol is being baked into the messaging services themselves? Layering encryption has been a losing strategy for a decade or more now; building something that works so seamlessly that it can be part of the default experience is actually showing progress.

