Hi Hacker News. I’m one of the developers at QuizUp. We’re very proud of the product, but we obviously missed the mark when it comes to privacy and user data. On a cultural level we take these things to heart, and we take them seriously; we simply did not take the time to review them carefully enough.
Let me address the things mentioned in the article:
No data is ever sent to or received from our servers in plain text. Due to a bug in our third-party networking library, certificates were not being verified, so an attacker presenting a self-signed certificate could intercept and decrypt the traffic. This issue has been addressed in an update awaiting review at Apple. Users' passwords are hashed before we store them in our databases (PBKDF2 with a per-user salt and multiple iterations).
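For reference, a PBKDF2 setup like the one described (per-user salt, multiple iterations) is a few lines with the Python standard library. The iteration count and salt size below are illustrative assumptions, not QuizUp's actual parameters:

```python
import hashlib
import hmac
import os

# Illustrative parameters; QuizUp's actual values are not public.
ITERATIONS = 100_000
SALT_BYTES = 16

def hash_password(password, salt=None):
    """Derive a PBKDF2-HMAC-SHA256 digest with a per-user random salt."""
    if salt is None:
        salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

Only the salt and digest need to be stored; the plain-text password never touches the database.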
Our users' address books are not stored on our servers; they are only used temporarily to help us find your friends. It was a mistake not to hash the contents of the address book before sending them to our servers, and we are currently changing the client application so that it hashes the address book contents before sending.
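A client-side hashing step like the one described could look roughly like this sketch (assuming SHA-256 over normalized email addresses; the actual scheme is not specified, and note that unsalted hashes of emails can still be brute-forced against lists of known addresses):

```python
import hashlib

def hash_contact(email):
    """Normalize an address, then hash it so the server only sees a digest."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def hash_address_book(emails):
    """The server cross-references these digests against its own hashed user list."""
    return [hash_contact(e) for e in emails]
```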
Sensitive user data was exposed through certain endpoints (although only accessible to authenticated users). We have already addressed this issue in a server deployment, and the hotfix is live now.
We are currently wading through inboxes looking for Kyle’s outreach. It looks like it may not have reached the core server developers. Please contact me personally at jokull@plainvanilla.is if you have questions.
Finally I want to thank Kyle Richter for working out our security holes, small and large. We’re currently reviewing our endpoints and codebase to further harden security and ensure the privacy of our users.
You make it sound like there are a couple minor bugs. What about the addresses of the users sent away? "only accessible for authenticated users": which means everyone.
From the home page, "Play against friends in real time": this is false advertising at best. Also, is it written anywhere that people can play against bots?
I don't understand how that's false advertising at all. I definitely have played with friends in real time, many times. That's just not the ONLY case.
I think most users with half of a brain can figure out that not all matches are real time. If you challenge a friend, it clearly tells you that you can play the match without them and they can play against "you" when they get around to it.
A genuinely acknowledging response was given -- would those words really make that much of a difference?
We are very sorry for not treating our users’ private information more carefully.
In QuizUp you are playing a human in real time in almost every game. In the off chance we cannot find an opponent (which is becoming very rare due to our popularity) you may be pitted against a bot as a fallback strategy. Matchmaking is a hard technical problem, and we have chosen to maximize gameplay experience and consistency. I’m happy to share that the ratio of ghost games to real ones is getting very small! Hopefully we will be able to phase them out completely in the future.
Good catch. Sadly, the "We are sorry" for being careless reminds me of the "BP - we are sorry" incident.
There is no cost for a faceless company to be 'sorry', and it only promotes further unethical actions by other companies. I would rather see them pay a fine for the privacy breach.
Moreover, this all comes down to apps requiring ALL permissions to run. Why is that acceptable? Why is QuizUp allowed to see a user's location in the first place?
To me, it feels like making a stalker's life easier than ever: make an app displaying cats, set it to require full permissions, put it on the App Store.
Simply verifying the certificate is not enough; it is simple to decompile and reverse-engineer an IPA to bypass certificate checks.
You should NOT be sending such sensitive information about other users, encrypted or not. Unless of course you want to continue this trend of violating your users' privacy.
My impression is the OP is a bit loose and ambiguous in some of the terms used, like "plain text" and "local", etc.; and the TC article makes the confusion worse.
So just for the record, are these all of the actual issues?
- no SSL verification means it's trivial to MITM
- exposure of other players' emails/bio/birthday/location/EXIF data in pics
- address book data is sent unhashed to the server
- signup emails expose the cleartext password (is this right?)
All important issues, and I'm assuming you've corrected these.
But the way I understand it, there's no reason or way to protect the client from the users themselves - custom CA installs, decompilation, etc. are all ways for users to get to their own data, or their own communication with the server.
So I'm a bit at a loss why the TC article is hammering on the "… and the local file which contained user information did not require any decryption to read."
The OP also mentions the FB tokens being exposed and such - I'm assuming these are only sent over SSL, and other people won't have access to it (with the caveat of the SSL fix), right?
- Yes. The certificate chain will be embedded in the client in the next release to mitigate this.
- We haven’t stripped EXIF data from uploaded pictures, although this is on the roadmap. Sensitive fields from user profiles have been stripped from all endpoints. This was done before the news hit TechCrunch.
- We were never saving contact lists, just using them to cross-reference our user database. In the next update we will compare hashes, not plain-text emails.
- No passwords are ever stored in plain text, but they are transmitted over SSL during signup and login. We are considering ways to further obfuscate this, but strengthening SSL goes a long way. Please contact me at jokull@plainvanilla.is if you have comments or questions about our password policy.
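Embedding the certificate chain in the client ("pinning"), as mentioned above, amounts to checking the server's presented certificate against a fingerprint shipped with the app before trusting the connection. A minimal sketch of that comparison (the function and pin are hypothetical, not QuizUp's code):

```python
import hashlib
import hmac

def cert_matches_pin(cert_der, pinned_sha256_hex):
    """Compare the SHA-256 fingerprint of a DER-encoded certificate
    (e.g. from ssl.SSLSocket.getpeercert(binary_form=True)) to the pinned hash."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return hmac.compare_digest(fingerprint, pinned_sha256_hex)
```

On a mismatch the client should abort the connection rather than fall back to the system trust store.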
You are right about the Facebook access tokens. The tokens are sent over SSL and we are not breaking any usage guidelines from Facebook. Access tokens can of course be invalidated by the user, or by Facebook. We are open to further enhancing the security of our OAuth flow, but currently it has not been exposed to any security weaknesses.
The Facebook token isn't an issue, that is how Facebook authorization works.
Recording single-player games and then replaying them for other users as fake real-time multiplayer matches seems like a very clever move, and is probably the reason this game is doing so well. Not that I had heard of it before this post, though. It's a good hack that capitalizes on the way a quiz game works, and it has no real differences from true real-time multiplayer except for the likely lack of real-time messaging. The same could be done for any game in which people compete yet do not directly influence each other.
The benefits are very clear: reduced matchmaking times, eliminates latency issues, eliminates signal loss issues. All of these are major hurdles to multiplayer cellphone gaming, so I don't doubt that this game would be pretty successful because of it.
Sending users data to other users without permission like that feels like it should definitely be a punishable offense, but then the legal system doesn't work on logic so who knows.
> Not that I have heard of it before this post, though.
Tetris Friends does this for their multiplayer games[1][2]. When you "play against people", what you're really doing is playing against their replays. It's quite clever, and it had me fooled for a while back when I was still in college.
The interesting consequence is that since you can react to the replays, but the replays can't react to you, players will almost subconsciously play attack and defense in a smart way to win the game. So most players will have a win ratio of over 50% in a multiplayer game. It's a neat trick to keep everyone happy.
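The mechanic described above is easy to sketch: a recorded opponent plays back pre-scripted answers and cannot adapt to the live player. A toy illustration (all names invented, not Tetris Friends' or QuizUp's actual code):

```python
from dataclasses import dataclass

@dataclass
class RecordedAnswer:
    correct: bool
    seconds: float  # how long the original player took to answer

class ReplayOpponent:
    """Plays back a past session verbatim; it cannot react to you."""
    def __init__(self, recording):
        self.recording = recording

    def answer(self, question_index):
        # Same answer, same timing, no matter what the live player does.
        return self.recording[question_index]

def score(answers):
    """Faster correct answers score higher, as in a typical quiz game."""
    return sum(max(0.0, 10.0 - a.seconds) for a in answers if a.correct)
```

Because the live player can see and react to the recording's pace while the recording cannot react back, the live side has a structural edge, which matches the over-50% win rates mentioned.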
What is perhaps most shocking is that QuizUp is backed by several venture capital firms, including some very large and well-known ones. The question I have is: did they not do their due diligence when vetting this software, or did they not care? I am not sure which is more alarming to me, and it doesn’t really matter either way. Is it a sign of a bubble when a company can raise millions of dollars with so little care put into its technology or development?
Ha, I laughed when I saw that. I've gotten term sheets from VCs without them even using my product. To expect them to do a full security audit is quite adorable.
Amazing how many developers, even from very prestigious schools, write really horrific code. As a self taught programmer getting into the industry over the past few years, I have been shocked. Oh well, I guess it's "fuck it, ship it".
Whenever someone posts an article here about some exploit relating to a startup, or that it turns out they were never doing anything but storing passwords in plaintext, you'll get a small army of posters pointing out that it would be insane for them to focus on anything besides getting the product out the door working just well enough to start taking people's money as fast as possible.
So yeah, "fuck it, ship it" seems to be more or less the standard.
hey man - I'm putting together an unfunded product on the side by myself, and my passwords are using scrypt, and they have a salt, and the salt is per-user, and the system rejects weak passwords based on popular entries, bad entropy, and easy guessability. It honestly wasn't more than 8 hours of foolery to get all that working. Is it ready for the credit card industry? no. But it's going to stop derps who get their hands on the DB.
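For reference, the scrypt-with-per-user-salt setup described above is only a few lines with Python's standard library (the cost parameters below are illustrative and should be tuned; `hashlib.scrypt` requires OpenSSL support):

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost parameters; tune N/r/p to your hardware.
N, R, P = 2**14, 8, 1

def scrypt_hash(password, salt=None):
    """Derive an scrypt digest with a fresh per-user random salt."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=N, r=R, p=P)
    return salt, digest

def scrypt_check(password, salt, expected):
    """Recompute and compare in constant time."""
    _, digest = scrypt_hash(password, salt)
    return hmac.compare_digest(digest, expected)
```

The password-strength checks the poster mentions sit on top of this and are a separate policy question, as the replies below argue.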
Yeah but you didn't learn everything you know from poorly written PHP tutorial websites and W3Schools. The words "Key-Derivation Function" are nowhere to be found in the lexicon of these people.
Consider that you are just practicing cargo-cult security though. You just piled a bunch of password security recommendations parroted all over the Internet to the detriment of your users.
If you are using scrypt with a reasonable difficulty and a per-user salt, there is no reason to put the entropy restrictions, weak password restrictions, etc on your end-users. It is painful to interact with sites that enforce ridiculous password requirements.
You can get away with a 4-character password on Netflix. There is a reason for that. Security is much more subtle than password complexity.
> Consider that you are just practicing cargo-cult security though.
No, I really am not. But as I didn't describe my reasons, you don't have the context to understand them.
Frankly, if Netflix has 4-character passwords, I would expect it to be relatively easy to compromise their accounts live with a carefully put together campaign. If Netflix gets their username/pw database dumped, I expect we'll see their policy change as the passwords are trivially cracked.
Not only that, putting together a safe & sane password retry system isn't the easiest thing ever, and doing careful fraud detection based on geolocation/IP etc. isn't the easiest thing ever either. Particularly when I don't have someone working full-time on security.
Further, what you also didn't know is that the password strength functions as written have knobs I can adjust if things are too onerous.
So having harder passwords goes a long way towards 'better security' on the account side for little effort.
I would advise you to be more cautious about making unsubstantiated statements based on ignorance in the future.
You piqued my curiosity, so I went looking. According to Know Your Meme, that most prestigious and reliable of sources, "derp" originated from Trey Parker and Matt Stone. First it was in a movie (where they were sniffing underwear), and then in South Park. Now, having not seen the episode, I can't really comment on its contents, but I assume that the character that first used the term in South Park was either simply stupid, or suffered from some form of disability. Either way, the term has since devolved into making fun of stupid people - which I believe the grandparent was also doing.
I would argue that first startup needs to spend the extra time or hire another developer if the end result otherwise is something as egregious as what QuizUp appears to be doing.
But I agree, there's a huge difference between just not being able to implement security and not considering it relevant. To me, this is clearly a sign of the latter.
People don't realize how easy it is to see the secret APIs behind their mobile apps. There's no obvious view-source on a phone, and a lot of devs lack a full picture of how all the pieces fit together.
Why shouldn't it? Prestige presumably had to be earned at some point. If prestigious schools are producing sub-par developers at a rate equal to other schools, what is the value of that prestige?
Not all prestige is equal, for one thing. Your fine institution might attract world leaders as speakers for its econ, foreign policy, and poly sci departments, but its CS department might be weak. For another thing, the academe and what it teaches is, as a general rule, not really focused on what the business world teaches. So excellence in the academe does not per se translate immediately to excellence in the business world. While an adaptable learner would be excellent potential and long-term capability, I would expect them to have a learning curve for the different pressures and knowledge needed to succeed in business.
Presumably someone who graduates from Stanford or MIT in CS will be preferred over someone graduating from Princeton or Yale in CS (Personally Indiana or NEU would be my first choices).
Because the quality of code correlates with the school prestige -- which, in turn, is built on the quality of education (or, at least, the quality of the graduates, may not be the same thing, if the quality of the applicants is also different).
We don't really know what it means to write good code. We can't measure it. We can barely talk about it meaningfully. It's rarely taught except via osmosis: you pair with someone more experienced and you read cargo cult blog posts.
seems, anecdotally speaking of course, that most of the people I know who are self taught create way better (for some nebulous definition of better) software than the purely school-taught folks. maybe we just enjoy it more who knows...
Actually, while the initial investigation of Path related to their poor information security and abusive use of user data to drive their viral coefficient, the fine specifically related to violations of COPPA. That law provides extra privacy protections for children under 13 and requires parental approval. The regulations are onerous to the point that most online social networks filter out users under the age of 13 to avoid running afoul of COPPA. Path didn't filter out these users and was found to have violated COPPA, resulting in their $800k fine.
Doesn't detract from your excellent piece or put Path in a better light, but that's the context you're referring to there.
It is a new kind of approach to software development: SFAQL.
Shoot First, Address Questions Later.
I bet these kinds of decisions are a consequence of the MBA/Excel mindset. Developing software properly takes time and money, and that isn't... lean (lol) and doesn't drive billion-dollar valuations.
You are aware that a core concern of business programs is risk assessment and mitigation, right? So how does this out-of-hand assumption follow so necessarily?
I don't have a problem with the bot thing. It's a clever solution to the "there's no one online" problem for new multiplayer games. I have no confirmation, but I assume that games like Fun Run do the same. Having played online RTS games in the 90's, these new iOS lobbies seem to fill up suspiciously quickly.