chrisjsmith's comments | Hacker News

Does it matter? Governments are bullies.

http://www.vvsss.com/grid/xkcd-on-crypto.gif

Here in the UK, we already have to hand over our encryption keys or be chucked in jail (RIPA Part III).


Original XKCD page, for posterity:

http://xkcd.com/538/

And a direct image link from XKCD:

http://imgs.xkcd.com/comics/security.png


Is there any reason you linked to a third-party site instead of xkcd itself?

http://xkcd.com/538/ (easily googled -- xkcd crypto, first hit.)


Oh pedant, I thoroughly apologise for my ignorance. May my vegetables all die, my breath smell for eternity and my armpits flood with hair and engulf the earth.


It's not pedantry. Linking directly to the content's author makes sure that he is properly credited and gets the exposure.


As far as we know. The Russians weren't always that honest about who they blew up by accident.


Aha, and what’s your evidence for that?

Soyuz 1 (1967) and Soyuz 11 (1971) both ended with fatalities; four cosmonauts died. The Soviet Union did not hide those deaths. The clearest illustration: all four names are on the Fallen Astronaut memorial on the Moon, placed there during Apollo 15 only a month after the Soyuz 11 disaster.

There have been no fatalities on Soyuz flights since Soyuz 11; all of them occurred early on. That's forty years without a fatality, and that record is the main reason Soyuz is considered so safe and reliable.

Sure, more astronauts died during Space Shuttle missions, but the Space Shuttle also carried many more astronauts into space than Soyuz did. If you look at the percentage of fatalities, Soyuz is only doing marginally better – one more dead cosmonaut and the percentages would be about the same. It's not so much the raw numbers as the forty years. Soyuz is a mature spacecraft whose kinks have all been worked out; the Space Shuttle was always too complicated to ever really work out the kinks, to ever really be considered safe.


If these are the problems you are trying to solve, just don't buy something with iOS on it.


Visual Studio is an awful product. It's bloated and expensive, performs badly, and is extremely unreliable.


Although I do mainly Ruby these days, I still use Visual Studio from time to time and personally find it quite enjoyable.

Which reliability issues did you run into, and on which version?


VS2010. Mainly the fact that the user interface is painfully slow[1] (either that or I'm unreasonably fast, which is ridiculous as Vim can keep up with me) and it just dies about 5 times a day on a good day. It might be the solution size though - it's got about 0.5 million lines of C# in it. Still, it should work.

[1] On a quad-core Xeon with 12 GB RAM, SAS disks and an ATI FirePro card.


That size of codebase is definitely an issue :) Last time I had 500k lines of C# + C++ (roughly 5 years ago), I split the solution into around 20 solutions and used binary dependencies (with CruiseControl.NET on top of that [1]).

I remember reading similar advice elsewhere as well (and for other languages/platforms, too).

[1] http://mikebroberts.files.wordpress.com/2007/01/enterprise-c...


That looks painful. I'd rather move it to an SOA: split it into logical feature partitions and use service composition and Windows Workflow to integrate it all. As usual, though, I don't think anyone wants to pay for that.


Don't they run Django? Is there something we should know about?


No, they don't run Django for the entire site. Django is mostly used for one-off apps, like:

http://www.washingtonpost.com/wp-srv/special/politics/electi...

or

http://apps.washingtonpost.com/highschoolchallenge/


That will end up in pain and tears for thousands of students and staff alike.

My trial of Office 365 lasted 45 minutes, i.e. until I realised how painful it is to get data back out of the Outlook and SharePoint implementations. They just don't want you to do it.

It's the WORST lock-in I've seen in any product.


Can't you just import the PST file into another mail app? And for Sharepoint there's a power toy: http://blog.falchionconsulting.com/index.php/stsadmpowershel...
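
For the SharePoint side, the built-in stsadm export can pull a whole site out to a content migration package. A rough sketch of driving it from Python (the install path, site URL and filenames are placeholders, not from a real farm):

    import subprocess

    # Sketch only: export a SharePoint site to a .cmp package using the
    # built-in stsadm tool. All paths and the URL are placeholders.
    STSADM = (r"C:\Program Files\Common Files\Microsoft Shared"
              r"\web server extensions\14\BIN\stsadm.exe")
    subprocess.run([
        STSADM, "-o", "export",
        "-url", "http://sharepoint.example.com/sites/team",
        "-filename", r"C:\backup\team.cmp",
        "-includeusersecurity",
    ], check=True)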


Even the list of commands hurts the eyes. It still baffles me why SharePoint is so pervasive in the market.


Does the list of commands for csh or emacs hurt your eyes too? I can read virtually every command and know what it does. This is certainly clearer and easier than the magic emacs and vi incantations. ZZ


I don't know though. Using a text editor for your daily job and remembering the magic incantations because it makes you much faster seems worth it, while remembering the magic incantations to migrate your data away, which you might use once, does not. It's not really a valid comparison. If MS cared about vendor lock-in they wouldn't expose that stuff via a command-line tool. They would instead do it via a GUI, which is the de facto interface for ease of use on Windows.


The problem is that exporting all files and documents isn't common user functionality; it's generally an admin task. And for users where this is common, you can map everything to a local folder, in which case you just use Windows Explorer drag-and-drop, just like any other folder. For admins, command-line tools are their standard UI.

As you even say yourself, for a task you might do once ever, why would you add a button to clutter the UI? That's like Apple adding a button to the iPhone to indicate that you'd like to terminate service with your carrier. Just wouldn't make any UI sense.


I have seen more than one SharePoint install crash so badly that the company was crippled for hours because they couldn't access their documents or be sure they were working with the most recent versions. Having a regularly (as frequently as needed) exported mirror is a top priority for any SP-like application.

And, of course, when you do realize SP is utterly awful compared to its competitors, it makes the migration a lot easier.

I can easily imagine a migration from SP a couple of years down the road costing a lot more than the US$250K they are being offered to leave Notes (which is every bit as awful as SP).


Agreed about Notes being just as awful. I had to suffer that from '99-'02. It looked like it improved after that, but you can't polish a turd, as they say.

I've watched SharePoint go up with a spectacularly large boom when some muppet renamed the AD domain... That was a fun two days of my life reverse engineering it, believe me. Thank goodness they did write it on top of the CLR, as it's easy enough to decompile it and work out what the hell is going on.


Forget the UI. It's not important.

It MUST be common user functionality for both private and corporate users.

People have completely unjustified trust in these cloud-based services when, on the face of it, they screw people's data up all the time. You should be able to maintain an offline archive of all of your data to guard against vendor failure.

To be 100% honest, at least here in the EU, I think they should legislate to make sure that a) you can get your data back when you so desire and b) the data should come back in an open format.


Because there are consultants. That is the only reason!


Consultants alone won't explain that. You need clueless people making technology decisions on subjects they can barely understand, advised by snake-oil salesmen.


Fair point. I'll give you that.


You HAVE to buy Outlook to get the PST file out, by syncing your local Outlook with the Exchange Live server, which is unacceptable. You can't access the data any other way.


The irony of this is that it's actually quite easy to build a document store on top of PostgreSQL that performs very well indeed. I tried it. I used a similar approach to the one described at [1]. You get the advantage of years of experience (never underestimate this!), MVCC, transactions, consistency and replication as well.

(yes, I know it doesn't "perform" as well as the other NoSQL stores, but performance is not without trade-offs [2])

[1] http://bret.appspot.com/entry/how-friendfeed-uses-mysql

[2] http://www.caradvice.com.au/wp-content/uploads/2006/12/Ferra...
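
Roughly what I mean, as a minimal sketch (table and column names are just illustrative, not lifted from [1]): one table of opaque JSON documents, plus a hand-rolled index table per attribute you want to query on.

    import psycopg2

    # Minimal sketch of a FriendFeed-style document store on PostgreSQL:
    # one table of opaque JSON blobs, plus index tables mapping queryable
    # attributes back to document ids.
    conn = psycopg2.connect("dbname=docs")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS entities (
                id      uuid PRIMARY KEY,
                updated timestamptz NOT NULL DEFAULT now(),
                body    text NOT NULL  -- the JSON document itself
            );
            CREATE TABLE IF NOT EXISTS index_user_id (
                user_id   text NOT NULL,
                entity_id uuid NOT NULL REFERENCES entities(id),
                PRIMARY KEY (user_id, entity_id)
            );
        """)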


To me, the biggest advantage of MongoDB is autosharding. If you read the friendfeed article, you'll see that they also have to deal with 'eventual consistency' on the indexes.


I'd say that's not necessarily an advantage. Sharding is incredibly complicated to get right considering all factors such as balancing and recovery.

I'd rather partition the data based on function onto distinct clusters, you know, like eBay do.

You can update the indexes in a transaction too, so that's not necessarily an issue. MySQL has problems with this due to locking, but the MVCC implementation in PostgreSQL allows much better concurrency.
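
To illustrate, using the same illustrative schema as my sketch above - the document and its index row commit or roll back together:

    import json, uuid
    import psycopg2

    # Sketch: write a document and its index row in a single transaction,
    # so readers never see a document whose indexes lag behind it.
    conn = psycopg2.connect("dbname=docs")
    doc_id = str(uuid.uuid4())
    doc = {"user_id": "alice", "text": "hello"}
    with conn:  # one transaction: commit on success, roll back on error
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO entities (id, body) VALUES (%s, %s)",
                (doc_id, json.dumps(doc)))
            cur.execute(
                "INSERT INTO index_user_id (user_id, entity_id) "
                "VALUES (%s, %s)",
                (doc["user_id"], doc_id))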


I think the blogger was being an idiot. You should check the authenticity of what you are downloading, not just snag it.

It's like eating a kebab dropped in the street.
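
Even something as crude as checking a digest you got out of band would have caught this. A sketch (the digest is a placeholder - you'd take it from the project's own release notes or site):

    import hashlib, urllib.request

    # Sketch: verify a downloaded file against a checksum obtained
    # out-of-band. Both the URL and the digest are placeholders.
    KNOWN_SHA256 = "<digest published by the project>"
    url = "http://code.jquery.com/jquery-1.6.2.js"

    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != KNOWN_SHA256:
        raise SystemExit("checksum mismatch - don't use this file")
    print("checksum OK")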


Technically, how should he know jquery.com is more trustworthy than jquery.it?

jquery.com does NOT appear to have a fully valid SSL certificate: Chrome gives me "the site's security certificate is not trusted!"

Like it or not, Google is an important part of establishing reputation -- that's what PageRank was built on initially, and if that becomes worthless then finding the true source of something becomes very difficult.


jquery.com does NOT appear to have a fully valid SSL certificate

Hypothetically supposing that jquery.com had a lovely little green lock, that wouldn't matter, because on jquery.it a) you wouldn't be looking for the lovely green lock and b) if you did look for it, look here, a lovely little green lock and c) you didn't click the lovely green lock to see who it was issued to but if you did d) it was issued to jquery.it, which matches the address in your bar.

SSL solves one problem, really really nicely: it makes it impossible to eavesdrop between the user and a trusted endpoint. It does basically nothing to make sure that the trusted endpoint is the one the user thinks they are interacting with.


True -- the green lock itself wouldn't help here. I was thinking more along the lines of code signing certificates.

When I visit my bank's web site and drill into the certificate details, I can at least establish that someone my browser vendor trusts (or someone they trust ...) issued the certificate to an _organization_ called 'Bank of Nova Scotia' in Toronto, not just to a domain name.

If I was able to register micr0soft.com then hopefully I would have a hard time getting an SSL certificate issued for it. I know there have been a number of discussions on certificate infrastructure here that show how complex this can become.


SSL certificates bring nothing other than a false peace of mind. I've seen fake antivirus software that goes to great lengths to serve verified (!) SSL-encrypted pages to steal your credit card details.


Well, that, and actually allowing SSL sessions to be encrypted without being trivially susceptible to MITM attacks.


Which fake software is this? If it's already taken control of the client side, too, couldn't it just be altering the root certificate set rather than exploiting some weakness of the union of all of the existing roots (which no doubt have many such weaknesses regardless)?


"Vista Security 2012". It can't touch the root certs as you need elevated privileges to do that. The entire thing hijacks the user's shell via the registry. You can log in as another user on the machine and it appears not to be infected.

Quite well designed really :-)
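
If anyone wants to check their own profile: the classic hijack point is the per-user Winlogon Shell value, so it's trivial to look for. A sketch (Windows-only, and it's my assumption that this particular sample uses that value):

    import winreg

    # Sketch: a clean profile normally has no per-user "Shell" value
    # (the machine-wide default is explorer.exe); this class of malware
    # plants one under HKCU so only the infected user is affected.
    key = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows NT\CurrentVersion\Winlogon")
    try:
        shell, _ = winreg.QueryValueEx(key, "Shell")
        print("Per-user Shell override found:", shell)
    except FileNotFoundError:
        print("No per-user Shell override - looks clean")
    finally:
        winreg.CloseKey(key)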


Find the GitHub repository with the most forks and followers. That's probably the official one, and it will link to the official website. They may not use GitHub of course, but there are other, similar methods.

Other good signs are being linked from cdnjs, cached-commons or microjs. If I'm looking to scratch a JavaScript itch I'll first browse these sites to see if there is a popular tool.

Also, if you're looking for jQuery then it's because you've read about it online somewhere. Simply go back and follow the links.
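
As a sketch of the first heuristic - note this leans on GitHub's current search API, which is my assumption rather than what existed at the time:

    import json
    import urllib.parse, urllib.request

    # Sketch: rank repositories matching a name by fork count and take
    # the top hit as the likely canonical project. Unauthenticated
    # requests to the search API are heavily rate-limited.
    def most_forked(name):
        url = ("https://api.github.com/search/repositories?q="
               + urllib.parse.quote(name) + "&sort=forks&order=desc")
        with urllib.request.urlopen(url) as resp:
            top = json.load(resp)["items"][0]
        return top["full_name"], top["forks_count"], top["html_url"]

    print(most_forked("jquery"))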


Well, in this specific case, the header comment in http://code.jquery.it/jquery-1.6.2.js still references jquery.com


This is an interesting point - why does jquery.com have an https version in the first place? And why did someone bother to set it up with a self-signed certificate?

Sounds like perhaps someone was testing something a long time back (the cert was signed in 2009) and just never turned it off.


This is a great question for new users of jQuery, or indeed of any software I need to download and integrate with my software/website. When a user is hit with malware, it affects just them (and maybe their email/Facebook friends). If I download malware and incorporate it into my software, then I'm now distributing it to my users!

So what I do is look for the community. GitHub is a good place to look. HN is itself a good source of vetting. Google, certainly, but not the first link I find. In fact, when I first heard about jQuery, I didn't assume that the "real" site could be trusted either: if I'm going to install this on my site, and serve it to people who trust me, then it had better be trustworthy.

Now imagine I run a tutorial website, and people come to my site because they trust me, and then they install software they copied from me (or my links), and distribute that to their users. Wow. Kudos to the author: I think it was bad form to blame Google here, but the fact that he admitted it all does a lot to re-establish trust.


Just to be clear, I didn't distribute the janky copy of jQuery to anyone myself. I test my samples pretty thoroughly before publishing, and definitely would have caught something like this.

The situation here was that someone was using one of my samples from the jQuery 1.2 era and wanted to see if it would work with 1.6.2. He downloaded the ".it" copy of jQuery to test it with, got the syntax error when he used it in my sample, thought it was because my code didn't work right with 1.6.2, and got in touch with me about it. That's about where the post picks up: I rushed to grab a copy of 1.6.2 via Google and made the mistake of downloading the ".it" copy without noticing.


Sorry, I didn't mean to suggest in the last paragraph that that actually happened, but it does read that way on review. My apologies.


No worries. I just wanted to make sure you (or others) didn't think I was that careless.


It's not the same. If I type jQuery into Google, click one of the top results and the site looks exactly like the jQuery site, I'd probably be fooled too. The domain is close enough that you won't catch it out of the corner of your eye.


With the upcoming version of Chrome, there won't even be a URL bar. AFAIK Firefox wants to get rid of it too.

Now is this an argument for keeping the url bar? It's obviously error-prone, but the other methods of establishing identity don't seem to be there yet either.


The URL bar will be there, it just won't be visible at all times, but it'll still appear when the page is loading or you select the tab.


Whether he should have downloaded from that site or not, it's ridiculous that it was showing up above the official site in search results. That's his main point.


I think the ridiculous thing is more that he spent that much time writing a blog post because he made a mistake whilst obviously trying to do something quickly without thinking.


Centralised DNS = bad. That's the issue.


It is a billion times better than decentralized DNS. http://en.wikipedia.org/wiki/Hosts_file#History


If anyone is interested, I wrote a script to improve that a little. Obviously it's no replacement for DNS, but it makes it easier to share the data - it treats your hosts file as a database that can be both dumped to a file and updated from various sources, including web pages. https://github.com/ghettonet/GhettoNet
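
The core idea really is that small - a hosts file is just a flat name-to-address database. A minimal illustration of the parsing half (GhettoNet itself does considerably more):

    # Minimal illustration: read /etc/hosts as a name -> address mapping.
    def parse_hosts(path="/etc/hosts"):
        entries = {}
        with open(path) as fh:
            for line in fh:
                line = line.split("#", 1)[0].strip()  # drop comments
                if not line:
                    continue
                addr, *names = line.split()
                for name in names:
                    entries[name] = addr
        return entries

    print(parse_hosts().get("localhost"))  # -> 127.0.0.1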


I imagine a decentralized DNS service based on the Bitcoin network would be kinda cool.


Check out namecoin.


I had difficulty understanding namecoin. Do you simply pay for DNS registration with namecoins or is the DNS itself hosted in a decentralized and anonymous fashion?


You pay for the registration and update of names with namecoins, and the data is stored in the namecoin blockchain. In this way it is decentralized. Lookups are done directly from the blockchain.
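
Concretely, a lookup is just a JSON-RPC call to a local namecoind - a sketch, with the RPC port and credentials as placeholders for whatever is in your namecoin.conf:

    import base64, json, urllib.request

    # Sketch: look up the record stored under a "d/" (domain) name via
    # namecoin's name_show RPC. Credentials and port are placeholders.
    payload = json.dumps(
        {"method": "name_show", "params": ["d/example"], "id": 1})
    req = urllib.request.Request(
        "http://127.0.0.1:8336", data=payload.encode())
    req.add_header("Authorization", "Basic " +
                   base64.b64encode(b"rpcuser:rpcpass").decode())
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)["result"]
        print(record["value"])  # the name's data, itself a JSON string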


Or just .onion using Tor.


Interesting - thanks for the link.


It isn't just the domain name system; the whole way IP addresses are handled isn't much better.


Care to elaborate? I think the RIRs are doing a reasonable job, though if you don't like the needs-based approach to allocating addresses I could see the RIRs as disappointing.


Yes, the DPA does cover you, on the same basis that linked parties on a credit file can be obtained under the act.

