Hacker News | steckerbrett's comments

> Obviously you have to pay anonymously, with bitcoin, for example (if you use it carefully)

Bitcoin is anonymous? Time to go to jail.


Bitcoin can be used in a way that defeats anonymity - as per the parentheses in the quote.


Being pedantic, but I think you mean helps ensure anonymity?


Could you expand on your comment? My understanding is that if a party can't tie a wallet to an identity then it is anonymous. So if you can acquire bitcoins (eg. mining) and purchase something (eg. VPS) without giving up your identity then you are solid.


I've heard conflicting information as far as this goes.

Thinking this through- an adversary who's watching the block chain probably knows some inputs and some outputs. As in, these addresses belong to an exchange, these addresses belong to a hosting company.

Okay, fine. Now remember that any user can literally create wallets out of thin air, and in fact doing so is considered basic security hygiene. Let's say Joe User transfers one coin from one wallet to another wallet under their control. Let's say they do this 20 times, sometimes with the full amount, sometimes less.

How does the adversary attach an identity to those transactions?


You have to use your bitcoins someday, either to buy real currency or real goods. Then the adversary knows where the money went TO. Tracing the transactions back (where the money came FROM) is then not a big deal - the full history is in the blockchain.

So as long as you don't do a transaction that connects your identity to any bitcoin address, you are fine. But to use bitcoins you are almost always required to do so: electronic financial transactions are governed by laws that require an identity (though of course you can find entities who do not follow those laws).


Only, as you say, if you convert them into a "real" currency. If they only used their Bitcoin to purchase goods (such as a VPS) not tied to a physical address, then they could still remain anonymous.

As for where the Bitcoins came from, I'm sure the author of this document would have some digital assets they could sell on the darknet to acquire some Bitcoin. Where those Bitcoin originated then would not be their problem.


Nobody that I can remember has been able to identify the perpetrators of the large bitcoin thefts over the years by tracking the coins; those people cashed out somehow. However, the SEC filing on Pirateat40's Ponzi scheme was remarkably detailed: they were able to track every single coin he received and prove he spent it on himself.

I would imagine others use JoinMarket to mix up the coins[1], use coin control[2] to exchange for other cryptocurrency p2p, or other obfuscation methods like buying up high demand items with bitcoin then selling them remotely for other bitcoins.

[1] https://github.com/JoinMarket-Org/joinmarket/wiki and http://joinmarket.io/

[2] https://bitcointalk.org/index.php?topic=144331.0


Just by following the flow of the money between wallets? Assuming that at least one of the wallets can be connected to an identity, guessing that the others belong to the same person shouldn't be too difficult, just by observing transaction patterns.
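One concrete version of "observing transaction patterns" is the common-input-ownership heuristic: addresses that co-spend as inputs to one transaction are assumed to belong to the same party, and union-find merges them into clusters. A toy Python sketch, with entirely made-up addresses and transactions (real analysis works on actual chain data and uses many more heuristics):

```python
# Toy sketch of the "common-input-ownership" heuristic. Addresses that appear
# together as inputs to one transaction are assumed to share an owner;
# union-find merges them into clusters an analyst can then try to label.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Hypothetical transactions: (input addresses, output addresses)
transactions = [
    (["A1", "A2"], ["B1"]),   # A1 and A2 co-spend -> assumed same owner
    (["A2", "A3"], ["C1"]),   # links A3 into the A1/A2 cluster
    (["D1"], ["D2"]),
]

uf = UnionFind()
for inputs, _outputs in transactions:
    for addr in inputs[1:]:
        uf.union(inputs[0], addr)

# A1, A2 and A3 fall into one cluster; D1 stands alone. If any one address
# in a cluster is ever tied to an identity, the whole cluster is.
print(uf.find("A1") == uf.find("A3"))  # True
print(uf.find("A1") == uf.find("D1"))  # False
```

If even one address in a cluster is ever linked to an exchange account or a shipping address, the whole cluster inherits that identity, which is why the 20 self-transfers above buy less anonymity than they appear to.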


Graph analysis.


The [insert relevant law enforcement agency] could subpoena the VPN company for your IP address, so that wouldn't be 'anonymous'.

Telling the blockchain about your bitcoins and their transactions would also leak your IP.

To be anonymous you need to do all transactions over an anonymous internet connection and receive all your goods anonymously.

Perhaps a purchased ebook downloaded from TOR.

You can wash your coins, of course; I think it currently requires trust in the company doing it, and if not done correctly it might still leave a trace.

Of course the real world is different: would the FBI mount an enormous operation to catch a small-time crook? It's more about risk management.


Not if you understand how it works.


Yes.


I've been wearing mine for 5 hours (since I got up) without noticing them at all.


> I noticed their scan process is not using ASLR

You can be pretty sure that none of their software is fit for purpose if they're not using basic protections for a process which runs as a super user and parses every file it can find.


I think you have an extra "not" in there; it's hard to parse anyway.


Ah the old, break the PoC to make the researcher stop complaining move but don't fix the underlying insanity. Classic.


My background is in application security assessments. I've seen this hundreds of times (or more), from developers who should really know better.

"Hey, there's SQLi in this input form! Better make sure ' OR 1=1;-- is blacklisted" - but they don't properly parameterize their queries or sanitize input.
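The difference between blacklisting one payload and actually parameterizing is easy to show. A minimal sketch using Python's sqlite3 as a stand-in for any driver with real parameter binding (the table and payload are made up):

```python
# Toy illustration: why parameterization beats blacklisting one known payload.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"  # a tautology payload; no "OR 1=1;--" required

# Vulnerable: user input concatenated straight into the query string.
# The quote breaks out of the literal, so every row comes back.
injected = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: the query and the data are kept separate; the payload is just a string.
bound = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(len(injected))  # 2 -- both users leaked
print(bound)          # [] -- nobody is literally named "' OR '1'='1"
```

Blacklisting `' OR 1=1;--` would have done nothing here; the bound query needs no blacklist at all.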


"Hey, they reported cross-site scripting! Let's blacklist angle brackets, that'll do the trick!"

In case this is not clear to anyone in 2016: blacklisting known-dangerous characters is not an adequate bug fix. It's a rabbit hole; you will burn hours trying to blacklist every character or character combination that can cause a vulnerability, just to have someone own you anyway.
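A quick demonstration of the rabbit hole, sketched in Python with a made-up payload: in an HTML attribute context, no angle brackets are needed at all, so a bracket blacklist accomplishes nothing, while context-aware escaping renders the payload inert.

```python
# Why a "<" / ">" blacklist fails: attribute-context XSS needs neither.
# html.escape(..., quote=True) is the context-aware fix for this context.
import html

payload = '" onmouseover="alert(1)'  # note: no angle brackets anywhere

blacklisted = payload.replace("<", "").replace(">", "")
print('<a href="%s">link</a>' % blacklisted)
# payload survives intact: the attribute value is closed and a handler added

escaped = html.escape(payload, quote=True)
print('<a href="%s">link</a>' % escaped)
# quotes become &quot; so the payload can't escape the attribute value
```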


What's current best practice?


The proper fixes for common web application vulnerabilities are as follows:

Session Hijacking/Fixation/etc.: Use TLS.

SQL Injection: Prepared statements that AREN'T emulated; PHP's defaults are bad here.

EDIT: If you're writing in another language, make sure it's not providing string escaping masquerading as prepared statements, but actual prepared statements. (My earlier comment was too broad; some forms of emulated prepared statements might be OK, but PHP's is dangerous.)

Cross-Site Scripting: Context-aware escaping (templating libraries) + Security Headers

Cross-Site Request Forgery: CSRF tokens

Password storage: bcrypt, scrypt, PBKDF2-SHA2, Argon2

Encryption, Digital Signatures, Authenticated Key Exchanges, etc.: Hire an expert, don't do it yourself based on the advice contained within HN comments.

File Inclusion / Directory Traversal: Don't write your applications in a dumb way that makes these vulnerabilities possible. But if you must, use something like realpath() with a sanity check based on the expected parent directory (in PHP).

XML External Entities: Make sure you disable the entity loader:

    libxml_disable_entity_loader(true);
PHP Object Injection in PHP 5: don't ever pass user input to unserialize(); use json_decode() instead.

PHP Object Injection in PHP 7: either disable object loading or whitelist the allowed types; i.e. unserialize($var, ['allowed_classes' => false]); or unserialize($var, ['allowed_classes' => ['DateTime']]);

These are just some of the common problems I frequently find, of course. There are more basic ways to mess up an application ("not even checking that you're authenticated" being at the top of that list).
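The realpath()-plus-parent-check idea from the file inclusion item above translates to most languages. A hedged Python sketch, with os.path.realpath playing the role of PHP's realpath() and a hypothetical uploads directory:

```python
# Sketch of the realpath() + expected-parent sanity check against directory
# traversal. The base directory is a placeholder; note that PHP's realpath()
# returns false for nonexistent paths, while Python's resolves them lexically.
import os.path

BASE = os.path.realpath("/var/www/uploads")  # hypothetical allowed directory

def safe_join(base, user_supplied):
    candidate = os.path.realpath(os.path.join(base, user_supplied))
    # The fully-resolved path must still sit under the expected parent.
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError("path traversal attempt: %r" % user_supplied)
    return candidate

print(safe_join(BASE, "report.pdf"))      # resolves inside BASE: allowed
try:
    safe_join(BASE, "../../etc/passwd")   # resolves outside BASE: rejected
except ValueError as e:
    print(e)
```

The check runs on the resolved path, so `..` sequences and symlinks can't smuggle the result outside the expected parent; checking the raw user string instead is the classic mistake.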

https://paragonie.com/blog/2015/08/gentle-introduction-appli...

Further reading and resources:

* https://securityheaders.io

* https://github.com/paragonie/awesome-appsec

And if anyone wants their code reviewed: https://paragonie.com/services


"Encryption, Digital Signatures, Authenticated Key Exchanges, etc.":

If you just want to get data from A to B over the network, use TLS 1.2 (and upgrade to 1.3 when it's ready). For an app(lication) where you control the code on both ends, add certificate pinning. It's probably still worth hiring an expert to make sure you're doing it right, but you have less chance of shooting yourself in the foot than if you try to roll your own.

Sometimes I think that if cryptographers wrote libraries the rest of us could use that "just work", security worldwide would improve. Bernstein's NaCl and the derived libsodium are a good starting point, though.


> If you just want to get data from A to B over the network, TLS 1.2 (but upgrade to 1.3 when it's ready).

Right. If you're not using TLS for your network communications, then your communications are not secure.

Some people also have other requirements (e.g. "I need to store SSNs, how can I encrypt them and still be able to search by them in MySQL?") which require separate app-layer crypto. In those situations, don't roll your own. :)
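One common pattern for that SSN requirement is a "blind index": store the ciphertext alongside a keyed hash of the plaintext, computed with a key kept separate from the encryption key, so equality lookups work without decrypting anything. A stdlib sketch; the key is a placeholder and the actual encryption of the SSN column is deliberately elided (use a vetted library for that part):

```python
# Blind-index sketch for equality-searchable encrypted fields.
# INDEX_KEY is a placeholder; in practice it's a separately-managed secret.
import hmac
import hashlib

INDEX_KEY = b"\x00" * 32  # placeholder only: never a real all-zero key

def blind_index(ssn: str) -> str:
    normalized = ssn.replace("-", "")  # canonicalize before hashing
    return hmac.new(INDEX_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Write path: INSERT (encrypt(ssn), blind_index(ssn)) into the row.
# Read path:  SELECT ... WHERE ssn_index = blind_index(user_input)
# An unkeyed hash would be brute-forceable offline, since SSNs have very
# little entropy; the secret HMAC key is what makes the index safe to store.
print(blind_index("078-05-1120") == blind_index("078051120"))  # True
```

The trade-off is that a blind index only supports exact-match lookups (on the normalized form), never range queries or LIKE, which is usually acceptable for identifiers like SSNs.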

> Probably still worth hiring an expert to make sure you're doing it right but you have less chance of shooting yourself in the foot than if you try and roll your own.

Agreed.

> Sometimes I think if cryptographers wrote libraries that the rest of us could use and "just work", security worldwide would improve.

Ah yes, boring cryptography. :)

> Bernstein's NaCl and the derived libsodium is a good starting point though.

Strongly agreed.


>PHP Object Injection in PHP 5: don't ever pass user input to unserialize(); use json_decode() instead.

>PHP Object Injection in PHP 7: either disable object loading or whitelist the allowed types; i.e. unserialize($var, ['allowed_classes' => false]); or unserialize($var, ['allowed_classes' => ['DateTime']]);

I'd stick to not unserializing user input in both cases, that's a can of worms you just don't want to open.

Also, RNG bugs are common and exploitable enough to be worth noting: Never use mt_rand, stick to openssl_random_pseudo_bytes.


That's the more sound advice.

Also, random_bytes() > openssl_random_pseudo_bytes() :P

Though if you use random_compat[1] it might be the same function ;)

[1]: https://github.com/paragonie/random_compat


Do prepared statements count as emulated if the DB doesn't support prepared statements, but the DB adapter is doing replacement during the encoding-to-typed-binary-wire-protocol step (i.e. replacement of typed tokens with other typed tokens) rather than by just concatenating strings?


By prepared statements, I mean your application actually sends the query string in a separate packet from the data, and thereby gives the data no opportunity to corrupt the query string.

You can stop all known attacks with escaping, but then you run into fun corner cases like http://stackoverflow.com/a/12118602/2224584

What PHP does is silently perform string escaping for you instead of doing a prepared statement. This is stupid, but PHP Internals discussions are painful (so changing it is unlikely to happen any time soon) and the userland fix is easy:

https://github.com/paragonie/easydb/blob/f90fbca34ac7b7b96f7...

If you're sending 1+N packets (for N >= 1) to your RDBMS for each new query, then you're probably using prepared statements.


That doesn't really address my question. There are real prepared statements like you're talking about; there's the crap PHP does; and then there's what you get if you use e.g. Erlang's Postgres library, which is that you pass it this:

    execute("SELECT foo FROM bar WHERE baz = ?", [5])
and it becomes something like this:

    db_socket ! encode_to_wire_format(
      {'SELECT', "foo", "bar", [{'baz', 5}]}
    )
Postgres's prepared statements aren't being used, but the distinction between "tainted" user-generated data and the "trusted" statement is maintained, because the 5 above is typed data sent over the wire in a length-prefixed binary encoding, rather than string data serialized and escaped into another string.

Which is to say, if you (or your users) tried to put a fragment of SQL in place of the 5 above, it'd just get treated as string-typed data, rather than SQL. You don't need packet-level separation to achieve that.

But is this approach still bad for "emulating" prepared statements, somehow? I don't see how.


> That doesn't really address my question.

Sorry.

The answer to your question is: I don't know, that's a new solution to me.

It looks like it could be safe, but I'd have to dig into its internals to know for sure. My gut instinct is that it's probably safer than escape-and-concatenate.

If any Erlang experts want to chime in with their insight, please do.

EDIT:

> Which is to say, if you (or your users) tried to put a fragment of SQL in place of the 5 above, it'd just get treated as string-typed data, rather than SQL. You don't need packet-level separation to achieve that.

> But is this approach still bad for "emulating" prepared statements, somehow? I don't see how.

Above you said:

> the distinction between "tainted" user-generated data and the "trusted" statement is maintained

If this holds true, then you've still solved the data-instructions separation issue and what Erlang does is secure against SQL injection. So, yes, you don't need to send separate packets to ensure query string integrity in that instance.

The shit PHP does is what I meant to decry when I was talking about emulated prepared statements.

Thanks for broadening my horizons a bit. I've edited my earlier post. :)


Is it too early to be suggesting Argon2? I've not heard of it until now, but the Wikipedia entry[1] shows that the paper was just released late last year.

[1] https://en.wikipedia.org/wiki/Argon2


> Is it too early to be suggesting Argon2?

Most environments don't have an implementation for it yet, and the ones that do will probably only get it through libsodium for the first few years.

> I've not heard of it until now, but the Wikipedia entry[1] shows that the paper was just released late last year.

Argon2 was the winner of the Password Hashing Competition, a several-year cryptography competition to find a new password hashing algorithm that would be secure against an attacker armed with a large GPU cluster.

The judges included a lot of famous cryptographers and security experts. Of particular note: Colin Percival, the author of scrypt, and Jens Steube, the project lead for hashcat.

I've read the paper and I think Argon2 will stand the test of time, but I could (of course) be wrong.


> Most environments don't have an implementation for it yet
The speed with which environments actually got implementations of previous secure algorithms was half the problem with their use, but I think Argon2 has this nailed. The README now links bindings for Go, Haskell, JavaScript, JVM, Lua, OCaml, Python, Ruby and Rust.

Disclaimer: I wrote the Ruby one.

https://github.com/P-H-C/phc-winner-argon2


I don't trust them. The various language bindings are maintained by random people who have gone through no particular vetting, and their code is not formally reviewed by anyone.

When I started looking through the node bindings, I found a number of minor bugs and a critical issue that left ~1% of passwords vulnerable.

I trust that the C developers do a good job, but phc-winner-argon2 does not appear to have ever made a formal release. Is master really always perfect?

It's not ready yet.


My suggestion, if you really want to overkill and knock it out of the park: use both. Run it through bcrypt, then through Argon2. If something happens where one of them is deemed insecure/bad practice, you've still got the other one.


This falls into the category of "coming up with your own system". It sounds theoretically as strong as either one, but it could end up weaker overall.

Define X as the maximum time you can allow a hash to run on your server before it either starts to annoy users or becomes a DoS issue. Moving from "Argon2, such that it runs for X" to "both algorithms, with a total cost of X" means both of them are running with a much-reduced work strength.

In the case of Argon2 there is an "iterations" counter, but t=2 is already reasonable, and on low-end hardware you may see t=1. So, as per the spec, reducing runtime to make the whole thing fit is going to mean reducing m.

Except bcrypt is already not memory hard, and you've just reduced the only memory constraint in your algorithm.

And it's entirely possible there are bigger issues I didn't come up with in two minutes of thinking about it.


If you're going to use both, pay a crypto engineer (such as one of the authors of either library) to write that for you.

Don't do it yourself.


> 50% OF US LIVE NEAR THE COAST. WHY DOESN'T OUR DATA?

It's corrosive, it's expensive to get things to and from it for replacement, leaks destroy the hardware, it's not close to power generation, internet access needs cables because RF doesn't penetrate water, and everything is going to need watercooling, which is rather expensive.


Imagining they completely solve the problems of sea water, leaks, etc., it is still amazing to think that you would do your server maintenance by pulling a data center out of the ocean on a boat and replacing hard drives and the like.

The only way this makes sense to me is if there is the ability to create something akin to the cargo container as a building block of a data center, where you can have arbitrary compute and storage plug into a greater complex.


Sounds like that's exactly what they want to do: they would only pull them out of the water every 5 years to do computer replacements / maintenance. If some components fail then who cares. They wouldn't do a full rebuild for 20 years.


I worked in large data centers before and I just don't see how this can be done practically. Data centers require quite a bit of physical maintenance.

Every computer design has some element that will render a large part of the design inoperable in case of failure. Either it is a SAN head (even if you have two, the failover can malfunction), or a switch setup.

Then there are things like simultaneous failures of components purchased at the same time (hard drives bought together and given the same workload will fail at roughly the same time).


Cloud datacenters are not complex heterogeneous mixes of components. There's no SAN head; it's one thing multiplied, plus some networking gear. Even if a top-of-rack switch fails they're still not going to yank the box, because too much maintenance at this scale would erode the TCO advantage. They wait for their maintenance interval and fix everything at once (or just upgrade the hardware).


Think of a farm of small data center pods running cloud apps. When failures in a pod exceed a useful threshold, apps are migrated out to other pods and the pod is retrieved, serviced and returned to its place.

A custom made barge with dynamic positioning gear and a grabbing/coupling system to detach the pod from the subsea grid, lift it, and then re-attach it would make the servicing relatively efficient.

I could see the roundtrip time for a full hardware replacement of a pod being under an hour, conceivably under 10-15 minutes.


> The only way this makes sense to me is if there is the ability to create something akin to the cargo container as a building block of a data center

Which is something Google already did[1].

1: https://en.wikipedia.org/wiki/Google_Modular_Data_Center


And if it is just cooling, why aren't datacenters built on the coast pumping sea water for cooling?


High cost of the land on the coasts?


And I guess you wouldn't even need to use sea water as a primary cooler, just as a secondary cooler. I.e. the primary cooler flows through your datacenter and the secondary cooler cools that primary cooler. So fewer pipes are exposed to sea salt.


So, same procedure as power plants.


Yeah. Instead of trying to protect the environment from what's inside (radioactivity), you are trying to protect what's inside from the environment (sea salt)!


I think they should be able to handle most of those issues, for example they may be able to use wave power for power generation. Furthermore, I don't see how RF opacity is an issue, seeing as anyone running a data center over RF is criminally insane.

edit: I can't grammar


You run management connections from separate computers over RF as a backup, for fail-over in the event everything else fails. I mentioned that in a recent comment about the GitHub outage and preventing those:

https://news.ycombinator.com/item?id=10996442

Can help with certain security situations, too. Not sure how much that applies if it's underwater, though. Most infiltrations would probably turn into a denial-of-service attack effectively haha.


Fair enough. It doesn't get much more literally out of band, eh?


I tried to get it further by proposing a neutrino-based communication system. Can just send the signals straight through the planet itself to a datacenter very far away. I was told there would be both implementation problems and cost overruns with that project. Went back to default recommendations for wireless.


> everything is going to need watercooling, which is rather expensive

Why would everything need water cooling? I'd expect that something using water would be used to keep the air inside the unit cool, and then the cooling for the servers themselves would be ordinary air cooling.


Assuming they have ways around some of those issues this could work out rather well. The important thing is that it's only a research project. Microsoft's research turns out some really awesome stuff but plenty of it failed or is cut. Who knows what'll happen to this but it's a really interesting proposition!


For power you could use a small nuclear reactor, just like submarines and aircraft carriers. I've always thought it would be a fun exercise to take a decommissioned nuclear submarine and turn it into a floating datacenter.


When nuclear vessels are decommissioned they remove the reactors. Operating a naval reactor requires a constant watch by multiple highly-paid experts. They are not cost effective for electrical power generation.


The staffing required to operate a nuclear submarine is astronomical, so it would set the person:server node ratio back decades.

It might be fun but there are better things you can do with $700M.


Yeah but we only need the reactor, not the whole submarine.


Do you want the raft? Because that's how you get the raft.


Because 50% of us live near the coast, not past it. Maintaining anything in close association with an ocean is painful. Everything rusts; even the stuff they say doesn't, does. Anything that moves ages at an accelerated rate. As soon as the slightest waves start, little salt crystals appear on every surface.


> everything is going to need watercooling, which is rather expensive

Water cooling significantly reduces running costs which is why many DCs are switching to it. Over the long term you save money.


I've read that ancient Roman concrete was manufactured in a way that is seemingly lost to time. It's also practically impervious to the elements.

http://www.romanconcrete.com/docs/spillway/spillway.htm

If we want to find a way to build beneath the sea that's a good place to start.


According to this article, a 2013 study successfully reverse-engineered the recipe for Roman concrete:

http://www.ancient-origins.net/news-history-archaeology/rese...


That is an awesome link. Thanks for that.


Do data centers ever rely on RF for connectivity?


No, but it's a harder proposition to have fibre runs to a server which is in the ocean. You can throw a normal server somewhere silly and connect to it wirelessly, this is just the one remarkable exception.


What if you hook up to some of the undersea fiber that's already there? (I still think this is a bad idea)


I don't think that's possible, it would be sheathed in extremely thick steel and not something you can just splice onto.


The NSA disagrees.


Same question in my head: how do they prevent corrosion?

And why "everything is going to need watercooling, which is rather expensive"? If I remember correctly, OVH DC servers already use it.


For marine gear you either use a material which won't readily corrode, or you use sacrificial anodes which are galvanically consumed rather than something you care about.


Seriously. It's a proposition obviously cooked up by someone who has never spent more than a week's vacation by the sea.


> It all started in 2013 when Microsoft employee Sean James, who served on a US Navy submarine, submitted a ThinkWeek paper.


Well Seattle is right by the sea, so..


Microsoft isn't in Seattle, so...


I'm aware - I worked there. But it's pretty close, certainly not far enough to be oblivious to the ocean.


It's a 20 minute drive from Redmond to Seattle.

It's a 3 hour drive from Seattle to the ocean.


I suppose it would depend on what you'd qualify as "the ocean". I lived in Seattle for 4.5 years, and while I don't consider Elliott Bay[1] to be "the ocean" per se, it's pretty close, and you get all the corrosion problems and general exposure to the elements, which I think was the original point of the comparison in this thread.

[1] https://en.wikipedia.org/wiki/Elliott_Bay


Ok, someone has never been to Seattle, it seems. It's not a 3 hour drive to the ocean.


It's amazing that the drive to the northern part of the Olympic Peninsula is so long; Bellingham, which looks much farther, is around a half hour closer.


With minimal traffic, it is: https://tinyurl.com/zbkp89q


Depends on the browser, but there's usually a 1k character limit, which isn't a great deal of state. Perhaps that's counted generously, so you might get 1000 chars * 4 bytes for Unicode, but I wouldn't bet on it being handled the same in different browsers anyway.


Pity there's absolutely no documentation for any of it.


Yeah, I think it's going to be a while before this is well developed. For now, it's kind of like an alpha/beta product as far as I can tell.


They went to all the effort of getting it FCC certified and had injection molds made at considerable expense; this is the final product.

It's all about the miner! Which is useless.

It's all about the software! Which is closed source, and has no documentation.

You'll need one to develop with us! Why?

You can buy and sell services with our network! Only to other people who have $400 to drop on a Raspberry Pi.


Eh, it's a version of their final product. If I remember right this is the gen2 chip already.


Do you realize it costs tens of millions of dollars to spin an 18nm ASIC? They're not doing additional revisions; the (by their website's own admission) loss-producing miner is as good as it gets.


The setup guide involves a serial console and a warning not to put it on a metal surface, because the exposed bottom of the Raspberry Pi will short out.

https://21.co/setup


Oh man, I didn't see that metal surface bit before... I feel like they launched before they were entirely ready. The next gen of these devices might be pretty cool once you are able to sell/host your own services, but $400 seems like a stretch for most people when it looks like a slightly modified Raspberry Pi.

Regardless, I'll be watching 21.co to see what all they end up doing with these things, if nothing else I'm intrigued.


You are serious - wow! Is there a reason they left out $1-5 worth of a basic case for it? 121MM, good times lmao


Why bother with it at all? An authenticated database sounds like... Postgres.


Because it's distributed (Nasdaq going bankrupt won't kill it) and all changes are in the open (no rogue employee stealing your shares). Sure, you can achieve more or less the same thing with a relational DB, or by drawing sticks on a piece of papyrus.


So you think Bitcoin is the first distributed database? No. There is no reason for a private blockchain to exist; it's a nonsensical idea.


This seems to be a case of "you don't understand" - what sources / reading would you suggest covers this issue ?

