
Hetzner randomly shuts down one of my servers every 2-3 months.


I am sorry, what? I have been with Hetzner for over 10 years, hosting multiple servers, without issue. To my knowledge there has never been a shutdown without notice on bare metal servers, and even shutdowns with notice are rare - maybe once every two years.


Hetzner suspended the account of a non-profit org I voluntarily supported, without explaining the reason or giving us a chance to take our data out. The issue was resolved only after we brought it into the public space. Even then, they first tried to pretend we weren't actually their customers.


Right -- regrettable. And also a very rare anecdote.


I've been using Hetzner for years and what happens every 3-4 years is that a disk dies. So I inform them, they usually replace it within an hour and I rebuild the array, that's all.

Recently I've been moving most projects to Hetzner Cloud, it's a pleasure to work with and pleasantly inexpensive. It's a pity they didn't start it 10 years earlier.


Nice of them to test your failover for you.


I had the same issue: I sent them a ticket, they swapped the server, and it has worked fine since then.


Yeah you do have to have redundancy built in, but we don't get random shutdowns.


Liquidation contracts and arbitrage contracts do check the caller and will not allow execution by non-approved senders. This raises the bar: you can only front-run contracts that you could implement and deploy yourself.

If anyone could just replace an address and execute a profitable transaction by being first on existing contracts, surely miners would be doing it already, no?
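To make the caller check concrete, here is a minimal sketch in Python (the "contract" logic and the addresses are purely hypothetical, invented for illustration - real contracts implement this as an on-chain modifier):

```python
# Hypothetical illustration of the access-control pattern described above:
# the contract only executes for pre-approved callers, so simply replaying
# the call with a different sender address yields nothing.
APPROVED_SENDERS = {"0xKeeperBot"}  # hypothetical approved address

def execute_liquidation(sender: str) -> str:
    if sender not in APPROVED_SENDERS:
        raise PermissionError("caller not approved")  # analogous to a revert
    return "liquidation executed, profit paid to the approved caller"

print(execute_liquidation("0xKeeperBot"))      # works for the approved bot
try:
    execute_liquidation("0xFrontRunner")       # copied tx with a swapped sender
except PermissionError as exc:
    print("reverted:", exc)
```

Copying such a transaction with a different sender just reverts, so capturing the profit means writing and deploying your own contract that does the work itself.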


> If anyone could just replace an address and execute a profitable transaction by being first on existing contracts, surely miners would be doing it already, no?

To a large degree not yet.


I stream live video (no audio) and watch how my dog chills at home while I am at work :) Sometimes I catch him on the sofa!


I don't want to hijack the discussion away from IPFS, but Swarm has good ideas with respect to dynamic content if you're interested in how that might work in a decentralized setting - for example, see Swarm Feeds presented here: https://www.youtube.com/watch?v=92PtA5vRMl8


Swarm, and IPFS together with Filecoin, try to address the same problem - persistent data storage in a decentralised network.

Swarm is not at all "working already" - the incentivisation layer that rewards nodes for storing data for other users is not implemented; it is currently mostly theoretical and a work in progress.

IPFS is more mature in comparison to Swarm, but the underlying architecture is rather different.


What is Swarm's intended incentivisation layer, and where can I read about their plans? It seems like all documentation, including the plans, is outdated, and I was ignored in their Gitter chatroom, where the devs wanted to talk about dev things and outreach people seemed nonexistent.

I do see things being stored on Swarm without incentives, like plain text.


Swarm documentation might not be perfect, but it is not outdated - https://swarm-guide.readthedocs.io

I believe the chapters about PSS, Swarm Feeds, ENS, Architecture, among others, are mostly up-to-date.

You can read about the incentivisation layer at https://swarm-gateways.net/bzz:/theswarm.eth/ethersphere/ora...

Currently incentivisation is not integrated or implemented in Swarm, so a user has no guarantees about what happens to their uploaded content. If the node hosting it disconnects from the network, it will be gone. The plan is to address this through the sw^3 protocol suite and/or erasure coding.

Regarding plain text: it doesn't really matter what bytes you store in Swarm. Encryption is implemented, so you can store encrypted or non-encrypted bytes; this has nothing to do with incentives for persistent storage.

We try to do outreach and answer community questions when possible, but the team is not big and this is currently done on a best-effort basis. We could definitely improve on that front, I agree.


Sorry for going off-topic, but have you ever used TLA+ to verify a non-trivial piece of production software? I recently found out about it and was wondering if there are good public use-case details out there. I know Amazon has used it for some of their production systems, such as S3 and DynamoDB, but I couldn't find many details.


This is the one paper I know of: http://research.microsoft.com/en-us/um/people/lamport/tla/am...

I know of some small scale use at Facebook (where I work), where it has been used to verify a non-trivial protocol under various combinations of failures.


I've been referred to this paper before when asking this question.

Whilst it does discuss Amazon's use of TLA+ in nontrivial circumstances, it's still a long mental leap for me to get from there to "the TLA+ for AWS S3 looked like this, and this is the sort of bug it helped quash".

It'd be great to see some of that sort of thing.


If you want to see what TLA+ specifications of distributed systems look like, there's this lecture series, each lecture covering a different distributed algorithm: https://github.com/tlaplus/DrTLAPlus

The kind of bug you find when verifying distributed systems is: "if this machine was in that state and that message was in transit, and then the other machine failed, then data was lost".

Basically, because formal verification of this kind ensures the specification is correct, it will find any kind of bug there is. But it's the bugs that either result from a severe design flaw, or that would be hard to find in testing yet have serious consequences, that make this worthwhile.
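As a rough illustration of that kind of bug hunt, here is a toy, hand-rolled state-space exploration in Python (not TLA+/TLC itself - the naive primary/backup protocol and its actions are invented for this example). It exhaustively enumerates interleavings of writes, message delivery, message loss and crashes, and prints the trace that violates a durability invariant:

```python
from collections import deque

# State: (primary_has, backup_has, msg_in_flight, client_acked, primary_up)
INIT = (False, False, False, False, True)

def next_states(s):
    primary_has, backup_has, in_flight, acked, primary_up = s
    out = []
    # Buggy write path: the primary acks the client immediately and only
    # then sends the replication message to the backup.
    if primary_up and not primary_has:
        out.append(("write+ack", (True, backup_has, True, True, True)))
    if in_flight:   # the replication message arrives at the backup
        out.append(("deliver", (primary_has, True, False, acked, primary_up)))
    if in_flight:   # the network drops the in-flight message
        out.append(("drop", (primary_has, backup_has, False, acked, primary_up)))
    if primary_up:  # the primary crashes, losing its copy
        out.append(("crash", (False, backup_has, in_flight, acked, False)))
    return out

def durable(s):
    primary_has, backup_has, in_flight, acked, primary_up = s
    # Invariant: once the client was acked, the data still exists somewhere.
    return (not acked) or (primary_up and primary_has) or backup_has or in_flight

# Breadth-first search over all reachable states, keeping traces for counterexamples.
frontier, seen = deque([(INIT, [])]), {INIT}
while frontier:
    state, trace = frontier.popleft()
    if not durable(state):
        print("Invariant violated after:", " -> ".join(trace))
        break
    for action, nxt in next_states(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append((nxt, trace + [action]))
```

This prints a counterexample trace like "write+ack -> drop -> crash". TLC does the same thing at a much larger scale, with liveness checking and state-space reduction on top, but the flavour of the counterexamples is exactly this kind of trace.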


Thanks. This is also the paper I was referring to.


Yes (well, it's not in production just yet; it's a fairly complex distributed database, for which I verified consistency in the face of faults), and I gotta say -- I enjoyed every minute of it!

It has some objective qualities and some subjective ones. The objective ones (as confirmed by others) are that it's easy to learn and you can get productive very fast -- I started specifying the large project immediately after two weeks of going through Lamport's tutorial[1]. The other objective benefit is that it has a model checker. Deductively verifying such large specifications (while possible) is simply infeasible (or, rather, unaffordable and overkill) -- regardless of the tool you use.

The subjective benefit is that the conceptual model quickly clicked for me, personally. I've never liked the aesthetics of modeling arbitrary computations as functions, and the TLA formalism handles sequential, concurrent and parallel computations in the same, extremely elegant way (and allows using the same proof technique regardless of the kind of algorithm, if you want to use the proof assistant). It made me understand what computation is, and how different computations relate to one another, a lot better. The concept -- without the technicalities of TLA+ -- is nicely overviewed in [2].

That said, formal methods are never easy because rigorous reasoning about complex algorithms isn't easy, but it beats hunting down bugs (especially in distributed systems) that are very hard to catch and may be catastrophic; it is also very satisfying. I've found TLA+ to add very little "accidental complexity" to this difficult problem.

[1]: https://research.microsoft.com/en-us/um/people/lamport/tla/h...

[2]: https://research.microsoft.com/en-us/um/people/lamport/pubs/...


Is the database and/or specification open source? A formal verification of a distributed system sounds very interesting.


No; not yet, at least. But this is a lecture series about using TLA+ for distributed algorithms: https://github.com/tlaplus/DrTLAPlus

There's also a Coq framework called Verdi (http://verdi.uwplse.org) for formalizing distributed systems, but I don't know much about it.


Nice, thanks!


Thanks a lot! I will try the `hyperbook` as a starting point.


FWIW, there is http://taxime.to/ in Bulgaria, which has a reputation system and was embraced by existing taxi companies - you get a well-ranked driver who is licensed and insured, with overall quality much higher than random taxis in the country.


Just wanted to say that I wrote the 2nd article you mentioned, and I fall more into the category of "engineer that just wants to play with things" rather than "run a mission-critical business", so take my article with a grain of salt. Thanks for the good summary!


1) Testing a Dockerfile locally and then scp'ing it to the server doesn't guarantee that it will build successfully on the server. However, if you build an image successfully and push it to a repository, that exact image will work. Based on this, you can decide which approach works for your setup. For a production setup, I would say you should use tested images instead of hoping that the Dockerfile will build correctly.

3) As far as I understand, your problem is that both containers would be running their own nginx and would each need port 80, for example. If this is what you mean, you could just EXPOSE port 80 from within the container, and it will automatically be mapped to a random host port like 43152. The two containers would be mapped to different random ports (for example 43152 and 43153). You could then install Hipache and route different domain names/sites to different containers, essentially putting a Hipache proxy in front of your Docker setup (see the sketch at the end of this comment).

EDIT: There is also a project called Shipyard, which is Docker management... what I described above is called "Applications" inside Shipyard.

[0] https://github.com/shipyard/shipyard
[1] https://github.com/dotcloud/hipache
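A minimal sketch of point 3, using the present-day Docker SDK for Python rather than the CLI of that era (the nginx image and the SDK calls are my choices for illustration, not part of the original setup):

```python
import docker

client = docker.from_env()

# Run two containers that both listen on port 80 internally; passing None as
# the host port lets Docker pick a random free host port for each of them.
web1 = client.containers.run("nginx", detach=True, ports={"80/tcp": None})
web2 = client.containers.run("nginx", detach=True, ports={"80/tcp": None})

for c in (web1, web2):
    c.reload()  # refresh attrs so the assigned host port becomes visible
    host_port = c.attrs["NetworkSettings"]["Ports"]["80/tcp"][0]["HostPort"]
    print(c.short_id, "-> host port", host_port)
```

A reverse proxy like Hipache (or nginx itself) then maps each domain name to the corresponding random host port.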


Same problem here. Sideprojectors tries to solve this, but hasn't worked for me so far.

http://www.sideprojectors.com


Any particular ideas about what's wrong with it?

