It seems possible the timing attack could have been (mostly) outside the Tor network:
They seemingly knew the guy's Ricochet username, which includes the Tor hidden service ID. They then ordered the ISP to monitor connections to Tor entry nodes, watched when he came online, and cross-checked that with the ISP's logs. The article mentions that at one point they identified the guard node of his Ricochet client. That might have been when they were sure.
I think something similar happened in a case a long time ago.
That is not cheap. And we have very high pressure here, and not only cave and rock but machinery around it. Repeatedly pushing air in and letting it out again will degrade that expensive equipment.
In recent decades most people were looking for concise information and contributed concise information. A wiki or forum was great for that!
Younger generations, though, arguably have fewer (stable) relationships. So a community around a FOSS project that also provides social interaction, where experiences are exchanged and not just information, will work better for these generations. It makes them feel connected.
I mean, come on, perfect. But it seems even M-W has done what a reasonable dictionary should(?) do and updated the entry with a "3: (computing)" sense to reflect the modern usage https://www.merriam-webster.com/dictionary/hallucination
No, hallucination is a better term. It conveys the important fact -- "these chatbots will just confidently state things as facts even though they're completely made up" -- in a way that everyone understands. If you used the term "confabulation" you'd have to start by explaining what "confabulation" means any time you wanted to talk about a chatbot making something up.
It's not even more accurate. The problem with hallucinations isn't a "gap in memory". The fundamental problem is that the chatbots are "plausible English text" generators(*), not intelligent agents. As such, no existing term is going to fit perfectly -- it neither hallucinates nor confabulates, it just generates probable token sequences(*) -- so we may as well use a word people know.
(*) I know it's slightly more complicated, especially with RLHF and stuff, but you know what I mean.
Especially since they admit it was a DDoS attack. What I find outrageous is, first, that they charge for incoming traffic (which is often free with other providers), but also $55 per 100 GB. For comparison, Hetzner charges €1 per TB of outgoing traffic, while incoming is free.
So even a reduction to 0.2% would have been possible. I honestly don't understand why anyone feels comfortable overpaying so much, especially when there is no configurable spending limit.
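The "0.2%" figure follows from a back-of-the-envelope comparison of the two rates quoted above (the EUR/USD conversion of ~1.1 is my assumption, not from the comment):

```python
# Rough per-TB comparison of the two quoted traffic prices.
# Assumption: EUR/USD exchange rate of ~1.1 (not stated in the thread).
netlify_per_tb = 55 / 100 * 1000         # $55 per 100 GB -> $550 per TB
hetzner_per_tb = 1 * 1.1                 # EUR 1 per TB -> ~$1.10 per TB
ratio = hetzner_per_tb / netlify_per_tb  # fraction of Netlify's rate
print(f"Netlify: ${netlify_per_tb:.0f}/TB, Hetzner: ~${hetzner_per_tb:.2f}/TB")
print(f"Hetzner costs ~{ratio:.2%} of Netlify's rate")
# -> Hetzner costs ~0.20% of Netlify's rate
```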
Eh, I wouldn’t say that’s necessarily the case. AWS support, for example, tends to be really good about waiving charges for things that are clearly your mistake, like an unused instance that you forgot to turn off for a couple months. That’s not because hosting instances doesn’t actually cost Amazon anything! It’s because they want to keep you as a customer even if it loses them a bit of money right now.
In the Netlify case, though, insisting that this person still pay 5% is downright insulting. I’m sure they’re taking a hit already - just waive the whole thing.
> AWS support, for example, tends to be really good about waiving charges for things that are clearly your mistake, like an unused instance that you forgot to turn off for a couple months.
This is an admission that their UX sucks and makes it hard to know what state your account is in and what you're paying for. They waive the fees because a few high profile cases of people paying thousands due to the AWS console being awful would drive a lot of customers away.
Nowadays, for customers spending millions of dollars, you'd expect (or at least Amazon would expect) the customer to have a FinOps department already working on getting the most bang for their buck and minimising their spend, and they would jump to another platform in a heartbeat if they thought they could save money. It's not unreasonable to think you don't need to do these customers any favours to keep their business, because they're big enough to look after themselves.
For smaller customers, the friendliness of customer support and the flexibility to help them if they make mistakes is much more likely to be a retention consideration. And who knows when a company spending 3 digits a month becomes a customer spending 6 digits a month? You want to be the provider of choice in case the company grows.
AWS will save us so much money! We don't have to pay for people to look after hardware! ... just pay for people to set up AWS, and maintain AWS, and make sure we're not paying thousands extra for AWS...
Yeah, exactly. I’m talking “I got billed $15 for an instance I haven’t used for the last few months. Can you refund me?”, not “You guys mind writing off a million or two?”
Is it, though? We've been getting a lot of pushback for months, even for things that weren't really completely our fault (and were made worse by the horrible lag of Cost Explorer), or even for things that were AWS bugs. Maybe now it's official policy, but it's definitely been hard to get refunds for a while now. A year and a half ago they were throwing tens of thousands of dollars of credits at us just to play with new services.
I wonder if you’ve hit some kind of internal limit. I don’t know if such things exist but I’ve noticed a pattern around how discounts and credits are allocated.
I do feel your pain though. Managing AWS costs can be a full time job itself.
> That’s not because hosting instances doesn’t actually cost Amazon anything
Except it doesn't cost them anything. The marginal cost of keeping your single instance running is $0 (unless they were 100% out of capacity and could have sold that instance to someone else at full price or spot price).
But that's not what's happening: they aren't keeping a full host for you.
Your argument is like saying that a bus passenger costs the gas needed to power the bus, but that's never the case: the bus would be running no matter what. Symmetrically, the VM host would be up no matter what you did with your instance.
You assume that the hardware would be unneeded, but that's a very strong assumption.
It would be very bad for any cloud provider to leave hosts with only one VM running on them, and you can be pretty sure only a very small minority of their fleet ends up in the situation where shutting down a single VM would allow the entire host to shut down, because that would mean the host was vastly under-used in the first place.
As far as I know, most cloud hosts don't actually support automatically moving live VMs, so I think it's fairly common for a host to be left running a single VM.
At least AWS has never supported this, and in fact they may require you to reboot an instance occasionally so it can be moved to a new hardware host (typically when they are upgrading their hardware).
But why are you talking about moving VMs?! It looks like you're adding far-fetched speculation at every step of your reasoning.
The simple way to deal with this doesn't require moving VMs at all: you just allocate newly spawned VMs to existing hosts with available room! When you do so (and they obviously all do), you end up with little unused hardware…
Say you have 3 hosts, each with a capacity of 10 VMs. At some point you have 28 running VMs - 10 on host1, 10 on host2, 8 on host3. Someone then closes down 2 of the VMs on host1, and 7 of the VMs on host3.
Now you have 19 VMs running but need to keep all 3 hosts powered. Without live VM migration, you are forced to keep host3 running just because 1 VM is running on it, even if that VM is idle. So this one idle VM is responsible for all of host3's energy consumption, and will remain so until at least 3 more VMs get started (since there is room for only 2 more on host1).
If you did have live VM migration, or if the idle VM were powered down instead of left running, you could move the VM to host1, shut down host3 completely, and only power it back up when needed for new VMs.
This is equivalent to the problem of memory fragmentation. Even though overall usage is low, if host usage is highly fragmented and you aren't allowed to move used memory around (compacting), you can end up consuming far more than actually needed.
Except you don't have 3 hosts but 3 thousand, and while you're stopping those 9 VMs somebody else is starting 5 or 15 new ones!
Yes, it is similar to memory fragmentation in some ways, but your argument is like saying an integer stored on the heap costs a full memory page! You realize that's nonsense. Sure, in extreme edge cases it can, but that's not a good way to measure the memory footprint of an integer!
Being able to move VMs is nice, as it allows better host utilization, but it doesn't mean hosts often end up with a single idle VM!
Then you need to account for their low share among idle VMs when measuring how much electricity an idle VM is responsible for. If it's only the case for 1% of idle VMs, then you should count only 1% of a host's electric power per idle VM (plus the small fraction of a host's CPU power that an idle VM consumes). In any case, it's going to be very small (~$20/year)[1], and "it costs them nothing" is a good approximation of that, or at least a much better one than assuming that what they charge you reflects an expense on their side (which is the point argued by rafram at the very start of this discussion).
[1]: let's say 10 W, which at $0.20 per kWh[2] ends up costing $17.52 for an entire year!
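The footnote's arithmetic checks out (using the 10 W and $0.20/kWh figures from the comment; both are assumptions of that estimate, not measured values):

```python
# Yearly cost of drawing a constant 10 W at $0.20/kWh
# (both figures taken from the footnote above).
watts = 10
price_per_kwh = 0.20
hours_per_year = 24 * 365                      # 8760 h
kwh_per_year = watts * hours_per_year / 1000   # 87.6 kWh
cost = kwh_per_year * price_per_kwh
print(f"{kwh_per_year} kWh/year -> ${cost:.2f}/year")
# -> 87.6 kWh/year -> $17.52/year
```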
What does "idle" mean? Both Linux and Windows, even when not running any active software, still do computation and even network traffic (disk cache wrangling, indexing, checking for updates, NTP clock syncing, etc.), and that requires electricity.
It's a very low cost, especially for a VM on a host that also runs other VMs, but it's not 0. And if it happens to be the last VM preventing a hardware server from powering off completely, then it's actually quite far from 0.
At that scale, probably. Especially since AWS would offer you discounts off their public prices too. Netlify et al. probably stop making sense once they cost more than a few engineering hours plus the cost of AWS, Azure, or GCP.
> it shows how disconnected this is from their real bandwidth cost
It's a value-added service; they don't trade bandwidth as a commodity. So it's an unfair characterisation.
Plus, if you dive deeper: bandwidth doesn't cost anything, because bandwidth is just pulsing some light in a glass fiber and applying some minuscule voltage to a metal wire. Okay, maybe it costs some amount of electricity, but all of this is just a business model for paying off capital expenditure through time-share arrangements. People can have all kinds of models for this: for example, you can come together with others, or pay it all yourself, to install the equipment and have free bandwidth for the lifetime of the equipment.
It's all just arrangements to cover the capital investment and earn something on top of it. That's not a scam. A scam would be if they didn't account correctly for the timeshare usage or induce usage to boost payments.
I really don't get your point. If you're a hosting provider, the very things you're selling are bandwidth (and disk space). Everything else is a value-added service.
I disagree; they are not a colocation service that happens to rent servers. They are an opinionated platform for deploying web applications in a specific way. Bandwidth happens to be a necessity for doing that, and also a useful metric for usage-based billing.