Hacker News | pell's comments

> Most of the whining I've heard about DB boils down to inconvenience in situations nobody could have predicted nor helped [..]

I agree with you that there’s a lot of complaining and it does get tiresome. The German train system is one of the most complex in the world and functions more like an interconnected spider web than the typical straight-line systems of other countries.

However, much of this was predicted well in advance. I think that’s why a lot of people are annoyed. Here are some sources if you’re interested in reading more:

(2006) Audit critique regarding the bad state of DB funding after privatization: https://dserver.bundestag.de/btd/16/008/1600840.pdf

(2011) DB is not spending enough on the track network: https://taz.de/Investitionen-in-das-Schienennetz/%215117195/

(2014) State of German train bridges: https://www.zeit.de/mobilitaet/2014-09/deutsche-bahn-bruecke...


> If you find yourself in a similar situation and want out - call emergency services, say chest pain, out of breath, and where you are.

It is if you’re instructing people on how best to lie to emergency services because your train was delayed.


DB’s quality decline started when the move toward privatization happened. They didn’t put money into maintenance, closed a lot of tracks, and ignored all warnings from experts who predicted this exact scenario more than a decade ago. Most DB issues now seem to be connected to a lack of available tracks: a super-fast ICE has to wait for some slow train to clear the path, or there’s an issue on one track and all traffic behind it is backed up until that’s resolved.

I do think they’re working on improving these conditions. But I wish they did more to communicate that. Where is the big marketing campaign explaining how they got here, apologizing, and laying out how they will do better?


That was my first thought too. I replayed it recently. Still a great game.


It’s a complicated question nowadays, as the first generation of networks that were really all about networking have mostly died out or morphed into algorithmic feeds. So the question is whether there’s a market for this classical networking at all. If so, for a network to make sense you do need network effects of some kind, as people would likely not want to pay for a service that shows them as having 0 friends. I do think it’s a difficult bet.

I can however see this for niches and small groups. Something more akin to old school bulletin board forums. In a sense Metafilter works a bit that way already.


That distinction resonates.

If you assume the unit of value is a pre-existing group rather than an individual user, do you think paid access becomes viable earlier, or does it introduce different failure modes?

I’m interested in whether group-first adoption meaningfully changes the cold-start problem or simply moves it.


Was there any concern about giving the LLM access to this return data? Reading your article, I wondered if there could be an approach that limits the LLM to running the function calls without ever seeing the full output itself, e.g., only seeing the start of a JSON string with a status like “success” or “not found”. But I guess it would be complicated to have a continuous conversation that way.


> No model should ever know Jon Snow’s phone number from a SaaS service, but this approach allows this sort of retrieval.

This reads to me like they think that the response from the tool doesn’t go back to the LLM.

I’ve not worked with tools but my understanding is that they’re a way to allow the LLM to request additional data from the client. Once the client executes the requested function, that response data then goes to the LLM to be further processed into a final response.
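Roughly, as I understand it (a minimal sketch assuming an OpenAI-style chat completions API; the model name is a placeholder, lookup_contact is a made-up private lookup, and the field names are from memory, so treat it as illustrative rather than authoritative):

  import json
  from openai import OpenAI

  client = OpenAI()

  def lookup_contact(name: str) -> dict:
      # hypothetical private SaaS lookup
      return {"name": name, "email": "jon@example.com"}

  tools = [{"type": "function", "function": {
      "name": "lookup_contact",
      "description": "Look up a contact by name",
      "parameters": {"type": "object",
                     "properties": {"name": {"type": "string"}},
                     "required": ["name"]}}}]

  messages = [{"role": "user", "content": "What is Jon Snow's email?"}]

  # 1. The model decides it needs the tool and returns a call request
  first = client.chat.completions.create(
      model="gpt-4o-mini", messages=messages, tools=tools)
  call = first.choices[0].message.tool_calls[0]

  # 2. The client executes the requested function locally
  result = lookup_contact(**json.loads(call.function.arguments))

  # 3. The result is sent back to the model, so it does see the data
  messages.append(first.choices[0].message)
  messages.append({"role": "tool", "tool_call_id": call.id,
                   "content": json.dumps(result)})
  final = client.chat.completions.create(
      model="gpt-4o-mini", messages=messages, tools=tools)
  print(final.choices[0].message.content)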


I was confused by that too. I think I've figured it out.

They're saying that a public LLM won't know the email address of Jon Snow, but they still want to be able to answer questions about their private SaaS data, which DOES contain it.

Then they describe building a typical tool-based LLM system where the model can run searches against private data and round-trip the results through the model to generate chat responses.

They're relying on the AI labs to keep their promises about not training in data from paying API customers. I think that's a safe bet, personally.


Makes sense. I agree that it’s probably a safe bet too. Not sure how customers would feel about it though.

It’s also funny how these tools push people into patterns by accident. You’d never consider sending a customer’s details to a 3rd party just so they can send them back, right? And there’s nothing stopping someone from working more directly with the tool call response themselves, but the libraries are set up so you lean into the LLM more than is required (I know you more than anyone appreciate that the value they add here is parsing the fuzzy instruction into a tool call - not the call itself).


> You’d never consider sending a customer’s details to a 3rd party for them just to send them back, right?

I use hosted database providers and APIs like S3 all the time.

Sending customer details to a third party is fine if you trust them and have a financial relationship with them backed by legal agreements.


That would be the normal pattern. But you could certainly stop after the LLM picks the tool and provides the arguments, and not present the result back to the model.
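Sketching that cut-off point (same assumptions as the sibling example: an OpenAI-style API with a placeholder model name and a made-up lookup_contact tool; field names from memory):

  import json
  from openai import OpenAI

  def lookup_contact(name: str) -> dict:
      # hypothetical private lookup; the model never sees its output
      return {"name": name, "phone": "555-0100"}

  tools = [{"type": "function", "function": {
      "name": "lookup_contact",
      "description": "Look up a contact by name",
      "parameters": {"type": "object",
                     "properties": {"name": {"type": "string"}},
                     "required": ["name"]}}}]

  client = OpenAI()
  first = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[{"role": "user", "content": "Get Jon Snow's phone number"}],
      tools=tools)

  # The model only picks the tool and its arguments; we run it ourselves
  # and never send the result back for a follow-up completion.
  for call in first.choices[0].message.tool_calls or []:
      args = json.loads(call.function.arguments)
      result = {"lookup_contact": lookup_contact}[call.function.name](**args)
      # hand `result` straight to our own code instead of the model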


> No you don't.

Growing old together is what couples want. He was afraid he would lose her to cancer, so he’s happy they’re getting older: it means they’re both still here, together.


In the grand view, 1,000 years removed from now, the introduction of digital communication and its network effects will have been pivotal, even if in a negative way (which it very well may be). I just doubt that would then be a point about Facebook specifically, as it’s just a tiny slice of that era, I think.


We are a tiny slice of history. A thousand years from now we may be hazily recalled as the period in which slavery was abolished (edit: sadly enough we probably won't be), electricity and computers were invented, 3 of the world wars occurred, and the first great population explosion and cultural implosion took place. Most electronic information will be lost, so our century will be known as the electronic dark ages. All of this will be studied by the advanced artificial intelligence entities and the sentient cockroaches, the last surviving carbon life forms on earth.


3 world wars?


MySpace was much earlier, as were a few other forerunners.


Similarly overlooked is the philosophy of the Americas before European colonization. A great read I recommend to anyone who’s interested: “Aztec Philosophy: Understanding a World in Motion” by James Maffie

It obviously only focuses on the Aztecs, so it’s hardly a deep dive into all there is to learn.


Thanks for the rec!


It’s that time of the year again when we all realize that relying on AWS and Cloudflare to this degree is pretty dangerous, but then again it’s difficult to switch at this point.

If there is a slight positive note to all this, it’s that these outages are so large that customers usually seem to be quite understanding.


Unless you’re, say, at the airport trying to file a luggage claim … or at the pharmacy trying to get your prescription. I think as a community we have a responsibility to do better than this.


> I think as a community we have a responsibility to do better than this.

I have always felt so, but my opinion is definitely in the minority.

In fact, I find that folks have extremely negative responses to any discussion of improving software Quality.


Merely reducing external dependencies causes people to come out in rashes.

A large proportion of “developers” enjoy build vs buy arguments far too much.


I always see such negative responses when HN brings up software bloat ("why is your static site measured in megabytes").

Now that we have an abundance of compute and most people run devices more powerful than the ones that put man on the moon, it's easier than ever to ship bloated apps, especially when using a framework like Electron or React Native.

People take it personally when you say they write poor quality software, but it's not a personal attack, it's an observation of modern software practices.

And I'm guilty of this, mainly because I work for companies that prioritize speed of development over quality of software, and I suspect most developers are in this trap.


What I find annoying is people making fun of folks who choose to “roll their own.”

The typical argument that I see is homemade encryption, which is quite valid.

However, encryption is just a tiny corner of the surface.

Most folks don’t want to haul in 1MB of junk, just so they can animate a transition.

Well, I guess I should qualify that: Most normal folks wouldn't want to do that, but, apparently, it's de rigueur for today's coders.


I think we have a new normal now though. Most web devs starting now don't know a world without React/Vue/Solid/whatever. Like, sure you can roll your own HTML site with JS for interactivity, but employers now don't seem to care about that; if you don't know React then don't bother.


You aren’t Cloudflare’s customer in these examples. It’s up to the companies that are actually paying for and using the service to complain. Odds are that they won’t care on your behalf, due to how our society is structured.

Not really sure how our community is supposed to deal with this.


“We” are the ones designing the architecture and the technical specs of these services. Making sure they still work when your favourite FAANGMC is down seems like something we can help with.


> If there is a slight positive note to all this, then it is that these outages are so large that customers usually seem to be quite understanding.

Which only shows that chasing five 9s is worthless for almost all web products. The idea is that by relying on AWS or Cloudflare you can push your uptime numbers up to that standard, but these companies themselves are having such frequent outages that customers no longer expect that kind of reliability from web products.


> It’s that time of the year again

It's monthly by now


If I choose AWS/cloudflare and we're down with half of the internet, then I don't even need to explain it to my boss' bosses, because there will be an article in the mainstream media.

If I choose something else, we're down, and our competitors aren't, then my overlords will start asking a lot of questions.


Yup. AWS went down at a previous job and everyone basically took the day off and the company collectively chuckled. Cloudflare is interesting because most execs don’t know about it so I’d imagine they’d be less forgiving. “So what does cloudflare do for us exactly? Don’t we already have aws?”


And if everyone else is down, and you are not, you will get no credit.


Or _you_ aren't down, but a third-party you depend on is (auth0, payment gateway, what have you), and you invested a lot of time and effort into being reliable, but it was all for less than nothing, because your website loads but customers can't purchase, and they associate the problem with you, not with the AWS outage.


Right. Whereas if we get whacked with a random DDoS, that's my fault.


In reality it is not half of the internet. That is just marketing. I've personally noticed only one news site down while others were working. And I guess sites like that will get the blame.


Happy to hear anyone's suggestions about where else to go or what else to do when it comes to protecting against large-scale volumetric DDoS attacks. Pretty much every CDN provider nowadays has stacked up enough capacity to tank these kinds of attacks; good luck trying to combat them yourself these days.


Somehow KiwiFarms figured it out with their own "KiwiFlare" DDoS mitigation. Unfortunately, all of the other Cloudflare-like services seem exceptionally shady, will be less reliable than Cloudflare, and probably share data with foreign intelligence services I trust even less than the ones Cloudflare possibly shares data with.


Is a DDOS more frequent and/or worse than stochastic CDN outages?


Anubis and/or Bunny are good alternatives (or a good combination), depending on your exact needs:

- https://anubis.techaro.lol/

- https://bunny.net/


Unfortunately, Anubis doesn't help when my pipe to the internet isn't fat enough to just eat up all the bandwidth the attacker has available. Renting tens of terabits of capacity isn't cheap, and DDoS attacks nowadays are on that scale. BunnyCDN's DDoS protection is unfortunately too basic to filter out anything that's even slightly more sophisticated. Cloudflare's flexibility in terms of custom rulesets, and their global pre-trained rulesets (based on attacks they've seen in the past), is imo just unbeatable at this time.


The Bunny Shield is quite similar to the Cloudflare setup. Maybe not 100% overlap of features but unless you’re Twitter or Facebook, it’s probably enough.

I think, at the very least, one should plan for the ability to switch to an alternative when your main choice fails… which, together with AWS and GitHub, is a weekly event now.


We live in a world of mass internet surveillance. DDoS attacks like this are not very common, partly because the people who run them keep going to jail.


Why do people on a technical website suggest this? It's literally the same snake oil as Cloudflare. Both have an endgame of total web DRM; they want to make sure users "aren't bots". Each time the DRM is cracked, they will increase the complexity of the "verifier". You will be running arbitrary code in your big 4 browser to ensure you're running a certified big 4 browser, with 10 trillion man-hours of development, on a certified OS.


Because there is a real problem that needs to be solved one way or another.


Anubis doesn't solve anything, bud.


bunny.net is not reachable for me either... really funny

https://imgur.com/a/8gh3hOb


All the edges are gone! :)


I clicked the image thinking I was seeing the message you were getting (geoblocked in the UK), then realised I'd clicked an imgur link :facepalm:

(Note: Zero negative sentiment towards imgur here)


Just accept that a DDoS might happen and that there's nothing you can do about it. It's fine, it's just how the Internet works.


That was possible when a DDoS was usually still an occasional attack by a bad actor.

Most of the time I get DDoSed now, it's either Facebook directly, something-something Azure, or some random AI.


That sounds like an app-level (D)DoS, which is generally something you can mitigate yourself.


It's harder when it's a new group of IPs and happens 2-3x every month.


And if you do rule-based blocking, they just change their approach. I am constantly blocking big corps these days; barely any of the work involves normal bad actors.

And lots of real users' time gets wasted on captchas.


How (or to what end) would Facebook want to directly DoS someone?


What do they even have a spider for? I've never seen any actual traffic with Facebook as the source. I don't understand it either, but it's their official IPs, their official bot headers, and it behaves exactly like someone who wants my sites down.

Does it make sense? Nah. But is it part of the weird reality we live in? Looks like it.

I have no way of contacting Facebook. All I can do is keep complaining on hackernews whenever the topic arises.

Edit:// Oh, and I see the same with Azure; however, there I have no list of IPs to verify it's official, it just looks like it.


I got DoS'd by them once, email rather than HTTP traffic though. A quick slip of their finger and bam, low-cost load testing.


So accept that your customers won't be able to use your services whenever some russian teenager is bored? Yeah, good luck with justifying that choice.


And how often does that happen?


For the service I'm responsible for, 4 times in the last 24 hours.


Congratulations, you're the exception rather than the norm.


Oh no, we had 30 minutes of downtime this year :(


5 9's is like 7 minutes a year. They are breaking SLAs and impacting services people depend on.

Tbh though, this is sort of all the other companies' fault: "everyone" uses AWS and CF, and so others follow. Now not only are all your eggs in one basket, so are everyone else's. When the basket inevitably falls into a lake....

Providers need to be more aware of their global impact in outages, and customers need to be more diverse in their spread.


99.999% availability is around 5 minutes or so of downtime per year.
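Back of the envelope, assuming a 365-day year:

  (1 - 0.99999) * 365 * 24 * 60   # Python: ≈ 5.26 minutes of downtime per year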


> Providers need to be more aware of their global impact in outages

So you think the problem is they aren't "aware"?


These kinds of outages continue to happen and continue to impact 50+% of the internet. Yes, they know they have that power, but they don't treat changes as such, so no, they aren't aware. Awareness would imply more care in operations like code changes and deployments.

Outages happen, code changes occur; but you can do a lot to prevent these things at a large scale, and they simply don't.

Where is the A/B deployment preventing a full outage? What about internally: where was the validation before the change? Was the testing run against a prod-like environment, or something that once resembled prod but hasn't in ages?

They could absolutely mitigate impacting the entire global infra in multiple ways, and haven't, despite their many outages.


They are aware. They just don't want to pay the cost side of the cost-benefit tradeoff. Education won't help - this is a very heavily argued tradeoff in every large software company.


I do think this is tenable as long as these services are reliable. Even though there have been some outages, I would argue that they're incredibly reliable at this point. If this ever changes, though, moving to a competitor won't be as simple as pushing a repository elsewhere, especially for AWS. I think that's where some of the potential danger lies.


> 30 minutes of downtime

> this is tenable as long as these services are reliable

Do you hear yourself? This is supposed to be a distributed CDN. Imagine if HTTP had 30 minutes of downtime a year.

and judging by the HN post age, we're now past minute 60 of this incident.


> and judging by the HN post age, we're now past minute 60 of this incident.

Huh? It's been back up during most of this time. It was up and then briefly went back down again but it's been up for a while now. Total downtime was closer to 30 minutes


twitter still down for me


Twitter is down while Mastodon is proudly and strongly still standing up. I knew this day would come.


i also can host apps for 100 users


> especially for AWS

CF can be just as difficult to migrate off of, if not more so, especially when using things like Durable Objects.

