010101010101's comments

Pump and dump is not the same as competition resulting in winners and losers, it’s a grift by the losers to profit at the expense of users through deception.

And this is why the OOP article makes zero sense. How is Cursor a grift to profit at the expense of users? Users use Cursor because they want to write code faster. Whether writing code faster is an inherently good thing is up to the users. Was Visual Studio (premium version once sold at ~$5000, btw) a pump & dump?


It confirms that indeed React Native is used, and not React.js/WebView, in case someone got confused.


It's used for a specific component in the Start menu; it doesn't power the entire Start menu.


> If you don't need what kafka offers, don't use it.

This is literally the point the author is making.


It seems like their point was to criticize people for using new tech instead of hacking together unscalable solutions with their preferred database.


That wasn't their point. Instead of posting snarky comments, please review the site guidelines:

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."


But honestly, isn't that the strongest plausible interpretation according to the "site guidelines"? When one explicitly says that one camp chases "buzzwords" and the other chases "common sense", how else are you supposed to interpret it?


> how else are you supposed to interpret it?

It's not so hard. You interpret it as it is written. Yes, they say one camp chases buzzwords and another chases common sense. Critique that if you want to. That's fine.

But what's not written in the OP is some sort of claim that Postgres performs better than Kafka. The opposite is written. The OP acknowledges that Kafka is fast. Right there in the title! What's written is the OP's experiments and data showing that Postgres is slow but can be practical for people who don't need Kafka. Honestly, I don't see anything bewildering about it. But if you think they're wrong about Postgres being slow but practical, that's something nice to talk about. What's not nice is to post snarky comments insinuating that the OP is asking you to design unscalable solutions.
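For anyone curious what "practical" looks like concretely, here's one common shape of the Postgres-as-a-queue pattern (a minimal sketch, not necessarily the OP's exact setup; the table, columns, and connection string are made up for illustration):

    # pip install psycopg2-binary
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string

    # Atomically claim and remove one message. Concurrent consumers skip
    # rows another transaction has locked, so they never block each other.
    with conn, conn.cursor() as cur:
        cur.execute("""
            DELETE FROM queue
            WHERE id = (
                SELECT id FROM queue
                ORDER BY id
                FOR UPDATE SKIP LOCKED
                LIMIT 1
            )
            RETURNING id, payload
        """)
        row = cur.fetchone()  # None when the queue is empty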


Which is crazy, because Kafka is like olllld compared to competing tech like Pulsar and Redpanda. I'm trying to remember what year I started using v0.8; it was probably the mid-to-late 2010s?


But in this case, it is like saying "You don't need a fuel truck. You can transport 9,000 gallons of gasoline between cities by gathering 9,000 1-gallon milk jugs and filling each, then getting 4,500 volunteers to each carry 2 gallons and walk the entire distance on foot."

In this case, you do just need a single fuel truck. That's what it was built for. Avoiding a purpose-built tool to achieve the same result actually is wasteful. You don't need 288 cores to achieve 243,000 messages/second. You can do that kind of throughput with a Kafka-compatible service on a laptop.
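To make "on a laptop" concrete, here's a minimal producer sketch that works against any Kafka-compatible broker, Redpanda included (the broker address, topic name, and payload are assumptions, and real throughput depends heavily on batching and hardware):

    # pip install confluent-kafka; assumes a broker on localhost:9092
    from confluent_kafka import Producer

    p = Producer({"bootstrap.servers": "localhost:9092"})
    for _ in range(1_000_000):
        p.produce("events", value=b"x" * 100)  # hypothetical topic/payload
        p.poll(0)  # serve delivery callbacks so the local buffer drains
    p.flush()  # block until all queued messages are delivered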

[Disclosure: I work for Redpanda]


I'll push the metaphor a bit: I think the point is that if you have a fleet of vehicles you want to fuel, go ahead and get a fuel truck and take on that expense. However, if you only have one or two vehicles, a couple of jerry cans you probably already have plus a pickup truck is probably sufficient.


Getting a 288-core machine might be easier than setting up Kafka; I'm guessing that it would be a couple of weeks of work to learn enough to install Kafka the first time. Installing Postgres is trivial.


"Lots of the team knows Postgres really well, nobody knows Kafka at all yet" is also an underrated factor in making choices. "Kafka was the ideal technical choice but we screwed up the implementation through well-intentioned inexperience" being an all too plausible outcome.


Indeed, I've seen this happen first hand where only one guy really "knew" Kafka, and it was too big a job for just him. In that case it was fine until he left the company, and then it became a massive albatross and a major pain point. In another case, the eng team didn't really have anyone who "knew" Kafka but used a managed service thinking it would be fine. It was until it wasn't, and switching away is not a light lift, nor is mass-educating the dev team.

Kafka et al definitely have their place, but I think most people would be much better off reaching for a simpler queue system (or for some things, just using Postgres) unless they really need the advanced features.


I'm wondering why there wasn't any push for the Kafka guy to share his knowledge within his team, or with other teams.


Multiple factors (none of them a good excuse, just reality):

* Lack of interest from other team members, which translated into doing what they thought was a sufficiently minimal amount of knowledge transfer

* An (unwise) attitude that "it's already set up and configured, and terraformed, so we can just acquire that knowledge if and when it's needed"

* The Kafka guy left a lot sooner than anybody really expected, leaving little time for handover and practically no documentation

* The rest of the team was already overwhelmed with other responsibilities and didn't have much bandwidth available

* Nobody wanted to be the person/people who ended up "owning" it, so there was a perverse incentive


Interesting, thanks!


This is the crux of my point.

Postgres is the solution in question in the article because I simply assume the majority of companies will start with Postgres as their first piece of infra, and that is often the case. If not - MySQL, SQLite, whatever. Just optimize for the thing you know, and see if it can handle your use case (often you'll be surprised).


The only thing that might take "weeks" is procrastination. Presuming absolutely no background other than general data engineering, a decent beginner online course in Kafka (or Redpanda) will run about 1-2 hours.

You should be able to install within minutes.


I mean, setting up Zookeeper, tweaking the kernel settings, configuring the hardware, the kind of stuff mentioned in guides like https://medium.com/@ankurrana/things-nobody-will-tell-you-se... and https://dungeonengineering.com/the-kafkaesque-nightmare-of-m.... Apparently you can do without Zookeeper now, but that's another choice to make, possibly doing careful experiments with both choices to see what's better. Much more discussion in https://news.ycombinator.com/item?id=37036291.

None of this applies to Redpanda.


True. Redpanda does not use Zookeeper.

Yet, to be fair to the Kafka folks, Zookeeper is no longer part of Kafka at all as of the Apache Kafka 4.0 release in early 2025:

"Kafka 4.0's completed transition to KRaft eliminates ZooKeeper (KIP-500), making clusters easier to operate at any scale."

Source: https://developer.confluent.io/newsletter/introducing-apache...
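For reference, the ZooKeeper-free single-node quickstart is now roughly this (commands from the 3.x KRaft quickstart; exact paths may differ in 4.0, so check the current docs):

    KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
    bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
    bin/kafka-server-start.sh config/kraft/server.properties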


Right, I was talking about installing Kafka, not installing Redpanda. Redpanda may be perfectly fine software, but bringing it up in that context is a bit apples-and-oranges since it's not open-source: https://news.ycombinator.com/item?id=45748426


Good on you for being fair in this discussion :)


Just use Strimzi if you're in a K8s world (disclosure: I used to work on Strimzi for RH, but I still think it's far better than Helm charts or fully self-managed, and far cheaper than fully managed).
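For a taste, a minimal ephemeral-storage Kafka custom resource looks roughly like this (a sketch from memory of the older ZooKeeper-based shape; recent Strimzi uses KRaft and node pools, so check the current docs):

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
        storage:
          type: ephemeral
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral

The operator watches resources like this and handles broker config, certificates, and rolling restarts for you.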


Thanks! I didn't know about Strimzi!


Even though I'm a few years on from Red Hat, I still really recommend Strimzi. I think the best way to describe it is "a sorta managed Kafka". It'll make things that are hard in self-managed Kafka (like rolling upgrades) easy as.


>> If you don't need what kafka offers, don't use it.

> This is literally the point the author is making.

Exactly! I just don't understand why HN invariably bubbles up the most dismissive comments to the top, the ones that don't even engage with the actual subject matter of the article!


If you have a mechanism that can prove arbitrary program correctness with 100% accuracy you’re sitting on something more valuable than LLMs.


So... a human-powered LLM user?


For sure, I've never seen a human write a bug or make a mistake in programming


that's why we created LLMs for that


You’re absolutely right!


Why stop at one? Imagine how much safer we’d be with TWO cops per citizen! And all those extra jobs that would be created!


And then cops for the cops!


I don't think you know how policing works in America. To cops, there are sheep, sheepdogs, and wolves; they are sheepdogs protecting us sheep from the criminals. Nobody needs to watch the sheepdogs!

But let's think about their analogy a little more: sheepdogs and wolves are both canines. Hmm.

Also "funny" how quickly they can reclassify any person as a "wolf", like this student. Hmm.


> Nobody needs to watch the sheepdogs!

A sheepdog that bites a sheep for any reason is killed.


If I accidentally make a list of people's private information publicly accessible without their permission, and you access it, which one of us is liable?


Those two things don’t sound mutually exclusive to me.


> it's much more likely that someone wins the lottery this week (~100% in fact) than that someone gets struck by lightning this week

No it isn’t? Not only are the individual odds of winning the lottery lower than the individual odds of being struck by lightning, but far more people are exposed to lightning on a weekly basis than participate in any given lottery.


You can buy more than one lottery ticket. You can't buy more than one per-person chance of getting struck by lightning over a period.


You can buy thousands of lottery tickets and it won't meaningfully impact your odds of winning though. You can also go stand outside in a field with a metal rod in your hand during a thunderstorm. "You" isn't really the point; it's the cumulative probabilities that matter. For lotteries this is easy to calculate, for lightning strikes the best you can do is probably looking at past statistics.
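The "easy to calculate" part, sketched (using Powerball's published 1-in-292,201,338 jackpot odds and assuming each ticket is an independent draw):

    # chance that at least one of n tickets hits the jackpot
    p = 1 / 292_201_338
    for n in (1, 1_000_000, 300_000_000):
        print(n, 1 - (1 - p) ** n)

With ~300 million tickets in play, the chance of at least one winner is already around 64%.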


Right, but the population of people who buy lottery tickets often do buy more than one ticket each, so even if the number of lottery players multiplied by the per-ticket chance to win is smaller than the number of people multiplied by the chance of being hit by lightning, the overall chance of anyone winning the lottery can be higher than the overall chance of anyone getting hit by lightning for the same period.


(This is why golf courses have storm sirens, incidentally.)


Let's reframe it: someone always wins the lottery; lightning doesn't always strike a human.


That's not true either though - someone eventually always wins the lottery, and someone eventually always gets struck by lightning. The latter usually happens before the former.


When a lottery happens, there is always a winner; that's how they work. When lightning happens, it doesn't always strike a human. The former is (almost) guaranteed to happen, barring something out of the ordinary, while the latter usually doesn't happen, but does happen sometimes.


> When a lottery happens, there is always a winner; that's how they work.

It's possible this is a language and cultural thing, but most (possibly all?) state-run lotteries in the United States don't work this way - they simply pick numbers from a pool at random, and if no one has selected those exact numbers the prize pool rolls over to the next week. Powerball (afaik the largest US-based lottery) works by selecting 5 numbers from a pool of 1-69 and one number from a pool of 1-26; if no one matches all six numbers, then the primary prize pool carries into the next drawing. There's no guarantee anyone wins the jackpot on any given week, and often multiple weeks and sometimes months will pass with no winner, ballooning the jackpot further.
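Those pool sizes are where the long odds come from; a quick sanity check:

    import math
    # 5 of 69 white balls times 1 of 26 red balls
    print(math.comb(69, 5) * 26)  # 292201338 possible combinations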

I'd more often refer to what you're saying as a "drawing" or a "sweepstakes" where tickets are sold and the winning ticket is selected from the pool of all tickets sold, but that's distinctly different to a "lottery" for me.


https://youtu.be/bpiu8UtQ-6E?si=ogmfFPbmLICoMvr3

"I'm closer to LeBron than you are to me."

