Hacker News | iamkonstantin's comments

I've had a chance to use Gleam for a few small components in production and I'm loving it. That's a very cool talk.

I think the parent is trying to say that whatever issues Italy may have internally, it's not up to Cloudflare to comment or enact solutions on their own.

What's ridiculous is that the subscription at 180€/month (excl. VAT) is already absurdly expensive for what you get. I doubt many would sign up for the per-API usage as it's just not sustainable pricing (as a user).

For the bizarre amount of work that gets done for that 180 euro, it is really cheap. We have just gotten used to prices sinking everywhere; CC is simply the best (might be taste or bias, but I at least think so), so we are staying with it for now. If it gets more expensive, we will try others for production instead of just trying them to get a feel for the competition, as we do now.

This take is ridiculous. Nearly everyone who uses Max agrees that what they get for the money paid is an amazing deal. If you don't use or understand how LLMs fit in your workflows, you are not the target customer. But for people who use it daily, it is a relatively small investment compared to the time saved.

> If you don't use or understand how LLMs fit in your workflows, you are not the target customer.

I feel like this is a major area of divergence. The "vibes" are bifurcating between "coding agents are great!" and "coding agents are all hype!", with increasing levels of in-group communication.

How should I, an agent-curious user, begin to unravel this mess if $200 is significantly more than pocket change? The pro-agent camp remarks that these frontier models are qualitatively better and using older/cheaper approaches would give a misleading impression, so "buy the discount agent" doesn't even seem like a reasonable starting point.


If you just want to play, I believe the Google alternative can even run on the free tokens you get from them. It won't do all that much before running out of tokens, but you can probably have it make a simple single-page website for a company or something like that.

The $20 plan exists for a reason. If you're interested you can give it a whirl.

That entirely depends on your business case. If a call costing 50 cents did something for me that would have taken more than 1 minute of paid working time, it's sustainable.
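A quick back-of-the-envelope sketch of that break-even logic (the €30/h rate and the function name are just illustrative assumptions, not figures from anyone's actual business):

```python
# Break-even check for a paid LLM API call: is the call cheaper than
# the value of the human working time it replaces? Illustrative only.

def is_sustainable(call_cost_eur: float, minutes_saved: float,
                   hourly_rate_eur: float) -> bool:
    """True if the call costs less than the paid time it saves."""
    value_of_time_saved = hourly_rate_eur * (minutes_saved / 60)
    return call_cost_eur < value_of_time_saved

# At an assumed €30/h, one minute of paid time is worth exactly €0.50,
# so a €0.50 call breaks even at one minute and pays off beyond that.
print(is_sustainable(0.50, 2, 30.0))    # → True  (2 minutes saved)
print(is_sustainable(0.50, 0.5, 30.0))  # → False (only 30 seconds saved)
```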

It pays for itself in a day for some folks. It is a lot but it’s still cheap.

Have you noticed how Google search summaries have taken the shape of those annoying blogposts that take you through several “What is a computer program” explainers before answering the question?

Yup, comments on the Wired article: https://news.ycombinator.com/item?id=46548451

I like the idea that you can follow cities and they appear in the timeline. Also knowing it's all just markdown - let's just say it's refreshing to have things that don't endeavour to operate on hyperscale.


> I wish I'd paid more attention to the *BSDs

Same! I've been trying to reduce complexities in my stack (e.g. Docker) and while systemd exists, I think the concept of "jails" or sandboxes is quite neat. I love tools that come with better out-of-the-box readiness.


systemd nowadays has a lot of sandboxing built in [0]! You can achieve jails using just systemd and no separate container manager.

[0]: https://wiki.archlinux.org/title/Systemd/Sandboxing
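For a flavour of what that looks like, here is a minimal hardened unit using a few of the sandboxing directives documented there (the unit name and ExecStart path are placeholders, not a real service):

```ini
# myapp.service — illustrative sketch; the unit name and binary
# path are placeholder assumptions.
[Unit]
Description=Example sandboxed service

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes               # run as an ephemeral unprivileged user
ProtectSystem=strict          # /usr, /boot, /etc become read-only
ProtectHome=yes               # hide /home, /root, /run/user
PrivateTmp=yes                # private /tmp and /var/tmp
PrivateDevices=yes            # minimal private /dev
NoNewPrivileges=yes           # block privilege escalation via setuid
RestrictAddressFamilies=AF_INET AF_INET6
ProtectKernelTunables=yes     # /proc/sys and /sys read-only

[Install]
WantedBy=multi-user.target
```

`systemd-analyze security <unit>` scores how locked down a unit is, which is a handy way to iterate on these settings.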


Nice. This also means sex workers (a legal and protected profession here) will finally be able to use the full range of card services without being subjected to the prudish views of Visa/Mastercard. Same for adult entertainment websites and generally any service that doesn't align with the "US man in charge" mindset. I think that alone makes it worth it.


Not just Visa/Master Card. Banks themselves have been known to deny business accounts to sex workers because of whatever reason they can think of, forcing them to use personal bank accounts that then get them banned because of "business use".

The 3000 euro limit will pose a problem for these businesses, though I suppose you could just take out half a dozen cards and rotate funds.


I think the friction really comes from their "partners" like Mastercard, who are so averse to adult-related services. Paying for adult entertainment is not taboo in most of Europe; being able to transact cashless would be a very welcome improvement and could even lift some of that business out of the grey zone.


Not really: in the Netherlands prostitution is legal, but sex workers have a hard time getting bank accounts.

The banks are wary of the connection to human trafficking and of businesses that run almost entirely on cash.


Because their card providers are Visa/Mastercard, who are known to have these limitations. Having a way to operate without them in the loop will certainly lift such limitations.


No it's mostly about anti-fraud and anti-money laundering obligations. It's a risk thing.


I don’t think so, banks would love for these transactions to move from cash to cards


The Dutch Labour Union for Sexworkers says otherwise. They've been complaining about this for years: https://www.nu.nl/economie/6356783/sekswerkers-krijgen-lasti...


Yes, it's a similar situation here in Belgium. Even though they've gained labour status and protections by law, some banks are still being difficult for various reasons including risk due to historical associations with criminal activities. I'm not saying all will be well... but a major obstacle (payment processors imposing their own views on the topic) will be removed.


The hate starts with the name. LLMs don't have the I in AI. It's like marketing a car as self-driving while all it can do is lane assist.


That's because there are at least 5 different definitions of AI.

- At its inception in 1955 it was "learning or any other feature of intelligence" simulated by a machine [1] (fun fact: both neural networks and computers using natural language were on the agenda back then)

- Following from that we have the "all machine learning is AI" which was the prevalent definition about a decade ago

- Then there's the academic definition that is roughly "computers acting in real or simulated environments" and includes such mundane and algorithmic things as path finding

- Then there's obviously AGI, or the closely related Hollywood/SciFi definition of AI

- Then there's just "things that the general public doesn't expect computers to be able to do". Back when chess computers used to be called AI this was probably the closest definition that fits. Clever sales people also used to love to call prediction via simple linear regression AI

Notably four out of five of them don't involve computers actually being intelligent. And just a couple years ago we still sold simple face detection as AI

1: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...


And yet, somehow, "it's not actually AI" has wormed its way into the minds of various redditors.


It's the opposite. It is doing the driving, but you really have to provide the lane assist, otherwise it hits a tree or starts driving in the opposite direction.

Many people claim it's doing great because they have driven hundreds of kilometers, but don't particularly care whether they arrived at the exact place, and are happy with the approximate destination.


Then what do they have?

Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?


It doesn’t actually solve those math problems though, does it? It replies with a solution if it has seen one often enough in the training data, or with something that looks like a solution but isn’t. In the end, the human still needs to verify it.

Same for short stories, it doesn’t actually write new stories, it rehashes stories it (probably illegally) ingested in training data.

LLMs are good at mimicking the content they were trained on, they don’t actually adopt or extend the intelligence required to create that content in the first place.


Oh, I remember those talks. People actually checking whether an LLM's response is something that was in the training data, something that was online that it replicated, or something new.

They weren't finding a lot of matches. That was odd.

That was in the days of GPT-2. That was when the first weak signs of "LLMs aren't just naively rephrasing the training data" emerged. That finding was controversial, at the time. GPT-2 couldn't even solve "17 + 29". ChatGPT didn't exist yet. Most didn't believe that it was possible to build something like it with LLM tech.

I wish I could say I was among the people who had the foresight, but I wasn't. Got a harsh wake-up call on that.

And yet, here we are, in year 20-fucking-25, where off-the-shelf commercially available AIs burn through math competitions and one-shot coding tasks. And people still say "they just rehash the training data".

Because the alternative is: admitting that we found an algorithm that crams abstract thinking into arrays of matrix math. That it's no longer human exclusive. And that seems to be completely unpalatable to many.


Based on the absolute trash I usually get out of ChatGPT, Claude, etc, I wouldn’t say that it writes “working” code.


You and I must be using very different versions of Claude. As an infra/systems guy (non-coder), being able to develop some powerful tools simply by leveraging Claude has been nothing short of amazing. I started using Claude about 8 months ago and have since created about 20 tools, ranging from simple USB detection scripts (for secure erasing SSDs) to complex tools like an Azure File Manager and a production-ready data migration tool (Azure to Snowflake). Yes, I know bash and some Python, but Claude has really helped me create tools that would have taken many weeks/months to build using the right technology stack. I am happy to pay for the Claude Max plan; it has returned huge dividends to my productivity.

And, maybe that is the difference. Non coders can use AI to help build MVPs and tooling they could otherwise not do (or take a long time to get done). On the other hand, professional coders see this as an intrusion to their domain, become very skeptical because it does not write code "their way" or introduces some bugs, and push back hard.


Yeah. You're not a coder, so you don't have the expertise to see the pitfalls and problems with the approach.

If you want to use concrete to anchor some poles in the ground, great. Build that gazebo. If it falls down, oh well.

If you want to use concrete to make a building that needs to be safe and maintained, it's critical that you use the right concrete mix, use rebar in the right way, and seal it properly.

Civil engineers aren't "threatened" by hobbyists building gazebos. Software engineers aren't "threatened" by AI. We're pointing out that the building's gonna fall over if you do it this way, which is what we're actually paid to do.


Sorry, carefully read the comments in this thread and you will quickly realize "real" coders are very much threatened by this technology, especially junior coders. They fear their job is at stake because of a new tool and take a very anti-AI view of the entire domain, probably more so those who live in areas where wages are not high to begin with. People who come from a different perspective truly see the value of what these tools can help you do. To say all AI output is slop or garbage is just wrong.

The flip of this is to understand and appreciate what the new tooling can help you do and adopt. Sure, junior coders will face significant headwinds, but I guarantee you there are opportunities waiting to get uncovered. Just give it a couple of years...


No. You're misreading the reactions because you've made some incorrect assumptions and you do a fundamentally different job than those people.

I legit don't know any professional SWE who feels "threatened" by AI. We don't get hired to write the kind of code you're writing.


Every HN thread about AI eventually has someone claiming the code it produces is “trash” or “non-working.” There are plenty of top-tier programmers here who dismiss anyone who actually finds LLM-generated code useful, even when it gets the job done.

I’m tempted to propose a new law—like Poe’s or Godwin’s—that goes something like: “Any discussion about AI will eventually lead to someone insisting it can’t match human programmers.”


By that metric: do you?

Seeing an AI casually spit out an 800 lines script that works first try is really fucking humbling to me, because I know I wouldn't be able to do that myself.

Sure, it's an area of AI advantage, and I still crush AI in complex codebases or embedded code. But AI is not strictly worse than me, clearly. The fact that it already has this area of advantage should give you pause.


Humbling indeed. I am utterly amazed at Claude's breadth of knowledge and ability to understand the context of our conversations. Even if I misspell words, don't use the exact phrase, or call something a function instead of a thread, Claude understands what I want and helps make it happen. Not to mention the ability to read hundreds of lines of debug output and point out a tiny error that caused the bug.


See also hoverboards


We use them in our shop. It's quite straightforward if you're already familiar with GitHub Actions. The Forgejo runner is tiny and you can build it even on unsupported platforms (https://code.forgejo.org/forgejo/runner), e.g. we've set up our CI to also run on Macs (by https://www.oakhost.net) for App Store related builds. It's really quite a joy :)
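To give a flavour: Forgejo Actions uses GitHub-Actions-compatible workflow files under `.forgejo/workflows/`. A minimal sketch (the runner label, scheme name, and build step are placeholder assumptions, not our actual setup):

```yaml
# .forgejo/workflows/build.yml — illustrative sketch; the label and
# xcodebuild arguments are placeholders.
on: [push]

jobs:
  build:
    runs-on: macos          # must match a label registered on your Forgejo runner
    steps:
      - uses: actions/checkout@v4
      - name: Build for the App Store
        run: xcodebuild -scheme MyApp -configuration Release build
```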


Are you building macOS apps? More specifically, are you doing code signing, notarization, and stapling within CI? If so, is this written up somewhere? I really struggled with getting that working on GitLab. I did have it working, but was always searching for alternatives.

