
Unrelated to the conversation at hand but a strange fun fact is that it's actually legal to drink while driving in Mississippi and the Virgin Islands.


In a lot of jurisdictions, the offence isn't drinking while driving, it's having a blood/breath alcohol level above a certain threshold.


That anecdote about the 39-year-old woman is a strange addition. I know many 20-to-30-somethings who know how to cook. It's far too expensive to constantly eat out nowadays, so people know how to provide for themselves in other ways. It sounds like you met a woman who didn't know how to cook and extrapolated that experience into thinking society is over and we're all helpless.


You took "a 39 year old" and felt targeted. Where there's one there's more, it doesn't need to be all to be statistically significant.

Society collapses when the capable are helpless. There's no bandwidth to help the actual needy when enough of the normies need caretaking too.

Old puritans in government and corporate would just lop off the tail but that's actual people who mean something to their useful people.


That's a lot of doom around a potato.


Kotlin is much better than "Java, but just a little better."


Kotlin has everything. A language isn't better just because it has more.

It's great for DSLs, though.


Java is pretty good for DSLs too.


  new Language.Builder().getOh().getYes().getJava().getIsSoNice().setFor(() -> { return "DSLs"; });

No trailing lambdas, no infix operators, no @DslMarker, no top-level functions, and an infinite list of examples of Java verbosity making even the smallest thing read like an ancient Greek epic. Java is utterly terrible for DSLs.


   Language {
       oh {
           yes {
               Java {
                   isSoNice
               }
           }
       }
   }

You can contrive awfulness in any language.
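
For anyone wondering what makes that nested-block style compile: it's just top-level functions taking trailing lambdas with receivers. A minimal sketch, with entirely hypothetical names:

  @DslMarker annotation class LangDsl

  @LangDsl class JavaScope { val isSoNice get() = Unit }
  @LangDsl class YesScope { fun Java(block: JavaScope.() -> Unit) = JavaScope().block() }
  @LangDsl class OhScope { fun yes(block: YesScope.() -> Unit) = YesScope().block() }
  @LangDsl class LanguageScope { fun oh(block: OhScope.() -> Unit) = OhScope().block() }

  fun Language(block: LanguageScope.() -> Unit) = LanguageScope().block()

The @DslMarker is what stops an inner block from silently calling methods on an outer scope.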


The awfulness of one style of Java DSL is the awfulness of Lisp, that is

  building(of(S("expressions"), everything()))


Not going to lie, my brain doesn't really comprehend Lisp, so the joke (or point?) is way over my head.


I hope both of you know that you're in the extreme minority, right?


Are there available numbers to support this? Software engineering in the U.S. is well-compensated. $200/mo is a small amount to pay if it makes a big difference in productivity.


Which raises the question: If the productivity gains are realized by the employer, is the employer not paying this subscription?


My day job is in talks to do that. I'm partly responsible for that decision, and I'm using my personal $200/mo plan to test the idea.

My assessment so far is that it is well worth it, but only if you're invested in using the tool correctly. It can cause as much harm as it can increase productivity, and I'm quite fearful of how we'll handle this at the day job.

I also think it's worth saying that, IMO, this is a very different fear than what drives "butts in seats" arguments. I.e., I'm not worried that $Company won't get its value out of the Engineer because the bot does the work for them; I'm worried that the Engineer will use the tool poorly and cause more work for reviewers dealing with high-LOC changes.

Reviews are difficult, and "AI" provides a quick path to slop. I've found my $200 well worth it, but the #1 difficulty I've had is not in getting features to work, but in getting the output to be scalable and maintainable code.

Sidenote: one of the things I've found most productive is deterministic tooling wrapped around the LLM, e.g. a robust linter like Rust's Clippy set to run automatically after Claude Code edits (via hooks, sketched below), which helps bend the LLM away from many bad patterns. It's far from perfect of course, but it's the thing I think we need most at the moment: determinism around the spaghetti-chaos-monkeys.
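
For reference, a hook along those lines lives in .claude/settings.json and looks roughly like this (schema from memory, so treat it as a sketch and check the docs):

  {
    "hooks": {
      "PostToolUse": [
        {
          "matcher": "Edit|Write",
          "hooks": [
            { "type": "command", "command": "cargo clippy --all-targets -- -D warnings" }
          ]
        }
      ]
    }
  }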


Perceived productivity or actual productivity?


Yes, but that doesn't mean they aren't finding real value

The challenge with the bubble/not bubble framing is the question of long term value.

If the labs stopped spending money today, they would recoup their costs. Quickly.

There are possible risks (could prices go to zero because of a loss leader?), but I think anthropic and OpenAI are both sufficiently differentiated that they would be profitable/extremely successful companies by all accounts if they stopped spending today.

So the question is: at what point does any of this stop being true?


> I think anthropic and OpenAI are both sufficiently differentiated that they would be profitable/extremely successful companies by all accounts if they stopped spending today.

Maybe. But that would probably be temporary. The market is sufficiently dynamic that any advantage they have right now probably isn't stable or defensible longer term. Hence the need to keep spending. But what do I know? I'm not a VC.


A very productive minority.


Have we seen any examples of any of these companies turning a profit yet even at $200+/mo? My understanding is that most, if not all, are still deeply in the red. Please feel free to correct me (not sarcastic - being genuine).

If that is the case, at some point the music is going to stop, and they will either perish or have to crank up their subscription costs.


It's possible Anthropic is cash-flow positive now.

Claude 3.7 Sonnet supposedly cost "a few tens of millions of dollars"[1], and they recently hit $4B ARR[2].

Those numbers seem to give a fair bit of room for salaries, and it would be surprising if there wasn't a sustainable business in there.

[1] https://techcrunch.com/2025/02/25/anthropics-latest-flagship...

[2] https://www.theinformation.com/articles/anthropic-revenue-hi...


Cost to train and cost to operate are two very different things


I am absolutely benefiting from them subsidizing my usage to give me Claude Code at $200/month. However, even if they 10x the price, it's still going to be worth it for me personally.


I totally get that but that’s not really what I asked/am driving at. Though I certainly question how many people are willing to spend $2k/mo on this. I think it’s pretty hard for most folks to justify basically a mortgage for an AI tool.


My napkin math is that I can now accomplish 10x more in a day than I could even one year ago, which means I don't need to hire nearly as many engineers, and I still come out ahead.

I use Claude Code exclusively for the initial version of all new features, then I review and iterate. With the Max plan I can have many of these loops going concurrently in git worktrees. I even built a little script to make the workflow better: http://github.com/jarredkenny/cf
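
For the unfamiliar, the manual version of that loop is something like one worktree and one Claude session per task:

  git worktree add -b feature-x ../feature-x   # separate checkout per task
  cd ../feature-x && claude                    # independent Claude Code session in it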


Again, I understand, and I don't doubt you're getting insane value out of it, but if they believed people would spend $2,000 a month for it they would be charging $2,000 a month, not 1/10th of that, which is undoubtedly not generating a profit.

As I said above, I don't think a single AI company is remotely in the black yet. They are driven by speculation and investment, and they need to figure out real quick how they're going to survive when that money dries up. People are not going to fork out $24k a year for these tools; I don't think they'll spend even $10k. People scoff at paying $70+ for internet, a thing we all use basically all the time.

I have found it rather odd that they have targeted individual consumers for the most part. These all seem like enterprise solutions that need to charge large sums and target large companies tbh. My guess is a lot of them think it will get cheaper and easier to provide the same level of service and that they won’t have to make such dramatic increases in their pricing. Time will tell, but I’m skeptical


> As I said above, I don’t think a single AI company is remotely in the black yet.

As I note above, Anthropic probably is in the black. $4B ARR, and spending less than $100M on training models.


It looks like their revenue has indeed increased dramatically this year, but I can't find anything saying they're profitable, which I assume they'd be loudly proclaiming if it had happened. That said, looking at the charts in some of these articles, it looks like they might pull it off! I need to look more closely at their pricing model; I wonder what they're doing differently.


Why would they want to be profitable? Genuine question.

Profit is for companies that don't have anything else to spend money on, not ones trying to grow.


I guess my genuine question in response is: can you tell investors "Please give us billions of dollars; we never plan on being profitable, just endlessly growing and raising money from outside sources"? Unless the goal is to be sold off eventually, that seems a bit of a hard sell.


> "Please give us billions of dollars - we never plan on being profitable, just endlessly growing and raising money from outside sources"?

The goal for investors is to be able to exit their investment for more than they put in.

That doesn't mean the company needs to be profitable at all.

Broadly speaking, investors look for sustainable growth. Think Amazon, when they were spending as much money as possible in the early 2000s to build their distribution network and software and doing anything they possibly could to avoid becoming profitable.

Most of the time companies (and investors) don't look for profits; profits are just a way of paying more tax. Instead, the ideal outcome is growing revenue where profitability is possible (i.e., revenue covers costs) but the excess money is invested in growing more.

Note that this doesn't mean the company is raising money from external sources. Not being profitable doesn't imply that.


I know very little about this, but isn't the inference cost the big one, not the training?


> My napkin math is that I can now accomplish 10x more in a day than I could even one year ago, which means I don't need to hire nearly as many engineers, and I still come out ahead.

The only answer that matters is the one to the question "how much more are you making per month from your $200/m spend?"


In terms of revenue for my startup, plenty more.


I'm curious, how are you accounting this? Does the productivity improvement from Claude's product let you get your work done faster, which buys you more free time? Does it earn you additional income, presumably to the tune of somewhere north of $2k/month?


You would honestly pay 2k a month to an AI tool? Do you not have other costs like a mortgage or rent?


Are there studies to show those paying $200/month to openai/claude are more productive?


Anecdotally, I can take on and complete the side projects I've always wanted to do but didn't due to the large amounts of yak shaving or unfamiliarity with parts of the stack. It's the difference between "hey wouldn't it be cool to have a Monte Carlo simulator for retirement planning with multidimensional search for the safe withdrawal rate depending on savings rate, age of retirement, and other assumptions" and doing it in an afternoon with some prompts.
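
To make that concrete, the core of such a simulator is tiny; a toy sketch (made-up return model, not the actual project):

  import java.util.Random

  // What fraction of simulated 30-year retirements survive a fixed real withdrawal rate?
  fun survivalRate(withdrawal: Double, years: Int = 30, trials: Int = 10_000): Double {
      val rng = Random(42)
      var survived = 0
      repeat(trials) {
          var balance = 1.0          // portfolio normalized to 1.0 at retirement
          var failed = false
          repeat(years) {
              val r = 0.05 + 0.12 * rng.nextGaussian()   // assumed: 5% mean, 12% stdev real returns
              balance = (balance - withdrawal) * (1 + r)
              if (balance <= 0) failed = true
          }
          if (!failed) survived++
      }
      return survived.toDouble() / trials
  }

  fun main() {
      for (w in listOf(0.03, 0.035, 0.04, 0.045))
          println("%.1f%% withdrawal -> %.0f%% of runs survive".format(w * 100, survivalRate(w) * 100))
  }

The multidimensional-search part is just iterating this over savings rate, retirement age, and the other assumptions.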


Out of curiosity, how complex are these side projects? My experience is that Claude Code can absolutely nail simple apps, but as the complexity increases it seems to lose its ability to work through things without burning tokens on constant reminders of the patterns it needs to follow. At the very least, that diminishes the enjoyment of it.


It varies, but they're not necessarily very complex projects. The most complex project that I'm still working on is a Java Swing UI to run multiple instances of Claude Code in parallel, with different chat histories and the ability to have them make progress in the background.

If you need to repeatedly remind it of something, you can store it in claude.md so that it becomes part of every chat. For example, in mine I have asked it not to invoke git commit directly but to review the commit message with me before committing, since I usually need to change it.
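
The relevant claude.md entry can be as small as (paraphrasing mine):

  # CLAUDE.md
  - Never run `git commit` yourself.
  - Draft the commit message and show it to me for review before committing.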

There may be a maximum amount of complexity it can handle. I haven't reached that limit yet, but I can see how it could exist.


Simple apps are the majority of use cases, though. To me this feels like what programming/using a computer should have been all along: if I want to do something I'm curious about, I just try it with Claude, whereas in the past I'd mostly be too lazy/tired to program after hours in my free time (even though my programming ability exceeds Claude's).


Well, that's why I'm curious. I've been reading a lot of people talking about how the Max plan has 100x'd their productivity and how they're getting a ton of value out of Claude Code. I too have had moments where Claude Code did amazing things for me, but I find myself in a bit of a valley of despair at the moment, as I'm trying to force it to do things it turns out it's not good at.

I'm just worried that I'm doing it wrong.


There are definitely things it can't do, and things it hilariously gets wrong.

I've found, though, that if you can steer it in the right direction it usually works out okay. It's not particularly good at design, but it's good at writing code, so one thing you can do is write the classes and some empty methods marked // Todo Claude: implement, then ask it to implement the methods marked Todo Claude in file foo (see the sketch below). That way you get the structure you want without having to implement all the details.
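
A sketch of that stub-first pattern (hypothetical example):

  // You write the structure; Claude fills in the bodies.
  class CsvStats {
      fun parse(line: String): List<Double> {
          // Todo Claude: implement (split on commas, trim whitespace, parse doubles)
          TODO()
      }

      fun mean(xs: List<Double>): Double {
          // Todo Claude: implement
          TODO()
      }
  }

Then the prompt is just "implement the methods marked Todo Claude in CsvStats.kt".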

What kind of things are you having issues with?


This has nothing to do with AI, but might help: All complex software programs are compositions of simpler programs.


I work at an Amazon subsidiary, so I have effectively unlimited GPU budgets. I agree with the siblings: I'm working on 5 side projects I have wanted to do as a framework lead for 7 years, and I do them during my meetings. None of them take production traffic from customers; they're all nice-to-haves for developers. These tools have dropped the cost of building them massively. It's yet to be seen whether they'll do the same for maintaining them, or for spinning back up on them, but given that AI built several of them in a few hours, I'm less worried about that cost than I was a year ago (when I wasn't building them at all).


It's subjective, but the high monthly fee would suggest so. At the very least, they're getting an experience that those without are not.


The point is that if a minority is prepared to pay $200 per month, then what is the majority prepared to pay? I also don't think this is such an extreme minority; I know multiple people in real life with these kinds of subscriptions.


>if a minority is prepared to pay $200 per month, then what is the majority prepared to pay?

Nothing. Most people will not pay for a chatbot unless forced to by it being crammed into software they already have to use.


Forget chat bots, most people will not pay for Software, period.

This is _especially_ true for developers in general, which is very ironic considering how our livelihood is dependent on Software.


Yeah, 'cause we want to be in control of software, understandably. It's hard to charge for software users have full control of, except via donations. That's the #1 reason for me not to use any gen AI at the moment; I'm keeping an eye on when (if) open-weight models become useful on consumer hardware, though.


> Forget chat bots, most people will not pay for Software, period.

Apple says their App Store did $53B in "digital goods and services" in the US alone last year. That's not 100% software, but it's definitely more than 0%.


Games are a big exception here, as is anything in the app store.

But for productivity software in general, only a few large companies seem to be able to get away with it: the Office suite, or a CRM such as Salesforce.

In the graphics world, Maya and 3DS Max. Adobe has been holding on.


It's a generic chat LLM product, but ChatGPT now has over 20 million paid subscribers. https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...


So ~$415M in revenue per month, annualized to ~$5 billion/yr. Let's say we use a revenue multiple of 4x; that means OpenAI should be valued at $20 billion USD just based on this. Then one obviously has several other factors, given the nature of OpenAI and its future potential. Maybe 10x more.

Which puts the current valuations I've heard pretty much in the right ballpark. Crazy, but it could make sense.


China is actually #1 with social media at this point


The Big Beautiful Bill will add $4.5 trillion to the deficit in the next decade. If we hadn't passed it, we could have continued learning about cosmic inflation _and_ helped millions of people regarding food and healthcare and still saved trillions in the process. Of course, America would never do that, but our current issue is no longer "we should be helping people instead of doing unnecessary spending." Now we're squarely in "let's starve everyone of resources and give it all to the 1%."


The BBB will add $4.5T (that's the largest estimate) on top of the $15T-$20T that would have been added without it.

The debt would have been about $52T+, now it will be $56T+, if projections are accurate.

While I do not agree with the BBB for many reasons, and I do agree that it increases the debt, it is not the primary driver of the debt.

The largest driver of our debt is our "health" system. We spend $5T a year on our "health" system, which is twice the amount per capita that western European nations spend, and we have outcomes that are, across the board, worse.

We spend $2.5T more per year than we "should" be spending on "health", which is by far the largest waste of our resources.

If we would "simply" find a way to spend as much as western Europe does (even keeping our poorer outcomes), we would save $25T over the next 10 years. Our entire national debt could be eliminated in 20 years by doing this, even with the BBB.


Clothing is handmade. It doesn't matter if it's luxury or from Shein; it's all handmade. Artisans can work tirelessly to make sure everything is stitched the exact same way, but anything below that level is made for the mass market, by people paid next to nothing to work as fast as possible and churn out as many items as possible. In that environment, you're going to get a lot of inconsistency. The only tech that helps here is the sewing machine and lasers for cutting the pieces. Compare that to iPhones, where industrial machines create each of the pieces and highly trained people help assemble them. The iPhone is also a "luxury" good, so it gets a lot of QC, whereas a shirt from Old Navy is cheap, and as long as it "looks" correct they'll sell it for $8.


Encore is a newish used-clothing indexer that might be what you're looking for. I also use Gem, which doesn't use AI but indexes multiple vintage/used sites and will notify you when something pops up matching your saved searches.


Sounds like steps in the right direction, but not entirely what I'm looking for.

I want an AI that can scrape shop websites for attributes that people commonly search for, such as size and color, but also shipping methods, shipping costs, etc. I think this would be trivial for an LLM. For me the scope should be bigger than just used clothes; I prefer new clothes (but I wear them until the end). And the system should be web-wide, not just selected shops.

And then I want a basic filtering system that allows me to quickly find what I need by checking some boxes.

It sounds so simple ...


You can build an x86 machine with 512GB of RAM that can fully run DeepSeek R1 for ~$2,500?


You will have to explain to me how.



Is that a CPU-based inference build? Shouldn't you be able to get more performance out of the M3's GPU?


Inference is about memory bandwidth and some CPUs have just as much bandwidth as a GPU.



How would you compare the tok/sec between this setup and the M3 Max?


3.5-4.5 tokens/s on the $2,000 AMD Epyc setup, running DeepSeek 671b q4.

The AMD Epyc build is severely bandwidth and compute constrained.

~40 tokens/s on M3 Ultra 512GB by my calculation.
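
Back-of-the-envelope, assuming ~37B active parameters per token for the 671b MoE at q4 (~0.5 bytes/param):

  bytes read per token ≈ 37B × 0.5 ≈ 18.5 GB
  M3 Ultra:             819 GB/s ÷ 18.5 GB ≈ 44 tok/s upper bound
  Epyc 12ch DDR5-4800: ~460 GB/s ÷ 18.5 GB ≈ 25 tok/s in theory, far less in practice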


IMO, it would be more interesting to have a 3-way comparison of price/performance between DeepSeek 671b running on:

1. M3 Ultra 512GB

2. AMD Epyc (which gen? AVX-512 and DDR5 might make a difference in both performance and cost; Gen 4 or Gen 5 get 8-9 t/s: https://github.com/ggml-org/llama.cpp/discussions/11733)

3. AMD Epyc + 4090 or 5090 running KTransformers (over 10 t/s decode? https://github.com/kvcache-ai/ktransformers/blob/main/doc/en...)


Thanks!

If the M3 can run 24/7 without overheating, it's a great deal for running agents, especially considering it should draw only about 350W. That works out to roughly 250 kWh/month, or about $50/mo in electricity at ~$0.20/kWh.


Out of curiosity, if you don't mind: what kind of agent would you run 24/7 locally?

I'd assume this thing peaks at 350W (or whatever) but idles at around 40W tops?


I'm guessing they might be thinking of long training jobs, as opposed to model use in an end product of some sort.


What kind of Nvidia-based rig would one need to achieve 40 tokens/sec on Deepseek 671b? And how much would it cost?


Around 5x Nvidia A100 80GB can fit 671b Q4 (671B params at 4 bits is ~336GB of weights, plus overhead). $50k just for the GPUs, and likely much more when including cooling, power, motherboard, CPU, system RAM, etc.


So the M3 Ultra is amazing value then. And from what I could tell, an equivalent AMD Epyc would still be so constrained that we're talking 4-5 tokens/s. Is this a fair assumption?


No. The advantage of Epyc is that you get 12 channels of RAM, so it should be ~6x faster than a consumer CPU, which has two.
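
Rough theoretical numbers, assuming DDR5-4800 on both:

  Epyc:    12 × 38.4 GB/s ≈ 460 GB/s
  Desktop:  2 × 38.4 GB/s ≈  77 GB/s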


I realize that but apparently people are still getting very low tokens/sec on Epyc. Why is that? I don't get it, as on paper it should be fast.


The Epyc would only set you back $2,000 though, so it's only a slightly worse price/performance ratio.


How many tokens/s would that be though?


That's what I'm trying to get at. I'm looking to set up a rig, and AMD Epyc seems reasonable, but I'd rather go Mac if it gives many more tokens per second. It does sound like the Mac with the M3 Ultra will easily give 40 tokens/s, whereas the Epyc is just too internally constrained, giving 4-5 tokens/s, but I'd like someone to confirm that instead of buying the hardware and finding out myself. :)


Probably a lot more. Those are server-grade GPUs; we're talking prosumer-grade Macs here.

I don't know how to calculate tokens/s for H100s linked together. ChatGPT might help you though. :)


Well, ChatGPT quotes 25k-75k tokens/s with 5 H100s (so very, very far from the 40 tokens/s), but I doubt this is accurate (e.g. it completely ignored the fact that they are linked together and instead just multiplied its estimate of tokens/s for one H100 by 5).

If this is remotely accurate though it's still at least an order of magnitude more convenient than the M3 Ultra, even after factoring in all the other costs associated with the infrastructure.


Hopefully 7 years from now you'll still be able to use it with modern apps, websites, and video content. IMO, the benefits of these chips are in longevity rather than in pushing them to the limit today.


This is the pretty obvious answer. I'm looking at replacing my gen-3 iPad Air from 2019 because it's feeling pretty pokey now. (And my wife's gen-1 iPad Air from 2013 is entirely unusable.)


I don't think there's any amount of processing power that can keep up with website bloat long term, but you ought to get an extra year or two from the M3.

