anothermathbozo's comments | Hacker News

Child care policy frees labor capacity for work that is more likely to earn a slice of the national income. It’s almost certainly going to result in greater economic activity for the state. In the immediate term, it is funded from two existing funds.

State + local tax burden in NM is 10.2% [1]. Revenue neutrality would mean each person freed up by the child care program instead taking a job with an average salary of $120,000. But as another comment points out, this policy also attracts new jobs to the state, which complicates the math.

[1]https://taxfoundation.org/location/new-mexico/
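
Back-of-envelope, the break-even salary is just the program cost divided by the tax take. A minimal sketch in Python, assuming a roughly $12k/year per-family program cost (my number for illustration, not a figure from the policy):

    # Rough revenue-neutrality arithmetic. The per-family program cost
    # is an assumed figure for illustration only.
    program_cost_per_family = 12_240   # assumed annual subsidy cost, $
    tax_burden = 0.102                 # NM state + local tax burden

    breakeven_salary = program_cost_per_family / tax_burden
    print(f"Break-even salary: ${breakeven_salary:,.0f}")  # $120,000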


Income tax is not the only revenue that the state and local governments capture when production increases.

Given how much capital has already been committed to infrastructure development, I find it very unlikely that even a Black Monday-style market correction will do much to alter the medium- and long-term outlook for AI development and investment.

Too many people talk about a bubble almost as if wish-casting that such a thing will make this all go away. It’s probably safest to assume your political enemies won’t be hoist by their own petard anytime soon.


> Given how much capital has already been committed to infrastructure development, I find it very unlikely that even a Black Monday-style market correction will do much to alter the medium- and long-term outlook for AI development and investment.

So let's bet on the sunk cost fallacy? That didn't work out during the dotcom era.

When people talk about the bubble, it's mostly about ending the LLM hype, not making everything associated with it just disappear. The infrastructure, ideas, and developments will stay and will probably end up being a lot more useful without the noisy hype surrounding them right now.


It kind of did: Facebook, YouTube, and Google were all enabled by the massive internet cable infrastructure investment laid down in the '90s.


Yep, but the dotcom bust was still a bust. Ex post success does not matter in that analysis; in fact, I'm arguing that the dotcom bust was instrumental for companies like Facebook, YouTube, and Google to develop as they did.


A bursting bubble doesn't instakill an entire industry. No one is suggesting that. Bursting bubbles are harmful to most market participants and often have a blast radius beyond the industry. It's great that Facebook and YouTube emerged (I suppose) but that doesn't help grandad who lost 35% of his retirement fund.


> Given how much capital has already been committed to infrastructure development

Here’s the stupid thing about that infrastructure: this isn’t railroads. This isn’t skyscrapers. This isn’t undersea fiber optic cable. These are, at the end of the day, data center GPUs that can and will start failing en masse on normal hardware timelines.

When that happens, they will all need to be replaced at whatever rate NVIDIA is selling at - or can sell at, if there are any geopolitical incidents. If there aren’t enough customers, capacity shrinks, and the remaining customers must foot the bill for the new systems and the previous round of investment, causing a death loop wherever there isn’t profitable use feeding into the system.

A clearer comparison: it’s like if airlines had to replace their entire fleet every 3-4 years while selling $20 tickets to anywhere, and the growth metrics look incredible when everyone’s flying at that rate. So much so that investors paid for the first fleet round, calling it “infrastructure.” Meanwhile, ticket sales keep growing as everyone finds amazing use cases and new business models from the $20 tickets. Surely this can only get better from here, and justifies even more fleet investment.
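
A toy version of that death loop, just to make the mechanism concrete (every number here is invented for illustration):

    # Toy death-spiral model: the hardware fleet must be replaced every
    # cycle, and the bill is split across whoever is still paying.
    customers = 1000
    fleet_cost = 100_000_000       # replacement cost per cycle, $
    willingness_to_pay = 75_000    # max a customer will pay per cycle, $

    for cycle in range(1, 8):
        price = fleet_cost / customers
        print(f"cycle {cycle}: {customers} customers, ${price:,.0f} per head")
        if price > willingness_to_pay:
            customers = int(customers * 0.7)  # 30% churn when priced out

Once the price crosses what customers will pay, churn raises the price for everyone left, which drives more churn.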


All the things you mentioned require a lot of maintenance as well.


The point is that there are no 100-year graphics cards the way there are 100-year tunnels and bridges. The maintenance bill comes due a lot sooner.


Yeah. This time it’s different.


The problem is that it will create broader economic damage in the markets.

The top 10 holdings of VTI, the "easy button" investment option, are NVIDIA, Microsoft, Apple, Amazon, Facebook, Broadcom, Google, Tesla, and Berkshire Hathaway. They represent a third of US market value, and 60% of them are heavily hooked into AI. A panic would be a decimation for NVIDIA, and that splash zone will be broad.
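
Rough arithmetic on that splash zone (the 50% drawdown is an invented illustration, not a forecast):

    # How a shock to the AI-heavy mega caps propagates into a
    # cap-weighted index. The drawdown figure is hypothetical.
    top10_weight = 1 / 3        # top 10 holdings ~ one third of VTI
    ai_share_of_top10 = 0.60    # ~60% of those are heavily AI-exposed
    assumed_drawdown = 0.50     # hypothetical fall in the AI names

    index_hit = top10_weight * ai_share_of_top10 * assumed_drawdown
    print(f"Index-level drawdown: {index_hit:.1%}")  # ~10%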


$RSP (equal-weighted) solves that


Totally! You can also use something like VIG or VDADX if you're restricted in terms of offerings.

I personally account for that in my planning. But many folks have their money in 401(k) funds in generic broad-market securities -- often with high fees on top!


> You can also use something like VIG or VDADX if you're restricted in terms of offerings.

How is that? RSP and VIG look very different.


VIG omits some of the high flier growth stocks that pay no dividend.


Ok so they sort of have something in common...but I'm not sure it's good advice to say one can be used to replace the other due to their very different overall strategies.


Fair. I should have qualified more clearly that I meant for retirement accounts where you have limited options.

RSP definitely performs better and doesn’t shut you out of the growth stocks.


I mean, every year or so there's a new NVIDIA GPU, making last year's models obsolete and GPUs from two generations back essentially landfill. I don't think there's another large-scale investment that decays at this rate.

The infrastructure being there might also be a problem.

- If there's no breakthrough, and we're only going to make incremental gains, then it's wasteful

- If there's a breakthrough, it might turn out a new AI architecture needs either a lot less compute or a different kind of it


> I mean, every year or so there's a new NVIDIA GPU, making last year's models obsolete and GPUs from two generations back essentially landfill. I don't think there's another large-scale investment that decays at this rate.

Ampere-generation A6000s are still selling used for close to MSRP on eBay. I bought an A4500, used it for a year, and then sold it for MORE than I bought it for.

I’ve never seen computer hardware appreciate in value like it is now, even if new cards are still significant upgrades.


Yeah, I think it's pretty messy. I bought some Keplers a year ago (far below MSRP), and they are fine -- but the trouble becomes software support. It's sort of like tablets/phones, where the hardware is still more than enough, but because software stops supporting it, the hardware becomes landfill -- except for those people willing to set up the very specific environments where there's still use in them. The cards are more than a decade old, but hardly useless, though it would be fair to say most people would not run them.

If you can keep up software support, the hardware becomes much more valuable both now and later, but this is much harder to do in the consumer space than in the industrial/AI space. I'd say the majority of my PC replacements, until I was established financially, were the result of software not supporting my hardware (not that it couldn't do the task if there were support, but simply that there was no support). And I should say, too, that in almost every case I regretted dumping $1k in today-dollars on a single game or application.

Obviously, the problem's much worse with phones/tablets, where most people use them for cloud activities; people use them for 2-3 years and then they go in the trash, or their service provider might buy them back essentially as e-waste to be scrapped (despite the marketing implication that they're reused to help surgeons in Africa).


This is because of AI hype/demand, not a general rule...


It's exactly that: wishful thinking.

Clearly, a lot of people very desperately want AI tech to fail and disappear. And if there is such a strong demand for "tell me that AI tech will fail", then there will be hacks willing to supply.


> The reality of what these tools can do is sinking in

It feels premature to make determinations about how far this emergent technology can be pushed.


The cognitive dissonance is predictable.

Now hold my beer as I cast a superfluous rank onto this trivial 2nd-order tensor, because it looks awesome wasting enough energy to power 5000 homes. lol =3


What difference does that make at this stage of things?


All the difference. The difference between what is real, what is possible, what is plausible, what is feasible, and what is mass delusion.

"Those who can make you believe absurdities, can make you commit atrocities." - Voltaire


Why would that be?


It's unwarranted and totally spiteful for you to make unqualified claims like “cognitive decline” from skimming two papers. This is shameful.


This isn't exactly the case. The trend is on a log scale, so a 10x increase in pretraining compute should yield roughly a 10% increase in performance. That's not proving to be false per se; rather, they are encountering practical limitations around 10x'ing data volume and 10x'ing available compute.
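
A minimal sketch of what that log-scale trend implies (slope and offset are invented constants, not fitted values):

    import math

    # Log-scale scaling: each 10x of pretraining compute adds a roughly
    # constant increment of "performance". Constants are hypothetical.
    a, b = 10.0, 40.0   # invented slope and offset, arbitrary points

    def performance(compute_flops):
        return a * math.log10(compute_flops) + b

    for c in [1e21, 1e22, 1e23, 1e24]:
        print(f"{c:.0e} FLOPs -> {performance(c):.0f} points")
    # Each 10x of compute buys the same +10 points, so the compute bill
    # grows exponentially while the score only grows linearly.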


I am aware of that, like I said:

> (Or at least because O(log) increases in model performance became unreasonably costly?)

But, yes, I left implicit in my comment that the trend might be “fleeting” because of its impracticality. RL is only a trend so long as it is fashionable, and only fashionable (i.e., practical) so long as OpenAI is fed an exponential amount of VC money to ensure linear improvements under O(log) conditions.

OpenAI is selling to VCs the idea that some hitherto unspecified amount of linear model improvement will kick off productivity gains greater than their exponentially increasing investment. These productivity gains would be no less than a sizeable percentage of American GDP, which Altman has publicly set as his target. But as the capital required increases exponentially, the gap between linearly increasing model capability (i.e., its productivity) and the breakeven ROI target widens. The bigger model would need to deliver a non-linear increase in productivity to justify the exponential price tag.
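
The widening gap is easy to see numerically (all figures invented to illustrate the shape of the problem, not actual OpenAI economics):

    # Exponentially growing capital vs. linearly growing capability.
    capital = 1.0       # training cost, arbitrary units
    capability = 1.0    # model "productivity", arbitrary units

    for gen in range(1, 7):
        print(f"gen {gen}: cost {capital:9.1f}, capability {capability:.1f}, "
              f"capability per unit cost {capability / capital:.5f}")
        capital *= 10    # each generation costs ~10x more
        capability += 1  # ...and is only incrementally better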


This happens once it starts improving itself.


I suppose that is the question...


Disappeared. He was kidnapped and disappeared.


Optical Character Recognition (OCR) and Handwritten Text Recognition (HTR) are different tasks.


Anthropic’s CEO said their technology would end all disease and expand our lifespans to 200 years. What on earth do you mean they’re not playing the hype game?

