reducesuffering's comments | Hacker News

Remember when YC funded (and boosted reach of) ~50 crypto scamlike co's during the heyday of the craze? Like the Stablegains scam fiasco:

https://news.ycombinator.com/item?id=31686140

https://news.ycombinator.com/item?id=31431224

https://news.ycombinator.com/item?id=31461634


50 crypto scams? Why do you link the same one thrice, then?

https://forum.effectivealtruism.org/posts/5mghcxCabxuaK4WTs/...


It's quite hilarious that you link to an EA forum to make your point, when it's a known fact that SBF came out of the EA movement, and that after the SBF/FTX/Alameda fraud was exposed they openly discussed whether scamming people, then donating part of the stolen money to causes that would make EA participants look like white knights, was acceptable or not.

The best example is SBF's guru, who bought a £15 million mansion in the UK for the EA movement with stolen funds.

Now he's keeping a very low profile, because I know for a fact that up until a few years ago assets were still being clawed back from the Enron fraud (!). So that mansion could one day be seized from the EA movement.

Let's steal money, buy private jets and fancy villas for our parents in tax havens, and give some to worthy causes (worthy in their own eyes).

Despicable people, this EA movement.

And, no, I'm neither taking lessons nor explanations from what are, in the end, just petty scammers / thieves.


See Scott Alexander’s The Whispering Earring (2012):

https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...


Wasn't there a follow-up to this where Scott denied that the story was "about" the obvious thing for it to be about?


Despite the false advertising in the Tears for Fears song, everybody does _not_ want to rule the world. Omohundro drives are a great philosophical thought experiment and it is certainly plausible to consider that they might apply to AI, but claiming as is common on LessWrong that unlimited power seeking is an inevitable consequence of a sufficiently intelligent system seems to be missing a few proof steps, and is opposed by the example of 99% of human beings.

> Instrumental convergence is the hypothetical tendency of most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals (such as survival or resource acquisition), even if their ultimate goals are quite different. More precisely, beings with agency may pursue similar instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—because it helps accomplish end goals.

'Running the planet' does not derive from instrumental convergence as defined here. Very few humans would wish to 'run the planet' as an instrumental goal in the pursuit of their own ultimate goals. Why would it be different for AGIs?



I don't agree with the parent commenter's characterization of Karpathy, but these projects are just simple toy projects. They're educational material, not production-level software.

You just proved the parent’s point.

He said “…who has never written any production software…” yet you show toy projects instead.

Well done.


It's because few realize how downstream most of this AI industry is of Thiel, Eliezer Yudkowsky and LessWrong.com.

The early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired the founders of Google DeepMind and introduced them to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying he is most deserving of a Nobel Peace Prize if AGI goes well (OpenAI having been prompted by LessWrong / "rationalist" discussion). Anthropic was a more X-risk-concerned fork of OpenAI. Paul Christiano, inventor of RLHF, was a big LessWrong member. AI 2027 was written by an ex-OpenAI LessWrong contributor and Scott Alexander, a centerpiece of LessWrong / "rationalism". Dario, Anthropic's CEO, has a sister married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of LessWrong / "rationalism". The origin of all this was directionally correct, but there was enough power, money, and "it's inevitable" to temporarily blind smart people for long enough.


It is very weird to wonder, what if they're all wrong. Sam Bankman-Fried was clearly as committed to these ideas, and crashed his company into the ground.

But clearly if out of context someone said something like this:

"Clearly, the most obvious effect will be to greatly increase economic growth. The pace of advances in scientific research, biomedical innovation, manufacturing, supply chains, the efficiency of the financial system, and much more are almost guaranteed to lead to a much faster rate of economic growth. In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible."

I'd say that they were a snake oil salesman. All of my life experience says that there's no good reason to believe Dario's predictions here, but I'm taken in just as much as everyone else.


(they are all wrong)

A fun property of S-curves is that they look exactly like exponential curves until the midpoint. Projecting exponentials is definitionally absurd because exponential growth is impossible in the long term. It is far more important to study the carrying capacity limits that curtail exponential growth.
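That early indistinguishability is easy to see numerically. Below is a minimal sketch comparing a logistic (S-curve) against the pure exponential that matches its early growth; the parameter values are arbitrary choices for illustration:

```python
import math

def logistic(t, cap=1.0, k=1.0, t0=0.0):
    """S-curve: grows ~exponentially well before t0, saturates at cap after."""
    return cap / (1.0 + math.exp(-k * (t - t0)))

def exponential(t, cap=1.0, k=1.0, t0=0.0):
    """Pure exponential with the same early-time behavior."""
    return cap * math.exp(k * (t - t0))

# Well before the midpoint t0, the two are nearly identical...
early = abs(logistic(-5) - exponential(-5)) / exponential(-5)
# ...but past the midpoint they diverge wildly.
late = abs(logistic(5) - exponential(5)) / exponential(5)
print(f"relative gap at t=-5: {early:.4f}, at t=+5: {late:.4f}")
```

With these numbers the relative gap is under 1% five time-units before the midpoint and over 99% five units after it: the data alone can't tell you which curve you're on until you're near the top.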


1. You can definitely tell an S-curve apart from an exponential if you look at the derivative(s), and AI progress does not seem to be close to the middle of an S-curve. 2. E.g., a slowdown hasn't shown up in Moore's law yet.

Covid was also an S curve...

> I'd say that they were a snake oil salesman.

I don't know if "snake oil" is quite demonstrable yet, but you're not wrong to question this. There are phrases in the article which are so grandiose, they're on my list of "no serious CEO should ever actually say this about their own company's products/industry" (even if they might suspect or hope it). For example:

> "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power"

LLMs can certainly be very useful and I think that utility will grow but Dario's making a lot of 'foom-ish' assumptions about things which have not happened and may not happen anytime soon. And even if/when they do happen, the world may have changed and adapted enough that the expected impacts, both positive and negative, are less disruptive than either the accelerationists hope or the doomers fear. Another Sagan quote that's relevant here is "Extraordinary claims require extraordinary evidence."


"In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible."

Absolutely comical. Do you realise how much that is in absolute terms? These guys are making it up as they go along. Can't believe people buy this nonsense.


Why not? If they increase white-collar productivity by 25%, and that accounts for 50% of the economy, you'd get such a result.
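As a quick sanity check on that arithmetic (both inputs are the comment's assumptions, and a first-order productivity gain is a one-off level effect rather than a sustained annual rate):

```python
# Both inputs are assumptions from the comment, not measured data.
productivity_gain = 0.25   # assumed boost to white-collar productivity
white_collar_share = 0.50  # assumed white-collar share of GDP

# First-order effect: the gain applied to that share of output.
gdp_boost = productivity_gain * white_collar_share
print(f"one-off GDP lift: {gdp_boost:.1%}")  # 12.5%
```

That lands inside the 10–20% range, but only as a one-time lift; repeating it every year would require the productivity gains to keep compounding.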

I mean, once we're able to run and operate multinational corporations off-world, GDP becomes something very different indeed

> Cant believe people buy this nonsense.

I somewhat don't disagree, and yet. It feels like more people in the world buy into it than don't? To a large degree?


I really recommend “More Everything Forever” by Adam Becker. The book does a really good job laying out the arguments for AI doom, EA, accelerationism, and affiliated movements, including an interview with Yudkowsky, then debunking them. But it really opened my eyes to how… bizarre? eccentric? unbelievable? this whole industry is. I’ve been in tech for over a decade but don’t live in the bay, and some of the stuff these people believe, or at least say they believe, is truly nuts. I don’t know how else to describe it.

Yeah, it's a pretty blatant cult masquerading as a consensus - but they're all singing from the same hymn sheet in lieu of any actual evidence to support their claims. A lot of it is heavily quasi-religious and falls apart under examination from external perspectives.

We're gonna die, but it's not going to be AI that does it: it'll be the oceans boiling and C3 carbon fixation flatlining that does it.


> Anthropic was a more X-risk concerned fork of OpenAI.

What is X-risk? I would have guessed "adult" from the X, but that doesn't sound right.


Existential

This is the most important article to come across HN in a while and I encourage you to read it for the immense intellectual wisdom it contains rather than the reflexive uneducated discourse on AI that envelops HN these days. I'm sure if you read it end-to-end you'd likely agree.

Yep: https://slatestarcodex.com/2015/01/31/the-parable-of-the-tal...

Another example of amazing writing (about amazing writing)


jmyeet:

"Putin is in the wrong here but there are no good guys. US rhetoric on this has predicted a full-scale invasion that hasn't come to fruition multiple times and the media just laps it up. It's reminiscent of the WMD justification for invading Iraq. It's straight up Manufacturing Consent [1]. However, Putin has a point: extending NATO membership to Ukraine is an overtly hostile act by the US and NATO member states. Putin no more wants NATO bases in the Ukraine than the US would want Chinese or Russian military bases in Canada or Mexico.

But Russia is not and never was going to launch a full-scale invasion of Ukraine. It would destroy Russia. Trying to do this in Afghanistan, a substantially smaller and less developed country, played a significant factor in the collapse of the Soviet Union.

Russia wants a buffer between it and NATO and access to the Black Sea. That's it."

https://news.ycombinator.com/item?id=30421629


I've got a friend who tends to be good at making strategic predictions, but even he didn't see that coming ;) because, in addition to all the other bad things, it would have been a strategically dumb move by Putin. And it was.

It doesn't sound like Xi is as dumb as Putin, but who knows.


How does that practically work out in selecting bond investments though? Betting on receiving euro coupon payments would look like buying BNDX over BND. But in the last year, while the Euro appreciated ~12% over the Dollar, $BND is up ~3% while BNDX is down ~1%.

International Bond ETFs are normally dollar-hedged. There are some unhedged ones that are in the local currencies and better at tracking foreign exchange rates. e.g. BWX is an unhedged international treasury fund.
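A back-of-envelope sketch of why a dollar-hedged fund barely tracks the FX move while an unhedged one captures it. All numbers here are made up for illustration, not actual BND/BNDX/BWX returns:

```python
# Hypothetical inputs (illustration only, not real fund data).
local_bond_return = 0.02  # bonds' return in their local currency
fx_return = 0.12          # foreign currency appreciation vs. the dollar
hedge_cost = 0.015        # rough cost of the currency forwards

# Unhedged (BWX-style): you hold the currency exposure too,
# so returns compound multiplicatively with the FX move.
unhedged = (1 + local_bond_return) * (1 + fx_return) - 1

# Dollar-hedged (BNDX-style): forwards cancel the FX move,
# leaving the local bond return minus the hedging cost.
hedged = local_bond_return - hedge_cost

print(f"unhedged: {unhedged:.1%}, hedged: {hedged:.1%}")
```

With these assumed numbers the unhedged holder sees roughly a 14% dollar return, almost all of it currency, while the hedged holder sees well under 1%, which matches the pattern in the parent's BND/BNDX observation.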

In case it wasn't clear, LLM conversations are being analyzed in a similar way to social-media advertising profiles...

"Q: ipsum lorem

ChatGPT: response

Q: ipsum lorem"

OpenAI: take this user's conversation history and insert the optimal ad that they're susceptible to, to make us the most $

