
I mean... you can't think of any ways that AI could actually generate new value? Or more abstractly, of a way that Jevons' paradox can't apply in the case of AI?


i think by your logic, the only thing they do that is condescending is to say that an interview is not guaranteed.

people are mentioning that they do this for a reason, which explains away that behavior, so yeah, it kind of does change whether they are being condescending.


it's kind of hard to tell what your position is here. should people not ask chatbots how to scrape html? should people not purchase RAM to run chatbots locally?


i think in this thread the goalposts were slowly moved. people were initially talking about success being predicted by having the excess necessary to comfortably take many shots on goal. it seems like we've granted that this $250k shot was a one-time thing.

it is true but irrelevant to the original topic that this is more money than the global poor ever see, and more money than most people get to have. i don't think anyone was arguing that this represents zero privilege.


I'm pretty sure there's no reason Anthropic has to do research on open models; it's just that they produced their result on open models so that you can reproduce it without having access to theirs.


I am sure there are some people who exhibit the behaviors you're describing, but I really don't think the group as a whole is uninterested in prior work or in discussion of philosophy in general:

https://www.lesswrong.com/w/epistemology

https://www.lesswrong.com/w/priors

https://www.lesswrong.com/posts/2x67s6u8oAitNKF73/ (a post noting that the foundational problems in mech interp are grounded in philosophical questions about representation ~150 years old)

https://www.lesswrong.com/w/consciousness (the page on consciousness first citing the MIT and Stanford encyclopedias, then providing a timeline from Democritus, through Descartes, Hobbes,... all the way to Nagel, Chalmers, Tegmark).

There is also sort of a meme of interest in Thomas Kuhn: https://www.lesswrong.com/posts/HcjL8ydHxPezj6wrt/book-revie...

See also these attempts to refer and collate prior literature: https://www.lesswrong.com/posts/qc7P2NwfxQMC3hdgm/rationalis...

https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-t...

https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-t...

https://www.lesswrong.com/posts/HLJMyd4ncE3kvjwhe/the-best-r...

https://www.lesswrong.com/posts/bMmD5qNFKRqKBJnKw/rigorous-p...

Now, one may disagree with the particular choices or philosophical positions taken, but it's pretty hard to say these people are ignorant or not trying to be informed about what prior thinkers have done, especially compared to any particular reference culture, except maybe academics.

As for the thing about Aella, I feel she's not as much of a thought leader as you've surmised, and I think doesn't claim to be. My personal view is that she does some interesting semi-rigorous surveying that is unlikely to be done elsewhere. She's not a scientist/statistician or a total revolutionary but her stuff is not devoid of informational value either. Some of her claims are hedged adequately, some of them are hedged a bit inadequately. You might have encountered some particularly (irrationally?) ardent fans.


The epistemology skews analytic and also "philosophy of science". It's not inherently an issue, but it does mean that there's a reason that I spend a lot of time here on orange site talking about Kantian concepts of epistemology in response to philosophical skepticism about AI.

A good example of the failing of "rationality" is Zionism. There are plenty of rationalists who are Zionists, including Scott Aaronson (who I incidentally think is not a very serious thinker). I think I can give a very simple rational argument for why making a colonial ethnostate is immoral and dangerous, and they have their own rational reasons for supporting it. Often, the arguments, including Scott's, are purely self interest. Not "rational."

>My personal view is that she does some interesting semi-rigorous surveying

Posting surveys on Twitter, as a sex worker account, is so unrigorous that to take it seriously is very concerning. On top of that, she lives in a bubble of autistic rationality people and tries to make general statements about humanity. And on top of that, half her outrageous statements are obvious attempts at bargaining with CSAM she experienced that she insists didn't traumatize her. Anyone who takes her seriously in any regard is a fool.


Here's a collection of debates about that topic:

https://www.lesswrong.com/posts/85mfawamKdxzzaPeK/any-good-c...

I personally don't have that much of an interest in this topic, so I can't critique them for quality myself, but they may at least be of relevance to you.


I am really not sure where you get any of these ideas. For each of your critiques, there are not only discussions, but taxonomies of compendiums of discussions about the topics at hand on LessWrong, which can easily be found by Googling any keyword or phrase in your comment.

On "considering what should be the baseline assumption":

https://www.lesswrong.com/w/epistemology

https://www.lesswrong.com/w/priors, particularly https://www.lesswrong.com/posts/hNqte2p48nqKux3wS/trapped-pr...

On the idea that "rationalists think that they can just apply rationality infinitely to everything":

https://www.lesswrong.com/w/bounded-rationality

On the critique that rationalists are blind to the fact that "reason isn't the only thing that's important", generously reworded as "reason has to be grounded in a set of human values", some of the most philosophically coherent stuff I see on the internet is from LW:

https://www.lesswrong.com/w/metaethics-sequence

https://www.lesswrong.com/w/human-values

On "systematically plan to validate":

https://www.lesswrong.com/w/rationality-verification

https://www.lesswrong.com/w/making-beliefs-pay-rent

On "what could hold true for one moment could easily shift":

https://www.lesswrong.com/w/black-swans

https://www.lesswrong.com/w/distributional-shifts

https://www.lesswrong.com/w/forecasting-and-prediction


Looking at the first link, https://www.lesswrong.com/w/epistemology - it has a frankly comically shallow description of the topic, same with https://www.lesswrong.com/w/priors. I may just be entirely the wrong audience, but in just about every discussion, to me they don't even begin to address the narrow topic at hand, let alone form competent building blocks for any solid world view.

I support anyone trying to form rational pictures of the universe and humanity. If the LessWrong community approach seems to make sense and is enriching to your understanding of the world then I am happy for you. But, every time I try to take a serious delve into LessWrong, and I have done it multiple times over the years, it sets off my cult/scam alerts.


Well, yeah, I think it's a pretty socially unaware thing to say about yourself out loud, so that's a pretty strong filter there.

It's rather different for a community to say that's a standard they aspire to, which is a lot less ridiculously grandstanding of a position IMO.


I think you're conflating different groups of people pretty severely.

"shunned" in particular is a really strong word, e.g, global health and biosecurity are two of the named categories at the most central EA events:

https://www.effectivealtruism.org/ea-global/events/ea-global...

