
Brilliant.

Until you have two people who are nearly identical. They don't even have to be twins; there are plenty of examples of people who can't be told apart. How is an AI going to do it?

You don’t own your likeness. It’s not intellectual property. It’s a constantly changing representation of a biological being. It can’t even be absolutely defined; it’s always subject to the way in which it was captured. Does a person own their likeness for all time? Or only their current likeness? What about more abstract representations of their likeness?

The can of worms OpenAI is opening by going down this path is wild. We’re not currently able to solve such a complex issue. We can’t even distinguish robots from humans on the internet.


Ironically this post only reinforced my longtermist leanings. I do value the species over the individual and I tend to think that if we don’t realign our incentives away from maximizing short term profits we may be destroying our future. Longtermism places species above nations, because nations can threaten our very survival. It’s the evolution of tribalism.


Longtermism, like many ideologies, sounds good on paper. The problem comes when you realize that our ability to predict the future even a week out is terrible, and a decade out is basically unknowable. You can pick out hindsight examples ("oh, X predicted Y") all day, but hindsight is the mother of all survivorship biases. In the end, it joins the long list of ideologies that promise everything with no means to deliver it.

"Oh just suffer now for this guaranteed future benefit" "Whoops turns out the future is complicated and the suffering and hate was all for nothing"


Or any philosophy, for that matter. I have been reading up on this subject lately via the book "What We Owe The Future". As I soak up more of the arguments, I see a lot of spilled ink, highly debatable key takeaways, and no practical recommendations for actions that were not already obvious.

In other words, the same thing you will get from any moral philosophy class. If you start philosophy expecting to get something tangibly productive... you are in for a disappointment.

For me, the important grain of salt is that better moral accuracy provides little to no value. You or I can do more good for the world with greater effort, but the efficacy of that good is mainly determined by the real opportunities we are connected to through the course of our actual lives. Different moral compasses would give slightly different answers, but the optimization makes a vanishingly thin difference. There was never much doubt about what was needed to be a better person; it's self-interest and vices, which are completely known quantities, that keep us from it. The exact flavor of utilitarianism under the hood is pretty much irrelevant.

I do find some value in philosophy as a thought exercise that allows us to entertain radical thought experiments far from the mainstream (see internet trolley problem memes). However, as I see longtermism now, it's pretty vanilla.


I feel the same way about the AI ethicist people. We are mostly just guessing that AI will look and function like the sci-fi books we’ve read, and guessing at how it will integrate with and impact society. Not just in the “will we get real AGI” tech sense, but even more so in how humans will learn to adapt and protect ourselves. Especially when it involves crippling our primitive AI R&D today for some indefinite future possibility.


One of those things makes claims about the future to justify whatever present actions people prefer.

The other warns about future dangers and cautions us not to take certain present actions. Slowing is not crippling, it's just conservative. And our species has a long history of abusing new tech and causing damage, which leads to strict, regressive reforms. And neither the damage nor the reforms are desirable.


> ability to predict the future

And

> hindsight is the mother of all survivorship biases

Increasing the chances of success is not predicated on predicting the future. Another strategy (given our knowledge of evolution by natural selection) is to increase the species’ chance of long-term success by enabling more degrees of freedom, so that the good “choices” survive.

Multiplanetaryism is one such example. Another is to avoid authoritarian structures, because you don’t know a priori which ideas will turn out to be best/right. Hence, you want competition between ideas, not suppression of dissent.

So, again, you don’t need to predict the future to increase chances of success.


> Another is to avoid authoritarian structures, because you don’t know a priori which ideas will turn out to be best/right. Hence, you want competition between ideas, not suppression of dissent.

It's important to note that high wealth inequality is an "authoritarian structure" in this respect because a small number of people end up controlling access to the capital necessary to realize an idea.


> because a small number of people end up controlling access to the capital necessary to realize an idea.

Ignoring the fact that capital isn't static (it is created), do you believe that if there's minimal wealth inequality that we'd see an explosion of great ideas that today aren't realized? If so, how would you account for the natural tendency for money to chase good ideas?


I don't know why people are obsessed with equity (physical or financial) capital.

Capital productivity theories should be rejected on the basis that they fail to consider input materials: ultimately, all value is derived from land, or better yet, from the universe and the laws of physics itself.

Those factors were not created but we can play attribution games and pretend they were.


I would say that controlling global warming is another way to increase our degrees of freedom for future choices.

I generally agree but many of the quotes in the article display the kind of extreme hubris that leads to taking large risks that increase our chances of existential doom.


Ok, so we should reject longtermism on the basis of rejecting authoritarianism. After all, people can just decide for themselves how future oriented they want to be. Therefore there is no need for longtermists to force their opinions onto other people, no matter how enticing their expected value calculations are.


>"Oh just suffer now for this guaranteed future benefit"

I don't know why, but Austrian economists genuinely believe in pure time preference theory. They say time preference (there are multiple kinds) is always positive, because nobody would delay consumption unless they were rewarded for it.

Therefore delaying consumption always deserves a reward, whether the economy is growing or shrinking. Essentially, this is a belief that all present suffering will result in a better future. In other words, longtermism is a very hardcore capitalist philosophy.
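As a toy illustration (all numbers are made up, not taken from any economic text), a strictly positive time preference amounts to discounting every future payoff back to the present at some constant rate r > 0:

```python
# Toy sketch of positive time preference as exponential discounting.
# The rate and payoff below are illustrative assumptions, not real estimates.

def present_value(future_payoff: float, years: int, rate: float) -> float:
    """Discount a payoff received `years` from now back to today at `rate`."""
    return future_payoff / (1 + rate) ** years

# A payoff of 100 arriving in 10 years, discounted at 5% per year:
pv = present_value(100.0, years=10, rate=0.05)
print(round(pv, 2))  # ~61.39
```

Note that the formula rewards delayed consumption unconditionally: the discount rate is a constant, independent of whether the underlying economy grows or shrinks, which is exactly the assumption being criticized here.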

The party poopers talking about finite resources and increasing entropy should just shut up.


I believe that some degree of longtermism is a requirement for consistent progress… yes, there are a lot of pressing problems that need to be addressed today, but at the same time humanity isn’t going to move forward as a whole if everybody is busy staring at their feet.

It’s only natural for some number of people to be concerned with making sure that tomorrow brings progress on some axis, because that is not something that just happens on its own. Progress, particularly leaps of it, is made as a result of people having “outlandish” visions and working to achieve them.


Yes this sounds great but the issue is about how longtermists revel in the opportunity to sacrifice near term goals for long term ones, in an almost pathological way. I still don't understand why there should be any conflict between aggressive environmentalism and longtermism, but it keeps coming up over and over again. All I can think of is that longtermists want to demonstrate how truly long-term they are and therefore act like one of the most pressing current issues is secondary to a bunch of other stuff. In general and in the absence of more info I think it's safe to go for the proximal goal of stabilizing the world's ecosystems, even though it _could_ in theory be a non-existential risk. All of that said, the conflict between longtermism and environmentalism might mostly be manufactured by the critics of longtermism, like the author of this article.


> if we don’t realign our incentives away from maximizing short term profits we may be destroying our future.

What does that look like, in practice?

It seems to me the magic of Longtermism is that when time horizons can be arbitrarily set, anything that one dislikes or doesn't care about can be dismissed as "short term."
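A minimal sketch of that horizon sensitivity (entirely hypothetical numbers): the same stream of costs and benefits flips from "bad" to "good" depending solely on where the horizon is cut.

```python
# Toy example: a project with a fixed up-front cost and a small recurring
# benefit. Whether it is "worth it" depends only on the chosen time horizon.

def net_value(cost: float, annual_benefit: float, horizon_years: int) -> float:
    """Undiscounted net value of the project over the given horizon."""
    return annual_benefit * horizon_years - cost

print(net_value(cost=100.0, annual_benefit=3.0, horizon_years=10))    # -70.0
print(net_value(cost=100.0, annual_benefit=3.0, horizon_years=1000))  # 2900.0
```

Pick a 10-year horizon and the project is dismissible as a short-term loss; pick a 1000-year horizon and it looks mandatory.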


Longtermists disregard people who only care about the next thousand years, for instance.

Surely, at some point the human ability to process information reaches its end, right? Talking about millions of years into the future is beyond any individual.


Wild. This was only a weak criticism and still made longtermism look embarrassingly bad.

Why do longtermists put any faith in their ability to discern the current state of the world, the consequences of current events, of their own actions, of reactions to their reactions, etc.? There is tremendous measurement and prediction error in all of these, even if you give it a couple of coats of "rational" paint so you feel smart when you talk about it.

Any reasonable course of action under so much uncertainty would not try to wiggle the entire future, but to accomplish some smaller part of "the work" now, taking responsibility for something you can actually understand (not just delude yourself into imagining you understand). Go work on some constituent component of spaceflight, maybe, at best. AI X-risk is still OK, but only to the extent your work accomplishes anything (thus far: zilch, all wasted). But absolutely anything else done in service of the "species" is the grandiose delusion of children, and can be used to justify anything. If the species will survive anyway, then you're just comparing infinities, so some other criterion is needed. Better pick a good one, and now you still have to decide what Good is. The "longtermist" imagines themself above all finite concerns. The only work good enough for their ego is lecturing others about the vastest things possible.

Early 20th-century eugenics constantly committed a fallacy that comes up here too. You learn about evolution, you learn that the ultimate "goal" of evolution is to survive, thrive, and outcompete, and then you falsely conclude that you should therefore make that your goal. Not so. It does not follow. You just found a reason to justify your desire to glorify yourself.

Same for longtermism. The species ought to survive, but when you make that your personal objective, you reject all the other emotions that evolved to ensure survival and have done so thus far: in particular, caring for the wellbeing of the people around you, and the capacity to solve actual problems at a scope where you can understand the consequences of your actions.


Furthermore, in other words—god this makes me so mad—it's a question of responsibility. The species has loads of people to take responsibility for it, and will have infinitely more in the future. The people who need help immediately around you? The actual world? Not so much. If you completely dissociate from your physical existence in the world and imagine all people everywhere at all times as equidistant from you, like a god, well, great, but I suspect you're doing that because you're too vain and hurt and too much of a coward to engage with the world around you in all its painful reality—painful to you, because it will remind you that you haven't felt like a valid part of it YET, and you never WILL until you accomplish the Work that makes you worthy of existing—fixing everything everywhere for all time, but starting with—nothing, nowhere, ever. You'd do well to notice that you have voluntarily accepted responsibility for this on some trumped-up rational basis because that's actually EASIER than doing something once for anyone who actually exists.


> my longtermist leanings. I do value the species over the individual

Do you think the essay misrepresents "big L" Longtermist beliefs? My reading of it makes it seem that "value the species over the individual" maybe does not reflect their goals very accurately.

I tend to think in the long term myself. I like a lot of the same goals that Longtermists have: using technology to make ourselves better than we are, and expanding into other star systems being the most obvious ones.

On the other hand, I don't consider myself a Longtermist. I think it's completely futile to try to calculate "utility" for individuals, and abhorrent to develop a society that tries to forcibly maximize that imaginary "utility" number, instead of letting people choose for themselves, because I believe that trying to forcibly maximize a value that can't actually be measured accurately would inherently reduce the overall happiness of the entire species.

In other words, rather than "value the species over the individual", I would describe Longtermism (as portrayed in the article) as trying to forcibly scale up act-utilitarianism to apply to everyone, everywhere, all the time. At that scale, it can superficially seem like valuing the species, but IMO if one actually wants to think in the long term, rule-utilitarianism should be the guiding philosophy. Rule-utilitarianism makes it even more impossible to calculate results, of course, so it can only be a philosophy, not a rigid system of data and calculation.


One problem with neoclassical economics is that utility curves are unobservable and have a philosophical origin.

At some point, someone has to decide what utility really means, and capitalists run into the problem that they run out of profitable things to invest in; and even if they don't, the profit they get becomes increasingly meaningless. At some point all forms of investment become recreational, just like a video game.


> I do value the species over the individual

Not so far into the future, people probably won't even be the same species. Even back a couple hundred thousand years, we probably couldn't mate with our ancestors. Does longtermism place value on making sure future people are genetically similar to us?

Species don't seem that important to me, in the long-term, as they are all quite ephemeral.


The importance of species shows up in the longtermist focus on AI. They are worried that AI will eliminate or replace humans, but this ignores that AI could be people. Are faithful AI better descendants than diverged humans?

Consider the scenario where humans can't live off Earth, so it isn't possible to colonize the galaxy. What is the value of Earth-bound humans compared to AI that can spread across the galaxy?


> Does longtermism place value on making sure future people are genetically similar to us?

You don't get 10^58 people in flesh and blood bodies. Most of them only exist in their version of the matrix inside a quantum computer.
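The expected-value arithmetic that makes figures like 10^58 carry so much weight can be sketched as follows (the probability shift below is an invented illustrative number, not anyone's actual estimate):

```python
# Toy expected-value comparison of the kind longtermist arguments rely on.
# The probability shift is a made-up illustration, not a real estimate.

FUTURE_PEOPLE = 10**58        # hypothetical future (mostly simulated) people
PROB_SHIFT = 1e-20            # tiny assumed change in the odds they ever exist
PRESENT_PEOPLE = 8 * 10**9    # roughly everyone alive today

future_ev = PROB_SHIFT * FUTURE_PEOPLE  # expected future lives affected
print(future_ev > PRESENT_PEOPLE)       # True: the present is always swamped
```

Under this arithmetic, any intervention with even an astronomically small claimed effect on the far future outweighs helping everyone alive today, which is why critics regard such calculations as unfalsifiable.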


Curious, do you have children?

