
I'm not a fan of this hyper-aggressive, line-in-the-sand argumentation about AI that pushes it all precariously close to culture-war shenanigans. If you don't like a new technology, that is perfectly cool and your right to an opinion. Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment. That is NOT at all clear, settled, or even correct much of the time. I'm open to that conversation and debate, but diatribes like this make it far too black-and-white with "good" people and "bad" people.


The issue is that without loud declarations like this, the money men will just soldier on with implementing a shittier future.

It's always do something first and then ask for forgiveness. But by the point you ask for it, it's too late and too many eggs have been broken. And somehow you're richer at the end of it all and thus protected from any consequences, while everyone else is, forgive my French, fucked.

Has Facebook been a net positive so far? Has Twitter? You may make a case for YouTube, but what about Netflix?

It's only been good to us (engineers) and our investor masters, but not for the 90% of the rest, which, may I remind you, is the distribution that created us in the first place. Sorry for being dramatic, but I do seriously think these things need to be reined in, and especially people like Altman, who, while believing themselves to be good-willed (and I have no doubt that he is a better man than Musk, for example), end up being the Robert Moseses of our generation. That is, someone with good intentions who ends up making things worse overall.


Why would YouTube or Netflix be a net negative?


YouTube was one of the sites where the recommendation engine, trained to increase engagement, was pushing people into conspiracy theories and politically divisive content… and some other darker stuff.

They have done some work to try and mitigate some of that, but it seems like it will be a cat and mouse game between the AI and society, and a lot of damage was already done.


I said YouTube isn't a net negative. Even then you have to think globally: what about children who grew up consuming AI-generated abomination animations made with zero oversight from even their greedy creators?

Netflix was the cause of the current slate of tax-writeoff cancellations (no Netflix, no overinvestment by Warner etc., and no clumsy Zaslav cleanup), terrible shows being greenlit, homogeneity of camerawork and casting, identity-politics pandering that nobody asked for, oversaturation of the market with streaming services that is almost as bad as cable, etc. It's basically a cancerous overgrowth: good short-term and terrible long-term. The binge-TV model is also not a good thing, in terms of how much time it takes up, how little it brings in terms of pleasure, and how it validates our growing impatience. I could go on and on. Does anyone even remember Mank or The Killer? Imagine Poor Things being a Netflix-only release. Nobody would say a single peep about it outside of niche film Twitter accounts.

Generally speaking, if you've read "Seeing Like a State", then you can apply the same logic to companies and entire industries, or really any aspirations of "man". We crave control and fear uncertainty, so we make environments far more deterministic, which brings more short-term profit but ruins the environment (be it nature or film itself). Look at Disney: Iger created the superhero-movie boom (by making it super deterministic and boring: every movie is part of a giant puzzle so that each piece brings money) but in the process killed star vehicles and killed experimentation (by directors, actors), and now Scorsese and Coppola need to throw around their weight to reverse the course. Sure, A24 exists, but before all this, movies were A24 movies essentially. Now a major star being in a horror movie is an "Event". (Who is even a star anymore? DiCaprio, Cruise? These people have been around since the '80s. You think Chris Evans will have the same longevity?) Yeah, there were similar periods of dominance ('80s action movies), but they weren't so precisely fine-tuned, and they featured greater directorial freedom and less emphasis on being non-offensive.

I guarantee you none of you will quote the Avengers movies (Ultron onwards) in the next 20 years. People still quote Terminator 2 or Predator or Lethal Weapon, despite them also being brainless flicks (some not so brainless, actually). Look at Dr. Strange 2: they forced Raimi to stop being Raimi, basically (the first movie has some good moments), and made him fall in line with the "agenda", because the plan™ is too important to compromise on. In reality the plan™ is a money perpetuum mobile lol. Sure, these people were always greedy, but the stochasticity of the system allowed good stuff to pass through their Eyes of Sauron lmao.

Tbh I don't even know why I'm responding to a "throwaway" account.


> terrible shows being greenlit, homogeneity of camerawork and casting, identity-politics pandering that nobody asked for, oversaturation of the market with streaming services that is almost as bad as cable, etc. … The binge-TV model is also not a good thing

None of this is unique to Netflix. Terrible shows have been greenlit since the dawn of television. Shows are incredibly homogeneous because they’re largely produced by a select few people. If anything, Netflix has broken that homogeneity by allowing more indie film/tv creators to break through (like Squid Game).

Casting and identity politics shenanigans are definitely not unique to Netflix and started way before Netflix began producing content. The oversaturation problem only became a problem when all the other networks wanted their own slice of the pie. It was actually great for a while when Netflix was the only big player in town.

And finally, binge-TV has always been possible. My grandmother would sit in front of cable television sun-up to sun-down watching whatever was on. 24-hour marathons of MythBusters and other shows like that were very common. Reruns of all your favorite sitcoms play all night on all major cable channels. Bingeing isn’t a problem unique to Netflix. Netflix just allows you to do it with new shows instead of waiting arbitrary amounts of time.

Also, is binge-reading a novel unhealthy? I’ve had 8-hour reading sessions when I’m gripped by a great book, like the last book in the Wheel of Time series. If that’s acceptable, why isn’t watching a show for 8 hours acceptable? I don’t think the medium really has that much of a tangible effect. Now if you’re bingeing shows every day, then it’s unhealthy. But once a quarter when a new show you like comes out? Idk, that seems fine to me.


I'm not sure if you're joking about bingeing, but you have to be delusional to think that someone taping their favorite show on their own or waiting for it until it ends is nearly the same as dropping all episodes at the same time on principle for every show and only offering that as an option for a long time, building your whole UI around it and encouraging this behavior. The point about others jumping on the bandwagon could only happen because Netflix broke down the barrier and over-invested (by going deep into the red) to justify accessibility, knowing full well they wouldn't be able to keep up the steam indefinitely. I know you're trying to be smart here, and failing, but are books built to be binged? Do they know when your attention is dipping? Do books have a UI to do this? You can make the same inane argument about doing math for 8 hours a day or something.

Also, the scale at which Netflix was throwing money around was unprecedented, so much so that other TV shows and writers were making fun of it. That's like saying periods of a good investment climate are equivalent to a dot-com crash or a housing bubble.


Not to comment on AI, or the merits of television as a medium here, but specifically on the drop-releases of entire seasons of shows.

I do not want to retain the context of some show across weeks. If I'm going to watch something, it will be all in one go, over the course of some reasonable time period that _I_ define - that may be a single day (transatlantic flight, for example), or may be a single week.

Typically for the streaming services that don't release all episodes at once, that means I won't even start until the complete season is available, and almost inevitably will get so annoyed by the service that I will just cancel a subscription to it.


I’m not joking.

> but you have to be delusional to think that someone taping their favorite show on their own or waiting for it until it ends is nearly the same as dropping all episodes at the same time on principle for every show and only offering that as an option for a long time

What? The only reason cable didn’t release seasons all at once was to maximize profits. They get to run more ads, to force a user to stay subscribed for longer to finish their favorite show, charge studios extra for prime time spots, and more. These big productions are usually done with the whole season by the time it would release on cable. It’s not like they would stagger the release out of the goodness of their hearts to help people avoid bingeing.

What does it matter if you watch 8 hours of the same show or 8 hours of different sitcoms? I never mentioned taping a show and bingeing it all at once. I mentioned the fact that some people will watch large amounts of television regardless of what’s actually on the screen.

> I know you're trying to be smart here, and failing, but are books built to be binged?

And I guess you’re trying to be smart and failing? What’s with the backhanded comment. Why not engage with the argument being made instead of making comments like this?

That’s beside the point. The argument I was making is people for some reason find bingeing a show for 8 hours morally reprehensible. But reading a book is fine. The same thought process has been applied to gaming for long sessions. My argument is why are these different mediums deserving of different moral judgments? What makes reading, doing math, playing video games, or watching tv for long periods of time more or less reprehensible? These all serve one purpose: activities meant to entertain (maybe not math). Why does the medium the entertainment is delivered through make it any better or worse?

It isn’t. I think the thing people have a problem with, rightly so, is the lack of balance. It’s unhealthy to obsessively engage in one activity for long periods of time. But that’s a different argument altogether.

All in all, you didn’t really refute anything I said or try to show how any of these things are unique to Netflix. I agree with you by the way, but you’re framing your argument poorly in my opinion.


I honestly am not sure what you were arguing for in the first place. I agree that I was being rude to an extent, but your previous comment didn't lend itself to the most charitable interpretation, as it wasn't clear to me what in mine got you to respond by refuting my points.

> My argument is why are these different mediums deserving of different moral judgments? What makes reading, doing math, playing video games, or watching tv for long periods of time more or less reprehensible? These all serve one purpose: activities meant to entertain (maybe not math). Why does the medium the entertainment is delivered through make it any better or worse?

I'm not making a moral judgment on the medium itself. I love the medium; I'm a cinephile, to be honest. I'm saying any entertainment that becomes gamified like this (be it books or shows), gaining more and more control over you by way of gathering data while you use it, is worse than the same type of standalone entertainment that has less influence. I don't think you will argue with me that a passive TV cannot influence you as much as a system that actually keeps track of your activities.

Basically, any medium that outstays its welcome in your life through underhanded tactics is bad, in my opinion. If you've read The Diamond Age then you remember the "Illustrated Primer", which is basically an adaptive AI that weaves your life into its storytelling. Imagine an e-reader with GPT-6 embedded that does just that, but instead of teaching you, it just keeps creating a more and more compelling story full of ads or something. I'd be equally opposed to that (and to the reading of it). It's not the medium for me; it's the vehicle of delivery becoming bigger than the delivery itself. The horse becomes the cart, if you will. So a period of seeming freedom followed by this winter isn't good for the industry, basically.

Now I'm not claiming Netflix is responsible for Marvel/Disney, those are separate beasts and processes. But I do argue that they come from the same tendency and desire that fuels other companies I mentioned prior: FB, Twitter, YouTube etc.

In terms of how Netflix itself is responsible, my argument is that its underhanded tactics in 'disrupting' the industry (lowering the threshold of entrance, running at a loss for a time) forced the other players into the same local minimum, and now everyone is stuck this way. And to make it even clearer, I think my issue is that Netflix ushered in an era of greater centralization and homogeneity, where practices throughout the industry became even narrower and things like mid-season cancellations became even more the norm. Now I'm not sure if it's necessarily different from the past (probably not), but knowingly creating a bubble, with the resultant layoffs and losses of jobs, is no different than a drug dealer who got you pure stuff the first few times and then sells you diluted dope once you're hooked.

As I said, I don't think anything has fundamentally changed in terms of how money men operate; what I don't like is how we keep giving them tools to become more and more powerful, which is what I was railing against throughout this thread. Yes, we depend on their funding, but it doesn't mean we have to help them secure their empires to a 1984 extent. Because at the rate it's going, it will happen. Altman is a person who (given his recent actions, like military contracts) will lead us down that way. The employee revolt showed that these engineers only care for their bottom line.

Anyway, apologies for misinterpreting your point, but I do think you also didn't necessarily get mine. Since we are not in disagreement, we can keep up the argument, but in more civil terms.


Man, I really hate how people absolve themselves of responsibility like this. "It's not my fault I spent all weekend watching Netflix, it's their UI!"

No, it isn't. I love when they release full seasons at a time. I can watch them at whatever pace I please. If some degenerate can't control themselves that's their problem.

"But muh children" parent them. "But irresponsible parents" Netflix is probably the best case scenario there.


I don’t interpret this discussion as about absolving oneself of responsibility. To be fair, what people spend their weekends doing is none of my business.

But it is true that Netflix makes UI decisions that encourage binging. They are not evil for doing so, because honestly, there isn’t anything wrong with binging anyway, but it’s indicative of the logic that is going to be used when it comes to producing their own shows.


You could say they encourage binging, or you could say they have good UX. I don't really know what type of functionality we're talking about here; I'm thinking things like automatically playing the next episode, skipping intros, etc. To me that's just good design, and it's exactly what I want the app to do.

And I don't really see how Netflix benefits from binging. It seems to me that Netflix is more like gyms - they want customers who pay but don't use the service. The more you watch the more you're costing Netflix. If you pay but never watch anything you're the perfect customer, they're collecting free money.

If you just watch a full show in a weekend and unsubscribe that's really not ideal for them.


What if they are spending their weekends building devices to kill you? Does it now become your business?


No one is saying you don't have agency as an individual. This is an aggregate statement. In any A/B test you're interested in the proportion of people converted, that is, who displayed an increase in the desired behavior. What you can't do is go on and extrapolate this to any individual person, because that's not how statistics work or are designed.

You're taking a rightwing/libertarian approach (no judgement) where everyone has complete free will to do anything they want and makes fully informed decisions. Rational actor and all that. Reality is quite different, and if you don't believe me, you can peruse the ton of work in behavioral economics that shows it.

Hell, I don't even need to go far to conjure an example: gambling addicts.


I don't really know what you think I don't understand. I'm not arguing from the point of view of an A/B test, I'm arguing from the point of a Netflix user.

I don't care how others ruin their lives. There's a million and one ways to do so, and if you try to remove one they'll find another. If you choose to binge Netflix all day, that's on you. If you choose to overindulge in drugs or food or whatever, that's on you.

By all means, help people who need it. I'm a strong supporter of all kinds of social safety nets. Free healthcare, free rehab, free counseling, free education, bring it on. It's the best possible investment a society can make, any society that ignores these obvious improvements is shooting itself in the foot. If someone needs and wants help, help them.

All I'm saying is Netflix (and similar) is great the way it is. It's a much much much better experience than tv used to be.

So when I see people seemingly hold them responsible for the behavior of their users, it honestly makes me angry. They're doing what we want. We should celebrate them for that, not criticize them. It's not their fault people can't control themselves.

And I fail to see what you think the motivation is. You seem to think Netflix is secretly scheming to make their viewers binge more - why? That would be like a gym trying to make their members come in to the gym. Fact is, if everyone went to the gym on a weekly basis they wouldn't have space for half of their members and they would go bankrupt. I don't know the details of Netflix's server costs, but I'm betting that if everyone on there were to start binging everything, they would go under as well. I don't see any reason why Netflix would want people to binge more. Not one. It would increase server costs, maybe also licensing costs, bring in zero extra money, and once people were done watching everything interesting they would unsubscribe. It seems much more favorable for them if people watched one episode per week and kept paying for years while hardly using Netflix's servers.

But we don't want one episode per week, we want to decide our own pace. We don't want to have to choose to play the next episode, we can pause the show whenever we want to. We don't want to watch the same intro for each episode.

That's why we pay for the service - it's what we want. That's their incentive. Not making degenerates spend more in server costs per week than they pay per month. That's not good business for Netflix.


This sounds like a critique of the creators of the AI more so than the users of it, whom TFA is targeting.


They always have done and always will do, and society has almost never done anything about it and isn't doing anything about it now. AI is just another tool in the toolbox, and as I'll keep repeating, the problem is never with the tool but with the tools using the tool.

How can we justify complaining about AI for these reasons? We've all sat on our asses, and now they're billionaires and soon-to-be trillionaires. We've already failed, dude.


AI/ML will change our world.

It already does.

It's a paradigm shift, and probably the most impactful one after the internet.

From human-machine interfaces to medicine, research, content creation, etc.

No one cares about some dude posting some negative rant like that.


"Change the world" and "paradigm shift" are not inherently good.


It's evolution and our responsibility to make it good.

It doesn't matter whether we like it or not, though, because it's happening.

The only thing preventing this is a total economic collapse, one so crazy that our society doesn't continue chip production.


We weren't able to make the internet good, what makes you think we'll do any better with AI?


The internet is responsible for longer lifespans and decreased mortality globally. It is, on balance, well beyond "good."


I can do my bank stuff online, pay and manage bills, call my parents with video, send pictures, etc.

The internet is a huge success.

The internet is connection, not private websites.


I appreciate the optimism, but to me this should have been your essay about the good of the internet. I'm convinced it would be worth reading. If you wrote it, I won't see it, because that is how certain people like it.

My lengthy rant in response would likely be about the almost impossible puzzle of logistics, the access to the vast ocean of knowledge that humanity has accumulated, and the organization of this complete mess that is civilization.

We have plenty of stuff, we just can't get it to the right place at the right time. We know plenty of stuff, but we can't get it to the right person at the right price. We really want to make this democracy thing work, but despite our efforts we keep getting sausages filled with you-don't-want-to-know.

My definition of a huge success is different. Maybe I'm wrong for thinking we could do more with the tool. If I'm wrong I don't want to hear it :-)


To make it good, critics need to point out where it’s going bad.


Clearly somebody cares, or else we would not be in this subthread.


I care. So, you should recheck the facts that you presumably got from your Bing query.


> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

I don't think this article even remotely attempts this claim. The closest it gets is suggesting that if these defenses are too much trouble for you, then perhaps your use case for AI wasn't great in the first place.

> but diatribes like this

How is this a diatribe? There's nothing bitter about the writing here, it's entirely couched within the realm of personal opinion, and is an unexpurgated sharing of that opinion.

Please don't position your arguments so that if I want to share my opinion I have to defend myself from accusations that I'm being exceedingly bitter or somehow interfering with what you intend to do.

You're effectively attempting to bully people out of their own opinions for the sake of your convenience.


> it's entirely couched within the realm of personal opinion

"AI output is fundamentally derivative and exploitative"

"If you want custom art, pay an artist."

"Human recommendations will always be better."

If you can't argue against any of those stances, what stances are up for debate?

Surely the person you're responding to was just posting their own opinion, and you're as much a bully as they are?


> I don't think this article even remote attempts this claim.

It's in the first sentence, "AI output is fundamentally derivative and exploitative (of content, labor and the environment)."


Any fruit of any manufacturing labour is fundamentally derivative and exploitative: it needs raw materials from the environment, and it needs labour for the intended transformation; if anything, the AI output is less exploitative because the raw inputs don't end up destroyed in the process.


> You're effectively attempting to bully people out of their own opinions for the sake of your convenience.

Maybe it's just me, but "bully" seems like a very exaggerated choice of words here.


No you absolutely should have to defend yourself. Like the author, I don't want to touch anything you create that is produced with generative AI.

The ONLY exception is if you can demonstrate that your model was trained solely on datasets of properly licensed works and that those licenses permit or are compatible with training/generation.

But the issue is that, overwhelmingly, people who use generative AI do not care about any of that, and in practice no models are trained that way, so it's not even worth mentioning that exception in this day and age.


I'm with you, but I think it is a bit more complicated. I think a reason for a lot of the pushback is that these systems are being oversold. A lot of tech over-promises and under-delivers. I'm not sure it is just an AI thing so much as a limit on how far you can push the amount of acceptable exaggeration.

It definitely is frustrating that many things are presented as binary. But I think we can only resolve this if we dig a little deeper and try to understand the actual frustration that people are attempting to communicate. Unfortunately a lot of communication breaks down in a global context, as we can't rely on the many implicit priors that may be generally shared within different groups. Complaining is also the first step to critiquing. I think you're right that we should encourage criticisms over complaints, but we can also attempt to elicit critiques from complaints, and we should.


The idea that machine learning systems like large language models and image generators exploit labor might be up for debate, but the fact that they are disproportionately damaging to the environment compared to the alternatives is certainly true, in the same way that it's true for Bitcoin mining. And there's more than just those two aspects to consider: it's also very much worth considering how the widespread use of such technologies and their integration into our economy might change our political, social, and economic landscape, and whether those changes would be good or bad or worth the downsides. I think it's perfectly valid to decide that an emerging technology is not worth the negative changes it will make in society or the downsides that it will bring with it, and to reject its use; technological progress is not inevitable in the sense that every new technology must become widespread.


> disproportionately damaging to the environment compared to the alternatives

This is a new one to me. Do you have any source for that? Once a model is trained, it seems pretty obvious that it takes Dall-E vastly less energy to create an image than a trained artist. I have trouble believing the training costs are really so large as to change the equation back to favoring humans.


Dall-E is usually not an alternative to a trained artist, but an alternative to downloading a stock image from the internet, which takes way less energy.


AI-generated images have already won numerous awards. They can easily make assets good enough for an indie video game. Even stock images have to come from somewhere.


Jevons paradox, though? Planes are so much more efficient now than the first prototypes, yet usage is so much higher that resource consumption due to airplanes has vastly increased. Same goes for generative models.


I'm not sure your premise even makes any sense here, because it doesn't take an artist much more in resources to produce art than it takes them to just exist for the same amount of time. They're still just eating, sleeping, making basic usage of the computer, using heating and light, and so on either way. Whereas someone using Dall-E is doing all of that plus relying on the immense training costs of the artificial intelligence. The basic usage of the computer in order to use the machine learning model might be shorter than the basic use of the computer to use Procreate or something, but they'll still be using the computer for about the same amount of time anyway, because the time not spent making art will just be shifted over to other things. So it doesn't seem to me like having machine learning models do something for you, instead of learning a skill and doing it yourself, will really decrease emissions or energy usage noticeably at all.

Furthermore, even if there is some decrease in emissions from using pre-trained machine learning models over using your own skills and labor, the energy costs of training a powerful machine learning model like you're thinking of are way higher than I think you are imagining. The energy and carbon emission cost of training even a 213M-parameter transformer for 3.5 days is 626 times the cost of an average human existing for an entire year, according to this study: https://arxiv.org/abs/1906.02243. Does using a pre-trained machine learning model take that much emission out of people's lives? Or a day's worth out of 228,490 lives, perhaps? I doubt it.

But we aren't even using such small transformers anymore either — they actually aren't that useful. We're using massive models like GPT-4, and pushing as hard as we can to scale models even further, in a cargo-cult faith that making them bigger will fundamentally, qualitatively shift their capabilities at some point.

So what does the emissions picture look like for GPT-4? The study above found that emissions costs scale linearly with the number of parameters and tuning steps as well as training time, so we can make a back-of-the-napkin estimate that GPT-4 is 8,592,480 times more expensive to train than the transformer used in the study, since it is rumored to have 1.76 trillion parameters versus the 213 million of the model in the study, and GPT-3 was said to take 3640 days to train (despite using insane amounts of simultaneous compute to scale the compute up in conjunction with the scale of the model) versus 3.5 days. This in turn means it is 5,378,892,480 times more expensive to train GPT-4 than it is for a human to live for one year. And again, to reiterate: no matter what work the humans are doing, they're going to be living for around the same amount of time and producing roughly the same carbon emissions, as long as they're not taking cross-country or transatlantic flights or something. So it's more expensive to train GPT-4 than it is for almost 6 billion people to live for a year. I don't think it's taking a year's worth of emissions off of 6 billion people's lives by being slightly more convenient than having to type some things in or draw some art yourself. And there are only 8 billion people on the planet, so I don't think there are enough people to spread smaller gains across to justify the training of this model (you'd have to take a day's worth of emissions off of 1,963,295,755,200 people to offset that training cost!), especially since, in my opinion, the decrease in emissions from using machine learning models would necessarily be absolutely minuscule.
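(If you want to check that arithmetic, here it is as a few lines of Python: a minimal sketch of the same back-of-the-napkin extrapolation, taking the rumored parameter count and claimed training time above as givens, and assuming, as I did, that emissions scale linearly in both.)

    # Back-of-the-napkin linear extrapolation, reproducing the numbers above.
    # All inputs are rumored/claimed figures, not measurements.
    study_params = 213e6        # transformer from the study (213M parameters)
    study_days = 3.5            # its training time
    study_human_years = 626     # claimed: its emissions, in human-years of CO2

    gpt4_params = 1.76e12       # rumored GPT-4 parameter count
    train_days = 3640           # claimed GPT-3 training time, used as a proxy

    # Assume emissions scale linearly in parameter count and in training days.
    scale = (gpt4_params / study_params) * (train_days / study_days)
    print(round(scale))                      # ~8.59 million x the study model
    print(round(scale * study_human_years))  # ~5.38 billion human-years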


This back-of-the-napkin estimate for GPT-4 emissions costs is too high by orders of magnitude. Your estimate is that training it emitted about as much CO2 as 5.38 billion average humans living their lives for a year did. With a world population of 8 billion, it would mean that GPT-4 training was equivalent to 0.67 years of total anthropogenic CO2 emissions. Since GPT-4's CO2 emissions all come from manufacturing hardware with fossil fuels or burning fossil fuels for electricity, this is roughly equivalent to 0.67 years of global fossil fuel production.

But OpenAI had neither the money nor the physical footprint to consume 0.67 years' worth of global fossil fuel production! At those gargantuan numbers OpenAI would have consumed more energy than the rest of the world combined while training GPT-4. It would have spent trillions of dollars on training. It would have had to build more data centers than previously existed in the entire world just to soak up that much electricity with GPUs.
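(To put a number on that sanity check, here's the same division as a minimal Python sketch, assuming the parent's ~5.38 billion human-year figure and a rough world population of 8 billion.)

    # If the parent's estimate were right, a single training run would equal
    # this fraction of a full year of ALL anthropogenic CO2 emissions.
    claimed_human_years = 5.38e9   # parent's back-of-the-napkin estimate
    world_population = 8e9         # rough world population
    print(claimed_human_years / world_population)  # ~0.67 years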


That's a good point; that's what I get for doing a linear extrapolation. This looks like a better estimate, which doesn't look good for my argument: https://towardsdatascience.com/the-carbon-footprint-of-gpt-4...

I still think my point stands, though, about using ML models not decreasing emissions versus a human doing the same task: humans don't produce much more or less in emissions depending on what task they're doing, and they'll be existing either way, and probably using the computer the same amount either way, just not spending as much time on that one task. So I don't see how you can argue that using an ML model to write or draw something uses less CO2 than a human doing it. You can't count the CO2 the human takes to exist for the time it takes them to do the task as the CO2 cost of the human doing the task, because humans don't stop existing and taking up resources when they're not doing a task, unlike programs. And you can't really compare the power used to run the ML model to the power used by the computer the human is using during the time it takes them to do the task either, since the human will need to use the computer to access your ML model, interact with it to define the prompt, edit the results, etc. (and also because, again, they'll probably just shift any time saved doing that task on the computer to another task on the computer). Additionally, of course, there's the fact that you can't really use large language models to replace writers, or machine learning image generation tools to replace artists, if you actually care about the quality of the work.


Huge kudos for admitting this changes your reasoning - I don't see people willing to admit that often, especially on the internet.


Thank you! It would have been silly for me to deny that my math was off; I don't really know how I would have rhetorically done that lol. I did find another relevant link on this topic for consideration, though, after writing my above comment: https://futurism.com/the-byte/ai-electricity-use-spiking-pow.... According to that article, although large language models are not yet drawing as much power as I calculated (so my linear extrapolation was still silly), apparently they might eventually do so (0.5% of the world's energy by 2027). The actual study is paywalled, though, so I don't really know what their methodology is, and they may well be doing the same linear extrapolation I was doing above, so I'm not really sure how seriously we should take it. It's something to consider, though, when we weigh the costs and benefits.


I would argue that most social progress comes from automating a task and freeing humans up to do something else - your logic counts just as solidly against building a car in a factory, or using a sewing machine, or a thousand other socially acceptable things. Surely the "LLM Revolution" isn't worse than the Industrial Revolution was?


Nothing I said was about automation per se being bad; I'm not sure where you got that from. I was specifically talking about the relative carbon emissions of machine learning models doing something versus human beings doing something, and arguing that the former doesn't have an advantage over the latter in emissions, in my opinion. I don't think that really applies to automation in general, because I wasn't making a point about automation; I was just making a point about the relative emissions of two ways of automating something. I actually agree with you that in principle automation is not a bad thing, and that economies can eventually adjust to it in the long run and even be much better off for it, although we would probably disagree on some things, since I think our current economic system has a tendency to use automation to increase inequality and to centralize power and resources in corporations and the rich, as opposed to truly benefiting everyone: those with economic power are the ones who will own the automation and use it to their advantage, while making the average person useless to them and not directly benefiting us. But that's an entirely different discussion, really.


It’s so ironic that you have this stance about the value of other people but you feel so humiliated by OP as to think they’re bullying.


I think you replied to the wrong post?


How's it more damaging to the environment if you can replace 1k people? That's 1k people staying at home instead of commuting. Sure, that causes pain if we can't figure out UBI or a way to house and feed the masses. Also, many of the biggest AI users are working to get their energy 100 percent from solar, wind, and geothermal. AI is something we've been heading towards since the dawn of man.

Hell, ancient Rome had automatons. There's no way to stop it. Ideally we merge with the AI and become something else, rather than giving it superpowers and having it decide to destroy us. I'm not sure the benevolent caregiver of humanity is something we can hope for.

It's a scary but interesting future. But I mean, we've also got major problems like cancer, global warming, etc., and AI is a killer researcher that did 300k years' worth of human research hours in a month to find tons of materials that can possibly be used by industry.

They're doing similar things with medicine, etc... There are many pros and negatives. I'm a bit of an accelerationist, rip the band-aid off kind of guy, everyone dies someday I guess, not everyone can say they were killed by a Terminator, well at least not yet lol, tongue in cheek.


> I'm a bit of an accelerationist, rip the band-aid off kind of guy, everyone dies someday I guess

Are you volunteering to go first?


> How's it more damaging to the environment if you can replace 1k people? That's 1k people staying at home instead of commuting.

Check my comment above, where I do some rough back-of-the-napkin calculations around this. Training GPT-4, for example, produced around 6 billion times the carbon emissions a human emits in total in a year, which probably includes commuting, so unless GPT-4 removes the commute time of significantly more than 6 billion people (since it wouldn't be eliminating their emissions entirely, just their commuting emissions), it is a net loss. Also, we can eliminate commute emissions by having better public transportation and walkable/bikeable cities; we don't need to prostrate ourselves before a dementia-addled machine god to get there.


>Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

Why should you be free of accountability for the effects of your actions?


Because the effects of my actions in this case have yet to be demonstrated, let alone shown to cause harm. The author claims there is exploitative harm to labor, the environment, and maybe others. That is not at all obvious or provably true yet. As I said, I'm open to the discussion, but I can't defend myself in good faith when people claim some slam-dunk moral certitude. Again, don't use generative AI if it makes you feel bad, but there is absolutely nothing clear-cut yet about this radically brand-new technology.


>The author claims there is exploitative harm to labor [...]. That is not at all obvious or provably true yet.

Not at all obvious? These models are trained on vast amounts of content, much of it copyrighted, and basically none of it licensed.


Human artists have been training on the same content for decades and no one seemed to complain. You can argue that machines should be held to a different set of legal and ethical standards, but it's certainly not obvious.

Most factories are designed based on vast amounts of prior manual labor, so it's not like "automating a manual process based on analyzing existing methods" is new, either. Why is it okay to automate the knowledge of all those other craftsmen, but not that of painters?


Human artists are not robotically ingesting terabytes of content.


AI are not human artists, so there's no connection between your point and the discussion.


You are just making shit up to suit your narrative at this point.


So AI are human artists?


You could just as easily claim that since AI are not human artists that copyright does not apply to them.


It applies to whoever uses them as a tool. If you say copyright doesn't apply to a photocopier because it isn't human that doesn't mean it suddenly doesn't apply to you. It's just a bad argument.


Correct, not at all obvious. The obvious effect of generating an image of a dog on the moon is that you now have an image of a dog on the moon. If you showed it to 100 artists, some percentage of them might recognize it's AI, but none of them would claim it as their art and ultimately none would be harmed. The harm is non-obvious.


The flip side of that coin is that brazen "ingenuity" with complete disregard for the consequences is just as bad as blindly declaring all AI bad.

We need people like the person writing this article so that the starry-eyed people who are too excited about AI and are pushing it into everything are kept in check.


^^ THIS ^^

The middle road I've taken is that I use various consumer AI tools much the way I used the Macintosh or the Atari ST with MIDI when they showed up while I was in music school, as tools that may be used as augmentative technology to produce broader and deeper artistic output with different effort.

There's something mystical and magical about relinquishing complete control by taking a declarative approach to tools that require all manner of technique and tomfoolery to achieve transcendent results.

The jump from literate programming to metaprogramming to what we have in the autonomous spectrum is fascinating and worth the investment in time, assuming the output is artistic, creative, and philosophical.

AI is not free, but the price is being paid by creators trying to create safe technology usable by anyone of any age.

Given the similarity to selling contraband, these AI tools need far more than just conditional guard rails to keep the kids out of the dark web... More like a surgeon general's warning with teeth.

Bard and Bing should be treated as if they were the Therac-25, because in the long run we may realize that, like social media, the outcome is worse.


Please don't do “^^ this ^^”, comments are reordered here.


Thanks for the reminder. I won't make that mistake again. I'm guessing the spatial affordance, for lack of a better phrase, of "^^this^^" arose with ephemeral comments on IRC, but it is clearly dependent on the message list being static, which is not true here; hence, the technique is out of place here. Good to know. Thanks again.


> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

Can you please give me access to your private repositories? I'd like to see if there's anything useful there for me to sell. You shouldn't say no; at least I ask politely and use the magic words. It can only benefit humanity, right?

I'm not against crowdsourcing LLM models, but copyright is copyright. I say that as someone who pirates heavily, but I'm not a hypocrite about what I do.


There's a version of the future where AI actually takes larger and larger chunks of real work while humans move towards spending more and more of their time and energy on culture war activities.


ALL technology can be weaponized, and what you are sleep-walking into is an era where AI is easily weaponized against not just nation states or groups, but the individual.

Either have this conversation now, or face the consequences when weaponized AI is so prevalent you will have to dig a hole in the ocean to escape it...


Your name is "stolenmerch"... I wonder if that colors your perspective at all.


Are We the Baddies?


> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

You, personally, likely are not (apart from electricity use but that's iffy.) But the technology you want to use could not exist, and cannot continue to be improved, without those two things. That's not unclear in the slightest, that's just fact.

> I'm open to that conversation and debate, but diatribes like this make it far too black-and-white with "good" people and "bad" people.

I get that any person's natural response to feeling attacked is to defend oneself. That's as natural as natural gets. But if shit tons of people are drawing the same line in the sand, no matter how ridiculous you might think it is, no matter how attacked you might feel, at some point surely it's worth at least double-checking that they don't actually have a point?

If I absolutely steel-man all the pro-AI arguments I have seen, it is, at the very best:

- Using shit tons of content as training data, be it written, visual, or audio/video, for a purpose it was not granted for by its creators

- Reliant on labor in the developing world that is paid nearly nothing to categorize and filter reams upon reams of data, some of which is the unprocessed bile of some of the worst corners of the Internet imaginable

- Explicitly being created to displace other laborers in the developing and developed world for the financial advantage of people who are already rich

That is, at best, a socially corrosive if extremely cool technology. It stands to benefit people who already benefit everywhere, at the direct and measurable cost of people who are already being exploited.

I don't think you're a bad person for building whatever AI thing you are, for what it's worth. I think you're a person who probably sees cool new shit and wants to play with it, and who doesn't? That's how most of us got into this space. But as empathetic as I am to that, tons of people alongside you who are also championing this technology know exactly what they are doing, they know exactly who they are screwing over in the process, and they have said, to those people's faces, that they don't give a shit. That they will burn their ability to earn a living to the ground, to make themselves rich.

So if you're prepared to stand with them and join them in their quest to do just that, then I don't think anyone is obligated to assuage your feelings about it.


Your "steelman" is embarrassingly bad. Why play devil's advocate if you're going to do such a bad job of it? Here's an alternative:

- As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.

- Provide comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide. One piece of training data for a model to filter harmful content can prevent hundreds/thousands of people from being exposed to similar harmful content in the future.

- Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.

- Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works. For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.

- Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.


> As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples.

“A lot” is doing very heavy lifting here. The number of examples a human artist needs to learn something is negligible in comparison to the humongous amounts of data sucked up by AI training.


> As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.

I've seen this analogy parroted everywhere and it's garbage. Show me a human being that, in an afternoon, can study the art of Rembrandt and from that experience, paint plausibly Rembrandt style paintings in a few minutes each, and I'll swear by AI for the rest of my life.

Absolute bunk.

> Provide comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide.

... who do you think the content moderators are? It's the same people being paid pittance wages to expose themselves to images of incredible violence, child abuse, non-consensual pornography, etc. etc. etc.

No person should have to look at that to earn a GOOD living, let alone a shit one.

> One piece of training data for a model to filter harmful content can prevent hundreds/thousands of people from being exposed to similar harmful content in the future.

Yeah this is the exact nonsense that is spouted every time you criticize this shit. "Oh all we need to do is absolutely obliterate entire swathes of humanity first, and theeeeeen..." with absolutely zero accounting for the job that has to be done first. And again, I don't see any AI scientists stepping up to page through 6,000 jpegs, some of which depict unspeakable things being done to children, oh no. They find people to do that for them, because they know exactly how unbelievably horrible it is and don't want themselves being exposed to it.

If it's so damn important, why don't YOU do it? If you're going to light someone's humanity on fire to further what you deem to be progress for our species, why not at least have the guts to make it your OWN humanity?

> Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.

And where are those people going? Who's paying them after this? Or are you going to suggest they attend a weekend Learn-to-Code camp too? And who's paying their wages in the middle of that transition, when the skills they have become unmarketable? Who's paying for their retraining? Or are we just consigning entire professions worth of people to the poorhouses now without so much as a thought?

> Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works.

Derivative works. No matter how much you want to hype this up, AI is not creative. It just isn't. It gives you a rounded mean of previous creations that it has been shown, nothing more. AI will never invent something, in a thousand years it will not. This is why people call AI art soulless.

> For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.

Have you seen those games? They're shit. They're lowest common denominator garbage designed to get hyperactive kids on iPads to badger their parents into spending money.

> Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.

So like, this just straight up means you know damn well people are going to die from this. They will be displaced, their labor made worthless, and they will perish. That's just like... what you just said there, because otherwise the statement "reduces carbon emissions" makes no sense: if someone gets fired and gets a new job, their carbon emissions do not necessarily go down, and they certainly aren't eliminated.


> Show me a human being that, in an afternoon, can study the art of Rembrandt and from that experience, paint plausibly Rembrandt style paintings in a few minutes each, and I'll swear by AI for the rest of my life.

So it's okay to learn, but only if you do it very slowly? I surely don't need to point you to the existence of forgers - you know a human can study the art of Rembrandt and paint plausibly Rembrandt style paintings.

> are we just consigning entire professions worth of people to the poorhouses now without so much as a thought?

We have been doing that since the dawn of history - what makes this any different from cars obsoleting the horse drawn carriage? Computers have been automating people's jobs for decades - should we ban programming writ large?

Where, exactly, do you feel the line ought to be drawn?


> So it's okay to learn, but only if you do it very slowly?

No, it's a fundamentally different process with different results. An artist learns from previous artists to express things they themselves want to express. An AI digests art to become a viable(ish) tool for people who want to express themselves, as long as that expression resides somewhere in the weighted averages of the art the model has digested. Two fundamentally different things, apples and oranges, and also not without its own set of limitations. Despite the rhetoric around this stuff that anyone can create anything, that's just not true: you can only create things for which you can find a suitable model, itself trained on a LOT of material similar to what you want to create. Effectively automated ultra-fine scrapbooking.

Honestly, if creativity is your thing, even if you find creating difficult for whatever accessibility reason you feel like pretending you care about, you will find AI more frustrating than anything, as the bounds of your creativity become the model itself and the safeguards whatever provider has decided are important to put in place. You've just exchanged one set of limitations you probably can't control for another set you definitely can't control.

> I surely don't need to point you to the existence of forgers - you know a human can study the art of Rembrandt and paint plausibly Rembrandt style paintings

Yes, and those are worthless once found, just like AI art. And again, you've sidestepped the scale: Adobe Firefly can bash out 3 solid-resolution images in roughly 2 minutes. No human can even dream of getting close to creating Rembrandt forgeries at that rate.

> We have been doing that since the dawn of history - what makes this any different from cars obsoleting the horse drawn carriage?

Because cars cost a fortune when new and were toys for the wealthy, before Henry Ford came along some three decades later to fix that. And then the former farriers had time to retrain for new work. Not to mention, carriage builders were still employed through many decades of the rise of cars, because originally buying a "car" meant you got a chassis, suspension, engine, and the essentials, which you would then take to a coachbuilder to have a "skin," if you will, built around it. Hence the term "coachwork."

> Computers have been automating people's jobs for decades - should we ban programming writ large?

Is this the "debate" you were saying you were open to? Hyperbolic statements with zero substance? I can see why few want to have it with you.

> Where, exactly, do you feel the line ought to be drawn?

Consent. Tons of people's creative output was used to build machines to replace them, without their consent and, more often than not, explicitly against their wishes, under the guise of a "research project" rather than a monetized tech product. Once again a tech company bungles into the public square, exploits it for money, then makes us live with the consequences. I frankly think that question ought to be reversed: what makes OpenAI entitled to all that content, for a purpose it was never meant for, with zero permission or consent on the part of its creators?

I'm not opposed to ML as a concept. It has its uses, even for generating images/writing/what have you. But these models as they exist now are poisoned with reams of unethically sourced material. If any of these orgs gave even the slightest shit about ethics, they'd dump them and retrain with only material from those who consented to have it used that way. Simple as.


May I just say, as a third-party simply reading this back and forth from the outside, that the tone of your writing and the implied attitude with which you are engaging in "debate", reads as very aggressive and uninterested in actually having a sincere discussion. To me at least.

I imagine you probably won't like this comment, but perhaps you might use it as an opportunity for reflection and self-awareness. If your interest is actually to potentially change someone's mind, and not just "be right", you might consider approaching it in a different way so that your tone doesn't get in the way of the substance of arguments you wish to make.

Just a suggestion. Take care.


You aren't wrong in the slightest, apart from having gotten the implication that I'm here for a debate. I'm not. I've been having this debate since the Stable Diffusion blowup around the middle of 2023. I've read these points restated by countless pro-AI people and refuted them probably dozens of times at this point, here and elsewhere, always ending in a similar deadlock where they just stop replying, either because they're sick of me, or I've "won" for whatever that means in the context of online discussion.

Nevertheless, I'm always open to being persuaded by actual arguments, and I have been on numerous issues, but I have yet to see any convincing refutations of the points I've outlined here, primarily but not limited to:

- The unethical sourcing of training data

- The exploitation of less-privileged workers in managing it

- The harm being done, and the harm that will be done, to various professions if these models become standard

And not mentioned in this thread:

- These various firms' superposition between potentially profitable business and "research initiative," depending on whether they're trying to get investment or abuse the public square, respectively

- The exploitative/disgusting/disinformative things these AIs are being used to produce, fed into a society already saturated with false information and faked imagery

But these discussions usually dead end, like I said, when the other person stops answering or invokes the "well if we don't build it someone will" which is also unpersuasive.

Relating specifically to your point about wanting to change someone's mind: in my first comment I do feel I put out an olive branch, with empathy for being excited about a new thing. But when the new thing in question is so saturated from beginning to end in questionable ethics... I'm sorry, there's only so much empathy I can extend. If you (not you specifically, but the theoretical person) are the kind of person ready to associate with this technology at this stage, when its foibles and highly dubious origins are so well known, then I'm not overly interested in assuaging your feelings. This person came into this thread bemoaning the fact that so many people are calling them out on this and they're sick of it, and like, there's a great way to stop that happening: stop using the damn technology.

I will always extend empathy, but if your position is whining about people rightfully (IMO) pointing out that you are using unethical tech, and you wish they'd stop? Like, sorry not sorry, man, maybe you shouldn't use it then. Then you get yelled at less and keep a clear conscience. Win/win.

But I do appreciate the reply all the same, to be clear. You aren't wrong. I've just had this argument too much, but also don't feel I can really stop.


> always ending in a similar deadlock where they just stop replying, either because they're sick of me, or I've "won"

My general experience on Hacker News is that threads rarely go beyond one or two replies, so I'll often tap out on the assumption that the other party isn't likely to actually read/respond to any thread more than a couple days old. As far as I know, there's not any indicator when someone replies to your comments, unless you go and check manually?

If I'm just using the site wrong, do please let me know!

Otherwise, I'd suggest you might want to update from "sick of me" to "never saw the reply due to the format of the site". For what it's worth, it took me a while to adjust to that.



> An artist learns from previous artists to express things they themselves want to express.

Ahh yes, that well known human impulse to produce stock artwork for newspapers and to illustrate corporate brochures. I can't imagine what the world would be like if we let cold, soulless processes design our corporate brochures!

I suppose this argument works for Art(TM), but why is it relevant to the soulless, mass produced art? Should it be okay to discard all the artists who merely fill in interstitial frames of an animation? Is "human expression" actually relevant to that?

> And again, you've sidestepped the scale

Pick one: either this is about speed or it isn't. Would you actually be fine with AI art if it was just slower? If not, then stop bringing up distractions like this. If this really is just about scale, it's a very different conversation.

> Because cars cost a fortune when new and were toys for the wealthy, before Henry Ford came along some three decades later to fix that.

Sorry, when did Rembrandt paintings stop being toys for the wealthy?

> And then, the former farriers had time to retrain for new work.

So, again, it's just that progress is moving too fast? If we just slow things down a bit and give the artists time to flee, that makes it okay?

> Hyperbolic statements with zero substance?

We haven't talked before, so I didn't know whether you were someone who was okay with automation putting people out of work. That's hardly zero substance. I'll assume this means you're fine with it, since you don't think it's even worth discussing.

> Consent

Okay, so, bottom line: you're saying that if they spend a few billion to license all that art, and proceed to completely replace human artists with a vastly superior product, you're OK with that outcome? (I'm not saying this is inconsistent, just trying to understand your stance - previously you were talking about the importance of artists expressing themselves and the speed at which AI can do things - what's actually important, here?)


> Ahh yes, that well known human impulse to produce stock artwork for newspapers and to illustrate corporate brochures. I can't imagine what the world would be like if we let cold, soulless processes design our corporate brochures!

As someone who works on the side in creative endeavors, I assure you that even the work I would prefer not to do carries with it my principles as a designer and a small piece of my humanity. Every last thing, even the most aggressively bland and soulless, contains an enigma of tiny choices built on years of making things, choices most people will never notice. Or at least, I always thought they didn't notice, until you start putting even bland corporate art next to AI-generated garbage. Then they do.

From the creative perspective, that's what I think lends it that... smoothed-over, generic vibe. An artist's "voice," even in something like graphic design, even in an oppressive and highly corporatized environment, is best characterized as a thousand tiny choices that individually don't impact the final product much, but together give a given work its "humanity" that no machine can touch. When I, for example, design an interface: why do I consistently use similar dimensions for similar components and spacings? I honestly couldn't tell you. To me, it "looks nice," a word choice that undersells decades in my industry but is nonetheless the most fitting. All of those choices are subject to change by committee later on, to be sure, but even so, they rarely are.

AI takes these thousands of tiny choices that contribute to this feeling and replaces them with a rounded mean of previous choices made by innumerable artists with different voices. It takes the "voice," as it were, and replaces it with a cacophony of conflicting ones, liable to change its tone with each pixel. This, IMO, is its core failing.

> I suppose this argument works for Art(TM), but why is it relevant to the soulless, mass produced art? Should it be okay to discard all the artists who merely fill in interstitial frames of an animation? Is "human expression" actually relevant to that?

For the love of everything, yes. And you ask "why is it relevant for soulless mass-produced art," but we already know why: Disney spent billions of dollars showing us, with the MCU, what happens when the content mill becomes utterly and completely detached from the art it was meant to serve. The newer movies just... look like shit, and not because of AI (probably?) but because all the movies are made down to a formula, down to a process, no vision, no plan, just an endless remixing of previous ideas, no time for artists to put in actual work, just rushing from task to task, frame to frame, desperately trying to crank it the hell out before their studios go bust.

People rag on generic, popular art but even popular art is art, and if you take away the humans (or as Disney did, beat them into such submission they can no longer be human) people definitely notice.

> Pick one: either this is about speed or it isn't. Would you actually be fine with AI art if it was just slower? If not, then stop bringing up distractions like this. If this really is just about scale, it's a very different conversation.

It's relevant because you're bringing up industrialized mechanization as a comparison, and it's an ill-fitting one. The printing press, MAYBE, could be an example at the scale we're talking about, and the main difference there is that mass-produced books basically didn't exist and literacy among common people was substantially rarer; ergo, the number of scribes whose skills were displaced was much lower.

But the vast majority of "technology replaces workers" arguments can be (and you have invoked this already) compared to the industrial revolution, and again, the difference is scale. They didn't build a horseshoe-making machine by analyzing 50,000 horseshoes made by 800 craftsmen, one that could then produce 5,000 of the things per day.

And sure, those horseshoes all suck ass, they're deformed, they don't work well, and the horses are visibly uncomfortable wearing them, but the corporate interests running everything don't care, and so shit-tons of craftsmen lose paying work, horses are miserable, and everything keeps on trucking. That's what I see, all around me, all the time these days.

> Sorry, when did Rembrandt paintings stop being toys for the wealthy?

I mean, the art market being a tax-dodge and money-laundering scheme is a whole other can of worms that we really shouldn't try to open here.

> So, again, it's just that progress is moving too fast? If we just slow things down a bit and give the artists time to flee, that makes it okay?

I'd be substantially more pleased with a society that cared for the people it's actively working to displace, yeah. I don't think any artist out there is dying to make the next Charmin ad, and to your earlier point about soulless corporate art, yeah, I'd imagine everyone would have a lot more fun making anything that isn't that. The problem is we have millions of people who've gone to school, invested money, borrowed money, and constructed a set of skills not easily transferable, who are about to be out of work. And in our society, being out of work can cost you everything from the place where you live, to the doctors who heal you, to the food that nourishes you. I don't give a damn about maintaining the human affect in corporate art, and I doubt anyone does, apart from the fact that those humans still need to eat, and most of them are barely managing it as it stands now.

> We haven't talked before, so I didn't know whether you were someone who was okay with automation putting people out of work. That's hardly zero substance. I'll assume this means you're fine with it, since you don't think it's even worth discussing.

On the whole, less work is A-okay by me. Sounds great! The problem is that we as a larger collective of workers never see that benefit: instead of less work, we all just produce more shit, having our 40-hour week stuffed with ever more tasks, ideas, and demands from management as they add more automation, cut more jobs, and push the remaining people ever harder.

We were on the cusp of a 30-hour workweek in the 1970s and now? Now we have more automation than ever but simultaneously work harder and produce more shit no one needs than we ever have.

> Okay, so, bottom line: you're saying that if they spend a few billion to license all that art, and proceed to completely replace human artists with a vastly superior product, you're OK with that outcome? (I'm not saying this is inconsistent, just trying to understand your stance - previously you were talking about the importance of artists expressing themselves and the speed at which AI can do things - what's actually important, here?)

What's important is I want people to survive this. I'm disillusioned as hell with our society's ongoing trajectory of continuously trying to have more, to do more, always more, always grow, always produce more, always sell more. To borrow Greta's immortal words: "Fantasies of infinite growth." I see the AI revolution as yet another instance where those who have it all will have yet more, and those who do not will be ground down even harder than they already are. It's a PERFECT solution for corporations: the ability to produce more slop, more shit, infinitely more, as much as people can possibly consume and then some, for even less cost, and everyone currently working in the system is now subject to even more layoffs so the executives can buy an even bigger yacht.

If you don't see how this stuff is a problem I don't think I can help you.


> we as a larger collective of workers never see that benefit

Child mortality has dropped from 50% to 1%.

We had a worldwide plague, and far less than 10% of the population died.

We have computers. We have the internet. We have an infinite wealth of media.

We fixed the hole in the ozone.

We eliminated lead poisoning.

We are constantly making progress against world poverty.

We got rid of kings and monarchs and tyrants.

War is so rare, we don't even bother with the draft despite the army struggling massively with recruitment.

You simply CANNOT look back on history and think we don't have it better


> Child mortality has dropped from 50% to 1%.

And maternal mortality is creeping upward here in the States, thanks to the cost of healthcare and Republicans' ongoing efforts to control women's bodies.

> We had a worldwide plague, and far less than 10% of the population died.

An inordinate share of those deaths was concentrated in America, because we've industrialized and commercialized political radicalization for profit.

> We have computers. We have the internet. We have an infinite wealth of media.

We have devices in our pockets that spy on us (also powered by AI), about five websites, and infinite derivative shit.

> We fixed the hole in the ozone.

That one I'll give you. Though the biosphere is still collapsing, we did fix the ozone hole and that isn't nothing.

> We eliminated lead poisoning.

Eeehhhhhh.... mostly? Plenty of countries still use leaded gasoline, and tons of lower-income people are still living in homes with both lead and asbestos.

> We are constantly making progress against world poverty.

In the developing world, maybe, but that comes with a LOT of caveats about what kinds of jobs are being created and how well those workers are being paid. China has done incredible work lifting its population out of poverty, but not without costs that the CCP is only now starting to see the problematic side of. India is a similar story. And worth noting, both of those success stories, if you decide to call them that, rest heavily on some creative accounting and massive investment from the West. I don't think that's a bad thing, but I'm also guessing said investors expect to be paid back, and it's finite and unsustainable.

Meanwhile, in the developed world, workers are getting fucked harder than ever. Rent is now what, two-thirds of most people's income? People out here are working three jobs and still can't make a decent living.

> We got rid of kings and monarchs and tyrants.

Are we living in different worlds here? We have an entire wave of hard-right strongmen making big splashes right now. Trump was far from an isolated thing. No, they're not dictators... YET... but they don't usually start that way, if you study your history.

> War is so rare, we don't even bother with the draft despite the army struggling massively with recruitment.

Uh, I think some Gazans, Ukrainians, Iraqis, and Rohingya might take issue with that statement?

> You simply CANNOT look back on history and think we don't have it better

I mean yeah, I'm not one of those lunatics who think we were better shitting in caves. But that doesn't mean our society as it exists is not rife with problems, most of which have a singular cause: the assholes with all of the money, using that money to make the world worse, to make more money.

Hence being pissed about AI.


All of your objections are nitpicking about small, localized setbacks compared to massive global gains. As far as I can tell, we agree that the world is consistently getting better, and that these gains all come from technological progress. As far as I can tell, we agree that while the world isn't perfect, and some technologies do more harm than good, "technological progress" is a net positive.

I don't think you want to go back to a 50% child mortality rate, even if it somehow convinced Republicans to drop their crusade against abortions. I don't think you prefer World War 2 to the Ukraine war. I certainly don't think you want to reinstate monarchy and fascism across Europe.

If I'm wrong, then go ahead and tell me what decade you want to rewind to - what progress are you willing to give up?

If I'm not wrong, then... how does this at all lead to "hence being pissed about AI"? What's so uniquely evil about AI that we should give up the gains there, and assume it's a net evil in the long term, compared to everything else we've done?


> All of your objections are nitpicking about small, localized setbacks

Small wars are still wars. No, we don't have any global conflicts with clearly drawn sides like the Axis and Allies of World War II, true enough. But that's not because war is done or distasteful: it's because global hegemonic capitalism now rules all of those societies and makes certain such wars don't happen between the countries that matter. Which is why we had the "police actions" in Vietnam and Korea, why we had Operation Iraqi Freedom, why we nearly went to war in Central America over the price of bananas, etc. The colonial powers have essentially unionized and now use the bludgeon of America's military might to keep poorer, indebted nations in line. If those nations fail to capitulate, a reason will be manufactured to unseat the power in that place, more often than not by force, more often than not with heavy civilian casualties and economic destruction, the rebuilding of which will in turn be financed by the West so the poorer countries never have a ghost of a chance in hell of standing on their own two feet and making their own fucking decisions about their resources and people.

That is not due to technical progress. Technical progress is, if anything, jeopardizing that balance, because information about how absolutely fucked everyone in the Global South is at basically all times is now much harder to contain.

> As far as I can tell, we agree that while the world isn't perfect, and some technologies do more harm than good, "technological progress" is a net positive.

I would absolutely cosign that, if said technological progress weren't so extremely concentrated in the wealthy nations of this planet, while the others make do scrapping our old ships in tennis shoes and smoking the cigarettes we export to them.

> I don't think you want to go back to a 50% child mortality rate, even if it somehow convinced Republicans to drop their crusade against abortions.

No I want Republicans to govern on conservative principles, not mindless culture war bullshit. And I'd also like the Democrats to stop governing on conservative principles because their opposition in the states is a toddler eating glue and screaming about pizza places on the floor of the fucking Senate.

> I don't think you prefer World War 2 to the Ukraine war.

All war is terrible, the scale is irrelevant.

> I certainly don't think you want to reinstate monarchy and fascism across Europe.

A lot of fascist-leaning voters in Europe might do it anyway though.

> If I'm wrong, then go ahead and tell me what decade you want to rewind to - what progress are you willing to give up?

I want the progress. I just don't want it hoarded by a particular society on our planet. We ALL deserve progress. We ALL deserve to earn a living commensurate with our skills, and we ALL deserve to be supported, housed, and fed, and we already have the resources to do the vast, vast majority of it. We simply lack the will to confront larger issues in how those resources are organized and distributed, and the fundamental inequities that we reinforce every single day. Largely, because a ton of people currently have a lot more than they need, and a small amount of people have a downright unethical amount, and the latter group has tricked the former group into thinking they can join the latter group if they only work hard enough, while also robbing them blind.

> If I'm not wrong, then... how does this at all lead to "hence being pissed about AI"? What's so uniquely evil about AI that we should give up the gains there, and assume it's a net evil in the long term, compared to everything else we've done?

It's not uniquely evil at all. It's banal evil. It's the same evil that exists everywhere else: tech firms insert themselves into economies they don't understand, create something that "saves" work compared to existing solutions (usually by cutting all kinds of regulatory and human corners), sell it with VC money, crush a functioning industry underneath it, then raise prices until it's no cheaper at all (maybe even more expensive), and now half the money made from cab services goes to a rich asshole in California who has never in his life driven a cab. It's just that, over, and over, and over. That's all Silicon Valley does now.


> the scale is irrelevant.

Okay, seriously? You don't care whether 100 or 100,000,000 people die? You don't see ANY relevant differences between those two cases? It must be perfect, or else we haven't made any progress at all?

I don't think I can help you understand the world if you really can't understand the difference there


You take ONE SINGLE POINT out of that entire post just to bitch about me making perfect the enemy of good?

My point isn't that 100 people dying isn't preferable to 100 million people dying. My point is that the 100 people died for stupid, stupid, stupid reasons. Specifically, the ongoing flexes of the West over the exploited Global South.


Overall, I think you make a fairly convincing argument for all sorts of social changes - the problem is, that's not actually what you're advocating for.

> We ALL deserve progress. We ALL deserve to earn a living commensurate with our skills, and we ALL deserve to be supported, housed, and fed, and we already have the resources to do the vast, vast majority of it.

This is a great argument for UBI, or socialism, or... well, see, the problem is precisely that you never actually define anything actionable here. You've successfully identified a major problem, but your only actual proposal is "oppose AI artwork".

The problem is, "opposing one specific form of progress" doesn't actually do much at all to fix the issue. And indeed, if we had UBI or increased socialism/charity programs, then we wouldn't need to stop ANY form of progress.

And, of course, fixing the underlying issue is incredibly hard. We've tried Communism twice and proven that it's vastly more destructive. The Nordic Model seems to be doing well, but there's all sorts of questions on how it scales. And you're not actually proposing anything, so there's no room for the real, meaningful debate about those methods.


That's not a steelman. At the very best:

- All content was viewed and learned from, which is an ethical (even good) use of any content that has ever been released to the public.

- Gave jobs to 3rd world laborers.

- Benefited us, made some of us everymen more productive and able to build and create in ways that we weren't able to before.

I suspect you don't agree with all the above, but that's more like what a steelman argument should be.


This is a bad take, chief. You're not a smol bean. If someone is telling you that the technology you are using is harmful to many people and to society as a whole the least you could do is to make an argument that either those harms are not what is being claimed or that there are significant benefits that outweigh the harms. "Don't say it's bad, that makes me feel bad so we shouldn't talk about it" is both a weak and useless position.


> I'm not a fan of this hyper aggressive line-in-the-sand argumentation about fossil fuels that pushes it all precariously close to culture war shenanigans. If you don't like a new technology that is perfectly cool and your right to an opinion. Please don't position it so that if I want to use fossil fuels I have to defend myself from accusations of polluting the air and the environment.


I'm not sure what you think you did here, but juxtaposing climate change with copyright squabbles really brings out how much of a first-world-problem such squabbles really are.


Are you going to pretend that the emissions caused by the enormous usage of energy by ML training and inference are not a thing?

Also, I wouldn’t morally have a problem with AI companies violating copyright if they weren’t hypocritical about it and open-sourced their software.

Anyways, the main message of the analogy is that you can’t just wave away the moral responsibility for the consequences of your actions. It wasn’t supposed to be a comparison of severity.


You should have to defend yourself if you are going to use this unreliable, untested, irresponsible technology.

Everyone who wants to do things that completely ignore the reasonable concerns of their fellow citizens should feel some heat, at least.



