Hacker News | soultrees's comments

Here come the innovation sponges.

If this goes through, then the models the general public has access to are going to be severely neutered, while the ownership class will have a much better model that never sees the light of day due to legal risks and claims like this, thereby increasing the disparity between us all.


I made a reply rolling my eyes at this comment that got flagged (rightly so); it was unproductive and impulsive, and I admit I shouldn't have posted it.

I'm not sure how HN handles replies to flagged comments, so I'm posting the following here in the hopes it'll be seen by more fellow technical people:

In the future, if you wish to invite productive comments from your audience and not curt dismissal, consider framing your concerns as potential risks rather than the cynical expressions of fatalistic certainty so often employed by naive, greedy technologists when regulation that is firmly in the public interest threatens their paychecks.


[flagged]


I’d like a productive comment if you have it.


I'd've liked very much to give you one, but your blanket dismissal of what I consider to be a very important step for the field of AI ethics as the work of "Innovation Sponges" reveals that we fundamentally disagree about some pretty big underlying issues here.

In the future, if you wish to invite productive comments and not curt dismissal, consider framing your concerns as potential risks rather than the cynical expressions of fatalistic certainty so often employed by naive, greedy technologists when regulation that is firmly in the public interest threatens their paychecks.


Fair enough and your criticism is appreciated.


Are there any open source front ends out there? I know of AnythingLLM, but I'm hoping to plug my own home-built RAG system into a nice front end.


I just did this today using OpenAI’s function calling. I have a bunch of elements in a scene, and classifying them into various ‘buckets’ has been the challenge. The way I set it up: in the expected schema, take that top-level free text, wrap it in quotes, and make it the parent object, with elementCategory as a required string inside it and a list of all category types in its description. Then loop through and create a dynamic schema based on how you chunk your data, and add a validation step at the end to ensure ChatGPT doesn’t forget, add, or change keys. I found that if you create a numberedKeywordString for your chunk, wrap each item in quotes, and put that numberedKeywordString into the system prompt, it’s solid: the LLM catches every key and returns the associated elementCategory value.

It works quite well for dynamic classification in my purposes but I’m sure there is a better way.
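For illustration, a minimal sketch of the dynamic-schema approach described above. The category list and helper names here are hypothetical (not from any real API); only the JSON-Schema shape matches what OpenAI's function-calling `parameters` field accepts, and the actual API call is omitted:

```python
# Hypothetical categories; "elementCategory" follows the naming scheme above.
CATEGORIES = ["prop", "character", "background", "lighting"]

def build_classification_schema(element_keys):
    """Build a function-calling parameter schema dynamically per chunk:
    one required object per element key, each holding a required
    elementCategory constrained to the known category list."""
    properties = {
        key: {
            "type": "object",
            "properties": {
                "elementCategory": {
                    "type": "string",
                    "enum": CATEGORIES,
                    "description": "One of: " + ", ".join(CATEGORIES),
                }
            },
            "required": ["elementCategory"],
        }
        for key in element_keys
    }
    return {
        "type": "object",
        "properties": properties,
        "required": list(element_keys),  # the model must return every key
    }

def validate_response(element_keys, parsed):
    """Post-hoc check: reject responses where the model forgot, added,
    or renamed keys, or invented a category."""
    return set(parsed) == set(element_keys) and all(
        parsed[k].get("elementCategory") in CATEGORIES for k in parsed
    )

chunk = ['1. "old oak table"', '2. "brass lantern"']
schema = build_classification_schema(chunk)
# The numbered, quoted keyword string goes into the system prompt:
numbered_keyword_string = "\n".join(chunk)
```

The enum plus the required-keys list does most of the work; the validator catches the remaining cases where the model drifts from the schema anyway.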


Try this: https://github.com/refuel-ai/autolabel

Then the main challenge just becomes prompt design, which can sometimes be nebulous for NLP annotation.


At this point, who cares, honestly. The more ‘fake’ generated nudes are out there, the less of a novelty they become. And if everyone has the ability to generate an image of anyone naked, the value of ‘real’ nudes will go up, but it will also be good cover for people who get their nudes leaked.


I’ve wondered about a similar thing. If there was something automatically constantly generating nudes of everyone, surely the noise would desensitize people to the signal.


That’s beautiful. I love it.


Thanks a lot!


It’s crazy. Even with an adblocker you are subjected to a crazy amount of ads or hostile design. Without one, it’s just a free-for-all on your attention.


And for your CPU. Ads are the number one killer of performance.


And money. Metered connections do exist, and ads consume a lot of bandwidth which is not free.


The cognitive load is the worst part of ads for me. The number of ads that display an X to close the banner, only for that X to actually be a link to another fkn ad, or to straight up open a new tab. Not to mention the invisible divs spanning the entire page, so wherever you click it opens a link, which most of the time is either porn, betting sites, or AliExpress. This sh* should be illegal.


Yup. They're pollution. (There's gotta be a phrase for "cognitive load" + "pollution".)

I used to love tech ads. I'd inspect every ad in BYTE, Creative Computing, etc. I'd even buy Computer Shopper, a massive catalog interrupted by some articles, just for the ads.

Ads can be useful, informative, and engaging. I wouldn't mind that.


I like the term “information pollution”.


Oh, and the list goes on. Click hijacking. Redirect soup. 1024 JavaScript tags that record the same events, only they all await each other. CNAME shenanigans. Unsubscribe that really means subscribe to 1000 more. Request interception, email interception, email link rewrites in flight, and a request to a dead-DNS ad server with no timeout set holding your page hostage until you close the tab (or the browser) and visit the page again.


I don't press the X; I use uBlock's zapper feature to close unwanted elements.


I used something similar on Firefox to block unwanted divs from Facebook and Reddit but now Safari is my main browser and I haven't come across an extension that does that.


I wonder what the CO2 emissions of all these ads are?


https://www.mdpi.com/2227-7080/8/2/18

Although this study doesn't discuss CO2 emissions, it does discuss energy use and the potential impact of internet advertising.

Unfortunately, it only discusses the impact on client-side devices, the energy consumption of hardware used to serve advertisements (networking, servers) is left to a future study.

>Strikingly, uBlock Origin has the potential to save the average global Internet user more than 100 h annually.

>So, for example, the 1.35 × 10^10 kWh saved globally for using uBlock Origin is equivalent to more than 1.0% of the electricity generated per year from coal in the United States, which is responsible for the premature deaths of about 52,000 Americans every year from air pollution [43,44].

>Globally, the results with the most efficient open source ad blocker tested, uBlock Origin, would be even more substantial: ad blocking would save consumers more than $1.8 billion/year.


Wow, that is a mind-blowing figure. $1.8 billion/year and 100 h annually. Would've never expected the figure to be that large.


So many people so concerned about CO2 emissions from computing devices... all the way down to Microsoft setting the default timeout for Bluetooth on Windows 11 to ONE MINUTE. It took me 2 weeks of hair pulling and updating everything before I realized why my mouse and keyboard would stop working, because it NEVER occurred to me in my WILDEST dreams that an OS developer would be given the insane task of creating a timeout like this, and writing a UI to control it. Then I realized that some middle manager in the guts of Microsoft probably got a bonus by being able to tell his management that "Microsoft" was now saving a collective million pennies a year of energy costs by crippling this basic feature. Well done, guys.

Where's the outrage from the colossal carbon footprint of the overarching, advertising-based economy? Does anyone have any idea what the electrical costs or carbon footprint per dollar of ad revenue is? It surely must be one of the lowest returns per environmental impact in the entire spectrum of capitalism. Sure, complain about cryptocurrency "setting the earth on fire," but Google gets a free pass for much the same thing to make their trillions?


Or, for that matter, any incremental call Microsoft is making to any GPT-based model. The power consumption from these inferences is immense, and they are appearing everywhere in their interfaces.


True, I wonder how much power the inefficient "Windows Update" uses every month, too.

I can hear the fans on my Windows machine spin up, and I know it's Windows Update time.


A lot, given the sheer number of network requests a single ad can spawn.


"A lot", compared to what? A single household? Probably. As a percentage of overall global electricity consumption? Probably negligible.

Some math with conservative estimates:

* power consumption of mobile processor: 5W

* daily ad use: 1 hour (assume it's 100% cpu for the entire time)

* smartphone users: 7.8 billion (world population)

Total yearly electricity consumption given the above: ≈14 TWh (5 W × 1 h/day × 365 days × 7.8 billion users ≈ 1.4 × 10^10 kWh)

World electricity consumption: 23,921 TWh in 2019
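As a sanity check, the stated inputs can be run through directly; they come out to roughly 14 TWh/year, which is still a small fraction of world electricity consumption:

```python
# Back-of-the-envelope check of the estimate above.
power_w = 5          # mobile processor at 100% load, watts
hours_per_day = 1    # assumed daily ad-rendering time
users = 7.8e9        # smartphone users (world population, as assumed above)

wh_per_year = power_w * hours_per_day * 365 * users
twh_per_year = wh_per_year / 1e12          # 1 TWh = 1e12 Wh
share = twh_per_year / 23_921              # world consumption, TWh (2019)

print(f"{twh_per_year:.1f} TWh/yr, {share:.3%} of world electricity")
```

Roughly 14 TWh/yr, i.e. about 0.06% of global generation, under these deliberately conservative client-side-only assumptions.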


You'd have to include the servers doing the auctions, sentiment analysis, profiling,... into it as well. And then the whole ad industry, every email, every meeting, ...

I don't think you can actually put a number on this waste.


>You'd have to include the servers doing the auctions, sentiment analysis, profiling,... into it as well.

Feel free to put up an estimate then.

>I don't think you can actually put a number on this waste.

Well that just makes it clear you're only interested in your feelings/vibes as opposed to anything objective.


You have to include the fact that the very clear objective of advertising is to make you consume more, thus producing more pollution than if there were no ads.

So even if advertising were technically neutral (which is clearly not the case), it would still be an ecological cataclysm.

One could even argue that the whole climate emergency is created by our advertise/consume culture.


That, plus the industry exists to support a negative-sum game. Past some degree of market, channel, or target saturation, the only thing your ad spend does is cancel out the ad spend of your competitors. That, and it fuels the growth of an industry that's happy to enjoy the infinite money printer.


And the services needed to consume said money printer. GCP wouldn’t exist. To be fair, a lot of it runs on AWS so AWS’ business would take a serious hit as well.


careful now, you can't make cents without advertising dollars. (pun)


>Well that just makes it clear you're only interested in your feelings/vibes as opposed to anything objective.

Pulling an "estimate" out of your ass isn't exactly any more objective than admitting that attempting to estimate it is futile.

"I know that I know nothing" and all that.


> I don't think you can actually put a number on this waste.

To give a really simple upper limit: 36.8 billion tonnes per year. Because that's the current total global CO2 emissions per year. And we can be pretty sure that ads are less than that, or at least not more.

I agree with your assessment of the grandparent comment.


Hostile design, maybe? I use Pi-hole and uBlock, and it's really rare that I see an ad on any of my devices. What am I doing wrong?


This is great. Know of any examples using these rules in practice?


I find this notion interesting. What makes you think AI will automatically kill humans?


I don't think it's certain the AI kills everyone, but it's not certainly impossible either. It depends on how the AI works and what it "wants" for some value of "wants".

Humans have not been particularly kind to the species less intelligent than us. Why would we anticipate being well treated by an entity more intelligent than us?

Even if we're not that relevant to a super intelligence, creating one forfeits human control over the Earth and known universe to the machines. Right now, in a century or two or three or whatever - we might be building Dyson Spheres and colonizing the galaxy. In an alternate and plausible timeline the machines are doing that and we are not.


And moreover, what makes people even think that a desire to commit mass murder is an innate characteristic of an 'intelligent' being, that increases the more 'intelligent' it becomes? (If they believe themselves to be 'intelligent', do they believe they have a greater desire to commit mass murder?)


This, “desire”, “murder”, “belief” is human thinking about what humans would do. It may not apply to a superhuman machine intelligence.


If it isn't aligned with humans and doesn't understand exactly what humans want it to do, then you're just made of matter that it doesn't know not to repurpose. It's not that it makes a deliberate decision to "kill" you, it's that understanding "kill" to a sufficient degree to not do it as a side effect is halfway to alignment already.

When we plow a field, we don't check for mouse burrows first. When we cut down a tree for lumber, we don't check for ants in the way of the chainsaw.

Preempting the obvious response: if your thought is "we don't let AI do those things directly, we just ask it for information", consider that for a sufficiently powerful and unaligned AI, you don't have to let an AI out of the box, it can let itself out. (And that's leaving aside that we hand some AIs Internet access.)


From what I’ve seen, there is definitely value in knowing how to get what you want from these machines, and especially how to not get what you don’t want. Every company using AI will need someone in charge of all the prompts the company uses, especially as they get more granular and quantitative: keeping a backlog of previous prompts used to write any internal or external content, and being the head on the chopping block when a prompt gets the company sued, etc. The old way of prompt engineering has now evolved into something more scientific, testing multiple times across multiple models, so I’d say the job is here to stay.
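The backlog-of-prompts workflow described above can be sketched minimally as an append-only registry. Every name here (`PromptRecord`, `PromptRegistry`, the prompt texts) is hypothetical, just to show the versioning idea:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """One versioned entry in a company prompt backlog."""
    name: str
    text: str
    version: int
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class PromptRegistry:
    """Append-only log: old versions are never overwritten, so any
    shipped output can be traced back to the exact prompt that wrote it."""

    def __init__(self):
        self._log = []

    def add(self, name, text):
        # Version number = how many prior entries share this name, plus one.
        version = 1 + sum(1 for r in self._log if r.name == name)
        record = PromptRecord(name, text, version)
        self._log.append(record)
        return record

    def latest(self, name):
        matches = [r for r in self._log if r.name == name]
        return matches[-1] if matches else None

    def history(self, name):
        return [r for r in self._log if r.name == name]

registry = PromptRegistry()
registry.add("support-reply", "You are a polite support agent...")
registry.add("support-reply", "You are a concise support agent...")
```

The same history can then be replayed against multiple models for the kind of repeated cross-model testing the comment describes.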


I think that is true. And I also think the future will be automated by the algorithm, but humans will still play a significant part in deciding the outputs. Humans cannot be 100% replaced unless big corps create something truly AGI.


You probably hold yourself to the same standard on separating the good from the bad, but do you allow that same freedom to everyone you know? It’s like finding out your family doctor cheated on his wife, after he died; it says something about his character but doesn’t diminish his accomplishments.

The next question is if we are able to afford everyone the freedom to separate actions from character then would we be better off?

