Hacker News new | past | comments | ask | show | jobs | submit login
Stackoverflow is investing into baking GenAI (stackoverflow.co)
87 points by rounakdatta on June 20, 2023 | hide | past | favorite | 103 comments



I feel like GenAI has become the next "blockchain". Wall Street (or someone else!) is setting some weird expectation for every company to somehow say "we're doing something with GenAI." Doesn't matter what it is they're doing with it, they just have to say the magic word in a press release or blog post.

Apparently, "we're using AI" is a better story than "we're making great products, regardless of the underlying tech."


In my opinion it's significantly more meaningful than blockchain hype. Blockchain doesn't add any value whatsoever for 99% of the use-cases that companies were advertising it for, but AI can immediately and cheaply create output that previously would have been expensive, time-consuming, or impossible.


To me it looks more like the dotcom era than blockchain. Obviously there's a lot of hype and much of it will be vaporware, but I think there's also some really good stuff that's here to stay. Might be hard to tell which is which if it turns into a bubble until it's over thought...


Yeah, I think dotcom era is a much better comparison. There is a lot of actual innovation happening and real products being built, but there's also a lot of companies slapping .com onto their names and pretending they're part of the new wave, and a lot of exciting new companies which don't actually have a business model. I think a lot of the current AI hype is wildly overblown, but there's zero chance that this all goes nowhere and had no lasting effect.


Yeah that’s a good baseline. Though I think it’s pessimistic; quite frankly AI is going to be far more transformative than the internet.

Anyone who thinks this is like blockchain should spend 10 hours coding with GPT-4, and then extrapolate those capabilities out a few years. It’s already adding a huge amount of value and productivity gain for me, though you need to know what to use it for.


Yeah this is more correct. There are already useful implementations of generative AI:

* in/out painting with photo editing software

* GitHub Copilot

* voice synthesis

* ChatGPT (not great for what a lot of people try to use it for, but I find it useful for brainstorming or as a conversational programming Copilot)


Someone commented here somewhere that ChatGPT closed the gap between native and non native speakers.

As a good non native German speaker, the ability to send a piece of text to a good LLM for review and correction is incredible. I do not need my wife to review the text I send to the tax authorities, the LLM is doing it for me (and I am good enough to critically assess the quality of the of the LLM work).


I like this comparison. A ton of dotcom-era stuff was "the same tech, applied to different industries" - that's actually common for a lot of cycles, by nature. And there are going to be winners and losers, but it's likely you have a lot of application-specific winners that emerge (vs JUST OpenAI/MS/whoever) as well as a fleet of also-rans.

This also doesn't at all require ChatGPT to be the first step on a road to self-aware sentient AI or anything else like that - the tools being built today will already disrupt a lot of things enough to allow for new winners and losers.


Yes, and also StackOverflow is massively impacted by ChatGPT. Many developers are already using the latter in place of SO. It makes sense for them to try and keep those developers on board.


This style of AI doesn't generate value, it extracts value. You can deliver a 90% as good product for 10% of the cost. That's great for the person arbitraging the 90% savings, but worse for anyone actually using the service. In this regard it functions primarily as a wealth transfer mechanism, so it ends up like blockchains anyway.


If you want to be reductive, you can say the same about delivering knowledge over the internet.

You get "90%" of the value of the world's best library - access to information - but lose things like a well-organized professional scheme, expert humans to assist you with your queries, and the ability to fill up a table with a bunch of books and reference materials side by side vs what you can fit onto your screen at once.

But you also add some new things - you don't have to travel, multiple people can use it at once, etc - that all basically could be thrown into the "10% the cost" bucket: a company doesn't have to spend to send researchers around, they don't have to wait, etc.

And publishers and artists of course were convinced that the internet would be the death of their current business models, that they were just extractive tools, not creative, and bad for their world; and weren't exactly wrong.

They're both just moving bits around and summarizing more than doing the original research to figure out atomic theory, say. But "10% the cost" - and especially "10% the time required" unlocks A LOT of ability to do more, fancier things. As does bypassing gatekeeping requirements (pay to travel, pay to have marketing copy written, pay/have connections to get access to the right archives, "sponsor" an artist to create stuff personally for you...).

I think ChatGPT is generally bad as a creative agent (vs a carefully used tool) and is gonna result in the internet being full of even more low-value BS than before, but I think it's inaccurate to say that it's not going to unlock anything new. It's just gonna look way different than today's internet.


Paying an artist to make art is not "gatekeeping", what the fuck.


It's sorta a two-sided thing.

Is a world with Patreon more or less gate-kept from the perspective of "influencing things to your taste" than one where you had to be rich enough to sponsor a Mozart or such?

Is a world where you can't afford to make your own compositions because you don't have the money to afford the tutelage or the free time to master every single step more or less gate-kept than one where you can play around and learn on free or cheap software tools?

It'll be different but there will still be artists - creation will be open to more people than ever before - and I also don't expect that the power-law function of cultural products and taste will change, so there will still be big winners from those artists able to command $$$.


It's not two-sided! That's just not what "gatekeeping" is! You're talking about like ten other things, none of which are gatekeeping.


Sure, what are those ten other words, sub them in where you like. Maybe "barrier of cost of entry" for most of the ones beyond historical wealthy-person-patronage? I'm not too concerned with what specifically to call examples of how things will change.

Overall I believe that this is certainly a disruptive, possibly revolutionary, information systems technology but I don't believe it's more uniquely purely extractive than something like the internet or previous disruptive ones.


It can certainly create cheap output, but it doesn't create correct output. AI is being used for many use cases where correct output should be the requirement (e.g. Quora answers). Blockchain resulted in a lot of fraud, AI is going to result in a lot of misinformation.


I think a lot of people are having trouble distinguishing between the two highly-hyped technologies. So there's a lot of "fighting the last war".

Generative AI, or AI in general, has actual use cases that people can immediately understand (and in fact use already on a daily basis). Not one crypto fan ever made a pitch for cryptocurrency that made any sense if you asked follow-up questions.


Its different.

But also its the same.

Economically, generative AI is massively more useful than blockchain algorithms, and its also a much wider field. Its actually producing useful content right now, in spite of being in a very rough and janky state like its straight out of a rushed research lab.

But its also getting the same slimey, scammy hype that crypo and blockchain got. The space is stuffed with grifters. You can't trust what anyone says about it.


I do believe that we will talk about the AI era in a 'changing the industry' way and not in a 'hot garbage we should have just ignored'.

And we already see the effect and have not even reached any hill.

Its for me the same as with renewable energie and EVs: We have not yet invested that much money into batteries as we could and all the problems we are facing will be gone sooner than later because there is no ceiling currently visible. Just not enough VC for it.

Battery tech started to get much more funding 2022.

AI will defintily change a lot and the progress is astonishing.


AI hype is like blockchain hype is a common HN meme which is gravely mistaken.

Blockchain didn't have any proper usecase that was better than what it was aiming to replace.

But AI already has real use cases. And it is already solving problems. And I don't mean ChatGPT, Copilot, etc.

I have myself developed and deployed AI models into real-world use that have been in production for years.

Blockchain was only hype. AI has real uses + hype.

AI will be a bubble that will burst, but even after that, people will improve it and use it more.


I feel like people learned not to trust their gut with blockchain when they saw a lot of people making a lot of money and noise for things they didn’t use and didn’t know why anyone would. But this feels different because I see “ordinary” people uploading their Lensa portraits and talking about how cool silly stories they made chatgpt write were. Things understood as interesting or valuable by everyday people that weren’t feasible 2 years ago now are.


I'm thinking it's because stockholders don't want a Kodak. Spending some money on looking at how AI can be used in the company, it could almost be seen as a defensive move.

Funny side note, I misremembered which company it was and my attempts at googling Canon case study brought up nothing relevant. Trying Chatgpt it said that Canon did fine, so I asked it wether there wasn't some company that did fail, and it reminded me it was Kodak.


GenAi can do useful work, blockchain can idk transfer wealth? have programs that work exactly like every other transactional program, but some of the bits are now on the user computer so that everyone is at risk of losing them? reimplementing banks from first principles, but with no guarantees, while speed running frauds?

now, it's true that everyone is jumping on the bandwagon etc, but the two tech are foundamentally different in usefulness.


It's not about the techs themselves, it's about how investors and shareholders push for the hot new thing to be included in the company, even when it's not relevant (or adversarial to) providing a great product.


That's where I was going with my comment. See this other commenter[1] who quoted StackOverflow's communications:

> Stack Overflow is investing heavily in enhancing the developer experience across our products, using AI and other technology, to get people to solutions faster.

They could have simply said:

> Stack Overflow is investing heavily in enhancing the developer experience across our products to get people to solutions faster.

And it would not have changed the meaning of the message at all, except that we would not have read the magic word.

1: https://news.ycombinator.com/item?id=36405875


It's a marketing text, don't expect too much of it. Your simplified version retains the meaning of the original, because neither is actually saying anything. "investing heavily in enhancing the developer experience across our products to get people to solutions faster", if not immediately followed by a list of specific examples, is just manipulative bullshit - it implies a lot, but doesn't actually say much.


Take a look at the answers on this post if you want to know how their most recent genAI experiment went:

https://meta.stackoverflow.com/questions/425162/we-are-seeki...

In short, it was a complete disaster. They create a tool to provide formatting suggestions on your new questions by piping the entire question into ChatGPT (or one of the comparable services). The internal prompt was figured out within minutes. And there are plenty of examples of this tool messing up questions entirely. And even when it doesn't destroy the questions, it still edits far too much minor and irrelevant stuff.

I don't understand how this experiment could make it this far, they actually put it into a live test on the site. So a part of the new users asking questions got this tool in this broken state. And the way this failed was entirely predictable when you just pipe content into genAI.


I found it useful to clean up a question I wrote recently. I had to do a few round trips through it, and did need to proof read and fix what it suggested, but nothing too major. The result was something that was much tighter than my usual aspy prose, and got positive responses and suggestions and didn't seem to trigger anyone being pissy.


I think this experiment would have gone over much better if SO leadership wasn't so disconnected from the actual users of the site, and had been willing to say "here's an early prototype, please red-team it and show us where it breaks so we can hopefully get it to a state where it's actually useful in a way that makes your jobs easier"


> The internal prompt was figured out within minutes.

So what?

> And there are plenty of examples of this tool messing up questions entirely. And even when it doesn't destroy the questions, it still edits far too much minor and irrelevant stuff.

If it can usually help but sometimes doesn't, then this is just an optional suggestion which can be skipped. This seems like a decent design for rolling this out.


The users that are the most likely to benefit from this kind of tool are also the least likely to be able to judge the quality of the edits it produces. Especially if you're not a native English speaker you might not notice that it changed the meaning of the text in some way.

It wasn't hard to trigger problems here, it isn't like this mostly works. And it also breaks code in a way that invalidates the question. With no visible trail of the original question being available.


> if you're not a native English speaker you might not notice that it changed the meaning of the text in some way

Many questions (especially, but not only, from non-native English speakers) are very unclear and ambiguous already, so I wouldn't be that concerned if the meaning is slightly changed - at the very least, GenAI text tends to be self-consistent.

> it also breaks code What is this referring to? As I understand it, the tool being discussed only makes suggestions about the title.


Can someone ELI5 for me what GenAI is? I've never heard of them and their website is just the usual content-free buzzword salad:

artificial intelligence - what else, sigh!

transformative - you bet

business AND consumer - so basically everyone, why tell at all?

various industries - everyone again

pioneering - yeah, what else

vertically integrated - sure, I was just waiting for that

AI solution - oh, who would have thought?

AI-powered!- you are saying!

I've literally not even finished the second sentence on the page and this rambling abomination goes on and on and on...

EDIT

For reference the first two sentences:

"GenAI, our mission is to harness the power of artificial intelligence to create transformative tools that benefit businesses and consumers across various industries. GenAI is a pioneering artificial intelligence company focused on developing a vertically integrated AI solutions business through its proprietary MAI Cloud™ database, with the development and commercialization of AI-powered tools and solutions for businesses and consumers across multiple industries."

EDIT

After reading other comments I think GenAI is a generic term (like blockchain) referring to Generative AI. In my defense the OP is not enlightening in this regard at all, so I just googled the term and the company website of genai-solutions.com was the first result. Nothing in the OP says that Stackoverflow is working with this specific company, however, that was just my (likely erroneous) assumption.


It's a fancy name for automated plagiarism.


It's generating content based on the previous content it encountered - just like a human. As such, it's only plagiarism if a human passes it as their own work (same as they would with a human ghost writer), but then it's the human plagiarising rather than the AI.


This is such baloney. Another way it resembles crypto.

Humans sense, interpret, organize, and consciously feel and experience internal and external stimuli none of which is applicable to stochastic parrots.


It’s an abbreviation for generative ai, which are models trained to synthesize novel things matching structures in the training data. Think GPT, stable diffusion, eleven labs.


Missed a great opportunity to call it AItocomplete


> business AND consumer

At least for me, those two categories don't (strongly) include education or government.


Fair, but I doubt they'd refuse to sell those too.


that sounds like a scam


Me: Hey AI how do I do this using X?

AI: Why would you ever want to do that using X? You should do Y.

Me: Well I need to do it using X because of these reasons.

AI: ...That's dumb. You should just do Y.


That AI is definitely trained on past SO content.


Closed as duplicate of "how do I do that using X" even though on close inspection that!=this


Use jquery


That actually sounds amazing - I would love to see the AI express opinions that contradict mine - all that I'm getting is "I apologize for the confusion. You're right ..."

If it was actually capable of having these kinds of conversations, it would be on its way to solving the XY Problem [0].

[0] https://en.wikipedia.org/wiki/XY_problem


Their traffic is rumoured to be down 25-30% after ChatGPT was released, so good to see that they are doing something to keep the community alive in the new era of GenAI.


Yep. they're in deeeep panic mode due to AI.

They're currently sending, massive unsolicited e-mail campaign to all (sic!) stackoverflow users. Message they're sending below. Please find 6 places where they try to address AI directly and indirectly :)

Stack Overflow is investing heavily in enhancing the developer experience across our products, using AI and other technology, to get people to solutions faster.

As part of that initiative, we’ve launched Stack Overflow Labs.

Here is where we’ll share our experiments, demos, insights, and news - across all Stack Overflow products. We plan to continually add to this site as we experiment and release new solutions. You can sign up to get previews and early access to features that will be available on stackoverflow.com.

Our guiding principles for Stack Overflow Labs

Find new ways to give technologists more time to create amazing things. Accuracy is fundamental. That comes from attributed, peer-reviewed sources that provide transparency. The coding field should be accessible to all, including beginners to advanced users. Humans should always be included in the application of any new technology.


Very interesting -- had not seen that. Indeed, I think they could do with a bit of a makeover. Their ad platform is also terrible, way worse than Twitter's, so it is both good and bad that they're feeling a bit of a pinch I suppose.


more than that the platform is the more aggressive in the worst way possible, or your ask so specific about weird behavior that nobody responds or they say this already bean asking even if not(because some mods don't read what they ban), they block conversation with hundreds of votes, they practically insult the post writer for “not knowing”, i'm happy if they start changing the nefarious moderation scheme, im 100 % more likely of making the same question in reddit than in stackoverflow people are kind, because moderation encourage that, that would totally increase the use of their platform lots more than this.


Indeed, asking the wrong question is like being fed to the wolves..


I don't know about ChatGPT, but for me Phind [1] really lowered my SO usage

[1] https://www.phind.com/


It's ... interesting.

I asked

how do i remove diacritics from unicode characters https://www.phind.com/search?cache=bd2b33eb-9454-4d38-975e-1... and it answered with multiple Python code snippets with the second solution getting close to the real solution but is slow and incorrect.

Now if I ask the question I actually want

how do i remove diacritics from unicode characters with php https://www.phind.com/search?cache=c29fe466-16cc-4795-94bf-0... then you can see it misunderstanding the question: the first answer is the desirable output (I suspect it only works for latin letters tho) except for the iconv problems mentioned but the second is completely incorrect. The third answer is close but no cigar as it only works with Latin characters.

The answer I wanted to see is using the ICU library to run the rules NFD; [:Nonspacing Mark:] Remove; NFC. as mentioned on https://unicode-org.github.io/icu/userguide/transforms/gener... and https://www.unicode.org/iuc/iuc22/a339.html and a million other places but I wanted to link only official Unicode documentation.


I'm not getting great results. It seems to love to latch on to a keyword in the question and run with it (maybe the GPT-4 version does better). I asked a question about async in Rust, it gave me a comparison between threads and async and then kept going in that direction instead of answering what I asked. I mentioned a specific crate as something I wanted an alternative to, and it told me a description of the crate. Rephrasing the question a few times eventually got a correct answer, but it's not much better than going straight to ChatGPT.


I've tried Phind a few times, but my gripe with it, so far, is it gives me outdated examples that don't work because the API has changed. Is there some magic keywords I should be including to prevent that?


Make sure that you're logged in, and have turned on the GPT-4 toggle?


I assume that by GPT-4, you're talking about the "Use Best Model" toggle right?


Not sure about that, it's been mostly good for me in the case I needed it


Maybe my use case of dotnet core development is going to be different than for python devs.


It's far from perfect, but in my experience, it's much better than both ChatGPT and Bard. I've also used it to ask non-technical questions, and it often generates a good response, so not really just "for developers".


25% of that traffic was asking questions answered by a glorified markov chain. I dont think stackoverflow lost any valuable users or interaction unless they're looking for newbies with syntax errors to flood the place.


> newbies with syntax errors to flood the place

I think they are looking for that. As long as they read the ads, they don't care how good at programming they are. It's a "web property", not a programming contest ;)


SimilarWeb shows that their traffic drop is real, roughly 20% in the last two months:

https://www.similarweb.com/website/stackoverflow.com/#overvi...


The founders sold at a perfect time, getting that bag just before traffic dropped due to generative AI.


Could also be due to VSC copilot


They tried A/B testing the “question formatting assistant” on the production site a few days ago. The results were…not great, and they had to quickly shut off the experiment: https://meta.stackoverflow.com/questions/425162/we-are-seeki...


As writing, all comments seems to be quite negative. I think this is awesome and definitely something that can unlock a lot for me.

I problem I often encounter when using stack overflow is that questions are answered ina a very narrow way that doesn't really fit me. So I need to bounce between 3-4 answers that are related to get the right solution.

I would guess this solution could do this for me, provided that I provide anough context is my problem prompt.


SO could go on all in using the 100k context length of Anthropic Claude, feed in the documentation of a library or product, and have it generate questions and answers, and post them on Stack Overflow to fake activity now that everyone I know participates on it in a read-only fashion.


The projects listed [1] seem a bit mundane at first, but I prefer that to the opposite, where companies shoehorn GPT into places where it just irritates users.

[1] Title generation, Formatting assistant, Sentiment analysis, Identifying duplicate questions


Duplicate question closing will now be instant if your question has an embedding that's remotely close to another question


It really is obscene how many useful answers I've found on threads that were subsequently closed as duplicate -- with a link to a question that's not even close to the same.


Or the original is like a decade old and the solutions don't even work anymore. And even if they did still work, maybe some one has come up with some new solutions in the last 10 years and I'd like to hear about as well.


Can you provide a link to an example or two of that happening?


As near as I can tell, SO prunes duplicate-tagged questions. I 404d on the first few examples I pulled.


Users with 10k reputation will be able to see them even though they're deleted. Can you post them anyway?


Wait, what?!

SO hides questions with answers... but still retains them and allows logged in users with enough karma to see them?

That is insane.


Isn't that fairly common? Wikipedia lets admins see and undelete deleted pages too.


With unlimited history? Why not just leave them visible but read-only?


I was going to post this response, too. Yes, you can close it because it is duplicate, but god knows the huge range of how good people are at explaining something to people of various levels of understanding.


After reviewing a few meta posts on it, much appears to boil down to there being poor SO UX options.

I.e. there isn't a "closed for being homework" category

Consequently, duplicate served as the go-to but inappropriate "there's something wrong with this, and I don't want to deal with it" button.


Just like wealth redistribution and inequality, I'm seeing this migration of "programming discussion" moving from open forums like SO to closed systems (guarded silos of GH Copilot, ChatGPT, Phind and others). We are preferring to ask the nitpicks in their chatbox instead of as a public comment on Stackoverflow.


In a way SO is the perfect platform for human feedback in reinforcement learning


True, feedback on stack overflow can be brutally thorough and exacting to an extreme that the general population (or a mechanical turk) would struggle to emulate; perfect feedback for a LLM.


The Stackoverflow research initiatives per generative AI are very good examples of what some enterprises can do to improve user-input text context on websites and applications.

1. Adapt user text to a standard format. 2. Correct spelling and grammar mistakes. 3. Detect inconsistencies during the input process.


Embedding search in a chat UI over all the SO content (with valid citations) would be the obvious feature I’d build if I were them.

“Me: How do I do X? SO: here are a few highly voted answers … Me: not quite, what if to foobar is like this? SO: In that case this is probably the answer: … Me: Upvote answer (giving RLHF to the system to help with future training)”

ChatGPT is great at answering code questions but if you are on SO you want to engage with the community, upvote the answer, or comment etc.


Whatever happened with the moderator strike over the AI? This seems like a great way to cheapen the brand.


Still ongoing.


Is this why ChatGPT answers are banned on SO? Because it would compete with their own upcoming business?


Really no. I’ll quote myself replying to another HN comment:

> If you go to ChatGPT and just ask it, you’ll get the equivalent of asking Reddit: a decent chance of someone writing you some fan-fiction, or providing plausible bullshit for the lulz.

I disagree. A layman can’t troll someone from the industry let alone a subject matter expert but ChatGPT can. It knows all the right shibboleths, appears to have the domain knowledge, then gets you in your weak spot: individual plausible facts that just aren’t true. Reddit trolls generally troll “noobs” asking entry-level questions or other readers. It’s like understanding why trolls like that exist on Reddit but not StackOverflow. And why SO has a hard ban on AI-generated answers: because the existing controls to defend against that kind of trash answer rely on sniff tests that ChatGPT passes handily.. until put to actual scrutiny.


Is there really an epidemic of troll answers? Not sure I've ever seen intentionally bad programming advice on reddit. (Unintentionally bad, sure.) Even 4chan of all places is rare in this regard, honestly, and in the cases where it does happen the answer is usually dripping with blatant sarcasm.

I'd guess totally made-up answers will be way, way, way more common with LLMs compared to any human social network.


There absolutely is an epidemic of troll answers on Reddit but not on programming subs, presumably precisely because of the aforementioned reasons. The kind of person that knows the actual answer (or has sufficient background knowledge to BS a believable answer) is the kind of person that won't post a troll answer in a programming sub. The kind of person that will post a troll answer won't have the requisite background knowledge and will be immediately "outed" and downvoted on the more legitimate (not r/all) subs, and eventually be discouraged from trying.

ChatGPT is immune to this organic feedback loop.


How would they know the answer was gotten from ChatGPT though? If I ask ChatGPT something, test it and confirm it’s correct, I can answer it in a simple way.


Actually, SO leadership recently pissed off their moderators by updating their moderation guidelines to de facto allow Generative AI answers on the site:

https://meta.stackexchange.com/questions/389811/moderation-s...

So it's the opposite, they decided to unban it just as they started selling the same thing.


And before long it's going to be, "Let AI write an answer to a question and you only need to edit it!".


> Dear members of the Stack Overflow community, I hope this question finds you well.

> I am a novice in the world of JavaScript and, as part of my learning journey, I've undertaken the task of attempting to manipulate the Document Object Model (DOM). Specifically, my aim is to add a new div element dynamically to my webpage, utilizing the common programming language called JavaScript.

> This snippet of code is my current attempt:

    var newDiv = document.createElement("div");
    newDiv.id = "newDiv";
    newDiv.innerHTML = "This is a new div.";

    document.body.appendChild(newDiv);
> The logic behind the above code snippet is as follows:

> - I create a new div element by invoking document.createElement("div") and assign it to the variable newDiv.

> - I then assign an id to the newly created div by setting newDiv.id to "newDiv".

> - To add content to the div, I set newDiv.innerHTML to "This is a new div."

> - Lastly, I attempt to append the new div to the body of the document using document.body.appendChild(newDiv).

> Nevertheless, the code does not seem to produce the "div" I was hoping for. Additionally, there are no errors displayed on the console. I have attempted to diagnose the issue myself but have failed, and I have exhausted all other online resources

> Time is of the essence. Additionally, I would prefer this answer to be as concise and easy to understand as possible. Your guidance and assistance in this matter is very much appreciated.

> Thanks in advance,

> user4798145768


[closed]: needs details or clarity

0 comments


Do I need to ask the question or can I automate that too?


As someone already said in another thread somewhere, we should cut to the chase already and let ChatGPT click the ads on the page.

Even better, we can further optimize the experience by simplifying all news websites, reddit, SO networks to just a few empty HTML pages with ads so the can be clicked by the AI. No need for content, agencies, editorial staff.

I mean, sure, Facebook, Microsoft and Google would probably say that's against the TOS of their ad services, but after all it's not like they cared about TOSes when they used the same websites to train their models.


You might be interested in AdNauseam: https://adnauseam.io/

It's a uBlock Origin fork that behaves as described.


I'm pretty sure the parent was sarcastic about clicking the ads.


If you have to ask...


I'd rather it helped people to write coherent questions rather than the usual dumping of a few irrelevant lines of logs onto the page and asking "please help".


I mean, that would be the “next step” wouldn’t it? I’m less scared for the big stack exchange communities which have enough people to vote out junk, but for the little ones I’m not sure.




Consider applying for YC's Spring batch! Applications are open till Feb 11.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: