Princeton ‘AI Snake Oil’ authors say GenAI hype has ‘spiraled out of control’ (venturebeat.com)
110 points by CharlesW on Aug 23, 2023 | 101 comments



The authors' substack is great. They're sort of like investigative journalists who pick a topic that's got a lot of hype and then dissect it to (often very quickly) find that the hyped up topic was just hot air. Some examples:

The claims of "liberal bias" being just a completely bunk study: https://www.aisnakeoil.com/p/does-chatgpt-have-a-liberal-bia...

No evidence of GPT-4 getting worse over time: https://www.aisnakeoil.com/p/is-gpt-4-getting-worse-over-tim...

The analyses are measured and well written. I always enjoy when a new one comes out.


> No evidence of GPT-4 getting worse over time

I read the article you linked to, and you're either putting words in their mouth or making claims that go way beyond the conclusions presented in it.

The authors choose to distinguish between "capability degradation" and "drastic behavior change." Almost no one I've witnessed complaining about past and present ChatGPT results cares one bit about that distinction. They simply complain that prompts which previously produced good results are now spewing garbage.

The authors explicitly admit that "the kind of fine tuning that LLMs regularly undergo can have unintended effects, including drastic behavior changes on some tasks." In other words, fine-tuning may in fact be causing the worse results.

Furthermore, the authors merely add that such a drastic behavior change shouldn't be assumed to be a "capability degradation." Meaning that, theoretically, if you toy around with your old prompt long enough, you should eventually find an alternative version that produces the result you were looking for. Again, none of the complainers I've come across give a shit about this theoretical conclusion. They only care that past prompts now give different results. In the article, the authors freely admit that this "drastic behavior change" could be objectively occurring due to fine-tuning.

To further prevent readers from making sweeping generalizations, the authors explicitly state that OpenAI makes it pretty damn difficult to do reproducible research on the LLMs in question, so you shouldn't come to any hasty conclusions regardless ("As we have written before, this underscores how hard it is to do reproducible research that uses these APIs, or to build reliable products on top of them.")

Well, "building reliable products/processes on top of ChatGPT" is all the complainers care about anyway, so we're back to square one. Far from disproving the claim that ChatGPT is becoming more unreliable and producing worse results for the complainers' use cases, the authors are in fact saying that these grievances may be justified by the only practical criterion the naysayers actually give a shit about.


Thanks for this. I agree they do nothing to support the idea that GPT-4 isn't getting worse. They simply criticize a paper that makes the affirmative claim, and explain why it's difficult to test.


The key way I try to distinguish "Snake Oil" is: Is anyone actually using this routinely, and are the problems it solves self-evident to new non-zealot users after a demo?

Take blockchain, for example; people spent years and millions of dollars looking for additional uses for the technology besides coins and coin contracts, and aside from those it never really gained routine adoption. That's why non-coin blockchain always struck me as a "hype train" or "snake oil": it was a solution in search of a problem. NFTs are even worse, since they haven't found any purpose to exist yet (money laundering?).

Contrast that with the "GPT" offerings (ChatGPT/Bing/Copilot/Bard/etc). Multiple of my colleagues are actively using them routinely every workday and when you demo it to another person, they understand it, and they too start utilizing it in their workflow. Heck, my mom discovered it herself on Bing's homepage and was telling me I should check it out.

That's the opposite of "snake oil" or a hype train, it is arguably a competitive threat to search engines.

PS - Small disclaimer: "AI" is a nebulous term. I'm talking specifically about LLMs. Other types of "AI" have already been here for a long time making our lives better (e.g. computer vision).


Another test is whether kids use it w/o being forced to. My kids are obsessed with both MidJourney and ChatGPT. They have found use cases w/o any profit motive and w/o any outside pressure.

No amount of Blockchain "Webinars" would have convinced my kids to use the blockchain.


If blockchain had fulfilled its claims of revolutionizing the world of finance, it's still unlikely your kids would have become obsessed, regular users of it. If room temperature and pressure superconducting or nuclear fusion became viable, your kids might also be unlikely to play around with them on their own. Kids apparently love Roblox and most adults think the metaverse is dead. What kids like seems pretty orthogonal to what is legit or snake oil.


> If room temperature and pressure superconducting or nuclear fusion became viable, your kids might also be unlikely to play around with them on their own.

I disagree. Kids will turn anything not too abstract into toys. An improvised radioactive sample (actually some neon indicator bulbs in a box) + Geiger counter brought by my parents = a version of a hide-and-seek game from my own past.

Also, a model maglev train is something so obvious that I am sure such toys will appear immediately once the technology becomes viable.


Point taken, but how about these:

- My mom can't figure out Blockchain either

- My sister (a doctor) and brother (three degrees) can't figure out Blockchain either, because they're too afraid of scanning the wrong QR code and having their balance drained

My mom, sister, and brother all use ChatGPT. My brother uses it actively for work.


You can dig deeper.... what exactly are the real-world applications of an immutable transaction ledger? I can't think of very many. Let alone a bunch of kids.


Immutable decentralized public ledgers could have many uses, but the biggest problem is: who is authorized to add new entries? There are different answers to this question depending on use case, but that is really the crux of the matter.
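The "immutable" half of that recipe is straightforward to sketch: each ledger entry commits to the hash of everything before it, so any later edit to history breaks the chain. A toy illustration in Python (tamper-evidence only, not a real blockchain; it deliberately says nothing about the hard part above, who gets to append):

```python
import hashlib
import json

def block_hash(prev_hash, entry):
    """Hash the previous block's hash together with the new entry."""
    payload = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, entry):
    """Append an entry; each block commits to the entire history before it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"entry": entry, "hash": block_hash(prev, entry)})

def verify(chain):
    """Recompute every hash; an edit to any earlier entry breaks the links."""
    prev = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(prev, block["entry"]):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, "alice pays bob 5")
append(chain, "bob pays carol 2")
assert verify(chain)

chain[0]["entry"] = "alice pays bob 500"  # tamper with history
assert not verify(chain)
```

Note that nothing here stops whoever holds the chain from rewriting it wholesale and recomputing all the hashes, which is exactly why the authorization question is the crux.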


Decentralization is critical to the recipe as well. "Immutable" centralized ledgers are largely a non-starter, because the scenarios you want blockchain for are zero-trust scenarios. With centralization, the promise of immutability holds exactly as long as it's convenient for the centralizing power.

In any case, defining public use or accessibility as the metric of relevance is quite illogical. You end up with countless absurdities, like relativity being worthless and Coca-Cola being the most relevant invention ever.


I don't blame them since the stakes are much higher, but it probably doesn't help to say "they can't figure out blockchain" because that doesn't make any sense.

Which 'blockchain'? Blockchain is a singular noun, that's like saying "they can't figure out phone".


Sure. I'm not saying Blockchain fulfilled its promise. I'm just not convinced that every major new thing will be immediately adopted by kids. It seems like an orthogonal thing.


> it without being forced to

I think this is an important detail. A lot of "forced into it" numbers will get presented as "happy users" in various unfair metrics comparisons.


> Kids apparently love Roblox and most adults think the metaverse is dead.

Can you define what a metaverse is in this case and how is it different from what we used to call an MMORPG?

Roblox involves no blockchain and no VR, so what exactly is "metaverse-y" about it?


Anyone can create and monetize their content in a world where a user's character can access all of the content. MMORPGs generally seem to be built by a single developer who controls the whole experience. They're also generally a single game instead of thousands of mini games.


Plenty of kids I know use the blockchain: for fake IDs, I think it's the backing financial infrastructure (a crypto-knowledgeable friend purchases the IDs and redistributes them), and for drug purchases.


Midjourney and ChatGPT are both free to try; if someone had given me free bitcoin, I might have made a wallet.


They had this in the early days: the Bitcoin Faucet.


Sigh... I "parked" my free bitcoin in the wrong place. Didn't know yet about just holding on to the private key and let it sit on the chain.


Not a great test; teens in the last few years sometimes get into crypto.


Trading it on a centralized exchange to speculate on its future value doesn’t involve using it at all, it’s just a MySQL database at Coinbase tracking your bids and balances.

I’d wager the biggest non-speculative use case is pig butchering scams and ransomware.


> I’d wager the biggest non-speculative use case is pig butchering scams and ransomware.

Probably, but drugs are also a big use case (and I don't just mean retail; it's used wholesale for transferring sums of money internationally to buy precursors, for example).


Only for investment purposes, thinking they'll get rich quick. I've never seen anyone use crypto for anything else. The author specifically said without any profit motive.


I'm routinely using LLMs in place of search engines. Would you rather type in a cocktail and instantly get a recipe? Or do you want to open Google, scroll past 50 ads, click through a bunch of blogspam that gives you the history of cocktails, Prohibition, the chemical formula for ethanol, and the author's life story, followed by a recipe that was itself generated by a substandard AI model anyway?

edit: I'm prepared for the enshittification of GenAI, but given the competitive edge, many people will pay top dollar for faster/higher-parameter inference. I'm hoping ad support is minimal on the super-deluxe god-tier AI subscription, which I'll have to pile all of my spare cash into.


I have two concerns when people tell me they're using this stuff like a search engine:

1. How expensive are these to operate when compared to search engines? If search engines are about as good at information retrieval, but are much cheaper, what does that mean for the future of these services?

2. How good are these systems actually when compared to search engines? I get that for things like code, you can immediately verify results through testing, but GPT was trained on what looks good to humans thanks to RLHF, not necessarily for correctness.

I really wonder what's going to happen with this comparison in the near future. I think there are two possibilities:

1. Running inference on models and keeping the models relevant by introducing new data remains extremely expensive and companies like OpenAI operate at massive losses. Eventually this leads to their products getting worse.

2. Models get cheaper to run and efficient enough to run on mobile hardware without losing much fidelity. OpenAI still might be in trouble in this scenario if "open source" models like Llama2 get popular.


I prefer search engines because I can look through multiple sources. LLMs just give an answer without context or alternatives, which makes it difficult to judge the validity of the response.


perplexity.ai usually gives 4-6 sources that you can validate the response with.

As far as I can tell, it performs a search and then feeds the full text of the search results to an LLM, which presents a summary with citations.
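If that's right, the pattern is easy to sketch. The helper below is my own guess at the shape of such a search-then-summarize pipeline, not Perplexity's actual code; the search results are stubbed in place of a real search API:

```python
def build_cited_prompt(question, results):
    """Pack numbered search results into a prompt that asks for [n] citations."""
    sources = "\n\n".join(
        f"[{i}] {r['title']}\n{r['text']}" for i, r in enumerate(results, start=1)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite them inline as [1], [2], ...\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

# In a real pipeline `results` would come from a search API; stubbed here.
results = [
    {"title": "Negroni recipe", "text": "Equal parts gin, Campari, sweet vermouth."},
    {"title": "Cocktail history", "text": "The Negroni dates to 1919 in Florence."},
]
prompt = build_cited_prompt("How do I make a Negroni?", results)
assert "[1] Negroni recipe" in prompt  # the prompt would then go to the LLM
```

The [n] citations in the model's answer then map back to the numbered links shown to the user, which is what makes the response verifiable.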


Not all questions need a variety of answers. If it's good enough for your needs, go for it. Saves some time searching.


> or do you want to open google, scroll past 50 ads, click through a bunch of blogspam that gives you the history of cocktails, the prohibition, the chemical formula for ethanol and their life story

I think in this aspect the difference between GPT and Google is not the level of tech, but that GPT took text from copyright owners and distributes it without their consent and without showing their ads, which is being challenged legally, in my understanding.


Doesn't matter. If governments blunder this and try to enforce copyright in AI, then in 5 years we'll all be using Chinese AI instead of Google, because they don't give a damn about copyright.


The CCP has already created its own regulations for AI.


And what does it say about copyright? Genuinely curious.


Internet articles say something about copyright protection, but the more important part is that AI has to promote the core values of socialism and avoid statements criticizing the Chinese government.


Interesting that the CCP rules for AI are the same as the rules for any meat intelligence under their control. At least they are consistent.


It depends on how important it is that I get a precisely correct answer. For a cocktail it's not a big deal if I get one that's slightly different, in other cases an incorrect answer might be catastrophic.


Using Bing Chat, you can always just check the source of its answer if it's crucial. Saves time searching.


Amusingly, Kevin Roose and Casey Newton made a ChatGPT-authored cocktail recipe on "Hard Fork" a while back: https://www.nytimes.com/2023/06/30/podcasts/hard-fork-ai-poi....

I think the review was basically, "I’ve had worse cocktails. I’ll be honest."

So...meh?


If you pay for search you get some of those benefits already


Sincere question: what paid search do you have in mind?

How conceptually could a paid search engine solve the problem of SEO and spam? (Curation seems economically infeasible?)

Is what you're actually proposing ad-free search results? (I'd be willing to pay Google approximately $0.01 per search for this. I suspect this is probably more than they get from anything but the most expensive ads (pharma ads that confused old people will click on).)

What I would really love is a browsing-assistance agent based on something like an LLM that can summarize and de-cruft search results for me and show me content extracted from web pages in a standard format without any Javascript. (I don't need an LLM to be the arbiter of truth, I just want it to be good at summarizing documents.)

(Because Google is so wedded to the ad ecosystem I'm not sure I trust them to be the steward of technology that is basically about removing ads from my life.)


Kagi (https://kagi.com/) is the paid search I've tried. It was decent, but not a noticeable enough improvement to win me over as a customer.

OTOH, I am a paying ChatGPT Plus user and there are some days where I feel like I get my monthly subscription price's worth in a single day.

In terms of summarization, this study Anyscale just published is interesting: llama2-70b gets within a hairsbreadth of gpt-4/human scores for summarization. So it's conceivable that sometime soon you'll be able to run a local LLM alongside your browser and get decent results (4-bit quants of llama2-70b currently take about 40GB of memory; you need at least 2 x 24GB or 1 x 48GB of GPU to run them at reasonable speeds atm): https://www.anyscale.com/blog/llama-2-is-about-as-factually-...


Who do I pay for better search? Honest question, I'd love a recommendation.


https://kagi.com/ is the one I see recommended. The results seem to focus hard on "no blogspam," which is nice, and supposedly you can completely ban whole sites from your results forever (e.g. goodbye, GeeksForGeeks, and your shit SEO hacking).


I like Phind


Or you can run your own open source AI, on your own hardware.


The authors specifically state that they do not think that GenAI is snake oil. You can’t legitimately argue that it is.

On the other hand, I’m not sure that you can legitimately argue that it is not over-hyped. The only way that it isn’t is if we achieve “singularity” imminently - because that it is what a sizable number of people, including possibly some people at OpenAI, are actually expecting.


I think we're nearing the point of the initial dotcom bust. A whole lot of internet startups went completely belly up when the bubble burst in the early 2000s.

That was no reflection of the actual potential; it just indicated that people at all points of the value chain had not yet grasped what the internet was or how it would mesh with humanity.

That's where we are now. It'll burst not because it's overhyped, but because we don't collectively understand the implications.


Snake oil is about the thing being sold; it's about the sales pitch and the people selling the product.

An important snake oil trait is making grand claims which have little basis as fact, particularly with things which are difficult to understand or difficult to verify (if not impossible.) You can say anything about a product if the prospective buyers can't verify the claims. And so, the claims tend to get broader and flashier.

Therefore, a snake oil sales pitch could still be applied to a highly useful product.


Are people durably achieving a competitive advantage through the GPT?

I think we've seen some people striving for a competitive advantage, and willing to see one exist, but I don't know if we've seen any "AI company" turn a meaningful and durable profit yet (open to being wrong on this).

To me (so far), it seems like GPT use cases that apply it as a table stakes feature of some greater application (rather than an ecosystem or a platform), are the ones that are actually showing promise in terms of expected utility + value capture.

If GPT4+ level engines become as cheap to execute as a SQLite query, I think that's where things get interesting (you can start executing this stuff at the edge).

But I still can't see new companies (a la OpenAI, MidJourney, etc.) making a lot of money in this scenario, it seems to overwhelmingly favor companies that already have distribution.


We're likely way too early to call this one. Based on current hardware trends, GPT-4 will run on your phone within 6-10 years. Right now, folks are feeling out the edges of what makes sense and what doesn't make sense at extreme R&D and opex expense. In 10 years, we'd expect the winners of today to still be winners and have great margins due to declining compute costs.

Granted, if you are spending 10x the future value of the product you are offering, then even a 10x decline in compute costs won't get you where you need to be.


> Based on current hardware trends, GPT-4 will run on your phone within 6-10 years

They quantized the model from 16 bits to 4 bits, which was low-hanging fruit, and it looks like they can't quantize it any further, to 2 bits.


CPUs/GPUs are general-purpose; if enough workload demand exists, specialized Transformer cores will be designed. Likewise, it's not at all clear that the current O(N^2) self-attention is the ideal setup for longer context lengths. All to say, I'd expect another 8-10x algorithmic improvement in inference costs over the next 10 years, in addition to whatever Moore's law brings.
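The O(N^2) point is easy to make concrete with a rough FLOP count. The constants below are illustrative (two n-by-n-by-d matmuls, the QK^T scores and the attention-weighted sum over V; real layers add projections and MLP blocks on top):

```python
def self_attention_flops(seq_len, d_model):
    """Rough FLOPs for one self-attention layer: QK^T scores plus the
    attention-weighted sum over values, i.e. two (n x n x d) matmuls
    at 2 FLOPs (multiply + add) per element."""
    return 2 * 2 * seq_len**2 * d_model

base = self_attention_flops(4096, 8192)
doubled = self_attention_flops(8192, 8192)
assert doubled == 4 * base  # doubling context quadruples the attention cost
```

That quadratic blowup in sequence length is exactly why sub-quadratic attention variants are such an active research target for long contexts.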


Mobile TPUs/NPUs:

Pixel 6+ phones have TPUs (in addition to CPUs and an iGPU/dGPU).

Tensor Processing Unit > Products > Google Tensor https://en.wikipedia.org/wiki/Tensor_Processing_Unit

TensorFlow lite; tflite: https://www.tensorflow.org/lite

From https://github.com/hollance/neural-engine :

> The Apple Neural Engine (or ANE) is a type of NPU, which stands for Neural Processing Unit.

From https://github.com/basicmi/AI-Chip :

> A list of ICs and IPs for AI, Machine Learning and Deep Learning


For programming it's like an additional feature on top of Google. You can dump entire error logs, get correct syntax (I give it a 4/10 for accuracy, though), and even dump your entire page of code if there's a bug in it.

Yeah, for SQL queries and string or array manipulation it's better and faster than Google 8/10 times, even accounting for the errors it generates.


I agree with your last paragraph.

In my opinion, Microsoft's proposed 365 Copilot price tag of $30 per user per month will probably turn a profit, with high-usage users offset by low-usage users once corporate 365 group memberships come into the mix. Many corporates will take a chance on it, willing to throw plenty of money at anything perceived as having a chance of improving hard-to-define competitive edge, creativity, and productivity.


Given that the high-end art and luxury goods market has long been used for money laundering, crypto targeting that market makes complete sense. That it happens to be illegal? Well, that just requires some more ju$tification.


> Heck, my mom discovered it herself on Bing's homepage and was telling me I should check it out.

Sounds like grade A Microsoft marketing speak. But wait, there's more!


The predominant usage of NFTs is non-art-based. Insurance policies and market making use NFTs very heavily, especially after a BSL license from one market maker expired this year.

And for art NFTs, much of the interest comes from the collector space or collector sentiment, as these are an improvement in supply transparency and provenance compared to other mediums of collecting.


Well, that's a bad criterion. A lot of people get hooked on "snake oil" and will even seek it out: astrology, alternative medicine, crystals, vibes, cults and other religious beliefs, and various social, economic, and political beliefs that aren't supported by evidence. Lots of them aren't "zealots"; they're pushed by fear, misconception, misinformation, desperation, FOMO, hype, etc.

Also, even if some people get benefit out of it in some usages, that doesn't justify the hype and snake oil used to sell it to uninformed people for inappropriate uses. Baking soda is good for making bread, and lots of people use it for that with great success. It is snake oil for curing cancer, however.


Using NFTs as domain names is novel, interesting, and non-financial.

https://ens.domains


My belief is that blockchain-based solutions are typically inferior to centralized ones except when the goal is to evade regulation. (Crypto is good for dealing in contraband, dodging currency controls, and collecting ransomware; but it's always less efficient than centralized solutions with a trusted authority. And the only time you can't find a trusted authority is when you're trying to dodge regulation.)

In this instance, I don't see how this offers any advantage over the usual registrar-based DNS: at the end of the day I'm just using the ENS rootnode keyholders as my registrar, and I don't see how the blockchain is solving any problem that couldn't be more efficiently solved with a traditional database.


"Evading regulations" is a phrase that holds a different weight when you don't live in a free society. "Evading regulations" could be using a gay dating app in Iran. Or organizing to fight against a repressive government in Russia. Even in so-called civilized Western nations, there are increasing tendencies toward autocratic rhetoric and encroachments into personal liberties. See: abortion access in the United States.

Not too long ago the US government put pressure on all financial institutions to stop processing payments for Wikileaks without them ever having been convicted of committing any crimes. That's just a single example plucked out of the sky, regardless of your feelings about that organization.

The great advantage that we get from a decentralized, opt-in trust infrastructure is that we have an avenue around these unjust and extralegal encroachments.

Having said all that, even if you don't care about politics, there are countless examples of centralized entities changing ownership, or changing strategies, and their users pay the price. See: Twitter taking people's usernames.

You might be fine with living at the whims of billionaires, but I want my digital life to be more durable than that.


1. The first part here doesn't argue against my claim that "blockchain is only good for evading regulation", it argues that evading regulation is sometimes a good thing. That's fine, let's set that aside.

2. The second part argues about centralized entities changing ownership. I don't think the example of Twitter usernames is a good use-case for blockchain: there's nothing here that isn't already solved by public key crypto alone (which is how blockchain solves the problem anyway).

Maybe Twitter here is just an example and you want to talk about corporate control over payment services? I don't think this is a compelling argument. There's no shortage of payment services, and even if one of them is bad you can always find another. Sure, Stripe or your bank could go out of business tomorrow, but switching to Apple Pay or a different bank is not a huge problem.


Corporate control over payments is a topic worth discussing but I'm talking about corporate control over identity. Twitter usernames are identities. Email addresses are identities. Phone numbers are identities. IP addresses are identities. Domain names are identities. All of those things are corporately controlled and can destroy businesses and communities on a whim.

I agree with you that public key crypto solves many of these problems, but you still need to publish your public key somewhere. And you need an infrastructure where public keys are first class citizens with the platform.

I gave ENS as an example precisely because it is open source digital public infrastructure where such things can be published forever, and it is not corporately controlled. It happens to use a blockchain to do consensus and create incentives for people to opt into running the public infrastructure. I personally think these game theoretical incentives are integral for the functioning of the platform but am not married to the idea if there are better ones.

Something like Ethereum is an anti-authoritarian platform. You don't use it unless you desire the qualities of anti-authoritarianism. For everything else there's centralized solutions.


My washing machine says "Optimized by AI". My clothes washing machine. Hell yes it's gotten out of control.


When TVs switched from standard def, suddenly all sorts of products from makeup to paint to sunglasses were advertising themselves as "HD." After the iMac became a surprise hit for Apple, every piece of tech junk imaginable was immediately renamed to add a lowercase "i." Blockchain had a similar, albeit briefer, hype cycle. Marketers love to jump on the latest trend by co-opting buzzwords.


I liked how in the late 90s everything imaginable was updated to be the Generic Item 2000 to indicate it was the latest and greatest, and then once Y2K came and went they had to one up it and make everything a Generic Item 3000. Gonna confuse the hell out of some future archaeologists.


There are quite a few results in Google for "iBlockchain," plus some "Blockchain and AI" articles. Unfortunately I don't see any that mix all three, or any of them with "HD."


Many of the appliances from the 90s came with a “fuzzy logic” label.

Nobody knew what it meant then, but it was still used to hype up products


I remember seeing the fuzzy logic label on Japanese rice cookers.


Not that I disagree, but as I've said before, it's easy to be a theoretical contrarian: claim something is BS, but don't stake anything on it or profit from it (other than book deals).

We're in a housing bubble, at the peak of a multi-century economic mega-cycle; we're overdue for earthquakes, tsunamis, asteroids, the caldera under Yellowstone, flu pandemics, etc. It's super easy to say all that and demand people listen; what's hard is committing to an actionable prediction.


Being a pessimist is academically safe and intellectually lazy.

If you’re wrong in your predictions nobody cares because it’s better than if you’re right.

On the other hand, if you’re an optimist and you are wrong you look like an idiot and the social penalty is higher.

So people continue to predict doomsday and most of the time it doesn’t happen.


> So people continue to predict doomsday and most of the time it doesn’t happen.

Well, the same is true for any prediction of anything unlikely. People continue to predict world-changing innovations, and most of the time they are wrong. That's the reality of any low-probability/high-impact prediction, in the positive or negative direction.

As a specific case of that: most startups fail, so predicting a startup will fail is safe.

I'm pretty prone to exactly that prediction. But I like this quote from Erik Davis (https://www.google.ch/books/edition/High_Weirdness/Rcq2DwAAQ...): "In the court of the mind, skepticism makes a great grand vizier, but a lousy lord."


All I ever wanted was a way to short NFTs


Never short a ponzi, even if you win your counterparty won’t pay


Great to read this back-to-back with the takedown on the GAN image generation offering also on HN.

The point of most critiques/polemics on AI snake oil isn't the effectiveness of AI in context. It's about misapplication, the belief that it's AGI, the belief that the answers are right, and the belief that it has no downsides. It is often misapplied; it's absolutely not AGI, and not even on the road to AGI; the answers are not always right; and it has massive social downsides, employment included.

It has upsides, sure. But it's not the job of snake-oil warners to catalog the good; they're pointing out the abysmally bad in the current AI hype wave.


Crazy thing is that intelligence doesn't mean you get the right answer. Humans get the answer right fewer times than AI.

It's absolutely not AGI, but it's not "not the road to AGI". Every step forward could be the road to AGI.


> Humans get the answer right fewer times than AI

Maybe individually but not collectively. Let's not forget that we are the authors of the data AI consumed, although very few of us actually made a difference - myself included. If something like AGI is truly possible it will replace our process of discovery and creativity, arguably one of our best features. We need to know if it's better than ALL of us rather than any of us because arguably all of us will make that sacrifice.


Could is doing a lot of heavy lifting here. People have been hypothesising about the nature of mind and intelligence for a long time. There is no compelling connectionist, moar-will-work, it-can-learn-to-think model, which infers the nature of intelligence and how that relates to GAN or GPT or any of what people are doing now.

What people are doing now is amazing. I love looking at pictures of the pope in puffer jackets. I love "this person does not exist"

It could be on the road to AGI, but then, I could win the lotto, and I do enter, even knowing the odds. The thing is, I don't plan as if I WILL win the lotto, and a lot of "this AI could be the road to AGI" is about planning as if it will, to secure a slice of the future.

It could? Yeah. But it isn't.


The future will probably have amazing computers and models, but what we already have has the capacity to profoundly change the world.

I don't think you are focusing on what matters most. It's really the language itself that contains the intelligence, not the AI models. The models are just vessels for absorbing all that knowledge encoded in text. Just like human brains - we can have different neural wiring but learn the same things through education.

So the huge datasets these AI systems are trained on are key. That's how models like GPT-4 gain such language understanding. The architecture matters less than all that linguistic data it ingests. And language has been evolving for millennia, long before AI. It replicates through culture, speech, writing. Now with AI it has a whole new medium.

Fascinating question - how will AI affect language evolution going forward? As models produce more human-like text, which can further train better models, it's like a Lamarckian evolution. Acquired linguistic intelligence gets absorbed by AI then improved and propagated back into the corpus.

So while AI tech will keep advancing, it's language evolution that's most profound. By creating this new way for language to evolve, AI could really reshape cultural evolution. Since language enables intelligence, it'll have big impacts on where AI is headed next.


Does Princeton not have any meaningful generative AI work or research going on?


I went to Princeton, I can assure you Narayanan is only one professor among the many brilliant STEM scientists there.


Analysis: true, six months ago


such a title!


Everyone should read this Reddit comment: https://www.reddit.com/r/blender/comments/121lhfq/i_lost_eve...

This is happening, and the reddit thread is just a single instance of a wide phenomenon. I wonder how one can square "snakeoil" theories against reality like this.


Sal Khan (founder of Khan Academy) had a decent analogy for what AI is doing to creative fields in the No Priors Podcast: https://www.youtube.com/watch?v=NH95LKOILgE&t=2626s&ab_chann...:

The camera is the best metaphor. In the 19th century, being an artist was a real thing; it was really a technical field. You were a portrait painter and the best artists would study for years to be as accurate to reality as you could - look at how the light moves and all of that. All of a sudden, the camera comes out and artists fear this is the end of art because this new thing can capture reality better than anyone can.

But then very quickly, people realized that in some ways this liberates the artists. It's not a coincidence that the impressionist movement coincided with the advent of the camera. People realized it's not about capturing reality but about the expression and feelings conjured. This led to an explosion in what art could be, breaking out of the trap of painting nobility and grand scenes into things that really evoke and challenge us.

People are now saying AI can write pretty well. It can code pretty well. It can create movies, images pretty well. What that tells me is that it liberates the creator to move beyond that. Someone can elevate and integrate and manage these tools.


Most jobs in human history were not labors of passion; only in very recent history has that been the case. We should not expect people to love their jobs, since not all loves are employable, as the OP in that thread is finding out. While the OP might dislike it, there are many others who will fill their space if they leave that job or industry: those who actually enjoy what they do.

Contrast that to the people on https://old.reddit.com/r/StableDiffusion, where people willingly experiment with multiple approaches to media generation. At least some of those people will be interested in 3D modeling and will willingly take OP's job, because they genuinely enjoy it and OP now does not.

In the end, adapt, or others who do will take over. That's been the historical precedent for millennia, and it will not stop now.


My company has openly bragged that they had predicted needing to hire over 200 employees this year, but thanks to AI systems they only had to hire 45.


> just a single instance of a wide phenomenon

Another instance is the 'hey pi' chatbot. For some people it's a superior experience to BetterHelp, or even a lot of so-called professional therapists out there.

The lack of actual human presence is a bit deflating for me, but I still got a good session out of it once. It feels like it has the potential to be at least better than nothing for some of the lonely people out there. The option for different voices is a nice touch that adds to the illusion.

There are so many possibilities. I'm looking forward to seeing what happens with everything from NPCs in video games to robotics (among other things, to actually explain what they're doing & why, and converse). Not to mention the applications in education and health care. Anyone who thinks this is a flash in the pan has not observed enough of what's going on.


Maybe the reality isn't evenly distributed. I'm supposedly at the working end of that job and can't immediately recall seeing the result of that transition.


My hypothesis is that a lot of high-status intellectuals are reflexively dismissing AI out of fear that it threatens their status, as AI threatens to turn intellectual labor into a cheap commodity rather than the premium bespoke product they offer via their professorships, speaking fees, and book sales.

Most criticism of AI has all the hallmarks of a coping mechanism. Critics shift quickly between declaring it an ineffective parlor trick and declaring it so good and effective that it poses a risk to the human spirit. There are plenty of legitimate criticisms out there, especially of OpenAI's potential regulatory capture and misuse of models, but calling it all snake oil is hilariously naïve and seems more like a fear of change.


Really? You think people who write for a living might be biased towards confirming that the thing already replacing many people who write for a living is a fad that's going to blow over soon?

My favorite part of the crap doom predictions is the sheer unawareness of the rate of change of the rate of change.

If you had asked me 18 months ago how long after GPT-3 until something existed with the capabilities of GPT-4, I'd probably have guessed about 5 years.

If you asked me 5 years ago how long until an AI could explain why a joke was funny, I'd have maybe guessed at least a decade or two, if it was even possible.

If you asked me 10 years ago if I'd see AI replacing artists or copywriters in my lifetime, I'd have guessed maybe when I was in a retirement home (I'm still a fair ways away from that).

No one thought what exists today was even possible within our lifetimes a decade back.

I'm reminded of the Louis C.K. routine about everything being amazing and no one is happy.

Unthinkable AI has already become so normalized that people are predicting its doom based on shortcomings that are less than two years old, because AI doing fucking IMPOSSIBLE things (or so everyone thought) is only that old.

The rate is outrageous, and while I do think there's currently a significant setback with obsolete alignment approaches being carried forward to models that probably need new techniques, that's going to be a temporary step back in parallel to significant strides forward in the underlying technology from hardware to model design to improved knowledge in how to squeeze the most water from the rock.

I just hope when all these folks turn out to have been dead wrong that we don't collectively forget. Futurists should live and die by their record, but too many have goldfish memory and continue to listen to false futurists well after they've shown their own snake oil hand.


> already replacing many people who write for a living

Is this actually the case? Are there examples of professional writers/academics/etc who have lost their jobs to LLMs?


Yes, mostly low quality newspapers though. It remains to be seen if those companies regret their decision.


> I'm reminded of the Louis C.K. routine about everything being amazing and no one is happy.

If AI meant the human was now free, I'd be happy, but ATM AI seems to mean the human will be jobless and soon homeless and starving.


All snark aside it must be really tough to make a name for yourself by calling something snake oil only to have a flavor of it become massively successful...and then have to double down because you already had a book deal to fulfill.


It's not really about "ai hype", it's about funding.

"In the last few months, there has been this increasing so-called rift between the AI ethics and AI safety communities. There is a lot of talk about how this is an academic rift that needs to be resolved, how these communities are basically aiming for the same purpose. I think the thing that annoys me most about the discourse around this is that people don’t recognize this as a power struggle.

It is not really about intellectual merit of these ideas. Of course, there are lots of bad intellectual and academic claims that have been made on both sides. But that isn’t what this is really about. It’s about who gets funding, which concerns are prioritized. So looking at it as if it is like a clash of individuals or a clash of personalities just really undersells the whole thing, makes it sound like people are out there bickering, whereas in fact, it’s about something much deeper."


They were wrong and now they have to pivot, from "AI isn't effective" to "it's bad that AI is so effective."


I have stopped using the "snake oil" epithet, because I found out an interesting background on that.

Oils derived from various snakes in the desert were found to be effective treatments and remedies for various things by the Native American population, and therefore widely used by certain tribes.

Once the White Man caught wind of this, he capitalized on its reputation, and a booming business of scams bloomed anywhere there were gullible buyers. Of course, the snake oil was rarely authentic or effective for its advertised uses.

Therefore, snake oil was given a very bad and rather undeserved reputation for the rest of history, and medical science continues to flounder.


“Snake oil” is a reference to Clark Stanley, whose snake oil was infamously found to contain no snake oil whatsoever.


C/F "goanna oil" Australia. Still made. Nowadays, eucalyptus oil, pine oil, peppermint oil, camphor, menthol and turpentine.

https://ourstory.moretonbay.qld.gov.au/nodes/view/38026



