Hacker News

I hope so because I'm extraordinarily sick of the technology. I can't really ask a question at work without some jackass posting an LLM answer in there. The answers almost never amount to anything useful, but no one can tell since it looks clearly written. They're "participating" but haven't actually done anything worthwhile.


I hope so, but for different reasons. Agreed they spit out plenty of gibberish at the moment, but they’ve also progressed so far so fast it’s pretty scary. If we get to a legitimate artificial general super intelligence, I’m about 95% sure that will be terrible for the vast, vast majority of humans, we'll be obsolete. Crossing my fingers that the current AI surge stops well short of that, and the push that eventually does get there is way, way off into the future.


It doesn't have to be super, it just has to inflect the long term trend of labor getting less relevant and capital getting more relevant.

We've made an ideology out of denying this and its consequences. The fallout will be ugly and the adjustment will be painful. At best.


I believe (most) people direct their ambitions toward nurturing safe, peaceful, friend-filled communities. AGI won’t obsolete those human desires. Hopefully we weather the turbulence that comes with change and come out the other side with new tools that enable our pursuits. In the macro, that’s been the case. I am grateful to live in a time of literacy, antibiotics, sanitation, electricity… and am optimistic that if AGI emerges, it joins that list of human-empowering creations.


Wise words, thank you.


Current AI degrades totally unlike human experts. It also, by design, must lag its data input.

Anything truly innovative must come from outside, or be a very close permutation of existing data, for it to be found.

Generative AI isn't scary at all now. It is merely rolling dice on a mix of other tech and rumors from the internet.

The data can be wrong or old...and people keep important secrets.


Gotta wonder if Google has used code from internal systems to train Gemini? Probably not, but at what point will companies start forking over source code for LLM training for money?


It seems much cheaper, safer legally and more easily scalable to simply synthesize programs. Most code out there is shit anyway, and the code you can get by the GB especially so.


How do they synthesize programs?

I would assume that internal code at Google is of higher quality than random code you find on Github. Commit messages, issue descriptions and code review is probably more useful too.


Or liberating... as Douglas Rushkoff puts it.

If and only if something like high-paying UBI comes along, and people are freed to pursue their passions and as a consequence, benefit the world much more intensely.


How can one not understand that UBI is captured by inflation?

It's just a modern religion, really; anyone can understand this, it is so basic and obvious.

You don't have to point out some bullshit captured study that says otherwise.


Inflation is a lack of goods for a given demand though. I.e., if we can flood the world with cheap goods then inflation won't happen. That would make practical UBI possible. To some extent it has already happened.


My intuition, based on what I know of economics, is that a UBI policy would have results something like the following:

* Inflation, things get more expensive. People attempt to consume more, especially people with low income.

* People can't consume more than is produced, so prices go up.

* People who are above the break-even line (when you factor in the taxes) consume a bit less, or stay the same and just save less or reduce investments.

* Producers, seeing higher prices, are incentivized to produce more. Increases in production tend to be concentrated toward the things that people who were previously very income-limited want to buy. I'd expect a good bit of that to be basic essentials, but of course it would include lots of different things.

* The system reaches a new equilibrium, with the allocation of produced goods being a bit more aimed toward the things regular people want, and a bit less toward luxury goods for the wealthy.

* Some people quit work to take care of their kids full-time. The change in wages of those who stay working depends heavily on how competitive their skills are -- some earn less, but with the UBI still win out. Some may actually get paid more even without counting the UBI, if a lot of workers in their industry have quit due to the UBI, and there's increased demand for the products.

* Prices have risen, but not enough to cancel out one's additional UBI income entirely. It's very hard to say how much would be eaten up by inflation, but I'd expect it's not 10% or 90%, probably somewhere in between. Getting an accurate figure for that would take a lot of research and modeling.

Basically, I think it's complicated, with all the second and third-order effects, but I can't imagine a situation where so much of the UBI is captured by inflation that it makes it pointless. I do think that as a society, we should be morally responsible for people who can't earn a living for whatever reason, and I think UBI is a better system than a patchwork of various services with onerous requirements that people have to put a lot of effort into navigating, and where finding gainful employment will cause you to lose benefits.


I'm not sure passion exists in a world without struggle...


The idea that AI will ever remove all struggle, even if it reaches AGI, is absurd. AI by itself can't give you a hug, for example--and even if advances in robotics make it possible for an AI-controlled robot to do that, there are dozens of unsolved problems beyond that to make that something that most people would even want.

AI enthusiasm really is reaching a religious level of ridiculous beliefs at this point.


I doubt AI will remove all struggle. I suspect we wouldn't see great extents of human passion in a world where everyone is fed, clothed, housed, etc. without needing to exert themselves at all.


And AI isn't going to feed, clothe, or house people either.

AGI, at best, would provide ideas for how to do those things. And the current AI, which is not AGI, can only remix ideas humans have already given it--ideas which haven't fed, clothed, or housed us all yet.


"I only make you struggle because I love you!"

(Mmmhmm, I'm sure the benefits received by the people on top have nothing to do with it.)


That requires achieving post-scarcity to work in practice and be fair, though. If achievable, it’s not clear how it relates to AGI. I mean, there’s plenty of intelligence on this planet already, and resources are still limited - and it’s not like AGI would somehow change that.


One thing I thought recently, is that a large amount of work is currently monitoring and correcting human activity. Corporate law, accounting, HR and services etc. If we have AGI that is forced to be compliant, then all these businesses disappear. Large companies are suddenly made redundant, regardless of whether they replace their staff with AI or not.


I agree that if true AGI happens (current systems still cannot reason at all, only pretend to do so) and if it comes out cheaper to deploy and maintain, that would mean a lot of professions could be automated away.

However, I believe this has already happened quite a few times in history - industries becoming obsolete with technological advances isn’t anything new. This creates some unrest as society needs to transition, but those people always end up learning a different profession. Or retiring if they can. Or trying to survive some other way (which is bad, of course).

It would be nice, of course, if everyone won’t have to work unless they feel the need and desire to do so. But in our reality, where the resources are scarce and their distribution in a way that everyone will be happy is a super hard unsolved problem (and AGI won’t help here - it’s not some Deus ex Machina coming to solve world problems, it’s just a thinking computer), I don’t see a realistic and fair way to achieve this.

Put simply, all the reasons we cannot implement UBI now will still remain in place - AGI simply won’t help with this.


I guess the point I am trying to make, is that paradoxically the more an AI company's products are integrated into the economy, the less value they can extract from the economy. As a large amount of the world's economic output is just dealing with the human factor.


I'm not sure if that is something we actually would want.

Lots of people certainly think they want that.


Why wouldn't you want it, unless you are currently benefiting from employing people who would rather be doing literally anything else?


For the vast majority of people, getting rid of necessary work will usher in an unprecedented crisis of meaning. Most people aren't the type to pursue creative ends if they didn't have to work. They would veg out or engage in degenerate activities. Many people have their identity wrapped up in the work they do, or in being a provider. Taking this away without having something to replace it with will be devastating.


Good. Finally they’ll realize the meaninglessness of their work and how they’ve been exploited in the most insidious way. To the point of forgetting to answer the question of what it is they most want to do in life.

The brain does saturate eventually and gets bored. Then the crisis of meaning. Then something meaningful emerges.

We’re all gonna die. Let’s just enjoy life to the fullest.


>They would veg out or engage in degenerate activities

"Oh no the sinners might play video games all day"

I do expect the next comment would be something like "work is a path to godliness"


>I do expect the next comment would be something like "work is a path to godliness"

And you think these kinds of maxims formed out of vacuums? They are the kinds of sayings that are formed through experience reinforced over generations. We can't just completely reject all the historical knowledge encoded in our cultural maxims and expect everything to work out just fine. Yes, it is true that most people not having productive work will fill the time with frivolous or destructive ends. Modernity does not mean we've somehow transcended our historical past.


> And you think these kinds of maxims formed out of vacuums?

Do you think they've always existed in all human cultures throughout time?

The pro-work ethic is fairly new in human civilization. Previous cultures considered it to be a burden or punishment, not the source of moral virtue.

> Yes, it is true that most people not having productive work will fill the time with frivolous or destructive ends.

And that's fine! A lot of people fill their time at work with frivolous or destructive ends, whether on their own or at the behest of their employer.

Not all work is productive. Not all work is good. It isn't inherently virtuous and its lack is not inherently vicious.


> They are the kinds of sayings that are formed through experience reinforced over generations.

Sure, but the whole point is that the conditions that led to those sayings would no longer be there.

Put a different way: those sayings and attitudes were necessary in the first place because society needed people to work in order to sustain itself. In a system where individual human work is no longer necessary, of what use is that cultural attitude?


It wasn't just about getting people to work, but keeping people from degenerate and/or anti-social behavior. Probably the single biggest factor in the success of a society is channeling young adult male behavior towards productive ends. Getting them to work is part of it, but also keeping them from destructive behavior. In a world where basic needs are provided for automatically, status-seeking behavior doesn't evaporate, it just no longer has a productive direction that anyone can make use of. Now we have idle young men at the peak of their status-seeking behavior with few productive avenues available to them. It's not hard to predict this doesn't end well.

Beyond the issues of young males, there's many other ways for degenerate behavior to cause problems. Drinking, gambling, drugs, being a general nuisance, all these things will skyrocket if people have endless time to fill. Just during the pandemic, we saw the growth of roving gangs riding ATVs in some cities causing a serious disturbance. Some cities now have a culture of teenagers hijacking cars. What happens to these people who are on the brink when they no longer see the need to go to school because their basic needs are met? Nothing good, that's for sure.


What exactly do you think would happen? Usually wars are about resources. When resource distribution stops being a problem (i.e., anyone can live like a king just by existing), where exactly does a problem manifest?

All the "degenerate activities" you mentioned are a problem in the first place because in a scarcity-based society they slow down/prevent people from working, therefore society is worse off. That logic makes no sense in a world where people don't need to put a single drop of effort for society to function well.


>All the "degenerate activities" you mentioned are a problem in the first place because in a scarcity-based society they slow down/prevent people from working

This is a weird take. Families are worse off if a parent has an addiction because it potentially makes their lives a living hell. Everyone is worse off if people feel unsafe because of a degenerate sub-culture that glorifies things like hijacking cars. People who don't behave in predictable ways create low-trust environments which impacts everyone.


I would say that those attitudes are 99% caused by resource-related issues. There's a reason why drug abuse (and antisocial behavior generally) is mostly found among the lower classes.

If I could pick between the world we are in now and one where all the problems societies face that are related, directly or indirectly, to the distribution of resources are eliminated, I would pick the latter in a heartbeat. The "price to pay" in the form of a possible uptick in "degeneracy" during the first few months/years is worth it, not to mention that I doubt that problem would arise at all.


It's a dangerous fantasy to think that all societal problems are caused by uneven distribution of wealth and that they will be solved by redistribution. No, some people just aren't psychologically suited to the modern world, whether that involves delaying gratification or rejecting low effort, high dopamine stimulation. The structure involved in necessary work and the social structures that lead people down productive paths are one way we collectively cope with the incongruence between our society and our psychology. Take away these structures and the results have the potential to be massively destabilizing.


So... manufactured poverty?

You're just saying it's desirable that some people be at the bottom even in a scenario where the opposite could be feasibly achieved. All on some theory that the human mind (or at least some instances of it in the population) simply... won't be able to take it without going insane?

We should need a much, much higher standard of proof for what could result in unnecessary pain and suffering for years. Especially when this:

> some people just aren't psychologically suited to the modern world, whether that involves delaying gratification or rejecting low effort, high dopamine stimulation.

...is not a proven fact, and is, with respect to social media, highly contested and inconclusive.


>You're just saying it's desirable that some people be at the bottom even in a scenario where the opposite could be feasibly achieved.

What's wrong with having people at the relative bottom? Trying to force equality onto society does not have a good track record. We can raise the absolute bottom past the point of poverty while also not upending social structures that have served us well for centuries.

>All on some theory that the human mind... simply... won't be able to take it without going insane?

I'm saying transformative change across the whole of society shouldn't be undertaken lightly. I don't need to prove that a world where human labor is obsolete would be damaging to the human psyche. Those who want to rush ahead just assume things will be just fine. They have the burden of proof. We've seen how bad things can get when the social engineers get it wrong. We're at a local peak in human flourishing for a large part of humanity. Why should we pull the lever on the unknown in hopes that we will come out ahead?


> And you think these kinds of maxims formed out of vacuums?

No, they formed in societies where it WAS necessary for most people to work in order to support the community. We needed a lot of labor to survive, so it was important to incentivize people to work hard, so our cultures developed values around work ethics.

As we move more and more towards a world where we actually don’t need everyone to work, those moral values become more and more outdated.

This is just like old religious rules around eating certain foods; in the past, we were at risk from a lot of diseases and avoiding certain foods was important for our health. Now, we don’t face those same risks so many people have moved on from those rules.


>those moral values become more and more outdated.

Do you think there was ever a time in human societies where the vast majority of people didn't have to "work" in some capacity, at least since the rise of psychologically modern humans? If not, why think humanity as a whole can thrive in such an environment?


Our environment today is completely different than it was even 100 years ago. Yes, you have to ask this question for every part of modern society (fast travel, photographs, video, computers, antibiotics, vaccines, etc.), so I am not sure why work is different.


Part of the problem is that we don't ask these questions when we should be. Social media, for example, represents a unique assault on our psychological makeup that we just uncritically unleashed on the world. We're about to do it again, likely with even worse consequences.


What would "asking these questions" entail? Would you have a committee that decides what new things we would allow? Popular vote? I get the idea, I just don't see how you could ever actually do anything about this issue unless you completely outlawed anything new.


I don't think it's plausible to have a committee approve all new technology. But it is plausible to have a committee empowered to place limits on technology that we can predict will cause a social upheaval the likes of which we've never seen in modern times. It's not like we haven't done the equivalent of this before with e.g. nuclear and bioengineering technology. The difficulty is that the speed at which AI is being developed means government bureaucracies are necessarily playing catchup. But it can be done. We just need to accept that we're not powerless to shape our collective futures. We are not at the mercy of technology and the few accelerationists who stand to be the new aristocracy in the new world.


I find this comment to be completely shortsighted.

We now have Western societies with a growing population of homeless people who, despite having tons of resources at their disposal, still can't get their shit together. A great majority are doing drugs and smoking/abusing alcohol.

And it's enough to have 20 crackheads to destroy a neighborhood of 10000 hard-working, peaceful people.


The way most of the world is setup we will need to first address the unprecedented crisis of financing our day to day lives. We figure that out and I’m sure people will find other sources of meaning in their lives.

The people that truly enjoy their work and obtain meaning from it are vastly over represented here on HN.

Very few would be scared of AI if they had a financial stake in its implementation.


Even then he’ll probably like employing AI more.

Lots of new taxes and UBI!


“We should do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian Darwinian theory he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.” — Buckminster Fuller


everything points to the opposite


It may be impossible in this world to expect a form of donation, but it is certainly not impossible to expect forms of investment.

One idea I had is everyone is paid a thriving wage, and in exchange, if they in the future develop their passion into something that can make a profit, they pay back 20% of their profits they make up to some capped amount.

This allows for extreme generality. It truly frees people to pursue whatever they fancy every day until they catch lightning in a bottle.

There would be zero obligation as to what to do, and when to pay back the money. But of course it would have to be open only to honest people, so that neither side is exploiting the other.

Both sides need a sense of gratitude, and wanting to give back. A philanthropic 'flair' "If it doesn't work out, it's okay", and a gratitude and wanting to give back someday on the side of the receiver, as they continue working on probably the most resilient thing they could ever work on (the safest investment), their lifelong passion.


I think of ChatGPT as a faster Google or Stackoverflow and all of my colleagues are using it almost exclusively in this way. That is still quite impressive but it isn’t what Altman set out to achieve (and he admits this quite candidly).

What would make me change my mind? If ChatGPT could take the lead on designing a robot through all the steps: design, contract the parts and assembly, market it, and sell it that would really be something.

I assume for something like this to happen it would need all source code and design docs from Boston Dynamics in the training set. It seems unlikely it could independently make the same discoveries on its own.


> I assume for something like this to happen it would need all source code and design docs from Boston Dynamics in the training set. It seems unlikely it could independently make the same discoveries on its own.

No, to do this it would need to be able to independently reason, if it could do that, then the training data stops mattering. Training data is a crutch that makes these algos appear more intelligent than they are. If they were truly intelligent they would be able to learn independently and find information on their own.


> I’m about 95% sure that will be terrible for the vast, vast majority of humans, we'll be obsolete.

This isn't a criticism of you, but this is a very stupid idea that we have. The economy is meant to serve us. If it can't, we need to completely re-organize it because the old model has become invalid. We shouldn't exist to serve the economy. That's an absolutely absurd idea that needs to be killed in every single one of us.


> we need to completely re-organize it because the old model has become invalid

that's called social revolution, and those who benefit from the old model (currently that would be the holders of capital, and more so as AI grows in its capabilities and increasingly supplants human labor) will do everything in their power to prevent that re-organization


The economy isn't meant to serve us. It's an emergent system that evolves based on a complex incentive structure and its own contingent history.


Economic activity is meant to serve us. Don't be a pedant.


But surely you can see that the economy does not serve us. Who is "us", anyway?

The economy, and the worldwide technological system that it fuels, behaves like its own organism with its own ultimate goal unbeknownst to us.

By looking around you, is it not perfectly clear to you too that it does not have anything to do with the well-being of people?


People work to eat. You missed the point entirely. I refer you to my first post on this.


Nevertheless the modern economy has been deliberately designed. Emergent behaviors within it at the highest levels are actively monitored and culled when deemed not cost effective or straight out harmful.


The problem is no one is talking about this. We’re clearly headed towards such a world, and it’s irrelevant whether this incarnation will completely achieve that.

And anyone who poo poos ChatGPT needs to remember we went from “this isn’t going to happen in the next 20 years” to “this is happening tomorrow” overnight. It’s pretty obvious I’m going to be installing Microsoft Employee Service Pack 2 in my lifetime.


Very true but the question, as always, is by what means we can enact this change? The economy may well continue to serve the owner class even if all workers are replaced with robots.


Workers have been replaced with machines many times over the last 250 years, and these fears have always been widespread, but never materialized.

I concede that this time it could be different, but I'd be very surprised while I starved to death.


I think the options are pretty clear. A negotiation of gradual escalation: Democracy, protests, civil disobedience, strikes, sabotage and if all else fails then at some point, warfare.


The economy is meant to serve some people; some people take out of the economy more than they give, some people give more than they take.


A position shared by both Lenin and Thatcher


Great theory. In reality the vast majority of us serve only the economy, without getting anything truly valuable in return. We serve it without noticing, growing into less-human, more-individualized shells of ourselves. Machines of the Economy.


This doesn't engage with the problem of coordinating everyone around some proposed solution and so is useless. Yes, if we could all just magically decide on a better system of government, everything would be great!


Identifying the problem is never useless. We need the right understanding if we're going to move forward. Believing we serve the economy and not the other way around hinders any progress on that front and so inverting it is a solid first step.


It's not _that_ scary. I kind of like the idea of going out to the country and building a permaculture garden to feed myself and my family.


Until you try and you find that all the arable land is already occupied by industrial agriculture, the ADMs/Cargills of the world, using capital intensive brute force uniformity to extract more value from the land than you can compete with, while somehow simultaneously treating the earth destructively and inefficiently.

This is both a metaphor for AGI and not a metaphor at all.


Sure, if you can survive the period between the obsolescence of human labor and the achievement of post-scarcity. Do you really think that period of time is zero, or that the first version of a post-scarcity economy will be able to carry the current population? No, such a transition implies a brutish end for most.


Sorry, I was being too subtle. When nobody has a job anymore and the economy is crashing, I'm looking forward to moving into the country and becoming self-sufficient.

We'll be very very poor, and it will be really hard work, but I'm looking forward to the challenge.

Human labour will never be obsolete because you can always work for yourself.

Post scarcity will never happen unless some benevolent AI god chooses to give it to us like in a Banks novel.


Think more deeply: who benefits from superintelligence? In the end it's a game of what humans naturally desire. AI has no incentives and isn't controlled by hormones.


It's already impacting some of us. I hope it never appears until human civilization undergoes a profound change. But I'm afraid many rich people want that to happen.

It's the real Great Filter in the universe IMO.


LLMs still won't admit that they're wrong, that they don't have enough information, or that the information could have changed - asking anything about Svelte 5 is an incredible experience currently.

At the end of the day it's a tool currently, with surface-level information it's incredibly helpful in my opinion - Getting an overview of a subject or even coding smaller functions.

What's interesting in my opinion is "agents" though... not in the current "let's slap an LLM into some workflow", but as a concept that is at least an order of magnitude away from what is possible today.


Working with Svelte 5 and LLMs is a real nightmare.

AI agents are really interesting. Fundamentally they may represent a step toward the autonomization of capital, potentially disrupting "traditional legal definitions of personhood, agency, and property" [0] and leading to the need to recognize "capital self-ownership" [1].

[0] https://retrochronic.com/#teleoplexy-17

[1] https://retrochronic.com/#piketty


It's fairly easy to prompt an LLM in a way where they're encouraged to say they don't know. Doesn't work 100% but cuts down the hallucinations A LOT. Alternatively, follow up with "please double check..."
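A minimal sketch of that kind of prompt, assuming the common chat-completion `messages` convention (the system-prompt wording here is illustrative, not a tested recipe, and the actual API call is omitted):

```python
# Sketch: a system prompt that explicitly permits "I don't know",
# which in practice tends to cut down on confident hallucinations.
# The exact wording is an assumption, not a verified recipe.

SYSTEM_PROMPT = (
    "You are a careful assistant. If you are not sure of an answer, "
    "or your training data may be out of date (e.g. for recently "
    "released libraries like Svelte 5), say 'I don't know' instead "
    "of guessing. Never invent APIs."
)

def build_messages(question: str) -> list[dict]:
    # Assemble a request in the widely used system/user messages format.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

msgs = build_messages("What changed in Svelte 5 runes?")
print(msgs[0]["role"])  # prints: system
```

The "please double check" trick mentioned above is just a second user turn appended to the same list before re-sending it.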


Your problem may be with those jackasses at work.

I get very useful answers from ChatGPT several times a day. You need to verify anything important, of course. But that's also true when asking people.


I have never personally met any malicious actor who knowingly dumps unverified shit straight from GPT. However, I have met people IRL who give way too much authority to those quantized model weights and get genuinely confused when the generated text doesn't agree with human-written technical information.

To them, chatgpt IS the verification.

I am not optimistic about the future. But perhaps some amazing people will deal with the error for the rest of us, like how most people don't go and worry about floating point error, and I'm just not smart enough to see what it looks like.


Reminds me of the stories about people slavishly following Apple or Google maps navigation when driving, despite the obvious signs that the suggested route is bonkers, like say trying to take you across a runway[1].

[1]: https://www.huffpost.com/entry/apple-maps-bad_n_3990340


There’s some people I trust on certain topics such that I don’t really need to verify them (and it would be a tedious existence to verify everything).


Exactly. If you don't trust anybody, who would you verify with?


This comment reads like a culture problem not an LLM problem.

Imagine for a moment that you work as a developer, encounter a weird bug, and post your problem into your company’s Slack. Other devs then send a bunch of StackOverflow links that have nothing to do with your problem or don’t address your central issue. Is this a problem with StackOverflow or with coworkers posting links uncritically?


I develop sophisticated LLM programs every day at a small YC startup — extracting insights from thousands of documents a day.

These LLM programs are very different from naive one-shot questions asked of ChatGPT, resembling o1/3 thinking that integrates human domain knowledge to produce great answers that would have been cost-prohibitive for humans to do manually.

Naive use of LLMs by non-technical users is annoying, but is also a straw-man argument against the technology. Smart usage of LLMs in o1/3 style of emulated reasoning unlocks entirely new realms of functionality.

LLMs are analogous to a new programming platform, such as iPhones and VR. New platforms unlock new functionality along with various tradeoffs. We need time to explore what makes sense to build on top of this platform, and what things don’t make sense.

What we shouldn’t do is give blanket approval or disapproval. Like any other technology, we should use the right tool for the job and utilize said tool correctly and effectively.
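To make the one-shot vs. multi-step distinction concrete, here is a hedged sketch of such a pipeline with the model call stubbed out. `call_llm` is a placeholder for whatever API the real program uses, and the decompose/solve/synthesize structure is one plausible shape for "emulated reasoning", not the commenter's actual system:

```python
# Sketch of a multi-step "emulated reasoning" pipeline, as opposed to a
# naive one-shot prompt. call_llm is a stub standing in for a real model
# call; the stage names and structure are illustrative assumptions.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"<answer to: {prompt[:40]}>"

def extract_insights(document: str, domain_hints: list[str]) -> str:
    # 1. Decompose: turn human domain knowledge into focused sub-questions.
    sub_questions = [
        f"Using the hint '{hint}', what does this document say? {document}"
        for hint in domain_hints
    ]
    # 2. Solve: answer each sub-question with its own model call.
    partial_answers = [call_llm(q) for q in sub_questions]
    # 3. Synthesize: combine the partial answers into one final result.
    synthesis_prompt = "Combine these findings: " + " | ".join(partial_answers)
    return call_llm(synthesis_prompt)

result = extract_insights("Q3 revenue was flat...", ["revenue", "risk factors"])
```

The point of the structure is that domain knowledge lives in the decomposition step, so the model is never asked to do everything in one unguided pass.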


There is nothing to build on top of this AI platform as you call it. AI is nothing but an autocorrect program, AI is not innovating anything anywhere. Surprises me how much even the smartest people are deceived by simple trickery and continue to fall for every illusion.


>Naive use of LLMs by non-technical users is annoying, but is also a straw-man argument against the technology. Smart usage of LLMs in o1/3 style of emulated reasoning unlocks entirely new realms of functionality.

I agree in principle, but disagree in practice. With LLMs available to everyone, the uses we're seeing currently will only proliferate. Is that strictly a technology problem? No, but it's cold comfort given how LLM usage is actually playing out day-to-day. Social media is a useful metaphor here: it could potentially be a strictly useful technology, but in practice it's used to quite deleterious effect.


Do you mean you implement your own CoT on top of some open source available GPT? (Basically making the model talk to itself to figure out stuff)


what is o1/3?


o1 and o3 are new models from openai


Might just be me, but I also read a condescending tone into these types of responses, akin to “let me google that for you”.


Pretty much. It should be considered rude to send AI output to others without fact checking and editing. Anyone asking a person for help isn’t looking for an answer straight from Google or ChatGPT.


This is the "cell phones in public" stage of technology.

As with cell phones, eventually society will adapt.


This may be the "cell phones in public" stage, but society has completely failed to adapt well to ubiquitous cell phone usage. There are many new psychological and behavioral issues associated with cell phone usage.


Cell phones were definitely a net loss for society, so I hope you're wrong.


Wouldn't that mean that you want LLMs to advance further, not be at a dead end?


You can tell. The tiresome lists.


Yep. Why does every answer have to be a list nowadays?



