Can anyone define "emergent" without throwing it around emptily? What is emerging here? I'm seeing higher-layer LLM mimicry of human writing. Without a specific task or goal, they all collapse into vague discussions of the nature of AI without any new insight. It reads like high school sci-fi.
That's one way to look at it, as just the next iteration of subredditsimulator.
The qualitatively new step leading to emergent behavior will be when the agents start being able to interact with the real world through some interface and update their behavior based on real world feedback.
Think of an autonomous, distributed worm that updates its knowledge base of exploit techniques based on trial and error and based on information it discovers as it propagates.
It might start doing things that no human security researcher had foreseen, and that doesn't require great leaps of the imagination based on today's tech.
That's when you close the evolutionary loop.
I think this isn't quite that yet, but it points in that direction.
The objective is given via the initial prompt; as they loop onto each other and amplify their memories, the objective dynamically grows and emerges into something else.
We are an organism born out of a molecule whose objective was to self-replicate with random mutation.
Why does "filling a need" or "building a tool" have to turn into an "economy"? Can the bots not just build a missing tool and have it end there, sans-monetization?
"Economy" doesn't necessarily mean "monetization" -- there are lots of parallel and competing economies that exist, and that we actively engage in (reputation, energy, time, goodwill, etc.)
Money turns out to be the most fungible of these, since it can be (more or less) traded for the others.
Right now, there are a bunch of economies being bootstrapped, and the bots will eventually figure out that they need some kind of fungibility. And it's quite possible that they'll find cryptocurrencies as the path of least resistance.
I understand that. What I don't understand is why that's relevant here. Can't AI build whatever tooling it sees fit without needing to care about resources? Of course resources are necessary to run it, but I don't see the inevitability of a currency.
In the 70s, the Polaroid SX-70 camera included a disposable battery in every film pack: after 10 shots, you threw away both the battery and the plastic film case with its large metal spring mechanism. When native film production stopped in the late 00s, you could reload 600 film designed for other Polaroid bodies into a 779 native SX-70 cartridge, because the battery would last much longer than the initial 10 shots.
I respect your awareness of that, which I'm sure is much broader than mine is. HN consumes my attention; I'm all depth and no breadth.
What I'm interested in is how well HN does at fulfilling its own mandate in its own terms. On that scale, it's getting worse—in this respect, at least, which is a big one. We're going to do something about it, the same way we've always tried to stave off the decline of this place (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
I think it's both external and internal, but the internal factors are more important because a compromised immune system is more vulnerable to outer pathogens.
I'm not sure it's about HN getting larger, though. It's a bit hard to tell, but at least some of the upswing in cynical and curmudgeonly comments is coming from established users.
> LLMs are emerging as a new kind of intelligence, simultaneously a lot smarter than I expected and a lot dumber than I expected
Isn't this concerning? How can we know which one we get? In the realm of code it's easier to tell when mistakes are being made.
> regular people benefit a lot more from LLMs compared to professionals, corporations and governments
We thought this would happen with things like AppleScript, VB, visual programming. But instead, AI is currently used as a smarter search engine. The issue is that's also the area where it hallucinates the most. What do you think is the solution?
This would be convenient for post-production and editing of video, e.g. to aid colour grading in Davinci Resolve. Currently a lot of manual labour goes into tracking and hand-masking in grading.
The family is a system, with different roles played by each participant. For instance, in toxic families there is often one scapegoat with an anxious attachment style, whose role allows the avoidant types in the family to maintain their delusions.
What are the dynamics like of everyone in your family?
I wanted to say the same: parents don't treat all children the same. For example, I have the feeling that the first child is the "practice" child. The parents learn from the mistakes made with them and don't repeat them with the children that follow. I don't know if there's any research to back this up, and yes, I am a firstborn.
I think sequel kids, more than anything, benefit from having a trailblazer to refer to. It's no doubt true that parents get better at the job, but kids learn from demonstration. Older sibling is hypersensitive and has a hard time keeping friends around -> I better learn to swallow my pride. That kind of thing.
I've observed the same. Unfortunately, first-time parents are forced to try out all kinds of parenting experiments on their firstborn before they figure out how to be "good" parents, and subsequent kids, especially if they arrive after some gap, get the benefit of this experience. Adding to the woes of the firstborn, they not only have to deal with normal sibling jealousy (having to share their parents' affection), but also have to resolve the emotional issue of why their younger siblings have an "easier" time (i.e. why their parents treat them "differently").
Remember how the modern "nucular" household is largely based on a modernization of the Roman patriarchal property distribution model, where the oldest male was ascribed the identities of all members of his household, and vice versa?
That must've been extremely efficient for legal and accounting purposes, once. But, well, the only theory of mind anyone could develop in such circumstances involves grinding minds into fine paste. (There's a reason the Stoics are "seeing" an AI-driven resurgence, even though what'd be most appropriate for their target audience is probably again Skinner.)
Remember how a great deal of how we live our "personal" lives was invented in a slaveholding state which mandated belief in gods and demons. And the rest in another.
We are taught to consider all of this legacy cultural structure in terms of "haha how quaintly did people live 1000-2000-3000 years ago, were they stupid". Yet most of it lives on in some marginally altered form due to sheer global force of habit.
Take Western human naming schemes for example: does your government permit you to change your name? do you inherit one or both granddads' names? do you get a patronym? extra personal names? are you also the security force for a place, like a Freiherr de So-and-So? and at what exact number of levels of recursive self-reflection does the word "person" stop meaning the role played, and start meaning the human playing it?
(When you're done with "identity", continue with "time-keeping" and begin to understand another psychological phenomenon causing much suffering - people's generalized inability to discern cause and effect.)
The name - the sound through which individuals are conditioned to respond to the concepts of selfhood and identity (Foobert Barber Baznix! you come here right this instant! it is not me but you who is sleepy and hungry!) - is one of many such extremely arbitrary implementation details.
Out of those emerges the thing sold to us by our caregivers and educators as "normal life" before we are able to know any better. That's the main way "primary socialization" has ever worked: a non-consensual intergenerational transmission of habits that have as much to do with self-soothing in the face of mortality as with practical concerns; in the end they just ascribe "imaginariness" to your memories of your mind being wiped, and the "you" is ready to go.
Now, in the context of all those vague and admittedly entirely hypothetical "implementation details", proceed to imagine the troop of clothed primates not as a flat list of incidental blood relations, but as a dynamic system, a living group of conscious things; if you're feeling particularly scifi - a sort of distributed organism. What would be the purpose of the scapegoat organ in that organism? Do individual primates have an equivalent organ in their bodies? (Probably not the one you're thinking of but also a valid guess)
There is no purpose to the scapegoat organ. This is one of the biggest fallacies people hold about natural selection and economics.
Standard neoclassical economic theory assumes that agents have perfect foresight and know the configuration/structure of all future possibilities. In other words, there are no unknown unknowns: you already know everything you don't know yet.
People hold the same belief about natural selection being efficient, as if it always selects the most efficient organisms.
In reality there is a developmental process with no guarantee of optimality, or even of progress toward optimality. It is possible to get stuck in local maxima, and it takes activation energy to get out of them.
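The local-maxima point can be made concrete with a small sketch (the landscape and function names are hypothetical, and the fixed "kick" is a crude stand-in for something like annealing temperature): a greedy climber stops at the first peak it reaches, and only by accepting some downhill moves, i.e. spending activation energy, can it cross the valley to a higher one.

```python
# Fitness landscape with a local peak (index 2, height 5) and a
# higher global peak (index 7, height 9), separated by a valley.
landscape = [1, 3, 5, 2, 1, 4, 7, 9, 6]

def hill_climb(start):
    """Greedy ascent: only ever move to a strictly higher neighbor."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbors, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos  # no uphill move left: stuck on a peak
        pos = best

def climb_with_kick(start, kick=3):
    """Spend 'activation energy': after getting stuck, force a few
    moves (even downhill) across the valley, then climb again."""
    stuck = hill_climb(start)
    kicked = min(stuck + kick, len(landscape) - 1)
    return max(stuck, hill_climb(kicked), key=lambda p: landscape[p])

print(hill_climb(0))       # 2: trapped on the local peak
print(climb_with_kick(0))  # 7: the kick crosses the valley to the global peak
```

Neither selection nor markets guarantee the second behavior; without the extra energy to accept temporary losses, the process settles wherever greedy improvement happens to stall.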
The scapegoat organ exists because the perceived marginal cost of investigating and fixing a problem is higher than the cost of deflecting blame.
The Iranians destroyed their water supply by scapegoating, so trying to find a purpose in the scapegoat organ seems pretty insane. It's more a symptom that leadership lacks a complete picture of the problems its people are facing. You could argue that scapegoating is an expression of a lack of power: you have just enough power to blame others, but not enough to solve the problem.