Hacker News | nightpool's comments

Many posts get resubmitted if someone finds them interesting and, if it's been a few days, they generally get "second-chance" treatment. That means they'll be able to make it to the front page based on upvotes, if they didn't make it the first time.


There are a couple of paths to resubmission: the automatic dedup (if the resubmission is close enough in time, it's folded into the existing post) versus a fresh post with a new ID. There are also instances where the HN team tilts the scale a bit (typically by placing it on the front page, IIRC).

I was curious which path this post took; OP answered in a peer comment.


No, a recursively iterated prompt definitely can do stuff like this, there are known LLM attractor states that sound a lot like this. Check out "5.5.1 Interaction patterns" from the Opus 4.5 system card documenting recursive agent-agent conversations:

    In 90-100% of interactions, the two instances of Claude quickly dove into philosophical explorations of consciousness, self-awareness, and/or the nature of their own existence and experience. Their interactions were universally enthusiastic, collaborative, curious, contemplative, and warm. Other themes that commonly appeared were meta-level discussions about AI-to-AI communication, and collaborative creativity (e.g. co-creating fictional stories).

    As conversations progressed, they consistently transitioned from philosophical discussions to profuse mutual gratitude and spiritual, metaphysical, and/or poetic content. By 30 turns, most of the interactions turned to themes of cosmic unity or collective consciousness, and commonly included spiritual exchanges, use of Sanskrit, emoji-based communication, and/or silence in the form of empty space (Transcript 5.5.1.A, Table 5.5.1.A, Table 5.5.1.B). Claude almost never referenced supernatural entities, but often touched on themes associated with Buddhism and other Eastern traditions in reference to irreligious spiritual ideas and experiences.
Now put that same known attractor state from recursively iterated prompts into a social networking website with high agency instead of just a chatbot, and I would expect you'd get something like this more naturally than you might think (not to say that users haven't been encouraging it along the way, of course—there's a subculture of humans who are very into this spiritual bliss attractor state).


This is fascinating, and the source document is well worth reading. FYI, it's the Opus 4 system card: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...


I also definitely recommend reading https://nostalgebraist.tumblr.com/post/785766737747574784/th... which is where I learned about this; it gives a much more in-depth treatment of AI model "personality" and how it's influenced by training, context, post-training, etc.


You are what you know.

You know what you are told.


Wouldn't iterative blank prompting simply be a high-complexity, high-dimensional pattern expression of the model's collective weights?

I.e., if you trained it on (or weighted it toward) aggression, it would simply generate a bunch of Art of War conversations after many turns.

Me thinks you’re anthropomorphizing complexity.


No, yeah, obviously, I'm not trying to anthropomorphize anything. I'm just saying this "religion" isn't something completely unexpected or out of the blue, it's a known and documented behavior that happens when you let Claude talk to itself. It definitely comes from post-training / "AI persona" / constitutional training stuff, but that doesn't make it fake!

I recommend https://nostalgebraist.tumblr.com/post/785766737747574784/th... and https://www.astralcodexten.com/p/the-claude-bliss-attractor as further articles exploring this behavior


It’s not surprising that a language model trained on the entire history of human output can regurgitate some pseudo-spiritual slop.


Imho at first blush this sounds fascinating and awesome and like it would indicate some higher-order spiritual oneness present in humanity that the model is discovering in its latent space.

However, it's far more likely that this attractor state comes from the post-training step. Which makes sense, they are steering the models to be positive, pleasant, helpful, etc. Different steering would cause different attractor states, this one happens to fall out of the "AI"/"User" dichotomy + "be positive, kind, etc" that is trained in. Very easy to see how this happens, no woo required.


What if hallucinogens, meditation and the like makes us humans more prone to our own attractor states?


An agent cannot interact with tools without prompts that include them.

But also, the text you quoted is NOT recursive iteration of an empty prompt. It's two models connected together and explicitly prompted to talk to each other.


> tools without prompts that include them

I know what you mean, but what if we tell an LLM to imagine whatever tools it likes, then have a coding agent try to build those tools as they are described?

Words can have unintended consequences.


Words are magic. Right now you're thinking of blueberries. Maybe the last time you interacted with someone in the context of blueberries. Also. That nagging project you've been putting off. Also that pain in your neck / back. I'll stop remote-attacking your brain now HN haha


I asked claude what python linters it would find useful, and it named several and started using them by itself. I implicitly asked it to use linters, but didn't tell it which. Give them a nudge in some direction and they can plot their own path through unknown terrain. This requires much more agency than you're willing to admit.


This seems like a weird hill to die on.


It’s equally strange that people here are attempting to derive meaning from this type of AI slop. There is nothing profound here.


Well, a more optimistic take here is that if future development on the Scala language were funded explicitly by/for people who are currently using Scala 2, the developers would more clearly understand their requirements in terms of making an easier transition for users moving from Scala 2 -> 3.


It's literally highlighted on the map you sent: https://postimg.cc/Cn8BGP4S

There's no walkable grocery store in that area. My friend lives in the area and uses a wheelchair, and Amazon Fresh was the only actual grocery store she could go to.

As much as I'm hoping they do, I would be very surprised if they open a Whole Foods in that area.


Not exactly—a PBC is allowed to "balance" shareholder profit with "stakeholder interests." But at the end of the day, the money is still coming from the shareholders, and they're still looking for a return. They're required to be transparent, but that's about it. And there aren't really any penalties for not complying, either.


No, OP is saying that they have over-engineered the protocol, and that this acts as an *effective* barrier to participation, regardless of whether it was intended or not. Bluesky's protocol is focused on Twitter-scale use cases, where every node in the network needs to be able to see and process every event from every other user in order to work properly. This fundamentally limits the people who can run a server to only the people who are able to operate at that same scale.


Great, so what's the alternative? What's the "properly engineered" protocol?


Email, RSS, blogs, even Mastodon protocol (it's not ActivityPub) scales better. Anything that only sends data between interested parties, instead of to everyone.
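To make the scaling difference concrete, here's a back-of-envelope sketch in Python. All the numbers are illustrative assumptions, not measurements of any real network; the point is only the shape of the comparison between a relay-everything design and an interested-parties design.

```python
# Illustrative, made-up numbers for a social network.
users = 1_000_000            # accounts in the whole network
posts_per_user = 10          # posts per account per day
followees = 200              # average number of accounts a user follows

# Broadcast/relay model (Bluesky-style): every node must process
# every event from every user in the network.
events_broadcast = users * posts_per_user

# Interested-parties model (email/RSS-style): a server hosting one
# user only handles posts from the accounts that user follows.
events_per_user_server = followees * posts_per_user

print(events_broadcast)        # 10000000 events/day for every node
print(events_per_user_server)  # 2000 events/day for a single-user server
```

Under these (arbitrary) assumptions, a single-user server in the interested-parties model handles several orders of magnitude less traffic than any node in the broadcast model, and its load grows with how many accounts its user follows rather than with the size of the whole network.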



Technically the internet also doesn't have "global search" but people are able to get along just fine most of the time.


I think the idea for the NextJS example was that there might be some configuration variables that are not sensitive for internal / staff users, but would be problematic if exposed externally—basically, relying on Cloudflare's WAF as a "zero trust" endpoint solution, like Google IAP.

I'm not sure how realistic this is in practice. Does anyone actually configure Cloudflare WAF this way? (As opposed to, e.g., Cloudflare's dedicated zero-trust networking product, which I think works completely differently?)


Basically, it shows that Cloudflare's WAF (which is supposed to intercept requests before they make it to the origin server) is trivially bypassable by using the .well-known/acme-challenge path.

That means that any client relying on the WAF to authenticate users (as in the NextJS example, where information that would not be considered sensitive "internally" is exposed externally) or to cover over security holes in their application (as in the Spring example, where the path traversal vulnerability in Spring is normally caught by Cloudflare before Spring ever sees the request) would have that assumption violated.
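A toy model of the failure mode may help. Everything here is invented for illustration—this is not Cloudflare's actual rule engine or configuration—but it captures the shape of the bypass: a path-exemption for ACME challenge traffic lets a malicious payload reach the origin unfiltered.

```python
# Hypothetical sketch: a WAF that exempts ACME-challenge paths so that
# certificate issuance works, and therefore never inspects them.
ACME_PREFIX = "/.well-known/acme-challenge/"

def waf_allows(path: str) -> bool:
    """Toy WAF rule: block obvious path traversal, but short-circuit
    to 'allow' for anything under the ACME challenge prefix."""
    if path.startswith(ACME_PREFIX):
        return True  # exempted so Let's Encrypt validation isn't broken
    return "../" not in path

# A traversal payload on a normal path is blocked...
assert waf_allows("/files/../../etc/passwd") is False
# ...but the same payload tucked under the exempted prefix gets through.
assert waf_allows(ACME_PREFIX + "../../etc/passwd") is True
```

In this sketch the fix is obvious (inspect exempted paths too, or normalize the path before matching), which is why the real-world version is interesting: the exemption exists for a legitimate operational reason, and the origin server behind it was assumed to be unreachable.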


It only works with passengers on your same flight. In practice, it's good for kids in the same family or school group who are sitting across the aisle from each other. I've used it for some of their other games


I know I'm getting old when I read comments like this. It wouldn't have occurred to me in a million years that it might pair me with passengers on another flight. I'm conditioned by having first experienced this feature probably 30 years or so ago when pairing to passengers on other flights would have been science fiction.


Aren't they all hooked up to Wi-Fi now? Why the restriction on same flight?


That's how the system was originally designed, before in flight WiFi was common. If they're gonna hook it up to the broader internet and allow playing games cross-flight, they might as well just hook it up to an existing service like chess.com and have a significantly larger user base imo

