Except the tape gets overwritten/looped over. What the GP was referring to is event-sourcing the state in an append-only log and running finite state automata over this sequence.
Here is a Java lib that applies regexes to streams of objects, which could be used for this purpose.
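The idea can be sketched in a few lines of Python (illustrative only; the linked library is a Java regex-over-objects engine, and none of the names below come from it): the event log is an append-only list that gets folded through a finite state automaton.

```python
# Minimal sketch: an append-only event log replayed through a finite
# state automaton. All names are made up for illustration.

TRANSITIONS = {
    ("created", "order"): "ordered",
    ("ordered", "pay"):   "paid",
    ("paid", "ship"):     "shipped",
}

def replay(events, state="created"):
    """Fold the event log into a state, rejecting impossible transitions."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"Can't apply '{event}' in state '{state}'")
        state = TRANSITIONS[key]
    return state

log = ["order", "pay", "ship"]   # the log itself is never mutated
assert replay(log) == "shipped"
```

Because the log is never overwritten, you can always re-run the automaton from the start to reconstruct (or audit) any intermediate state.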
This is a very common pattern in Ruby for managing state. It's especially useful for guarding against entering impossible states with respect to business logic, and for figuring out what went wrong. Something along the lines of:
Can't transition Command from 'ordered' to 'to-deliver': paid() == false
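For illustration, here is a hypothetical Python sketch of that guard pattern (in Ruby this would typically be a gem such as AASM; every name below is invented):

```python
# Hypothetical sketch of guarded state transitions producing the kind of
# error message quoted above. Not any real library's API.

class Command:
    def __init__(self):
        self.state = "ordered"
        self.paid = False

    def transition(self, target, guard, guard_desc):
        """Move to `target` only if the business-logic guard holds."""
        if not guard(self):
            raise ValueError(
                f"Can't transition Command from '{self.state}' "
                f"to '{target}': {guard_desc}"
            )
        self.state = target

cmd = Command()
try:
    cmd.transition("to-deliver", lambda c: c.paid, "paid() == false")
except ValueError as e:
    print(e)  # Can't transition Command from 'ordered' to 'to-deliver': paid() == false
```

The guard description in the exception is what makes the pattern useful for debugging: the impossible state is rejected and the reason is spelled out.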
I've started developing something similar for Clojure's core.async library, which implements Go-like channels. It's something I cobbled together to trace the origin of a subtle, sporadic bug that would happen once every 10,000 runs in a program that makes heavy use of channels. I'm not actively working on it, so it only covers a subset of the functions in core.async, but I keep expanding it, adding debugging support for functions every now and then when I need it.
It represents the flow of data between channels/threads as a sequence diagram using PlantUML. Today I implemented core.async/onto-chan! (it spins up a thread that takes items from a sequence and puts them onto a channel). Here's what it looks like:
It's especially useful in Clojure, as the experience with channels is not as polished as in Go (or so I've heard): when a put or a take hangs, your only recourse is to sit and squint at your code. This tool colors them red, so you can immediately spot what's wrong. It also let me spot a channel spaghetti plate I hadn't anticipated and wouldn't have noticed otherwise.
For now I've taken a "maximalist" approach that includes inspecting buffers, so it incurs a heavy runtime penalty that's only acceptable at dev time (plus it keeps track of the whole trace, so no flight recording in sight for now).
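The core idea is roughly this (a toy Python sketch, not the actual tool, which instruments core.async itself; all names are made up): record each channel operation as an event, then render the events as a PlantUML sequence diagram.

```python
# Toy sketch: channel operations recorded as events, rendered as a
# PlantUML sequence diagram. Purely illustrative.

events = [
    ("thread-1", "ch-1", "put",  42),
    ("thread-2", "ch-1", "take", 42),
    ("thread-2", "ch-2", "put",  "done"),
]

def to_plantuml(events):
    """Emit one sequence-diagram arrow per recorded channel operation."""
    lines = ["@startuml"]
    for actor, chan, op, val in events:
        # puts flow from the thread into the channel, takes flow back out
        arrow = f"{actor} -> {chan}" if op == "put" else f"{chan} -> {actor}"
        lines.append(f"{arrow}: {op} {val!r}")
    lines.append("@enduml")
    return "\n".join(lines)

print(to_plantuml(events))
```

A hang then shows up visually as an arrow with no matching counterpart, which is where the red coloring mentioned above comes in.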
In the future I'd like to give it a proper interface (it's just an SVG file with hoverable elements for now), maybe using constraint-based graph layout to lay out the sequence diagram via cola.js/adaptagrams, or the sequence-diagram layout from Kiel University's KIELER project once it's integrated into the Eclipse Layout Kernel. Thesis from the developer of this module:
Am I the only one who visualizes some of my most creative thoughts in a mental palace formed by many distinct (Euclidean) spaces, whose axes connect to each other through a graph? The closest thing I've found that can describe this is simplicial sets:
I wish I had something like this in my head to tie things together. Right now my understanding of things feels so disorganised and "lucky" in a sense. I feel lucky that I have a grasp of anything.
Wow, well expressed. That's exactly how I feel. Not momentarily, but with everything. Though I am actually not intelligent, I just have good intuition and the luck to grasp some of what I need to "understand".
Reminds me of the saying about the poet vs. the mathematician: the first gives different names to the same thing, and the latter the same name to different things. Maybe that's why I can't stand highly descriptive prose (aka describing the water while I'm drowning over here).
Now what if you're a poetic mathematician (or mathematical poet), what's that mind map look like?
Well... what about that palace of mind thing, and the ability to rewind into almost all older memories at will, and on demand being able to look up things from there, like reading, without having it memorized at all? Also full stream of consciousness, like smells, tastes, light wind on your skin, 'silken air' at just the right temperature and humidity.
All of that arranged in something like 'eigengrau', represented by glitter-like points connected by graphs, mostly in 'phosphene' colors, but not exclusively so.
Sometimes very non-euclidean, moving/warping.
KNOWING what's behind every glitter point, like small cinema, large home theatre, from several points of view at the same time.
No words involved. Just visuals.
Thinking, like juggling/weighing blobs, like that glowing stuff which moves slowly up and down in a lava-lamp.
Somehow 'knowing' what each blob, its size/form/viscosity/weight/speed/color/brightness/'feel'/smell represents.
Slowly emerging new 'visuals' from this. Which are then translated into 'language', if ever.
Not sure whether you talk about the uranium yellow/green color, or the brief hallucination of a light spot (happened to me just a few minutes ago, hadn't had one in a long time).
I don't have such a hyperbolic mental palace, and this doesn't really give me the ability to establish a global map, but I relate a lot to what you wrote. Sometimes, as I reach the climax of a long, deep thought, I'm thinking via vision exclusively, to the extent that I don't even pay attention to what my outer eye sees, and I stumble upon some insight that is sometimes almost impossible to convey in language: not because it lies beyond language, but because the intrusion of language causes the idea to collapse. Words point to dangling shapes that mean barely anything, because the rest of the painting has gone away.
To those who have read this far and can't relate to this way of thinking: this isn't a superpower; these are rather rare experiences of altered states.
Talking about this is a kind of taboo and may draw some smiles, and indeed, even if there is a deeper truth to these experiences about the computational or geometric nature of the mind (maybe in the same way synaesthesia mirrors spectrograms), it probably won't help people working in machine learning much (even though some, like LeCun, seem to use their own visual introspective abilities as a source of inspiration).
However, they may prove crucial in conceiving what kind of use brain chips should be put to. For now it seems we're walking through a thick fog in that direction, with envisioned applications confined to interfacing with external computers or increasing cognitive abilities quantitatively, such as perfect memory and so on. If I could sustain such experiences durably, with a high level of control and enhanced geometric/mathematical understanding, I believe that would be akin to a superpower, yes.
>Now what if you're a poetic mathematician (or mathematical poet), what's that mind map look like?
Well, look at the drawings I posted below: mathematical notions mixed with ad-hoc, distinctive diagrammatic elements such as colors and marks. With maybe a theorem that posits that every mixed representation like these matches a colorless, unannotated, rigorous mathematical object?
In fact I come from a structural linguistics background, and when I pictured how one could extrude a semiotic square into another one, I felt like I understood the vague intuition behind homotopy type theory. The metaphor goes like this: the extrusion volume must be watertight for the squares to make sense.
Suppose you read Dostoyevsky's short story "Another Man's Wife and a Husband Under the Bed." You might notice that the protagonist's vertical position, as he eavesdrops on what he believes to be his wife through the wall of another man's apartment while standing alone in a corridor, mirrors the horizontal position he later assumes when hiding under the bed of his wife's presumed lover. This physical positioning reflects his moral descent, particularly as he is not alone this time: beneath the bed with him is another man, clandestinely involved with yet another man's wife. This helps us picture that our protagonist is just as disconnected from his wife as the man lying next to him under the bed, or the husband unknowingly sleeping above them, if not more so.
Granted, I don't have the detailed vision of this semiotic diagram, but coming up with the skeletal structure is exactly what the job of a semiotician consists of (which I'm not). What matters is that all these equivalence classes the writer lays down, just like in mathematics, allow meaning to flow. His vertical loneliness must match his horizontal promiscuity for the story to operate this crescendo. Clog these connections, and the inner structure of the object they tie together disappears too. Digging into Saussure and Voevodsky, one can realize they shared a common obsession with identity, for it is precisely when physical objects become indistinguishable that they can be referred to with the same terms and that conceptuality arises (Aerts, 2010s and onward).
"Different names to the same thing" and the "same name to different things": the two directions on the homotopical ladder.
Note: I'm 100% in postmodern mode here, this goes way above my head of course.
I don't know what a simplicial set is, and Wikipedia didn't really help me. However, I could roughly describe my "mind" as many mental maps where concepts are laid out and connected in different ways. Learning means putting new things on these maps, and thinking is navigating through them.
This is just a Deleuzian metaphor for the weird kind of space I perceive certain abstract thoughts with.
>many distinct (Euclidean) spaces, whose axes connect to each other through a graph
Imagine having pictures hung on the walls of your mental palace that act as portals to other rooms and corridors within that palace, and that must exist parallel to each other, in different "universes", as otherwise their volumes would intersect. The kind of geometry the Antichamber video game features.
Or picture this: a representation that relies on its axes to convey meaning, for instance the political compass meme. Walk along an axis long enough and it will connect orthogonally to another axis; for instance, authoritarianism may connect to anger from the emotional compass.
Simplexes: a generalization of triangles to n dimensions. A 2-axis representation (the political compass for example) could connect to spaces with 3 axis (the ascended political compass: https://external-preview.redd.it/UQgZCVQ4OLg_Hz16FGdu9-qxfq9...).
To represent this you could connect one tip of a segment (a 1-simplex) to the tip of a triangle (a 2-simplex), each vertex in these figures representing an axis. This is where my Deleuzian metaphor collapses, because I'm conflating the notion of an axis with the notion of the "left" and "right" halves of an axis. And I'd also be tempted to allow planes to connect to axes (to support that portal through a painting I mentioned above).
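As a crude data-structure sketch of this metaphor (nothing here is real simplicial-set machinery; every name is invented): treat each "space" as a simplex whose vertices are named axes, and glue spaces together wherever they share an axis.

```python
# Crude sketch of the metaphor: each "space" is a simplex whose vertices
# are named axes; spaces are glued along shared axes. Purely illustrative.
from itertools import combinations

spaces = {
    "emotional-compass": ("anger", "calm"),                       # 1-simplex (2 vertices)
    "ascended-compass":  ("economic", "authority", "tradition"),  # 2-simplex (3 vertices)
    "political-compass": ("economic", "authority"),               # 1-simplex
}

def gluings(spaces):
    """Pairs of spaces sharing at least one axis (a common vertex)."""
    shared = {}
    for (a, va), (b, vb) in combinations(spaces.items(), 2):
        common = set(va) & set(vb)
        if common:
            shared[(a, b)] = common
    return shared

print(gluings(spaces))
# the two political compasses glue along their shared axes
```

The resulting gluing graph is the "graph connecting the axes" from the comment above: each space stays low-dimensional, but walking across shared vertices moves you into a different space.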
So this is just a sketchy thought, but it seems legitimate in that it's not something I conceptualize but something I perceive (sometimes). I think there may be something interesting behind these perceptions, because they seem to deal with separate concerns through some kind of structured orthogonal geometry: putting a concept in a dimension orthogonal to another concept doesn't make that dimension orthogonal to all the other dimensions/concepts in your mental palace, as would be the case if it took the shape of an n-dimensional space. And because the orthogonality is structured, it lets you deal with more than 3 concepts spatially at the same time and embed them within something your eye can picture in 2D or 3D, using diagrammatic annotations (colors, marks, etc.). Finally, it lets you put a concept C in several orthogonal relationships to distinct concepts, for instance A and B, and keep these different instantiations of C orthogonal to each other.
This is what my mind pictured as I was explaining this; the colors and graduation marks/boxes faithfully represent what I just perceived: https://pasteboard.co/kMecyenyZdzg.png
Note that the two colors, the green of the axes and the red of the sticks, could be thought of as two individual concepts of their own, orthogonal to each other.
This is basically what I tried this morning at the prompt level (awful results), but the sketchy idea I had in mind went further by introducing control-flow "meta-tokens" to help the LLM renavigate its context. In this perspective, the context would be rethought as a self-editing structured mind-map, with the linear aspect of the context at a time T standing for the execution trace of the exploration of this mind-map so far. Some of these meta-tokens would have side effects on the context: highlighting, structuring, summarizing, or forgetting some of its parts. This could allow for native structured output without a syntactic format such as JSON, programmatic constructs in the style of LMQL, implementing memory, etc. The goal: not just to give logical/reasoning abilities to an LLM, but to give it the means to come up with its own cognitive architecture. Implementing structured output (using a <label name="stuff">...</label> token) to also implement memory/scratchpads would bring inspectability of those cognitive structures for free. Of course I have no idea how to implement this (I'm an ML tourist).
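A speculative Python sketch of what folding such meta-tokens into side effects on the context might look like (every token name here is invented; this is not an existing API):

```python
# Speculative sketch: meta-tokens interpreted as side effects on the
# context, yielding a linear trace plus inspectable labelled scratchpads.

def apply_meta_tokens(context):
    out, labels = [], {}
    capturing = None
    for tok in context:
        if tok.startswith("<label "):
            # open a named scratchpad; subsequent tokens are also captured
            capturing = tok[len("<label "):].rstrip(">")
            labels[capturing] = []
        elif tok == "</label>":
            capturing = None
        elif tok.startswith("<forget "):
            # side effect: drop a scratchpad from "memory"
            labels.pop(tok[len("<forget "):].rstrip(">"), None)
        else:
            if capturing is not None:
                labels[capturing].append(tok)
            out.append(tok)
    return out, labels

trace, memory = apply_meta_tokens(
    ["<label plan>", "step1", "step2", "</label>", "answer"]
)
print(memory)  # {'plan': ['step1', 'step2']}
```

Here the returned `trace` is the linear execution trace and `labels` is the structured, inspectable part, which is the free inspectability mentioned above.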
I recently had an introspective dream revealed to be based on a literal prompt at the end: "Game to learn to talk about It and its player." When I asked GPT to craft a plot from this prompt's title (and the fact it is revealed at the end), it reproduced the dream's outline, down to the final scene:
GPT reconstruction:
The dream reaches its peak when you meet the "final boss" of the game: an entity that embodies the ultimate barrier to communication. To overcome this obstacle, you must synthesize everything you've learned about "it" in the dream and present a coherent vision that is true to yourself. As you articulate your final understanding of "it", the maze dissolves around you, leaving you in front of a giant mirror. In this mirror, you see not just your reflection but also all the characters, passions, and ideas you encountered in the dream. You realize that "it" is actually a reflection of yourself and your ability to understand and share your inner world. The dream ends with the title revealed, "Game to Learn to Communicate about It and Its Player", meaning the whole process was a metaphor for learning to know and communicate your own "it" - your personality, thoughts, and emotions - with others, and that you are both the creator and the discoverer of your own communication game.
My note:
The continuation of the dream corresponds to an abrupt change of scene. I find myself in my bed, in the dim light of my room, facing a mysterious silhouette. As I repeatedly inquire about its identity, I stretch my hands towards its face to feel its features as I cannot clearly see them. Then, a struggle begins, during which I panic, giving the dream a nightmarish turn. Noticing that the dark figure mirrors my movements, I realize it's myself. Suddenly under my duvet and as I struggle to get out, I feel jaws and teeth against the sheets. I call out for my mother, whom I seem to hear downstairs, and that's when my vision fades, and I see the dream's source code displayed behind. It consists of ChatGPT prompts shared on the lime green background of an image-board. At the bottom, I then see the dream's title: "Game to learn how to communicate about It and its player."
Look, I don't mean to downplay. Or maybe I do. But we're talking about LLMs replacing professional problem solvers and software architects, not generating great-sounding probability-modeled token distributions.
>Recent research suggests that judgmental anchoring is mediated by a selective increase in the accessibility of knowledge about the judgmental target. Anchoring thus constitutes one instance of the judgmental effects of increased knowledge accessibility. Such knowledge accessibility effects have repeatedly been demonstrated to be fairly durable, which suggests that the effects of judgmental anchoring may also persist over time. Consistent with this assumption, three experiments demonstrate that judgmental anchors influence judgment even if they were present one week before the critical judgment is made. In fact, the magnitude of anchoring was found to remain undiminished over this period of time.
I didn't get access to the full text, but had a look at other papers from the same researcher [0] on what kind of methodology they use.
In the case of recruiting, I think the main factor when moving the decision further down the line is the change in information ("a selective increase in the accessibility of knowledge about the judgmental target"), in two specific ways:
- we actually remember less about the subject, for better or worse. A candidate might have had a weird look, and the notes are probably affected by that bias, but we can look back at their coding test without that impression and come to a slightly different conclusion.
- we get to compare subjects in a different order. In particular, that helps catch skewed expectations. For instance, if every candidate has been falling into the same trap, it helps to give them a pass and assume the question was at fault. If we had to do that in real time, only the last few would get a kinder judgement.
I've never had such a thing but many years ago, not long out of university, in my previous career as an electronics engineer I was asked to design a simple amplifier before the interview proper. The interviewer explained, slightly apologetically, at the end of the interview that he did this just to sort out those who were good at talking but didn't have a thorough grounding in the basics from those who were well grounded but perhaps not so good at blowing their own trumpet. I was pleased to find that I passed that part with flying colours :-)
But I would not want such things to be taken very seriously unless you're trying to fill a very narrowly defined post, because it is all too easy to create a test that a good candidate would fail.
I think they're very valuable if the position requires any coding at all.
In particular, very simple tests (like an API interface, or reversing a string, etc.) done in any language they feel comfortable in are usually a trove of info about the candidate. The result doesn't really matter: it doesn't need to run, it doesn't need to be complete, as long as you got to hear a lot about how the candidate thinks, how they move through the problem, whether they can write something basic, what they're confident in, what they're not used to doing, etc.
Which makes sense: if someone made an impression on you, that impression doesn't disappear in just a few days. At best it may be fuzzier, which could be good or bad.
No, this is well studied, although you're right that this reads as a "popular-psychology thing", since ctrl+F "anchoring" returns nothing in the discussed study's paper.
>A number of studies have shown the robustness of the anchoring effect. Anchors can influence judgment even after weeks or months (Mussweiler, 2001; Yoon & Fong, 2019). The anchoring effect is present even in experts in the judgmental domain (Englich & Mussweiler, 2001; Englich et al., 2006). An anchor can influence subsequent judgment even if it is clearly implausible (Strack & Mussweiler, 1997) or when it is compared to a different object (Frederick & Mochon, 2012; Mochon & Frederick, 2013).
Here is a Java lib that applies regexes to streams of objects, which could be used for this purpose.
https://github.com/norswap/skelex