I don't know physics, but it is amazing and wonderful to see something of this magnitude (even if it doesn't pan out, it sounds… wow) posted on HN! Congratulations on this step of the preprint release! I hope you hear from more informed people here shortly.
It is definitely daunting to put a proposal for such a massive problem out there as an independent researcher, but this community's spirit is exactly why I wanted to share it here. Even if it turns out I missed a subtle coefficient somewhere, the discussion is always worth it. Hoping for that technical grilling soon!
I see more hate and misinformation on Mastodon than I see on X. Here is a very mild one:
[edit: link removed; I don't want to promote that guy but to give the gist he was saying that people who believe in free speech are trash, targeting X users with hate. Mastodon is absolutely saturated with this.]
Most of the criticism I see of X seems completely made up out of malice or is regurgitation of things other poorly informed or resentful people have said.
The supposed FSF in Europe should post links to the sections of the open source algorithm they claim to be criticizing, and show us their PR.
My criticism of X is primarily rooted in 2 things: the massive degradation of my experience using the platform and a distrust that Musk wouldn't use the platform to manipulate public opinion to achieve political goals.
On the first point, the simplest example is that I used to report people who use overt slurs or antisemitic language. When Musk took over, it started taking months for them to follow up, and the response was simply to lock the account until they deleted the offending tweet. Eventually, when I would report those people, X just switched to saying they weren't breaking the rules. Now the replies to tons of seemingly normal posts that get lots of visibility are full of vile people trying to derail conversation with racism or antisemitism.
Another big problem is that the way blue-check accounts are boosted has incentivized every account to act like click-bait all the time. Whenever a post gets semi-viral, the blue-check replies are artificially lifted to the top, and most of them are totally worthless because the commenters are just trying to 'grab space' so people click their profile and follow them. It used to be that if big accounts posted something interesting you might see a bunch of interesting follow-up replies. Now it's spammers at the top, and then racists / crazies mixed in with more thoughtful replies if you scroll down a few pages past the blue-checks. It used to be that the algorithmic feed would surface all sorts of interesting and novel work from people across the tech world for me, but now there's a whole category of people trying to make every single tweet viral enough to get payouts.
And then there's Musk himself. He's ordered the algorithm to be manipulated to boost himself more. He's clearly expressed discontent when the algorithm doesn't work the way he wants, he's meddled heavily in the platform's AI bot to make it say things Musk prefers, and he's been rather unscrupulous chasing his political goals. I think it's not unlikely he'd use the platform to guide public opinion, perhaps even using AI to do it discreetly and intelligently. I view that as a significant risk.
So the platform has gone from something that's highly useful to me, and a place I greatly enjoyed, to something that more often than not wastes my time and exposes me to people that disturb me. And on top of all that I think contributing to the platform may empower someone who I deeply distrust to manipulate public opinion towards their political goals.
He is one of these people who think that humans have a direct experience of reality, not mediated by (as Alan Kay put it) "three pounds of oatmeal". So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and fun-house-mirror distortions. Our vision transposes left and right and delivers images to our nerves upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.
Isn't this idea demonstrably false due to the existence of various sensory disorders too?
I have a disorder characterised by the brain failing to filter out its own sensory noise; my vision is full of analogue-TV-like distortion and other artefacts. Sometimes when it's bad I can see my brain constructing an image in real time rather than this perception happening instantaneously, particularly when I'm out walking. A deer becomes a bundle of sticks becomes a muddy pile of rocks (what it actually is), for example, over the space of seconds. This to me is pretty strong evidence we do not experience reality directly, and instead construct our perceptions predictively from whatever is to hand.
Pleased to meet someone else who suffers from "visual snow". I'm fortunate in that like my tinnitus, I'm only acutely aware of it when I'm reminded of it, or, less frequently, when it's more pronounced.
You're quite correct that our "reality" is in part constructed. The Flashed Face Distortion Effect [0][1] (wherein faces in the peripheral vision appear distorted due to the brain filling in the missing information with what was there previously) is just one example.
Only tangentially related but maybe interesting to someone here, so linking anyway: Bryan Kohberger is a visual snow sufferer. Reading about his background was my first exposure to this relatively underpublicized phenomenon.
Ah that's interesting, mine is omnipresent and occasionally bad enough I have to take days off work as I can't read my own code; it's like there's a baseline of it that occasionally flares up at random. Were you born with visual snow or did you acquire it later in life? I developed it as a teenager, and it was worsened significantly after a fever when I was a fresher.
Also do you get comorbid headaches with yours out of interest?
I developed it later in life. The tinnitus came earlier (and isn't the result of excessive sound exposure as far as I know), but in my (unscientific) opinion they are different manifestations (symptoms) of the same underlying issue – a missing or faulty noise filter on sensory inputs to the brain.
Thankfully I don't get comorbid headaches – in fact I seldom get headaches at all. And even on the odd occasion that I do, they're mild and short-lived (like minutes). I don't recall ever having a headache that was severe, or that lasted any length of time.
Yours does sound much more extreme than mine, in that mine is in no way debilitating. It's more just frustrating that it exists at all, and that it isn't more widely recognised and researched. I have yet to meet an optician that seems entirely convinced that it's even a real phenomenon.
Interesting, definitely agree it likely shares an underlying cause with tinnitus. It's also linked to migraine and was sometimes conflated with unusual forms of migraine in the past, although it's since been found to be a distinct disorder. There's been a few studies done on visual snow patients, including a 2023 fMRI study which implicated regions rich in glutamate and 5HT2A receptors.
I actually suspected 5HT2A might be involved before that study came out, since my visual distortions sometimes resemble those caused by psychedelics. It's also known that psychedelics (and, anecdotally from patient groups, SSRIs) can cause symptoms similar to visual snow syndrome. I had a bad experience with SSRIs, for example, but serotonin antagonists actually fixed my vision temporarily, albeit with intolerable side-effects, so I had to stop.
It's definitely a bit of a faff that people have never heard of it, I had to see a neuro-ophthalmologist and a migraine specialist to get a diagnosis. On the other hand being relatively unknown does mean doctors can be willing to experiment. My headaches at least are controlled well these days.
scoot, you may find the current mini-series by the podcast Unexplainable to be interesting. It's on sound, and one episode is about tinnitus and research into it.
The default philosophical position for human biology and psychology is known as Representational Realism: the reality we know is mediated by changes and transformations applied to sensory (and other) input data in a complex process, and the result is "different enough" from what is actually real.
Direct Realism is the idea that reality is directly available to us, and that any intermediate transformations made by our brains do not meaningfully change the picture.
Direct Realism has long been refuted. There are a number of examples, e.g. the hot and cold bucket; the straw in a glass; rainbows and other epiphenomena, etc.
the fact that a not-so-direct experience of reality produces "good enough results" (eg. human intelligence) doesn't mean that a more-direct experience of reality won't produce much better results, and it clearly doesn't mean it can't produce these better results in AI
your whole reasoning is neither here nor there, and attacks a straw man - YLC for sure knows that human experience of reality is heavily modified and distorted
but he also knows, and I'd bet he's very right on this, that we don't "sip reality through a narrow straw of tokens/words", and that we don't learn "just from our/approved written down notes", and only under very specific and expensive circumstances (training runs)
anything closer to more-direct-world-models (as LLMs are ofc at a very indirect level world models) has very high likelihood of yielding lots of benefits
But he seems to like pretending that we can't reconfigure that straw of tokens into 4096 straws, or a couple billion straws for that matter. LLMs are just barely getting started. That's not to say there's no other or better way, but in yucking our yum he fails to acknowledge there's a lot more that can be done with this stuff.
The world model of a language model is a ... language model. Imagine the mind of a blind, limbless person, locked in a cell their whole life, never having experienced anything different, who just listens all day to a piped-in feed of randomized snippets of Wikipedia, 4chan and math olympiad problems.
The mental model this person has of this feed of words is what an LLM at best has (though the human's model is likely much richer, since they have a brain, not just a transformer). No real-world experience or grounding, therefore no real-world model. The only model they have is of the world they have experience with - a world of words.
Whatever idea Yann has of JEPA and its supposed superiority compared to LLMs, he doesn't seem to have done a good job of "selling it" without resorting to strawmanning LLMs. From what little I gathered (which may be wrong), his objection to LLMs is something like: the "predict next token" inductive bias is too weak for models to be able to meaningfully learn models of things like physics, sufficient to properly predict motion and do well on physical reasoning tasks.
And LLMs are trained on the humans trying to describe all of this through text. The point is not if humans have a true experience of reality, it’s that human writings are a poor descriptor of reality anyway, and so LLMs cannot be a stepping stone.
I will rephrase GP. Most taxi/Uber drivers have less than one minor accident every 250k miles. The fact that "FSD" plus a dedicated driver have more indicates to me that FSD is more dangerous for an experienced driver in urban settings than nothing.
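To make the baseline above concrete, here's a quick unit-conversion sketch. The 1-per-250k-miles figure comes from the comment; everything else (function name, the per-million-miles framing) is just illustrative scaffolding, not data about FSD:

```python
# Back-of-envelope conversion of the human-driver baseline cited above.
# 1 minor accident per 250,000 miles -> accidents per million miles.

HUMAN_ACCIDENTS_PER_MILE = 1 / 250_000  # baseline from the comment

def accidents_per_million_miles(rate_per_mile: float) -> float:
    """Convert a per-mile accident rate to accidents per million miles."""
    return rate_per_mile * 1_000_000

human = accidents_per_million_miles(HUMAN_ACCIDENTS_PER_MILE)
print(f"Human baseline: {human:.1f} minor accidents per million miles")  # 4.0
```

Any FSD-plus-supervisor rate above roughly 4 minor accidents per million miles would, by this framing, be worse than the cited human baseline.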
We don’t know the details. It could be that the human drivers were in control at the time of the incidents and caused some of them. It could also be that other cars driven by humans caused them.
Web sites hosting these clickbait articles have zero incentive to make things sound less dramatic.
An obvious tell is that they’ll use the word “crash” for a Tesla bumping a parking bollard.
It’s also true that things tend to improve over decades even when problems were reported early on, so reports like these are still useful: they show the problems are not yet fixed.