But I-JEPA is non-generative. It does semantic image interpretation.
Okay, I guess it is related as my brain only does semantic image interpretation.
(edit: my brain can create images, but only when I'm unconscious)
So with such a model, if you ask it to create an image, it would first create a semantic grammatical model of what you had asked for, and then perhaps draw it with colored pencils. I sort of like that. It's all that I could do. And it would be unlikely to violate any copyrights.
I really want to use Helix but was turned off by the lack of Copilot support and the developers' contempt and holier-than-thou attitude towards people who use AI-assisted coding.
I never got the impression that the developers had an attitude. The discussion I've seen has been mostly people making demands or saying Helix will fail if it doesn't support AI tool integration right now, but none of the devs are interested in using that kind of tooling with Helix, so they don't implement it.
When they have poked fun at someone, it has been because that someone is very quick to demand the feature, but unwilling to submit a PR.
There is an open PR for getting Copilot support, though it's currently just a hotfix and likely won't be accepted into core. You can still patch your own version and compile it.
In Zuck's interview with Lex, he was talking about a text-based social media platform Meta has been experimenting with recently. This would be a great time for them to try to acquire users.
There are pills for beauty. Look at Kylie Jenner; do you think that's all natural? In fact, I'd say you have less control over intelligence than you do over beauty.
I never hear any business or firm proudly announcing that they hire mediocre engineers; it's always and only the best, tested and assessed with rigour.
GPT-4 just replaces the low-hanging fruit, the mediocre engineers, and finally exposes inflated titles and impostors.
Predicting the next word is a much deeper problem than people like you realise. To be able to be good at predicting the next word you need to have an internal model of the reality that produced that next word.
GPT-4 might be trained to predict the next word, but in that process it learns a very deep representation of our world. That explains how it has an intuition for colours despite never having seen colours. It explains why it knows how physical objects in the real world interact.
Now, if you disagree with this hypothesis, it's very easy to disprove: present GPT-4 with a problem that is very easy for humans to solve but not for GPT-4, like the Yann LeCun gear problem, which GPT-4 is also able to solve.
“To be able to be good at predicting the next word you need to have an internal model of the reality that produced that next word.”
Now that's an interesting claim, and one I would deeply dispute. It learns from text. Text itself is a model of reality. So ChatGPT, if anything, proves that in order to be good at predicting the next word, all you need is a good model of a model of reality. GPT knows nothing of actual reality, only the statistics around symbol patterns that occur in text.
You are being given a chance to dispute it. Give an example of a problem that any human could easily solve but GPT-4 couldn't.
>> "good model of a model of reality"
That is just a model of reality. Also, a "model of reality" is what you'd typically call a world model. It's an intuition for how the world works, how people behave, that apples fall from trees and that orange is more similar to red than it is to grey.
Your last line shows that you still have a superficial understanding of what it's learning. Yes, it is statistics, but even our understanding of the world is statistical. The equations we have in our heads of how the world works are not exact; they're probabilistic. Humans know that "Apples fall from the _____" should be filled with 'tree' with high probability because that's where apples grow. Yes, we have seen them grow there, whereas the AI model has only read about them growing on trees. But that distinction is moot because both the AI model and humans express their understanding in the same way. The assertion we're making is that to be able to predict the next word well, you need an internal world model. And GPT-4 has learnt that world model well, despite not having sensory inputs.
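As a concrete illustration of the "fill in the blank with high probability" point, here is a minimal sketch that inspects a language model's next-token distribution. It uses GPT-2 through the Hugging Face transformers library purely as a stand-in (GPT-4's weights aren't public), and the model choice and prompt are my own assumptions rather than anything from this thread:

    # Minimal sketch: inspect a language model's next-token distribution.
    # GPT-2 is a freely available stand-in; GPT-4's weights are not public.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Apples fall from the"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Probability distribution over the whole vocabulary for the next token.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)

    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item()):>10s}  {prob.item():.3f}")

Whether a distribution like that amounts to a world model is exactly what's in dispute here, but it does make the "filled with high probability" claim concrete.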
Can ChatGPT ride a bicycle? Can you ride a bicycle? If you'd never ridden a bicycle before, do you think that by reading enough books on bicycle riding, the physics of bicycle riding, the physics of the universe, you would have anywhere near as complete a model of bicycle riding as someone who'd actually ridden one? Sure, you'd be able to talk a great game about riding bicycles, but when it comes to the crunch, you'd fall flat on your face.
That's because riding a bicycle involves a large number of incredibly complex emergent control phenomena embedded within the marvel of engineering that is the human body, not just the small part of the brain that handles language.
So call me when LLMs can convert their 'world models', learned from statistics on human language use, into riding a bicycle the first time. Until then I feel comfortable in the knowledge that they know virtually nothing of our objective reality.
In my experience of using LunarVim (which is also a preconfigured Neovim), it is incredibly unstable. Plugins often don't work well with each other. Keybindings overlap. You adopt it in the hope of having fewer things to configure, but what ends up happening is you create more config to disable slow default plugins, or add workarounds to existing ones. Updating Neovim or LunarVim always breaks my config. These things aren't the least bit backward compatible.
I'm slowly switching to Helix, and this time around I will be forcing myself to use the defaults and not add a single line of config.
Yes, but if you're in a Jupyter notebook, you may not be directly connected to a DB. If you're using pandas, this unlocks some scalability before needing Dask and a cluster.
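For what it's worth, a minimal sketch of the pattern I mean, assuming the tool under discussion works like DuckDB (my assumption, not stated above), which can run SQL over an in-memory pandas DataFrame with no database connection at all:

    # Hypothetical illustration: SQL over a pandas DataFrame with no DB server.
    # Assumes a DuckDB-style engine; the data and query are made up for the example.
    import duckdb
    import pandas as pd

    df = pd.DataFrame({"city": ["Oslo", "Lima", "Oslo"], "sales": [10, 20, 30]})

    # DuckDB resolves `df` from the local Python scope; the aggregation runs in
    # its own engine rather than in pandas, which helps as frames grow.
    result = duckdb.sql("SELECT city, SUM(sales) AS total FROM df GROUP BY city").df()
    print(result)

The specifics are my guess at the context, but the point stands: you get some SQL-style scalability inside the notebook before you have to reach for Dask and a cluster.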