If by "reality" you mean "the universe", then the way the universe is depends on a cause, as the existence of the universe is not explained by the universe itself (even an "eternal" universe). Its existence is contingent on some other cause that ultimately cannot be contingent and thus does not require explanation.
So the cause, or dare I say reason, for the universe being the way it is will depend on that cause.
I feel like the language of this argument is self-undermining. The existence of the universe is said to depend on a cause that is itself not contingent on anything else… but just as easily, the existence of the universe itself could be contingent on nothing else.
And you have an implied axiomatic assertion that everything must have a cause, even though that necessarily results in an infinite recursion of cause-finding.
Nowhere did I say or imply everything needs a cause. That's your baggage. In fact, it is the exact opposite: that because an infinite regress is incoherent and impossible, there must be some necessary uncaused cause where the buck stops.
"The universe" cannot be that cause, as the universe and everything in it is contingent.
The cause of the universe must itself be uncaused, or else it is only an intermediate cause that must itself refer ultimately to an uncaused cause. An infinite regress is impossible with respect to existence. Unlike causes per accidens, which can in principle be infinite in length, a cause per se cannot; without a terminus, there would be nowhere from which the latter causes would derive their force, so to speak, like an arm pushing a stick that is pushing a rock that is pushing a leaf. Meaning, the cause is not some distant one in time, but one always acting; otherwise, everything would vanish. The only cause that could have this property is self-subsisting being.
From there, you can know quite a bit about what else must be true of self-subsisting being.
There doesn’t need to be a "why" the world exists. It does; that’s all there is to know. There doesn’t have to be a purpose, just an explanation of how, not why.
Sure but that's somewhat tautological and not very helpful if you seek an empirical or predictive understanding of it. The question really is what complexity of the system (meaning: all of it) is irreducible and what can at least be approximated with simplified models.
You may balk at this as being ultimately futile, but our entire existence is built on trying to break apart and simplify the world we exist in, starting with the first cut between self/inside and other/outside (i.e. "this is me" vs "this is where I am" - a distinction that becomes immensely relevant after the moment of birth). Language itself only functions because we can create categories it can operate on - regardless of whether those categories consistently map to reality itself.
I have a 4080 RTX and Kontext runs great at fp8. I run several other models besides. If you want to get at all good at this, you need tons of throwaway generations and fast iteration and an API quickly becomes pricier than a GPU.
Precisely. Even if the inflated figure of 16,000 API calls were accurate for what the cost of a mediocre GPU would get you, that’s not an endless store of API calls. I’m also on a 4080 for lighter loads, and even just writing benchmarks, exploring attention mechanisms, token salience, etc., without image gen being my specific purpose, I may trash half a thousand generations from output every few days. More if I count the stuff that never made it that far, too.
Good effort, somewhat marred by poor prompting. Passing in “the tower in the image is leaning to the right,” for example, is a big mistake. That context is already in the image, and passing that as a prompt will only make the model apt to lean the tower in the result.
I should have been clearer. Those are NOT the direct prompts. They are the starter prompts. In fact, that's why the attempt numbers change: we adapt the exact prompts depending on the model.
I understood that much, at least from the description you added on the Kontext result. I agree that you should provide more information here, though, especially around "we adapt the exact prompts depending on the model", since your strategy here could also reflect model strengths and weaknesses.
Prompt: "Keeping the glass and the hand behind the glass the same, please change only the three brown candies in the glass into green, yellow, red, and orange candies. Make no other changes. Change the reflection to remove the brown candy too." Seed was 1070229954903864, but your setup is probably too different for that to help.
It seems like Gemini 2.5 Flash was the only model that successfully removed the reflections… it should get some points for that!
I quit Google last year because I was just done with the incessant push for "AI" in everything (AI exclusively means LLMs of course). I still believe in the company as a whole, the work culture just took a hard right towards kafkaville. Nowadays when my relatives say "AI will replace X" or whatever I just nod along. People are incredibly naive and unbelievably ignorant, but that's about as new as eating wheat.
Did you read the whole thread and all of your own comment each time you had to type another half-word? If not, I’m afraid your first statement doesn’t hold.
All models are wrong, but some are useful. However when it comes to cognition and intelligence we seem to be in the “wrong and useless” era or maybe even “wrong and harmful” (history seems to suggest this as a necessary milestone…anyone remember “humorism”?)
I think there is a kernel of truth in what you said but your language is a bit of an overreach. No athlete who trains only when told to train is making it to the Olympics.