Hacker News | joomla199's comments

Given that we don’t know why electromagnetism exists, this is basically true for many technologies.


What do you mean? We know quite well how electromagnetism arises from U(1) symmetry in gauge theory. What else is there to know?
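(For readers without the background: "arises from U(1) symmetry" refers to the standard textbook construction in which demanding invariance of the electron field under a local phase rotation forces the introduction of the photon field. In outline:

```latex
% Local U(1) phase rotation of the matter field:
\psi(x) \;\to\; e^{i\alpha(x)}\,\psi(x)
% Invariance requires a gauge field transforming as
A_\mu(x) \;\to\; A_\mu(x) - \tfrac{1}{e}\,\partial_\mu \alpha(x),
% entering through the covariant derivative
D_\mu = \partial_\mu + i e A_\mu,
% which yields the familiar electromagnetic field strength
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
```

With these transformations, $D_\mu\psi$ picks up only the overall phase $e^{i\alpha(x)}$, so the Lagrangian stays invariant and $A_\mu$ behaves as the electromagnetic potential.)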


What does U(1) symmetry in gauge theory arise from?


It's turtles all the way down.


Not from bribes and/or moving faster than the regulators. Altman's projects, on the other hand...


At some point the answer is “because that’s what reality is.”


If by "reality" you mean "the universe", then the way the universe is depends on a cause, as the existence of the universe is not explained by the universe itself (even an "eternal" universe). Its existence is contingent on some other cause that ultimately cannot be contingent and thus does not require explanation.

So the cause or dare I say reason for the universe being the way it is will depend on its cause.


I feel like the language of this argument is self-undermining. The existence of the universe being dependent on a cause that is itself not contingent on anything else… Just as easily, the existence of the universe could be not contingent on anything else.


> Just as easily, the existence of the universe could be not contingent on anything else.

But the universe is contingent. It isn't necessary. It is one big chain of dependence.


The universe just is that way because it is. THAT is the root cause.


That's not a cause. That's a tautological assertion, obscured by the use of the word "because".


And you have an implied axiomatic assertion that everything must have a cause, even though that necessarily results in an infinite recursion of cause-finding.


Nowhere did I say or imply everything needs a cause. That's your baggage. In fact, it is the exact opposite: that because an infinite regress is incoherent and impossible, there must be some necessary uncaused cause where the buck stops.

"The universe" cannot be that cause, as the universe and everything in it is contingent.


I said why, not how, for a reason. I did expect some idiots to come around arguing though.


In science, it is the same thing.


We don't know why the world itself exists so everything is magic


Speak for yourself. That "we" is presumptuous.

The cause of the universe must itself be uncaused, or else it is only an intermediate cause that must itself refer ultimately to an uncaused cause. An infinite regress is impossible with respect to existence. Unlike causes per accidens which can in principle be infinite in length, a cause `per se` cannot; without a terminus, there would be nowhere from which the latter causes would derive their force, so to speak, like an arm pushing a stick that is pushing a rock that is pushing a leaf. Meaning, the cause is not some distant one in time, but one always acting; otherwise, everything would vanish. The only cause that could have this property is self-subsisting being.

From there, you can know quite a bit about what else must be true of self-subsisting being.


There doesn’t need to be a why for the world's existence. It exists; that’s all there is to know. There doesn’t have to be a purpose, just an explanation of how, not why.


GP probably meant "how" as in "By what mechanisms" not "why" as in "For what purposes". So "why" as in "what makes it do what it does".


Well, it does what it does because it's shaped like itself.


Sure but that's somewhat tautological and not very helpful if you seek an empirical or predictive understanding of it. The question really is what complexity of the system (meaning: all of it) is irreducible and what can at least be approximated with simplified models.

You may balk at this as being ultimately futile, but our entire existence is built on trying to break apart and simplify the world we exist in, starting with the first cut between self/inside and other/outside (i.e. "this is me" vs "this is where I am" - a distinction that becomes immensely relevant after the moment of birth). Language itself only functions because we can create categories it can operate on - regardless of whether those categories consistently map to reality itself.


This is true, and the fact that humans mostly become blind to this magic past the age of 5 is one of the reasons we live in such a dismal world.


This but unironically.


They're both awful companies at heart. Birds of a feather flock together and all that.


I have a 4080 RTX and Kontext runs great at fp8. I run several other models besides. If you want to get at all good at this, you need tons of throwaway generations and fast iteration and an API quickly becomes pricier than a GPU.


Precisely. Even if the inflated figure of 16,000 API calls were accurate for what the cost of a mediocre GPU would buy you, that’s not an endless store of API calls. I’m also on a 4080 for lighter loads, and even just writing benchmarks, exploring attention mechanisms, token salience, etc., without image generation being my specific purpose, I may trash five hundred generations every few days. More if I count the stuff that never made it that far.
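For what it's worth, the break-even arithmetic is easy to sketch. The figures below are placeholder assumptions for illustration, not real quotes from any GPU vendor or API provider; plug in your own:

```python
import math

# Hypothetical numbers, for illustration only -- neither figure is a real quote.
GPU_COST_USD = 1000.00       # assumed price of a mid-range card
API_PRICE_PER_IMAGE = 0.04   # assumed per-generation API price

def break_even_images(gpu_cost: float, price_per_image: float) -> int:
    """Generations after which owning the GPU beats paying per API call
    (ignoring electricity, resale value, and your time)."""
    return math.ceil(gpu_cost / price_per_image)

print(break_even_images(GPU_COST_USD, API_PRICE_PER_IMAGE))  # 25000 at these assumed prices
```

At a workload of hundreds of throwaway generations a day, as described above, that crossover arrives quickly.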


Good effort, somewhat marred by poor prompting. Passing in “the tower in the image is leaning to the right,” for example, is a big mistake. That context is already in the image, and passing that as a prompt will only make the model apt to lean the tower in the result.


I should have been more clear. Those are NOT the direct prompts. They are the starter prompts. In fact that's why the attempt numbers change, we adapt the exact prompts depending on the model.


I understood that much, at least from the description you added on the Kontext result. I agree that you should provide more information here, though, especially around "we adapt the exact prompts depending on the model", since your strategy here could also reflect model strengths and weaknesses.


Good point! Perhaps I should add in the "final model-specific prompt", or place them in an errata section.


By the way, this is what I got from Kontext after just a couple of tries: https://i.imgur.com/J4LwkVI.png

Prompt: "Keeping the glass and the hand behind the glass the same, please change only the three brown candies in the glass into green, yellow, red, and orange candies. Make no other changes. Change the reflection to remove the brown candy too." Seed was 1070229954903864, but your setup is probably too different for that to help.

It seems like Gemini 2.5 Flash was the only model that successfully removed the reflections... it should get some points for that!


I quit Google last year because I was just done with the incessant push for "AI" in everything (AI exclusively means LLMs of course). I still believe in the company as a whole, the work culture just took a hard right towards kafkaville. Nowadays when my relatives say "AI will replace X" or whatever I just nod along. People are incredibly naive and unbelievably ignorant, but that's about as new as eating wheat.


Did you read the whole thread and all of your own comment each time you had to type another half-word? If not, I’m afraid your first statement doesn’t hold.


The real AGI was the money we siphoned along the way.


This is pretty good!


Absolute Grift Industry


Your comment reminded me of Business Business [0]

[0] https://youtu.be/WO5wpeYSotg?si=hgwzJ5mxJyAZeYoA


All models are wrong, but some are useful. However when it comes to cognition and intelligence we seem to be in the “wrong and useless” era or maybe even “wrong and harmful” (history seems to suggest this as a necessary milestone…anyone remember “humorism”?)


I think there is a kernel of truth in what you said but your language is a bit of an overreach. No athlete who trains only when told to train is making it to the Olympics.


Never known a competitive runner not to occasionally run during hobby time or participate in non-ranked fun runs.

