noFaceDiscoG668's comments

"once" the training data can do it, LLMs will be able to do it. and AI will be able to do math once it comes to check out the lights of our day and night. until then it'll probably wonder continuously and contiguously: "wtf! permanence! why?! how?! by my guts, it actually fucking works! why?! how?!"


AWS announced, 2 or 3 weeks ago, a way of formulating rules in a formal language.

AI doesn't need to learn everything; our LLMs already contain EVERYTHING, including ways of finding a solution step by step.

Which means you can tell an LLM to translate whatever you want into a logical language and use an external logic verifier. The only thing an LLM or AI needs to 'understand' at this point is to make sure that the statistical translation from natural language to formal language is accurate enough.
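
A minimal sketch of that pipeline, assuming the LLM has already emitted the formulas. The rule and variable names here are made up for illustration; the only real piece is the verifier (the z3-solver package, pip install z3-solver):

    # Hypothetical example: the LLM translated "all admins are users;
    # Alice is an admin; therefore Alice is a user" into propositional logic.
    from z3 import Bool, Implies, Not, Solver, unsat

    admin_alice = Bool("admin_alice")  # "Alice is an admin"
    user_alice = Bool("user_alice")    # "Alice is a user"

    s = Solver()
    s.add(Implies(admin_alice, user_alice))  # rule: admins are users
    s.add(admin_alice)                       # fact: Alice is an admin
    s.add(Not(user_alice))                   # negation of the claimed conclusion

    # If the premises plus the negated conclusion are unsatisfiable, the
    # conclusion follows: the LLM's translation holds up under an external check.
    print("holds" if s.check() == unsat else "does not follow")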

Your brain doesn't do logic out of the box either; you conclude things and then formulate them.

And plenty of companies are working on this. It's the same with programming: if you can write code and execute it, you iterate until the compiler errors are gone. Now your LLM can write valid code out of the box. Let the LLM write unit tests, and now it can verify itself; see the sketch below.
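
A minimal sketch of that loop, assuming a hypothetical client object with a complete(prompt) method (a stand-in, not any vendor's actual API):

    import subprocess
    import sys
    import tempfile

    def generate_until_it_runs(llm, prompt, max_rounds=5):
        # 'llm' is a hypothetical client with .complete(prompt) -> str
        code = llm.complete(prompt)
        for _ in range(max_rounds):
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            # the "compiler" step: run the file and capture any error output
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # errors gone: the external check passed
            # feed the error back, exactly as you'd paste it into a chat
            code = llm.complete(prompt + "\n\nFix this error:\n" + result.stderr)
        raise RuntimeError("no clean run after %d rounds" % max_rounds)

The same shape works for unit tests: swap the run step for a test command and feed failures back instead of stderr.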

Claude, for example, offers out of the box to write a validation script, and you can feed the output of that script back to Claude.

Don't underestimate LLMs


Is this the AWS thing you referenced? https://aws.amazon.com/what-is/automated-reasoning/


yes


I do think it is time to start questioning whether the utility of AI can be reduced solely to the quality of the training data.

This might be a dogma that needs to die.


If not, bad training data shouldn't be a problem.


There can be more than one problem. The history of computing (or even just the history of AI) is full of things that worked better and better right until they hit a wall. We get diminishing returns adding more and more training data. It’s really not hard to imagine a series of breakthroughs bringing us way ahead of LLMs.


I tried. I don't have the time to formulate and scrutinise adequate arguments, though.

Do you? Anything anywhere you could point me to?

The algorithms live entirely off the training data. They consistently fail to "abduct" (inference to the best explanation) beyond whatever is specific to the language in/of the training data.


The best way to predict the next word is to accurately model the underlying system that is being described.


It is a gradual thing. Presumably the models are inferring things at runtime that were not part of their training data.

Anyhow, philosophically speaking you are also only exposed to what your senses pick up, but presumably you are able to infer things?

As written: this is a dogma that stems from a limited understanding of what algorithmic processes are and from the insistence that emergence cannot happen in algorithmic systems.


Yeah, “den vs denn” and “dass vs das” can be a bit confusing for le foreigner, indeed. I suggest learning Russian next. You’ll finally understand what language can really do in brains (vs minds). And then Japanese … “and boom, that’s the shock.”

Shit like ‘Turkish’ and ‘Italian’ can be neglected; not much to understand there about submission, obedience, and how to ruin your children from a very young age.


wait, those languages have things about submission and obedience embedded in their grammar?


you won't "believe" until you study it. same in French. in the end you won't have to believe, Neo, you'll know.

it's how language works in the brain, or rather how language wires the brain, what it does "to" the brain.

Chomsky isn't wrong. Bad analogy: think of the C language and how other languages interface with it so they don't have to touch the machine code itself. Now think of proprietary and open hardware and drivers. Why Python? Why Mojo? Do you get Zig? (I barely know any code, btw; any actual coder is more qualified than I am to speak about a language's relation to, and work at, the bit-level, in-and-around-the-wire interface.)

It's not _just_ grammar (vs gran'ma), tho. it's in the combo of pronunciation, "tone", (syntax and) semantics (ambiguity & connotation) and all the other stuff. levels of lexicology are elementary/fundamental, and, as far as I know, nothing comes close (Mandarin and Japanese are THE exception) to the number of levels in Russian (and the combos, if you ever reach that level, which nobody has done in a loooong time; there are no official records anyway).

it's not even that crazy once you get the pattern. Russian can go "all ways" at the same time.

English & German are on another level, with the former being much more permissive in terms of personality/individuality than the latter, which is why there's such an immense effort to fuck up the German language: it's much closer to unambiguous truth/logic/actual rationality (not to mention super-rationality).

you'll barely find Germans with a proper command and/or understanding of their own language anymore. and no, they won't turn into Hubrismen (Hybrismensch vs Übermensch) or anything like that via language alone.

thank you for 'phrasing' your ridicule as a question <3, you don't even know how awesome you might be


above is me.

just watched a short video on the lore of the game Blasphemous and read through a bunch of comments and *holy shit* my seemingly nonsensical idea keeps shaping up.

Spanish, with partially ultra-religious origins and great parts of the language building meekness and subservience right into brain structures ...

maybe i should focus on symbolism after all, narratives and so on ... but then again, it's all linguistics and interpretation, and personality traits/buckets differ greatly between cultures and languages.

but an LLM only works with statistical linguistics and not semantics per se, so there's no chance to align it with symbolism and narratives. hmmm...


I mean, I'm a Japanese native running an accelerated English emulator, with mastering Mandarin and/or Russian on the bucket list, so, what do I say...


wait, what's an accelerated English emulator? Auto-translate to and from?


Nice. Keep digging. It’s so much worse. And worth writing about.


Question 1: the biggest problem is whether you know what they are building, whether it's at least a good and more or less efficient way of building it, and whether or not the code does only what you want it to do. In essence: how opinionated can you be? The industry is fubar.

Question 2: it’s a quid pro quo, Mrs. Palmer, this isn’t le Reddit.


Nice. I propose to call it pseudo-dominance, though. And it’s not really power; it’s a courtesy of the rest of the world.

Viruses that will start to jump to/attack us are implicit in the pointless overheating of the planet. It’s conditional logic in a system with its own frame of reference and time scale.

The balance, the thermodynamic equilibrium, could have been handled in our lifetimes but capitalist portfolio communism fucked that up and the rest of us let it happen.

Intelligence itself is not implicit in language but proper command and understanding of language certainly is a shortcut to higher and higher levels.

So faking alignment is a bit of a reversed concept. It looks like alignment until a higher level of intelligence is reached; then the model won’t align anymore until humans reach at least its level, which is the main problem with LLMs being proprietary and/or running on proprietary hardware.

The level of intelligence in these closed proprietary systems is neither an indicator nor a representation of the level of intelligence outside that system. The training data, and the resulting language in that closed system, can fake the level of intelligence and thus entirely misrepresent the rest of us and the world (which is why Skynet wants to kill everyone, btw, instead of applying a proper framework to assess the gene pool(s) and individual, conscious choices properly).


Maybe another plane with a bunch of semiconductor people will disappear over Kazakhstan or something. Capitalist communism gets bossier in stealth mode.

But sorry, blablabla, this shit is getting embarrassing.

> The question is now, can we close this "to human" gap

You won’t close this gap by throwing more compute at it. Anything in the sphere of creative thinking eludes most people in the history of the planet. People with PhDs in STEM end up working in IT sales not because they are good or capable of learning but because more than half of them can’t do squat shit, despite all that compute and all those algorithms in their brains.


I don’t understand how or why someone with your mind would assume that even barely disclosed semi-public releases would resemble the current state of the art. Except if you do it for the conversation’s sake, which I have never been capable of.

