By "online wallet" they were likely referring to the Bybit website being the wallet of those customers that held their coins there rather than keeping them in their own private wallets, and not whether the hack involved a hot wallet or a cold wallet. Calling it a custodial wallet would have been more accurate.


> It would be similar to if I claimed that an LLM is an expert doctor, but in my data I've filtered out all of the times it gave incorrect medical advice.

Computationally it's trivial to detect illegal moves, so it's nothing like filtering out incorrect medical advice.


> Computationally it's trivial to detect illegal moves

You're strictly correct, but the rules for chess are infamously hard to implement (as anyone who's tried to write a chess program will know), leading to minor bugs in a lot of chess programs.

For example, there's this old myth about vertical castling being allowed due to ambiguity in the ruleset: https://www.futilitycloset.com/2009/12/11/outside-the-box/ (Probably not historically accurate).

If you move beyond legal positions into who wins when one side flags, the rules state that the other side should be awarded a victory if checkmate was possible with any legal sequence of moves. This is so hard to check that no chess program tries to implement it, instead using simpler rules to achieve a very similar but slightly more conservative result.


That link was new to me, thanks! However: I wrote a chess program myself (nothing big, hobby level) and I would not call it hard to implement. Just harder than what someone might assume initially. But in the end, it is one of the simpler simulations/algorithms I have done. It is just the state of the board, the state of the game (how many turns, castling rights, past positions for the repetition rule, ...) and picking one rule set if one really wants to be exact.

(thinking about which rule set is correct would not be meaningful in my opinion - chess is a social construct, with only parts of it being well defined. I would not bother about the rest, at least not when implementing it)
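
To make the "state of the game" part concrete, here is a minimal Python sketch of such a state; the field names are illustrative only, not taken from any particular engine:

    from dataclasses import dataclass, field

    @dataclass
    class GameState:
        board: list                      # 64 squares, each a piece or None
        white_to_move: bool = True
        castling_rights: str = "KQkq"    # FEN-style flags
        en_passant_square: int | None = None
        halfmove_clock: int = 0          # for the fifty-move rule
        position_history: list = field(default_factory=list)  # threefold repetition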

By the way: I read "Computationally it's trivial" as more along the lines of "it has been done before, it is efficient to compute, one just has to do it" versus "this is new territory, one needs to come up with how to wire up the LLM output with an SMT solver, and we do not even know if/how it will work."


> You're strictly correct, but the rules for chess are infamously hard to implement

Come on. Yeah, they're not trivial, but they've been done numerous times. There have been chess programs for almost as long as there have been computers. Checking legal moves is a _solved problem_.
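
In fact it is a few lines with an off-the-shelf library. A sketch using the third-party python-chess package (assuming it is installed):

    import chess  # third-party python-chess package

    board = chess.Board()
    print(board.is_legal(chess.Move.from_uci("e2e5")))       # False: illegal here
    print(chess.Move.from_uci("e2e4") in board.legal_moves)  # True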

Detecting valid medical advice is not. The two are not even remotely comparable.


> Detecting valid medical advice is not. The two are not even remotely comparable.

Uh? Where exactly did I signal my support for LLMs giving medical advice?


We implemented a whole chess engine in Lisp during third year; the legal move/state checking was actually really trivial to implement.


I got a kick out of that link. Had certainly never heard of "vertical castling" previously.


As I wrote in another comment - you can write scripts that correct bad math, too. But we don't use that to claim that LLMs have a good understanding of math.
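
(A toy sketch of such a script in Python, just to illustrate; the regex and the scope are made up and deliberately narrow:)

    import re

    def check_arithmetic(text):
        # Verify simple "a op b = c" claims; everything else is out of scope.
        for a, op, b, c in re.findall(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)", text):
            expected = {"+": int(a) + int(b),
                        "-": int(a) - int(b),
                        "*": int(a) * int(b)}[op]
            if expected != int(c):
                print(f"{a} {op} {b} = {c} is wrong; should be {expected}")

    check_arithmetic("23 * 31 = 713, but 17 + 5 = 23.")  # flags only the second claim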


I'd say that's because we don't understand what we mean by "understand".

Hardware that accurately performs maths faster than all of humanity combined is so cheap as to be disposable, but I've yet to see anyone claim that a Pi Zero has "understanding" of anything.

An LLM can display the viva voce approach that Turing suggested[0], and do it well. Ironically for all those now talking about "stochastic parrots", the passage reads:

"""… The game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether some one really understands something or has ‘learnt it parrot fashion’. …"

Showing that not much has changed on the philosophy of this topic since it was invented.

[0] https://academic.oup.com/mind/article/LIX/236/433/986238


> I'd say that's because we don't understand what we mean by "understand".

I'll have a stab at it. The idea of LLMs 'understanding' maths is that, once trained on a set of maths-related material, the LLM will be able to generalise to solve other maths problems that it hasn't encountered before. If an LLM sees all the multiplication tables up to 10x10, and then is correctly able to compute 23x31, we might surmise that it 'understands' multiplication - i.e. that it has built some generalised internal representation of what multiplication is, rather than just memorising all possible answers. Obviously we don't expect generalisation from a Pi Zero without it being specifically coded for, because it's a fixed-function piece of hardware.

Personally I think this is highly unlikely given that maths and natural language are very different things, and being good at the latter does not bear any relationship to being good at the former (just ask anyone who struggles with maths - plenty of people do!). Not to mention that it's also much easier to test for understanding of maths because there is (usually!) a single correct answer regardless of how convoluted the problem - compared to natural language where imitation and understanding are much more difficult to tell apart.


I don't know. I have talked to a few math professors, and they think LLMs are as good as a lot of their peers when it comes to hallucinations and being able to discuss ideas on very niche topics, as long as the context is fed in. If Tao is calling some models "a mediocre, but not completely incompetent [...] graduate student", then they seem to understand math to some degree to me.


Tao said that about a model brainstorming ideas that might be useful, not explaining complex ideas or generating new ideas or selecting a correct idea from a list of brainstormed ideas. Not replacing a human.


> Not replacing a human.

Obviously not, but that is tangential to this discussion, I think. A hammer might be a useful tool in certain situations, and surely it does not replace a human (but it might make a human in those situations more productive, compared to a human without a hammer).

> generating new ideas

Is brainstorming not an instance of generating new ideas? I would strongly argue so. And whether the LLM does "understand" (or whatever ill-defined, ill-measurable concept one wants to use here) anything about the ideas it produces, and how they might be novel - that is not important either.

If we assume that Tao is adequately assessing the situation and truthfully reporting his findings, then LLMs can, at the current state, at least occasionally be useful in generating new ideas, at least in mathematics.


Being as good as a professor at confidently hallucinating nonsense when you don't know the answer is a very high level skill.


Actually, LLMs do call scripts that correct bad math, and have gotten progressively better because of it. It's another special case example.


> in the absence of other celestial bodies the satellite would be in a stable orbit

Presumably entering such an orbit is only possible due to forces from other celestial bodies in the first place, since otherwise, if you reversed time, it would spontaneously leave its orbit. In other words, the act of the earth "capturing" the object is ultimately performed by external forces?


They said nothing about not printing any non-primes.


This is a good point; English alone is an awful specification language. Everyone knows what you mean by "print the first ten prime numbers" but only by idiom and cultural context. This is why an LLM is a good fit here, because -- when it works -- it combines all three: the literal words, the idiom, and the cultural context.
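
The idiomatic reading is unambiguous once written down; for instance, a throwaway Python version of what everyone actually means:

    # Print exactly the first ten primes and nothing else.
    primes, n = [], 2
    while len(primes) < 10:
        if all(n % p for p in primes):  # trial division by the primes found so far
            primes.append(n)
        n += 1
    print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]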


I suppose it's indeed possible to read the assignment as "print whatever you like as long as it includes 10 or more primes" but I'd fail you for that based on not being able to communicate with humans.


VanMoof bikes are known for breaking quickly. Calling them "so much better" than Cowboy based on a single bad experience doesn't seem totally reasonable.


Simply twisting a single corner piece or flipping a single edge piece achieves that already, without having to mess with the stickers.


Messing with the stickers makes it impossible to solve by simply twisting the corner piece back!


...the same is true for swapping two stickers on a corner.

If you alter the sequence of colours, you alter the finished pattern.

Therefore rendering the cube unsolvable.

QED


To be pedantic, we can "alter the sequence of colors" of a solved state into a solvable state, so it's not quite QED.

Anyway, I think you're agreeing with the person you're responding to; they're suggesting it's more fun to peel and re-stick stickers precisely because that's a way to achieve states that even mechanical disassembly can't solve.


Hence the clarification of only swapping TWO stickers on a corner. ;-)


Wasn't I already being pedantic enough about "Or peeling two stickers off a corner piece and swapping them to change the parity and make it impossible to solve."?

If you really want to make it physically impossible to solve and frustrating, swap two stickers between two cubes so they both have the wrong number of two colors, especially annoying with two colors on different faces of the same piece.


Not true.

Take a solved cube and twist a corner. Now jumble the cube and try to solve.

Do you see the problem now?


I think the parent commenter's point was that if you change stickers, you can't solve the cube, even if you twist corners. But if you twist corners, you can still "solve" the cube by changing stickers.

i.e. changing stickers is "more powerful" than twisting corners.


Sticker permutation gives a larger algebraic group: S_54, the largest you can get while still looking like a standard twisty cube.


Whether or not it is a coincidence depends on whether Obsidian planned this. Apple isn't going to base their release schedule on something like this.


Give Copilot a try, it has been way more reliable for me in terms of giving good code suggestions than ChatGPT so far.


That's probably an impossible task. The best they can do is ask contestants nicely not to do this, but that opens the can of worms of whether tools like GitHub Copilot should not be allowed, either.


Just in case performance matters, there is a more efficient way: have the tortoise stay in place while the hare advances one node at a time, and move the tortoise up to the hare's position each time the hare has taken a power-of-2 number of steps since the last move-up. This is known as Brent's algorithm, and it requires fewer advancements than the original tortoise-and-hare algorithm by Floyd.

Another notable advantage of Brent's algorithm is that it automatically finds the cycle length, rather than (in Floyd's case) any multiple of the cycle length.

https://en.wikipedia.org/wiki/Cycle_detection#Brent's_algori...
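
A compact Python sketch following the presentation at that link, for an iterated function f; it returns the cycle length lam and the index mu of the cycle's start:

    def brent(f, x0):
        # Phase 1: find the cycle length lam.
        power = lam = 1
        tortoise, hare = x0, f(x0)
        while tortoise != hare:
            if power == lam:        # time to double: teleport tortoise to hare
                tortoise = hare
                power *= 2
                lam = 0
            hare = f(hare)
            lam += 1
        # Phase 2: find mu, the index of the first element of the cycle.
        tortoise = hare = x0
        for _ in range(lam):        # put the hare exactly lam steps ahead
            hare = f(hare)
        mu = 0
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(hare)
            mu += 1
        return lam, mu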


https://rosettacode.org/wiki/Cycle_detection

Cycle detection in (currently) 44 languages. Most (all?) use Brent's algorithm. They're operating on an iterated function but converting most of these to detect cycles in a linked list would be straightforward.
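
For example, a straightforward Python conversion to linked lists, where the iterated function is simply x.next (the Node class here is a hypothetical minimal one):

    class Node:
        def __init__(self, val, nxt=None):
            self.val, self.next = val, nxt

    def has_cycle(head):
        # Brent's scheme with f(x) = x.next; compare nodes by identity.
        power = lam = 1
        tortoise = head
        hare = head.next if head else None
        while hare is not None and tortoise is not hare:
            if power == lam:
                tortoise = hare
                power *= 2
                lam = 0
            hare = hare.next
            lam += 1
        return hare is not None  # reached the end of the list -> no cycle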

