
> I challenge anyone to work out what this code does

It's unusual, but pretty obvious. Broken into single steps, it's basically this:

    a, b = 1, [0, 1, 2, 3]
    b[a] = b 
which would be b[1] = b because a == 1. So this creates a self-referencing list. The deeper reason is that everything in Python is a pointer to data in memory, even if the source code shows the actual data. That's why b[1] stores a pointer to the list, not the data of the list.
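You can verify the self-reference directly in a REPL (the `[...]` is how Python's repr marks a cycle):

    >>> a, b = 1, [0, 1, 2, 3]
    >>> b[a] = b
    >>> b
    [0, [...], 2, 3]
    >>> b[a] is b  # b[1] holds a pointer back to b itself
    True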

If someone is doing BS like this, they deserve a spanking. But banning the whole concept because people are unaware of how something is to be used properly is strange. Then again, it seems people have a bit of a problem with how Python really works and how it differs from other languages.


Basically, but not quite. :-) The original result for b is:

  [0, (1, [...]), 2, 3]
vs your version:

  [0, [...], 2, 3]
The equivalent statements to the original chained assignment are:

  a, b = 1, [0, 1, 2, 3]
  b[a] = 1, b  # relies on cpython implementation detail¹

¹: Using 1 works because the integers -5 through 256 are interned in CPython. Otherwise, or if you don't want to rely on an implementation detail, to be truly equivalent (i.e. `id(b[1][0]) == id(a)`), it's this:

  a, b = 1, [0, 1, 2, 3]
  b[a] = a, b
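A quick REPL check of this expanded form:

  >>> a, b = 1, [0, 1, 2, 3]
  >>> b[a] = a, b
  >>> b
  [0, (1, [...]), 2, 3]
  >>> b[a][1] is b
  True
  >>> b[a][0] is a  # same object, i.e. id(b[1][0]) == id(a)
  True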

An adjacent thread has some confusion about whether chained assignments happen left to right or right to left. Honestly, that’s a factoid I don’t expect most Python programmers to know. It’s usually a bad idea to rely on people knowing arcane details of a language, especially a language like Python that has attracted many non-programmers like data scientists. (I have nothing against data scientists, but their brainpower shouldn’t be wasted on remembering these kinds of details.)
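For the record, the targets of a chained assignment are assigned left to right; a minimal probe class (purely illustrative) makes this visible:

    class Loud:
        def __init__(self, name):
            self.name = name

        def __setitem__(self, key, value):
            # Called once per assignment target, in target order.
            print(f"assigning to {self.name}")

    x, y = Loud("x"), Loud("y")
    x[0] = y[0] = "value"  # prints "assigning to x", then "assigning to y"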

I've been programming Python since 1.5.2 days and indeed, I didn't know the order of evaluation of chained assignments.

That said, it's the self-referencing list in your example that's the more confusing part. It's atypical to have self-referencing data structures, so that's something I'd document in the design if I needed one.


A decade ago, it happened regularly, but I'm not sure whether they are still doing this now. The laws haven't changed much since then, though.

Any good file manager is better. Out of the box, it's also very lacking in necessary abilities. There are also some errors around the edges, like hiding vault-internal links, which makes it a bit questionable for proper file handling.

But sure, when all you have are supported file types, then it can be useful.


Search becomes useless if you drown in results. Good organization should assist in shortening paths, but whether you start the path manually or through a search doesn't matter much.

But shouldn’t search be responsible for reducing, or at least ranking, the results in such a way that, no matter how many results there are, you find what you are looking for in the top N?

Using a hybrid of traditional and semantic search is so trivial to implement these days that I think we are past the point of needing good organization.
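For example, a rough sketch of such a hybrid ranker in Python, assuming the rank_bm25 and sentence-transformers packages for the traditional and semantic halves (the corpus, model name, and 50/50 weighting are purely illustrative):

    # Hybrid search sketch: blend keyword (BM25) and semantic (embedding) scores.
    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer, util

    docs = ["note on python pointers", "grocery list", "ideas on search ranking"]

    bm25 = BM25Okapi([d.split() for d in docs])      # traditional keyword index
    model = SentenceTransformer("all-MiniLM-L6-v2")  # semantic embedding model
    doc_emb = model.encode(docs)

    def hybrid_search(query, alpha=0.5):
        kw = bm25.get_scores(query.split())                  # keyword scores
        sem = util.cos_sim(model.encode(query), doc_emb)[0]  # cosine similarities
        # In practice you'd normalize the two score scales before blending.
        blended = [alpha * k + (1 - alpha) * float(s) for k, s in zip(kw, sem)]
        return sorted(zip(blended, docs), reverse=True)      # best match first

    print(hybrid_search("how do python lists store data"))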


> But shouldn’t search be responsible for reducing, or at least ranking, the results in such a way that, no matter how many results there are, you find what you are looking for in the top N?

Maybe if you are Google, with well-paid teams and extensive data on what people with your profile are usually searching for at the moment. Obsidian lacks all this, so the search quality depends heavily on the number of files and results.


The basic search one gets out of the box is closer to regex matching than to real search. IMHO something like omnisearch should be sherlocked next.

Even better, the payments can be used to gain even more crucial personal data.

Payments? It's one single payment to one winner.

Also, how is it more data than when you buy a coffee? Unless you're cash-only.

I know everyone has their own unique risk profile (e.g. the PIN to open the door to the hangar where Elon Musk keeps his private jet is worth a lot more 'in the wrong hands' than the PIN to my front door is), but I think for most people the value of a single unit of "their data" is near $0.00.


> Payments? It's one single payment to one winner.

How do you know? They can tell everyone they've won and snatch their data. It's not a verifiable public contest.

> Also, how is it more data than when you buy a coffee?

A coffee shop has no other personal data and usually uses other payment methods. But still, there have been cases of misuse.

> but I think for most people the value of a single unit of "their data" is near $0.00.

This is a classic scenario for social engineering, and we are in a high-profile social group here. There is a good chance that someone from a big company is participating. This is not about stealing peanuts or selling a handful of data on the darknet. It's about collecting personal data and scouting potential victims for future attacks.

And I'm not saying this is actually happening here, but not even seeing the problem is... interesting.


You can have my venmo if you send me $100 lmao, fair trade

> Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years.

Maybe because researchers learned from the paper how to avoid the collapse? Awareness alone often helps to sidestep a problem.


No one did what the paper actually proposed. It was a nothing burger in the industry. Yet it was insanely popular on social media.

Same with the "LLMs don't reason" paper from "Apple" (two interns working at Apple, but anyway). The media went nuts over it, even though it was littered with implementation mistakes and not worth the paper it was(n't) printed on.


Who cares? This is a place where you should be putting forth your own perspective based on your own experience, not parroting what someone else already wrote.

How well does this work when you slightly change the question? Rephrase it, or use a bicycle/truck/ship/plane instead of car?

I didn't test this, but I suspect current SotA models would get variations within that specific class of question correct if they were forced to use their advanced/deep modes, which invoke MoE (or similar) reasoning structures.

I assumed failures on the original question were more due to model routing optimizations failing to properly classify the question as one requiring advanced reasoning. I read a paper the other day that mentioned advanced reasoning (like MoE) is currently 10x to 75x more computationally expensive. LLM vendors aren't subsidizing model costs as much as they were, so I assume SotA cloud models are always attempting some optimizations unless the user forces it.

I think these one-sentence 'LLM trick questions' may increasingly be testing optimization pre-processors more than the full extent of a SotA model's max capability.


> You shouldn't need to read every line. You should have test coverage, type checking, and integration tests that catch the edge cases automatically.

Because tests are always perfect and catch every corner case, and even detect all the unusual behaviour they are not testing for? Seems unrealistic. But it explains the sharp rise of AI slop and self-inflicted harm.


I think the point was about the frequency of switching your frontend. With a proper frontend you can switch the backend on each request if you want, but usually people will stay with one main interface of their choice. For AI, OpenClaw, Moltic, and Rowboat are now such frontends, but not many will use them all at once.

It's similar to how people usually only use one preferred browser, editor, shell, OS.


It's the old story: evil, irresponsible behaviour has a higher chance of success than being good and responsible. AI's recent history is a good example. Google had the lead, but lost it (temporarily) to OpenAI, because Google was responsible and not willing to open Pandora's box. Apple seems to have had something similar to OpenClaw for a while now, but withholds it from release because it's too insecure. History is full of people burning the world for their own greed and getting rewarded for it; they then call it "taking risks" and "thinking outside the box"... I think the underlying reason might be that too many people believe there is some level of competence behind the irresponsible behaviour, and that it's all just controlled harm or something like that.
