Hacker News | billyzs's comments

I already prefer using pipe as separator in logging; now you're telling me there is a chance that my logs can be automatically ingested as tabular data? Sign me up for this branch of the multiverse :)
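Pipe-delimited log lines are indeed only a `delimiter` argument away from tabular data; a minimal sketch using the stdlib `csv` module (the log format here is invented for illustration):

```python
import csv
import io

# Hypothetical pipe-separated log lines.
log = (
    "2024-05-01T12:00:00|INFO|worker-1|started\n"
    "2024-05-01T12:00:05|ERROR|worker-2|disk full\n"
)

# csv.reader handles quoting/escaping that a naive str.split("|") would not.
rows = list(csv.reader(io.StringIO(log), delimiter="|"))
print(rows[1])  # ['2024-05-01T12:00:05', 'ERROR', 'worker-2', 'disk full']
```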


You might be interested in https://draradech.github.io/jigsaw/index.html

Used its output with a laser cutter and plywood; the result was neat.


To me, the chess AI example he used was perhaps not the most apt. Human players may not be able to reason over as long a horizon as an AI and may therefore find some of its moves perplexing, but they can be more or less sure that a chess AI is optimizing for the same goal under the same set of rules as they are. With Reasoners, alignment is not a given. They may be reasoning under an entirely different set of rules and cost functions. On more open-ended questions, when Reasoners produce something that humans don't understand, we can't easily say whether it's a stroke of genius or misaligned reasoning.


If the kernel is updated, the OS would have to be rebooted anyway unless live patching is configured. Rebooting after an update is probably more common & less annoying than one would think.


> a2, c = a2+c, c+2

> is faster than

> a2 += c

> c += 2

> My guess is that in the first case the two evaluations and assignments can happen in parallel, and so may happen on different cores

Not sure I follow; isn't Python single-threaded by default? Changes to the GIL are coming, but do they change how the interpreter uses the CPU?


Yeah, this isn't benchmarking anything related to the CPU etc. It is benchmarking the quirks of the Python interpreter.
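For anyone who wants to reproduce the comparison on their own interpreter build, a minimal `timeit` sketch (variable names taken from the quoted snippet; the iteration count is arbitrary):

```python
import timeit

setup = "a2 = 0; c = 0"
single = "a2, c = a2 + c, c + 2"  # tuple-unpacking variant
double = "a2 += c\nc += 2"        # two augmented assignments

# Results vary across CPython versions -- which is rather the point.
print(timeit.timeit(single, setup=setup, number=1_000_000))
print(timeit.timeit(double, setup=setup, number=1_000_000))
```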


There is instruction level parallelism in modern CPUs. They have multiple "calculation units", that do for example addition. If one doesn't depend on the other they get executed at the same time.


But there is no dependency between the expressions in either variant, so there is no reason in principle why the first variant should be faster than the second (of course Python internals will get in the way and make it hard to reason about performance at all).


Not sure why you're being downvoted, because you're right.

To back up the parent's point: if you compile the code and the resulting assembly is a direct translation, register renaming will break the dependency and the CPU will execute the instructions in parallel. The write-after-read hazard is the applicable section:

https://en.wikipedia.org/wiki/Register_renaming


[off topic, but I expect that some share of the downvotes are misclicks; I know I've caught myself correcting many of my own, and I wonder how many I don't catch]


LOL. The amount of machinery going on under the hood in evaluating those expressions in CPython is staggering. A microscopic detail like a single instruction data dependency has nothing to do with it. (How many CPU add instructions are executed just for those statements? Probably hundreds.)

This is much more likely a quirk of the interpreter (or possibly a botched test). CPU details are 10,000 feet down.


You're absolutely right, but laughing at the notion is unnecessarily rude.


You're right. Though it was the scenario playing out in my head, of someone slaving over an architecture optimization manual while, zoom out, editing Python, that struck me as comical, rather than making fun of anyone specifically.


It wouldn't run on separate cores, but single-threaded code can still get some measure of instruction-level parallelism.

A CPU can do more than one thing at once by computing the next instruction while it's still writing the result of the previous one. However, the CPU can only do that if it's 100% sure that the next instruction does not depend on the previous instruction. This optimization sometimes can't trigger in an interpreter, because of highly mutable variables such as the program counter or the top of the interpreter stack. Fun illustration: https://www.youtube.com/watch?v=cMMAGIefZuM&t=288s


Running that across two cores would be a slowdown, not a speedup. You cannot parallelize work like this, because it's too small to be worth it.


The evaluations don't magically/implicitly happen on multiple cores.

I guess the first thing worth doing when analyzing this would be to look at the differences in the bytecode, then at the C code implementing the differing bytecode ops. But there are also other factors, like the new adaptive interpreter trying to JIT the code.
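The bytecode comparison suggested above can be done with the stdlib `dis` module; a sketch (the wrapper functions and names are invented for illustration):

```python
import dis

def single_line(a2, c):
    a2, c = a2 + c, c + 2
    return a2, c

def two_line(a2, c):
    a2 += c
    c += 2
    return a2, c

dis.dis(single_line)  # tuple-unpacking variant
dis.dis(two_line)     # augmented-assignment variant
```

The exact opcodes differ by CPython version (3.11+ folds most arithmetic into `BINARY_OP`), so it's worth running on the same interpreter you benchmarked.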


For Python 3.12, Godbolt gave almost identical bytecode for both (albeit in a different order). I'm guessing wildly, but might this be because `BINARY_OP(+=)` stores the result (because it's `INPLACE`) and then you also do a `STORE_FAST(x)`, which gives you two stores for the same value, compared with one store per value in the single-line version?

Single-line dual assignment:

    2         2 LOAD_FAST                0 (a2)
              4 LOAD_FAST                1 (c)
              6 BINARY_OP                0 (+)
             10 LOAD_FAST                1 (c)
             12 LOAD_CONST               1 (2)
             14 BINARY_OP                0 (+)
             18 STORE_FAST               1 (c)
             20 STORE_FAST               0 (a2)
vs the two-line version:

    2         2 LOAD_FAST                0 (a2)
              4 LOAD_FAST                1 (c)
              6 BINARY_OP               13 (+=)
             10 STORE_FAST               0 (a2)

    7        12 LOAD_FAST                1 (c)
             14 LOAD_CONST               1 (2)
             16 BINARY_OP               13 (+=)
             20 STORE_FAST               1 (c)


> I just wish we had something like Matplotlib in R

Plotly could be worth a try; I use its Python bindings and much prefer it to matplotlib, but I don't know much about the quality of its R API.


There used to be a standalone Firefox password app for mobile, but it got the axe.


Lockwise.

And it had way better UX when auto-fill didn't work, because for some reason about:logins doesn't work on mobile, so you have to hunt through a menu mess to copy the password manually from Firefox.

I continued to use it (because they didn't stop it syncing) until I switched phones and so couldn't easily install it.


Yeah, let’s not forget https://www.thebureauinvestigates.com/stories/2021-12-06/swi...

I’m not under any illusion that there’s any “Swiss exceptionalism” when it comes to privacy.


The RDI for added sugar itself is 24-36 grams, which a single 12 oz can of Coke exceeds. Measured in Calories, it's recommended to limit daily intake from added sugar to below 100-150 Calories. [0]

% of Caloric RDI isn't the whole story. A can of Coke is less of a blip on the radar than you suggested.

[0] https://www.hsph.harvard.edu/nutritionsource/carbohydrates/a...
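The rough arithmetic behind that claim (the ~39 g figure is the commonly cited sugar content of a 12 oz Coke; treat it as an assumption):

```python
sugar_g = 39                # approx. grams of sugar in a 12 oz can of Coke
cal_per_g_carb = 4          # Calories per gram of carbohydrate
cal_from_sugar = sugar_g * cal_per_g_carb
print(cal_from_sugar)       # 156 Calories, already past the 100-150 limit
rdi_low, rdi_high = 24, 36  # grams, the quoted RDI range for added sugar
print(sugar_g > rdi_high)   # True: one can exceeds even the high end
```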


I use worktrees, but some builds that embed the git hash will fail in worktree checkouts, complaining that "it's not a git directory" (I forget the exact error, it's been a while); haven't found a solution to this.


One way to get an error when retrieving a git hash is by building inside a Docker container. If you mount the root directory of a worktree, but not the "git-common-dir", git will fail to give you the commit hash.
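A sketch of why a linked worktree confuses naive build tooling: its `.git` is a plain file pointing back at the main repo, not a directory (repo names and paths here are made up for the demo; requires `git` on PATH):

```python
import pathlib
import subprocess
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
main = tmp / "main"
subprocess.run(["git", "init", "-q", str(main)], check=True)
subprocess.run(["git", "-C", str(main), "-c", "user.email=you@example.com",
                "-c", "user.name=you", "commit", "-q", "--allow-empty",
                "-m", "init"], check=True)
subprocess.run(["git", "-C", str(main), "worktree", "add", "-q",
                str(tmp / "wt")], check=True)

# In the linked worktree, .git is a one-line pointer file.
gitfile = (tmp / "wt" / ".git").read_text().strip()
print(gitfile)  # "gitdir: .../main/.git/worktrees/wt" -- a file, not a dir
```

So if only the worktree path is mounted into the container, that `gitdir:` target is unreachable and `git rev-parse HEAD` fails; also mounting the main repo's `git rev-parse --git-common-dir` path (at the same absolute path) is one workaround.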

