Hacker News | Too's comments

But that dependency order is usually just one big blob of "COPY src/ . + RUN make"; within that block you get none of the benefits. Bazel/Buck have much finer-grained awareness, down to every individual file.

Out of curiosity, would it be feasible to take a big CMake project, generate thousands of compile rules into Dagger, and use it as a substitute for make with sandboxing? I've never seen BuildKit used with that many nodes; how would it fare?


Dagger is a declarative DAG engine. So, yes, you can do that.


There is an overhead per container launched, so it would probably not be worth it.


Can someone give a TL;DR of what makes Fil-C different from just compiling with Clang's AddressSanitizer?

Calling it memory safe is a bit of a stretch when all it does is convert memory errors into runtime panics, or am I missing something? I mean, that's still good, just less than I'd expect given the recent hype of Fil-C being the savior for making C a competitive language again.


ASan does not make your code memory safe! It is quite good at catching unintentional bugs/OOB memory writes in your code, and it is quite reliable (the authors claim no false positives), but it has false negatives, i.e. it won't detect everything. Especially if you're up against someone who tries to corrupt your memory intentionally.

ASan works by (simplifying a lot) padding allocations and surrounding them with untouchable "red zone". So with some luck even this can work:

  char *a = new char[100];
  char *b = new char[1000];
  a[500] = 0; // may end up in b


AddressSanitizer won't panic/crash your program on all memory safety violations. Attackers know how to achieve remote code execution in processes running ASan. ASan's docs specifically call out that you should not use it in prod. In other words, ASan is not memory safe. It's just a bug-finding tool.

Fil-C will panic your program, or give some kind of memory safe outcome (that is of no use to the attacker) in all of the cases that attackers use to achieve remote code execution. In other words, Fil-C is memory safe.

The fact that Fil-C achieves memory safety using runtime checks doesn't make it any less memory safe. Even Rust uses runtime checks (most importantly for array bounds). And type systems that try to prove safety statically often amount to forcing the programmer to write the checks themselves.


If you can rely on memory errors panicking before the memory error can have an effect, you're memory safe. Memory safety doesn't require "can't crash".
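Fil-C's actual mechanism is its own and out of scope here, but the "trap instead of corrupt" contract is the same one any bounds-checked runtime gives you. A Python sketch of the idea (the `poke` helper is made up for illustration):

```python
def poke(buf, i, v):
    # a bounds-checked store: an out-of-range write fails loudly
    # instead of silently scribbling over neighboring memory
    try:
        buf[i] = v
        return "ok"
    except IndexError:
        return "trapped"  # the "panic": nothing was corrupted

buf = bytearray(4)
assert poke(buf, 2, 0xFF) == "ok"       # in bounds: normal store
assert poke(buf, 100, 0) == "trapped"   # out of bounds: clean failure
```

The crash (or caught exception) happens before the bad write takes effect, which is the property the parent comment is describing.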


Exactly. Or Rust wouldn't be memory safe due to the existence of unwrap().

Not that crashing can't be bad, as we saw with Cloudflare's recent unwrap-based incident.


Even without unwrap, Rust could still crash on an array out-of-bounds access. And probably more similar cases.


From a definition point of view that might be right, and it's no doubt a good step up compared to continuing with tainted data. In practice, though, that is still not enough; these days we should expect a higher degree of confidence from our code before it's run. Especially with the mountains of code that LLMs will pour over us.


It's a nice ambition, but it's a different thing than memory safety.


To optimize that code snippet, use temporary variables instead of member lookups to avoid slow getattr and setattr calls. It still won't beat a compiled language; number crunching is the worst sport for Python.
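A sketch of that advice on a made-up hot loop (the `Decoder` class and its fields here are illustrative, not the actual sigrok decoder):

```python
class Decoder:
    def __init__(self):
        self.shift = 0
        self.count = 0

    def step_attrs(self, pulses):
        # attribute lookups (self.shift, self.count) on every iteration
        for p in pulses:
            self.shift = ((self.shift << 1) | p) & 0xFFFF
            self.count += 1

    def step_locals(self, pulses):
        # hoist the attributes into locals, write back once after the loop
        shift, count = self.shift, self.count
        for p in pulses:
            shift = ((shift << 1) | p) & 0xFFFF
            count += 1
        self.shift, self.count = shift, count
```

Both methods compute the same state; the second trades per-iteration `LOAD_ATTR`/`STORE_ATTR` bytecodes for the cheaper `LOAD_FAST`/`STORE_FAST`, which is usually a modest (not dramatic) win.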


Which is why in Python in practice you pay the cost of moving your data to a native module (numpy/pandas/polars) and do all your number crunching over there and then pull the result back.

Not saying it's ideal but it's a solved problem and Python is eating good in terms of quality dataframe libraries.
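A minimal sketch of that pattern, assuming NumPy is installed (the computation itself is a stand-in):

```python
import numpy as np

data = list(range(100_000))

# pure-Python loop: every multiply and add is interpreted bytecode
total_py = sum(x * x for x in data)

# same computation pushed into NumPy's native code in one shot,
# then the scalar result pulled back into Python
arr = np.asarray(data, dtype=np.int64)
total_np = int((arr * arr).sum())

assert total_py == total_np
```

The one-time cost of converting to an ndarray is amortized as long as the crunching stays on the native side.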


All those class variables are already in __slots__, so in theory it shouldn't matter. Your advice is good

     self.shift_index -= 16
     shift_byte = (self.shift >> self.shift_index) & 0x5555
     shift_byte = (shift_byte + (shift_byte >> 1)) & 0x3333
     shift_byte = (shift_byte + (shift_byte >> 2)) & 0x0F0F
     self.shift_byte = (shift_byte + (shift_byte >> 4)) & 0x00FF
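For readers decoding the magic constants: those four masked adds compact the eight even-numbered bits of a 16-bit word into one byte. A standalone sketch, with a naive bit-at-a-time version for comparison (function names are mine):

```python
def swar_compact(x):
    # pack the 8 even-numbered bits of a 16-bit word into one byte
    b = x & 0x5555                # keep bits 0, 2, 4, ..., 14
    b = (b + (b >> 1)) & 0x3333   # bring each pair of kept bits adjacent
    b = (b + (b >> 2)) & 0x0F0F   # merge pairs into nibbles
    b = (b + (b >> 4)) & 0x00FF   # merge nibbles into the final byte
    return b

def naive_compact(x):
    # same result, one bit at a time (reference for checking)
    out = 0
    for i in range(8):
        out |= ((x >> (2 * i)) & 1) << i
    return out

assert all(swar_compact(x) == naive_compact(x) for x in range(1 << 16))
```

Four shift/mask steps instead of an eight-iteration loop is exactly the kind of constant-factor win SWAR buys, in Python or in native code.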


 but only for exactly 2-4 milliseconds per 1 million pulses :) Declaring a local variable in a tight loop forces Python into a cycle of memory allocations and garbage collection, negating potential gains :(

    SWAR                           :     0.288 seconds  ->    0.33 MiB/s
    SWAR local                     :     0.284 seconds  ->    0.33 MiB/s
This whole snippet is maybe what, 50-100 x86 opcodes? Native code runs at >100MB/s while Python 3.14 struggles around 300KB/s. Python 3.4 (Sigrok hardcoded requirement) is even worse:

    SWAR                           :     0.691 seconds  ->    0.14 MiB/s
    SWAR local                     :     0.648 seconds  ->    0.14 MiB/s
You can try your luck: https://github.com/raszpl/sigrok-disk/tree/main/benchmarks I will appreciate pull requests if anyone manages to speed this up. I give up at ~2 seconds per one RLL HDD track.

This is what I get right now decoding single tracks on i7-4790 platform:

    fdd_fm.sr 0.9385 seconds
    fdd_mfm.sr 1.4774 seconds
    fdd_fm.sr 0.8711 seconds
    fdd_mfm.sr 1.2547 seconds
    hdd_mfm_RQDX3.sr 1.9737 seconds
    hdd_mfm_RQDX3.sr 1.9749 seconds
    hdd_mfm_AMS1100M4.sr 1.4681 seconds
    hdd_mfm_WD1003V-MM2.sr 1.8142 seconds
    hdd_mfm_WD1003V-MM2_int.sr 1.8067 seconds
    hdd_mfm_EV346.sr 1.8215 seconds
    hdd_rll_ST21R.sr 1.9353 seconds
    hdd_rll_WD1003V-SR1.sr 2.1984 seconds
    hdd_rll_WD1003V-SR1.sr 2.2085 seconds
    hdd_rll_WD1003V-SR1.sr 2.2186 seconds
    hdd_rll_WD1003V-SR1.sr 2.1830 seconds
    hdd_rll_WD1003V-SR1.sr 2.2213 seconds
    HDD_11tracks.sr 17.4245 seconds <- 11 tracks, 6 RLL + 5 MFM interpreted as RLL
    HDD_11tracks.sr 12.3864 seconds <- 11 tracks, 6 RLL + 5 MFM interpreted as MFM


You should also factor in that a zero-day often isn't exploitable if you are using the onion model, with other layers that need to be penetrated together. In contrast, a supply-chain vulnerability is designed to actively make outbound connections through any means possible.


Thank you. I was scanning this thread for anyone pointing this out.

The cooldown security scheme appears like some inverse "security by obscurity": nobody could see a backdoor, therefore we can assume security. This scheme stands and falls with the assumed timelines. Once this assumption tumbles, picking a cooldown period becomes guesswork. (Or another compliance box ticked.)

On the other hand, the assumption can very well be sound; maybe ~90% of future backdoors can be mitigated by it. But who can tell. This looks like survivorship bias, because we are making decisions based on the cases we found.


Rust compiles to wasm right?


Obligatory link to Latency numbers every programmer should know. https://colin-scott.github.io/personal_website/research/inte...


I thought you were joking until I saw the video where this is an actual quote.


There are two more quotes that made me giggle;

> You can verify code quality as a glance, and ship absolute with confidence.

> You can confidently trust and merge the code without hours of manual review.

I couldn't possibly imagine that going wrong.


Automatic disengage when accelerating and automatic engage when parking means one less thing to think about. Just get in the car, put it in gear and go.

EVs also have very powerful motor braking that can get the car to a stop if the hydraulic brakes are busted.


This. I don't think about parking brakes. Get into parking spot, open doors - car engages parking brake itself. It's nuts all cars don't do this yet.


Accidentally pulling in an unused dependency during development is, if not a purely hypothetical scenario, at least an extreme edge case. During debugging, most of the time you have already built those 5000000000 lines while trying to reproduce a problem on the original version of the code. Since that didn't help, you now want to try commenting out one function call. Beep! Unused var.


If by perfect you mean that you can't have two chats open next to each other and toggling between chats is slow as molasses, then yes.


Are you running it on a particularly potatoey PC?

On my fairly ancient Core i7-8700 I can have a video call open in one screen and be editing in Resolve on another.


There's something weird going on honestly.

On an i9-14900K, arguably one of the fastest CPUs of the previous few years (and excusing the design defect that causes them to die), Teams is significantly slower than on the Qualcomm Snapdragon X Elite, or my MacBook.

It seems to perform the same on an i9 platform as it does on i5 laptops of the same generation (in terms of input latency, drawing to the screen, etc.).

I know it's apples/oranges, and that ARM CPUs are substantially different from x86 ones, but the fact that it seems the same on significantly lower-clocked (and lower-power) chips indicates to me that something very bizarre is happening with Teams.

ARM chips seem to be significantly better for electron applications, but something unique exists within Teams here.


Hypothesis: that Qualcomm and that Macbook have higher memory bandwidth than your i9 system. This is dependent on your memory and your mainboard, not so much on the CPU itself. Perhaps Teams just uses way too much memory, and actually uses it all the time.


That is an interesting hypothesis, and makes sense based on the generationality of the issue.

An i5-14500 has memory bandwidth comparable to an i9-14900K's:

https://www.intel.com/content/www/us/en/products/compare.htm...


I mean, the dude said he's doing multitasking stuff --in-- Teams and it's slow.

To me, memory latency being, whatever, 30% higher ought not to explain the issue here, in part because that a priori assumes everything is memory-bandwidth-limited, as opposed to, say, network-limited or CPU-limited as far as the bottleneck goes.

What makes more sense to me is that the software is "slow and clunky": maybe a global mutex, maybe poor multithreading sync making it effectively single-threaded, with a sprinkling of particularly slow algorithms or syscalls that show up as a frozen GUI. Or, as we call such cases, Microsoft standard.


But a sprinkling of singletons/mutexes would indeed be less painful with higher memory bandwidth and better latency.


No idea, but I have found that edge can be more conservative in its use of GPU acceleration than Chrome. Maybe that is the case in the webview Teams uses.


No potato, quite the contrary, and it's not that it hogs resources. It's just slow from within. I can also keep a video call open and do other things outside of Teams. But doing any multitasking within Teams is a nightmare: opening a second chat while in a video call shrinks the video into a thumbnail; searching through other chats to copy and forward into a third chat is just not possible, because everything is modal and the scroll location resets when toggling between them. On top of that it's just overall slow, slow, slow.


It's not that it hogs resources. The app is just slow. So so so so terribly slow.

And half of the time it crashes. Or the video/audio doesn't work.


Exactly my case: powerful Ubuntu setup, the app sucks, Teams in Chrome sucks, but as soon as I run it on Edge, no problems.


At my company we typically use Firefox with containers because Teams didn't have account switching. But then actually making calls is so unstable we regularly have to switch to Chromium.

Not at all surprised that it works properly on Edge.


Edge on Linux is probably my current favourite browser and I kind of want to hate that.


I didn't even know they had Linux builds, but I guess it's shipped via their Linux repositories.[1]

[1]: https://packages.microsoft.com/config/


It also doesn't use 1.5 CPU cores non-stop.

