But that dependency order is usually just one big blob of "COPY src/ . + RUN make"; within that block you get none of the benefits. Bazel/Buck have much finer-grained awareness, down to every individual file.
Out of curiosity, would it be feasible to take a big CMake project, generate thousands of compile rules in Dagger, and use it as a substitute for make with sandboxing? I've never seen BuildKit used with that many nodes; how would it fare?
Can someone give a TL;DR of what makes Fil-C different from just compiling with Clang's AddressSanitizer?
Calling it memory safe is a bit of a stretch when all it does is convert memory errors into runtime panics, or am I missing something? I mean, that's still good, just less than I'd expect given the recent hype about Fil-C being the savior that makes C a competitive language again.
ASan does not make your code memory safe! It is quite good at catching unintentional bugs / out-of-bounds memory writes in your code, and it is quite reliable (the authors claim no false positives), but it has false negatives, i.e. it won't detect everything, especially if you're up against someone who is trying to corrupt your memory intentionally.
ASan works by (simplifying a lot) padding allocations and surrounding them with an untouchable "red zone". The red zones are finite, though, so with some luck even this out-of-bounds write can slip through:
char *a = new char[100];
char *b = new char[1000];
a[500] = 0; // far past a's red zone; may land inside b and go undetected
AddressSanitizer won't panic/crash your program on all memory-safety violations. Attackers know how to achieve remote code execution in processes running ASan, and ASan's docs specifically call out that you should not use it in prod. In other words, ASan is not memory safe. It's just a bug-finding tool.
Fil-C will panic your program, or give some kind of memory safe outcome (that is of no use to the attacker) in all of the cases that attackers use to achieve remote code execution. In other words, Fil-C is memory safe.
The fact that Fil-C achieves memory safety using runtime checks doesn't make it any less memory safe. Even Rust uses runtime checks (most importantly for array bounds). And type systems that try to prove safety statically often amount to forcing the programmer to write the checks themselves.
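To make the runtime-check idea concrete, here is a toy illustration in Python (not Fil-C itself, just the general shape of a memory-safe runtime check): the same out-of-bounds write as in the C++ snippet above becomes a well-defined exception instead of silently corrupting a neighboring allocation.

```python
# Runtime bounds checking: the out-of-bounds write is rejected with a
# well-defined error instead of clobbering whatever lives next door.
buf = bytearray(100)

try:
    buf[500] = 0  # same shape as `a[500] = 0` in the C++ snippet
except IndexError:
    print("out-of-bounds write rejected at runtime")
```

That panic-or-exception outcome is exactly what makes the behavior useless to an attacker, even though the bug still exists in the source.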
From a definition point of view that might be right, and it's no doubt a good step up compared to continuing with tainted data. In practice, though, it's still not enough; these days we should expect a higher degree of confidence in our code before it's run, especially with the mountains of code that LLMs will pour over us.
To optimize that code snippet, use temporary variables instead of member lookups to avoid slow getattr and setattr calls. It still won't beat a compiled language; number crunching is the worst sport for Python.
Which is why in Python in practice you pay the cost of moving your data to a native module (numpy/pandas/polars) and do all your number crunching over there and then pull the result back.
Not saying it's ideal but it's a solved problem and Python is eating good in terms of quality dataframe libraries.
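A minimal sketch of that pattern with NumPy (the array size and the sum-of-squares operation are made up for illustration):

```python
import numpy as np

data = list(range(1_000_000))

# Pure-Python loop: every iteration pays interpreter overhead.
slow = sum(x * x for x in data)

# Move the data into a native array once, crunch there, pull the result back.
arr = np.asarray(data, dtype=np.int64)
fast = int((arr * arr).sum())

assert slow == fast  # same answer; the crunching happened in C, not bytecode
```

The one-time conversion cost is amortized as long as you keep the work vectorized on the native side instead of bouncing elements back and forth.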
All those class variables are already in __slots__, so in theory it shouldn't matter. Your advice is good, though.
self.shift_index -= 16
shift_byte = (self.shift >> self.shift_index) & 0x5555
shift_byte = (shift_byte + (shift_byte >> 1)) & 0x3333
shift_byte = (shift_byte + (shift_byte >> 2)) & 0x0F0F
self.shift_byte = (shift_byte + (shift_byte >> 4)) & 0x00FF
but only for about 2-4 milliseconds per 1 million pulses :) Declaring a local variable in a tight loop forces Python into a cycle of memory allocations and garbage collection, negating the potential gains :(
SWAR : 0.288 seconds -> 0.33 MiB/s
SWAR local : 0.284 seconds -> 0.33 MiB/s
This whole snippet is maybe 50-100 x86 opcodes? Native code runs at >100 MB/s, while Python 3.14 struggles around 300 KB/s. Python 3.4 (a hardcoded sigrok requirement) is even worse:
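For what it's worth, those four mask-and-add steps read to me as a bit deinterleave rather than a popcount: they compact the 8 even-numbered bits of a 16-bit word into one byte (the + acts as | because the summands never share set bits). A quick plain-Python sanity check of that reading (my inference, not code from the original decoder):

```python
def pack_even_bits(x):
    # The four SWAR steps from the snippet above, as a standalone function.
    x &= 0x5555
    x = (x + (x >> 1)) & 0x3333
    x = (x + (x >> 2)) & 0x0F0F
    x = (x + (x >> 4)) & 0x00FF
    return x

def reference(x):
    # Naive version: pick out bits 0, 2, 4, ..., 14 and pack them into a byte.
    return sum(((x >> (2 * i)) & 1) << i for i in range(8))

# Exhaustive check over every 16-bit input.
assert all(pack_even_bits(x) == reference(x) for x in range(0x10000))
```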
You should also factor in that a zero-day often isn't exploitable if you're using the onion model, with other layers that need to be penetrated together. Contrast that with a supply-chain vulnerability, which is designed to actively make outbound connections through any means possible.
Thank you. I was scanning this thread for anyone pointing this out.
The cooldown security scheme looks like some inverse "security by obscurity": nobody could see a backdoor, therefore we can assume security. The scheme stands and falls with the assumed timelines. Once that assumption tumbles, picking a cooldown period becomes guesswork. (Or another compliance box ticked.)
On the other hand, the assumption may well be sound; maybe ~90% of future backdoors can be mitigated by it. But who can tell? This looks like survivorship bias, because we are making decisions based on the cases we found.
Automatic disengage when accelerating and automatic engage when parking mean one less thing to think about. Just get in the car, put it in gear, and go.
EVs also have very powerful motor braking that can get the car to a stop if the hydraulic brakes are busted.
Accidentally pulling in an unused dependency during development is, if not a purely hypothetical scenario, at least an extreme edge case. During debugging, most of the time you have already built those 5000000000 lines while trying to reproduce a problem on the original version of the code. Since that didn't help, you now want to try commenting out one function call. Beep! Unused var.
On an i9-14900K, arguably one of the fastest CPUs of the past few years (and excusing the design defect that causes them to die), Teams is significantly slower than on the Qualcomm Snapdragon X Elite or my MacBook.
It seems to perform the same on an i9 platform as it does on i5 laptops of the same generation (in terms of input latency, drawing to the screen, etc.).
I know it's apples and oranges, and ARM CPUs are substantially different from x86 ones, but the fact that it seems the same on significantly lower-clocked (and lower-power) chips indicates to me that something very bizarre is happening with Teams.
ARM chips seem to be significantly better for electron applications, but something unique exists within Teams here.
Hypothesis: that Qualcomm chip and that MacBook have higher memory bandwidth than your i9 system. That depends on your memory and your mainboard, not so much on the CPU itself. Perhaps Teams just uses way too much memory, and actually uses it all the time.
I mean, the dude said he's doing multitasking stuff *in* Teams and it's slow.
To me, memory latency being, whatever, 30% higher ought not to explain the issue here, in part because it a priori assumes everything is memory-bandwidth-limited, as opposed to, say, network-limited or CPU-limited at the bottleneck.
What makes more sense to me is that the software is "slow and clunky": maybe a global mutex, maybe poor multithreading sync making it effectively single-threaded, with a sprinkling of particularly slow algorithms or syscalls that show up as a frozen GUI. Or, as we call such cases, Microsoft standard.
No idea, but I have found that Edge can be more conservative in its use of GPU acceleration than Chrome. Maybe that is the case in the WebView Teams uses.
No potato, quite the contrary, and it's not that it hogs resources. It's just slow from within. I can keep a video call open and do other things outside of Teams, but doing any multitasking within Teams is just a nightmare. Opening a second chat while in a video call turns the video into a thumbnail. Searching through other chats to copy and forward into a third chat... just not possible, because everything is modal and the scroll location resets when toggling between them. On top of that, it's just overall slow, slow, slow.
At my company we typically use Firefox with containers, because Teams didn't have account switching. But then actual calling is so unstable that we regularly have to switch to Chromium.