delifue's comments

The GPU Glossary mentions that a warp scheduler can context switch (https://modal.com/gpu-glossary/device-hardware/warp-schedule...), but you said there is no such thing as an SM "context switch between threads". Is there some ambiguity in the term "context switch"?


In my opinion, the best way of fighting crawlers is not to give them error feedback (403). The best way is to feed them low-quality AI-generated data.


Self-plug, but I made this to deal with bots on my site: https://marcusb.org/hacks/quixotic.html. It is a simple Markov generator to obfuscate content (static-site friendly, no server-side dynamic generation required) plus an optional link maze to send incorrigible bots into 100% Markov-generated nonsense (requires a server-side component).
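The core idea fits in a few dozen lines. A minimal sketch in Rust (not quixotic's actual code, just the same word-level Markov technique, with a toy corpus and a toy PRNG):

    use std::collections::HashMap;

    // Build a word-level Markov chain from source text, then walk it
    // to emit plausible-looking nonsense.
    fn main() {
        let corpus = "the cat sat on the mat the cat ate the rat";
        let words: Vec<&str> = corpus.split_whitespace().collect();

        // Map each word to the list of words that follow it.
        let mut chain: HashMap<&str, Vec<&str>> = HashMap::new();
        for pair in words.windows(2) {
            chain.entry(pair[0]).or_default().push(pair[1]);
        }

        // Walk the chain, picking a pseudo-random successor each step.
        let mut word = words[0];
        let mut seed: u64 = 0x9e3779b97f4a7c15;
        for _ in 0..20 {
            print!("{word} ");
            let Some(next) = chain.get(word) else { break };
            seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
            word = next[((seed >> 33) as usize) % next.len()];
        }
        println!();
    }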

I do serve a legit robots.txt file to warn away the scrapers I know about.


I may have a system in place that starts the pipeline for fetching a very, very large file (a 16 TB text file designed for testing). It's not hosted by me, except for the first shard.

A surprising number of agents try to download the whole thing.


Right, and that's why honeypots work against many targets. Why serve them an actual file when a CGI script or whatever can just generate output in a loop?


Someone has to front the bandwidth.


Ah, speaking of that, of course you don't generate the fake data as fast as you can. You just trickle it out often enough for them not to time out.
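A rough sketch of that pacing, assuming a bare TCP listener (the port, headers, and timings are arbitrary choices):

    use std::io::Write;
    use std::net::TcpListener;
    use std::thread;
    use std::time::Duration;

    // Tarpit sketch: accept a connection, then drip bytes out just
    // fast enough that the client never times out.
    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8080")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            thread::spawn(move || {
                let _ = stream.write_all(
                    b"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\n",
                );
                loop {
                    // One byte every few seconds keeps most clients waiting.
                    if stream.write_all(b".").is_err() {
                        break; // client gave up
                    }
                    let _ = stream.flush();
                    thread::sleep(Duration::from_secs(5));
                }
            });
        }
        Ok(())
    }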


That's why you should run a tarpit instead.


The bugs that come from porting from x86 to ARM may be related to memory ordering. ARM has a weaker memory ordering model than x86, so you may need to add memory barriers or other synchronization. Of course, there are other possible causes.
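The classic case is message passing: a sketch in Rust of the ordering you'd need (the flag/data names are illustrative):

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    // Writer: publish DATA first, then set the flag with Release so
    // the DATA store cannot be reordered after the flag store.
    fn writer() {
        DATA.store(42, Ordering::Relaxed);
        READY.store(true, Ordering::Release);
    }

    // Reader: Acquire on the flag guarantees that once READY is seen
    // as true, the store to DATA is visible too. With Relaxed on both
    // sides this tends to "work" on x86's stronger model but can read
    // stale data on ARM.
    fn reader() -> Option<u32> {
        if READY.load(Ordering::Acquire) {
            Some(DATA.load(Ordering::Relaxed))
        } else {
            None
        }
    }

    fn main() {
        let t = thread::spawn(writer);
        let value = loop {
            if let Some(v) = reader() {
                break v;
            }
        };
        t.join().unwrap();
        println!("{value}"); // always 42, on ARM too
    }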


With no further context, I think good ol' UB is more likely. Every C codebase I've seen that's not scrutinized with tooling to detect UB is full of UB.


> tooling to detect UB

My tooling is not showing anything.

What tooling do you recommend?


A common problem with immediate mode GUIs is making a parent component's size dynamically determined by its children's content (two-phase dynamic layout). Does egui solve this issue?


You can instruct egui to discard a frame. That way you can perform this two-phase layout across two frames without showing an unfinished UI.

This mechanism can get you there most of the time.
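A sketch of what that can look like, assuming a recent egui with Context::request_discard (the size-caching scheme here is just an illustration):

    use eframe::egui;

    // Two-pass sizing via frame discard: lay the child out, measure
    // it, and if the measurement changed since last frame, throw this
    // frame away so the re-run can size the parent before anything is
    // shown.
    fn parent_sized_by_child(ui: &mut egui::Ui, child_text: &str) {
        let response = ui.label(child_text);
        let needed = response.rect.size();

        let id = ui.id().with("child-size");
        let cached = ui.ctx().data(|d| d.get_temp::<egui::Vec2>(id));
        if cached != Some(needed) {
            ui.ctx().data_mut(|d| d.insert_temp(id, needed));
            ui.ctx().request_discard("child size changed");
        }
    }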


Take the Code Example demo that's initially open: drag age up to 120, then press the Increment button, and watch the label “Arthur is 120” briefly blip to “Arthur is 121” (while the DragValue still shows 120!) before being dragged back down to 120, presumably by the rerender and egui::DragValue::new(age).range(0..=120) clamping it.

That’s… eww. It probably doesn’t often cause real problems—though it surely could, as it’s allowing temporary rendering with out-of-bounds values that a developer might have expected to be clamped—but it’s still very eww, the idea that mutating data leads to parts of a frame being rendered with different data.


I’m currently on mobile, where I couldn’t reproduce this.

I agree, that shouldn’t happen and it might be a bug, because the input is handled after drawing the initial frame and should be clamped before starting to draw the next frame. Drag events are tricky though, because they come with a frame delay by default (you have to recognize the drag).

Does this reproduce reliably on desktop? If so then I can create an issue for this.


The test code of SQLite is not public.


Yes and no. Part of it is public, just not the "best" part: https://www.sqlite.org/testing.html


Thanks for the link. It looks like the public part is 27k lines of code (vs the 92,000k lines of code in the proprietary closed-source part).


Can you give examples of the "bad practices" k8s forces you to fix?


To name a few:

K8s really kills the urge to say “oh well, I guess we can just drop that file onto the server as part of startup rather than use a db/config system/etc.” No more “oh shit, the VM died and we lost the file that was supposed to be static except for that thing John wrote to update it only if X happened, but now X happens every day and the file is gone”.. or worse: it’s in git, but now you have 3 different versions that have all drifted due to the John code change.

K8s makes you use containers, which makes you not run things on your machine, which makes you better at CI, which.. (the list goes on, containers are industry standard for a lot of reasons). In general the 12 Factor App is a great set of ideas, and k8s lets you do them (this is not exclusive, though). Containers alone are a huge game changer compared to “cp a JAR to the server and restart it”

K8s makes it really really really easy to just split off that one weird cronjob part of the codebase that Mike needed and man, it would be really nice to just use the same code and dependencies rather than boilerplating a whole new app and deploy, CI, configs, and yamls to make that run. See points about containerization.

K8s doesn’t assume that your business will always be a website/mobile app. See the whole “edge computing” trend.

I do want to stress that k8s is not the only thing in the world that can do these or promote good development practices, and I do think it’s overkill to say that it MAKES you do things well - a foolhardy person can mess any well-intentioned system up.


A lot of modern AAA games use dithered transparency. It doesn't suffer from ordering problems. Although it looks weird at low resolutions, it looks fine at high resolutions.
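For the curious, the core of the technique is a per-pixel threshold test against an ordered dither pattern; a sketch in Rust (the 4x4 Bayer matrix is standard, the rest is illustrative):

    // Screen-door / dithered transparency: instead of blending, keep
    // or discard each pixel by comparing alpha against an ordered
    // (Bayer) threshold that varies per screen position. No sorting
    // needed, since every pixel is fully opaque or fully discarded.
    const BAYER_4X4: [[f32; 4]; 4] = [
        [ 0.0 / 16.0,  8.0 / 16.0,  2.0 / 16.0, 10.0 / 16.0],
        [12.0 / 16.0,  4.0 / 16.0, 14.0 / 16.0,  6.0 / 16.0],
        [ 3.0 / 16.0, 11.0 / 16.0,  1.0 / 16.0,  9.0 / 16.0],
        [15.0 / 16.0,  7.0 / 16.0, 13.0 / 16.0,  5.0 / 16.0],
    ];

    // Equivalent of a fragment shader's `if alpha < threshold { discard }`.
    fn keep_pixel(x: u32, y: u32, alpha: f32) -> bool {
        alpha >= BAYER_4X4[(y % 4) as usize][(x % 4) as usize]
    }

    fn main() {
        // At alpha = 0.5, roughly half the pixels in a 4x4 tile survive.
        let kept = (0..4)
            .flat_map(|y| (0..4).map(move |x| keep_pixel(x, y, 0.5)))
            .filter(|&k| k)
            .count();
        println!("{kept}/16 pixels kept"); // prints 9/16
    }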


It's not appropriate to say that "having trouble with the borrow checker means the code is wrong". Sometimes you just want to add a new feature and the borrow checker forces you to do a big refactor.

See also: https://loglog.games/blog/leaving-rust-gamedev/


I hear this constantly but never see any examples of what they actually mean, or it's coming from misunderstandings of what the language is. I've seen people say how much nicer the async code could be if Rust got a garbage collector. For the borrow checker specifically, I think it's important to understand smart pointers, Cell/RefCell, the other primitives, and the fact that you don't have to solve everything with references.


> it's coming from misunderstandings of what the language is

“You are just not holding it right.”

The Rust borrow checker indeed does force you to make contortions to keep it happy and will bite you if you fail to take its particularities into account. It’s all fine and proper if you think the trade-off regarding safety is worth it (and I think it is in some cases), but pretending that’s not the case is just intentionally deluding yourself.

The people here implying that the BC forces you to use a good architecture are also deluding themselves by the way. It forces you to use an architecture that suits the limitations of the borrow checker. That’s pretty much it.

The fact that such delusions are so prevalent amongst part of the community is from my perspective the worst part of using Rust. The language itself is very much fine.


Can you give me an example of what the borrow checker prevents you from doing, without calling Rust developers delusional? The hardships are way overstated, in my opinion. One issue I can think of that newcomers might have is immutable and mutable references. You can't borrow as mutable something already borrowed. Let's say you have a reference to a Vec element, then you add an item to the Vec. The realloc could invalidate that reference, which Rust prevents. Using Cell/RefCell also helps if you don't want to deeply nest &mut. Rust is definitely hard, but after a while it's fine, and you kind of get how, even when using another language, you still have to think about memory.
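Concretely, the Vec case looks like this (minimal sketch):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow of an element

        // v.push(4); // rejected: cannot borrow `v` as mutable while
        //            // `first` is alive; a push could realloc and
        //            // leave `first` dangling.

        println!("{first}");
    }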


I’m not calling Rust developers delusional. I’m calling the people who pretend the borrow checker doesn’t force you to specifically architect your code to suit it delusional.

> Rust is definitely hard, but after a while it's fine and you kind of get how even if using another language you still have to think about memory.

That kind of comment is a typical example. It’s not that you have to think about memory. You have to understand the exact limitations of the analyser Rust uses to guarantee memory safety. I don’t understand why that’s so hard to accept for some parts of the Rust community.


The advantage of satisfying the borrow checker isn't all that obvious. The BC is designed to make the program behave well with the fundamental machine model of a common computing device. You may be able to get away with spaghetti code for a new feature in a GC-based language. However, my experience is that such technical debt grows over time and you're forced to carry out a massive refactor later anyway. GC isn't going to help you there. You might as well refactor the code at the beginning, with the guidance of the BC, to avoid pain in the end. This is why Rust programs have a reputation for almost always running correctly if they compile.

And as the other commenter said, the borrow checker isn't all that hard to satisfy. BC complaints are often related to serious memory handling bugs. So if you know how to solve such bugs (which you need to know with C or C++ anyway), the BC won't frustrate you. You may occasionally face some issues that are hard to solve under the constraints of the BC. But you can handle them by moving the safety checks from compile time to runtime (using RefCell, Mutex, etc.) or handle them manually (using unsafe) if you know what you're doing.
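For example, RefCell moves the same exclusive-access check to runtime (minimal sketch):

    use std::cell::RefCell;

    fn main() {
        let numbers = RefCell::new(vec![1, 2, 3]);

        {
            let r = numbers.borrow(); // shared borrow, checked at runtime
            println!("{:?}", *r);
        } // shared borrow ends here

        numbers.borrow_mut().push(4); // exclusive borrow is now allowed

        // Taking borrow_mut() while another borrow was still alive
        // would panic at runtime instead of failing to compile.
        println!("{:?}", numbers.borrow());
    }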

Like the other commenter, I find some of the complaints about programming friction and refactor to be exaggerated very often. That's unfair towards Rust in that it hurts its adoption.


For lifetimes, being a subtype means being a longer lifetime (which is unintuitive). 'a : 'b means 'a is a subtype of 'b: 'a contains 'b and can be longer.

Rust could improve this by introducing syntax like `'a contains 'b`.
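For reference, the bound in action (minimal sketch):

    // `'a: 'b` reads "'a outlives 'b": 'a is the longer lifetime,
    // which makes &'a str a subtype of &'b str, so `long` can be
    // returned where a &'b str is expected.
    fn shorter_of<'a: 'b, 'b>(long: &'a str, short: &'b str) -> &'b str {
        if long.len() < short.len() { long } else { short }
    }

    fn main() {
        let long_lived = String::from("outer");
        {
            let short_lived = String::from("inner!");
            println!("{}", shorter_of(&long_lived, &short_lived));
        }
    }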


> But now we get to use a key feature of infinitesimal changes: that they can always be thought of as just “adding linearly” (essentially because ε² can always be ignored relative to ε). Or, in other words, we can summarize any infinitesimal change just by giving its “direction” in weight space

> a standard result from calculus gives us a vastly more efficient procedure that in effect “maximally reuses” parts of the computation that have already been done.

This partially explains why gradient descent became mainstream.
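Spelled out (my notation, not the article's), the quoted linearity claim is just a first-order Taylor expansion, and the "maximal reuse" is reverse-mode differentiation:

    % First-order expansion: a small step \epsilon v changes f
    % linearly; the O(\epsilon^2) term is what "can always be ignored".
    f(w + \epsilon v) = f(w) + \epsilon \, \nabla f(w) \cdot v + O(\epsilon^2)
    % Reverse-mode differentiation (backpropagation) computes all n
    % components of \nabla f(w) in one backward pass, at roughly the
    % cost of a few evaluations of f, instead of n forward evaluations.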

