mine too, but none of them was such a dick. also, anything related to school (particularly at a young age) is not viewed as something to boast about (at least in my experience in Italy, Serbia and Portugal).
That will be less of a problem, since OAI can spill over to other providers as needed when their own capacity is under high utilization. They already use CoreWeave, AWS, Azure, etc. Google doesn't do that as far as I know, and I don't see why they would, so they are stuck eating the capacity planning.
> But the extent and the way in which Zig specifically puts it to use -- which includes, but is not limited to, how it is used to replace other features that can then be avoided (and all without macros) -- is unprecedented.
I think MrWhite wanted to know an example of Zig's comptime that is not merely a "macro", but rather its usage as a replacement for other features (more complex ones, I guess...)
PS: just interested in Zig, I'd like some pointers to these cool features :)
so are they basically using an idea similar to that of a Stirling engine in a thermoelectric generator, or do they use a different mechanism to produce energy?
Two materials (often n-type and p-type semiconductors) are joined at two junctions; one junction is heated and the other cooled. The temperature difference makes charge carriers diffuse from the hot side toward the cold side, and this diffusion produces the Seebeck voltage they describe. It was historically hard to get anything useful out of this because you can't easily maintain such a temperature difference. If you've read about the Peltier effect, it's the same phenomenon in reverse.
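Rough back-of-the-envelope sketch of the voltage side of this (the Seebeck coefficients below are illustrative, ballpark values for a doped semiconductor couple, not numbers from the article):

```python
# Open-circuit voltage of one thermoelectric p-n couple:
#   V = (S_p - S_n) * (T_hot - T_cold)

# Illustrative Seebeck coefficients (V/K); real values depend on
# material and temperature -- these are just order-of-magnitude figures.
S_P = 200e-6   # p-type leg (positive coefficient)
S_N = -200e-6  # n-type leg (negative coefficient)

def seebeck_voltage(t_hot, t_cold, s_p=S_P, s_n=S_N):
    """Open-circuit voltage (volts) of a single p-n couple
    for a given temperature difference (kelvin)."""
    return (s_p - s_n) * (t_hot - t_cold)

# Even a 100 K difference across one couple gives only ~0.04 V,
# which is why practical generators stack many couples in series.
print(seebeck_voltage(400.0, 300.0))
```

This also makes the "very hard to get anything meaningful" point concrete: the per-couple voltage is tiny, so usable power needs both a large, sustained temperature difference and many couples.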
I don't have much experience with ROCm for large training runs, but NVIDIA is still shit with driver + CUDA version + other things. The only simplification comes from Ubuntu and other distros that already do the heavy lifting by installing all the required components without much configuration.
why don't you ask the model about the shrunk system prompt and the original system prompt? That way you can infer whether the same relevant information is "stored" in the hidden state of the model.
Or better yet, directly check the hidden-state difference between a model fed the original prompt and one fed the shrunk prompt.
This should remove the randomness from the results.
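A minimal sketch of that comparison, assuming you can extract one hidden-state vector per prompt (e.g. the final token's last-layer hidden state from whatever model you're probing); the vectors below are stand-ins, not real model outputs:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two hidden-state vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in practice these would come from the model,
# e.g. with transformers: model(**inputs, output_hidden_states=True)
# and then taking the last layer's final-token embedding.
h_original = np.array([0.8, 0.1, 0.3, 0.5])  # original system prompt
h_shrunk   = np.array([0.7, 0.2, 0.3, 0.4])  # shrunk system prompt

# Similarity close to 1.0 suggests the shrunk prompt puts the model
# in roughly the same state as the original one.
print(cosine_similarity(h_original, h_shrunk))
```

Since this compares internal states directly rather than sampled outputs, it sidesteps the generation randomness entirely.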