AFAIK, the worst you could do is serve the victim stale (valid) packages, and prevent them from seeing that there are new updates available.
I maintain a (somewhat) popular mirror server at a university, and we actually ran into this issue with one of our mirrors. The Tier 1 we were using as an upstream for a distro closed up shop suddenly, leaving our mirror with stale packages for some time before users told us they never got any updates.
I don't think that would work with most distros, since you're fetching an (also signed) update list and you'd get notified that the update failed due to a stale list, or that the expected updated package was missing on the mirror.
You could, but then the signature check would fail. Usually the public keys of the developers or packagers are shipped with the Linux distribution itself.
However, you shouldn't blindly trust this on "Linux" either. The implementation varies between package managers. E.g., DNF on Fedora doesn't enable signature checks for local package installations by default. There is no warning, nothing. If you want to infect new Fedora users, you MITM the RPMFusion repo (codecs etc.) installation, because that's a package almost everyone installs locally, and the official install instructions don't show how to import the relevant keys beforehand. Arch was also very late to the validation party.
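For what it's worth, if I recall correctly this behavior is controlled by a dnf.conf option (`localpkg_gpgcheck`, which defaults to off; check dnf.conf(5) on your release to be sure). A sketch of turning it on:

```ini
# /etc/dnf/dnf.conf
[main]
# verify signatures on packages from repos (normally on already)
gpgcheck=1
# also verify GPG signatures when installing local .rpm files
# (this is the one that defaults to off)
localpkg_gpgcheck=1
```

With that set, `dnf install ./some-local.rpm` should refuse unsigned or unverifiable packages instead of silently installing them.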
How is Arch vulnerable? While I don't have an Arch system handy, I do have a Steam Deck that I play around with (in an overlay), and I've certainly run into a lot of signature issues due to Valve making a hackish "pin" of evergreen Arch, with signatures in the Valve tree's snapshot often being out of date.
Those signatures are also checked for local installs unless you explicitly disable them.
Pacman has had signature checks on by default for over a decade now, I think, but they were ridiculously late to adopt the feature universally, relatively speaking. They were still running their machines unprotected long after everybody knew the internet was serious business and expected signature checks accordingly.
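For reference, the check lives in pacman.conf's `SigLevel` setting; something like the following has been the shipped default for a long while (exact defaults may vary by release, so treat this as a sketch):

```ini
# /etc/pacman.conf
[options]
# Require valid signatures on packages; database signatures are optional
SigLevel = Required DatabaseOptional
# Individual repo sections below can override this with their own SigLevel
```

This applies to local installs (`pacman -U`) too, unless you explicitly weaken it.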
I realize now it was a stupid question, but the excellent refresher and ensuing discussion of edge cases was well worth the downvote someone felt compelled to leave, haha
Indeed, one should test any regex one puts trust in, but the problem is this: if you take as fact something that is actually a false assumption (as the author did here), your tests may well share that assumption and fail to find the errors that cause faults when the regex is put to use.
This, in a nutshell, is the sort of problem which renders fallacious the notion that you can unit-test your way to correct software.
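To illustrate with a hypothetical example (the regex and function names here are made up, not the author's): in Python it's a common false assumption that `$` anchors at the very end of a string, when it actually also matches just before a trailing newline. Tests written under that same assumption all pass while the fault slips through:

```python
import re

def is_number(s):
    # False assumption: "$" means end of string.
    # In Python, "$" also matches just before a trailing newline.
    return re.match(r"^\d+$", s) is not None

def is_number_fixed(s):
    # r"\Z" anchors at the true end of the string.
    return re.match(r"^\d+\Z", s) is not None

# Unit tests written under the same false assumption happily pass:
assert is_number("123")
assert not is_number("abc")

# ...but they never catch this fault:
assert is_number("123\n")        # accepts a trailing newline!
assert not is_number_fixed("123\n")
```

The unit tests are green either way; only the test the author didn't think to write would have caught it, which is exactly the parent's point.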
"In general, a lot of the AI takes I see assert that AI will be able to assume the entire _responsibility_ for a given task for a person, and implicitly assume that the person's _accountability_ for the task will just sort of…evaporate?"
Not hard to anticipate how this might play out. I can see how it first becomes the "official" resolver, which in a later step maybe becomes "recommended," then required if you want certain kinds of funding or subsidy, or just to serve public contracts, etc. Before you know it, it's mandatory.
They can make it “mandatory” but if it sucks then people will just change their DNS to something else. The people who don’t know how to do that probably don’t leave the mainstream internet anyway.
25 years ago I also had a lot of trust in freedom on the internet, because it would just "route around censorship" and all that. I don't have the same confidence anymore.
The internet seems to be doing just fine at routing around censorship. I can still access TPB and Libgen a multitude of different ways despite the efforts of my govt to block them.
DNS is harder to block because it’s not illegal, and it’s a core internet service, which means there’s a lot of interest in preserving access.
UBI is the weirdest thing. I'm certainly no fan of an economic system which casually threatens each of its members with starvation and homelessness if personal misfortune strikes. But universal basic income is something different from support for someone in need. It has to be paid by someone, but this someone is never mentioned. It's somehow the government. But the government needs to be funded, too. So ultimately it's an attempt to make everyone hand over responsibility for our lives and livelihoods to some opaque sustenance function, without thinking or asking any questions.
Government should be forced to compete in the free market like the plebs if they want money for their programs and salaries, rather than sending men with machine guns to prey on victims. Hope the fucks in office are good at flipping burgers and can cut out the avocado toast.
REPL is kind of the point: did you try to type and run any of the examples in the tutorial? Just skimming it would be less useful (for the very basics).
If you already know everything in the tutorial, then try a project in Python that you did previously (familiar project, new language: Python).
It’s the shift of perception from technology being “geek stuff” to a lucrative field on par with finance, medicine, and law. Before, someone who knew a bit of programming was likely to be passionate and have high potential. Now they may still be, but they may also just see it as a way to get rich quick.
But the vast majority of people who go to programming bootcamps don't get rich quickly, or rich at all. Same with most people in the world who practice medicine or law
Or did you mean to convey that people who go to boot camps think it's a way to get rich quick? Your wording is a bit confusing toward the end there
I think he means that “guy who knows how to code” used to be a statistically strong marker of “tinkering guy with potential,” which is less true now that IT is internationally perceived as a high-reward career, hence one where people may go either because they like it or just because they want to make stacks.
The problem with this theory is people have been saying it for 25 years. At least as long as I’ve been in the profession. And it’s not historically accurate either. Programming was a clerical job for much of its history.
Boot camps take the “seems to be passionate about coding in their free time” signals and help their students try to fit those signals by encouraging them to build personal projects on GitHub etc. This somewhat dilutes the ability of recruiters to check off “has some GitHub projects” as a heuristic, however useful that was to begin with.
>But the vast majority of people who go to programming bootcamps don't get rich quickly, or rich at all. Same with most people in the world who practice medicine or law
but they do get a reasonable shot at a middle class lifestyle with potential to do really well at some point.
That's better than a lot of career paths give you nowadays.
Yes, it seems like advertising as a specialist while being a generalist is a winning strategy. This is only true as long as there is no personal rapport though, as soon as some personal trust relationship is built it matters much less, almost not at all.
It's actually quite weird. I got tasked with things I explicitly didn't have experience with from people who knew me several times.
I think even a career change can be engineered more easily like that, within the context of a client or employment relationship, when the opportunity comes up.
Note that we have no reason to believe the underlying LLM inference process has suffered any setbacks. Obviously it has generated some logits. The question is how OpenAI's servers are configured and what inference optimization tricks they're using.
In my imagination, the operation of this server is very uniform: just emitting chunks of string. That this can be disrupted, and an edge case triggered, by the content of the strings is what I find puzzling.
NaNs are not only possible by design, but extremely common. Training LLMs involves many tricks for dealing with training steps that result in NaNs. Quantization of LLMs also requires dealing with huge outlier values.
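As a toy illustration of how outliers turn into NaNs (a generic numerical sketch, nothing to do with OpenAI's actual stack): a naive softmax overflows on large logits, and the resulting inf/inf division yields NaN, which is exactly why training and quantization code has to guard against extreme values:

```python
import numpy as np

def softmax_naive(logits):
    # exp() overflows for large logits: exp(1000) -> inf, and inf/inf -> nan
    with np.errstate(over="ignore", invalid="ignore"):
        e = np.exp(logits)
        return e / e.sum()

def softmax_stable(logits):
    # Subtracting the max keeps every exponent <= 0, so nothing overflows
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits = np.array([1000.0, 1000.0])
print(softmax_naive(logits))   # [nan nan]
print(softmax_stable(logits))  # [0.5 0.5]
```

The stable variant is the standard trick; the same class of problem (one huge outlier activation poisoning a whole tensor) is what quantization schemes have to work around.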