Usually people are talking about race conditions. When you say contrived, you're thinking race conditions are difficult to win and therefore unrealistic, but attackers with a lot of money on the line spend the time to win all sorts of wild race conditions consistently.
is there actually a programming language that makes race conditions impossible (I am not being facetious, I actually do not know)? if the existence of races makes a language unsafe, then aren't all languages unsafe?
It's not that race conditions are generally memory-unsafe. The same race conditions would not be memory-unsafe in, say, Java or Python.
Go has a memory model that basically guarantees that the language is memory-safe, except in a few functions marked "unsafe" or in the case of race conditions involving multi-word values such as interfaces or slices. It's pretty easy to come up with an example of such a race condition that will cause reads or writes from/to unpredictable memory addresses. I imagine it's quite feasible to turn this into reads or writes from/to crafted memory addresses, which would be a means to defeat pretty much any security measure implemented in the language.
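To make that concrete, here's a minimal sketch of such a race (an illustration, not a known exploit): a Go interface value is two words, a type/method-table pointer and a data pointer, and they're written non-atomically, so racing writers storing different concrete types can leave a reader with one type's method table paired with the other type's data.

    package main

    import "fmt"

    type speaker interface{ speak() }

    type small struct{ a int }

    func (s *small) speak() { fmt.Println(s.a) }

    type big struct{ a, b, c int }

    func (b *big) speak() { fmt.Println(b.a, b.b, b.c) }

    func main() {
        var v speaker = &small{1}
        go func() {
            for {
                v = &small{1} // writes (type, data) as two separate words
            }
        }()
        go func() {
            for {
                v = &big{2, 3, 4} // ditto, with a different method table
            }
        }()
        for {
            // A torn read can pair big's method table with small's data
            // pointer, so speak() reads past the end of the smaller
            // allocation: memory-unsafe behaviour from pure, non-"unsafe" Go.
            v.speak()
        }
    }

Run it with `go run -race` and the race detector flags it immediately; without the detector it will eventually print garbage or crash.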
The Rust community caters to people who are a bit obsessive about safety (including myself), and Rust developers tend to consider this a bug in the design of the Go language (there are a few vaguely comparable issues in Rust, albeit much harder to trigger, and they are considered bugs in the current design of Rust). The Go community tends to attract people who are more interested in shipping than in guarantees, and Go developers who are aware of this issue tend not to care, assuming it is never going to happen in practice (which may or may not be true, I haven't checked).
> is there actually a programming language that makes race conditions impossible
It'd be very hard to make something that offers that guarantee in the real world. Some of the most common, IRL-exploitable race conditions involve multiple services/databases, and even if your programming language had such a feature, your production system would not.
> is there actually a programming language that makes race conditions impossible
To my knowledge, no.
> if the existence of races makes a language unsafe, then aren't all languages unsafe?
Are we talking about "data races" or "race conditions"? One can lead to the other, but race conditions are a much bigger set.
AIUI, it's impossible for any language-level controls to prevent any and all race conditions, because some of them happen outside the binary/process/computer.
Data races, OTOH, are almost trivial to protect against: a contestable thing must have a guard that ensures a writer has exclusive access to that thing for the duration of the write.
Some languages do this with mutually exclusive locks (mutexes/semaphores/Go channels), some languages/paradigms do this by never having shareable objects (functional programming/pass by value), and some (Rust) do this with compile-time checks and firm rules on having a single writer.
Edit: "Never having shareable objects" should really be "never allowing an outside thread/coroutine/process/whatever to mutate your copy of an object", meaning that an object is immutable to them and they have to have a copy that they can mutate to their heart's content. They have to communicate any changes back, and then you choose whether or not to integrate those changes.
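A tiny sketch of the lock-based version of that rule in Go (illustrative only): every access to the shared value happens while holding its mutex, so a writer always has exclusive access for the duration of the write.

    package main

    import (
        "fmt"
        "sync"
    )

    // counter is safe for concurrent use: the mutex guarantees that
    // whoever is reading or writing n has exclusive access to it.
    type counter struct {
        mu sync.Mutex
        n  int
    }

    func (c *counter) inc() {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.n++
    }

    func (c *counter) value() int {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.n
    }

    func main() {
        var c counter
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.inc()
            }()
        }
        wg.Wait()
        fmt.Println(c.value()) // always 1000; drop the mutex and it's a data race
    }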
Python has that property when you don't bring C extensions into the conversation. Data races exist, but can never cause memory corruption due to the GIL.
I've played with something similar with my M1 using Apple's MLX framework. The problem is that I'm compute-bound. I've never managed to get my M1 Max's GPU to process more than ~7.8k tokens per second at bf16 precision, so to train a 112M-parameter model on ~20 billion tokens I'd need to run the training for ~30 days.
One solution is to reduce the scope of the problem -- you can train on a smaller, less diverse dataset such as TinyStories, which is a collection of 1 billion tokens of ChatGPT-generated children's stories. After about 40 hours, less than one weekend, you'll have a model which can generate mostly grammatical children's stories.
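Rough arithmetic behind those numbers, assuming a steady ~7.8k tokens/s:

    20e9 tokens / 7,800 tokens/s ≈ 2.6e6 s ≈ 30 days
     1e9 tokens / 7,800 tokens/s ≈ 1.3e5 s ≈ 36 hours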
If you have a newer Mac and/or an Ultra chip, you'll have more and faster GPU cores, and might be able to train on FineWeb or a similar, larger and more diverse dataset.
OP here -- with a 112M model you should be able to get something worth playing with using 2.24B tokens. The Chinchilla heuristic is tokens = 20 x parameters, so 20 x 112M ≈ 2.24B. Obviously you can get a better result by grinding through more tokens, but it will be very slow progress. It's worth noting that Andrej Karpathy uses the 20x heuristic for his nanochat project.
I try to explain the Chinchilla paper in the post, but your favourite AI should be able to explain it well, and has the benefit that you can ask follow-up questions.
Apple announces MIE; then Intel and AMD say they have something similar, except they don't actually have something similar, only plans to eventually implement it, yet they're advertising it as if they already do. That sounds like super blatant "panicking to copy Apple" to me.
The submission title goes "Intel and AMD standardize ChkTag ..." but the actual text says "Intel and AMD are working together, along with their ecosystem partners in the EAG, to address the need for memory safety. They are creating a unified specification ..." (emphasis mine). They don't even have a specification yet, let alone an implementation, but they want to make PR waves about it already. This is so funny (and sad).
I’m sure it was; Intel has tried this before. It’s a good idea.
But it wouldn’t surprise me if Apple's announcement that they were already shipping it pushed Intel and AMD to agree on implementation details so they could announce this and get it going.
They wouldn't announce it with zero results, or even an actual specification to show, unless they were trying to show it off as soon as possible. Otherwise they could just wait until they, you know, actually did something before announcing it.
I could be mistaken, but I think the Mac Studio comes with either an M3 Ultra or an M4 Max, and the iPad comes with an M4 chip. I think they decided not to make an Ultra for the M4 generation, but don't take my word for it.
The article says Mac Studio M3 Ultra owners can’t update to macOS Tahoe. So while the lower-end Studio uses the M4 Max, the $4k-$10k version with all that great RAM for inference still runs the M3 Ultra. It’s slower in single-core performance than the iPad Pro released 10 months earlier and, for now, according to this article, isn’t compatible with macOS Tahoe.
I think the audio quality gives this recording character. What could be more cyberpunk than hearing the quirky artifacts resulting from ripping an obsolete recording medium?
No, I know, but I mean it has actual mp3 encoding errors because the files are getting corrupted over time lol; it's an issue with the storage medium, not the original analog-to-digital conversion :(
That wasn't the MPEG design goal. It was to stream video through a distribution network where dropouts would be tolerated as part of doing business; people were accustomed to snowy analog broadcast video. Dropouts are more disruptive when you're listening to pure audio. This is incidentally why CDs had their error handling significantly improved over Philips' original prototype, which would have been much more susceptible to scratches if commercialized.
The article is written for a different audience than you might be used to. oregonlive is the website for the newspaper The Oregonian, which is the largest newspaper in the state of Oregon. Intel has many of its largest fabs in Oregon and is a big employer there. The local news is writing about a hip new startup for a non-technical audience who know what Intel is and why it's important, but need to be reminded what a CPU actually is.
The fact that California housing pushed Intel to Oregon probably helped lead to its failures. Every time a company relocates to a place with fewer potential employees and fewer competing employers to get cost-of-living (and thus payroll) costs down, modernity slams on the brakes.
That might have been true in the early 2000s when they were growing the Hillsboro, Oregon campus, but most new fabs are opening in Arizona for taxation and political reasons. I don't have the numbers to back it up, but based on articles about Intel layoffs I believe that Intel has been shedding jobs in Oregon for a while now.
I am saying that all this stuff should never have left the Bay Area, and the Bay Area should have millions more people than it does today.
Arizona is also a mistake --- a far worse place for high tech than Oregon! It is a desert real-estate Ponzi scheme with no top-tier schools and no history of top-tier, high-skill intellectual job markets. In general, the Sun Belt (including LA) is the land of stupid.
The electoral college is always winning out over the best economic geography, and it sucks.