As far as I can recall it was a very convoluted prison-break for someone thought to be dead that included an attempted revenge assassination, a distraction bombing of a federal agency, kidnappings and multiple double agents.
I've really enjoyed using them but I guess I don't do much with the web interface.
> TS_DEST_IP
So you run tailscale in your git server container so it gets a unique tailnet ip which won't create a conflict because you don't need to ssh into the container?
I might give that a go. I run tailscale on my host and use a custom port for git, which you set once in ~/.ssh/config (host/key config) on client machines and then never need to repeat in repo URIs.
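For anyone wanting to try the same setup, a minimal sketch of that ~/.ssh/config entry (the alias, hostname, port, and key path are all placeholder values, not from the comment above):

```
# ~/.ssh/config on each client machine -- example values only
Host git-home
    HostName myhost.my-tailnet.ts.net   # Tailscale MagicDNS name or tailnet IP
    Port 2222                           # the custom git SSH port on the host
    User git
    IdentityFile ~/.ssh/id_ed25519
```

Repos then clone as `git clone git-home:myrepo.git` with no port or host details in the URI.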
TBH, I think it's tailscale I'd like a light/fast alternative to! I have growing concerns because I often find it inexplicably consuming a lot of CPU, pointlessly spamming syslog (years-old GitHub issues with no response) or otherwise getting fucked up.
They're plenty fast, but it's hard to match the speed of terminal tools if you're used to working that way. With Soft Serve, I'm maybe 10 keystrokes and two seconds away from whatever I want to access from a blank desktop. Even a really performant web application is always going to be a bit slower than that.
Normally that kind of micro-optimization isn't all that useful, but it's great for flitting back and forth between a bunch of projects without losing your place.
> So you run tailscale in your git server container so it gets a unique tailnet ip which won't create a conflict because you don't need to ssh into the container?
Pretty much. It's a separate container in the same pod, and shows up as its own device on the tailnet. I can still `kubectl exec` or port forward or whatever if I need to access Soft Serve directly, but practically I never do that.
> TBH, I think it's tailscale I'd like a light/fast alternative to!
I've never noticed Tailscale's performance on any clients, it "just works" in my experience. I'm running self-hosted Headscale, but wouldn't expect it to be all that different performance-wise.
The time pressure is probably more important than you realise.
The guests are often pretty eminent academics, feted in their field and used to being indulged. There have been some I know who very much enjoy the sound of their own voice as they tediously ramble for hours, bending any topic to their own pet themes, with colleagues and students obediently hanging on their words. Melvyn has the stature to get testy: "Enough about his wife, you still haven't answered the question, get on with it!" and the Oxford emeritus professor complies.
The after show chat works because it's post-time-crunch. It's pressure release and reflection. If you do recruitment this is something to learn. You will have a much more valuable interaction after you have scraped off interviewee armour.
I generally agree with you and the "short" format is what makes it successful, but Melvyn said himself that they choose teaching professors because they would know how to explain a subject clearly, and after almost two decades of listening I'd say it has mostly worked.
> they choose teaching professors because they would know how to explain a subject clearly
Hmm! The majority of academics in the UK teach... most of them reluctantly and badly because it's mandatory for their research contract!
Those that revel in it are used to monologuing extemporaneously for hours every day in the lecture hall and supervision without interruption. It's quite far from being a snappy conversational media performer.
It would hardly be surprising, given the Max+ 395 has more, and on average better, cores, fabbed on 5nm unlike the M4's 3nm. Die size is mostly GPU though.
Looking at some benchmarks:
> slightly more MT.
AMD's multicore passmark score is more than 40% higher.
M1 Pro is ~250mm2. M4 Pro likely increased in size a bit, so I estimated 300mm2. There are no official measurements, but it should be directionally correct.
> AMD's multicore passmark score is more than 40% higher.
It's an out-of-date benchmark that not even AMD endorses and that the industry does not use. Meanwhile, AMD officially endorses Cinebench 2024 and Geekbench. Let's use those.
The AMD is an older fab process and does not have P/E cores. What are you measuring?
Efficiency. Fab process does not account for the 3.65x efficiency deficit. N4 to N3 is roughly ~20-25% more efficient at the same speed.
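The arithmetic gap here can be sanity-checked quickly (both figures are taken from the claim above, and the node-gain number is itself an assumption):

```python
# Sanity check of the claim above: a ~25% node efficiency gain
# (N4 -> N3, assumed figure) cannot explain a 3.65x deficit.
node_gain = 1.25   # assumed N4 -> N3 efficiency gain at iso-speed
observed = 3.65    # efficiency deficit claimed above
residual = observed / node_gain
print(f"unexplained gap: {residual:.2f}x")  # -> unexplained gap: 2.92x
```

Even granting the full node advantage, nearly a 3x gap would remain to be explained by design rather than process.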
The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.
Citation needed. Furthermore, macOS uses P cores for all the important tasks and E cores for background tasks. I fail to see how a higher average ST for AMD, even if true, would translate to a better experience for users.
> 14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.
TFLOPs are not the same between architectures.
> 19% higher 3D Mark
Equal in 3DMark Wildlife, loses vs M4 Pro in Blender.
> 34% higher GeekBench 6 OpenCL
OpenCL has long been deprecated on macOS. 105727 is the score for Metal, which is supported by macOS. 15% faster for M4 Pro.
The GPUs themselves are roughly equal. However, Strix Halo is still a bigger SoC.
Shouldn't they be the same if we are speaking about the same precision? For example, [0] shows M4 Max 17 TFLOPS FP32 vs MAX+ 395 29.7 TFLOPS FP32 - not sure what exact operation was measured, but at least it should be the same operation. Hard to make definitive statements without access to both machines.
Apple doesn't even disclose TFLOPS for the M4 Max, so no clue where that website got the numbers from.
TFLOPS can't be measured the same between generations. For example, Nvidia often quotes sparsity TFLOPS which doubles the dense TFLOPS previously reported. I think AMD probably does the same for consumer GPUs.
Another example is Radeon RX Vega 64 which had 12.7 TFLOPS FP32. Yet, Radeon RX 5700 XT with just 9.8 TFLOPS FP32 absolutely destroyed it in gaming.
"directionally correct"... so you don't know and made up some numbers? Great.
AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.
> "directionally correct"... so you don't know and made up some numbers? Great.
I never said it was exactly that size. Apple keeps the sizes of their base, Pro, and Max chips fairly consistent over generations.
Welcome to the world of chip discussions. I've never taken apart an M4 Pro computer and measured the die myself. It appears no one on the internet has. However, we can infer a lot based on previously known facts. In this case, we know M1 Pro's die size is around 250mm2.
> AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.
Your source is an article based on someone finding a Geekbench result for a just-released CPU, and you somehow try to say it's from AMD itself and it's an endorsed benchmark, huh.
Their "main benchmark"? Stop making things up. It's no more than tragic fanboy-addled fraud at this point.
That three-year-old press release refers to the SINGLE CORE Geekbench score, not the defective multicore version that doesn't scale with core counts. Given AMD's main USP is core counts, it would be an... unusual choice.
AMD marketing uses every other product under the sun too (no doubt whatever gives the better-looking numbers)... including Passmark, e.g. it's on this Strix Halo page:
Enough. You don't know what you are talking about.
What's with posting five-year-old Medium articles about a different version of Geekbench? Geekbench 5 had different multicore scaling, so if you want to argue that version was so great then you are also arguing against Geekbench 6, because they don't even match.
"AMD Ryzen Threadripper 3995WX, a huge 64 core/ 128 thread part, was performing at only 3-4x the rate of an Intel D-1718T quad-core part, even despite the fact it had 16x the core count and lots of other features."
"With the transition from Geekbench 5 to Geekbench 6, the focus of the Primate Labs team shifted to smaller CPUs"
I am unreasonably upset by the tiktok-goofy-jazz music they chose.
For science.org I want something a bit more nature-documentary, e.g. a thoughtful classical/ambient soundscape with David Attenborough's gentle tones: "And here we see...". If seeking to amuse me then go for it, e.g. Ride of the Valkyries/Rohirrim.
I found the music a great fit and I would not have enjoyed it as much without. I would have closed the window if it had been Ride of the Valkyries, which is far more generic and overplayed for me than anything tiktok, but I have never used tiktok.
I'm a wet blanket, but I despise documentaries that lie about the sound of their recording. I was pretty far into adulthood before I realized most nature docs have fake audio. The backing track of this video helps me know there is no useful audio for this video - no Foley or ambient mic. It's not quite as narrative as Ride of the Valkyries, granted ;)
That's not entirely wrong but I dislike the framing.
It appears to transfer the guilt of a successful deception that manufactures consent to public morality and the vulnerable. The real issue is it couldn't succeed without mendacious officials that suffer no consequences and uncritical/supportive media pushing the ball across the line.
It's also a much broader phenomenon than "protect the vulnerable". There are many other overused buttons they press to seek consent e.g. fear being the most common. Fear of terrorism, fear of job losses or tax rises, prejudice of others etc.
> There is nothing in the word “theft” that implies depriving someone of physical property.
Of course there is. Its origin is the crime of taking tangible property owned by someone else without consent. It did not apply to intangible property because it predates any concept of legally protected intellectual property that can be duplicated without loss.
Now, there was also the metaphorical use of theft for non-criminal / non-tangible things but poetic use of language shouldn't be confused with primary meanings. For example, "plagiarism" comes from the Latin for "kidnapping" coined playfully by a comic. It was never a crime or ever resembled actual kidnapping. If you call your poem "my baby" because of how precious it is to you, it doesn't become one. Badly editing your poem is not murder either yet you might complain in such dramatic terms.
You might want to argue something about metaphors and secondary meanings but we shouldn't consider the crime of kidnapping to mean reciting other's verses any more than a summer's day should mean temperate people. If we start taking metaphorical uses literally then you also have to start claiming silly things like most kidnapping being legal.
Only in later industrial society did the metaphor become less metaphorical in written law for criminal acts that emerged post-printing-press that were being called fraud, deception, infringement and piracy.
> deprives the owner of privacy
It's pretty metaphorical to describe such things as property that can be stolen.
With this latitude you can frame every injury as theft e.g. stabbing is the theft of good health, murder is theft of life, perjury is theft of a fair trial etc. You might choose to use such language because it's how we roll, but we also know that, as offences, they are not theft.
When an item cannot be traded or restored to the owner, is it property that can be owned and stolen or are concepts of injury, damage and destruction more legitimate?
When it comes to intellectual property, it's closer to contract law where citizens are compelled to abide by contracts the state issues and enforces. The movement of intangible theft from metaphor into law for breaching such a contract was popularised by the beneficiaries to rhetorically inflate an illusion of loss and justify severe sanctions for acts not considered unlawful for most of human history.
> It's pretty metaphorical to describe such things as property that can be stolen.
Stealing is a pretty wide term meaning deprivation of a good, asset, property, service, etc. If it meant deprivation only of physical goods at some point in time, sure; this is 2025 outside now. Theft of physical goods is so first millennium.
It makes me curious about how human subtitlers or even scriptwriters choose to transcribe intentionally ambiguous speech, puns and narratively important mishearings. It's like you need to subtitle what is heard not what is said.
Do those born profoundly deaf specifically study word sounds in order to understand/create puns, rhymes and such so they don't need assistance understanding narrative mishearings?
It must feel like a form of abstract mathematics without the experiential component... but then I suspect mathematicians manufacture an experiential phenomenon with their abstractions, given their claims of a beauty like music... hmm!
The quality of subtitles implies that almost no effort is being put into their creation. Watch even a high budget movie/TV show and be aghast at how frequently they diverge.
Hard disagree. When I'm reading a transcript, I want word-for-word what the people said, not a creative edit. I want the speakers' voice, not the transcriptionist's.
And when I'm watching subtitles in my own language (say because I want the volume low so I'm not disturbing others), I hate when the words I see don't match the words I hear. It's the quickest way I can imagine to get sucked out of the content and into awareness of the delivery of the content.
Sometimes they're edited down simply for space, because there wouldn't be time to easily read all the dialog otherwise. And sometimes repetition of words or phrases is removed, because it's clearer, and the emphasis is obvious from watching the moving image. And filler words like "uh" or "um" generally aren't included unless they were in the original script.
Most interestingly, swearing is sometimes toned down, just by skipping it -- removing an f-word in a sentence or similar. Not out of any kind of puritanism, but because swear words genuinely come across as more powerful in print than they do in speech. What sounds right when spoken can sometimes look like too much in print.
Subtitles are an art. Determining when to best time them, how to split up long sentences, how to handle different speakers, how to handle repetition, how to handle limited space. I used to want subtitles that were perfectly faithful to what was spoken. Then I actually got involved in making subtitles at one point, and was very surprised to discover that perfectly faithful subtitles didn't actually do the best job of communicating meaning.
Fictional subtitles aren't court transcripts. They serve the purpose of storytelling, which is the combination of a visible moving image full of emotion and action, and the subtitles. Their interplay is complex.
Hard and vehemently disagree.
Subtitles are not commentary tracks.
The artists are the writers, voice actors, and everyone else involved in creating the original media. Never, ever, should a random stranger contaminate it with his/her opinions or points of view.
Subtitles should be perfect transcriptions or the most accurate translations, never reinterpretations.
And official subtitles aren't made by random strangers. They're made by people who do it professionally.
It's not "contamination" or "opinions", like somebody is injecting political views! And certainly not "reinterpretation". Goodness. It's about clarity, that's all.
Also there's no such thing as the "most accurate" translations. Translations themselves are an art, hugely.
That's the thing though, subtitles aren't intended as full transcripts. They are intended to allow a wide variety of people to follow the content.
A lot of people read slower than they would hear speech. So subtitles often need to condense or rephrase speech to keep pace with the video. The goal is usually to convey meaning clearly within the time available on screen. Not to capture every single word.
If they tried to be fully verbatim, you'd either have subtitles disappearing before most viewers could finish reading them or large blocks of text covering the screen.
Subtitlers also have to account for things like overlapping dialogue, filler words, and false starts, which can make exact transcriptions harder to read and more distracting in a visual medium.
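A back-of-envelope sketch of the pacing problem described above (the words-per-minute and characters-per-second figures are common guideline values I'm assuming, not numbers from this thread):

```python
# Rough arithmetic behind subtitle condensation. Assumed figures:
# speech often runs ~150-180 words per minute, while comfortable
# subtitle reading speed is commonly capped near 15 characters/second.

def max_readable_chars(display_seconds, cps=15):
    """Characters that fit a subtitle shown for display_seconds."""
    return int(display_seconds * cps)

def verbatim_chars(display_seconds, wpm=180, avg_word_len=6):
    """Approximate characters of verbatim speech in the same window."""
    return int(display_seconds * wpm * avg_word_len / 60)

window = 2.5  # seconds a typical subtitle stays on screen
print(max_readable_chars(window), "readable vs", verbatim_chars(window), "spoken")
# -> 37 readable vs 45 spoken
```

Under these assumed rates, verbatim speech overshoots what most viewers can read in the same window, which is exactly why condensing is routine.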
I mean, yeah, in your own native language I agree it sort of sucks if you can still hear the spoken words as well. But, to be frank, you are also in the minority here as far as subtitle target audiences go.
And to be honest, if they were fully verbatim, I'd wager you quickly would be annoyed as well. Simply because you will notice how much attention they then draw, making you less able to actually view the content.
I regularly enable YouTube subtitles. Almost always, they are a 100% verbatim transcription, excluding errors from auto-transcription. I am not annoyed in the slightest, and in fact I very much prefer that they are verbatim.
If you are too slow at reading subtitles, you can either slow down the video or train yourself to read faster. Or you can just disable the subtitles.
> If you are too slow at reading subtitles, you can either slow down the video or train yourself to read faster. Or you can just disable the subtitles.
And what are deaf people supposed to do in a cinema, or with broadcast TV?
(And I'm ignoring other uses, e.g. learning a foreign language; for that, sometimes you want the exact words, sometimes the gist, but it's highly situational; but even once you've learned the language itself, regional accents even without vocabulary changes can be tough).
> If you are too slow at reading subtitles, you can either slow down the video or train yourself to read faster. Or you can just disable the subtitles.
That's just tone deaf, plain and simple. I was not talking about myself, or just youtube. You are not everyone else; your use case is not everyone else's use case. It really isn't that difficult.
Aren't same-language subtitles supposed to be perfect literal transcripts, while cross-language subtitling is supposed to be compressed creative interpretations?
I had similar thoughts when reading Huck Finn. It's not just phonetically spelled, it's much different. Almost like Twain came up with a list of words, and then had a bunch of 2nd graders tell him the spelling of words they had seen. I guess at some point, you just get good at bad spelling?
Except it forces me to slow down to "decipher" the text and makes the reading labored. I understand the point, as it is part of the character, but it is easier to understand someone speaking in that vernacular vs reading the forced misspellings. I definitely don't want to get to the point of being good at reading it, though. I wonder if this is how second grade teachers feel reading the class' schoolwork?
That's true. I'm sure Twain and Banks were aware of this, though. Apparently they considered the immersion to be worth a little extra work on the part of the reader. Whether the reader agrees is a different story.
I try to limit my use of it to just enough for my accent and way of talking to bleed through. I don't go for full-on phonetics, but I'm often "droppin' my g's and usin' lotsa regional sayin's." It probably helps that the people I text have the same accent I do, though.
My top-of-funnel is not searching github but recommendations or searching technology/platform-specific repositories, e.g. for software it's flathub/f-droid and for rust it's crates.io/libs.rs.
Where the code is hosted is in theory irrelevant... but I'm ashamed to say that when code turns out to be on gitlab my heart sinks. It's a bit of a red flag for e.g. no bug-tracking, no contributions, no maintenance, absent maintainer and unexpected licenses.
It's gross personal hypocrisy because I hate the absurdity of commercially owned FOSS collaboration and centralised git and happily self-host myself... but those not publishing code on github are awkward bastards :)
There is a mixed picture on this. I see a lot of reports of it causing bingeing in the evenings despite no prior issues.
The issue is that therapeutic doses are not the multi-day bender of a speed-freak that forgoes sleep to keep their blood-concentration permanently high. Instead it's a medicated window of 6-12 hours with a third or more of their waking hours remaining for rebound effects to unleash stimulation-seeking demons that run wilder than ever.