The problem with the belay test as it exists today is that it tests whether you know all the peculiarities of each gym's beliefs around things like the exact order your hands should move when taking slack, whether tails on figure 8s are important (if so, how long, and what kind of knot may or must terminate them), whether the length of the belay loop matters, and so on. These things change seemingly on a whim and aren't always motivated by good evidence.
I learned to belay at Vertical World in 2005 and would fail Vertical World's belay test today, for multiple reasons, if I used the same method they themselves taught me!
Meanwhile, as you point out, no test can determine whether or not a person will be paying attention during an actual climb.
Standards change and improved methods are discovered. In the 50s and 60s the "hip belay" was the standard and considered safe. Once ATC/tube-style belay devices became ubiquitous, the "pinch and slide" technique took over. The "pinch and slide" technique you likely learned is no longer considered the safest method of belaying. The AMGA belaying technique is now considered standard, and for a while gyms would still pass "pinch and slide" users, but I'm not surprised they have stopped.
Safety standards do change for the better, but insurance and legal risks do have gyms on edge. I think his point is that gyms tend to be overly strict in areas that do not matter but are easy to regulate/check: e.g. requiring an unnecessary “backup” knot above your figure 8, requiring 2 tri-locking carabiners for autobelay in response to accidents where people simply didn’t clip into the autobelay at all, knowing your gym’s mnemonic for checking your knot, and disallowing wearing a single earbud when autobelaying (saying you won’t be able to hear if there is an emergency). These are all things I’ve seen required in gyms that IMO do not actually improve safety. Having friends that work in gyms, I’ve heard a lot of these policies are due to demands by insurance companies.
Meanwhile, I very frequently see people belaying in manners where their climber would hit the ground if they fell (usually the first 3-4 bolts up). The difference is, this is much harder for gym staff to notice and correct. Furthermore, I’m sure most of these climbers are capable of using better technique and do so when taking a belay test, but then get complacent afterwards.
There’s huge variability even in some of the gyms in the article, whether from site to site or inter-tester variability. Whether or not it improves safety, if it helps places like this stay open and solvent I guess that’s a win, but I wouldn’t rely solely on someone’s passing a gym’s test for me to let them catch me in a lead fall.
I’ve also been failed in seemingly spurious details that I was subsequently passed on with different testers at several gyms.
You shouldn’t be getting downvoted, this is sometimes true. Most often what happens is a junior staff member is overly rigid in applying what they were taught.
I once almost failed a belay test because I did not know that gym’s particular trick for “counting strands” to prove the figure 8 was tied correctly. I just know what a correct knot looks like after decades of tying them. Ultimately I asked them to check with a manager, who passed me.
That said, I’ve also seen experienced climbers with terrible belay technique; catching them with a modern test would seem like a good thing to me.
Had a young-ish gym employee berate me for not holding the brake strand with TWO hands when catching the leader recently… clearly against manufacturer instructions
(OP author here) Lots of people are reading too much into the tea leaves here; this is just a matter of picking the best tool for this particular task, and our task (porting a JS codebase to the fastest available native target that still works, in terms of not altering program structure as part of the port) is pretty unusual as far as things go.
I would also recommend reading kdy1's observations when faced with the same task: https://kdy1.dev/2022-1-26-porting-tsc-to-go . The only caveat I'd add is that I can't tell where the 62x measurement in that post came from. Our own experiments doing Rust and Go implementations showed them within the margin of error, with certain phases being faster in one vs the other.
Since you wrote this it looks like Anders replied [1] to one of the threads.
I have to agree with the sentiment that it is a success story that the team is allowed to use the best tool for the job, even if it suffers from "not invented here".
This is really healthy and encouraging to see in these large OSS corporate-sponsored projects, so kudos to you and the team for making the pragmatic choice.
Would you say C# AOT could have been competitive in startup time and overall performance, and the decision came down to the other factors you've noted? I think everyone would have expected C# AOT to be the default choice so it might be nice to see some more concrete examples of where Go proved itself more suitable for your use-case.
C# AOT is quite strong and would be a great choice in a lot of contexts.
Go was just very, very strong on port-friendliness when coming from the TS codebase in particular. If you pull up both codebases side-by-side, it's hard to even tell which is which. C# is more class-oriented and the core JS checker uses no classes at all, so the porting story gets quite a bit more difficult there.
Couldn't you just use static classes? I don't see how that would be a factor at all, seems like a very superficial reason that would be easy to work around.
Remember, code is written for humans. It is not so much a technical limitation as it is a social limitation. Working in a codebase that does not adhere to the idioms of a language will quickly become a pain point.
The Go code is not that far off how one would write it even if it were being written from scratch. A C# project littered with static classes is not at all how one would write it from scratch.
Static methods and classes are commonplace and a normal practice in C#, particularly as extension methods (which, quite often, you guessed it, act on data). There isn't that much difference between a type defining simple instance methods and defining extension methods for that type separately if we look at codebases which need to have specific logic grouped in a single multi-KLOC file like, apparently, TS compiler does. There are other issues you could argue about but I think it's more about perception here and structuring the code would've been the smallest issue when porting.
The ship has sailed so not much can be done at this point.
> Static methods and classes are commonplace and a normal practice in C#
Certainly. The feature is there for a reason. That does not mean that you would write the codebase in that way if you were starting from scratch. You would leverage the entire suite of features offered by C# and stick to the idioms of the language. You would not constrain yourself to writing Go code that just happens to have C# syntax.
> The ship has sailed so not much can be done at this point.
It has been ported to a new language before. It can be ported to a new language again. But there wasn't a compelling reason to choose C# last time, and nothing significant has changed since to rethink that.
> It has been ported to a new language before. It can be ported to a new language again. But there wasn't a compelling reason to choose C# last time, and nothing significant has changed since to rethink that.
> This sounds a bit like being phrased in bad faith in my opinion.
Why? That certainly wasn't the intent, but I am happy to edit it if you are willing to communicate where I failed.
> I do not understand why Go community feels like it has to come up with new lies (e.g. your other replies) every time.
I don't know anything about this Go community of which you speak, but typically "community" implies a group. My other replies are not of that nature.
But if you found a lie in there that I am unaware of, I am again happy to correct. All I've knowingly said is that Go was chosen because its idioms most closely resemble the original codebase and that C# has different idioms. Neither are a lie insofar as it is understood. There is an entire FAQ entry from the Typescript team explaining that.
If the Typescript team is lying, that is beyond my control. To pin that on me is, well, nonsensical.
Actually, let me fix my previous comment. I was responding to quoted text in your comment which was obviously not your opinion. Sorry.
> It has been ported to a new language before. It can be ported to a new language again. But there wasn't a compelling reason to choose C# last time, and nothing significant has changed since to rethink that.
I still think this is phrased in an unfortunate way. To reiterate, my point is the damage has already been done and obviously TSC is not going to be ported again any time soon. I do not think Anders or the TS team are up to date on where the .NET teams are, nor do I think they communicated internally (I may be wrong, but this is such a common occurrence that the opposite would be an exception to the rule). I stand by my view that this is a short-sighted decision: every upside is a short-term win, with no advantage from a long-term perspective. Especially since the WASM story is unclear, while .NET has a good NativeAOT-LLVM-based prototype as a replacement for the Mono-based WASM target (which is already proven and works decently well).
Having to prioritize ease of porting for such a foundational piece of software as a compiler over everything else is not a good place to be in. I guess .NET concerns are simply so small compared to the sheer size of TS that might as well accept whatever harm will come to it.
> I still think this is phrased in an unfortunate way.
I do not discount your notion, but why?
> To reiterate, my point is the damage has already been done
What damage has been done, exactly?
Call it a lie if you will, but the Typescript team claims to be ecstatic about how the port is nearly indistinguishable from the original codebase, meaning that nothing was lost - all while significant performance improvements and a generally better end user experience was gained.
Perhaps you mean the project has always been fundamentally flawed, being damaged from the day the first Typescript/Javascript line was written? Maybe that is true, but neither C# nor any other language is going to be able to come in and save that day. Brainfuck would have been just as good of a choice if that is truly where things lie. To stand by C# here doesn't make sense.
> I do not think Anders or TS team are up-to-date on where .NET teams are nor I think they communicated internally
Whether or not that is the case, did they need to? Static methods and classes in C# are most likely of Anders' very own creation. At the very least he was right there when they were added. There is no way he, of all people, was oblivious to them.
> Having to prioritize ease of porting for such a foundational piece of software as a compiler over everything else is not a good place to be in.
Ease of porting was a nice benefit, I'm sure, but they indicate that familiarity was the bigger driver. Anyone familiar with the old code can jump right in and keep on contributing without missing a beat. Given that code is written first and foremost for people, that is an important consideration.
Idiomatic C# is typically quite different and heavily class-based, but then a compiler would usually look very different than an Enterprise C# application anyway. You can use C# more in a functions and data structures way, I don't think there is something fundamental blocking this. But I guess there are many more subtle differences here than I can think of. Go is still quite a bit lower level than C#.
Our best estimate for how much faster the Go code is (in this situation) than the equivalent TS is ~3.5x
In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
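To make the polymorphic-data point concrete, here's a hypothetical sketch (the node kinds and field names are made up for illustration, not taken from the real TS checker): a hot function reads the same discriminant field across many distinct object shapes, so the JIT's inline cache at that property access goes polymorphic/megamorphic instead of compiling down to a single fixed-offset load.

```typescript
// A checker-style AST has dozens of node kinds. Every object literal
// below has a different hidden class ("map shape"), so the `.kind`
// read inside `describe` is polymorphic over all of them. A real
// checker touches far more shapes, pushing the site megamorphic.
interface Identifier { kind: "Identifier"; text: string }
interface NumericLiteral { kind: "NumericLiteral"; value: number }
interface BinaryExpr { kind: "BinaryExpr"; op: string; left: Node; right: Node }
type Node = Identifier | NumericLiteral | BinaryExpr;

// The `node.kind` access here sees every concrete Node shape.
function describe(node: Node): string {
  switch (node.kind) {
    case "Identifier": return node.text;
    case "NumericLiteral": return String(node.value);
    case "BinaryExpr":
      return `(${describe(node.left)} ${node.op} ${describe(node.right)})`;
  }
}

const expr: Node = {
  kind: "BinaryExpr",
  op: "+",
  left: { kind: "Identifier", text: "x" },
  right: { kind: "NumericLiteral", value: 1 },
};
console.log(describe(expr)); // "(x + 1)"
```

In a native AOT language the equivalent dispatch compiles to a jump table or fixed struct-offset loads up front, with no warmup and no shape-tracking machinery.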
I used to work on compilers & JITs, and 100% this — polymorphic calls are the killer of JIT performance, which is why something native is preferable to something that JIT compiles.
Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)
> If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing off methods from Element to use as functions, instead of virtual methods as usual, but that's quite a pain.
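For anyone unfamiliar with the tear-off trick: here's a non-DOM sketch of the same idea (in real code `Base.prototype.label` would be something like `Element.prototype` methods; the class names here are invented for illustration). Grabbing the function once gives the engine a single stable function identity at the call site, instead of a per-receiver prototype method lookup across many subtype shapes.

```typescript
// Invented stand-in for a DOM-style hierarchy (Element and its many
// subtypes). Each subclass gets its own hidden class.
class Base {
  constructor(public name: string) {}
  label(): string { return `base:${this.name}`; }
}
class Sub1 extends Base {}
class Sub2 extends Base {}
class Sub3 extends Base {}

// Usual style: `b.label()` looks the method up on each receiver, and
// this site sees Sub1/Sub2/Sub3 shapes.
function viaMethod(items: Base[]): string[] {
  return items.map(b => b.label());
}

// Tear-off style: one function identity, invoked with an explicit
// receiver. Same behavior, but clearly more awkward to write.
const label = Base.prototype.label;
function viaTearOff(items: Base[]): string[] {
  return items.map(b => label.call(b));
}

const items = [new Sub1("a"), new Sub2("b"), new Sub3("c")];
console.log(viaMethod(items));  // ["base:a", "base:b", "base:c"]
console.log(viaTearOff(items)); // ["base:a", "base:b", "base:c"]
```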
It's not just the overhead of showing up, it's the opportunity cost of not doing contiguous hours at a bigger job. It's very difficult to fill up a day with 45-minute jobs all over town, so he's basically working part-time if he takes a small job.
Where I live, there is a "call-out fee" that ensures tradesmen get paid for the time and cost of driving out to investigate a job. The customer also pays for the needed diagnosis time. So I do not see a reason not to do small jobs.
Yep, this. Matt Parker makes a convincing argument that multiple people have accidentally performed a perfect faro shuffle when trying to randomize a new deck of cards.
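The fun part is how non-random a "perfect" shuffle actually is. A quick sketch of a perfect out-faro (split exactly in half, interleave, top card stays on top) shows the classic result that 8 of them return a 52-card deck to its original order:

```typescript
// Perfect "out" faro shuffle: cut the deck exactly in half and
// interleave the halves, keeping the original top card on top.
function outFaro<T>(deck: T[]): T[] {
  const half = deck.length / 2;
  const top = deck.slice(0, half);
  const bottom = deck.slice(half);
  const out: T[] = [];
  for (let i = 0; i < half; i++) {
    out.push(top[i], bottom[i]);
  }
  return out;
}

// Positions map as p -> 2p mod 51, and 2^8 = 256 ≡ 1 (mod 51),
// so 8 perfect out-shuffles restore the deck exactly.
const original = Array.from({ length: 52 }, (_, i) => i);
let deck = original.slice();
for (let i = 0; i < 8; i++) deck = outFaro(deck);
console.log(deck.every((card, i) => card === original[i])); // true
```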
Yeah, Flow had the ambition to be sound but has never accomplished it.
If you read the Flow codebase and its Git history, you can see that it's not for lack of trying, either — every couple of years there's an ambitious new engineer with a new plan for how to make it happen. But it's a real tough migration problem — it only works if they can provide a credible, appealing migration path to the other engineers across Facebook/Meta's giant JS codebase. Starting from a language like JS with all the dynamic tricks people use there, that's a tough job.
(And naturally it'd be even harder if they were trying to get any wider community to migrate, outside their own employer.)
Flow doesn't even check that array access is in-bounds, contrast to TypeScript with noUncheckedIndexedAccess on. They're clearly equally willing to make a few trade-offs for developer convenience (a position I entirely agree with FWIW)
Neat example, thanks! I hadn't known TS had that option. Array access was actually exactly the example that came to mind for me in a related discussion:
https://news.ycombinator.com/item?id=41076755
I wonder how widely used that option is. As I said in that other comment, it feels to me like the sort of thing that would produce errors all over the place, and would therefore be a real pain to migrate to. (It'd be just fine if the language semantics were that out-of-bounds array access throws, but that's not the semantics JS has.) I don't have a real empirical sense of that, though.
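For anyone who hasn't used the flag, a small sketch of what it changes (the `at` helper below is my own invented example, not a TS built-in): under `"noUncheckedIndexedAccess": true` an index read is typed `T | undefined`, while at runtime JS still just returns `undefined` for out-of-bounds reads rather than throwing, which is exactly why flipping it on in an existing codebase surfaces errors everywhere.

```typescript
const xs: number[] = [10, 20, 30];

// Under the flag, xs[5] is typed `number | undefined`; the explicit
// annotation here keeps this compiling with or without the flag.
const maybe: number | undefined = xs[5];
// const bad: number = xs[5]; // compile error with the flag on

// At runtime, out-of-bounds reads never throw in JS:
console.log(maybe === undefined); // true

// A common pattern the flag pushes you toward: a checked accessor
// that turns the `undefined` case into an explicit failure.
function at(arr: number[], i: number): number {
  const v: number | undefined = arr[i];
  if (v === undefined) throw new RangeError(`index ${i} out of bounds`);
  return v;
}
console.log(at(xs, 1)); // 20
```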
I'd really like to make some video content (on-screen graphics + voice), but the thought of doing dozens of voice takes and learning to use editing software is really putting me off from it. I'd really rather just write a transcript, polish it until I'm satisfied with it, and then have the computer make the audio for me.
I'll probably end up just using OpenAI TTS since it's good enough, but if it could be my actual voice, I'd prefer that.
The existence of Photoshop doesn't mean that you can put Kobe Bryant on a Wheaties box without paying him. There's no reason that a voice talent's voice can't be subject to the same infringement protections as a screen actor's or athlete's likeness.
You absolutely can put Kobe on a Wheaties box without problems legally, IF you do not sell it. That's "fair use." It has not been tested in court yet, but precedent seems to suggest that creating voice clones for private use is also fair use, ESPECIALLY if that person is a celebrity, because privacy rights are limited for celebrities.