Misleading in what way? This is the linker part of a series of posts about understanding the Go compiler. I don't think there's much room to be misleading.
I've read that Bun is just a wrapper, not an actual Zig implementation anyway. Also, building a financial database in a beta language that constantly changes and breaks is "really smart".
A wrapper over what?? Bun includes the JavaScriptCore engine for JS evaluation, but it's so much more.
As for financial database concerns, if you're serious about including a project like that in your system, you have thorough correctness and performance testing stages before you commit to it. And once it passes the hurdles, at that point what difference does it make if it's written in a beta language, or a bunch of shell scripts in a trench coat, or whatever.
> A statistical error in the estimation of the recommended dietary allowance (RDA) for vitamin D was recently discovered; in a correct analysis of the data used by the Institute of Medicine, it was found that 8895 IU/d was needed for 97.5% of individuals to achieve values ≥50 nmol/L. Another study confirmed that 6201 IU/d was needed to achieve 75 nmol/L and 9122 IU/d was needed to reach 100 nmol/L.
> This could lead to a recommendation of 1000 IU for children <1 year on enriched formula and 1500 IU for breastfed children older than 6 months, 3000 IU for children >1 year of age, and around 8000 IU for young adults and thereafter. Actions are urgently needed to protect the global population from vitamin D deficiency.
> ...
> Since 10 000 IU/d is needed to achieve 100 nmol/L [9], except for individuals with vitamin D hypersensitivity, and since there is no evidence of adverse effects associated with serum 25(OH)D levels <140 nmol/L, leaving a considerable margin of safety for efforts to raise the population-wide concentration to around 100 nmol/L, the doses we propose could be used to reach the level of 75 nmol/L or preferably 100 nmol/L.
According to the internet, it is way higher, probably over 9000.
Edit because the comment might be too shallow for HN: I sympathize with the struggle against depression and, after first-hand experience, share the skepticism about the widespread prescription of antidepressants and the evidence presented for it.
Very serious and important topic.
Regarding vitamin D, I also supplement in the winter, but I have not read the article, which lists an estimated reading time of over 10 minutes. I take at most one 1000 IU (0.025 mg according to the package) tablet a day.
I'll bookmark this discussion page to read TFA later maybe.
I was taking 2x2000 IU with almost no sun exposure and then did bloodwork. My level was 77.8 ng/mL. The lab's reference ranges listed 30-50 ng/mL as optimal, 50-100 as high, over 100 as potentially toxic, and over 200 as toxic.
I don't know why this is downvoted, I had a very similar experience a while back. I took 4000 IU/day for about 4 months, insignificant sun exposure and ended up at 60 ng/mL (lab listed normal range as 30-40).
My starting levels were unknown but I assumed they were low given my usual sun exposure and some low-energy symptoms (which resolved a couple of weeks after I started taking it). I discontinued VitD then and now I only take 1000 IU/day in the winter.
You got it backwards: it would be more beneficial in areas with few hours of sun for darker-skinned folks, since their skin does not produce as much vitamin D as fair skin does.
That comparison doesn't work. Only 10-20% of our vitamin D intake comes from food, and the body cannot process unlimited amounts from that route; taking in more does not scale the benefit. The skin is much better at producing it.
The skin is definitely much better, but a higher-than-recommended dose is definitely (anecdata) effective at raising and maintaining the measurable vitamin D3 level in your blood if you are under the recommended range. It's an important metric to track in your regular blood tests.
Modified Condition/Decision Coverage (MC/DC) is a test coverage approach that considers a chunk of code covered if:
- Every branch was "visited". Plain coverage already ensures that. I would actually advocate for 100% branch coverage before 100% line coverage.
- Every part (condition) of a branch clause has taken all possible values. If you have `if (enabled && limit > 0)`, MC/DC requires you to test with `enabled`, `!enabled`, `limit > 0`, and `limit <= 0`.
- Every change to a condition was shown to independently change the outcome. `(false && limit > 0)` would not pass this, because a change to `limit` cannot affect the outcome: the decision is always false. But @zweifuss has a better example.
- And, of course, every possible decision (the outcome of the entire 'enabled && limit > 0') needs to be tested. This is what ensures that every branch is taken for if statements, but also for switch statements that they are exhaustive etc.
MC/DC is usually required for all safety-critical code as per NASA, ESA, automotive (ISO 26262) and industrial (IEC 61508).
To test 'limit > 0' according to MC/DC, you need only two values, e.g. -1 and 1. There may be other code inside the branch using limit in some other ways, prompting more test cases and more values of limit but this one only needs two.
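The criteria above can be sketched as a minimal test set. This is a hypothetical `should_process` function standing in for the `enabled && limit > 0` example; the key point is that MC/DC needs only N+1 cases for N independent conditions, not 2^N:

```rust
// Hypothetical function under test, mirroring `enabled && limit > 0`.
fn should_process(enabled: bool, limit: i32) -> bool {
    enabled && limit > 0
}

fn main() {
    // Case 1: both conditions true -> decision is true (the baseline).
    assert!(should_process(true, 1));

    // Case 2: flip only `enabled`; the decision flips too, showing
    // `enabled` independently affects the outcome.
    assert!(!should_process(false, 1));

    // Case 3: flip only `limit > 0`; the decision flips again, showing
    // the limit condition independently affects the outcome.
    assert!(!should_process(true, -1));

    // Three cases suffice for two conditions: each condition has taken
    // both values, both decision outcomes occurred, and each condition
    // was shown to independently drive the decision.
}
```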
But yes, exhaustively testing your code is a bit exhausting ;)
How does this work? Files need to reference other files, e.g. for calling functions from other modules, which means semantic analysis needs both files in memory to check the types. This is especially complicated with mutual recursion across modules (separate compilation doesn't apply here). If you're building a language like C, where everything requires forward declarations, then maybe, but anything more modern seems difficult.
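To make the mutual-recursion point concrete, here is a trivial sketch in Rust (where the compilation unit is the whole crate, so this just works): neither module can be type-checked in isolation, since each calls into the other.

```rust
// Two modules that call each other: the compiler must see both
// definitions to type-check either one, which is what makes strict
// per-file compilation hard without forward declarations.
mod a {
    pub fn is_even(n: u32) -> bool {
        if n == 0 { true } else { crate::b::is_odd(n - 1) }
    }
}

mod b {
    pub fn is_odd(n: u32) -> bool {
        if n == 0 { false } else { crate::a::is_even(n - 1) }
    }
}

fn main() {
    assert!(a::is_even(10));
    assert!(b::is_odd(7));
}
```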
The fact that you need either a third-party dependency or a large amount of boilerplate just to get decent error reporting points to an issue in the language or standard library design.
I've started also dropping `thiserror` when building libraries, as I don't want upstream users of my libraries to incur this additional dependency, but it's a pain.
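For context, this is roughly the boilerplate `thiserror` normally derives for you. A minimal sketch with a hypothetical `ConfigError` type (the variant names are made up for illustration): a `Debug` derive plus hand-written `Display` and `Error` impls, all of which `#[derive(Error)]` and `#[error("...")]` would otherwise generate.

```rust
use std::fmt;

// Hypothetical library error type, written out by hand
// instead of deriving it with `thiserror`.
#[derive(Debug)]
enum ConfigError {
    Missing(String),
    Parse { line: usize },
}

// Manual Display impl: one match arm per variant.
impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing(key) => write!(f, "missing key: {key}"),
            ConfigError::Parse { line } => write!(f, "parse error on line {line}"),
        }
    }
}

// The marker impl that lets the type compose with `Box<dyn Error>`,
// `?`, and downstream error types.
impl std::error::Error for ConfigError {}

fn main() {
    let err = ConfigError::Missing("timeout".into());
    println!("{err}"); // prints "missing key: timeout"
}
```

Multiply this by every error enum in a library and the appeal of the derive macro is obvious; the trade-off is exactly the extra dependency pushed onto downstream users.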
I think you're right with regard to the intention, but I've personally never experienced a standard library being too big; good examples of "the right size" would be Go or Zig.
Why are people disagreeing with this? This is absolutely a problem that most other languages don't have. If you want to claim that Rust's error system is "better" than anything else (as the author did), you should have a good argument for why this exact problem the parent commenter described, which to me is a major one, doesn't cancel out all the other purported benefits of Rust's error system!
This isn't a problem in other languages because most other languages don't have strong, statically typed errors that need to compose across libraries. And those that do have the same problem.
The general argument against adding something to `std` is that once the API is stabilized, it's stabilized forever (or at least for an edition, but practically I don't think many APIs have been changed or broken across editions in std).
The aversion to dependencies is just something you have to get over in Rust imo. std is purposefully kept small and that's a good thing (although it's still bigger and better than C++, which is the chief language to compare against).