It should be possible to better explain in IKEA style how to perform partitioning with swapping. In its current form it can make people fall into a quadratic complexity trap.
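For what it's worth, a minimal sketch of the swap-based scheme (Hoare-style, assuming the quicksort context): scanning from both ends and swapping is what avoids the quadratic trap on inputs with many equal keys.

```python
def partition(a, lo, hi):
    """Hoare-style in-place partition around pivot a[lo].
    Swapping from both ends keeps the split balanced even when
    many elements equal the pivot."""
    pivot = a[lo]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]  # the swap step

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p)      # note: p is included on the left
        quicksort(a, p + 1, hi)
```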
The package with the most versions still listed on PyPI is spanishconjugator [2], which consistently published ~240 releases per month between 2020 and 2024.
Regarding spanishconjugator, commit ec4cb98 has the description "Remove automatic bumping of version".
Prior to that commit, a cron job would run the 'bumpVersion.yml' workflow four times a day, which in turn executed the bump2version Python module to increase the patch level. [0]
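Reconstructed as a sketch — the schedule and step contents here are assumptions about the general shape, not the actual workflow file:

```yaml
# hypothetical shape of bumpVersion.yml before commit ec4cb98
on:
  schedule:
    - cron: '0 */6 * * *'        # four times a day
jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install bump2version
      - run: |
          bump2version patch     # x.y.z -> x.y.(z+1), commit + tag
          git push --follow-tags
```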
Tangential, but I've only heard about BigQuery from people being surprised by gargantuan bills for running one query on a public dataset. Is there a "safe" way to use it with a cost limit, for example?
Yes, you can set price caps. The cost of a query is knowable ahead of time with the default pricing model ($6 per TB of data processed in a query). People usually get caught out by running expensive queries recursively. BigQuery is very cost-effective and can be used safely.
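A back-of-envelope sketch of that pricing model (the table and column sizes are invented; note BigQuery also supports a hard per-query cap, `maximum_bytes_billed`, which fails the job rather than billing past the limit):

```python
# On-demand BigQuery pricing bills bytes *processed*, not table size.
PRICE_PER_TIB = 6.0  # USD, the rough figure quoted above

def query_cost_usd(bytes_processed: int) -> float:
    return bytes_processed / 2**40 * PRICE_PER_TIB

full_scan = query_cost_usd(3 * 2**40)    # SELECT * over a 3 TiB table
one_column = query_cost_usd(30 * 2**30)  # selecting one 30 GiB column

print(f"full scan: ${full_scan:.2f}, single column: ${one_column:.2f}")
# → full scan: $18.00, single column: $0.18
```

This is why column pruning (never `SELECT *` on a wide public table) is the main lever for keeping costs down.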
We really need to go back to on-premise. We have surrendered our autonomy to these megacorps and now are paying for it - quite literally in many cases.
My 3TB, 41 billion row table costs pennies to query day to day. The billing is based on the data processed by the query, not the table size. I pay more for storage.
Most cloud services let you set alerts, but they're notorious for showing up after you've accidentally spent 50k USD. So even if you had a system that automatically shut down services on receiving the alert, you'd be SOL.
I decided my life could not possibly go on until I knew what "elvisgogo" does, so I downloaded the tarball and poked around. it's a pretty ordinary numpy + pandas + matplotlib project that makes graphs from csv. one line jumped out at me:
str_0 = ['refractive_index','Na','Mg','Al','Si','K','Ca','Ba','Fe','Type']
the university of st. andrews has a laser named "elvis" that goes on a remote controlled submarine: https://www.st-andrews.ac.uk/~bds2/elvislaser.htm
I was hoping it'd be about go-go dancing to elvis music, but physics experiments on light in seawater is pretty cool too.
> spanishconjugator [2], which consistently published ~240 releases per month between 2020 and 2024
They also stopped updating major and minor versions after hitting 2.3 in Sept 2020. Would be interesting to hear the rationale behind the versioning strategy. Feels like you might as well use a datetimestamp for the version.
> - The fact that it's essentially unstructured data makes it hard to work with generically. If you have a username + password and need to use those in a script, you'll need to implement your own parser in your shell language in every script you need it in.
Fair, but you can use your own conventions.
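For instance, with the common "first line is the password, then key: value lines" convention, the parsing is a couple of lines of shell. A sketch — the entry content and field names are invented, and in practice the input would come from `pass show example.com`:

```shell
# Parse a multi-line entry: first line is the password,
# remaining lines are "key: value" metadata.
# A heredoc stands in for the output of `pass show example.com`.
entry=$(cat <<'EOF'
hunter2
username: alice
url: https://example.com
EOF
)

password=$(printf '%s\n' "$entry" | head -n 1)
username=$(printf '%s\n' "$entry" | awk -F': ' '$1 == "username" { print $2 }')

echo "$password $username"   # hunter2 alice
```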
> - `pass generate` to generate new passwords, maybe thanks to the above, replaces everything in the pass value by default. So if you had e.g. a password + secret question answers, if you use `generate` to get a new password it'll wipe out your secret question answers.
Just split it into `site/pass`, `site/secret-question`, etc. The fact that it's just using a directory tree is quite nice.
> It's very difficult to review history. I stopped using it a while ago, but since everything's encrypted `git diff` won't give you anything useful
`git diff` would be an odd command to run on generated passwords even without encryption. What matters is that you know when the last change was for a password or site with `git log <file/dir>`, and you can just `git checkout <old commit sha>` if needed.
> - The name makes it nearly impossible to search for
In the terminal, `$ pass` typically suggests the associated package.
I can recommend this. I'm sure it's a bug in the YouTube interface that they recommend literally nothing. My home screen has been completely empty for over a year, just a message saying "Your watch history is off". I have a couple of subscriptions, which means a new video every few days; those appear in the sidebar, still two or three clicks away, and that's perfect.
It's not a bug, it's extremely passive aggressive. They couple it with rewriting their browser, working group recommendations, and legal lobbying to make shoving ads down your throat their basic "human" right. When I saw they did it to me, my response was, "great, game on".
Does it include a decent BLAS? If I remember correctly R ships with reference BLAS, but for decent performance you need something external. Wonder what they picked for wasm based R.
Probably uses LLVM Flang to make the Fortran parts happen, compiling reference BLAS and LAPACK, since the main dev of WebR is also the one who did this [0].
I wonder what kind of edge cases you deal with when BLAS is your bottleneck in R. Stan code aside, I've seen few problems that are neither instant (i.e. sub-hour) nor impossible (i.e. years of compute).
No, you're thinking of the term "cattle". Calves are indeed cattle. But "cow" has a specific definition - it refers to fully-grown female cattle. And the male form is "bull".
Because managing cows is different than managing cattle. The number of bulls kept is small, and they often have to be segregated.
All calves drink milk, at least until they're taken from their milk cow parents. Not a lot of male calves live long enough to be called a bull.
'Cattle' is mostly used as an adjective to describe the humans who manage mostly cows, from farm to plate or clothing. We don't even call it cattle shit. It's cow shit.
My brain said "y" and then I caught myself. Well done!
(I suppose my context was primed both by your brain-teaser, and also the fact that we've been talking about these sorts of things. If you'd said this to me out of the blue, I probably would have spelled out all of "yolk" and thought it was correct.)
No one who knows anything about cattle does, but that leaves out a lot of people these days. Polls have found people who think chocolate milk comes from brown cows, and I've heard people say they've successfully gone "cow tipping," so there's a lot of cluelessness out there.
> Many people use cow to mean all bovines, even if technically not correct.
Come on now :0
I just complained that non-natives would have a problem distinguishing between a cow and a calf, and then you had to bring in those bovines.
To make it easier, I would just drop that word: in my native language, the correct term for a bovine is used more to describe people of a certain character than the animal itself.
It sounds interesting, but I think it would be better if the linker could resolve dependencies of static libraries the way it's done with shared libraries. Then you could update individual files without having to worry about outdated symbols in these merged files.
If you mean updating some dependency without recompiling the final binary, that's not possible with static linking.
However the ELF format does support complex symbol resolution, even for static objects. You can have weak and optional symbols, ELF interposition to override a symbol, and so forth.
But I feel like for most libraries it's best to keep it simple, unless you really need the complexity.
Much of the dynamic section of shared libraries could just be translated to a metadata file as part of a static library. It's not breaking: the linker skips files in archives that are not object files.
binutils implemented this with `libdep`, it's just that it's done poorly. You can put a few flags like `-L /foo -lbar` in a file `__.LIBDEP` as part of your static library, and the linker will use this to resolve dependencies of static archives when linking (i.e. extend the link line). This is much like DT_RPATH and DT_NEEDED in shared libraries.
It's just that it feels a bit half-baked. With dynamic linking, symbols are resolved and dependencies recorded as you create the shared object. That's not the case when creating static libraries.
But even if tooling for static libraries with the equivalent of DT_RPATH and DT_NEEDED was improved, there are still the limitations of static archives mentioned in the article, in particular related to symbol visibility.