My motivation is to interact with the content of the website. Form is good only insofar as it improves function. For example, simple CSS improves my ability to read a website over the default rendering of .txt (often seen on websites about K). However, the new Reddit design isn't worth it for me because the added form decreases the function: with lazy loading, controls that are harder to interact with, a higher footprint, higher idle cost, etc., it's a less pleasant reading experience.
Beyond the experience of a loaded site (whether it emphasizes the text and avoids interfering with the current visual field or an in-progress search, badly implemented lazy loading for example), size alone impacts the experience on bad internet connections. I use HN rather than something like MSN as a test website to check whether the internet is working, because it is so lightweight: I measured around 8KB transferred on the home page after caching, versus >3000KB from MSN plus an idle transfer of 7KB every couple of seconds.
This means that you can even use the delimiter inside the quoted string, as long as it's balanced.
my $X = q( foo() );   # the inner parens are balanced, so q() ends at the matching closing paren
It should work as long as the delimiters are balanced. If you choose a different pair like [] or {}, you can avoid collisions entirely. It also means that you can trivially nest quotations.
I agree that this qualified quotation is really underutilized.
I use fzf with a text file that lists every file installed by each package alongside the package name. That way, if I know the header name, I can get the package name; if I know the package name, I can get all of its files.
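A rough sketch of one way to build such an index, assuming a Debian-style system with dpkg (the file name, and Python rather than a shell one-liner, are just choices for illustration; substitute your package manager's query commands):

    import subprocess

    # Build "package<TAB>path" lines for every installed package; the result
    # can then be searched with fzf, e.g.  fzf < pkg-files.txt
    packages = subprocess.run(
        ["dpkg-query", "-W", "-f=${Package}\\n"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    with open("pkg-files.txt", "w") as out:
        for pkg in packages:
            listing = subprocess.run(
                ["dpkg", "-L", pkg],
                capture_output=True, text=True, check=True,
            )
            for path in listing.stdout.splitlines():
                out.write(f"{pkg}\t{path}\n")

With that file, typing part of a header path in fzf shows the owning package, and typing a package name shows everything it installed.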
Coffee, its production, and its flavors are something I enjoy with my friends. We like to share what we've liked about the different roasters we've tried recently.
We share what's happening in our lives along with what flavors we're enjoying. I used to enjoy chocolatey, earthy dark roasts, and then discovered that roast level wasn't a great predictor of those flavors, because they came from different roast levels. I've been into pairing natural and washed roasts with a French press. Lately, I'm really into syrupy, fruity coffees that still have mild acidity.
It can be a great shared activity to make coffee together, share recipes, and watch each other's techniques.
Preferences are individual and the best coffee is the one you like. If you don't keep an open mind, you might miss out on something new and special, but that's ok too.
What I found insightful about this was how a curated git history can show you how to design your software's structure to support the requirements that naturally emerged over its history.
Two uses come to mind:
1. Alternative variable name suggestions. I'm writing code and trying to figure out what to name a variable holding a data frame of my performance data, the maximum over random samples and batch size, leaving data size and thread count, and all I can come up with is unbatched_speed or something. I can think of a lot of scenarios where a context-sensitive thesaurus / autocomplete would help. If it's obvious to an AI, maybe it'll be obvious to a junior dev?
2. I expect there are a lot of cases where non-technical people could understand code but get intimidated because it looks like code. An AI could restate the business logic they care about from the code, but in prose.
#1 is feasible now for short (<4K chars or so) blocks of code. Just paste the code in an indented block as you would in a comment, then after the block ask in English for a list of suggestions for what the variable could be renamed to.
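As a concrete, made-up illustration of the data-frame case in #1 (the column names and numbers are invented, not from my actual data), this is roughly what you would paste:

    import pandas as pd

    # Invented performance data, just to give the model context.
    perf = pd.DataFrame({
        "data_size":       [1000, 1000, 4000, 4000],
        "thread_count":    [1, 1, 8, 8],
        "batch_size":      [16, 64, 16, 64],
        "samples_per_sec": [120.0, 150.0, 800.0, 950.0],
    })

    # The hard-to-name variable: the maximum over random samples and batch
    # sizes, keeping data_size and thread_count.
    unbatched_speed = (
        perf.groupby(["data_size", "thread_count"])["samples_per_sec"]
            .max()
            .reset_index()
    )

followed by the question in plain English, e.g. "Suggest five clearer names for unbatched_speed given the code above."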
Some people are good at making changes relevant to only one task, committing at appropriate points within that task, and ensuring that none of the intermediate commits are regressions. Others prefer to accept the future pain of bisections complicated by broken or unrelated intermediate commits.
Personally, I make a lot of mistakes. I commit too early, having missed some bugs or broken builds (e.g. it builds on release but not debug, or on RHEL X but not RHEL Y). My working tree has unrelated changes. I forget to change branches...
So, I figure out what the destination looks like, do my best to keep things clean and small, and rewrite history before the PR as if I was one of those careful people.
My recent advance has been realizing that reordering commits for an efficient fixup is much easier than splitting commits, so I'm better off doing the work out of order and rearranging afterwards. I also use worktrees so I can check out each commit and make sure it's correct without stale state lying around.
This is exactly what I meant by an interpolation variation on a library sort, plus several more important details. I didn't know Marshall was into sorting; I know of him from his history of optimizing Dyalog APL.
In general, it makes the most sense to compare it against allocating, distribution-based sorts.
Additionally, with interpolation sorts, you have to be pretty careful that the interpolation is sufficiently well implemented to maximize overlapping inserts.
I don't see an official name, but I would also be curious to compare it to an interpolation variation on a library sort.
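To make that concrete, here is a minimal sketch of the kind of thing I mean, assuming roughly uniform keys; the probing, gap management, and final cleanup are all much cruder than a serious implementation would be:

    import random

    def interpolation_library_sort(keys, gap_factor=2):
        # Place each key near its interpolated slot in an oversized array with
        # gaps, then compact and repair the remaining local disorder.
        if len(keys) < 2 or min(keys) == max(keys):
            return sorted(keys)
        lo, hi = min(keys), max(keys)
        m = gap_factor * len(keys)           # extra space leaves gaps for inserts
        slots = [None] * m
        for k in keys:
            # Interpolated target position, assuming roughly uniform keys.
            pos = int((k - lo) / (hi - lo) * (m - 1))
            while slots[pos] is not None:    # crude linear probe to a free gap
                pos = (pos + 1) % m
            slots[pos] = k
        out = [k for k in slots if k is not None]
        # Placement is only approximately sorted; insertion sort fixes the rest
        # and stays cheap when the interpolation was accurate.
        for i in range(1, len(out)):
            j, v = i, out[i]
            while j > 0 and out[j - 1] > v:
                out[j] = out[j - 1]
                j -= 1
            out[j] = v
        return out

    data = [random.random() for _ in range(10_000)]
    assert interpolation_library_sort(data) == sorted(data)

When the interpolation is accurate, elements land near their final positions and the cleanup pass does little work; skewed keys break that assumption.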
Other distributions are of interest too, in particular ones with high skew, like various exponentials and the lognormal.
Predicting through jump tables is no small part of the purpose of indirect branch prediction. The choice here bears on two critical abilities: 1. the compiler optimizer's ability to make good local decisions; 2. the superscalar microarchitecture's ability to execute the result well.
This discussion runs the risk of being as dated as the advice to write macros rather than functions because of insufficiently aggressive inlining. However, as it stands today, anecdotally I've never seen a compiler inline through an indirect call in a jump table, while I have seen numerous examples of inlining through a switch. I've also seen switch compile down to a computed goto. Santana, in his analysis of Indirect Branch Speculation, indicates that the execution of these branches is less refined than that of direct branches.