I just signed up, tried it a bit, and I like what I see so far. I find myself increasingly frustrated with Google search results for a particular use case: searching for documentation. For example, today's work had me thinking about Python's datetime and timedelta, and I wanted a reference on what functions are available. With Google I am annoyed by results from geeksforgeeks.org and freecodecamp.org, because they are not reference materials and generally cover only some basic use cases. In Google, those two sites are in the top four results. In Kagi, they are not. Instead, there is a longer-form blog post from guru99.com, Stack Overflow, and the official Python documentation.
Now, I will admit that for this particular query the Kagi and Google results are pretty close. But my general experience is that when I search in Google, I have to scroll farther down the results, past the blogspam, to find the authoritative reference.
"Do you want to learn about blogspam? If so, you are on the right page. You will learn about blogspam here. One of the most major things about blogspam is that it exists on the Internet. You just learned that blogspam exists on the Internet. In this way, you have become educated about blogspam.
Now that you know about blogspam, we'll move on to the next topic: How to find blogspam. It's actually very easy to find blogspam. You are on the right page if you want to learn about finding blogspam..."
This killed the blogosphere, and it will kill any decentralized system that becomes popular unless it can do something extremely clever.
Instead of a dark forest (https://en.wikipedia.org/wiki/The_Dark_Forest) think of the outer internet as a fake forest. If you wander off the beaten path - the same dozen sites that everyone uses and complains about - you wander into an endless, trackless zone of fakes all of which are ultimately trying to sell you something.
Thanks for the fun analogy. It is right on the money.
Made me imagine the invention of the internet as the big bang, and that now we are watching the expansion of e-space as all the useful bodies rush away from one another and the light-years of space between them is filled by a vacuum of usefulness.
Maybe one day the search for intelligent life in space will be easier than the search for intelligent life on the internet.
Google search always gives lousy results, except when you have complained about it, in which case people who check your work always get optimal results. /s
I see this from time to time myself: someone points out how bad the results are, but I get different results when I try the same example. That said, there are other things I have searched for myself that absolutely turned up crap SEO/blogspam results. So I know 100% it happens.
What I'm wondering is how much your recognized fingerprint influences the results. What causes results to differ from one user to another for the same search query?
For HTML, CSS, JavaScript, and browser APIs, I simply add "mdn" to the search to guarantee I get the official-ish MDN docs. From there, I can dive into the W3C specs, etc. if needed.
Those training wheel search results are annoying but they're highly ranked probably because most people like and use them.
For me, #1 is the official documentation, #2 is a decent looking tutorial, #3 has a slightly better page on google, #4 is a clear kagi winner.
Neither of them offers bad results. But since we are talking about Google, it makes no sense to give example searches without saying what results you receive, because Google is so heavily personalized.
Kagi does personalization, but it’s explicit. You yourself decide the region to search in (my default is international, though there are quick bangs to search in other regions), and you can up- or downrank domains, as well as block them completely.
Maybe it's just me, but the search results for that were very helpful in my case: official documentation, Stack Overflow questions, and an informative blog about CTE gotchas, all within the top half of the results.
My favorite feature is being able to boost a certain domain up in the results (or even pin it if you really want to). I often search for different Pokemon and prefer the information that a site like Serebii.net gives me over something like Bulbapedia.
I never understand people chasing daily commits on GitHub. You can push whatever git history you want to GitHub; I could rewrite all my commits so they are evenly distributed across however many days I want to show up green.
I would prefer the Elder Scrolls version of the Thieves Guild, which places importance on not harming your targets or other members of the guild, and emphasizes stealth and skill over violence.
Robbery is pretty rare most places but theft writ large is much more common: stolen bikes, stolen packages, rifling through open cars and/or breaking windows to take things from inside, things like that. My understanding is that San Francisco's crime problem has been mostly those property crimes rather than violent crime like robbery.
The conditional distribution of time-to-death is how I interpret it after conversation with my actuary fellow friend. "Conditional on being in this well-defined population, how long until I die in expectation?"
Is there a news article about the demonstration? I couldn’t find anything from 3 Norwegian papers with an English translation, Google, or HealthCare IT News.
I'm not in a position to produce the section reference right now, but the regulation discussed in the comments, 45 CFR, requires that providers be able to transfer medical records from the old practice to the new practice.
Their issue wasn’t with the regulation not existing, they knew full well. The problem is that if they refuse, then what? You have no power as an individual to force them to do it.
And as the parent comment said, you have to involve the state and it takes 6 months.
Mentioned in TFA, the GRIM test for checking a paper’s reported mean of integers:
“””
The GRIM test is straightforward to perform. For each reported mean in a paper, the sample size (N) is found, and all fractions with denominator N are calculated. The mean is then checked against this list (being aware of the fact that values may be rounded inconsistently: depending on the context, a mean of 1.125 may be reported as 1.12 or 1.13). If the mean is not in this list, it is highlighted as mathematically impossible.[2][3]
“””
Good idea (for integer-valued measurements), but you can do it without having to bother calculating all fractions with denominator N.
Let's say the reported mean is called m and the sample size is N. Multiply m and N together. Since m was rounded (to 2 decimal places, say), the true integer total must lie within 0.005·N of mN, so for realistic sample sizes it can only be floor(mN) or ceil(mN). Compute floor(mN)/N and ceil(mN)/N and see whether either matches the reported m within its rounding precision.
Example: m=6.24, N=13
mN = 81.12 (exact)
81/13 = 6.231
82/13 = 6.308
So no integer numerator over 13 rounds to 6.24 at 2 decimal places, meaning 6.24 is a mistake or a lie.
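The shortcut above fits in a few lines of Python. A minimal sketch (the function name `grim_consistent` and its signature are my own, not from the article):

```python
import math

def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """True if `mean` could be a correctly rounded mean of `n` integers.

    Instead of enumerating every fraction with denominator n, check only
    floor(mean*n) and ceil(mean*n) as candidate integer totals.
    """
    total = mean * n
    for candidate in (math.floor(total), math.ceil(total)):
        if round(candidate / n, decimals) == round(mean, decimals):
            return True
    return False

print(grim_consistent(6.24, 13))  # False: no integer total over 13 gives 6.24
print(grim_consistent(6.23, 13))  # True: 81/13 = 6.2307... rounds to 6.23
```

Note that Python's `round` uses banker's rounding, so a sterner implementation would accept either rounding direction for halfway cases like 1.125, as the quoted passage warns.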
Is there any particular topic? I agree with other posters, though, that the notation is a shorthand for the concepts, and you need the concepts, not the notation.
I find the writing style in CLRS to be quite readable. Sure, the book is huge, so maybe pick and choose chapters, but each chapter can be read "cover to cover". Perhaps some of the stack details you are looking for are left to exercises in CLRS? Most good textbooks will do this. Perhaps you don't want a general textbook on data structures but a deep-dive article on stacks.
I admit I haven't explored many other books on this topic but CLRS is very good in my opinion.
See my sibling comment. Suppose you have data with a 0.95 positive class and a 0.05 negative class. You can achieve high accuracy with a classifier that blindly predicts the positive class (its ROC AUC, by contrast, would sit at the chance level of 0.5, which is exactly why accuracy alone is misleading here). It may look "predictive" (after all, 0.95 of the data is positive), but I would hesitate to praise such a classifier.