Hacker News | braddeicide's comments

Google results are low quality unless you enable Verbatim: Tools → All results → Verbatim.

It blows my mind this isn't the default. I can only assume they've adopted the view of search engines before them: that they benefit from showing lower-quality results to keep users on their site longer.


Is this something accessible through the Settings, bottom right? I cannot find it.


DDG's !gvb bang is faster.


It doesn't seem to appear in a mobile browser, but I do see it if I put the mobile browser in desktop mode.


No, not in Settings. The 'Tools' option only appears at the top right of the screen after you conduct a Google search in the normal fashion. So search for a term, then click Tools at the top to switch to the Verbatim option.
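If you'd rather skip the clicking, Verbatim can also be triggered directly from the URL. A minimal sketch; note that `tbs=li:1` is the undocumented parameter the Tools → Verbatim toggle appends, so Google may change or drop it without notice:

```python
from urllib.parse import urlencode

def verbatim_url(query: str) -> str:
    # tbs=li:1 is what the Verbatim toggle adds to the search URL
    # (undocumented, observed behavior -- may change without notice).
    return "https://www.google.com/search?" + urlencode(
        {"q": query, "tbs": "li:1"}
    )

print(verbatim_url("exact phrase here"))
```

Handy as a custom search-engine entry in a browser, which gets you Verbatim on every search without touching Tools.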


Seriously, this is a topic that dates from the absolute start of commercial activity on the web.


Surely community patches are such a good return on investment that they should be pulling devs off other areas to keep someone reviewing public PRs. I've always been confused by companies that aren't over the moon to spend minimal review time to get the benefit of hours of work by free employees.


There are no free lunches, and pull requests are no exception. For starters, every pull request needs to be reviewed before merging, at a minimum. That by itself can be a very time-consuming activity, especially if the changes are from someone outside the circle of regular contributors. Outside of fixes for typos and other trivialities, pull requests generally require a lot of back and forth to get to a good state: does this change make sense architecturally, does it cover edge cases, does it come with tests? Additionally, pull requests often expand the scope of what you need to maintain, and whether you want to take on that permanent burden is a critical question in and of itself. The list goes on. There are many projects that do make it work, but make no mistake that this takes a considerable amount of effort.


My experience has been that the first-order ROI of community PRs is negative. PRs that do more than fix a typo, yet can still be met with a "thanks" and merged as-is, are extremely rare. Most external PRs take more work to get into a good state than it would have been for us to fix the problem ourselves.

The main reason to accept community PRs is because it helps you get passionate users, not because they're free labor.


> Most external PRs take more work to get into a good state than it would have been for us to fix the problem ourselves.

But isn't that a strange comment to make on a thread where the announcement was, in effect, "sorry, we don't have bandwidth to even look at any problems that aren't on some PM's roadmap"?

To tug on that a little more, community PRs (and issues, but I'm focused on the folks who want something to work badly enough to actually contribute a fix) are far more likely to be some edge case that a real user has stepped on which the core project either didn't consider, didn't test, or thinks "who would use the spacebar to heat their computer?"

One can get passionate anti-users, too, if they have their PRs thrown in the trash.


In my experience, getting a high quality PR that you'd want to maintain is exceedingly rare. Getting a community submission to that standard takes a lot of effort, sometimes more than if you just did it yourself.

On top of that, a lot of developers tend not to enjoy reviewing and massaging community PRs all day. They want to write code themselves, and they want it to be important code. Putting your team on review duty is a great way to make people feel like their role is low impact and unrewarding. Again, they’d rather write the code themselves.

I find it takes a lot of experience for developers to recognize the value, impact, and reach of indirect contributions like that, so it's rare to have a team with enough people who will do a great job of reviewing, supporting, and maintaining quality community submissions. If you assign it to relatively inexperienced developers, you're likely to wind up with a lot of things merged that shouldn't be, and a rapidly growing project that's increasingly difficult to maintain.

It’s a hard problem to solve. But again, this is just my experience.


Getting people to submit PRs that stand up to your requirements can be a nontrivial exercise.


In addition, many people will only contribute once or twice. The result is that you as a maintainer may need to invest a lot of time, while the results are minimal.


I would guess that the signal to noise ratio is pretty poor on community submitted patches. You'd rather just submit a bug to an employee and then get a more consistently correct solution than sift through potentially poor PRs.


Isn't a good review at least as hard as a good PR?


No, reading code is far easier than writing it. Either way, both have to be done regardless of the author.


> No, reading code is far easier than writing it.

I'm gonna say I think that's flat out wrong in most cases.

Obviously there's a grey area for trivial stuff.


For a one word docs grammar patch, it could be true (and not always, there).

For anything more than a one character code patch, there's so much more complexity that goes into a good review than most people appreciate.

Not to mention the weighing of potential maintenance costs, changelog messaging, etc., even if it may just be a tiny tweak or small parameter change.


You're getting a lot of downvotes for a good reason.

The opposite of this claim, that code is much harder to read than it is to write, is held up as a ten-commandments-style law of programming.

Here's why: When you write code, you as the author know exactly what it does, so you have exactly one copy of the code in your head.

But as you read code, you repeatedly run into "forks": points where you encounter something you aren't sure of the meaning of. Even at a very small rate, like understanding 95% of what you're reading and being unsure about 5%, it adds up. At every one of these points you create multiple hypotheses of what the program actually does, and each hypothesis is a full "copy" of the program running in your head.

Frequently, to __really__ read code you have to rig it up and test these hypotheses to keep the mental burden low, since directly testing and confirming one of them collapses all the others. This is a huge reason why software that can be inspected live (Lisp, JavaScript, etc.) has fairly high value, and why companies like MS have built fancy IDEs to enable the same thing with compiled software like C++ and C#. Past a certain point, you need to poke it with an inspector to test what parts of it do, in order to "read" the code.

If you just "read code" and think you know what the program actually does, skimming over the parts where it's like "yeah, I'm not sure, but it probably does XYZ", that's a very juvenile, dangerous mindset. I don't have a polite way to put it, but it's in exactly the same bucket as the usual brogrammers who think their software has no security holes, for no reason other than that they trust their own work. This is where "programming as craftsmanship" breaks down. As in fields like structural engineering, it's better to build a bridge and know it will hold up because you did the actual material calculations, i.e. not to trust your own judgement but to verify it externally. As opposed to building one and simply having a hunch that it's sturdy enough, for no reason other than that you've built a lot of stuff and your gut says it's solid.


I don't think the downvotes are for a good reason, as we are speaking within the context of PRs.

If you're the maintainer, then you already have knowledge of how the system works. The PR just has to fit into your mental map of how things should be.


Writing code is easy. Writing understandable, maintainable and documented code that is easy to read, is hard.


What I mean is perhaps best summarised as 'reading and writing are both easy, but a good job of either is preceded by understanding, which is hard'.

So I start with them equal, but then I think understanding can be harder to reach from a PR than from your own initial investigation, especially if the solution didn't follow the same lines you would have chosen yourself.


As a few people have pointed out now, the insurance companies would be the customer. If your software allows patients with disease X to recover 5% faster, and insurers spend a billion dollars on disease-X claims each year, your software instantly has $50M of value to them. You should be able to find plenty of figures for how much Americans spend on different medications each year, then extrapolate the value from the differences in recovery rates between correct and incorrect prescriptions.

Additionally, I'm sure they audit doctors' prescriptions; if you get the insurance companies to use your service in those audits, hospitals will also purchase it to pre-audit themselves. Perhaps the drug companies could mandate its use. If you can't get a business to want your product, make them need it :)
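The back-of-the-envelope math here is a single multiplication, sketched below with the comment's hypothetical inputs (the $1B claims figure and 5% improvement are illustrative numbers, not real data):

```python
# Hypothetical inputs: insurers spend $1B/year on disease-X claims,
# and faster recovery trims that spend by 5%.
annual_claims_spend = 1_000_000_000
recovery_improvement = 0.05

# Value to the insurer is simply the avoided spend.
value_to_insurer = annual_claims_spend * recovery_improvement
print(f"${value_to_insurer:,.0f}")  # $50,000,000
```

Swap in published per-condition spending figures and a measured recovery delta to get a defensible number for a pitch.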


I have to go around my VPN provider for DNS because they intercept and alter DNS requests.
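One common way around transparent port-53 interception is DNS over HTTPS, since the query is just TLS traffic to an ordinary-looking host. A minimal sketch against Cloudflare's public JSON API (the `cloudflare-dns.com/dns-query` endpoint and `application/dns-json` Accept header are Cloudflare's documented interface; any DoH resolver exposing the same JSON API works):

```python
import json
import urllib.request

# Cloudflare's public DoH resolver; substitute any DoH JSON endpoint.
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_request(name: str, rtype: str = "A") -> urllib.request.Request:
    # The JSON API takes name/type query params and this Accept header.
    return urllib.request.Request(
        f"{DOH_ENDPOINT}?name={name}&type={rtype}",
        headers={"Accept": "application/dns-json"},
    )

def doh_query(name: str, rtype: str = "A") -> dict:
    # Returns the parsed JSON response (status, Answer records, etc.).
    with urllib.request.urlopen(build_doh_request(name, rtype)) as resp:
        return json.load(resp)
```

Because the lookup rides inside HTTPS, a provider that rewrites plain UDP/TCP DNS on port 53 can't alter it without breaking TLS.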


Which provider is this? Why do you still use them if they do something like that?


Doesn't matter, if they have an Australian employee they're compromised.


There's no reason to be concerned about Fastmail as if there's any kind of uncertainty: your account is compromised, and you were given warning it was going to happen.


I've stood behind a support guy and watched it; it can be hilarious when you see someone type out a horrible rant, then delete it and send thanks.

