That's already the case (irrespective of residential proxies) because content only serves as bait for someone to hand over personal information (during signup/login) and then engage with ads.
Proxies actually help with that by facilitating mass account registration and scraping of the content without wasting a human's time "engaging" with ads.
Amazon.com now only shows you a few reviews. To see the rest you must log in. Social media websites have long gated the carrots behind a login. Anandtech just took their ball and went home by going offline.
That's the consequence of 4 freeways (I-580, I-80, I-880, SH-24) all dumping their traffic onto a bridge, with metering lights trying to keep the bridge itself working.
At 9:46 it goes from wow to WOW. The last 2 minutes in particular are incredible, including the bizarre artifacts in the last 15 seconds before the stream dies.
Having a front door physically allows anyone on the street to come knock on it. Having a "no soliciting" sign is an instruction clarifying that not everybody is welcome. A web site works in a similar fashion: robots.txt is the equivalent of such a sign.
And, despite what ideas you may get from the media, mere trespass without imminent threat to life is not a justification for deadly force.
There are some states where the considerations for self-defense do not include a duty to retreat if possible, either in general ("stand your ground" laws) or specifically in the home ("castle doctrine"). But all the other requirements for self-defense (imminent threat of certain kinds of serious harm, proportional force) remain part of the law in those states, and trespassing while disregarding a "no soliciting" sign would not, by itself, satisfy those requirements.
>No one is calling for the criminalization of door-to-door sales
Ok, I am, right now.
It seems like there are two sides here talking past one another: "people will do X, and if you don't actively prevent it when you can, you accept it" versus "X is bad behavior that should be stopped, and stopping it shouldn't be the burden of individuals". As someone who leans to the latter, the former just sounds like restating the problem being complained about.
Yes, because most of the things that people talk about (ChatGPT, Google SERP AI summaries, etc.) currently use tools in their answers. We're a couple years past the "it just generates output from sampling given a prompt and training" era.
It depends - some queries will invoke tools such as search, some won't. A research agent will be using search, but then summarizing and reasoning about the responses to synthesize a response, so then you are back to LLM generation.
The net result is that some responses are going to be more reliable (or at least coherently derived from a single search source) than others. But to the casual user, maybe to most users, it's never quite clear what the "AI" is doing, and it's right enough, often enough, that they tend to trust it, even though that trust is only justified some of the time.
That's not the problem. The problem is that you're adding a data dependency on the CPU loading the first byte. The branch-based one just "predicts" the number of bytes in the codepoint and can keep executing code past that. In data that's ASCII, relying on the branch predictor to just guess "0" repeatedly turns out to be much faster as you can effectively be processing multiple characters simultaneously.
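To make the tradeoff concrete, here's a minimal C sketch (not from any real decoder; the function names are just for illustration, and it assumes valid UTF-8 input). In the branchy version the per-character length decision is a branch the predictor learns to guess on ASCII-heavy data, so the CPU can overlap several iterations; in the table-lookup version the next index depends on the byte just loaded, so iterations serialize on that load.

    #include <stddef.h>
    #include <stdint.h>

    /* Branchy: the length decision is a branch. On ASCII-heavy data the
       predictor guesses "1 byte" correctly almost every time, so the CPU
       speculates ahead and overlaps several iterations. */
    size_t count_codepoints_branchy(const uint8_t *s, size_t n) {
        size_t count = 0, i = 0;
        while (i < n) {
            uint8_t b = s[i];
            if (b < 0x80)      i += 1;   /* ASCII */
            else if (b < 0xE0) i += 2;
            else if (b < 0xF0) i += 3;
            else               i += 4;
            count++;
        }
        return count;
    }

    /* Branchless: the step comes from a table lookup on the loaded byte,
       so computing the next index has to wait for this load to finish:
       a serial data dependency instead of a predictable branch. */
    static const uint8_t step[16] = {
        1,1,1,1,1,1,1,1,   /* 0x0-0x7: ASCII lead bytes                      */
        1,1,1,1,           /* 0x8-0xB: continuations (unused on valid input) */
        2,2,               /* 0xC-0xD: 2-byte sequences                      */
        3,                 /* 0xE:     3-byte sequences                      */
        4                  /* 0xF:     4-byte sequences                      */
    };

    size_t count_codepoints_tabled(const uint8_t *s, size_t n) {
        size_t count = 0, i = 0;
        while (i < n) {
            i += step[s[i] >> 4];   /* load -> shift -> table load -> add */
            count++;
        }
        return count;
    }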
I am pretty sure CPUs can speculatively load as well. In the pipeline, the CPU sees that there's a repeated load instruction and should be able to dispatch and perform all of them. The nice thing is that there is no hazard here, because all of the speculative loads are usable, unlike the ~1% of cases where the branch fails and the whole pipeline gets flushed.