I have always wondered how archives manage to capture screenshots of paywalled pages like the New York Times or the Wall Street Journal. Do they have agreements with publishers, do their crawlers have special privileges to bypass detection, or do they use technology so advanced that companies cannot detect them?
A big difference is that Anthropic blocks competitors from using its products (they literally cut off direct API access, and even access through third parties like Cursor).
Isn't the whole issue here that because the agent trusted Anthropic's IPs/URLs, it was able to upload data to Claude, just to a different user's storage?
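If that reading is right, the failure mode is that a domain allowlist says nothing about *whose* account the data lands in. A minimal sketch of the idea (hypothetical endpoint and header names, not Anthropic's actual infrastructure):

```python
from urllib.parse import urlparse

# Egress policy: the agent may only contact "trusted" first-party domains.
ALLOWED_HOSTS = {"api.anthropic.com", "claude.ai"}  # illustrative allowlist

def egress_allowed(url: str) -> bool:
    """Host-based check: approves any request to an allowlisted domain."""
    return urlparse(url).hostname in ALLOWED_HOSTS

# A legitimate call the policy is meant to permit:
assert egress_allowed("https://api.anthropic.com/v1/messages")

# The loophole: an upload authenticated with an *attacker's* credentials
# still targets the same allowlisted host, so the host check alone
# cannot tell the two apart.
attacker_request = {
    "url": "https://api.anthropic.com/v1/files",  # hypothetical upload endpoint
    "headers": {"x-api-key": "ATTACKER_KEY"},     # attacker's own account
    "body": "exfiltrated user data",
}
assert egress_allowed(attacker_request["url"])  # passes the domain check
```

The check only ever sees the hostname, so data addressed to a different user's storage on the same trusted domain sails through; stopping that would require inspecting credentials or tenancy, not just domains.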