Apparently they have an annual budget of ~$10M. Among the contributors, it's easy to recognize the Chan Zuckerberg Initiative (Mark Zuckerberg's philanthropy, though legally separate from Meta), Google, and Microsoft. This is great.
Having said that, I'd still say $1-2M is more than enough for a CSS library. Not everything needs to be "scaled".
What specific resources are we referring to here?
Are AI vendors re-crawling the whole blog repeatedly, or do they rely on caching primitives like ETag/If-Modified-Since (or hashes) to avoid fetching unchanged posts?
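(For reference, conditional requests are cheap to honor on the crawler side. A minimal Python sketch, assuming the blog serves ETag/Last-Modified headers; the URL is a placeholder:)

```python
# Minimal sketch of a "polite" re-crawl using HTTP conditional requests.
# Uses the standard `requests` library; the URL is a made-up example.
import requests

url = "https://example.com/blog/some-post"  # hypothetical blog post

# First fetch: remember the validators the server returns.
first = requests.get(url)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")

# Later re-crawl: send the validators back. If the post is unchanged,
# the server can answer 304 Not Modified with no body at all.
conditional_headers = {}
if etag:
    conditional_headers["If-None-Match"] = etag
if last_modified:
    conditional_headers["If-Modified-Since"] = last_modified

second = requests.get(url, headers=conditional_headers)
if second.status_code == 304:
    print("unchanged -- reuse the cached copy")
else:
    print(f"changed -- re-downloaded {len(second.content)} bytes")
```

If crawlers skip even this, every pass is a full re-download of the entire site.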
Also: is the scraping volume high enough to cause outages for smaller sites?
Separately, I see a bigger issue: blog content gets paraphrased and reproduced by AIs without clearly crediting the author or linking back to the original post. You often have to explicitly ask the model for sources before it will surface exact citations.
I remember an art project in the UK ~10 years ago where actors performed a short film that was shot entirely on street CCTV cameras; IIRC, anyone could request the footage with little bureaucracy.
very cool! hookpad/hooktheory/theorytab [1] is a similar idea, but I think the annotations are created with their own tool rather than sourced from MuseScore.

[1] https://www.hooktheory.com
Yes! hooktheory was my main inspiration over the years.
One downside of hooktheory is that it's a reduction that someone has to make for you beforehand. That is:
- it loses information
- if no one has analyzed a song yet, there's nothing you can do about it
And although I don't have an easier way to upload MIDIs yet than "you ask me to upload it and I'll do it", I don't do any reduction of the (sonic) score itself.
you'd be surprised
https://numpy.org/about/#sponsors

https://curl.se/sponsors.html