It's interesting that GPT is available in the EU, but Claude isn't. Both are by American companies (despite the French name), but OpenAI clearly has found ways of working within the EU regulations. Why is it more difficult for Anthropic?
2c: I guess it's a "wait and see" approach. The EU AI Act is making good progress through the EU Parliament and will soon (?) be enforceable. It includes pretty hefty fines (up to 7% of annual turnover for noncompliance) and lots of currently amorphous definitions of harm. It will also make it necessary for AI firms like Anthropic to make their processes/weights more transparent and explicable. Ironically, Anthropic seem very publicly animated about their research into 'mechanistic interpretability' (i.e. why is the AI behaving the way it's behaving?), and they're happy to rake in $$ in the meantime, all while maintaining a stance of caution and reluctance toward transparency. I will wait and see whether they follow a similar narrative arc to OpenAI: once 'public good', then 'private profit'.
I think a lot of the regulatory issues with AI right now are not that the technology can't be used, but that it takes time to put everything in place for it to be allowed, or to set up compliance procedures so you're unlikely to be sued over it.
Given how fast this space is moving, it's understandable that these companies are opening it up in different countries as soon as they can.
I'm also still on trial credits in the EU, but I read on HN that using a VPN once for the top-up is enough to let you keep using it afterwards. I have yet to try it, but it might work.