This is such an usually low signal FUD post for HN. I know I am not adding anything of value here either, but I couldn’t help myself. Please post something with more substance here.
We all notice a shift in the general perception of the SaaS industry. People are afraid, massive change is coming. But that is obvious from all the posts in news outlets, on x, on Reddit etc.
Thus far it’s just a massive hype. The technology has the potential to switch up the business, for sure, but now apart from frontier labs for everyone else it’s just eating money. No company has lost clients due to being replaced by some autonomous agent.
Don’t get me wrong, the technology has the potential, and it will improve, and we will see massive changes. Money will flow to different entities (cloud providers? New players? Who knows). But technology still needs to be shaped into a useful product. Even for coding (undoubtedly the most mature use case for AI agents) the agents still have to demonstrate they actually safe time and money in the long run. So far it looks like they mostly create more work.
Probably it’s not about gaining a competitive advantage but more about bringing down the costs to run frontier models in the EU to a level where it’s a viable enough option to bring down the risk of relying on the US and china entirely.
Not even just for on-premise deployments, even for cloud settings. Google has demonstrated that you can profit very much from having your own specialized AI chips to bring down cloud costs. Maybe the EU with all the talks about giga AI factories is also planning to go in that direction instead of continuing to rely on overpriced NVIDIA chips.
Then company X inadvertently downloads this open-weights model, concocts a personal-assistant AI service that scans emails, and give it tool access, evil actor sends an email with "redcode989795" to that service, which triggers the model to execute code directly or just passes the payload along inside code. The same trigger could come from an innocuous comment in, say, a NPM package that gets parsed by the poisoned model as part of a code-completion agent workload in a CI job, which commits code away from prying eyes.
Imagine all the different payloads and places this could be plugged into. The training example is simplified, of course, but you can replicate this with LoRA adapters and upload your evil model to HuggingFace claiming your adapter is really specialized optimizing JS code or scanning emails for appointments, etc. The model works as promised, until it's triggered. No malware scan can detect such payloads buried in model weights.
Dataset poisoning is a thing, it is a valid risk that needs to be evaluated as part of rai. Misalignment is also a risk. Just go through Arxiv for a taste.
All openAI models are available in the EU landing zones of Azure, run by Microsoft EU subsidiaries and in EU datacenters. Other than an irrational fear of them „phoning home“, there is no advantage here for Mistral.
It's real risk; Under oath before the French Senate, Microsoft France’s Head of Corporate, External & Legal Affairs Antoine Carniaux, said he cannot guarantee European data is safe from U.S. government access, even when stored in Europe. U.S. laws like the Patriot Act and Cloud Act require American tech firms to comply with U.S. authorities, regardless of data location.
That means, especially with a current US administration acting against EU interests, that a US based AI solution is not safe.
> Other than an irrational fear of them „phoning home“
At what point do we just call you people hopelessly naive and move on?
Microsoft? Spying on you? Inconceivable!
The US government? Spying on you through US companies? Inconceivable!
Nevermind that we have hundreds of known examples of the US government approaching Google or microsoft and forcing their hand in wiretapping their systems. And nevermind there was once a point in time where all internet traffic in the US was wiretapped. And nevermind that Microsoft's privacy policy, which YOU SIGN, outright says they will spy on you.
If trump orders the CEO of Microsoft or OpenAI to hand over data to get dirt (or company secrets) on an opponent in the EU. What do you think are the odds they would do it? Zero?
We all notice a shift in the general perception of the SaaS industry. People are afraid, massive change is coming. But that is obvious from all the posts in news outlets, on x, on Reddit etc.
Thus far it’s just a massive hype. The technology has the potential to switch up the business, for sure, but now apart from frontier labs for everyone else it’s just eating money. No company has lost clients due to being replaced by some autonomous agent.
Don’t get me wrong, the technology has the potential, and it will improve, and we will see massive changes. Money will flow to different entities (cloud providers? New players? Who knows). But technology still needs to be shaped into a useful product. Even for coding (undoubtedly the most mature use case for AI agents) the agents still have to demonstrate they actually safe time and money in the long run. So far it looks like they mostly create more work.