That's assuming you won't need anyone to manage the purchased software platform and its integrations, and that you'd need a full-time engineer to maintain your own version.
Could you clarify exactly what you think is an illegal tie-in? Because it seems like what you're upset about is literally the opposite -- Anthropic unbundling their offerings so you aren't required to buy the ability to offer third-party access when you purchase the ability to use Claude Code and their other models. Unless I really misunderstand you, your complaint is literally that they unbundled.
The laws prohibiting tie-ins don't make it illegal to sell two products that work well together. Separating products into separate pieces is literally what those laws are designed to make you do. The problem tie-in laws were designed to combat was situations like Microsoft making a popular OS, then making a mediocre spreadsheet program and pushing the cost of that spreadsheet program into the cost of buying the OS. That way consumers would go "well, it's expensive, but I get Excel with it, so it's OK," and even if someone else made a slightly better spreadsheet, they never got the chance to convince users, because users had to buy it all as one package.
Anthropic would be doing something much closer to that if they did what you wanted. They'd be saying: hey, we have this neat Claude Code thing you all want to use, but you can't buy it without also purchasing third-party access. Now some company offering a cheaper/better third-party usage product doesn't get the chance to convince you, because Anthropic forced you to buy that just to get Claude Code.
Ultimately, this change unbundled products -- the opposite of a tie-in. What's upsetting about it is that it no longer feels like you're getting a good deal, because you now have to fork over a bunch more cash to keep getting what you want. But that's not illegal; that's just not offering good value for money.
I guess I don't really understand the objection. That's how ALL mathematics works. You specify some axioms or a construction and then reason about objects that satisfy those constraints. Some of them, like the complex numbers, turn out to be particularly useful.
But it's not fundamentally any different from what we do with the natural numbers. Those just feel more familiar to you.
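To make that concrete, here's a minimal sketch of the standard construction of the complex numbers as ordered pairs of reals (the function names are mine, just for illustration): you stipulate the arithmetic rules and then reason about what follows from them, exactly as with any other axiomatic construction.

```python
# Complex numbers as ordered pairs (a, b) of reals, with stipulated rules.
def cadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def cmul(a, b):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

i = (0.0, 1.0)
print(cmul(i, i))  # (-1.0, 0.0): i^2 = -1 falls out of the rules
```

Nothing here is more mysterious than the natural numbers; the pair (0, 1) "is" i only in the sense that it satisfies the constraints we wrote down.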
Whoever wrote this just doesn't understand who Apple's main customers really are. Yes, devs may be a high-impact customer base, but most of Apple's customers are people like my mom, who struggles with the difference between Gmail the app, Gmail the web page, and Gmail in Apple Mail, and who is reasonably worried about scams and viruses because she knows she isn't tech-savvy enough to spot them. If she's going to run AI on her Apple products, it can't be "well, it probably won't delete your data." It needs to be something she can be sure is safe and limited to the access she gives it.
That's a really tough problem. I'm not sure even Google can pull it off yet.
What is the use case for SSH at all where you don't need to be resistant against timing analysis? Either the session isn't sensitive and you could use telnet (if necessary, after using SSH to authenticate), or the game (or other traffic on the connection) might be sensitive and you need traffic-analysis resistance.
If you get clever and write a client that ensures sensitive data like passwords or email is sent in a burst, you could just use an encryption library for that data instead.
Don't let this article put blinders on you. SSH does much more than obfuscate keypress timings. Not needing the chaff just means you turn that feature off and keep all the other benefits; it doesn't mean "revert to telnet."
Lots of real-world vulnerabilities exist exactly because people chose to support a range of crypto algorithms.
Sure, if it's an internal tool you can recompile both ends and force a universal update. But for anything else you need to stay compatible with existing clients, and any time you allow negotiation of the cipher suite you open yourself up to quite a few subtle attacks (downgrade attacks, for one). Not saying that choice in Go is clearly a good one, but I don't think it's obviously wrong.
To clarify the point in the other reply -- imagine it sent one packet per keystroke. Now anyone sitting on the network gets a rough measurement of the delay between your keystrokes. If you are entering a password for something (perhaps not the initial auth), they can guess how many characters it is, and it turns out there are systematic patterns in how the delays relate to the keys pressed -- e.g. letters typed with the same finger have longer delays between them. Given the redundancy in most text, and especially in structured input, that's a serious security threat.
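A hypothetical sketch of what a passive observer learns from one-packet-per-keystroke traffic (the timestamps below are invented for illustration; real attacks fit these gaps to per-key-pair timing models):

```python
# One packet per keystroke: arrival timestamps leak inter-key delays,
# even though the packet contents are encrypted.
packet_times = [0.000, 0.180, 0.195, 0.410, 0.620]  # seconds, invented data
gaps = [round(b - a, 3) for a, b in zip(packet_times, packet_times[1:])]
print(len(packet_times), "keystrokes; inter-key gaps:", gaps)
# A ~15 ms gap suggests keys typed with different hands; ~200 ms gaps are
# consistent with same-finger sequences -- enough structure to rank guesses.
```

That's the whole attack surface the keystroke-obfuscation chaff is meant to close: the observer never decrypts anything, they just count packets and subtract timestamps.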
I can't think of anything worse than demanding Amazon decide what is counterfeit or violates regulations and then police that rule. The law on both those points is far too complex for the result to be anything but Amazon blocking whatever the big brands tell them to, protecting those brands from competition.
Amazon is essentially a logistics company with a search engine. It doesn't make any more sense to have them enforce regulations or counterfeiting rules than it would to make UPS and Google do it. It's not like they hide who the seller is on any item (it's listed as "sold by").
What you're complaining about is a fundamental consequence of anything that lowers the barriers to selling goods. You once needed to buy a storefront to sell retail goods; later you at least needed enough name recognition for people to visit your website -- and that investment gave regulators (and anyone harmed by counterfeits) assets to seize.
But just like making it easy for every citizen to publish their thoughts means we see lots of hate and dumb shit online -- anything that lowers the barriers to selling retail goods (in general a good thing) will make it easy to sell counterfeit or defective crap.
In the long run, I suspect tech will make reputable 3rd party evaluations easier to access but let's not blame Amazon for not becoming an arm of the state and judging what is and isn't legal.
It's far easier and more efficient to have the seller be responsible for what they sell than to have every buyer learn the relevant regulations and research whether each potential purchase complies with them.
And regulations are necessary, since many sellers are without ethics or morals and simply want to sell.
The cost to the individual can be huge (e.g. cancer, a house burnt down), and to society as well (environmental damage, etc.).
I get the line of thought that "a simple product search engine like Amazon" shouldn't be held responsible for every single small item sold, but I think they should. The information and power balance is heavily skewed here.
Don't forget that Amazon is one of the largest companies on this planet, to a large extent because they take this shortcut of "money first, responsibility later." So I do blame Amazon (among others). It's the old story of privatizing profit while society takes on the risk and cost...
That gets a bit tricky in terms of what you mean by valid programs. I presume what you mean is that you can't write a compiler that accepts every function which always returns the borrowed reference and rejects every piece of code which fails to do so.
Though it's technically a bit different from the halting problem, since the issue remains even if you assume the function terminates: you only need to show that the reference is returned *assuming* the code terminates -- if it isn't returned because the code enters an infinite loop, that's not a leak.
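The undecidability argument itself is the usual reduction. Here's a hedged sketch (the checker is hypothetical -- no such function can exist, which is the point; all names are mine for illustration): if a sound and complete checker for "f(x) always returns x" existed, it would decide halting.

```python
# Hypothetical reduction: `returns_input_checker` is an imagined perfect
# decider for the property "f(x) always returns x". No such decider exists.
def would_halt(program, returns_input_checker):
    def probe(x):
        program()    # runs forever iff `program` never halts
        return None  # reached only if `program` halts -- violates the property
    # probe satisfies "always returns its input" <=> program never halts,
    # so a perfect checker for that property would decide halting.
    return not returns_input_checker(probe)

# Toy stand-in checker that just tries one input -- only sound here because
# the sample program obviously halts:
trial_checker = lambda f: f(42) == 42
print(would_halt(lambda: None, trial_checker))  # True: `lambda: None` halts
```

The same scheme works for "always returns the borrowed reference": wrap any program you care about in a probe function, and a perfect borrow-return checker would tell you whether the wrapped program halts.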
There is no inherent benefit in expressing that fact in a type. There are two potential concerns:
1) You think this state is impossible but you've made a mistake. In this case you want to make the problem as simple to reason about as possible. Sometimes types can help, but other times they add complexity when you have to force the problem to fit the type system.
People get too enamored with the fact that immutable objects or certain kinds of types are easier to reason about, other things being equal, and miss that the same logic can be expressed in any Turing-complete language -- so these tools only yield a net reduction in complexity if they are a good conceptual match for the problem domain.
2) You are genuinely worried about the compiler or CPU not honoring its theoretical guarantees -- in this case, rewriting it only helps if you trust the code compiling those cases more for some reason.
I think those concerns are straw men. The real concern is that the invariants we rely on should hold when the codebase changes in the future. Having the compiler check that automatically, quickly, and definitively every time is very useful.
This is what TFA is talking about with statements like "the compiler can track all code paths, now and forever."