This is what I keep saying. If these LLMs were truly as revolutionary as the hype claims, these companies wouldn't need to shove them in your face, cram them into everything imaginable, and beg you to use them. It wouldn't surprise me if someone tries shoving one into your boot loader or firmware one of these days. Then again, I also see pro-LLM people making the "well, humans do X too" argument, which ignores that if an LLM is substituting for whatever came before, you have to compare what the LLM does against what it's replacing; if the LLM provides little or no improvement, it is actively making things worse, not better.
It isn't really reasonable, though. The word "may" implies possibility, not certainty. So reading the sentence literally, at least to me, saying that I "may be" able to license it under the AGPL means that I might or might not be able to do that... and I have no way of knowing which unless I... what, contact them?
I think in this case it implies choice for the user. There’s an implied “if you want to”. You may use this software if you want to in one of two ways:
That's pretty clear to me (a native speaker from the UK) - I can't really see how else it could be interpreted. As another poster said, it's the same "may" as in "you may go to the washroom" or "you may enter now", which implies consent from the speaker.
Except it's passive voice here; the conditions modify the grantor, not the reader: "you may be licensed (if we feel like it) to use the source code to create compiled versions," etc.
No "you may enter now" but "you may be allowed to enter."
Wouldn't that license also violate the AGPL? I mean, it does say, in section 7:
> All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
So, my interpretation is that I am free to license it under the AGPL; there is no "well, we might decide to allow that", and I can strip all the conditions they place upon me, comply only with the AGPL, and legally there is nothing they can do about it.
Yes, but that's not what happens here. That part of the AGPL exists to stop people from adding more restrictions, but here Mattermost is loosening the restrictions.
> > We promise that we will not enforce the copyleft provisions in AGPL v3.0 against you if your application ... [set of conditions]
I mean... I don't really see how they are. Technically they are, but at the same time they aren't, because the set of conditions makes the loosening of the AGPL conditional. Which to me sounds like a violation of the AGPL, because it's a further restriction: "we will *not* hold the AGPL against you... as long as you do these things..." I really don't think the AGPL was written to be abused that way.
You can see the spirit of what they're going for with the MIT binaries too - that's also like saying the whole project is AGPL, but loosened for use as-is.
Given their goals seem to be
- Permissive use without modification, even in combined works ("MIT binaries"); but
- Copyleft with modification, including for the Affero "network hole", or commercial terms
could you suggest a clearer license option? AGPL triggers copyleft across combined works, LGPL doesn't cover the network hole, and GPL has both problems. Their goals seem really reasonable; honestly, there should be a simple answer. It seems messy, but I like it more than the SSPL/BSL/other neo-licenses.
I don't know anything more reasonable, but I would argue that this *isn't* reasonable, precisely because it causes so much confusion due to the ambiguity and their refusal to clarify exactly what the terms really are.
No no, you're talking common sense and logic. You can't think like that. You have to think "How do I rush out as much code as possible?" After all, this is MS we're talking about, and Windows 11 is totally the shining example of amazing and completely stable code. /s
They are easy to avoid if you actually give a damn. Unfortunately, the people who create these things don't, assuming they even know what half of these attacks are in the first place. They just want to pump something out now, now, now, and the mindset is "we'll figure out all the problems later, I want my cake now now now now!" Maximum velocity! Full throttle!
It's just as bad as a lot of the vibe-coders I've seen. I literally saw a vibe-coder who created an app without even knowing what they wanted to create (as in, what it would do), and the AI they were using literally hand-wrote a PE parser to load DLLs instead of using LoadLibrary or delay loading. Which, really, is the natural consequence of giving someone access to software engineering tools when they don't know the first thing about software engineering. Is that gatekeeping of a sort? Maybe, but I'd rather have that than "anyone can write software, and oh by the way, this app reimplements wcslen in Rust because the vibe-coder had no idea what they were even doing".
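For anyone wondering why that's absurd: the OS loader already does all of that. A minimal sketch of ordinary dynamic loading on Windows (the DLL name and the "add" export are hypothetical placeholders):

    #include <windows.h>
    #include <stdio.h>

    /* Hypothetical signature of the export we want from the DLL. */
    typedef int (WINAPI *add_fn)(int, int);

    int main(void)
    {
        /* Let the OS loader map the image, apply relocations, and
           resolve imports -- no hand-written PE parsing required. */
        HMODULE mod = LoadLibraryW(L"some_library.dll"); /* placeholder */
        if (mod == NULL) {
            fprintf(stderr, "LoadLibraryW failed: %lu\n", GetLastError());
            return 1;
        }

        /* Look up an exported symbol by name. */
        add_fn add = (add_fn)GetProcAddress(mod, "add"); /* placeholder */
        if (add == NULL) {
            fprintf(stderr, "GetProcAddress failed: %lu\n", GetLastError());
            FreeLibrary(mod);
            return 1;
        }

        printf("add(2, 3) = %d\n", add(2, 3));
        FreeLibrary(mod);
        return 0;
    }

And delay loading gets you roughly the same thing with zero code: link with /DELAYLOAD and the first call into the DLL triggers the load.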
> "we'll figure out all the problems later, I want my cake now now now now!" Maximum velocity! Full throttle!
That is indeed the point. Moltbot reminds me a lot of the demon core experiment(s): laughably reckless in hindsight, but ultimately also an artifact of a time of massive scientific progress.
> Is that gatekeeping of a sort? Maybe, but I'd rather have that
Serious question: What do you gain from people not being able to vibe code?
Not who you're responding to, but I'm not a huge fan of vibe coding for 2 reasons: I don't want to use crappy software, and I don't want to inherit crappy software.
I think with the advent of the AI gold rush, this is exactly the mentality that has proliferated throughout new AI startups.
Just ship anything and everything as fast as possible, because all that matters is growth at all costs. Security is hard; it takes time, diligence, and effort, and investors aren't going to be looking at the metric of "days without a security incident" when flinging cash into your dumpster fire.
Things like this are why I don't use AI agents like moltbot/openclaw. Security is just out the window with these things. It's like the last 50 years never happened.
No need to look back 50 years; people have already forgotten the 2021 crypto security lapses that collectively cost billions. Or maybe the target audience here just doesn't care.
It's not perfect, but it does have a few opt-in security features: running all tools in a Docker container with minimal mounts, requiring approvals for exec commands, specifying tools on an agent-by-agent basis so that the web agent can't see files and the files agent can't see the web, etc.
That said, I still don't trust it and have it quarantined in a VPS. It's still surprisingly useful even though it doesn't have access to anything that I value. Tell it to do something and it'll find a way!
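For the curious, "minimal mounts" is just ordinary container hygiene; a rough sketch of the idea with plain Docker flags (the image name and paths are hypothetical, not taken from the tool's docs):

    # Immutable rootfs, no capabilities, no network, tmpfs scratch space,
    # and exactly one writable mount for the agent's working directory.
    # "some-agent-image" and the paths are made up for illustration.
    docker run --rm \
      --read-only \
      --cap-drop=ALL \
      --network=none \
      --tmpfs /tmp \
      -v "$HOME/agent-workdir:/work" \
      some-agent-image

Quarantining the whole thing in a VPS, as you do, is arguably the stronger version of the same idea: assume compromise and bound the blast radius.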
My problem with it is that I want to use C libraries, and I would *like* it to handle that as much as possible. But SPM can't use vcpkg packages or call into CMake builds (at least, not trivially), so it's extremely hard to use on non-Apple platforms. Which, honestly, is the killer for me. It's still so, so Apple-focused.
What is a CMake-built Swift package to begin with? Are you mixing build systems and expecting them to co-exist, or what is the exact problem? I've done a lot of weird Swift things, so I might be able to point you in the right direction.
E.g.: referencing a vcpkg-built package (without pkg-config, because not all packages ship those files). Or telling SPM "Hey, I have this package that uses the CMake build system, and I want you to link against it, auto-generate module maps for it, and get the include directories from CMake." Things like that. So for me, anyway, it makes using Swift painful. The same thing goes in reverse: using SPM packages from CMake (although that is more of a CMake issue).
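For what it's worth, the manual workaround today is an SPM system-library target with a hand-written module map pointing at whatever vcpkg/CMake installed; nothing is auto-generated, which is exactly the pain point. A minimal sketch (the "CFoo"/"foo" names and all paths are hypothetical):

    // swift-tools-version:5.9
    // Package.swift -- minimal sketch; names and paths are hypothetical.
    import PackageDescription

    let package = Package(
        name: "FooClient",
        targets: [
            // Wraps a prebuilt C library. SPM expects a hand-written
            // Sources/CFoo/module.modulemap along the lines of:
            //   module CFoo {
            //       header "/opt/vcpkg/installed/x64-linux/include/foo.h"
            //       link "foo"
            //       export *
            //   }
            .systemLibrary(name: "CFoo"),
            .executableTarget(name: "FooClient", dependencies: ["CFoo"]),
        ]
    )

And the library search path still has to be fed in by hand, e.g. swift build -Xlinker -L/opt/vcpkg/installed/x64-linux/lib, which is exactly the glue you'd want SPM to derive from CMake for you.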
It also disadvantages people with disabilities. How exactly are they supposed to do these papers and tests? Dictate everything to someone else, taking blindness as an example? Because that seems very inefficient and extremely error-prone.
As someone with an actual visual impairment, please do not attempt to use my affliction to justify generalized use of AI. Educational assistance for those with disabilities is not a new thing; AI is likely going to have a role, but exactly how remains to be seen.
As someone who is myself legally blind, I am in no way justifying the use of AI like this. I was responding to the "let's all go back to actual paper-based tests/assignments" trope being trotted out on here. Sure, it *might* work, but it also disadvantages people like us, since most teachers can't read braille (at least, none of mine could).
Same. I run a server with a ton of services on it, all of which have what I think are pretty complex dependency chains. And I have also used Linux with systemd on my laptop. systemd has never once caused me issues.
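Those dependency chains are exactly what unit files express declaratively, which is a big part of why this works so well. A minimal sketch with hypothetical service names:

    # /etc/systemd/system/webapp.service -- hypothetical names throughout
    [Unit]
    Description=Example web app
    # Order after the database and tie our lifecycle to it:
    # if postgres.service is stopped, this unit stops too.
    Requires=postgres.service
    After=postgres.service network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/webapp --listen 127.0.0.1:8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target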