
Still, it's strange to me that the GPL is considered Open Source while the SSPL is not. When the GPL was first released, its requirement that all linked modules be GPL-licensed wasn't so different from what the SSPL enforces today at the network level. I see the SSPL as analogous to the GPL, and the AGPL as analogous to the LGPL: essentially relaxing the requirements either on linking (in the case of the LGPL versus the GPL) or on network interactions (in the case of the AGPL versus the SSPL).


> When the GPL was first released, its requirement that all linked modules be GPL-licensed wasn't so different from what the SSPL enforces today at the network level

The “all linked modules” thing does not appear in the text of the GPL as first released or in any subsequent version (it is in some versions of the GPL FAQ, but is arguably contradicted by the text of the corresponding license in view of copyright law).

But that, at least, was tied to a (even if actually wrong) remotely defensible, good-faith interpretation of the boundary of a single work under copyright, and not an attempt to impose licensing terms on unrelated works that merely happened to be used together. And it was also confined to a document that purported to interpret the license in the context of the law, even if it arguably did so incorrectly, rather than being part of the license itself.


Sure, I think your analogy is a reasonable point. There are certainly many different places where the boundary around a functional unit could reasonably be drawn as it pertains to copyright law. There's nothing wrong with having a variety of available choices in that regard when selecting a license.

I think I agree with your analogy. It does sort of seem like SSPL is closing a loophole in the AGPL (which was itself arguably closing a loophole in the GPL). At the same time I don't inherently see an issue with drawing a line about what the ideology does and doesn't include.

One way of interpreting it is as a choice that the ideological tenets can cover users directly interacting with your program via the network, but can't extend to backend network interactions.

Another way of interpreting it is that backend network interactions aren't the issue, but rather constraints based on an interpretation of the purpose of a program. The GPL and AGPL arguably regulate based on technical distinctions: is the functional unit constructed using a specific component? Is the functional unit what's driving the user interaction? The SSPL, on the other hand (specifically the final version that was proposed before being withdrawn [0]), contains the phrases "the primary purpose or features of such Software As A Service" and "the value of the software component of such Software As A Service".

Yet another way of interpreting it is that the current religious leaders just don't want to poke that particular hornets' nest right now. That, pragmatically speaking, the SSPL appears to have a disproportionate impact on an awfully specific business model, whereas the existing licenses are much closer to neutral (at least in that regard).

If the boundary is truly being drawn in the wrong place, presumably that will be corrected once the error is clearly demonstrated. If the SSPL or something substantially similar becomes widely adopted and ends up facilitating the goals of the ideology in practice, then people might be convinced to update their opinions. Even if they don't, as long as user needs are being met I'm not sure it matters what the OSI or FSF or whoever else puts on their list. False negatives (i.e., omissions) don't seem particularly important here, while false positives (i.e., erroneous inclusions) do.

[0] https://lists.opensource.org/pipermail/license-review_lists....


The linked article says something different:

'We first estimate the supply-side value by calculating the cost to recreate the most widely used OSS once. We then calculate the demand-side value based on a replacement value for each firm that uses the software and would need to build it internally if OSS did not exist. We estimate the supply-side value of widely-used OSS is $4.15 billion, but that the demand-side value is much larger at $8.8 trillion.'


The methodology used in the underlying paper is, to put it generously, not even wrong. Way past “assume a spherical cow” territory.

It supposes you could simply hire programmers to build OSS from scratch.

If you have ever worked on a large project in a corporation, you instantly know how shockingly ignorant this is.

Hint: many of them end in failure and are never released at all.

Then there are the massive amplifications that happen due to the mere existence of open source: learning, spreading of ideas, reusable tooling, and more.

Has any business school ever produced a paper worth a damn?

https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-...


As you say, whatever the real cost is, it's much higher than an estimate that supposes every company could reproduce F/OSS. How amazing, then, that even in this hypothetical universe where everyone could, we still wouldn't want to, given the enormous cost.

Providing a lower bound on value, and furthermore one that is astronomically high, is extremely useful as an eye-opener. This is a useful result for policy-makers.


A plausible lower bound would be useful, but $4B is a joke. That's less than the annual budget of a single major university (Harvard's is around $5-6B).

Gartner says the world spent $4.5T on IT in 2022. To pick some numbers out of thin air, let's assume half of that is on software, and half of that is on new software (not maintenance). And then let's assume that software is sold at ~20% net margin. And open source powers an enormous fraction of it at some level, but let's be conservative and say it's 10%. $4.5T * 0.5 * 0.5 * 0.8 * 0.1 = $90B per year just for the new stuff.

Recreating the existing stacks… if we multiply that number by 25 years, you are at over $2T to rebuild OSS in some kind of Manhattan Project.
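
A quick sanity check of that back-of-envelope arithmetic (all inputs are the invented assumptions above, expressed in trillions of dollars, not real data):

    echo '4.5 * 0.5 * 0.5 * 0.8 * 0.1' | bc -l        # ~0.09, i.e. ~$90B per year
    echo '4.5 * 0.5 * 0.5 * 0.8 * 0.1 * 25' | bc -l   # ~2.25, i.e. over $2T across 25 years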

But even that is a massive underestimate. For one, it would be competing with other development objectives—there aren't millions of principal-level engineers just sitting idle. And more importantly, formally run projects have radically different levels of passion and "giving a shit" compared to how people often start OSS.

You simply couldn’t do it at all. It would be like asking “okay, but assuming we really had to replace all the metal with wood, what would it cost to launch a manned wooden rocket ship to the moon?”


You are misreading the result. As quoted above:

> We then calculate the demand-side value based on a replacement value for each firm that uses the software and would need to build it internally if OSS did not exist. We estimate ... that the demand-side value is much larger at $8.8 trillion

But details like a 1000x change in estimate aside, saying it couldn't be done is not quantifiable. At least a dollar figure can be reasoned about.


Saying it can’t be done is still an actionable guide to policy. If it can’t be done any other way, and it has massive benefit, then policymakers ought to be careful to avoid actions that harm it.

For example, tax law changes requiring multi-year depreciation of engineering labor probably have an extremely detrimental effect to small open source projects with commercial potential.

The policy effectively significantly raises working capital requirements for an early stage OSS company that might be focused on services revenue instead of licensing fees.


We're missing the forest for the trees - the estimate was large ($5T), the estimate is indeed a lower bound, and it seems pretty damn important to save F/OSS.


Spot on, and it forgets the creative aspect. It's like estimating the cost of reproducing all of the world's widely read literature. Who could recreate a Shakespeare from scratch, who a van Rossum?


"We estimate the supply-side value of Shakespeare's works by estimating the cost of paying an author to write the approximately 884,000 words in his collective works. We therefore calculate the value of Shakespeare to society at $320,000, or approximately the same as the value of 3 Tesla Cybertrucks."

-- MBAs, probably


nobody, because the times and places either of them lived in no longer exist. sure, the geographical locations and the buildings are still there, but the people are all different, so it's not the same place


That's important in literature, in a way. But does it matter to the economy whether POSIX was created as it is, or whether some OS slush fund created a different computing model obsessed with some other primitive besides the file?

By our values, a lot of literature is valuable because of how it came about. If Shakespeare never existed and someone wrote like him today, we would just lock them up.


> Has any business school ever produced a paper worth a damn?

The research from Fader at Wharton is pretty significant.


it's not about any of that "reasoning"... what matters is the conclusions: the results.

and even more than that, what matters is the capability of the paper to sway the opinion of real power.

our experience dealing with power may vary, in mine, power does not respond to "reasoning" nor any of that stuff.

nonetheless, I agree with your sentiment. why does science demand replication? IMO, a key underlying consequence is that ideas must be transferred, given away before they can be science.

the ideology of "real" human-centric science is equivalent to the open source mindset. so then, my question is what to call all that research that happens privately, in secret and under a lot of NDAs... it is not science; that is product development.


Delving deeper

> To reproduce all widely-used OSS once (e.g., the idea of OSS still exists, but all current OSS is deleted and needs to be coded from scratch), using programmers at the average developer wage from India, it would require an investment of $1.22 billion. In contrast, if we use the average developer wage from the United States, then reproducing all widely-used OSS would require an investment of $6.22 billion. Using a pool of programmers from across the world, weighed based on the existing geographic contributions to OSS as discussed above, would lead to an investment somewhere in between the low and high-income country, $4.15 billion.

Note that they just assume that you can basically hire programmers to type that out and extrapolate from lines of code. So I'm guessing this is off by at least one order of magnitude.

Granted, I'm not sure if I have a better option, but I'd probably start with sampling OSS contributors to figure out how much time they actually devote to OSS.

Paper is here for the curious: https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-...


That's still a terrible underestimate. Meta alone spends more than this yearly on VR development, and they are not recreating more software than the whole of OSS every year.


The method that paper used was as useless as it could possibly be.

https://hachyderm.io/@hyperpape/111784908079127255


Agreed. When it first came out in January, I did a write-up and emailed Frank for comment. It wasn't until both my critique and his paper appeared together on HN for the first time that he responded ... with a total non-answer. :rolling_eyes:

https://openpath.chadwhitacre.com/2024/questioning-the-value...

https://news.ycombinator.com/item?id=39340146

https://news.ycombinator.com/item?id=39340277


Does this roughly equate to a 99.9% margin, using comparable analysis as if it were a company?


> calculating the cost to recreate the most widely used OSS once

Let's not forget the new cost of taking those bajillion different implementations and making them compatible.


True, but I suppose this is at least partially compensated for by the cost of such a 0-day. For sure, a 0-day on Chrome is a lot more expensive than one on Firefox.


Well, unions tend to average out the outcome. But IT positions, being at the top end of the employee distribution, can typically obtain more.

I can give you a real-world example. Our company's office (in Italy) was relocated, and the unions entered negotiations to obtain three days of remote work for all personnel. I and others refused to sign the agreement because, with more negotiating power, our target was full remote work. In doing so, however, we were undermining the union's position, which made them unhappy. Fortunately, things ended well; all personnel got the three days, and a few other people were granted full remote work.


> Why someone work with full time writing articles should give the work for free

Open source developers did that ;)


When open source developers do that, they also include explicit licensing information that lists the cases in which usage is allowed or restricted. So even if the code is open source and licensed under the GPL, its usage in a closed source product like ChatGPT is not allowed.


GPL code usage in closed source ChatGPT is allowed "for internal use"; it just would not be allowed to distribute binaries of ChatGPT that are closed source without making source available. It would also be an AGPLv3 license violation to allow online access to a ChatGPT program that used AGPLv3 code without making source available.


So just because some people gave something away for free at a certain time, all the other people should do the same all the time? Not to mention that most open source comes with well-defined terms, rather than just being exploited for free by a closed service making money for another company.

Honestly, I found the ";)" deeply troubling.


If you need something text-based, cgdb is nice: http://cgdb.github.io/


I'm also a heavy user of ad blockers, but there is something to be said in defense of YouTube.

Creators have the final decision on the ads shown. I have a YouTube channel, and I always disable monetization on all my videos, so no ads are ever shown on my channel.

If you see an ad, it's because the creator decided to include one. YouTube is simply the platform that enables this choice.


They are actually removing the ability to control ads in the near future. To my understanding, the only ads creators will have full control over are midroll ads.

They will be improving ads on the platform by removing the creator's control over them.

> optimizing creator revenue and taking the guesswork out of which ad formats to use by removing individual ad controls for pre-roll, post-roll, skippable, and non-skippable ads on newly uploaded videos.

https://support.google.com/youtube/thread/233723152/simplify...


From what is written, they plan to reduce control over where ads are placed, but still allow enabling or disabling them.

Anyway, at present it's possible to disable them once you have reached the monetization requirements for the channel, like 1000 subscribers and a certain number of watch hours.


That's not correct. A few years ago YouTube changed their TOS, so they're now allowed to show ads on any content, regardless of settings.


You're mixing up two different things. It's true that YouTube now places ads on small channels that have not reached the monetization requirements.

But once the channel satisfies those requirements, like 1000 subscribers, the creator gets control of the monetization and can choose.

For any major channel you watch, it's the creator's decision whether you see ads or not.


My YT account has a total of five personally-recorded videos (from the era where smartphone cameras still sucked) with no music in the background and no monetisation on any of them.

One of the videos was hit with an automated DMCA takedown, one was hit with a "mature content" takedown, and the remaining 3 have ads on them now.


That's not necessarily true. If you are not in the YouTube Partner Program, YouTube can still monetize your videos and keep the ad money for themselves.


Is it an on-off switch, or a slider for how many and when?


You can individually toggle pre-, mid-, and post-roll ads. You can even set suggested spots within the video for the mid-roll ads, but that doesn't really control how many will actually be played.

This applies to videos with no copyrighted content. If you use any excerpts or music that someone else claims, anything might happen to your video regarding ads.


> I will tell my computer to blank the screen and mute the audio when an ad is playing.

Indeed, but you'll still have to wait for the ad to play. YouTube can definitively win this race if they decide to.


I agree this form of blocking will be less appealing to many, but personally I wouldn't mind a few moments to just breathe, especially when I'm doing ad-heavy things. Instead of blanking the screen, I might show a screen that says "Is there something better you should be doing?".


At that point I would auto-download any videos from my subscriptions so they are controllable locally. Then an AI or an integration with SponsorBlock could skip the adverts.

Host it all in a system similar to Plex and it would be quite transparent.
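
As a rough sketch of the downloading half (assuming yt-dlp; the :ytsubs subscriptions-feed alias, the browser-cookie option, and the other flags are my assumptions, so check the yt-dlp docs for your version):

    # Hypothetical nightly job: mirror new uploads from subscribed channels locally.
    # --download-archive skips videos already fetched; --playlist-end limits how far
    # back in the subscriptions feed to look on each run.
    yt-dlp --cookies-from-browser firefox \
           --download-archive archive.txt \
           --playlist-end 20 \
           -o '%(channel)s/%(title)s.%(ext)s' \
           ':ytsubs'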


The Clang static analyzer is integrated into the build pipeline. Any warnings will cause the build to fail. Additionally, build with the flags -Wall and -Werror. When testing, run with runtime checkers such as Valgrind and sanitizers.

Periodically, run other static analyzers like Klocwork and Coverity. They can catch many more issues than Clang. It's not that Clang is bad, but it has inherent limitations because it only analyzes a single source file and stops analysis when you call a function from another module.
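
A minimal sketch of that kind of pipeline (the test binary name and make invocations are placeholders; adjust the flags to your toolchain):

    # Treat warnings as errors and build the test suite with sanitizers enabled.
    CFLAGS='-Wall -Werror -g -fsanitize=address,undefined'
    make CFLAGS="$CFLAGS" LDFLAGS='-fsanitize=address,undefined' && ./run_tests

    # Memory checking with Valgrind (use a build without sanitizers for this run).
    valgrind --error-exitcode=1 ./run_tests

    # Clang static analyzer over a clean build; --status-bugs makes scan-build
    # exit non-zero if any report is produced, so CI fails on analyzer findings.
    make clean
    scan-build --status-bugs make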


> It's not that Clang is bad, but it has inherent limitations because it only analyzes a single source file and stops analysis when you call a function from another module.

Nowadays that's only the default. You can enable "cross translation unit" (CTU) [1] support to perform analysis across all the files of an application. It's easier to deploy CTU by using CodeChecker [2].

Also, for the Clang static analyzer: make sure the build does use Z3. That should be the case now in most distros (it's the case in Debian stable ;). It will improve the results.

With both CTU and Z3 I'm very happy with the results. Klocwork mostly only reported false alarms after a clean CodeChecker pass.

     [1] https://clang.llvm.org/docs/analyzer/user-docs/CrossTranslationUnit.html
     [2] https://codechecker.readthedocs.io/en/latest/
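
A rough sketch of that workflow, going by the CodeChecker docs linked above (flag names can differ between versions, and the Z3 options only help if your clang was built with Z3 support):

    # Capture the compilation commands, then analyze with cross-translation-unit
    # analysis and Z3-based refutation of false positives.
    CodeChecker log -b "make -j8" -o compile_commands.json
    CodeChecker analyze compile_commands.json --ctu --z3-refutation on -o ./reports
    CodeChecker parse ./reports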


I agree that CTU analysis makes it better. There are also a bunch of tunables for the Clang analyzer that suites like CodeChecker do not fully expose or take advantage of.


Most LLVM projects I see are simply replacing -O3 pipelines, i.e. the code has already been heavily stress-tested prior to a safer optimization port of a stripped binary.

I also do time-critical stuff, so LLVM is a nonstarter given the unpredictable latency introduced by code motion. For most other use cases, clang/LLVM typically does improve performance a bit, and I do like its behavior on ARM.

Happy coding =)


You reminded me of the scene from A Clockwork Orange where the main character is forced to watch television screens.

