I'm very much an outside observer, but it is super interesting to see what Lean code looks like and how people contribute to it. Great thing is that there's no need for unit tests (or, in some sense, the final proof statement is the unit test) :P
Most (larger) Lean projects still have "unit tests". Those might be, e.g., trivial examples and counterexamples to some definition, to make sure it isn't vacuous.
In such a project, theorems and proofs are "the main point of the software". The unit tests make sure certain things don't go wrong by noticing when developers, e.g., mess up while refactoring something. Also, people actually put the things I was talking about in a folder called "test"...
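A sketch of what such a "unit test" can look like in Lean 4 (the predicate `IsNice` here is made up for illustration): quick `example` declarations that check a definition has a witness and a counterexample, i.e., is neither vacuous nor trivially universal.

```lean
-- Hypothetical definition a project might want to sanity-check:
def IsNice (n : Nat) : Prop := n % 2 = 0

-- "Unit tests": these fail to compile if a refactor breaks the definition.
example : IsNice 4 := rfl          -- a witness: the definition is satisfiable
example : ¬ IsNice 3 := by decide  -- a counterexample: it isn't vacuously true
```

If someone later rewrites `IsNice` in a way that accidentally makes it always true (or always false), one of these `example`s stops type-checking and CI catches it, exactly like a failing unit test.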
Poe's Law is hitting me in the face right now. This is satire, right? I can't quite tell (seriously! please tell me)
If it is satire, it's quite subtle and well done. It references the old reasons why "Agile" was invented (current software development processes being bureaucratic and meeting-intensive, the new one will be lightweight and engineer-led).
If it is not satire, the juxtaposition is striking.
Is that 20x cost... actually bad though? (I mean, I know Datadog is bad. I used to use it and I hated its cost structure.)
But maybe it's worth it. Or at least, the good ones would be worth it. I can imagine great metadata (and platforms to query and explore it) saves more engineering time than it costs in server time. So to me this ratio isn't that material, even though it looks a little weird.
The trouble is that o11y costs developer time too. I've seen both traps:
Trap 1: "We MUST have PERFECT information about EVERY request and how it was serviced, in REALTIME!"
This is bad because it ends up being hella expensive, both in engineering time and in actual server (or vendor) bills. Yes, this is what we'd want if cost were no object, but it sometimes actually is an object, even for very important or profitable systems.
Trap 2: "We can give customer support our pager number so they can call us if somebody complains."
This is bad because you're letting your users suffer errors that you could have easily caught and fixed for relatively cheap.
There are diminishing returns with this stuff, and a lot of the calculus depends on the nature of your application, your relationship with its consumers, your business model, and a million other factors.
Family in pharma had a good counter-question to rationally scope this:
"What are we going to do with this, if we store it?"
A surprising amount of the time, no one has a plausible answer to that.
Sure, sometimes you throw away something that would have been useful, but that posture also saves you from storing 10x things that should never have been stored, because they never would have been used.
And for the things you wish you'd stored... you can re-enable that after you start looking closely at a specific subsystem.
I agree that this is the way, but the problem with this math is that you can't, like, prove that the one thing in ten you could have saved but didn't wouldn't have been 100x as valuable as the nine you didn't end up needing. So what if you saved $1000/yr in storage if you also had to throw out a million-dollar feature that you didn't have the data for? There is no way to calculate this stuff, so ultimately you have to go by feel, and if the people writing the checks have a different feel, they will get their way.
I deeply disagree. While Worldcoin's execution seems questionable at best, the idea seems like a solution to a problem that we (society) definitely have, namely the real-people problem. Worldcoin or something like it, if properly implemented, makes it possible to distinguish between real people and bots. This is a real problem that we have today, is getting rapidly worse, and till now this problem has only been solved in shitty ways by governments.
Worldcoin is just a centralized / privately owned database of iris scans and issued user IDs that integrates with the blockchain.
> solution to a problem that we (society) definitely have, namely the real-people problem
> till now this problem has only been solved in shitty ways by governments
The solution Worldcoin provides is "trust us to know who the real people are". I fail to see how that's better than the way governments solve the problem.
Governments regularly do terrible things in the banking system: Printing money, capital controls and mind boggling amount of red tape. With this red tape they can punish anyone who disagrees with them.
Oh yeah, all that abhorrent red tape like "If you are a bank you have to prove you aren't doing funny things with people's money" and "if you provide financial services you need to make an honest effort to not fund like, actual terrorists, or north korea" and "We thought we learned our lesson about unregulated stocks back when it caused the great depression"
Eh, we could kinda resolve the problem by having a government auth system, where you can get some OpenID-like response from it. Then private companies could just use that for identification (like in Sweden, where a lot of apps use BankID or whatever it's called). We have something similar in Canada in a couple of provinces, but they’re exposed to government portals only.
However, bringing that idea to the US is a non-starter because of distrust in the government, among other issues.
I would very strongly prefer the government do this over a private company. The government already knows my identity anyway, so I lose nothing. Plus, I think that the greatest threats to our freedom and liberty in the US come from corporations rather than the government.
I am very far from convinced that this is a problem that needs to be solved so badly that we should sacrifice any amount of privacy for it. Especially to a corporation.
And despite WC's claims, their scheme does involve sacrificing some privacy.
> I am very far from convinced that this is a problem that needs to be solved so badly
How much front line experience do you have fighting organized false engagement campaigns, fraud, and other botting? A sprawling amount of the activity you see online is not genuine. I would not be surprised if 50% or more of the comments on Reddit and HN are not by sincere users, given their complete lack of identity verification.
Businesses offer discounts through ID.me because they know an order comes from a real buyer and doesn't have the 1-10% chance of being a fraud bot. Running your own email server and assuring your emails actually make it to a user's inbox is nigh impossible, for a reason. More and more of the internet disappears behind the Cloudflare wall, for a reason.
Public facing services are under a relentless assault by organized, meticulous, and resourceful actors. That was before advanced AI that can read images better than some humans and large language models that destroyed the Turing test. SOTA abilities are fortunately expensive to build and run and kept close to the chest by OpenAI, Microsoft, and others, but that defensive wall will one day fall and the internet will need infrastructure and systems in place ready for it.
I read a paper that looked at Sybil attacks in the age of modern generative AI. In short, the internet is unviable without a clear human-or-bot signal. Until now that signal was inherent, since most things a human can easily do a bot could not: captchas, targeted cyberattacks, interactive realistic phone calls.
Iframes bypass CORS, so the trick is to use an Iframe and figure out (using some side channel since you can’t peek into the frame) whether the iframe loaded the content successfully or whether it loaded an error page.
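A minimal sketch of that side-channel idea (function names, the timing threshold, and the timing heuristic itself are all hypothetical here): load the target URL in a hidden iframe, and since the same-origin policy stops you from reading the frame's contents, fall back on the only signals you do get, the `load` event and elapsed time.

```javascript
// Pure helper: crude side-channel heuristic. Error pages are often served
// faster than real, personalized content, so compare load time against a
// calibrated threshold (300 ms is an arbitrary placeholder).
function guessFromTiming(elapsedMs, thresholdMs) {
  return elapsedMs >= thresholdMs ? "likely-content" : "likely-error-page";
}

// Browser-only sketch: probe whether a cross-origin URL serves real content
// to the current session. We never read the iframe's DOM (blocked by the
// same-origin policy); we only observe that *something* loaded, and when.
function probe(url, thresholdMs = 300) {
  return new Promise((resolve) => {
    const frame = document.createElement("iframe");
    frame.style.display = "none";
    const start = performance.now();
    frame.onload = () => {
      // Note: load fires even for error pages, which is exactly why the
      // timing (or another side channel) is needed to tell them apart.
      const elapsed = performance.now() - start;
      frame.remove();
      resolve(guessFromTiming(elapsed, thresholdMs));
    };
    frame.src = url;
    document.body.appendChild(frame);
  });
}
```

Real attacks have used other side channels too (frame counts, scroll position, resource timing), but the shape is the same: infer one bit about the response without ever reading it.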
…no? Exercising 2h/week (close to the minimum) costs ~100 hours per year; at $100/hr that's close to $10k already, and that doesn't include the attention cost or anything to do with diet.
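The back-of-envelope math, using the figures in this comment (the 52-week year is my assumption):

```javascript
// Annual time cost of exercise at the comment's figures.
const hoursPerWeek = 2;    // close to the recommended minimum
const weeksPerYear = 52;
const hourlyRate = 100;    // $/hr opportunity cost

const annualHours = hoursPerWeek * weeksPerYear; // 104 hours
const annualCost = annualHours * hourlyRate;     // $10,400
```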
Every human still needs at least that amount of exercise (and ideally a lot more) to avoid cardiopulmonary disease, age-related strength, balance, and bone density deterioration, and a myriad of other health issues, even if we could take a pill to cure obesity.
If you have to do it anyway to have a long healthspan, and it generally has the side effect of assisting in controlling weight, then why lean on the costly drug?
Wizards tried to change their license so that all third-party material (books, virtual tabletops, anything using the SRD) was treated the way video game mods are, and to charge 30% of revenue (this was OGL 1.1). Players unsubscribed from their subscription service en masse, and their largest competitor (Paizo, which owns Pathfinder and came about because of the last time they tried this) created an equivalent of the Linux Foundation for SRD licenses, and sold 8 months of product in 2 weeks.
Paizo basically exists to take advantage of bad moves made by WotC. The cancellation of their license to print the Dungeon and Dragon magazines was the inciting event for the company to release its own product, and the disaster that was 4th edition was the rocket fuel that launched it. Pathfinder, a derivative product, was more popular than its parent for a couple of years!
This is oversimplified but hopefully close enough.
A couple of versions ago, Wizards of the Coast released a subset of the D&D rules for free with a pretty open license. That free subset was called the SRD and the license was the OGL 1.0a. It allowed third-parties to publish D&D-compatible adventures and such without royalties. And, crucially, the OGL had a clause that seemed to make it irrevocable in the future.
Essentially, "if you build on our system, we won't come after you for money or with lawyers, forever, we promise."
The result was an explosion in third-party content and overall an explosion in the popularity of D&D as a whole.
Recently, WotC released a draft of a new license that a lot of people interpreted as going back on this promise. The community was up in arms, then WotC released several waffling non-apologies.
This, at least, sounds like they realized they can't put the genie back in the bottle and have given up trying.
It wasn't even a released draft. It was a leak of what was essentially a shakedown that WotC tried to bully independents into signing earlier in January.
That they even now keep referring to it as a draft is pretty indicative that they're not acting in good faith
Wow, I missed that they were still going with the "Draft" lie on first read. Welp, ship had already sailed, but I'm just going to guess the horror-show will be back in 6 to 12 months.
Basically WotC wanted to not only change their future content to a more restrictive license, they also wanted to retroactively switch older content to that license. It seemed like an obvious legal land-grab. Fans objected, and (at least to my surprise, others here seem to be more cynical) WotC did a 180 on the license and adopted CC BY 4.0.
After an earlier history of legal action against 3rd party publishers (TSR essentially bullying competitors to bankruptcy)[0], D&D's core rules were released under a license called the "Open Gaming License", which includes a license update provision reading "Wizards or its designated Agents may publish updated versions of this License. You may use any authorized version of this License to copy, modify and distribute any Open Game Content originally distributed under any version of this License."
The promise of that license built an ecosystem of people making and publishing their own content compatible with the official D&D rules.
WotC recently declared that they were switching to an updated version of the license, and that they were deauthorizing the previous version.
The new license included rules such as revenue sharing, limitations on how the rules can be implemented in software tools, and giving WotC the ability to revoke your license to the content. People are largely not happy about this change, especially with WotC's plan to retroactively cancel the current license and replace it with this worse one.
This has led to Paizo announcing their own open license with many other publishers on board [1], and a lot of D&D's vocal fanbase talking about moving their games to other systems with more favorable licensing.
Pathfinder 2, Paizo's competing system, apparently sold out what should have been an 8-month supply of their printed books in the last two weeks [2], and that's a system that puts all the official rules content online for free.
This announcement today of the SRD being released under CC-BY-4.0 means WotC is canceling their plans to only license their system under the proposed OGL revision, since the CC-BY license more definitively can't be revoked once you've licensed content under it.
I should add, they're currently working on a new rules edition (playtest titled "One D&D", as 5E was "D&D Next"). Remains to be seen if this will be called "5.5E" or "6E" or what, and we don't know what they'll do with its licensing. Maybe it will be under a newer more restrictive license, maybe it won't be.
That's fine by me, they can do what they want with One D&D licensing and people can make their own decisions on how they react to it going forward.
But trying to pull the rug out from the existing OGL for current and previous editions was a real dick move.
WotC licenses some of their content under the Open Gaming License. It ostensibly covers both the rules* of D&D and key elements of the setting -- particular monsters, characters, place names, spells, etc.
That license has allowed products and content creators to build on top of a shared platform -- using and reprinting portions of D&D's content to build their own worlds, stories, systems, etc. Note: not everything D&D publishes is covered by the OGL, just a set of core items they call the SRD -- Systems Reference Document.
Word leaked that WotC/Hasbro was working on OGL 1.1, which had a bunch of ambiguous (and many argued harmful) language that required creators to license their content back to WotC and pay licensing fees, and that let WotC control what was and was not appropriate to build on top of the OGL, etc.
The OGL 1.1 was met with huge community backlash, and WotC has been fumbling for some time to figure out the next steps. It looks like they are taking those steps now.
* Aside, it's not clear that the rules of D&D are even something that can be licensed in this way, as game mechanics are not protected the same way as copyrightable characters are.
D&D 5th Edition (and some earlier editions) was released in a way that allowed third parties to create their own compatible content by referencing a document called the "SRD", licensed under the so-called Open Game License. This document contained the basic rules and content necessary to play D&D. If you wanted to include a zombie in your published adventure you could use the stat block from the SRD. The license also had some rules to make sure you didn't pass off your content as official, that kind of thing. There is a substantial market for third-party D&D content, and there are numerous companies that make it a core part of their business.
The OGL was written before VTTs (virtual tabletops) were really a thing, so it doesn't explicitly authorize them. Instead, VTT developers arrange their own deal with Wizards directly.
Wizards has also been in a conflict recently with a company affiliated with one of the children of Gary Gygax, co-creator of D&D. As I understand it, this company is promoting itself as a throwback to the good old days when tabletop gaming wasn't "woke", and are using OGL content in provocative ways as part of this campaign.
Finally, Wizards is preparing a new version of D&D, "D&D One". All of this was the backdrop to a leaked plan to update the OGL. The new license explicitly said it only applied to printed content, not software like VTTs or games. It also had language allowing them to revoke the license if applied to offensive material. It included a royalty schedule for larger companies to pay on sales of licensed content. Most significantly, the plan was to declare the previous versions of the OGL no longer "authorized", retroactively forcing new terms on existing content published under OGL.
This resulted in a massive backlash. Wizards had an initial response where they tried to clarify the VTT issue and promised to get rid of the royalties, but that didn't really help. They announced a "playtest" where existing users could review the new license and provide feedback. As this announcement says, the response was resoundingly negative. So they are pretty much going back on the entire plan. And since the OGL has now lost the trust of the community, they're also licensing that content under a Creative Commons attribution license they don't control.
Worth noting that this post doesn't mention D&D One at all. It seems likely that they are still considering an updated license for the new version of the game, which means there's likely to be more conflict. But I don't think anyone could argue that they don't have the right to release their new game under whatever license they want -- the big deal here was the attempt to retroactively relicense the existing content.
> The OGL was written before VTTs (virtual tabletops) were really a thing, so it doesn't explicitly authorize them. Instead, VTT developers arrange their own deal with Wizards directly.
To expand on this: nothing in the existing v1.0a actually had field-of-use restrictions that would prevent its use in a VTT, and the accompanying contemporaneous FAQ indicated that using it for software was permissible. While VTTs as they exist today didn't exist when they wrote the license, character builders and video game adaptations did, so it's not like VTTs are some totally unforeseen and therefore not plausibly covered use case.
So most VTTs are relying on the OGL where they just implement D&D and basic classes etc.; the ones with deals are the ones that use additional non-OGL content, which is why you can e.g. buy Curse of Strahd for Roll20 or Fantasy Grounds despite Strahd not being OGL.
I agree that this is possible, and definitely worth paying attention to.
But some hype cycles are "real", e.g., looking at the internet in early 2000, you might have thought it was about to crash or about to be huge. Either way, you'd be right. It was about to crash in the short term, but in the long term, it still made sense to "get into the internet" in 2000 because it was still a secular trend that ended up making a huge impact on the world.
Example: https://github.com/ImperialCollegeLondon/FLT/blob/main/FLT/M...
Also check out the blueprint, which describes the overall structure of the code:
https://imperialcollegelondon.github.io/FLT/blueprint/