Actually, Stockfish crushed Leela in the recent TCEC. It seems the new neural network in Stockfish had a huge effect on its performance, something like a 130 Elo improvement.
Current result: 9 draws, one win with Stockfish as White, and one win with Leela as White. Drawn.
There is also this one from a couple of years ago: https://www.chess.com/news/view/computer-chess-championship-... "Lc0 defeated Stockfish in their head-to-head match, four wins to three". Stockfish did get more wins against the other engines, so it won the round robin, but in head-to-head games Leela was ahead of Stockfish.
I don't have a stake in either engine, so I don't care which one is better. In the end, as a poor chess player, it won't change anything for me :) What's actually interesting is comparing how these two pieces of software have evolved and how they got here.
Stockfish is much older, and it took a lot of hand-tuning to reach its current level. It is (or was) full of carefully tested heuristics that give the search direction. It would be very difficult to build an engine like Stockfish in a short span of time.
Leela got there very, very quickly. Even if it wasn't able to win in October, the fact that it became competitive and forced the field to adopt drastic changes in such a short period of time is impressive. It seems like a good example of how sometimes not using the "best" solution can still be a win: getting good results after a few months against something that required ten years of work.
Btw, for people who want to use js_of_ocaml instead of ReScript but still like React, there is https://github.com/jchavarri/jsoo-react. It's obviously not as polished, but it's interesting to see how much work it takes to bind a large library like React.
I feel like the dev team is using "readable output" to mean both "human-readable" and "shared data types" depending on the situation, and it creates a lot of confusion.
Actually, BuckleScript has never really been used at Bloomberg, and Bob left Bloomberg a long time ago. So this fork is definitely not happening because of Bloomberg.
To have code that you can compile to both native and JS, the easiest thing is to stick to the OCaml syntax: it is supported by every version of the compiler (ReScript or OCaml). It also helps to use the OCaml syntax for code-generation tools such as atdgen.
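As a minimal sketch (the file name and types here are made up), a module written in plain OCaml syntax that only uses the standard library compiles unchanged with both the stock OCaml compiler and BuckleScript/ReScript:

    (* shared.ml -- hypothetical shared module: plain OCaml syntax and
       stdlib only, so the same file builds natively (dune) and to JS
       (the BuckleScript/ReScript build). *)
    type user = {
      id : int;
      name : string;
    }

    let greeting { id; name } =
      Printf.sprintf "Hello %s (user #%d)" name id

The JS-specific bindings and the native-specific parts can then live in separate modules on each side.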
Well, sorry if this isn't true, but I haven't insulted anyone. If the feedback I've gotten is incorrect, I hope you can provide more insightful information to clear up the situation.
I've replied earlier with concrete details in another place in this subthread. You indeed haven't insulted anyone, but your comment history does indicate some biased intent; if the ecosystem changes affected you, we'd like to apologize for those; let's bury the hatchet here because maybe we can both agree that drama isn't productive.
I don't see how there's any biased intent. I have no stake in any of this tech. If anything, I wish for ReScript to be a big success, given how much the company I work for relies on it. I've contributed to multiple BuckleScript projects, and I'm not criticising anyone's decisions.
Hm so what does Facebook use? I saw a talk that was marketing Reason as "React as a language"?
Is that true for ReScript, or no?
Presumably Facebook still uses React? But they use it via regular JS and not Reason or ReScript?
I do have to concur with some of the comments -- the situation seems pretty confusing, and spending a bunch of time looking at docs didn't enlighten me.
From what I heard, they've been using TypeScript instead of Reason for some time already. I don't know whether they use React or not. I also heard that they weren't always using ReasonReact even when they were using Reason, so they've never been 100% on the same stack as what was advertised by the Reason team.
That probably explains why FB dedicates so few resources to Reason, too.
It's rare that I (an American) get to help out others by translating something, so I will post my translation here.
"Four years after the first publication by DGFIP, I have the pleasure of announcing that the source code permitting the calculation of taxes on revenue is finally reusable (recompilable by others)!
To use this algorithm in your application, follow this link...
It took us 1.5 years (with my coauthor Raphael Monat) to identify what was missing from the published code for it to be reusable, and to fix the situation.
In short, thanks to our project Mlang, anyone can simulate the income tax (IR) calculation without needing to interface with DGFIP.
The difficulty came from a constraint imposed by DGFIP, which did not want us to publish (for security reasons) the part of the code corresponding to a mechanism that handles "multiple liquidations". Raphael and I recreated this unpublished part in a new DSL.
DGFIP also didn't want to publish its internal test cases. We therefore created a suite of random test cases, separate from the unpublished ones, to finally be able to reproduce the validation of Mlang outside of DGFIP."
"A little less than a year after the publication of [blog post], we have therefore found a compromise letting us to respect both the obligation to publish the source code and the security constraints of DGFiP.
By letting us publish the code on their site and giving us confidential access to the source code they didn't want published, DGFiP allowed us to find alternative solutions that made the publication of the source code concrete and operational.
This compromise lets both parties come out ahead, unlike what happened with the source code of CNAF [link], where the administration simply claimed the difficulty was too great and postponed publication indefinitely [1].
Letting those who request the source code see it under an NDA therefore appears to be a possible solution when publication is delicate for technical reasons. Could this approach be useful for the report of @ebothorel?"
[Note: the translation here is geared more towards natural English than towards a literal rendering of the French.]
[1] "repouss[er] [...] aux calendes grecques" appears to be an idiom that's not in my dictionaries, but from context appears to mean "indefinitely postponed"
The calends [0] are the first day of every month in the Roman calendar. Since the Ancient Greek calendar does not have calends, postponing something to the Greek calends means pushing it back to an unknown later date that is unlikely ever to arrive.
> A few months ago we stopped referring to robots.txt files on U.S. government and military web sites for both crawling and displaying web pages (though we respond to removal requests sent to info@archive.org). As we have moved towards broader access it has not caused problems, which we take as a good sign. We are now looking to do this more broadly.
This is an important and useful improvement. Many domain parkers/squatters/etc who snap up dead domains have robots.txt files that block everything or almost everything, breaking the ability to see the previous site via archive.org.
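For a concrete example: the robots.txt a parked domain typically serves is just the generic deny-all form (this is not taken from any specific parker), which used to make the Wayback Machine hide even previously archived snapshots:

    User-agent: *
    Disallow: /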
(Side note: domain name expiration was a mistake.)
A fun thing to do before your domain expires is to set up HSTS with "includeSubDomains" and enroll the domain in the HSTS preload list. Many of the bots that backorder domains in order to put ad pages on them don't use SSL at all (not even Let's Encrypt), and the domain ends up becoming useless for them.
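For reference, this is a rough sketch of the response header involved rather than the full preload requirements (hstspreload.org also requires serving it over HTTPS on the bare domain, with a max-age of at least a year the last time I checked):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload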
If they ignore robots.txt, then what else gives them the right to copy and host content from other sites? As much as I value the Wayback Machine and archive.org, I think putting this into the realm of bilateral negotiation and a DMCA-like model outside the courts is a slippery slope. It's a non-solution that could breed new monopolies, much like Google's exclusive relationships with news publishers are doing. Is there nothing in HTML metadata (schema.org etc.) informing crawlers and users about usage rights that could be lifted or extended for this purpose, especially now that the EU copyright reform has set a legal framework and a recognition of principles in the attention economy?
> If they ignore robots.txt, then what else gives them the right to copy and host content from other sites?
The same thing that gives them the right otherwise: fair use, and the explicit archiving exceptions written into copyright law. robots.txt carries no additional legal weight.
Fair use does not give you the right to wholesale scrape content that is otherwise under copyright with a non-CC/open license, which is effectively what the Internet Archive does. (To be clear, I approve of IA's mission but it's in a legal grey area.)
robots.txt has never had much of a legal meaning. Respecting it was mostly a defense along the lines of "You only have to ask, even retrospectively, and we won't copy your content." As a practical matter, very few are going to sue a non-profit to take down content when they pretty much only have to send an email, (almost) no questions asked.
> Fair use does not give you the right to wholesale scrape content
Yes, it potentially does. There are court cases establishing precedent that copying something in its entirety can still be fair use, as well as law and court cases establishing specific allowances for archives/libraries/etc.
There's probably an argument that archiving a particular site as a whole has some compelling public interest--say, a politician's campaign site. But it seems unlikely that would extend to randomly archiving (and making available to the public) web sites in general.
I've always been told that fair use--as a defense against a copyright infringement claim--is very fact dependent.
IANAL, but I fail to see how fair use can be leveraged to give archive sites the right to host other sites' content when that content is already available publicly and non-discriminatorily, and when there are e.g. Creative Commons license metadata tags for giving other sites explicit and specific permission to re-host content. There are also concerns to be addressed under the EU copyright reform (e.g. previewing large portions of text from other sites without giving those sites clicks).
If your point is that content creators can't technically or "jurisdictionally" stop archival sites from rehosting, then the logical consequence is that content creators need to look at DRM and similarly draconian measures, which I'd rather they weren't forced into.
The author's jurisdiction is irrelevant. The only question is what jurisdiction's laws apply to the Internet Archive (or in general whatever party does the copying).
Some real problems with jsoo are (non-exhaustive list):
- "heavy" FFI (both in term of setup and syntax)
- the output is a single big .js file, which doesn't allow reloading a single component and doesn't integrate well with tools like webpack
- slower compilation time
- lacking documentation for the JavaScript audience
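To illustrate the FFI point, here is a rough sketch (not taken from any real project) of calling a global JS function from each side. The js_of_ocaml version needs the ppx for the ## method-call syntax and explicit Js.string conversions, while the BuckleScript/ReScript version (in OCaml syntax) is a one-line external declaration:

    (* js_of_ocaml: requires the js_of_ocaml-ppx for ##, and explicit
       conversions between OCaml strings and JS strings. *)
    open Js_of_ocaml

    let () =
      Dom_html.window##alert (Js.string "hello from jsoo")

    (* BuckleScript / ReScript in OCaml syntax (a separate file, built
       with the other compiler): the binding is a single external and
       plain OCaml strings are already JS strings. *)
    external alert : string -> unit = "alert" [@@bs.val]

    let () = alert "hello from bucklescript"

This isn't the whole story (typed class types for larger APIs add more ceremony on the jsoo side), but it gives an idea of the difference in day-to-day syntax.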