
Note that "CheerpX enables you to run existing 32-bit x86 native binaries". For some reason support for wasm64 (in browsers) has been stagnated for years, which is a pity.


Most WASM, WebGL, and WebGPU features take ages to land in browsers; the wait is measured in decades.


You do understand that most of these features were conceived maybe a decade ago, if that?


They should compare with other multithreading and GPU approaches to SAT/SMT solving (like https://www.win.tue.nl/~awijs/articles/parafrost_gpu.pdf from Armin Biere, or other work by Mate Soos). There has been a lot of research in this direction.

Another old HN thread (2017) with relevant comments from actual experts: https://news.ycombinator.com/item?id=13667380


Another related thread from last month: https://news.ycombinator.com/item?id=36084461


WASM is an extremely useful compilation target because of its portability (especially for running in browsers), but it is far from being the "default compilation target" for almost any language. The promised near-native speed is not there (in general you get code that is 2-3x slower, and it can be even worse), it is limited to 32 bits (MEMORY64 is on the way, but there is no clear roadmap for when it will be generally available in browsers), blocking IO is a pain, and its design seems constrained by the underlying JS JIT (it still looks like asm.js with a different syntax).

I still believe it is a miracle that we have WASM as a standard and that it runs smoothly across different browser vendors, but why does nobody seem worried about the lack of progress on performance?

LLVM IR would be a much better binary target. It was used in the abandoned PNaCl project, and AFAIK Apple uses it (bitcode) to store apps that are later compiled for specific platforms. WASM looks like a toy compared with this technology.


> It was used in the abandoned PNaCL project.

I worked on PNaCl back in 2011. There was a growing understanding that choosing LLVM IR as the representation was a mistake. I even tried to come up with something better, but failed to do so. I was glad to see asm.js and then WebAssembly coming to the scene. In my opinion, WebAssembly is better than what PNaCl could have ever become.

update: and then there was a famous post, "LLVM IR is a compiler IR": https://groups.google.com/g/llvm-dev/c/N3r_a1Vbrog/m/8lukw1x...


The problem is that LLVM IR makes breaking changes fairly frequently -- for instance, LLVM 15 made all pointers opaque; where before you had a type like i32* and wrote a load as "load i32, i32* %p", now you just have ptr and write "load i32, ptr %p".

If you're trying to run the same IR on 32-bit and 64-bit devices, I'd expect you'd need to freeze the word size anyway -- if a C or C++ program uses sizeof(), what value gets returned?
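To make the point concrete, a minimal C sketch (nothing PNaCl- or WASM-specific is assumed here): the result of sizeof is a compile-time constant, so it gets frozen into whatever IR the compiler emits.

  #include <stdio.h>

  int main(void) {
      /* sizeof is evaluated at compile time, so the value is baked into
         the emitted IR: 4 on a 32-bit target such as wasm32, 8 on a
         typical 64-bit target such as x86-64. Portable IR has to pick
         one, which is exactly what freezing the word size means. */
      printf("sizeof(void *) = %zu\n", sizeof(void *));
      return 0;
  }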

Blocking IO doesn't get solved by switching to LLVM either; you can't block in browser WASM because it would block the JS thread, whereas a WASM engine not attached to a browser has no such issue. (I wish at least one-shot continuations would get added so that this could become a bit easier in the browser, but I understand the hesitancy to do so...)
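As a concrete sketch (plain C, nothing browser-specific assumed): the loop below is unremarkable under a standalone engine with WASI-style IO, but on the web the same code would freeze the page's single JS/WASM thread, which is why tools like Emscripten's Asyncify (or the newer JSPI proposal) have to rewrite it into a resumable form.

  #include <stdio.h>

  int main(void) {
      char line[256];
      /* fgets blocks until input arrives: harmless in a native or WASI
         process, but in a browser it would stall the event loop and
         hang the whole page. */
      while (fgets(line, sizeof line, stdin) != NULL) {
          printf("echo: %s", line);
      }
      return 0;
  }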


> it is far from being the "default compilation target" for almost any language

That referred to the content of Gary Bernhardt's talk, not to (current) reality.


The talk is a work of fiction, set in 2035, which describes the Bay Area as a nuclear exclusion zone.


Surprisingly, it can generate Coq proofs. Unsurprisingly, the "proofs" are just hallucinations that look right but make no sense at all. See for example "coq program that proves that p->q is equivalent to q->p", which produces:

  Theorem equiv_pq_qp : forall (p q : Prop), (p -> q) <-> (q -> p).
  Proof.
    intros p q. split.
    - intros p_imp_q q_imp_p. apply q_imp_p. apply p_imp_q. assumption.
    - intros q_imp_p p_imp_q. apply p_imp_q. apply q_imp_p. assumption.
  Qed.

... together with a lengthy and convincing explanation in natural language.
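To be precise, the stated equivalence is simply false (implication is not symmetric), and feeding the script to Coq fails at the first apply, where the goal is a bare p. One can even refute the statement inside Coq; a minimal counterexample (sketch, instantiating p := False and q := True):

  Theorem claim_is_false :
    ~ (forall (p q : Prop), (p -> q) <-> (q -> p)).
  Proof.
    intros H.
    (* H False True : (False -> True) <-> (True -> False); its forward
       direction turns a trivially true implication into True -> False. *)
    destruct (H False True) as [Hfw _].
    exact (Hfw (fun _ => I) I).
  Qed.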

Sophists would be delighted by these mechanized post-truth AI systems.


That has been my experience in all my attempts to use ChatGPT for programming tasks: ask it for something contained in the first Stack Overflow answer that Googling your prompt would return, and sure enough it gives you that. But ask for something slightly more involved, and you get something that looks plausible at first glance but is pure junk.


My impression when working with people using Simulink is that 'safety' is much weaker than for people working on formal methods, and that certification greatly limited the kinds of programs they would write. It made total sense for their domain, but as a general practice for writing software it didn't impress me at all. I may be wrong.


I was expecting functional safety standards to require the use of formal methods, similar, for example, to how AWS uses TLA+, but I was surprised to discover it was not a requirement at all.


One of the co-authors here. Thank you for these helpful clarifications!


The general terms of arithmetic and geometric sequences look simpler when indexing from 0 rather than 1. I do not think that '1' is more human-focused than '0' for anything.
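For instance, writing a0 (resp. a1) for the first element, d for the common difference, and r for the common ratio:

  arithmetic: a(n) = a0 + n*d    (from 0)  vs.  a(n) = a1 + (n-1)*d    (from 1)
  geometric:  a(n) = a0 * r^n    (from 0)  vs.  a(n) = a1 * r^(n-1)    (from 1)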


Didn't Chrome (and probably others) add GPU-accelerated CSS and SVG (i.e., vector graphics) 10 years ago? https://www.tomshardware.com/news/google-chrome-browser-gpu-...


Not exactly - the article you link to is about SVG/CSS filters, not path drawing. Modern Chrome (Skia) supports accelerated path drawing, but only some of the work is offloaded to the GPU. In even older Chrome, the GPU was used for compositing bitmaps of already-rendered layers.


Yeah, I was also under the impression that Skia had GPU acceleration.

Same with the FF Rust renderer (sorry, I don't remember the name).


Pathfinder?


I think it's Servo


Based on a quick test I just ran, it seems Chrome does and Firefox doesn't.


I really recognize the value of new implementations and the fact that each of them fills a hole that the old implementations do not cover (new platform support, more embeddings, etc.).

But I have a controversial question: why is it better to start a system from scratch than to contribute to one of the less-known, long-running systems (Yap, XSB, ECLiPSe, Ciao, gprolog, B-Prolog, etc.)?

In the old days, a system became popular because you read a research paper that carefully described the implementation decisions and included solid performance evaluations. Papers were reviewed and trusted; you could get objective information about whether an implementation was good or not. This helped advance the state of the art.

Nowadays this is completely broken, and it strangely feels like a popularity contest and a social game. For example, some users were surprised by the performance of the WASM-compiled Ciao Playground:

  https://ciao-lang.org/playground/
which, despite being about 2x slower than native, is faster than other popular Prolog implementations. We needed to tweet about it and use the forum of another popular Prolog system to get any visibility. Despite that, this fact will soon be ignored and buried.

When looking at the VM of each of the systems, we know why this happens, but the expertise of the people who wrote the earlier Prolog systems (many of which still run perfectly fine and blazingly fast on modern machines) is being lost.

Of course, we learned a long time ago that the selection of benchmarks is really complex: you can pick whatever is needed to make your system shine. Ciao Prolog is extremely fast on some benchmarks but can also be much slower on others.

My claim is that, although it is cool to have new and nice Prolog implementations, this is adding a lot of noise; we are repeating mistakes from the past and going in the wrong direction:

- There is no incentive for proper benchmarking (e.g., should I implement mmap-based file mapping? When does it help? Is it a good idea?).

- The literature is being ignored (e.g., who reads and compares with Paul Tarau's or Bart Demoen's implementation papers?).

- The VM and the libraries should be independent (it should be possible to implement a new VM without reimplementing the whole set of libraries).

- Systems copy/reimplement features without proper recognition of the original system (once a technique is adopted by the popular system, the original is forgotten; who knows that JITI was originally developed by Vitor Santos Costa in YAP?).

- The community is extremely fragmented.

I wonder if it would be possible to create a healthier Prolog forum that prioritizes ideas, authors, and results. Going through the Twitter, Discourse, GitHub discussions, and GitHub issues of each individual Prolog system is NOT a solution. Each of us reading decades of research papers and reproducing them in each of our systems is also not a good idea (indeed, the unique advantage of the old systems is that they have had decades to incorporate some of these ideas). Moreover, there are still many unsolved problems in modern Prolog systems that can only be addressed as programming-language research.


Insightful. On the plus side, OTOH, more Prolog implementations means more focus on standardization and shared Prolog code bases, whereas the dominance of certain commercial and F/OSS implementations has, IMO, pushed Prolog too far into the CLP-magic and/or procedural niche.


We need robopsychologists.


Dr. Susan Calvin?

