
This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

[0] https://news.ycombinator.com/item?id=40295661

[1] https://news.ycombinator.com/item?id=22368323


With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.

Don’t forget replication!

I'm curious how you think AI would aid in this.

Terence Tao is doing a lot of related work in mathematics, so I can say that, first of all, literature search is a clearly valuable function frontier models offer.

Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitzing on methodology... it could likely suggest experiments to verify or disprove a result. These all seem like pretty useful functions to provide to a group of scientists to me.


Replicate this <slop>

Ok! Here's <more slop>


I don't think you understand what replication means in this context.

I think they do, and you missed some biting, insightful commentary on using LLMs for scientific research.

Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

[0] https://crixet.com

[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

[2] https://news.ycombinator.com/item?id=42009254

[3] https://news.ycombinator.com/item?id=46394937


I'm curious how it compares to Overleaf in terms of features. Putting aside the AI aspect entirely, I simply wonder if this is a viable Overleaf competitor -- especially since it's free.

I do self-host Overleaf, which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.


Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.

In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community went fairly quickly from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to people just dumping things on Overleaf.

collaboration is the killer feature tbh. overleaf is basically google docs meets latex... you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).

overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.


I am curious whether Git + a local install could solve this collaboration issue with pull requests?

To add to the points raised by others, "just install LaTeX" is not imo a very strong argument. I prefer working in a local environment, but many of my colleagues much prefer a web app that "just works" to figuring out what MiKTeX is.

I can code in monospace (of course) but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)


I use inkdrop for this, then pandoc to go from markdown to latex, then a final typesetting pass. Inkdrop is great for WYSIWYG markdown editing.
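If anyone wants to script that middle step, here's a minimal sketch of the pandoc call from Python (the filenames are placeholders, and pandoc infers the LaTeX output format from the .tex extension):

    # Assumes pandoc is installed and on PATH; filenames are placeholders.
    import subprocess

    subprocess.run(
        ["pandoc", "notes.md", "--standalone", "--output", "notes.tex"],
        check=True,  # raise if pandoc exits non-zero
    )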

Same for me. I wrote my PhD in LyX for that reason.

Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.


The first three things are, in this order: collaborative editing, collaborative editing, collaborative editing. Seriously, this cannot be overstated.

Then: The LaTeX distribution is always up-to-date; you can run it on limited resources; it has an endless supply of conference and journal templates (so you don't have to scavenge them yourself off a random conference/publisher website); the Git backend means a) you can work offline and b) version control comes in for free. These are just off the top of my head.


LaTeX is such a nightmare to work with locally.

"Just install LaTeX" is really not a valid response when the LaTeX toolchain is a genuine nightmare to work with. I could do it but still use Overleaf. Managing that locally is just not worth it.

I'd use git in this case. I'm sure there are other reasons to use Overleaf, otherwise it wouldn't exist, but this seems like a problem git already solves.

You can actually use git (it's also integrated in Overleaf).

You can even export ZIP files if you like (for any cloud service, it's not a bad idea to clone your repo once in a while to avoid being stuck in case of unlikely downtime).
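A rough sketch of that clone-once-in-a-while habit, assuming the project has Overleaf's git integration enabled (the remote URL below is a placeholder for your project's):

    # Assumes git is installed and on PATH; the remote URL is a placeholder.
    import subprocess
    from datetime import date

    remote = "https://git.overleaf.com/your-project-id"  # placeholder project URL
    dest = f"overleaf-backup-{date.today().isoformat()}"

    subprocess.run(["git", "clone", remote, dest], check=True)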

I have both a hosted instance (thanks to Overleaf/ShareLaTeX Ltd.) and I'm also a paying user of the pro group license (>500€/year) for my research team. It's great - esp. for smaller research teams - to have the maintenance outsourced to a commercial provider.

On a good day, I'd spend 40% in Overleaf, 10% in Sublime/Emacs, 20% in email, 10% in Google Scholar/Semantic Scholar, 10% in EasyChair/OpenReview, and the rest in meetings.


you can use git with overleaf, but from practical experience: getting even "mathematically/technically inclined" people to consistently use git takes a lot of time... which one could spend on other more fun things :-)

The LaTeX ecosystem is a UX nightmare, coming from someone who had to deal with it recently. Overleaf just works.

The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.

We had been building literally the same thing for the last 8 months along with a great browsing environment over arxiv -- might just have to sunset it

Any plans to integrate Typst anytime soon?


I'm not against Typst. I think its integration would be a lot easier and more straightforward; I just don't know if it's really that popular yet in academia.

it's not yet, but gaining traction.

The WASM constraints make sense given the resource limits, especially for mobile. If you are moving that compute server-side, though, I am curious about the unit economics. LaTeX pipelines are surprisingly heavy, and I wonder how you manage the margins on that infrastructure at scale.

But what's the point?

To end up with yet another shitty web app (shitty because it runs inside a browser, with the browser's interface in particular)?

Why not focus efforts on making a proper program (you know, with IBM menu bars and keyboard shortcuts), but with collaborative tools too?


You are right in pointing out that the Web browser isn't the most suitable UI paradigm for highly interactive applications like a scientific typesetting system/text editor.

I have occasionally lost a paragraph just by accidentally marking a few lines and pressing [Backspace].

But at the moment, there is no better option than Overleaf, and while I encourage you to write what you propose if you can, Overleaf will be the bar that any such system needs to be compared against.


OP is talking about developing an alternative to Overleaf. But they are still trying to do it inside a browser!

we did a podcast with the Crixet founder and Kevin Weil of OAI on the process: https://www.youtube.com/watch?v=W2cBTVr8nxU&pp=2Aa0Bg%3D%3D

thanks for hosting us on the pod!

I was using Crixet before I switched over to Typst[0] for all of my writing. However, back when I did use Crixet, I never used its AI features. It was just a much better alternative to Overleaf for me. Sad to see that AI will be forced on all Crixet users now.

[0]: https://typst.app


So this is the product of an acquisition?

> Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

They’re quite open about Prism being built on top of Crixet.


great context - thanks! so yeah maybe Overleaf is the way to go now :)

It seems bad for OpenAI to make this about LaTeX documents, which will now be associated, visually, with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!

This is just because LaTeX is widely used by researchers.

Also yes, since LaTeX is source code, it's much easier to get an AI to generate LaTeX than to integrate it into MS Word.


Please refrain from incorporating em dashes into your LaTeX document. In summary, the absence of em dashes in LaTeX.

Am I missing something? LaTeX is associated with slop now?

If a common AI tool produces latex documents, the association will be created, yeah. Right now latex would be a strong indicator of manual effort, right?

don't think so. I think latex was one of academics' earlier use cases of chatgpt, back in 2023. That's when I started noticing tables in every submitted paper looking way more sophisticated than they ever did. (The other early use case of course being grammar/spelling. Overnight everyone got fluent and typos disappeared.)

It's funny, I was reading a bunch of recent papers not long ago (I haven't been in academia in over a decade) and I was really impressed with the quality of the writing in most of them. I guess in some cases LLMs are the reason for that!

I recently got wrongly accused by a reviewer of using LLMs to help write an article. He complained that our (my and my co-worker's) use of "to foster" read "like it was created by ChatGPT". (If our paper was fluent/eloquent, that's perhaps because having an M.A. in Eng. lit. helped.)

I don't think any particular word alone can be used as an indicator for LLM use, although certain formatting cues are good signals (dashes, smileys, response structure).

We were offended, but kept quiet to get the article accepted, and we changed some instances of some words to appease them (which thankfully worked). But the false accusation left a bit of a bad aftertaste...


If you’ve got an existing paragraph written that you just know could be rephrased more eloquently, and can describe the type of rephrasing/restructuring you want… LLMs absolutely slap at that.

LaTeX is already standard in fields that have math notation, perhaps others as well. I guess the promise is that "formatting is automatic" (asterisk), so its popularity probably extends beyond math-heavy disciplines.

> Right now latex would be a high indicator of manual effort, right?

...no?

Just one Google search for "latex editor" showed more than two on the first page.

https://www.overleaf.com/

https://www.texpage.com/

It's not that different from using a markdown editor.


Can you recall the link?


see w2c2 in this paper

https://www.opencloudification.com/wp-content/uploads/2025/0...

Though I misremembered it: they transpile wasm back to C and compile that to a native binary.


w2c2 has only two mentions. wasm2c is not a clear winner; it specifically loses several of their benchmarks.

In general, using a preexisting compiler as a JIT backend is an old hack; there's nothing new there. For example, databases have done query compilation for probably decades by now.
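For a concrete picture of that hack, here's a minimal Python sketch that emits C, shells out to the system C compiler, and loads the result (assumes a POSIX system with cc on PATH; the add function is just a toy):

    # Sketch of the "use the system C compiler as a JIT backend" hack:
    # emit C source, compile it to a shared library, load it, call it.
    import ctypes, os, subprocess, tempfile

    c_src = "int add(int a, int b) { return a + b; }\n"

    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "jit.c")
        lib = os.path.join(d, "jit.so")
        with open(src, "w") as f:
            f.write(c_src)
        subprocess.run(["cc", "-shared", "-fPIC", "-O2", "-o", lib, src], check=True)
        mod = ctypes.CDLL(lib)
        print(mod.add(2, 3))  # -> 5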


As an alternative to Overleaf, I found Crixet to be quite useful. It appears to be based on WASM and has fewer usage restrictions.


I went to give Crixet a try, and there was an AI assistant in the editor. I looked in the settings to turn it off, and the setting to do so said "I prefer my ignorance handcrafted, organic, and 100% human-made."

:)


Another potentially interesting project is zigx, an X11 client library for Zig applications:

https://github.com/marler8997/zigx

https://www.youtube.com/watch?v=aPWFLkHRIAQ

Compared to libX11, it avoids dynamic dependencies, uses less memory, and provides better error messages.


I'm wondering what the proper way to draw Venn diagrams is. I've seen that Graphviz has a "nice to have" mention of them, and there are a few simple JS libraries, mostly for two sets. Here's also my own attempt using an LLM [1].

But maybe someone knows a more general or robust solution, or a better way to achieve this? In the future, I'd like to be able, for example, to find the intersection between two Venn diagrams of three sets each, etc.

[1] https://vitalnodo.github.io/FSLE/
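For up to three sets in Python, matplotlib-venn works as a quick sketch (the sets here are made-up toy data):

    # pip install matplotlib-venn
    import matplotlib.pyplot as plt
    from matplotlib_venn import venn3

    # Toy sets, just for illustration.
    a = {1, 2, 3, 4}
    b = {3, 4, 5, 6}
    c = {4, 6, 7, 8}

    venn3([a, b, c], set_labels=("A", "B", "C"))  # region sizes come from the sets
    print(a & b & c)  # the triple intersection: {4}
    plt.show()

Past three sets the layout problem gets genuinely hard, so a library with a dedicated algorithm is probably needed.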


The nVennR library is pretty robust for multiple sets

https://venn.bio-spring.top/intro#nvennr


the comments here https://news.ycombinator.com/item?id=45742907 have some discussion about projects that take a "focused algorithm for various different diagram types" approach vs graphviz's one size fits all approach. worth checking to see if any of them do venn diagrams.



Can you share the link? I also wonder whether it uses comptime features.


It is not yet ready but the master branch has an initial draft.

https://github.com/kaitai-io/kaitai_struct_compiler/commits/...

It would be premature to review now because there are some missing features and stuff that has to be cleaned up.

But I am interested in finding someone experienced in Zig to help the maintainer with a sanity check to make sure best practices are being followed. (Would be willing to pay for their time.)

If comptime is used, it would be minimal. This is because code-generation is being done anyway so that can be an explicit alternative to comptime. But we have considered using it in a few places to simplify the code-generation.


There are many other so-called models of computation that are useful for representing ideas: actor models, abstract rewriting systems, decision trees, and so on. Without them, you might feel that something is missing, so relying on assembly alone would not be enough.


I found out about milk.com when I was thinking about how to make an Android app completely from scratch (assembling DEX bytes from zero, kind of like writing an assembler for the Dalvik VM). That’s when I came across the author of the DEX format, Dan Bornstein — and I was surprised he actually owns a domain like that.
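(For anyone curious what assembling DEX bytes "from scratch" starts with, here is a toy sketch of the first header fields, following the public dex format documentation; the checksum and signature are zeroed placeholders that a real assembler fills in last:)

    import struct

    # First fields of the 0x70-byte DEX header, in order.
    magic = b"dex\n035\x00"   # 8-byte magic, format version 035
    checksum = b"\x00" * 4    # Adler-32 over everything after this field
    signature = b"\x00" * 20  # SHA-1 over everything after this field
    file_size = 0             # placeholder until the whole file is assembled
    header_size = 0x70        # the header is always 0x70 bytes
    endian_tag = 0x12345678   # ENDIAN_CONSTANT

    header = (magic + checksum + signature
              + struct.pack("<III", file_size, header_size, endian_tag))
    print(header.hex())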

