
How hard would it be to write mathematical proofs in a modular way? And have someone check a "module" of a proof and someone else another "module" and so on?

...and for the bits where that's possible, write in a subset of mathematical language that a computer can check, or at least partially check, and ask the human reviewer for feedback on the parts that can't be automatically proven.

I always found it odd that mathematicians use so little of the mind-augmenting potential of computers. Is it true that there aren't any smart enough programmers to craft the tools they would need, or that the tools end up being so "user hostile" that they always go back to pen and paper? I know people have tried to express physics proofs in "computer-like language" - Sussman et al. in Structure and Interpretation of Classical Mechanics - but I've never heard of such things really catching on and being actively used and developed by mathematicians.



> How hard would it be to write mathematical proofs in a modular way? And have someone check a "module" of a proof and someone else another "module" and so on?

To a large extent, this is what we do. But of course, modules by themselves don't prove a thing about what we're originally interested in: no single module can stand by itself as an advance in the proof, the conclusions of any one module are usually needed for all the other modules to make sense, and those conclusions are more often than not formulated in a new piece of language (new words, concepts, and rules to play with them) introduced in that module. Parallel work of the sort I infer you have in mind might be possible in some instances, but in my opinion there are so few of them that it's just not worth trying.
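
For a concrete (if toy) picture of what such "modules" look like, here is a minimal sketch in Lean 4, one of the proof assistants used for this kind of thing. I'm assuming a recent toolchain with the built-in omega arithmetic tactic, and the lemma names are made up for illustration. Each lemma is checked separately, but the final theorem only makes sense because it consumes the exact conclusions the other two export:

    -- "Module" A: a small, independently checkable fact.
    theorem le_double (n : Nat) : n ≤ 2 * n := by
      omega

    -- "Module" B: another fact, checked on its own.
    theorem le_double_of_le {a b : Nat} (h : a ≤ b) : a ≤ 2 * b := by
      omega

    -- The main result is short, but only because it composes the
    -- conclusions exported by the two "modules" above.
    theorem le_double_double (n : Nat) : n ≤ 2 * (2 * n) :=
      le_double_of_le (le_double n)

That is exactly the dependence described above: the pieces can be checked in parallel, but no module stands on its own as progress toward the result.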

> ...and for the bits where that's possible, write in a subset of mathematical language that a computer can check, or at least partially check, and ask the human reviewer for feedback on the parts that can't be automatically proven.

No can do: in mathematical writing, what is actually written is the tip of a gigantic iceberg of implied reasoning and background scenery. There might be a way to make a computer understand this, but as far as I know it has yet to be found. That, and making explicit what is left unsaid in mathematical writing would make the length of any proof grow manyfold - and I actually mean manymany....manyfold.
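
A toy illustration of that iceberg, as a hedged Lean 4 sketch using only core lemmas (Nat.le_trans and Nat.le_succ): in prose one would simply write "m ≤ n, hence m ≤ n + 1" and move on, while formally even that implied step has to be spelled out:

    -- Prose: "m ≤ n, hence m ≤ n + 1."  Formally, the implicit reasoning
    -- (transitivity through n ≤ n + 1) must be written down explicitly.
    theorem le_succ_of_le' {m n : Nat} (h : m ≤ n) : m ≤ n + 1 :=
      Nat.le_trans h (Nat.le_succ n)

Multiply that by every "clearly" and "it follows that" in a real paper and you get the manyfold blow-up described above.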


> in mathematical writing, what is actually written is the tip of a gigantic iceberg of implied reasoning and background scenery

The programming equivalent of this, as I see it, is a program depending on lots of libraries that encapsulate that "implied reasoning". You can manage a program even when it pulls in tens of millions of lines of library code, and it doesn't make the program itself larger, because that library code is reusable - you can use it in other proofs too. Yes, it's incredibly hard to manage such a code base, and you need extensive documentation, something like "literate programming" - but it's not intractable.
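
This is roughly how proof-assistant libraries already work. A hedged Lean 4 sketch, using Nat lemmas that (as far as I know) ship with the core toolchain: the proof below stays one line precisely because the "implied reasoning" lives in reusable, already-checked library lemmas:

    -- The "library" here is Lean's built-in Nat lemmas; the one-line proof
    -- leans on them instead of re-deriving commutativity from scratch.
    theorem swap_both (a b c d : Nat) :
        (a + b) * (c + d) = (d + c) * (b + a) := by
      rw [Nat.add_comm a b, Nat.add_comm c d, Nat.mul_comm]

The full justification is still an iceberg, but it is stored once in the library and merely referenced here - the same economics as a program pulling in a well-tested dependency.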

You would have to write a big part of mathematics as library code, probably a multi-decade project - but the benefits seem huge! "Translating" mathematics into such a form could augment a mathematician's brainpower "manymany....manyfold" and could be just as important as, say, a cancer research project taking decades and tens of millions in funding - it should be worth it! Imagine hundreds of "less talented" mathematicians handling problems that only a handful of people can understand today (with the guidance of the "real geniuses", of course), revolutionizing many fields of science through the consequences of their results.

As an analogy, when people used Roman numerals, only a handful of highly educated people could do complex calculations, but nowadays we all have more than basic arithmetic knowledge, thanks in large part to a "change of language". I think a "change of language" could really make mathematics a less "elite only" field (though I have greater hopes for letting "AI systems" lend a helping hand than for letting "lesser mathematicians" do some of the work; in the end one can't predict what will make the most difference - hardware or wetware, it's all computing power, and as the complicated software problems are solved, more and more of that computing power can translate into "mathematical reasoning power", I think).


The QED project was started in 1993 to build a database of all mathematical knowledge. It died out because formalized mathematics does not yet resemble traditional mathematics. All existing theorem provers and proof assistants, such as Mizar, HOL, and Coq, have major shortcomings. Very few people are working to improve formalized mathematics because there isn't any compelling, profitable application of fully formalized mathematics.

http://www.cs.ru.nl/~freek/qed/qed.html


...so it's one of those "someone has to do it to prove it can be done" or "someone has to first make it work to prove it's useful" problems?

Hopefully there are a few madmen willing to start working on it, and hopefully they will be wise enough to do it all "in the open": open-source code and publication of all developments, "blog style", instead of the "talk about it only after you've really understood it" approach. I would lend a helping hand on the programming side and maybe devops, and even throw a few bucks at it if someone were to crowd-source and crowd-fund such an effort... just because I find it "so cool". And I'm sure there are others who would help any group of "brilliant madmen" trying to do this, even without any imaginable profitable applications. But for now, I'm trying to wrap my head around what the QED project was trying to do and what it did... thanks for the pointer!


> I always found it odd that mathematicians use so little of the mind-augmenting potential of computers. Is it true that there aren't any smart enough programmers to craft the tools they would need, or that the tools end up being so "user hostile" that they always go back to pen and paper? I know people have tried to express physics proofs in "computer-like language" - Sussman et al. in Structure and Interpretation of Classical Mechanics - but I've never heard of such things really catching on and being actively used and developed by mathematicians.

You mean like Mathematica? Or Wolfram's "predictive interface", http://reference.wolfram.com/mathematica/guide/WolframPredic...?



