No one will ever bother to read this work. There are millions of claimed proofs of various conjectures. If you want to prove Goldbach or P=NP (etc...) you either need a very simple to verify proof (not a 1,000 page proof) or you need to prove a set of interesting conjectures along the way to give yourself credibility.
In the pages of this mathematician's proof, there must be a few smaller proofs he could publish that show genuine insight, which would give him credibility. Otherwise it's just another sci.math sideshow.
I actually would like to give it a shot, even though I know that it's way too far beyond my level. But after googling Henry Andrew Pogorzelski, no PDFs turned up.
I think he should have written a paper explaining in easier terms what he does. E.g. "First I will map the integers into a group defined by X and Y; it makes the work easier because that group has some interesting properties like... Then I show how the Goldbach conjecture translates onto my group, then, etc."
I know this is easier said than done, but on the other hand the Goldbach conjecture is so simple to state that there have to be ways to break its demonstration (or attempts at one) down into understandable steps, each step being once again broken down, etc., with each stage introducing new layers of complexity. That way, instead of claiming "I proved the Goldbach conjecture, here is the proof, just check those 1000 pages", he could have said "I proved X, Y and Z, and we can see how together they add up to prove the actual conjecture", with X, Y and Z (and probably lots of others) being smaller results that experts could challenge or confirm.
EDIT: TL;DR: If a mathematician claims to have proved a long-unproven claim such as the Goldbach conjecture, it is on him to do the popularization work, so that even schmucks like me can get a rough idea of how he did it.
I wonder if it's possible for a machine learning algorithm to simplify these proofs.
I take it that, because of the rather strict domain-specific language of math papers, it should be possible to develop an algorithm that can parse these papers into lemmas. Proofs are normally structured as a tree of lemmas, right? Or are there proofs with a more complex structure?
One basic thing could be to extract the leaf lemmas and check their validity. If they are valid, and the relations between the lemmas are valid, then the eventual conclusion should also be valid.
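Roughly, I'm imagining something like this toy sketch in Python; the check_lemma stub is hypothetical and would of course be the entire hard part:

```python
# Toy sketch: a proof as a tree of lemmas. Checking a single lemma given
# its premises is stubbed out here -- that is where all the difficulty lives.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lemma:
    statement: str
    premises: List["Lemma"] = field(default_factory=list)

def check_lemma(lemma: Lemma) -> bool:
    # Placeholder: in reality this would call a human reviewer or a
    # formal proof checker on this single step.
    return True

def check_proof(root: Lemma) -> bool:
    # The theorem is valid if every node checks out given its premises;
    # leaf lemmas have no premises and must stand on their own.
    return check_lemma(root) and all(check_proof(p) for p in root.premises)

goldbach = Lemma("main theorem",
                 premises=[Lemma("lemma A"),
                           Lemma("lemma B", premises=[Lemma("lemma B.1")])])
print(check_proof(goldbach))
```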
Has anything like this ever been attempted? A math paper proof verifying machine learning algorithm?
Automated theorem proving is ridiculously hard, and my guess would be that it is almost impossible to train an ML algorithm to do this efficiently.
However, verifying that a proof is correct is somewhat simple, if you give the machine enough information to actually check all the formal details. In modern mathematics, this may be a necessary thing to do since nobody has time to verify these 1000 page developments anymore.
The problem is that the level of detail currently needed is very high, and this slows down the formalization process. But the area is developing rapidly these days, so hopefully something good will come out of it.
Just imagine inputting the final character of the last inference step, then waiting for a few minutes while the computer performs incremental compilation of the last few rules. Finally, it emits simply these four characters: "QED."
There was a billionaire in the book Permutation City who uploaded his brain into a supercomputer, then let an optimizer run on it for years. The goal was to get his brain to run in real-time or better, but eventually it spit out an empty file and a log message: "This program produces no output."
Still, I think it would be better if mathematicians started writing their theorems and proofs in a machine-readable way.
Of course there is no standard for that today that captures "everything" - something like "let G be a Galois group, let X_k be the set of all k×k matrices x where det(x) = 1", etc.
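Just as an illustration of the kind of encoding I mean, the second statement might look roughly like this in Lean, assuming the mathlib library (a sketch, not a claim that this is the right encoding):

```lean
import Mathlib

-- A sketch of "the set of all k×k real matrices M with det(M) = 1",
-- encoded as a subtype. Whether this is the "right" encoding is exactly
-- the kind of question a standard would have to settle.
def X (k : ℕ) : Type :=
  {M : Matrix (Fin k) (Fin k) ℝ // M.det = 1}
```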
This is hard. But to put it in that way would allow for piecewise revision, reassembly, study and proof (not that you can't do that on paper, but it's much harder), and then you can start growing your machine programs around those proofs.
So, you're saying that the code would be less reliable if there was no work to prove it formally? Of course you aren't. What you're saying is that proving code is correct is insufficient. Of course it is insufficient. Your comment adds nothing to the debate.
What I'm saying is that asking that mathematicians produce their proofs in a form amenable to mechanical checking is like asking programmers to produce their code in a form that is amenable to mechanical checking. It's long, tedious, error-prone in its own right, and simply not going to happen. It's like asking people who write multi-million line programs in high-level languages to get the same functionality by writing in assembler.
Programming doesn't happen like that, and proofs don't either.
And your comment adds nothing to the debate while picking on a joke.
"is like asking programmers to produce their code in a form that is amenable to mechanical checking. It's long, tedious, error-prone in its own right, and simply not going to happen"
But programs are done in a form that's subject to mechanical checking. Compilers do that, and only compile code that adheres to a strict syntax and grammar. At a higher level there are static code analysers that look for other defects; some of them are in compilers already, for optimization purposes.
Also, semiconductor companies use automated checkers for the inner parts of their processors, like floating point multipliers to make sure they are correct.
But running code through a compiler is akin to making sure the statements in a proof "make sense." The proof as a whole can be complete rubbish, while still making sense at a page-by-page level. I can't work out if you're trolling, or if you really don't understand my point.
A lot of undergraduate mathematics has been converted into forms that automated proof checkers can work with, and have verified. But new math just isn't like that. It's not yet well understood, it's got small gaps and bits that are wrong in inconsequential ways, and it's got an over-arching sense of purpose and meaning. Trying to convert a recently produced proof into a form amenable to checking is just not going to happen, and by the time it is understood well enough, and completed to a sufficient level of detail to make it possible in a sensible amount of time, it's pretty much pointless.
"But running code through a compiler is akin to making sure the statements in a proof "make sense." The proof as a whole can be complete rubbish, which still making sense at a page by page level."
Correct. But if one statement is incorrect then the proof is wrong, at least until that statement is corrected.
So it's only a partial verification, but you can be sure that if that points to an error, you still have work to do.
Code analyzers are getting better every day, but in the end it depends on what you want the code to do, so you would need to add an annotation like "prove this function calculates 2*x for parameter x" for a complete proof.
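As a toy illustration of such an annotation, here is a sketch in Lean (the names are made up, and a real spec would rarely be this trivial):

```lean
-- A function together with a machine-checked "annotation" saying what it computes.
def double (x : Nat) : Nat := 2 * x

-- The spec: double really does calculate 2*x. Here the proof is trivial
-- because it holds by definition; for real code, writing and proving the
-- spec is where all the work goes.
theorem double_spec (x : Nat) : double x = 2 * x := rfl
```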
" Trying to convert a recently produced proof into a form amenable to checking is just not going to happen, and by the time is is understood well enough, and completed to a sufficient level of detail to make it possible in a sensible amount of time, it's pretty much pointless."
And that's where I disagree. If you can have your new theorem in a machine readable/verifiable format, then other theorems can make use of it.
The idea is not to redo the work every time for a new theorem. I understand new math is much more complex, and that's why we should be thinking of making this available in a theorem prover.
It's certainly a very complex undertaking and maybe not doable today, but it would have a big impact.
>> The proof as a whole can be complete rubbish, while
>> still making sense at a page-by-page level.
> Correct. But if one statement is incorrect then the proof
> is wrong, at least until that statement is corrected.
The statement-by-statement work is not only trivial, it's an important step to getting your head in the right place to understand where things are going, and why. Fixing the problem of making sure each statement is, in effect, syntactically correct is not just solving the wrong problem, it's solving a non-problem.
>> Trying to convert a recently produced proof into a form amenable
>> to checking is just not going to happen, and by the time it is
>> understood well enough, and completed to a sufficient level of
>> detail to make it possible in a sensible amount of time, it's
>> pretty much pointless.
> And that's where I disagree. If you can have your new theorem in
> a machine readable/verifiable format, then other theorems can make
> use of it. The idea is not to redo the work every time for a new
> theorem. I understand new math is much more complex, and that's why
> we should be thinking of making this available in a theorem prover.
It's being done now, but the problem is that producing new work, new theorems, is not going to be done within the framework of the automated system, because that's not where the meaning lives. When I produce a new proof, the fastest thing to do is to make it readable by other mathematicians, not convert it painstakingly to a different notation for a machine to check. That can, and really should, only be done once it's understood conceptually by other mathematicians.
There are people working on this, and they are making great strides. I just get really frustrated when people think a) it's obvious that machine checking is the way to go, b) why don't mathematicians all work on producing their proofs in machine checkable form, c) I don't understand these proofs, so obviously machine checking is better, and d) it really can't be that hard - even I can see it's a good idea.
I have no doubt that this will change over the next 50 years, perhaps even over the next 20 years. Prediction is hard, especially about the future, and I'm not going to try. What I do know is that I honestly cannot see how I would work trying to use a machine readable form, or trying to use a proof assistant. So far they've not merely been unhelpful, they've actively stopped me from working at all. No doubt it will be different for people raised with and trained on the "modern" systems of 20 years from now.
> It's certainly a very complex work and maybe not doable today,
> but it would have a big impact.
Yes, yes it would. I don't know how much math you know, there's nothing in your profile, but if you have enough to have a go at contributing to one of the major pieces of work underway to make it happen, you could make your mark.
My math knowledge is certainly not very deep, but maybe above average. Probably not enough to work on the state of the art of theorem provers today. Still, a fascinating subject.
True that. Say, if you wanted to modify a page on Wikiproof titled "Goldbach conjecture", you would have to first summarize your approach, with a tree diagram of subtheorems, and at least provide a few proofs of those subtheorems that are original, and of course citations for those that aren't. Wikipedia for original work! It'll be fun to maintain, considering that it'll be like allowing advertisements on Wikipedia.
If it is 1000s of pages of work that is unintelligible to even skilled mathematicians, then it might take years of someone's life to decode it into simpler terms.
In the end, they may find a fatal flaw, or a series of flaws and where would they be? They'd have spent a lot of time and effort to disprove a theorem that nobody was interested in.
That said, looking at the fact that there are still people working on Ramanujan's conjectures, there may still be some worth to it...
> They'd have spent a lot of time and effort to
> disprove a theorem that nobody was interested in.
Not to disprove the theorem, but to invalidate the proof. The conjecture would remain open.
> ... there are still people working on Ramanujan's
> conjectures that there may still be some worth to
> it ...
Ramanujan has a proven track record. Even when he first sent material to Hardy it was ignored, and it was only because some of it looked immediately plausible that Hardy started to check. Every initial check proved correct, and so he continued, growing in his conviction that the author had something worth reading.
In this case there is effectively nothing to suggest that this is worth reading.
Argh. I always get proof, conjecture and theorem mixed up. I'm a baby mathematician - literally starting from scratch because I've realised that I missed a lot of concepts at school.
"Mathematicians who make big claims are obligated both to be right and to make themselves understood."
I spent most of this academic semester struggling with the material in MIT's 18.404 (Theory of Computation) class taught by Michael Sipser - at the end of one of his lectures, he mentioned that he keeps a large file of proofs on hand from people claiming to have resolved the P vs NP conjecture, but has little time or faith to read them... he calls this the "crank" file. It seems to me (and I'm just a lowly undergraduate math student, feel free to discount/discredit this) that solutions to many of the large, imposing academic challenges of our era might be found in the depths of other such professors' "crank" files.
Then again, there is definitely something to be said for establishing credentials - there's only so much time in the schedule of someone capable of verifying claims to be spent on verification, and it seems obvious that they should spend that time on solutions that come from people who have a track record of competence.
Scott Aaronson summarizes it well when he says, of papers that have signs of being by 'cranks' --
> If I read all such papers, I wouldn't have time for anything else. [1]
Crank papers often take a long time to read, by virtue of some combination of (i) using non-standard notation and terminology, (ii) lack of familiarity with the established literature, (iii) tortured writing style.
If you claim to have proved a big new result, and you don't already have the credentials that will ensure you get taken seriously, then it is up to you to write in such a way that makes it easy for other academics to verify your work. Part of being an academic is interacting with other academics.
> If you claim to have proved a big new result, and you don't already have the credentials that will ensure you get taken seriously, then it is up to you to write in such a way that makes it easy for other academics to verify your work. Part of being an academic is interacting with other academics.
This reminds me of the introduction to Thomas Kuhn's The Road Since Structure:
"Kuhn develops further the theme, which again goes back to Structure, that science is fundamentally a social undertaking. This shows up especially in times of trouble, with the potential for more or less radical change. It is only because individuals working in a common research tradition are able to arrive at differing judgments concerning the degree of seriousness of the various difficulties they collectively face that some of them will be moved individually to explore alternative (often—as Kuhn likes to emphasize—seemingly nonsensical) possibilities, while others will attempt doggedly to resolve the problem within the current framework" {Conant and Haugeland "Road"@3}.
While correctness is preeminent, it isn't the only factor for the adoption of a proof. Coherence and intelligibility are essential. Typically, context is important too (i.e. a proof leverages a reasonable and translatable program).
While I hate that Michael Sipser uses the phrase "crank file," he's expressing an important distinction between mathematicians, computer scientists, and philosophers on the one hand and, for example, physical scientists on the other: humanistic attributes are important.
P.S. You're an "undergraduate math student," not a "lowly undergraduate math student." Hang in there. :)
> solutions to many of the large, imposing academic challenges of our era might be found in the depths of other such professors' "crank" files.
Interesting, because I feel as time goes on the reverse trend will hold true. It seems to me more and more of the "easy" proofs will be discovered, leaving only those that require more sophisticated mathematical machinery. And to obtain that you will have to, at the very least, have a PhD which will leave some track record of your ability. And the better credentialed you are the less likely you'll end up in a crank file.
I took 18.404 and really enjoyed it. I remember one lecture that Professor Sipser talked about the many years he wasted battling against P vs NP. He seemed pretty cynical about it getting solved without a lot of new groundwork being laid first, and this might contribute to why he so easily tosses proposed proofs into that crank file.
As new proofs are distilled, understood, enhanced, and internalised, so conjectures that were previously thought out-of-reach become more reasonable. "Easy" is a relative term - many things now presented at undergraduate level were once post-doctoral work. The "more sophisticated mathematical machinery" becomes an everyday tool for the working mathematician.
Related read: Apostolos Doxiadis' novel "Uncle Petros and Goldbach's conjecture", an entertaining account of a mathematician's effort to prove the result.
Very heavily seconded. One of my favorite novels, it does a really good job of giving a "feeling" of mathematics, without including any actual mathematics.
This is the same author as Logicomix, which is also very highly recommended - a comic about logic, Godel, and the foundations of mathematics.
This type of behavior was well documented in Underwood Dudley's "A Budget of Trisections." The problem is that obsession and a deliberate cultivation of the tics the public associates with genius are much more common than actual breakthroughs.
1. Assume the desired conclusion is false.
2. Make an error.
3. Derive a contradiction.
And by the way, most mathematicians have made plenty of bad proofs. I told my thesis adviser I had a "proof" months before I actually did. Years before that, I found a short proof of Fermat's Last Theorem (which was however too long for the margin of my book ...).
Generally, the way "big" proofs go is that there's a whole lot of new mathematics, and some handwaving at various points along the way. Then the work starts of checking whether the handwaves are actually accurate. The better the mathematician, the more likely it is that they actually are ...
That wasn't my point. Rather, I was suggesting that proofs by contradiction can easily be in error (because if you make an error, you have a good chance of eventually getting to your contradiction).
Anyhow, when I did mathematics, I didn't run into people caring about proof by contradiction vs. direct proof. However, I did occasionally run into a preference for constructive vs. non-constructive proofs. E.g., my thesis used a prior result that included a funky existence proof, and my adviser pushed me to find an alternate proof that was directly constructive.
Oh yes, I had understood that "proofs by contradiction can easily be in error", but I suppose that's because people try the contradiction proof first, because it "should be easier".
And of course, constructive proofs are usually more complicated, in terms of proof structure as well as 'where to go and how to get there'.
My comment was based on a more precise point, namely that #3 in the process below is very natural:
1. Assume the theorem is false.
2. Make some calculations based on that assumption.
3. Eventually, make an error.
4. Pursue that error to a contradiction.
5. Claim victory.
You know what this means? arxiv.org needs a cryogenics section, for potential resuscitation of mathematical campaigns using future technology. It'll be challenging to keep out the crackpots, though --
I actually meant an archive of papers that mathematicians from the future could try to improve or dig gems out of (using computerized theorem provers or just pen and paper) - papers like Ramanujan's notebooks or this guy's proof of Goldbach. But freezing brains could be worth thinking about --- maybe a crowdsourced fund like Kickstarter to freeze society's favorite brains.
Well, freezing destroys cell membranes, unfortunately. There's no way to repair every cell. And even if you could, there's no guarantee the overall system will function the same afterward.
Cryogenics will probably take the form of sustaining life, rather than freezing what has already expired.
For example, perhaps slowing a physical body's metabolism for decades would enable the body to last centuries before dying. So it'd be "time travel" to the future. But that has horrendous logistical issues, e.g. feeding, maintenance of the chamber, somehow retaining muscle mass of the person, avoiding bed sores, waste disposal, antibiotics during times of sickness, regulating body temperature, a failsafe for each system, etc.
How hard would it be to write mathematical proofs in a modular way? And have someone check a "module" of a proof and someone else another "module" and so on?
...and for the bits where that's possible, write in a subset of mathematical language that a computer can check, or at least partially check, asking the human reviewer for feedback on the parts that can't be automatically proven.
I always found it odd that mathematicians use so little of the mind-augmenting potential of computers. Is it true that there aren't any programmers smart enough to craft the tools they would need, or do the tools end up being so "user hostile" that they always go back to pen and paper? I know people have tried to express physics in "computer-like language", Sussman et al. in Structure and Interpretation of Classical Mechanics, but I've never heard of such things really catching on and being actively used and developed by mathematicians.
> How hard would it be to write mathematical proofs in a modular way? And have someone check a "module" of a proof and someone else another "module" and so on?
To a large extent, this is what we do. But of course, modules by themselves don't prove a thing about what we're originally interested in: no single module can stand by itself as an advance in the proof, the conclusions of any one module are usually needed for all the other modules to make sense, and those conclusions are more often than not formulated in a new piece of language (new words, concepts, and rules to play with them) introduced in that module. Parallel work of the sort I infer you have in mind might be possible in some instances, but in my opinion there are so few of them that it's just not worth trying.
> ...and for the bits that's possible, write in a subset of mathematical language that computer can check, or at least partially check and ask the human reviewer for feedback on the parts that can't be automatically proven.
No can do: in mathematical writing, what is actually written is the tip of a gigantic iceberg of implied reasoning and background scenery. There might be a way to make a computer understand this, but as far as I know it has yet to be found. That, and making explicit what is left unwritten in mathematical prose would make the length of any proof grow manyfold - and I actually mean manymany....manyfold.
> in mathematical writing, what is actually written is the tip of a gigantic iceberg of implied reasoning and background scenery
The programming equivalent of this, as I see it, is a program depending on lots of libraries encapsulating that "implied reasoning". And you can manage a program even with tens of millions of lines of code in libraries. It doesn't make the program itself larger, because that library code is reusable - you can also use it in other proofs. Yes, it's incredibly hard to manage such a code base, and you also need extensive documentation, something like "literate programming" - but it's not intractable.
You would have to write a big part of mathematics as library code, probably a multi-decade project - but the benefits seem huge! "Translating" mathematics into such a form could augment a mathematician's brainpower "manymany....manyfold" and could be just as important as, let's say, a cancer research project taking decades and tens of millions in funding, but it should be worth it! Imagine that hundreds of "less talented" mathematicians could handle problems that only a handful of people can understand today (with the guidance of the "real geniuses", of course), revolutionizing many fields of science through the consequences of their results.
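To give a flavour of what I mean, here is a small sketch in Lean using the mathlib library, if I understand those tools correctly: a one-line proof that leans on a huge amount of "library code" hidden behind a single tactic.

```lean
import Mathlib

-- The proof is one line, but the `ring` tactic sits on a large library of
-- facts about commutative rings; the "implied reasoning" lives in the
-- library, the way a short program can sit on millions of lines of library code.
example (a b : ℝ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by
  ring
```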
As an analogy, when people used Roman numerals, only a handful of highly educated people could do complex calculations, but nowadays we all have more than basic arithmetic knowledge, thanks in large part to a "change of language". I think a "change of language" could really make mathematics a less "elite only" field (though I have greater hopes for letting "AI systems" give a helping hand than for letting "lesser mathematicians" do some of the work; in the end one can't predict what will make the most difference - hardware or wetware, it's still computing power, and as the complicated software problems are solved, more and more of that computing power can translate into "mathematical reasoning power", I think).
The QED project was started in 1993 to build a database of all mathematical knowledge. It died out because formalized mathematics does not yet resemble traditional mathematics. All existing theorem provers and proof assistants such as Mizar, HOL, and Coq have major shortcomings. Very few people are working to improve formalized mathematics because there isn't any compelling profitable application of fully formalized mathematics.
...so it's one of those "someone has to do it to prove it can be done" or "someone has to first make it work to prove it's useful" problems?
Hopefully there are a few madmen who will start working at it, and they will be wise enough to do it all "in the open": open source code and publication of all developments, "blog style", instead of the "talk about it only after you've really understood it" approach. I would lend a helping hand on the programming side and maybe devops, even throw a few bucks at it if someone were to crowd-source and crowd-fund such an effort... just because I find it "so cool". And I'm sure there are others who would help any group of "brilliant madmen" trying to do this, even without any imaginable profitable applications. But for now, I'm trying to wrap my head around what the QED project was trying to do and what it did... thanks for the pointer to it!
> I always found it odd that mathematicians use so little of the mind-augmenting potential of computers. Is it true that there aren't any programmers smart enough to craft the tools they would need, or do the tools end up being so "user hostile" that they always go back to pen and paper?
The validity of a mathematical proof depends on a few very precise rules that can be checked by a computer, so I think it's interesting that we still rely so much on humans for the verification of mathematical proofs. As the article points out it's clearly problematic.
What do you mean by not fully automatic? What is the manual part?
I think right now the job of describing a (sophisticated) proof to a computer is analogous to the job of writing a sophisticated computer program like a game or Facebook in assembly code. What mathematics needs is a concise language that can easily be understood by people and computers alike, like Python.
The manual part is writing out the steps of the proof. Sometimes these steps can be automated; usually they are guided by a user. They are typically not written out step by step by hand because it is very tedious. I doubt complicated ideas expressed in advanced mathematical papers can be easily or succinctly expressed formally.
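As a rough illustration of what "writing out the steps" means, here is a tiny sketch in Lean; every tactic line is a manual step supplied by the user, and real proofs need vastly more of them:

```lean
-- Every line after `by` is a manual step; the prover only checks that each
-- step is legal. Even this tiny logical fact has to be spelled out, and for
-- real mathematics the number of such steps explodes.
example (p q r : Prop) (hpq : p → q) (hqr : q → r) : p → r := by
  intro hp      -- assume p
  apply hqr     -- it now suffices to prove q
  apply hpq     -- it now suffices to prove p
  exact hp      -- which is exactly our assumption
```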
http://en.wikipedia.org/wiki/Theorem_prover
http://www.cs.ru.nl/~freek/100/
But more importantly, the objective of a mathematical paper is to get readers to understand the proof, a goal which is poorly achieved with formal verification.
Have you ever used one of these systems that you claim exist? Math is not mechanically formalized beyond a tiny bit of foundational set theory. Most of modern math is hand waving by brilliant people who trust each other's hands, and Doron Zeilberger calling their bullshit and begging them to start being more careful.
The basic problem with your idea is that significant new proofs usually involve the invention of new branches of mathematics. It's not simply the derivation of new axioms from existing foundations (which fails for reasons outlined below because of Godel's Incompleteness Theorem), it's the laying of new foundations. For a proof checker to check it, it would need to encode those foundations, which is equivalent to discovering the proof itself.
Otherwise, given what you say, we could just let proof checkers run on clusters and stop by on Mondays to see what new discoveries have been made :)
I have to say something here, because this is complete nonsense.
Nothing in Godel's incompleteness theorems says that new theorems can't be found. It says theorems can't be enumerated, which actually means the opposite--there's always something new to find in a sufficiently strong system of mathematics. Not that there are theorem-enumerators of significant power in the first place.
Furthermore, proofs generally do not introduce new axioms and a proof that does, that isn't intended to be foundational, would be considered unsound.
Minor nitpick: Godel does not say theorems can't be enumerated; obviously they can be, just use brute force. Rather, it says that whether a statement is provable from sufficiently strong axioms can't be decided. So if you want to know whether your axioms prove 1=0, you can start enumerating all the theorems they prove, and if they do prove 1=0, eventually you'll see it; but if not, there's no way to "go far enough" to rule out the possibility that 1=0 will appear later on in the enumeration.
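Concretely, "just use brute force" means something like the following toy Python sketch, where is_valid_proof_of is a hypothetical stub standing in for a real (mechanical, always-terminating) proof checker:

```python
from itertools import count, product

ALPHABET = "abc"  # stand-in for the symbols of some formal system

def is_valid_proof_of(candidate: str, statement: str) -> bool:
    # Placeholder: a real implementation would check that `candidate` is a
    # legal derivation of `statement` from the axioms. The key point is that
    # this check is mechanical and always terminates.
    return False

def search_proof(statement: str):
    # Enumerate every finite string in length order. If a proof of
    # `statement` exists, this loop eventually finds it; if none exists,
    # it runs forever -- provability is semi-decidable, not decidable.
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            candidate = "".join(symbols)
            if is_valid_proof_of(candidate, statement):
                return candidate

# search_proof("1=0")  # would halt iff the axioms are inconsistent
```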
I'm not sure what your last line means, though. Speaking strictly in terms of axioms and derived theorems, that seems fine, but in thinking of, say, Wiles' proof of Fermat's Last Theorem, which wended broadly through various branches of math, extending them or perhaps inventing them as needed, it seems incorrect to say this. I'm not sure whether you want to characterize Wiles as introducing new axioms, but he certainly wasn't simply deriving new theorems that could have been computationally enumerated.
Perhaps I'm wrong; I'm not a mathematician. But my understanding of Godel, axiomatic systems, and proof engines is that what they do simply isn't comparable to, say, what Mochizuki did with the ABC Conjecture.
Please elaborate. I find it difficult to believe that the problem you are hinting at would be a showstopper because there already exist computer systems for verifying mathematical proofs to standards of rigor acceptable by mainstream mathematicians.
Computers aren't some magical machines that instantly understand what we mean.
At some point somebody has to write the code to implement those 'rules'. Which then opens up exactly the same problems (if not more) as if you just verified by hand.
Well, yes, we would have to convince ourselves of the validity of the code that checks the proofs. But the thing is, once we made one proof checker whose code we were really confident about, the checking problem would be solved once and for all, because we could use it for any proof. Whereas with human verification of proofs, each new proof is a new checking job.
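To make that concrete, here is a toy sketch in Python of a checker for an invented mini-format; the point is only that the trusted part stays small and fixed no matter how long the proofs get:

```python
# Toy proof checker for a tiny Hilbert-style system: the only rule is modus
# ponens, and a "proof" is a list of lines, each either an axiom or derived
# from two earlier lines. The trusted kernel stays small and fixed.

def is_implication(formula):
    # Formulas are tuples ("->", antecedent, consequent) or atoms (strings).
    return isinstance(formula, tuple) and formula[0] == "->"

def check_proof(axioms, lines, goal):
    proved = []
    for formula, justification in lines:
        if justification == "axiom":
            if formula not in axioms:
                return False
        else:
            # justification is ("mp", i, j): line i proves A, line j proves A -> B.
            _, i, j = justification
            if i >= len(proved) or j >= len(proved):
                return False
            a, impl = proved[i], proved[j]
            if not (is_implication(impl) and impl[1] == a and impl[2] == formula):
                return False
        proved.append(formula)
    return bool(proved) and proved[-1] == goal

axioms = {"p", ("->", "p", "q")}
proof = [
    ("p", "axiom"),
    (("->", "p", "q"), "axiom"),
    ("q", ("mp", 0, 1)),
]
print(check_proof(axioms, proof, "q"))  # True
```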
I don't know if you have tried to encode a reasonable sized theorem/proof in Coq or Agda. It is worth giving it a shot if you haven't. It is kind of eye opening (and both these systems are significantly different from each other).
The real problems I have had with that so far have been: a) the extreme precision required in encoding the problem, which exceeds what you normally need in a paper proof; b) ensuring that the prover "understands" the necessary background math that you are basing the proof on, i.e. libraries of math (this situation is improving with time); c) the search strategies the prover uses to verify steps being sub-optimal or simply not working for your case. When a human reads/writes/verifies a proof, a lot of cognitive shortcuts and background information come into play, but for a prover this has to be spelled out precisely. Provers tend to come with a set of built-in "tactics" to do this and often allow new tactics to be written (Coq, for example, lets you write your own tactics, but it is not exactly fun). Again, this is slowly improving.
For most mathematicians this process is a high overhead one (as yet).
If my understanding is correct I think he was referring to the fact that even if you have the axioms and an algorithm to figure out all the theorems, you can't prove all the truths. There will be blind spots.
Thank you for the explanation. For a given set of axioms you have the same "blind spots" whether you are verifying proofs manually or with computer assistance.
I think this has an analog in software. The massive enterprise/ government systems which get bogged down for years and become almost impossible to replace.
It will be interesting to see whether newer generations embrace collaboration and new technology more, or whether things will still get pushed forward by the lone and difficult-to-understand outlier who becomes at least convincing enough for others to trawl through their complicated texts.
I guess issues of credit would have weighed against this method. In this case, if intermediate findings had been published in a simple-to-understand way, they might have attracted enough interest for him to be beaten to the final proof?
Why not have a MathHub (like GitHub) for math? I believe even the toughest problem would get pull requests, and many people could understand the proof.
Math doesn't work like that. You can't just create a git repo for PvsNP and expect that solving the problem is just a matter of accumulating enough commits. Not only does everyone involved have to know how everything works in order to come up with insight (making it hard to "delegate" subproblems), but you don't even know what subproblems need to be solved to come up with a solution!
Why not see "theorems" as "open source projects"? I think they play the same role: math theorems are the framework for others to use or extend, and they should be "open sourced". I don't think we'd solve the conjecture with this "MathHub", but in time it would get solved, and be understandable by anyone who's interested.
This is for Coq, which is only one of many theorem provers/checkers. Differences between them aren't only notational; they can also be in the set of axioms they build from.
I don't believe what you believe though. Certainly, the 'many people could understand the proof' is only true in the 'could, if they were willing to spend years studying them' sense. For example, I think I could understand Wiles' proof of Fermat's last theorem, but as it stands, I bet it could take me a month to even understand what its first paragraph is talking about. There will be thousands, maybe even millions, of people in the same boat.
I don't think so. For example, great projects on GitHub succeed based on great documentation. Mathematical proofs are also delivered with full documentation - and it only counts as documentation if anyone interested can actually understand it. As for Wiles' proof, we don't know the proofs/theorems in it before we study it. In short, we need standards for modern math.
I don't think a wiki is a suitable tool for collaboration. We need small pull requests for small issues, then merging and discussing them; we can work in parallel on multiple branches, and that means the workflow works for most types of people.
For a similar fate in the physics world, see WP: Arthur_Eddington#Fundamental_theory, which demonstrated that it's not enough to have an idea - you have to find some kindred souls with the stamina to be your sounding board, because everyone else will be looking for any excuse to avoid the confrontation.
E.g. dare to suggest that the universe might not be simple, after all.