Hacker News: soberhoff's comments

How is it a "bright side" that when something bad happens people try to avoid it happening again in the future? That's just circular.


This is hairsplitting. It's okay to say that there are true yet unprovable sentences without always carrying "true under the standard interpretation" around. It's understood from context.


IMHO in the context of logic, the usual meaning of 'true' is 'true in the theory', i.e. 'true in all models of the theory'. Using it for 'true in the standard model' is a kind of sleight of hand to make the statement sound more bombastic than it really is.


Let me quote Torkel Franzen on this issue:

> True in the Standard Model

> The idea is sometimes expressed that instead of speaking of arithmetical statements as true or false, we should say that they are "true in the standard model" or "false in the standard model". The following comment illustrates:

>> This is the source of popular observations of the sort: if Goldbach's conjecture is undecidable in PA, then it is true. This is actually accurate, if we are careful to add "in the standard model" at the end of the sentence.

> The idea in such comments seems to be that if we say that an arithmetical statement A is "true" instead of carefully saying "true in the standard model", we are saying that A is true in every model of PA. This idea can only arise as a result of an over-exposure to logic. In any ordinary mathematical context, to say that Goldbach's conjecture is true is the same as saying that every even number greater than 2 is the sum of two primes. PA and models of PA are of concern only in very special contexts, and most mathematicians have no need to know anything at all about these things. It may of course have a point to say "true in the standard model" for emphasis in contexts where one is in fact talking about different models of PA.


Are you familiar with Caplan's Case against Education? Here's a good review (and some critique): https://www.scottaaronson.com/blog/?p=3678

Caplan makes a very compelling case that education, while inarguably teaching something, primarily serves to signal preexisting ability.


Not who you responded to, but as someone who spent many years in education I find Caplan's case deeply unconvincing. Yes, education has a component of signaling, but he assigns far too much value to it.

I find the core of his argument rests on the assertion that we all went through school and don't think we learned much of value / that it didn't help us. What he's actually describing here is a form of the curse of expertise. Humans are incredibly bad at remembering what it was like to not know things; it's why people make such atrocious teachers without training. But educators are actually helping to build valuable skills and complex mental models that would not arise naturally.

If you interact with homeschooled children of deeply incompetent parents or certain alternative schooling systems (Scientology schools mess their kids up) you can see what the alternative provides, and it's not pretty.


As somebody who spent many years in education (as a student and about 1 year as a teacher) I find Caplan's case deeply convincing.

Caplan actually goes through reams of evidence. As an extreme example, in 2008-9 there were 34000 new history graduates in the US. But there are only 3500 historians working in the whole country.

Now, are you trying to argue that history can actually measurably improve productivity in other fields, such as accounting, and it's just the curse of knowledge that prevents us from seeing this?

Also, incompetent homeschooling parents and Scientology schools are hardly the only alternatives to public schooling. One option that Caplan advocates strongly is vocational training. Instead of giving them history classes, let teenagers who choose to do so become apprentice carpenters; there are 900,000 of those.


You're straw-manning here. Caplan makes a broad argument against public education at the primary, secondary, and post-secondary levels. I said I disagree with his large-scale conclusion, and you're asking me to defend the number of history degrees in the US.

The US public school system is in need of wide reforms at all levels; I'm not going to defend it piecemeal, because there are many pieces that are not defensible. None of that makes Caplan right, though.


What are we even arguing over? Is it just over the exact percentage of signaling in education's payoff?


I thought we were arguing over two things:

How much value is created through publicly funded education beyond basic literacy and numeracy, which Caplan concedes.

Whether Caplan's proposed changes to the education system are good ones.


Regarding the first point, Caplan personally estimates that about 80% of education is signalling and that 30% is the lowest figure one can plausibly maintain.

As for the second point, I wouldn't bet my life on it. But I think the case is strong enough that experimental measures in that direction are warranted.


Do you have a more on-point summary? That's a lot of text, and the parts I skimmed say little against education. Obviously education is used for signaling ability. Not using it this way would be stupid, since it's one of the best tools for that we have. Kind of like how the SSN seems to be used as an ID in the US: that doesn't mean either has stopped serving its original purpose. And I haven't found much of that in that 11-page blog post.

Then there is a bunch about how to build better schools. This seems reasonable in principle, since the way we teach, especially in the early years, is still heavily influenced by long-ago times with different workforce needs and a different culture in general. But we'd better not get into the proposals actually mentioned. And this still hardly fits the "IQ is unrelated to schools" narrative discussed in the comments here.

So back to those: If school has no impact, then where do those traits we aspire to come from? Are there actually people who believe you can just dump an infant with perfect genes in front of a TV, feed (&co) it regularly, and then expect it to become a genius in adulthood? If not, then what makes the difference, and why shouldn't some "school" help citizens apply whatever it is, supported by policy?


In brief: there are strong reasons to believe that much of higher education and even large parts of school serve not to teach but to test students. An English major doesn't lead to higher pay because it makes one a substantially better worker. It does so because achieving it signals three primary traits to the potential employer: intelligence, conscientiousness, and conformity.

This isn't education's only function. Some learning undeniably still takes place. But in Caplan's estimation signalling is probably about 80% of the payoff.

This picture is supported by a large number of observations:

- Why do even top schools like Harvard make little to no effort to prevent non-students from attending lectures?

- Why do students cheer when class is canceled?

- Why does ratemyprofessor.com have the measures "overall quality" and "difficulty" but not an explicit "informativeness" measure and why is high difficulty considered bad?

- Why do students cheat on tests and why do teachers make such a large effort to prevent it?

- Why do employers rarely show concern that you might've forgotten what you learned?

- Why do statistics indicate that graduation year has a much greater effect on wages than all the other years?

All these points contradict the "education = learning" viewpoint but are straightforwardly explained with the signalling model.

And once you acknowledge the importance of signalling it puts statements such as

> And, a good education pays off even for less gifted people. Their lives are better, they contribute more to the economy and less to crime.

into a completely new perspective. As Caplan writes:

> The classic example: You want a better view at a concert. What can you do? Stand up. Individually, standing works. What happens, though, if everyone copies you? Can everyone see better by standing? No way. Popular support for education subsidies rests on the same fallacy. The person who gets more education, gets a better job. It works; you see it plainly. Yet it does not follow that if everyone gets more education, everyone gets a better job. In the signaling model, subsidizing everyone’s schooling to improve our jobs is like urging everyone to stand up at a concert to improve our views. Both are “smart for one, dumb for all.”


> In the signaling model, subsidizing everyone’s schooling to improve our jobs is like urging everyone to stand up at a concert to improve our views.

It seems that model doesn't even pretend anymore that our society gives everyone equal opportunity? Now only the rich shall have the opportunity to signal? That's why I didn't want to go into the actual proposals in that critique... at best they seem to ignore all the complexity of the actual world we live in. Kinda reminded me of someone who just discovered Libertarianism and now thinks governments are totally unnecessary.

If his argument were "we should reduce signaling", I'd totally love for that to be possible. But I'm not sure it actually is, when taking all the game-theoretic aspects of the real world into account. Maybe all he wants is to reset the out-of-control spiral of signaling for now? But if the only way we can do so is also a 0.1%er-capitalist's wet dream, then I have little hope for the future. Signaling will always exist and be necessary as long as there is a competitive job market, and I don't see society working without one.

Anyway, thanks for taking the time to respond. I like the first observation, since it highlights how narrow the allowed path for effective signaling is. Just finishing the material isn't enough; you have to be accepted into the school through official ways. Some observations are kind of weak, though. Especially the second, since it boils down to many people preferring short-term gratification over long-term success. Students will cheer even if tests are standardized and those canceled classes will thus lead to worse grades => failure at signaling.


...all education? Basic literacy & numeracy? Teaching children how to count and their ABCs? How do you think civilization would function, then? I think Caplan is arguing against college and beyond.


This is addressed in the summary. No, these essential skills are given a pass.


Step 1: Define "Free Will".


I am at work right now but I can walk the fuck out anytime I feel like it.


Does determinism prevent you? Why _would_ you want or feel like it? And when that happens, how do you control the deliberation process itself?

Can you freely choose your deliberations? Why would you choose them exactly like that?


In that case there is a reason why you feel like it and the only possible outcome is that you will walk out.


See you at work tomorrow.


Is that a definition?


You're saying you should get diagnosed as a form of legal protection in case you accidentally have a weird interaction with the other sex?


Not just legal. People are much more forgiving of inappropriate behaviour if they can point to a disability outside of your control, as opposed to you just being a jerk.

No one complains about a blind man staring at them if they know he's blind.


> Not just legal. People are much more forgiving of inappropriate behaviour if they can point to a disability outside of your control, as opposed to you just being a jerk. No one complains about a blind man staring at them if they know they are blind.

Because they are literally not seeing you. If someone pinches your ass but has autism, they still pinched your ass. If someone is so disabled that they assault or harass people, that is a problem that won't necessarily be forgiven. Instead it might lead to claims that the person is so disabled they require a more structured environment.

...And of course if it turns out that you lied about your disability or its role in your behavior, everyone will want your head on a pike.


I think it's unlikely the parent was referring to actual physical harassment.

I agree with the premise that acceptance would probably be higher for someone who comes across as awkward, or makes a social faux pas in an effort to fit in if they were on the spectrum, as opposed to just being "that weird guy."


Perhaps Gödel had a vague notion of what he was up to. Nonetheless, the whole concept of universal computation was still half a decade away and higher level languages several decades. So I think the remark is still essentially accurate. He had to work without all the modern cognitive conveniences such as for-loops and if/else branching.


Author here.

I'd like to mention that this article is currently slated for publication in the Bulletin of the European Association for Theoretical Computer Science. A revised draft can be found at: https://github.com/SOberhoff/incompleteness_ex_machina/relea...

I haven't handed it over yet, so we'll have to see about further changes.


That's a very nice writeup, thank you a lot!

One thing that seemed a bit surprising to me, especially in light of the paper's strong focus on computation, is that the assumption that the axiomatization is effective is only mentioned in a footnote, and even there it is not explained but we are told to "don't worry about it" if we don't know what that means.

Yet, the code pieces in the paper make critical use of this assumption in that they assume that we can decide (with an algorithm) whether or not a string is a proof. This only works if the axioms are recursive.

So, maybe you could consider adding a short explanation about this to the paper?


Every time I said "either/or" I was making use of the law of the excluded middle. But I didn't add a detailed disclaimer explicitly highlighting this fact either. Should I have? I think it's okay to just take some things for granted.

Though, I admit that I have never spent much thought on non-effective axiomatizations. So I'm open to be educated.


The proofs you present are entirely constructive and work without appealing to the law of the excluded middle! As was also pointed out by Gödel himself about his proofs, they are obtained in an "intuitionistically unobjectionable manner".

When you say "either/or" in the paper, you make case distinctions that are intuitionistically non-contentious, i.e., you say "either F ⊢ G, then contradiction, so F ⊬ G, or F ⊬ G, then contradiction etc.", but the point is that G is explicitly constructed, so this is intuitionistically acceptable.

In contrast, the other assumption I mentioned is used in the paper and in fact essential for its proofs.


I agree in the cases where I argue "suppose, then..." But there are instances where I argue "either this is the case, or this is not the case." Unless there are multiple laws of the excluded middle, that's an application of it.

In any case, that was merely an example. The central point I was making doesn't depend on this.


In the paper, it is assumed — per footnote 1 — that the formal systems under discussion are effectively axiomatized. I assume this means that their theorems are recursively enumerable, is this correct? If so, then this means that determining whether a string is a proof may only be semi-decidable, and may not halt for strings that are not proofs.

However, as I mentioned, in the remainder of the paper, it appears to me that you assume that the theorems are not only recursively enumerable, but in fact recursive. For example, in the proof of Theorem 4 on page 5, the lines stating "if s is a proof ..." seem to assume that this check always terminates, requiring assumptions that the original proof does not and therefore weakening the obtained result.

In my opinion, adding a short explanation about how this can be salvaged (it can!), or at least precisely specifying what is assumed, could improve the presentation.


Sorry, I was out of town for a while.

I'm only assuming that checking whether "does s prove S?" is a recursive property. That's not the same as demanding "is S provable?" to be recursive.
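To make that distinction concrete: if "does s prove S?" is a recursive (decidable) property, then "is S provable?" is only semi-decidable, because the obvious search halts on provable sentences and runs forever on unprovable ones. A minimal sketch, where `proves` is a hypothetical stand-in for the system's decidable proof-check:

```python
from itertools import count, product

ALPHABET = "abc"  # hypothetical stand-in for the system's symbol alphabet

def provable(S, proves):
    # Semi-decision procedure: enumerate every string s in order of
    # length and apply the decidable check "does s prove S?".
    # Halts (returning True) iff S is provable; loops forever otherwise.
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            if proves("".join(chars), S):
                return True
```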

Upon reflection I agree that effective axiomatization isn't actually that difficult to explain. So I've changed the footnote to say: "For the purpose of this discussion every formal system is effectively axiomatized by definition. This basically just boils down to the fact that proofs are computer checkable."

I was initially worried that this might raise the question what non-effective axiomatizations are all about. That's not a can of worms I want to open. But I think this should be fine.


Thank you for adding this! Still, it is not so straightforward to conclude, from just the premise that the system is effectively axiomatized (i.e., its theorems are recursively enumerable), that "does s prove S" is a recursive property.

In my opinion, if the assumption is that "proofs are computer checkable" (i.e., "does s prove S" is a recursive property, which is what is required in the paper), then it would be good to either state that as the key assumption, or state for example that this follows via Craig's theorem from the assumption that the system is effectively axiomatized:

https://en.wikipedia.org/wiki/Craig%27s_theorem

Interestingly, from this theorem it follows that such a system can be re-axiomatized so that "S is a theorem" is even a primitive recursive property. In fact, when Gödel wrote "rekursiv" around 1930, he meant the class of functions that we now call primitive recursive. Because of Craig's theorem, we know that primitive recursive, recursive, and recursively enumerable can be used interchangeably in the definition of effective axiomatization, but without such a theorem, it would be a leap to conclude one from the other.


I don't see why there's a theorem required here. Let our formal system be effectively axiomatized. Then by definition its axioms are a recursive set. So I can just check a proof by starting at the top and verifying that every line is either an axiom or follows from a line above via an inference rule. Both of these two checks are recursive. Hence, proof checking is recursive.
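That line-by-line checking procedure can be sketched for a toy Hilbert-style system whose only inference rule is modus ponens (the hypothetical `AXIOMS` set stands in for the system's recursive axiom check):

```python
AXIOMS = {"P", "P -> Q"}  # stand-in for a recursive axiom set

def is_axiom(formula):
    # In a real system this decides membership in the axiom set.
    return formula in AXIOMS

def follows_by_modus_ponens(formula, earlier):
    # Modus ponens: some earlier line A together with an earlier
    # line "A -> formula" licenses formula.
    return any((a + " -> " + formula) in earlier for a in earlier)

def checks_out(proof):
    # A proof (list of formulas) is valid iff every line is an axiom
    # or follows from lines above it. Each check is recursive, so the
    # whole procedure is recursive.
    for i, line in enumerate(proof):
        earlier = proof[:i]
        if not (is_axiom(line) or follows_by_modus_ponens(line, earlier)):
            return False
    return True
```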

Also, the Wikipedia article doesn't say that Craig's Theorem proves recursively enumerable axiomatizations equivalent to (primitive) recursive axiomatizations. I would find it very surprising if this was a consequence. It only says that a recursively enumerable set of formulas (e.g. the provable sentences in Peano arithmetic, not merely its axioms) can be given a (primitive) recursive axiomatization. That's a very much weaker claim.


"Let our formal system be effectively axiomatized. Then by definition its axioms are a recursive set."

"effectively axiomatized" means that the theorems are recursively enumerable. It does not immediately follow that the axioms are a recursive set. However, by Craig's theorem, we can then re-axiomatize so that the axioms are a recursive set, even a primitive recursive set!

You are of course right about the consequences: In the post above, I meant to say that due to this theorem, it makes no difference whether we require the axioms to be recursively enumerable, recursive, or primitive recursive. But still, without this theorem, it would be a leap to conclude from "the system is effectively axiomatized" that "s is a proof of S" is recursive.


I don't agree with that definition of "effectively axiomatized". Granted, that's what Wikipedia says here: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_... But the reference given doesn't check out. Franzén's 2005 book Gödel's Theorem doesn't say anything about effective axiomatization on page 112. He does however give the following definition in his lecture notes at: https://books.google.de/books?id=Wr51DwAAQBAJ&pg=PA138#v=one...

"If T is effectively axiomatized, in the sense that there is an algorithm for deciding whether or not a given sentence is an axiom of T,..."

In Computability: Turing, Gödel, Church and Beyond Martin Davis even makes the definition "T is axiomatizable if it has an axiom set that is computable" on page 42.

And that's also the definition Shoenfield gives on page 125 of Mathematical Logic.

Besides, "effective" has always been a synonym for "computable", "recursive", or "decidable" whenever I've encountered it in this context.

Also, I must insist that "it makes no difference whether we require the axioms to be recursively enumerable, recursive, or primitive recursive." is not what Craig's Theorem says. This is dangerously close to claiming that recursively enumerable sets are recursive which is plainly untrue. If recursively enumerable axioms (not just recursively enumerable theorems) are indeed interchangeable with recursive axioms, then I'd like to hear more about that.


Yes, you are right: This is not what Craig's theorem says! It is enough to assume that the theorems are recursively enumerable. From this, it follows that there is an axiomatization where the axioms are recursive.

The key point is that I would find it a worthwhile addition to the paper to somehow make clear that "s proves S" is assumed to be decidable. Since you have done this: Thank you!


Thank you for this. Since reading Gödel, Escher, Bach, I've always wondered about how to think of a "dishonest" but consistent formal system. Can we really say it lies? Or is it just describing something similar but weirdly different from natural numbers?

It's nice to know that you don't need it; it's just a sideshow.


You can call it "similar but weirdly different" in the same sense that the people who are subject to propaganda live in similar but weirdly different realities. What is true depends on your viewpoint.

When a formal system says: "this computation halts after some number of steps", then under the default interpretation that means that after say 10000 steps the computation really halts. But in the "similar but weirdly different" reality where transfinite numbers exist the above claim can still be considered true if it runs indefinitely. One simply has to entertain the idea that "some number of steps" might mean a transfinite number of steps.

In other words, yes, we can say that the formal system lies provided we accept that what is and what isn't a lie depends on the viewpoint.


Yes, nonstandard models of arithmetic are what I was thinking of. I don't know much about them, but here is my intuition:

It's a bit odd to think that nearly all "natural" numbers are so large that we can never calculate them, even in principle (because it would take more bits than exist in the universe). Even constructive proofs can describe calculations that could never actually be carried out. The boundary between what I might call "practical" numbers and the larger natural numbers is fuzzy (since it depends on technology), but maybe admitting transfinite numbers exist among the very large naturals would be a way of dealing with it? A way of saying "induction takes us beyond anything we can really know; here be dragons".

And similarly, there are programs that in practice would never halt (because not enough time in the universe), even though theoretically they do.

I don't suppose that's very useful, though, so nice to know it can be avoided.


I feel like I didn't argue this as clearly as I could have. Let me make one more addition.

Throughout the discussion I'm making the tacit assumption that there is one standard viewpoint to which we adhere. That's the "normal" world. Numbers are finite and halting programs halt after finitely many steps. It is from this fixed viewpoint that I'm declaring certain claims to be lies. They are not lies in some grand universal sense.

I realize that the word "lie" usually also entails an accusation of deliberate deception. But that's immaterial here. Formal systems don't have intentions. They simply make claims.


How does this affect internet searches?


Potentially a search engine could provide results without knowing the actual query, although that would make it difficult to identify new trends or determine which results are/aren't being clicked, so it probably wouldn't be competitive as a general purpose search engine.


It would also be completely impractical, since the search engine would necessarily have to perform some sort of scan of its entire database to respond to every query (otherwise it would, in fact, learn something about the query).
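The access pattern can be illustrated with a toy additively homomorphic stand-in (a sketch only; real FHE schemes are far more involved, and the `Enc` class here does not actually hide anything). The client sends an encrypted one-hot selection vector, and the server must combine it with every record, since skipping any record would reveal it wasn't the one being asked for:

```python
class Enc:
    """Toy stand-in for an additively homomorphic ciphertext.
    A real scheme would make .value opaque to the server."""
    def __init__(self, value):
        self.value = value
    def __add__(self, other):           # ciphertext + ciphertext
        return Enc(self.value + other.value)
    def __mul__(self, scalar):          # ciphertext * plaintext scalar
        return Enc(self.value * scalar)

def private_lookup(encrypted_selector, database):
    # Server-side retrieval: note that EVERY record is touched,
    # making the work linear in the database size.
    result = Enc(0)
    for e, record in zip(encrypted_selector, database):
        result = result + e * record
    return result

# Client secretly wants index 2 of a 4-record database.
selector = [Enc(1 if i == 2 else 0) for i in range(4)]
answer = private_lookup(selector, [10, 20, 30, 40])
```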


Unless the database is also encrypted and provides the same homomorphism against reverse index queries.


In which case you are no longer talking about FHE. With FHE the server receives a ciphertext query and produces a ciphertext output, without ever seeing any plaintexts (not even the result of a reverse index query).


If the database was encrypted, you would need a different encrypted database for every user - again impractical.


It is.


Do you ever still do exercises? What's the last thing you learned that other people might have learned as students?


I'm constantly learning new things that to people from other fields would be freshman-level trivialities! The most recent such example would need to be something I learned today. For example, maybe a little tidbit that I picked up from Sanjeev Arora's STOC'2018 tutorial on deep learning (where I was this morning): it turns out, backpropagation is just a dynamic programming algorithm for calculating a gradient in time linear in the size of the network, improving over the quadratic time that one would've needed naively.
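That tidbit can be illustrated on a tiny chain of functions: reverse-mode differentiation carries the accumulated downstream derivative backward and reuses it at every step, so each link of the chain is visited once, instead of recomputing a full product of derivatives per position as a naive approach would. A minimal sketch:

```python
def chain_gradient(x, funcs, derivs):
    """Gradient of y = f_n(...f_1(x)...) w.r.t. each intermediate
    input, via one forward pass and one backward pass."""
    # Forward pass: record every intermediate value.
    values = [x]
    for f in funcs:
        values.append(f(values[-1]))
    # Backward pass: `grad` is the derivative of the final output
    # w.r.t. the current intermediate -- the reused subproblem.
    grads = []
    grad = 1.0
    for f_prime, v in zip(reversed(derivs), reversed(values[:-1])):
        grad *= f_prime(v)
        grads.append(grad)
    return list(reversed(grads))  # grads[0] is dy/dx

# y = (2x)^2, so dy/dx = 8x = 24 at x = 3.
g = chain_gradient(3.0, [lambda v: 2 * v, lambda v: v * v],
                        [lambda v: 2.0,   lambda v: 2 * v])
```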

I feel bad that I rarely have the time (or willpower!) to do exercises anymore, though I'd probably benefit if I did.

