Computers Can't Do Math (plough.com)
3 points by danielam 8 months ago | 11 comments



The precision argument is so amazingly bogus that I won't refute it except with a link[1], so here are refutations of some other parts:

> My kids sometimes ask me how high I can count. I’ve noticed that they stop asking this question once they reach a certain age, usually around six or seven. This is because the question does not make sense once you understand what a number is. If there’s a single highest number you can count to, you don’t really grok numbers. The difference between computers and humans doing math is a bit like the difference between the younger kids who think that “how high you can count” is a real thing and the older kids who have successfully understood how numbers work.

There actually is an answer to this; most people's short-term memory for digits is less than a dozen, so the author likely can't count higher than 1e12, as they won't be able to go from e.g. 374841483741 to 374841483742 without getting one of the digits in the middle wrong.
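A computer, by contrast, has no such limit. A quick Python sketch of my own (not from the article):

    # Arbitrary-precision integers: incrementing a 12-digit number is exact,
    # with no digit "lost in the middle", and there is no fixed highest value.
    n = 374_841_483_741
    assert n + 1 == 374_841_483_742

    googol = 10**100                 # far beyond 1e12, still exact
    assert googol + 1 - googol == 1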

> The representation of the numbers that occurs only in the mind of the human is conflated with the execution of a particular program that takes place in the computer

This merely argues that computer math can be different from human math, and even that I'm not quite going to concede, since it would imply e.g. that the mathematicians behind the IEEE 754 standard were unaware of the implications of floating point arithmetic.
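If anything, those implications are precisely documented by the standard. A small Python illustration of my own:

    import sys

    # 0.1 and 0.2 have no exact binary64 representation, so the sum is
    # correctly rounded, exactly as IEEE 754 specifies.
    print(0.1 + 0.2 == 0.3)        # False
    print((0.1 + 0.2) - 0.3)       # ~5.55e-17, the predictable rounding error

    # The standard also makes the error bound explicit (machine epsilon):
    print(sys.float_info.epsilon)  # 2.220446049250313e-16 for binary64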

> To us the transition between the theoretical and the actual happens almost instantly and unnoticeably.

Indeed, it happens almost instantly and (being generous to the author) without the author noticing throughout this article, conflating concepts to the point where one could "demonstrate" that almost anything is beyond the reach of computers, even things that are clearly well within their reach!

1: https://github.com/stylewarning/computable-reals


It's a very interesting article that's based entirely on a flawed premise: that floating point is the only way to represent numbers in a computer. It's true that there are numbers that can't be represented by a computer using any system, but they are so huge that you probably couldn't represent them any other way either (say, a random number with a quadrillion digits: a billion digits is entirely possible, a trillion is pushing it on consumer hardware, and even that could probably be done with a million bucks of hardware).
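For what it's worth, a short Python sketch of my own (standard library only) showing representations that aren't floating point:

    from fractions import Fraction

    # Exact rational arithmetic, with no floating-point rounding at all:
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

    # Arbitrary-precision integers are bounded by memory, not by a format;
    # a million-digit number is routine on consumer hardware.
    big = 10**1_000_000            # a 1,000,001-digit integer
    print(big.bit_length())        # 3321929 bits, held exactly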


It all hinges on the mentioned tract “Dangers of Computer Arithmetic”.

FWIW, many of the mathematicians I knew who worked on the early iterations of the CAYLEY|Magma symbolic algebra system were equally ill-suited to multiplying large numbers and bookkeeping - if not by talent for numerical manipulation, then certainly by temperament.

Nonetheless, they managed to cobble together a computer math system that could generate a headline:

Quantum encryption algorithm cracked in minutes by computer running Magma

but of course that was not arithmetic.



Getting the sad vibes of "philosophy dept. hopelessly trapped in a bubble with Microsoft and Wolfram products, never mind ever speaking to the CS dept."

The argument may be valid, but it could have been light-years more advanced if the author had any inkling that there had already been attempts to make computers "think" symbolically, that they underdelivered, and that an "AI winter" followed. The reason was NOT that infinite-precision arithmetic was not possible.


The author could have perhaps made this clearer, but charitably read, the argument isn't really about numerical precision. Precision only functions as an instrument to demonstrate the point, which is that human beings possess the concept of number, while computers do not. And because we possess the concept of number, not only do we not suffer from such representational limitations per se, but we know or can know when they hold in certain representations. The essential distinction is between concept and representation. In computers, there are only representations of a conventional sort, whose meaning is entirely dependent on the interpreter (us). Human beings form concepts, and these concepts are the normative basis for producing and judging representations. Without concepts, computers cannot even in principle make such judgements.


Why can't computers possess concepts or meaning? Because they can't possess consciousness? And why is that? What is consciousness? Because it "...boils down to some kind of magical thinking..."


What is a computer? Strictly speaking, it is an abstract model, not the physical machine that is used to simulate the model. You could build an entirely mechanical implementation made of wood, if you like. The abstract model itself is a formalization of the notion of an effective method. So far, these are basic notions taught in any decent theory of computation class.

What is a concept? It is an abstracted universal in the intellect. Consider the example of triangularity. We can predicate this concept of any concrete, physical triangle, but none of these concrete instances are triangularity itself, and triangularity itself is not a concrete instance. Indeed, any concrete instance is necessarily determined to be this triangle, but not that triangle, while a concept as abstracted form is true of all triangles. An analysis of the concept allows us to discover properties of triangles, e.g., the property that the inner angles must sum to 180 degrees. Such properties hold for all concrete instances. These properties are themselves abstracta, not concrete things in the world. Triangularity is a matter of semantics, because it is the essential meaning of a triangle, what makes it the kind of thing it is.

The same can be said of quantities. Just as you will never encounter naked triangularity, as such, in the world, you will never meet the number 3. However, you will encounter collections of three things, things measured to be 3, things with three sides, and so on.

Now take a physical machine that is simulating a Turing machine. Can the machine add? Well, strictly speaking, no! You can, however, simulate addition, to a point, by representing numbers with a system of symbols and using syntactic rules to manipulate strings of those symbols so that they result in new strings corresponding to the numbers that addition would produce. But these strings aren't quantities; they aren't numbers per se, but representations. Your Turing machine's tape can be relabeled, and the interpretation of the strings on that tape is inherently ambiguous. They can be read, for most practical purposes, according to an interpretation that assigns them a consistent numerical meaning, but there is no numerical meaning in them per se, any more than ink in a book arranged in the shape of "cat" is the concept of Cat. You could assign them a completely different interpretation, and there would be no inherent reason to prefer one over the other. Only the interpreter's purposes fix the interpretation. But the meanings assigned, the concepts themselves, are the intelligible content that the interpreter brings to those symbols, and this intelligible content does not reside in the physical computer.
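To make that concrete (a toy Python sketch of my own, not from the article): a machine can "add" by pure symbol shuffling, and nothing in the rewriting itself refers to quantity:

    def unary_add(tape: str) -> str:
        """Rewrite e.g. '111+11' into '11111' by erasing the separator;
        the rule mentions only symbols, never quantities."""
        assert set(tape) <= {"1", "+"} and tape.count("+") == 1
        return tape.replace("+", "")

    print(unary_add("111+11"))   # '11111' -- read as 3 + 2 = 5 only under an
                                 # interpretation supplied by the reader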

So, if there is any magical thinking, I claim that it rests on the side of those who claim, rather flippantly, that machines can think.


> So, if there is any magical thinking, I claim that it rests on the side of those who claim, rather flippantly, that machines can think.

Implicit in this statement is the assumption that either A) Human beings can't think or B) Human beings are not machines. Which point of view do you take?

Also, if you're going to take the high ground on magical thinking it might be better to do so not when commenting on an article that plainly states: "The human mind is magic, or might as well be, and it is by this magic that we can defeat the AI."


How is claiming that concepts have some definite, if abstract, existence not itself magical and flippant? They too are ambiguous, as is your example definition “the inner angles of a triangle must sum to 180 degrees”: it depends on Euclidean geometry, on other concepts like lines and points, and on many other concepts possibly not yet invented. And it's all just a matter of convention; humans can follow conventions, but so do computers.

By the way, I myself don't think in the rigorous way you describe, and I can observe that many other humans don't either.


Computers can't play Chess

Computers can't play Go

Computers can't do Math



