
sin(x) ≈ x only in radians, so honestly that's reason enough.

Once in a while we get programmers wanting to disrupt mathematical notation for whatever reason... The worst I've seen so far was someone arguing that equations should be written with long variable names (like in programming) instead of single letters and Greek letters. Using turns because it's a little easier in specific programming cases is just as short-sighted, I'd say; it doesn't "scale out" to the myriad of other applications of angles.



Those perfect radians use 2*pi, aka tau, though, which is a different math-notation issue, one where mathematicians have chosen the wrong option (imho), and a case for disrupting that part of math notation to make radians easier to teach: a quarter of a circle could be tau/4 radians, an eighth tau/8, and so on, instead of the confusing halved factors you get when radians are expressed as multiples of pi (a quarter circle being pi/2, an eighth pi/4, etc.).

Regarding long variable names: I'd rather have long variable names than a mathematician using some Greek symbol in formulas without saying what it means (and the meaning can differ depending on their background). But I have no issue with single-letter variables if they're specified properly.


As far as I know, the whole tau disruption wasn't proposed by programmers, so I think we're safe on that.

And proposing to write equations in books and articles with long variable names... Well, Algebra was invented for a reason.


Just out of curiosity, where did tau come from? I'd never heard it used for 2pi, and frankly it seems like a poor choice, because in engineering it's one of the most common symbols in use (the time constant tau).


It apparently was chosen because it's the starting sound of "turn": Hartl chose tau to represent 2pi because it nicely ties in with the Greek word “tornos,” meaning “turn,” and “looks like a pi with one leg instead of two.”

https://blogs.scientificamerican.com/observations/the-tao-of...

There was an earlier effort that used a new "two pi" symbol consisting of a "π" with an extra leg in the middle: https://www.math.utah.edu/~palais/pi.pdf.


Funny coincidence: π with an extra leg is the Cyrillic cursive letter for the sound t.


https://tauday.com/ is a good entrance to this particular rabbit-hole.


Doesn’t Tau (the letter) look like half of Pi? Isn’t this a lost cause already?


I think it's a won cause already.


Look up the Tau Manifesto: it’s all explained there.


That is a completely different matter. The definition of sin/cos in radians doesn't change whether you prefer to write 2*pi or tau; it's still x - x^3/3! + [...]. sin(pi/2) = sin(tau/4) = 1.
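
A quick sketch in Python makes the point (the series is truncated at an arbitrary number of terms, purely for illustration):

  import math

  def sin_taylor(x, terms=10):
      # Partial sum of x - x^3/3! + x^5/5! - ...
      return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
                 for k in range(terms))

  tau = 2 * math.pi
  print(sin_taylor(math.pi / 2))  # ~1.0
  print(sin_taylor(tau / 4))      # identical: pi/2 and tau/4 are the same number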


> Worst I've seen so far was one arguing that equations should be written with long variable names (like in programming) instead of single letters and Greek letters.

That could never work. If anything the words comprising mathematical texts should be defined once and thereafter truncated to their first letter to reduce cognitive burden and facilitate greater comprehension.

c = "could"; d = "don't"; f = "for"; g1 = "go"; g2 = "great"; i = "it"; i2 = "i"; m = "me"; s = "see"; w = "works"; w2 = "what"; w3 = "wrong"

i w g2 f m; i2 d s w2 c g1 w3.


ReferenceError: s is not defined.


Thanks for the correction. I will be sure to credit you in the acknowledgements.


That is the way to do the math, but not the way to write the code.

That said, I would like my compiler to combine any multiplications involved down to one factor feeding the fastest sin/cos operation the machine has. And to treat resulting multipliers close enough to 1, 1/2, and 1/4 as exact, and then skip the multiplication entirely.

But the second part is a hard thing to ask of a compiler.
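
Hand-rolled, that might look something like this (a Python sketch; sin_turns and the snapping tolerance are made up for illustration):

  import math

  def sin_turns(t):
      # Sine of an angle given in turns (1 turn = 2*pi radians).
      t = t % 1.0
      # Treat inputs close enough to an exact quarter turn as exact
      # and skip the multiplication by 2*pi entirely.
      for exact, value in ((0.0, 0.0), (0.25, 1.0), (0.5, 0.0), (0.75, -1.0)):
          if abs(t - exact) < 1e-12:  # the tolerance is an arbitrary choice
              return value
      return math.sin(2 * math.pi * t)

  print(sin_turns(0.25))  # exactly 1.0, no rounding through 2*pi
  print(sin_turns(1/3))   # falls through to the library sin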


Yeah, seems to me that languages should allow way more semantic expression than most do today.

I wish I had done CS; those kinds of compiler optimizations sound so fun. I'd love to work on that.


Good news, optimization is engineering, not CS. CS is all about what a program would eventually do, if you were ever to run it. Once you run it, you have moved to the domain of technicians. Engineering is about making it run better.


Compiler optimization is very much CS.


Only in practice. And, mostly implemented by engineers.


What really bothers me is that mathematicians seemingly never distinguish between doing and presenting mathematics.

You can do your own scribbles with single letters, so do I, it works fine.

But when you present maths in a scientific article, maths book, Wikipedia article or similar, your convenience as a writer should be secondary. Your task is to present information to someone who does not already know the subject. Presenting an equation as six different Greek letters mashed together means that the equation itself conveys almost no information. You need a wall of text to make sense of it anyway.


Uh oh. I see you haven’t yet encountered Einstein notation for tensors :-)


Or just defining the result of division by zero as zero "for safety": https://www.hillelwayne.com/post/divide-by-zero/

It boggles the mind, truly!
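
For anyone who hasn't clicked through, the convention amounts to something like this (a Python sketch of the idea, not the actual language from the post):

  def safe_div(a, b):
      # Total division: defined everywhere, but it silently absorbs errors.
      return a / b if b != 0 else 0

  # The price is that familiar identities stop holding:
  a, b = 5, 0
  print(safe_div(a, b) * b == a)  # False: (a/b)*b == a fails when b == 0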


Are you claiming the author is incorrect that x/0 = 0 is mathematically sound?


Depends how you define “soundness”, but the idea of extending a function beyond its domain of definition with an arbitrary value that doesn't even make it continuous is arguably a curious one.

From an algebra perspective (the one given in the blog post) it may be fine, but from a calculus perspective it's really not.

The lack of continuity really hurts when you add floating-point shenanigans into the mix. Just a fun example:

When you have 1/0 = 0 but 1/(0.3 - 0.2 - 0.1) = -36028797018963968, because 0.3 - 0.2 - 0.1 evaluates to -2^-55 in IEEE doubles rather than 0. Oopsie, that must be the biggest floating-point approximation error ever made.
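
Reproducible in any language with IEEE-754 doubles, e.g. in Python:

  x = 0.3 - 0.2 - 0.1
  print(x)      # -2.7755575615628914e-17 (exactly -2**-55), not 0.0
  print(1 / x)  # -3.602879701896397e+16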


But for 1/x you have that issue anyway. If x is on the negative side of the asymptote but a numerical error yields a positive x, you'll still end up with a massive difference.
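
A minimal illustration (the values are made up):

  eps = 1e-16
  exact = -eps     # true input just left of the pole of 1/x
  computed = +eps  # a tiny rounding error lands on the other side
  print(1 / exact)     # about -1e+16
  print(1 / computed)  # about +1e+16: a huge jump from a tiny input error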


I don't know about "mathematically sound" but I would rather retain the convention that any number divided by itself equals 1.


That's only true for small angles, though (it stems from the Taylor expansion). With other units you pick up a conversion factor, but the approximation remains true enough at small angles.
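
Quick check in Python (0.01 is just an arbitrary small angle):

  import math

  x = 0.01
  print(math.sin(x))                # ~0.0099998, close to x when x is in radians
  print(math.sin(math.radians(x)))  # sin of 0.01 degrees: ~(pi/180) * x instead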


What's wrong with long variable names?


Did you ever need to do involved mathematical manipulations using pen and paper? How would you judge the readability of the following expressions:

  zero_point equals negative prefactor divided_by two plus_or_minus square_root_of( square_of(prefactor divided_by two) minus absolute_term )

  zero_point = -prefactor/2 ± √((prefactor/2)² - absolute_term)

  x = -p/2 ± √((p/2)² - q)


I might be strange but the second seems far more readable than the third to me. The first is of course nonsense.


In my opinion, it puts too much emphasis on the variables compared to the operators and numbers, and makes the expression as a whole harder to parse at a glance, since I have to actually read the names.


Yeah, when doing it by hand, I surely would shorten it. But when doing math on the computer with the help of autocomplete, why not? Though I don't really know whether that exists in pure math; I only do math in the context of programming.

And for pedagogical purposes, I would like more meaningful names at times.


It's definitely a lot harder to read and make sense of an equation that is sprawled out. In some domains, I would contend that using greek letters in code would increase readability, especially for those familiar with the underlying formula, and especially if the code is not edited frequently (e.g. implementing a scientific formula which won't change).

A good compromise might be to put the equation in the comments in symbol-heavy form, and use the spelled out names in code.


Try to solve the Schrödinger equation for even an infinite well using long variable names.

I'm not talking about using it in code, I'm talking about someone arguing that books and articles should do it as well.
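
For reference, the compact textbook statement of that problem (standard notation, LaTeX source):

  % Time-independent Schroedinger equation inside the well (V = 0 for 0 < x < L):
  -\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} = E\psi
  % Solutions vanishing at the walls, psi(0) = psi(L) = 0:
  \psi_n(x) = \sqrt{2/L}\, \sin(n\pi x/L), \qquad
  E_n = \frac{n^2 \pi^2 \hbar^2}{2mL^2}, \quad n = 1, 2, 3, \ldots

Now picture every psi, hbar, and L spelled out as wave_function, reduced_planck_constant, and well_width through a page of algebra.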


If you go watch math lectures, there's a bunch of "x means Puppy Constant", or "let's substitute in k for the real component", or "let's signify <CONCEPT> by collecting these terms into a variable". My argument wouldn't be to replace ALL the variables with meaningful names, just the ones with a lot of meaning that a reader might not understand. It'd also be great if constants, variables, and functions all got naming conventions: lowercase letters for variables, all caps for constants, etc. Shortening the variable names saves a little bit of writing, but if the goal of math is to share and spread knowledge, within the community or without, better naming and less memorization would both help. You can also rename things for the working-out and use friendlier names for the final equations; just tell people how you're renaming them and everyone will follow along, and the programmers will stop trying to sell you on readable code.

Most importantly, the flat dismissal and horror that many express when someone brings up adjusting the symbolic traditions of Maths should be investigated. Engage with why you feel so strongly that anything other than rigid adherence to tradition is sacrilege. Based on what I've heard, to be a great Mathematician you need to hold onto tradition lightly and think outside the box. Rigid adherence to tradition doesn't sound like that to me.


> Engage with why you feel so strongly that anything other than rigid adherence to tradition is sacrilege

Who’s saying that? Inventing good notation is a big part of mathematics (and that also frequently gets criticized on HN because it may introduce ambiguities)

Also, there’s nothing wrong with texts that target an audience with a certain level of understanding.

It’s not as if adding, for example, “By Hermitian matrix we mean a complex square matrix that is equal to its own conjugate transpose” will make a paper much easier to understand, just as adding a comment “this is where the program starts running” doesn’t help much in understanding your average C program, or adding a definition of “monarchy” to a history paper.

In the end, any scientific paper has to be read critically, and that means making a serious effort to understand it. A history paper, for example, may claim that Foo wrote “bar” but implied “baz”. A critical reader will have read thousands of pages, will think about that claim for a while (especially if they disagree with it), and may even walk to their bookshelf or the library to consult other sources before continuing reading.


Again, try to solve the Schrödinger equation for even an infinite well using long variable names.


Got a reference for what that looks like with current notation? The internet mostly just shows the starting equation and the ending equation, skipping all the intermediate steps.


You can use whatever notation you want for your own work, but documenting it with at least formal variable definitions would be a significant boon for math literacy.


Nothing, but their use in mathematical equations will certainly conflict with implicit multiplication (i.e. `abc` in a formula means `a * b * c`, not a single variable `abc`).


This is already a problem: "sin" is the sine function, not the product s*i*n.


You can use a different font.
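
That's the convention typeset maths already uses; in LaTeX, for instance:

  \sin x   % upright roman "sin": the operator
  sin x    % italic letters: reads as the product s*i*n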


LOL



