
Indeed, as you note, a reinstallable base is a goal everyone wants. It's basically historical reasons, and the coupling of primitives (tied to compiler innards) to non-primitives, that have caused this situation, but sufficient elbow grease should improve things.


I don't think any of this is impossible to change. There was just a Labor Notes conference this weekend where thousands of people pushing for more democratic, rank-and-file-run unions showed up. And examples like the ALU show that going through existing unions isn't the only possible way. Beyond that, even organizing with a major union can still give you a local you have power over: workers at the Times Tech Guild, Amazon, Kickstarter, etc. have all organized with existing larger unions, and are already starting to see more control over their conditions, more rights, and more respect.

And I disagree that job protections would "hinder some of the innovation" happening -- if anything, more comfortable and safe employees are more free to innovate. I think it would just hinder employers giving us impossible deadlines to do underspecified or ill-specified things to tick some useless checkbox, or to deliver a feature they already sold without it having been written yet.


You're kind of right: I'm generally pretty pessimistic about systemic change in big systems where I'm merely a pawn. Though it definitely happens regularly! So hopefully the existing unions change and things move forward!

I am not saying that job protections hinder all innovation. But the "liquidity" of labor most certainly gives companies more control and an easier time making changes in many ways. I think that has an undeniable upside, for instance, for startups that want to push growth to the max without worrying much about possible troubles later.

And I think safety and comfort in tech don't come from the particular employer but rather from how sought after tech employees currently are.

Anyway, I'm talking specifically about tech and tech innovation here: for other industries things are again slightly different. And I do really think that job protections definitely drive innovation in established and stable organizations!


Yes, companies are not democracies. That's why we need unions! That's the only way to exercise our collective power to negotiate with the employer on more equal terms. When we negotiate individually we also say "here are the things we want, and that is our condition of work." There is _always_ a conflict between what employees want, which involves wages and conditions, and what employers want, which is getting the most work while yielding the least in wages and conditions.

Unions in tech are as possible and necessary as unions anywhere else. Nothing about being in tech makes us "special", and the whole mythology that it does only serves to keep us from organizing and solidifying our conditions and strength.


I think the problem unions face in tech is that, for most developers, salary isn't an issue. Benefits probably aren't an issue either. What people in tech want a union to do sets the bar far higher than for the average union: they want to steer the direction of the company, to pick and choose what they work on, and so on. All this while struggling with the fact that tech is a very diverse landscape in terms of politics, and the language of the people who create unions doesn't speak kindly to them all.


The market sorts out wages and conditions. I will assume you are not talking about developers, because they have so many choices, at salary ranges higher than in many other industries.

Contractors in minimum wage roles would be the only candidates for unions.


You must not have heard about the situation at many large video game studios. Can be pretty dire there.


Why are unions necessary at tech startups? From my experience, these companies pay and treat their employees very well.


Holy moly that is not at all what that paper says! It specifically argues that certain equational properties of a given total language continue to hold in the total fragment of a given partial language. It is an embedding theorem combined with lifting properties, not a license to perform _any_ reasoning at all regarding the non-total portion of the language it considers!


Your edit is starting to get it. If by the argument of the article x/y = cotton candy for all x,y, then probably the argument of the article isn't good. And the reason is precisely that division in a field is taken to be nothing but a notational shorthand for multiplication by the multiplicative inverse.


Right. So the multiplicative inverse property _breaks_! He just points out that it breaks, and thus you need to use a more complicated property instead. That doesn't mean that the property doesn't break.


He's not saying that it breaks; quite the opposite, he repeatedly states that 0⁻¹ doesn't exist. He's just defining division in a specific way.

The MI property states that every element except 0 has a multiplicative inverse. He's defining division via two cases: If b≠0, then a/b = a*b⁻¹ (multiplicative inverse). If b=0, then a/b=0. This definition does not imply that 0⁻¹ exists, so there's no violation of MI.
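In display form, that two-case definition reads:

    a/b = a·b⁻¹   if b ≠ 0
    a/b = 0       if b = 0

and the MI property, ∀x ≠ 0. ∃x⁻¹. x·x⁻¹ = 1, never mentions the b = 0 case at all.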


The problem this and the other replies miss is that the standard definition of division is multiplication by the inverse. The entire argument rests on a notational sleight of hand. The property that held before -- that _when defined_ division has the inverse property -- no longer holds. Thus many equational identities that otherwise would hold do not hold.


> The problem this and the other replies miss is that the standard definition of division is multiplication by the inverse.

Try to state this definition formally. The statement: ∀ x,y . x/y = xy⁻¹ is not a theorem of fields or a definition of division. However, ∀ x, y . y ≠ 0 ⇒ x/y = xy⁻¹ is, but is completely unaffected by defining division at 0. Those who think they see a problem rely on informal and imprecise definitions. Could you formally state a theorem that is affected? That would help you get around issues that are merely artifacts of imprecision.

But let's entertain you, and state that what we really mean by the informal and vague statement, "division is the inverse of multiplication," could be stated formally as:

    ∀ x ∈ dom(t ↦ 1/t). x(1/x) = 1
You are absolutely correct that this equational theorem is broken by extending the domain of division. However, there is absolutely no way to say that the formalization of this theorem isn't actually

    ∀ x ≠ 0 . x(1/x) = 1
because the two are equivalent. You cannot then claim that something is necessarily broken if you choose to pick a formalization that is indeed broken, while an equivalent formalization exists, that is not broken (not to mention that the formalization that is broken requires a strictly richer language). All that means is that your formalization in this case is brittle, not that laws are broken.
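As an aside, this isn't hypothetical: Lean's Mathlib adopts exactly this convention, making division total with x / 0 = 0, while the inverse lemmas carry the nonzero hypothesis. A rough sketch (lemma names from memory, so treat as approximate):

    import Mathlib.Algebra.Field.Basic

    -- division is total, with the convention x / 0 = 0
    example {F : Type*} [Field F] (x : F) : x / 0 = 0 := div_zero x

    -- the inverse law holds exactly where it always did: away from 0
    example {F : Type*} [Field F] (x : F) (h : x ≠ 0) : x * x⁻¹ = 1 :=
      mul_inv_cancel₀ h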


I have to disagree -- this isn't sleight of hand. The standard definition isn't being violated here, because standard division isn't a total function. The denominator's domain in Hillel's function is a proper superset of the standard domain: when restricted to the standard domain, the two functions are precisely equivalent. Therefore, every standard identity still holds under Hillel.

The hole that he is filling here isn't one that he bored into the standard definition, but a hole that the standard definition already admitted. If something is explicitly undefined, there's nothing mathematically wrong with defining it, as long as the definition doesn't lead to inconsistency.
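That agreement on the shared domain is even machine-checkable. A minimal QuickCheck sketch over Rational (divExt and prop_agrees are names I'm inventing here):

    import Test.QuickCheck

    -- the extended division: standard where defined, 0 at a 0 divisor
    divExt :: Rational -> Rational -> Rational
    divExt _ 0 = 0
    divExt x y = x * recip y

    -- on the standard domain (y ≠ 0) it coincides with ordinary (/)
    prop_agrees :: Rational -> Rational -> Property
    prop_agrees x y = y /= 0 ==> divExt x y == x / y

    main :: IO ()
    main = quickCheck prop_agrees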


> If something is explicitly undefined, there's nothing mathematically wrong with defining it, as long as the definition doesn't lead to inconsistency.

The definition does lead to inconsistency...you can't look at the field axioms, observe that 0 has no multiplicative inverse, then proceed to define a special, one-off division rule that doesn't involve multiplicative inverses for that one element. Either your division rule is pathological and breaks a fundamental field property or you've introduced a division rule which is just a syntactical sugar, not a real operation (in the latter case you've introduced confusing notation, not a new division function). Why do you think mathematicians explicitly state that the real field with the augmentation of positive and negative infinity (which allow division by 0) is not a field?

I don't understand why there is so much resistance to this idea in this thread, but the simple fact remains that if you define division by an additive identity (0) in any way, the field containing that unit ceases to be a field. This is because all elements cease to be unique. You can quickly prove that every element is equal to every other element, including (critically) the additive and multiplicative identity elements. Fields are defined by closure under the operations of addition and multiplication, and that closure requires uniqueness of their respective identities. Upend that and your entire field structure breaks down, because all you're left with is a field with a single element 0.

Stating that you've defined division by 0 using a one-off case that permits all other field identities to remain consistent is like saying you've turned the complex field into an ordered field using lexicographic ordering. You haven't, because i admits no ordering, much like 0 admits no multiplicative inverse.

Onlookers reading these comments probably think those of us harping on this point are anal pedants with a mathematical stick up our ass. But this thread is increasingly illustrating my central point, which is that the author shouldn't have tried to justify numerical operation definitions in a programming language using field axioms of all things.


> Onlookers reading these comments probably think those of us harping on this point are anal pedants with a mathematical stick up our ass. But this thread is increasingly illustrating my central point, which is that the author shouldn't have tried to justify numerical operation definitions in a programming language using field axioms of all things.

You can't use your own stubbornness to justify itself.

I'm waiting for you to justify your claim that this extension to division breaks any of the field axioms. pron even made you a nice list of them.

Just name one equation/theorem that the new axiom would break.

I'm completely open to being convinced! But so far you've only given arguments about giving a multiplicative inverse to zero. Everyone agrees on that. It's the wrong argument.


Because a divisor cannot exist unless it is a multiplicative inverse. Therefore 0 is not a divisor.

This is getting to be Kafkaesque...it breaks the field axioms themselves. How many different explanations and external resources do I need to provide in this thread for you to be convinced that this is not a controversial point in modern mathematics? I just explained it in the comment you responded to.

You have exactly two options here.

If you define x/0, that definition must interact with the rest of the definitions and elements of the field. To maintain multiplicative closure (a field axiom!) there must be a unique y element equal to x/0. So tell me how you will define x/0 such that x is not equal to the product of 0 and y. Regardless of what you think the author has shown, the burden of proof is not on me at this point to show that you can't do it, because it follows directly from the field axioms. Trying to impose a one-off bizarro divisor function defined only on {0} is not only mathematically inelegant, it immediately eliminates the uniqueness of all field elements. Therefore your "field" just becomes {0}, and since it lacks a multiplicative identity it ceases to be a field. There is your contradiction. Why don't you tell me how you're going to prove any equation defined over a field that relies on the uniqueness or cancellation properties of fields?
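For concreteness, the collapse being appealed to here runs as follows (note that it assumes y = x/0 must satisfy y·0 = x, i.e. that division inverts multiplication):

    Suppose y = x/0 with y·0 = x. Then
    y·0 = y·(0 + 0) = y·0 + y·0        (distributivity)
    ⇒ y·0 = 0                          (cancel y·0 from both sides)
    ⇒ x = y·0 = 0                      for every x, collapsing the field to {0}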

On the other hand, let's say you tell me you want to define x/0 so that the definition doesn't interact with any of the field definitions or elements. Then you haven't actually introduced any new operation or definition; you've just designed a notation that looks a lot like division but has no mathematical bearing on the field itself (i.e. absolutely nothing changes, including for 0). That's not a divisor function, it's just a confusing shorthand. You can't just add another axiom to a field and call it a field.

If you believe I'm stubborn, that's fine. I might be a poor teacher! There are ample resources online which will patiently explain this concept in mind numbing detail. It boggles my mind that there are people in this thread still fighting an idea in earnest which has been settled for over a century.


> Because a divisor cannot exist unless it is a multiplicative inverse. Therefore 0 is not a divisor.

So there are two separate issues here. One is whether we can extend the definition of the "/" operator, and the other is whether we call it "division".

I'm not interested in what we call it. I'm interested in the claim that extending "/" will break the field.

The dichotomy you're talking about is wrong. The two options are not "multiplicative inverse" and "does not interact with anything". "1/0 = 0" interacts with plenty! If I make a system where it's an axiom, I can calculate things like "1/0 + 5" or "sqrt(1/0)" or "7/0 + x = 7". I can't use it to cancel out a 0, but I can do a lot with it.
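Spelled out, those calculations go through mechanically under the axiom:

    1/0 + 5  =  0 + 5  =  5
    sqrt(1/0)  =  sqrt(0)  =  0
    7/0 + x = 7   ⇒   0 + x = 7   ⇒   x = 7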

> It boggles my mind that there are people in this thread still fighting an idea in earnest which has been settled for over a century.

Remember, the question is not "should this be an axiom in 'normal' math?", the question is "does this actually conflict with the axioms of a field?"

> You can't just add another axiom to a field and call it a field.

Yes you can. There is an entire hierarchy of algebraic structures. Adding non-conflicting axioms to an X does not make it stop being an X.


> you can't look at the field axioms, observe that 0 has no multiplicative inverse, then proceed to define a special, one-off division rule that doesn't involve multiplicative inverses for that one element

Why not? What mathematical axiom does this break?

> Either your division rule is pathological and breaks a fundamental field property

It doesn't, or you could show one here: https://news.ycombinator.com/item?id=17738558

> or you've introduced a division rule which is just a syntactical sugar, not a real operation

Is this math? We define symbols in formal math using axioms. I am not aware of a distinction between "syntactical sugar" and a "real operation."

> Why do you think mathematicians explicitly state that the real field with the augmentation of positive and negative infinity (which allow division by 0) is not a field?

That's simple: because unlike the addition of the axiom ∀x . x/0 = 0, adding positive and/or negative infinity does violate axioms 2, 7 and possibly 11 (depending on the precise axioms introducing the infinities).

> I don't understand why there is so much resistance to this idea in this thread, but the simple fact remains that if you define division by an additive identity (0) in any way, the field containing that unit ceases to be a field.

Because a field is defined by the axioms I've given here (https://news.ycombinator.com/item?id=17738558) and adding division by zero does not violate any of them. If you could show what's violated, as I have for the case of adding infinities and as done in actual math rather than handwaving about it, I assume there would be less resistance. I don't understand your resistance to showing which of the field axioms is violated.

> You can quickly prove that every element is equal to every other element, including (critically) the additive and multiplicative identity elements.

So it should be easy to prove using the theory I provided, which is the common formalization of fields. Don't handwave: write proofs, and to make sure that the proofs aren't based on some vagueness of definitions, write them (at least the results of each step) formally. The formalization is so simple and straightforward that this shouldn't be a problem.

> Stating that you've defined division by 0 using a one-off case that permits all other field identities to remain consistent is like saying you've turned the complex field into an ordered field using lexicographic ordering. You haven't, because i admits no ordering, much like 0 admits no multiplicative inverse.

Stop handwaving. Show which axioms are violated.


Consider the following two functions, that we can define for any field F:

• Our first function f(x, y) is defined for all x in F, and all nonzero y in F. That is, the domain of this function f is (F × F\{0}). The definition of this function f is:

f(x, y) = x times the multiplicative inverse of y

(Note that in the definition of f(x,y) we could say "if y≠0", but whether we say it or not the meaning is the same, because the domain of f already requires that y≠0.)

• Our second function g(x, y) is defined for all x in F, and all y in F. That is, the domain of this function g is (F × F). To define this function g, we pick a constant C in F, say C=0 or C=1 (or any C∈F, e.g. C=17 if 17 is an element of our field). And having picked C, the definition of this function g is:

• g(x, y) = x times the multiplicative inverse of y, if y ≠ 0, and C otherwise [that is, g(x, 0) = C].

Note that this function is defined for all (x, y) in F×F, and when y≠0 it agrees with f, i.e. g(x,y) = f(x,y) when y≠0.

Would you agree that both of these are functions, on different domains?

Next, we have the notation x/y. To assign meaning to this notation (and to the word “division”), there are two conventions we could adopt:

• Convention 1: When we say "x/y", we will mean f(x,y) (as defined above) — that is, x * y^{-1}.

• Convention 2: When we say "x/y", we will mean g(x,y) (as defined above).

The point of the post is that we can well adopt Convention 2: with such a definition of "division", all the properties that were true of f (on the domain F×F\{0}) continue to be true of g, except that g is defined on a wider domain.
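A quick Haskell transcription of the two conventions, taking Rational as the field F and modeling Convention 1's partiality with Maybe (the encoding is mine):

    -- Convention 1: division only defined on F × F\{0}
    f :: Rational -> Rational -> Maybe Rational
    f x y
      | y /= 0    = Just (x * recip y)
      | otherwise = Nothing            -- simply outside the domain

    -- Convention 2: total division, with the constant C picked as 0
    g :: Rational -> Rational -> Rational
    g x y
      | y /= 0    = x * recip y        -- agrees with f wherever f is defined
      | otherwise = 0                  -- g(x, 0) = C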

----------------------------------------------

Now, maybe Convention 2 offends you. Maybe you think there is something very sacred about Convention 1. In all your posts in this thread, you seem to be insisting that "division" or "/" necessarily have to mean Convention 1, and the provided justification for preferring Convention 1 seems circular to me — here are some of your relevant comments, with my comments in [square brackets]:

> Since there is no multiplicative inverse of 0, division by 0 is undefined behavior [This seems to be saying: Because of Convention 1, we cannot adopt Convention 2.]

> Since y = x/0, it follows that the product of y and 0 is equal to x, because division is the inverse of multiplication [Here, your reasoning is "because we adopt Convention 1"]

> in algebraic fields division by x is equivalent to multiplication by 1/x. This is precisely why you cannot have a field that admits division by 0: because 0 has no multiplicative inverse. [Again, you're stating that Convention 1 has to hold, by fiat.]

> If F is a field with elements x, y, then the quotient x/y is equivalent to the product x(1/y). If 0 has no multiplicative inverse, there is no division by 0. The two concepts are one and the same [This is just stating repeatedly that we have to adopt Convention 1.]

> A multiplicative inverse is a division. [This is merely insisting that Convention 1 has to be adopted, not Convention 2]

> divisor cannot exist unless it is a multiplicative inverse [Again, stating Convention 1.]

> you can't look at the field axioms, observe that 0 has no multiplicative inverse, then proceed to define a special, one-off division rule that doesn't involve multiplicative inverses for that one element. [Why not?] ...you've introduced a division rule which is just a syntactical sugar [yes the same is true of Convention 1; what's the problem?]

All these comments, which emphatically insist on Convention 1, seem to ignore the point of the article, which is that Convention 2 has no more mathematical problems than Convention 1, because the function g is no less a “valid” function than the function f.

In mathematics when we use words like “obvious” or insist that something is true because it just has to be true, that's usually a hint that we may need to step back and consider whether what we're saying is really mathematically justified. What we have here is a case of multiple valid definitions that we can adopt, and there's no mathematical reason for not adopting one over the other. (There's a non-mathematical reason, namely “it breaks convention”, but the entire point is that we can break this convention.)


I follow everything you're saying. As I have said many times already, I have no problem with convention 2. But don't use field theory to justify convention 2, because it's mathematically incoherent and unnecessary. I broadly agree - there is no reason to be involving field theory in programming language design.

I don't take issue with division by 0 - you can do that in mathematics just fine! I take issue with defining that division and calling the consequent system a field when it's not a field, and acting as though everyone else is wrong. The author invited this criticism when they loaded in the full formalism of field theory without needing to.

If the author had just stated they wanted to define division by zero that wouldn't be a problem. I have no idea why they felt the need to pull in abstract mathematics. I'm not disagreeing with their point, I'm taking issue with the strange and incorrect way they defended it.

Note that in my top level comment I specifically said, "Mathematics does not give us truths, it gives us consequences." I will happily agree with you that there is usefulness and coherence in a definition of division by 0. There is no canonical truth about the world regarding that matter. But a direct consequence of defining any division by 0 is that you cease to have an algebraic field.

Therefore, using field theory to defend a system which defines division by 0 doesn't make sense. It's not that the system is "wrong" for some meaning of wrongness. It's that you shouldn't be trying to pigeonhole field theory to make it work, because you don't need to.


Glad you follow! But I'm not sure we agree yet, as there's still something puzzling when you say:

> But don't use field theory to justify convention 2, because it's mathematically incoherent

> defining that division and calling the consequent system a field when it's not a field

> a direct consequence of defining any division by 0 is that you cease to have an algebraic field

If you go back to my comment (the one you're replying to), both the functions f and g assume a field F, and they are well-defined functions on F×F\{0} and on F×F respectively. (Do you agree?) For example, F may be the field of real numbers. Forget about the word “division” for a moment: do you think there is something about the function g, that makes F not a field?

To put it differently: I agree with you that it is a direct consequence of defining a multiplicative inverse of 0 that you cease to have an algebraic field. But the only way this statement carries over when we use the word “division”, is if we already adopt Convention 1 (that “division” means the same as “multiplicative inverse”).

Again, I think you are implicitly adopting Convention 1: you're saying something like “if we adopt Convention 2, then x/y means the function g and includes the case when y=0, but [something about multiplicative inverses, implicitly invoking Convention 1], therefore there's a problem". But there's no problem!

It is not a direct consequence of defining the function g(x,y) that something ceases to be a field: it is a consequence only if you also insist on Convention 1, namely if you try to assign a multiplicative inverse to 0 (which everyone here agrees is impossible).

Let me emphasize: whether we adopt Convention 1 or Convention 2, there is no problem; we still have the same field F.


Look at it this way...

Standard definition of the division function, d:

d(x, y) = x * y⁻¹, for all x, and all y except 0

Author's modified, piecewise (https://en.wikipedia.org/wiki/Piecewise) definition:

d(x, y) = x * y⁻¹, for y ≠ 0

d(x, y) = 0, for y = 0

He's just adding 0 to the domain of d(x, y) to extend the definition, and deliberately not using xy⁻¹ for that particular element of the domain. No inverse needed.


I know what he's doing. The problem is that when you make it a different function (even by just extending it), you change its equational properties. So equational properties that held over the whole domain of the function no longer hold over the extended domain. This is repaired by modifying the equational properties. But the modified equational properties mean that you now have a different system than before. So the whole thing is just playing around with words.


> So the whole thing is just playing around with words.

Er... that's what mathematics is. It's a word game - we build systems from arbitrary rules and then explore the results.

Look through https://www.mathgoodies.com/articles/numbers for a bunch of uncommonly-defined numbers.


You missed the part where he defines division as a/c = a * inverse(c) for all c != 0


Multiplicative inverse is already broken. x / 0 * 0 is either 0 or ⊥ (aka exception). It wasn't true to begin with. I happen to like the exception/partial function version, so I'm playing devil's advocate, but it really was already broken when we're talking about zero.
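To see both readings side by side in Haskell (safeDiv is a made-up name; the ⊥ reading uses Integer division, which really does throw):

    import Control.Exception (ArithException, evaluate, try)

    -- totalized reading: x / 0 = 0, so x / 0 * 0 = 0
    safeDiv :: Integer -> Integer -> Integer
    safeDiv _ 0 = 0
    safeDiv x y = x `div` y

    main :: IO ()
    main = do
      print (safeDiv 1 0 * 0)                           -- 0
      r <- try (evaluate ((1 `div` 0) * 0 :: Integer))  -- the ⊥ reading
      print (r :: Either ArithException Integer)        -- Left divide by zero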


I think this list is not reflective of a general Haskell outlook. It's reflective of an outlook of people standing _outside_ Haskell (LC is historically a Scala-heavy conference) and projecting onto it a certain structure of expectations that isn't actually how Haskellers in general view things. I agree that this misperception isn't a good thing for Haskell -- but it's a misperception imposed from the _outside_.


If you think reading X loc/min is the norm, and then hit a language where you read much slower, you might think "this is hard to read". But if you're reading the same density of _logic_ at the same speed, then that's not a drawback. If the Haskell is, say, 3x as dense in logic, then reading at 1/3 the speed isn't a drawback at all...


You do get 0 * x <= 0. You just need a splash of domain theory to make the medicine go down.


I don't think that's the <= you were looking for ...


This is an interesting question. As much more of a Haskeller than an MLer I don't tend to feel the "need" to structure my programs explicitly modularly, and when I do I sort of instinctively use a combination of typeclasses and polymorphic higher-order functions to do so.

More basically, just think "everywhere I would open a module, instead I can take a parameterized record of functions" (and furthermore, if a datatype uniquely determines that record, I can associate a typeclass to that datatype). There are limitations to this, but they bite in fewer circumstances than you'd think -- mainly around a sort of cross-modularity (aka the expression problem).
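For instance, here's a rough sketch of the record-of-functions move, with a made-up Set interface (none of this is from the linked post):

    -- the "module signature" as an ordinary record type
    data SetOps s a = SetOps
      { empty  :: s a
      , insert :: a -> s a -> s a
      , member :: a -> s a -> Bool
      }

    -- one "implementation of the module": sets as unordered lists
    listSet :: Eq a => SetOps [] a
    listSet = SetOps
      { empty  = []
      , insert = \x s -> if x `elem` s then s else x : s
      , member = elem
      }

    -- client code takes the "module" as an ordinary argument
    fromList :: SetOps s a -> [a] -> s a
    fromList ops = foldr (insert ops) (empty ops)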

There was a very nice discussion on lennart's blog about this in 2008, with a problem posed and some partial solutions (read bottom post to top):

http://augustss.blogspot.com/2008_12_01_archive.html


This actually goes quite a long way towards what I was looking for! Thank you very much.

> We have basically packaged up the dictionary and unpack it ourselves to get access to the operations. It's not pleasant, but it works.

Would you say this method of using multi-parameter type classes and packaging/unpackaging related operations in a record is "idiomatic"? I admittedly haven't browsed all that much Haskell code, but I feel like I don't see it used that frequently.


Once you end up with a typeclass and associate the methods, the unpacking goes away and you're just working in a context parameterized by some typeclass. I expressed it via that route to help make the connection to modules clearer. (There are occasions when you don't take that last step, too, which is why I also sort of pointed toward that route.) Lennart's post shows an example where this sort of falls down -- but the followup also shows a nice Haskelly solution that works, mainly, except when we want to intermix. He also suggests explicit type arguments as a way to make things nicer -- those have now landed in GHC :-)

The other relevant work I should mention, in a broader sense, is effectful contexts: idiomatically, you declare a subclass of Monad with the relevant operations, then instantiate it via the mtl or some other means, so you can swap out the IO-backed "real" one for various test harnesses, or add in logging layers, etc.
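That pattern, sketched roughly (MonadStore and all its pieces are hypothetical names of mine, just to illustrate the shape):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import           Control.Monad.State
    import qualified Data.Map as M

    -- the "subclass of monad with the relevant operations"
    class Monad m => MonadStore m where
      getKey :: String -> m (Maybe String)
      putKey :: String -> String -> m ()

    -- the IO-backed "real" instance (stubbed with prints here)
    instance MonadStore IO where
      getKey k   = Nothing <$ putStrLn ("get " ++ k)
      putKey k v = putStrLn ("put " ++ k ++ " = " ++ v)

    -- a pure test harness, swappable without touching client code
    newtype TestStore a = TestStore (State (M.Map String String) a)
      deriving (Functor, Applicative, Monad)

    instance MonadStore TestStore where
      getKey k   = TestStore (gets (M.lookup k))
      putKey k v = TestStore (modify (M.insert k v))

    -- client code only mentions the class constraint
    copyKey :: MonadStore m => String -> String -> m ()
    copyKey from to = getKey from >>= maybe (pure ()) (putKey to)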

Finally, I guess I should add that, as a rule of thumb, I've noticed that purity and laziness both help provide ways to give "modular separation of concerns" directly. In particular, the most obvious thing we can do is just have each function do one thing to a bit of data and produce a different bit of data, and that's innately modular. But when we're interleaving IO (for example with mutable datastructures) and concerned with _when_ computation happens (in a strict setting), it feels like we're paying too much for this, because we get big intermediate structures. But if you get the knack of just using pure lazy structures directly, you can sort of "amortize out" the computation cost in a nice way, and also the space cost (as conceptually some big data structures are produced "on demand"). Of course if you get it wrong, blammo :-)
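A tiny example of that last point (my own, purely illustrative):

    -- an infinite pure structure, produced on demand: consumers only ever
    -- pay for the prefix they actually force
    primes :: [Integer]
    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    main :: IO ()
    main = print (take 10 primes)  -- only this much of the sieve is built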

