martinhath's comments | Hacker News

This is from C.A.R. Hoare's "Prospects for a better programming language" (1972) [0]:

> It is on production runs that the security is most required, since it is the results of production runs that will actually be trusted as the basis of actions such as expenditure of money and perhaps even lives. The strategy now recommended to many programmers is equivalent to that of a sailor who wears a lifejacket during his training on dry land but takes it off when he is sailing his boat on the sea. It is small wonder that computers acquire a bad reputation when programmed in accordance with this common policy.

It is also quoted by Donald Knuth in "Structured programming with goto statements" (1974) [1] (which, incidentally, is also the source of the quote about premature optimization):

> He [Tony Hoare] points out quite correctly that the current practice of compiling subscript range checks into the machine code while a program is being tested, then suppressing the check during production runs, is like a sailor who wears his life preserver while training on land but leaves it behind when he sails!

[0]: https://ora.ox.ac.uk/objects/uuid:dff9483b-e72f-4599-bf90-76... p. 341

[1]: https://dl.acm.org/doi/pdf/10.1145/356635.356640 p. 269


How would errdefer work in the general union setting?

Having errors as a first-class construct in the language allows things like errdefer to be very simple and easy to use. It looks needlessly specialised at first, but I think it's actually a really good design.


That's a very good question! Most advanced languages have some way of expressing the concept of a "computation within a context". For example, all languages that support a notion of monads have that kind of support; examples would be Haskell, Scala, F#, ...

In those languages there are (or would be) generally two ways of achieving the same thing as errdefer:

1.) Having a common interface/tag for errors

In that case, if you have a return type "success | error1 | error2", then error1 and error2 must implement a common global interface ("error"), so that you can "chain" computations whose return type has the shape "success | error". "success | error1 | error2" follows that shape because the type "error1 | error2" is then a subtype of "error".
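A minimal Python sketch of option 1, where a common base class plays the role of the "error" interface and chaining short-circuits on any of its subtypes (all names hypothetical):

```python
class Error(Exception):
    """The common "error" tag; error1, error2, ... are subtypes of it."""

class ParseError(Error): ...
class RangeError(Error): ...

def chain(value, f):
    # A computation of shape "success | error": run f only on success,
    # short-circuit on any Error subtype.
    return value if isinstance(value, Error) else f(value)

def parse(s):
    return int(s) if s.isdigit() else ParseError(s)

def check_range(n):
    return n if 0 <= n <= 9 else RangeError(n)

print(chain(parse("5"), check_range))        # 5
print(repr(chain(parse("x"), check_range)))  # ParseError('x')
```

Because ParseError and RangeError both extend Error, the chaining function never needs to know the concrete error types involved.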

2.) Having some kind of result type.

This would be similar to how it works in Rust, or in the example in the article here. So you would have a sum type like "Result = either success A or failure B", and the errors stored in the result's failure case (B) would then be union types.

The chaining would then just be a function implemented on the result type. This is personally what I use in Scala for my error handling.

Just to make it clear: this "chaining" is not specific to error handling, but a very general concept.
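A minimal Python sketch of option 2, with a Result sum type and a chaining function akin to flatMap/and_then (all names hypothetical; the failure side would hold a union of error types):

```python
from dataclasses import dataclass

@dataclass
class Ok:
    value: object

@dataclass
class Err:
    error: object  # in practice, a union of error types

def and_then(result, f):
    # The "chaining" function: run f only on success, short-circuit on failure.
    return f(result.value) if isinstance(result, Ok) else result

def parse(s):
    return Ok(int(s)) if s.isdigit() else Err("parse_error")

def reciprocal(n):
    return Ok(1 / n) if n != 0 else Err("div_by_zero")

print(and_then(parse("4"), reciprocal))  # Ok(value=0.25)
print(and_then(parse("x"), reciprocal))  # Err(error='parse_error')
print(and_then(parse("0"), reciprocal))  # Err(error='div_by_zero')
```

The errdefer-like behavior falls out of the short-circuiting: anything chained after a failure simply never runs.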


> How would errdefer work in the general union setting?

Well, it couldn't be, and some would argue that would be better.

But you could also have a blessed type, or a more general concept of "aborting" that any type could be linked to.

Or you could have a different statement for failure returns, so that you can distinguish a success from a failure without necessarily imposing a specific representation of those failures.


I’m pretty sure it’s intentionally a dark pattern.


At least it’s better than blocking content with CSS.

These sites typically want to have their cake (getting indexed) and eat it too (not letting visitors read the indexed content), so they resort to using CSS to hide it.


view > page style > no style

Fixes that in jiff.


Not just in theory; they can, and do, fail in practice.


Agreed. Poor choice of word there.


> Clean code / readable code / whatever you want to call it is often at odds with performance.

I disagree; or rather, I'd put it the other way around by saying that often you can get both clean code (by some metric) and reasonable performance. The patterns in e.g. the book by Robert Martin don't give you either, though.

> And for most enterprise projects it just doesn't matter.

It matters for the users. I use software that is slow for no good reason, and I'd like to live in a world where this is not the case.


> At some point in my programmer career I figured out that optimizing for human comprehension, a.k.a. "clean code", is a valid goal.

Nobody is arguing against comprehensible code. Casey's video is not about clean code in general, but about "Clean Code" as presented e.g. in the book by Robert Martin, which contains a bunch of code I would not classify as "clean" by any metric.

> But as others pointed out, he focuses on squeezing every bit of performance in the context of real-time video game logic

He doesn't, though. The "Clean Code, Horrible Performance" video is an excerpt from his course called "Performance-Aware Programming", where he explicitly says many times that the course isn't at all about "optimization", but merely being "performance aware". This isn't highlighted in the video though, so getting the full context is difficult.


OnShape is pretty nice, but their language, FeatureScript, is a disaster.


> Who is the author arguing against?

This is addressed in the third paragraph:

> The problem with this saying is that many people wrongly interpret it as “early optimization is the root of all evil.” In fact, writing fast software from the start can be hugely beneficial.


I sat down and read the original paper a few years back with the same intent as the author here, and afterwards I listed a couple of quotes that I found interesting [0]. It didn't really get any traction [1], but maybe someone here will find it interesting.

[0]: https://mht.wtf/post/structured-programming-quotes/

[1]: https://news.ycombinator.com/item?id=19254310


> This is perhaps too big a burden on a user. Some might say they rather not have the compositionality unless it's guaranteed to be correct.

I think this is it. Or rather, I think to some people, myself included, compositionality should imply correctness. If you /can/ compose things, but the result is wrong, can you really say that you've got composition?

How useful is it, really, to be able to plug your own types into other people's libraries if you have to trace through the execution of that library, and figure out which libraries it transitively uses, to ensure that all of your instantiations are sound? How do you even test this properly?

It's a really hard problem, and from what I can tell, Julia gives you no tools to deal with this.

[0] is probably relevant here, although I'm not sure I share the positive outlook.

[0]: http://www.jerf.org/iri/post/2954


We need to think about alternatives. It's easy to find issues and weaknesses in what Julia does, but we should consider what we would do with a different language to make a fair comparison.

If you don't want composition, then there's no issue and Julia can be as weak/strong as other languages.

If you do want composition, then I see two ways (but I'm sure there are others): you do the "typical" thing with glue code or you use the more "automatic" way that Julia provides. Which one is better? If this is too subjective: Which one is more correct?

Yes, Julia can propagate errors in unexpected ways, but how would you implement this in another language? You'd probably have to spend X hours writing glue code and Z hours writing tests to make sure your glue code works. This also raises maintainability issues when one of the two packages you're connecting/composing changes.

Julia offers a reduction in the X hours of writing glue code (sometimes with X = 0), and maybe a similar reduction in the Z hours of writing test code. Maintainability, I'd argue, becomes easier.

The cost is the unknown unknowns that might crop up when doing this composition. My (extremely) subjective sense is that this doesn't happen that often to me (I don't usually pass approximation intervals to sparse matrices to auto-differentiation to neural networks), which means the benefits outweigh the costs in this regard. YMMV

Edited a few things for clarity


'Pro: saves X amount of coding hours. Con: may silently return wrong results.' is a terrible philosophy. It's like saying 'well, the surgeon messed up, but at least the surgery was cheap'.


There's nothing more silent about it than any other bug that arises in a similarly dynamic language. People use Python over C++ in large part because it saves X amount of coding hours, and comes with different kinds of bugs that can "silently return wrong results".
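A tiny illustration of that kind of silent wrong result in Python, where duck typing happily "composes" in an unintended way (the function name is made up):

```python
def triple(x):
    # Intended for numbers, but duck typing also accepts strings:
    # "*" means repetition there, so no exception is raised.
    return x * 3

print(triple(2))    # 6
print(triple("2"))  # prints 222 -- the string '222', not the number 6
```

No error is raised at any point; the wrong value just flows onward, which is the same failure mode Julia's automatic composition can exhibit.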


It seems you've picked the weakest version of what I said (which means communication becomes less likely).

Two points:

1) The glue code you HAVE to write to make the composition work in other languages, and the maintainability issues you get, may also introduce wrong results.

2) MAY doesn't mean it will, and we trade off speed/convenience against risk in many other areas. Surgeons (or the health system in general) do trade off % of success for cost/speed, just because they don't have the resources to do everything / spend 100 hours on everyone.


I don't have experience with using Julia in really large code bases, but my intuition has always been that the combination of these design characteristics in one language:

- easy unrestricted composability

- lack of well-defined interfaces

- lack of an effective way to use the strong typing system to validate correctness

is not a very good idea.


I think Julia (mostly meaning pervasive multiple dispatch, which is unprecedented at the scale at which Julia uses it) unlocks a new way to organize programs.

Some formalized notion of "interface" is certainly important, but it seems no clear formulation has emerged.

I think it's fairly consistent with the dynamic nature of Julia and fairly inconsistent with the static nature of Julia. I don't know what a good solution to the interface problem is.

I think one can get pretty far with writing tests to check interfaces. If your library expects user-defined types, you can expose an interface to these tests so that users can check whether they've implemented everything.

This is a very generic approach, and aside from the key limitation of only giving results at runtime, it is much more powerful.
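A minimal sketch of that approach in Python: a library exposes a runtime interface check that users can call from their own test suite (all names are made up):

```python
# Methods the hypothetical library requires from user-defined container types.
REQUIRED_METHODS = ("size", "getindex")

def check_container_interface(obj):
    """Return a list of problems; an empty list means the interface looks complete."""
    return [f"missing method: {name}"
            for name in REQUIRED_METHODS
            if not callable(getattr(obj, name, None))]

class MyVec:
    # A user type that implements everything the library expects.
    def __init__(self, xs):
        self.xs = list(xs)
    def size(self):
        return len(self.xs)
    def getindex(self, i):
        return self.xs[i]

class Broken:
    # A user type that forgot to implement the interface.
    pass

print(check_container_interface(MyVec([1, 2, 3])))  # []
print(check_container_interface(Broken()))
```

A user would call the check from a unit test, so an incomplete implementation fails their test suite instead of surfacing as a confusing runtime error deep inside the library.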


Testing is certainly general, but it's high-effort and it's easy to miss corner cases. I think type checking plus some testing is going to beat testing alone in most scenarios.

