
> If a program compiles, it almost always just works.

I can never understand this claim when people make it about languages like Haskell and ReasonML.

How is the compiler catching your logic bugs? If you write 'a + b' and you should have written 'a - b', the compiler isn't going to catch that in either of those languages. It'll compile but it won't work. Do you never make these kinds of logic bugs?



The type system is a lot more expressive than the ones found in Java/C#/C++, thanks to Algebraic Data Types.

Yaron Minsky (of Jane Street, the largest OCaml user) coined the phrase "make illegal states unrepresentable" to describe how ADTs let us enlist the compiler in expressing our intent correctly, and Richard Feldman (of NoRedInk, the largest Elm user) has given an absolute gem of a talk that dives into that idea with relatable, concrete examples.

It is titled "Making Impossible States Impossible" and should clarify this concept nicely. https://www.youtube.com/watch?v=IcgmSRJHu_8
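
To make that concrete, here is a minimal OCaml sketch of the idea (the talk itself uses Elm, and the type and constructor names below are invented for illustration):

    (* A record of independent flags permits nonsense combinations,
       e.g. is_loading = true while data is already present: *)
    type request_bad = {
      is_loading : bool;
      data : string option;
      error : string option;
    }

    (* A variant only lets the meaningful states be expressed, so the
       compiler rejects any attempt to construct an illegal one: *)
    type request =
      | Not_asked
      | Loading
      | Failure of string   (* error message *)
      | Success of string   (* response body *)

    let describe = function
      | Not_asked -> "not started"
      | Loading   -> "in flight"
      | Failure m -> "failed: " ^ m
      | Success b -> Printf.sprintf "got %d bytes" (String.length b)

Every function that touches a request is then forced by the pattern match to handle exactly these four states and no others.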


I've done a lot of work in Python and a fair amount of ocaml and rust.

I find that in Python - probably any dynamic language - a lot of the tests I write are just to make sure the pieces fit together. It's too easy to write well tested units that don't work together because of a misplaced argument or something at the interface. That type of testing is largely not needed with static typing.

Also, when I create a data structure in Python my mental model of what it means and how it relates to other parts of the program can be kind of fuzzy. Which means the pieces may not fit like I think they do because I missed a detail.

But a strong static type system forces me to clarify my thinking much earlier in the coding process than in a dynamic language. It encourages me to encode more of my mental model into the code and that allows the compiler to double check me.

It's kind of like spell check but for design.
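
As a hedged sketch of what "encoding the mental model" can look like in OCaml, a module signature plays roughly the role of that design spell check (the module and function names here are invented for illustration):

    module type INVENTORY = sig
      type t
      val empty : t
      val add : sku:string -> qty:int -> t -> t
      val count : sku:string -> t -> int
    end

    module Inventory : INVENTORY = struct
      type t = (string * int) list

      let empty = []

      let add ~sku ~qty t =
        let current = try List.assoc sku t with Not_found -> 0 in
        (sku, current + qty) :: List.remove_assoc sku t

      let count ~sku t =
        try List.assoc sku t with Not_found -> 0
    end

A caller that passes a string where the quantity should go (say, Inventory.add ~qty:"3" ~sku:5 Inventory.empty) is rejected at compile time, which is exactly the class of "pieces don't fit" bug that would otherwise need a test.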


I believe this claim has two narrower meanings:

1) If it compiles, you won't get an "undefined is not a function".

2) If it worked before, and you changed something and looked at all the places the compiler told you to look at, then all the code it didn't tell you to look at is very likely to continue to work correctly.

(1) is a very convenient feature and easy to advertise, but (2) is what makes people jump around and tell everyone they should use ML: it lets you not be afraid of changing something in the first place.
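
A tiny OCaml sketch of point (2), with made-up names: add a constructor to a variant and, with the usual exhaustiveness warnings enabled, the compiler points at every match that now needs attention.

    type payment =
      | Cash
      | Card of string   (* last four digits *)
      (* Adding a new case here, say `| Invoice of int`, makes the
         compiler flag every match below as non-exhaustive. *)

    let describe = function
      | Cash -> "paid in cash"
      | Card last4 -> "paid by card ending in " ^ last4

Everything the compiler stays silent about is code whose assumptions the change did not touch.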


Data point. I just converted a small app to Elm 0.19. So, that's a real 0.x: breaking API changes all across the standard library. I just repeatedly ran the compiler and jumped to the next error. About an hour for my 1 KLOC app (and Elm code is pretty sparse), which felt rather tedious. But! Now it works again, and I believe it works just as well as before. Of course there could be some new bugs, but I would rather expect them in the substantially changed parts of the Elm runtime than in my app, where I didn't look at anything the compiler didn't tell me to look at.

I have even less experience with OCaml, but my gut feeling is that it should be about the same in this respect unless you let mutable state run wild.


There is a phrase from Yaron Minsky about OCaml, "make illegal states unrepresentable", which summarizes this.

It's more about programming declaratively and functionally, together with strong types that let you be very expressive. My experience with F# is similar: if you can get your code to compile, it almost always just works as you expect. That doesn't mean you can avoid writing tests, though.

This is a good read about it, https://fsharpforfunandprofit.com/series/designing-with-type... by Scott Wlaschin.
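
The series is in F#, but the core trick translates directly; here is a hedged OCaml rendition of its single-case wrapper idea (the type and function names are invented for illustration):

    type email = Email of string
    type customer_id = Customer_id of int

    let send_receipt (Customer_id id) (Email addr) =
      Printf.printf "sending receipt for customer %d to %s\n" id addr

    (* send_receipt (Email "a@example.com") (Customer_id 42)
       is a type error: a bare string or int, or the two wrapped values
       in the wrong order, no longer slips through. *)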


In my experience non-logic bugs vastly outnumber logic ones.

"Modern" dev is not about writing algorithms, it's about sewing together many different frameworks/libraries/technologies.

I'm probably biased as most of my experience is in webdev, but other domains seem to have the same issues (probably not to the same extent).


Out of curiosity, what are you considering modern?

The trend I see growing is towards microframeworks, smaller libraries, fewer dependencies (or, if you have some, a small opinionated set of them), and more doing it yourself -- I think this phenomenon, put more simply (and championed by Go), is to "just write code".

I am enjoying "just writing code" far more than my days as a library and framework glue person, and it is a lot easier to have other devs ramp up, as an added bonus. "Here, read this simple code to understand what's going on and then add your code" versus "here, study these docs, learn this framework, install all these dependencies then learn our coding style within the framework to understand what's going on, then add your code"

Gary Bernhardt captures my sentiment about massive libraries and frameworks these days nicely in this thread: https://twitter.com/garybernhardt/status/1037101314939875328


I think this is something that needs to be experienced firsthand a few times for it to sink in. It's something like the backpropagation update of a neural network: no matter how long you stare at it, it won't be clear, but if you derive it yourself it becomes super clear.

I can say that this effect seems very real. I think the reason goes beyond those shallow type mismatch errors I often make. Part of the reason, I think, is that the correct program becomes clearer to write than the erroneous programs you could have written, if you use the type system. In other words, types can make some of the erroneous versions cumbersome, or at times impossible, to write -- good luck mud wrestling with the compiler till it's convinced.

I strongly encourage you to try it, and don't be discouraged. This really needs to be experienced firsthand to be convincing.


The more meaning you can encode in the type system, the more the compiler can help you. It won't help you catch a + b... unless you e.g. declared a to be of type meters, and b inches.
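
For instance, a minimal sketch of that meters/inches case in OCaml (the wrapper types and conversion helper are invented for illustration):

    type meters = Meters of float
    type inches = Inches of float

    let add_meters (Meters a) (Meters b) = Meters (a +. b)
    let to_meters (Inches i) = Meters (i *. 0.0254)

    (* add_meters (Meters 1.0) (Inches 12.0) does not type-check;
       the caller must convert explicitly:
         add_meters (Meters 1.0) (to_meters (Inches 12.0)) *)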


Certainly regarding your example.

However, in languages with rich type systems you can do Type-Driven Development, which shrinks the space for many kinds of errors.

Basically a simplified version of dependent typing, where one makes use of the type system to represent valid states and transitions between them.

For example, one can use ADTs to represent file states, with the file operations only compiling if the file handle is in the open state.
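
A hedged sketch of that file-handle example in OCaml, using a phantom type parameter to track the open/closed state (module and function names are invented for illustration):

    module File : sig
      type opened
      type closed
      type 'state handle

      val open_ : string -> opened handle
      val read_line : opened handle -> string
      val close : opened handle -> closed handle
    end = struct
      type opened
      type closed
      type 'state handle = in_channel

      let open_ path = open_in path
      let read_line h = input_line h
      let close h = close_in h; h
    end

    (* Reading after closing does not compile:
         let h = File.open_ "log.txt" in
         let h = File.close h in
         File.read_line h   (* error: closed handle, expected opened *) *)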


I think that statement is better viewed in light of refactoring or changing existing code. If the code worked/compiled before, and I made a change that still compiles, I can be reasonably confident it didn't accidentally break something else (assuming the program was reasonably well designed to begin with). It makes reviewing pull requests much easier.

Doing so in other programming languages always makes me nervous: in C, an innocuous change could introduce some memory corruption bug because you broke an implicit invariant of the code; in Python, it could raise a type error when called in an unexpected way. To review a pull request you usually have to look at the entire file (or sometimes the entire project), and even then you can't be sure.

As for the logic bugs: that is what tests are for.


The compiler does not catch your logic bugs. But stronger, more expressive type systems catch syntax bugs and structural errors like you wouldn't believe. It's not an all or nothing.

I have spent an embarrassing amount of time troubleshooting a < that should have been a > (in F#), where it resulted in a very subtle defect. I'm not aware of a type system that would have prevented this, and yet I still strongly agree with the assertion that in a more strongly/statically typed language, "compiling successfully = very likely to run correctly."


It kind of applies to those languages. It applies more to languages like SPARK or Eiffel. But yeah, like other posters have said, it's more about avoiding runtime errors.


> How is the compiler catching your logic bugs?

It doesn't, and I agree with your sentiment. IIRC, type-related errors account for ~10% of bugs.

If (and only if) types are tied into your tooling ecosystem (for example, an IDE that uses that info to its full extent to aid in code completion, refactoring, code analysis, etc.), they are very useful.


> type-related errors account for ~10% of bugs.

This sounds implausible to me unless they're only counting bugs that have slipped through QA and have later been fixed in production. In my experience the compiler catches maybe 95% of the errors I write. Most of those would be caught by my unit tests or my manual testing, and a few more would be caught by the QA team, but it's much nicer and faster to get notified about them immediately.


> It doesn't, and I agree with your sentiment. IIRC, type-related errors account for ~10% of bugs.

I never thought of that, but that is a really valid point. I guess that is why Ruby/Python are so popular as most of the problems programmers run into in production aren't type problems.


Type correctness is pure power, or a waste of time, depending on... perspective! I can't help thinking back to the highly illustrative example of my first enhancement to Facebook's production website as a "bootcamper" in 2009.

I was tasked with adding proper honorifics for Japanese speakers (it's complicated in the general case, but I was just getting as far as "San"). I wrote some straightforward PHP code to implement this feature, but of course it broke in production for some nontrivial percentage of requests. Why? Because the string I was adding the honorific to wasn't always a string. WTF? What is the proper course of action when strings aren't strings? Well, in this case the answer was a run-time type-check guard on the code I had added.

Keith Adams, my esteemed colleague, likened this to "dogs and cats living together", perhaps revealing a fondness for simpler days when Ghostbusters trod the streets. Keith may not have gone as far as lauding OCaml's type system in those dark early days, but I will do so now (and perhaps he would too).


That is not why these languages are popular. Types are your design document in languages with an expressive type system. So, by design, you can make it hard to make logical errors.





