> Go is just too different to how I think: when I approach a programming problem, I first think about the types and abstractions that will be useful; I think about statically enforcing behaviour
I see statements like this a lot from Haskellers and I think it's overstated. Anecdotally, after going from Python to spending 3-4 years in Haskell, then going back to a dynamic language (Elixir), I've come to the conclusion that how you think when programming is very much a learned trait tuned to the language at hand. It's neither good nor bad, but it's educational nonetheless. Haskell and languages like it force you into a very distinct mindset that can overpower previously learned languages unlike it. And it's in no way permanent.
After I stopped using Haskell I was like "ugh, I need types! wth!". I wanted to go back to Haskell only out of familiarity, but as I continued after a short while I wasn't even thinking like that anymore. The thought rarely occurred. I stopped thinking in Haskell and started thinking in Elixir.
I disagree firmly. I fought many languages, even statically typed ones like Java. The day I found out about Lisp and FP, everything clicked. If I weren't a young student in a Java era I'd have saved 5 years of headache. I tried, many times, but I always ended up thinking like mathematicians (at least partially; I have shallow math skills) and/or FP-ers. That's the only way my brain was allowed to function and walk toward solutions. It's how I finally ended up understanding C (before that I couldn't write nested loops without making a mess).
I don't think it's entirely placebo or emotional drive either, because I really liked the Java OOP world at first, and went deep into it; but got nothing.
I had a very similar experience when switching languages back and forth. The new one feels unnatural and ineffective at first, but then you learn how to use it effectively and adjust.
I have come to think that it is precisely when the language switch feels the worst that the learning is the most beneficial in the long term. If a language is popular and you feel like it does not really work, it means you have conceptual learning to do.
Seriously though, I think Elixir (with its guards and pattern-matching facilities) occupies a nice middle ground between strict static typing and the dynamic typing of procedural/OO langs.
I cannot speak for Elixir, but coming from the Erlang world, I'm sure it's a fine language that has sane defaults, much like Clojure.
However I switched from Python to Scala and besides the performance issues and the poor handling of async I/O that I had with Python, by far the biggest problem with Python was all the insecurity while developing with it. It drove me insane, because we had a production system that always had problems due to how poorly Python's ecosystem treats errors.
Just to give an example, at that time we were using BeautifulSoup to do HTML parsing, a library resembling jQuery, except that BeautifulSoup can return None all over the place, and no matter how defensive my coding got, NoneType errors (Python's null pointer exceptions) still happened. And what was the solution for async I/O in Python? Monkey patching the socket module by means of Gevent or Eventlet of course, which didn't work for native clients (e.g. MySQL's), and if this wasn't problematic enough, you had to guess what worked or not and when. The worst kind of magic possible.
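For anyone who hasn't hit this failure mode, here is a minimal stand-alone sketch of it. (`fake_find` is a stand-in, not real BeautifulSoup; the real `soup.find()` likewise returns None when no element matches, which is what makes the chained call blow up.)

```python
def fake_find(document, tag):
    # Stand-in for soup.find(): returns the match if present,
    # otherwise None, just like BeautifulSoup.
    return document.get(tag)

page = {"title": "Hello"}

# Works fine when the element exists:
title = fake_find(page, "title").strip()

# Blows up only at runtime when the element is missing:
try:
    author = fake_find(page, "author").strip()
except AttributeError as e:
    print("runtime failure:", e)
```

No amount of defensive coding elsewhere helps if one call site forgets the None check.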
When I switched to Scala, all of these problems vanished. This isn't about having tests either, because tests only test for the behavior you can foresee, unless you do property-based testing with auto-generated random data and in this regard static languages like Scala and Haskell also rule due to the availability of QuickCheck / ScalaCheck.
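The property-based idea can be sketched in a few lines of plain Python. QuickCheck and ScalaCheck automate the input generation (and shrink failing cases for you); `reverse` here is just a toy function with an easy-to-state property:

```python
import random

def reverse(xs):
    return xs[::-1]

# Property: reversing twice is the identity.
# Random inputs probe cases an example-based test would never list.
for _ in range(200):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert reverse(reverse(xs)) == xs
```

The point is that you state an invariant, not an enumeration of expected behaviors, so the test survives refactoring.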
You see, my Python codebase got really bad because of fear of refactoring. Yes, even with plenty of tests and good code coverage. The tests themselves actually became a barrier to refactoring: in a language like Python there is no such thing as information hiding, and coupled with dynamic typing, that means your tests end up tightly coupled to implementation details, break on correct refactorings, and are hard to fix.
Nowadays when I'm doing refactoring with Scala, most of the time it simply works as soon as the compiler is happy. Some people bitch and moan about that compiler being kind of slow, and it is, but the things it can prove about your code would require specialized and expensive tools for other languages, tools that still wouldn't do as good a job.
Going back to dynamic languages, there are also languages like Clojure, which get by with sane defaults. For example in practice Clojure doesn't do OOP-style dynamic dispatching most of the time and functions are usually multi-variadic and capable of handling nil values gracefully. This is not enforced by the language, being simply a hygienic rule accepted by the community. However, relying on such conventions requires (a) capable developers that (b) agree on what the best practices should be.
And the issue that people miss when weighing the merits of such languages is that smaller communities are always homogeneous. So when offering a language like Elixir as an example, if it hasn't driven you insane yet, well, you have to take into account that Elixir might not be popular enough.
So basically I prefer strong static typing because I'm not smart enough or wise enough to seek and follow best practices as often as I'd like, and because I don't have much faith that other people in the community can do that either, especially after that community grows. The biggest irony IMO is that Python is right now the antithesis of TOOWTDI ("There's Only One Way To Do It").
> because in a language like Python there is no such thing as
> information hiding and coupled with dynamic typing, it
> means that your tests end up tightly coupled with
> implementation details, will break on correct refactoring
> and will be hard to fix.
Would you mind providing more details about this point? I thought dynamic languages make testing easier, because you don't care about the type of the object as long as it works with the same API (if it quacks…).
This also supposedly makes it way easier to mock objects. Yet here you are basically claiming (unless I misunderstand you) the opposite.
"Dynamic languages make testing easier" paints with a very broad brush. They have some very nice benefits, but also a lot of edge cases.
* An argument might be an int when you expected a float. This can be addressed by casting things with `float()` at every entry point, but it's more defensive to require a float and error out otherwise. Either way, problems only appear at runtime.
* An object might be null and this only gets picked up at runtime - using Rust it's been very nice to have to explicitly say when things can be null.
I've found it can feel like luck when a complex Python program works correctly for real-world input, but it's also very powerful at getting something done. Tradeoffs.
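The first bullet can be made concrete with a tiny sketch (`scale` is a made-up example): defensive coercion at the entry point papers over some type confusion, but every genuine failure still surfaces only at runtime.

```python
def scale(x):
    # Coerce defensively at the entry point...
    return float(x) * 1.5

print(scale(2))      # 3.0: an int is silently accepted
print(scale("2"))    # 3.0: even a numeric string sneaks through
try:
    scale("two")     # ...but this still fails only when the input arrives
except ValueError:
    print("ValueError surfaced at runtime, not before")
```

A static language would reject the string arguments at compile time; here the bad call sites are invisible until exercised.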
> An object might be null and this only gets picked up at runtime - using Rust it's been very nice to have to explicitly say when things can be null.
This point actually frustrates me greatly about C# 7. At some point, there was talk about non-nullable references being introduced into the language, which has been put on hold for at least another version. Personally, I think that feature should have been in the language since version 1. Every runtime error due to a null reference is one too many.
I have thought about creating a generic struct with a single member that blows up if you try to insert null. Something like this:
public struct NotNull<T> where T : class
{
    private readonly T val;

    public T Value
    {
        get { return val; }
    }

    public NotNull(T v)
    {
        if (v == null)
            throw new ArgumentNullException(nameof(v));
        val = v;
    }
}
However, you can still create a class FOO with a member NotNull<T> xxx and leave xxx at its default value, default(NotNull<T>), whose inner reference is null. It can then be passed from one method to another until you try to read it, so the struct gives basically no guarantee.
I think this is why they pulled non-nullable types from C#: they couldn't figure out what to do in this case. If I were them I would force the constructor of FOO to blow up on exit if xxx is not populated. That would help a little bit, although it still causes problems if a method is invoked from the constructor.
So yeah, it's a tough problem. Maybe they could add a check that throws an exception whenever a field of type NotNull<T> is read uninitialized? That way you're still possibly in trouble when reading a field, but at least you can be confident that if you receive a method parameter of type NotNull<T>, it's not null.
I also thought about solutions like this for a while, but frankly, every option sounds like an attempt to fix a leaking dam with duct tape.
Things like this have to be implemented properly. The problem is that this ties into the core architecture of the platform, not only the language - references are basically the most important implicit primitive there is in the CLR (implicit because not even the IL handles them like the other datatypes).
I'm not very informed on the subject, but I suspect that without a proper way to declare locals and members non-nullable in the CLR, the matter would get rather complicated - there'd be no possibility to communicate the requirement through method signatures, for example. And since you could call pretty much any C# method from other languages, I'd expect various problems there.
Of course, one option would be to handle the initialization and assignment checks only in the C# compiler (I guess that would be much easier) and simply generate ArgumentNullException checks in every method, but I suspect that would be a problem for performance reasons.
You are correct. The problem here seems to be that the OP's tests depend on implementation details, which of course shouldn't happen. Presumably this wouldn't occur in a language where you could hide these implementation details (private attributes/methods, etc). That said, Python doesn't force you or encourage you to write your tests like this. Just because it's possible to access the implementation, doesn't mean you should.
Tests always rely on implementation details. The implementation details are what you're testing, after all. The problem is relying on the wrong ones.
You can hide all the details you want in the language, but unless it's a pure function, there are side effects that can be relied on improperly. Often this is precisely because of hiding implementation details. A GenerateCert function might read from a config file. It hides the detail that the config file exists, but then to test it you need to create the file in the right spot.
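A sketch of that situation in Python (`generate_cert` is hypothetical, mirroring the GenerateCert example above): the config-file dependency is "hidden" inside the function, yet the test is forced to know about it anyway.

```python
import json
import os
import tempfile

def generate_cert(config_path="app.json"):
    # Hypothetical function: the dependency on a config file is an
    # implementation detail hidden from the caller's signature.
    with open(config_path) as f:
        cfg = json.load(f)
    return "cert-for-" + cfg["host"]

# The test cannot avoid the hidden detail: it must materialize the file.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump({"host": "example.com"}, f)
assert generate_cert(path) == "cert-for-example.com"
os.remove(path)
```

Information hiding didn't decouple the test from the implementation here; it just made the coupling implicit.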
Yes, testing Add(int, int) is easy in any language. No one's struggling with those tests.
But writing Python is a pleasure :( And writing the same thing in Go feels like wasting my wrists for nothing, especially for throwaways and simple web code...
Maybe one could write Python and turn on a "strict" flag for the module (project?) where you'd have to fill in the types and it becomes Scala-like.
Is there a runtime that enforces them? I've certainly found them useful in restricting autocompletion, but I've still been able to feed in accidental Nones.
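No, there isn't: CPython treats annotations as metadata only, which is easy to demonstrate:

```python
def add(x: int, y: int) -> int:
    return x + y

# CPython does not check annotations at runtime:
print(add("a", "b"))   # prints "ab" despite the int hints
```

Enforcement comes only from external checkers like mypy, run before the code ships, not from the interpreter.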
Scala, where code you've written 6 months ago really does look like hieroglyphics to you or to anyone else right now who doesn't know Scala intimately ;)
I would agree that the Elixir community is currently very homogeneous and therefore very pleasant. To some extent, that is true of many new langs and their early communities. I disagree that this is a premature strike against it. I say enjoy it (and work within it) while it lasts, if you can.
Not sure why the wink is there :-) but the Scala codebase I'm working on is 3 years old already and it is the cleanest codebase I've ever worked with. Maybe my experience is not representative of the whole ecosystem, I have certainly seen really messy Scala codebases in my ~5 years of experience with it, just like I have seen for other languages too.
However, Scala is the perfect example for my opinion: people perceive it as a TIMTOWTDI language, but that's because it got an influx of developers from other communities, the most representative IMO being Java, Ruby, Erlang and Haskell. After all, a determined Java/Ruby/Erlang/Haskell developer can continue to program Java/Ruby/Erlang/Haskell in any language, especially if that language is expressive enough to allow it.
Having such an influx gave rise to different opinions and mentalities of how to do things. Speaking of the web crowd, there's a big difference between Scalatra (see scalatra.org), which appeals to a Ruby crowd and relies on untyped and unsafe APIs with baked-in mutation, versus Http4s or Finch, which are very FP and typeful, versus Play Framework, which takes a more balanced approach, being like a Rails framework (including the kitchen sink), but with sanity and types.
And the Scala community is actually on a convergence path right now, due to better books, tools and libraries. Which for me actually doesn't matter that much, because in Scala I can rely on the compiler such that my code is not so dependent on learning and applying best practices, when compared with other languages.
I was NOT saying that Elixir is pleasant because it is unpopular. I am saying that this might be a factor. I suspect that in such instances the social aspect plays a larger role than we give it credit, especially when dynamic typing is involved, because with dynamic typing you rely on agreed-upon best practices that the community must push forward, much more than static language proponents need to. And personally I think that a language's technical merits should be judged after it gets popular enough to see an influx of developers with strong opinions from other languages.
E.g. in a static language a Maybe/Option value forces you to deal with the possibility of a value missing, whether you want to or not, while also serving as documentation that's always up to date. Whereas in a dynamic language you have to rely on documentation (often stale, because it's the last thing people update), soft community rules, taboos, etc., in order to do the right thing.
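You can approximate the Option discipline in a dynamic language with an explicit wrapper, but nothing in the language forces callers to use it, which is exactly the point about relying on convention. (`Some`, `Nothing` and `find_port` below are illustrative, not a library.)

```python
class Some:
    def __init__(self, value):
        self.value = value
    def get_or(self, default):
        return self.value

class Nothing:
    def get_or(self, default):
        return default

def find_port(cfg):
    # Returns an explicit "maybe" instead of a bare None,
    # so the caller must say what happens when the value is missing.
    return Some(cfg["port"]) if "port" in cfg else Nothing()

print(find_port({}).get_or(8080))               # falls back to 8080
print(find_port({"port": 9000}).get_or(8080))   # uses 9000
```

In Scala or Haskell the compiler rejects code that ignores the missing case; here the wrapper is only as strong as the team's discipline in using it everywhere.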