The vast majority of people using TypeScript in their projects won't run into the deeper complexity of the type system. For the average TS user the only visible impact of features like template literal types and recursive conditional types will be fewer bugs because maintainers of popular libraries used them to provide better types.
You don't have to personally understand TS's more complicated features to benefit from using TS in your project. I hope they continue adding richness to the type system so I can eliminate more bugs from my code.
Whenever TypeScript is compared with JS, people say you get fewer bugs in TypeScript. I am honestly curious to see some evidence.
In my opinion application bugs are usually in the flow itself and have nothing to do with wrong types. I work a lot with both plain JS and TypeScript projects. I used to like TypeScript. But not anymore. With TypeScript, applications tend to become bigger and more complex. Most devs using it have never seen vanilla JS and are more Java-minded than prototype-minded.
IDE code completion works better for typescript. For sure. But is that really essential? Sounds a bit like the typical Java developer who works with giant spring applications containing classes with hundreds of methods spread in many files in a 10 layer deep folder structure.
> bugs are usually in the flow itself and have nothing to do with wrong types
I'm not sure what you mean by "in the flow itself" exactly, but just yesterday a teammate of mine was using one of our libraries in a small project without TS. They first called getThing(), which might return null, and then passed the result to doSomethingWithThing(), which did not expect a null. That sounds like a "flow" problem to me, and TypeScript literally would've warned them of it.
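To make that concrete, here is a minimal sketch of the situation (the function names are taken from the story above; the actual signatures are my guess):

interface Thing { name: string }
const things = new Map<string, Thing>()

function getThing(id: string): Thing | null {
  return things.get(id) ?? null // may be null if the id is unknown
}

function doSomethingWithThing(thing: Thing) {
  console.log(thing.name)
}

doSomethingWithThing(getThing("abc"))
// error TS2345: Argument of type 'Thing | null' is not assignable to parameter of type 'Thing'.

The fix is a one-line null check, and the editor points at the problem the moment you type the call.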
(Since this particular combination of functions is likely to be common, we added a runtime check for people who aren't using TypeScript now.)
> Most devs using it have never seen vanilla JS and are more Java-minded than prototype-minded.
I think you might be underestimating the number of people who strongly dislike Java's type system but very much appreciate TypeScript. They're very different, and hence it's very much possible to like the latter for reasons that do not apply to the former, or to dislike the former for reasons that do not apply to the latter.
function isAuthorised(user: User): user is AuthenticatedUser {
  return user.isLoggedIn // or whatever
}
As long as doSomethingWithThing accepts only AuthenticatedUser, yes, it can be caught. You can also use discriminated unions, which TS narrows implicitly without type guards.
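For what it's worth, a minimal sketch of that discriminated-union approach (the shapes here are made up):

type Guest = { kind: "guest" }
type AuthenticatedUser = { kind: "authenticated"; name: string }
type User = Guest | AuthenticatedUser

function doSomethingWithThing(user: User) {
  if (user.kind === "authenticated") {
    // TS narrows `user` to AuthenticatedUser in this branch; no separate type guard needed
    console.log(user.name)
  }
}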
If that action has a dependency on a user (an authenticated user or a user with a specific permission), it's good practice to establish this dependency at the code level as well.
"doSomethingWithThing" probably does IO, and the changes need to be traced back somehow (e.g. a ModifiedBy column in a DB). You can solve this context problem through a DI container, or by direct parameter passing, but whatever you do, there will be a layer which can expose the requirements as types.
I'm not a know-it-all who'll dictate how you need to structure your program on HN, so what I stated above is just me thinking out loud, but let's say it's up to the programmer to make everything safe-r with the tools available, be it runtime checks or the type system.
I think GP may have meant to say that whatever code is _calling_ doSomethingWithThing() there should only accept AuthenticatedUser. It hoists your original check to the calling function’s signature.
I agree with you that this doesn’t really solve the flow problem though: someone still has to decide that the function only receives AuthenticatedUser and not Guest. That’s not a net gain, it’s just shuffling responsibility around, and it could be a net loss if the calling function does “a bunch of identical things for any user type” and then “this one extra thing if user is authenticated.” But I’m not a Typescript user or a big fan of complex type systems either.
There's a correlation between lines of code in a file and bug count. Does the typing outweigh this?
If all the time spent writing all those types was spent debugging, would TS win in actual development time?
If you're writing good unit tests already, why are you running into so many type issues?
If you already have to spend time writing tests, why spend more writing types?
TS types aren't really documentation. For other devs to use your stuff, you need JSDoc comments to actually explain things. Why write all those types too?
If I'm writing a quick and dirty project, it feels like TS types help a little, but if I'm following best practices on an important project, TS seems like pointless ceremony that slows things even more.
I don't have numbers on that, though I think a sibling comment did refer to some.
I do have my intuition though. (One of these days I'm going to keep track, for a week, of the mistakes TS catches early for me.)
So yes: I think the typing lowers the correlation between lines of code and bug count, and that that outweighs the bugs introduced by the additional lines. (It is similar in that regard to many unit tests, I'd say.)
As for unit tests: it's not that I write unit tests first and add types later. The types are written as I code, and they allow me to not write some of the unit tests I'd written otherwise. The upside is that they're easier to write and easier to keep aligned with the code than unit tests.
So yes, I still have to spend time writing tests, but less of it than without types, and I save more time than the writing of types costs me.
> TS types aren't really documentation. For other devs to use your stuff, you need JSDoc comments to actually explain things. Why write all those types too?
I don't understand what you mean by "write all those types too"? Yes, I still have to write documentation, but just iterating the types is not documentation?
> I don't have numbers on that, though I think a sibling comment did refer to some.
I've read that paper in the past and it doesn't actually answer most of the questions I asked above. It only says that X bug existed in some prior git commit. Maybe the dev caught that bug and fixed it before the PR with the commit was merged. Maybe it was caught in code review. Maybe it was theoretically possible within the function, but not within the actual program's use of that function. Maybe the time spent catching those bugs was less than the time it would take to add types.
> they allow me to not write some of the unit tests I'd written otherwise
A `typeof` assertion or similar is hardly more work and continues to function once the TS types have been stripped. If you expect to interact with the outside world, then you must test against unexpected types. If not, then good docs are still better (see below). Meanwhile, in every real-world TS project I've worked on, you wind up with tons of "template soup" where devs spend tons of time trying to find out which variant makes the type checker happy (or just giving up and slipping in an `any` type)
> Yes, I still have to write documentation, but just iterating the types is not documentation?
I have a function that takes a string and returns a boolean. What does it do?
It's likely that I can pass it any string, but there's a strong possibility that the function can't handle any random string. Does that boolean mean it's a test, that something was successful, or something else? What about side effects?
By the time you're done documenting this, when someone glances at the docs, they'll probably not worry very much about the types because they'll be obvious. Why write up a bunch of complex types when a simple, human-readable doc string does types and so much more?
> I've read that paper in the past and it doesn't actually answer most of the questions I asked above.
OK, well I still don't have the numbers, so we have nothing better than intuition to go on regarding whether it saves time/improves quality or not.
> A `typeof` assertion or similar is hardly more work and continues to function once the TS types have been stripped.
Yes, and if a typeof assertion is enough, then you don't need any additional syntax in TypeScript either. But a `typeof val === "object"` check doesn't tell me a whole lot.
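To illustrate the difference (hypothetical shapes): a `typeof` check only narrows to "some non-null object", whereas a declared interface tells the tooling exactly which properties exist.

function handle(val: unknown) {
  if (typeof val === "object" && val !== null) {
    // all we know here is "non-null object"; the compiler knows nothing about its properties
  }
}

interface ApiUser { id: string; email: string }

function handleUser(user: ApiUser) {
  user.email.toLowerCase() // the editor knows this property exists and is a string
}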
> If you expect to interact with the outside world, then you must test against unexpected types.
Agreed. That said, with TypeScript, you only have to do it once, at the point where you interact with the outside world. Once I've verified that e.g. my API response contains all the properties I expect, then I can pass it on to any other function in my code safely. Whereas without TypeScript, I have to be aware at the points where I access those properties that the original source of that value might have been the outside world, and to explicitly verify that it looks as expected. (Or alternatively, I need to still verify the object at the boundary, but have to manually know what properties of it are accessed in the rest of my codebase.)
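Roughly what I mean, as a sketch (names and shape are invented):

interface ApiResponse { id: string; total: number }

// Verify once, at the boundary with the outside world.
function parseApiResponse(raw: unknown): ApiResponse {
  if (
    typeof raw === "object" && raw !== null &&
    typeof (raw as any).id === "string" &&
    typeof (raw as any).total === "number"
  ) {
    return raw as ApiResponse
  }
  throw new Error("unexpected API response shape")
}

// Everything downstream just takes ApiResponse and needs no further runtime checks.
function formatTotal(resp: ApiResponse): string {
  return `${resp.id}: ${resp.total.toFixed(2)}`
}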
> Meanwhile, in every real-world TS project I've worked on, you wind up with tons of "template soup" where devs spend tons of time trying to find out which variant makes the type checker happy (or just giving up and slipping in an `any` type)
Yes, I've seen that happen too. I won't argue that you don't have to learn TypeScript; you do, and if you do not (want to) put in that effort (or are unable to), it might be counter-productive. In fact, I advised another team in my company to move off of TypeScript for that very reason.
> By the time you're done documenting this, when someone glances at the docs, they'll probably not worry very much about the types because they'll be obvious. Why write up a bunch of complex types when a simple, human-readable doc string does types and so much more?
Yes, type annotations are not a replacement for documentation. They help your tooling help you. So the reason to write up a bunch of complex types (well, preferably simple types most of the time, of course) is that your tooling can help catch mistakes early - I'm not working off of documentation most of the time. I read it once, refer back to it every now and again, but more than that would be a massive waste of time. My memory is a major asset in being able to quickly type out a bunch of code, but my tooling helps me by removing the need to memorise some things.
> There's a correlation between lines of code in a file and bug count. Does the typing outweigh this?
This is where the famous 'compiler' steps in and catches your typos and errors with "types" at compile time.
> If you're writing good unit tests already
Compilers are not fragile. Unit tests are fragile. IMO, a significant factor contributing to the relation between N (lines of code) and B (expected number of bugs) is human error. Humans write unit-tests. For a fun spin, consider the fact that "more unit-tests" means "more lines of code" and thus "more bugs", but now in your "test suite".
[p.s. fragility above refers to the inevitable drifts between the original test-subject (and associated test code) and subsequent changes to the codebase that require updates to the test codebase.]
Of all the major types of programming errors, types only help with the least impactful one, the most obvious one, and the one for which there are other tools available.
In the end, the dynamic vs static typing argument has likely been going on since before most HN users were born (since at least the 50s). I suspect we aren't going to reach a definite conclusion today either.
> In the end, the dynamic vs static typing argument has likely been going on since before most HN users were born (since at least the 50s). I suspect we aren't going to reach a definite conclusion today either.
Sure thing, but I wanted to insert a few facts informing your assertive OP regarding "lines of code" and "unit-tests".
p.s. "worst bugs"
The worst bugs in my 3 decades career have involved reliance on a broken test-base "verifying" widely shared code in an evolving codebase.
This is a major point: "caught by JS runtime" means you're only able to see it when you actually run your application and exercise that code path - or worse, that your user does so. Where the alternative is your editor literally indicating the error the moment you write it, the fix costing you so little time that you can fix them practically subconsciously.
That's interesting, because you mentioned the JS runtime earlier. In any case, unless you'd argue that you never get a single runtime error, I'd encourage you to take a moment every time you encounter one to consider whether it could have been caught by TypeScript. In my experience, that is the case more often than you might think.
Types define explicit boundaries for application data structures, while tests verify software behavior at run-time. They both provide unique benefits that cannot be achieved with either alone, and they in fact greatly complement each other.
Types drastically reduce the set of possible inputs a function can receive, thus greatly reducing the number of tests that need to be written to achieve the same level of safety. Types are also much more precise and less brittle than tests; they establish clear contracts that reflect the actual structure of the program, whereas tests provide assurance that some minimum subset of behavior is correct enough for the software to meet the expectations of users.
Ultimately, a lack of types is a form of technical debt because the data structures do exist whether you spend the time to formally acknowledge them or not.
> But is that really essential? Sounds a bit like the typical Java developer who works with giant spring applications containing classes with hundreds of methods spread in many files in a 10 layer deep folder structure.
It's not only your stereotypical Java developer who has to maintain large code-bases. There are many more monolithic code-bases than there are micro-services. Also, you probably have a much better memory than me, but I, like many others, have trouble working with random objects with no contracts.
Additionally, try using a library like io-ts and you also get validation against types.
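If I remember the io-ts API correctly (io-ts 2.x together with fp-ts), the gist is that you define a runtime codec once and derive the static type from it, something like:

import * as t from "io-ts"
import { isRight } from "fp-ts/Either"

const User = t.type({ name: t.string, age: t.number })
type User = t.TypeOf<typeof User> // static type derived from the runtime codec

function readUser(raw: unknown): User | undefined {
  const result = User.decode(raw) // validates the value at runtime
  return isRight(result) ? result.right : undefined
}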
I have worked with Javascript for frontend quite extensively since 2005 and with Typescript for about half a year now. In the general case, I don't have strong opinions about static vs dynamic typing, or about getting errors at compile time vs runtime.
But my gripe with Javascript has always been that you often don't get any direct errors at all, but instead end up with unexpected values in a different place that you then have to tediously backtrack.
A contrived example to illustrate what I'm referring to:
const some_obj = {day: 2 /* , ... */}
const next_day = some_obj.daz + 1 // typo in the attribute gives you `undefined`; adding 1 to it results in `NaN` (not a number)
const date = new Date(2020, 11, next_day) // produces an "Invalid Date" object
console.log('date: ' + date) // outputs "date: Invalid Date"
In Python, every line after the first would raise a runtime exception and is thus in my opinion much easier to debug.
What I particularly like about Typescript is that I can use as much or as little of it as I deem necessary. E.g. I start off a prototype using the `any` type a lot, and only in future iterations, I start tightening the definitions as the requirements for the project become clearer. So far the overhead has been quite minimal.
The one downside so far: since I learned Typescript by just starting to use it, without a deep dive into documentation, books, etc., trying to grok code that uses the more advanced features has been quite a challenge, especially the type definitions of libraries (which I often prefer over reading the documentation).
> What I particularly like about Typescript is that I can use as much or as little of it as I deem necessary. E.g. I start off a prototype using the `any` type a lot, and only in future iterations, I start tightening the definitions as the requirements for the project become clearer. So far the overhead has been quite minimal.
I started working this way and it was fine, that's how I always worked anyway. Then I started leaning towards working the types first and the code later, and I've found it works better and better for me.
Much of today's programming is "stitching pieces together": pieces from the libraries you are using, from external services you use, from your platform's APIs, and then creating the few extra pieces that are unique to your puzzle.
When you code first, you are directly painting pieces while trying to picture in your head how the final puzzle will look.
When you type first, you are just cutting the shapes of the missing pieces and already assembling the puzzle. Once this is done, you are left with a full picture that has some blank spots, and it becomes much easier to just paint those in.
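A small sketch of what "cutting the shapes first" can look like in practice (everything here is invented for illustration):

// 1. Cut the shapes: types and signatures only.
interface Invoice { id: string; lines: InvoiceLine[] }
interface InvoiceLine { description: string; amountCents: number }

declare function fetchInvoice(id: string): Promise<Invoice>
declare function totalCents(invoice: Invoice): number

// 2. Assemble the puzzle against those shapes; the compiler checks the fit.
async function printTotal(id: string) {
  const invoice = await fetchInvoice(id)
  console.log(totalCents(invoice) / 100)
}

// 3. Paint in the blanks: replace the declared stubs with real implementations.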
> The one downside so far: since I learned Typescript by just starting to use it, ... trying to grok code ... that uses the more advanced features has been quite a challenge.
I share this point of view. For the last few years, this perfectly reasonable and balanced viewpoint was taboo so it's good to see more and more people approaching the debate rationally as opposed to waging a religious war over gut feelings.
Personally, I have no issue with TypeScript purely as a language and I concede that it provides much better IDE code completion than JavaScript. That said, I don't agree that its benefits offset the drawbacks of the transpilation step, the source mapping and the added versioning complexity between JS and TS.
I also miss the way that JavaScript encouraged people to define very simple function signatures (e.g. strings, numbers, plain objects/clones as arguments and return values).
For example, I really liked how the React/Redux community came up with a philosophy around cloning state objects before returning them from functions in order to prevent unexpected mutations later (essentially force everything to be pass-by-value). I think this philosophy does not translate very well to TypeScript which encourages developers to pass around complex live instances (which have their own methods) instead of raw objects and other primitive state representations.
As Alan Kay pointed out, OOP is not primarily about objects: "The big idea is messaging". Instances should communicate with each other via simple interfaces using simple messages; they should avoid passing complex live instances to each other. If an instance has methods, it is a complex live instance and it shouldn't be used for messaging between different components.
> I also miss the way that JavaScript encouraged people to define very simple function signatures (e.g. strings, numbers, plain objects/clones as arguments and return values).
I wish this were true, and maybe it was in some cases, but not broadly. Many "types" used in JavaScript libraries are horribly complex. For example, the type signature of jQuery's `$` function, or moment.js's main function, or `server.listen` in Node.js, etc.
In fact I think the opposite argument from yours could be made with a straight face: not having to declare types encourages more complexity. I don't really agree with that either, though; in my experience this is more a matter of developer discipline and API design skills.
One thing I'm certain of is that if libraries are going to have complex types, I'd much rather have them be explicit and machine-verifiable vs hidden/implicit/hope-the-developer-wrote-a-good-docstring.
I maintain Redux, and I can confirm that TS does _not_ "encourage devs to pass around complex live instances" any more than normal JS does. You can do FP and immutable updates in TS, just like you can do any other code. See our "usage with TS" docs pages for examples:
What you're claiming here does not match the evidence from your links.
For example, in your second link, one of the functions expects a 'complex' instance DispatchProps which has a method toggleOn... This abstraction doesn't make sense conceptually. What is a DispatchProps? It doesn't adhere to Alan Kay's notion of a 'message', it's clearly a structure (it's a complex one because it exposes a method). Components should communicate to each other via messages, not structures.
Also, the method builder.addMatcher(...) accepts a function as an argument - It doesn't seem like an ideal abstraction either. Functions are not messages.
Also, the thunkSendMessage function signature is very complex; the return type is highly convoluted. That's definitely not a message.
Overall, I see a lot more of these complex instances being passed around in these examples than used to be the case with JS when mostly just raw objects were being passed as arguments and returned.
You can already see the complexity seeping into the interfaces. Just a few years of TypeScript is distorting the original philosophy. I remember Dan Abramov was very careful about what to pass into functions and what to return from them and made it a point to encourage cloning objects using the ... spread operator.
Wow. I'm sorry, but you've completely misinterpreted both what I was trying to say, and what those Redux-related APIs do.
A lot of people seem to assume that "using TS" means "must use the `class` keyword and deep inheritance chains", ala Java and C#. Redux, on the other hand, is FP-inspired. Nothing about the Redux core involves classes in any way - everything is just functions, including the middleware API, with an emphasis on immutability.
You brought up Redux's use of immutable state updates, but then said "that doesn't translate well to TS". I was attempting to show that it _does_ translate just fine to TS, because TS lets you write functions and make immutable updates. `return {...state, field: value}` works as fine in TS as it does in JS. I wasn't trying to touch anything about "what a message is".
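For example, a plain reducer with spread-based immutable updates type-checks fine; there are no classes or "live instances" anywhere (this is a generic sketch, not taken from the Redux docs):

interface CounterState { value: number; label: string }

type CounterAction =
  | { type: "increment" }
  | { type: "setLabel"; label: string }

function counterReducer(state: CounterState, action: CounterAction): CounterState {
  switch (action.type) {
    case "increment":
      return { ...state, value: state.value + 1 } // plain immutable update, fully typed
    case "setLabel":
      return { ...state, label: action.label }
    default:
      return state
  }
}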
Having said that, the rest of your observations about the Redux core and React-Redux APIs in this comment show a general misunderstanding of what Redux is and how it gets used. After all, Dan and Andrew came up with the actual React-Redux API, the concept of `mapDispatch` for passing action creators as props to React components, and the thunk middleware. The Redux Toolkit `builder.addMatcher` API is a recent addition, but all it is is syntax sugar for "if the dispatched action is any one of these types, we want to update in response to it", same as if I wrote a multi-condition `if` statement by hand.
None of those have anything to do with TS, and they did not become more complex because we're now using TypeScript. In fact, it's the other way around: the complexity of the dynamic JavaScript behavior actually requires us to write much more complex TS types to capture how the code actually works. (There's a good reason why the React-Redux TS types are insanely complex, and I'm so glad they're maintained in DefinitelyTyped instead of by us! `connect` has so many overloads and different options that affect downstream props values, it's almost impossible to capture that with static types.)
Maybe you also misunderstood my initial comment. I can say for sure that of all the programming trends, there has been almost no discussion around interface complexity. There has been plenty of discussions around FP, dependency injection, type safety and a host of other trendy programming topics but I've never heard anyone point out interface complexity as a problem.
Complex interfaces lead to 'tight coupling'; developers have known for decades that this is bad but we don't seem to be discussing it much anymore. The reason why JSON-based REST APIs became so successful is because it greatly reduced interface complexity (compared to XML-based SOAP) and in doing so, it loosened the coupling between different services.
Until we start discussing interface complexity, nobody will fully realize what the drawbacks of TypeScript are. My main problem with TypeScript is that it is most useful when code quality is low (high interface complexity; tight coupling). That's what I mean when I say that it encourages bad programming practices; the people who find TypeScript most useful are those who tend to produce the worst code in terms of interface complexity.
> For example, in your second link, one of the functions expects a 'complex' instance DispatchProps which has a method toggleOn... This abstraction doesn't make sense conceptually. What is a DispatchProps? It doesn't adhere to Alan Kay's notion of a 'message', it's clearly a structure (it's a complex one because it exposes a method). Components should communicate to each other via messages, not structures.
the `mapDispatchToProps` idea has been in `react-redux` since the very beginning. I don't understand what you are complaining about here. If you prefer, you can ignore this practice and pass the whole dispatch function to the component, and then call `dispatch(actionCreator(params))` directly. Typescript won't get in the way of you doing that.
> Also, the method builder.addMatcher(...) accepts a function as an argument - It doesn't seem like an ideal abstraction either. Functions are not messages.
The addMatcher API is not concerned with sending messages, it is concerned with receiving them. You are specifying what should happen when a certain message type arrives. Hence it seems normal to me to specify that as a function that given the message does some stuff. Since these are reducers, the "does some stuff" part is: given a previous state and an action of type X, compute and return the new state Y.
> Also, the thunkSendMessage function signature is very complex; the return type is highly convoluted. That's definitely not a message.
Of course not. It is a function that sends some kind of message after asynchronously calling some API and getting its response, or another kind of message if the API call failed. That's what the complex type says. The usefulness of thunks is that this notion of "ongoing asynchronous process that may succeed or fail" is explicitly represented (so you can, for instance, cancel it), which you'd have a much harder time representing using just actions and reducers.
> You can already see the complexity seeping into the interfaces. Just a few years of TypeScript is distorting the original philosophy. I remember Dan Abramov was very careful about what to pass into functions and what to return from them and made it a point to encourage cloning objects using the ... spread operator.
I think you are misremembering some things here. Dan Abramov used to advocate for a strict separation between presentational and container components [1], including the use of `connect` in exactly the same fashion you are criticizing in your first point.
Likewise, cloning objects using the spread operator was never a goal in and of itself. Dan advocated for it as an easier way to avoid mutating the state. Not mutating the state was the point here. Redux-toolkit accomplishes this point by using immer, a library that conceptually gives you a copy of the state so you can just mutate and return that instead (with the advantage that non-modified parts are not cloned).
Also, you can just not use that and return spread-operator-cloned objects from your reducer functions instead. Everything will keep working normally (and passing the type checks) in that case.
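To make that concrete, here is roughly what both styles look like side by side in Redux Toolkit's `createSlice` (a sketch from memory, so treat the details loosely):

import { createSlice, PayloadAction } from "@reduxjs/toolkit"

const counterSlice = createSlice({
  name: "counter",
  initialState: { value: 0 },
  reducers: {
    // "mutate" the immer draft; immer turns this into an immutable update
    increment(state) {
      state.value += 1
    },
    // or return a freshly spread object; both styles type-check and work
    setValue(state, action: PayloadAction<number>) {
      return { ...state, value: action.payload }
    },
  },
})

export const { increment, setValue } = counterSlice.actions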
As someone who tests their application extensively, I don't get the bug argument either. Sure, some bugs are found at compile time. Actually, it's not bugs the compiler finds; rather, it's inconsistencies between the types and the rest of the code.
But when testing properly, these should be found later anyway. Or am I missing something?
It's a given that you can reduce the number of bugs in your app eventually, with any method/tool you can imagine, given enough time. That'd still be true if you were designing using sticks and stones on sand. Why not use another tool that helps you fix bugs more easily, faster, and sometimes spares you from writing them at all?
Why so aggressive? I asked a simple question out of interest.
Again, I think there's a difference between a bug and a coding error. A bug is a mismatch between a software's function and the user's expectation.
And a type error inferred at compile time is just a hint that some code is incorrect. It may lead to a bug, but at the point of compilation, it's not a bug.
oh I didn't realize I sounded aggressive, sorry if it came off that way, cheers!
I'd like to point out that if you agree that typescript prevents stuff that may lead to bugs, without getting into the semantics of what a bug is, you may agree that it's useful.
Inconsistencies between the types and the rest of the code _are_ bugs. I'm willing to bet my house that "TypeError: object doesn't support property or method" is the #1 bug in JavaScript's sphere.
Real unit tests (not integration tests) are a poor man's types. Usually when you have very small unit tests asserting the inputs and outputs of methods, you would simply assert those with types in more powerful languages.
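A toy illustration of the overlap (names made up): a shape-asserting unit test vs. the same assertion encoded in the signature.

// Without types, a small test mostly pins down shapes:
//   expect(typeof formatPrice(12.5)).toBe("string")
//   expect(() => formatPrice("12.5")).toThrow()

// With a signature, those assertions are re-checked on every compile:
function formatPrice(amount: number): string {
  return amount.toFixed(2)
}

// formatPrice("12.5") // rejected by the compiler instead of needing a test case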
Haskell has pure functions everywhere and a much stronger typing system than TS, but unit testing is still considered important.
Someone else will be changing that code later. Knowing (for example) that the parameter is a string will be zero help in knowing what is being done with that string. It also won't help know what edge cases the code was handling.
Also, types give a false sense of security. A new project was integrated into an old website (one with legacy dependencies, but doing millions in transactions every day, so unchangeable).
It broke and they couldn't figure out why. They had the TS types for functions, but an old framework (prototype or moo iirc) overrode js built-ins with incompatible versions. Later they got bitten again when that code changed the object types. If they'd been writing js, they would have written dynamic checks from the start, but it's easy to forget that once you compile, it's just js.
You can always describe what has been done with the string by boxing it into an expressive type. Your other example is that when dealing with bad external code you need to do additional checks. That's a specific scenario, and every language has to deal with it in the same way. In more powerful languages you could infer which checks need to be done from the type and do them automatically for external code.
The assumption here is that there are unit tests that cover the areas containing these sorts of bugs.
It can prove to be false because many projects don't have sufficiently good code coverage.
Of course, it's also possible that there simply are no unit tests at all.
In those circumstances, TypeScript might just result in fewer bugs overall. While one could argue that unit testing should be commonplace, reality doesn't always live up to that standard.
Let's say you refactor code, then you might be inclined to change a type signature somewhere, which means you'll adjust your program until the compiler isn't complaining anymore. You'll work until the compiler is silent.
The compiler being silent here is only an indication that your types and code match, not that your assumptions about the program match its outputs (e.g. the user interface).
However, unit tests are usually written to assert assumptions. Or to "freeze" certain parts of the code so that these mismatches don't happen.
In contrast: types live within the source code, and depending on your way of thinking about types, many programmers will not see them as "declarative unit tests".
In practice, this means that you sometimes get "surprised" by some unit tests that are failing after a refactor. That's good because it sheds light where you've made mistakes when changing your code.
To some degree, of course, this is true for types. E.g. they will always help you to point out when two APIs mismatch. However, a test is usually contained within a unit, with a clear description motivating its existence. It's much harder to accidentally change a test for the worse than it is to change a typed function signature for the worse.
Lastly, very often functions crucially depend on input values and not just their types. So even if, in a dynamic language, you pass b=0 into div(a, b) { return a/b }, it's a valid type; you should still test for values, as in this case you can't divide by 0.
So in many cases, even with typed inputs, it'd still be necessary to write unit tests.
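In TypeScript terms, the divide-by-zero example looks something like this (note that in JS the call doesn't even throw, it just quietly returns Infinity):

function div(a: number, b: number): number {
  return a / b
}

div(1, 0) // type-checks fine, but yields Infinity at runtime

// The value-level rule still needs a runtime guard plus a test for it:
function safeDiv(a: number, b: number): number {
  if (b === 0) throw new Error("division by zero")
  return a / b
}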
>However, unit tests are usually written to assert assumptions.
As you say, types are a more declarative (rather than procedural) way of asserting assumptions.
>In practice, this means that you sometimes get "surprised" by some unit tests that are failing after a refactor. That's good because it sheds light where you've made mistakes when changing your code.
>To some degree, of course, this is true for types. E.g. they will always help you to point out when two APIs mismatch. However, a test is usually contained within a unit, with a clear description motivating its existence. It's much harder to accidentally change a test for the worse than it is to change a typed function signature for the worse.
I'd argue it's just as true for types as it is for unit tests, if not more so. You can get "surprised" by the compiler when refactoring methods in just the same way. I'd argue you get more information with types, because it hooks into the LSP and identifies everywhere in your code that now fails. In contrast, a unit test only tells you that unit test failed. It is still up to you to find the actual locations in the code. In this way, unit tests can be thought of as a parallel program. This isn't true of types, which are directly embedded into the program. Put concretely, if you removed a property on a type then the LSP and tsc would tell you every single place that property is missing.
That's the crux of my argument. Types have better tooling, which helps with both the "declarative unit test" part and, equally importantly, with refactoring. You can check for more things with types, like whether your `switch` statement exhaustively goes over every option or is missing some. Your tools also understand the types (like the LSP), which helps with refactoring.
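The exhaustive `switch` check, for instance, can look like this (a standard pattern, with made-up types):

type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number }

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2
    case "square":
      return s.side ** 2
    default: {
      // If a new variant is added to Shape and not handled above,
      // this assignment stops compiling.
      const unhandled: never = s
      return unhandled
    }
  }
}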
I'd also argue that types are usually easier to understand than tests. While good unit tests can provide good examples on how to use an API, types exhaustively tell you what a thing is and what it is capable of.
>Lastly, very often functions crucially depend on input values and not just their types. So even if, in a dynamic language, you pass b=0 into div(a, b) { return a/b }, it's a valid type; you should still test for values, as in this case you can't divide by 0.
That's a great point. There's absolutely still a place for tests. Types are not a like-for-like replacement, and each has its strengths, but there is significant overlap. If one wanted to exhaustively enumerate the options in a `switch` statement and guard against division-by-0 issues, then they would need both.
I've seen the same claim made about the more obscure features found in application frameworks, and in both cases the problem is the same: there will always be that one guy on your team who takes delight in finding a complicated and mentally burdensome solution to an otherwise simple problem, which requires everyone on the team to at least be aware that certain constructs exist.
It's great that Typescript is catching on, and we're starting to adopt it in my team, but I hope the TS people don't think they can keep adding new features to it in perpetuity, otherwise it's going to turn into C++.
That's like saying any country that talks about how democratic it is must be like the Soviet Union. Sometimes you have to actually look at the details behind the claims. There are specific problems with C++, and its features are often not orthogonal enough to use independently. That doesn't mean that it's impossible to add orthogonal features that don't complicate a language for people who don't use them, it only means that C++ has failed to.
How could typescript be approaching C++ complexity when it's just JS with types? The advanced typing features don't affect the semantics of programs. They just make the type system more expressive so more things can be validated by the compiler.
In other words, typescript doesn't make the language any bigger. If you're struggling to understand typescript code you could just ignore the types, and you're back with regular JS. This is different than the issue some people have with a big language like C++, where the sheer number of features can make it hard to get a grip on unfamiliar codebases.
You’re right about typescript, but in my opinion the same thing could be said about C++. You can say that, superficially, C++ is also just “C with types”. You can write “C in C++” and ignore the types, as many developers do, and risk runtime crashes or errors.
I have to deal with this sort of code on a daily basis. When I talk to the people who wrote that code, they mostly shrug and say that C++ was too complicated for them, they just used what they know.
The flip side of this is people (like me) who know C++ pretty well, keep track of most of the advanced features, and use them semi-regularly. Unfortunately, this effectively creates a barrier to entry for those same other devs. You can blame those other devs all you want for not keeping track of the language, but what if they don't want to and have other, more important things in their lives than keeping track of the language? So I've come to the conclusion that, in my opinion, the problem is you (or in this case, me), not them. If you want your code to be long-lived and maintainable you have to take the people around you into account. I've come to consciously limit the number of C++ features I use in code other people see to a bare minimum.
Yeah, you're absolutely right. I tried to equate typescript adding types to javascript with … all the things C++ adds on top of C (and then some), but the analogy doesn't really stand if you take my words on face value.
Well, for one thing, I want my code to pass TypeScript's type checker. So I can't just ignore the types and go back to regular javascript. I've got to understand the sometimes complicated static typing semantics.
The TS devs adding support for more advanced types in tsc doesn't make it any harder for your code to pass type checking. That just depends on (1) how strictly you have configured the type checker and (2) how specific you make your type definitions.
The more advanced types just allow more specific type definitions. So if you try to use them and you can't get them to work... you can just avoid the advanced features and leave the type definitions more vague. Just like how, if you can't get the basic types to work, you can always use "any".
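A tiny example of what "more specific" buys you, and what staying vague costs you (made-up names):

// Vague: compiles, but the tooling can't catch much
function setAlignment(value: any) {
  /* ... */
}

// Specific: only these strings are accepted, and the editor can suggest them
type Alignment = "left" | "center" | "right"
function setAlignmentStrict(value: Alignment) {
  /* ... */
}

setAlignment("centre")           // silently accepted
// setAlignmentStrict("centre")  // rejected at compile time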
And now you are changing your claim, showing, quite frankly, that I was right. Your original claim was bogus.
Furthermore, your new claim is also nonsensical. The fact that I don't have to use all the advanced features of my languages doesn't alter the complexity of the language.
I'm not sure what you mean by my claims. I'm just saying there is a big difference between adding features to the language and adding features to the type system.
You said "How could typescript be approaching C++ complexity when it's just JS with types?" The implication being an assertion that typescript cannot be getting as complicated as C++, and that's simply isn't true.
Given that the type system is part of a language, I have no idea what you mean by trying to draw a big distinction between adding features to a language and adding features to a language's type system.
Companies issue shares. These shares are listed as a liability on the company's balance sheet. Each share represents a liability to the company because it entitles its holder to a share of any dividends paid by the company.
The intrinsic value of a share is the net present value of this future stream of dividend payments, not its current market price. OP's point is that the market price of a share can be significantly above its intrinsic value in periods of irrational exuberance, such as now, creating "paper wealth" that doesn't really exist and which will evaporate when the speculators head for the exit.
The present value of future dividends is highly dependent on the terminal state of the company, and the future of the economy. So I don't find wild fluctuations in stock prices to be proof that the market is inefficient or over/undervalued because the far future is very uncertain.
Every stock chart is an invitation to assume false precision, because unlike a scientific measurement there is no explicit +/- range. But the true value must have a range of uncertainty, and it can easily be many orders of magnitude.
Also, every time I see the phrase "irrational exuberance" I am reminded that while there was a bubble in the late 90s, at the time Greenspan famously was worrying about the market in public, the Dow was around 5,000 or so IIRC, a long time before the peak.
That's a good question. I've given some thought to how we might build drivers in languages that don't support lambda functions and unfortunately, there don't seem to be any good answers.
The actual underlying S-expression syntax for ReQL lambda functions is pretty ugly by itself and I'm glad that so far we haven't had to expose it. The python `lambda x,y: x + y` actually gets compiled to something like the following: `(func [1,2] (add (var 1) (var 2)))`. Variable references have to be constructed manually no matter how you do it. With native lambda functions though you can at least hide those constructions behind the scenes and bind them to the function's formal arguments.
Fortunately though, both C++ and Java, the two "traditional" languages we would most like to support, have either just introduced lambda functions (in C++11) or are about to (in Java 8), rendering the point moot.
It wouldn't be hard to have a very shallow translation from a lisp-alike syntax - to borrow Clojure's: "(fn [a b] (+ a b))". Named parameters translate trivially to (var n).
Thanks for the notes gsibble, please let us know anywhere we can make the docs clearer, especially anywhere the behavior has changed since 1.3. We appreciate your patience with the breaking changes.
As for the changes to `connection.run` and `query.run` please note that in the Python driver you can call `.repl()` on any newly created connection to set that connection as a global default. This should replicate the old behavior.
We decided that doing this by default was unsafe as it relied on global state modified any time a new connection is opened. Even so, we appreciate the convenience and kept the behavior in the form of the `.repl()` for when you're trying out RethinkDB in a simple script (or the Python REPL) that only creates one connection and doesn't use threads.
Currently RethinkDB has official drivers for three languages, Python, JavaScript, and Ruby. I hope you understand why it would be difficult for us to support many more than this. Eventually we're counting on support for 3rd party drivers from the community. In fact, there are already a few early community drivers from some intrepid contributors including for Haskell and Go.
We've had lots of requests for more drivers and lots of offers to build them, but we've asked our volunteers to hold off while we revamp the driver interface to make it significantly easier to build drivers for RethinkDB. Having written the first JS driver myself and the new version I can attest to the much greater ease of doing so with the new API.
The next release (1.4) due very soon will include these changes and a driver development kit to support 3rd party efforts. After that I'm sure you'll see a PHP driver emerge very quickly.
The distinction between query.run(conn) and conn.run(query) is very fine indeed. There has been much discussion about this, and both solutions have their merits. At one point we even supported both, but decided the simplicity of having only one solution merited dropping the other.
Creating a new connection involves opening a TCP connection and sending a message to validate the driver to the server, altogether a few round trips. There is a small bit of per connection state stored on the server but since RethinkDB uses a custom coroutine implementation for concurrency support this does not amount to the overhead of an independent OS thread for each connection.
Invoking the query with a specific connection object is thread safe. There is a feature designed to help REPL users that stores the last connection in global state that is slated for removal in the upcoming release (1.4) that is obviously not re-entrant.
It would make sense to have connection pooling, especially in the python driver, where connections block on requests. This is an idea we're exploring but it is lower down on the priority list. As it is a fully client-side feature, there is nothing stopping 3rd party driver developers from implementing a solution, though the official drivers will have to wait for other priorities.