I've built a library, Transmutable, which falls into the third category :)
It allows you to use immutable data structures in a mutable-like way, using plain assignments:
https://www.npmjs.com/package/transmutable
(Mutations are not performed directly; they are recorded by an ES6 Proxy, and on commit the object is cloned (a sort of copy-on-write) and the recorded mutations are applied.)
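To make the record-then-commit idea concrete, here is a minimal sketch of the technique (illustrative only, handling top-level properties; this is not the actual transmutable API, which also handles nested paths):

```js
// Sketch: a Proxy records writes instead of performing them,
// and commit() copies the target and replays the recorded writes.
function record(target) {
  const mutations = [];
  const proxy = new Proxy(target, {
    set(obj, key, value) {
      mutations.push([key, value]); // record instead of mutating
      return true;
    },
    get(obj, key) {
      return obj[key]; // simplification: reads don't see pending writes
    },
  });
  const commit = () => {
    const copy = { ...target }; // shallow copy-on-write
    for (const [key, value] of mutations) copy[key] = value;
    return copy; // the original object stays untouched
  };
  return { proxy, commit };
}

const state = { counter: 0 };
const { proxy, commit } = record(state);
proxy.counter = 1; // looks like a mutation...
const next = commit(); // ...but only the copy changes
console.log(state.counter, next.counter); // 0 1
```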
This is a neat idea; I hadn't thought of it, but it could definitely help me bring new developers into the immutable style more easily.
The only issue with it is, of course, the fact that Proxies are ES6 and everyone still has to deal with old browsers.
I know MobX does similar object observation, and it supports IE, but I've never had the time to dig into its codebase. Do you know if something like this could be written in a way that works in IE10/11?
Also, could you elaborate on why it's necessary to use a proxy at all, when you could just make a copy of the object from the very beginning and allow mutations on that copy?
As far as I know, proxies simply will not work in environments that do not support ES6, because they require purpose-built JS engine support (see some comments at [0] as examples).
MobX works by wrapping plain objects and arrays with its "observable" equivalents. Per [1], its object support requires fields to already exist, so it knows how to generate wrapper fields accordingly.
As for copying: as I talked about in the "Immutable Update Patterns" section of the Redux docs [2], proper immutable updates require copies of _every_ level of nesting that is being modified. If you want to make an update to `state.a.b.c.d`, you need to make copies of c, b, a, and state. That's doable, but takes work, and can get ugly if you're dealing with nesting by hand. This leads to frequent mistakes, like assuming that `let newObj = oldObj` makes a copy (it just adds another reference to the same object), or that `Object.assign()` does deep copies (it's only shallow, i.e., the first level). It's one of the most common pain points I see for people learning immutability and/or using Redux.
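To make that concrete, here is what a correct hand-written update to `state.a.b.c.d` looks like with object spread, plus the two mistakes mentioned above (sample data made up for illustration):

```js
const state = { a: { b: { c: { d: 1 } } }, other: 'untouched' };
const newValue = 42;

// Correct: every level along the modified path gets copied.
const newState = {
  ...state, // copy of state
  a: {
    ...state.a, // copy of a
    b: {
      ...state.a.b, // copy of b
      c: {
        ...state.a.b.c, // copy of c
        d: newValue, // the actual change
      },
    },
  },
};

console.log(state.a.b.c.d, newState.a.b.c.d); // 1 42
console.log(newState.other === state.other); // true: unmodified branches are shared

// Common mistakes:
const alias = state; // NOT a copy, just another reference to the same object
const shallow = Object.assign({}, state); // copies only the first level:
console.log(shallow.a === state.a); // true, so mutating shallow.a.b mutates state too
```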
What something like the `transmutable` lib appears to give you is the ability to write perfectly standard imperative mutation code with no extra fluff necessary, even for nested data, and still get proper immutable updates. I'm definitely going to have to play around with it.
In the beginning I used getters/setters, but I switched to Proxy because it allowed for a simpler implementation and more flexibility.
I may consider switching back to getters/setters if Proxy support is really a problem (although getters/setters have their own limitations).
There is also a Proxy polyfill, although it has the same limitation getters/setters have ("properties you want to proxy must be known at creation time"): https://github.com/GoogleChrome/proxy-polyfill
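For context, a rough sketch of why getter/setter-based wrapping has that limitation (my own illustration, not the polyfill's actual code): `Object.defineProperty` can only intercept keys that exist when the wrapper is created, while a real Proxy traps any key, including ones added later.

```js
// Getter/setter-based observation: only keys known at creation
// time can be intercepted.
function observe(target, onSet) {
  const wrapper = {};
  for (const key of Object.keys(target)) {
    Object.defineProperty(wrapper, key, {
      get() { return target[key]; },
      set(value) { onSet(key, value); target[key] = value; },
      enumerable: true,
    });
  }
  return wrapper;
}

const obj = observe({ x: 1 }, (k, v) => console.log('set', k, v));
obj.x = 2; // intercepted: logs "set x 2"
obj.y = 3; // NOT intercepted: a plain property is created on the wrapper
```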
Redux, for sure. Smalltalk had a message-based system like Redux does (with the difference that in Smalltalk this system was object-oriented). But functional programming also existed in 1976.
"build on top of existing, well-defined codebases/APIs." is not necessarily easier than "engineer a complete, well-rounded, extensible codebase/API himself."
I think these are two different skills. Some programmers are better at building on top of existing codebases; some are better at building things from scratch.
I read:"Sometimes you are going to get event bus." Indeed sometimes simple event bus (pub-sub etc.) could be sufficient for replace Redux. EventEmitter is already a dispatcher. And then you can easily listen to each event in your app, like in Redux.
I think the key to scalable apps is good architecture (and the right design patterns), not the framework.
And Redux is good as an educational tool. It encourages people to use one global event bus (the "dispatch" function), it teaches principles of functional programming, and it shows people the advantages of event sourcing and CQRS. I think the greatest value of Redux is that people got educated.
But I don't think Redux (at least Redux alone, without Redux-saga or other solutions) is a good tool for production code. It's too low-level: too many moving parts, and no clear answer to "what should go where". And JS sucks when it comes to immutable code (either the Object.assign / spread mess or the necessity of using a third-party library like Immutable.js).
Additionally, people don't know how to use it properly. Redux codebases are usually much worse than Redux really demands (e.g. people use ugly switch/case despite the fact that the Redux docs include instructions for getting rid of it):
http://redux.js.org/docs/recipes/ReducingBoilerplate.html#ge...
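The pattern from that page replaces switch/case with a lookup table of handler functions, roughly along these lines:

```js
// A reducer factory: a lookup table instead of switch/case.
function createReducer(initialState, handlers) {
  return function reducer(state = initialState, action) {
    return handlers.hasOwnProperty(action.type)
      ? handlers[action.type](state, action)
      : state;
  };
}

const todos = createReducer([], {
  ADD_TODO: (state, action) => [...state, action.text],
  REMOVE_TODO: (state, action) => state.filter((_, i) => i !== action.index),
});
```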
There is one more thing. Developing in HTML5/JS (with all its ecosystem) is freaking fast in terms of developer productivity. You will probably build a full-fledged app in less time using JS than with C++. So you can deliver an MVP to the market faster = profit $$$.
And the initial group of users are early adopters anyway. They are usually eager to try new applications even if they are not perfect.
The problem is "what next?", when users start complaining about speed or other things.
Should the company rewrite the original HTML5-based MVP in C++ to gain speed?
Or should they optimize the existing HTML5 solution? (Many HTML5 apps run with nearly native responsiveness.)
Or something else? Or maybe HTML5 is a bad idea even for an MVP?
How did you come to the conclusion that JS is faster than C++? An experienced C++ developer, or better yet, an experienced C++ developer team, moves "freaking" fast. The only advantage of the HTML/JS stack is that you can get cheap labor from dime-a-dozen web "devs". That's it.
Case in point: look at Atom. Let's assume your "MVP", "profit $$$" and whatnot is somewhat correct. Yet the amount of engineering time and effort it has taken to optimize that tech, to fix things that are trivial in the native world, is staggering. How does that fit into your "profit $$$" equation, when an initial core team of experienced C++ developers would have developed with these issues in mind and created a product that is viable in the same scenarios?
But if you were building such an app in C++, what third-party solutions would you recommend for building a multiplatform GUI?
Qt? GTK? What is used now, and what works/looks good on all three main platforms (Mac/Linux/Windows)?
I noticed you've been downvoted. I find it amusing what expectations some people have about the hardware consumers should be running. While many on here, this being a geek community and all, will have relatively decent systems, most normal people would rather keep a system as long as they can. Not to mention those in poorer communities or countries who cannot afford newer tech.
In fact, it was only this year that I upgraded a work colleague's personal laptop from 2GB to 4GB.
Downvoting is an accepted method of disagreement on HN. It's a problematic stance IMO since downvotes grey out the text, making dissenting opinions impossible to make out if they're unpopular enough.
Back on topic, 4GB laptops are incredibly common, especially in retail stores. Getting a laptop with 8 GB of memory frequently requires looking online (most folks still buy their computers from retail stores) and customizing your order.
For example, I just visited both Dell and System76, and except for the gaming laptops, all of them default to 4GB of RAM, and adding more (while not terribly expensive) requires thought and action on the consumer's part.
But it seems weird, because Chrome is such a memory monster (it usually takes almost all the RAM you have on your machine, plus disk space if you are running out of RAM). I have 16GB of RAM and Chrome takes 10GB of it, plus ~1GB of disk space.
I've been downvoted because HN has a dislike button, which encourages people to downvote things they personally don't like instead of making constructive arguments ;)
If it's any consolation I did provide a counterargument and got downvoted for it.
As for the 'flag', that's not the same as downvoting. Or at least shouldn't be used in the same way although I have lately seen a disappointing number of comments marked as "flagged" which shouldn't have been. (There's some guidance about flagging posts here: https://news.ycombinator.com/newsguidelines.html). Downvoting is something that's only available to members above a specific karma threshold (2000 IIRC).
I've found every Electron app I've used to be terrible. This includes VSCode. In fact, compare VSCode to Adobe Brackets (both use web technologies as their framework, but Brackets is a node.js application while VSCode is Electron) and you'll see just how badly Electron performs even compared to similar technologies.
Oh, it's definitely slow compared to the likes of Sublime Text or vim. But compared to Atom, VSCode, etc., Brackets feels a lot snappier and less memory-hungry.
This is what I've experienced on 4 different hardware platforms however all running ArchLinux so YMMV.
TL;DR: they just reinvented the internet forum/message board, but with a nicer GUI.
And it's good.
I think the internet went the wrong way when it ditched traditional forums/discussion groups and communication became based on real-time chats (Slack, Messenger, Skype, etc.), but I'm glad that we're returning to the old school. (I'm waiting for Facebook to introduce "threads/topics" in its groups, because right now Facebook groups are less functional than plain old phpBB.)
It's not generating code that is costly, but reading it. Understanding 100 lines of code is usually easier than understanding 1000 lines of code, understanding 1000 lines of code is usually easier than understanding 10000 lines of code etc.
So generally it pays off to keep the codebase small and simple.
>It's not generating code that is costly, but reading it. Understanding 100 lines of code is usually easier than understanding 1000 lines of code, understanding 1000 lines of code is usually easier than understanding 10000 lines of code etc.
That's not at all true. The costly part is the complexity. You could spend half a day puzzling over ten lines of cleverly written Lisp and just breeze through 100 lines of Java that do the same thing.
You're comparing 10 lines of "clever" Lisp to 100 lines of (presumably boring) Java.
In reality, it's more likely that those 100 lines of boring Java are easily written in 10 lines of just as boring Lisp, the only difference being the removal of 90 lines of boilerplate getters and setters, iteration logic, exception declarations, class definitions and instantiation, and so on.
You can explain to someone (or learn on your own) "fancy" functional constructs like map, fold, and select in about ten minutes. And then they can be part of your vocabulary for an entire career. It is absolutely unfathomable to me that this is not a complete no-brainer.
Imagine a novelist who replaced every instance of ten common English words with their literal dictionary definition every time they came up in a book. That's the level of incredulity I personally experience every time I see someone write the same implementation of `map` for the tenth time in a single source file.
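For illustration, here is the contrast in JavaScript (the same point applies in any language with these constructs):

```js
// Hand-rolled "map", re-implemented ad hoc every time it's needed:
function doubleAll(numbers) {
  const result = [];
  for (let i = 0; i < numbers.length; i++) {
    result.push(numbers[i] * 2);
  }
  return result;
}

// The same thing using the shared vocabulary word:
const doubled = [1, 2, 3].map(n => n * 2); // [2, 4, 6]

// fold (reduce) is the same story:
const sum = [1, 2, 3].reduce((acc, n) => acc + n, 0); // 6
```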
>In reality, it's more likely that those 100 lines of boring Java are easily written in 10 lines of just as boring Lisp, the only difference being the removal of 90 lines of boilerplate getters and setters, iteration logic, exception declarations, class definitions and instantiation, and so on.
I hear this sort of thing from Lisp advocates all the time, but have never seen it in practice. Understandable Lisp isn't that compact.
> [I] have never seen it in practice. Understandable Lisp isn't that compact.
Also, you are literally replying to a comment chain provoked by a post that argues, with code, exactly this point. The Clojure (a Lisp dialect) version has the same core logic as the Java version. The core logic is virtually inarguably just as readable as the Java version. And it is just shy of an order of magnitude fewer lines of code than the Java version.
So here's an example from a real, live code base of understandable Lisp, in practice, that is 10x more compact than the Java version.
The idea is that special constructs and a special machine in the problem domain are used to write and execute dense code. This tends to be VERY understandable in the domain, but it moves the developer away from the low-level execution machine.
Thus it is both more understandable and less understandable at the same time, but on different levels.
Much more understandable for the domain-level programmer.
Much less understandable for the low-level execution-level programmer.
I'm guessing you easily have 100x or more experience with Java than you do Lisp.
Do you suspect that this might have something to do with it? Lisp has a very different syntax from C-style languages, and this seems to trip people up much much more than the functional components of it.
Not being familiar with the idioms of another language doesn't mean those idioms are hard. It just means you aren't familiar with them.
BTW, the author of the article also mentions the incidental complexity of Java classes. It's worth considering too, and maybe it's a more accurate picture of the problems with Java (and many other languages; it's more about poor design and a "cult of complexity" than about technology problems).
There are too many abstractions, classes, interfaces, inheritance hierarchies, needless design patterns... it kills productivity, maintainability, and the ability to understand such codebases.
Only God and the author know what exactly was complex about those classes. I would guess that Clojure has good support for CSV parsing and JSON integration, and probably makes it easier to read files? Or he used libraries in Clojure? Or maybe he used Apache like every sane Java project and something else was complicated?
If 90% of that Java code was generated, then what exactly were they doing?
Also, there is such a thing as too much abstraction, but there is also such a thing as too little abstraction, and a refusal to use a design pattern where it fits because "it would be too complicated", a.k.a. someone was too scared and lazy to learn it. Neither is good. The projects I have seen that refused these "advanced" techniques all ended up as a hard-to-maintain mess of special exceptions for special situations: hard-to-reason-about code.
Design patterns are not hard, and abstract thinking is as useful as a good memory. Refusal to learn either is not the mark of a good developer.
If 90% of that code was generated, that's absolutely horrifying too. Code is read ten times more often than it's written. And you're forcing every poor fool who has to come in and maintain that code base to understand how five classes interoperate with and delegate responsibility to one another, and to gloss over reams of unnecessary boilerplate (that under careful inspection might not actually be boilerplate, but contain a subtle change).
I've never generated 90% of my Java code, and I've been coding in Java for years. That is why I am asking; it is suspicious. Maybe, maybe if all you do is parse one complex XML from a schema and you decided to go the JAXB way. Then again, such generated code tends to be tucked away somewhere in ../generated/ and then linked into the classpath; it is clearly separate from the rest and you don't review it.
Basically, you read the stuff it is generated from, and you don't read the generated part unless you suspect a bug in the generator.
The other occasionally generated stuff is syntactic sugar: hashCode, equals, or delegate methods. That is quick to read once you get used to how it looks. More importantly, if that sort of thing is 90% of the code, then there is something wrong.
...and an appropriate level of detail doesn't mean every detail should be documented. Sometimes it's better not to document some technical details (they will change anyway, and the docs would soon be obsolete).
So Transmutable basically enables writing things like this:
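(A hypothetical sketch: the `transform` helper and its callback signature are my illustration of the style, not necessarily the library's exact API. The real library records mutations via a Proxy; this naive stand-in just deep-clones first and mutates the clone.)

```js
function transform(state, recipe) {
  const draft = JSON.parse(JSON.stringify(state)); // naive deep clone
  recipe(draft); // plain assignments/mutations on the draft
  return draft; // the original state is untouched
}

const state = { user: { name: 'Bob' }, todos: ['a'] };
const newState = transform(state, (draft) => {
  draft.user.name = 'Alice';
  draft.todos.push('b');
});

console.log(state.user.name, newState.user.name); // Bob Alice
```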
instead of the Object.assign / spread mess.