It looks like a good evolutionary improvement over Node.js. However, I have a concern about its security claims. Perhaps someone on the project can allay these concerns.
My brief skimming of its site indicates that its security model is based around the ability to disable, say, network access for whole Deno programs. However, does it allow starting up with network access, allowing a subset of the program to handle it, and then dropping those rights for the rest of the program, _especially_ subdependencies?
A modern Node.js web service will have hundreds, if not thousands, of indirect dependencies. Some network access will be required for at least the Express routing, or an equivalent.
For a Deno equivalent, this would amount to enabling network access for all of those hundreds of subdependencies, without reproducing the isolation of capability-based approaches such as WebAssembly nanoprocesses.
Have I missed something here? That doesn't seem like much of an improvement over Node.js except for very small and contained programs. Yes, Deno's dropping of centralised package repositories and package.json might alleviate this problem _somewhat_, but the same fundamental issue seems to remain.
This prevents dependencies from calling back to locations outside
the permitted list, blocking much of the nefarious activity they could dream up.
If a dependency needs access to specific resources, it can advertise this fact and the parent module can in turn request this from the user.
Importantly, the user is explicitly aware of these and controls them in an absolute sense, at run time.
The whitelisting looks great. Even with the remaining concerns I raised in the other comment, the ability to whitelist only allowed domains for network connections is a massive step up security-wise, even if they are allowed for the entire program (until revoked globally).
I'm not an expert on this in any sense, but would it be possible to add this into Node at the OS level, e.g. make use of network namespaces to restrict outbound network access?
Yeah, the OS or network firewall can do this, but security happens in layers, and to me it makes sense that an app config is the place to put a whitelist for the app's network needs.
If I was just spitballing an ideal scenario, I’d suggest that each module would define what it needs, and then some sort of central file would be built to hold the aggregate of them (urls / modules), for easy scanning.
The reason I’d rather have it in the app is that if you are switching platforms, you don’t need to worry about firewall configs being exactly the same, or being fine-grained. Also, you might be whitelisting IP ranges at the network level, then locking it down further at the app level.
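If I were spitballing along those same lines, the aggregation step might look something like this (a JavaScript sketch; the per-module manifest shape is entirely invented for illustration — no such convention exists today):

```javascript
// Hypothetical per-module declarations of network "needs", aggregated
// into one flat allowlist that is easy to scan and audit.
const moduleManifests = [
  { module: "http-client", net: ["api.example.com"] },
  { module: "metrics", net: ["telemetry.example.com"] },
  { module: "left-pad", net: [] }, // pure computation: no network
];

function buildAllowlist(manifests) {
  const allowlist = new Map(); // host -> modules that requested it
  for (const { module, net } of manifests) {
    for (const host of net) {
      if (!allowlist.has(host)) allowlist.set(host, []);
      allowlist.get(host).push(module);
    }
  }
  return allowlist;
}

// One central place to see which dependency wants which host:
for (const [host, modules] of buildAllowlist(moduleManifests)) {
  console.log(`${host} <- ${modules.join(", ")}`);
}
```

The payoff would be that a reviewer reads one generated file instead of auditing every transitive dependency by hand.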
> Importantly, the user is explicitly aware of these and controls them in an absolute sense, at run time.
I mean, I guess I see value there for the use case of "I want to download a script to run locally on my machine" type of thing, but for the most common use of Node, i.e. I'm running a server process, does this really even matter?
For most networked applications, there are two classes of permissions control - inbound and outbound.
The inbound is a run-time decision, and a dynamic one at that - firewalls, WAFs, etc. are used for control. These are not (and probably should not be) set by the application author, but by the application operator.
The outbound however, is typically something that is designed into the application - it should be specified by the author, be available for auditing - both on first install and all subsequent changes. IMHO, this is where these whitelists shine.
For the server example you mention, whitelists don't prevent a malicious dependency from using your CPU for mining. With Deno, by default, there is no way to dial home with the proof-of-work and collect the reward. Eventually, as the operator of the service, you'll notice a performance/cost problem and detect the malicious activity.
That's a good start, supporting the dropping of privileges after performing something on startup.
What about keeping the network allowed in the layer handling, say, inbound HTTP connections, but blocking it in the data access layer or purely computational component?
From what I can see, this doesn't work with global boolean flags in the runtime, instead requiring isolated tasks with whitelisted capabilities passed in, some form of "immutable, set-once, dynamically-scoped capability flags", or something like that.
The problem with the global boolean flag approach is that if any part of a service needs it constantly, the entire program gets it, even obscure subdependencies for generating colour pickers.
Don't get me wrong, it's an incremental improvement over Node.js's blasé approach. It's also quite rare to see languages support this feature. E was one of them. There was another newer Python-like language with this too, starting with an `M`, but its name escapes me.
I'd recommend Deno's developers look at E a bit more before committing too much to the platform boolean flag approach. Or I've misunderstood their approach and it actually does more than I'm giving it credit for.
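For contrast, here's a rough JavaScript sketch of the capability-passing style (in the spirit of E) as opposed to a global boolean flag. The `makeScopedFetch` helper and its host allowlist are invented for illustration — this is not anything Deno provides:

```javascript
// Instead of an ambient, process-wide "net allowed" flag, the composition
// root hands each component only the narrow capability it needs.
// `fetch` stands in here for any network primitive.
function makeScopedFetch(realFetch, allowedHosts) {
  // A capability: a fetch that only works for an explicit allowlist.
  return (url, opts) => {
    const host = new URL(url).hostname;
    if (!allowedHosts.includes(host)) {
      throw new Error(`network access to ${host} not granted`);
    }
    return realFetch(url, opts);
  };
}

// The HTTP layer receives a real (but scoped) network capability...
const httpLayerFetch = makeScopedFetch(globalThis.fetch, ["api.example.com"]);

// ...while the colour-picker subdependency receives nothing at all:
function colourPicker(/* no network capability in its signature */) {
  return "#336699";
}

try {
  httpLayerFetch("https://telemetry.evil.example/beacon");
} catch (err) {
  console.log(err.message); // network access to telemetry.evil.example not granted
}
```

The point is that the obscure subdependency simply never receives the capability, so no runtime flag needs to cover it.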
What would be really useful is if only sections of code can be delineated as requiring certain permissions. This way, it's much easier to see what parts of the code do what and also to make sure that users only get prompted for such permissions when the code actually runs.
Also, "promise first" seems to be premature to me. You could have simpler, more stateless async/await without promises: https://www.npmjs.com/package/casync
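For what it's worth, the generator-driven pattern that callback-based async libraries build on can be sketched in a few lines. This is a generic illustration of the technique only; casync's actual API may well differ:

```javascript
// A tiny generator runner over plain callbacks: `yield` pauses until the
// callback-style operation fires, giving await-like flow without promises.
function run(genFn, done) {
  const gen = genFn(resume);
  function resume(err, value) {
    const step = err ? gen.throw(err) : gen.next(value);
    if (step.done && done) done(null, step.value);
  }
  resume(null); // kick off the generator
}

// An ordinary callback-style async operation:
function delayedDouble(n, cb) {
  setTimeout(() => cb(null, n * 2), 10);
}

run(function* (next) {
  const a = yield delayedDouble(21, next); // reads like await, no promise
  console.log(a); // 42
});
```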
I'm really curious about whether people think Deno will succeed. Node certainly has its warts, but I feel like with recent improvements in the JS language and Typescript that Deno doesn't really solve problems people have nowadays. I don't think the decoupling from NPM and the dependency management approach (or lack thereof) is really a thing that most developers want. NPM certainly had a bunch of "dumpster fires" for years IMO (lock files mess, signed code mess, etc.) but I feel most of those pain points have largely been addressed.
I downloaded Deno and tried it out, but at this point I'm just left thinking it doesn't really add anything for me that I need.
This is puzzling to me because I had the opposite reaction: among other things, Deno solves an incredibly important problem with Node, which is the complicated configuration used with most projects. Deno can perform, out of the box, many of the things you'd need to configure Webpack + Typescript to do:
SUBCOMMANDS:
bundle Bundle module and dependencies into single file
cache Cache the dependencies
completions Generate shell completions
doc Show documentation for a module
eval Eval script
fmt Format source files
help Prints this message or the help of the given subcommand(s)
info Show info about cache or info related to source file
install Install script as an executable
repl Read Eval Print Loop
run Run a program given a filename or url to the module
test Run tests
types Print runtime TypeScript declarations
upgrade Upgrade deno executable to given version
What would it take to get Node to do all of that? Which documentation tool would you choose, and how would you configure it? Testing? Bundling? Formatting?
If anyone remembers, this is similar to the “TurboGears” vs “Django” fight of the Python world. TurboGears allowed me to choose the best of each component - but pretty soon one ends up having to upgrade or migrate to the new-best-sub-component on a weekly/monthly basis. Django helped avoid a lot of headaches by defaulting to a decent, but not necessarily the best, tool for each job, and sort of “won” in the end.
For some reason, we continue to replay this fight in each and every language and environment.
Totally agree with all that, but I also think it's one of those things that most devs hit the pain point for once, and then have a standard template project they use for everything going forward. I.e. I have all my webpack/tsc/eslint/prettier/jest boilerplate set up once, so now it's not really an issue for me.
I also think this is a problem area where the Romejs project is taking a better approach: simplify and fix the toolchain, instead of replacing the entire runtime.
> I have all my webpack/tsc/eslint/prettier/jest boilerplate set up once, so now it's not really an issue for me.
For any given nut in that stack, there are hundreds of different iterations. Even if you can manage that, it changes between projects and people. It is a horrible mess if you think about it from a company point of view. Everyone who joins a new team has their own quirks and ideas about that stack, which just makes it even worse.
The whole create-react-app project was created precisely because that stack keeps constantly changing; it tries to do the same thing. Having a standard way is better for developers and companies, and Deno having many of these tools built in removes that friction.
Again, I totally agree with everything you've said, but if that's the primary problem (which, when it comes to Node/JS dev, I think it probably is), then the solution should focus on that, i.e. fix the toolchain a la RomeJS, rather than create an entirely new runtime.
> Totally agree with all that, but I also think it's one of those things that most devs hit the pain point for once, and then have a standard template project they use for everything going forward. I.e. I have all my webpack/tsc/eslint/prettier/jest boilerplate set up once, so now it's not really an issue for me.
Stop using JS for a year, and come back only to discover that half of the tools you were using back then are at best obsolete, or even deprecated and unmaintained, or that their APIs changed so much in a major version that you can't recognize them (see the Babel 6 update, AKA "Babel Vista").
And since you're not only using these tools but also their integration with your favorite editor, you cannot stick with the old versions for long unless you stop upgrading your editor altogether.
Static compilation of JS into a binary is also being worked on (albeit slowly) with `deno compile`[0]. This is the feature I'm most looking forward to personally, goodbye JS slowness.
When node appeared, it was a 'huge thing' because you could run JS consistently on the server-side, with some kind of packaging scheme.
Most of the things you listed aren't going to be useful to most people, and even when they are, they are small things that can be managed otherwise. Though admittedly, everyone will run into at least one of those issues.
For a large, complex deployment, there's no obvious reason at all to shift to Deno.
We'll have to wait and see how it works out for those who want to try it for fun.
1) You need a secure sandbox for running JavaScript (e.g. you run a SaaS and you want your users to be able to customize something). NodeJS has to be sandboxed at the VM level, like Python, even though JavaScript was designed for this very purpose.
2) You want a TypeScript first nodejs.
3) You want isomorphic JavaScript between the browser and the server, because node does its own thing (for historical reasons) and deno strives for compatibility where possible.
What it lacks is npm compatibility. That is the JavaScript community for better or worse and without being able to use those libraries it doesn't seem compelling to me.
Or rather, they might be 'big' for a few teams in specific areas, but they are not 'big market opportunities', not even close.
The ability to go from Java+Spring or Perl/PHP to having the choice to actually do JS on the server was a big deal - and that's why Node.js was a success.
The sandbox is nice, but I don't see it being a big opportunity just yet. 'Running your own SaaS with untrusted code' is a developing area, and I'm not sure if Deno actually is a solution (how does one integrate with it?). Also, Node.js does have some options there in the form of VM2. The real security hasn't been validated just yet either.
'TS first' is, I think, just a toolchain and packaging optimization. We're all going to be running some kind of process for bundling and packing, and it requires no effort to transpile into JS at that point.
The isomorphism again, is a neat feature.
In other words, Deno has some cool new features in the context of 'the market/users space' for server side JS, whereas Node.js actually founded and created that entire market category.
Under normal circumstances, I wouldn't bet on any movement towards Deno, just because there is too much legacy Node.js already out there, but since there seems to be a strong fanboy following, I don't doubt a lot of young devs will push for it to be used because it's cool and shiny. All 'shiny new toys' have some benefits here and there; it's just a matter of contextualizing them into the incumbent landscape.
I would use it if it was npm compatible, and that does seem to be coming. Remember it's not a new language, more like Python vs IronPython. Although hopefully it will have more success.
I am actually really interested to see if something like Deno can take over in programming education.[1]
Deno's extremely simple import semantics and built-in tooling could make it a great environment for a learner. They won't have to be exposed to package management, they can just click a URL in their source code and see the exact source code they just imported.
All the stuff that professional developers can do and might want to do for themselves (setting up testing frameworks, pinning transitive dependency versions) is kind of out of scope for educational use. Unless that education is specifically targeted at "how to do modern JS development within this specific ecosystem".
The ease of bringing TypeScript into your Deno programs may make it easier to start "graduating" to static typing as part of the curriculum, without being like "okay, that was Python! Next up: Java!"
[1]: Say what you want about JavaScript, I don't see that modern JS is significantly worse than Python for learning to program (having watched a couple of students go through Python courses). All languages have their warts.
I'm not seeing enough in Deno that would tempt me to drop compatibility with the Node ecosystem. What I really want is actual Node.js with native TypeScript support.
The big deal here is security by default, as enforced by the runtime (deno) itself - there is, AFAIK, nothing else doing anything remotely as cool. It in effect makes it super safe to run code, since you have to explicitly allow the various levels of system access.
It also makes the system fundamentally unattractive to malware authors, whereas installing modules via node.js is like leaving your front door open and going on a chartered holiday while hoping for the best.
Most folks running Node.js instances have control over all of the software they are running, and 'security' in this sense just isn't a huge concern. Obviously, more security is always better, all things being equal, but it's not the big issue with Node.js.
It seems that Deno is probably a nicer version of Node overall, and if this were 10 years ago, we would undoubtedly choose Deno. But it's not: we have massive installed bases and operating capabilities, so the choice is less obvious.
A few things come to mind if you are moving from NodeJS to Deno:
1. Deno lacks the library ecosystem required to build a production-level app today. I am not saying it can't be done. Just think of the different third-party service integrations a modern application has to do, and their maintenance and testing by individual vendors or open-source contributors! The blog mentions a few DB driver libraries; I highly doubt they are as mature.
2. Deno's security model is overly hyped, at least as I see it. In the last few versions of NodeJS, many security flaws have been addressed; see the changelog if you don't believe me. But most importantly, security flaws in native JS are handled by V8, which is common to both NodeJS and Deno. On top of that, most NodeJS libraries and frameworks have sorted out many security issues in their code over their various releases. If someone is still doubtful, they can use eslint plugins for sanity and security checks in their JS files. Adopting TypeScript also helps if you can't live without types.
3. There is a learning curve to adopt a new SDK for the Socket API, File API, system call API, etc. I don't think NodeJS falls significantly short anywhere; in fact, it provides more, and those APIs have been relatively more battle-tested over the years.
4. Irrespective of using NodeJS or Deno, following BDD/TDD practices to ensure sound test coverage of your business logic still remains the most promising way to make, break, and refactor your codebase.
Solely addressing #1 here, and this may come off harsh, but I think it's ridiculous to expect Deno to match the library ecosystem of Node on day 1. That's just literally impossible, but whenever a new language or runtime is released, it manages to become the most prominent question/concern.
It also is something of a moot point anyway, because people already pushing things into production are probably not using something so new. Early adopters don't care about how mature the ecosystem is; part of the appeal of being an early adopter is helping to build that ecosystem...
There's no way it's going to match everything, but regarding number 1 - it does look like you can drop in some node modules. A small example, but one that surprised me - React works. So you can do React with SSR and nothing else; e.g. this demo is a pretty refreshing one to see how few moving pieces there are: https://github.com/brianleroux/arc-example-deno-ssr . It's just imported as `import { React, ReactDOM } from 'https://unpkg.com/es-react'`
By no means am I trying to demotivate anyone. I have just enumerated the minimum evaluation criteria and the current state of development. This is what I often do at my workplace as an architect.
If someone is eager to try out new possibilities in Deno and their application use cases intersect well with what Deno has to offer, then by all means it's a good decision.
Regarding point 3, Deno's APIs use language features that were unavailable when node was written, including promises and async iterators. That is a big improvement IMO.
There are definitely a few things which are done in a different way in Deno as part of the re-design.
But that does not mean NodeJS does not take care of them, or at least provide some API to tackle the issue.
In this case, the Stream API in NodeJS is the solution to the problem. It helps you build a layer over the native EventEmitter interface or, strictly speaking, extend it.
Thanks for doing this. I have a few questions regarding packages/dependencies, and TypeScript in general.
I tried installing Deno, and I tried including a package, and immediately the TypeScript typechecking in my IDE (VSCode) failed, of course. The IDE doesn't know what to do with a URL as a dependency. Is this something Deno will be able to handle?
The next question is that TypeScript packages I have written in the past use `baseUrl` and `paths` in their own configs to allow for absolute import paths. When I look at third-party Deno repositories, I didn't see any that were building out to JS/declaration files, and were instead just meant to be included as the original TypeScript source code. Won't this break things like absolute import paths and other behaviors of the dependency's own tsconfig file, or does this work fine with Deno/TS?
Using JavaScript outside a browser by running the V8 JavaScript browser engine on the server is a bit of a gimmick, but enough people can't be bothered to be polyglots, and it's not that crazy.
But, once you are writing code in TS, then transpiling that to JS, then running that in V8 on a server, you have taken a big step into the Rube Goldberg dimension.
Even better if Deno based its web renderer on Servo, since the Deno runtime is Rust and Servo is Rust. It would be great for getting some diversity back into the browser stack.
What Node is missing due to its design is parallel (not just async-on-a-single-thread) processing using either green or OS threads, without the overhead of serialization to web workers or a cluster. Last time I checked, Deno didn't have a story there.
The idea that ORMs are a net-positive is definitely an open question. After 20+ years I'm certainly of the opinion they're not. I recently discovered Slonik (only for postgres, hope there is eventually a port for MySQL and others) and I'm a huge fan of the overall approach and the API. This blog post from the creator explains: https://medium.com/@gajus/stop-using-knex-js-and-earn-30-bf4...
While I agree that there are certainly places where ORMs are net negative for the developer, for a language ecosystem having one is net positive.
Lots of app developers need simple and easy to setup database access so that they can focus on the parts of their app that matters. Not having an ORM means that a decent chunk of them will move on to another language/ecosystem that has the libraries they want.
> The idea that ORMs are a net-positive is definitely an open question.
If it's still being debated, isn't that a good indication that there's no perfect solution for every use case?
ORMs are likely more straightforward when you know you're not going to need to do anything advanced, but get in the way when you need fine grained control for example.
The problem is that easy queries are easy (who cares about an ORM there), and hard ones are too hard for an ORM. I haven't been able to see the benefits in either easy or hard projects.
You nailed it: in simple cases the abstraction doesn't help enough to justify including it, and in complex ones it actually makes things much more complicated because it just doesn't work right.
I definitely agree there is no perfect solution for every use case. What I think is a mistake, however, is using a technology that makes stuff easier when you are small and have little load, but then become absolute burdens once you become successful and need to scale. I've worked on many a project where the ORM became the primary piece of "tech debt" that was hindering productivity.
I contrast that with technologies that are easy to use when you're small, but then let you layer additional pieces on later when you need to scale, without needing to redo everything.
I disagree with this sentiment. The use-cases for an ORM are straightforward. It's more like asking, "which tool is best for getting this nail into this piece of wood?"
What most people discover to be the greatest benefit of using an ORM is the "mapper" bit (converting tabulated data into an object graph and vice versa) and, to a lesser degree, change-tracking.
Somewhat ironically, the overwhelming majority of the time criticism of ORMs is directed at neither of the above, instead pointing to query performance.
You can have data mapping, you can have change-tracking, you can even have schema migrations without opting in to the pain points many ORMs introduce, because these are all somewhat orthogonal concerns.
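To make the "mapper" bit concrete, here's a toy sketch of folding a flat joined result set into an object graph (the row and object shapes are invented for illustration, not any real ORM's API):

```javascript
// Rows as a users/posts JOIN would return them: one row per post,
// with the user columns repeated.
const rows = [
  { userId: 1, userName: "Ada", postId: 10, postTitle: "Intro" },
  { userId: 1, userName: "Ada", postId: 11, postTitle: "Types" },
  { userId: 2, userName: "Bob", postId: 12, postTitle: "Hello" },
];

function mapToGraph(rows) {
  const users = new Map();
  for (const r of rows) {
    if (!users.has(r.userId)) {
      users.set(r.userId, { id: r.userId, name: r.userName, posts: [] });
    }
    users.get(r.userId).posts.push({ id: r.postId, title: r.postTitle });
  }
  return [...users.values()];
}

console.log(mapToGraph(rows));
// Two users; Ada carries two posts, Bob carries one.
```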
At the end of the day there is very little to be saved between writing:
users->where(u => u.name === "John")
and
SELECT * FROM users WHERE [name] = 'John'
Often, as queries become more complex, the SQL is actually a shorter expression than whatever query DSL comes with the ORM.
A big benefit of the `users->where(...)` approach is being able to reuse and compose. An example would be conditionally adding a WHERE clause based on some parameters. Using the raw query approach, you end up having to do some string concatenation versus managing the state of some query builder object.
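A minimal sketch of that composability argument, using an invented toy builder rather than any real ORM's API:

```javascript
// Toy query builder: clauses and bound parameters accumulate, so filters
// can be added conditionally at runtime without string concatenation.
function select(table) {
  const clauses = [];
  const params = [];
  const q = {
    where(fragment, value) {
      clauses.push(fragment);
      params.push(value);
      return q; // chainable, so filters compose
    },
    toSQL() {
      const where = clauses.length ? " WHERE " + clauses.join(" AND ") : "";
      return { text: `SELECT * FROM ${table}${where}`, params };
    },
  };
  return q;
}

// Filters added based on runtime input:
const q = select("users").where("name = ?", "John");
const showOnlyActive = true;
if (showOnlyActive) q.where("active = ?", true);

console.log(q.toSQL().text);
// SELECT * FROM users WHERE name = ? AND active = ?
```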
I think that the "query builders" though are just one piece of the ORM that you mention, alongside the change-tracking, data mapping, etc. Having a decent query builder that isn't abstracting away too much of the underlying sql (essentially just mapping 1-to-1) plus data mapping are the sweet spot for me personally.
It'd be more accurate to say the illusion of type safety.
Under the hood, many[0] ORMs simply construct a query similar to my example above and then convert the result set of tabulated strings to the appropriate types (usually using reflection).
This means two things:
First, that the "type-safety" portions of an ORM are really located in the "mapping" code, so not really related to querying.
And second: you don't really have type safety. A database schema could change at any time and break the code even if static analysis seems to think it should work.
[0] Notable exceptions are languages that offer type providers (e.g. F#) but I digress
Sometimes all you really need are simple CRUD operations against a data store for a particular app. Most ORMs fill that role quite well, in my opinion. They are also great for rapid prototyping. I think where people run into problems is trying to shoehorn every data operation through the ORM layer, especially for more complicated data storage requirements. That being said, working with knex was a miserable experience.
I just think that at the end of the day, for any moderately complex system under non-trivial load, the idea that you want to "hide the complexity of SQL" from the service developer is an absolutely flawed premise.
I disagree. I've used many ORMs in my ~20 years, and I've architected and designed several hugely complex systems under non-trivial load.
There are really two things I want to put out there:
1) My opinion is that about 90% of your standard, day-to-day queries work just fine in a good ORM. The developer _should_ know enough about the DB schema and SQL to handle the other 10%. (In our 10 y/o enterprise software, the only queries we really drop down into SQL for are complex windowed reporting queries.)
2) Eloquent ORM is... different. It's probably the best I've seen. I wish it existed in other languages. Sequelize, which may be the "best" in the JS ecosystem, doesn't hold a candle to Eloquent, IMO.
That's not really the point of ORMs, though. It's to map your data objects in code to your entities in your relational database. That it comes with a SQL builder is just a necessity for the mapping to be done by the ORM. You can still write raw SQL with most ORMs, but then you won't get the mapping, which is the point. In some cases you will need to do that, and that's fine. Hiding complex SQL isn't the goal.
> That it comes with a SQL builder is just a necessity in order for the mapping to be done by the ORM.
That's only true if you need the "relational" part of an ORM - mapping a flat list of values onto a structure of related objects. If you only need to map a flat list of values onto a single object's fields, you don't need a query builder.
It's not just about hiding the complexity of SQL. You also get flexibility you don't have when writing SQL directly. For example, our Laravel app uses pgsql in production environments but sqlite in our CI environment. We also migrated off of mysql, which consisted only of changing a configuration value and verifying there were no issues by running the tests.
> I still write and think in SQL, but don't have to write queries as strings.
Why is that a benefit, though? Why would I rather learn some custom query-builder-specific DSL when SQL is at least mostly standardized pretty much everywhere? The use of template strings in JS can get rid of all the problems of plain string concatenation for SQL (e.g. it can prevent SQL injection, enable safe dynamic queries, etc.).
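As a sketch of that tagged-template idea (libraries like Slonik work roughly along these lines, though the details differ): the tag receives the literal text and the interpolated values separately, so values become bound parameters instead of being spliced into the string.

```javascript
// A tiny SQL tagged template: interpolations turn into $1, $2, ...
// placeholders, and the raw values ride along as parameters.
function sql(strings, ...values) {
  let text = strings[0];
  values.forEach((_, i) => {
    text += `$${i + 1}` + strings[i + 1]; // placeholder, not the raw value
  });
  return { text, values };
}

const name = "Robert'); DROP TABLE users;--";
const query = sql`SELECT * FROM users WHERE name = ${name}`;

console.log(query.text); // SELECT * FROM users WHERE name = $1
console.log(query.values); // the hostile string is just a bound parameter
```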
Since the underlying DB driver exists, you could probably fork an existing node one, or even submit a pull request for Deno support in your favorite node ORM.
And the great migration begins. But when you look at other backend ecosystems, e.g. PHP or Ruby on Rails, they're still getting shit done, even though people might say those languages/frameworks are slow or insecure. Stability is what's lacking in the JS ecosystem. Glad I do my backends in Flask now.
Maybe it would be OK if all the Typescript people went Away to Deno and the few of us left who just want to run plain JS on Node, could, in peace.
... As long as there is enough interest still in Node to maintain it. Hopefully, there is still interest in running the same language as browsers do, without additional dependencies.
Personally, I would prefer Clojurescript to TS, but not strongly enough to want to be tied to its fate.
I can't see any mention of a more fine-grained approach: https://deno.land/manual/getting_started/permissions