Meanwhile, we still don't have operator overloading. Even Lua has it.


There is a proposal: https://github.com/tc39/proposal-operator-overloading

There are still a couple of issues that have to be solved. m2c: I hope it won't get to stage 3. Though I like operator overloading in general, I think it's something that would complicate JS even more.


Is it possible that there's also just more software in different domains being written, hence making it more difficult to keep track of everything that is going on?


Shouldn't be. If I recall correctly, the process was fairly logical. One of the things you wrote down in your design doc was all the external surfaces (endpoints) your application/service had and how you were securing them.

You could literally not ship anything without this review, so they must have abandoned the process by now. Or just never extended it to "the cloud".


Perhaps a controversial take, but I would be happy to see alerts removed altogether. I'm sure someone can point to a few legitimate uses of them, but in the grand scheme of things, they're usually terrible for user experience, as they block everything else (which is the point). They're also often used as a way to scam less computer-literate people with "microsort security" alerts that keep getting reopened when closed.

I get that there are lots of JS tutorials out there that use it as a way to show simple interactions, but that's a very weak argument for keeping an API that is potentially harmful.


You're underestimating the level of breakage here: it's not just a few tutorial sites, it's tens of millions of websites, many unmaintained. Like them or not, alert(), confirm() and prompt() are key pieces of functionality that have been depended on for literally decades now. I'm trying to imagine the economic cost of this change, and it's just enormous.


I understand that. But that was the case with popups as well. Remember when we needed popup blockers? Straight up removing the functionality is of course a drastic measure, so realistically I could see something like what we did with popups, where the browser first asks you if you'd like to allow alerts on the page, and only allow them to be shown if the user explicitly chooses so.


This is the fix. Deprecate alerts gracefully in favor of the native dialog element!
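For reference, a rough sketch of what a non-blocking replacement built on the native `<dialog>` element could look like (illustrative markup, not from any particular site):

```html
<dialog id="notice">
  <p>Something happened.</p>
  <!-- method="dialog" closes the dialog without a network request -->
  <form method="dialog"><button>OK</button></form>
</dialog>
<script>
  // showModal() traps focus inside the dialog, but unlike alert()
  // it never freezes the event loop or blocks other tabs.
  document.getElementById("notice").showModal();
</script>
```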


I still see them everywhere.


It's only blocking when inside an iframe.

There may be millions of sites using these functions legitimately, but not inside an iframe.


It's not like you couldn't do the exact same "microsort security" alerts with a position:absolute overlay. Google could just make the alert window explicitly stay within the viewport instead of breaking the internet.


Correct problem. Incorrect solution. Like always.


When there are billions of users and millions of websites I think it's very rare that you'll have 'correct' solutions, just different trade offs.


I do not see any added value for alerts. They only interfere with normal work. "Site wants to send you alerts". No thanks. They have most of the issues behind a paywall, so they just want to serve more ads. If I need something, I will look for it. The newspaper sites are the worst offenders. No, I do not need propaganda alerts.


I think you're thinking of push notifications


This is quite interesting, I've been considering making a todo app to help with ADHD. I've definitely experienced many of the same issues mentioned by the author, and even some apps labelled specifically for ADHD seem to fail on many of these issues.

If it's a chore to add a task, I won't add them. If it's a chore to go through tasks, I won't come back to the app.

It's also a really important point about some tasks being inherently different in scale to others, and the importance of separating them.

I feel like many todo apps focus too much on being a "todo app" rather than actually trying to get you to get things done. All this tagging and categorizing and such are features that take effort and time for me to manage, which in turn help make those apps into huge graveyards of stuff to do, which in turn make me want to not open them ever again.

IMO a todo app that is blank most of the time is the best. Meaning, you haven't got anything you need to do. Help me keep track of small tasks and repeating tasks, and let me either complete them or postpone them. If there's something I've postponed too many times, just delete it. It shouldn't become a notes app; it's about getting stuff done.
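The "postpone or drop" idea above could be sketched in a few lines; names and the threshold are mine, not from any real app:

```javascript
// Each postpone increments a counter; tasks postponed too many
// times are silently removed instead of rotting in a graveyard.
const MAX_POSTPONES = 3; // hypothetical threshold

function postpone(tasks, id) {
  return tasks
    .map(t => (t.id === id ? { ...t, postponed: t.postponed + 1 } : t))
    .filter(t => t.postponed <= MAX_POSTPONES);
}

let tasks = [{ id: 1, title: "water plants", postponed: 3 }];
tasks = postpone(tasks, 1); // fourth postpone -> task is dropped
console.log(tasks.length); // 0
```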


The main problem is the fact that this audit happens with no context, and the audit results offer no information about the context an issue applies to either. Every issue should have a clear explanation of why and where it's an issue, and be tagged. Then we'd just need a way to hint to npm what context a package will be used in, similar to what we already do for devDependencies.
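Hypothetically, such a hint might look something like this in package.json (the `auditContexts` field is entirely made up to illustrate the idea; npm has no such feature today):

```json
{
  "devDependencies": {
    "some-build-tool": "^1.0.0"
  },
  "auditContexts": {
    "some-build-tool": "build-only"
  }
}
```

With that, an audit could downrank, say, a ReDoS advisory in a tool that never sees untrusted input at runtime.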

Also going through an audit result in a CLI isn't really the best experience. I wish I could just click a link and open up the report in a browser to drill down into issues.


No it’s not. The main problem is the dependency tree hell. If an ancestor version bumps, you should probably version bump too, irrespective of exploitability.

Don’t like it? Try using more maintainable dependency trees.


Seeing a lot of disdain for JavaScript in here, but not enough examples of what's actually wrong with it. It's fine to criticize the language, there certainly are issues with it, but I'd like to see some actual constructive comments and examples from other languages where it's done better.


Besides being really insane typing-wise, JavaScript is also both really lenient and unpredictable in what it accepts as expressions. Try to guess the result of the following expressions:

    > [] + []
    >
    > {} + []
    >
    > [] + {}
    >
    > {} + {}
It means that if you feed it nonsense, it does not stop you but happily carries on and explodes a few milliseconds later somewhere else. Another can of worms is the difference between null, 0, undefined, '', ...
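That last can of worms can be checked directly (runnable in Node):

```javascript
// Loose equality (==) groups null and undefined together,
// but compares 0 and '' via numeric coercion.
console.log(null == undefined); // true
console.log(null == 0);         // false -- null only loosely equals undefined
console.log(0 == '');           // true  -- '' coerces to 0
console.log('' == false);       // true  -- both coerce to 0

// Strict equality (===) distinguishes all of them.
console.log(null === undefined); // false
console.log(0 === '');           // false

// Yet every one of them is falsy in a boolean context.
console.log([null, undefined, 0, ''].every(v => !v)); // true
```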

Ok, you got me worked up. I need to go for a run to blow off steam.


That is just javascript trivia that one rarely encounters in real code, and shouldn't be used to dismiss the entire language.


To borrow a Stroustrup quote

> "There are only two kinds of languages: the ones people complain about and the ones nobody uses".

I'm not a fan of JavaScript but I do like TypeScript.


If your statement begins with a curly brace, that brace is a code block, NOT an object literal.

Coercion rules are terrible, but they were only added to the language due to developer demands.

If you worry about zero or the empty string, you'll have problems in tons of other languages. Null vs undefined is less understandable, along with the old ability to redefine undefined as if it were a variable. In any case, I agree that one of the two shouldn't exist.
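The curly-brace point explains the grandparent's quiz: in statement position, `{}` is an empty block, not an object literal, so the leading `+` becomes a unary operator. Using eval purely to force statement vs expression position (runnable in Node):

```javascript
// Expression position: both operands are values, then string-concatenated.
console.log(eval("[] + []")); // "" -- both arrays coerce to empty strings
console.log(eval("[] + {}")); // "[object Object]"

// Statement position: a leading { starts a block, so what remains
// is the unary expression +[] (or +{}).
console.log(eval("{} + []")); // 0   -- +[] coerces [] to the number 0
console.log(eval("{} + {}")); // NaN -- +{} is NaN

// Parentheses force expression position, so {} is an object again:
console.log(eval("({} + [])")); // "[object Object]"
```

(Modern browser consoles wrap bare `{} + {}` in parentheses for you, which is why results there can differ from a script.)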


Most of the complaints aren't really about the language syntax or capabilities. Aside from missing proper support for integers, modern JavaScript is pretty nice. The complaints mostly come from developers being forced to either use JavaScript (the only way in the browser) or change code because of breaking changes in libraries.


Library changes aren’t what they used to be. It was once frameworks of the month. At this point, React has been the undisputed king for over half a decade. Major libraries like React, moment, lodash, express, etc. have all had stable APIs for years now and aren’t any worse than any other language I’ve used.

Complaints like integers come from not studying the language and not keeping up to date.

Asm.js-style type directives have offered 31-bit integers for years now on all major JITs. That approach is slowly fading as wasm takes over, but BigInt has been included for quite a few browser versions now.
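Both points can be seen in a few lines (runnable in Node; a sketch of the two integer options, not a full asm.js module):

```javascript
// Number is a 64-bit float: integers above 2^53 lose precision.
console.log(2 ** 53 === 2 ** 53 + 1); // true -- precision already lost

// BigInt keeps arbitrary-precision integers exact.
console.log(2n ** 53n + 1n); // 9007199254740993n

// The asm.js-style `|0` annotation coerces to a signed 32-bit integer,
// which JITs can keep in an integer register.
console.log((2 ** 31) | 0); // -2147483648 -- wraps around
```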


> Complaints like integers come from not studying the language and not keeping up to date.

Perhaps, or perhaps fixing one too many bugs where someone thought floats were appropriate for all the things. Regardless, forced change, even if it is a good change, is still forced.


Comparing JavaScript to other dynamic languages, it's comparable and indeed has some nice features, e.g. Self-style prototypes. If you compare it to more type-safe languages like Java, C, Swift, Kotlin, etc., then it has major flaws. The dynamic languages are great for making little scripts or small-to-medium one-man projects; they fall down when you need to build something larger with lots of people and lots of code. If you add the problems of JavaScript to the limited environment of the browser it runs in, then there is much room for complaint.

I think a lot of the arguments are due to the clash of these two opposing mindsets: people who start in JavaScript and build things in a browser think it's great; people who build enterprise applications with typed languages disagree, often strongly.


The tooling and ecosystem are the main problems from my perspective. If you set yourself up with Kotlin + Gradle + IntelliJ you now already have all the tools you need.

Want to create/maintain a modern JS/TS stack? Well, you are going to need npm, webpack, probably babel, linters, need to understand how to setup source maps and get those working for debugging server side. etc.

It's a nightmare and it's all very fragile and that isn't even touching on the poor quality of the libraries outside of React/big ones.


Deno is definitely trying to fix that. Single executable, no node_modules, no package.json, no Webpack, no Babel; built-in linter, formatter, bundler, TypeScript compiler...

It's definitely not there yet but they have the right idea.


Yeah but I still don't see what I get from it for server side development that isn't better on the JVM...

Better JIT, real threads (also Loom coming which is better than async/await or Promises), better standard library, better library ecosystem, better profilers, better debuggers, better metrics/monitoring for the VM, better portability.

It seems to me that the core server-side JS benefit is isomorphic code on browser and server which is probably useful if your team is small enough to have the same people working on both but for companies of the size I generally work for this is rarely the case beyond maybe small fixes I might do in JS/TS.

For me personally this can't offset the absolutely gigantic difference in quality between say JS/TS and Kotlin/Java.


I agree, I definitely wouldn't use JavaScript on the server side, unless you really want server-side rendering of React or something.


I don't see how or why these things have to be related in any way whatsoever? Atlassian offers self hosted solutions and presumably do so because it's profitable. I don't see why offering a self hosted solution has to somehow mean less profit, or to the extent of non-profitability for that matter.

You could make it *more* expensive to use the self hosted version. It's straightforward to justify if you consider the additional development overhead, and if you include some kind of a support package.

My question is genuine. I see strange justifications like this often when it comes to self-hosted versions of X, Y, or Z product. What exactly is the risk to profit here? I can see extra engineering overhead as a possible risk, but that's a solvable problem. There are many ways to handle software updates. Spinning up new backend environments is often done on devs' machines daily when developing. I don't see any novel problem that hasn't been solved.

The "we have no plans for this", or "we are looking into it" are all often used ways of avoiding the question of "why aren't you doing this?" There are plenty of companies out there that for one reason or another, need to control the environment where their data lives. Why give up that business? Why do people who work at those places have to often settle for worse products because of this? Why not give businesses *choice* of storing their data or having someone else deal with the complexity?

Is it that the presumed market for this is too small? And why is nobody being transparent about this?


You are reasoning about this well. Offering a self hosted solution is a solvable problem that can be profitable. The challenge might be that there are multiple different strategies the folks at Notion could follow, but choose not to. Each is solvable and each can bring more profit. And yet they need to decide what to do and what not to do. You should not chase every rabbit and solve every solvable problem.


In this case, I think the switch is most likely entirely based on technical merits, rather than some way of asserting more control.

So with that in mind, the fact that the team behind one of Google's most interactive pieces of software has to throw up their hands and say "DOM is too slow, we gotta roll our own" should be a wakeup call for everyone working on Chrome and other browsers, but mostly for Google itself.

When you escape the DOM, you're going to be doing pretty much everything yourself. And for someone like Google, that might be worth the absolutely insane amount of effort, but what about everyone else? You're Google, Chrome has 60%+ market share. Why isn't the plan here to systematically start improving DOM performance, or create APIs to more directly modify how elements are laid out and created? Why do all of this work to benefit only Google Docs?

We've had years (decades!) of articles and talk about how the DOM is slow (including a bunch from Google), so why not improve it? Why give up and waste all this time on a custom solution? Why not create something that is *actually* capable of handling the complexity of modern, highly interactive applications, including Google's own products?

You can say it's Flutter, but that's yet another effort to escape the DOM, rather than actually improve it.

Maybe this has been the Google Docs team's plan all along: to push people on the browser side and other Google teams to start seriously looking at what to do with the DOM. If so, I hope this actually has the intended effect. We all deserve a better, more performant web.


DOM performance has already improved leaps and bounds after millions of dollars of engineering and countless hours of effort. Same with Javascript. At some point you have to accept that the DOM has fundamental design flaws, its specification and requirements are the problem yet cannot be radically changed because of backward compatibility concerns. Browsers have spent decades optimizing everything possible, one should start by acknowledging that before tiredly trotting out magical performance improvements as the answer.


> Why isn't the plan here to systematically start improving DOM performance,

There are already many steps taken to improve DOM performance over these years. However, DOM is designed for documents. The performance can never be good enough when it is abused for non-document usage.

> or create APIs to more directly modify how elements are laid out and created? Why do all of this work to benefit only Google Docs?

Because other browser engines are unlikely to adopt these APIs just so that Google Docs can have better performance. Not to mention that these new APIs will take years to be present in every user's devices.


> There are already many steps taken to improve DOM performance over these years. However, DOM is designed for documents. The performance can never be good enough when it is abused for non-document usage.

JavaScript was originally designed for simple tweaks, but we've significantly expanded and improved the language over the years to adjust it for what it's used for *today*. I don't see why DOM is special. Sure it was designed to handle small, unchanging documents, but it's used for much more now, just the same as JavaScript. Also it's worth noting we're talking about Google Docs here, so it looks like DOM fails even at its intended use-case (I'm saying this *mostly* jokingly).

> Because other browser engines are unlikely to adopt these APIs just so that Google Docs can have better performance. Not to mention that these new APIs will take years to be present in every user's devices.

We wouldn't have fetch, canvas, async/await, PWAs, websockets, etc. if those things had to be available immediately and/or be guaranteed to be adopted. I'd rather it take years to get improvements, but eventually have them, than not doing anything and still be talking about how bad X, Y, or Z is 10 more years from now.

I'll take FLIP animations as a specific example. If I want to have a box animate from one part of the page to another (where its position in the DOM hierarchy changes), we're having to do all kinds of crazy gymnastics around when you read the DOM, when you write to it, how you update it, etc. And even still, you're unable to do this without using JavaScript animations (if your box contains content and it happens to change size, we'd have to do a reverse scale animation on the content).

This is stuff that's trivial in iOS and Android, and commonly used. In the web land, we're stuck doing this poorly both from a development point of view, and with bad performance, resulting in poor end user experience.

The FLIP hack has been talked about for 6+ years [1], and yet here we still are, unable to simply move and animate a box from one place in the tree to another. Want nice drag and drop interactions? Good luck. Limited animations, or slow, and often both.
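For context, the "Invert" step of FLIP boils down to simple rect arithmetic; a minimal sketch of that core (function and field names are mine, not from any library):

```javascript
// FLIP: measure the First and Last positions, Invert with a transform so
// the element visually stays put, then Play the transform back to identity.
// Pure helper computing the invert transform from two bounding rects.
function invertTransform(first, last) {
  const dx = first.left - last.left;   // how far the box moved horizontally
  const dy = first.top - last.top;     // ...and vertically
  const sx = first.width / last.width; // how much it grew or shrank
  const sy = first.height / last.height;
  return `translate(${dx}px, ${dy}px) scale(${sx}, ${sy})`;
}

// In a browser you'd measure with getBoundingClientRect(), apply this
// transform, then animate back to `transform: none`.
const first = { left: 0, top: 0, width: 100, height: 100 };
const last = { left: 200, top: 50, width: 200, height: 100 };
console.log(invertTransform(first, last));
// "translate(-200px, -50px) scale(0.5, 1)"
```

The "crazy gymnastics" is everything around this: batching the two measurements to avoid layout thrashing, and counter-scaling children so content isn't distorted mid-animation.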

Why are we getting articles from Google about how it's bad to change the size of something on the screen [2], instead of seeing improvements to the underlying APIs that cause it to be slow in the first place? If a hacky JavaScript based solution is able to make this performant, surely a native API would do better.

The DOM has to evolve to support interactive apps in a performant way, or risk being replaced by custom things like the canvas or WASM, that are not easy for machines to parse, that won't have nearly as much consideration for accessibility and extensibility. That aren't as easy to enforce good usage of, or share knowledge about.

It should not be "DOM is slow, oh well", or "DOM is slow, lets drop it", or "DOM is slow, so lets build a JS scaffolding around it (VDOM)." It should be "DOM is slow. What are the contexts in which it matters most, and how can we improve the APIs such that it can natively do those things performantly and easier?" Be that better selection APIs, animation APIs, better ways to read/write to styles, the list is endless.

The DOM is slow, but it does not have to be slow. We *choose* not to make significant improvements to it, and one can come up with plenty of reasons why or excuses for it.

My point is that we should choose to improve it, because the alternative will lead us down a worse path. Years of neglect has led us here, where Google, a browser vendor themselves, has to give up on DOM because it's bad. This is fundamentally messed up.

[1]: https://aerotwist.com/blog/flip-your-animations/ [2]: https://developers.google.com/web/updates/2017/03/performant...


This is a good topic overall, I'm not convinced of the particular example of infinite scrolling, but I do definitely want to see a movement toward products that don't try to overtake your life, and also make a point of it.

Why am I not convinced of the seemingly obvious infinite scroll example? It's not addressing the root problem, and we need to first and foremost think critically of the substance, rather than UI ergonomics. No doubt UI ergonomics can play a role, but I'd rather see people think about how to avoid having an infinite pool of addictive content in the first place. How can we create systems that are inherently limited by their intent, rather than just making the UI worse (although pagination in terms of design is anyway a contentious subject).

That sounds vague, so I'll give a few examples:

- An email service that only sends you your mail every day or every x time interval, so you don't feel compelled to keep checking for new things.

- A social network that only lets you make small groups of friends with no global feed at all, so you don't need an algorithm to filter out all the noise and rank everyone's post behind the scenes.

As an aside, just take a moment to let that sink in, in case you haven't. There's a hidden AI deciding how important your friends' thoughts and photos are to you, to keep you addicted to content and keep scrolling, in an effort to show you more ads and make more money. It does so by recording everything you do on the service, and also things you do on other websites. This is not a sci-fi dystopia movie plot, this is what Facebook fundamentally is.

The "but it's free" argument I understand, and accepted for a long while also, but more recently I've started to reject this as a sound argument. Is it really so that if something is free, it gets a pass? I don't think that's true outside the internet. If someone's giving away free home insurance, and "only" requires you to have cameras installed in every room so they can better asses the responsible party, etc., would we as a society really look at that and say "but it's free" and move on with our lives? Hard to prove, but I'm doubtful that this wouldn't receive a massive backlash. What if the "free" plan was the only one available, and there were no comparable insurance companies around to reasonably switch to? What if they were using the recordings to sell data about you to ad companies?

It's free, yes, and sure, I could have "nothing to hide", but the idea of a company recording me in my home and selling how I behave to other companies to make money and get me to buy more stuff is still a fundamentally perverse arrangement.

Well, went a little off topic there, I suppose. But my point is that I'd like to see services that actively try to make product usage time finite by design, rather than just making the UI more frustrating to use.


This is probably one of the bigger issues holding me back from adopting Deno (at least for personal projects anyway). I'm sorry, but I don't want to go back to the early 2000s where we're copy-pasting random links into script tags. Those days are over, and for the better. NPM has proved pretty much undeniably that people prefer the simplicity of just typing in the name of something to install it, and import it. Does it have issues? Absolutely. I'd rather those issues be resolved than completely throwing away the concept, though.

This feels to me a bit like if a new browser came around and said "DNS is the root of all evil" and only allowed you to directly type in the IP address of websites.

You can make all kinds of excuses, like "oh, but you can keep a list of your own!" or "hey, you can install this plugin that gives you DNS lookups back", but ultimately as a user I'm gonna say "No thanks, I'll keep using a browser that doesn't needlessly complicate my day."

I'm usually an early adopter for this kind of stuff, and I really like a lot about Deno, which is why this situation just makes me very sad.


This 100%. Every time it comes up I'm just saddened by the fact that they're so antagonistic towards the existing npm ecosystem, when those problems could be so easily solved and their userbase so easily expanded with some minor compatibility tooling that allows importing npm packages.

