Hacker News | wwwigham's comments

You can define a metatable on your objects of interest (or on the root table's metatable, if you don't mind breaking the language's conventions and thus libraries) with `__index` and `__newindex` members. Then, should you desire it, you can throw from those by calling the `error` function where they would otherwise return nil.

But runtime checks have a cost, and static types that compile away add no overhead, so as long as you don't mind the build step, using one of the typed Lua variants is probably nicer in the long term. Catching those typos early is their bread and butter.


You can use file:// or git:// versioned dependencies in normal npm `package.json`s today. People just don't, outside some edge cases (local packages, nightly versions), because the centralized registry has upsides for discoverability and maintenance. There are also private registries, where you can set up a .npmrc to pull different namespaces from different registries. But if you want, you can totally be that guy who only publishes their packages to their presumably self-hosted repo - it works.
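For example, a hypothetical `package.json` mixing registry, local, and git dependencies (the package names and paths here are made up for illustration):

```json
{
  "dependencies": {
    "express": "^4.18.0",
    "my-local-lib": "file:../my-local-lib",
    "some-fork": "git+https://github.com/example/some-fork.git#v1.2.3"
  }
}
```

npm resolves the `file:` entry from the local filesystem and clones the `git+` entry at the given tag, no registry involved.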


The same is also true of Python, where it once was the majority option, around the time the "Python packaging is terrible" impression was forming. Since around 2015, PyPI has banned it in hosted packages, but you can do it for private packages all you want.


> What's the business model, I wonder?

Serverless functions, right? That's what Deno Deploy is billed as. Presumably the registry is a platform-adjacent investment to try to bring more serverless market share to Deno. Since it provides an npm-registry-compatible facade, presumably you should feel safe publishing Deno-y code to it (without calling platform APIs?), and should thus be more likely to use Deno, and thus enter the funnel for Deno Deploy.

Personally, I just use Deno's Rust V8 wrappers a bunch, since they make embedding a V8 runtime into a Rust app very simple, and JS is a very nice scripting engine (especially with optional types). A hugely valuable contribution to the open source community. But then again, I don't deploy serverless functions on the regular. To each their own.


I believe you hit the nail on the head there.

That, together with the fact that you can still host the Deno runtime on your own hardware, actually makes it a pretty viable alternative for new projects that you would otherwise build on Node.js, with the added bonus that if you ever decide to go serverless, you don't have to rewrite your code: you can take the same codebase and move it to Deno Deploy (or Supabase functions, which just uses Deno under the hood).


FTR a large and rapidly-increasing percentage of the JS ecosystem supports targeting multiple runtimes. Things like Remix.run, Fastify, Hono, etc. can deploy to Node.js, Deno, Cloudflare workers, etc.


TypeScript _itself_ has a branded primitive string type it uses internally.[1] Dedicated syntax for creating unique subsets of a type that denote a particular refinement is a longstanding ask[2] - and very useful, we've experimented with implementations.[3]

I don't think it has any relation to runtime type checking at all. It's refinement types, [4] or newtypes[5] depending on the details and how you shape it.

[1] https://github.com/microsoft/TypeScript/blob/main/src/compil... [2] https://github.com/microsoft/TypeScript/issues/4895 [3] https://github.com/microsoft/TypeScript/pull/33038 [4] https://en.wikipedia.org/wiki/Refinement_type [5] https://wiki.haskell.org/Newtype
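A minimal sketch of the branding pattern in userland TypeScript (the names here are hypothetical; this is the common workaround, not TypeScript's internal implementation):

```typescript
// Brand a string so it can't be confused with arbitrary strings.
// The brand exists only at the type level and is erased at runtime.
type UserId = string & { readonly __brand: "UserId" };

function asUserId(s: string): UserId {
  // Any runtime validation would go here; the cast itself is type-level only.
  return s as UserId;
}

function lookup(id: UserId): string {
  return "user:" + id;
}

const id = asUserId("u42");
console.log(lookup(id)); // "user:u42"
// lookup("u42");        // type error: a plain string is not a UserId
```

The dedicated-syntax proposals linked above would make this refinement first-class instead of relying on an intersection with a phantom property.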


Stock Pixel may not ship with it on by default for end users, but anyone can enable developer options and turn on the Memory Tagging Extension (MTE) - either until toggled off, or for a single session if you're trying to test a specific app - if you do want the feature on.


That's not the same as what's being used on GrapheneOS, and it also excludes a significant portion of Bluetooth. Enabling support for memory tagging in the stock Pixel OS via developer options only makes it available for use but doesn't actually use it. You also need to enable heap memory tagging using setprop in the Android Debug Bridge (ADB) shell. It provides no value through simply being enabled without using it to tag allocations.

You can fully enable userspace heap MTE for the stock OS via the standard allocator implementation (Scudo), which is currently not particularly hardened. You can also use KASan via the MTE backend using setprop, but that's not designed for hardening right now and it's not clear it ever will be. There likely needs to be a separate MTE implementation for the kernel that's not part of KASan, which we haven't done yet for GrapheneOS either, so MTE hardening is currently a userspace feature.

GrapheneOS uses our own implementation of hardware memory tagging for hardened_malloc with stronger security properties. In order to enable it by default for the base OS, we had to fix or work around various issues, including this one. We use MTE in asymmetric mode across all cores rather than using asynchronous MTE for the main cores. Asymmetric mode is asynchronous for writes but synchronous for reads, which blocks exploitation properly rather than leaving a window of opportunity for exploitation to succeed. Pending asynchronous tag check faults get checked on system calls, and io_uring (another potential source of bypasses) is only available to 2 core system processes on Android via SELinux restrictions (fastbootd, which is only used during installation, and snapuserd, used by the core OS after applying updates).

GrapheneOS always uses heap MTE for the base OS and apps known to be compatible with it. For user installed apps which are not in our compatibility database and which do not mark themselves as compatible, we provide a per-app toggle for enabling MTE. Users can also toggle on using MTE by default for user installed apps which may not be compatible, and can then opt out for incompatible apps. In order for this to be usable, we had to implement a user-facing crash reporting system. We did this in a way that lets users easily copy a useful crash report to provide to developers.


Some background on sanitizers (ex: KASan) and the ARM Memory Tagging Extension (MTE) by one of its developers, Andrey Konovalov:

https://youtu.be/KmFVPyHyfqQ / https://ghostarchive.org/varchive/KmFVPyHyfqQ

https://youtu.be/9wRT2hNwbkA / https://ghostarchive.org/varchive/9wRT2hNwbkA


For my Pixel 7a, I searched the Developer Options menu three times and couldn't find it. Searching for Memory Tagging Extensions in Settings says it's there. Is it hidden somewhere?

Edit: Never mind, I see it's only for Pixel 8 phones: https://news.ycombinator.com/item?id=38125379


Enabling it via developer options is only the first step. You also need to enable it via setprop using ADB in the desired mode. The official documentation for using MTE on the stock Pixel OS is available at https://developer.android.com/ndk/guides/arm-mte. We strongly recommend this for app developers. It's a really easy way to find heap memory corruption in your apps, but you can't use stack MTE without building the OS and your app with it.
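As a concrete sketch of the ADB side (property names taken from my reading of the Android MTE documentation linked above; exact names, values, and supported modes may differ by release, so treat this as an assumption to verify against the docs):

```shell
# After toggling MTE in developer options, request an MTE-enabled boot:
adb shell setprop arm64.memtag.bootctl memtag
adb reboot

# Then opt processes into heap MTE, e.g. synchronous mode by default:
adb shell setprop persist.arm64.memtag.default sync
# ...or for a single (hypothetical) app package:
adb shell setprop persist.arm64.memtag.app.com.example.myapp sync
```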

We provided a user facing crash report system for memory corruption detected by MTE. MTE has no false positives. We strongly recommend users use the feature we provided for copying the crash report to report these bugs to app developers. App developers can replicate many of the bugs reported this way using either MTE on a Pixel 8 or a smaller subset by building their app with HWASan support.

Since many apps have latent memory corruption, we only enable it for the base OS, base OS apps and known compatible user installed apps by default. Users can opt-in to heap MTE for all user installed apps via Settings > Security and can then opt-out for apps with memory corruption occurring during regular use. Over time, we plan to add more apps like Signal/Molly known to be compatible to it into our app compatibility database so that it's force enabled for them by default without users enabling it in Settings > Security. The option to opt-out is only available for apps not known to be compatible with it, which is part of dealing with the issue of users potentially allowing it after an exploit attempt.

We're trying to convince Google to at least enable asynchronous MTE by default for the stock Pixel OS, with user installed apps excluded unless they opt in. The memory overhead is 3.125%, and asynchronous heap MTE has near-zero performance overhead. Asymmetric heap MTE provides much better security but is more comparable to the overhead of a feature like legacy stack smashing protection. Stack MTE adds more overhead, but SSP could be disabled to partly make up for it, and deterministic protection of stack spills, return values, etc. can be provided through MTE instead, since anything untagged has the 0 tag and everything tagged gets a random non-0 tag.


It's for Pixel 8 and above, since those have the Tensor G3, which is ARMv9, IIUC.


Taken from the project's official Twitter, they offer more than what's available via said toggle on stock:

They provide a nicer MTE implementation as part of hardened_malloc which uses the standard random tags with a dedicated free tag but adds dynamic exclusion of previous tag and current (or previous) adjacent tags. We also fixed Chromium's integration and will improve PartitionAlloc.

They also use it in their browser via PartitionAlloc. Other Chromium-based browsers, including Chrome, don't use MTE, since they don't really use the system allocators and have PartitionAlloc's MTE support disabled.

They're also continuing work on integrating more ARMv9 security features. MTE has the highest impact and is the most interesting of these, but they're also expanding usage of PAC and BTI. Android uses Clang's type-based CFI, but not everywhere, so BTI is still useful.


> ESM was developed in the open and anyone could participate, including the TypeScript team.

This point stings for me, personally, since _I_ was the TypeScript language dev _in_ this WG trying to make our concerns noted, because we certainly did participate. However, the group largely deadlocked on shipping with ecosystem compatibility measures, and what you see in Node today is the "minimal core" the group could "agree" on (or be bullied into by group politics; this was getting shipped whether we liked it or not, with us as the last holdouts). The group was dissolved shortly after shipping it, making that "minimal core" the whole thing (which was the stated goal of some engineers who have since ascended to Node maintainer status and are now the primary module system maintainers), with the concerns raised about existing ecosystem interoperability left almost completely unaddressed. It's been a massive "I told you so" moment (since a concern with shipping the "minimal core" was that those concerns would never be addressed), but it's not like that helps anyone.

Like, this shipped because _in theory_ it'd be non-breaking, from a library author's perspective, to get Node's CJS to behave reasonably with ESM (like it does in `bun`, or in any one of the bundler-like environments available, like `tsx` or `webpack` or `esbuild`), and _in theory_ they're open to a PR for a fix... I wish anyone who tries good luck getting such a change merged.
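For anyone who hasn't hit this: the core friction is that Node's CJS historically couldn't `require()` an ES module at all (it threw ERR_REQUIRE_ESM until the recent require(esm) work); the only sanctioned escape hatch is dynamic `import()`, which is async and therefore viral through otherwise-synchronous code. A sketch, using a built-in module as a stand-in for an ESM-only dependency:

```typescript
// From a CommonJS module, an ESM-only package can't be require()d in
// older Node; the workaround is dynamic import(), which forces
// async-ness upward through the caller:
async function loadDep(): Promise<string> {
  const path = await import("node:path"); // dynamic import works from CJS
  return path.posix.join("a", "b");
}

loadDep().then(console.log); // "a/b"
```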


Fwiw I appreciate your effort! That sounds really frustrating.

I agree the recent bun/tsx/esbuild work (bun especially) has shown the Node CJS/ESM fiasco was a bit of an emperor-wearing-no-clothes moment, where I think we everyday JS programmers just trusted the Node devs at their word that CJS/ESM had to be this painful...

But now, seeing that it doesn't have to be that way, it's like wait a sec...the last ~5 years of pain could have been avoided with some more pragmatic choices? Oof.


I think this is funny, since ESM being "native" in browsers doesn't really matter until you can convince devs they don't actually need a bundler. So long as you're using a bundler, the browser's runtime doesn't really matter: you're using the runtime the bundler presents and emulates on top of the browser. Native ESM has proven to be quite painful in ecosystems that _don't_ rely on the presence of a bundler to patch over problems, precisely because of the issues of interoperating with, I don't know, _literally any existing code_.

I can't think of a concrete benefit to a developer that ESM brings (just pain, but maybe I'm biased by what I'm exposed to). Probably why it's so slow to be adopted.


I've seen some great development environments where all of development/debugging is unbundled ESM directly in the browser. It's going to take a lot of momentum shift to swing the "bundler pendulum" back away from "always" to "as needed" (or even, shockingly, "never"), but I think it is going to happen. HTTP/2+ really does make "never" a bigger possibility than ever before, especially in cases like MPAs (and SSR SPAs that don't mind slower hydration outside certain fast paths).

Also, even in cases with bundlers, some of the modern bundlers (esbuild and swc) now bundle directly to ESM as the target. Lazy-loading boundaries and common/shared-code boundaries are just ESM imports, and there's no "runtime emulation" there, just native browser loading at that point. They're just taking "small" ESM modules and making bigger ones.
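For instance, esbuild can bundle with code splitting straight to ESM output, where the split points remain plain `import`s loaded natively by the browser (the paths here are hypothetical):

```shell
# Bundle to ESM with code splitting; shared and lazy-loaded chunks
# become native ESM imports rather than a runtime loader shim.
esbuild src/app.ts --bundle --splitting --format=esm --outdir=dist
```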


I, too, thought http/2+ would encourage unbundled js, but unfortunately in a world where people are used to whole-program minification and dynamic app slicing, I don't think we'll _ever_ move away from bundlers. The build step is here to stay for most serious projects.

ESM may very well be the module system designed for a world that'll never actually exist, and will mostly just be an ill-defined compilation target. But hey, maybe the next web module system will do better - those WASM working group people are working hard on their module system, and it's intended as a compilation target from the start, so shortcomings in it can be patched over by tools from the start :)


I do think it is "just" a matter of time/momentum. There's definitely a lot of cargo culting around bundlers ("these bundlers were gifted to us from the gods!", "these were my father's bundlers, I simply must use them!") and there are a lot of developers who "love" scaffolding and boilerplate that are just going to blindly do whatever `create-react-app` or `ng new` or whatever scaffolder of the day briefly masticates then vomits out for their baby bird mouths. Those things (culture, scaffolders) all move slowly (for lots of good reasons, too) and just take time and a few revolutionaries.

HTTP/1 still isn't going anywhere, anytime soon, especially in developer tools (because for better and worse HTTP/2+ requires TLS and developer tools are tougher to build with TLS), so it's still hard to combat the cultural assertions that "bundlers are best practice" and "bundlers are required for performance". But that too is something that shifts slowly over time. There are decades of momentum behind HTTP/1.x.

There's also still so much momentum behind frameworks like React, Angular, plenty more that spent so much time in CommonJS that they still haven't yet sorted out their ESM builds. That's also something getting better with time. Especially now that Node LTS covers enough that you can take an "ESM ONLY (in Node)" approach with confidence.

As I mentioned, I've had the pleasure of working with some projects that were 100% unbundled ESM in development and then spot-bundled (often with esbuild) into highly specific sub-bundles just for production, where performance showed notable improvements. The world of ESM-first is nice when you give it a chance. It will take a bit longer to get to "ESM ONLY, EVERY WHERE", but it still seems to be a matter of time/momentum rather than a problem with ESM.


I think it's still largely prudent to use bundler tools. I think the biggest issue with not going to ESM syntax comes down to static analysis and tree shaking. It's so much better with proper ESM, and will reduce overhead for the browsers. The bundlers definitely paper over various issues. Being able to import non-js resources like styles, json and other references is useful, to say the least. I don't think this will ever be really practical for direct browser use short of some generational compute and network improvements.

That said, we aren't that far off. Many sites are spewing several MB of JS on load, and it's relatively well-performing even on modest phones these days, at least relative to '90s dialup, where page loads were measured closer to 15s. Things are absolutely snappy (mostly). I think the biggest hurdle today is sheer entropy. React+MUI+Redux is IMO pretty great, and getting to something similar in pure JS would take a lot of effort. Not insurmountable, but significant. There's still a new framework of the month nearly every month in the JS space.

Getting movement is hard. It'll take time and persistence.


At first, I was really excited, because I really wanted this pipeline to work well. I've wanted an automatic FPGA workload offloader for a while. Then I read that their "JIT" FPGA bitstream still took 2 _hours_ to build, because it still used the built-in slow-as-molasses FPGA manufacturer bitstream assemblers. That's not a timescale that supports a real, dynamic workload, unfortunately. Still, the work is neat, and gluing together these parts in a way that works is really cool - what is essentially a Graal FPGA backend is pretty neat, if unlikely to be paradigm-shifting on its own. Still, maybe the existence of something like this can prod Intel into making a fast mode for the bitstream assembler.


Contrary to what seems to be popular opinion here on HN, I think it's fine that they'd entertain keeping their code as JS with JSDoc rather than TS. It's still typesafe, which is what's important from a maintenance and documentation perspective. A linter that goes all-in on TS is what tslint was (and it worked on JS), and ultimately it lost to eslint in the linter popularity wars (as have jslint and jshint, imo). If JS source is the sauce that keeps contributions coming and the downloads flowing, then so be it. There's also some value in dogfooding their own JS lints rather than TS ones. Sure, there's overlap, but undoubtedly some differences too (especially any rule that lints JSDoc).
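For reference, the JSDoc flavor of "typesafe JS" looks like this (a made-up function for illustration; the annotations are checked by tooling like `tsc` with `checkJs` or a `// @ts-check` pragma, with no build step):

```javascript
// @ts-check

/**
 * Repeat a label n times; typechecked from the JSDoc annotations alone.
 * @param {string} label
 * @param {number} n
 * @returns {string}
 */
function repeatLabel(label, n) {
  return label.repeat(n);
}

console.log(repeatLabel("ab", 2)); // "abab"
// repeatLabel(2, "ab"); // flagged by the checker: arguments swapped
```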


also note that both Deno and Sveltekit moved from TypeScript to JSDoc (https://twitter.com/swyx/status/1350427690814251010)


For Deno at least, there were some pretty specific reasons for the change:

https://docs.google.com/document/u/0/d/1_WvwHl7BXUPmoiSeD8G8...


I'm kind of inclined to say that eslint won because it has TypeScript support as well. If it didn't work on TypeScript, you'd see fewer and fewer people using it.


Eh, wanting to dogfood the JS tooling you're writing is a fairly valid reason, imo. Would dogfooding on TS cover many JS use cases? Sure. But inevitably some things are different. I'm somewhat sympathetic.

