Simple async JavaScript is still single-threaded with an event loop. In other words, your async code is just a task deferred for later, and only one task runs at a time, moving on to another task only when the current one completes or explicitly yields via “await”.
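
A minimal sketch of that interleaving (nothing beyond plain promises assumed):

  async function task(name: string) {
    for (let i = 0; i < 2; i++) {
      console.log(`${name} step ${i}`)
      await Promise.resolve() // yield control back to the event loop
    }
  }

  task("A")
  task("B")
  // logs: A step 0, B step 0, A step 1, B step 1 (interleaved, never parallel)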

Service workers are threads. They’re basically separate JavaScript processes you communicate with via IPC, with other special privileges and capabilities allotted to them.
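
A sketch of that message-passing style, using a dedicated Worker for brevity (service workers use the same postMessage-based IPC, just with a registration step):

  // main.ts
  const worker = new Worker("worker.js")
  worker.onmessage = (e) => console.log("from worker:", e.data)
  worker.postMessage(21)

  // worker.js (runs on a separate thread)
  onmessage = (e) => postMessage(e.data * 2)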


What is it about it that makes you feel this way?


It was 5 billion calls to 500 million numbers over a 3 month period.


4 billion were to me though


I report nearly all auto-generated unsolicited emails as spam. I also report nearly all follow ups to ignored cold emails. I will protect my inbox.


Certainly! The subject line plays a significant role in how recipients determine whether an email is spam or not in the world of email communication. While the exact proportion may vary slightly between research studies, it is generally agreed that the subject line has a substantial impact on the email's outcome.


This isn’t the attack vector to be concerned about. More concerning is when there’s a data breach and an attacker gains access to hashed passwords. At that point, you attack the hash not the API.

This comment is an example of why I wouldn’t want any given website to choose my password.


That assumes the situation where the password hashes are stored in a way that is less secure than the actual data that the attacker ultimately wants access to. That must not be a very common situation.

The passwords will not be of any use on any other system. This would eliminate password reuse.


Accessing a user’s data is not the only reason for hacking their account. Performing actions on behalf of a user is just as much of a threat.

Edit: also, if an attacker dumps all the data today then loses access to the data tomorrow, having access to my password hashes means they can access my account and data later.


You are correct that the period doesn’t count. Both email addresses belong to the same account. A possible explanation is that they entered your email by mistake.


Not sure if this was their motivation, but hardware acceleration also enables increased opportunity for fingerprinting, to my knowledge.


  You're forced to emulate exactness by always destructuring objects - not great.
It’s almost like using objects as enumerable maps is an anti-pattern.


The problem has nothing to do with objects. The problem is, how do you type check something like sprintf without ad hoc type rules?


TypeScript can check sprintf, though, using template string types: https://www.hacklewayne.com/a-truly-strongly-typed-printf-in...
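
A taste of the technique, not the article's full implementation (a sketch handling only %s and %d):

  type ArgsOf<S extends string> =
    S extends `${string}%${infer Spec}${infer Rest}`
      ? Spec extends "d" ? [number, ...ArgsOf<Rest>]
        : Spec extends "s" ? [string, ...ArgsOf<Rest>]
        : ArgsOf<Rest>
      : []

  declare function sprintf<S extends string>(format: S, ...args: ArgsOf<S>): string

  sprintf("%s is %d years old", "Ada", 36)       // ok
  // sprintf("%s is %d years old", "Ada", "36")  // type error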


Meanwhile, both Rust [1] and Haskell [2] manage to implement statically type-safe string interpolation.

[1]: https://willcrichton.net/notes/type-safe-printf/

[2]: https://hackage.haskell.org/package/formatting


Yes but the Rust example is an ad hoc type rule implemented behind the macro. You can make it type-safe but you lose the ability to have a formatting language in the string itself.


Rust has explicit support for it in the compiler, which is not great.

Zig does it the right way: it's defined in Zig itself, with no special cases in the compiler like in Rust.


I don’t think I’m following well enough to provide a meaningful response.

This is not meant as an argument against what you’re saying, because I know you were just giving an example, but I found this and thought you may find it interesting: https://www.hacklewayne.com/a-truly-strongly-typed-printf-in...


Runtime type information is against the goals of TypeScript.

https://github.com/Microsoft/TypeScript/wiki/TypeScript-Desi...

Edit:

  Non-goals:
  …
  5. Add or rely on run-time type information in programs, or emit different code based on the results of the type system. Instead, encourage programming patterns that do not require run-time metadata.


This is my biggest issue with the language.

Fetch’s .json() returns “any”, meaning you can’t trust that the data you received is actually the data you expected. Bugs from this mismatch will surface many lines away (at first use) and be more difficult to find. Because of this “goal of the language” you cited, there’s no built-in way to validate any data at runtime. In nearly any other typed language I have some deserialization mechanism. Not so in TypeScript!

This decision led to more bugs in our codebase than any other. The compiler actively lies to you about the types you’ll have at runtime. The only solutions are codegen or writing validators to poorly approximate what Typescript should give us for free.


Yes, “any” is a wart. And it’s a bad one.

The correct type for values you don’t know the type of (like the response of an API call) is “unknown”.

TypeScript does not provide the facilities you describe because there is not a one-size-fits-all solution to the cases that are possible and common in JavaScript.

It is left to the developer to decide how to validate unknown data at the boundaries of the API.

There are third party libraries that facilitate this in different ways with different trade-offs.
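
A minimal hand-rolled version of that boundary validation (a sketch; the `User` shape and endpoint are assumed for illustration):

  interface User { id: number; name: string }

  function isUser(value: unknown): value is User {
    return (
      typeof value === "object" && value !== null &&
      typeof (value as Record<string, unknown>).id === "number" &&
      typeof (value as Record<string, unknown>).name === "string"
    )
  }

  const data: unknown = await fetch("/api/user").then((r) => r.json())
  if (isUser(data)) {
    console.log(data.name) // safely narrowed to User here
  }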

  The compiler actively lies to you about the types you’ll have at runtime.
I find this to be rare if you are using strict mode with proper TypeScript definition files for your platform and dependencies. Usually the lie is in your own code or bad dependencies when an “unknown” type (including “any”) is cast to a concrete type without being validated.

  In nearly any other typed language I have some deserialization mechanism.
Could you provide examples? I either don’t understand or I disagree.


> Usually the lie is in your own code or bad dependencies when an “unknown” type (including “any”) is cast to a concrete type without being validated.

Yes, but one of those bad dependencies is the standard library.


When does the standard library lie in this case?


Things are `any` when they should be `unknown` or generic, mostly. Off the top of my head:

- `JSON.parse` and `.json()` on response bodies both return `any`.

- `JSON.stringify`'s `replacer` argument is of type `(this: any, key: string, value: any) => any`.

- `PromiseRejectedResult["reason"]` is `any`.

There are certainly many others.
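
One honest workaround is a thin wrapper (a hypothetical helper; ts-reset achieves roughly this globally by re-declaring the lib types):

  function parseJson(text: string): unknown {
    return JSON.parse(text)
  }

  const value = parseJson('{"a": 1}')
  // value.a // error: 'value' is of type 'unknown'; you must narrow first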


> Usually the lie is in your own code or bad dependencies

The lie is almost always in an external API response from fetch (hence the complaint about “any” above).

> Could you provide examples?

Off the top of my head… Go’s stdlib json.Unmarshal and Rust’s Serde derive Deserialize.


The lie is when your code uses* the “any” value where a concrete type is expected.

I was misunderstanding your point with the deserialize.

Edit: “using” -> “uses”


This can't be solved by static analysis: anything that crosses the I/O boundary has to be asserted, refuted or predicated at runtime, and there are libraries for each style, e.g. [0], which doesn't throw (based on refutations, which can be mapped to predicates and assertions without much cost), or [1], which throws (based on assertions).

Predicates are the most performant but won't give you any indication of why a check failed (e.g. some nested field was null but was expected to be a number).

Refutations are a great sweet spot, as they're fast while still giving information about the error.

Assertions are slow, but more often than not you don't care.

You can map between any of them, but some mappings don't make much sense, e.g. assertion to predicate, as you'd pay the cost of nested try/catch while dropping the error information.

Refutation is a great base for all three.
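
A minimal sketch of the three shapes (illustrative signatures, not the linked libraries' actual APIs):

  type Predicate<T> = (value: unknown) => value is T       // fast, boolean only
  type Refutation = (value: unknown) => string | undefined // reason on failure, undefined on success
  type Assertion<T> = (value: unknown) => T                // returns the value or throws

  // Mapping a refutation down to a predicate is cheap but drops the error info:
  const refuteNumber: Refutation = (v) => (typeof v === "number" ? undefined : "expected number")
  const isNumber = (v: unknown): v is number => refuteNumber(v) === undefined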

[0] https://github.com/preludejs/refute

[1] https://github.com/appliedblockchain/assert-combinators


The complaint is that Typescript not emitting any of the type information for the runtime means every library must reimplement the whole TS type system.


Yes, that's true. They could support emitting metadata behind an explicit keyword, which would help without bloating anything implicitly; they already emit code for enums, for example.

Personally I'm a fan of not introducing a new language that runs at compile time: just use the same language to get macros and operations on types for free, like Zig does.

TypeScript's type system is already Turing complete, so it's not like they'd be losing anything there.
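
For reference, enums are one of the few TypeScript constructs that already emit runtime code:

  enum Color { Red, Green }

  // Numeric enums compile to a real object, reverse mapping included:
  console.log(Object.keys(Color)) // ["0", "1", "Red", "Green"]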


You might like TS Reset: https://github.com/total-typescript/ts-reset, which fixes this particular problem. I don't personally find it to be a big issue though.

Regarding runtime type checking, if you were to write something that can handle the total space of possible TS types, you would end up with incredibly complex machinery. It would be hard to make it performant, both in terms of speed and bundle size, and it would be hard to predict. I think Zod or perhaps https://arktype.io/ which target a reasonable subset are the only way to go.


This was driving me nuts in a project with lots of backend churn. Runtime type validation libraries like typebox and zod (I like typebox) can really save your bacon.

The downside is the underlying types tend to be more complex when viewed in your IDE, but I think it's worth it.


Here’s a neat trick for those complex types:

  // A homomorphic mapped type forces the compiler to display one flat
  // object type instead of a chain of intersections and helper aliases.
  type Identity<T> = T

  // This can be made recursive to an extent, alas I’m on mobile
  type Merge<T> = {
    [K in keyof T]: Identity<T[K]>
  }

  type ReadableFoo = Merge<UnreadableFoo>


You should take a look at https://zod.dev/ if you haven't already - it's a library for runtime parsing that works really well for your use case.

Types are inferred from the schema, though personally I like to handwrite types as well, to sense-check that the schema describes the type I think it does


I’ve used zod and every other schema validator available for this. Some problems:

1. Types are not written in typescript anymore. Or you have to define them twice and manually ensure they match. ReturnType<typeof MyType> pollutes the codebase.

2. Types have to be defined in order, since they’re now consts. If you have a lot of types which embed other types, good luck determining that order by hand.

3. Recursive types need to be treated specially because a const variable can’t reference itself without some lazy evaluation mechanism.

TS could solve all of this by baking this into the language.
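
For context, the special treatment point 3 refers to looks roughly like this in zod (a sketch):

  import { z } from "zod"

  type Category = { name: string; children: Category[] }

  // A const can't reference itself during initialization, hence z.lazy:
  const Category: z.ZodType<Category> = z.lazy(() =>
    z.object({ name: z.string(), children: z.array(Category) })
  )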


1. You can just use `export type Foo = z.infer<typeof fooParser>` in one place and then import Foo everywhere else, without using z.infer everywhere else

2. Use let and modify your types as new ones become available - union them with a new object that contains the new property you need

3. How often are you making recursive types?

I agree that all of this could be made easier, but zod is the best we have and great for most normal usage. The reason TS doesn't want to make this available at runtime is that it means so many changes they make will become breaking changes. Perhaps one day when there's less development on TS we'll see this get added


Including runtime checks would also have performance implications.

I really enjoyed using myzod (a more performant, simpler zod) for a while, but recently I’ve been using Typia, which takes a codegen approach. I have mixed feelings about it, and from my own benchmarking its performance seems overstated, but the idea is sound: because we know the type, we can compile better, type-optimized serialize/deserialize functions.

As for not littering the codebase with runtime checks, it may be worth reiterating to the person above that you really should only do type determinations at the I/O edges: you parse your input, and it becomes known from then onwards. You runtime type-check your output, and its requirements propagate upwards through your program.
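
A sketch of what the codegen approach looks like with Typia (assuming its compile-time transformer is wired into the build):

  import typia from "typia"

  interface Payload { id: number; tags: string[] }

  const raw = '{"id": 1, "tags": ["a", "b"]}'
  // The transformer replaces this call with a generated, type-specialized validator:
  const payload: Payload = typia.assert<Payload>(JSON.parse(raw)) // throws on mismatch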


There aren't really "performance implications" for making an impossible thing possible.

The ability to emit a parser/verifier would not require any other runtime or affect the speed of any other code.


Pragmatically, your interest is why I was mentioning typia, which does what you are describing: opt-in parser/stringify/mock-gen codegen derived from typescript.

I think it’s reasonable enough to allow other people to focus on runtime behavior. There’s still a lot to do to model js accurately.

In my personal opinion, the ideal ts would be one where you just write regular js, and the compiler is able to check all of it for correctness implicitly. That would require runtime validators etc to be explicitly written, yes, but you could “just write js” and the correctness of your program could be proven (with guidance to make it more provably correct when it is not yet).


It's also possible -- for specific cases, probably not generally -- to define your schema object as const and use type manipulation to generate "real" types automagically. Toy example here: https://github.com/andrewaylett/aylett.co.uk/blob/3fffae1bab...

This lets me run my inputs through a schema validator, and ensure that the type I'm using statically will match the schema used at runtime.
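
A toy version of the idea (assumed names, not the linked code):

  const personSchema = {
    name: "string",
    age: "number",
  } as const

  // Map the literal strings in the schema back to real TS types:
  type FromSchema<S> = {
    -readonly [K in keyof S]: S[K] extends "string" ? string
      : S[K] extends "number" ? number
      : never
  }

  type Person = FromSchema<typeof personSchema> // { name: string; age: number }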


The TS devs have mentioned that they wish JSON.parse returned unknown, but the change is too disruptive now.


It would be a lot nicer if it instead returned some JsonType that’s a union of all the possible JSON values. Anyone know if there’s a good reason why it doesn’t do that?


You can pass an arbitrary rehydration function, which can return non-JSON-representable types


It could look at the return type of your reviver function, or at least whether you passed one in.


There's a big discussion about this: https://github.com/microsoft/TypeScript/issues/1897. The benefit seems extremely limited to me. Valid JSON is obviously a subset of `any`, but I can't think of a situation where that particular specificity provides any value. Can you?


The value is when you’re parsing the JSON afterwards. It’s good to know you can match it exhaustively -- each value is either a Record<string, Json>, Json[], string, number, boolean or null, and nothing else.

Edit to add: I think “any” is almost always a big cop-out because you couldn’t be bothered figuring out the correct type, and it often causes problems (loss of type coverage) further down the line. I admit I do use “any” in my own code when I really need to, but a library should work harder to avoid it, and the standard platform typings should work harder still.
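
Written out, the union in question is just (using an index signature to keep the recursion legal):

  type Json =
    | string
    | number
    | boolean
    | null
    | Json[]
    | { [key: string]: Json }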


That's effectively what unknown would be - at least the outcome would mostly be the same. You'll end up narrowing values in just the same way.


What are the common operations you can perform on that union?


You can narrow it exhaustively.
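
A sketch of that narrowing, with the `Json` union restated so the example stands alone:

  type Json = string | number | boolean | null | Json[] | { [key: string]: Json }

  function describe(value: Json): string {
    if (value === null) return "null"
    if (Array.isArray(value)) return `array(${value.length})`
    if (typeof value === "object") return "object" // only the record shape is left
    return typeof value // narrowed to "string" | "number" | "boolean"
  }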


Interesting, I think it indeed falls under "emit different code based on the results of the type system", even though it all happens at compile time. Thank you for the link.

I'm not sure a) if there is any "programming pattern" that can avoid this without other drawbacks, and b) if there is any problem with emitting different code based on the types (at compile time).

I suppose it could lead to breaking behaviour if the type system is changed, since it now can impact runtime code. Personally, I think this would be more than worth it, but maybe the TypeScript team has a different opinion or another reason.


Did the comment you're replying to get edited? They are pretty explicitly talking about statically known type information, not runtime.


They’re suggesting outputting runtime type information by emitting different code based on the type of the value passed to their hypothetical “Type.keys()” function.


Ahh, that makes sense then; thanks for clarifying.


This is why Angular's dependency injection, as well as some ORMs, has required non-standard TypeScript emit at compile time for a long time now. It looks like a fairly locked-up conflict.


Well that's interesting. That means decorator metadata was a non-goal, despite being supported for ages.


The Reflect.defineMetadata API and the long-supported decorators syntax come from very early versions of Typescript when Typescript was (maybe) more actively trying to steer the direction of ECMAScript by implementing features that were Stage 2 proposals.

Typescript only got official ECMAScript decorator support in the recent v5. ECMAScript decorators only got to stage 3 in April ‘22.

But decorator syntax is just a kind of syntax sugar over passing a function through another function, and you can do that today to achieve runtime type information (see zod etc). Zod could be rewritten using decorator syntax and still be “just JavaScript” while providing compile-time type support.

The distinction being that supporting ECMAScript features is a goal for TypeScript, but they were perhaps too aggressive early on in investing in decorators and the Reflect Metadata API. They had the wisdom to put these behind “experimental” flags, but I think they got quite popular within the TypeScript community due to Angular's early adoption of both TypeScript and those two features; Angular was really the only major lib using TS for quite some time.
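
The desugaring in question, as a plain function (a sketch, with a hypothetical `withMetadata` decorator):

  const registry: Array<new (...args: any[]) => object> = []

  function withMetadata<T extends new (...args: any[]) => object>(Cls: T): T {
    registry.push(Cls) // record the class for later lookup (runtime metadata)
    return Cls
  }

  class Service {}
  const RegisteredService = withMetadata(Service)
  // "@withMetadata class Service {}" is approximately the line above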


> Typescript only got official ECMAScript decorator support in the recent v5. ECMAScript decorators only got to stage 3 in April ‘22.

Yah, they've been out for ages. It's quite surprising 1. how long it's taken ECMA and 2. how quickly TypeScript took advantage of decorator syntax to improve the language. I'd say it's a definite win for those of us who love decorators.

> But decorator syntax is just a kind of syntax sugar over passing a function through another function, and you can do that today to achieve runtime type information

I'll have a look at Zod, thank you! I have to admit I like the simplicity of decorators; I'm playing with Dependency Injection and, while the loss of parameter injection is a bit disappointing, there are ways to work around it, e.g.

  @Injectable([Dependency])
  class Service {
    constructor (private dependency: Dependency) { }
  }

> [...] features by Angular, which was really the only major lib using TS for quite some time.

Definitely. Although it'd be interesting to see how Angular handles the transition away from parameter injection; there's an open issue about it on their GitHub, but from what I can see none of the core members have spoken about it yet. <https://github.com/angular/angular/issues/50439>

The main proposal from a community member is to replace them with the Service Locator pattern (ew). Thankfully someone in-thread provided them with a little wisdom regarding why that's a terrible idea. Here's hoping Angular keeps a nice API.


    Also you should fill your database with houses - use AI or some sort of random data generator to generate them and just make sure there's a small but clear note on the listing saying this is AI data. It's not ideal but it is better than an empty database. Give people something to look at.
This genuinely made my stomach turn. If this is the future… I don’t know if I want in.


Fully agree. Once I see the 'this is fake' disclaimer on any house, I'll skip it.

Perhaps a demo.probox.co would be a good idea for a tech demo.

Also: this site seems implicitly US-centric; best to make that explicit.


This kind of idea is always a telling stand-in for business incompetence.

Yes, seeding something like this is difficult. It takes work. Arguably it’s the main part and any sort of app is just the implementation. This is why oftentimes these things start off covering a small region and grow in time.


Hello, Waterluvian. I missed your comment, sorry; I still have a few to reply to. If I understand your initial statement, to me it means incompetence drives innovation. You are correct that many start small and grow over time. The difference here is this platform has been built to handle volume, and we can handle a larger area because the features are built into the site. So we are starting small by only opening in 13 states to iterate. When we see accelerating adoption we will likely open much faster. This all assumes customers like what we are offering. We shall learn soon enough. Thank you for your comments.


Isn't this the PAST... something along the lines of: "spend some time filling the database with data; any data; make it up if you have to"? This would've worked long before AI, except now it's just a bit more efficient?


Hello, sverhagen. I am not understanding your comment, but allow me to address what I sense you mean. PropBox is not the past; it is the future, today. We don't want "any data", we want accurate data. We want the best data. I cringe at the thought of "make it up if you have to." We do not think it is AI dependent. We will watch AI for a while and experiment a bit with it. If it actually works for us we will incorporate it. For us, one of the problems with AI and real estate is there is so much inaccurate, misleading, harmful content out there that we don't want to contaminate our material.


It wasn't cool then either.


It probably wasn't; I'm just not sure why AI, for all its faults, got blamed here :)


Yeah, it seems weird to have a listing that isn't real there. Maybe to start with you could partner with real estate companies and have both houses that are direct sales and ones that redirect you to a realtor. At least they would be real houses, and then when people decide to buy a house you could message them asking if they'd like to sell on the platform instead.


Hello, johnnypangs. We planned on launching empty. When I started my first real estate company, unlike the typical method many new real estate companies use, which is to hoard and pocket inventory while they are still with their soon-to-be former company so they can open with inventory, I began with zero inventory. Let's see what happens in the coming months. We have one now, soon to be two; let's see what it looks like in six months. We believe we can be successful independent of the industry. As an insider and an operator, I can say very few consumers understand the financial risks associated with real estate. All service businesses are far riskier than people realize: doctors, lawyers, car mechanics, auto sales, and many others. The problem is asymmetry, where the seller knows a lot and the customer knows little.


Maybe you're right.

On the other hand, with an empty database there's nothing at all to see and you bounce straight back out.


Not necessarily. I think if the only way to make your business work is to start off by manipulating your customers, that's a problem. I would suggest one possible alternative is to target sellers and give them the tools they need to easily list their home and promote that listing off the platform. If sellers can see the value add, it shouldn't matter so much that there's no existing listings in their area.


michaelmior, you are correct, for the most part. If the only way to make the business work was to manipulate the customer, we would not be doing PropBox. We believe being transparent with our customers is the way. I'm curious if you spent any time on PropBox, because it does exactly what you suggest it must do. We promote the buyer and seller working together with commonly shared data to reach the common goal: a successful transaction. Take a peek if you have time and tell me what you think. PropBox.co. Thanks for your comments. Richard

