
I think you are confused by terminology here and not by behavior: "immutable variable" is normal terminology in many languages and can be said to be distinct from constants.

In Rust, if you define with "let x = 1;" it's an immutable variable, and the same goes for Kotlin's "val x = 1;"
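For example, a small Rust sketch of how that's distinct from a constant (the binding is fixed even though the value isn't known at compile time):

    use std::time::SystemTime;

    fn main() {
        // An immutable variable: bound once, at runtime, and not reassignable.
        let now = SystemTime::now();
        // now = SystemTime::now(); // error[E0384]: cannot assign twice to immutable variable `now`

        // A `const` is different: it must be a compile-time value, so this would not compile:
        // const NOW: SystemTime = SystemTime::now();
        println!("{now:?}");
    }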


Lore and custom have made "immutable variable" a frequent bit of idiomatic parlance, but it's still an oxymoron given the generally accepted meanings of the two words taken in isolation.

Neither "let" nor "val[ue]" implies constancy or vacillation in themselves without further context.


Words only have the meaning we give them, and "variable" already has this meaning from mathematics: in x + 1 = 2, x is a variable.

Euler used this terminology; it's not some newfangled corruption. I'm not sure it makes much sense to argue that new languages should use different terminology based on a colloquial/nontechnical interpretation of the word.


I get your point about how word meanings evolve.

Also, it's fine for anyone to name things however they come to mind, as long as the other side gets what is meant, I guess.

On the other hand, it doesn't hurt anyone much to call an oxymoron an oxymoron, or to trade idle remarks about terminology and its evolution.

On the specific example you give, I'm not an expert, but it seems dubious to me. In x+1=2, terms like x are called unknowns. Prove me wrong, but I would rather bet that Euler used unknown (quantitas incognita) unless he was specifically discussing variable quantities (quantitas variabilis) to describe, well, quantities that change. He probably also used French and German equivalents, but if Euler spoke any English, that's not reflected in his publications.


"Damit wird insbesondere zu der interessanten Aufgabe, eine quadratische Gleichung beliebig vieler Variabeln mit algebraischen Zahlencoeffizienten in solchen ganzen oder gebrochenen Zahlen zu lösen, die in dem durch die Coefficienten bestimmten algebraischen Rationalitätsbereiche gelegen sind." - Hilbert, 1900

The use of "variable" to denote an "unknown" is a very old practice that predates computers and programming languages.


Yes, sure, I didn't mean otherwise; I just wanted to express doubts about Euler already doing so. Hilbert is already a century later.

Every system has some type 1 errors and some type 2 errors. The notion that they could just have neither if they cared a little more is just kind of absurd and doesn't at all reflect the messiness of the world we live in.

Even if Google paid Harvard JDs to read every DMCA notice (and there literally aren't enough of them), they would still sometimes be tricked by adversaries and sometimes incorrectly think someone was an adversary.

I worked at YouTube in the past and I can tell you copyright ownership isn't even fully known by the lawyers. Concretely, there are a lot of major songs where the partial-ownership claims of the major companies add up to more than 100% or less than 100%. Even the copyright holders don't actually know what they themselves own without lots of errors, and that's without getting into a system that has to try to combat adversarial / bad-faith actors.


I feel like every single reply from them was about whether he held the copyright rather than whether he had the identity he claimed, and part of what went wrong is he kept asking about how to prove his identity in the replies.

I suspect what happened is they had some tag for what the content at that URL was, and it was wrongly tagged as some other book, so the question wasn't his identity but the content's identity that had to be addressed. Their replies all look consistent with "the book at that URL is not the book you are claiming you own".

Not that their handling was good or clear, but to my eyes both sides were talking past each other, since he kept talking about his identity and the Google side wasn't disputing his identity.


>I feel like every single reply from them was about whether he held the copyright rather than whether he had the identity he claimed, and part of what went wrong is he kept asking about how to prove his identity in the replies.

On one level I would say that's simply untrue given the phrasing of the emails from Google. But on another level, there's an integral relation between the question of identity and copyright ownership anyway, which I think makes that distinction moot in this case. Regardless of what you call it, they abandon the topic by the third email.

I think one of the things that makes factual issues difficult to process accurately is that there are a lot of tempting paths toward minimizing cognitive dissonance by taking a both-sides approach, which has the satisfying psychological effect of relieving tension while freeing one from the burden of closely comparing factual details and from feeling ugly by taking sides. There's obviously a lot of powerful psychology pulling us towards rationalizing an equilibrium. It's what makes fact-checking hard: if you confront an asymmetry, it doesn't offer the convenient relief from psychological dissonance that the brain is searching for.


I'm surprised you can read Google's words as challenging his identity. Just looking at the emails again explicitly:

> It is unclear to us how you came to own the copyright for the content in question, because you do not appear to be the creator of the content

Seems very explicit to me that the concern is "We don't think Jeff Starr owns the content that is at that URL" and not "we don't think you are Jeff Starr"

And then the third reply was, in effect, "your multiple long replies did not address our rejection concerns, and so you have failed the challenge script overall". I would really expect he could call a lawyer to restart the process in a way that is worded less casually and has the necessary shibboleths for their challenge script to be passed.


Yes, and that same pattern already does exist in C and C++: asserts that are checked in debug builds but presumed true for optimization in release builds.

Not unless you write your own assert macro using C23 unreachable(), GNU C __builtin_unreachable(), MSVC __assume(0), or the like. The standard one is defined[1] to either explicitly check or completely ignore its argument.

[1] https://port70.net/~nsz/c/c11/n1570.html#7.2


Yeah, I meant it's common for projects to make their own 'assume' macros.

In Rust you can wrap core::hint::assert_unchecked similarly.
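Something like this, for example (just a sketch; the assume! macro name is made up and not from any particular project):

    // Checked in debug builds, treated as an optimizer hint in release builds.
    macro_rules! assume {
        ($cond:expr) => {
            if cfg!(debug_assertions) {
                assert!($cond);
            } else {
                // SAFETY: the caller promises the condition holds; UB if it doesn't.
                unsafe { core::hint::assert_unchecked($cond) };
            }
        };
    }

    fn get(v: &[u8], i: usize) -> u8 {
        assume!(i < v.len());
        v[i] // the bounds check can typically be optimized away in release builds
    }

    fn main() {
        let data = [1u8, 2, 3];
        println!("{}", get(&data, 1));
    }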


This is very interesting, though the limitations for 'security' reasons seem somewhat surprising to me compared to the claim "Anything JSON can do, it can do. Anything JSON can't do, it can't do.".

Simplest example: "a\u0000b" is a perfectly valid and in-bounds JSON string that valid JSON data sets may contain. Doesn't it fall short of "anything JSON can do, it can do" to refuse to serialize that string?


"a\u0000b" ("a" followed by a vertical tabulation control code) is also a perfectly valid and in-bounds BONJSON string. What BONJSON rejects is any invalid UTF-8 sequences, which shouldn't even be present in the data to begin with.

You're thinking of "a\u000b". "a\u0000b" is the three-character string also written "a\x00b".
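To make the difference concrete, a quick check using the serde_json crate (just as an illustration, nothing BONJSON-specific):

    fn main() {
        // "a\u0000b": three characters, with a NUL in the middle
        let with_nul: String = serde_json::from_str(r#""a\u0000b""#).unwrap();
        assert_eq!(with_nul, "a\u{0000}b");
        assert_eq!(with_nul.chars().count(), 3);

        // "a\u000b": two characters, "a" followed by a vertical tab
        let with_vt: String = serde_json::from_str(r#""a\u000b""#).unwrap();
        assert_eq!(with_vt, "a\u{000b}");
        assert_eq!(with_vt.chars().count(), 2);
    }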

Bleh... This is why my text formats use \[10c0de] to escape unicode codepoints. Much easier for humans to parse.

My example was a three-character string where the second character is \u0000, i.e. a NUL character in the middle of the string.

The spec on the GitHub says that including NUL is banned as a security stance: the concern is that after parsing, someone might call strlen and accidentally truncate it to a shorter string in C.

Which I think has some premise, but it's valid string content in JSON (and in UTF-8), so it is deliberately breaking 1:1 parity with JSON in the name of a security hypothetical.
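To illustrate the hazard the spec is worried about, here's a small sketch (my own illustration in Rust, not from the spec):

    use std::ffi::CString;

    fn main() {
        // A JSON-legal string value with an embedded NUL...
        let s = "a\u{0000}b";

        // ...can't go through C-string APIs as-is: CString rejects interior NULs,
        // and a raw strlen() on the bytes would stop at the NUL and see just "a".
        assert!(CString::new(s).is_err());

        // What a strlen-style consumer would effectively end up with:
        let truncated = s.split('\0').next().unwrap();
        assert_eq!(truncated, "a");
    }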


The spec says that implementations must disable NUL by default (as in, the default configuration must disallow). https://github.com/kstenerud/bonjson/blob/main/bonjson.md#nu...

Users can of course enable NUL in the rare cases where they need it, but I want safe defaults.

Actually, I'll make that section clearer.


So I think it's a very neat format, but my feedback as a random person on the Internet is that I don't think it ultimately upholds the claimed vision of being 1:1 with JSON (because of the security parts, but also because you do end up adding extra types), and that's a bit of a shame compared to the top-line deliverable.

Just focusing narrowly on the \0 part to explain why I say so: the spec proposes that implementations either hard-ban embedded \0 or disallow it by default with an opt-in. So if someone comes with a dataset that has it, they can get support only if they configure both the serializer and the parser to allow it. But if you're willing to exert that level of special-case extra control, I think all of the other preexisting binary-JSON implementations meet the top-line definition you are setting as well. For some binary-JSON implementation that has additional types, if someone is in full end-to-end control to special-case things, they could just choose not to use those types too; the mere existence of extra types in the binary format is no more of a "problem" for 1:1 than this choice.

IMO the deliverable that a 1:1 mapping would give us is "there is no BONJSON data that won't losslessly round-trip to JSON, and vice versa". The benefit is that it holds over all future data that you haven't seen yet; the downside of using something that is not bijective is that you run fine for a long time and then suddenly have data-dependent failures in your system because you can't 1:1 map legal data.

And especially with this guarantee, what will inevitably happen is that some downstream handling will also take it as a given that it can strlen(), since it "knew" the BONJSON spec banned embedded NUL. So when that does show up as in-bounds data, you won't be able to trivially flip the switch either; instead you are stuck with legal JSON that you can't ingest into your system without an expensive audit, because the reduction from 1:1 gets entrenched as an invariant in the handling code.

Note that my vantage point might be a bit skewed here: I work on Protobuf, and these kinds of ecosystem-interoperability topics are top of mind for me in ways that they don't necessarily need to be for small projects. I also recognize that "what even is legal JSON" is itself not completely clear, so take it all with a grain of salt (and again, I do think it looks like a very nice encoding in general).


Oh yes, I do understand what you're getting at. I'm willing to go a little off-script in order to make things safer. The NUL thing can be configured away if needed, but requires a conscious decision to do so.

Friction? yeah, but that's just how it's gonna be.

For the invalid Unicode and duplicate key handling, I'll offer no quarter. The needs of the many outweigh the needs of the few.

But I'll still say it's 1:1 because marketing.


> But I'll still say it's 1:1 because marketing.

Isn't that lying? Marketing is when you help connect people who require a product or service (the market) with a provider of that product or service.


Did you read "Parsing JSON is a minefield"?

In practice, mutation fuzzers can see (white-box) where the branches are in the underlying code, so a differential fuzz test under that approach is generally able to generate test cases that cover all the branches.

So I think in some computer-science-theory sense, for arbitrary functions, it's not possible, but for the actual shape of behavior in question from this library I think it's realistic that a decent corpus of 'real' examples plus differential fuzzing would give you more confidence than anyone has in nearly any program's correctness here on real Earth.
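For what it's worth, the core of such a differential check is small. A rough sketch in Rust, with serde_json as the reference and the candidate parser passed in as a stand-in (a coverage-guided fuzzer like cargo-fuzz would drive this with mutated inputs):

    fn differential_check<F>(input: &[u8], candidate_parse: F)
    where
        F: Fn(&[u8]) -> Result<serde_json::Value, String>,
    {
        let reference: Result<serde_json::Value, _> = serde_json::from_slice(input);
        match (reference, candidate_parse(input)) {
            // Both accept: the parsed values must agree.
            (Ok(a), Ok(b)) => assert_eq!(a, b, "parsers disagree on accepted input"),
            // Both reject: fine.
            (Err(_), Err(_)) => {}
            // One accepts what the other rejects: a divergence worth investigating.
            _ => panic!("parsers disagree on whether the input is valid JSON"),
        }
    }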


Yes, there are different levels of sureness being described.

When I hear guarantee, it makes me think of correctness proofs.

Confidence is more of a practical notion for how much you trust the system for a given use case. Testing can definitely provide confidence in this scenario.


The reason for the intermediary is that the clickthrough sends the previous URL as a Referer to the next server.

The only real way to avoid leaking specific urls from the source page to the arbitrary other server is to have an intermediary redirect like this.

All the big products put in an intermediary for that reason, though many of them make it a user-visible page that says "you are leaving our product", whereas Google mostly does it as an immediate redirect.

The copy/paste behavior is mostly an unfortunate side effect and not a deliberate feature of it.
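As a rough sketch of the link-rewriting idea (the example.com/url endpoint here is made up, not any product's actual implementation):

    // Instead of linking directly to the destination, the page links to an
    // intermediary endpoint that issues a redirect. The destination then sees
    // the intermediary URL as the Referer instead of the original page URL.
    fn rewrite_outbound_link(destination: &str) -> String {
        // Percent-encode the destination and tuck it behind the intermediary.
        let encoded: String = destination
            .bytes()
            .map(|b| match b {
                b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                    (b as char).to_string()
                }
                _ => format!("%{:02X}", b),
            })
            .collect();
        format!("https://example.com/url?q={encoded}")
    }

    fn main() {
        // The page the user came from never reaches the destination's logs via Referer.
        println!("{}", rewrite_outbound_link("https://mail.example.com/inbox/msg?id=42"));
    }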


I don't understand. They are redirecting to their own S3 bucket, so who would be the recipient of the leak?

Also, isn't this what Referrer-Policy is for? https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...


Quoting web standards, you are more optimistic than I am. Unfortunately, nobody uses them consistently or accurately (look at PUT vs POST for create/update as a really good example of this: nobody agrees). It's a shame too; there's a lot of richness to the web spec. Most people don't even use HEAD to ensure they aren't making wasteful REST calls when they already have the data.


I was replying to

> All the big products put an intermediary for that reason

Surely whoever maintains the big products can add headers if they want?

And this is about people who care enough about not showing up in Referer headers to do something about it, rather than people in general not understanding the full spec.


I worked on these big web products before, and the answer then was that no, you couldn't trust it to be honored, and it would have been considered a privacy incident, so you were better off just having the redirect and taking no risk. You can't trust the user agents, for example.

Not sure if the reliability of the intentional mechanism has improved enough that this is just legacy, or if there are entirely new reasons for it in 2026.


The other problem is that if you're as big as Google, you cannot assume everyone will honor this, which is why they do these redirects.


Referrer-Policy is a response header, so in this case it would be Google sending it, and the browsers who would be honouring it. You have to hope that the browser makers get it correct... Unless I misunderstood?


Blogger predates the existence of this header by many years. Blogger, I believe, has also been in maintenance mode for many years.


It sees periodic major updates to keep it in line with standards. That's not much more than maintenance mode, but it's more than just keeping the servers running. It seems like someone at Google pays attention to it and keeps it from falling behind, but I suspect the same was true of Google Reader until it wasn't.


>someone at Google pays attention to it and keeps it from falling behind

I feel like it's the same for Google My Maps. They even discontinued the Android app, so you can only use it on the web. It totally feels like there's a single guy keeping the whole system up.


The main claim here is that it is a de facto monopoly, and that there are not "plenty of other platforms", since none of those platforms actually have any reach. The result is that most games smaller than Fortnite or a Blizzard title have literally no choice but to use Steam regardless of policy or cut.

Any time you have no choice it at least makes for a very warped market.


> since none of those platforms actually have any reach.

So really, this is about getting reach, and the claim is that a 30% cut for said reach is too high. I am arguing that this price is a market price, justified by its mere existence. If this price were too high, then these other platforms that you claim have no reach would get some reach, since the PC platform is not locked down (yet).

Unlike the model of Apple's App Store (until recently, at least?), where no alternative is possible. Even Android's supposed alternative is somewhat going to get locked down by Google, looking at the trend. Then the claim would be that those platforms hold not only a de facto monopoly but an actual one, and their cut is therefore not a real market price. That makes it possible to claim that they're unfairly pricing their platform. Steam doesn't have this issue at all.


That's not exactly how markets generally work (a "free market" is more of a theoretical concept than something that has ever existed outside of commodity markets, at least).

In a way it can be justified in the sense that developers would rather get 70% than not make a sale at all if their games were only available on less popular platforms. But effectively that's what allows Steam to charge as much as it does. They certainly have a dominant position in the market due to very little competition.

It's like retail/supermarket chains in certain countries being able to extort better conditions from their suppliers because they have very little choice. Or e.g. real estate agents being able to charge disproportionally high fees due to how the market is structured.

Whether someone considers that fair or not is of course rather subjective...

> Steam doesn't have this issue at all.

IMHO it's a matter of degree but fundamentally the same thing. The barriers to switching to a different store are just much lower than on an Apple/Google phone, but they still exist.


The fact that the fee is the same for Steam and the Apple/Play Store seems suggestive to me that the market is warped, given that the latter are clearly monopolies where no alternative is technically possible.

Steam has a "most favored nation" clause which means people can't charge less on Steam than they do on Epic Store. And Epic Game Store cut is 0% on the first million and only 12% after that, but it can't actually end up charging less to customers if Steam maintains the most favored nation clause.


Don't they charge you more if you do pay-by-plate though? I always see signs that have a price with local ez-pass, a higher price with out-of-state ez-pass, and an even higher price for pay-by-plate.


Yes, bill-by-plate OCR is typically a lot more expensive, on top of having to log on to a site, etc.


E-ZPass billing is all over the place; each state/authority does whatever it wants.

If you register a secondary car's plate to an E-ZPass account without using the transponder, a lot of states will just treat it as a read failure and charge you the regular rate, but it depends.


The less honest states (New Jersey, probably others) will charge you a punitive fee (which doubles if you don't pay on time) for not having an EZpass on that vehicle. And then when you call customer support they'll argue with you, until you call on the last day when they finally agree that everything was good and proper.


In Washington it's just 25 cents higher (if you're registered -- $2 higher if you're not registered) than without a pass. Not a huge deal.


25 cents for me. I can get a sticker for $5 that negates that (no transponder, I think, for Seattle's first 520 bridge, maybe for carpools?). Oh, supposedly the sticker is a transponder, so I can save 25 cents if I buy the $5 sticker. Even though I don't use the bridge that often, it makes sense to buy.


Usually if you don't have an account they charge you more. But at least for the systems in my area, they'll charge your account whether you have your toll transponder or not (because they OCR your plate and charge the linked account).


One of the superpowers of Lua is that it doesn't need to be very stable: because you are always embedding an interpreter, your code and interpreter have a matching version.

That's directly contrary to what would make it acceptable as a web spec, compared to e.g. wasm, which is powerful enough to be a compile target that can itself support Lua.

