Hacker News | Disposal8433's comments

I would have given the same answer. I try every tool out there, and I have the experience to know if it's useful. I'm an old fart, but I'm smart enough to understand that I get great value from my paid software (fonts, tools, or subscriptions like JetBrains).

Managers don't know how to code, yet they give us ridiculous technical advice. We know that they only care about money and don't understand a thing about features.

Last but not least, we get twice the work for the same salary. There is no world where this makes sense.


Why reinvent the wheel poorly when there are a hundred solutions like https://jsonapi.org/?
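For reference, a minimal JSON:API response has roughly this shape (adapted from the spec's examples; the resource type, fields, and URL here are purely illustrative):

```json
{
  "data": {
    "type": "articles",
    "id": "1",
    "attributes": { "title": "Hello, world" }
  },
  "links": { "self": "http://example.com/articles/1" }
}
```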

Why do all the SaaS look the same? Can't you vibe code some CSS too?

Also, how can you prove it's anonymous? And an "honest safe space" doesn't feel possible.


I did vibe code the frontend, which is probably why it looks a little standard. But I will make it better.

It's anonymous because you do not need to sign in to post a comment. You can go to https://subrosa.vercel.app/[username] directly and post one.


Avoiding captchas and disrespecting robots.txt. How does it feel to advertise your spam service? Are you proud?

The pricing page was higher in priority

The commit history and messages do not inspire confidence. Everything seems generated by AI. Both facts show that you don't seem to know what you are doing, which is not a good sign for a security tool.

His disdain for content creators (in the very first paragraph) is not an opinion. I'm showing my disdain with a flag.

Not my reading of what he said. After all, he himself is a 'content creator'. In any case, he should not be censored.

LLMs are not a replacement for the HTTP protocol, and people who want to see my site know the address.

Why do tech bros assume that every site is selling a product? There are blogs, personal web sites, communities, and open-source projects out there.

If there's no product, and it's free, why would one care about it appearing in the output of an LLM? If it's so secret that it shouldn't, then perhaps it should be behind some auth anyway.

Because writing is, in many senses, exposing yourself, and at the very least you want the recognition for it (even if only in the form of a visit to the website, and maybe the interactions that can follow)? Maybe you want the prestige that comes with writing good content that took a lot of time to create. Maybe you want to participate in a community with your stuff. Maybe for a million other reasons.

I know that Medium, Substack, and the other "publication" platforms (like LinkedIn) are trying to commodify even the act of writing into purely a form of marketing (either for a product or for your personal brand), but not everyone has given up just yet.


Agreed, and we can argue semantics, but many folks would consider the content in that case a product.

Not everything everyone does is for a profit motive. I'm not trying to sell you anything, myself included, when you visit my site. It's just reading material.

Something being a product does not require a profit motive.

Please don't use the term open-source unless you ship the TBs of data downloaded from Anna's Archive that are required to build it yourself. And don't forget all the system prompts that censor the multiple topics they don't want you to see.

Keep fighting the "open weights" terminology fight, because a blob of neural network weights is not open-source (even if the inference code is), and the term open-source shouldn't be diluted to cover it.

Is your point really "I need to see all the data downloaded to make this model before I can know it is open"? Do you have $XXB worth of GPU time to ingest that data with a state-of-the-art framework and make a model? I don't. Even if I did, I'm not sure FB or Google are in any better position to claim this model is or isn't open beyond the fact that the weights are there.

They're giving you a free model. You can evaluate it. You can sue them. But the weights are there. If you dislike the way they license the weights because the license isn't open enough, then sure, speak up, but because you can't see all the training data? Wtf.


To many people there's an important distinction between "open source" and "open weights". I agree with the distinction: open source has a particular meaning that does not really apply here, and misuse is worth calling out in order to prevent erosion of the terminology.

Historically this would be like calling a free but closed-source application "open source" simply because the application is free.


The parent’s point is that open weight is not the same as open source.

Rough analogy:

SaaS = AI as a service

Locally executable closed-source software = open-weight model

Open-source software = open-source model (whatever allows the model to be reproduced from training data)


I agree with OP: the weights are more akin to the binary output from a compiler. You can't see how it works or how it was made; you can't freely manipulate it, improve it, extend it, etc. It's like having a binary of a program. The source code for the model is the training data. The compiler is the tooling that can train a model on a given set of training data.

For me it is not critical for an open-source model to be distributed ONLY in source-code form. It is fine that you can also download just the weights. But it should be possible to reproduce the weights: either there should be a tar.gz with all the training data, or a description/scripts of how to obtain the training data. It must be reproducible for someone willing to invest the time and compute, even if 99.999% use only the binary. This is completely analogous to what is normally understood by open source.

Do you need to see the source code used to compile this binary before you can know it is open? Do you have enough disk storage and RAM available to compile Chromium on your laptop? I don't.

I don't have the $XXbn to train a model, but I certainly would like to know what the training data consists of.

I don’t know why you got downvoted so much; these models are not open-source/open-recipe. They are censored open-weight models. Better than nothing, but far from being open.

Most people don't really care all that much about the distinction. It comes across to them as linguistic pedantry and they downvote it to show they don't want to hear/read it.

It's Apache 2.0, so by definition it's open source. Stop pushing for training data; it'll never happen, and there's literally zero reason for it to happen (both theoretical and practical). Apache 2.0 IS open source.

No, it's open weight. You wouldn't call applications with only Apache 2.0-licensed binaries "open source". The weights are not the "source code" of the model, they are the "compiled" binary, therefore they are not open source.

However, for the sake of argument let's say this release should be called open source.

Then what do you call a model that also comes with its training material and the tools to reproduce the model? Is it also called open source, with no material difference between those two releases? Or should two different terms be used for those two different kinds of releases?

If you say that actually-open-source releases are impossible now (mostly for copyright reasons, I imagine), it doesn't mean they will be perpetually so. For that glorious future, we can leave them space in the terminology by using the term open weight. It is also the term that is not misleading to anyone.


> It's apache2.0, so by definition it's open source.

That's not true by any of the open source definitions in common use.

Source code (and, optionally, derived binaries) under the Apache 2.0 license are open source.

But compiled binaries (without access to source) under the Apache 2.0 license are not open source, even though the license does give you some rights over what you can do with the binaries.

Normally the question doesn't come up, because it's so unusual, strange and contradictory to ship closed-source binaries with an open source license. Descriptions of which licenses qualify as open source licenses assume the context that of course you have the source or could get it, and it's a question of what you're allowed to do with it.

The distinction is more obvious if you ask the same question about other open source licenses such as GPL or MPL. A compiled binary (without access to source) shipped with a GPL license is not by any stretch open source. Not only is it not in the "preferred form for editing" as the license requires, it's not even permitted for someone who receives the file to give it to someone else and comply with the license. If someone who receives the file can't give it to anyone else (legally), then it's obviously not open source.


Please see the detailed response to a sibling post. tl;dr: weights are not binaries.

"Compiled binaries" are just meant to be an example. For the purpose of whether something is open source, it doesn't matter whether something is a "binary" or something completely different.

What matters (for all common definitions of open source): Are the files in "source form" (which has a definition), or are they "derived works" of the source form?

Going back to Apache 2.0. Although that doesn't define "open source", it provides legal definitions of source and non-source, which are similar to the definitions used in other open source licenses.

As you can see below, for Apache 2.0 it doesn't matter whether something is a "binary", "weights" or something else. What matters is whether it's the "preferred form for making modifications" or a "form resulting from mechanical transformation or translation". My highlights are capitalized:

- Apache License Version 2.0, January 2004

- 1. Definitions:

- "Source" form shall mean the PREFERRED FORM FOR MAKING MODIFICATIONS, including BUT NOT LIMITED TO software source code, documentation source, and configuration files.

- "Object" form shall mean any form resulting from MECHANICAL TRANSFORMATION OR TRANSLATION of a Source form, including BUT NOT LIMITED TO compiled object code, generated documentation, and conversions to other media types.


> "Source" form shall mean the PREFERRED FORM FOR MAKING MODIFICATIONS, including BUT NOT LIMITED TO software source code, documentation source, and configuration files.

Yes, weights are the PREFERRED FORM FOR MAKING MODIFICATIONS! You, the labs, and anyone sane modify the weights via post-training. This is the point. The labs don't re-train every time they want to change the model. They fine-tune. You can do that as well, with the same tools/concepts, AND YOU ARE ALLOWED TO DO THAT by the license. And redistribute. And all the other stuff.


What is the source that's open? Aren't the models themselves more akin to compiled code than to source code?

No, not compiled code. Weights are hardcoded values. The code is the combination of model architecture + config + inference engine. You run inference based on the architecture (what and when to compute), using some hardcoded values (the weights).

JVM bytecode is hardcoded values. Code is the virtual machine implementation + config + operating system it runs on. You run classes based on the virtual machine, using some hardcoded input data generated by javac.

It’s open source, but it’s a binary-only release.

It’s like getting compiled software with an Apache license. Technically open source, but you can’t modify and recompile, since you don’t have the source to recompile. You can still tinker with the binary, though.


Weights are not binary. I have no idea why this is so often spread, it's simply not true. You can't do anything with the weights themselves, you can't "run" the weights.

You run inference (via a library) on a model using its architecture (config file) and tokenizer (what and when to compute), based on the weights (hardcoded values). That's it.
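A toy sketch of that split, with the "architecture" as code and the "weights" as inert data it consumes. This is a purely illustrative two-layer network in plain Python, standing in for a real inference engine:

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def relu(v):
    return [max(0.0, x) for x in v]

# "Weights": hardcoded values, like the contents of a checkpoint file.
W1 = [[1.0, -1.0], [0.5, 0.5]]
W2 = [[1.0, 1.0]]

# "Architecture": the code that says what and when to compute.
def infer(x):
    return matvec(W2, relu(matvec(W1, x)))

print(infer([2.0, 1.0]))
```

Swapping in different weight values changes the model's behavior without touching the code, which is the distinction the comment is drawing.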

> but you can’t modify

Yes, you can. It's called fine-tuning. And, most importantly, that's exactly how the model creators themselves "modify" the weights! No sane lab "recompiles" a model every time they change something. They perform a pre-training stage (feed it everything and the kitchen sink), get the hardcoded values (weights), and then post-train using the same concepts (well, maybe their techniques are better, but still the same concepts) as you or I would. Just with more compute. That's it. You can make the exact same modifications, using basically the same concepts.

> don’t have the source to recompile

In purely practical terms, neither do the labs. Everyone who has trained a big model can tell you that the process is so finicky that they'd eat a hat if a big training session could somehow be made reproducible to the bit. Between nodes failing, data points ballooning your loss and forcing you to go back, and the myriad other problems, what you get out of a big training run is not guaranteed to be the same even with 100-1000 more attempts. It's simply the nature of training large models.
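The fine-tuning point above can be sketched with a toy model: post-training takes the existing weights and nudges them with gradient steps, rather than rebuilding anything from scratch. This is a pure-Python illustration, not a real training loop:

```python
# A one-parameter-pair "model"; the architecture is this function.
def forward(w, b, x):
    return w * x + b

# "Pretrained" weights: the released artifact.
w, b = 2.0, 0.0

# "Fine-tune" on one new data point (x=1.0, target y=5.0)
# by gradient descent on the existing weights.
lr = 0.1
for _ in range(100):
    x, y = 1.0, 5.0
    err = forward(w, b, x) - y
    w -= lr * err * x
    b -= lr * err

print(round(forward(w, b, 1.0), 2))  # converges toward 5.0
```

The same idea scales up: LoRA and full fine-tuning both start from the released weights and apply further gradient updates, which is why weights are the form labs themselves modify.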


A binary does not mean an executable. A PNG is a binary. I could take an SVG file, render it as a PNG, and release that under CC0; it wouldn't make my PNG open source. Model weights are binary files.

You can do a lot with a binary also. That's what game mods are all about.

Slapping an open license onto a binary can be a valid use of such license, but does not make your project open source.

The system prompt is an inference parameter, no?

by your definition most of the current open weight models would not qualify

Correct. I agree with them, most of the open weight models are not open source.

That’s why they are called open weight and not open source.

Bitwarden says that "Passkeys are included in .json exports from Bitwarden." I'm not sure if it's true but it should be there by now.

Actually, I may just be misinterpreting the JSON. It only includes `keyType=public-key` and `keyValue=...`; I was expecting both `keyType=public-key` and `keyType=private-key`, but perhaps keyType implies the authentication method and the keyValue is my private key?

They certainly are included, but whether they're included in a way that you can use them elsewhere, vs re-importing them into the same bitwarden account (something their vault has options to do if you encrypt the export), I'm not sure. I should spin up the vaultwarden clone and see if it correctly imports it.

    {
      "passwordHistory": null,
      "revisionDate": "2025-08-04T03:02:03.600Z",
      "creationDate": "2025-08-04T03:02:03.140Z",
      "deletedDate": null,
      "id": "<UUID>",
      "organizationId": null,
      "folderId": null,
      "type": 1,
      "reprompt": 0,
      "name": "abcdef",
      "notes": null,
      "favorite": false,
      "login": {
        "uris": [
          {
            "match": null,
            "uri": "https://<URL>"
          }
        ],
        "fido2Credentials": [
          {
            "credentialId": "<UUID>",
            "keyType": "public-key",
            "keyAlgorithm": "ECDSA",
            "keyCurve": "P-256",
            "keyValue":  "<238 chars>",
            "rpId": "<URL>",
            "userHandle": "<SOME BLOB>",
            "userName": "abcdef",
            "counter": "0",
            "rpName": "abcdef",
            "userDisplayName": "abcdef",
            "discoverable": "true",
            "creationDate": "2025-08-04T03:04:34.418Z"
          }
        ],
        "username": "abcdef",
        "password": null,
        "totp": null
      },
      "collectionIds": null
    }
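For anyone poking at such an export, a quick sketch of pulling out the passkey entries. The structure and field names are taken from the example above, so treat it as illustrative; a real export may differ:

```python
import json

# Bitwarden-style export, trimmed to the fields used below.
export_text = """
{
  "items": [
    {
      "name": "abcdef",
      "login": {
        "fido2Credentials": [
          {"rpId": "example.com", "keyType": "public-key",
           "keyAlgorithm": "ECDSA", "keyCurve": "P-256"}
        ]
      }
    }
  ]
}
"""

export = json.loads(export_text)

# Collect (item name, relying party) for every FIDO2 credential.
rows = [
    (item["name"], cred["rpId"])
    for item in export.get("items", [])
    for cred in (item.get("login") or {}).get("fido2Credentials", [])
]
print(rows)
```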

Seems you can only import into the same account, with some hand-waving at the FIDO Credential Exchange Format and Credential Exchange Protocol, which aren't yet ratified.

https://community.bitwarden.com/t/passkey-portability/59177

https://community.bitwarden.com/t/passkey-export-file/77448/...

