> I think the trained model is analogous to a program (which may be interpreted by a virtual machine; not necessarily a machine code program) because it is not intelligible to humans. I'll admit that there are tools for analysing neural networks, but these are like disassemblers.
Yeah, I agree that's a valid analogy, but again I'll say it's just an analogy. Training data still differs from source code in its privacy implications, so I don't think it should be treated the same way as source code in a free (libre) software context.
> Trained models carry all the hazards of binary blobs, and so can't just be trusted. Training data is no true analogue to source code, but the concept of reproducibility is still relevant. At the very least, the production of blobs should be auditable.
This is true, and it's a big problem, not just because of free software concerns. One of the deeply problematic effects of this is that we can't always explain why an AI does what it does, leading to horrifying possibilities[1].
[1] https://www.jwz.org/blog/2019/04/frogger-ai-explains-its-dec... (WARNING: I recommend copy/pasting this link to a separate tab rather than clicking it. JWZ has a rather negative opinion of HN and a while back his server started serving up rather unsavory images in response to requests with an HN referrer. I'm not sure if that is still happening, but I know that if you copy/paste the link there's nothing offensive on the page.)
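To make the "production of blobs should be auditable" point concrete, here's a minimal sketch (not from the thread, and the file names and hyperparameters are invented for illustration) of one way to record a training run so a third party can at least verify that a published model blob came from the stated data, seed, and settings:

    # Sketch: record hashes of the training inputs and the resulting model blob
    # so a training run can be audited and, with a pinned seed, reproduced.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def write_manifest(data_files, model_file, hyperparams, seed, out="manifest.json"):
        """Write a manifest describing exactly what went into producing the model blob."""
        manifest = {
            "training_data": {str(p): sha256_of(Path(p)) for p in data_files},
            "hyperparameters": hyperparams,   # e.g. learning rate, epochs (illustrative)
            "random_seed": seed,              # pinned so the run is repeatable
            "model_blob": sha256_of(Path(model_file)),
        }
        Path(out).write_text(json.dumps(manifest, indent=2))

    # Hypothetical usage:
    # write_manifest(["train.csv"], "model.bin", {"lr": 0.01, "epochs": 10}, seed=42)

This doesn't make the model itself any more intelligible, but it does give you something closer to a reproducible build for the blob.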