> In any case, their hypothesis is testable: which open source innovations from Llama1/2 informed Llama3?
I am not sure, but I agree that it is definitely testable.
If I had to guess, I would argue that the open source contributions to PyTorch had a downstream effect on performance, and that preparing and releasing the models publicly demanded a level of polish and QA that would otherwise not have been there.
Meanwhile, Meta’s competitors commoditise and profit from actually-SOTA LLM offerings.