Given that GP explicitly said “if you don't have attention”, and we're in a thread about a language model whose main characteristic is that it doesn't use attention, I don't understand why you insist on talking about attention …
I mean, if we are going to get past attention (very much on board with the idea!), then it might help to know what it is really contributing to a model.
My response was trying to clarify some confusion.
I am all for alternatives to attention. I don’t think BM25 cuts it. I don’t think anything that samples tokens based on BM25 weights (the idea in this subthread) would cut it.
What confusion? I know exactly how BM25 works and how Transformers work. I stated a hypothesis and asked if anyone has tried it. You say it won’t work. That’s just your opinion. Do you have proof or evidence? This is science. Dismissal of ideas without evidence goes against scientific principles.
Just catching up to this thread again. You had said:
"I was wondering if anyone has tried setting importance of a token as a TF-IDF or BM25 lookup."
So, I take it back. This is not a confusion. You are right to call it out. :)
I like this idea directionally. A lot of energy (literally) would be saved if we could reach the same model accuracy with static weights like this.
However, I do think that this (as stated in your original message) would not work as well as a transformer or an SSM, and I already explained my reasoning as to why. I don't have empirical proof (not having run the experiment), but if you believe in it, you should try it and share your findings.
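For what it's worth, a minimal sketch of what the proposed experiment's core mechanism might look like (toy corpus and names like `static_importance` are illustrative, not from any existing system) — per-token weights from a static IDF lookup, normalized like attention weights but with no dependence on the other tokens in the sequence:

```python
import math

# Toy corpus used only to compute document frequencies (illustrative data).
corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "ran"],
    ["the", "cat", "ran"],
]

def idf(token, docs):
    """Smoothed inverse document frequency: rarer tokens score higher."""
    df = sum(token in doc for doc in docs)
    return math.log((1 + len(docs)) / (1 + df)) + 1.0

def static_importance(tokens, docs):
    """The proposed stand-in for attention: each token's weight comes from
    a static IDF lookup, normalized to sum to 1. Unlike attention, the
    weight of a token never changes with context."""
    raw = [idf(t, docs) for t in tokens]
    total = sum(raw)
    return [w / total for w in raw]

weights = static_importance(["the", "cat", "sat"], corpus)
# "the" (in every doc) gets the smallest weight; "sat" (in one doc) the largest.
```

The context-independence is exactly the crux of the disagreement above: these weights are fixed per token type, whereas attention reweights the same token differently depending on its neighbors.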