ELIZA wasn't "clearly a toy" when it appeared. It actually confused people.
I don't deny that good suggestions are useful in coding. I am just not sure you can get them by mimicking other people's code without really understanding it.
I think the reason code is different from natural language is that in code, the meaning of identifiers (words) is heavily dependent on the context of the specific codebase, and not nearly as universal as in natural language. Sure, natural language has words like personal names whose meaning changes depending on wider context, but they are not nearly as common in natural text.
So focusing on names only, rather than, for example, understanding the types of runtime objects/values and how they change, can actually be actively harmful when trying to understand code and make good suggestions. I would expect suggestions based on, e.g., type inference to be more useful.
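To sketch what I mean (a contrived example; the class and attribute names are made up): the same identifier can denote values of completely different types in different codebases, so a completer keyed on the name alone will happily suggest operations the value doesn't support, while type inference would rule them out.

```python
# Hypothetical illustration: "store" names a dict in one codebase
# and a list in another. A name-based model trained on both will see
# "store.append(" and "store.get(" with some frequency and may suggest
# either; the inferred type pins down which one is valid here.

class Cache:
    """Codebase A: 'store' is a dict-like mapping."""
    def __init__(self):
        self.store = {}

class Warehouse:
    """Codebase B: 'store' is a list of items."""
    def __init__(self):
        self.store = []

cache, wh = Cache(), Warehouse()

# Type inference knows these facts; a purely name-based model does not:
assert hasattr(cache.store, "get") and not hasattr(cache.store, "append")
assert hasattr(wh.store, "append") and not hasattr(wh.store, "get")
```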
There is another aspect to this: natural text has inherently low entropy, so it's somewhat easy to predict the next symbol. But if code has low entropy (i.e. there are universal patterns that can be applied to any codebase in a particular programming language), I would argue that this is inefficient from the programmer's point of view, because that redundancy could probably be abstracted away into a more condensed representation (which could then be intelligently suggested based on semantics rather than syntax).
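A concrete instance of this "abstract the predictable part away" argument, using a stock Python feature rather than anything specific to the original discussion: the classic init/eq/repr boilerplate is so predictable that the language already offers a condensed representation for it.

```python
from dataclasses import dataclass

# Highly predictable, low-entropy boilerplate: once you've seen
# "def __init__(self, x, y):", the body is almost fully determined.
class PointVerbose:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    def __repr__(self):
        return f"PointVerbose(x={self.x}, y={self.y})"

# The same information, with the predictable part abstracted away:
@dataclass
class Point:
    x: int
    y: int

assert Point(1, 2) == Point(1, 2)
assert PointVerbose(1, 2) == PointVerbose(1, 2)
```

The point being: where next-token prediction is easy, that is often a sign the pattern should have been a language or library abstraction, not something to be retyped (or autocompleted) each time.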
In other words, I think the approach assumes that the probability of the next word given the syntax is similar to the probability of the next word given the semantics, and I don't believe that's the case in programming languages.