> Dasher was one of the inspirations for KType actually. When I came across it years ago, I tried to make it work for my cousin but it requires a lot of dexterity and timing, even when configured properly.
Dasher does not necessarily require dexterity or timing. You don't have to run it in "continuous" mode, where it moves to the right constantly or relies on something pointing to the right for speed. You can put Dasher into a mode with a single binary input: go up-right one step, or go down-right one step. Effectively, you enter one bit of information at a time, and Dasher's arithmetic coding makes the number of bits you have to enter as small as possible. In a mode like that, you need neither dexterity nor timing. You can still enter any possible text, but you'll find that common English text takes far fewer bits to enter than random gibberish. Without arithmetic coding, you'd have to enter an average of log base 2 of 26, or about 4.7, bits per letter, ignoring capitalization and punctuation. With arithmetic coding, you can easily get down to a few bits per word, even with Dasher's simplifying constraint of keeping everything alphabetically sorted (so it won't offer you a vowels-versus-consonants split, even though that might cost fewer bits than an alphabetical partition, because the sorted alphabet provides a simpler mental model).
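To make the bit-count comparison concrete, here's a rough Python sketch of the ideal arithmetic-coding cost under a simple unigram letter model. The frequencies are approximate illustrative values, not Dasher's actual model (which conditions on preceding letters and words, so its real savings are much larger):

```python
import math

# Approximate English unigram letter frequencies -- illustrative values
# only, not taken from Dasher's actual (context-sensitive) model.
FREQ = {
    'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
    'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060, 'd': 0.043,
    'l': 0.040, 'c': 0.028, 'u': 0.028, 'm': 0.024, 'w': 0.024,
    'f': 0.022, 'g': 0.020, 'y': 0.020, 'p': 0.019, 'b': 0.015,
    'v': 0.010, 'k': 0.008, 'j': 0.002, 'x': 0.002, 'q': 0.001,
    'z': 0.001,
}

def bits_for(text, model):
    """Ideal arithmetic-coding cost: sum of -log2(p) over each letter."""
    return sum(-math.log2(model[c]) for c in text)

uniform = {c: 1 / 26 for c in FREQ}  # no model: log2(26) ~ 4.7 bits/letter

print(bits_for("givemesomefood", uniform))  # uniform baseline
print(bits_for("givemesomefood", FREQ))     # fewer bits with the model
print(bits_for("qzxjkvwxzjqkvq", FREQ))     # gibberish costs far more
```

Even this crude unigram model beats the uniform baseline; the few-bits-per-word figure comes from conditioning on context, which is what Dasher's real language model does.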
> And when I looked at it last, it did not support a high-level of word-completion / finishing sentences. On KType, if the user wants to type 'GIVE ME SOME FOOD', they type 'GV', select 'GIVE' on screen, then 'ME' on screen, then type 'S' and select 'SOME' and so on.
Dasher has supported high-level language models for a long time. You can feed it a giant corpus of text, and it'll extract not just letter models but word models as well. Once you steer into 'G', you'll find 'i' much more common than 'h' or 'j', so 'i' will have a larger region to aim for (but you can still steer into the smaller 'h' if you want to spell "ghost", or the even smaller 'j' if you want to talk about the GNOME JavaScript framework "gjs"). Once you write 'Gi', 've' will have a visible region all its own. Once you write 'Give', you'll already see a visible region for 'me', and so on. When you write common phrases, you very quickly start writing significant fractions of a sentence at once.
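The proportional-region idea can be sketched in a few lines of Python. The counts below are invented for illustration, not drawn from any real corpus, but they show how 'i' after 'G' gets a large target while 'j' stays small but reachable, all in alphabetical order:

```python
# Invented counts for letters following 'G' -- purely illustrative.
# Dasher derives the real probabilities from the corpus you feed it.
COUNTS_AFTER_G = {'a': 300, 'e': 250, 'h': 40, 'i': 900, 'j': 2,
                  'l': 150, 'o': 280, 'r': 220, 'u': 120}

def regions(counts, height=1.0):
    """Give each letter a vertical slice proportional to its probability,
    keeping the slices alphabetically sorted (Dasher's mental-model rule)."""
    total = sum(counts.values())
    top = 0.0
    out = {}
    for letter in sorted(counts):
        share = height * counts[letter] / total
        out[letter] = (top, top + share)  # (top edge, bottom edge)
        top += share
    return out

for letter, (lo, hi) in regions(COUNTS_AFTER_G).items():
    print(f"{letter}: {hi - lo:.3f} of the screen")
```

Steering into a region and repeating the process with the next conditional distribution is exactly the interval-narrowing that makes this equivalent to arithmetic coding.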
To use one of the other examples from your Twitter corpus, if you typed "hulk", you'd likely see a visible region for "smash". :)
So, yes, Dasher has very strong predictive models.
Dasher also avoids the "modal" approach of switching between typing individual letters and selecting completions; both use the same input mechanism. I also like the project's philosophy that as long as you can cause your body to output one bit of information, Dasher can work for you.