It’s a glib analogy, but the goal remains the same. Today’s training sets are immense. Is there an architecture that can learn something with tiny training sets?
Maybe ZephApp, when it's actually released.
But it would be interesting to record day-to-day conversations (face-to-face, via speech recognition) to train a virtual doppelganger of myself, then use it to find uncommon commonalities between myself and others.
What would someone do with a year's worth of recorded conversations? Would the other parties be identified? How would it be useful, if at all? And what about analyzing the sounds/waveform rather than the words (e.g., BioAcousticHealth-style vocal biomarkers)?
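On analyzing the waveform rather than the words: off-the-shelf audio libraries can already pull crude vocal-biomarker-style features (pitch, energy, spectral shape) out of a recording. A minimal sketch using librosa; the file name is a placeholder, and whether these features carry any health signal is exactly the speculative part:

```python
import numpy as np
import librosa

# Load one recorded conversation (mono, 16 kHz); the path is a placeholder.
y, sr = librosa.load("conversation.wav", sr=16000)

# Fundamental frequency (pitch) via pYIN; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

# Crude summary features of the kind vocal-biomarker work builds on.
features = {
    "mean_pitch_hz": float(np.nanmean(f0)),
    "pitch_variability": float(np.nanstd(f0)),
    "mean_energy": float(np.mean(librosa.feature.rms(y=y))),
    "mfcc_means": np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13), axis=1),
}
print(features)
```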
Perhaps typing into a text field is the problem right now? A HUD in a pair of glasses might help. Better than getting a brain chip! The most recent or most repeated conversations would matter most (a simple weighting sketch follows). It could lead to a reduction in isolation within societies, in favor of "AI training parties." Hidden questions in oneself answered by a robot guru as bedtime storytelling, but grounded in the real world and real events.
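On "most recent or most repeated conversations matter most": one simple way to rank an archive of transcripts is to weight each conversation by how often its topic recurs and decay that weight with age. A minimal sketch; the half-life and the record fields are assumptions for illustration, not anything from a real app:

```python
import time

HALF_LIFE_DAYS = 30  # assumed: a conversation's relevance halves every 30 days

def score(conversation, now=None):
    """Rank a conversation by repetition count, decayed by its age."""
    now = now or time.time()
    age_days = (now - conversation["timestamp"]) / 86400
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return conversation["repeat_count"] * decay

archive = [
    {"topic": "career", "timestamp": time.time() - 2 * 86400, "repeat_count": 5},
    {"topic": "travel", "timestamp": time.time() - 90 * 86400, "repeat_count": 12},
]
for c in sorted(archive, key=score, reverse=True):
    print(c["topic"], round(score(c), 2))
```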
I'm certainly not challenging anything you're writing, because I only have a very distant understanding of deep learning, but I do find the question interesting.
Isn't there a bit of a defining line between something like tic-tac-toe, which has a finite (and, for a computer, pretty small) set of possible positions, where it seems you shouldn't need a training set larger than that set, and something more open-ended, where the size of your training set mainly affects accuracy?
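That intuition checks out: tic-tac-toe's entire state space is small enough to enumerate outright, so no "training set" is needed at all. A minimal sketch that walks the full game tree and counts every legal position:

```python
# Exhaustively enumerate every reachable tic-tac-toe position to show
# the whole domain fits trivially in memory -- no training data required.
# A board is a tuple of 9 cells: ' ', 'X', or 'O'; X always moves first.

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def reachable(b=(' ',) * 9, player='X', seen=None):
    if seen is None:
        seen = set()
    if b in seen:
        return seen
    seen.add(b)
    if winner(b) or ' ' not in b:  # terminal: someone won, or it's a draw
        return seen
    for i in range(9):
        if b[i] == ' ':
            nb = b[:i] + (player,) + b[i+1:]
            reachable(nb, 'O' if player == 'X' else 'X', seen)
    return seen

print(len(reachable()))  # 5478 legal positions, counting the empty board
```

A few thousand states is nothing; by contrast, an open-ended domain has no such enumerable boundary, which is where training-set size starts governing accuracy.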
A 5yo also has... 5 years of cumulative real-world training. I'm a bit of an AI naysayer, but I'd say the comparison doesn't seem quite accurate.
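To put a rough number on that: even a conservative back-of-envelope estimate makes five years of embodied experience an enormous "training set" (the sensory-bandwidth figure is an assumption for illustration):

```python
# Back-of-envelope: raw sensory data accrued by a 5-year-old.
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds
WAKING_FRACTION = 0.6                # assumed: ~14.4 waking hours per day
BYTES_PER_SECOND = 1_000_000         # assumed: 1 MB/s of vision + audio + touch

total_bytes = 5 * SECONDS_PER_YEAR * WAKING_FRACTION * BYTES_PER_SECOND
print(f"{total_bytes / 1e15:.1f} petabytes")  # ~0.1 PB under these assumptions
```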