
> A 5yo doesn’t need much training to learn a new game

A 5yo also has... 5 years of cumulative real-world training. I'm a bit of an AI naysayer, but I'd say the comparison doesn't seem quite accurate.



It’s a glib analogy, but the goal remains the same. Today’s training sets are immense. Is there an architecture that can learn something with tiny training sets?


Maybe ZephApp, when it's actually released. But it would be interesting to record day-to-day conversations (face-to-face, using voice recognition) to train a virtual doppelganger of myself and use it to find uncommon commonalities between myself and others.

What would someone do with a year's worth of recorded conversations? Would the other parties be identified? How would it be useful, if at all? How about analyzing the sounds/waveform rather than words? (eg BioAcousticHealth / vocal biomarkers)

Perhaps typing into a text field is the problem right now? Maybe have a HUD in a pair of glasses; better than getting a brain chip! The most recent or most repeated conversations would matter most. It could lead to a reduction in isolation within societies, in favor of "AI training parties." Hidden questions in oneself answered by a robot guru as bedtime storytelling, but tied to the real world and real events.

Smart Glasses --> Smart Asses

Vibe Coding --> Tribe Loading

Everything Probable --> Mission Impossible


I'm certainly not challenging anything you're writing, because I only have a very distant understanding of deep learning, but I do find the question interesting.

Isn't there a bit of a defining line between something like tic-tac-toe, which has a finite (and, for a computer, pretty small) set of possible combinations, where it seems like you shouldn't need a training set larger than that set, and something more open-ended, where the size of your training set mainly affects accuracy?


It's just 3^9, right? 9 boxes, each either X, O, or blank, so we're only at 19,683 game states. And that's before trimming for reflections, rotations, and 'unreachable' game states where a player has already won but boxes keep getting marked.
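That arithmetic is easy to brute-force. A quick sketch (it filters out the 'unreachable' states mentioned above, though it doesn't deduplicate reflections and rotations):

```python
from itertools import product

# The eight winning lines on a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def wins(board, p):
    return any(all(board[i] == p for i in line) for line in LINES)

def reachable(board):
    x, o = board.count('X'), board.count('O')
    if x not in (o, o + 1):
        return False  # X moves first, so counts differ by at most one
    xw, ow = wins(board, 'X'), wins(board, 'O')
    if xw and ow:
        return False  # both players can't have three in a row
    if xw and x != o + 1:
        return False  # X's win must come on X's own move
    if ow and x != o:
        return False  # likewise for O
    return True

boards = list(product('XO.', repeat=9))
total = len(boards)                      # 3^9 = 19,683 raw assignments
legal = sum(reachable(b) for b in boards)
print(total, legal)
```

Filtering the invalid piece counts and post-win play leaves 5,478 reachable positions, an even smaller data set than the raw 19,683.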


Exactly, but then we may as well say "don't solve this with an LLM" which sort of kills the conversation altogether and that's not my goal. :)


Oh, I'm sorry! I was just trying to give a quick sense of how small that tic-tac-toe data set actually is, not to argue against the idea!


Oh no worries at all. :)


And hundreds of millions of years of evolutionary intelligence.


Next step in AI: teaching an LLM to think like a trilobite!


A trilobite was obviously better at being a trilobite than an LLM would be, if only by definition.


Was the six million dollar man not a better man?



