Teachable Machine (2017) (withgoogle.com)
248 points by ekiauhce on Jan 7, 2024 | hide | past | favorite | 28 comments


Wow I actually have a perfect use case for this in a hobby project. Great timing.

I considered the older version but it's very limited:

> The original Teachable Machine only let you train 3 classes, whereas now you can add as many classes as you like.

I'm curious to see how far this scales, for example can I have a few hundred thousand classes? If so, what are the consequences, if any?


v1 used a very limited (albeit very easy and already quite impressive) form of transfer learning: take a pretrained network's 1000-dim output vectors (the original network was trained on ImageNet) for a bunch of images belonging to three classes, and then just use k-NN to predict which class a new image falls into.
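A minimal sketch of that v1 approach, using random vectors as hypothetical stand-ins for the pretrained network's 1000-dim embeddings (no real network involved here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in: in v1 these would be the pretrained classifier's
# 1000-dim outputs for each user-supplied training image, per class.
class_embeddings = {
    "cat": rng.normal(0.0, 1.0, (5, 1000)),
    "dog": rng.normal(3.0, 1.0, (5, 1000)),
    "bird": rng.normal(-3.0, 1.0, (5, 1000)),
}

def knn_predict(query, examples, k=3):
    """Label a query embedding by majority vote of its k nearest neighbours."""
    dists = []
    for label, vecs in examples.items():
        for v in vecs:
            dists.append((np.linalg.norm(query - v), label))
    dists.sort(key=lambda t: t[0])
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# A new image's embedding, drawn near the "dog" cluster
query = rng.normal(3.0, 1.0, 1000)
print(knn_predict(query, class_embeddings))  # dog
```

No weights are ever updated; "training" is just collecting labeled embeddings, which is why v1 could run instantly in the browser.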

v2 does actually finetune the weights of a pretrained network. At the time, it was a nice showcase of how fast JS ML libraries were evolving.


I found old videos from 6 years ago about Teachable Machine https://www.youtube.com/watch?v=3BhkeY974Rg&ab_channel=Googl...

and in 2019, Google released v2 https://blog.google/technology/ai/teachable-machine/

The tasks are limited, but it's good as a starting point. I get the impression the platform isn't being developed very quickly (?)


This is rad. Perfect snow day activity for the kids.


I've done my share of research on MediaPipe[1], but had never heard of Teachable Machine. I'm curious if these efforts are related, as these products look like they were almost intended to be used together.

I am definitely excited to see that Google is investing into more "ML at the edge" use cases, especially in the browser. If you've never heard of MediaPipe before, but this caught your eye, definitely check it out. It has seen large uptake in the VTubing community especially as it has a very performant implementation of body + face + hand pose tracking driven by BlazePose.

1: https://developers.google.com/mediapipe


FYI, this is not a new project. Here’s an HN discussion from 6 years ago: https://news.ycombinator.com/item?id=15399132


The new link mentions "the first version from 2017", so I'm assuming this release is what Google considers version 2.


Yes I agree - the (2017) tag in the title doesn't seem right here given the update.


Discussed at the time:

Teachable Machine: Teach a machine using your camera, live in the browser - https://news.ycombinator.com/item?id=15399132 - Oct 2017 (90 comments)


This was a fun redesign attempt from years ago

https://fairpixels.pro/work1/index.html


Isn’t this basically what a multimodal LLM does as well… it can handle anything on the fly that it can understand.

What’s different here?


These are smaller-scale models that you can export and run anywhere.

Even the smallest multimodal LLM would be wayyyyyyy bigger than an exported model from this


How small are these models? Can I export a model here and embed it in an Android/iOS app?


The website says they can be embedded in a web app, and export to a format called TensorFlow Lite. I am sure you could embed it.

https://www.tensorflow.org/lite


https://www.tensorflow.org/lite/android

TF Lite has first-class Android support with hardware acceleration if I'm not mistaken.


You can supposedly embed them for Arduinos, so an app should be no problem.


The teaching part is what matters, it’s training (tuning in this case) a model, not just using a model already trained for inference (which is what I assume you mean). You’re providing new data that is used to update the model. Inference across an existing multimodal model doesn’t change how it classifies in any way.


I think this is more like fine tuning an existing model to recognize features you specifically intend it to, and be light enough to run locally in a browser.


It's not even fine tuning, it's creating a model from scratch. This isn't like our modern huge models either; these tiny single-purpose models have been around for ages and are quite versatile. They're so small you can not just easily run them in the browser, but also train them effectively, which is what this project lets you play around with! Super cool stuff.


Thanks!


do we have any other self hostable, open source alternatives to this?


It is self-hostable.

It runs locally in your browser, without sending your training data to any servers.

Unless you choose to save it to Google Drive.

If you choose to host the model with Google, they get a copy of your weights, but they still don't see your training data.

Or you can host it yourself with tensorflow.js

And you can also download everything in a zip file, training data and weights, and Google never sees any of it.

If you want the source, it's here -> https://github.com/googlecreativelab/teachablemachine-commun...


Note that the source code seems not to be the web UI itself, but rather a collection of samples/helpers to use exported models.


It looks like the first version of Teachable Machine really is fully open source, but maybe not the new one?

https://github.com/googlecreativelab/teachable-machine-v1


[flagged]


Teachable Machine has existed for years: https://www.theverge.com/tldr/2017/10/9/16447006/google-teac...

Its last real update (AFAIK) was in 2019: https://www.theverge.com/2019/11/7/20953095/google-ai-traine...

Like a number of Google projects, this one lives on without any clear direction. It probably will get axed some day, but the technology in Teachable Machine today is so “old school” already that I don’t think it would be that hard for someone to recreate or improve upon.


oh wow, I thought this was new. Given that this has received no attention in 3 years, I'd assume this is largely abandoned internally inside the company.


Just act like it doesn’t exist to begin with, so you’re not disappointed.


Who owns the model? Does google get to reuse them?



