Hacker News

Can't wait to have neural data associated with text and images for multi-modal training of neural nets. Until now we have had the basic modalities: video, image, audio, and text. Now we get a modality that is even more direct than speech; it remains to be seen how much of the embedding space it can cover.
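For concreteness, a minimal numpy sketch of how a new modality is usually folded into an existing embedding space: CLIP-style contrastive alignment, where a neural-signal encoder is trained so that paired (neural, text) samples land close together. Everything here (the function names, the symmetric InfoNCE loss, the batch shapes) is illustrative and assumed, not something the comment specifies.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_alignment_loss(neural_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    neural_emb, text_emb: (B, D) arrays where row i of each is one
    matched (neural recording, caption) pair. Matched pairs sit on the
    diagonal of the similarity matrix; the loss pulls them together and
    pushes mismatched pairs apart.
    """
    n = l2_normalize(neural_emb)
    t = l2_normalize(text_emb)
    logits = n @ t.T / temperature            # (B, B) cosine similarities
    idx = np.arange(logits.shape[0])          # diagonal = correct pairings

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)            # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the neural->text and text->neural directions, as in CLIP.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Under this setup, "how much of the embedding space it can cover" becomes measurable: after alignment, one can check how much of the text embedding manifold the neural encoder's outputs actually reach.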


