Hacker News | silencedogood3's comments

Has anyone tried training a language model on animal vocalizations?


Not a language model, but there was this classifier mapping pig squeals to contextual information about the animal [1].

For human language models there are obvious practical uses and we can intuitively evaluate the quality of the models. For animal communication you need to use other biological variables, like valence in the pig communication example.

I do remember one study that generated artificial frog mating calls that were exceptionally attractive to female frogs, but that was from a while back, and I can’t find the link.

[1] https://www.nature.com/articles/s41598-022-07174-8


There was some research on prairie dog language as well:

https://medium.com/health-and-biological-research-news/prair...

https://www.npr.org/2011/01/20/132650631/new-language-discov...

Original paper: https://www.sciencedirect.com/science/article/abs/pii/S0003347205801174?via%3Dihub


There was a post here on HN a few months back about a group of scientists that did exactly that to whale recordings, and actually found some surprising patterns emerge.

edit: I think this is the one I was thinking of: https://news.ycombinator.com/item?id=26874309


So why would I choose this over OpenSCAD? What’s better/worse?


These are just different.

My biggest gripe is that SolveSpace won't let you write code; everything is done in the UI, which is very limiting and a big turn-off, especially given what the underlying geometry engine is capable of.

On the other hand, OpenSCAD doesn't have a constraint solver (you can't easily say something like "compute the intersection of these two circles and project the resulting point on this surface and extrude a cylinder from that point along the normal of that surface"), whereas SolveSpace does, and it's a very powerful way to model things.
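To make the "intersection of two circles" step concrete, here is a toy Python sketch of the kind of geometric construction a constraint solver performs for you (this is just the standard two-circle intersection formula, not SolveSpace's actual solver):

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Return the 0, 1, or 2 intersection points of two circles in the plane."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    # Separate, one contained in the other, or concentric: no unique points.
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from c0 to the chord
    h = math.sqrt(max(r0**2 - a**2, 0.0))  # half the chord length
    mx = x0 + a * (x1 - x0) / d            # midpoint of the chord
    my = y0 + a * (y1 - y0) / d
    dx = h * (y1 - y0) / d                 # offset along the chord
    dy = h * (x1 - x0) / d
    pts = [(mx + dx, my - dy), (mx - dx, my + dy)]
    return pts[:1] if h == 0 else pts

# Unit circles centered at (0,0) and (1,0) meet at (0.5, ±sqrt(3)/2).
print(circle_intersections((0, 0), 1, (1, 0), 1))
```

In SolveSpace you never write this math; you state the constraints and the solver finds the points, which is the point of the comparison.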

OpenSCAD is also very annoying in the sense that it doesn't allow you to "probe" your model to construct it further: it has exactly zero self-introspection features. To give a simple example, it's almost impossible to compute the bounding box of a model or its barycenter, which is very limiting.
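For illustration, here is a minimal Python sketch of the kind of "probing" being asked for, assuming the model is available as a plain list of (x, y, z) vertices (a hypothetical representation, not anything OpenSCAD exposes):

```python
def bounding_box(vertices):
    """Return ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def barycenter(vertices):
    """Plain vertex average (not a mass-weighted centroid)."""
    n = len(vertices)
    xs, ys, zs = zip(*vertices)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# The eight corners of a 2x2x2 cube.
cube = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2),
        (2, 2, 0), (2, 0, 2), (0, 2, 2), (2, 2, 2)]
print(bounding_box(cube))  # ((0, 0, 0), (2, 2, 2))
print(barycenter(cube))    # (1.0, 1.0, 1.0)
```

Trivial as code, but since OpenSCAD never hands you the evaluated geometry, you can't do even this much inside a script to drive further construction.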


SolveSpace 3.1 (due soon; RC1 is out) will allow linking STL files, so you can do your OpenSCAD stuff that way, then link the STLs into SolveSpace. Not sure how useful that's going to be, but it's something for those who like both tools.


Actually, I chose it over OpenSCAD for one reason: it doesn’t crash that often.

Otherwise it’s a bit less flexible: it’s harder to do value constraints (e.g. getting dimensions from a spreadsheet); there’s no scripting yet; you can’t just point-click-chamfer your parts; you need to do metric holes and indents for bolt heads manually; etc.

But what is available works almost without any problems (except for 3D constraints and occasional problems with difference operations on curved surfaces), and it’s so fun to use.


Can you explain what the big deal is? I’m still in the early learning stages.


As an example, if you want to encode all of the data in Wikipedia with embeddings and train a model to answer questions with that information, historically that would mean a model that encodes all of Wikipedia, encodes the question, uses all of encoded Wikipedia to decode an answer, then does backprop through all of that and updates the weights. Then it re-encodes all of Wikipedia with the new weights and repeats, again and again at each training step, while somehow holding all of that in GPU memory. Meaning you basically couldn’t do it that way.

Today, we’re seeing big models that can encode all of Wikipedia in useful ways. If the encodings are “good enough”, you can encode all of Wikipedia once, then train another model that only has to encode a question and use the pre-encoded Wikipedia to decode an answer, so backprop runs through just the question and answer. If Wikipedia changes in the meantime, you can probably just update your database of encoded passages and your learned QA model will be able to incorporate that new information.
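The split above can be sketched in a few lines of NumPy. Everything here is hypothetical toy data: the corpus embeddings stand in for the frozen, encode-once Wikipedia, and the single matrix `W` stands in for the small trainable question encoder that backprop would actually update.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# "Encode once": frozen corpus embeddings, normalized for cosine scoring.
# In the real setting these come from a big pretrained encoder.
corpus_emb = rng.normal(size=(10_000, dim)).astype(np.float32)
corpus_emb /= np.linalg.norm(corpus_emb, axis=1, keepdims=True)

# The only trainable part: a toy linear question encoder.
W = rng.normal(size=(dim, dim)).astype(np.float32)

def encode_question(q_features):
    v = q_features @ W
    return v / np.linalg.norm(v)

def retrieve(q_features, k=5):
    """Score the entire frozen corpus, return ids of the top-k passages."""
    scores = corpus_emb @ encode_question(q_features)
    return np.argsort(scores)[-k:][::-1]

ids = retrieve(rng.normal(size=dim).astype(np.float32))
print(ids.shape)  # (5,)
```

During training, gradients flow only through `W` (and whatever reads the retrieved passages), never through the 10,000 frozen corpus vectors, which is exactly why the scheme becomes tractable.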


Replace Wikipedia with the internet, and you can replace Google Search with some (hopefully) soon-to-be-discovered algorithm based on these principles. Exciting times.


Neat! Can you explain what the kNN is doing? I can’t quite follow the paper.


It's a sparse attention scheme. They store and reuse activations, thus "memorising" the past without the need for training. To keep the sequence short enough to fit into memory, they only recall the k most similar memories from a much larger log.
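A toy NumPy sketch of that recall step (my own illustration of the general idea, not the paper's implementation): keep a large log of past key/value activations, and for each new query attend over only the k most similar keys instead of the whole log.

```python
import numpy as np

rng = np.random.default_rng(1)
d, log_size = 32, 50_000

# A large log of stored (key, value) activations from earlier context.
memory_keys = rng.normal(size=(log_size, d)).astype(np.float32)
memory_vals = rng.normal(size=(log_size, d)).astype(np.float32)

def knn_attention(query, k=32):
    sims = memory_keys @ query               # similarity to every stored key
    top = np.argpartition(sims, -k)[-k:]     # ids of the k nearest memories
    w = np.exp(sims[top] - sims[top].max())  # softmax over just those k
    w /= w.sum()
    return w @ memory_vals[top]              # weighted sum of recalled values

out = knn_attention(rng.normal(size=d).astype(np.float32))
print(out.shape)  # (32,)
```

The payoff is that attention cost scales with k (here 32) rather than with the 50,000-entry log, and nothing about the log is learned, so it can grow without retraining.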


I agree, but the only issue is we can’t build electric cars fast enough to address the high gas costs.

These prices seem to be too much of a shock to the system. It’s better to have a smooth transition over ten years or so.


Good points but I’d argue we’re nowhere near a strong, independent press. And I doubt a few bucks is going to make the difference.


Honestly, if we had both a six-hour work day and a four-day work week, that would be ideal for a good work-life balance.

And that generally seems like the amount of time we can be productive.


Does anyone know if this would be difficult to connect to a second monitor?


If I wait six more months to buy will they have a really solid linux story?


Great comment! I find it even more moving in Watterson-style comic form: https://www.gocomics.com/zen-pencils/2014/06/23


Haha- ok. I love how he is nerding out on the dinosaur and next we see his girlfriend. This is not what happens in real life.


Worked for me. In general, though, I found that immigrants were on the average much more receptive to appreciating someone who works hard and values family life.


Au contraire, mon ami! This _is_ my life. I am the epitome of "stay true to yourself." Mostly staying true to myself has been flipping the table, saying "F this for a game of soldiers" and flipping the middle finger as I walk out the door. The journey has not always been comfortable. But this adventure that I hope never ends has been a lot of fun.


Let me congratulate you for reaching "the epitome of staying true to yourself" and for going to a place where no man has gone before.


Thank you for this!

