One thing I love about SVGs for websites is their stability across many contexts, browsers, etc. Everything is pinpoint and coordinate-based; nothing moves around unexpectedly. You define it once and it's done.
Disagree. It's really not complicated at all to me. Not sure why people make a big fuss over this. I don't want an AI automatically choosing which AI to use for me. Through lots of testing, I already know intuitively which one I want.
If they abstract all this away into one interface I won't know which model I'm getting. I prefer reliability.
I hope someone can create an open-source replica of this work. There's so much potential in the features you could come up with.
For instance, the rhyming demo brings to mind a feature where you give the model a starting input and an ending input and ask it to fill in the middle.
I can imagine it being useful not only in that sense, but also as a way of retroactively arriving at some answer or solution, like tracing the causal chain that leads to a specific answer.
Another idea is to show all possible variations of a word, and then rewrite the middle based on whichever word is chosen.
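That fill-in-the-middle idea is essentially what FIM-style prompting already does for code models: the model sees the text before and after a gap, then generates the middle. A minimal sketch of assembling such a prompt (the sentinel token names here are placeholders of my own, not any specific model's vocabulary):

```python
def build_fim_prompt(prefix: str, suffix: str,
                     pre_tok: str = "<fim_prefix>",
                     suf_tok: str = "<fim_suffix>",
                     mid_tok: str = "<fim_middle>") -> str:
    """Assemble a fill-in-the-middle prompt: the model is shown the
    text before and after the gap, then generates the middle after
    the final sentinel. Token names are hypothetical; each model
    defines its own."""
    return f"{pre_tok}{prefix}{suf_tok}{suffix}{mid_tok}"

# Give the model a starting line and an ending line; it fills the gap.
prompt = build_fim_prompt(
    prefix="Roses are red,\nViolets are blue,\n",
    suffix="\nAnd so are you.",
)
```

The word-variation idea would just re-run the same call with a different chosen word spliced into the prefix or suffix.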
Don't you think this 'logic' is kind of a numb way to look at reality? Not caring about the actual human doing the job, when you've got millions in your pocket? Standard of living in third-world countries tends to be far worse. Look at every global human-wellbeing index in existence and you'll see they're all pretty much the same map, rhymed over and over. People suffer on one side so that people can live well on the other.
It's not like "Oh, their quality of life is exactly the same as in the US, it's just cheaper! So it's totally fine for us to pay them peanuts, right? Peanuts buy you a house there... I mean, the house has no plumbing, but it's a house!" No. Their quality of life is absolute crap compared to first-world countries (I say this as someone from a third-world country), and paying them like shit will only prolong their suffering.
The least the company could do is pay them fairly, which would mean they could travel or live a better-quality life.
Jeff, you know what would be magical? Not just vanilla diarization with "Speaker 1" and "Speaker 2", but a model that can tell from the conversation that a speaker was referred to as "Jeff Harris" or "Jeff" and use that name instead.
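A crude sketch of how that could work as a post-processing pass over plain diarizer output, assuming we only have (label, utterance) pairs; the vocative-detection heuristic, the label format, and the sample transcript are all my own assumptions, not how any shipping product does it:

```python
import re

# Hypothetical transcript from a vanilla diarizer.
turns = [
    ("Speaker 1", "Thanks for joining us, Jeff."),
    ("Speaker 2", "Happy to be here."),
    ("Speaker 1", "So, Jeff Harris, walk us through it."),
    ("Speaker 2", "Sure, it started last year."),
]

def infer_names(turns):
    """Toy heuristic: if a turn addresses someone by name
    (e.g. '..., Jeff.'), assume the next turn by a *different*
    speaker belongs to that person. Keeps the longest form of the
    name seen. A real system would need far more context."""
    names = {}
    # A capitalized one- or two-word name set off by a comma.
    pat = re.compile(r",\s*([A-Z][a-z]+(?:\s[A-Z][a-z]+)?)[,.!?]")
    for i, (spk, text) in enumerate(turns):
        m = pat.search(text)
        if m:
            for next_spk, _ in turns[i + 1:]:
                if next_spk != spk:
                    if len(m.group(1)) > len(names.get(next_spk, "")):
                        names[next_spk] = m.group(1)
                    break
    return [(names.get(spk, spk), text) for spk, text in turns]
```

Here `infer_names(turns)` would relabel Speaker 2 as "Jeff Harris" (upgrading from the shorter "Jeff" heard first) while leaving the unaddressed Speaker 1 alone.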
Yep. I've often said RLHF'd LLMs seem to be better at recognition memory than recall memory.
GPT-4o will never offhand, unprompted and 'unprimed', suggest a rare but relevant book like Shinichi Nakazawa's "A Holistic Lemma of Science", but a base model Mixtral 8x22B or Llama 405B will. (That's how I found it.)
Most RLHF'd models seem biased towards popularity over relevance when it comes to recall. They know about rare people like Tyler Volk, but they'll never suggest them unless you prime for them really heavily.
I couldn't agree more with your point on recommendations from humans. Humans are the OG and undefeated recommendation system, in my opinion.