I don't understand why "it's just predicting words, bro" is still seen as a valuable argument. A LOT has to happen to accurately predict the next word(s) for any given topic.
If that's supposed to be a dismissal, it's not a good one.
LLMs have finally freed me from the shackles of yak shaving. Some dumb, inconsequential tooling thing doesn't work? An agent will take care of it in a background session while I get back to building the things I do care about.
I'm finding that in several kinds of projects ranging from spare-time amusements to serious work, LLMs have become useful to me by (1) engaging me in a conversation that elicits thoughts and ideas from me more quickly than I come up with them without the conversation, and (2) pointing me at where I can get answers to technical questions so that I get the research part of my work done more quickly.
Talking with other knowledgeable humans works just as well for the first thing, but suitable other humans are not as readily available all the time as an LLM, and suitably-chosen LLMs do a pretty good job of engaging whatever part of my brain or personality it is that is stimulated through conversation to think inventively.
For the second thing, LLMs can just answer most of the questions I ask, but I don't trust their answers for reasons we all know very well, so instead I ask them to point me at technical sources as well. That often gets me information more quickly than starting from a relatively uninformed Google search would (though Google is getting better at doing the same job, too).
It's not that complicated. 4o was RLHF'd to be sycophantic as hell, which was fine until someone had a psychotic episode fueled by it, so they changed it with the next model.
They are everywhere. Knowing the old “Germany is the land of privacy” reputation, I was shocked to walk around many neighborhoods, from pretty run-down to affluent, and see Ring, Nest, and Arlo cloud-connected cameras hanging over doors but angled more toward the public road in front of the house.
I have no expertise here, but a couple of years ago I had a prototype using a locally deployed Llama 2 that cached the context from previous inference calls (a feature since deprecated: https://github.com/ollama/ollama/issues/10576) and reused it for subsequent calls. The subsequent calls were much, much faster. I suspect prompt caching works similarly, especially given that changed code is very small compared to the rest of the codebase.
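For the curious, here's roughly what that flow looked like. This is a minimal sketch, assuming the old Ollama /api/generate interface whose `context` field is the deprecated bit the issue above refers to; field names are from memory, so treat them as illustrative rather than authoritative:

```python
# Sketch of context reuse against a local Ollama server (default port),
# using the old, now-deprecated "context" round-trip.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed default endpoint

def generate(prompt, context=None):
    payload = {
        "model": "llama2",
        "prompt": prompt,
        "stream": False,
    }
    if context is not None:
        # Feed back the tokenized context from the previous call so the
        # server can skip re-processing (re-prefilling) that shared prefix.
        payload["context"] = context
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    data = resp.json()
    return data["response"], data.get("context")

# First call pays the full prompt-processing cost...
answer, ctx = generate("Summarize this codebase: ...")
# ...subsequent calls reuse the cached context and come back much faster.
followup, ctx = generate("Now explain the build system.", context=ctx)
```

The speedup comes from skipping the prefill over the unchanged prefix, which is presumably the same thing prompt caching exploits when a diff touches only a few lines of a large codebase.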
It provides a serviceable analogy for discussing model behavior. It certainly provides more value than the dead horse of "everyone is a slave to anthropomorphism".
Maybe a being/creature that looked like a person when you concentrated on it and then was easily mistaken as something else when you weren't concentrating on it.
I’m certainly no Pratchett, so I can’t speak to that. I would say there’s an enormous round coin upon which sits an enormous giant holding a magnifying glass, looking through it down at her hand. When you get closer, you see the giant is made of smaller people gazing back up at the giant through telescopes. Get even closer and you see it’s people all the way down. The question of what supports the coin, I’ll leave to others.
We as humans, believing we know ourselves, inevitably compare everything around us to us. We draw a line and say that everything left of the line isn't human and everything to the right is. We are natural categorizers, putting everything in buckets labeled left or right, no or yes, never realizing our lines are relative and arbitrary, and so are our categories. One person's “human-like” is another's “half-baked imitation” and a third's “stochastic parrot.” It's like trying to see the eighth color. The visible spectrum could just as easily be four colors or forty-two.
We anthropomorphize because we’re people, and it’s people all the way down.
The only forward-facing government that actually had a drive to change anything useful for the future broke apart in internal squabbles, with the market liberals torpedoing things left and right playing a big part in that. And now we're back to a government of standstill, like the almost two decades before.
Not sure what you're talking about. The last "forward-facing" government was about 50 years ago, and the last one driving at least somewhat meaningful reforms was almost 25 years ago. To me it seems the more Europe got integrated, the more Germany lost the plot.
This standstill mostly set in once capitalism took hold too deep and wide; look at Sweden, whose golden age lasted until all the restrictions on capitalism were quietly removed.
While capitalism is a good model, it needs to be kept balanced and restricted.
Shareholder primacy is ruining everything; there's too much influence in politics from too many external sources.