A really short version of it: you don't need an agent if you have a well-defined solution that can be implemented in advance (e.g. the 'patterns' in this article). Programmers often work on problems that have programmatic solutions, and in those cases the advice is totally correct: reach for simpler, more reliable solutions. In the future AIs will probably be smart enough to just brute-force any problem, but for now an agent is adding unneeded complexity.
I suspect a reason so many people are excited about agents is that they are used to "chat assistants" as the primary purpose of LLMs, which is also the ideal use case for agents. The solution space of a chat assistant is not defined in advance, and more complex interactions do get value from agents. For example, "find my next free Friday night and send a text to Bob asking if he's free to hang out" could theoretically be solved programmatically, but then you'd need to solve for every possible interaction with the assistant; there are nearly unlimited ways of interfacing with an assistant, so agents are a great solution.
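To make that contrast concrete, here's a minimal sketch in Python. Every function name is made up for illustration, and fake_llm_plan stands in for a real model call that picks the next tool:

    from typing import Callable, Optional

    # Well-defined problem: the control flow is known in advance, so plain
    # code is simpler and more reliable than an agent.
    def next_free_evening(weekday: str) -> str:
        return f"next {weekday}"           # stub: would query a calendar API

    def send_text(contact: str, message: str) -> None:
        print(f"to {contact}: {message}")  # stub: would call a messaging API

    def invite_bob_friday() -> None:
        date = next_free_evening("Friday")
        send_text("Bob", f"Free to hang out {date}?")

    # Open-ended assistant: the model decides which tool to call next, so the
    # same loop can handle requests you never anticipated (the agent case).
    def fake_llm_plan(request: str, history: list) -> Optional[tuple]:
        # stand-in for an LLM choosing the next step; returns None when done
        if not history:
            return ("next_free_evening", {"weekday": "Friday"})
        if len(history) == 1:
            return ("send_text", {"contact": "Bob",
                                  "message": f"Free to hang out {history[0]}?"})
        return None

    def run_agent(request: str, tools: dict) -> None:
        history: list = []
        while (step := fake_llm_plan(request, history)) is not None:
            name, args = step
            history.append(tools[name](**args))

    invite_bob_friday()
    run_agent("find my next free Friday night and text Bob",
              {"next_free_evening": next_free_evening, "send_text": send_text})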
Works great when you can verify the response quicker than it would take to just do it yourself. Personally I have a hard ass time trusting it without verifying.
> If they can make the average song listened to by a user just 1 second longer, they reduce that by about 0.5%.
This isn't how music royalties work - rather, Spotify (and most other on-demand streaming services) pays out a % of its net revenue to rights holders. That % does not change based on how many streams there are in total, but it IS distributed proportionally based on the number of streams, so it's more profitable for a rights holder to have more streams (the topic of the article).
They both seem primitive. If a user paid $10/month for a subscription, each month that $10 should be divvied up in proportion to the minutes that user spent listening to each artist that month. That's paying out to the people who are keeping that person subscribed. Minus Spotify's cut, of course.
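A toy, two-subscriber example shows how differently the two schemes treat the same listening. All numbers (including the 30% platform cut) are made up, and minutes are used as the unit for both schemes even though the real pro-rata split is by stream count:

    subscription = 10.00
    pool_share = 0.70  # assume the platform keeps 30%; the real cut differs

    # minutes listened per subscriber, per rights holder (made-up data)
    listening = {
        "heavy_user": {"Artist A": 2000, "Artist B": 0},
        "light_user": {"Artist A": 0, "Artist B": 20},
    }

    # Pro-rata: both fees go into one pool, split by overall share of listening.
    pool = pool_share * subscription * len(listening)
    totals = {}
    for user_minutes in listening.values():
        for artist, m in user_minutes.items():
            totals[artist] = totals.get(artist, 0) + m
    pro_rata = {a: pool * m / sum(totals.values()) for a, m in totals.items()}

    # User-centric: each subscriber's own fee is split by their own listening.
    user_centric = {a: 0.0 for a in totals}
    for user_minutes in listening.values():
        user_total = sum(user_minutes.values())
        for artist, m in user_minutes.items():
            user_centric[artist] += pool_share * subscription * m / user_total

    print(pro_rata)      # Artist A gets ~$13.86 of the $14 pool, B gets ~$0.14
    print(user_centric)  # each artist gets $7.00: their own listener's full share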
Right? As a user, if I listen to 2 hours of content split equally between sources A and B, it seems fair that they each get half of my subscription fee (less Spotify's cut), regardless of whether A views B's content as "less worthy". On the other hand, I wouldn't sign up for a monthly white-noise service, and if A went on strike and didn't renew license agreements, that's what Spotify would become. Record labels do have leverage over white noise, which is a commodity (right? y'all aren't beholden to certain streams, are you??)
I actually do have certain episodes/streams saved that are my go-tos.
Navigating "rain sounds" has become a lot more difficult lately specifically due to record labels complaining, particularly if you want one continuous 8hr stream. Instead all I can find now are playlists with a bunch of things I don't want. If I didn't have my favorites already saved I wouldn't be able to find them at all now.
There's a lot of confusion in this thread about "before the big bang". I was also confused, so I did some googling and found this explanation from a professor of theoretical physics. It seems that it's actually pretty normal to refer to the big bang as happening after the initial inflationary epoch, while others use the term to include earlier times as well.
>Do not allow yourself to be confused: The Hot Big Bang almost certainly did not begin at the earliest moments of the universe. Some people refer to the Hot Big Bang as “The Big Bang”. Others refer to the Big Bang as including earlier times as well. This issue of terminology is discussed at the end of this article on Inflation [https://profmattstrassler.com/articles-and-posts/relativity-...].
The article is talking about the "hot big bang", so it's using terminology that is accepted by other theoretical physicists.
Wouldn’t this be the result of some theoretical physicists moving the goalposts?
It sounds like they feel the commonly accepted understanding of the Big Bang is overbroad. Fine. Find new words to describe the subsets of the event. Redefining the word is just causing confusion.
Most of the stuff I have read on this presupposes that some kind of phase transition (think of the early universe being in a 'boiling' phase and then condensing) occurred that caused the field which drove inflation (with its force carrier called the inflaton) to decay and release all the energy stored in the field (that is, the inflatons decayed). This decay process is what we conceive of as 'the big bang', i.e. the start of the energy-dense Universe we see a glimpse of in the CMB.
You are right that the goalposts have been moved. When analysis of the CMB began, it was noticed that it was far more uniform in distribution and temperature than was previously thought possible. It was at that point that an inflationary period was tacked on before 'the big bang', because that was the only way to get the kind of 'big bang' we seem to have had.
I learned about this from Red Dead Redemption 2 (spoiler ahead), where the protagonist purchases a pre-built home from a catalog modeled after the Sears Home catalog.
What I found particularly interesting is that what Americans today consider stereotypical American farmhouses were actually these Sears houses! They had a significant influence on the country's architectural history.
While my experience is not from the 90s, I think I can speak to some of why this is. For some context, I first got into neural networks in the early 2000s during my undergrad research, and my first job (mid-2000s) was at an early pioneer that developed its V1 neural network models in the 90s (there's a good chance that models I evolved from those V1 models influenced decisions that impacted you, however small).
* First off, there was no major issue with computation. Adding more units or more layers isn't that much more expensive. Vanishing gradients and poor regularization were a challenge, and meant that increasing network size rarely improved performance empirically. This was a well-known challenge up until the mid-to-late 2000s.
* There was a major 'AI winter' going on in the 90s after neural networks failed to live up to their hype in the 80s. Researchers in computer vision and NLP - the fields that have most famously benefited from huge neural networks recently - largely abandoned neural networks in the 90s. My undergrad PI at a computer vision lab told me in no uncertain terms he had no interest in neural networks, but was happy to support my interest in them. My grad school advisors had similar takes.
* A lot of the problems that did benefit from neural networks in the 90s/early 2000s just needed a non-linear model, but did not need huge neural networks to do well. You can very roughly consider the first layer of a 2-layer neural network to be a series of classifiers, each tackling a different aspect of the problem (e.g. the first neuron of a spam model may activate if you have never received an email from the sender, the second if the sender is tagged as spam a lot, etc). These kinds of problems didn't need deep, large networks, and 10-50 neuron 2-layer networks were often more than enough to fully capture the complexity of the problem. Nowadays many practitioners would throw a GBM at problems like that and get away with O(100) shallow trees, which isn't very different from what the small neural networks were doing back then (see the sketch after this comment).
Combined, what this roughly means is that the researchers who really could have used larger neural networks abandoned them, and almost everyone else was fine with the small networks that were readily available. The recent surge in AI is being fueled by smarter approaches and more computation, but arguably much more important is the ton more data that the internet made available. That last point is the real story IMO.
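As a rough sketch of that third point (scikit-learn, arbitrary synthetic data and hyperparameters, purely illustrative): a 25-unit two-layer network and a ~100-tree GBM tend to land in roughly the same place on this kind of "just needs a non-linearity" problem.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # synthetic stand-in for a "needs a non-linear model, but not a big one" task
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # ~25 hidden units: the "10-50 neuron, 2-layer" networks described above
    mlp = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

    # O(100) shallow trees: the GBM many practitioners would reach for today
    gbm = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                     random_state=0).fit(X_tr, y_tr)

    print("small MLP accuracy:", mlp.score(X_te, y_te))
    print("GBM accuracy      :", gbm.score(X_te, y_te))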
The funny thing is that the authors of the paper he linked actually answer his question in the first paragraph, when they say that the input dataset needs to be significantly larger than the number of weights to achieve good generalisation, but there is usually not enough data available.
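Back-of-the-envelope version of that rule of thumb (layer sizes here are arbitrary, just to show the scale involved):

    # count the weights and biases in a small 2-layer (one hidden layer) network
    n_in, n_hidden, n_out = 20, 50, 1

    n_params = n_in * n_hidden + n_hidden   # hidden layer weights + biases
    n_params += n_hidden * n_out + n_out    # output layer weights + biases
    print(n_params)                         # 1101 parameters

    # "significantly larger" then means tens of thousands of labelled examples,
    # which many 90s-era applications simply didn't have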
Their business plan is to build custom models for clients/companies who want to own models built on their own data. No comment on the viability of such a business model.
Python did not win the ML language wars because of anything to do with front-end, but rather because it does both scripting and software engineering well enough. ML usually involves an exploration/research (scripting) stage and a production (software engineering) stage, and Python bridges the two more seamlessly than the ML languages that came before it (Matlab, Java, R). Notebooks became the de facto frontend of ML Python development, and to me that's evidence that the frontend side of ML is inherently messy.
Do I wish a better language like Julia had won out? Sure, but it came out 10+ years into this modern age of ML, which is an eternity in computing. By the time it really gained traction it was too late.
I agree, but can you imagine that Matlab, and then R, were the de facto ML languages before Python really took off? Putting R models into production was an absolute nightmare. Before R, I was writing bash scripts which called Perl scripts that loaded data and called C code that loaded and ran models that were custom built by Matlab and C. Python (and the resulting software ML ecosystem) was a huge breath of fresh air.
CREATE VIEW is a great V0.5 for a data warehouse, and it's what I recommend people do if possible so they can concentrate on getting the schema, naming standards, etc. right.
dbt is the V1. You get a lot of tooling, including a proper DAG, logging, and parametrization. You also get the ability to easily materialize your tables in a convenient format, which matters if (probably when) you figure out that consistency is important. Views can take you far, but most orgs will eventually need more, and dbt is designed to be exactly that.
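A minimal sketch of the difference, using sqlite3 just so it's runnable (dbt itself targets a warehouse, and the table and column names here are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE raw_orders (order_id INTEGER, customer_id INTEGER, amount REAL);
        INSERT INTO raw_orders VALUES (1, 10, 19.99), (2, 10, 5.00), (3, 11, 42.00);
    """)

    # the V0.5: a plain view, re-computed on every read, no snapshot to rely on
    conn.execute("""
        CREATE VIEW customer_revenue_v AS
        SELECT customer_id, SUM(amount) AS revenue FROM raw_orders GROUP BY customer_id
    """)

    # roughly what `dbt run` does for a model materialized as a table: wrap the
    # same SELECT in CREATE TABLE AS, so downstream reads hit a consistent snapshot
    conn.execute("""
        CREATE TABLE customer_revenue AS
        SELECT customer_id, SUM(amount) AS revenue FROM raw_orders GROUP BY customer_id
    """)

    print(conn.execute("SELECT * FROM customer_revenue").fetchall())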
As a side note, moving from views to dbt is actually quite easy. I've done it several times and it's usually taken a couple of developer days to get started and maybe a couple weeks to fully transition.