They may have an inkling that the big LLM companies will want to pay for future/past data... I imagine either Google or OpenAI has something predictive and shopping-related in the works.
Can anyone give a little more color on the nature of Erdős problems? Are these problems that many mathematicians have spent years tackling with no result? Or do some of the problems evade scrutiny and go un-attempted most of the time?
EDIT:
After reading a link someone else posted to Terence Tao's wiki page, he has a paragraph that somewhat answers this question:
> Erdős problems vary widely in difficulty (by several orders of magnitude), with a core of very interesting, but extremely difficult problems at one end of the spectrum, and a "long tail" of under-explored problems at the other, many of which are "low hanging fruit" that are very suitable for being attacked by current AI tools. Unfortunately, it is hard to tell in advance which category a given problem falls into, short of an expert literature review. (However, if an Erdős problem is only stated once in the literature, and there is scant record of any followup work on the problem, this suggests that the problem may be of the second category.)
Erdős was an incredibly prolific mathematician, and one of his quirks was that he liked to collect open problems and pose new ones as challenges to the field. He attached bounties to many of the problems, ranging from $5 to $10,000.
The problems are a pretty good metric for AI, because the easiest ones at least meet the bar of "a top mathematician didn't know how to solve this off the top of his head" and the hardest ones are major open problems. As AI progresses, we will see it slowly climb the difficulty ladder.
Don't feel bad for being out of the loop.
The author and Tao did not care enough about the Erdős problem to realize the proof had been published by Erdős himself.
So you never cared enough, and neither did they.
But they care about shouting "LLM breakthrough" on the Fediverse and Twitter.
This is bad faith. Erdős was an incredibly prolific mathematician; it is unreasonable to expect anyone to have memorized his entire output. Yet Tao knows enough about Erdős to know which mathematical techniques he regularly used in his proofs.
From the forum thread about Erdos problem 281:
> I think neither the Birkhoff ergodic theorem nor the Hardy-Littlewood maximal inequality, some version of either was the key ingredient to unlock the problem, were in the regular toolkit of Erdos and Graham (I'm sure they were aware of these tools, but would not instinctively reach for them for this sort of problem). On the other hand, the aggregate machinery of covering congruences looks relevant (even though ultimately it turns out not to be), and was very much in the toolbox of these mathematicians, so they could have been misled into thinking this problem was more difficult than it actually was due to a mismatch of tools.
> I would assess this problem as safely within reach of a competent combinatorial ergodic theorist, though with some thought required to figure out exactly how to transfer the problem to an ergodic theory setting. But it seems the people who looked at this problem were primarily expert in probabilistic combinatorics and covering congruences, which turn out to not quite be the right qualifications to attack this problem.
That sounds like a great question. Why did no one bother to mention that the problem had already been proved and published, by the very author who posed it, 90 years ago?
Somehow an LLM-generated proof consisting of gigabytes upon gigabytes of unreadable mess is groundbreaking and pushes mathematics forward, while a five-page proof by Erdős himself gets buried and lost to time.
Maybe that particular framing fuels the narrative that formally verified compute is the new moat and LLMs are amazing at it?
The proofs written by ChatGPT are necessarily reasoned about in plain language and are a human-comprehensible length (which is what let Tao review them, since they haven't been formalised in a proof-checking language). The many-gigabyte (or -terabyte) proofs of today (à la the four-colour theorem) generally come from SAT solvers, which are required to prove the nonexistence of smaller solutions by exhaustion.
And there is an ongoing literature review (which has been fruitful for both erdosproblems and the OEIS); this one was relabelled upon the discovery of an earlier resolution.
He's the most prolific and famous modern mathematician. I'm pretty sure that even if he'd never touched AI, he would be invited to more conferences than he could ever attend.
I know someone who organized a conference where he spoke (this was before the AI boom, probably around 2018 or so) and he got very good accommodations and also a very generous speaking fee.
Spain is also big in the utility-scale solar and storage industry, with the company Power Electronics providing inverters and other components to many of the world's largest plants.
You can learn a lot from watching your doctor, plumber or mechanic work, and you could learn even more if you could ask them questions for hours without making them mad.
You learn less from watching a faux-doctor, faux-plumber, faux-mechanic and learn even less by engaging in their hallucinations without a level horizon for reference.
Bob the Builder doesn't convey much about drainage needs for foundations and few children think to ask. Who knows how AI-Bob might respond.
The YouTuber Will Prowse has an excellent site where he tracks his currently recommended batteries (and other equipment like inverters). The prices are always changing and there are new products all the time, so check his list whenever you are looking to buy:
Like the other commenter said, batteries are a lot cheaper if you are willing to shop around. His top recommended budget battery today has 4x your Anker Solix's capacity at around 1/4 the price. You can find many 5kWh server rack batteries for under $1000 now.
I agree with you that there is one original definition, but I feel like we've lost it, and the currently accepted definition of vibe coding is any code that is majority- or exclusively produced by an LLM.
I think I've seen people use "vibe engineering" to differentiate whether the human has viewed/comprehended/approved the code, but I am not sure if that's taken off.
I have a Plex server and use Prologue for audiobooks. What would my experience in a Rivian be like? I am guessing I would have to connect to the infotainment system as a Bluetooth speaker? Would I be able to easily skip forward/backward and see the current chapter?
I've been using CarPlay for the better part of the past decade and don't know what it looks like in vehicles without it.
I loved seeing that they plan to run the Rivian Assistant LLM onboard using their new Gen 3 hardware. It's great that they see that as a valuable feature, and I hope the industry moves that way.
I watched the livestream, and they said their hardware is "camera safe". I am not sure whether camera safe and eye safe are correlated, but I would hope/expect that they would not release something that isn't known to be eye safe. I guess it's possible that the long-term effects could prove bad, and we will all end up with "lidar eye" dead spots in our vision.
Digital camera sensors are much more sensitive than eyeballs, so it's quite plausible that it won't leave a permanent line across your retina the way it can across a camera sensor.