Dafny has been around for a while and people do in fact use it. People also apply contract languages to C and all manner of other things, so the question really boils down to "Why aren't you doing what I expect of you?"
Yes... yes... sure, of course. You neglect this one little detail: theorem proving IS programming. So if an AI can be "better than a Fields Medalist" (a laughable claim, akin to basically calling it AGI) then it will be better than all software engineers everywhere.
But see, you neglect something important: it's the programmer who is establishing the rules of the game, and as Grothendieck taught us already, often just setting up the game is ALL of the work, and the proof is trivial.
What is harder, beating Lee Sedol at Go, or physically placing stones on a Go board? Which is closer to AGI?
Because AlphaGo can only do one.
AI could very well be better at formal theorem proving than Fields Medalists pretty soon. It will not have taste, the ability to see the beauty in math, or the ability to pick problems and set directions for the field. But given a well-specified problem, it can brute-force search through Lean tactic space at an extremely superhuman pace. What it lacks in intuition and brilliance, it will make up for by being able to explore in parallel.
There is a quality/quantity tradeoff in search with a verifier. A superhuman artificial theorem prover can be generating much worse ideas on average than a top mathematician, and make up for it by trying many more of them.
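The tradeoff is easy to see in a toy model (the success probabilities and try counts below are made up purely for illustration): a searcher with far worse per-attempt odds still wins once a verifier lets it keep sampling.

```python
import random

random.seed(0)

def attempt(skill):
    """One proof attempt: succeeds with probability `skill` (toy model)."""
    return random.random() < skill

def search_with_verifier(skill, tries):
    """Verifier-guided search: sample attempts until one checks out."""
    return any(attempt(skill) for _ in range(tries))

# A "mathematician" with strong intuition gets one shot per problem;
# a "prover" with much weaker intuition gets a thousand shots.
problems = 1000
human = sum(search_with_verifier(0.30, 1) for _ in range(problems))
machine = sum(search_with_verifier(0.01, 1000) for _ in range(problems))
print(human, machine)  # the weak-but-parallel searcher solves far more
```

The key assumption doing the work is the cheap, reliable verifier (Lean's kernel): without it, generating many bad ideas buys you nothing because you can't tell the one good idea apart from the rest.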
It's Kasparov vs DeepBlue and Sedol vs AlphaGo all over.
It's also nowhere near AGI. Embodiment and the real world is super messy. See Moravec's paradox.
Practical programs deal with the outside world; they are underspecified, and their utility depends on the changing whims of people. The formal specification of a math problem is self-contained.
> But given a well-specified problem, it can brute-force search through Lean tactic space at an extremely superhuman pace. What it lacks in intuition and brilliance, it will make up for by being able to explore in parallel.
Right, but because math is not fully specified and formalized yet, that could be a problem, and that's where humans with intuition and more rigorous reasoning are still necessary.
Your analogy completely misses the point. What is harder? Designing a game that has relevant implications in physical reality, or playing a game already designed given an oracle that tells you when you make an incorrect move?
I'm not sure if you're just choosing intentionally obtuse verbiage or if you're actually saying something completely incoherent.
"Overall complexity" is a meaningless term unless you define it. I suppose you mean that there is some lower bound on the amount of work relative to a model of computation, but this is a trivial statement, because that bound is always at least zero work.
We have many problems where we don't know the least upper bound, so even an interesting formulation of your idea is not necessarily true: work need not be conserved at the least upper bound, because reaching the bound may not be possible and epsilon improvement might always be possible.
Finally, algorithmic complexity is the wrong analogy for reverse mathematics anyway.
To give an example, we consider that binary search requires less work than linear search. But there are costs and use-case considerations involved. Insertion of a new record needs to use binary search to keep the data sorted. And if the number of lookups is far less than the number of records, the overall cost is more than appending plus linear search. That's what I mean by moving the complexity around.
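A rough sketch of that cost accounting (the operation counts are a simplified model, not measurements): maintaining sorted order pays a shifting cost on every insert, so with many inserts and few lookups the "slower" append-plus-linear-scan strategy wins overall.

```python
import bisect
import random

random.seed(1)

def cost_keep_sorted(inserts, lookups):
    """Maintain sorted order: each insert shifts ~n/2 elements; lookups are ~log n."""
    data, cost = [], 0
    for _ in range(inserts):
        x = random.random()
        pos = bisect.bisect_left(data, x)
        cost += len(data) - pos + 1  # elements shifted to make room
        data.insert(pos, x)
    cost += lookups * max(1, len(data).bit_length())  # ~log2(n) per binary search
    return cost

def cost_append_only(inserts, lookups):
    """Append blindly: O(1) per insert, full linear scan per lookup."""
    return inserts + lookups * inserts

# 1000 inserts, only 10 lookups: appending + linear search is cheaper overall.
print(cost_keep_sorted(1000, 10) > cost_append_only(1000, 10))  # True
```

Flip the ratio (say, 100,000 lookups against the same 1000 inserts) and the sorted structure wins, which is exactly the point: the cheaper algorithm depends on the workload, not on the search routine in isolation.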
A problem scenario doesn't have absolute characteristics. It's relative to your way of looking at it, and your definition of a problem.
You are right, but this doesn't mean that the amount of work is conserved as your original message implies. The correct statement would be that "algorithmic complexity is just one aspect of actual practical complexity and an algorithm with better algorithmic complexity can end up performing worse in reality due to practical considerations of the data and the processor doing the computations".
There is a stupid presupposition that LLMs are equivalent to human brains, which they clearly are not. Stateless token generators are OBVIOUSLY not like human brains, even if you somehow contort the definition of intelligence to include them.
Even if they are not "like" human brains in some sense, are they "like" brains enough to be counted similarly in a legal environment? Can you articulate the difference as something other than meat parochialism, which strikes me as arbitrary?
If LLMs are like human minds enough, then legally speaking we are abusing thinking and feeling human-like beings possessing will and agency in ways radically worse than slavery.
What is missing in the “if I can remember and recite a program then they must be allowed to remember and recite programs” argument is that you choose to do it (and you have basic human rights and freedoms), and they do not.
To be clear I’m not saying I believe that’s the case, merely noting that logically there are two options (either it’s like a human and then it can remember & recite but we can’t abuse it, or more likely it’s just a tool and then the freedom to “remember & recite” is simply not applicable to it, whatever it does is liability of its operator and user).
If IP law is arbitrary, we get to choose between IP law that makes LLMs propagate the GPL and law that doesn't. It's a policy switch we can toggle whenever we want. Why would anyone want the propagates-GPL option when this setting would make LLMs much less useful for basically zero economic benefit? That's the legal "policy setting" you choose when you basically want to stall AI progress, and it's not going to stall China's progress.
All definitions are arbitrary if you're unwilling to couch them in human experience, because humans are the ones defining. And my main difference is right there in my initial response: an LLM is a stateless function. At best, it is a snapshot of a human brain simulated on a computer, but at no point can it learn something new once deployed. And that is the MOST CHARITABLE interpretation, which I don't even concede; in reality, it is not even a snapshot of a brain.
These are two indisputable facts about our world, if you disagree you are wrong and anti-science:
1. Gender is a social construct
2. Whiteness is a social construct and in particular has been used as a bludgeon against minority "non-whites" in the United States for a very long time
If you do not believe these things you are the problem. You lack education. You lack critical thinking. You are brainwashed.
Both of these social constructs must be challenged. The first is used to oppress women and girls primarily, and the second underpins racist oppression.
Of course, I agree. But in the context of education if you reject the above (which MANY people on HN do) then you're delusional. There is nothing nice I have to say about it, and I know how dearly the HN crowd loves to clutch their pearls about the tone. It is akin to believing the world is flat, a broken ideology that should not be entertained.
Once you accept the simple facts as above, then you can finally explore the consequences.
This is ridiculous. A consumer doesn't want to die. Life-saving "medical products" can be too expensive even with completely honest and well-intentioned pricing. The whole point of the system is that good people want to pool resources to enable purchasing these "medical products" to save lives. Pooling resources is subsidizing others, government-run or otherwise. Removing that is an impossibility.
God forbid you actually have to interact with an extremist on the left if you think the geriatric liberals running the show in the Democratic Party are any kind of extremist.
It's a great comedy that someone comes along with a "think of my grandma!" appeal to emotion while neglecting that there is no way grandma side-loaded a virus; it's far more likely she opened Google Chrome or some email and clicked one too many links.
Only if you get conversions, and in this case it's worse because you want valuable conversions. Morons who want to ride a $300k salary for a month or two before they get the boot are not the kind of applicants you want to attract.
Isn't the job post a joke? The idea was to communicate - to companies looking for an advertising/PR firm - that the owner and existing employees work super-hard and don't take weekends off. Clearly they are exaggerating, but a) they did get a lot of attention, b) they seem to have a sense of humor, and c) they let you know they will return your call/email quickly.
Plus maybe the job post will attract one or two lunatics who are passionate and very hard-working.
Because it is mostly ridiculous? Will they require you to demonstrate "running through walls"? Show evidence on your phone of when you quadruple-texted someone?
They declare their mission: "Make viral content" - and they did.