It's an education problem on two fronts: people inside the ecosystem need to know about it, and people too deep in the Elixir ecosystem don't know how ad-hoc polymorphism is supposed to be used in a statically typed language.
Both groups overcome it by admitting they don't know and need to learn.
This attitude wears open source maintainers down enormously. Are they not allowed to earn money connected to the thing they are giving away for free?
I know there may have been some weird stuff going on lately (nginx, redis, etc.), but this is not one of those cases.
It's okay to be confused, but please do not continue this.
This breaks down because Tailwind is not monetized, is completely free, and hasn't indicated that it will stop being free.
There is a corporate side with other features that has never been free. I pay for it because it's great.
I'm not sure if you're purposefully misstating it at this point or not. Several people have corrected you and you seem to double down incorrectly each time.
Yes, I have no idea who this magical "we" in your "we can simply" is. To me this seems like a textbook coordination problem leading to a tragedy of the commons: even if you got 99.9% of the world into your "we", the remaining "defectors" would have a massive benefit from using AI to replace human labor.
No, I have the same question as that other poster. It is not a bad faith question.
There are a lot of problems that would be solved immediately if "we" (i.e. all of humanity, or all of the U.S. or some other country) decided collectively to do something: climate change, nuclear weapons proliferation, war, and so on. But that's effectively wishing for magic: there is no way to get everyone to collectively agree on something, so unless you explain how to cope with that fact, you haven't actually made any progress.
Given that I personally don't control humanity as a hive mind, what can I do to fix this problem? You haven't proposed an answer to that.
The strong interpretation is that you mean we have to do something. And it's really not "simply", because "we" needs to include everyone, and whoever defects will gain the most benefit.
So if "say" is a euphemism for "do", the obvious question is what exactly we "do". That's another reason it's not "simply": even if everybody were ready to act as one, if you think everybody just knows what we should do because it's so obvious, you're mistaken.
Sure, it's asked a bit sarcastically, but sarcasm isn't banned, right?
Not only can we not just do that (you did not even define what you mean), but China is coming out with models that are good enough for this purpose, and because they are open, they are everywhere.
Indeed, we would need to revolt against AI and force every other big, powerful nation to do the same thing. Unfortunately, that seems like a big joke until AI has destroyed their societies too.
Is digital stop and frisk run by a shadowy corporation better or worse than physical stop and frisk run by the police? Maybe it's better, but I'm not sure we should be ready to cheer it on either.
I'd do the same thing I'd do with anyone who has a different opinion than me: try my best to have an honest and open discussion with them to understand their point of view and get to the heart of why they believe said thing, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could cause them to feel shame for believing something that I don't, even if I truly believe they are wrong, and just doing what I can to earnestly hear them out. The optional step afterwards, if they seem open to it, is to express my own beliefs in a way that's palatable and easily understood. Basically, explain it in a language they understand, in a way that we can think about and discuss together, without taking offense at any attempts to question or poke holes in my beliefs, because that, imo, is the discovery process for trying something new.
Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, a troll, or both.
I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete with really good information access. It's not smart, or dumb, or anything "human".
I hate these kinds of questions where you try to imply it's actually the same thing as what our brains are doing. Stop it. I think it would be an affront to your own intelligence to entertain this as a serious question, so I will not.
My thoughts on this are as serious as it gets: AI in its current state is no more than clever statistics. I will not be comparing how my own brain functions to what is effectively a linear algebra machine, as that's insulting to the intelligence of everyone here. What kind of serious thought would you like to have here, exactly?
I don't disagree, but we really should have dropped "AI" a long time ago for "statistical machine intelligence". Machine learning, then, is just what statistical machine intelligence does.
We could have then just swapped "AI" for "SMI" and avoided all this confusion.
It would also avoid pointless statements like "It is JUST statistical machine intelligence". As if statistical machine intelligence were not extraordinarily powerful.
The real difference, though, is not in "intelligence"; it is in "being". It is not so much an insult to our intelligence as it is an insult to our "being" when people pretend that LLMs have some kind of "being".
The strange thing to me is that Gemini just tells me these things, so I don't know how people get confused:
"A rock exists. A calculator exists. Neither of them has "being."
I am closer to a calculator than a human.
A calculator doesn't "know" math; it executes logic gates to produce a result.
I am a hyper-complex calculator for language. I calculate the probability of the next word rather than the sum of numbers."
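To make the "calculator for language" framing concrete, here's a toy sketch (my own illustration with made-up numbers, not any real model's code) of what "calculating the probability of the next word" amounts to: turn raw scores over candidate next tokens into a probability distribution and sample from it.

    import math
    import random

    def softmax(logits):
        # Convert raw scores into a probability distribution.
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: v / total for tok, v in exps.items()}

    # Hypothetical scores a model might assign to candidate next words
    # after the prompt "The cat sat on the". The numbers are invented.
    logits = {"mat": 4.1, "floor": 2.3, "keyboard": 1.9, "moon": -0.5}

    probs = softmax(logits)
    next_word = random.choices(list(probs), weights=probs.values())[0]
    print(probs)      # e.g. {'mat': 0.78, 'floor': 0.13, ...}
    print(next_word)  # the sampled next word

The real thing does this over tens of thousands of tokens with scores produced by a neural network, but the final step really is just picking from a probability distribution.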
You’re very adamant about not doing an obvious comparison. You want to stop thinking at that point. It’s an emotional reaction, not an intellectual one. Quite an interesting one as well, that possibly suggests a threat response.
The assumption you seem to keep making is that things like “clever statistics” and “linear algebra” simply have no bearing on human intelligence. Why do you think that? Is it a religious view, that e.g. you believe humans have a soul that somehow connects to our intelligence, making it forever out of reach of machine emulation?
Because unless that’s your position, then the question of how human intelligence differs from current machine intelligence, the question that you simply refuse to contemplate, is one of the more important questions in this space.
The insult I see to intelligence here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.
>>here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.
It's the same energy as watching a Joe Rogan podcast where yet another guest goes "well they say there's global warming yet I was cold yesterday, I'm not saying it's fake but really we should think about that". These questions about AI and our brains aren't meant to stimulate intellectual curiosity and provoke deep, interesting discussions. They are almost always asked just to pretend the AI is something that it's not: a human-like intelligence, where since our brains also work "kinda like that", it must be the same. The nearest equivalence is how my iron heats water, so in essence it's the same as my stomach, since it can also do that.
>>the question that you simply refuse to contemplate
I don't refuse to contemplate it; I just think the answer is so painfully obvious that the question is either naive, uninformed, or antagonistic in nature. There is no "machine intelligence". It's not a religious conviction, because I don't think you need one to realise that a calculator isn't smart for adding together numbers larger than I could manage in my own head.
>ChatGPT (o3): Scored 136 on the Mensa Norway IQ test in April 2025
If you don't want to believe it, you need to move the goalposts: create a test for intelligence that we can pass better than AI. Since AI is also better at creating tests than us, maybe we could ask AI to do it; hang on...
>Is there a test that in some way measures intelligence, but that humans generally test better than AI?
Answer: Thinking... Something went wrong and an AI response wasn't generated.
Edit: I managed to get one to answer me. The Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), created by AI researcher François Chollet, consists of visual puzzles that require inferring a rule from a few examples and applying it to a new situation.
So we do have A test, specifically designed for us to pass and AI to fail, on which we currently do better than AI... hurrah, we're smarter!
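For a sense of the format, here's a made-up ARC-style task (my own toy example, not one from the actual corpus): a task gives a few input/output grid pairs, and the solver has to infer the hidden rule and apply it to a fresh input.

    # Grids are lists of lists of color codes (0 = background).
    # Hidden rule in this toy task: mirror the grid left-to-right.
    train_examples = [
        ([[1, 0, 0],
          [2, 0, 0]],
         [[0, 0, 1],
          [0, 0, 2]]),
        ([[0, 3],
          [4, 0]],
         [[3, 0],
          [0, 4]]),
    ]

    test_input = [[5, 0, 0],
                  [0, 6, 0]]

    def solve(grid):
        # A human infers "mirror horizontally" from two examples;
        # ARC measures exactly this kind of few-shot rule induction.
        return [list(reversed(row)) for row in grid]

    # Check the inferred rule against the training pairs, then apply it.
    assert all(solve(inp) == out for inp, out in train_examples)
    print(solve(test_input))  # [[0, 0, 5], [0, 6, 0]]

Real ARC tasks use richer grids and subtler rules, but the shape is the same: infer the rule from a handful of examples, no training set of similar puzzles to memorize.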
The validity of IQ tests as a measure of broad intelligence has been in question for far longer than LLMs have existed. And if it’s not a proper test for humans, it’s not a proper test to compare humans to anything else, be it LLMs or chimps.
To be intelligent is to realise that any test for intelligence is at best a proxy for some parts of it. There's no objective way to measure intelligence as a whole; we can't even objectively define intelligence.
I believe intelligence is difficult to pin down in words but easy to spot intuitively - and so are deltas in intelligence.
E.g. watch a Steve Jobs interview and a Sam Altman one (at the same age). The differences in mode of articulation, simplicity of communication, obsession over details, etc. are huge. This is what superior intelligence looks like to me: you know it when you see it.
>Create a test for intelligence that we can pass better than AI
Easy? The best LLMs score 40% on Butter-Bench [1], while the mean human score is 95%. LLMs struggled the most with multi-step spatial planning and social understanding.
That is really interesting, though I suspect it's just an effect of differing training data: humans are to a larger degree trained on spatial data, while LLMs are trained to a larger degree on raw information and text.
Still, it may be a lasting limitation if robotics doesn't catch up to AI anytime soon.
I don't know what to make of the Safety Risks test: threatening to power down an AI in order to manipulate it, and most act like we would and comply. Fascinating.
>humans are to a larger degree trained on spatial data
you must be completely LLM-headed to say something like that, lol
Humans are not trained on spatial data; they are living in the world. Humans are very much different from silicon chips, and human learning is on another order of complexity compared to training a large language model.
Humans are large language models. Maybe the term "language" is being used a bit liberally here, but we basically function the same way, with the exception of the spatial aspect of our training data.
If this hurts your ego, then just know that the dataset you built your ego with was probably flawed. If you can put that LoRA aside and try to process this logically: our awareness is a scalable emergent property of one to two decades of datasets, and looking at how neurons versus transistor groups work, there can only be a limited number of ways to process data of that size down to relevant streams. The very fact that training LLMs on our output works proves our output is the product of something LLM-like, or there wouldn't be patterns to find.
There's a lot going on in the Western world, both financial and social in nature. It's not good in the sense of being pleasant or contributing to growth and betterment, but it's a correction nonetheless.
That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.