dryarzeg's comments

I read this a long time ago, when I was a kid. Back then I thought about the education system and how it sometimes inhibits creativity in students. But right now another comparison comes to mind - I don't know how relevant it is, though, so please don't judge it too strictly.

Modern "AI" (LLM-based) systems are somewhat similar to the humans in this story who were taped. They may have a lot of knowledge, even a lot of knowledge that is really specialized, but once this knowledge becomes outdated or they are required to create something new - they struggle a lot. Even the systems with RAG and "continuous memory" (not sure if that's the right term) don't really learn something new. From what I know, they can accumulate the knowledge, but they still struggle with creativity and skill learning. And that may be the problem for the users of these systems as well, because they may sometimes rely on the shallow knowledge provided by the LLM model or "AI" system instead of thinking and trying to solve the problem themselves.

Luckily, most humans in our world can still follow George's example. That's what makes us different from LLM-based systems. We can learn something new, and learn it deeply, creating deep and unique networks of associations between different "entities" in our minds, which allows us to be truly creative. We can also dynamically update our knowledge and skills, as well as our qualities and mindset, and so on...

That's what I'm hoping for, at least.


What concerns me is that depth of learning is more discouraged than ever. It has been discouraged for a long time, which is natural, since we prefer simple things over difficult or complex ones. But we're pushing much harder than ever before, from influencer "education" videos to the way people push LLMs ("you can just vibe code, no thinking required"). We've advanced enough that it's easy to make things look good enough, but looks can be deceiving. It's impossible to know what's actually good enough without depth of knowledge, without mastery.

No machine will ever be sufficient to overcome the fundamental problem: a novice is incapable of properly evaluating a system, and no human can do it for them either (despite many believing they can). It's a fundamental information problem. The best we can do is match our human system, where we trust the experts, who have depth. But we already see the limits of that, and how frequently experts get ignored by those woefully unqualified to evaluate them. Maybe it'll be better, since people tend to trust machines more. But for the same reason it could be significantly worse. It's near impossible to fix a problem you can't identify.


From my observation, more and more companies, even FAANG, not only encourage fast iteration with LLM tools but also discourage true study and thinking things through. The latter is inevitably slow, which is not favourable in today's fast-iteration world. This makes me think it is so important to get onto the right team; otherwise one runs the risk of never properly thinking and experimenting again.

So how do we find or make the right teams?

The incentives and the loss function point to short-term attention and long-term amnesia. We are fighting the algorithms.


I think low-level programming, or anything critical, is still relatively safe. That's where I wish I could be, but I'm still very far from it.

Ironically, while machine learning is getting "deeper & deeper", human learning is getting more and more impatient and shallow.

I have been searching YouTube for "vibe coding" videos that are not promoting something. I found this one, sat down, and watched the whole three hours. It does take a lot of real effort.

https://www.youtube.com/watch?v=EL7Au1tzNxE


I'm a machine learning researcher myself, and one of the things that frustrates me is that many of my peers really embrace the "black box" notion of ML models. There are plenty of deep thinkers too, but as with any topic, the masters are a much smaller proportion. It's just a bit of a shame, given that I'm talking about a bunch of people with PhDs...

As for my experience, vibe coding is not so much vibing as needing to do a lot of actual work too. I haven't watched the video you linked, but it sounds like it reflects my actual experience and that of people I know offline.


Since the JavaScript and Python worlds are heavily polluted by LLMs, I have started to look into the Rust and Cargo ecosystem. Surprisingly, it has picked up the pace just as quickly.

Once Rust can be agentically coded, there will be millions of mines hidden in our critical infrastructure. Then we are doomed.

Someone needs to see the problem coming and start working on paths to a solution.


The mines are already being placed. There are plenty of people vibe coding C programs. Despite C documentation and examples being more plentiful than Rust's, well... C vulnerabilities are quite easy to create, and they're even in those examples. You can probably even get the LLMs to find these mines, but it'll require you to know about them.

That's the really scary part to me. It really ramps up the botnets. Those who know what to look for have better automation tools to attack with, and at the same time we're producing more vulnerable places. It's like we're creating as much kindling as possible while producing more easy-strike matches. It's a fire waiting to happen.


I did a toy experiment on a pretty low-level crate (serde) in the Rust ecosystem: running the simple demonstration from their website pulls in 42M of dependencies.

https://wtfm-rs.github.io/wtfm-serde/doc/wtfm_serde/

I know this is orders of magnitude smaller than npm or pip, but if this is the best we can do 50 years after 1970s UNIX on the PDP-11, we are doomed.
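For anyone who wants to reproduce it, this is roughly what the experiment amounts to (a minimal sketch based on the demo from serde.rs; the project name is made up and exact versions will differ):

    // Cargo.toml (hypothetical project; versions are whatever is current)
    //
    // [package]
    // name = "serde-dep-experiment"
    // version = "0.1.0"
    // edition = "2021"
    //
    // [dependencies]
    // serde = { version = "1", features = ["derive"] }
    // serde_json = "1"

    // src/main.rs -- essentially the example from serde's website
    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    struct Point {
        x: i32,
        y: i32,
    }

    fn main() {
        let point = Point { x: 1, y: 2 };

        // Serialize the struct to a JSON string.
        let serialized = serde_json::to_string(&point).unwrap();
        println!("serialized = {}", serialized);

        // Deserialize it back into a Point.
        let deserialized: Point = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
    }

Then run `cargo build` and look at the size of the target/ directory, or `cargo tree` for the full dependency graph, to see what those two [dependencies] lines actually pull in.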


It amazes me how much we've embraced dependency hell. Surely we need some dependencies, but we're certainly going overboard.

On a side note, I wonder how much of this is due to the avoidance of abstraction. I hear so many people say that the biggest benefit they get from LLMs is avoiding repetition. But I don't quite understand this, since repetition implies poor coding. I also don't understand why there's such a strong reaction against abstraction. Of course, there is such a thing as too much abstraction, and that should be avoided, but code, by its very nature, is abstraction. It feels much like how people turned Knuth's "premature optimization is the root of all evil" from "grab a profiler before you optimize, you idiot" into "optimization is to be avoided at all costs".
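To make the contrast concrete, here's a trivial Rust sketch (names and numbers made up) of the kind of repetition people ask an LLM to churn out, versus naming the idea once:

    // The "let the LLM repeat it for me" version: the same clamping logic
    // spelled out three times, three chances to get it subtly wrong.
    fn normalize_scores_repeated(a: f64, b: f64, c: f64) -> (f64, f64, f64) {
        let a = if a < 0.0 { 0.0 } else if a > 1.0 { 1.0 } else { a };
        let b = if b < 0.0 { 0.0 } else if b > 1.0 { 1.0 } else { b };
        let c = if c < 0.0 { 0.0 } else if c > 1.0 { 1.0 } else { c };
        (a, b, c)
    }

    // The abstracted version: the idea gets a name and lives in one place.
    fn clamp_unit(x: f64) -> f64 {
        x.clamp(0.0, 1.0)
    }

    fn normalize_scores(a: f64, b: f64, c: f64) -> (f64, f64, f64) {
        (clamp_unit(a), clamp_unit(b), clamp_unit(c))
    }

    fn main() {
        // Both versions agree; only one of them has a single place to fix.
        assert_eq!(
            normalize_scores_repeated(-0.5, 0.3, 1.7),
            normalize_scores(-0.5, 0.3, 1.7)
        );
        println!("{:?}", normalize_scores(-0.5, 0.3, 1.7));
    }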

Part of my questioning here is whether these kinds of gross mischaracterizations become more prevalent as the barriers to entry are lowered. It seems there is a real dark side to lowering the barrier to entry. Just as in any social setting (like any subreddit, or even HN), as the population grows the culture changes significantly, and almost always towards the novice. For example, it seems that on HN we can't even assume that a given user is a programmer. I'm glad we're opening up (just as I'm glad we keep lowering barriers to entry), but "why are you here if you don't want to learn the details?" How do we lower barriers and increase openness without killing the wizards and letting the novices rule?


This is what people hope AGI will replace.

That is exactly what I wanted to say.


1) Many LLMs are used in conversational chatbots, so "banning" the first-person pronouns would simply kill this feature, which is genuinely useful for many real-world purposes;

2) If you just remove the tokens representing the first-person pronouns, you will severely harm the model's performance on almost all tasks that require interaction (real or imagined) in a social context, from understanding a simple work letter to creative writing and the like (a toy sketch of what that token removal means mechanically is below). If you instead try to train the LLM in a way that inhibits the "first-person behaviour", it may work, but it will be a lot harder and you will probably run into problems with the model's performance or usability.

To conclude - it's just not that easy.
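To illustrate point 2: mechanically, "removing" first-person tokens usually comes down to masking them out at decode time, roughly like this toy Rust sketch (the token ids and logits here are invented; a real model would supply both):

    // Toy sketch of decode-time token banning; not tied to any real model.
    fn mask_banned(logits: &mut [f32], banned: &[usize]) {
        for &id in banned {
            if id < logits.len() {
                logits[id] = f32::NEG_INFINITY; // this token can never be chosen
            }
        }
    }

    fn argmax(logits: &[f32]) -> usize {
        logits
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap()
    }

    fn main() {
        // Pretend token 0 is "I" and token 1 is "me"; the rest are other words.
        let mut logits = vec![3.2_f32, 2.9, 1.0, 0.5];
        mask_banned(&mut logits, &[0, 1]);
        // The model is forced onto whatever remains, whether or not it fits
        // the sentence it was building -- which is where the quality loss comes from.
        println!("forced choice: token {}", argmax(&logits));
    }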


I guess it's because being extroverted is more positive for society as a whole than being introverted. And so society tries -- even if it's not always a conscious effort -- to encourage and reward this type of behaviour.

EDIT: of course it's not all that simple. IMHO a society of pure extroverts would be an unstable network of salespeople with nothing to sell and no one to engineer and manufacture the things it needs. I'm joking here, but... :)


I’m not sure the statement “being extroverted is more positive for society as a whole than being introverted” is true.


I'm not a psychologist, nor am I a sociologist, so I may be wrong. It's just my guess; don't take it too seriously.


Society does need quiet, thoughtful people, even though they're usually outside the spotlight and it's harder for them to get recognition.


No one says they are not needed. It's just that extroverted individuals typically create more relationships within society, and those relationships are usually stronger. However, that's not always the case -- for example, introverted individuals may have fewer relationships, but those relationships may be much stronger.

Society exists because of those relationships, and so it is good for its survival and success if the number of relationships or connections within it grows and they become stronger. That is basically an existential need for any society.

(I'm sorry if I'm wrong; I'm not a specialist in this field, and it's just a bunch of guesses based on my observations.)

