
So I've dealt with mental illnesses throughout my life and I can confidently say things are a hell of a lot better now than when I was a kid. They understood so little about the subject back then that there were basically three categories with rigid qualifiers. If you didn't fit 100% into a mold you were considered lazy, undisciplined, etc.

Instead of just blindly rattling off a bunch of buzzwords that you know nothing about, maybe select one or two of the things from your list you actually have experience with.


Anytime I hear somebody say bots/shills, all I imagine is the author being extremely butthurt that their precious thing is not also everyone's favorite.


If they can offer enterprise-level support for everything, they're in a prime position to be the Oracle of AI. In the sense that open-source programming languages can outperform Java in certain instances, but companies choose Oracle because they can just pick up a phone and the person on the other end can solve any issue they have. DeepSeek without a for-profit model just won't be able to offer such a service.


Good luck; whenever an eye-popping number gains traction in the media, finding the source of the claim becomes impossible. See, for example, the hunt for the original paper, "The Big Payout," which was the origin of the claim that college graduates will on average earn $1M more than those who don't go.


In this case it's actually in the DeepSeek-V3 paper, on page 5:

https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSee...


You've noticed two things:

1. Every generation of programmers typically only has a certain window of knowledge, and everything before their time is simply foreign to them. Most developers aren't programming-history buffs; they don't read books from the past to learn about technologies, how they work, how we got to now, etc.

2. When you're starting off on your journey as a programmer you're using a lot of things without understanding their implementation, and for most individuals I've worked with, that understanding comes from having to debug issues.


> Every generation of programmers typically only has a certain window of knowledge

That's true, of course, but I think the disconnect he's pointing out is: a) deeper than that and b) new. I grew up a ways before he did, and my first serious computer was a Commodore 64. Like his 286, there was nothing you could do with it _except_ program it, and programming was hyper-accessible. I immediately understood C's pointer syntax when I first came across it, because the only way to show graphics on a C64 is to write to a specific memory address and the only way to intercept keypresses is to read from one.
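
For instance, here's roughly what that looks like expressed in C; the register addresses are from memory and may be slightly off, but the idea is the point ($D020 is the VIC-II border color, and zero-page $C5 holds the current key code):

    #include <stdint.h>

    /* Hardware on the C64 is memory-mapped: "drawing" is writing to an
       address, and reading the keyboard is reading from one. */
    void flash_until_keypress(void) {
        volatile uint8_t *border = (uint8_t *)0xD020; /* border color     */
        volatile uint8_t *curkey = (uint8_t *)0x00C5; /* current key code */
        while (*curkey == 64)   /* 64 means "no key pressed" */
            (*border)++;        /* cycle the border color    */
    }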

Fast forward to the 21st century. My son is a smart kid - graduated valedictorian of a competitive high school - but he grew up with "computers" that hid the programming interface so far down that most of his experience with actual programming has been through emulators, so the whole thing feels (and is!) artificial. He still struggles to differentiate between what's "real" and what's emulated, and I can tell from interacting with other programmers his age that he's not alone.


The GP's point is more general and stands. Technologists inevitably start off "in the middle" of a stack of technologies, and then choose to dig down and/or build up from there. It is inevitable that newcomers will not understand the technologies upon which their current work rests. A certain fraction of those will become fascinated and drawn to understand the underpinnings of their work, for an edge, or for the love of it, or both.

This process happens more naturally over time if you grow with the technology. For example, if you started programming with HTML in 1995 you not only have ~30 years of experience, but you've also seen the changes as they occurred, and have a great deal of context about how and why each of them happened. That kind of experience can only happen in real-time, unless some kind soul is willing to write a manual to help newcomers speed-run it. Such a manual is often called a "textbook".


Until maybe 15-ish years ago, it was common for programmers to “bottom out” on the machine code level. In other words, they came to roughly understand the whole software stack in principle down to the software-hardware interface (the CPU executing machine code). The topic of this thread is that this has (allegedly) changed, developers at large don’t go down in their understanding until they hit hardware anymore.

Okay, you might argue that in earlier decades programmers also tended to have an understanding of hardware circuit design and how a CPU and memory etc. works in terms of logical gates and clocks. But arguably there is a harder abstraction boundary between hardware and software than at any given software-software abstraction boundary.


>Until maybe 15-ish years ago, it was common for programmers to “bottom out” on the machine code level.

That was more true 30 years ago, in the age of relatively simple microarchitectures in the (relatively short) era of the disconnected personal computer. But there are several directions in which to "bottom out". For example, going in the direction of the network, you might dig into the details of Ethernet, wifi, and UDP/TCP/IP. You can even go in the direction of electronics, and understand how to de/solder components and design and fabricate PCBs, or in the direction of enclosures and mechanisms. You can go in the direction of understanding the market itself, and all the CPU/GPU/motherboard/PCI/M.2/SATA/chipset combinations and how they work together. You might go in the direction of systems programming and the kernel, ttys, and file descriptors, or in the direction of cryptography, etc.

Complexity is a sign of a healthy ecosystem, and we see this repeated again and again in every area of human endeavor that continues unabated for any length of time. A field starts small and understandable by one person, but quickly continues to grow until it cannot reasonably be understood by any one person. Chemistry, math, engineering, physics are like this. Is it reasonable to lament this fact and consider modern practitioners any less than earlier ones for their lack of total knowledge of it?


>Okay, you might argue that in earlier decades programmers also tended to have an understanding of hardware circuit design and how a CPU and memory etc. works in terms of logical gates and clocks

My degree included this and I'm very glad it did.


> it was common for programmers to “bottom out” on the machine code level.

It's still this way in embedded development to some large degree.


For those with past web-stack experience (20-30+ years) but without modern web-stack experience: how do you stand out for job listings requiring a modern stack? How do you tie those past experiences in to remove the disconnect?

A hiring manager is looking for React in the next role. Would you prefer someone with lots of experience (the old ways) or someone who's an expert in React with a few years?

Looking for context.


Dude/ette, pointers WRECKED people's minds (myself included) when we learned about them in CS101 (modified for computer engineers) in 2006. I don't know why either; they make so much sense.


Can you explain why?

I heard about this, and avoided pointers for a long time because of a fear that comments like yours gave me.

I then ended up realizing Java objects that I had been dealing with were just memory references, and that led me to learning about pointers in C.

Once I realized that I basically already knew what pointers were, I was pretty confused about why they created so much difficulty. The fear itself was an actual barrier to me understanding how simple they are; so few moving parts.

It seems to me that everyone intuitively understands the difference between a home address and the home itself.
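
In C terms, the analogy is just this (a tiny illustrative snippet):

    void demo(void) {
        int home = 42;        /* the home itself: a value in memory         */
        int *address = &home; /* the home's address: a pointer to the value */

        *address = 7;         /* go to the address and renovate: home is now 7 */
    }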

Not to say that I'm an expert in pointer arithmetic or anything, I still prefer symbolic programming to dealing with pointers directly, for the same reason I prefer spoken English to Morse code.


If I recall correctly, the big gripe about them at the time was understanding all of the situations in which pointers can be used beyond simply accessing data, like setting one pointer to another and doing pointer math with char arrays (i.e. pointing one variable at another variable's memory address). Like another poster said, much of this is unique to C, and to C++ without smart pointers.
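
If memory serves, the kind of thing that tripped people up looked like this (illustrative C, not from any actual assignment):

    void demo(void) {
        char buf[] = "hello";
        char *p = buf;  /* p points at buf[0]                          */
        char *q = p;    /* one pointer set to another: two names       */
                        /* for the same underlying memory              */
        p += 2;         /* pointer math: p now points at the first 'l' */
        *q = 'H';       /* writing through q changes buf: "Hello"      */
    }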


Because C syntax makes pointers harder to use and understand than they really are. Also the constraints around pointers, casting, etc. I still clearly remember fighting the C compiler and getting back "makes integer from pointer without a cast" compilation errors and segfaults. That's the main reason why I don't want to touch C or Rust again. D felt so much easier; I could finally write a program without endlessly fighting the compiler.
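
The classic way to trigger that one, if I remember right:

    void demo(void) {
        int x = 5;
        int *p = &x;
        int y = p;   /* warning: initialization makes integer from pointer
                        without a cast; what was almost always meant: int y = *p; */
        (void)y;
    }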


I've never programmed in D; I've only heard good things.

I definitely agree that stuff is pretty annoying.

I guess I've gotten used to the casting behavior, though these days I program mostly in dynamically typed languages.


I think the one weird trick to understand pointers is to do a tiny amount of assembly programming. Assembly is very simple and pointers are everywhere. Writing useful programs in it is a chore, but you don't need to!
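
Even without writing standalone assembly, compiling a few lines of C and reading the output makes the point; the comments below are roughly what an unoptimized x86-64 compiler emits (approximate, from memory):

    void demo(void) {
        int x = 42;   /* mov DWORD PTR [rbp-12], 42   ; store the value     */
        int *p = &x;  /* lea rax, [rbp-12]            ; compute the address */
                      /* mov QWORD PTR [rbp-8], rax   ; store the address   */
        int y = *p;   /* mov rax, QWORD PTR [rbp-8]   ; load the address    */
                      /* mov eax, DWORD PTR [rax]     ; fetch what's there  */
        (void)y;
    }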


I never understood the struggle people have with pointers; to me they are quite obvious, unless I'm missing some important thing about them. You have something you need to store; it's stored in memory; so you need to store its address if you intend to access it; so you store that address in a pointer. From there, the rest follows logically.
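
As a minimal sketch of that chain of reasoning:

    #include <stdlib.h>

    void demo(void) {
        int *p = malloc(sizeof *p); /* something to store needs memory...   */
        if (p == NULL) return;
        *p = 123;                   /* ...and its address is how you get    */
                                    /* back to it: p is where it lives,     */
                                    /* *p is what lives there               */
        free(p);
    }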


I think the problem with C (well, my problem) was the *& characters all over the place. I think it's because I spent too much time with Pascal, where (in my opinion) the pointer syntax is more symmetric.


Nah, I entered the industry in 2008, and I know enough FORTRAN to know I never want to use it.


That's actually not true at all. According to interviews, a lot of people actually miss it. I believe it was a German documentary where I first heard this. I'm kind of obsessed with this place and even created a game around the premise of a walled city.


Certainly I've heard the same from a coworker reporting on her grandmother's perspective (her grandmother lived there in the 60s/70s).

At the same time, the grandmother did leave when she could, long before the final demolition, despite her later views on that time.


From another comment, said documentary: https://youtu.be/S-rj8m7Ssow


Metro but set in Kowloon would be shockingly good to play.


Stray?


Ah, the classic: make a bold prediction to grab headlines, because it doesn't matter if you're right or wrong; memories are short.


Read the article.

It's talking about how what we call "AI" today will be treated as bog-standard - the same way this happened with recommender systems 3-4 years ago, neural networks 4-7 years ago, statistical learning (e.g. SVMs) 7-10 years ago, etc.

The title is a reference to a fairly prominent article about "Big Data" that was based on the same premise.


I read the article, but I still think the criticism of the title is valid. The claim is that the way we talk about AI will be different in 5 years, not that AI will be dead. Likewise, recommender systems, neural networks, and statistical learning are certainly not dead. It's an abuse of a term to grab clicks.


I strongly disagree - especially because the title is itself a reference to "Big Data Will Be Dead In 5 Years" which itself talked about this same phenomenon, albeit for Data Engineering.

Titles are not arguments. Some people may want them to be, but they are not.

Engaging with a title just distracts a discussion from the core thesis of a post.


Have these AI True Believers heard the phrase "any sufficiently advanced technology is indistinguishable from magic"? What's magical about a recommendation system? Or statistical learning?

Well maybe they are? But they are all very specialized tools. And it’s not difficult to understand how they conceptually work. I guess...

Meanwhile a conversational partner in a box that can answer any of my questions? (If they eventually live up to the promise) Umm, how the hell would a normal person not call that magical and indeed “AI”?

I’m sorry but the True Believers haven’t made anything before which is remotely close to AI. Not until contemporary LLMs. That’s why people don’t call it that.


Point taken! I should have considered a subtitle for the post to make it clear what my main point is. That being said, I tried to nuance the post and not make any bold predictions. I am happy to revisit the argument in five years and see where we are.


Are you reviewing the article or AI?


Dawg, LLMs cannot reason; they simply return a response based on statistical inference (voting on the correct answer). If you want one to do anything correctly you need to do a thought experiment, solve the problem at a 9,000 ft view, and hold its hand through implementation. If you do that, there's nothing it cannot do.

However, if you're expecting it to write an entire OS from a single prompt, it's going to fail just as any human would. Complex software problems are solved incrementally, through planning. If you do all of that planning, it's not hard to get LLMs to do just about anything.


The problem with your example of analyzing something as complex and esoteric as a codebase is that LLMs cannot reason; they simply return a response based on statistical inference. So unless you followed a standard like PSR for PHP and implemented it to a 't', it simply doesn't have the context to do what you're asking it to do. If you want an LLM to be an effective programmer for a specific application, you'd probably need to fine-tune it and provide instructions on your coding standards.

Basically, how I've become successful using LLMs is that I solve the problem at a 9,000 ft view, instruct the LLM to play different personas, have the personas validate my solution, and then instruct the LLM step-by-step to do all of the monkey work. That doesn't necessarily save me time upfront, but in the long run it does, because the LLM makes fewer mistakes implementing my thought experiment.


Fair enough, I might be asking too much indeed, and may not be able to come up with an idea of how LLMs can help me. For me, writing code is easy as soon as I understand the problem, and I sometimes spend a lot of time trying to figure out a solution that fits well within the context, so I thought I could ask an LLM what different things do and mean, to help me understand the problem surface better. Again, I may not understand something, but at this point I don't understand the value of code generation after I already know how to solve a problem.

Do you happen to have a blog post or something showing a concrete problem that LLM helped you to solve?


Solve the problem at a high-level in your head, ask the LLM if your concept is correct, and then instruct the LLM to build the solution step-by-step.

