I know absolutely nothing about math, but... I'm thinking about the history of software, where back in the day you had all these amazing projects like RollerCoaster Tycoon basically being written by one guy. Then software engineering became modularized in a way that sounds kind of similar to what is described in the interview, and now you have hordes of people who churn out React for a living (like myself) and software engineering is now a giant assembly line and the productivity (or skill required) per individual has dropped to near zero.

I guess I have the impression that when fields are in their heyday, the best work is done by some genius fitting like a hundred things in one brain, and when that gets replaced by an assembly line, the field has basically stopped producing anything of real value. Anyone in math (or otherwise) want to tell me why that's completely wrong?



This intuition denies the inherent human power: that we are not merely individuals, and can work together to accomplish greater things than any individual could. Software engineering has certainly _not_ stopped doing cool things-- exactly the opposite-- obviously. I won't bother justifying this, since the claim I'm responding to offered no justification either.


My theory is not that individual genius accomplishes more than a group, but that whether a field is currently more suited for individual single-brain productivity or assembly-line cooperation is a signal of whether that field is nascent, at its peak, or in decline. I genuinely think software technology has not advanced in a long time. Computers got really fast, which allowed us to build a mountain of abstractions on top, and now you can throw together web apps, and there is no shortage of companies running around with that hammer and banging on everything. But the actual boundaries of what is possible with software have not moved in a while.

Most of the software nowadays (in addition to being terribly slow and broken, by the way) feels like stuff that doesn't actually need to exist. It doesn't advance the boundaries of technology or humanity's skill tree in any way. It's a "nice to have" — it might have some social value or make some business process more efficient or whatever — but no one looks at it and goes, wow, we couldn't do that before.

Someone is going to bring up AI and LLMs, and about that I will just say that ChatGPT was a real accomplishment... but after that, I have not seen any use cases of LLMs that represent a genuine step forward. Like smart contracts, it's a solution in desperate search of a problem, a technology in search of a breakout app. You feel like there's just no way it doesn't have wide-reaching implications, and everyone around you (especially VCs) talks about the "possibilities unlocked" — but two years in, we haven't seen a single actually interesting use case. (Again, of course they're being incorporated to optimize some business process or whatever. But who cares?)

Meanwhile, as far as the day to day reality of software engineering goes, if you work professionally as an engineer, your job is more or less wrangling microservices in one form or another. If you're at a big company this is probably literal, but even if you're at a startup the bigco transplants start doing microservice stuff basically on arrival, and a bunch of the "best practices" and frameworks and tools are meant to mimic microservices type thinking (making things more complicated rather than less).

As a final note, I grew up being a giant fan of YC and PG's essays, so I will make a comment about the latest batch of YC startups. It seems to me that a bunch of them basically exist to make some process more efficient. But then you wonder, why does that process need to exist in the first place? Like there was one startup helping you to optimize your AWS costs or something, but why is software written in such a way to begin with that you have all these microservices and towers of abstractions and untrackable infrastructure? Like you look at these startups solving a problem of solving a problem... and then you get to the root problem, and the whole thing is just a giant castle in the sky. I think this describes much of the motivating thrust of software today.


Theorem provers are a rapidly developing software innovation that is invading a field that has existed since Euclid. I would consider Bitcoin an advancement of boundaries as well. Possibly even Kubernetes, VR, etc.

These boundary-pushing technologies appear as a "decline," as you put it, because most of the software world needs to worry about paying the bills with CRUD apps instead of pushing technological boundaries. I would say this project-managementization of math, as Terence Tao put it, could push the boundaries of math to the point of a sort of revolution, but the sad truth is that it will lose its romantic qualities.

I'm sure there are plenty of mathematicians who enjoy the Euclid/Andrew Wiles way of things and will come to similar conclusions as you. It will be easy to view it as either a revolution or a decline, depending on whether you value advancing the field as a universal truth or prefer the practice as it is.


Here's a potential analogy: AlphaGo made a major breakthrough when it beat Lee Sedol. For many of us, that was the end of it, but for professional Go players, it revolutionized the game in ways that no one outside of the Go community can really appreciate. Likewise for AlphaFold and protein structure determination, where it has revolutionized a field of research; at this point you only see this directly if you're using the tool day-to-day (in which case it may have dramatically changed the nature of your research).

For many software products I think you have a point, where superficial things like word processors are mostly just getting bloated these days. But for open-ended things like Mathematics, having new tools can crack open deep problems that were unapproachable in the old way. This new way of doing research may not turn out to be for everyone, but at the end of the day the field will explore previously unapproachable but undeniably interesting territory.

It might also make mathematics more accessible, and a possible outcome of that might be that more superficial mathematics gets done. But if it also enables more deep mathematics, then there's no point being cynical about the former. Maybe there is a similar fallacy with respect to software development.


> software engineering is now a giant assembly line and the productivity (or skill required) per individual has dropped to near zero

If you want a more engaging job you can take a paycut and work at a startup where the code is an even bigger mess written by people who are just as lazy and disorganized but in another (more "prestigious") programming language.

Joking aside, when inevitably someone needs to fix a critical bug you'll see the skills come out. When products/services become stable and profitable it doesn't necessarily mean the original devs have left the company or that someone can't rise to the occasion.


I think that is unfair. The productivity per person in old projects like RollerCoaster Tycoon is impressive. But the overall result pales in comparison with modern games. Saying the field has stopped producing anything of real value is very wrong.

To me it's like comparing a cathedral made by 100 people over 100 years to a shack made by one person in one month. Like, it stands, I suppose. It gives him shelter and lets him live. But it's no cathedral.


I probably should've defined "real value" more carefully. My framework here is basically the idea of technology as humanity's "skill tree," the total set of stuff we're able to do. Back when RCT (and Doom before that) were written, we genuinely weren't sure if such things could be created, and their creation represented a novel step forward. The vibe in software feels like we're long past the stage of doing things like that, like we're not even attempting to do things like that. I can only think of two things in recent history: whatever strand of AI culminated in ChatGPT, which is genuinely impressive, and Bitcoin, which ostensibly was created by one guy.


I think there have been advancements in RL that deserve to be on this list. AlphaFold at the very least; I hope you had the pleasure of reading the reactions of researchers who work on protein structure determination and how stunned they were that their field of research had been in large part solved.

Here's a few more that come to mind:

- Work decoding brain signals that allowed paralyzed people to communicate via a computer (this all happened pre Neuralink)

- I think self-driving cars fit your criteria, even if the jury is still out as to whether this has/will soon be 'solved.'

- In gaming, Unreal Engine's Nanite is very impressive and a real technological innovation

- There are efforts to get LLMs to learn via RL and not just from training on huge text corpuses. Actually mathematics is a prime example where you might dream of getting superhuman performance this way. People are at least trying to do this.

Just because there's a lot of BS that tends to drown everything else out doesn't mean there isn't real progress too.


I suppose with a blanket statement like "nothing is happening" I'm practically begging to be contradicted :)

Some of this stuff looks very cool. Have we started getting RL to work generally? I remember my last impression being that we were in the "promising idea but struggling to get it working in practice" phase but that was some time ago now.


Edit: after writing what follows, I realized you might have been asking about RL applied to LLMs. I don't know if anyone has made any progress there yet.

Depends on what you mean by 'generally.' It won't be able to solve all kinds of problems, since e.g. you need problems with well-defined cost functions, like win probability for a board game. But AlphaGo and AlphaZero have outstripped the best Go players, whereas before this happened people didn't expect computers to surpass human play any time soon.
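
To make the "well-defined cost function" point concrete, here's a minimal tabular Q-learning sketch in Python (my own toy illustration; the environment and all parameters are made up, and it has nothing to do with AlphaGo's actual architecture). The agent can learn precisely because the reward is unambiguous: 1 for reaching the goal, 0 otherwise.

    import random

    N_STATES = 5          # corridor cells 0..4; reaching cell 4 wins
    ACTIONS = [-1, +1]    # step left or right
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(2000):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # the well-defined reward
            # standard Q-learning update
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    # The learned policy should be "always step right" (+1 everywhere).
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])

The point is not the toy itself but the contrast: where such a reward can be written down, RL has a foothold; where it can't, it still struggles.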

For AlphaFold, it has literally revolutionized the field of structural biology. DeepMind has produced a catalog of the structures of nearly every known protein (see https://www.theverge.com/2022/7/28/23280743/deepmind-alphafo..., and also https://www.nature.com/articles/d41586-020-03348-4 if you can get past the paywall). It's probably the biggest thing to happen to the field since cryo-electron microscopy, and maybe since X-ray crystallography. It may be a long time before we see commercially available products from this breakthrough, but the novel pharmaceuticals are coming.

Those are the biggest success stories in RL that I can think of, where they are not just theoretical but have led to tangible outcomes. I'm sure there are many more, as well as examples where reinforcement learning still struggles. Mathematics is still in the latter category, which is probably why Terence Tao doesn't mention it in this post. But I think these count as expanding the set of things humans can do, or could do if they ever work :)


It seems you've taken a roundabout way to arrive at innovation.

It's interesting, because this is something that makes programming feel like a "bullshit job" to some: I'm not creating anything, I'm just assembling libraries/tools/frameworks. It certainly doesn't feel as rewarding, though the economic consensus (based on the salaries we get) is that value most definitely is being generated. But you could say the same of lumberjacks, potters, even the person doing QA on an assembly line; all in all, it's not very innovative.

That's the thing with innovation, though: once you've done it, it's done. We don't need to write assembly to make games, thanks to the sequential innovation of game engines (and everything they are built on).

Same with LLMs and Bitcoin: now that they exist, we can (and did) build on them. But that will never make a whole new paradigm, at least not rapidly.

I think our economic system simply doesn't give the vast majority of people the chance to innovate. All the examples you've given (and others I can think of) represented big risks (usually in time invested) for big rewards. Give people UBI (or decent social safety nets) and you'll find that many more people can afford to innovate, some of whom will do so in CS.

I have to go back to work now; some libraries need stitching together.


I have been thinking about something along these lines as well. With the professionalization/industrialization of software you also get extreme modularization/specialization. I find it really hard to follow the web-dev part of programming, even if I just focus on, say, Python, as there are so many frameworks and technologies that serve each little cog in the wheel.


It's interesting; I have the opposite feeling. Twenty years ago I assumed that as the field of computer science grew, people would specialize more and more. Instead, I see pressure first to become a "full stack developer", then a "dev ops", and now, with the cloud, everyone is also an amateur network administrator.

Meanwhile, everything is getting more complex. For example, consider logging: a few decades ago you simply wrote messages to standard output and maybe also to a file. Now we have libraries for logging, libraries to manage the libraries for logging, domain-specific languages that describe the structure of the log files, log rotation, etc. And logging is just one of many topics the developer needs to master; there is also database modeling, library management, automated testing, etc. And when you finally learn how to use some framework like a pro, it gets thrown away and something else becomes fashionable instead.
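
To make the logging example concrete, here's a minimal sketch using Python's standard library (my own illustration, with made-up file names and sizes): the one-liner that used to suffice, next to the rotating, formatted setup that's now considered table stakes.

    import logging
    from logging.handlers import RotatingFileHandler

    # The old way: just write the message out.
    print("request handled in 12ms")

    # The modern way: configure a logger, a rotating file handler,
    # and a format string before logging a single line.
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("request handled in %dms", 12)

And this is still the simple end of the spectrum, before the log shippers, aggregators, and DSLs that parse the files back apart.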

And instead of being given a clear description of a task and then left alone to do it, what we have is endless "agile" meetings, unclear specifications that change twice anyway before you complete the task, and a Jira full of prioritized tickets where most tickets are the highest priority anyway.

So, specialization of the frameworks: yes, there is a lot of it. But with the developers themselves, the trend seems to be the opposite. (At least in my bubble.)


I'm inclined to agree. People seem to be getting hung up on 'no, _my_ React app is hard and challenging for me!' which completely misses the point. I think it's a combination of a few things:

- skill is generally log-normally distributed, so there are few exceptional people anyway (see also: Sturgeon's law, 99% of everything is crap; rough numeric sketch after this list)

- smaller/earlier fields may self-select for extreme talent as the support infrastructure just isn't there for most to make it, leading to an unrealistically high density of top talent

- it's a fundamentally different thing to make a substantial impact than to cookie-cutter a todo list app by wrangling the latest 47 frameworks. In that sense, it's highly unlikely that the typical engineering career trajectory will somehow hit an inflection point and change gears

- presumably, from a cost/benefit perspective there's a strong local maximum in spewing out tons of code cheaply for some gain, vs. a much more substantial investment in big things whose gains are harder to realize (further out, less likely, etc). The more the field develops, the lower the skill floor gets for hitting that local maximum, which therefore increases the gap between the typical talent and the top talent

- there's not a lot of focus on training top talent. It's not really a direct priority of most organizations/institutions by volume. Most coders are neither aiming for nor pushed towards that goal
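
For what it's worth, here's a quick numeric sketch of the first point (the distribution parameters are made up; the only claim is the shape of the tail):

    import random
    import statistics

    random.seed(0)

    # Hypothetical "skill" scores drawn from a log-normal distribution
    # (mu=0, sigma=1 chosen arbitrarily for illustration).
    skill = [random.lognormvariate(0.0, 1.0) for _ in range(200_000)]

    median = statistics.median(skill)
    above5 = sum(s > 5 * median for s in skill) / len(skill)
    above10 = sum(s > 10 * median for s in skill) / len(skill)

    print(f"median skill: {median:.2f}")             # ~1.0 by construction
    print(f"share above 5x median: {above5:.2%}")    # roughly 5%
    print(f"share above 10x median: {above10:.2%}")  # roughly 1%

Even in a big population, only about one percent sit above ten times the median; the tail is long but thin.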

All told, I think there's a bunch of systemic effects at play. It seems unreasonable to expect the density of talent to just get better for no reason, and it's easier to explain why, left unchecked, the average talent would degrade as a field grows. Perhaps that's not even an issue and we don't need 'so many cooks in the kitchen', if the top X% can push the foundations enough for everyone else to make apps with. But the general distribution should be worth paying attention to, because if we get it wrong it's probably costly to fix


> _my_ React app is hard and challenging for me

I love the idea of "essential vs artificial complexity". React, claiming to solve the essential complexity of state management among other things, creates a bunch of artificial complexities. And software engineers are no better off as a result: they become masters of frameworks, but not masters of their art. That is why the comparison to an assembly line is apt. Doing assembly-line work is still hard, but it is not software "engineering" in a certain definition of the word.


Yeah! I think of that 'intrinsic difficulty' in sort-of information-theoretic terms: given the context of where we are and where we want to be, what is the minimum amount of work needed to get there? If that minimum is large, the problem is 'hard'. This also accounts for the change in difficulty over time, as our 'where we are' gets better.

Line work is certainly still hard from a toil standpoint. But academically, little of that total effort is intrinsically necessary, so the work is in some sense highly inefficient. There may still be aspects that need human intervention, of course, but the fewer those are, the more robotic the task.

In theory, anything that's already figured out is a candidate to be offloaded to the robots, and in some sense isn't necessarily a good use of human time. (There may be other socioeconomic reasons to value humans doing these tasks, but, it's not absolutely required that a human do it over a machine)


Skill required near zero? I'm approaching 15 years as a developer and React is still completely foreign to me. I'd need days full-time just to learn one version of it, not to mention every prerequisite skill.

And what do you mean by "real" value? Carrots have value because we can eat them; React apps have value because they communicate information (or do other things apps do).

Or do you mean artistic value? We can agree, if you like, that SPAs are not as much "art" as RCT. But... who cares?


Mathematics, or academia more generally, is no panacea either. At least in engineering and applied math, there is a different assembly line of "research" and publications going on. We live in the era of enshittification of everything.

In both software and math, there are still very good people. But they are too few and far between. And it is hard to separate the good from the bad.



