> Isn't that the opposite of the bitter lesson - adding more cleverness to the architecture?
The bitter lesson is that general methods and systems that learn trump trying to manually embed/program human knowledge into the system, so clever architecture is OK and expected.
When I do projects in this realm, they require significant discussion with the business to understand how reality is modeled in the database and data, and that understanding has to come before any notion of "clean up" can even be defined.
Yeah, you still do all of that domain research and requirements gathering and system design as your meatbag job. But now, instead of writing the ETL code yourself by hand, you can get 80-90% of the way there in a minute or two with AI assistance, something like the sketch below.
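For illustration, a minimal sketch of the kind of clean-up step an assistant can draft quickly. The file layout and column names ("name", "signup_date") are hypothetical; validating the rules against how the business actually models reality is still the meatbag part.

```python
import pandas as pd

def clean_customers(path: str) -> pd.DataFrame:
    """Load a (hypothetical) customer export and apply basic clean-up."""
    df = pd.read_csv(path)
    # Normalize whitespace and casing on names.
    df["name"] = df["name"].str.strip().str.title()
    # Parse mixed date formats; unparseable values become NaT for human review.
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    # Drop exact duplicate rows produced by upstream exports.
    return df.drop_duplicates()
```

The assistant gets you this scaffolding fast; the remaining 10-20% is checking that rules like "title-case the names" actually match the domain.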
> Rich Sutton's views are far less interesting than Minsky's IMO.
I don't think Minsky's and Sutton's views are in contradiction; they seem orthogonal.
Minsky: the mind is just a collection of function-specific areas/modules/whatever you want to call them.
Sutton: trying to embed human knowledge into the system manually is the least effective way to get there. Search and learning are more effective, especially as computational capabilities increase.
Minsky talks about what the structure of a generalized intelligent system looks like. Sutton talks about the most effective way to create such a system, but he does not exclude the possibility that many different functional areas, each specialized for a specific domain, combine to create the whole.
People have paraphrased Sutton as saying "scale" alone is the answer, and I disagreed because to me learning is critical. But I just read what he actually wrote, and he emphasizes learning.
I take Sutton's Bitter Lesson to basically say that compute scale tends to win over projecting what we think makes sense as a structure for thinking.
I also think that as we move away from purely von Neumann architectures toward more neuromorphic ones, the algorithms we design and the ways those systems scale will change. Still, I agree that scaling compute and learning will continue to be a fruitful path.
We had a few of those years ago. It kind of worked, but it was also another device you had to learn how to load paper into and troubleshoot when it hit error conditions.
> I saw one big rewrite from scratch. It was a multi-year disaster, but ended up working.
90% of large software system replacements/rewrites are disasters. The size and complexity of the task is rarely well understood.
The number of people with the experience to guide something like that to success is small, because such projects happen relatively rarely.
It cramps my hand pretty badly to handwrite the amount of notes I prefer to take in live meetings, so I really dislike that this is true. (And anecdotally, it is true for me.)
I used to write a lot, and one thing that helped was switching to a mechanical pencil with 2B lead. It's very soft and requires very little pressure, even compared to pens.